
🧮 CLI Commands

Prefer terminals over tabs? Locentra OS ships with a full command-line interface that lets you query, train, vectorize, and manage the system without ever opening a browser or hitting an API endpoint.

All CLI scripts live in:

backend/cli/

Run them like this:

python cli/<command>.py [arguments]

Each script has access to:

  • 🔍 Locentra Registry

  • 🧠 Vector memory engine

  • 🔧 Config settings (.env)

  • 🧾 Logger system

  • 🧱 Model + tokenizer


📥 query.py

Send a prompt to the active model and print the output to your terminal.

Usage:

python cli/query.py --prompt "Summarize LSDfi"

Options:

Flag             Description
--prompt         The text prompt to send to the model
--max_tokens     Maximum number of tokens to generate
--temperature    Sampling temperature (creativity control)
--verbose        Show internal debug info and memory scores

Output:

[Model Output]
LSDfi refers to DeFi protocols built around Liquid Staking Derivatives...
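
The flags can be combined in a single call. For example (the prompt and parameter values here are purely illustrative):

python cli/query.py \
  --prompt "Explain zkEVM in two sentences" \
  --max_tokens 200 \
  --temperature 0.7 \
  --verbose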

🧠 train.py

Push new knowledge into the model using prompt → completion training.

Basic Example:

python cli/train.py \
  --prompt "What is TVL?" \
  --completion "TVL stands for Total Value Locked..."

Flags:

Flag             Description
--prompt         Input prompt to train on
--completion     Expected response (used for supervised tuning)
--vectorize      Store the prompt in vector memory
--tags           Attach semantic tags for future filtering
--dry-run        Simulate training without applying it
--meta           Attach metadata (e.g. source, user, origin)
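
Flags can be combined as well. For example, to store the prompt in vector memory, tag it, and preview the run without applying it (prompt, completion, and tag values are illustrative):

python cli/train.py \
  --prompt "What is restaking?" \
  --completion "Restaking lets already-staked assets secure additional protocols..." \
  --vectorize \
  --tags "defi,staking" \
  --dry-run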

🧪 Tip: Automate batch training with:

bash scripts/train_batch.sh
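
The contents of scripts/train_batch.sh are not shown here, but a minimal batch loop could look like the sketch below, assuming a tab-separated file with one prompt/completion pair per line (the file name and format are assumptions):

#!/usr/bin/env bash
# Illustrative batch-training loop; the bundled scripts/train_batch.sh may differ.
# Expects training_pairs.tsv with one <prompt><TAB><completion> pair per line.
while IFS=$'\t' read -r prompt completion; do
  python cli/train.py --prompt "$prompt" --completion "$completion" --vectorize
done < training_pairs.tsv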

🧹 clean_memory.py (Planned Module)

Optional memory housekeeping utility (suggested extension):

python cli/clean_memory.py --older-than 30d --tags "demo,test"

Concept:

  • Delete or archive outdated memory entries

  • Filter by vector score, timestamp, or tag


🧩 Integration Use Case:

Build your own CLI pipelines by chaining commands and external data sources.

🔹 Train from a scraped article:

curl https://some-article.ai | \
  grep "LLM" | \
  python cli/train.py --prompt "$(cat -)" --completion "..."

🔹 Log every query to disk:

python cli/query.py --prompt "Explain zkEVM" >> history.log

The Locentra CLI is scriptable and Unix-friendly, making it perfect for automated fine-tuning loops, daily prompt logging, or memory introspection.
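
As one example of daily prompt logging, a cron entry could run a query every morning and append the result to a log file (the schedule and paths below are illustrative):

# crontab entry: run at 08:00 daily from the Locentra backend directory
0 8 * * * cd /opt/locentra/backend && python cli/query.py --prompt "Summarize today's DeFi headlines" >> /var/log/locentra/daily.log 2>&1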


🧰 Extend the CLI

Every script in backend/cli/ has access to:

  • registry.get() and .register()

  • memory_service.log_prompt()

  • embed_text() for semantic recall

  • fine_tune_model() for instant training

  • All system settings via from backend.core.config import settings

Add your own:

backend/cli/my_custom_script.py

You'll immediately inherit full access to the runtime system.
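
Below is a minimal sketch of such a script. Only the settings import path is documented above; the other import locations and exact signatures are assumptions (shown commented out) and should be checked against your checkout:

# backend/cli/my_custom_script.py
# Illustrative sketch: only backend.core.config is a documented import path.
import argparse

from backend.core.config import settings  # documented settings access

# Assumed locations for the helpers listed above; verify against the repo:
# from backend.core.registry import registry                 # registry.get() / registry.register()
# from backend.services.memory import memory_service, embed_text
# from backend.services.training import fine_tune_model

def main():
    parser = argparse.ArgumentParser(description="Custom Locentra CLI script")
    parser.add_argument("--prompt", required=True, help="Prompt to log, embed, or train on")
    args = parser.parse_args()

    # 'model_name' is an assumed settings attribute, used here only for illustration
    print(f"Active model: {getattr(settings, 'model_name', '<unknown>')}")

    # memory_service.log_prompt(args.prompt)      # persist the prompt in semantic memory
    # vector = embed_text(args.prompt)            # embed for semantic recall
    # fine_tune_model(args.prompt, "...")         # push new knowledge into the model

if __name__ == "__main__":
    main()

This runs as-is (it only parses arguments and reads settings); uncomment the helper calls once the real import paths are confirmed.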
