🧮 CLI Commands
Prefer terminals over tabs? Locentra OS ships with a full CLI that lets you query, train, vectorize, and manage the system without ever opening a browser or hitting an API endpoint.
All CLI scripts live in backend/cli/.
Run them like this:
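A typical invocation, assuming the scripts are run directly with Python from the repository root (a sketch, not canonical syntax):

```shell
# Run a CLI script directly from the repository root
python backend/cli/query.py --prompt "What is Locentra?"
```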
Each script has access to:

- 📚 Locentra Registry
- 🧠 Vector memory engine
- 🔧 Config settings (`.env`)
- 🧾 Logger system
- 🧱 Model + tokenizer
📥 query.py
Send a prompt to the active model and print the output to your terminal.
Usage:
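The exact synopsis may vary; a plausible call using the flags documented below:

```shell
python backend/cli/query.py \
  --prompt "Explain the vector memory engine" \
  --max_tokens 256 \
  --temperature 0.7 \
  --verbose
```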
Options:
- `--prompt`: The text prompt to send to the model
- `--max_tokens`: Maximum number of tokens to generate
- `--temperature`: Sampling temperature (creativity control)
- `--verbose`: Show internal debug info and memory scores
Output: the model's generated text is printed to stdout (with debug info and memory scores when `--verbose` is set).
🧠 train.py
Push new knowledge into the model using prompt → completion training.
Basic Example:
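A minimal sketch using the flags documented below (the prompt and completion text are placeholder data):

```shell
python backend/cli/train.py \
  --prompt "What does the Locentra Registry do?" \
  --completion "It registers and resolves models, services, and tools at runtime." \
  --vectorize
```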
Flags:
- `--prompt`: Input prompt to train on
- `--completion`: Expected response (used for supervised tuning)
- `--vectorize`: Store the prompt in vector memory
- `--tags`: Attach semantic tags for future filtering
- `--dry-run`: Simulate training without applying it
- `--meta`: Attach metadata (e.g. source, user, origin)
🧪 Tip: batch training can be automated by scripting repeated train.py calls.
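One way to sketch that automation is a small Python wrapper that reads a JSONL file of prompt/completion pairs and builds one train.py command per record. The file layout, helper names, and script path are assumptions; only the documented flags are taken from above.

```python
import json
import subprocess
import sys
from pathlib import Path

def build_train_command(record, script="backend/cli/train.py", dry_run=True):
    """Build the argv list for one training record (hypothetical layout)."""
    cmd = [sys.executable, script,
           "--prompt", record["prompt"],
           "--completion", record["completion"]]
    if record.get("tags"):
        cmd += ["--tags", ",".join(record["tags"])]
    if dry_run:
        cmd.append("--dry-run")
    return cmd

def batch_train(jsonl_path, dry_run=True, execute=False):
    """Read a JSONL file and build (optionally run) one train.py call per line."""
    commands = []
    for line in Path(jsonl_path).read_text().splitlines():
        if not line.strip():
            continue
        cmd = build_train_command(json.loads(line), dry_run=dry_run)
        commands.append(cmd)
        if execute:  # only shell out when explicitly requested
            subprocess.run(cmd, check=True)
    return commands
```

By default the wrapper only builds the commands (and passes `--dry-run`), so you can inspect the batch before letting it touch the model.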
🧹 clean_memory.py (Planned Module)

Optional memory housekeeping utility (suggested extension):
Concept:
- Delete or archive outdated memory entries
- Filter by vector score, timestamp, or tag
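A sketch of what such a module could look like, with a hypothetical entry shape and filter; none of these names exist yet, this is purely a suggested design:

```python
import time
from dataclasses import dataclass, field

# Hypothetical memory entry shape; the real vector-memory schema may differ.
@dataclass
class MemoryEntry:
    text: str
    score: float              # similarity / relevance score
    timestamp: float          # unix seconds
    tags: list = field(default_factory=list)

def select_stale(entries, min_score=0.2, max_age_days=90, drop_tags=()):
    """Return entries that should be deleted or archived: low vector score,
    older than the age cutoff, or carrying a blacklisted tag."""
    cutoff = time.time() - max_age_days * 86400
    return [e for e in entries
            if e.score < min_score
            or e.timestamp < cutoff
            or any(t in drop_tags for t in e.tags)]
```

Selection and deletion are deliberately separate steps, so a first version could archive the selected entries to disk instead of destroying them.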
🧩 Integration Use Case:
Build your own CLI pipelines by chaining commands and external data sources.
🔹 Train from a scraped article:
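A sketch of such a pipeline; the URL, the crude sed-based tag stripping, and the tag names are placeholders:

```shell
# Fetch an article, strip HTML tags, and store the text in vector memory
TEXT=$(curl -s https://example.com/article | sed -e 's/<[^>]*>//g')
python backend/cli/train.py --prompt "$TEXT" --vectorize --tags "scraped,article"
```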
🔹 Log every query to disk:
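Because the CLI prints to stdout, standard Unix tools can capture it; the log path here is an assumption:

```shell
# Append each response to a log file while still printing it to the terminal
python backend/cli/query.py --prompt "Daily status check" | tee -a logs/queries.log
```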
Locentra CLI is scriptable and Unix-friendly. Perfect for automated fine-tuning loops, daily prompt logging, or memory introspection.
🧰 Extend the CLI
Every script in backend/cli/ has access to:

- `registry.get()` and `.register()`
- `memory_service.log_prompt()`
- `embed_text()` for semantic recall
- `fine_tune_model()` for instant training
- All system settings via `from backend.core.config import settings`
Add your own:
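A skeleton for a new script might look like this. The argparse scaffold runs stand-alone; the Locentra imports are shown commented out because their exact module paths (other than `backend.core.config`) are assumptions:

```python
#!/usr/bin/env python
"""Sketch of a custom script for backend/cli/ (hypothetical example)."""
import argparse

# from backend.core.config import settings            # stated in the docs
# from backend.core.registry import registry          # path is an assumption
# memory_service, embed_text, fine_tune_model        # import paths unknown

def build_parser():
    parser = argparse.ArgumentParser(description="Inspect vector memory")
    parser.add_argument("--tag", help="Filter entries by semantic tag")
    parser.add_argument("--limit", type=int, default=10,
                        help="Maximum number of entries to print")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    # entries = memory_service.search(tag=args.tag, limit=args.limit)
    # for entry in entries:
    #     print(entry)
    return args

if __name__ == "__main__":
    main()
```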
You'll immediately inherit full access to the runtime system.