
🚀 Usage
Once the system is live—either manually or via Docker—you can interface with Locentra OS through three fully integrated layers:
🖥 Web UI (for interaction and monitoring)
🧪 CLI tools (for training and dev workflows)
⚙️ REST API (for programmatic access)
Each layer is built on the same backend and shares memory, model state, and logs.
🖥 1. Web UI (Frontend Interface)
Once the stack is running, access the Locentra dashboard at:
http://localhost
⚠️ Ensure both the backend (port 8000) and frontend (port 3000) are active or containerized.
Key Features:
🔍 Prompt the model and receive live responses
🧠 Inspect memory (semantic vector traces of past inputs)
📈 View real-time logs from agents, training, inference
🧠 Trigger agents and simulate autonomous workflows
🛠️ Fine-tune the model with a prompt or session directly in-browser
The UI is built for interactive development and runtime observability.
🧪 2. CLI Tools
Located at:
backend/cli/
🔹 query.py — Terminal Inference
Query the active model from the command line (run from the backend/ directory):

```bash
python cli/query.py --prompt "What is Solana?"
```
Output:

```text
[Model Output]
Solana is a high-performance blockchain designed for speed and scalability...
```
🔹 train.py — Fine-Tune via CLI
Feed prompt-completion pairs to the training pipeline:

```bash
python cli/train.py --prompt "Describe LSTs" --completion "LSTs are Liquid Staking Tokens..."
```
Options:
`--dry-run`: simulate training without applying it
`--vectorize`: embed the prompt into vector memory
`--meta`: attach searchable tags to the memory entry
CLI is ideal for testing, scripting, and dev automation.
⚙️ 3. API Endpoints
All interactions are served by the FastAPI backend at:
http://localhost:8000
Interactive API docs (Swagger UI) live at:
http://localhost/api/docs
🔹 Query LLM
POST /api/llm/query

Request:

```json
{
  "prompt": "Explain MEV in Ethereum"
}
```

Response:

```json
{
  "response": "MEV stands for Maximal Extractable Value..."
}
```
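From Python, the query endpoint can be called with nothing beyond the standard library. A minimal sketch, assuming the backend is reachable on localhost:8000; `build_payload` and `query_llm` are illustrative names, not part of the Locentra API:

```python
import json
from urllib import request

API_URL = "http://localhost:8000/api/llm/query"

def build_payload(prompt: str) -> bytes:
    """Encode the request body shown above."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def query_llm(prompt: str) -> str:
    req = request.Request(API_URL, data=build_payload(prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires a running backend
        return json.load(resp)["response"]

# print(query_llm("Explain MEV in Ethereum"))  # uncomment with the stack live
```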
🔹 Train Model
POST /api/llm/train

Train the model in real time with your own data:

```json
{
  "texts": [
    "Describe validator slashing in proof-of-stake systems."
  ]
}
```
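The training endpoint takes the same shape of call. A minimal sketch under the same assumption (backend on localhost:8000); `train_payload` and `train` are hypothetical helper names:

```python
import json
from urllib import request

TRAIN_URL = "http://localhost:8000/api/llm/train"

def train_payload(texts: list[str]) -> bytes:
    """Encode the {"texts": [...]} body shown above."""
    return json.dumps({"texts": texts}).encode("utf-8")

def train(texts: list[str]) -> None:
    req = request.Request(TRAIN_URL, data=train_payload(texts),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req):  # requires a running backend
        pass

# train(["Describe validator slashing in proof-of-stake systems."])
```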
🔹 Stream Logs
GET /api/system/logs

Returns live system logs with optional filters (`?level=INFO&limit=100`).
🔁 End-to-End Workflow Example
A real-world interaction across all system layers:
🧑 User submits a prompt via CLI or API
🤖 Agent monitors response quality → scores low
🔁 AutoTrainer triggers fine-tuning
📚 Prompt is embedded and persisted in vector memory
🧠 Future prompts trigger semantic recall
🖥 Logs and memory updates show up in the Web UI live
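The query → score → retrain loop above can be sketched in plain Python. Every name here is a hypothetical stand-in for Locentra's actual agent, scorer, and AutoTrainer interfaces, passed in as callables:

```python
from typing import Callable

def feedback_loop(prompt: str,
                  query_fn: Callable[[str], str],
                  score_fn: Callable[[str], float],
                  train_fn: Callable[[str, str], None],
                  threshold: float = 0.5) -> str:
    """Query, score the answer, and hand low-scoring pairs to the trainer."""
    response = query_fn(prompt)
    if score_fn(response) < threshold:  # agent judges the answer as low quality
        train_fn(prompt, response)      # AutoTrainer-style fine-tuning trigger
    return response

# Toy run with stub functions:
trained = []
feedback_loop("What is Solana?",
              query_fn=lambda p: "stub answer",
              score_fn=lambda r: 0.2,              # below threshold → triggers training
              train_fn=lambda p, r: trained.append(p))
print(trained)  # → ['What is Solana?']
```

The real system persists the pair to vector memory as well, so subsequent prompts can recall it semantically.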
🔄 Your Workflow, Your Way
Use the Web UI to monitor, the CLI to build, and the API to integrate. All three are connected. All three evolve the model.
Want to extend agent behavior or memory scoring?