Usage
Once the system is live (either manually or via Docker), you can interface with Locentra OS through three fully integrated layers:
Web UI (for interaction and monitoring)
CLI tools (for training and dev workflows)
REST API (for programmatic access)
Each layer is built on the same backend and shares memory, model state, and logs.
1. Web UI (Frontend Interface)
Once the stack is running, open the Locentra dashboard in your browser on the frontend port (3000 by default).
Note: Ensure both the backend (port 8000) and frontend (port 3000) are active or containerized.
Key Features:
Prompt the model and receive live responses
Inspect memory (semantic vector traces of past inputs)
View real-time logs from agents, training, and inference
Trigger agents and simulate autonomous workflows
Fine-tune the model with a prompt or session directly in-browser
The UI is built for interactive development and runtime observability.
2. CLI Tools
Two scripts are provided:
query.py: Terminal Inference
Query the active model from the command line:
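A minimal sketch of what an invocation might look like, driven from Python for consistency with the other examples; the script location and the positional prompt argument are assumptions, not taken from the Locentra OS source:

```python
# Hypothetical invocation of query.py; adjust the path and argument
# style to match your checkout.
import subprocess

result = subprocess.run(
    ["python", "query.py", "Explain Locentra's vector memory in one sentence"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # the model's response, as printed by query.py
```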
The model's generated response is printed to the terminal.
train.py: Fine-Tune via CLI
Feed prompt-completion pairs to the training pipeline:
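A hedged sketch of feeding a single pair, again via subprocess; the --prompt and --completion argument names are assumptions, while the remaining flags are the documented options listed below:

```python
# Hypothetical train.py invocation; --prompt/--completion names are assumed,
# the other flags are the documented options.
import subprocess

subprocess.run(
    [
        "python", "train.py",
        "--prompt", "What does Locentra OS do?",
        "--completion", "It serves, monitors, and continuously fine-tunes a local LLM.",
        "--vectorize",              # embed the prompt into vector memory
        "--meta", "docs,example",   # attach searchable tags to the memory entry
        "--dry-run",                # simulate training without applying it
    ],
    check=True,
)
```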
Options:
--dry-run: Simulate training without applying it
--vectorize: Embed the prompt into vector memory
--meta: Attach searchable tags to the memory entry
CLI is ideal for testing, scripting, and dev automation.
3. API Endpoints
All interactions are served by the FastAPI backend:
Docs live at http://localhost/api/docs (Swagger UI).
Query LLM
POST a JSON request containing your prompt; the JSON response contains the model's output.
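A minimal sketch using the requests library; the endpoint path and field names are assumptions, so check the Swagger UI above for the actual schema:

```python
# Assumed endpoint path and payload shape; verify against /api/docs.
import requests

resp = requests.post(
    "http://localhost:8000/api/llm/query",           # assumed path
    json={"prompt": "Summarize the Locentra architecture"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"response": "..."}
```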
Train Model
Train the model in real-time with your own data:
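A hedged sketch along the same lines; the endpoint path and field names are again assumptions:

```python
# Assumed training endpoint and payload shape; verify against /api/docs.
import requests

payload = {
    "prompt": "How does Locentra recall past prompts?",
    "completion": "It embeds them into vector memory and searches semantically.",
}
resp = requests.post("http://localhost:8000/api/llm/train", json=payload, timeout=60)
resp.raise_for_status()
print(resp.status_code, resp.json())
```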
Stream Logs
Returns live system logs, with optional filters (e.g. ?level=INFO&limit=100).
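A sketch using the documented query filters; the endpoint path, and the assumption that the endpoint returns a JSON list of log entries, are mine:

```python
# Assumed logs endpoint; the level/limit filters are the documented ones.
import requests

resp = requests.get(
    "http://localhost:8000/api/system/logs",   # assumed path
    params={"level": "INFO", "limit": 100},
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json():   # assuming a JSON list of log entries
    print(entry)
```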
End-to-End Workflow Example
A real-world interaction across all system layers:
1. User submits a prompt via CLI or API
2. Agent monitors response quality → scores low
3. AutoTrainer triggers fine-tuning
4. Prompt is embedded and persisted in vector memory
5. Future prompts trigger semantic recall
6. Logs and memory updates show up in the Web UI live
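Roughly the same loop can be scripted against the API. The sketch below reuses the assumed endpoints from the earlier examples and a purely hypothetical quality check, so treat it as an illustration rather than the shipped agent logic:

```python
# Illustrative end-to-end loop: query, judge, and (optionally) train.
# Endpoint paths and field names are the same assumptions used above.
import requests

BASE = "http://localhost:8000"

prompt = "Explain semantic recall in Locentra OS"
answer = requests.post(f"{BASE}/api/llm/query", json={"prompt": prompt}, timeout=30).json()

# Hypothetical quality check standing in for the agent's scoring step.
if len(str(answer)) < 40:
    requests.post(
        f"{BASE}/api/llm/train",
        json={"prompt": prompt, "completion": "A corrected reference answer."},
        timeout=60,
    )
```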
Your Workflow, Your Way
Use the Web UI to monitor, the CLI to build, and the API to integrate. All three are connected. All three evolve the model.
Want to extend agent behavior or memory scoring?