
🚀 Usage

Once the system is live, whether started manually or via Docker, you can interface with Locentra OS through three fully integrated layers:

  1. 🖥 Web UI (for interaction and monitoring)

  2. 🧪 CLI tools (for training and dev workflows)

  3. ⚙️ REST API (for programmatic access)

Each layer is built on the same backend and shares memory, model state, and logs.


🖥 1. Web UI (Frontend Interface)

Once the stack is running, access the Locentra dashboard at:

http://localhost

โš ๏ธ Ensure both the backend (port 8000) and frontend (port 3000) are active or containerized.

Key Features:

  • ๐Ÿ” Prompt the model and receive live responses

  • ๐Ÿง  Inspect memory (semantic vector traces of past inputs)

  • ๐Ÿ“ˆ View real-time logs from agents, training, inference

  • ๐Ÿง  Trigger agents and simulate autonomous workflows

  • ๐Ÿ› ๏ธ Fine-tune the model with a prompt or session directly in-browser

The UI is built for interactive development and runtime observability.


🧪 2. CLI Tools

Located at:

backend/cli/

🔹 query.py – Terminal Inference

Query the active model from the command line:

python cli/query.py --prompt "What is Solana?"

Output:

[Model Output]
Solana is a high-performance blockchain designed for speed and scalability...
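Because query.py is a plain script, it also slots into automation. A minimal sketch driving it from Python via subprocess (assuming it is run from backend/, as in the example above):

import subprocess

# Run the CLI inference script and capture the model's answer.
result = subprocess.run(
    ["python", "cli/query.py", "--prompt", "What is Solana?"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)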

🔹 train.py – Fine-Tune via CLI

Feed prompt-completion pairs to the training pipeline:

python cli/train.py --prompt "Describe LSTs" --completion "LSTs are Liquid Staking Tokens..."

Options:

Flag           Description
--dry-run      Simulate training without applying it
--vectorize    Embed the prompt into vector memory
--meta         Attach searchable tags to the memory entry
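
The flags can be combined in a single call, as sketched below; note that the value format for --meta is an assumption here (a comma-separated tag string):

python cli/train.py \
  --prompt "Describe LSTs" \
  --completion "LSTs are Liquid Staking Tokens..." \
  --vectorize \
  --meta "defi,staking"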

The CLI is ideal for testing, scripting, and dev automation.


โš™๏ธ 3. API Endpoints

All interactions are served by the FastAPI backend:

http://localhost:8000

Docs live at: http://localhost/api/docs (Swagger UI)

🔹 Query LLM

POST /api/llm/query

Request:

{
  "prompt": "Explain MEV in Ethereum"
}

Response:

{
  "response": "MEV stands for Maximal Extractable Value..."
}
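
For programmatic access from Python, a minimal sketch using the requests library (assumed installed); the URL and payload shape follow the examples above:

import requests

# Send a prompt to the running backend and print the model's reply.
resp = requests.post(
    "http://localhost:8000/api/llm/query",
    json={"prompt": "Explain MEV in Ethereum"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])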

🔹 Train Model

POST /api/llm/train

Train the model in real time with your own data:

{
  "texts": [
    "Describe validator slashing in proof-of-stake systems."
  ]
}
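
The same call from Python, as a sketch; the response body isn't documented here, so only the status is checked:

import requests

# Push a single training text to the live fine-tuning endpoint.
payload = {"texts": ["Describe validator slashing in proof-of-stake systems."]}
resp = requests.post(
    "http://localhost:8000/api/llm/train",
    json=payload,
    timeout=300,  # assumption: training may take a while
)
resp.raise_for_status()
print("Training request accepted:", resp.status_code)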

🔹 Stream Logs

GET /api/system/logs

Returns live system logs with optional filters (?level=INFO&limit=100).
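
A sketch of polling this endpoint from Python with the documented filters; the exact response schema isn't shown here, so the body is printed raw:

import requests

# Fetch the 100 most recent INFO-level log entries.
resp = requests.get(
    "http://localhost:8000/api/system/logs",
    params={"level": "INFO", "limit": 100},
    timeout=10,
)
resp.raise_for_status()
print(resp.text)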


๐Ÿ” End-to-End Workflow Example

A real-world interaction across all system layers:

  1. 🧑 User submits a prompt via CLI or API

  2. 🤖 Agent monitors response quality → scores it low

  3. 🔁 AutoTrainer triggers fine-tuning

  4. 📚 Prompt is embedded and persisted in vector memory

  5. 🧠 Future prompts trigger semantic recall

  6. 🖥 Logs and memory updates appear live in the Web UI


🔄 Your Workflow, Your Way

Use the Web UI to monitor, the CLI to build, and the API to integrate. All three are connected. All three evolve the model.

Want to extend agent behavior or memory scoring? See 🧩 Extending the System.
