
⚙️ Features

Locentra OS is a modular, self-learning LLM Operating System—engineered for crypto-native teams, LLM builders, and full-stack developers who need more than just a wrapper. It’s a complete runtime stack that gives you fine-tuning, memory, inference, and agent logic—all locally controlled.


🔁 Dynamic Fine-Tuning

Train your model on the fly, while it's running.

Sources:

  • Real-time prompt input

  • Logged feedback from users

  • High-frequency prompt patterns

  • Autonomous scoring agents

No preprocessing. No restart. No external tools. Locentra triggers fine-tuning cycles dynamically based on usage and context.

Every response can become a training signal.
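
As a rough illustration, the sketch below shows how a usage-triggered fine-tuning cycle could be wired up. The `TrainingSignal` and `FineTuneTrigger` names, the score cutoff, and the threshold are hypothetical, not Locentra's actual internals.

```python
# Hypothetical sketch of a usage-triggered fine-tuning loop.
# Names, thresholds, and the score filter are illustrative only.
from dataclasses import dataclass, field


@dataclass
class TrainingSignal:
    prompt: str
    response: str
    score: float  # e.g. user feedback or an autonomous agent's quality score


@dataclass
class FineTuneTrigger:
    """Buffers live usage signals and fires a fine-tuning cycle at a threshold."""
    threshold: int = 50
    buffer: list = field(default_factory=list)

    def record(self, signal: TrainingSignal) -> None:
        # Keep only signals worth learning from (illustrative cutoff).
        if signal.score >= 0.5:
            self.buffer.append(signal)
        if len(self.buffer) >= self.threshold:
            self.run_cycle()

    def run_cycle(self) -> None:
        batch = [(s.prompt, s.response) for s in self.buffer]
        self.buffer.clear()
        # Hand the batch to the running trainer: no preprocessing step, no restart.
        print(f"Fine-tuning on {len(batch)} live samples")
```

The point is the flow: every scored response lands in a buffer, and once enough useful samples accumulate, a training cycle fires while the model keeps serving requests.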


🧠 Semantic Memory Engine

Forget stateless chat history. Locentra builds contextual awareness over time.

Under the hood:

  • Embedding-based vector recall using SentenceTransformers

  • Top-k semantic similarity search across previous prompts

  • Taggable, queryable memory logs

  • Full persistence in PostgreSQL

Use it to:

  • Build long-term memory

  • Inject historical context

  • Automatically adapt to a user's tone, intent, and domain

It doesn’t just respond—it remembers.
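
To make the recall step concrete, here is a minimal sketch of embedding-based top-k search with the SentenceTransformers library named above. The model choice, the example prompts, and the in-memory list standing in for the PostgreSQL-backed memory log are assumptions for illustration.

```python
# Minimal sketch of embedding-based top-k recall with SentenceTransformers.
# In Locentra the memory log is persisted in PostgreSQL; here it is an in-memory list.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Previously seen prompts (the "memory log").
memory = [
    "How do I bridge USDC to Arbitrum?",
    "Explain impermanent loss in simple terms.",
    "What gas settings should I use for an NFT mint?",
]
memory_embeddings = model.encode(memory, convert_to_tensor=True)

# New prompt: recall the k most similar past prompts to inject as context.
query = "Cheapest way to move stablecoins to an L2?"
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, memory_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{memory[hit['corpus_id']]}  (score={hit['score']:.2f})")
```

The returned hits can then be tagged, logged, and injected as historical context into the next generation call.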


🤖 Autonomous Agents

Locentra includes an internal agent layer that watches, scores, and optimizes system behavior.

Agents can:

  • Detect low-quality or overused prompts

  • Trigger fine-tuning thresholds (AutoTrainer)

  • Rewrite prompts and outputs (FeedbackLoopAgent)

  • Improve clarity, correctness, and engagement dynamically

These agents are:

  • Fully scriptable

  • Chainable

  • Deployable via CLI or config

  • Built into the OS core

Your model evolves—even when you’re not watching.
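
The sketch below shows one way scriptable, chainable agents could be composed. `AutoTrainer` and `FeedbackLoopAgent` are the agents named above, but the interface, thresholds, and chaining helper are hypothetical, not Locentra's actual implementation.

```python
# Hypothetical agent chain: each agent sees the output of the previous one.
# Interfaces and behavior are illustrative, not Locentra's real agent layer.
from __future__ import annotations

from typing import Protocol


class Agent(Protocol):
    def process(self, prompt: str, response: str) -> tuple[str, str]: ...


class FeedbackLoopAgent:
    """Rewrites thin responses for clarity before they are returned."""

    def process(self, prompt: str, response: str) -> tuple[str, str]:
        if len(response.split()) < 5:
            response = f"{response} (expanded by feedback loop)"
        return prompt, response


class AutoTrainer:
    """Flags prompts as fine-tuning candidates once they recur often enough."""

    def __init__(self, threshold: int = 10) -> None:
        self.counts: dict[str, int] = {}
        self.threshold = threshold

    def process(self, prompt: str, response: str) -> tuple[str, str]:
        self.counts[prompt] = self.counts.get(prompt, 0) + 1
        if self.counts[prompt] >= self.threshold:
            print(f"AutoTrainer: queueing '{prompt[:40]}' for fine-tuning")
        return prompt, response


def run_chain(agents: list[Agent], prompt: str, response: str) -> tuple[str, str]:
    for agent in agents:
        prompt, response = agent.process(prompt, response)
    return prompt, response
```

Because every agent shares the same `process` signature, new agents can be scripted and dropped into the chain without touching the others.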


🧰 Built for Developers

Locentra isn’t a black box. It’s built to be inspected, extended, and controlled.

Toolkit Includes:

  • FastAPI backend

  • React + Vite + Tailwind frontend

  • Docker deployment stack

  • PostgreSQL + SQLAlchemy database

  • CLI tools for training, inference, and debugging

  • Live logs, memory, and analytics dashboards

Everything is modular—every layer can be swapped, extended, or scripted.

Local-first, file-based, terminal-friendly.
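
As an example of what the FastAPI layer looks like in practice, here is a minimal, self-contained inference endpoint in the same spirit. The route path, request model, and `generate()` stub are illustrative assumptions rather than Locentra's actual API surface.

```python
# Illustrative FastAPI inference endpoint; route, schema, and model hook are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Locentra OS (sketch)")


class PromptRequest(BaseModel):
    prompt: str
    max_tokens: int = 256


def generate(prompt: str, max_tokens: int) -> str:
    # Placeholder for the locally hosted model call.
    return f"[model output for: {prompt[:40]}]"


@app.post("/api/generate")
def generate_endpoint(req: PromptRequest) -> dict:
    return {"response": generate(req.prompt, req.max_tokens)}
```

Saved as `main.py`, it can be served locally with `uvicorn main:app --reload`, keeping everything self-hosted.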


🔐 Privacy-First Architecture

Locentra respects ownership—from tokens to training data.

  • ✅ 100% self-hosted

  • ✅ No API key needed

  • ✅ No cloud calls

  • ✅ No telemetry

  • ✅ No vendor lock-in

  • ✅ Full data transparency

Your data stays where it belongs: with you.
