🛠 Under The Hood

From backend API layers to model routing and live vector memory, Locentra OS is built with a modern, container-native tech stack that’s made to scale—locally or in production. Everything is modular. Everything is swappable. Everything is under your control.


🧠 Backend Architecture

Language: Python 3.10+

Core Stack:

Layer                  Technology
API Routing            FastAPI (async, OpenAPI-ready)
Database ORM           SQLAlchemy
Model Inference        Transformers (HuggingFace)
Embeddings & Vectors   SentenceTransformers
Web Serving            Uvicorn (dev) / Gunicorn (prod)

Built for LLMs. Tuned for async performance.
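
To make the layering concrete, here is a minimal sketch of an async FastAPI route wrapping a HuggingFace text-generation pipeline. The route path, request schema, and default model are assumptions for illustration, not Locentra's actual endpoint definitions.

```python
# Minimal sketch: an async FastAPI route wrapping a HuggingFace pipeline.
# The /v1/generate path, PromptRequest schema, and default model are
# illustrative assumptions, not Locentra's real API surface.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Locentra OS API (sketch)")
generator = pipeline("text-generation", model="tiiuae/falcon-rw-1b")

class PromptRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/v1/generate")
async def generate(req: PromptRequest):
    # The pipeline call itself is synchronous; a production setup would
    # off-load it to a worker thread or queue to keep the event loop free.
    result = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": result[0]["generated_text"]}
```

Saved as main.py, a sketch like this would run under Uvicorn in development (uvicorn main:app --reload) and behind Gunicorn and NGINX in production.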


🖥 Frontend Stack

Everything you need for a fast, reactive developer UI—without the bloat.

Language: TypeScript

Tools & Frameworks:

  • React – Component-driven architecture

  • Vite – Ultra-fast builds and hot module reload

  • Tailwind CSS – Utility-first responsive design

Fast enough to feel native. Minimal enough to stay out of your way.


⚙️ DevOps & Infra

Locentra ships as a fully containerized application. No global Python installs. No backend/frontend mismatch. Just run and build.

Component       Tool
Runtime         Docker, Docker Compose
Reverse Proxy   NGINX
Env Config      .env files (dotenv)
Scripting       Makefile, Bash utilities

Install in seconds. Launch in one command.
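
For example, .env configuration could be read at startup with python-dotenv. Only MODEL_NAME is documented here; the other keys and defaults below are hypothetical.

```python
# Sketch of loading .env configuration with python-dotenv.
# MODEL_NAME comes from the docs; the other keys are hypothetical examples.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a .env file into the environment

MODEL_NAME = os.getenv("MODEL_NAME", "tiiuae/falcon-rw-1b")
API_PORT = int(os.getenv("API_PORT", "8000"))
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///locentra.db")

print(f"Serving {MODEL_NAME} on port {API_PORT}")
```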


🧬 Model Compatibility

Whether you're working with lightweight test models or full-scale transformer architectures, Locentra adapts.

Tested Models (via HuggingFace):

  • Falcon (tiiuae/falcon-rw-1b, etc.)

  • GPT-J

  • Mistral

  • LLaMA

  • Any other AutoModelForCausalLM-compatible transformer

Switch models by updating MODEL_NAME in .env. All loading is dynamic. Hot reload supported.
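
A rough sketch of what that dynamic loading looks like with the transformers Auto classes, assuming MODEL_NAME is read from the environment; the fallback default is only an example.

```python
# Sketch: load whichever checkpoint MODEL_NAME points at.
# Works for any AutoModelForCausalLM-compatible model on the HuggingFace Hub.
import os
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = os.getenv("MODEL_NAME", "tiiuae/falcon-rw-1b")  # default is just an example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Locentra OS is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```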


🔄 Flexible, Swappable, Extendable

Locentra’s architecture isn't monolithic—it’s built in layers:

  • Swap vector engines

  • Replace tokenizer logic

  • Extend agents

  • Build custom adapters

  • Modify memory scoring

  • Plug in external tools

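As a purely hypothetical illustration of that layering, a swappable component can be as small as an interface plus interchangeable implementations; Locentra's real adapter and agent interfaces may look different.

```python
# Hypothetical sketch of a swappable embedding backend; not Locentra's actual
# adapter interface, just the general shape of a pluggable layer.
from typing import List, Protocol

class EmbeddingBackend(Protocol):
    def embed(self, texts: List[str]) -> List[List[float]]: ...

class SentenceTransformerBackend:
    """Example backend built on SentenceTransformers."""
    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        from sentence_transformers import SentenceTransformer
        self.model = SentenceTransformer(model_name)

    def embed(self, texts: List[str]) -> List[List[float]]:
        return self.model.encode(texts).tolist()

def index_documents(backend: EmbeddingBackend, docs: List[str]) -> List[List[float]]:
    # Any object exposing embed() can be dropped in: a different vector engine,
    # a remote embedding service, or a stub for tests.
    return backend.embed(docs)
```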

✅ Everything You Need, Nothing You Don’t

Locentra OS was built to feel like infrastructure—not magic. You can trace every call, customize every layer, and run it entirely offline.
