
🧪 Testing & Quality Assurance

Reliable LLM systems don’t happen by accident—they’re built, broken, and verified. Locentra OS ships with a modular test suite that covers everything from real-time inference to memory scoring and agent workflows.

Tests are written with pytest and structured for clarity, speed, and layered validation.


✅ What’s Currently Covered

| Subsystem | Coverage Focus |
| --- | --- |
| Core Config | Registry, settings, flags |
| Inference Engine | Prompt → Model → Output |
| Memory Layer | Vector embedding + recall |
| Agents | Retraining + feedback loop |
| API Routes | CRUD, responses, edge cases |
| CLI Tools | Live queries, fine-tuning |

Coverage improves with every release — the current targets for v1.2 are defined below.


🚀 Running the Tests

From the project root, run:

cd backend
pytest tests/

Need more detail?

pytest -v tests/

You’ll get per-test output including tracebacks for failed assertions.


🧰 Utility Structure & Fixtures

All test scaffolding lives in:

backend/tests/conftest.py

Includes:

  • Dummy prompt builders

  • Temporary memory inserts

  • Mocked response objects

  • Config overrides (e.g. fake tokens or models)

Use this file to globally patch components or simulate failure modes (e.g. invalid registry keys or missing memory).
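As an illustration, a conftest.py along these lines could provide the fixture categories above. All names here (fixtures, env variables) are hypothetical, not Locentra's actual API:

```python
import pytest

# Hypothetical sketch of backend/tests/conftest.py -- fixture and
# helper names are illustrative, not Locentra's actual API.

def build_dummy_prompt(topic: str) -> str:
    """Plain function so the prompt logic stays testable on its own."""
    return f"Explain {topic} in two sentences."

@pytest.fixture
def dummy_prompt():
    # Dummy prompt builder
    return build_dummy_prompt

@pytest.fixture
def mocked_response():
    # Mocked response object with the minimal shape tests rely on
    return {"text": "stub output", "tokens": 3}

@pytest.fixture
def test_config(monkeypatch):
    # Config override: fake token/model so no real backend is touched
    monkeypatch.setenv("LOCENTRA_MODEL", "stub-model")
    monkeypatch.setenv("LOCENTRA_TOKEN", "dummy-token")
```

Keeping the underlying logic in plain functions (like build_dummy_prompt) makes it reusable outside the pytest fixture machinery.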


🧪 Test Snippet (Example)

Here’s how a basic inference test looks in tests/test_infer.py:

from backend.core.registry import registry

def test_inference_response():
    prompt = "Explain zk-rollups"
    model = registry.get("model")
    output = model(prompt)
    assert "rollup" in output.lower()

No mocking required—Locentra’s test suite uses the same model and runtime paths as production by default.
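When the real model is unavailable (offline development, lightweight CI runners), the same assertion can run against a stub instead. A self-contained sketch, using a toy registry in place of backend.core.registry (class and method names are assumptions for illustration):

```python
# Toy stand-ins for the registry and model -- illustrative only,
# not Locentra's actual backend.core.registry implementation.
class Registry:
    def __init__(self):
        self._items = {}

    def register(self, key, obj):
        self._items[key] = obj

    def get(self, key):
        return self._items[key]

class FakeModel:
    """Callable stub mimicking the model's prompt -> text interface."""
    def __call__(self, prompt: str) -> str:
        return "A rollup batches many transactions into a single proof."

def test_inference_response_offline():
    registry = Registry()
    registry.register("model", FakeModel())
    output = registry.get("model")("Explain zk-rollups")
    assert "rollup" in output.lower()
```

The test body mirrors the live version exactly; only what sits behind registry.get("model") changes.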


⚙️ Writing Custom Tests

Want to extend coverage? It’s simple:

  1. Create or extend a file in tests/

  2. Import what you need from:

    • core/

    • models/

    • agents/

    • api/

  3. Use standard assert syntax to validate outputs

  4. Run pytest before committing
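As a sketch of step 3, here is what a small custom test for the memory layer could look like. The cosine helper and thresholds are illustrative assumptions, not Locentra's actual scoring API:

```python
import math

# Illustrative similarity helper -- Locentra's real vector scoring
# lives in the memory layer; this stands in for it.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def test_similar_vectors_score_high():
    v1 = [0.9, 0.1, 0.0]
    v2 = [0.8, 0.2, 0.0]
    # Near-duplicate embeddings should rank very close together
    assert cosine_similarity(v1, v2) > 0.9

def test_orthogonal_vectors_score_zero():
    # Unrelated embeddings should contribute nothing to recall
    assert cosine_similarity([1.0, 0.0], [0.0, 1.0]) == 0.0
```

Plain assert statements like these are all pytest needs; no test classes or special runners are required.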


📈 Coverage Goals (v1.2)

| Module | Target Coverage |
| --- | --- |
| Core Config / Registry | 100% |
| Inference + Adapters | 90%+ |
| API Routes | 90% |
| CLI Tools | 60%+ |
| Agent Logic | 75% |
| Memory Scoring / Vectors | 85% |

Automated coverage reports (via pytest-cov) are planned for a future milestone.
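In the meantime, pytest-cov can already be run locally. A typical invocation (assuming pytest-cov is installed alongside the other dev dependencies):

```shell
cd backend
pip install pytest-cov
pytest --cov=. --cov-report=term-missing tests/
```

The term-missing report lists uncovered line numbers per module, which makes it easy to check progress against the targets above.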


🤖 CI Integration (Planned)

To run tests automatically on every push:

# .github/workflows/tests.yml
name: Locentra Tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install deps
        run: pip install -r backend/requirements.txt
      - name: Run Pytest
        run: pytest backend/tests/

Add a .test.env to configure test environment settings, and mock any outbound model loading calls if needed.


🧪 Test Philosophy

“If it breaks, it should be visible immediately.”

Locentra favors:

  • Small, focused tests over large ones

  • Live-run validation over mock-only testing

  • Semantic memory lifecycle testing

  • Trigger-based agent simulations (coming soon)
