$LOCENTRA Token Access
It's not just another governance token. $LOCENTRA is the control surface for Locentra OS. It decides which agents you can launch, how far your model can learn, and how much bandwidth you get on decentralized compute rails.
It's infrastructure power, measured in tokens.
What It Enables
Holding $LOCENTRA unlocks three core pillars of the Locentra OS runtime:
Feature Access: Access private models, memory tiers, and advanced tools.
Agent Deployment: Run automated training agents, evaluators, and vector optimizers.
Governance Participation (future): Influence the global intelligence layer via staking, data curation, and prompt voting.
Access Tiers (Wallet-Gated)
Your token balance determines what you can do. The more $LOCENTRA you hold, the deeper you go:
0–9,999 (Basic): Inference, memory view, prompt optimizer
10,000–49,999 (Extended): CLI training, agent visibility, logs
50,000+ (Operator): Launch agents, retrain live, stream metrics
100,000+ (Protocol Builder): Full system control, vote, stake + share models
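For orientation, here is a minimal sketch of how these thresholds could be expressed in code; the `TIER_THRESHOLDS` table and `resolve_tier` helper are illustrative and not taken from core/config.py.

```python
# Illustrative only: a minimal mapping from $LOCENTRA balance to access tier.
# The actual thresholds live in core/config.py; names mirror the table above.

TIER_THRESHOLDS = [
    (100_000, "protocol_builder"),  # full system control, vote, stake + share models
    (50_000, "operator"),           # launch agents, retrain live, stream metrics
    (10_000, "extended"),           # CLI training, agent visibility, logs
    (0, "basic"),                   # inference, memory view, prompt optimizer
]

def resolve_tier(balance: float) -> str:
    """Return the highest tier whose threshold the wallet balance meets."""
    for threshold, tier in TIER_THRESHOLDS:
        if balance >= threshold:
            return tier
    return "basic"
```

With this mapping, a wallet holding 52,000 tokens resolves to the Operator tier, while one holding 9,500 stays Basic.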
Wallets are checked via Solana RPC calls from the backend; authentication is handled via a signature challenge in the frontend (Phantom, Backpack). A sketch of both checks follows the module list below.
You can configure access logic in:
core/config.py
core/registry.py
services/user_service.py
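Here is a minimal sketch of the two checks described above, assuming a standard SPL token balance lookup over Solana JSON-RPC and an ed25519 signature challenge; the RPC endpoint, mint address, and function names are placeholders, not Locentra internals.

```python
# Sketch of the backend-side checks. Requires: requests, base58, pynacl.
import requests
import base58
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

RPC_URL = "https://api.mainnet-beta.solana.com"   # placeholder RPC endpoint
LOCENTRA_MINT = "<LOCENTRA_MINT_ADDRESS>"          # placeholder mint address

def get_locentra_balance(owner: str) -> float:
    """Sum the wallet's $LOCENTRA balance across its token accounts via Solana RPC."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "getTokenAccountsByOwner",
        "params": [owner, {"mint": LOCENTRA_MINT}, {"encoding": "jsonParsed"}],
    }
    accounts = requests.post(RPC_URL, json=payload, timeout=10).json()["result"]["value"]
    return sum(
        acc["account"]["data"]["parsed"]["info"]["tokenAmount"]["uiAmount"] or 0.0
        for acc in accounts
    )

def verify_signature(wallet: str, message: bytes, signature: bytes) -> bool:
    """Verify a signature-challenge response (ed25519, as produced by Phantom/Backpack)."""
    try:
        VerifyKey(base58.b58decode(wallet)).verify(message, signature)
        return True
    except BadSignatureError:
        return False
```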
How It Works (Technical Flow)
All access checks are stateless and scoped per session.
Example: A user with 50k $LOCENTRA can launch an AutoTrainer agent via CLI or UI, but cannot modify core registry keys.
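To make that concrete, here is a hypothetical per-session capability scope; the capability names and the `SessionScope` class are illustrative and not drawn from the Locentra codebase.

```python
# Hypothetical per-session scope: capabilities are derived from the wallet's tier
# for the current session rather than stored server-side.
from dataclasses import dataclass, field

TIER_CAPABILITIES = {
    "basic": {"inference", "memory_view", "prompt_optimizer"},
    "extended": {"inference", "memory_view", "prompt_optimizer",
                 "cli_training", "agent_visibility", "logs"},
    "operator": {"inference", "memory_view", "prompt_optimizer", "cli_training",
                 "agent_visibility", "logs", "launch_agents", "live_retrain",
                 "stream_metrics"},
    "protocol_builder": {"*"},  # full system control
}

@dataclass
class SessionScope:
    wallet: str
    tier: str
    capabilities: set = field(init=False)

    def __post_init__(self):
        self.capabilities = TIER_CAPABILITIES[self.tier]

    def can(self, action: str) -> bool:
        return "*" in self.capabilities or action in self.capabilities

# A 50k holder resolves to "operator": launching an AutoTrainer agent is allowed,
# but modifying core registry keys is not.
scope = SessionScope(wallet="...", tier="operator")
assert scope.can("launch_agents")
assert not scope.can("modify_registry")
```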
Real-World Example
Let's say you're building a Twitter agent that listens for mentions of your token and retrains the model accordingly.
Requirements:
✅ Wallet with 50k+ $LOCENTRA
✅ Access to agent modules
✅ AgentRunner CLI available
✅ Token verified on-chain at runtime
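A hedged sketch of what the launch guard for such an agent could look like, assuming the Operator threshold from the tier table; `fetch_balance`, `agent_factory`, and the agent interface are hypothetical stand-ins, not the actual AgentRunner API.

```python
# Hypothetical launch guard for the mention-driven retraining agent described above.
# The on-chain check runs at launch time, so a wallet that drops below 50k loses access.

OPERATOR_THRESHOLD = 50_000

def launch_mention_agent(wallet: str, fetch_balance, agent_factory):
    """Verify the wallet on-chain, then start the mention-driven retraining agent."""
    balance = fetch_balance(wallet)          # e.g. the Solana RPC lookup sketched earlier
    if balance < OPERATOR_THRESHOLD:
        raise PermissionError(f"{wallet} holds {balance:,.0f} $LOCENTRA; 50k+ required")
    agent = agent_factory()                  # e.g. an AutoTrainer-style agent module
    agent.run()                              # listen for mentions, retrain on new data
    return agent
```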
This structure keeps LLM deployment trustless, modular, and wallet-native.
Upcoming: Staking System
We're building a staking mechanism where:
You stake $LOCENTRA to unlock bandwidth on shared LLMs
You vote on:
  Public dataset inclusion
  Prompt libraries
  New agent behaviors
You earn protocol rewards via:
  Training contribution
  Evaluation feedback
  Vector memory curation
It's not about speculation. It's about bandwidth, ownership, and modular AI power.
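As a purely hypothetical illustration of the stake-to-bandwidth idea above (the formula and names below are not part of the planned protocol):

```python
# Purely hypothetical: one way staked $LOCENTRA could translate into a share of
# shared-LLM inference capacity, proportional to the wallet's stake weight.

def bandwidth_share(staked: float, total_staked: float, pool_tokens_per_min: float) -> float:
    """Allocate a slice of the shared inference pool proportional to stake weight."""
    if total_staked <= 0 or staked <= 0:
        return 0.0
    return pool_tokens_per_min * (staked / total_staked)
```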