███╗   ███╗███████╗███╗   ███╗ █████╗ ██████╗ ██████╗  █████╗ 
████╗ ████║██╔════╝████╗ ████║██╔══██╗██╔══██╗██╔══██╗██╔══██╗
██╔████╔██║█████╗  ██╔████╔██║███████║██████╔╝██████╔╝███████║
██║╚██╔╝██║██╔══╝  ██║╚██╔╝██║██╔══██║██╔══██╗██╔══██╗██╔══██║
██║ ╚═╝ ██║███████╗██║ ╚═╝ ██║██║  ██║██████╔╝██║  ██║██║  ██║
╚═╝     ╚═╝╚══════╝╚═╝     ╚═╝╚═╝  ╚═╝╚═════╝ ╚═╝  ╚═╝╚═╝  ╚═╝

Intuition-driven control plane for agent memory & action selection.

Python 3.11+ · License: MIT · Tests passing


🧠 What is memabra?

Most agents memorize. memabra intuits.

memabra is a local-first, observable, trainable, and replayable control plane for agent memory and action orchestration.

Instead of acting like a dusty filing cabinet, memabra functions as a meta-cognitive controller: given any task, it rapidly decides whether to answer directly, recall memory, load a skill, or invoke a tool — then learns from outcomes to sharpen those instincts over time.

  • 🏠 Local-first — no cloud lock-in, your data stays on disk
  • 📊 Observable — every decision is tracked, versioned, and inspectable
  • 🎓 Trainable — online learning loop improves routing automatically
  • 🔄 Replayable — replay trajectories, audit decisions, roll back versions
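The four-way decision above (answer directly, recall memory, load a skill, or invoke a tool) can be pictured with a toy sketch. Everything below is illustrative only — the `Action` enum, the `route` function, and the keyword heuristic are hypothetical stand-ins, not memabra's actual (learned) router:

```python
from enum import Enum

class Action(Enum):
    # Hypothetical action set mirroring the four choices described above.
    ANSWER = "answer_directly"
    RECALL = "recall_memory"
    SKILL = "load_skill"
    TOOL = "invoke_tool"

def route(task: str) -> Action:
    """Toy stand-in for the controller's routing decision.

    memabra's real router is learned from outcomes; this keyword
    heuristic only illustrates the shape of the four-way choice.
    """
    lowered = task.lower()
    if "remember" in lowered or "last time" in lowered:
        return Action.RECALL
    if "calculate" in lowered or "fetch" in lowered:
        return Action.TOOL
    if "translate" in lowered:
        return Action.SKILL
    return Action.ANSWER

print(route("What did we decide last time?").value)  # recall_memory
```

The point of a meta-cognitive controller is that this mapping is not hand-written: outcomes feed back into the router, which is what the online learning workflow below exercises.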

Quick Start

git clone https://github.com/TacitLab/memabra.git
cd memabra
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

1. Peek under the hood

memabra --help

2. Run a safe dry-run evaluation

See the full workflow without actually promoting a new router version:

memabra run --dry-run --format text

3. Check system pulse

memabra status --format text

4. Inspect your router lineage

memabra version list --format text

5. Time-travel (rollback)

memabra version rollback <version-id> --format text

🎮 CLI Commands

Command                          Description
-------------------------------  ---------------------------------------
memabra run                      🚀 Execute the online learning workflow
memabra status                   💓 Show current system health & metrics
memabra version list             📜 List all saved router versions
memabra version rollback <id>    Roll back to a specific version

🖨️ Operator-Friendly Output

By default, memabra speaks JSON. For humans, add --format text:

memabra run --dry-run --format text

Sample output:

Memabra online learning result
Summary
Report ID: report-58f9f22
Skipped: no
Promoted: yes
Dry run: yes

Baseline
Reward: 0.7200
Error rate: 0.1200
Latency (ms): 145.0000

Challenger
Reward: 0.8100
Error rate: 0.0800
Latency (ms): 132.5000

Deltas
Reward delta: 0.0900
Error rate delta: -0.0400
Latency delta (ms): -12.5000

Decision
Accepted: yes
Reason: challenger improved reward and reduced error rate
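Each delta in the report is simply challenger minus baseline, so a negative error-rate or latency delta means the challenger improved. Assuming the metrics arrive as plain dicts (field names here are inferred from the sample output above, not from memabra's actual schema), the arithmetic is:

```python
# Metric values taken from the sample report above.
baseline = {"reward": 0.72, "error_rate": 0.12, "latency_ms": 145.0}
challenger = {"reward": 0.81, "error_rate": 0.08, "latency_ms": 132.5}

# Delta = challenger minus baseline; negative error-rate and latency
# deltas indicate the challenger improved on both.
deltas = {k: round(challenger[k] - baseline[k], 4) for k in baseline}
print(deltas)  # {'reward': 0.09, 'error_rate': -0.04, 'latency_ms': -12.5}
```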

  • Normalized booleans (yes/no/none)
  • Fixed-precision metrics for easy comparison
  • Sectioned layout — Summary → Baseline → Challenger → Deltas → Decision
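The normalization rules are easy to reproduce. As a minimal sketch — `fmt_bool` and `fmt_metric` are hypothetical helper names, not memabra's internal functions — booleans collapse to yes/no/none and metrics render at fixed precision:

```python
def fmt_bool(value) -> str:
    """Render booleans uniformly as yes/no/none.

    Illustrative helper only; mirrors the normalization seen in the
    sample text output, not memabra's actual formatter.
    """
    if value is None:
        return "none"
    return "yes" if value else "no"

def fmt_metric(value: float, places: int = 4) -> str:
    """Fixed-precision metric string, matching the 4-decimal sample."""
    return f"{value:.{places}f}"

print(fmt_bool(True), fmt_bool(False), fmt_bool(None))  # yes no none
print(fmt_metric(0.72))  # 0.7200
```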


🧪 Running Tests

pytest tests/ -q

Current status: 126 passed 🟢


📚 Documentation


🏷️ License

MIT — use it, break it, improve it.

Built with caffeine and curiosity.
