A curated memory architecture for LLMs with dependency-aware truth maintenance. Your AI remembers what it learned—and knows when that knowledge is outdated.

```python
# Agent learns a fact from code analysis
claim = await truthkeeper.create_claim(
    content="UserService handles authentication via JWT tokens",
    evidence=[
        Evidence(source="file://src/services/user.py:42-87", type="CODE"),
        Evidence(source="file://docs/auth.md", type="DOCUMENTATION"),
    ],
    dependencies=[
        Dependency(target="src/services/user.py", type="HARD"),
    ],
)

# Later: developer refactors authentication to AuthService
# TruthKeeper automatically detects the change and marks the claim STALE
stale_claims = await truthkeeper.get_stale_claims()
# => [Claim(content="UserService handles authentication...", state=STALE)]

# Reverification runs, claim marked OUTDATED
# Agent now knows its old knowledge is invalid
```

## AI Coding Assistants Don't Know What They Don't Know
Current AI tools lack persistent memory, and when they do remember (via RAG), they don't know when that information has become stale. They act on outdated knowledge and make mistakes:

- Every session starts fresh, so developers re-explain architecture, patterns, and decisions repeatedly.
- RAG systems retrieve information without staleness awareness; there is no mechanism to detect when stored knowledge has become wrong.
- Stored “facts” are taken at face value: no confidence scoring, no evidence tracking, no human review.

## Memory That Maintains Itself
TruthKeeper treats memory as a living system that watches sources, detects changes, and automatically updates or flags affected claims.
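
To make that loop concrete, here is a minimal sketch of the idea rather than the TruthKeeper API: a claim store indexed by the sources each claim depends on, where a source-change event flags the affected claims as STALE. `ClaimStore` and `on_source_changed` are hypothetical names used only for illustration.

```python
# Minimal illustration (not the TruthKeeper API): claims are indexed by the
# sources they depend on, and a source-change event marks every dependent
# claim STALE so it can be reverified.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Claim:
    content: str
    dependencies: list[str]   # source paths this claim relies on
    state: str = "SUPPORTED"


class ClaimStore:
    def __init__(self) -> None:
        self._claims: list[Claim] = []
        self._by_source: dict[str, list[Claim]] = defaultdict(list)

    def add(self, claim: Claim) -> None:
        self._claims.append(claim)
        for source in claim.dependencies:
            self._by_source[source].append(claim)

    def on_source_changed(self, source: str) -> list[Claim]:
        """Called by a source watcher; flags affected claims for reverification."""
        affected = self._by_source.get(source, [])
        for claim in affected:
            claim.state = "STALE"
        return affected


store = ClaimStore()
store.add(Claim("UserService handles authentication via JWT tokens",
                dependencies=["src/services/user.py"]))
stale = store.on_source_changed("src/services/user.py")
print([(c.content, c.state) for c in stale])
```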

## Claim State Machine
Every claim flows through a well-defined state machine. Know exactly what state your knowledge is in.
| Claim state | Guidance |
|---|---|
| Verified and currently valid | Use confidently |
| No longer valid | Don't use |
| Conflicting evidence exists | Check review queue |
| Awaiting initial verification | Wait |
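
The sketch below shows what such a state machine might look like in code. Only SUPPORTED, STALE, CONTESTED, and OUTDATED are named in this document; the PENDING name for the initial state and the exact transition set are assumptions made for illustration.

```python
# Hedged sketch of a claim state machine. SUPPORTED, STALE, CONTESTED and
# OUTDATED are named in this document; PENDING and the transition table are
# illustrative assumptions.
from enum import Enum


class ClaimState(Enum):
    PENDING = "pending"        # awaiting initial verification (assumed name)
    SUPPORTED = "supported"    # verified and currently valid
    STALE = "stale"            # a watched source changed; reverification queued
    CONTESTED = "contested"    # conflicting evidence; needs human review
    OUTDATED = "outdated"      # reverification failed; do not use


# One plausible transition table; the real product may differ.
ALLOWED = {
    ClaimState.PENDING:   {ClaimState.SUPPORTED, ClaimState.CONTESTED, ClaimState.OUTDATED},
    ClaimState.SUPPORTED: {ClaimState.STALE, ClaimState.CONTESTED},
    ClaimState.STALE:     {ClaimState.SUPPORTED, ClaimState.CONTESTED, ClaimState.OUTDATED},
    ClaimState.CONTESTED: {ClaimState.SUPPORTED, ClaimState.OUTDATED},
    ClaimState.OUTDATED:  set(),
}


def transition(current: ClaimState, new: ClaimState) -> ClaimState:
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {new.name}")
    return new


print(transition(ClaimState.SUPPORTED, ClaimState.STALE))  # => ClaimState.STALE
```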
Claims are scored using a weighted formula combining multiple verification signals:
```
confidence = 0.4 × minicheck_score
           + 0.2 × authority_score
           + 0.3 × agreement_score
           + 0.1 × recency_score
```

| Confidence | Action |
|---|---|
| ≥ 0.9 | Auto-accept (if low blast radius) |
| ≥ 0.8 | Mark SUPPORTED |
| 0.5-0.8 | Mark CONTESTED |
| < 0.5 | Mark OUTDATED |
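
As a worked example of the formula and the threshold table, assuming the thresholds are applied top-down and auto-accept additionally requires a low blast radius, a scoring helper might look like this (`confidence_score` and `resulting_action` are illustrative names, not the TruthKeeper API):

```python
# Worked example of the weighted confidence score and the threshold table
# above. The function names are illustrative; the weights and cut-offs are
# the ones listed in this section.
def confidence_score(minicheck: float, authority: float,
                     agreement: float, recency: float) -> float:
    return 0.4 * minicheck + 0.2 * authority + 0.3 * agreement + 0.1 * recency


def resulting_action(confidence: float, low_blast_radius: bool) -> str:
    if confidence >= 0.9 and low_blast_radius:
        return "auto-accept"
    if confidence >= 0.8:
        return "mark SUPPORTED"
    if confidence >= 0.5:
        return "mark CONTESTED"
    return "mark OUTDATED"


score = confidence_score(minicheck=0.95, authority=0.8, agreement=0.9, recency=0.7)
print(round(score, 2), resulting_action(score, low_blast_radius=True))
# => 0.88 mark SUPPORTED  (below the 0.9 auto-accept threshold)
```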

## Built for Production
```
┌────────────────────────────────────────────────────────────────────┐
│                          AI Coding Agent                           │
│                     (Claude Code, Cursor, etc.)                    │
└────────────────────────────────────────────────────────────────────┘
                                  │
              ┌───────────────────┴────────────────┐
              ▼                                    ▼
┌───────────────────────────┐    ┌───────────────────────────────────┐
│    Memory CI Pipeline     │    │           Verification            │
├───────────────────────────┤    ├───────────────────────────────────┤
│ • Source Watchers         │    │ • MiniCheck (NLI)                 │
│ • Temporal Workflows      │    │ • AST Verifier                    │
│ • Human Review Queue      │    │ • Multi-Source Corroboration      │
└───────────────────────────┘    │ • Confidence Calculator           │
              │                  └───────────────────────────────────┘
              │                                    │
              └───────────────────┬────────────────┘
                                  ▼
┌────────────────────────────────────────────────────────────────────┐
│                          TruthKeeper Core                          │
├────────────────────────────────────────────────────────────────────┤
│ • Claim State Machine   • Dependency Graph   • Blast Radius        │
│ • Cascade Engine        • Bi-temporal Store  • JSONL Export        │
└────────────────────────────────────────────────────────────────────┘
                                  │
              ┌───────────────────┴────────────────┐
              ▼                                    ▼
┌───────────────────────────┐    ┌───────────────────────────────────┐
│   PostgreSQL + pgvector   │    │          SQLite (Offline)         │
│       (Production)        │    │        (Local Development)        │
└───────────────────────────┘    └───────────────────────────────────┘
```
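
The confidence table above gates auto-acceptance on a "low blast radius". As an illustration of that concept only, blast radius can be thought of as the number of claims transitively affected when one claim is invalidated. The claim IDs, graph shape, and traversal below are made up for the example, not TruthKeeper internals.

```python
# Hedged sketch of a blast-radius computation: count the claims reachable
# from a given claim through dependency edges. Example data is hypothetical.
from collections import deque

# claim_id -> ids of claims that depend on it (hypothetical example graph)
DEPENDENTS: dict[str, set[str]] = {
    "auth-uses-jwt": {"login-flow", "token-refresh"},
    "login-flow": {"frontend-session"},
    "token-refresh": set(),
    "frontend-session": set(),
}


def blast_radius(claim_id: str) -> int:
    """Count claims reachable from claim_id via dependency edges (BFS)."""
    seen: set[str] = set()
    queue = deque(DEPENDENTS.get(claim_id, ()))
    while queue:
        current = queue.popleft()
        if current in seen:
            continue
        seen.add(current)
        queue.extend(DEPENDENTS.get(current, ()))
    return len(seen)


print(blast_radius("auth-uses-jwt"))  # => 3 claims would need re-checking
```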

## Who Is TruthKeeper For?

Building AI coding assistants or agents? TruthKeeper provides the memory layer that knows when its knowledge is stale.
Teams using AI coding assistants who want shared, verified knowledge about their codebase that survives sessions.
Finance, healthcare, and government teams requiring audit trails, explainability, and compliance for AI decisions.

## Get Early Access
TruthKeeper is currently in closed beta. Request an API key to be among the first to build AI agents with verified, self-maintaining memory.