Now accepting closed beta applications.

Verified Memory
for AI Agents

A curated memory architecture for LLMs with dependency-aware truth maintenance. Your AI remembers what it learned—and knows when that knowledge is outdated.

# Agent learns a fact from code analysis
claim = await truthkeeper.create_claim(
    content="UserService handles authentication via JWT tokens",
    evidence=[
        Evidence(source="file://src/services/user.py:42-87", type="CODE"),
        Evidence(source="file://docs/auth.md", type="DOCUMENTATION")
    ],
    dependencies=[
        Dependency(target="src/services/user.py", type="HARD")
    ]
)

# Later: developer refactors authentication to AuthService
# TruthKeeper automatically detects the change and marks claim STALE

stale_claims = await truthkeeper.get_stale_claims()
# => [Claim(content="UserService handles authentication...", state=STALE)]

# Reverification runs, claim marked OUTDATED
# Agent now knows its old knowledge is invalid

The Problem

AI Coding Assistants Don't Know What They Don't Know

Current AI tools lack persistent memory. When they do remember (via RAG), they don't know when that information has become stale. They act on outdated knowledge and make mistakes.

Context Resets

Every session starts fresh. Developers re-explain architecture, patterns, and decisions repeatedly.

Stale Knowledge

RAG systems retrieve information without staleness awareness. No mechanism to detect when stored knowledge is wrong.

No Verification

Stored “facts” are taken at face value. No confidence scoring, no evidence tracking, no human review.

Features

Memory That Maintains Itself

TruthKeeper treats memory as a living system that watches sources, detects changes, and automatically updates or flags affected claims.

Truth Maintenance System
Every claim is tracked with full justification chains and evidence provenance. Know exactly why your AI believes what it believes.
Dependency-Aware Cascading
When sources change, affected claims are automatically identified through a three-tier dependency model (HARD, SOFT, DERIVED).
Human-in-the-Loop Review
High-impact changes are escalated for human review based on blast radius analysis. Keep humans in control of critical decisions.
Multi-Strategy Verification
AST-based code verification, MiniCheck fact-checking, and multi-source corroboration ensure claims are accurate.
Bi-temporal Versioning
Query what was known at any point in time. Full audit trail for compliance and debugging.
Code-Aware Semantics
Understands code structure, not just text. Claims reference stable symbol identifiers that survive refactoring.
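The three-tier dependency model (HARD, SOFT, DERIVED) can be sketched as a small data model. This is an illustrative sketch only; the names `DependencyType`, `Claim`, and `affected_claims` are assumptions for the example, not the actual TruthKeeper API:

```python
from dataclasses import dataclass, field
from enum import Enum

class DependencyType(Enum):
    HARD = "hard"        # claim is invalidated when the target changes
    SOFT = "soft"        # claim should be re-verified when the target changes
    DERIVED = "derived"  # claim was inferred from other claims

@dataclass
class Claim:
    content: str
    # map of source path -> dependency tier
    dependencies: dict[str, DependencyType] = field(default_factory=dict)

def affected_claims(claims, changed_target):
    """Split claims touched by a source change into invalidate/re-verify sets.

    DERIVED dependencies cascade through the claim graph itself and are
    not shown in this simplified sketch.
    """
    invalidate, reverify = [], []
    for claim in claims:
        tier = claim.dependencies.get(changed_target)
        if tier is DependencyType.HARD:
            invalidate.append(claim)
        elif tier is DependencyType.SOFT:
            reverify.append(claim)
    return invalidate, reverify
```

In this model, a refactor of `src/services/user.py` would put every HARD-dependent claim on the invalidate list immediately, while SOFT-dependent claims are merely queued for re-verification.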

How It Works

Claim State Machine

Every claim flows through a well-defined state machine. Know exactly what state your knowledge is in.

SUPPORTED

Verified and currently valid

Use confidently

OUTDATED

No longer valid

Don't use

CONTESTED

Conflicting evidence exists

Check review queue

HYPOTHESIS

Awaiting initial verification

Wait
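The states above (plus the STALE state seen in the earlier code example) can be modeled as a guarded state machine. The exact set of legal transitions here is an assumption for illustration, not TruthKeeper's actual transition table:

```python
from enum import Enum, auto

class ClaimState(Enum):
    HYPOTHESIS = auto()  # awaiting initial verification
    SUPPORTED = auto()   # verified and currently valid
    STALE = auto()       # a dependency changed; re-verification pending
    CONTESTED = auto()   # conflicting evidence exists
    OUTDATED = auto()    # no longer valid

# One plausible transition table consistent with the states above.
TRANSITIONS = {
    ClaimState.HYPOTHESIS: {ClaimState.SUPPORTED, ClaimState.CONTESTED, ClaimState.OUTDATED},
    ClaimState.SUPPORTED:  {ClaimState.STALE, ClaimState.CONTESTED},
    ClaimState.STALE:      {ClaimState.SUPPORTED, ClaimState.CONTESTED, ClaimState.OUTDATED},
    ClaimState.CONTESTED:  {ClaimState.SUPPORTED, ClaimState.OUTDATED},
    ClaimState.OUTDATED:   set(),  # terminal
}

def transition(current: ClaimState, new: ClaimState) -> ClaimState:
    """Move a claim to a new state, rejecting illegal transitions."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {new.name}")
    return new
```

Encoding the transitions explicitly means an agent can never silently resurrect an OUTDATED claim; it would have to create a new claim and re-verify it from scratch.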

Confidence Scoring

Claims are scored using a weighted formula combining multiple verification signals:

confidence = 0.4 × minicheck_score
           + 0.2 × authority_score
           + 0.3 × agreement_score
           + 0.1 × recency_score
Confidence   Action
≥ 0.9        Auto-accept (if low blast radius)
≥ 0.8        Mark SUPPORTED
0.5 - 0.8    Mark CONTESTED
< 0.5        Mark OUTDATED
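Put together, the scoring formula and the action table reduce to a few lines of Python. The function names are illustrative, but the weights and thresholds follow the formula and table above:

```python
def confidence_score(minicheck, authority, agreement, recency):
    """Weighted combination of verification signals (weights from the formula above)."""
    return (0.4 * minicheck
            + 0.2 * authority
            + 0.3 * agreement
            + 0.1 * recency)

def action_for(confidence, low_blast_radius=False):
    """Map a confidence score to an action per the table above."""
    if confidence >= 0.9 and low_blast_radius:
        return "AUTO_ACCEPT"
    if confidence >= 0.8:
        return "SUPPORTED"
    if confidence >= 0.5:
        return "CONTESTED"
    return "OUTDATED"
```

For example, a claim scoring 0.9 on MiniCheck but only 0.7 on source agreement lands at 0.79 overall and is marked CONTESTED rather than SUPPORTED.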

Architecture

Built for Production

┌──────────────────────────────────────────────────────────────────┐
│                       AI Coding Agent                            │
│                  (Claude Code, Cursor, etc.)                     │
└──────────────────────────────────────────────────────────────────┘
                                │
                ┌───────────────┴───────────────┐
                ▼                               ▼
┌───────────────────────────┐   ┌───────────────────────────────────┐
│    Memory CI Pipeline     │   │         Verification              │
├───────────────────────────┤   ├───────────────────────────────────┤
│  • Source Watchers        │   │  • MiniCheck (NLI)                │
│  • Temporal Workflows     │   │  • AST Verifier                   │
│  • Human Review Queue     │   │  • Multi-Source Corroboration     │
└───────────────────────────┘   │  • Confidence Calculator          │
                │               └───────────────────────────────────┘
                │                               │
                └───────────────┬───────────────┘
                                ▼
┌──────────────────────────────────────────────────────────────────┐
│                        TruthKeeper Core                          │
├──────────────────────────────────────────────────────────────────┤
│  • Claim State Machine    • Dependency Graph    • Blast Radius   │
│  • Cascade Engine         • Bi-temporal Store   • JSONL Export   │
└──────────────────────────────────────────────────────────────────┘
                                │
                ┌───────────────┴───────────────┐
                ▼                               ▼
┌───────────────────────────┐   ┌───────────────────────────────────┐
│    PostgreSQL + pgvector  │   │         SQLite (Offline)          │
│    (Production)           │   │         (Local Development)       │
└───────────────────────────┘   └───────────────────────────────────┘

Use Cases

Who Is TruthKeeper For?

AI Tool Builders

Building AI coding assistants or agents? TruthKeeper provides the memory layer that knows when its knowledge is stale.

Development Teams

Teams using AI coding assistants who want shared, verified knowledge about their codebase that survives sessions.

Regulated Industries

Finance, healthcare, and government teams requiring audit trails, explainability, and compliance for AI decisions.

Closed Beta

Get Early Access

TruthKeeper is currently in closed beta. Request an API key to be among the first to build AI agents with verified, self-maintaining memory.

What you'll get:

  • Early access to TruthKeeper API
  • Direct line to the development team
  • Influence the product roadmap
  • Priority support during beta

Support Development (optional)

Help us build TruthKeeper faster. Supporters receive 1 month free when we launch.


We'll never share your email. Donations are processed securely via Stripe.

Ready to Give Your AI Agent Memory It Can Trust?

Join the closed beta and be among the first to build AI agents with verified, self-maintaining memory.