Stop building
amnesiac agents.
Because "similar" isn't the same as "remembered." Reeve is a temporal knowledge graph that understands what you store — not just what's similar.
LLMs forget.
Every conversation
starts from zero.
Common workarounds — chat history, vector stores, RAG pipelines — all break down over time. They retrieve similar text. They can't handle contradictions. They have no concept of time or state evolution.
Ask any of them: "What changed about me since last year?"
Silence. Ask Reeve — it knows.
No cross-session memory
Every new conversation resets context entirely. Your agent is a stranger every time it wakes up.
No contradiction handling
"I moved to New York" and "I live in San Francisco" coexist in a vector store. No resolution. No truth.
No sense of time
Ask "what changed since last year?" — silence. Memory systems store facts, not state evolution.
Three steps.
Lifetime memory.
Store anything
Call store() with any text. Reeve's LLM parses it into structured entities, states, actions, and locations — writing a living temporal knowledge graph to Neo4j. Not chunks. Not embeddings. Structured understanding.
from reeve import store
store("I just joined Google as a software engineer")
store("I love playing football")
store("I moved from San Francisco to New York")
Graph evolves, history preserved
New facts don't overwrite old ones — they create SUPERSEDES chains. "I moved to New York" marks San Francisco as historical, not deleted. Entity resolution ensures "Google", "my company", and "work" resolve to one canonical node.
(city: New York) ──SUPERSEDES──▶ (city: San Francisco)
  active: true                     active: false
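As an illustrative sketch (not Reeve's actual API), a supersedes chain boils down to flipping the old fact to historical and linking the new fact back to it:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StateNode:
    """One fact in the graph, e.g. (city: San Francisco)."""
    kind: str
    value: str
    active: bool = True
    supersedes: Optional["StateNode"] = None


def supersede(old: StateNode, kind: str, value: str) -> StateNode:
    """Record a new fact without deleting the old one."""
    old.active = False  # marked historical, not erased
    return StateNode(kind, value, supersedes=old)


sf = StateNode("city", "San Francisco")
ny = supersede(sf, "city", "New York")

assert ny.active and not sf.active
assert ny.supersedes is sf  # history preserved, walkable
```

"Where do I live?" reads the active head of the chain; "Did I ever live in SF?" walks it backwards.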
"Google" = "my company" = "work" → one node
Query in natural language
Ask anything. The 3-lane retrieval engine (semantic + temporal + recency) surfaces the right memory — not just the most similar text, but the most relevant knowledge at this moment in time. Landmark memories bypass recency decay entirely.
from reeve import query
query("Where do I live?")
# → "New York."
query("Should I play football with my friend?")
# → "Yes — you love football."
query("Did I ever live in SF?")
# → "Yes, before moving to New York."
Built for permanence,
not prototypes.
3-Lane Retrieval
Most systems rank by vector similarity alone. Reeve combines three parallel lanes — semantic, temporal, and recency-weighted — into a single unified score. Important memories surface regardless of age.
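A minimal sketch of the blended score, using the weights from the formula shown here (the function name and example values are illustrative, not Reeve internals):

```python
def unified_score(similarity: float, importance: float, recency: float) -> float:
    """Blend the three lanes into a single ranking score."""
    return 0.65 * similarity + 0.30 * importance + 0.05 * recency


# A decade-old landmark with high importance outranks a fresh
# but only loosely related memory:
old_landmark = unified_score(similarity=0.70, importance=1.0, recency=0.0)
fresh_chatter = unified_score(similarity=0.80, importance=0.2, recency=1.0)
assert old_landmark > fresh_chatter
```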
score = 0.65×similarity + 0.30×importance + 0.05×recency
State Supersession
Facts evolve. Reeve tracks this with explicit SUPERSEDES chains — current answers are always accurate, history is always preserved.
Landmark Memory
Major life events — promotions, moves, milestones — are protected with an importance floor. They bypass recency decay and surface instantly, no matter how old they are.
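An importance floor is a one-line idea. A hedged sketch (the floor value is a made-up illustration, not Reeve's actual constant):

```python
LANDMARK_FLOOR = 0.9  # illustrative value, not Reeve's real constant


def effective_importance(importance: float, is_landmark: bool) -> float:
    """Landmarks never decay below the floor, so age can't bury them."""
    return max(importance, LANDMARK_FLOOR) if is_landmark else importance


assert effective_importance(0.3, is_landmark=True) == 0.9   # floored
assert effective_importance(0.3, is_landmark=False) == 0.3  # decays normally
```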
MCP-Native
Works with any MCP-compatible client — Claude Desktop, LM Studio, AnythingLLM, Cursor. Paste a few lines of JSON. Done.
{"mcpServers": {"reeve": {"url": "..."}}}
Temporal Knowledge Graph
Built on Neo4j with typed relationships — Episodes, Entities, Actions, States, Roles, Locations. Not an embedding dump. A living, evolving model of everything you've stored.
Entity Resolution
"Google", "my company", "work" — all resolve to one canonical node via 3-layer matching: exact, substring, and embedding similarity. One identity, many names.
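The three layers cascade from cheap to expensive. A hedged sketch of that cascade (function names, the alias table, and the `embed` stand-in are illustrative, not Reeve's actual code):

```python
from typing import Callable, Dict, List, Optional


def resolve(
    alias: str,
    canon: Dict[str, List[str]],
    embed: Optional[Callable[[str, str], float]] = None,
    threshold: float = 0.8,
) -> Optional[str]:
    """Resolve an alias to a canonical entity via three layers of matching."""
    a = alias.strip().lower()
    # Layer 1: exact match against any known name
    for entity, names in canon.items():
        if a in (n.lower() for n in names):
            return entity
    # Layer 2: substring match ("Google Inc" contains "google")
    for entity, names in canon.items():
        if any(n.lower() in a or a in n.lower() for n in names):
            return entity
    # Layer 3: embedding similarity (embed() stands in for a real model)
    if embed is not None:
        best = max(canon, key=lambda e: embed(a, e))
        if embed(a, best) >= threshold:
            return best
    return None  # unresolved: would create a new canonical node


canon = {"Google": ["Google", "my company", "work"]}
assert resolve("work", canon) == "Google"        # layer 1: exact
assert resolve("Google Inc", canon) == "Google"  # layer 2: substring
```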
Lifespan-Aware Scaling
Search depth scales dynamically with graph size — 2% of total episodes, clamped between 50 and 500. Efficient at day one. Deep at year ten. Ready for a lifetime.
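The clamp described above is simple enough to state exactly (a sketch of the stated rule, not Reeve's source):

```python
def search_depth(total_episodes: int) -> int:
    """2% of all episodes, clamped between 50 and 500."""
    return max(50, min(500, total_episodes * 2 // 100))


assert search_depth(100) == 50         # day one: floor keeps retrieval thorough
assert search_depth(10_000) == 200     # growing graph: depth scales at 2%
assert search_depth(1_000_000) == 500  # year ten: ceiling caps the cost
```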
Up in minutes.
{
"mcpServers": {
"reeve": {
"type": "sse",
"url": "https://api.reeve.co.in/mcp"
}
}
}
Restart your client after saving. Your AI will remember everything from this point forward.
Give your agent
a lifetime.
Memory that persists, evolves, and never forgets what matters. Built on a temporal knowledge graph engineered to last decades.
Build with Reeve →