v2 — OpenClaw Plugin

Long conversations.
Nothing lost.

Memory plugin for OpenClaw that runs 100% locally. Smarter search, memory quality checks, and compaction rescue — so your agent remembers everything, no matter how long the conversation. No API keys. No subscriptions. Just install and go.

$49 — one-time purchase, no subscription
openclaw-memory
# Install — one command, done
$ ./install.sh --key=oc-starter-xxxxxxxxxxxx
[OK] License verified
[OK] Registered as OpenClaw memory plugin
 
# Restart OpenClaw — memory is now active
$ openclaw gateway restart
[OK] Memory Stack v2 registered (engines: fts5+qmd+rescue)
 
# Just talk — memory works in the background
You: What did we decide about the API design?
[memory_search] Found 3 results (engines: fts5, qmd, rescue)
Agent: We decided to use REST with /api/v2/...
// why memory stack

What OpenClaw's built-in memory can't do.

Six features that make long conversations actually work.

🔗

Knowledge Graph

Native memory does flat text search. Memory Stack builds a local knowledge graph from your conversations — tracking projects, people, APIs, and how they connect. Ask "what depends on service X?" and get real answers.
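The idea behind a "what depends on X?" query can be sketched as a tiny entity/relation store. This is an illustrative sketch, not the plugin's internals; the `KnowledgeGraph` class and its methods are hypothetical names.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal entity/relation store (illustrative sketch, not the plugin's code)."""
    def __init__(self):
        # relation name -> set of (subject, object) pairs
        self.edges = defaultdict(set)

    def add(self, subject, relation, obj):
        self.edges[relation].add((subject, obj))

    def query(self, relation, obj):
        """All subjects linked to `obj` by `relation` -- e.g. 'what depends on X?'."""
        return sorted(s for s, o in self.edges[relation] if o == obj)

# Facts extracted from conversation (hypothetical example data):
kg = KnowledgeGraph()
kg.add("billing-api", "depends_on", "auth-service")
kg.add("frontend", "depends_on", "billing-api")
kg.add("frontend", "depends_on", "auth-service")

# "What depends on auth-service?"
print(kg.query("depends_on", "auth-service"))  # ['billing-api', 'frontend']
```

Flat text search can only match the words "auth-service"; a graph like this can follow the edges and answer the question structurally.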

🧠

Self-Evolving Memory

Memories pile up and get noisy. Memory Stack finds clusters of similar entries and suggests consolidation — turning 5 scattered notes about the same topic into one clear summary. Your memory gets sharper over time, not messier.
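"Finding clusters of similar entries" can be approximated with a simple word-overlap (Jaccard) similarity and greedy grouping. A minimal sketch, assuming a similarity threshold; the real plugin's clustering method is not described here.

```python
def jaccard(a, b):
    """Word-overlap similarity between two notes, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def suggest_clusters(notes, threshold=0.4):
    """Greedy single-pass clustering: group notes whose overlap beats the threshold.
    Returns only multi-note clusters -- the ones worth consolidating."""
    clusters = []
    for note in notes:
        for cluster in clusters:
            if jaccard(note, cluster[0]) >= threshold:
                cluster.append(note)
                break
        else:
            clusters.append([note])
    return [c for c in clusters if len(c) > 1]

notes = [
    "deploy uses blue green strategy",
    "the deploy strategy is blue green",
    "API rate limit is 100 rps",
]
print(suggest_clusters(notes))
# [['deploy uses blue green strategy', 'the deploy strategy is blue green']]
```

The two deploy notes cluster together and would be suggested for consolidation into one summary; the unrelated rate-limit note is left alone.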

🛡

Compaction Rescue

Long conversations get compressed — and key decisions get lost. Memory Stack saves important facts (decisions, deadlines, requirements) before compression happens, and brings them back automatically. No matter how long the conversation.
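Rule-based extraction of this kind can be sketched with keyword patterns that flag decision-, deadline-, and requirement-bearing sentences before compaction discards them. The patterns below are hypothetical examples; the plugin's actual rules are not published.

```python
import re

# Hypothetical trigger patterns -- not the plugin's actual rule set.
RESCUE_PATTERNS = [
    r"\bwe (decided|agreed|chose)\b",   # decisions
    r"\bdeadline\b|\bdue\b",            # deadlines
    r"\bmust\b|\brequired\b",           # requirements
]

def extract_rescue_facts(transcript):
    """Keep sentences matching any rescue pattern, so they survive compaction.
    No LLM call involved -- pure rules, zero token cost."""
    facts = []
    for sentence in re.split(r"(?<=[.!?])\s+", transcript):
        if any(re.search(p, sentence, re.IGNORECASE) for p in RESCUE_PATTERNS):
            facts.append(sentence.strip())
    return facts

transcript = ("We decided to use REST for the public API. "
              "The weather was nice. "
              "The deadline is Friday.")
print(extract_rescue_facts(transcript))
# ['We decided to use REST for the public API.', 'The deadline is Friday.']
```

Because this is rule-based rather than an LLM flush, it runs on every turn at zero token cost, which is the trade-off the comparison table below highlights.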

🔍

Smarter Search

MMR diversity ranking prevents redundant results. Temporal decay prioritizes recent memories. Session indexing searches across past conversations. All running locally — no API keys, no extra cost.
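MMR (Maximal Marginal Relevance) with temporal decay can be sketched as follows: each candidate's relevance is discounted by its age, then results are picked one at a time, trading relevance against similarity to what's already selected. A generic sketch of the standard MMR algorithm, not the plugin's implementation; all function and parameter names here are illustrative.

```python
def mmr_rank(relevance, sim, age_days, k=2, lam=0.7, half_life=7.0):
    """MMR reranking with exponential temporal decay (illustrative sketch).
    relevance: id -> query score; sim(i, j) -> similarity; age_days: id -> age."""
    # Temporal decay: a memory's score halves every `half_life` days.
    decayed = {i: r * 0.5 ** (age_days[i] / half_life) for i, r in relevance.items()}
    selected, pool = [], set(decayed)
    while pool and len(selected) < k:
        # MMR: reward relevance, penalize redundancy with already-selected results.
        best = max(pool, key=lambda i: lam * decayed[i]
                   - (1 - lam) * max((sim(i, j) for j in selected), default=0.0))
        selected.append(best)
        pool.remove(best)
    return selected

# Two near-duplicate memories (a, b) and one distinct memory (c):
relevance = {"a": 0.9, "b": 0.88, "c": 0.6}
age = {"a": 0, "b": 0, "c": 0}
sim = lambda i, j: 0.95 if {i, j} == {"a", "b"} else 0.1
print(mmr_rank(relevance, sim, age))  # ['a', 'c']
```

Plain relevance ranking would return the two near-duplicates `a` and `b`; MMR keeps one of them and surfaces the diverse result `c` instead.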

🧹

Memory Health

Built-in quality checks find duplicates, stale entries, and noise. Scores your memory 0-100 and tells you exactly what to clean up. Never deletes anything automatically — you stay in control.
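A 0-100 health score of this kind can be sketched as a penalty over the fraction of problematic entries. This is a toy formula assuming only two checks (exact duplicates and age-based staleness); the plugin's actual scoring is not documented here.

```python
def memory_health(entries, stale_after_days=90):
    """Toy 0-100 health score penalizing duplicates and stale entries.
    entries: list of (text, age_in_days). Hypothetical formula, not the plugin's."""
    if not entries:
        return 100, []
    texts = [text for text, _ in entries]
    dupes = len(texts) - len(set(texts))                     # exact duplicates
    stale = sum(1 for _, age in entries if age > stale_after_days)
    score = max(0, round(100 - 100 * (dupes + stale) / len(entries)))
    issues = []
    if dupes:
        issues.append(f"{dupes} duplicate entr{'y' if dupes == 1 else 'ies'}")
    if stale:
        issues.append(f"{stale} stale entr{'y' if stale == 1 else 'ies'}")
    return score, issues                                     # report only; never delete

entries = [
    ("use REST for the public API", 5),
    ("use REST for the public API", 5),   # exact duplicate
    ("old migration notes", 200),         # stale
    ("rate limit is 100 rps", 10),
]
print(memory_health(entries))  # (50, ['1 duplicate entry', '1 stale entry'])
```

Note the function only reports issues; matching the "never deletes automatically" promise, cleanup stays a user decision.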

Zero Config

Install, restart OpenClaw, done. No embedding providers to configure, no API keys to manage, no project initialization needed. Memory improves the moment you install it.

// how it works

Install once. Memory improves forever.

Memory Stack plugs into OpenClaw as a native memory provider. No manual setup after install.

You talk to OpenClaw → auto-recall kicks in
Relevant memories found → injected before response
Agent responds → key facts saved to rescue
Conversation gets long → compaction happens
You ask about old decisions → rescue store has them
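The loop above can be sketched as one conversation turn: recall, inject, respond, rescue. Everything here is a hypothetical stub (`search`, `llm`, `extract_key_facts`, and the stub data are illustrative names, not OpenClaw APIs); it shows the shape of the flow, not the plugin's code.

```python
def extract_key_facts(text):
    # Hypothetical rescue rule: keep sentences that announce decisions.
    return [s for s in text.split(". ") if "decided" in s.lower()]

def handle_turn(user_msg, search, llm, rescue_store):
    """One turn of the loop above: recall -> inject -> respond -> rescue (sketch)."""
    recalled = search(user_msg)                      # auto-recall kicks in
    prompt = ("Relevant memories:\n" + "\n".join(recalled)
              + "\n\nUser: " + user_msg)             # memories injected before response
    reply = llm(prompt)                              # agent responds
    rescue_store.extend(extract_key_facts(reply))    # key facts saved to rescue
    return reply

# Stub wiring to show the shape of the loop (hypothetical data):
rescue = []
reply = handle_turn(
    "What did we decide about the API design?",
    search=lambda q: ["We decided to use REST with versioned routes"],
    llm=lambda p: "We decided to use REST with versioned routes.",
    rescue_store=rescue,
)
print(rescue)  # the decision survives in the rescue store, compaction or not
```

After the turn, the decision sits in `rescue_store`, which is why it survives even when the main transcript later gets compacted.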
// vs native memory

What you get that built-in memory doesn't have.

OpenClaw's native memory is good. Memory Stack makes it better.

| Feature | Native Memory | Memory Stack |
| --- | --- | --- |
| Text search (BM25) | Yes | Yes |
| Vector search | Yes (needs API key or local setup) | Yes (QMD, zero config) |
| Hybrid search + MMR | Yes | Yes |
| Temporal decay | Yes | Yes |
| Knowledge graph | No | Yes — tracks entities and relationships |
| Self-evolving memory | No | Yes — finds duplicates, suggests consolidation |
| Compaction rescue | LLM-based flush only (costs tokens) | Rule-based extraction (zero token cost) |
| Memory health check | No | Yes — scores quality 0-100 |
| Multi-engine search | Single backend | 5 engines with fallback routing |
| Session search | Experimental | Yes |
| Embedding setup needed | Yes — requires provider config | No — works out of the box |
$49
One-time purchase. No subscription. No SaaS. Just files.