# memory-lancedb-pro

> A LanceDB-backed OpenClaw memory plugin that gives AI agents persistent long-term memory with hybrid retrieval, cross-encoder reranking, and multi-scope isolation.

memory-lancedb-pro is a production-grade long-term memory plugin for OpenClaw agents, backed by LanceDB vector storage. It automatically captures preferences, decisions, and project context from conversations, then intelligently recalls them in future sessions — eliminating the "AI amnesia" problem where agents forget everything between chats. The plugin features LLM-powered smart extraction, Weibull decay-based memory lifecycle management, and a full CLI for backup, migration, and export.

- **Hybrid Retrieval** — *combines vector (ANN cosine) and BM25 full-text search with configurable weights, fused scoring, and cross-encoder reranking for high-quality recall*
- **Smart Memory Extraction** — *LLM-powered 6-category classification (profile, preferences, entities, events, cases, patterns) with L0/L1/L2 layered storage and two-stage deduplication*
- **Weibull Decay Engine** — *composite lifecycle scoring using recency, access frequency, and intrinsic importance; three-tier promotion system (Peripheral ↔ Working ↔ Core)*
- **Auto-Capture & Auto-Recall** — *automatically extracts memories at conversation end and injects relevant context before each agent reply via `before_prompt_build` hooks*
- **Multi-Scope Isolation** — *per-agent, per-user, per-project, and global memory boundaries with configurable agent-level access control*
- **Any Embedding Provider** — *works with OpenAI, Jina, Gemini, Voyage, Ollama, or any OpenAI-compatible embedding API*
- **Cross-Encoder Reranking** — *built-in adapters for Jina, SiliconFlow, Voyage AI, and Pinecone; graceful fallback to cosine similarity on API failure*
- **Noise Filtering & Adaptive Retrieval** — *skips retrieval for greetings and simple commands; forces retrieval for memory keywords; CJK-aware thresholds*
- **Full Management CLI** — *`openclaw memory-pro list/search/stats/delete/export/import/reembed/upgrade/migrate` commands for production operations*
- **Install via OpenClaw CLI** — *run `openclaw plugins install memory-lancedb-pro@beta`, configure `openclaw.json`, then validate with `openclaw config validate && openclaw gateway restart`*
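The hybrid retrieval described above can be pictured as weighted score fusion: vector and BM25 scores are normalized onto a common scale, then combined with configurable weights before reranking. The sketch below is illustrative only; the function names, min-max normalization, and default weights are assumptions, not the plugin's actual API.

```python
def normalize(scores):
    """Min-max normalize a list of raw scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(vector_hits, bm25_hits, vector_weight=0.7, bm25_weight=0.3):
    """Fuse two ranked result sets into one score per document id.

    vector_hits / bm25_hits: dicts mapping doc id -> raw score.
    A document missing from one ranker contributes 0 from that side.
    Returns (id, fused_score) pairs, best first.
    """
    ids = set(vector_hits) | set(bm25_hits)
    v = dict(zip(vector_hits, normalize(list(vector_hits.values())))) if vector_hits else {}
    b = dict(zip(bm25_hits, normalize(list(bm25_hits.values())))) if bm25_hits else {}
    fused = {i: vector_weight * v.get(i, 0.0) + bm25_weight * b.get(i, 0.0)
             for i in ids}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

In a real pipeline the fused top-k would then be passed to the cross-encoder reranker; the normalization step matters because cosine similarities and BM25 scores live on very different scales.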

## Features
- Hybrid vector + BM25 full-text retrieval
- LLM-powered 6-category smart memory extraction
- Weibull decay-based memory lifecycle management
- Three-tier memory promotion (Peripheral/Working/Core)
- Auto-capture from conversations
- Auto-recall context injection before agent replies
- Cross-encoder reranking (Jina, SiliconFlow, Voyage, Pinecone)
- Multi-scope isolation (global, agent, user, project)
- Any OpenAI-compatible embedding provider support
- MMR diversity filtering
- Noise filtering and adaptive retrieval
- Full management CLI (list, search, export, import, migrate)
- Session memory support
- L0/L1/L2 layered memory storage
- Two-stage deduplication (vector similarity + LLM semantic)
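To make the Weibull decay engine concrete: a Weibull survival curve models how a memory's relevance fades with age, and the composite lifecycle score blends that recency term with access frequency and intrinsic importance before mapping onto the three tiers. This is a minimal sketch of the general technique; the shape/scale parameters, weights, and tier thresholds are illustrative assumptions, not the plugin's actual values.

```python
import math

def weibull_retention(age_days, scale=30.0, shape=1.5):
    """Weibull survival function: estimated relevance remaining after
    age_days. scale (lambda) and shape (k) are illustrative defaults."""
    return math.exp(-((age_days / scale) ** shape))

def lifecycle_score(age_days, access_count, importance,
                    w_recency=0.5, w_frequency=0.3, w_importance=0.2):
    """Composite score from recency decay, access frequency, and
    intrinsic importance in [0, 1]. Weights are hypothetical."""
    recency = weibull_retention(age_days)
    frequency = 1.0 - math.exp(-access_count / 5.0)  # saturates with use
    return w_recency * recency + w_frequency * frequency + w_importance * importance

def tier(score, core=0.65, working=0.35):
    """Map a composite score onto the three promotion tiers."""
    if score >= core:
        return "Core"
    if score >= working:
        return "Working"
    return "Peripheral"
```

The shape parameter is what distinguishes Weibull from plain exponential decay: with `shape > 1`, fresh memories decay slowly at first and then drop off faster, which suits items that stay relevant for a while before going stale.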

## Integrations
OpenClaw, LanceDB, OpenAI, Jina, Google Gemini, Voyage AI, Ollama, SiliconFlow, DashScope, Pinecone, Hugging Face TEI, npm

## Platforms
CLI, API, Developer SDK

## Pricing
Open Source

## Version
v1.1.0-beta.10

## Links
- Website: https://github.com/CortexReach/memory-lancedb-pro
- Documentation: https://github.com/CortexReach/memory-lancedb-pro/blob/master/README.md
- Repository: https://github.com/CortexReach/memory-lancedb-pro
- EveryDev.ai: https://www.everydev.ai/tools/memory-lancedb-pro
