MemLib

How It Works

MemLib's architecture — BYOD storage, multi-provider AI, and the memory lifecycle

Overview

MemLib is a memory API that sits between your AI application and your own infrastructure. You bring your own database and AI provider keys — MemLib orchestrates the memory pipeline.

What MemLib Does

  1. Extracts facts from natural language using LLM inference
  2. Embeds each fact into a vector for semantic search
  3. Deduplicates against existing memories via cosine similarity
  4. Resolves conflicts when new information contradicts old (e.g., "moved from NYC" → "lives in Berlin")
  5. Retrieves memories by meaning with hybrid scoring
  6. Synthesizes context paragraphs tailored to the current conversation
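Step 3 above hinges on cosine similarity between embeddings. A minimal self-contained sketch of that check (the 0.9 threshold is illustrative, not MemLib's actual setting):

```typescript
// Sketch of step 3: deduplication via cosine similarity.
// The 0.9 threshold is an illustrative assumption.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function isNearDuplicate(
  candidate: number[],
  existing: number[][],
  threshold = 0.9
): boolean {
  // A new fact is skipped if any stored vector is close enough.
  return existing.some((e) => cosineSimilarity(candidate, e) >= threshold);
}

console.log(isNearDuplicate([1, 0, 0], [[1, 0, 0]])); // true
console.log(isNearDuplicate([1, 0, 0], [[0, 1, 0]])); // false
```

In production this comparison runs inside PostgreSQL via pgvector's distance operators rather than in application code; the sketch only shows the math.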

What You Provide

| You Provide | Why |
| --- | --- |
| PostgreSQL database (with pgvector) | Your data stays in your infrastructure |
| Embedding provider API key | For converting text → vectors |
| LLM provider API key | For fact extraction, conflict resolution, and synthesis |

This is the Bring Your Own Database (BYOD) model. MemLib never stores your data — it connects to your database at request time and runs the memory pipeline.
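As a sketch, the pieces you provide under BYOD amount to a small configuration bundle per project. The field names and environment variables below are illustrative assumptions, not MemLib's actual config shape:

```typescript
// Illustrative BYOD project configuration: MemLib holds config like this,
// never your data. Field names and env var names are assumptions.
const projectConfig = {
  // Your own PostgreSQL instance with the pgvector extension enabled.
  databaseUrl: "postgres://user:pass@db.example.com:5432/app?sslmode=require",
  // Your embedding provider key: text → vectors.
  embedding: { provider: "openai", apiKey: process.env.OPENAI_API_KEY },
  // Your LLM provider key: extraction, conflict resolution, synthesis.
  llm: { provider: "google", apiKey: process.env.GEMINI_API_KEY },
};

console.log(projectConfig.databaseUrl.startsWith("postgres://")); // true
```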


Request Lifecycle

Every API call follows the same pattern: authenticate the request, resolve the project's configuration, connect to your database, run the memory pipeline, and disconnect.

Key details:

  • Authentication — every request requires an API key (Authorization: Bearer sk_...)
  • Project isolation — each API key maps to a project with its own database URL and AI provider config
  • Stateless — the API connects to your database per-request and disconnects after
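Putting the authentication detail together, a raw request could be built as follows. The hostname and the /v1/recall path are assumptions for illustration; only the Bearer header format comes from the docs above:

```typescript
// Sketch of an authenticated call to the REST API.
// Hostname and path are illustrative assumptions; the
// Authorization: Bearer sk_... format is from the docs.
function buildRecallRequest(apiKey: string, query: string): Request {
  return new Request("https://api.example.com/v1/recall", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }),
  });
}

const req = buildRecallRequest("sk_test_123", "Where does the user live?");
console.log(req.headers.get("authorization")); // "Bearer sk_test_123"
```

Because the API is stateless, nothing persists between calls on MemLib's side: each request carries the key that maps to your project, and the database connection lives only for that request.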

Components

MemLib is composed of several layers:

| Layer | What It Does |
| --- | --- |
| SDK (memlib on npm) | TypeScript client with typed methods for store, recall, prepare, diff |
| REST API | HTTP endpoints under /v1 with API key auth and OpenAPI schema |
| MCP Server | Model Context Protocol server — lets Claude, Cursor, and other MCP clients use your memories directly |
| Memory Engine | The core pipeline: fact extraction, embedding, deduplication, conflict resolution, consolidation |
| Dashboard | Web UI for creating projects, managing API keys, browsing memories, and testing recall |

Data Model

Memories are stored in two tables in your PostgreSQL database:

memories table

Each row is an atomic fact with its vector embedding:

| Field | Description |
| --- | --- |
| id | UUID primary key |
| namespace | Project/tenant scope (e.g., "my-app") |
| entity | Entity within the namespace (e.g., "user-123") |
| content | The memory text |
| embedding | 1536-dimensional vector (pgvector) |
| importance | 0.0–1.0 importance score |
| category | Auto-assigned category (preference, personal, etc.) |
| tags | User-defined tags |
| archived | Whether the memory has been consolidated or superseded |
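For illustration, a row can be mirrored as a TypeScript type. The field names come from the table above; the TypeScript types are inferred from the descriptions, not taken from the SDK:

```typescript
// Illustrative mirror of a memories-table row. Field names are from the
// docs; the concrete TS types are assumptions based on the descriptions.
interface MemoryRow {
  id: string;          // UUID primary key
  namespace: string;   // project/tenant scope, e.g. "my-app"
  entity: string;      // entity within the namespace, e.g. "user-123"
  content: string;     // the memory text
  embedding: number[]; // 1536-dimensional pgvector value
  importance: number;  // 0.0–1.0 importance score
  category: string;    // auto-assigned, e.g. "preference", "personal"
  tags: string[];      // user-defined tags
  archived: boolean;   // consolidated or superseded
}

const example: MemoryRow = {
  id: "00000000-0000-0000-0000-000000000000",
  namespace: "my-app",
  entity: "user-123",
  content: "Lives in Berlin",
  embedding: new Array(1536).fill(0),
  importance: 0.8,
  category: "personal",
  tags: ["location"],
  archived: false,
};

console.log(example.embedding.length); // 1536
```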

memory_events table

Every mutation is logged as an audit event:

| Event | When |
| --- | --- |
| ADD | New memory inserted |
| MERGE | Two memories combined into a richer one |
| REPLACE | New fact superseded an old one |
| DELETE | Memory deleted or archived due to contradiction |
| SKIP_DUPLICATE | Near-duplicate detected and skipped |
| KEEP_EXISTING | Conflict resolved in favor of the existing memory |

This audit trail powers the diff feature — zero-LLM-cost changelog queries.
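The zero-LLM-cost idea is that a changelog is just a fold over these event rows, with no model call involved. A sketch (the event names come from the table above; the MemoryEvent shape is an assumption):

```typescript
// Sketch: folding audit events into a changelog summary, the idea behind
// zero-LLM-cost diff queries. Event names are from the docs; the
// MemoryEvent row shape is an illustrative assumption.
type EventType =
  | "ADD"
  | "MERGE"
  | "REPLACE"
  | "DELETE"
  | "SKIP_DUPLICATE"
  | "KEEP_EXISTING";

interface MemoryEvent {
  type: EventType;
  memoryId: string;
  at: Date;
}

function summarize(events: MemoryEvent[]): Record<EventType, number> {
  const counts: Record<EventType, number> = {
    ADD: 0,
    MERGE: 0,
    REPLACE: 0,
    DELETE: 0,
    SKIP_DUPLICATE: 0,
    KEEP_EXISTING: 0,
  };
  for (const e of events) counts[e.type]++;
  return counts;
}

const changelog = summarize([
  { type: "ADD", memoryId: "a", at: new Date() },
  { type: "REPLACE", memoryId: "b", at: new Date() },
  { type: "ADD", memoryId: "c", at: new Date() },
]);
console.log(changelog.ADD); // 2
```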


Supported Providers

MemLib supports multiple AI providers out of the box:

LLM Providers (for fact extraction + conflict resolution)

| Provider | Models |
| --- | --- |
| Google Gemini | Gemini 2.5 Flash (default), Gemini 2.5 Pro, Gemini 2.0 Flash, Gemini 2.0 Flash Lite, Gemini 1.5 Flash, Gemini 1.5 Pro |
| OpenAI | GPT-4o Mini (default), GPT-4o, GPT-4 Turbo, GPT-4.1 Mini, GPT-4.1, GPT-4.1 Nano, o3 Mini, o4 Mini |
| Anthropic | Claude Sonnet 4 (default), Claude 3.5 Haiku, Claude 3.5 Sonnet, Claude 3 Opus |
| Mistral | Mistral Small (default), Mistral Medium, Mistral Large, Devstral Small, Codestral |
| Groq | Llama 3.3 70B (default), Llama 3.1 8B, Gemma 2 9B, Mixtral 8x7B, Llama 4 Maverick |
| xAI | Grok 3 Mini Fast (default), Grok 3 Mini, Grok 3, Grok 3 Fast, Grok 2 |
| Cohere | Command R (default), Command R+, Command R 7B, Command A |
| OpenRouter | Any model via OpenRouter (Gemini 2.5 Flash, Claude Sonnet 4, GPT-4o Mini, Llama 3.3 70B, DeepSeek R1, Mistral Large) |
Embedding Providers

| Provider | Models |
| --- | --- |
| Google Gemini | Gemini Embedding 001 (default), text-embedding-004 |
| OpenAI | text-embedding-3-small (default), text-embedding-3-large, Ada 002 |
| Cohere | Embed v4.0 (default), embed-english-v3.0, embed-multilingual-v3.0, embed-english-light-v3.0 |
| Mistral | mistral-embed |
| Voyage | Voyage 3 (default), Voyage 3 Lite, Voyage Code 3 |

See the Providers guide for detailed setup instructions.
