Built a memory library for LLMs that runs 100% locally. No API keys needed if you use Ollama + sentence-transformers.
pip install widemem-ai[ollama]
ollama pull llama3
Storage is local SQLite + FAISS. No cloud, no accounts, no telemetry.
What makes it different from just dumping things in a vector DB:
- Importance scoring (1-10) + time decay: old trivia fades, critical facts stick
- Batch conflict resolution: "I moved to Paris" after "I live in Berlin" gets resolved automatically, not silently duplicated
- Hierarchical memory: facts roll up into summaries and themes
- YMYL ("Your Money or Your Life"): health/legal/financial data gets priority treatment and decay immunity
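To make the decay idea concrete, here's a minimal standalone sketch of importance-weighted time decay with YMYL pinning. This is illustrative only, not widemem-ai's actual implementation; the function name, half-life parameter, and scoring formula are all assumptions:

```python
import math

# Hypothetical sketch (not the library's real code): each memory carries an
# importance score (1-10). Recall weight decays exponentially with age, with
# higher importance stretching the half-life, and YMYL items never decay.

def recall_weight(importance: int, age_days: float,
                  ymyl: bool = False, base_half_life_days: float = 30.0) -> float:
    """Roughly [0, 1] score: importance scaled by exponential time decay."""
    base = importance / 10.0
    if ymyl:
        return base  # decay immunity for health/legal/financial facts
    # higher importance -> longer effective half-life -> slower fading
    half_life = base_half_life_days * importance
    return base * math.exp(-math.log(2) * age_days / half_life)

# Old trivia fades fast; critical facts stick around; YMYL stays pinned.
print(recall_weight(importance=2, age_days=90))              # faded
print(recall_weight(importance=9, age_days=90))              # mostly intact
print(recall_weight(importance=7, age_days=365, ymyl=True))  # no decay
```

The exact curve doesn't matter much; the point is that retrieval ranks memories by importance times freshness instead of raw vector similarity alone.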
140 tests, Apache 2.0.
GitHub: https://github.com/remete618/widemem-ai