How loci differs
Four approaches to persistent AI memory.
| | Karpathy-style | MemPalace | LLMChronicle | loci |
|---|---|---|---|---|
| Core idea | Markdown files with semantic search | Hierarchical memory palace metaphor | Automated conversation logging | Local-first searchable archive |
| Storage | Local markdown | Cloud (Firebase) | SQLite + cloud sync | Markdown + SQLite (apps) |
| Dependencies | Python + embeddings API | Firebase, React, cloud auth | Electron, sync infrastructure | Zero external services |
| Search | Vector similarity | Keyword + hierarchy nav | Full-text + filters | Full-text + fuzzy match |
| AI character | Manual prompt files | Palace-defined personas | No character layer | CLAUDE.md native integration |
| Growth | Manual curation | Structured placement | Automatic logging | Automatic + manual gardens |
| Multi-agent | — | Yes (rooms) | — | Yes (personas) |
| Browser extension | — | — | Yes | Yes — v1.2 |
| Desktop app | — | — | — | Yes — v0.1 (Tauri) |
| Team / friends mode | — | Yes (cloud) | Yes (sync) | In development ✦ |
| Setup time | ~30 min | ~10 min (account) | ~5 min | ~2 min |
On Karpathy-style memory
The "dump everything to markdown and embed it" approach pioneered by Andrej Karpathy is elegant and hackable. It works beautifully if you're comfortable writing Python, managing embedding APIs, and curating your knowledge base manually. The semantic search is genuinely powerful. But it requires active maintenance — you're building a second brain that needs constant gardening. For engineers who enjoy that process, it's ideal. For everyone else, the activation energy is too high.
On MemPalace
MemPalace's spatial metaphor is compelling — the idea of walking through rooms of memory resonates with how humans actually think. The cloud-first architecture makes collaboration possible and removes local setup friction. The tradeoff is clear: your memories live on someone else's servers. For teams building shared AI workflows, this might be acceptable. For individuals who care about data sovereignty, it's a dealbreaker. The Firebase dependency also means your archive is only as durable as your subscription.
On LLMChronicle
LLMChronicle solves the "I said something brilliant three weeks ago and can't find it" problem directly. The browser extension approach captures conversations without manual effort, and the Electron app provides a decent search interface. The sync layer adds convenience but also complexity — more moving parts mean more potential failure modes. It's a solid choice if you value automatic capture above all else and don't mind the cloud sync tradeoff.
Where loci stands
loci takes an opinionated position: your AI memory should be local, private, and zero-dependency. No accounts, no cloud, no embeddings API, no sync infrastructure. Just a SQLite database on your machine and a search interface that works. The Chrome extension is in beta — automatic capture from Claude.ai and ChatGPT, no setup required. Team and friends mode is in active development: shared rooms, shared crystals, context that travels between collaborators without a cloud dependency. What you get today is durability (it's just files), privacy (nothing leaves your machine), and simplicity (it works offline, forever). loci is for people who want their AI memory to be as reliable as a text file and as searchable as their email — and soon, as useful for a team as it is for one.
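To show what "just a SQLite database and a search interface" can look like in practice, here is a minimal sketch of local full-text search using SQLite's built-in FTS5 extension and nothing but the Python standard library. The table and column names are illustrative assumptions, not loci's actual schema.

```python
# Local-first full-text search with only the Python standard library and
# SQLite's FTS5 extension (bundled with most sqlite3 builds).
# The schema here is illustrative; it is not loci's actual schema.
import sqlite3

conn = sqlite3.connect("memory.db")  # one file on disk, nothing leaves the machine
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS notes
    USING fts5(title, body, created_at UNINDEXED)
""")
conn.execute(
    "INSERT INTO notes (title, body, created_at) VALUES (?, ?, ?)",
    ("refactor plan", "Decided to split the parser into lexer and AST passes.", "2024-05-01"),
)
conn.commit()

# Full-text query ranked by relevance (FTS5's default BM25), fully offline.
rows = conn.execute(
    "SELECT title, snippet(notes, 1, '[', ']', '...', 8) "
    "FROM notes WHERE notes MATCH ? ORDER BY rank",
    ("parser",),
).fetchall()
for title, snip in rows:
    print(title, "->", snip)
```

Durability follows from the same property: the archive is a single file you can back up, grep, or move like any other document.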