Your meeting notes belong in a SQLite file, not someone else's cloud
Mem, Reflect, Notion AI and Otter all index your meetings in their cloud. There is a quieter option: a local SQLite file on your Mac, exposed to any AI you trust through MCP. Here is why that matters in 2025.
There is a question every knowledge worker eventually asks. Rarely loudly; usually around the third time a vendor changes its pricing or quietly updates a privacy notice.
Where do my meetings actually live?
If you use Mem, Reflect, Notion AI, Otter, Fireflies or any of the other "second brain" tools, the honest answer is: in their database, on their servers, behind their dashboard. You search through their chat. You pay every month for storage you cannot inspect. And if you stop paying, your second brain stays with them.
There is a quieter option that most teams have not thought about yet. It became practical in 2025 and we think it will be the default by 2027.
Your meetings live in a SQLite file in your home folder. Any AI you trust can read from it. Nothing leaves your Mac unless you ask it to.
This post is about why that shift matters, what changed technically to make it possible, and what kind of work it unlocks.
The cloud second brain has a custody problem
Pick any cloud meeting tool and read its data flow carefully. The pattern is consistent:
- Audio is uploaded to a cloud bucket.
- Speech-to-text runs on a vendor cluster.
- The transcript is stored in a vendor database.
- AI features run by sending the transcript to a model provider, often a different vendor.
- You search through the vendor's chat or web app.
That is at least three custody handoffs for a single one-hour conversation. Each one is governed by a Data Processing Agreement that almost nobody reads, with sub-processors listed in a spreadsheet that quietly grows over time.
For a casual standup that is fine. For a board call, a customer escalation, a hiring debrief or anything covered by GDPR, it is a stack of paperwork you do not see and cannot easily inspect.
The cloud-second-brain category does not differentiate on this. Mem, Reflect, Notion and Otter all index in their cloud, all ask you to trust them with your knowledge graph, and all answer the question "is this safe" with "we have SOC 2". That is not the same answer.
What changed in 2025
Three things converged in late 2024 and early 2025 that made local-first second brains practical for the first time.
On-device language models got good enough. Apple's MLX framework runs quantised transformer models on the GPU and unified memory of any Apple Silicon Mac. Microsoft's Phi-4 Mini and Meta's Llama 3.1 8B both ship at sizes that fit on a base-spec MacBook (around 2.4 GB and 4.5 GB respectively at 4-bit quantisation). The output quality is not GPT-5. But for the structured tasks meetings actually need (summaries, action items, decisions, follow-ups) it is now competitive with the small cloud models that meeting tools were already using.
MCP standardised how AI reads external data. Anthropic released the Model Context Protocol in late 2024. By 2025 it had been adopted by Claude Desktop, Cursor, Zed and a growing list of agentic frameworks. MCP is a JSON-RPC protocol that lets any compliant client talk to any compliant server. A locally-running MCP server can expose your data to Claude Desktop as a local subprocess over standard input and output, with no cloud, no auth flow and no egress.
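Under the hood an MCP interaction is ordinary JSON-RPC. A request from a client to a local meeting-index server might look like this (the method name follows the MCP spec; the tool name and arguments are illustrative, not any real server's API):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_meetings",
    "arguments": { "query": "pricing decision", "limit": 5 }
  }
}
```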
Storage stopped being the bottleneck. A modern Mac ships with at least 256 GB of disk. Indexing every meeting you record, with embeddings, full-text search, and audio, takes well under a gigabyte per hundred meetings. The economics of cloud-storing meeting transcripts never made technical sense; they made business sense for the vendor.
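The storage claim is easy to sanity-check. A rough back-of-envelope, assuming a one-hour meeting yields about 9,000 words of transcript, chunked into 200-word passages with one 384-dimension float32 embedding per chunk (every figure here is an assumption for illustration, not a measured number):

```python
WORDS_PER_MEETING = 9_000   # ~150 words/min over an hour (assumption)
BYTES_PER_WORD = 7          # average English word plus a space
CHUNK_WORDS = 200           # retrieval chunk size (assumption)
EMBED_DIM = 384             # per the 384-dim vectors mentioned above
FLOAT_BYTES = 4             # float32

transcript = WORDS_PER_MEETING * BYTES_PER_WORD       # ~63 KB of text
chunks = WORDS_PER_MEETING // CHUNK_WORDS             # 45 chunks
embeddings = chunks * EMBED_DIM * FLOAT_BYTES         # ~69 KB of vectors
fts_index = transcript * 2                            # rough 2x FTS5 overhead

per_meeting = transcript + embeddings + fts_index
per_hundred_mb = per_meeting * 100 / 1_000_000
print(f"{per_hundred_mb:.0f} MB per hundred meetings")  # ~26 MB
```

Even with generous overhead factors, a hundred meetings of index stay in the tens of megabytes; compressed audio dominates the footprint, and even that fits comfortably on a base-spec disk.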
Put those three together and the data path collapses to:
- Audio captured on your Mac.
- WhisperKit transcribes on the Neural Engine.
- Apple MLX summarises locally.
- SQLite indexes locally with FTS5 and 384-dim Apple NLEmbedding vectors.
- An MCP server exposes that index to whichever AI you already use.
No vendor is in step 1, 2, 3, 4 or 5. That is the local second brain.
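The index layer in step 4 needs nothing beyond stock SQLite. A minimal sketch in stdlib Python, with table and column names invented for illustration (this is not MeetMemo's actual schema): an FTS5 virtual table for keyword search, plus an ordinary table holding embedding blobs for vector search.

```python
import sqlite3

db = sqlite3.connect(":memory:")  # use a file path for a persistent index

# Full-text index over transcript chunks; FTS5 ships with modern SQLite.
db.execute("""
    CREATE VIRTUAL TABLE chunks USING fts5(
        meeting, started_at UNINDEXED, text
    )
""")
# Embeddings stored as raw float32 blobs, one row per chunk.
db.execute("CREATE TABLE vectors (chunk_rowid INTEGER PRIMARY KEY, embedding BLOB)")

db.execute(
    "INSERT INTO chunks VALUES (?, ?, ?)",
    ("Product sync", "2025-03-04T10:00:12", "We agreed to hold pricing flat until Q3."),
)

# BM25-ranked keyword retrieval; a real pipeline would merge this with
# cosine similarity over the stored vectors before handing chunks to the LLM.
row = db.execute(
    "SELECT meeting, started_at FROM chunks WHERE chunks MATCH ? ORDER BY rank",
    ("pricing",),
).fetchone()
print(row)  # ('Product sync', '2025-03-04T10:00:12')
```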
What this looks like in practice
Concretely, here is what happens on a MeetMemo install.
Your meetings live in ~/Documents/MeetMemo/ (audio plus Markdown). The index lives in ~/Library/Application Support/MeetMemo/index.sqlite. A toggle in Settings exposes that index over MCP. One click writes the right entry to ~/Library/Application Support/Claude/claude_desktop_config.json.
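That entry follows Claude Desktop's standard mcpServers config format; the server name and command path below are illustrative, not MeetMemo's actual binary location:

```json
{
  "mcpServers": {
    "meetmemo": {
      "command": "/Applications/MeetMemo.app/Contents/MacOS/meetmemo-mcp",
      "args": ["--index", "~/Library/Application Support/MeetMemo/index.sqlite"]
    }
  }
}
```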
After that, when you open Claude Desktop and ask "what did we agree about pricing across the last five product meetings", Claude calls the MeetMemo MCP server, which retrieves the relevant chunks, runs the local LLM to draft an answer, and returns it with citations like [1][2][3] resolved to specific meeting-and-timestamp pairs.
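The citation step is simple bookkeeping over whatever the retrieval step returns. A hypothetical helper, assuming chunks arrive as (meeting, timestamp, text) tuples (the shape is an assumption, not MeetMemo's actual wire format):

```python
def cite(chunks):
    """Render retrieved chunks as numbered [n] citations.

    `chunks` is a list of (meeting, timestamp, text) tuples, as a
    retrieval step might return them; this format is illustrative.
    """
    refs = [f"[{i}] {meeting} @ {ts}" for i, (meeting, ts, _) in enumerate(chunks, 1)]
    return "\n".join(refs)

out = cite([
    ("Product sync", "00:14:32", "hold pricing flat until Q3"),
    ("Board call", "00:41:05", "enterprise tier pricing discussion"),
])
print(out)
# [1] Product sync @ 00:14:32
# [2] Board call @ 00:41:05
```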
Three properties drop out of this shape:
- Your relationship with Anthropic does not change. You already have a Claude Desktop subscription. We are not asking you to use a different chat. We are giving Claude something to read.
- Your knowledge graph is yours. It is a SQLite file. You can open it with sqlite3 "$HOME/Library/Application Support/MeetMemo/index.sqlite" and run your own queries. You can back it up, sync it, archive it, or delete it.
- Cancelling the subscription does not cancel your memory. A one-time license keeps the app running. The index keeps existing on your disk. The MCP server keeps working.
What about offline, audit and team use
A few honest answers.
Offline. Yes, fully. Recording, transcription and summarisation all run on your Mac. The only outbound calls are the one-time model download from Hugging Face when you first install (around 2.4 GB) and the Sparkle update check.
Audit. Every MCP tool call from Claude or Cursor is written to a local audit_log table. Settings shows it. You can see exactly what each AI client read from your data and when.
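Because the audit trail is itself a SQLite table, it is queryable like everything else. A sketch against a hypothetical audit_log schema (MeetMemo's real column names may differ):

```python
import sqlite3

# Hypothetical audit_log schema for illustration.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE audit_log (
        ts TEXT, client TEXT, tool TEXT, query TEXT
    )
""")
db.executemany(
    "INSERT INTO audit_log VALUES (?, ?, ?, ?)",
    [
        ("2025-03-04T10:02:11", "Claude Desktop", "search_meetings", "pricing"),
        ("2025-03-04T10:02:14", "Claude Desktop", "get_transcript", "meeting:42"),
        ("2025-03-05T09:11:02", "Cursor", "search_meetings", "retro actions"),
    ],
)

# How often has each AI client read from the index?
for client, calls in db.execute(
    "SELECT client, COUNT(*) FROM audit_log GROUP BY client ORDER BY client"
):
    print(client, calls)
# Claude Desktop 2
# Cursor 1
```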
Team use. Today, MeetMemo v3 is one-Mac-only. iCloud-encrypted index sync is on the v3.x roadmap and will let one user run the same second brain across their Macs. Cross-Mac shared indexes for teams are a research item; we are not pretending the local-first claim and shared SaaS workspaces fit cleanly together.
This is the trade-off. A SaaS product can ship a multi-user dashboard tomorrow because the data is in their cloud. A local-first product has to choose carefully which sharing flows are worth the architectural cost. We chose individual second brain first, sync second, shared workspaces last.
Why this matters for the next few years
The cloud-second-brain category will not disappear. Plenty of users want a hosted dashboard, do not record sensitive meetings, and get value from the SaaS model. Mem, Reflect and Notion AI will be fine.
But the slice of the market that records confidential calls, that has compliance to satisfy, that pays attention to where their data lives, has a real alternative for the first time. A local SQLite file with an MCP server in front of it is not exotic. It is what you would build if you were starting from a blank sheet of paper in 2025 and you valued your users' ownership of their own knowledge.
If that sounds like the way you would prefer to work, download MeetMemo. It runs on macOS 14 and above, on any Apple Silicon Mac, and is free for up to three meetings per month. The index is yours. The AI you use with it is yours. The cloud is not in the picture.
