Claude forgets you. Your notes don't.
Stop paying a subscription to teach the same model the same things every week. Move your memory into a vault you own — local, searchable, yours. Ask Claude in plain English; it finds what you wrote and answers from it.
Three pieces, one workflow.
Each piece does one thing. They wire together with MCP — Claude's plugin protocol — so Claude Desktop can read your notes and run local models without any of it leaving your machine.
Obsidian
Your notes. Markdown files in a folder you own. Add color-coded folders and a clean theme via community plugins.
Claude Desktop
The brain interface. Talks to your vault through MCP. Asks the smart questions; you read the smart answers.
Ollama
Local models. Embeds your notes into a vector database so Claude can find them by meaning, not keywords.
The vault
A dialed-in starter folder with `🏠 Welcome`, `Tools/`, `Projects/`, and the conventions Claude reads.
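The "find by meaning, not keywords" piece rests on embeddings: each note becomes a vector, and notes about the same thing land near each other. A toy sketch of the lookup — the 3-D vectors here are made up by hand; in the real pipeline `nomic-embed-text` produces much larger ones:

```python
import math

# Fake embeddings: in reality nomic-embed-text maps each note to a vector.
notes = {
    "auth migration plan":  [0.9, 0.1, 0.0],
    "grocery list":         [0.0, 0.2, 0.9],
    "login token refactor": [0.6, 0.5, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction (same meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query = [0.85, 0.2, 0.05]  # pretend embedding of "what did I decide about auth?"
best = max(notes, key=lambda title: cosine(query, notes[title]))
print(best)  # → auth migration plan
```

Note that "auth" never has to appear in the query for the right note to win; nearness in vector space does the matching.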
One-click installer, or do it yourself.
Both end at the same place. Pick the one that matches how you like to learn.
Pick your platform.
The installer ships as one file per OS. Click your platform — the next slide will show the right download.
Download. Double-click.
Grab the installer file from the GitHub repo. It expects to live next to the vault-template/ folder.
```
curl -L -O https://raw.githubusercontent.com/Coherence-Daddy/use-ollama-to-enhance-claude/main/install.command
chmod +x install.command
open install.command
```
```
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Coherence-Daddy/use-ollama-to-enhance-claude/main/install.bat" -OutFile install.bat
.\install.bat
```
install.bat is a tiny launcher; the work happens in install.ps1 (also in the repo).
```
curl -L -O https://raw.githubusercontent.com/Coherence-Daddy/use-ollama-to-enhance-claude/main/install.sh
chmod +x install.sh
./install.sh
```
A terminal window will open with a coral banner and step-by-step progress. Total time: 3–8 minutes (most of it pulling the local models).
What the installer does (so you trust it).
- **Checks dependencies.** Installs Homebrew (macOS), uses winget (Windows), or detects apt/dnf/pacman (Linux). Adds Claude Desktop, Obsidian, and Ollama if missing.
- **Pulls local models.** `llama3.1` for chat, `nomic-embed-text` for embeddings. ~5 GB combined; cached after this.
- **Asks where to put the vault.** Defaults to `~/Documents/Obsidian Vault/Brain/`. Confirms before overwriting.
- **Copies the vault template.** `🏠 Welcome.md`, `Tools/`, `Projects/`, `_Claude_Ollama_Human/`, plus pre-configured plugins and the coral-dark CSS snippet.
- **Wires Claude Desktop.** Writes `claude_desktop_config.json` with the MCP filesystem server pointed at your vault. Backs up any existing config first.
- **Opens Obsidian and Claude Desktop.** Both apps launch on the new vault. Done.
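The "backs up any existing config first" step is worth copying if you ever hand-edit the config yourself. A minimal sketch of the idea (illustrative, not the installer's actual code):

```python
import shutil
import time
from pathlib import Path

def write_with_backup(path: Path, new_text: str) -> None:
    """Copy any existing file aside with a timestamp, then write the new one."""
    if path.exists():
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(path, path.with_name(path.name + f".bak-{stamp}"))
    path.write_text(new_text)
```

If something breaks, the newest `.bak-*` file next to the config is your undo button.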
First open: try it in 30 seconds.
- **Obsidian opens on your new vault.** You'll land on `🏠 Welcome`.
- **Click into `Tools/🔍 Ask Brain`.** Follow the one-line instruction at the top.
- **Switch to Claude Desktop.** Ask: "What's in my Brain vault?"
- **Watch Claude list your folders and files.** If it does, MCP is wired correctly. You're done with Path A; skip to slide 14.
Welcome
This vault gives Obsidian a memory and lets Claude search it for you in plain English.
Write notes the way you always have. Ask Claude later — it actually finds the right ones and answers from them.
→ Open Tools/🔍 Ask Brain to try it.
Install three apps.
If you already have one or two of these, skip those.
Optional · Claude Code
CLI that reads the same MCP config — useful if you want terminal access too.
```
npm i -g @anthropic-ai/claude-code
```
Pick your model strategy — Cloud (recommended) or Local.
Same Ollama tooling, two ways to run the actual brains. Ollama Cloud runs much higher-quality models on Ollama's hardware (free tier — Gemma, GLM, Kimi, Qwen3-Coder). Local keeps everything on your laptop. Most people should start with Cloud and only flip to Local if total privacy matters more than answer quality.
| | ☁️ Ollama Cloud (recommended) | 🔒 Local |
|---|---|---|
| Models | `gemma4:31b-cloud`, `glm-5.1:cloud`, `kimi-k2.6:cloud`, `qwen3-coder-next` | `llama3.1` (chat), `nomic-embed-text` (embeddings) |
| Quality | Frontier-tier; these punch alongside GPT-4-class models | Solid for everyday lookup; weaker on hard reasoning |
| Cost | Free tier on a personal Ollama account | Free; uses your CPU/GPU |
| Privacy | Note summaries leave your machine for inference; Ollama logging is minimal | Nothing leaves your machine. Air-gap-grade. |
| Disk + RAM | ~0 (models live in the cloud) | ~5 GB download, 8 GB RAM at runtime |
```
ollama pull gemma4:31b-cloud
ollama pull glm-5.1:cloud
ollama pull kimi-k2.6:cloud
ollama pull qwen3-coder-next
```

See the companion tutorial → Save 90% on Claude Code with Ollama Cloud for the full router setup.
```
ollama pull llama3.1          # chat / reasoning (~4.7 GB)
ollama pull nomic-embed-text  # embeddings (~274 MB)
```

Pick this if you want total privacy, work offline, or have a beefy machine you'd rather use than a free cloud tier.
Verify either path with `ollama list`. The 🤖 Models button in your vault flips between them on the fly; no reinstall needed.
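Beyond `ollama list`, you can check whether the Ollama server itself is reachable; by default it listens on port 11434. A small stdlib-only sketch:

```python
import socket

def ollama_running(host: str = "localhost", port: int = 11434,
                   timeout: float = 1.0) -> bool:
    """True if something is accepting connections on Ollama's default port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama is up" if ollama_running() else "Start Ollama first")
```

Handy as a pre-flight check in any script that talks to the local API.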
Create the vault structure.
A clean starter shape that scales. Every folder has one job.
The repo ships a `vault-template/` directory with this exact shape pre-built. Copy it to `~/Documents/Obsidian Vault/Brain/` and open it as a vault.
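If you'd rather script the folder creation than copy the template, the starter shape is just four entries. A sketch (the real template also ships plugin configs and the CSS snippet, which this skips):

```python
from pathlib import Path

def make_vault(root: Path) -> None:
    """Create the starter folders plus the Welcome note."""
    for folder in ("Tools", "Projects", "_Claude_Ollama_Human"):
        (root / folder).mkdir(parents=True, exist_ok=True)
    welcome = root / "🏠 Welcome.md"
    if not welcome.exists():
        welcome.write_text("Open Tools/🔍 Ask Brain to try it.\n")

# make_vault(Path.home() / "Documents" / "Obsidian Vault" / "Brain")
```

Run the commented line to build the vault at the default location.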
Install community plugins.
Settings → Community plugins → Browse. Search and install each:
Don't edit `.obsidian/plugins/*/data.json` by hand while Obsidian is running; it'll overwrite your changes on close. Cmd-Q first.
Wire Claude Desktop to your vault.
Edit claude_desktop_config.json — Claude looks for it in a per-OS location.
macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Linux: `~/.config/Claude/claude_desktop_config.json`
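If you script against the config, the per-OS lookup is a one-function affair. A sketch using only the stdlib and the paths listed above:

```python
import os
import platform

def claude_config_path() -> str:
    """Return the per-OS location of claude_desktop_config.json."""
    system = platform.system()
    if system == "Darwin":  # macOS
        return os.path.expanduser(
            "~/Library/Application Support/Claude/claude_desktop_config.json")
    if system == "Windows":
        return os.path.join(os.environ["APPDATA"], "Claude",
                            "claude_desktop_config.json")
    return os.path.expanduser(  # Linux and everything else
        "~/.config/Claude/claude_desktop_config.json")
```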
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/absolute/path/to/your/Brain"
      ]
    }
  }
}
```
The `filesystem` server lets Claude read AND write inside the vault path you give it. Ollama is reached via its local HTTP API at `localhost:11434`; no MCP entry is needed unless you want Claude to call models directly.
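Talking to Ollama's local API directly is one POST to `/api/generate`. The sketch below builds the request but leaves the network call commented, so it runs even without a server:

```python
import json
import urllib.request

# Ollama listens on localhost:11434; /api/generate is its basic
# completion endpoint.
payload = {
    "model": "llama3.1",
    "prompt": "One-line summary of what a vector database does.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With Ollama running, uncomment to get the completion:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

Same endpoint, same payload shape whether the model is local or a `:cloud` one.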
Restart and verify.
- **Quit Claude Desktop fully.** Cmd+Q on macOS, Alt+F4 on Windows. MCP servers only load at cold start.
- **Reopen Claude Desktop.** Watch the bottom-right for an MCP indicator (a small plug icon when servers connect).
- **Ask: "What's in my Brain vault?"** Claude should list the top-level folders.
- **Ask: "Read 🏠 Welcome.md and summarize it."** If you get a real summary, the MCP filesystem server is wired.
If verification fails, the usual culprit is a syntax error in `claude_desktop_config.json`. Paste it into jsonlint.com to catch a stray comma. Also check that the path is absolute and exists.
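No website needed, either: Python's own JSON parser pinpoints the stray comma. A quick check with the config text inlined for illustration:

```python
import json

good = '{"mcpServers": {"filesystem": {"command": "npx"}}}'
bad  = '{"mcpServers": {"filesystem": {"command": "npx",}}}'  # stray trailing comma

json.loads(good)                   # parses cleanly
try:
    json.loads(bad)
except json.JSONDecodeError as e:  # parser reports the exact spot
    print(f"line {e.lineno}, column {e.colno}: {e.msg}")
```

To check the real file, swap the inline string for `open(path).read()`.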
- `Projects/` — your active projects (3 folders)
- `Tools/` — the action notes I can run (Ask, Import, Intake, Sync, Dashboard, Models)
- `_Claude_Ollama_Human/` — the conventions I read to behave consistently

There's also a `🏠 Welcome.md` at the root that points new users to Ask Brain.

The daily loop.
Don't change how you write notes. The brain works better the more you use Obsidian normally.
- **Capture.** New note in Obsidian. Frontmatter is optional. Don't pre-organize.
- **Sync.** Run `/sync` from Tools (or let it auto-run periodically). This re-embeds new notes into the local Qdrant vector store.
- **Ask.** Open Claude Desktop. Ask in plain English: "What did I figure out about the auth migration?" Claude searches by meaning, not keywords.
- **Refine.** Claude's answer cites the notes it pulled from. Click through, edit, re-sync.
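The sync step only needs to re-embed notes that actually changed. A hash-based sketch of that idea (illustrative only; the repo's real sync script and its Qdrant calls aren't shown here):

```python
import hashlib
from pathlib import Path

def changed_notes(vault: Path, seen: dict) -> list:
    """Return .md files whose content hash differs from the last sync.

    `seen` maps note path -> last-seen sha256 and is updated in place,
    standing in for whatever state the real sync persists.
    """
    changed = []
    for note in vault.rglob("*.md"):
        digest = hashlib.sha256(note.read_bytes()).hexdigest()
        if seen.get(str(note)) != digest:
            seen[str(note)] = digest
            changed.append(note)  # these are the notes to re-embed
    return changed
```

Unchanged notes keep their existing vectors, which is why re-running sync after a small edit is fast.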
Importing a project: source stays where it lives.
When you import a repo, the brain reads it and writes a summary. Code never moves into Obsidian.
What gets written
- `Projects/<name>/README.md` — purpose, owners, status
- `.brain.yml` — paths, services, models
- `architecture.mmd.md` — Mermaid map
- `decisions.md` — running log
What never gets written
- Source code
- Build artifacts, `node_modules`
- Secrets, `.env`
- Large binaries
The brain reads the project in place; `source_path` in `.brain.yml` tells it where to read from.
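Concretely, a `.brain.yml` might look like this. `source_path` is the key named above; the other entries are illustrative guesses at the "paths, services, models" shape, not the repo's exact schema:

```yaml
# Illustrative shape only: source_path is the documented key,
# the rest sketches the "paths, services, models" the summary tracks.
source_path: /Users/you/code/my-app
services:
  - api
  - worker
models:
  - llama3.1
```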
Lite vs. Full — pick how deep you want to go.
Steps 1–14 give you the Lite brain: Claude Desktop reads your vault as plain files. Good for most people. If you want a real semantic brain — searches by meaning, callable from Claude Code with one keystroke, with one-click sidebar buttons in Obsidian — say "yes" at Step 5 of the installer.
| | Lite | Full |
|---|---|---|
| Search | Filename + content (filesystem MCP) | Semantic — BGE-M3 + Qdrant vector DB |
| Ergonomics | Type into Claude Desktop | 8 sidebar buttons + 7 slash commands + auto-recall MCP |
| Extra deps | None | Docker + Python venv (installer handles it) |
| Time to add later | — | Re-run the installer; pick "yes" at Step 5 |
What Full adds — eight buttons, seven commands, one MCP.
The Full installer drops a ~/local-brain/ folder containing every piece of the system. None of it leaves your machine.
- **📋 Open Dashboard · 🔍 Ask Brain · 🔄 Sync Brain** — the everyday loop.
- **📥 Intake · 📂 Import** — start a new project, or teach the brain about an existing folder. Both ask which vault to write into.
- **🤖 Models** — toggle between Anthropic (Claude) and Ollama for the engine the buttons use.
- **📐 View Mermaid · 🎨 View Excalidraw** — open any project's `infrastructure.md` as a clean schematic, or convert it to a hand-editable Excalidraw canvas.
- **Slash commands** — `/ask`, `/sync`, `/intake`, `/import`, `/dashboard`, `/status`, `/diagram` drop into `~/.claude/commands/` automatically.
- **FastMCP server** — `local-brain` registers with Claude Code so any session can call `brain_search`, `brain_status`, `brain_list_projects` on its own. Claude reaches for the brain when it would help; you don't have to ask.
Plain-English flowcharts of every button live at walkthroughs/index.html in the repo.
Your brain is live.
Write whatever. Ask Claude later. Watch it find things you forgot you wrote. Everything stays on your machine.