How to give Claude (or ChatGPT) memory of your work
*May 13, 2026 · 11 min read*
Most AI conversations start at zero. You re-paste the same project context, the same prior decisions, the same research, and lose most of the compounding value of working with a model over months. Built-in features like ChatGPT's "memory" and Claude's "projects" help a little — but they're shallow, opaque, and locked inside the chat app. They can't read the notes you already wrote in Obsidian, the highlights you saved from Readwise, or the spec doc you finished last quarter.
If you want a model that actually remembers everything you've worked on, you need a connected knowledge base it can read directly. That's what this guide walks through: the architecture, the trade-offs, and the exact setup that lets Claude, ChatGPT, Claude Desktop, Claude Code, Codex, and other modern AI clients pick up where they left off — every single conversation.
TL;DR
Real AI memory has three pieces: (1) structured notes you already write, (2) an authenticated bridge that lets the AI read and write those notes safely (Model Context Protocol with OAuth is the modern answer), and (3) an AI client that speaks that bridge. Built-in chat-memory features are not memory — they're scratch-pads inside the chat app. To get persistent, queryable context that survives across conversations and across AI clients, point your AI at a real knowledge base. MindWiki is purpose-built for that pattern.
What "AI memory" actually means in 2026
The phrase gets used loosely. Three things are routinely conflated:
- In-context memory. What you paste into the current message. Lasts until the context window fills.
- Provider-side chat memory. ChatGPT's "memory" feature, Claude's "Projects" workspace, Gemini's recall. The model summarizes a few facts ("you live in Brooklyn", "you prefer concise answers") and re-injects them on future turns. Useful for tone, not for thinking.
- External knowledge base. Your real notes, files, decisions, source material — stored somewhere the model can fetch from on demand. This is what gives an AI durable context.
Only the third is what most people actually want when they say "I wish my AI remembered my work." It's also the only kind that survives switching from ChatGPT to Claude to Codex without losing the thread.
Why provider-side memory falls short
The chat-app memory features are designed for personalization, not for knowledge work:
- Opaque. You can read the summary the model wrote, but you can't read the underlying data, version it, link it, or share it across tools.
- Lossy. A million-token vault gets boiled down to a few hundred summary tokens. The high-resolution detail you actually need is gone.
- Locked in. Memory in ChatGPT isn't readable by Claude. Switching providers wipes it.
- No querying. You can't ask "show me every decision we made about pricing in the last quarter" against memory. It's a soft state, not a database.
Provider-side memory is useful for keeping your AI from re-asking your name. It's not where your second brain should live.
The real pattern: connected knowledge base + AI client
The architecture that actually works:
```
┌──────────────────────────┐
│        Your notes        │   markdown / pages / properties / graph
│   (the knowledge base)   │
└────────────┬─────────────┘
             │
      MCP / REST / OAuth
             │
┌────────────▼─────────────┐
│        AI client         │   Claude / ChatGPT / Codex / Claude Code
│  (uses tools to query)   │
└──────────────────────────┘
```
Three properties make this work:
- Persistent storage. The notes live as files or pages outside the chat. They keep accumulating. Every conversation has more material to draw from.
- Structured retrieval. Wikilinks, properties, search, and a graph let the model fetch the right page instead of guessing. Embedded similarity finds adjacent thinking the model wouldn't otherwise know existed.
- Bounded permissions. The AI authenticates against the knowledge base with scoped credentials. You see what it's reading. You approve writes. Nothing happens silently.
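The retrieval loop those three properties enable can be sketched in a few lines. This is illustrative only: the vault contents, function names, and response shapes below are invented for the sketch, not MindWiki's actual API.

```python
# Illustrative sketch of the retrieval loop an MCP-connected AI client runs.
# The vault, function names, and shapes here are hypothetical.

def search_vault(query):
    # In a real setup this is an MCP tool call over HTTP; here we fake a
    # tiny in-memory vault to show the flow.
    vault = {
        "decisions/pricing.md": "2026-04-02: moved to usage-based pricing.",
        "specs/sync.md": "Sync spec, finished last quarter.",
    }
    return [(path, text) for path, text in vault.items() if query in text]

def answer_with_context(question, query):
    # 1. Structured retrieval: fetch only the relevant pages.
    hits = search_vault(query)
    # 2. Persistent storage: the pages live outside the chat, so every
    #    conversation can draw on them without re-pasting.
    context = "\n".join(f"[{path}] {text}" for path, text in hits)
    # 3. The model answers grounded in fetched pages, not guesses.
    return f"Question: {question}\nContext:\n{context}"

print(answer_with_context("What did we decide on pricing?", "pricing"))
```

The key point is that retrieval happens per-question, on demand, instead of front-loading the whole vault into the context window.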
Model Context Protocol (MCP) in one paragraph
MCP is the open protocol — created by Anthropic, adopted by OpenAI, Microsoft, JetBrains, Cursor, and others — that lets an AI client connect to an external tool over a single HTTP endpoint. The AI client discovers what tools the server exposes (search, read, write, ask, similarity, graph), authenticates with OAuth, and calls those tools on your behalf during a conversation. MCP is the right answer for AI memory because it's standard, scoped, and reversible. You can revoke a connection from a settings page and the AI loses access immediately.
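Under the hood, MCP is JSON-RPC 2.0. A minimal sketch of the two requests a client sends after connecting: the method names follow the MCP specification, but the tool name "search" and its arguments are invented for illustration.

```python
import json

# MCP messages are JSON-RPC 2.0. After authentication, a client first
# discovers the server's tools, then calls them on the user's behalf.
# Method names per the MCP spec; the tool name/arguments are illustrative.

discover = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",   # "what can this server do?"
}

call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",   # invoke one tool during the conversation
    "params": {
        "name": "search",
        "arguments": {"query": "pricing decisions"},
    },
}

print(json.dumps(discover))
print(json.dumps(call))
```

Because discovery is dynamic, the AI client needs no hard-coded knowledge of the server: it learns the available tools at connect time.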
How to actually wire it up
We'll walk through the MindWiki setup as the concrete example, since it's a production-quality MCP server built specifically for personal knowledge work.
Step 1 — get a knowledge base your AI can read
MindWiki stores notes as plain markdown with YAML frontmatter and [[wikilinks]]. The format is open and portable, and the knowledge graph, search, properties, and capture surfaces are all queryable via MCP tools.
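A vault page in this format looks something like the following (the specific property names are illustrative):

```markdown
---
title: Pricing decision
created: 2026-04-02
tags: [decision, pricing]
---

We moved to usage-based pricing. Background research lives in
[[Pricing research]], and follow-ups are tracked in [[Q2 roadmap]].
```

The frontmatter becomes queryable properties; the `[[wikilinks]]` become edges in the knowledge graph.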
You can:
- Start with an empty vault and capture as you go.
- Drop an existing Obsidian-style markdown folder into MindWiki — the format is the same.
- Pipe Notion exports, Readwise highlights, Apple Notes archives, or any markdown source into the capture endpoint.
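Piping an export into the capture endpoint is a short script. Hedged sketch: the endpoint path and payload fields below are assumptions for illustration, not MindWiki's documented API.

```python
import json
import urllib.request

# Hedged sketch of pushing one exported item into a capture endpoint.
# The URL path and payload fields are assumed, not a documented API.

def build_capture(title, body, source):
    """Package one highlight or note as a capture payload."""
    return {"title": title, "content": body, "source": source}

payload = build_capture(
    title="Readwise highlight",
    body="Notes accumulate value only if the AI can read them.",
    source="readwise",
)

req = urllib.request.Request(
    "https://api.mindwiki.io/capture",   # assumed endpoint
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <api-key>",  # placeholder credential
    },
)
# urllib.request.urlopen(req)  # uncomment with a real key to actually send
print(payload["source"])
```

Loop the same function over a Notion or Apple Notes export and the whole archive lands in the vault.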
Step 2 — connect your AI client
The MCP endpoint is https://api.mindwiki.io/mcp. Modern AI clients pick this up over OAuth so you never paste a token:
- Claude.ai — Settings → Connectors → Add custom connector.
- Claude Desktop — Settings → Integrations → Add a remote MCP server.
- Claude Code — run `claude mcp add mindwiki https://api.mindwiki.io/mcp`.
- ChatGPT (web) — Settings → Connectors → Add custom connector.
- OpenAI Codex — Add to the Codex MCP servers list.
- Anything else — point any MCP-aware client at the URL and approve the OAuth flow.
Full per-client steps are in the setup guide. For older clients that don't support remote OAuth yet, you can mint a personal API key and use the token-URL form instead.
Step 3 — verify the AI can see your vault
The fastest sanity check: open the AI client after the OAuth approval and ask, *"What folders are in my MindWiki vault?"* If you get back real folder names, the connection is live. From there:
- *"Summarize the decisions I've written about pricing in the last month."*
- *"Find pages that mention [[X]] and tell me what I was working on."*
- *"Capture this thread as a new page under capture/."*
Every one of those works because the AI is calling typed tools (`mindwiki_search`, `mindwiki_read_page`, `mindwiki_capture`, etc.) against your real vault — not guessing from context.
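"Typed" means each call is validated against the tool's declared input schema before it touches the vault. A toy sketch: the tool names come from the article, but the schemas and validation logic are invented for illustration.

```python
# Toy illustration of "typed tools": every call is checked against the
# tool's declared input schema before anything touches the vault.
# Tool names from the article; the schemas themselves are invented.

TOOL_SCHEMAS = {
    "mindwiki_search": {"query": str},
    "mindwiki_read_page": {"path": str},
    "mindwiki_capture": {"title": str, "content": str},
}

def call_tool(name, args):
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        raise ValueError(f"unknown tool: {name}")
    for field, typ in schema.items():
        if not isinstance(args.get(field), typ):
            raise TypeError(f"{name}: {field} must be {typ.__name__}")
    return f"{name} ok"       # a real server would execute the tool here

print(call_tool("mindwiki_search", {"query": "pricing"}))
```

A malformed call fails loudly at the boundary instead of silently corrupting a page, which is exactly what you want from an AI with write access.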
What this changes about how you work
People who run this setup for a few months tend to converge on the same patterns:
- Conversations start higher up the stack. You don't waste the first 200 tokens re-introducing your project. The AI already has the relevant pages.
- Captures get written down faster. Anything worth keeping gets dropped into the vault during the conversation (manually or by the AI via `mindwiki_capture`).
- Cross-tool continuity. Claude can pick up a thread Codex started, because both of them are reading the same vault.
- Audit-by-default. Every tool call shows up in the agent activity log. You see what the AI actually used, and you can revoke any client without affecting the others.
Common mistakes to avoid
- Dumping notes into a chat upload UI. The chat is not a knowledge base. Uploads are transient, can't be queried, and don't accumulate.
- Treating provider memory as memory. ChatGPT's "memory" is a personalization layer. Don't trust it for important context.
- Letting an AI write to your vault without proposals. A well-designed MCP server (MindWiki's included) supports a proposal mode for non-trivial writes so you stay the editor.
- Storing in a proprietary format. If your notes only exist as Notion blocks or Roam blocks, the AI surface is locked to one tool. Plain markdown keeps every option open.
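Plain markdown's portability is concrete: a few lines of standard-library code can walk a vault and recover its structure. A simplified sketch (real frontmatter needs a proper YAML parser; this handles only flat `key: value` pairs):

```python
import re

# Simplified sketch of why plain markdown keeps options open: frontmatter
# keys and wikilinks are recoverable with stdlib code alone. Real YAML
# frontmatter needs a proper parser; this handles flat "key: value" lines.

PAGE = """---
title: Pricing decision
tags: decision
---
We moved to usage-based pricing. See [[Pricing research]].
"""

def parse_page(text):
    meta = {}
    body = text
    m = re.match(r"---\n(.*?)\n---\n(.*)", text, re.S)
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        body = m.group(2)
    links = re.findall(r"\[\[([^\]]+)\]\]", body)
    return meta, links

meta, links = parse_page(PAGE)
print(meta["title"], links)
```

Any future tool (or future AI) can read the same files; nothing is trapped behind one vendor's block format.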
How MindWiki fits
MindWiki is the AI-connected knowledge base purpose-built for this exact workflow:
- Markdown vault on macOS and the web with full sync.
- 20 MCP tools including search, ask, similar, graph, capture, write, and a proposal layer.
- OAuth-first for Claude, ChatGPT, Codex, Claude Desktop, Claude Code, and any other modern MCP client. API keys for legacy clients and scripts.
- Scheduled Pro automations (Auto-Linker, Weekly Classifier, Pattern Detection, Monthly Summary) keep the vault organized between AI conversations.
- Free tier covers the editor, vault, sync, search, and graph. Pro adds MCP, API keys, and the automation layer.
If you've been waiting for the "AI memory" feature that actually works the way the marketing says, this is the shape it takes.