
How to organize research notes that survive past the next deadline

*May 12, 2026 · 11 min read*

Research notes have a specific failure mode: you collect a hundred sources for one project, and half of them are gone six months later when you start the next one. The links rot, the highlights aren't findable, and you can't remember which paper said the thing you needed. The system below treats every source as a first-class page so the work compounds across projects.

The short version: one page per source, frontmatter for metadata, wikilinks between sources and projects, weekly triage. That's the whole shape. The rest is what each piece looks like in practice.

The folder structure

Three top-level folders matter:

  • sources/ — one page per source (paper, article, talk, dataset).
  • projects/ — one page per active research project. Links out to relevant sources/ pages.
  • concepts/ — atomic concept pages. These end up linked from both sources and projects.

A capture/ inbox catches things you haven't triaged yet. Move them into sources/, concepts/, or projects/ during the weekly review.
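The layout above is just directories on disk. A minimal sketch of bootstrapping it with the standard library (the `vault/` root path is a hypothetical example; point it at your own notes directory):

```python
from pathlib import Path

# Hypothetical vault root; substitute your own notes directory.
VAULT = Path("vault")

# The three structural folders plus the untriaged inbox.
for name in ("sources", "projects", "concepts", "capture"):
    (VAULT / name).mkdir(parents=True, exist_ok=True)

# Anything still sitting in capture/ is waiting for weekly triage.
untriaged = sorted(p.name for p in (VAULT / "capture").glob("*.md"))
print(untriaged)
```

Because the structure is plain folders and markdown, any script or tool can walk it the same way.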

What goes on a source page

The page itself is just markdown. The YAML frontmatter does the structural work:

---
title: Predictive coding and the active inference framework
authors: [Friston, K.]
year: 2010
journal: Nature Reviews Neuroscience
citation: "Friston, K. (2010). The free-energy principle..."
doi: 10.1038/nrn2787
status: read
area: research/predictive-coding
tags: [neuroscience, free-energy]
---
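Because the frontmatter is plain text with a fixed delimiter, any script can read it. A minimal stdlib sketch, not a full YAML parser (it handles only flat `key: value` lines, which covers the fields above; lists like `tags` would come back as raw strings):

```python
import re

def read_frontmatter(page: str) -> dict:
    """Extract flat key: value pairs from a page's YAML frontmatter."""
    m = re.match(r"---\n(.*?)\n---", page, re.DOTALL)
    if not m:
        return {}
    meta = {}
    for line in m.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a colon
            meta[key.strip()] = value.strip()
    return meta

page = """---
title: Predictive coding and the active inference framework
year: 2010
status: read
area: research/predictive-coding
---

Body of the page.
"""

meta = read_frontmatter(page)
print(meta["status"], meta["year"])  # read 2010
```

A real tool would use a proper YAML library, but the point stands: the metadata is queryable without any database.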

The body of the page is your annotations: short summary, key claims, doubts, links to your project pages where this source matters.

Every time you write something about a source on a project page, link the source: [[Friston 2010 — predictive coding]]. The source page's backlinks panel automatically tells you every project that depends on it. Six months later when you revisit the source, you see what you built on it.

The reverse also matters: when you write a finding on the project page, link the underlying sources right there. You stop having to dig for citations later.
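Backlinks are nothing magic: an inverted index over wikilinks. A sketch of how a tool could derive them, with the vault represented as an in-memory dict of page name to body for brevity (real pages would be files; the page names are hypothetical examples):

```python
import re
from collections import defaultdict

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def backlinks(vault: dict) -> dict:
    """Map each wikilink target to the list of pages that link to it."""
    index = defaultdict(list)
    for page, body in vault.items():
        for target in WIKILINK.findall(body):
            index[target].append(page)
    return index

vault = {
    "projects/attention-review": "Key claim traced to [[Friston 2010 — predictive coding]].",
    "projects/thesis-ch2": "See [[Friston 2010 — predictive coding]] and [[Clark 2013]].",
    "sources/clark-2013": "Responds to [[Friston 2010 — predictive coding]].",
}

index = backlinks(vault)
print(index["Friston 2010 — predictive coding"])
```

Every project that depends on the Friston source shows up in one lookup, which is exactly what the backlinks panel surfaces.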

Weekly review

This is the load-bearing ritual:

  • Move capture/ pages into their right folder (sources, concepts, projects).
  • Scan recent source pages marked status: skim; promote the ones you actually read this week to status: read and fill out the summary.
  • Look at project page backlinks. Anything you should pull into the main writeup?
  • Optional: ask your AI to summarize. With MindWiki, "summarize sources tagged predictive-coding from this year" works because the AI reads the live vault.

This takes about thirty minutes if you stay current. It compounds across years.
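The status scan is mechanical enough to script. A sketch that flags pages still marked status: skim, assuming the frontmatter convention shown earlier (the `vault/sources` path and file names are hypothetical examples):

```python
import re
from pathlib import Path

def pages_to_promote(sources_dir: Path) -> list:
    """Return source pages whose frontmatter still says status: skim."""
    flagged = []
    for page in sorted(sources_dir.glob("*.md")):
        text = page.read_text(encoding="utf-8")
        if re.search(r"^status:\s*skim\s*$", text, re.MULTILINE):
            flagged.append(page.name)
    return flagged

# Demo setup with two example pages.
sources = Path("vault") / "sources"
sources.mkdir(parents=True, exist_ok=True)
(sources / "friston-2010.md").write_text("---\nstatus: read\n---\n")
(sources / "clark-2013.md").write_text("---\nstatus: skim\n---\n")
print(pages_to_promote(sources))  # ['clark-2013.md']
```

Running something like this at the start of the review turns "scan recent source pages" into a thirty-second step.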

When to bring AI in

The pattern that works:

  • Capture and triage stay human. The act of writing is part of how you understand the source.
  • Synthesis can be AI-assisted. Once you have ten source pages on a topic, ask your AI to draft a literature review. Then you edit it down.
  • Retrieval is AI-shaped. "What did I read about X?" is faster as a question to your AI than a search you have to phrase.

In MindWiki, this is what the MCP layer is for. Claude or ChatGPT connects via OAuth, gets read access to your vault, and uses mindwiki_search / mindwiki_ask / mindwiki_similar to pull the right sources. See the developers page for setup.

When to give up on a system

If you've been collecting sources for two months and you can't find anything, the system is wrong. Three usual culprits:

  • Too many tags. Tags don't scale past about fifteen. Use folders + frontmatter areas instead.
  • Wikilinks going unused. Without links, retrieval falls back to search, which falls back to remembering exact phrases.
  • No weekly review. The system rots in two weeks of skipped reviews.

The fix is almost never "switch tools" and almost always "do the boring thing for a month." That said, the tool matters too — markdown vaults make all of this much more flexible than block-based stores, because you can rewrite frontmatter and rerun queries without rebuilding the database.