Pro Automation

MindWiki Pro adds four scheduled automations that run inside the MindWiki backend. They use MindWiki-owned AI — not your Claude or ChatGPT subscription. Each one is proposal-first: the automation creates entries you review before any change lands in your vault. The single exception is Monthly Summary, which writes one additive page per month.

The four jobs

  • Auto-Linker (daily, 02:00 UTC). Scans recent pages for places where another page's title appears unlinked. Two-stage: a cheap classifier finds candidates, then a reasoning model writes the proposal. Output: update_links proposals.
  • Weekly Classifier (Sunday, 03:00 UTC). Reviews recent captures, decides whether to suggest a move, suggest tags, or leave alone. Output: move_note / add_tags / create_summary proposals.
  • Pattern Detection (Sunday, 04:00 UTC). Looks across the vault for cross-domain patterns, near-duplicate pages, and structural drift. Output: create_note / merge_notes / flag_contradiction / update_links / create_summary proposals.
  • Monthly Summary (1st of month, 05:00 UTC). Aggregates the previous month's activity (writes, captures, top areas, top tags, proposals) and writes a narrative page. Output: one additive page at outputs/monthly-summary/YYYY-MM.md.

All four jobs are gated to Pro accounts. Free accounts are skipped silently.

Whose AI runs the work

The MindWiki backend runs the four jobs on a tiered model router. The tier varies by job:

  • Classifier tier (cheap, batch) — used by Auto-Linker stage 1 and Weekly Classifier stage 1. Default model: @cf/qwen/qwen3-30b-a3b-fp8.
  • Reasoning tier (high-quality general purpose) — used for proposal text, pattern explanations, and the default Monthly Summary. Default model: @cf/openai/gpt-oss-120b.
  • Long-context tier (huge context, high cost) — used only when both AI_ENABLE_LONG_CONTEXT="true" is set and the user opted in via automation_settings.long_context_opt_in. Default model: @cf/moonshotai/kimi-k2.6.
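The routing rules above can be sketched as a small function. This is an illustrative sketch, not the backend's actual code: the model IDs come from this page, but the function name, job names, and field names are assumptions.

```python
# Model IDs as documented; everything else here is illustrative.
CLASSIFIER_MODEL = "@cf/qwen/qwen3-30b-a3b-fp8"
REASONING_MODEL = "@cf/openai/gpt-oss-120b"
LONG_CONTEXT_MODEL = "@cf/moonshotai/kimi-k2.6"

def pick_model(job, stage, env, settings):
    """Choose a model tier for a scheduled job (hypothetical sketch)."""
    # Stage 1 of Auto-Linker and Weekly Classifier uses the cheap classifier tier.
    if job in ("auto_linker", "weekly_classifier") and stage == 1:
        return CLASSIFIER_MODEL
    # Monthly Summary may escalate to long context only when BOTH the global
    # flag and the per-user opt-in are set.
    if (job == "monthly_summary"
            and env.get("AI_ENABLE_LONG_CONTEXT") == "true"
            and settings.get("long_context_opt_in")):
        return LONG_CONTEXT_MODEL
    # Everything else (proposal text, pattern explanations, default summary)
    # runs on the reasoning tier.
    return REASONING_MODEL
```

Note that the long-context tier requires both flags: either one alone is not enough to escalate.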

Your AI subscription (Claude, ChatGPT, etc.) is interactive only — used when you talk to your AI client through MCP. It is not what runs the scheduled jobs. A scheduler can't wake up your Claude account; that's why the automation runs on MindWiki-owned infrastructure that we operate and pay for.

Idempotency and safety

Each run is keyed by (user_id, job_type, period_key):

  • Auto-Linker: period is the day (YYYY-MM-DD)
  • Weekly Classifier and Pattern Detection: period is the ISO week (YYYY-Www)
  • Monthly Summary: period is the previous month (YYYY-MM)

If a run for a given period has already completed, the next cron firing for the same period is skipped. If a run failed, the next firing replaces it and retries. If a run has been stuck in the running state for more than 30 minutes, it's treated as crashed and replaced.
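The skip/retry/replace decision can be expressed as a single predicate over the prior run row for the same (user_id, job_type, period_key). The row shape and function name below are illustrative, not the actual schema.

```python
from datetime import datetime, timedelta

STUCK_AFTER = timedelta(minutes=30)

def should_start(existing, now: datetime) -> bool:
    """Decide whether a cron firing should start a run for this period key.

    `existing` is the prior run row for the same key, or None (shape assumed)."""
    if existing is None:
        return True                      # no run yet for this period
    if existing["status"] == "completed":
        return False                     # already done: skip this firing
    if existing["status"] == "failed":
        return True                      # retry, replacing the failed run
    if existing["status"] == "running":
        # Treat as crashed (and replace) after 30 minutes in "running".
        return now - existing["started_at"] > STUCK_AFTER
    return False
```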

Bounded inputs

Every job caps the work it sends to the model. Hard limits today:

  • Auto-Linker: 30 recently-changed pages, 2,000 chars each, max 8 proposals per day.
  • Weekly Classifier: 50 capture pages, 2,000 chars each, max 12 proposals per week.
  • Pattern Detection: 80 pages of metadata + 1,000-char snippet, max 6 proposals per week.
  • Monthly Summary: top 120 pages by edit count, 1,500 chars each, one page output per month.
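The caps above amount to a simple truncation pass before anything is sent to the model. A minimal sketch, with the documented limits collected in one table (job keys, field names, and the function itself are assumptions):

```python
def bound_inputs(pages, max_pages: int, max_chars: int):
    """Cap page count and per-page characters before prompting (illustrative)."""
    bounded = []
    for page in pages[:max_pages]:                 # hard cap on page count
        bounded.append({
            "title": page["title"],
            "text": page["text"][:max_chars],      # hard cap on chars per page
        })
    return bounded

# Documented limits today; None means the job has no proposal cap
# (Monthly Summary writes one page instead of proposals).
LIMITS = {
    "auto_linker":       {"max_pages": 30,  "max_chars": 2000, "max_proposals": 8},
    "weekly_classifier": {"max_pages": 50,  "max_chars": 2000, "max_proposals": 12},
    "pattern_detection": {"max_pages": 80,  "max_chars": 1000, "max_proposals": 6},
    "monthly_summary":   {"max_pages": 120, "max_chars": 1500, "max_proposals": None},
}
```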

These caps keep cost predictable and runtime well within platform limits. If a vault is too large to summarize fully in one pass, the summary is honest about what was sampled.

Failure handling

If the AI model returns invalid JSON or fails schema validation:

  • The service retries once with the schema error inlined into the prompt.
  • If validation still fails, the run is recorded as failed in automation_runs with the error message.
  • No malformed AI output is ever written to your vault. A failed run is visible in the Agents activity log; the next scheduled firing for that period will retry from scratch.
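The retry-once behavior can be sketched as a wrapper around the model call. Here `call_model` and `validate` are stand-ins for the real service's client and schema validator, which this page does not specify:

```python
import json

def run_with_retry(call_model, prompt: str, validate):
    """Call the model once; on invalid output, retry once with the error inlined.

    A second failure propagates, and the caller records the run as failed."""
    raw = call_model(prompt)
    try:
        out = json.loads(raw)
        validate(out)            # raises ValueError on schema violations
        return out
    except (json.JSONDecodeError, ValueError) as err:
        # Single retry: feed the validation error back into the prompt.
        retry_prompt = (f"{prompt}\n\nYour previous output was invalid: {err}. "
                        "Return only JSON matching the schema.")
        raw = call_model(retry_prompt)
        out = json.loads(raw)
        validate(out)
        return out
```

Nothing is written to the vault unless one of the two attempts both parses and validates.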

Reviewing automation activity

In the macOS app's Mission Control view, automation runs appear under the recent activity panel alongside human MCP activity. Each run shows:

  • The job type
  • The model that handled it
  • Status (running, completed, failed, skipped)
  • A short result summary
  • Counts of pages read, pages written, and proposals created
  • Started / completed timestamps

You can also pull the same data via the REST API:

GET https://api.mindwiki.io/vault/automation-runs?limit=50&job=auto_linker
Authorization: Bearer mw_...

Manual trigger

If you want to test a job without waiting for cron:

POST https://api.mindwiki.io/vault/automation/run
Authorization: Bearer mw_...
Content-Type: application/json

{ "job": "auto_linker" }

If a run for the current period already exists, the trigger is a no-op. To re-run the same period (and get a fresh set of proposals), add "force": true to the request body. To bypass your per-user toggle for that one trigger, add "ignore_settings": true.

Per-user toggles

Each Pro user has an entry in the automation_settings table with one boolean per job. By default all four are enabled. A future settings page in the web app will let you toggle them; for now, the toggles can be set programmatically.

long_context_opt_in is a fifth flag on the same row. When both that flag is 1 AND the global AI_ENABLE_LONG_CONTEXT is "true", the Monthly Summary job may use the long-context model (Kimi K2.6) instead of the default reasoning model. Default is off because the long-context model is significantly more expensive.
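Put together, a new Pro user's settings row looks like five booleans with these defaults. The column names below are assumed from the description on this page, not taken from the actual schema:

```python
# Illustrative defaults for a new Pro user's automation_settings row.
DEFAULT_SETTINGS = {
    "auto_linker": True,
    "weekly_classifier": True,
    "pattern_detection": True,
    "monthly_summary": True,
    "long_context_opt_in": False,  # off by default: long-context is costly,
                                   # and the global flag must also be set
}
```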

Where to go next