Smart Connections gives you semantic neighbours. Copilot gives you chat. Khoj gives you search. None of them tell you how your thinking changed, surface what you forgot, or hand you a signed proof that the AI didn't make it up. ChronRecall does all three. Locally.
The first Obsidian plugin to answer "what did I know then?" — not just "what's related now?" Filter any query by date range. Compare how your understanding of a topic evolved from 2022 to 2025. Every answer arrives with source references and timestamps so you can verify the result inside your actual notes.
Under the hood: each note chunk is indexed with its first-written and last-modified timestamp. The retrieval layer accepts a date predicate as a first-class filter, not a post-hoc rerank. Answers cite the time-window they came from.
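A minimal sketch of that idea — not the shipped implementation, and every name here is hypothetical. The point it illustrates: the date window filters the candidate set *before* similarity ranking, so out-of-window chunks can never leak into the answer via a rerank.

```typescript
interface IndexedChunk {
  path: string;
  firstWritten: number;  // epoch ms when the chunk first appeared
  lastModified: number;  // epoch ms of its latest edit
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// A chunk qualifies if its lifetime overlaps the query window.
function withinWindow(c: IndexedChunk, from: number, to: number): boolean {
  return c.firstWritten <= to && c.lastModified >= from;
}

function retrieve(index: IndexedChunk[], queryVec: number[],
                  from: number, to: number, k: number): IndexedChunk[] {
  return index
    .filter(c => withinWindow(c, from, to))                    // predicate first, never a post-hoc rerank
    .map(c => ({ c, score: cosine(queryVec, c.embedding) }))
    .sort((a, b) => b.score - a.score)                         // then rank survivors by similarity
    .slice(0, k)
    .map(s => s.c);
}
```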
A proactive weekly digest of high-relevance notes you haven't touched in 60+ days. Configurable threshold. Not a random "note of the day" — a relevance-ranked list based on what you've been writing about recently. Rediscover decisions you made, ideas you captured, research you forgot you did.
The ranking signal: cosine similarity between this week's writing and the dormant chunk, weighted by chunk length and inverse recall-frequency. Notes that nobody has read in a year but that match your current attention rise to the top.
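One plausible reading of that signal, sketched in code. The field names, the log-damped length weight, and the `1 / (1 + recalls)` penalty are illustrative assumptions, not the plugin's exact formula:

```typescript
interface DormantChunk {
  path: string;
  embedding: number[];
  tokenCount: number;   // chunk length
  recallCount: number;  // how often this chunk was previously surfaced
  lastTouched: number;  // epoch ms of the last read or edit
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

const DORMANCY_MS = 60 * 24 * 3600 * 1000;  // the configurable 60-day threshold

function digestScore(weekVec: number[], c: DormantChunk, now: number): number {
  if (now - c.lastTouched < DORMANCY_MS) return 0;  // only dormant notes qualify
  const lengthWeight = Math.log1p(c.tokenCount);    // longer chunks carry more signal, damped
  const recallPenalty = 1 / (1 + c.recallCount);    // inverse recall-frequency
  return cosine(weekVec, c.embedding) * lengthWeight * recallPenalty;
}
```

A chunk untouched for a year that matches this week's writing scores high; a chunk the digest has already surfaced nine times is penalised tenfold.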
Every AI-generated answer is logged with SHA-256 hashes of the source chunks it pulled from. Export an audit-trail PDF that proves every source the answer cites came from your vault — a citation the model invented simply has no hash to match. Built for lawyers, M&A analysts, due-diligence teams, and journalists who need AI-assisted research they can defend in front of a partner, an editor, or a court.

Every export includes: the question asked, the chunks retrieved, their hashes, their file paths and timestamps, the model used, and the vault root hash at the moment of the query. Tampering with any source after the fact breaks the chain — visibly.
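The verification step can be sketched in a few lines. The record shape and function names below are hypothetical; the mechanism is the one described above — re-hash each cited chunk as it exists today and compare against the hash recorded at query time:

```typescript
import { createHash } from "crypto";

// Illustrative audit-record shape; field names are assumptions.
interface AuditRecord {
  question: string;
  model: string;
  timestamp: string;
  sources: { path: string; chunkHash: string }[];
  vaultRootHash: string;
}

const sha256 = (data: string): string =>
  createHash("sha256").update(data, "utf8").digest("hex");

// Re-hash each source as it exists now; any edit made after the
// query produces a mismatch, so tampering is visible per file.
function verify(record: AuditRecord,
                readChunk: (path: string) => string): string[] {
  const tampered: string[] = [];
  for (const src of record.sources) {
    if (sha256(readChunk(src.path)) !== src.chunkHash) tampered.push(src.path);
  }
  return tampered; // empty array means the chain is intact
}
```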
In the last 18 months Microsoft shipped Recall — a system that screen-captures your machine every few seconds and indexes the contents. Apple shipped Personal Context, an LLM layer that ingests your messages, mail, and calendar to serve a personal assistant. Both are pitched as private. Both store data the company can — under the right legal pressure — be compelled to produce.
The cloud-AI default is: send your knowledge to a server, get a better answer back. For a vault that contains client work, unfinished thinking, decisions, contracts, medical notes, and the actual private record of your life, that trade is wrong on its face. A private knowledge tool that only works when connected to a third-party model is not private.
ChronRecall does not have a backend server. No telemetry. No analytics. No account creation. The plugin reads your local vault files and stores its index in a subfolder of the vault itself. The only network call it ever makes is the same one Obsidian itself makes — to check the plugin marketplace for updates. If you want to use a cloud model (OpenAI, Anthropic), that's an explicit choice you make per-query, never a default.
The rule: if the feature works offline, it must work offline. Cloud is opt-in, never opt-out.
ChronRecall scales from a Raspberry Pi to a Mac Studio. Quality of the answer scales with the local model you run. We don't promise frontier-grade reasoning on a laptop — we promise a private workflow that gets meaningfully better as your hardware does.
| Tier | Local LLM | Hardware floor | What to expect |
|---|---|---|---|
| Light | 3B (Llama 3.2-3B, Qwen2.5-3B) | 8 GB RAM, any 2020+ laptop | Fast retrieval, basic summarisation. Cross-Temporal Recall works. Reasoning is short and literal. |
| Solid | 7B (Qwen2.5-7B, Llama 3.1-8B) | 16 GB RAM, M1/M2 or modern Intel | The sweet spot. Coherent multi-paragraph answers, reliable source-grounding, sub-second retrieval on 10k-note vaults. |
| Premium | 30B+ (Qwen2.5-32B, Llama 3.3-70B-Q4) | 32 GB+ unified memory, M3 Pro / RTX 4090 | Roughly 90% of frontier quality. Comparable to GPT-4-class output for retrieval-grounded questions. |
| Hosted (opt-in) | OpenAI / Anthropic API | Your own API key, used per-query | Best quality. Note: enabling this opt-in sends only the retrieved chunks to the API, never the full vault. |
Join the waitlist. No payment now. Waitlist members get first access when v0.1 launches, and the Lifetime €99 offer is reserved for the first 100 members.