# Research

Structured, source-backed research output saved to `Projects/Research/` as standalone files.
## The Problem

AI research output tends toward unsourced text with inconsistent depth. Product comparisons collapse variants, conflicts get silently resolved, and gaps go unmentioned.
## The Solution

A guidelines file in your vault that Jesse reads whenever research is requested, producing scoped, cited reports with actionability sections.
## Vault Structure

```
Knowledge/
  Jesse-Guidelines/
    Research-Guidelines.md   # Research output standards and citation rules
Projects/
  Research/                  # Completed research output (standalone files)
```
## Configuration

The template already includes research. `JESSE.md` has a pointer in the Rules section:

```markdown
### Research

Follow [[Knowledge/Jesse-Guidelines/Research-Guidelines]] for all research tasks. Any inbox item or instruction
that asks to research, investigate, explain, or answer a substantive question produces a file
in `Projects/Research/`, not a conversational reply.
```
## The Guidelines File

The template includes this file at `Knowledge/Jesse-Guidelines/Research-Guidelines.md`:
```markdown
# Research Guidelines

Research output is for human consumption. It must be accurate, but also accessible -- written so the reader can come away understanding the topic, not just holding a pile of citations. Start with a scope statement (what's covered, what's excluded, and why), followed by a 2-3 sentence executive overview that delivers the takeaway up front. The body should explore the topic from multiple angles, choosing angles relevant to the reason the research was requested when known. Structure depth per section based on what the material needs: use accessible overviews where a concept just needs to land, and include technical deep-dives where the subject demands full understanding. Not every section needs both -- match the depth to the content, don't force a uniform structure across the whole piece.

When a product or system comes in multiple tiers, configurations, or variants, identify all of them early. Don't collapse a product line into a single spec number -- a "Max" chip with 32-core and 40-core GPU options has two different bandwidth figures, and using just one misrepresents the picture. Build comparison structures (tables, side-by-side breakdowns) that show the full variant landscape so the reader can locate themselves in it.

Find multiple sources and prefer primary over secondary: manufacturer specs, published papers, official announcements, and direct interviews over blog recaps and aggregator summaries. Non-English sources are valid and sometimes critical -- even when a topic appears US- or English-centric, actively check whether relevant non-English perspectives exist (a Dutch sysadmin's career, a European standards body, an Italian regulatory filing). Don't force irrelevant sources, but don't default to English-only either. Note each source's publication date and flag when key claims rest on outdated data. Rate source trustworthiness inline and state why: an official spec sheet outranks an anonymous forum post, a peer-reviewed study outranks a vendor whitepaper. Use numbered citations [1] [2] inline throughout the text, with a full source list (URL, title, date, language if non-English) at the end.

Distinguish reported fact from analyst speculation from your own synthesis, and make the boundaries obvious. When sources conflict, surface both claims with context on methodology -- don't silently pick a winner or split the difference. When a conflict is noticed mid-analysis, resolve it inline: state both figures, identify which source is more authoritative and why, use the stronger source for any calculations or conclusions, and flag the remaining uncertainty so the reader knows the number isn't settled. Explicitly note gaps: if no source covers an obvious question, or if all available data is outdated or single-sourced, say so. "No independent benchmarks exist yet" is a finding, not an omission. But if a gap could be closed with additional source work -- a GitHub repo, a changelog, a release blog -- close it rather than flagging it as unresolved.

End with a detailed summary that draws all threads together into a coherent picture, followed by an actionability section. If the research was triggered by a decision, state what the findings mean for that decision. If there's nothing actionable yet, say that -- the section is never skipped. Always save research output to `Projects/Research/` as a standalone file.
```
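To make the guidelines concrete, here is a minimal sketch of a research file that follows them. Every product name, figure, source, and URL in it is invented for illustration -- only the shape (scope, variant table, inline citations with trust ratings, conflict handling, gap notes, summary and actionability, source list) is what the guidelines prescribe:

```markdown
# Research: Memory Bandwidth Across "Max" Chip Variants

## Scope
Covers memory bandwidth across all shipping variants. Excludes thermals and
pricing, which are not relevant to the purchase decision that triggered this.

**Overview:** The two GPU configurations ship with different memory bandwidth,
so no single figure represents the product line. Only the 40-core variant
clears the workload's bandwidth threshold.

## Variant Landscape
| Variant        | GPU cores | Memory bandwidth |
|----------------|-----------|------------------|
| Max (base)     | 32        | 400 GB/s [1]     |
| Max (upgraded) | 40        | 546 GB/s [1]     |

## Findings
The manufacturer spec sheet [1] (official, high trust) lists 546 GB/s for the
40-core variant; an aggregator roundup [2] (secondary, lower trust) reports
500 GB/s. The spec sheet is the more authoritative source, so 546 GB/s is
used below; the discrepancy is flagged as unresolved. No independent
benchmarks exist yet for the 32-core variant -- a finding, not an omission.

## Summary and Actionability
Both variants are fully documented, but only the 40-core model meets the
stated workload's bandwidth needs. If buying for that workload, choose the
40-core configuration.

## Sources
[1] "Max chip specifications", manufacturer spec sheet, 2024-11-02. https://example.com/specs
[2] "Max chip review roundup", aggregator, 2024-11-10. https://example.com/roundup
```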
## How It Works

- Drop a research request in `Inbox/` or ask Jesse directly.
- Jesse reads `Knowledge/Jesse-Guidelines/Research-Guidelines.md`, scopes the research, gathers sources, and writes a structured report.
- Output lands in `Projects/Research/` with a summary and actionability section.
- Any inbox item or instruction that asks to research, investigate, explain, or answer a substantive question produces a file in `Projects/Research/`. The conversational summary is secondary to the written report.
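The flow starts from a plain note. A hypothetical inbox item might look like this (the filename and wording are illustrative -- no particular format is required):

```markdown
<!-- Inbox/research-max-chip-bandwidth.md (hypothetical example) -->
Research the memory bandwidth differences across the "Max" chip variants.
Context: deciding which configuration to buy for local model inference.
```

Including the reason for the request, as the second line does, lets Jesse choose angles and frame the actionability section around the actual decision.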
## Tips and Gotchas

- **Don't collapse variants.** If something comes in multiple configurations, show all of them.
- **Non-English sources matter.** Especially for regulatory, standards, or international topics.
- **Gaps are findings.** "No data exists" is valuable -- say so explicitly.
- **The actionability section is never skipped.** Even when the answer is "nothing to do yet."
- **The guidelines file is yours to edit.** Different citation style, depth, domain needs -- change it directly.