Scaffold a grant proposal
An NSF call for proposals has just dropped. It looks relevant to your lab's work, and you have two weeks to decide whether to submit and who in the lab should lead which piece. Before you commit, you want to understand how closely the call actually maps to your existing research, where the fit is weak, and which other labs are likely to apply with similar framing.
Here, agentic AI's utility shouldn't be in writing the proposal. It should be in the coordination underneath: unpacking the call's specific aims, pulling related past winners, deciding which student or postdoc leads which aim, and organising references by the section they belong in.
AI does exactly that kind of organising well. Point it at the call, your lab's team notes, and the literature, and it matches each aim to the right team member, flags coverage gaps, and spawns a subagent to verify every citation. You still write the proposal. The writing just goes from weeks of coordination to an afternoon of focused drafting.
The two approaches below differ in how much of your lab's context you can feed in.
The simple version is to paste in the call (text or link), name your lab's actual research areas, and list a few recent publications for the AI to anchor on.
Here is the full text of an NSF call for proposals: [paste the call or its URL].
My lab works on:
- [research area 1, one sentence]
- [research area 2, one sentence]
Our recent publications include [list two or three representative papers with DOIs].
Please do three things:
1. Read the call carefully. Pull out every specific aim, evaluation criterion, and required component.
2. For each specific aim in the call, tell me how closely our existing work maps to it. Where is the fit strong, where is it weak, and what would we need to add (a new method, a collaborator, a preliminary-data effort) to be competitive?
3. Based on the agency's funding patterns and past funded grants on similar topics, identify which other labs in the US are likely to apply with the same framing. Where is our angle most distinctive?
Return the output as a markdown document I can hand to my co-investigators as the skeleton for the proposal: call requirements on the left, our fit and positioning on the right, gaps called out explicitly.

When the mapping comes back, stay in the same conversation for follow-ups. Ask it to propose three framing angles, or to draft a one-paragraph summary of how your work differentiates from likely competitors. The agent already has all the context. Your follow-up is one line.
Before any of this goes into a real proposal, verify every citation explicitly. The mapping is a starting point. You write things yourself and verify before you cite.
Let's walk through an example. Say you're a PI pulling together a grant proposal. You have a handful of graduate students and postdocs, each working on a different piece of the lab's research. The proposal needs to synthesise their work into a single coherent story, against a specific call, backed by proper citations you can defend.
The point isn't to have Claude write the proposal for you. The point is to have Claude do the organising and structuring: pulling the call and past winners into one place, matching the call's specific aims to your team's expertise, clustering references by aim, and running a parallel subagent to verify every citation before the team starts writing. The prose stays yours. Claude hands you the scaffolding so the writing is fast and focused.
Pull the call, past winners, and supporting context into one folder
The first move is to put every piece of context the proposal will touch in one place: the call itself, the five most recent funded grants on similar topics, the agency's writing guidance, the most-cited papers on the topic area. Claude works out what access it needs to download each one and asks you for anything it can't reach on its own.
I'm preparing a grant proposal for a specific NSF call. Before any writing starts, help me pull every piece of context I'll need into a single folder.
Download to context/:
- The full text of the call for proposals (tell me which call you're fetching before you pull it).
- The five most recently funded grants on topics in this call's area, from the agency's award database.
- The agency's current writing guidance and reviewer rubric.
- The 20 to 30 most-cited papers from the last five years on the topic area.
Set up whatever access you need to do this (agency site, PubMed, Semantic Scholar, OpenAlex). Tell me what you need from me. When it's done, write context/README.md listing what you pulled and anything you couldn't find.

Match the call's aims to your team's expertise
Different students and postdocs in your lab are funded out of different grants. When you're writing a new proposal, matching aims to team members is about both expertise and whose salary the proposal needs to cover. Point Claude at your lab's team notes (or have it build them from CVs) and it matches each aim to the right lead. Coverage gaps become visible immediately.
My lab has a team/ folder with one markdown file per graduate student and postdoc, listing their recent projects, publications, and technical expertise. If these index files don't exist yet, build them from CVs in team/cvs/ before continuing.
Read the call in context/ and pull out each specific aim it defines. For each aim, tell me:
1. Which team members are the natural lead based on their current work.
2. Which team members contribute supporting work.
3. Where we have a coverage gap: either a new collaborator, a new hire, or a preliminary-data effort we need before the proposal is credible.
Write the result to planning/aim-to-team.md as a clean table the team can react to.

Organise references by aim, not by topic
A reviewer reads one aim at a time and looks for the citations that aim requires. The reference set should reflect that. Rather than a single bibliography, have Claude cluster the references by which aim they support, separating foundational citations, your own prior work, competitor work, and emerging results. Thin sections (where your own work isn't yet credible for an aim) surface on their own.
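The clustering amounts to a group-by over the reference set. A minimal sketch of the target structure, using hypothetical reference records (in the real workflow these come from the papers in context/):

```python
from collections import defaultdict

# Hypothetical reference records; the real ones come from context/.
REFS = [
    {"aim": 1, "category": "foundational", "citation": "Smith et al. 2019"},
    {"aim": 1, "category": "own", "citation": "Your Lab 2023"},
    {"aim": 2, "category": "competitor", "citation": "Rival Lab 2022"},
]

CATEGORY_ORDER = ["foundational", "own", "competitor", "emerging"]

def references_by_aim(refs):
    """Group references into {aim: {category: [citations]}}."""
    grouped = defaultdict(lambda: defaultdict(list))
    for ref in refs:
        grouped[ref["aim"]][ref["category"]].append(ref["citation"])
    return grouped

def to_markdown(grouped):
    """Render one section per aim, flagging aims with no prior work of our own."""
    lines = []
    for aim in sorted(grouped):
        lines.append(f"## Aim {aim}")
        for cat in CATEGORY_ORDER:
            for citation in grouped[aim].get(cat, []):
                lines.append(f"- ({cat}) {citation}")
        if not grouped[aim].get("own"):
            lines.append("- FLAG: no prior work of our own for this aim")
    return "\n".join(lines)

print(to_markdown(references_by_aim(REFS)))
```

The flag on aims with no "own" entries is the thin-section signal the prompt below asks Claude to surface.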
Read planning/aim-to-team.md and the papers in context/. For each specific aim, produce a ranked reference list:
- Foundational citations reviewers will expect us to cite.
- Our own prior work relevant to this aim.
- Competitor work we need to acknowledge and position against.
- Emerging results from the last 18 months that strengthen our angle.
Write it to planning/references-by-aim.md, one section per aim. Flag any aim where our own work is thin and we need more preliminary data to be credible.

Spawn parallel subagents for outlines and reference checks
This is where the time savings land. Rather than outlining each aim one at a time and manually checking every DOI at the end, Claude Code can spawn parallel subagents via the Task tool. One subagent per aim drafts a structural outline pulling from the matched team members' work and references. In parallel, a dedicated subagent verifies every citation against Crossref and flags any that fail. The outlines are structural only. The team writes the prose from there.
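The fan-out itself is the standard pattern of launching independent jobs concurrently and collecting their results. A rough sketch of the shape, with the workers stubbed out (in the real workflow these are subagents, not local functions):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub workers standing in for subagents.
def draft_outline(aim):
    return f"planning/aim-{aim}-outline.md"

def verify_references(dois):
    # Keep anything that doesn't look like a DOI, for flagging.
    return [doi for doi in dois if not doi.startswith("10.")]

AIMS = [1, 2, 3]
DOIS = ["10.1000/example", "not-a-doi"]

with ThreadPoolExecutor() as pool:
    # One job per aim, plus the verification job, all in flight at once.
    outline_jobs = [pool.submit(draft_outline, aim) for aim in AIMS]
    verify_job = pool.submit(verify_references, DOIS)
    outlines = [job.result() for job in outline_jobs]
    bad_dois = verify_job.result()

print(outlines)   # one outline path per aim
print(bad_dois)   # suspect identifiers to flag
```

The point of the structure is that the slowest job, not the sum of all jobs, sets the wall-clock time.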
I want a structural outline for each specific aim in parallel, plus a reference verification pass at the same time. Spawn subagents via the Task tool.
One subagent per aim:
1. Read planning/aim-to-team.md and pull out its assigned aim.
2. Read planning/references-by-aim.md for that aim.
3. Read the team members' index files in team/ for anyone assigned to this aim.
4. Write an outline at planning/aim-<number>-outline.md containing: hypothesis, specific objectives, approach (drawing on the assigned team members' actual methods), expected outcomes, and what we would do if the approach fails.
The outline should be structural only. Do not write prose. The point is that each assigned team member opens their aim's outline and fills in the prose in their own voice.
In parallel, spawn one more subagent to run reference verification: pull every DOI referenced across all the outlines, check each one against Crossref, confirm title and first author match, and flag any issues at planning/reference-issues.md. Catch fabricated or mis-cited papers before the team starts writing.
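The matching logic in that verification pass is simple enough to sanity-check by hand. A minimal sketch, assuming records shaped like the `message` object the Crossref REST API returns for `GET https://api.crossref.org/works/<DOI>` (the fetch is omitted; the comparison is what catches fabricated citations):

```python
import re

def normalise(text):
    """Lowercase and strip punctuation so formatting differences don't fail the match."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def check_citation(expected_title, expected_first_author, record):
    """Compare an expected citation against a Crossref-style work record.

    Crossref returns `title` as a list of strings and `author` as a list
    of {"given": ..., "family": ...} dicts.
    """
    issues = []
    actual_title = record.get("title", [""])[0]
    if normalise(actual_title) != normalise(expected_title):
        issues.append(f"title mismatch: expected {expected_title!r}, got {actual_title!r}")
    authors = record.get("author", [])
    actual_family = authors[0].get("family", "") if authors else ""
    if normalise(actual_family) != normalise(expected_first_author):
        issues.append(f"first author mismatch: expected {expected_first_author!r}, got {actual_family!r}")
    return issues

# Hypothetical record, shaped like a Crossref response.
record = {"title": ["Deep Learning"], "author": [{"family": "LeCun", "given": "Yann"}]}
print(check_citation("Deep learning", "LeCun", record))   # no issues
print(check_citation("Deep learning", "Hinton", record))  # one author mismatch
```

Anything this check flags goes into planning/reference-issues.md for a human to resolve, not for the agent to silently fix.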
The Harvard paper-reviewer skill at github.com/Harvard-Agentic-Science/Skills is worth dropping into ~/.claude/skills/paper-reviewer/ alongside this workflow. When a student hands in their prose for an aim, running paper-reviewer in skeptic mode on the draft catches overstated claims before the PI round of edits.