Choose Your Domain
Each section contains self-contained prompts you can copy directly into Claude.ai (browser) or Claude Code (CLI). Replace [placeholders] with your content.
| If you work with... | Jump to |
|---|---|
| Data files, CSV/Excel, visualizations, EDA pipelines | Code & Data |
| Writing abstracts, introductions, discussions, titles | Paper Writing |
| Papers, citations, literature reviews, BibTeX files | Literature Search & Citation Verification |
| Email, teaching materials, admin documents, rebuttals | Writing & Admin |
| R, clinical workflows, C++/TypeScript, econometrics | Technical Workflows |
| Automated pipelines, Zotero+MCP, privacy audits | Advanced Research |
General principles: Verify everything that matters. Start with one task. Rephrase, don't repeat.
Code & Data Visualization
Prompts for exploratory data analysis, publication-quality figures, and data cleaning.
EDA Pipeline
Prerequisites: Claude Code + data file on disk, or Claude.ai + paste/upload data
Try this now:
Load [path/to/data.csv]. Produce an EDA report:
1. Shape, dtypes, missing values per column
2. Distribution plots for all numeric columns
3. Correlation heatmap
4. Top 3 anomalies or unexpected patterns
Save all figures to outputs/ as PDF (vector).
What to verify: Row/column counts match your expectations. Missing value percentages are plausible. Correlation values align with domain knowledge.
Related skills: /code-simplify, /tikz-figures
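Before trusting the model's EDA summary, it's worth recomputing the basics yourself. A minimal pandas sketch (the stand-in DataFrame and column names are hypothetical; swap in `pd.read_csv("path/to/data.csv")`):

```python
import pandas as pd

def eda_summary(df: pd.DataFrame) -> dict:
    """Shape, dtypes, and per-column missing counts for a quick sanity check."""
    return {
        "shape": df.shape,
        "dtypes": df.dtypes.astype(str).to_dict(),
        "missing": df.isna().sum().to_dict(),
    }

# Stand-in frame; replace with pd.read_csv("path/to/data.csv")
df = pd.DataFrame({"x": [1, 2, None], "y": ["a", "b", "c"]})
report = eda_summary(df)
```

Compare `report["shape"]` and `report["missing"]` against the counts Claude reports; any mismatch means one of you is looking at the wrong file.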
Publication-Quality Figure
Prerequisites: Claude Code + matplotlib/seaborn installed
Try this now:
Create a publication-quality figure from [path/to/data.csv]:
- Use the Okabe-Ito colorblind-safe palette
- Font size: 10pt for labels, 8pt for ticks
- Export as PDF (vector) at 3.5 inches wide (single-column)
- No title (caption goes in LaTeX)
- Include error bars where appropriate
What to verify: Colors are distinguishable in grayscale. Axis labels are readable at print size. Error bars represent the correct statistic (SD vs. SEM vs. CI).
Related skills: /tikz-figures, /latex-consistency
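For reference when checking the output, the constraints above can be sketched directly in matplotlib. This is a minimal sketch, assuming matplotlib is installed; the data, axis labels, and output path are placeholders (the Okabe-Ito hex values themselves are standard):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted export
import matplotlib.pyplot as plt

# Okabe-Ito colorblind-safe palette
OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#F0E442",
             "#0072B2", "#D55E00", "#CC79A7", "#000000"]

def single_column_figure(x, series, path="figure.pdf"):
    """series: dict mapping label -> (y, yerr). 3.5 in wide, no title, vector PDF."""
    fig, ax = plt.subplots(figsize=(3.5, 2.4))
    for (label, (y, yerr)), color in zip(series.items(), OKABE_ITO):
        ax.errorbar(x, y, yerr=yerr, color=color, label=label, capsize=2)
    ax.set_xlabel("x", fontsize=10)   # placeholder label
    ax.set_ylabel("y", fontsize=10)   # placeholder label
    ax.tick_params(labelsize=8)
    ax.legend(fontsize=8, frameon=False)
    fig.tight_layout()
    fig.savefig(path)  # .pdf extension keeps the output vector
    plt.close(fig)
    return path
```

Saving to PDF at the final column width (rather than scaling down a large PNG) is what keeps fonts at their true print size.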
Paper Writing
Prompts for drafting core paper sections — abstract, introduction, discussion, and titles. Works best when you supply your method, results, and context directly.
Abstract
Prerequisites: Claude.ai or Claude Code — no installation needed
Try this now:
/paper-abstract
My paper: [one-paragraph summary of your method and results]
Venue: [NeurIPS / ICML / ICLR / AAAI / other]
Key result: [single most important finding, with numbers]
Contribution type: [new method / new dataset / analysis / theory]
What to verify: All numbers match your results section exactly. No claims that aren't backed by your experiments. The problem statement matches your introduction.
Related skills: /paper-abstract
Introduction
Try this now:
/paper-introduction
Problem: [what problem does your paper solve, and why does it matter?]
Gap: [what does existing work fail to do?]
Approach: [one sentence describing your method]
Contributions: [3-4 bullet points listing your specific contributions]
Related work to cite: [list key papers you want positioned against]
What to verify: The gap claim is accurate — don't overstate what prior work misses. Contribution bullets are falsifiable. The narrative flows from problem to gap to your solution.
Related skills: /paper-introduction
Discussion Section
Try this now:
/paper-discussion
Main findings: [paste your key results and numbers]
Surprising result (if any): [anything that didn't go as expected]
Limitations: [what your method can't do or where it breaks down]
Broader impact: [what this enables for future work]
What to verify: Limitations are honest and complete — reviewers will add any you omit. Broader impact claims aren't overclaimed. All "future work" suggestions are technically plausible.
Related skills: /paper-discussion
Paper Titles
Try this now:
/paper-abstract
Abstract: [paste your abstract]
Venue: [target venue]
Style preference: [descriptive / catchy acronym / question-form / neutral]
What to verify: The title accurately reflects your actual contribution. If using an acronym, make sure it's not already taken by a prominent paper.
Related skills: /paper-abstract
Literature Search & Citation Verification
Prompts for literature search, synthesis, gap identification, and citation verification — with explicit guards against hallucination.
Warning: Studies report LLMs hallucinate 18–29% of citations in literature reviews. These prompts build in verification from the start.
Methodology-Focused Search
Try this now:
Search for papers that use [specific method] to study [specific problem].
For each paper found:
- Full citation (authors, title, venue, year)
- One-sentence summary of the methodological contribution
- How it differs from [your approach]
If you are uncertain whether a paper exists, write [UNCERTAIN] rather than guessing.
What to verify: Every citation exists. Check at least 3 on Semantic Scholar.
Related skills: /literature-synthesizer, /paper-references
Citation Verification (CLI)
Try this now:
/paper-references
Checks every entry in your .bib file against Semantic Scholar and CrossRef. Flags mismatches, preprints with published versions, and entries that don't resolve to any known paper.
Citation tools comparison:
| Tool | What it checks | Notes |
|---|---|---|
| SwanRef | Citation existence | Free, batch upload |
| Semantic Scholar API | Programmatic access | Free API key |
| Elicit | Structured data extraction | Free tier available |
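Spot-checks against Semantic Scholar can also be scripted. A sketch using the Semantic Scholar Graph API's paper-search endpoint (endpoint URL and field names as documented at the time of writing — verify against the current API docs before relying on it):

```python
import requests

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def normalize(title: str) -> str:
    """Case- and punctuation-insensitive form for title comparison."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def citation_exists(title: str) -> bool:
    """True if Semantic Scholar returns an exact (normalized) title match."""
    resp = requests.get(
        S2_SEARCH,
        params={"query": title, "limit": 5, "fields": "title,year"},
        timeout=10,
    )
    resp.raise_for_status()
    hits = resp.json().get("data") or []
    return any(normalize(p.get("title", "")) == normalize(title) for p in hits)
```

Exact-title matching is deliberately strict: a fuzzy match can "confirm" a hallucinated citation that merely resembles a real paper.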
Writing & Admin
Everything here works in Claude.ai with zero installation.
Administrative Email Draft
Try this now:
Draft a professional email:
- From: [your name, your role]
- To: [recipient, their role]
- Purpose: [what you need]
- Tone: [formal/collegial/brief]
- Constraints: [max length, deadline to mention, attachments to reference]
Write the subject line and body. Do not invent facts.
What to verify: No fabricated details. Tone matches your relationship with the recipient. All facts are correct.
Rebuttal Paragraph
Try this now:
Reviewer wrote: "[paste reviewer comment]"
Draft a rebuttal paragraph that:
1. Acknowledges the concern (one sentence)
2. Provides evidence addressing it
3. Describes what changed in the revision
Tone: professional, concessive opening, then firm evidence.
What to verify: The evidence you cite is real. The changes described actually happened. The tone is not defensive.
Related skills: /review-triage
Technical Workflows
Prompts for R/econometrics, clinical/medical imaging setup, C++/TypeScript projects, and referee responses. Assumes terminal familiarity.
R Fixed-Effects Panel Regression
Try this now:
Read [path/to/panel_data.csv].
Estimate a two-way fixed-effects model:
Y = [outcome] ~ [treatment] + [controls] | [unit_FE] + [time_FE]
Use the fixest package. Cluster standard errors at the [unit] level.
Output a modelsummary table comparing 3 specifications.
Export as LaTeX (booktabs).
What to verify: Number of observations matches your data. Fixed effects are applied to the correct dimensions. Standard errors are clustered at the right level.
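One way to sanity-check the fixed effects outside R: for a balanced panel, the two-way within transformation (subtract unit means and time means, add back the grand mean) drives any pure unit-plus-time effect to exactly zero. A pandas sketch with hypothetical column names:

```python
import pandas as pd

def two_way_demean(df: pd.DataFrame, col: str,
                   unit: str = "unit", time: str = "time") -> pd.Series:
    """Within-transform: x - unit mean - time mean + grand mean.
    Matches the two-way FE transformation exactly only for balanced panels."""
    return (df[col]
            - df.groupby(unit)[col].transform("mean")
            - df.groupby(time)[col].transform("mean")
            + df[col].mean())
```

If this transform applied to your outcome leaves large residual structure that the fixest output claims to absorb, the fixed effects are probably specified on the wrong dimensions.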
Medical/Clinical Setup (Architecture First, No Data)
Try this now:
I'm building a [imaging modality] pipeline for [clinical task].
Do NOT process any patient data.
Design the project architecture:
1. Directory structure (data/, models/, configs/, outputs/)
2. .claudeignore excluding all DICOM, NIfTI, and patient-identifiable files
3. Config template for model parameters
4. A CLAUDE.md that encodes "never read files in data/"
What to verify: The .claudeignore actually blocks all sensitive file types. The CLAUDE.md verification requirements are explicit. No patient data paths are referenced.
See also: Privacy & GDPR guide
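A starting point for the .claudeignore in step 2, assuming it follows gitignore-style patterns (verify the syntax against the current Claude Code documentation, and extend the list to the formats your pipeline actually uses):

```
# Block the data directory and bulk imaging formats
data/
*.dcm
*.dicom
*.nii
*.nii.gz
*.mha
*.mhd
# Tabular exports that may contain PHI
*.xlsx
patient_*.csv
```

Test it the same way you would a .gitignore: drop a dummy file matching each pattern and confirm Claude cannot read it.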
Advanced Research
Prompts for connecting external tools (Zotero MCP), auditing your setup, and maintaining AI-generated code.
Zotero + MCP Library Search
Prerequisites: Claude Code + Zotero desktop + Zotero MCP server configured
Try this now:
Search my Zotero library for papers related to [your topic].
For each match, show: title, authors, year, and which collection it's in.
Then identify 3 gaps — topics I should have papers on but don't.
What to verify: The papers returned actually exist in your Zotero library. The "gaps" are genuine, not hallucinated subfields.
Privacy Audit
Try this now:
Audit this session's privacy posture:
1. List every file you've read in this session
2. Flag any that contain secrets, credentials, or PII
3. Check if .claudeignore exists and what it blocks
4. Report whether DISABLE_TELEMETRY is set
5. Recommend fixes for any issues found
What to verify: Cross-check the file list against your expectations. Verify the .claudeignore recommendations are complete.
Related skills: /audit-my-setup
Cognitive Debt Audit
Try this now:
Audit [path/to/project] for cognitive debt:
1. Which files were likely AI-generated? (heuristics: uniform style, no TODOs, generic variable names)
2. For each: can a human maintainer understand it without the AI context?
3. Are there tests? If not, which functions most need them?
4. Is there a CONTRIBUTING.md that explains the architecture?
Produce a prioritized refactoring plan. Do NOT refactor — just plan.
What to verify: The AI-generated file detection is plausible but imperfect. Read the refactoring plan before acting on it — it's a starting point, not a prescription.