## Prerequisites

Before you start, run these two commands. Running both takes only a couple of minutes.

Step 1 — Install Claude Code:

`npm install -g @anthropic-ai/claude-code`

Step 2 — Add the Research Agora plugin:

`/plugin marketplace add rpatrik96/research-agora`
Run Step 2 inside a Claude Code session. If you're using Claude.ai in the browser, no installation is needed — but skills (slash commands) are CLI-only. The browser path uses direct prompts instead.
## Choose Your Path

There are two ways to get this running:
| Path | Setup time | What you get |
|---|---|---|
| Browser (Claude.ai) | 0 min | Chat interface, no file access, no installation |
| Claude Code (CLI) | 5–10 min | Full agent: reads/writes files, runs code, executes skills |
If you want to see citation verification work right now, the browser path gets you there in 3 steps. The CLI path gives you access to the full skill library and runs against your actual project files.
Not sure which path fits your role? See the role-based guides for PIs, researchers, and students.
## Browser Path (0 min setup)
You don't need to install anything. Open Claude.ai in a browser tab.
### Step 1
Go to claude.ai and sign in (free tier works).
### Step 2
Paste this prompt:
```
You are a BibTeX librarian.
Objective: Check each entry in the following BibTeX snippet against Semantic Scholar.
Flag entries where the title, authors, or year don't match any known publication.
Output: Table with columns — cite key, status (verified / unverified / mismatch), details.

[paste 3–5 entries from your .bib file here]
```

### Step 3
Review the table. Any row marked mismatch or unverified is a potential hallucination. Fix or remove it before submission.
That's it. The browser path has no file access — you paste content in manually. For automated verification against your full .bib file, use the CLI path below.
## CLI Path (5–10 min setup)
Prerequisites: Node.js installed, plus a Claude Pro subscription or an Anthropic API key.
### Install Claude Code

```shell
npm install -g @anthropic-ai/claude-code
```

### Navigate to your project

```shell
cd /path/to/your/project
```

### Start your first session

```shell
claude
```

On first run, Claude Code opens a browser tab to authenticate. Follow the prompts. Once authenticated, you're at the interactive agent prompt.
## Quick Start: Verify Citations

This is the most concrete way to evaluate what a skill does.

### Step 1 — Install the Research Agora plugin

```
/plugin marketplace add rpatrik96/research-agora
```

Or manually: follow the installation instructions in README.md.
### Step 2 — Navigate to a project with a .bib file

```shell
cd /path/to/project-with-references.bib
claude
```

### Step 3 — Run the citation verification skill

```
/paper-references
```

### Step 4 — Read the output
You'll see something like:

```
Checking 47 entries against Semantic Scholar and CrossRef...
✓ vaswani2017attention — verified (Vaswani et al., 2017, NeurIPS)
✓ lecun1989backprop — verified (LeCun et al., 1989, Neural Computation)
⚠ smith2023efficiency — MISMATCH: title found but year differs (paper is 2022, not 2023)
✗ johnson2024emergent — NOT FOUND: no matching publication on any indexed source
✓ goodfellow2016deep — verified (Goodfellow et al., 2016, MIT Press)
...
Summary: 44 verified, 2 mismatches, 1 not found
```

That run costs $0.10–0.30 in API tokens, depending on bibliography size.
### Fallback — no .bib file?
Paste this sample into a file called demo.bib in your current directory:
```bibtex
@inproceedings{vaswani2017attention,
  title={Attention is all you need},
  author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and others},
  booktitle={NeurIPS},
  year={2017}
}

@article{invented2024hallucination,
  title={Emergent reasoning through chain-of-thought distillation at scale},
  author={Chen, Wei and Park, Soo-Jin and Mueller, Hans},
  journal={ICML},
  year={2024}
}
```

The second entry is fabricated. The agent flags it as `not_found`.
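If you're curious how entries like these get pulled apart before verification, here is a minimal sketch of a field extractor using only the standard library. This is an illustration, not the plugin's actual implementation — a real tool would use a proper BibTeX parser, and this naive regex ignores edge cases like nested braces:

```python
import re

def parse_bib(text):
    """Extract the cite key and simple {...}-delimited fields from each entry."""
    entries = []
    for m in re.finditer(r"@\w+\{(?P<key>[^,]+),(?P<body>.*?)\n\}", text, re.S):
        fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", m.group("body")))
        entries.append({"key": m.group("key"), **fields})
    return entries

demo = """@article{invented2024hallucination,
  title={Emergent reasoning through chain-of-thought distillation at scale},
  author={Chen, Wei and Park, Soo-Jin and Mueller, Hans},
  journal={ICML},
  year={2024}
}"""

for e in parse_bib(demo):
    print(e["key"], e["year"])  # invented2024hallucination 2024
```

Once the key, title, authors, and year are extracted, each entry can be checked against a scholarly index.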
### Fallback — no CLI yet?
Run the browser-path version above. Note: the browser version relies on the LLM's knowledge, not the database-backed verification the CLI skill provides.
### Popular first skills for paper writing

- `/paper-abstract` — Draft or improve your paper abstract
- `/paper-experiments` — Structure and write your experiments section
- `/literature-synthesizer` — Find and synthesize related work
- `/paper-review` — Get a critical review of your draft
## What Just Happened
The agent read your .bib file, extracted each citation's title, authors, and year, then queried Semantic Scholar and CrossRef to find matching records. For each entry it compared the metadata against what's actually indexed. Entries that don't resolve to a real publication — hallucinated references, typos, or year errors — are flagged. This is the same check a careful human reviewer would do manually, run programmatically against scholarly databases in under a minute. The agent didn't guess: it verified against ground truth.
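The comparison step can be sketched in a few lines of Python. This is a mock of the idea, not the skill's actual code: `lookup` stands in for a real Semantic Scholar or CrossRef query, and the field names are assumptions for illustration:

```python
def classify(entry, record):
    """Classify a BibTeX entry against an indexed record (None = no match found)."""
    if record is None:
        return "not_found"                      # hallucinated or unindexed reference
    if entry["title"].lower() != record["title"].lower():
        return "mismatch"                       # title differs from the indexed record
    if int(entry["year"]) != int(record["year"]):
        return "mismatch"                       # year differs (common citation error)
    return "verified"

# Toy index standing in for records a real run would fetch over the API.
index = {
    "attention is all you need": {"title": "Attention is all you need", "year": 2017},
}

def lookup(title):
    return index.get(title.lower())             # stand-in for a database query

entry = {"title": "Attention is all you need", "year": "2017"}
print(classify(entry, lookup(entry["title"])))  # verified
```

The real skill does this per entry with fuzzy title matching and author checks, but the verdict logic — verified, mismatch, or not found — follows the same shape.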
## Setup Time Reference

| Component | First-time setup | Ongoing overhead |
|---|---|---|
| Claude Code installation | 5–10 min (one command + login) | Auto-updates |
| CLAUDE.md for a project | 20–45 min (or 5 min via /onboard) | 5–10 min every few weeks |
| MCP server (e.g., GitHub) | 5–15 min per server | None once configured |
| Custom skill | 10–30 min for a first skill | Minutes to update |
| Per-session startup | Automatic (~10 sec) | Context management via /clear, /compact |
Total initial investment: 1–2 hours for a full setup (Claude Code + CLAUDE.md + 2–3 MCP servers + a first skill). After that, per-session overhead is under 5 minutes.
## What's Next
- Concepts — How AI agents actually work: the five-level ladder from Chat to Skills, what to delegate vs. protect, where Research Agora fits.
- Verification — The full verification hierarchy: formal checks, automated heuristics, manual review. When to use each and why ideation gets verified ~7× less than code.
- `/onboard` — Run this skill in your project directory. It reads your codebase and drafts a `CLAUDE.md` tailored to your project in under 2 minutes.

For a CLAUDE.md template you can copy and customize, see `templates/CLAUDE.md.researcher`.