Methodology & Transparency
How we verify citations, what our accuracy claims mean, and where our system has limitations.
What "0% Fabricated Citations" Means
Our accuracy claim refers specifically to citation verification:
- 100% of citations with DOI identifiers have been verified against CrossRef or arXiv databases (30/30 in testing)
- Web sources are validated against a trusted-domain allowlist (.gov, .edu, consulting firms, international organizations, major news outlets) plus a URL accessibility check
- In testing, 100% of citations (34/34) passed validation via either a DOI match or a trusted domain
Citation verification ≠ content accuracy. AI-generated prose, arguments, and interpretations always require human review. Citation verification only confirms that the cited sources exist and are correctly attributed.
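For illustration, here is a minimal sketch of what these two checks can look like, assuming Python and the public CrossRef REST API. The function names, allowlist entries, and example domains are placeholders, not OpenDraft's actual implementation.

```python
# Illustrative sketch of DOI and trusted-domain checks (not OpenDraft's actual code).
import requests
from urllib.parse import urlparse

TRUSTED_SUFFIXES = (".gov", ".edu")  # assumed allowlist suffixes
TRUSTED_DOMAINS = {"www.who.int", "www.reuters.com"}  # example entries only

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

def domain_is_trusted(url: str) -> bool:
    """Check the URL's host against a simple allowlist."""
    host = urlparse(url).hostname or ""
    return host.endswith(TRUSTED_SUFFIXES) or host in TRUSTED_DOMAINS

def url_is_reachable(url: str) -> bool:
    """HEAD request to confirm the page is still accessible."""
    try:
        return requests.head(url, allow_redirects=True, timeout=10).status_code < 400
    except requests.RequestException:
        return False
```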
What "200M+ Research Papers" Means
The 200M+ figure refers to the combined accessible corpus via our integrated APIs:
- CrossRef: 150M+ DOI records from academic publishers worldwide
- arXiv: 2.4M+ preprints in physics, mathematics, CS, and related fields
- Semantic Scholar: 200M+ papers with citation graph and abstracts
Not all papers are accessible in full text—coverage depends on open access availability and publisher agreements.
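As a rough illustration of what "integrated APIs" means, the snippet below runs a title search against the public Semantic Scholar Graph API. The query string is an arbitrary example and error handling is omitted.

```python
# Example lookup against one of the integrated corpora (Semantic Scholar Graph API).
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={"query": "attention is all you need", "fields": "title,year,externalIds", "limit": 3},
    timeout=10,
)
for paper in resp.json().get("data", []):
    print(paper.get("year"), paper.get("title"), paper.get("externalIds", {}).get("DOI"))
```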
Generation Time: What Affects It
Typical generation time is 10-20 minutes for a research paper draft; actual times vary from run to run.
Times were measured on standard consumer hardware (an M1/M2 Mac or a modern Windows PC) with a stable internet connection.
Research Pipeline
Every draft goes through a rigorous filtering process. We start broad and narrow down to only verified sources.
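Conceptually, the funnel can be pictured as in the sketch below. The stage names and data shapes are assumptions chosen for illustration and do not mirror OpenDraft's internal code.

```python
# Hypothetical broad-to-narrow funnel; stages are illustrative assumptions.
def build_citation_pool(candidates, verify):
    seen, verified = set(), []
    for c in candidates:                 # start broad: everything the search APIs returned
        key = c.get("doi") or c.get("url")
        if not key or key in seen:       # drop duplicates and entries with no identifier
            continue
        seen.add(key)
        if verify(c):                    # keep only sources that pass verification
            verified.append(c)
    return verified
```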
Why Citations Get Rejected
Our system rejects citations that cannot be verified. Common causes of rejection:
- No DOI/identifier found: the citation lacks a verifiable identifier in academic databases
- Metadata mismatch: author names, publication year, or title don't match database records
- Source not in databases: the paper exists but isn't indexed in CrossRef or arXiv
- Retracted paper: the source has been retracted and is flagged in databases
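A simplified sketch of how these rejection reasons could be assigned is shown below. The field names and exact-match comparison are illustrative; a real implementation would use fuzzier matching (for example, normalized author names and titles).

```python
# Illustrative classifier for citation rejection reasons (not OpenDraft's actual logic).
from typing import Optional

def rejection_reason(citation: dict, record: Optional[dict]) -> Optional[str]:
    """Map a citation and its (possible) database record to a rejection reason."""
    if record is None:
        return "source not in databases"
    if record.get("is_retracted"):
        return "retracted paper"
    if not citation.get("doi") and not citation.get("arxiv_id"):
        return "no DOI/identifier found"
    if (citation.get("year") != record.get("year")
            or citation.get("title", "").strip().lower() != record.get("title", "").strip().lower()):
        return "metadata mismatch"
    return None  # citation passes
```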
Known Limitations
OpenDraft works best for mainstream academic topics with good database coverage. Here's where it may underperform:
- Very niche or emerging topics: topics with limited published research may have fewer verified citations available. Mitigation: the system will use fewer citations rather than include unverified ones.
- Non-English sources: CrossRef and arXiv have better coverage of English-language publications, and regional databases have limited integration. Mitigation: always verify non-English citations manually.
- Recent publications: papers published in the last 3-6 months may not yet be indexed in databases. Mitigation: very recent citations are flagged for manual verification.
- Interdisciplinary topics: topics spanning multiple fields may have uneven citation coverage. Mitigation: consider running multiple focused drafts for complex interdisciplinary work.
When Human Review Is Critical
AI-generated drafts are starting points, not finished products. Human review is essential to:
- Verify all citations exist and are correctly attributed
- Check that quoted or paraphrased content is accurate
- Validate that methodology descriptions match the cited sources
- Ensure statistical claims are supported by the cited research
- Review for logical coherence and argument flow
- Add your own analysis, insights, and original contributions
Verify Citations Yourself
Every citation in your draft includes a DOI, arXiv ID, or source URL that you can check independently.
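For a DOI, pasting https://doi.org/<DOI> into a browser resolves it to the publisher's page; for an arXiv ID, https://arxiv.org/abs/<ID> does the same. If you prefer to script it, a minimal check might look like the sketch below; the identifiers are placeholders to replace with ones from your draft.

```python
# Quick existence checks; replace the placeholder identifiers with ones from your draft.
import requests

doi = "10.xxxx/your-doi-here"
arxiv_id = "YYMM.NNNNN"

print(requests.get(f"https://api.crossref.org/works/{doi}", timeout=10).status_code)  # 200 means CrossRef has the DOI
print(requests.get(f"https://arxiv.org/abs/{arxiv_id}", timeout=10).status_code)      # 200 means the arXiv page exists
```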
Questions about our methodology?
Ask in GitHub Discussions