Federico De Ponte
Founder, OpenDraft
You Probably Shouldn't Use OpenDraft (Yet)
OpenDraft has been getting attention. Before you try it, here's what you should know.
OpenDraft is NOT:
- A one-click paper generator — You will need to edit, verify, and substantially revise the output
- A replacement for reading papers — AI synthesis is lossy; critical reading is irreplaceable
- Safe to use without verification — Every claim needs human review
- A way to bypass peer review — This generates drafts, not finished research
- A shortcut for academic integrity — Your institution's AI policy applies
What It Actually Does (and Fails At)
OpenDraft regularly:
- Misses important papers — Database coverage isn't perfect. Semantic Scholar has 200M+ papers, but that's not everything.
- Produces weak synthesis — AI summarization has limits. Nuance gets lost.
- Generates awkward transitions — Multi-agent handoffs sometimes create jarring section breaks.
- Over-relies on citation count — Popular papers get prioritized; obscure but important work gets missed.
- Requires significant manual editing — Plan to rewrite 30-50% of the output.
The Key Distinction
The citations are real — verified against Semantic Scholar, CrossRef, and arXiv. But "real citation" doesn't mean "best citation" or "correctly interpreted citation."
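"Verified" here means an existence check against a real index, not a relevance check. A minimal sketch of what such a check might look like using the public Semantic Scholar Graph API (the endpoint is real; the helper names, matching rule, and threshold-free exact match are illustrative assumptions, not OpenDraft's actual implementation):

```python
import json
import re
import urllib.parse
import urllib.request


def normalize(title: str) -> str:
    """Lowercase and strip punctuation so minor formatting
    differences don't cause a false mismatch."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def titles_match(cited: str, found: str) -> bool:
    """Existence check only: the cited title equals a title the
    index returned. Says nothing about whether the citation is apt
    or correctly interpreted."""
    return normalize(cited) == normalize(found)


def exists_in_semantic_scholar(title: str) -> bool:
    """Query the Semantic Scholar Graph API for the title.
    (Network call; a real pipeline would add retries and an API key.)"""
    url = ("https://api.semanticscholar.org/graph/v1/paper/search?query="
           + urllib.parse.quote(title) + "&fields=title&limit=5")
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return any(titles_match(title, p.get("title", ""))
               for p in data.get("data", []))
```

Note the gap this sketch makes visible: `titles_match` passing only means the paper exists. A citation that survives this check can still be the wrong citation for the claim it's attached to.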
Who Should Use OpenDraft
Yes, if you're:
- A researcher exploring how AI might assist early-stage drafting
- A developer interested in multi-agent architectures
- Someone willing to debug, criticize, and contribute
- Looking for a starting point, not a finished product
No, if you're:
- Expecting polished, submission-ready output
- A student looking for shortcuts (check your institution's AI policy first)
- A researcher who won't verify every claim in the output
- Someone who wants a set-it-and-forget-it tool
Why We're Saying This
Most tools oversell and underdeliver. We'd rather undersell and overdeliver.
OpenDraft is an experiment in multi-agent research drafting. It's not a product claiming to solve academic writing. It's an open-source exploration of whether separating research capabilities across specialized agents produces better results than prompting a single LLM.
The honest answer: sometimes yes, sometimes no.
We're still figuring out when.
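The single-LLM-vs-specialized-agents question can be made concrete. Here is a toy sketch of the pattern (all names and interfaces are hypothetical stand-ins, not OpenDraft's actual agents): each agent does one narrow job and hands a structured result to the next. It also shows where the "awkward transitions" failure mode comes from, since each agent sees only the shared state, not the other agents' reasoning.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Draft:
    """Shared state handed from agent to agent."""
    topic: str
    sources: List[str] = field(default_factory=list)
    sections: List[str] = field(default_factory=list)


# An "agent" here is just a function Draft -> Draft. A real system
# wraps an LLM call; these stubs show only the handoff structure.
Agent = Callable[[Draft], Draft]


def searcher(d: Draft) -> Draft:
    d.sources = [f"paper about {d.topic}"]  # stand-in for a database query
    return d


def synthesizer(d: Draft) -> Draft:
    d.sections = [f"Summary of {s}" for s in d.sources]
    return d


def run_pipeline(agents: List[Agent], d: Draft) -> Draft:
    # Each agent only reads and mutates the shared Draft, which is
    # why section boundaries between agents can read as jarring.
    for agent in agents:
        d = agent(d)
    return d
```

Usage: `run_pipeline([searcher, synthesizer], Draft("topic modeling"))` returns a `Draft` whose sections were written without either agent seeing the whole picture, which is exactly the trade-off the experiment is probing.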
If You're Still Interested
After reading all of this, if you still want to try it, here's what we'd actually find valuable:
- Find failure modes — Where does it break? What prompts cause bad output?
- Open issues — Criticism is more valuable than stars
- Fork and experiment — Try different agent configurations
- Share your results — What worked? What didn't?
And if you find something broken, please tell us. A detailed bug report helps more than any amount of praise.
Related Articles
Citation Hallucination is a Design Failure
Why we built OpenDraft the way we did.
How 19 AI Agents Work Together
Technical architecture deep-dive.
AI for Academic Writing: Beginner's Guide
If you're new to AI research tools.
Documentation
Full setup and usage guide.
About the Author: Federico De Ponte is the developer of OpenDraft. This is an honest assessment, not false modesty. The limitations are real.