The ICLR 2026 review process has been heavily infiltrated by AI "ghostwriters": third-party detection suggests that 21% of the roughly 76,000 reviews were generated entirely by large language models in a single click, another 35% were edited by AI to varying degrees, and only 43% were written purely by humans. These machine-written reviews are on average longer and give higher scores, yet they frequently contain hallucinated citations or accuse papers of numerical errors that do not exist, prompting authors to complain en masse on social media.
Facing a crisis of trust, the organizing committee has issued its strictest policy to date:
- Submission side: papers that make extensive, undeclared use of LLMs will be desk rejected immediately;
- Reviewing side: AI may be used as an aid, but reviewers remain fully responsible for the content; reviews containing fabricated citations or other "AI nonsense" can lead to the rejection of the reviewer's own submissions;
- Reporting channel: authors can privately flag suspected AI-written reviews, and the program chairs will conduct concentrated investigations and publicly disclose the results within the next two weeks.
The conference chair acknowledged that explosive growth in the AI field has pushed each reviewer to handle 5 papers within two weeks, far exceeding previous workloads, and called this a structural cause of the rampant "AI ghostwriting." The "AI review crisis" at ICLR 2026 shows that when large models start acting as reviewers, the academic community must rely on rules and detection tools to block these "ghost votes"; otherwise, peer review risks degenerating into an automated experiment for which no one takes responsibility.
