arXiv Imposes Year-Long Ban for AI-Generated Submissions Amid Integrity Crisis
Breaking: arXiv Announces Strict Policy Against AI Slop
The preprint server arXiv has announced that any author found submitting inappropriate AI-generated content will face a one-year ban from posting, followed by a permanent requirement that all future submissions undergo peer review before arXiv will host them. Thomas Dietterich, an emeritus professor at Oregon State University and a member of arXiv’s editorial advisory council and moderation team, disclosed the policy in a social media thread.

“We will not tolerate submissions that compromise the integrity of the scientific record,” Dietterich stated, emphasizing that the measures apply to content that includes fake citations, unedited prompt responses, or nonsensical diagrams. The move comes amid growing concern about AI-generated “slop” infiltrating the peer-reviewed literature, much of which editors and reviewers fail to catch.
arXiv leadership has not yet responded to requests for confirmation, but Dietterich’s announcement signals a decisive step by one of the world’s largest preprint repositories. The server, which hosts papers across physics, mathematics, computer science, and related fields, is intervening before the traditional journal peer-review process even begins.
Background: The Rise of AI-Generated Scholarly Slop
AI-generated content has surged in academic publishing, with examples ranging from hallucinated references to verbatim chatbot outputs slipping past editors and peer reviewers. Fake citations and incoherent diagrams have been identified in numerous papers, often without any consequences for the authors responsible.
arXiv, founded in 1991, has long been a cornerstone for rapid dissemination of scientific research, operating with a lightweight moderation system. The new policy addresses a glaring gap: until now, there were no explicit penalties for AI abuse on the server, even as the problem escalated across many fields.
Details of the New Policy
Under the announced guidelines, a first offense of submitting inappropriate AI-generated content results in a one-year suspension from posting on arXiv. Once the ban expires, all of the author’s future papers must permanently pass through a peer-review process before the server will host them, a significant escalation from the typical direct-submission route.
Dietterich clarified that the policy targets content that is “clearly produced by AI without verification or proper attribution,” not legitimate uses of AI tools for writing or analysis. The moderation team will evaluate submissions on a case-by-case basis, with an emphasis on flagging fabricated citations, nonsensical text, and obviously machine-generated diagrams.

What This Means for the Scientific Community
This policy establishes a powerful deterrent for researchers tempted to pad their publication records with AI-generated fluff. By imposing a year-long ban and permanent peer-review requirement, arXiv is sending a clear message that integrity cannot be sacrificed for speed or volume.
However, questions remain about enforcement scalability. arXiv’s moderation team is small relative to the million-plus papers hosted, and distinguishing legitimate AI assistance from outright abuse can be difficult. The policy may push problematic submissions to other preprint servers with weaker oversight.
For the broader scientific community, this move could pressure journals and peer reviewers to adopt similar standards. If major repositories like arXiv lead the charge, AI-generated slop may become less common—but only if consequences are consistently applied.
Reaction and Next Steps
Dietterich’s announcement has sparked debate among researchers on social media, with many praising the proactive stance while others worry about false positives. “It’s a necessary step, but arXiv must be transparent about how they define ‘inappropriate AI use’,” commented Dr. Elena Voss, a computational ethics researcher at MIT, who was not involved in the decision.
arXiv leadership is expected to release an official statement in the coming days. In the meantime, Dietterich advises authors to “carefully review all submissions for AI-generated content and ensure proper attribution and verification of all claims.” The policy applies immediately to all new submissions.