The academic race for priority has always been a high-stakes game of speed, but the introduction of large language models has accelerated the pace to a breaking point. In the corridors of computer science and physics, the pressure to upload a preprint to arXiv before a competitor can claim a discovery has created a dangerous incentive to outsource the heavy lifting of writing to AI. This trend has led to a recurring and embarrassing phenomenon: papers appearing in the public record that still contain the phrase "As an AI language model, I cannot..." This is no longer viewed as a simple clerical error, but as a fundamental failure of scholarly responsibility.

The Mechanics of the One-Year Ban

Thomas Dietterich, the Chair of the Computer Science section at arXiv, has drawn a hard line on author accountability. The core of the new policy is simple: if there is clear evidence that an author failed to verify the output of a large language model, the entire integrity of the submission is compromised. Dietterich has stated explicitly that the author's responsibility remains absolute, irrespective of how the content is generated.

Evidence of negligence is defined by specific, undeniable markers. The most egregious examples include the presence of raw dialogue between a user and an LLM left within the final text, or the inclusion of fabricated references—hallucinated citations that look authentic but do not exist in any real-world database. When such evidence is discovered, the penalty is immediate and severe. The author is slapped with a one-year ban from submitting any new work to arXiv.

The penalty does not end with the ban itself; the path to redemption is intentionally rigorous. Once the one-year ban expires, the author cannot simply resume uploading preprints. To regain submission privileges, they must first secure approval from a trusted, peer-reviewed academic journal. This requirement effectively forces the author to prove their competence and the validity of their work through a traditional, human-led vetting process before they can return to the fast-track world of preprints.

This policy shift coincides with a broader organizational transition for arXiv. After two decades of support from Cornell University, the platform is moving toward becoming an independent non-profit organization. This structural change is designed to secure the resources necessary to combat "AI slop", the flood of low-quality, AI-generated content that threatens to drown out genuine scientific contribution.

From Entry Barriers to One-Strike Enforcement

For years, arXiv managed the quality of its submissions through a system of endorsements. New users were required to be vouched for by established authors, creating a social barrier to entry that acted as a proxy for quality control. However, the rise of generative AI has rendered this pre-submission gatekeeping insufficient. The problem is no longer about who is allowed into the system, but how those inside the system are behaving.

The platform has now pivoted to a one-strike rule. The process is streamlined for enforcement: moderators identify problematic content and report it to the section chair. Once the chair confirms the evidence of unverified AI generation, the penalty is applied instantly. While an appeals process exists for authors to contest these decisions, the burden of proof has shifted heavily toward the researcher.

This transition reflects a growing crisis in the wider scientific community. Recent studies in biomedical research have highlighted a marked increase in manipulated or fabricated citations driven by LLM usage. What was once dismissed as the carelessness of a few outliers has become a systemic risk to the scientific record. By implementing a one-strike policy, arXiv is signaling that the convenience of AI-assisted writing does not exempt a researcher from the duty of verification.

The tension here lies in the definition of AI use. arXiv is not banning the use of LLMs for drafting, polishing, or organizing thoughts. Instead, it is penalizing the act of blind trust. If a paper contains biased content, plagiarism, factual errors, or fake citations, the AI is viewed as the tool, but the author is viewed as the culprit. The cost of using these tools is now a heightened duty of manual auditing.
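What that auditing can look like in practice is not specified by arXiv, but part of it can be automated by the author before submission. The following minimal sketch, written in Python and assuming the requests library and Crossref's public REST API, checks whether each cited DOI resolves to a real registered record; the check_dois helper and the example DOIs are hypothetical illustrations, not arXiv tooling, and a flagged entry still needs a human look, since legitimate works without a DOI will also fail the lookup.

```python
import requests

def check_dois(dois):
    """Flag DOIs that do not resolve in the public Crossref API.

    Illustrative pre-submission sanity check only: a 404 means no
    record is registered under that DOI, which is a strong hint that
    a citation may be fabricated, but it is not proof either way.
    """
    suspect = []
    for doi in dois:
        resp = requests.get(
            f"https://api.crossref.org/works/{doi}",
            headers={"User-Agent": "citation-audit-example"},
            timeout=10,
        )
        if resp.status_code == 404:  # no such record registered with Crossref
            suspect.append(doi)
    return suspect

if __name__ == "__main__":
    # Hypothetical reference list extracted from a draft manuscript.
    refs = ["10.1038/nature14539", "10.9999/fake.2024.000"]
    print("Needs manual review:", check_dois(refs))
```

A check like this catches only the crudest hallucinated citations; it does nothing for references that exist but do not say what the text claims, which is why the policy places the final responsibility on the author rather than on any tool.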

Academic prestige has always been built on the foundation of trust, and the era of treating preprints as a low-risk dumping ground for rapid output is ending. The price of speed is now a potential year of academic silence.