TL;DR

arXiv will impose a one-year ban on authors who submit papers containing AI-generated content without proper verification. The move aims to address the rise of low-quality, AI-produced research and ensure accountability.

arXiv, the prominent open repository for preprint research, will ban authors for one year if their submissions contain AI-generated content that has not been verified, according to an announcement by Thomas Dietterich, chair of arXiv's computer science section.

Under the new policy, if a paper contains incontrovertible evidence that its authors relied on large language models (LLMs) without verifying the results, such as hallucinated references or unreviewed AI-generated text, the authors will face a one-year ban from submitting to arXiv. After the ban expires, they must first submit the work through peer-reviewed venues before resubmitting to arXiv.

This measure is not an outright ban on AI use but emphasizes that authors must take full responsibility for content, regardless of how it was generated. If AI tools are used to produce or assist in writing, authors are expected to verify all results and references thoroughly. Failure to do so, especially if it involves copying inaccurate or misleading AI-generated content, will trigger the ban.

Moderators will flag potential violations, and section chairs will confirm evidence before enforcing the ban. Authors will have the right to appeal decisions. The policy aims to curb the rise of low-quality and AI-manipulated research, which has recently been linked to an increase in fabricated citations in biomedical fields, among other issues.

Why It Matters

This development matters because arXiv is a key platform for disseminating early-stage research across disciplines like computer science and mathematics. The policy aims to improve research integrity amid growing concerns over AI-generated misinformation and low-quality submissions, which can undermine scientific progress and trust.

Background

Over recent years, the use of large language models in research has increased, leading to concerns about fabricated references, unverified data, and low-quality publications. arXiv, which hosts preprints before peer review, has taken steps to address these issues, including requiring endorsements for first-time posters and now implementing strict rules against unverified AI use. This move follows broader industry debates on AI ethics and responsibility in scientific publishing.

“If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.”

— Thomas Dietterich

“This will be a one-strike rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty.”

— Dietterich

What Remains Unclear

It is not yet clear how strictly the policy will be enforced in practice, or how many authors might be affected initially. Details on appeals and specific procedures for verifying AI-generated content are still emerging.

What’s Next

arXiv will begin implementing this policy immediately, with moderation teams trained to identify violations. Future updates may clarify enforcement procedures and potential adjustments based on initial experience.

Key Questions

Can authors still use AI tools in their research?

Yes, but they must verify all AI-generated content thoroughly and take responsibility for its accuracy. Unverified or misleading AI content can lead to a one-year ban.

What constitutes incontrovertible evidence of unverified AI use?

Examples include hallucinated references, unreviewed AI comments, or content copied directly from AI outputs without verification, as stated by arXiv officials.

Will authors be able to appeal the ban?

Yes, authors can appeal decisions, though specific procedures are still being finalized.

Does this ban apply to all research fields on arXiv?

The policy specifically targets submissions in fields like computer science and mathematics where AI-generated content is more prevalent, but it may influence broader research practices.
