
The Rise of AI in Preprints: How Researchers are Battling Fake Science
2025-08-12
Author: Amelia
The Red Flags of AI-Generated Research
A recent preprint titled ‘Self-Experimental Report: Emergence of Generative AI Interfaces in Dream States’ caught the attention of psychologist Olivia Kirtley of KU Leuven in Belgium. Posted on PsyArXiv, a server for psychology research that has not been peer reviewed, the manuscript ran to just a few pages and listed a single author with no institutional affiliation. Kirtley’s instincts told her something was off.
Moderators Take Action Against Dubious Submissions
Upon examining the paper, Kirtley flagged it to the PsyArXiv moderators, who promptly removed it. The manuscript had relied on AI methods but did not transparently disclose how they were used, violating the site’s guidelines. Dermot Lynott, head of PsyArXiv's scientific advisory board, emphasized the importance of clear declarations of AI contributions.
A Battleground Against Paper Mills and Fake Research
PsyArXiv isn’t alone. Numerous preprint servers and academic journals are wrestling with suspicious submissions that bear the hallmarks of 'paper mills'—entities that churn out research on demand—or AI-generated content laden with inaccuracies. This influx of questionable research raises serious concerns about the integrity of the scientific process.
The Dilemma of Quality Control
Katie Corker of the Society for the Improvement of Psychological Science noted the delicate balance moderators must strike: ensuring quality while keeping the platform accessible to researchers. "No one wants a world where individual readers must sift through dubious claims to judge legitimacy," she cautioned.
AI's Growing Influence
Despite these issues, many preprint servers report that only a small fraction of submissions show signs of AI generation. The team at arXiv, for instance, estimates that about 2% of manuscripts are flagged as suspected AI-produced or paper-mill content. Richard Sever of openRxiv, however, said that his team rejects more than ten suspicious manuscripts a day out of roughly 7,000 monthly submissions.
A Surge in AI-Driven Content
The situation appears to be worsening. Since the release of ChatGPT in late 2022, arXiv moderators have reported a rising tide of AI-generated articles and have begun sounding the alarm. The Center for Open Science, which hosts PsyArXiv, likewise says it has seen a significant increase in papers that appear to be generated or heavily assisted by AI tools.
Unrelenting Challenges in Moderation
Kirtley’s flagged manuscript was swiftly followed by a nearly identical submission, raising further concern. The author of the second version, who described himself as an independent researcher based in China, said that AI had been used only to a limited extent for the mathematical work. Nonetheless, this version too was removed.
The Scale of AI-Generated Content in Preprints
One recent analysis estimates that, as of September 2024, roughly 22% of computer-science abstracts on arXiv showed signs of being written or heavily edited by large language models, along with around 10% of biology abstracts on bioRxiv. Such figures raise the stakes in the ongoing battle against misinformation and academic deception fueled by AI.