
Google Engineer's AI Faux Pas: The 'Squared Blunder' That Shook the Research Community
2025-04-24
Author: Siti
Is AI Overstepping Its Bounds in Academia?
In a shocking turn of events, Anurag Awasthi, an engineering lead in AI infrastructure at Google, has pulled a manuscript after commenters called out his inappropriate use of AI. The paper, titled "Leveraging GANs For Active Appearance Models Optimized Model Fitting," was posted on arXiv.org in January but mysteriously vanished on April 7.
AI's Unfortunate Wordplay
Upon investigation, eagle-eyed commenters on PubPeer highlighted a series of bizarre, AI-generated phrases throughout the paper, including the head-scratching term "squared blunder," an apparent rewording of the standard statistical term "squared error." Such phrases showcase an alarming trend: when AI tools rephrase common terminology, the results can be dangerously misleading. "Linear regression," for instance, was hilariously transformed into "straight relapse," while "error rate" morphed into "blunder rate."
A Learning Experience Gone Wrong
Awasthi later clarified that he had used AI tools during a previous revision to add variety to his writing, an approach that backfired spectacularly. He described the paper as a "personal learning exercise" and dismissed the peculiar language as an unintended artifact.
The Similarity Controversy
However, the situation escalated when another commenter pointed out striking similarities between Awasthi's work and a 2016 paper by different authors. Critics noted that not only was the structure similar, but much of the language was nearly identical. Awasthi again claimed this was an "unintended artifact" of his AI-driven process.
A Cautionary Tale for Researchers
As the scrutiny intensified, Awasthi admitted that he had underestimated the gravity of preprints and the responsibility that comes with publishing research. He acknowledged the backlash, but his responses have raised further questions about the ethical implications of using AI in academic writing.
The Fallout and Future Implications
Awasthi has since redirected all media inquiries to Google's press office, which has not responded. Even so, the incident serves as a warning to researchers everywhere: the use of AI tools in academia must be approached with caution and transparency. As AI technology continues to evolve, so too must our standards and practices in the academic field.