
AI in Academia: Singaporean Professors Warn It’s a Battle They Can't Win
2025-07-08
Author: Wei
SINGAPORE — In a landscape rapidly transformed by artificial intelligence, a third-year engineering student at Nanyang Technological University (NTU) describes how easily he completed an assignment with ChatGPT. Working from a senior's essay, he had the AI generate a new piece, polished it, and submitted it as his own work.
"Getting caught is incredibly difficult," he confessed, opting to remain anonymous.
This student's experience highlights a troubling trend among university students who increasingly lean on AI for academic success. As educational institutions scramble to address this shift, professors find themselves at a loss, struggling to detect and curb AI misuse.
The Controversy Unfolds
A recent case in which three NTU students were accused of misusing AI after false citations were found in their work ignited a heated debate over AI rules in the classroom. The students contested the allegations and questioned the fairness of the process, prompting NTU to form a review panel that includes AI experts.
Most educators interviewed acknowledged that current AI detection tools offer little help, saying that identifying AI-generated content in student submissions is largely a fruitless endeavor.
Questionable Detection Methods
Lecturer Eunice Tan from NTU's Language and Communication Centre shared her frustrations with plagiarism detection systems like Turnitin, which often yield unreliable results and even false positives. In one striking case, the software returned an AI score of zero for an essay that a student had in fact written with AI, failing to flag any of the generated content.
Instead, Tan focuses on spotting inconsistencies in students' writing styles and verifying cited sources. "In extreme cases, it’s apparent that students haven’t even read their sources," she lamented.
Rarity of Consequences
While universities maintain that faculty have the discretion to manage AI use, disciplinary action remains rare. NTU confirmed that no student has yet been expelled over AI-related violations. Similarly, Singapore Management University (SMU) reported handling only a handful of AI misconduct cases over the past three years.
Other institutions, such as the Singapore University of Technology and Design (SUTD) and the Singapore University of Social Sciences (SUSS), have observed only slight increases in AI-related integrity violations, mainly tied to plagiarism.
Navigating the New Norms
Even as it confronts AI misuse, NTU has stated that students may use AI in their assignments as long as they disclose it and ensure the information is accurate. This reflects a broader shift in educational policy, with AI gradually being integrated into the learning process.
Instructors across Singapore's universities are redesigning their assessments to discourage passive reliance on AI. At SUTD, for example, students may be asked to critique AI-generated work, a task intended to foster critical thinking.
Student Perspectives on AI Usage
Many students admit to using AI tools in their assignments, often without full disclosure. Of ten students interviewed, only two felt their use fell within their university's guidelines. Most asked to remain anonymous for fear of disciplinary action.
One student even shared how he’s used AI to produce substantial portions of his work, especially for assignments he deemed unchallenging. "You're digging yourself a hole by telling them what you did," he candidly remarked.
As reliance on AI grows, students express a mixture of caution and indifference about how to navigate its use. Some are already feeling the effects: Pauline, a recent graduate, admitted that her writing has deteriorated from heavy dependence on AI.
The Educators’ Dilemma
With AI use widespread among students, educators like Associate Professor Aaron Danner of the National University of Singapore (NUS) argue that battling every instance of AI usage is futile. "We need to adapt our assignments to this new reality," he stated.
Dr. Grandee Lee of SUSS supports a tailored approach, suggesting that while AI shouldn't be allowed in basic courses, it can play a beneficial role in advanced classes.
Other professors go further, advocating AI as a learning tool in its own right and stressing the importance of fostering creativity and individuality in a world where everyone has access to the same technology.
The Road Ahead
As educators navigate this complex landscape, the focus must shift from policing AI misuse to fostering a collaborative learning environment that embraces the technology. Dr. Lee Li Neng of NUS emphasizes the need for transparency: students and teachers should work together to understand the implications of AI use, turning potential adversaries into allies in education.
The future of academic integrity in the age of AI depends not on winning the battle against these tools, but on reshaping how students and educators perceive and utilize them.