As artificial intelligence (AI) detection tools become routine in universities worldwide, some UP students are questioning whether these systems can be trusted to judge their work fairly. The tools are designed to flag work that may have been generated by AI, but concerns are emerging about whether the technology is as reliable as many assume.
A recent PDBY poll on Instagram found that 93% of students do not believe students are fully informed about how AI detection works, and 82% do not trust AI detectors to tell the difference between human and AI-generated text. When asked what worried them more – being caught using AI or being wrongly accused – 92% said they feared false accusations.
This fear is not unfounded. Although no official statistics are publicly available, several students have privately shared that they were called in after AI detectors flagged their work as AI-generated, even when they were “100% sure” they had not used AI. These cases were eventually resolved, but they show how a single detection score can have serious academic and emotional consequences.
Prof. Tivani Mashamba-Thompson, Deputy Dean of Research and Postgraduate Studies in UP’s Faculty of Health Sciences and a leading researcher in diagnostics, recently co-authored a detailed guide on the responsible and ethical use of AI. She cautions that detection tools must never be trusted as the final word. “AI detection tools are not infallible, and relying on them without context can create challenges,” she said. “Universities should adopt a balanced approach: AI detection should never be the sole basis for accusations of misconduct.”
Prof. Mashamba-Thompson also said that academic staff should review any flagged work in context, considering whether a student’s writing demonstrates genuine understanding and engagement. Emphasising the need for clear policies, she said, “Universities need transparent policies and clear communication so that students feel protected and supported rather than threatened.”
According to the PDBY poll, 76% of students said AI detection is making university life more stressful, and 78% believe UP should set stricter rules on how AI detection results are used.
Prof. Mashamba-Thompson believes that a more informed approach can help both students and staff. She points out that many misconceptions persist, including the idea that AI outputs are always correct or that using AI leaves no trace. “AI can be useful for brainstorming or summarising, but it cannot and should not replace the intellectual contribution that academic work requires,” she said.
With AI certain to play a growing role in research and teaching, Prof. Mashamba-Thompson said, “If done well, AI can play a transformative role in strengthening the global competitiveness of our research and enhancing the quality of our graduates. But it must align with principles of equity and ethical innovation.”
For now, students remain wary of a system they see as opaque. The debate at UP reflects a broader challenge facing universities worldwide: how to embrace AI’s potential while protecting academic integrity and the rights of students whose work genuinely is their own.