Books and Bots: The Double-Edged Sword of AI in Academia

by Aleksa Kruger | Aug 5, 2024 | Breaking Headlines, Features

Artificial Intelligence (AI) is a term that has quickly taken over many aspects of daily life. Simply defined, it is the science of developing technology and machines that can think like human beings. Accessible with only a few keystrokes, AI is like a second brain, able to process great amounts of data in various ways. It is the bane of a lecturer’s existence, but a student’s dream come true. Its impact on modern education mirrors the rapid advancements of science fiction films, sparking awe and wonder among students.

As the use of AI becomes more common, several critical issues have become apparent. One of the most significant concerns in the academic setting is AI’s potential to overshadow original thinking. Lecturers and academics feel uneasy as AI grows more precise and capable, raising serious questions about how to distinguish authentic student work from AI-generated work.

This issue is particularly pressing when it comes to grading and the awarding of degrees. Given this, it is urgent and crucial that the university uses reliable methods to identify and differentiate between human and AI-generated work. But how dependable is the current system, and is the university risking bias against students by using unverified models to detect AI use?

Currently, most modules at the University of Pretoria strictly prohibit the use of AI, with most study guides including specific policies on plagiarism, academic dishonesty, and AI use. Lecturers insist that students hand in only original work. Many faculties use AI detection tools to flag the use of generative AI, including ChatGPT, QuillBot, Gemini, and even Grammarly.

However, in trying to combat the use of AI, a few key issues arise in identifying AI-generated work, and these emphasise the complexity of the task.

Issues in dealing with AI use

Firstly, the way that most AI detectors flag work as AI-generated is unreliable at best. AI content detectors use machine learning and natural language processing to inspect linguistic patterns and sentence structure, which helps to determine whether work is AI-generated or human-written. Essentially, AI detectors look for certain words and phrases consistently used by AI in a specific sequence and flag them if they are present in a student’s work. These include words and phrases such as ‘moreover’ and ‘bedrock of society’, which may easily appear in original, human-written work.
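
As a rough illustration of why this approach is fragile, the sketch below shows how a naive, phrase-based flagger might work. It is a toy example, not Turnitin’s or any other vendor’s actual method, and the phrase list and threshold are invented purely for illustration.

```python
# Toy illustration of phrase-based AI "detection" (not any real vendor's method).
# It simply counts how many stock phrases appear, which is why ordinary
# human writing that happens to use 'moreover' can be falsely flagged.

SUSPECT_PHRASES = [          # invented example list
    "moreover",
    "bedrock of society",
    "in conclusion",
    "it is important to note",
]

def flag_for_ai(text: str, threshold: int = 2) -> bool:
    """Return True if the text contains at least `threshold` suspect phrases."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPECT_PHRASES)
    return hits >= threshold

# A genuinely human-written sentence can still trip the flag:
essay = "Moreover, the family is the bedrock of society."
print(flag_for_ai(essay))  # True - a false positive
```

Because common academic connectives appear in both human and AI writing, any detector built on surface patterns like these will inevitably produce some false positives.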


Take Turnitin, the AI detection tool used most frequently at UP, for example. An article by Nikola Baldikov, founder of Inbound Blogging, describes this issue. Though the Turnitin detection system is a step in the right direction, it still has its limitations. The model dissects a paper into sections and looks for overlapping content, identifying word sequence probabilities that are common in generative AI output but not in human-written text. However, this leaves potential blind spots in plagiarism detection that lecturers are often unaware of, highlighting the need for further improvement. The software often struggles to distinguish AI-generated text, such as work produced by ChatGPT, from human writing, which can lead to false flags on genuinely human-written text. Baldikov stresses the importance of upholding academic integrity but cautions against over-reliance on Turnitin as the sole judge of originality.
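
The ‘word sequence probability’ idea can be sketched with a toy language model: text whose words are highly predictable given the previous word scores as more ‘AI-like’. The example below is a deliberately simplified assumption (a tiny bigram model and an invented reference corpus stand in for the large models real detectors use), not Turnitin’s actual algorithm.

```python
import math
from collections import Counter

# Toy illustration of "word sequence probability" scoring (not Turnitin's real model).
# The idea: AI-generated text tends to consist of highly predictable word sequences,
# so text that is very predictable under a language model gets flagged.

REFERENCE_TEXT = (  # invented reference corpus, for illustration only
    "artificial intelligence is transforming education and "
    "artificial intelligence is transforming society"
)

def build_bigram_model(text: str):
    words = text.split()
    bigrams = Counter(zip(words, words[1:]))
    unigrams = Counter(words)
    return bigrams, unigrams

def average_log_probability(text: str, bigrams, unigrams) -> float:
    """Mean log-probability of each word given the previous one (higher = more predictable)."""
    words = text.split()
    total, count = 0.0, 0
    for prev, cur in zip(words, words[1:]):
        # Add-one smoothing so unseen bigrams do not get probability zero.
        prob = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + len(unigrams))
        total += math.log(prob)
        count += 1
    return total / max(count, 1)

bigrams, unigrams = build_bigram_model(REFERENCE_TEXT)
score = average_log_probability("artificial intelligence is transforming education", bigrams, unigrams)
# A detector would flag text whose score exceeds some tuned threshold; the threshold,
# like the reference corpus above, is invented for this sketch.
print(round(score, 2))
```

Real detectors use far larger models and calibrated thresholds, but the underlying limitation is the same: the output is a probability score, not proof of AI use.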


Another issue regarding AI detection at the university involves inconsistencies between departments. While most faculties have a blanket policy that prohibits AI use, the punishments vary considerably. Some departments in the Engineering, Built Environment and Information Technology (EBIT) Faculty allow students who are flagged for AI use to redo their assignments, while some departments in the Humanities Faculty immediately give students zero.

Recently, the issues with AI detection at the university became apparent in an assignment that third-year law students submitted to the Department of Jurisprudence. Students flagged for AI use in this instance had to attend an AI hearing and defend their work in front of a panel of lecturers and academic associates. If they failed to do so, they would receive a permanent grade of zero. The most alarming part of this process was that the students had to refute the claim without the lecturers providing evidence that AI use had been detected.

Most notably, when speaking to several students who went through this process, it was clear that most of their marks had since been changed from zero. However, they were not awarded full credit for their work. Emails regarding the issue stated that although the department still believed the students had used generative AI, they were nonetheless granted their marks. It appears that the department was caught between a rock and a hard place, neither wanting to concede to an error on the part of the AI detection, nor having overwhelming proof to reinforce its finding.

This is not a niche issue. PDBY conducted a poll on Instagram which received responses from students across various faculties. According to this poll, 83% of students who were flagged for AI-related plagiarism said that they had not used AI.

The final issue that must be addressed is the university’s inconsistent definition of AI. Most faculties have banned AI entirely in their policy statements but have neglected to consider spelling and grammar checkers, which use AI technology to automatically detect and correct grammatical and spelling errors in written text. One example is Microsoft Editor, a tool that provides AI-assisted spelling, grammar, and writing suggestions in Microsoft Word, Outlook, and the Microsoft Edge and Google Chrome browsers. Such tools are used by almost anyone creating a document in Microsoft Word. The line between acceptable and unacceptable assistance has, therefore, become blurred.

UP has released a guide that details how ChatGPT and other forms of AI should be used in teaching and learning. The guide emphasises that generative AI like ChatGPT can be a valuable tool for teaching and learning at universities if the specified principles are followed. It describes several ways in which ChatGPT can be utilised to enhance teaching, learning, assessment, and student support. When lecturers use ChatGPT effectively, it can improve students’ understanding, foster critical thinking skills, and aid them in their planning.

The path forward

Integrating AI into academia presents a complex blend of opportunities and challenges. As AI continues to evolve, its impact on education grows, raising important questions about academic integrity and the detection of AI-generated work. Given AI’s potentially negative implications for original thought and ideas, a reliable way to detect AI use is needed to maintain the sanctity of academic honesty. However, judging from the university’s current policies, more research and training are needed before this can be done confidently and without potentially prejudicing students.

There is an evident inconsistency between the rigid prohibitions on AI use and the university’s recognition of AI’s benefits, such as enhancing teaching and fostering critical thinking. This suggests that a more practical approach might involve educating students and staff on responsible AI use rather than banning it outright. By embracing AI’s capabilities while upholding strict academic standards, universities can maintain academic integrity and harness AI’s transformative power for educational advancement.

Aleksa Kruger