It is time that we confront the reality of artificial intelligence (AI). The University of Cape Town (UCT) recently announced plans to stop using AI detection software for student assignments. This raises the question: is the University of Pretoria (UP) adequately preparing students for a world driven by AI? One way that UP is addressing AI literacy is through the compulsory Academic Information Management (AIM) module for first-year students. The problem is that, despite efforts to teach the ethical use of this technology, students find ways to bypass safeguards and submit fraudulent work.
UCT Leaves AI Detection Software in the Past
UCT made national headlines when it announced that it would stop using AI detection software, such as Turnitin, effective 1 October. The university adopted an AI in Education Framework that, as reported on UCT’s news page, aims to promote AI literacy and academic integrity. Additionally, the university will move towards “future-ready” curricula.
This decision boils down to the fact that AI detection software has proven unreliable. UCT’s approach aims to ensure that students are equipped to integrate AI into their studies and professions rather than shying away from a rapidly changing world. The announcement has, unsurprisingly, been met with both praise and criticism, but it underscores the importance of teaching students to use these technologies ethically. In a televised interview with Newzroom Afrika, Sukaina Walji, head of UCT’s Centre for Innovation in Learning and Teaching, explained that a different approach must be taken in assessing students. “AI is here. Students are using [AI] tools, and so assessment strategies need to change,” she said.
Academic Information Management
UP has implemented various methods to equip students with the skills needed to excel in a world dominated by AI. One of these is teaching AI literacy through the AIM module, presented to the majority of first-year students. PDBY interviewed Mrs Pariksha Singh and Ms Jayshree Harangee from the AIM Department to gain insight into the module’s goals.
Q: What is the main purpose of the AIM module, who is required to take the module, and what are the requirements to pass the module?
A: AIM 111 and 121 [are] level 5, 4-credit modules spanning approximately 12 weeks per semester. These modules aim to equip students with the skills to find, evaluate, process, manage, and present information resources for academic purposes using current technology, such as the most [recent] versions of Windows and MS Office. Key learning outcomes include understanding basic computer concepts, information literacy, MS Word, MS Excel, effective library use, academic searching and referencing, information ethics, presentation design using MS PowerPoint, and basic AI literacy. Assessment is continuous, involving assignments, training, class exercises, projects, and semester tests, utilising resources such as e-books, clickUP Ultra, and online systems. All first-year students, except those in engineering programmes, are required to complete the AIM modules.
Q: How does AIM address AI literacy to equip students for the future?
A: AI literacy is a new addition to AIM, and the curriculum is being updated as AI evolves. In the Information Literacy section, new topics were added to ensure [that] students understand and can navigate the landscape of artificial intelligence and generative AI, thereby preparing them for future interactions with these technologies in an academic and potentially professional context. We have also introduced prompt development and integration with the MS Office suite.
Q: Why is it important to be able to use AI software/applications?
A: As artificial intelligence (AI) increasingly influences how individuals access information, communicate, and make decisions, developing AI literacy has become essential for navigating daily life, fostering purposeful creation, and preparing for the evolving landscape of learning and work. AI literacy is crucial for equipping both learners and educators to comprehend the inherent risks and opportunities presented by AI, enabling them to make meaningful and ethical decisions regarding its application. It empowers learners to critically evaluate AI’s impact on their lives, educational journeys, and communities, thereby preparing them to actively shape the future. Recognising this growing imperative, modules [like] AIM are integrating AI literacy into their curricula. The primary objective of modules like AIM 111 and AIM 121 is to enable students to find, evaluate, process, manage, and present information resources for academic purposes using appropriate technology. Within this framework, AIM specifically addresses AI by including “Generative Artificial Intelligence” as part of its “Computer Literacy” content. Furthermore, its “Navigating Information Literacy” content features a dedicated chapter – “Being information literate in an AI and Gen-AI world”. This demonstrates a proactive step towards ensuring students are proficient in understanding and interacting with AI technologies, thereby enhancing their preparedness for future academic and professional challenges. However, to fully realise the potential of AI literacy in shaping learning, several key barriers to implementation must be addressed. These include a prevailing lack of shared understanding of what constitutes AI literacy and effective pedagogical approaches for teaching it, as well as uncertainty regarding how AI seamlessly integrates into various subject areas.
Q: How can the education sector adapt to incorporate AI into lectures, tasks, and tests? Do you believe current curricula are keeping up with the rapid advancements in AI, or are we at risk of falling behind?
A: By explicitly incorporating AI [literacy] and some generative AI, as well as comprehensive chapters on navigating an AI-influenced information world, programmes like AIM are beginning to lay the groundwork for overcoming these challenges and fostering a generation of information-literate individuals. However, we still have a long way to go. Higher education [structures] will also need proper policies and processes, which are not yet clearly defined. Some guidelines do exist, but with the fast pace of technological change, these guidelines become outdated very quickly. [Institutions] all over the world have been updating their syllabi at all educational levels to include AI topics and themes. However, Africa has to [quickly] catch up with this era or we risk falling behind.
Q: How do you integrate critical thinking about AI (e.g. ethical and legal implications) into the module, ensuring that students do not over-rely on AI?
A: Integrating critical thinking about AI into AIM involves teaching students to critically question and assess any AI-generated information, including its ethical and legal implications, while structuring activities that require independent reasoning and verification beyond AI outputs. We try to foster ethical awareness by addressing AI’s societal, privacy, and bias issues and encouraging reflective dialogue around responsible AI use. To avoid over-reliance, guidelines should emphasise AI as a supportive tool rather than a primary solution, include active student engagement with data and problem-solving without AI, and promote continuous critical inquiry through questioning and comparison with multiple sources. We are working on this and hope to implement it soon, not just in AIM, but I am sure in many other subjects as well.
Q: Some students unfortunately rely on AI tools to complete tasks, especially AIM assignments. Some even sell their services to other students. Is this a growing problem? What are the consequences for using AI tools (or selling one’s “expertise”) to complete assignments?
A: Importantly, the majority of assessments in AIM are designed to be simulation- or project-based. This format inherently supports meaningful student learning through active engagement with authentic tasks. In these assessments, students do not merely consume AI-generated outputs passively. Rather, they must critically interact with AI tools to replicate, apply, and demonstrate skills within the prescribed platforms. Consequently, the use of AI in select instances becomes a catalyst for deeper learning as students develop hands-on expertise by working directly with tools where the skill is expected to be replicated in professional contexts. This approach fosters not only technical proficiency, but also critical evaluation skills as students learn to integrate AI outputs with their own reasoning and problem-solving processes. Thus, project-based assessments function dually as evaluative measures and immersive learning experiences that effectively balance AI utilisation with essential human cognitive engagement. Where we have assessments that AI can answer outright, we are busy redesigning them so that students need to apply critical thinking instead. We do not want to discourage AI usage, but we want to ensure that students use the tool correctly and ethically.
Academic dishonesty through the outsourcing of assessments is a challenge that extends beyond AI literacy to higher education as a whole. Many students outsource parts of assessments, or entire assessments, for various reasons, which threatens academic standards and student learning. In AIM, we have taken active steps to address this issue by introducing in-class tests that require physical attendance, emphasising the critical role of presence in acquiring essential skills and completing assessments. This has resulted in a reduced weighting of traditional assignments, making it more difficult for students to rely solely on outsourced work. The commercial sale of assessment services represents a serious breach of academic integrity, and there must be consequences not only for students who engage in such misconduct but also for those who provide these services. In response, [the AIM department] has engaged [with] the university’s legal department to address these challenges with the necessary expertise. We have also enforced university regulations that have led to reprimands for students found guilty of academic dishonesty in the past. Combating outsourcing requires a collaborative approach involving assessment design, student engagement, institutional policy, and legal measures to uphold the credibility and value of higher education qualifications.
Q: Finally, what advice would you give to students who want to use AI responsibly to enhance their learning (rather than replace it) in the context of AIM?
A: We welcome this mindset and encourage students to use AI ethically to enhance their learning by guiding them during sessions on the ethical and appropriate use of AI. We also want to make students aware that AI should be used as a tool rather than a shortcut. Students should verify AI-generated information and maintain academic integrity.