Generative AI and assessment
Generative AI technologies—such as ChatGPT and AI-based writing or plagiarism services—pose significant challenges to many traditional assessment formats. This guide provides an overview of these challenges and offers strategies for adapting assessment practices in higher education.

The current state of assessment
Universities use a variety of assessment formats. Essays, for example, require students to compose an independent response to a question without guidance from the teacher. This format is widely regarded as one of the most effective ways to evaluate whether students can produce complex responses to challenging questions. It is particularly suitable for take-home exams and was widely adopted during the COVID-19 pandemic as an alternative to on-site examinations.

Figure: Assessment forms affected by ChatGPT and similar services. The affected forms all fall within the category Written Assessment and include Essay, Short Answer Questions, Completion Questions and Dissertation. Many other forms within this category are entirely unaffected, including Multiple Choice Questions, Extended Matching Questions, Patient Management Problems and Reports, as are the categories Clinical/Practical Assessments, Observations, Portfolios and Other Performance Records, and Peer and Self-Assessment.
Secure Examinations and AI Detection
A common concern is that generative AI enables cheating, but the lines between cheating and innovative academic practices are not always clear. Research shows that students and teachers may interpret academic integrity differently, and guidance on AI use is often lacking. Currently, there are no reliable tools for detecting AI-generated text. Existing plagiarism detection systems used at KI, such as Ouriginal (Urkund) and iThenticate, do not offer AI detection. Moreover, AI detectors can be biased, especially against students writing in a second language. Submitting student work to external AI detection services may also raise privacy and copyright concerns.
Adapting Examination Practices
Given the limitations of AI detection, and recognising that students will need to use these technologies in their future careers, teachers are advised to support students in using AI tools as part of their studies, to develop students’ AI literacy and critical evaluation skills, and to communicate clearly what is and is not permitted in each assessment.
Teachers’ Response to ChatGPT
Assessment strategies should be tailored to different academic levels. The SOLO taxonomy helps structure written assessments to evaluate knowledge at various depths. For foundational, declarative knowledge, students may be asked to work without external resources, including books and AI tools, during the assessment. At the higher levels (relational and extended abstract), however, AI can support students in organising knowledge before applying it critically.
Process vs Product
Consider how to observe and assess the student’s learning process, not just the final product. For example, you might ask students to document how they used AI tools, include reflective components or drafts as part of the assessment, and shift some focus from product to process.

Figure: The SOLO taxonomy, first described by John Biggs and Kevin Collis in 1982. SOLO, which stands for the Structure of the Observed Learning Outcome, classifies learning outcomes in terms of their complexity, making it possible to assess students’ work in terms of its quality rather than how many bits of this and that they have got right.
AI and Plagiarism Detection
The integration of AI into public search engines has democratised access to knowledge and, unfortunately, to cheating. Previously, only students with the financial means could pay for human-generated cheating services; now, AI makes the equivalent widely available. Companies such as Turnitin (which owns Urkund/Ouriginal) are investing in tools to detect AI-generated content, and universities are developing their own detection systems. OpenAI released a detector for identifying AI-written text but later withdrew it owing to its low accuracy; such tools remain unreliable and can be biased against non-native English writers.

Considerations for Examination Design
As AI becomes integrated into common writing tools, it is impractical to ban its use entirely in open-book or take-home exams. Decide what forms of AI use are appropriate for your context and communicate these clearly. Require students to declare how and where they have used AI tools. Adjust grading criteria to emphasise correct referencing, as AI tools may fabricate citations.
How Does Generative AI Affect Students’ Learning?
The impact of generative AI on learning depends on how students use these tools. Using AI to generate large sections of text without reflection can hinder learning. Using AI for brainstorming, feedback, or overcoming writer’s block can support learning, provided students remain critical and engaged.
Responsible AI Principles
- Legal aspects: Do not share protected or copyrighted data with generative AI services.
- Safety: Information literacy is essential for evaluating AI output and identifying errors or fabrications.
- Accountability: Students are responsible for the texts they create using AI; AI is a tool, not a co-author.
- Transparency: Students should be able to explain and document their use of AI.
- Fairness: Be aware of potential biases in AI tools.
Tips for teachers
Familiarise yourself with AI tools and test them with past exam questions. Design authentic tasks using recent research or experimental data. Check references for accuracy and teach students about hallucinations in AI-generated citations. Prepare students for the future workplace by integrating AI literacy into assessments.
Tips for students
Understand the boundaries of acceptable AI use and discuss these openly. Critically evaluate AI outputs and learn to verify information. Be aware of detection tools and university policies. Always declare your use of AI tools in your work.
Tips for policymakers
University leadership should articulate clear positions on AI in education, including training for teachers, guidelines for student use, and policies for handling misconduct.
