Generative AI and teaching - Advice for educators

Generative AI technologies such as ChatGPT have caused upheaval in the academic world over the past year, both creating new possibilities and raising many questions for educators. How can KI's teaching and examination practices be adapted to technologies that can mimic humans?

Generative AI and writing. Photo: Generated by Bing Chat/Dall-E.

What is Generative AI? How does it affect my role as a teacher?

Generative AI is the umbrella term for AI models that have been trained on massive amounts of data and can use this to generate 'new', statistically likely but not necessarily correct text, images and other media in response to a prompt. The best-known generative AI tools are perhaps ChatGPT and Google Gemini, but a clear trend is the integration of generative AI into narrow-use, user-friendly services: into familiar tools such as Microsoft Office and Grammarly, and into newer services such as NotebookLM. All KI staff have access to Microsoft Copilot, a tool that can process and generate text and images.
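
To make "generating statistically likely text from a prompt" concrete, here is a minimal sketch of how such a model is typically called programmatically. It assumes the openai Python package (version 1 or later) and an API key in the environment; the model name is illustrative, and other providers follow a similar prompt-in, text-out pattern.

```python
# Minimal sketch of prompt-based text generation (assumptions: the openai
# Python package v1+, an OPENAI_API_KEY environment variable, and an
# illustrative model name).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "user", "content": "Explain photosynthesis in two sentences."}
    ],
)

# The reply is statistically likely text, not verified fact, and should
# always be checked before use.
print(response.choices[0].message.content)
```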

The arrival of this new technology raises the question of how the academic world can adapt. While we generally accept that generative AI is likely to become embedded in academic contexts in the medium term, the challenge is to manage the transition to a world where such tools are the norm, and to raise our AI literacy to a level where we can take full advantage of what these tools afford while avoiding their downsides.

Notes about these pages

The pages in this AI and education hub provide advice and suggestions and should not be construed as guidelines or rules. Keep in mind that the information is general and not adapted to any specific area, programme, or course. It is also important to know that AI technology is developing rapidly and continuously. The pages will therefore be updated regularly.

These pages have been created by the Unit for Teaching and Learning (UoL) within the framework of the project "Use of generative AI in teaching and examination" (autumn 2023), on behalf of the Committee for Higher Education (KU).

The last significant revision was done in December 2025.

Ethics and regulations - Important considerations in the use of generative AI

The use of generative AI is a complex subject both ethically (Holmes, 2023; Porayska-Pomsta & Holmes, 2022) and legally. Below are some key areas to consider.

  • Factual correctness: Generative AI produces coherent text that can be perceived as authentic and human, but it can also generate factual errors. One example is that AI can fabricate references to books and articles (Walters & Wilder, 2023).
  • Bias: AI is not neutral. Generative AI is based on algorithms (rules for handling large amounts of data), but it has no 'judgment'. Values are embedded in the data (text, images) on which AI is trained, which means that different types of bias are reflected in the material generated with its help. There is always some kind of bias in the content that is created.
  • Access and educational inequality: There is a risk that generative AI will perpetuate existing inequalities within education. For example, what happens if only the students with the greatest means have access to the best generative AI models?
  • Information security: The AI learns from the data you feed it, and that data ends up in the hands of companies that are free to do as they please with the information. Therefore, you should not submit protected material such as patient data, personal data or research data when using generative AI. Nor is it appropriate to submit students' work to such AI services without explicit consent.
  • GDPR: The GDPR always applies and should be taken into account when using generative AI. Keep in mind that encouraging students to create their own accounts for services has consequences for their personal data.
  • Intellectual property: Generative AI tools can produce work that violates trademark and copyright protection. In both the US and Europe, legal challenges are underway regarding how AI-generated material should be viewed from a copyright perspective.
  • Other considerations: The energy consumption required to train and use AI models is high (Bender et al., 2021), and some services rely on low-paid workers to screen the training data manually for harmful material (Perrigo, 2023).

How does generative AI affect the conditions for secure examinations?

A common concern is that generative AI enables cheating, but the line between cheating and innovative academic practice is not always clear. A study at Chalmers highlights issues in how we communicate about academic integrity, which may create differences in students' and teachers' interpretations and values. In the study (which is not statistically representative), just over half of the students stated that they viewed the use of chatbots in examinations as cheating, and only a minority had received any guidance on the use of AI (Malmström, Stöhr & Ou, 2023).

A key question in these emerging AI practices is whether AI-generated material can reliably be identified at all.

A balance scale weighing binary code against handwritten text. Photo: Created with Bing Chat/Dall-E.

Is it possible to reliably detect AI-written work?

There are no tools available on the market that can reliably detect AI-written texts (Webb, 2023; Weber-Wulff et al., 2023). Such tools also have a tendency to incorrectly flag students who are writing in a second language as having cheated (Liang et al., 2023). Furthermore, if students' work is submitted to AI services (with which KI has no agreement) to check for cheating, the work is handed over to a third party who can then use the material arbitrarily. As it stands, KI uses Ouriginal (Urkund) and iThenticate to detect possible plagiarism, but both of these services are based on text matching.

It should also be noted that generative AI services are not designed to answer questions such as "Did you write this text?".

How can I adapt my examination practice?

The topic of how to adapt examination practice is explored at length over the following pages. Note that it is particularly important to communicate the boundaries of acceptable AI use for the assignment at hand.

A robot hand holding a gold pen, writing on paper. Photo: Created with Bing Chat/Dall-E.

How does generative AI affect students' learning?

How generative AI affects students' learning when they write depends entirely on how students use the tools. If students use AI tools to create long passages or even entire texts without further reflection, this will likely affect learning negatively. Even if the AI output were correct, copying and pasting it would not be an effective way of learning, and AI output is often incorrect or heavily biased (Ho, 2023).

On the other hand, students may use AI tools, for example, to brainstorm ideas or as a sounding board to polish their arguments, while carefully considering each suggested change before implementing or rejecting it. In this case they may learn as much as they would when writing in more traditional ways, if not more.

In addition, if students use AI tools wisely, assessment may become fairer and students may be able to use their time better. Menial tasks such as fixing spelling and other basic grammatical errors can easily be handled with AI tools (Huang & Tan, 2023), even though students must, of course, still check every AI suggestion. Using AI tools to make the more mechanical aspects of writing more efficient may level the playing field for students writing in a language other than their mother tongue (Nordling, 2023), potentially making assessment fairer. Doing so may also free up time to spend on more advanced aspects of writing, such as analysis.

Students still need to know what good writing is

Using AI tools may also help students deal with writer's block (Nordling, 2023). A student stressing over not being able to start writing may ask an AI tool to produce a few words on the topic, and in that way avoid having to start from a blank page.

For students to use AI tools well, they still need to know what good writing is. Students need to be aware of several aspects of writing: for example, how an academic text should be structured, what it means to adjust one's text to an intended audience, how to use field-specific terminology, and that vocabulary is contextual, in that the meaning of words may change depending on the context.

To ensure that students use AI tools wisely – in a manner that enables them to retain critical skills and learn well – we need to help them make good choices (van Dis et al., 2023). We need to nuance the discussion about AI tools and refrain from categorical statements such as "using AI tools is forbidden". Instead, we can give clear examples of potentially useful ways of using AI, and of potentially unsuitable ones. We also need to be clear about if, and in what ways, students may use AI tools in examinations.

We also need to encourage open communication about AI, so that students feel comfortable asking questions and having a dialogue with their teachers. If the message is that all use of AI is disallowed and by default equals cheating, you will lose the opportunity to be invited into discussions with the students, and you will not be able to influence your students' use of AI.

Part of teaching students to use AI tools wisely is teaching them to be as transparent as they can about their use of generative AI in their writing. That way, you will know how students used AI tools in each text and will be able to assess the work at hand fairly. You may of course still tell a student that they have not fulfilled the course criteria and therefore need to repeat the examination – and you will then have the chance for yet another discussion about effective AI use.

The use of generative AI as a teaching assistant or study aid is understandably popular, but there are some important caveats. Below are some suggestions we have put together. When selecting activities, you must of course consider the consequences, in the given context, of students learning factually inaccurate or false information. It is similarly important to suggest AI-driven activities that are appropriate to the students' level of AI literacy and experience.

  • As a sounding board - AI chatbots can give quick feedback. That said, they are not always correct!
  • As a debating partner - AI chatbots can argue a given position, letting students practise formulating counterarguments.
  • In programming - AI can be a useful aid in programming and can provide code suggestions for different contexts. It can also explain how code works, comment it, and suggest how to debug it (see the sketch after this list).
  • Avoiding writer's block - While in most contexts copying and pasting text from generative AI into your work is inappropriate, it can be very useful for producing opening sentences or a structure for your work.
  • As a language coach - These tools can correct language pedagogically or even act as a conversation partner when learning another language.
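
To illustrate the programming use case above, here is the kind of annotated fix an AI assistant might suggest when asked to explain and debug a short function. The function, the bug and the comments are hypothetical examples written for this page, not output from any specific tool.

```python
# Hypothetical example of AI-assisted debugging: a student's function for
# computing an arithmetic mean, with the kind of explanation and fix a
# chatbot might propose.

def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    # The original (buggy) version divided by a hard-coded constant:
    #     return sum(values) / 2
    # An assistant would typically point out that the divisor must be the
    # number of elements, and that the empty case needs explicit handling.
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

# A quick sanity check, as a chatbot might also suggest:
assert mean([2, 4, 6]) == 4
```

As always, such suggestions need to be tested and understood, not pasted in blindly.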

Key topics to talk about with your students

Our goal as a university should be to produce students who are AI literate, competent and responsible, but as this is a new field, the definitions are likely to shift. In addition to discussing different ways of using AI effectively and in ways that promote learning, the diagram below could, for example, be used as a starting point for discussions about academic integrity, safety and generative AI.

A flowchart showing how users should judge their relationship to generative AI, considering output truthfulness, user expertise and responsibility. Adapted from Aleksandr Tiulkanov's original work, used under a Creative Commons CC BY licence.

As it stands, KI does not have its own definition of AI literacy that covers generative AI, but the authors of this page offer the following points as a starting point for discussion:

Responsible AI principles

  • Legal aspects - Do not share protected data, such as personal data, research data and patient data, with generative AI services. Be aware of intellectual property rights, and do not share copyrighted material. It is worth noting here that student work is covered by copyright.
  • Safety - Information literacy is key to assessing AI output and to identifying fallacies and hallucinations (fabrications) in AI-generated material. This is particularly important when it comes to study techniques: reading summaries generated by AI should not be considered an appropriate and safe alternative to engaging with the recommended course materials and activities.
  • Accountability - Students are always responsible for the texts they create using generative AI; the AI is a tool and should not be viewed as, for example, a co-author.
  • Transparency - Students should be able to explain or document how they have used generative AI, and discuss how its use may have affected the outcome.
  • Fairness - Be aware of biases in AI output, such as bias towards Anglo-American perspectives, racism, sexism and ageism.

Prompt engineering 

How does one write prompts for desired outcomes?
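
As a purely illustrative starting point (the wording below is ours, not an official KI template), a prompt that spells out role, task, audience and output format tends to give more useful results than a bare question:

```python
# A hypothetical prompt that makes role, task, audience and format explicit,
# rather than asking a bare question. Any chat-style AI service accepts
# free text like this.
prompt = (
    "You are a tutor in medical biochemistry. "
    "Explain the citric acid cycle to a first-year student in at most "
    "150 words, then list three self-test questions. "
    "If you are unsure about a fact, say so explicitly."
)
```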

Information searching

Information searching using large language models (LLMs) is not recommended, as they can hallucinate (make up) references. The university library can provide guidance on other AI-driven services for this kind of use case.

Talking about generative AI could also be a good entry point to discussing other AI-driven technologies.

ChatGPT and artificial intelligence in higher education: quick start guide & Guidance for generative AI in education and research
Here are the two UNESCO guides for generative AI in higher education that we used when creating this guide. The quick start guide is appropriate for both teachers and students.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
This is something of a classic article on generative AI and ethics, covering bias, fallacies, environmental impact and a range of other issues.

How AI chatbots like ChatGPT or Bard work – visual explainer
An appropriately technical but readable explanation from The Guardian of how large language models work.

The Unintended Consequences of Artificial Intelligence and Education

Authors

Page written by Andrew Maunder and Henrika Florén, with contributions from Anna Borgström, Arash Hadadgar and Jenny Siméus. Project led by Henrika Florén.