Generative AI and teaching - Advice for educators

Generative AI technologies such as ChatGPT have caused upheaval in the academic world over the last year, creating new possibilities and raising many questions for educators. How can KI's teaching and examination practices be adapted to technologies that can mimic humans?

Generative AI and writing. Photo: Generated by Bing Chat/Dall-E.

What is Generative AI? How does it affect my role as a teacher?

Generative AI is the umbrella term for AI models that have been trained on massive amounts of data and can use this to generate 'new', statistically likely but not necessarily correct, text, images and other media based on a prompt. The best-known generative AI tools are perhaps ChatGPT, Midjourney and Dall-E, but a trend within this space is the integration of generative AI into narrow-use, user-friendly services, and into familiar services such as Microsoft Office and Grammarly. Recently, KI staff gained access to Bing Chat Enterprise/Bing Co-Pilot, a tool which can process and generate images and texts.

The arrival of this new technology raises the question of how the academic world can adapt. While most accept that generative AI is likely to become embedded in the academic context in the medium term, the challenge is how to handle the transition to a world where such tools are the norm and where our AI literacy is sufficient to take full advantage of what these tools offer while avoiding the downsides.

Notes about this page

This page provides advice and suggestions and should not be construed as guidelines or rules. Keep in mind that the information on the page is general and not adapted to any specific area, programme, or course. It is also important to know that AI technology is developing rapidly and continuously. The page will therefore be updated regularly.

The page has been created by the Unit for Teaching and Learning (UoL) within the framework of the project "Use of generative AI in teaching and examination" (autumn 2023), on behalf of the Committee for Higher Education (KU).

Ethics and regulations - Important considerations in the use of generative AI

The use of generative AI is a complex subject both ethically (Holmes, 2023; Porayska-Pomsta & Holmes, 2022) and legally. Below are some key areas to consider.

  • Factual correctness: Although AI can produce coherent text that is perceived as authentic and human, it can also generate factual errors. One example is that AI can fabricate references to books and articles (Walters & Wilder, 2023).
  • Bias: AI is not neutral. Generative AI is based on algorithms (rules for handling large amounts of data), but it has no 'judgment'. Values are embedded in the data (text, images) on which AI is trained, which means that different types of bias are reflected in the material generated with the help of AI. There is always some kind of bias in the content that is created.
  • Access and Educational Inequality: There is a risk that generative AI will perpetuate existing inequalities within education. For example, what happens if only the students with the most means have access to the best generative AI models?
  • Information security: The AI learns from the data you feed it, and that data ends up in the hands of companies that are free to do as they please with the information. Therefore, you should not submit protected material such as patient data, personal data or research data when you use generative AI. It is not appropriate to submit students' work to such AI services without explicit consent.
  • GDPR: The GDPR always applies and should be taken into account when using generative AI. Keep in mind that encouraging students to create their own accounts for services has consequences for their personal data.
  • Intellectual property: Generative AI tools can produce work that violates trademark and copyright protection. In both the US and Europe, legal challenges are underway regarding how AI-generated material should be viewed from the perspective of copyright.
  • Other considerations: Energy consumption to train and use AI algorithms is high (Bender et al, 2021), and some services rely on low-paid workers to screen manually for harmful material in the training data (Perrigo, 2023).

How does generative AI affect the conditions for secure examinations?

A common concern is that generative AI enables cheating, but the lines between cheating and innovative academic practices are not always clear. A study at Chalmers highlights issues in how we communicate about academic integrity, which may create differences in students' and teachers' interpretations and values. In the study (which is not statistically representative), just over half of the students stated that they viewed the use of chatbots in examinations as cheating, and only a minority had received any guidance on the use of AI (Malmström, Stöhr & Ou, 2023).

A frequently raised need in these new, emerging AI practices is the ability to detect AI-generated material.

A silver balance scale with a digital background: binary code on one side, handwritten text on the other. Photo: Created with Bing Chat/Dall-E.

Is it possible to reliably detect AI-written work?

There are no tools available on the market which can reliably detect AI-written texts (Webb, 2023; Weber-Wulff et al, 2023). Such tools may also incorrectly flag students who are writing in a second language as having cheated (Liang et al, 2023). If students' work is submitted to AI services (with which KI has no agreement) to check for cheating, the students' work is handed over to a third party who can then arbitrarily use the material. As it stands, KI uses Ouriginal (Urkund) and iThenticate to detect possible plagiarism, neither of which offers AI detection at the moment.

It should also be noted that generative AI services are not designed to answer questions such as "Did you write this text?".

How can I adapt my examination practice?

As there are no current technologies available to reliably detect AI-generated work, and considering that students will need to use these technologies in their future jobs, we recommend supporting students in the use of AI in their studies, and in developing their AI literacy as a whole.

In written assessment we often use the SOLO taxonomy (Biggs, 2011). In some courses, we need to ensure that students know the basics before moving on to the next level. One proposal could be to ask students not to use external resources (books, conversational AI, etc.) during the assessment.

However, to reach the “relational” and “extended abstract” levels, it can be helpful to use some kind of AI to help students organize their declarative knowledge first and then try to make it more functional. Since information produced by AI is not always correct, students need to be able to analyze and evaluate the results, and are encouraged to use AI with caution.

You can use the diagram (below) to help you decide how to design assessments. If the purpose is to assess basic concepts that students should remember and understand, it is more reliable to carry out the assessment in an examination hall without access to the internet. Alternatively, we need to adapt and change how the assessment is carried out so that it can work in a situation where generative AI could otherwise affect the outcome.

Consider the purpose of the examination. What knowledge do you want evidence for? Then reflect on the current format of the examination and whether the use of AI would skew the results. If you find that the examination needs to be redesigned, keep the intended learning outcomes in mind (what you need evidence for) and think about alternative ways of finding that information. For essays and short answer questions in take-home exams, you can consider adding the use of AI as part of the assessed task.

A flow chart illustrating the differentiation between formative and summative assessment: 'Assessment' splits into 'Formative' (with "Use AI if needed") and 'Summative', which includes strategies such as "Exam without internet" and "Oral exams".

Format of the assessment

In an essay, the students are asked to write running text. This form of assessment is one of the most effective ways to check whether the student can provide a complex answer to difficult questions (Swanwick, Forrest & O'Brien, 2018). The essay format is often used for take-home exams. As with any assessment, the purpose should be taken into account. Ask yourself: what kind of learning, knowledge, or skill do you want to evidence? Adapting or redesigning an essay or take-home exam may mean that you rephrase the questions.

Another possibility is to discuss essays in seminars and thus add an oral part. Keep in mind that teachers and students will use artificial intelligence in different ways in everyday life.

Process vs product

An important aspect to consider is how to gain insight into the student's learning process when AI is used, rather than relying entirely on assessing a final product.

  • Add other modes (speech, image etc.) to written tasks and assessments.
  • Formulate questions where the answers require applied knowledge or familiarity with specific contexts or experiences that should be unfamiliar to AI.
  • Add activities where you can see the process. In the case of essays, you could ask the student to submit drafts 1, 2, 3 ... with the final text as the graded submission.
  • Include AI in the process by requiring the student to report how they 'prompted' and what the response was. If AI is used, a suggestion could be that the student gives the same prompt to at least two different AIs and compares and problematises the results.

Consider the following when formulating your examinations

As AI is integrated into common writing tools, such as Word, it is impractical to completely ban the use of AI in open examinations such as take-home exams. Most generative AI services offer many different features (text generation, summarisation, enhancement, etc.). Therefore, it is probably more useful to think about your context and what usage you see as appropriate or inappropriate.

For example, you could choose to allow the use of AI as an aid for translating citations between languages. The use of AI tools that suggest text improvements, such as Microsoft Editor and Grammarly, rarely raises objections, but you may choose to draw the line at using generative AI to rewrite (or generate) paragraphs in their entirety, unless appropriately referenced. Similarly, there may be situations where specific usages of generative AI are encouraged or necessary. We direct you to this article for more depth on the complexities of wording.

When assessing students' work, there are several aspects to take into account: which learning objectives must be achieved for a passing grade, and what the students are allowed to use and do during the examination. In cases where a student is suspected of attempting to mislead, it is important that what is permitted, and by extension what is not, has been described in the instructions for the examination.

Other considerations

  • Is it ok to use generative AI to provide structure for an essay in the form of headings?
  • Is it ok to consult the service for inspiration before I produce my own work?
  • Should students declare where and how they have used AI to aid their work?
  • Since generative AI has a tendency to fabricate references, it may be worthwhile to adjust grading criteria to emphasise the importance of correct references.

If you want to adapt your examination practice further, for example by changing the examination format, it may be necessary to revise your syllabus, for example to use an oral examination as a complement to an assignment, or to shift the focus of the written assessment from product to process. Changes to syllabi need to be approved.

It is worth stressing the importance of clearly communicating examination rules to students.

A robot hand holding a gold pen and writing on cream-coloured paper. Photo: Created with Bing Chat/Dall-E.

How does generative AI affect students' learning?

How generative AI affects students’ learning when they write depends entirely on how students use the AI tools. If students use AI tools to create long passages or even entire texts without further reflection, this will likely affect learning negatively. Even if the AI output were correct, copying and pasting would not be an effective way of learning, and AI output is often incorrect or heavily biased (Ho, 2023).

On the other hand, students may use AI tools, for example, to brainstorm ideas or as a sounding board to polish their argument, while carefully considering each suggested change before implementing or rejecting it. In this case they may learn as much as they did when they wrote in more traditional ways – if not more.

In addition, if students use AI tools wisely, assessment may become fairer and students may be able to use their time better. Menial tasks such as fixing spelling and basic grammatical errors can easily be handled with the help of AI tools (Huang & Tan, 2023), even though students must, of course, still check any AI suggestion. Using AI tools to make the more mechanical aspects of writing more efficient may level the field for students writing in languages other than their mother tongue (Nordling, 2023), potentially making assessment fairer. Doing so may also free up time to spend on more advanced aspects of writing, for example, analysis.

Students still need to know what good writing is

Using AI tools may also help students deal with writer’s block (Nordling, 2023). A student stressing over not being able to start writing may ask an AI tool to produce a few words on the topic, and in that way avoid having to start writing on a blank page.

For students to use AI tools well, they still need to know what good writing is. Students need to be aware of several aspects of writing, for example, how an academic text should be structured, what it means to adjust one’s text to an intended audience, how to use field-specific terminology, and that vocabulary is always contextual: the meaning of words may change depending on the context.

To ensure that students use AI tools wisely - in a manner which enables them to retain critical skills and learn well - we need to help students make good choices (van Dis et al, 2023). We need to nuance the discussion about AI tools and avoid categorical statements such as “using AI tools is forbidden”. Instead, we can give clear examples of potentially useful ways of using AI, and potentially unsuitable ones. We also need to be clear about if, and in what ways, students may use AI tools in examinations.

We also need to strive to encourage open communication about AI, so that students feel comfortable asking questions and having a dialogue with their teachers. If the message is that all use of AI is disallowed and that all use by default equals cheating, you will lose the opportunity to be invited into discussions with the students, and you will not be able to affect your students’ use of AI. 

Part of teaching students to use AI tools wisely is teaching them to be as transparent as they can regarding their use of generative AI tools in their writing. You will then know how students used AI tools in each text, and will be able to assess the work at hand fairly. You may, of course, still tell the student that they have not fulfilled the course criteria and therefore need to repeat the examination. And you will have the chance to have yet another discussion about effective AI use.

The use of generative AI as a teaching assistant or study aid is of course popular, but there are some important caveats. Here are some suggestions which we have put together. When selecting activities, you must of course consider the consequences, in your context, of students learning factually inaccurate or false information. It is similarly important to suggest AI-driven activities which are appropriate in relation to the students' level of AI literacy and experience.

  • As a sounding board - AI chatbots can give quick feedback. That said, they are not always correct!
  • As a debating partner
  • In programming - AI can be a useful aid in programming and can provide code suggestions for different contexts. The AI can also explain how code works, comment it, and suggest how to debug it (see the sketch after this list).
  • Avoiding writers block - While in most contexts copying and pasting text from generative AI into your work is inappropriate, it can be extremely useful in producing starting sentences or a structure for your work.
  • As a language coach - These tools can help correct language pedagogically or even work as a conversation partner when learning another language.
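To make the programming point above concrete, here is a minimal, hypothetical sketch of the kind of exchange a student might have with a chatbot. The function, the bug and the suggested rewrite are invented for illustration and are not output from any specific AI service.

```python
# Hypothetical example: a student pastes this function into a chatbot and asks,
# "This should return the average of a list of numbers, but it sometimes
# crashes. Can you explain why and suggest a fix?"

def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # raises ZeroDivisionError for an empty list

# A chatbot might point out the empty-list case and suggest a guarded,
# more idiomatic rewrite such as:

def average_fixed(numbers):
    """Return the arithmetic mean of numbers, or None for an empty list."""
    if not numbers:
        return None
    return sum(numbers) / len(numbers)

print(average_fixed([2, 4, 6]))  # 4.0
print(average_fixed([]))         # None
```

The pedagogical point is not the particular fix, but that the student still has to judge whether the explanation and the suggested code are actually correct.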

Key topics to talk about with your students

Our goal as a university should be to produce students who are AI literate, competent and responsible, but as this is a new field, the definitions are likely to shift. In addition to discussing different ways of using AI effectively and in ways that promote learning, the diagram below could, for example, be used as a starting point for discussions about academic integrity, safety and generative AI.

A flowchart showing how users should judge whether it is safe to use generative AI for a task, based on whether the output needs to be true, whether the user has the expertise to verify it, and whether the user is willing to take responsibility for it; the outcomes are "Safe", "Possibly" and "Unsafe". Adapted from Aleksandr Tiulkanov's original work. Used under a Creative Commons CC BY licence.

As it stands, KI does not have its own definition of AI literacy which covers generative AI, but the authors of this page provide the following points as a starting point for discussion:

Responsible AI principles

  • Legal aspects - Do not share protected data, such as personal data, research data and patient data, with generative AI services. Be aware of intellectual property rights, and do not share copyrighted material. It is worth noting here that student work is covered by copyright.
  • Safety - Information literacy is key in assessing AI output and for identifying fallacies and hallucinations (fabrications) in data generated by AI. This is a particularly important point when it comes to study techniques - reading summaries generated by AI should not be considered an appropriate and safe alternative to engaging with the recommended course materials and activities.
  • Accountability - Students are always responsible for the texts they create using generative AI; AI is a tool and should not be viewed as, for example, a co-author.
  • Transparency - Students should be able to explain or document how they have used generative AI, discussing how its use may have affected the outcome (see the example after this list).
  • Fairness - Be aware of biases in AI output, such as a slant towards Anglo-American perspectives, racism, sexism, ageism, etc.
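As an illustration of the transparency point above, a short declaration from a student might read as follows. This is our own made-up wording, not an official KI template:

"I used Bing Chat (October 2023) to brainstorm an outline for the introduction and to suggest language improvements in the final draft. All factual claims and references were checked against the original sources, and the analysis is entirely my own."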

Prompt engineering 

How does one write prompts for desired outcomes?
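As a concrete illustration (our own invented example, not a prescribed template), compare a vague prompt with a more specified one:

Vague: "Write about diabetes."

Specified: "You are a tutor for first-year medical students. Explain the difference between type 1 and type 2 diabetes in no more than 200 words, at a level suitable for someone who has completed basic physiology, and end with two self-test questions."

Specifying elements such as role, audience, length and format typically steers the output closer to the desired result, although the output still needs to be checked for accuracy.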

Information searching

Information searching using large language models (LLMs) is not recommended, as they can hallucinate (make up) references. The university library can provide guidance on other AI-driven services for this sort of use case.

Talking about generative AI could also be a good entry point to discussing other AI-driven technologies.

ChatGPT and artificial intelligence in higher education: quick start guide & Guidance for generative AI in education and research
These are the two UNESCO guides for generative AI in higher education which we used when creating this guide. The quick start guide is appropriate for both teachers and students.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
This is somewhat of a classic article on generative AI and ethics, covering bias, fallacies, environmental impact and a range of other issues.

How AI chatbots like ChatGPT or Bard work – visual explainer
An appropriately technical, but readable, explanation of how large language models work, from The Guardian.

The Unintended Consequences of Artificial Intelligence and Education

Authors

Page written by Andrew Maunder and Henrika Florén with contributions from Anna Borgström, Arash Hadadgar and Jenny Siméus. Project led by Henrika Florén.

Content reviewer: HF, 21-02-2024