Using generative AI when writing the comprehensive summary for your doctoral thesis
Many advanced text-generating AI tools have been launched over the last couple of years, such as ChatGPT/GPT-4, Copilot, Perplexity, and Gemini. These AI tools may indeed be helpful, but you need to use them responsibly. When writing the comprehensive summary of your doctoral thesis (also called the “kappa”), you may use AI to assist you, but if you do, you need to describe transparently how you have used generative AI.
In this text, “AI tools” and “generative AI tools” mainly refer to Large Language Models (LLMs).
Can you trust generative AI tools to assist you in writing your comprehensive summary?
First of all, you need to ask yourself: can you trust generative AI tools to assist in writing the comprehensive summary of your thesis? Most of us have probably heard by now that AI tools sometimes provide incorrect information while sounding confident. Generative AI tools are not like search engines; rather, they produce text based on statistical probability, without any evaluation of the information. As a consequence, their output may contain not only errors but also replications of common biases and misconceptions. In addition, it is often unclear what data generative AI tools have been trained on, which further undermines the credibility of the information they present. Some AI tools will provide references, which might make them appear more reliable. However, it is often unclear how these sources were chosen. Depending on where these AI tools retrieve their sources, even questionable journals (often referred to as “predatory journals”) may be included. Additionally, generative AI tools may summarize the information from sources incorrectly, and their summaries often rely only on the abstracts of articles, since many AI tools cannot access information behind paywalls.
Since AI tools are currently unreliable sources of information, you cannot use them as sources. At an absolute minimum, you need to fact-check everything written by AI tools conscientiously, against reliable sources – and add those sources to your text. But be aware that if you only fact-check the information provided by generative AI tools, you will tend to look only for information that supports the ideas already in your text – and you will likely end up with a biased text.
Moreover, the KI instructions clearly state that both the content and the text of your comprehensive summary should be your own – yet another reason not to rely too heavily on generative AI tools.
Finally, you need to be aware that generative AI tools may store the data that you feed them. However, doctoral students who are employed by or affiliated with KI may use the Microsoft Copilot app, which ensures that your data is protected when you are logged in through KI. (Please be aware that Microsoft Copilot is not the same as Microsoft 365 Copilot – the latter requires a paid license).
How can you use generative AI tools?
Despite the limitations of generative AI tools and the requirement that your comprehensive summary should be written by you, generative tools can be used both to find sources and to assist you in the writing process.
Some generative AI tools may be useful for searching for literature, such as Elicit, Scispace, Connected Papers, and ResearchRabbit. Unlike traditional databases, these tools can interpret natural language queries, similar to Google searches. While these tools may serve as a useful starting point, it is crucial to recognize their limitations in reproducibility and transparency. To ensure a more comprehensive literature search, you need to complement the AI tool search with a search in traditional databases.
Generative AI tools may indeed help you both with your writing process and with the quality of your text. At the start of your writing process, generative AI can help you by brainstorming about a topic, or help you overcome writer’s block by listing points to cover in an introduction to your chosen research topic. You could then use the suggested topics as a basis on which to develop your own text, which effectively gets you past the stress of sitting in front of a blank page. Later on, generative AI can provide feedback on your writing, or help you improve the language, grammar, and/or flow of your text. Using AI tools in these ways may allow you to write a well-written text that is still your own work.
When you use generative AI tools, you always need to reflect upon your use of them. Is it reasonable? Does the text serve its purpose, and can it still be called your own work? Does your use of AI tools still allow you to fulfill the intended learning outcomes for a doctoral degree? The comprehensive summary is considered pivotal in indicating that you have fulfilled the intended learning outcomes.
You also need to consider whether using AI actually makes your text better. Much of the answer to that question depends on what you are asking AI tools to do, and how you are asking them to do that (i.e. the exact prompt you are using). For example, an open-ended prompt such as asking generative AI tools to simply “improve” your writing is much more likely to result in changes to the wording and the tone of your writing, or new additions to the content. If you instead prompt generative AI to specifically revise your writing for spelling, punctuation, and sentence structure, it will probably only apply grammar rules to your text and revise your writing based on those rules, without adding new content or changing the words.
If you do choose to use generative AI in your writing process, it is important to treat the output as mere suggestions, each of which you need to evaluate carefully. Remember, for example, that generative AI tools may adjust your word choice based on a large pool of texts which are not medical. While you may be talking about vaccine effectiveness in your original text, for example, a generative AI tool may change that to vaccine efficacy – the terms “effectiveness” and “efficacy” are very close in everyday language, but there is a significant difference in a medical context. Even if you check everything carefully, such mistakes are easy to miss, especially when copying and pasting a larger chunk of text from generative AI output into your own document.
Declaring your use of AI tools
You also need to be transparent about how generative AI and AI-assisted technologies have been used during the process of writing your text. The KI instructions for writing the comprehensive summary require you to add a statement at the end of the comprehensive summary, before the reference list. The statement must fully disclose your use of such generative AI tools, both in terms of which tools you have used and how you have used them. For example, if you have used AI tools to generate ideas for improving the content of your text, to ask for feedback regarding the structure of the text, or to ask an AI tool to re-write it, you need to declare these specific uses (and, of course, double-check everything and ensure that you can be accountable for everything in the text).
However, if you have only used AI tools to manage your references or as language and grammar support, you do not have to add such a statement. But what does that mean? Generally, you do not need to disclose that you used tools that automate time-consuming tasks where the end result essentially remains the same. For example, you may use reference management systems such as Endnote, Zotero, or Mendeley to provide your sources in the exact format you need. Similarly, you may use word processing programs that help you with the spelling, grammar, style, and concision of your text. These suggestions are based on grammatical rules and stylistic principles, and these tools will not re-write your text for you. Examples of such programs are Word, the basic version of Grammarly, and Instatext. Of course, you are still responsible for the output, so make sure that you check it carefully.
If you are uncertain whether or not you should declare your use of AI tools, we suggest that you discuss the matter with your supervisor. Always err on the side of caution; it is safer to declare AI use when it may not be needed than to withhold that declaration when it is required.
A few final words
Finally, we need to remember that advanced AI tools are new and can do things they could not do until recently – so we do not yet have all the answers about how to use them responsibly. It is important to be humble about that fact, to perhaps be a bit cautious, to communicate with supervisors and peers with open minds, to be as transparent as we can, and to learn together as we move along.
Additional resources
If you want to learn more about generative AI tools and how you may use them, you can look at some recorded lectures from the KI event “AI in practice”, held on May 28, 2024. If you have questions, please contact Karolinska Institutet University Library.