She also points out that universities need to take greater responsibility by setting clear guidelines; otherwise, the issue risks being handled inconsistently from one supervisor to another.
One of the major problems with AI is its ability to generate answers that look convincing but are in fact incorrect. These errors are known as “hallucinations” – a consequence of language models being based on probabilities rather than genuine understanding.
“It’s important to remember that AI doesn’t think because it consists of powerful, but mechanical, calculations. If we trust the results too much, we risk building research on faulty foundations,” says Amy Loutfi, programme director for WASP.
She also underlines that researchers must understand how models are trained, since the underlying data is often incomplete or biased, with implications for both conclusions and ethical considerations.
Understanding AI’s possibilities and limitations is therefore becoming a new essential skill for researchers. Many argue that “AI literacy” – the ability to critically assess, question and responsibly use AI – must become an integral part of doctoral education.
“We already see PhD students using AI to improve the language and structure of their texts. This can provide valuable support, especially for those who don’t have English as their first language. But it may also mean that important training moments are lost,” says Katarina Sperling, who conducts research on AI in education.
She also cautions that academic writing could become more uniform and less nuanced if researchers rely too heavily on AI, reducing the diversity of voices in scholarly work.
Authorship and responsibility
“We risk losing the very core of doctoral education – developing the ability to formulate questions, reason critically and independently evaluate answers. These are skills that AI cannot replace,” says Lars Lindblom, researcher at the Division of Philosophy and Applied Ethics.
Ways forward
The role of AI in research and doctoral education was recently discussed at a seminar at Linköping University, with contributions from researchers across several disciplines. Several proposals for how academia might address the challenges of AI were highlighted:
- Clear university-level guidelines on how AI may be used in research and doctoral education.
- Agreements between supervisors and doctoral students, made at the start of the programme, on expectations for AI use.
- Requirements for transparency, for example that theses and articles specify to what extent AI tools have been used.
- Training in AI literacy, to provide researchers and doctoral students with the tools they need to use AI critically and responsibly.
More than a technical issue
At the same time, researchers emphasise that AI is not only a technical matter. The technology is owned and controlled by large global companies, which means that academia must also address its political and societal dimensions.
“AI is not just a tool; it is also a question of power – and we don’t have the full picture of what these large tech companies that have these technologies want from us. We need to talk about what this means for our understanding of knowledge and research,” says Katarina Sperling.