29 August 2025

Artificial intelligence (AI) has become an important part of research and doctoral education, thanks to new tools that are constantly evolving. It creates new opportunities but also raises questions about integrity, data management and academic responsibility.

Katarina Sperling at the Department of Behavioural Sciences and Learning (IBL) argues that AI can support doctoral students, especially those writing in a second language, but warns that it may weaken the learning process. Photographer: Olov Planthaber

Katherine Harrison, associate professor (docent) at the Department of Thematic Studies (TEMA).
Photographer: Charlotte Perhammar
“We need to support PhD students in using AI in ways that strengthen their critical skills rather than replace them,” says Katherine Harrison, researcher at the Department of Thematic Studies.

She also points out that universities need to take great responsibility by setting clear guidelines; otherwise the issue risks being handled inconsistently from one supervisor to another.

One of the major problems with AI is its ability to generate answers that look convincing but are in fact incorrect. These errors are known as “hallucinations” – a consequence of language models being based on probabilities rather than genuine understanding.
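The point can be illustrated with a toy sketch (an assumption-laden simplification, not how any production model actually works): a language model only knows how likely each continuation is, so a factually wrong but statistically plausible word can still be sampled.

```python
import random

# Toy "model": for a given context it knows only next-word probabilities.
# The words and probabilities here are invented for illustration.
NEXT_WORD_PROBS = {
    "The capital of France is": [
        ("Paris", 0.6),      # correct and most likely
        ("London", 0.3),     # wrong but plausible-sounding
        ("Atlantis", 0.1),   # clearly wrong, yet still has probability mass
    ],
}

def sample_next(context: str, rng: random.Random) -> str:
    """Pick a continuation by probability alone -- truth plays no role."""
    words, weights = zip(*NEXT_WORD_PROBS[context])
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next("The capital of France is", rng) for _ in range(1000)]
# Wrong continuations keep appearing, because the sampler optimises
# likelihood under its training distribution, not factual correctness.
wrong = sum(1 for w in samples if w != "Paris")
print(f"wrong continuations: {wrong} of 1000")
```

Even in this tiny example the wrong answers never disappear; they are an inherent feature of sampling from a probability distribution, which is why the output can look fluent while being false.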

Amy Loutfi divides her time between a professorship at the Department of Science and Technology (ITN) at Linköping University, a professorship at Örebro University and her role as programme director for the Wallenberg AI, Autonomous Systems and Software Program (WASP).
Photographer: Jesper Eriksson/Örebro University

“It’s important to remember that AI doesn’t think; it consists of powerful but mechanical calculations. If we trust the results too much, we risk building research on faulty foundations,” says Amy Loutfi, programme director for WASP.

She also underlines that researchers must understand how models are trained, since the underlying data is often incomplete or biased, with implications for both conclusions and ethical considerations.

Understanding AI’s possibilities and limitations is therefore becoming a new essential skill for researchers. Many argue that “AI literacy” – the ability to critically assess, question and responsibly use AI – must become an integral part of doctoral education.

“We already see PhD students using AI to improve the language and structure of their texts. This can provide valuable support, especially for those who don’t have English as their first language. But it may also mean that important training moments are lost,” says Katarina Sperling, who conducts research on AI in education.

She also cautions that academic writing could become more uniform and less nuanced if researchers rely too heavily on AI, reducing the diversity of voices in scholarly work.

Authorship and responsibility

Lars Lindblom is senior associate professor at the Department of Culture and Society (IKOS). Photographer: Sara Läthén
The question of who is actually the author of a text is also becoming more complex. In some theses, PhD candidates have begun to include a description of how AI was used during the writing process. This can add transparency and clarity, but also raises new issues of authorship and responsibility.

“We risk losing the very core of doctoral education – developing the ability to formulate questions, reason critically and independently evaluate answers. These are skills that AI cannot replace,” says Lars Lindblom, researcher at the Division of Philosophy and Applied Ethics.

Ways forward

The role of AI in research and doctoral education was recently discussed at a seminar at Linköping University, with contributions from researchers across several disciplines. Several proposals were highlighted on how academia might address the challenges of AI:

  • Clear university-level guidelines on how AI may be used in research and doctoral education.
  • Agreements between supervisors and doctoral students at the start of the programme concerning expectations regarding AI use.
  • Requirements for transparency, for example that theses and articles specify to what extent AI tools have been used.
  • Training in AI literacy, to provide researchers and doctoral students with the tools they need to use AI critically and responsibly.

More than a technical issue

At the same time, researchers emphasise that AI is not only a technical matter. The technology is owned and controlled by large global companies, which means that academia must also address its political and societal dimensions.

“AI is not just a tool; it is also a question of power – and we don’t have the full picture of what the large tech companies behind these technologies want from us. We need to talk about what this means for our understanding of knowledge and research,” says Katarina Sperling.
