09 March 2022

For artificial intelligence (AI) to be helpful within healthcare, people and machines must work effectively together. A new study shows that doctors who use AI when examining tissue samples can work faster while still doing high-quality work.

Claes Lundström examines a digital pathology slide on a large screen. The tools of the future are tested using AI technology and imaging at the national research arena AIDA. Photo credit: Kajsa Juslin

How can AI systems in healthcare be designed to facilitate the interaction between people and computers? Martin Lindvall has researched this very question, with a particular focus on the area of machine learning called “deep learning”. In simple terms, this involves AI trained by finding patterns in large amounts of data. This kind of AI can, for example, be trained to find cancer in medical images.

A big challenge for those who develop AI for healthcare purposes is that AI doesn’t always get things right.

Martin Lindvall. Photo credit: Magnus Johansson

“We have learnt to expect that the AI will make mistakes. However, we also know that we can improve it over time by telling it when it is wrong or right. While bearing this flawed nature of AI in mind, we need to ensure that these systems are efficient and effective for users. It’s also important that the users feel that the machine learning adds something positive”, says Martin Lindvall, who recently completed his industrial doctorate within the Wallenberg AI, Autonomous Systems and Software Program (WASP) at LiU.

Can we trust AI to get it right?

“Computer programmes that use machine learning will inevitably make mistakes in ways that are hard to anticipate”, says Martin Lindvall.

Within medical imaging, AI can be trained to find abnormalities in, for example, tissue samples. But it turns out that models trained with machine learning are sensitive to seemingly small changes, such as a switch in the manufacturer of the chemicals used to stain the tissue sections, variations in how thick the sections are cut, or dust on the slide glass in the scanner. Disruptions of this kind can cause the model to malfunction.

“These factors are now well-known among AI developers, and we make sure to check for them. But we can’t be sure that other sources of disruptions will not emerge in the future. So we want to ensure that there are barriers to prevent problems that we’re not even aware of yet.”
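One concrete example of such a barrier, sketched below as an assumption rather than anything taken from the study, is to flag slides whose colour statistics fall far outside what the model saw during training, so that a person can double-check before the AI output is trusted. All function names and thresholds here are illustrative.

```python
# A minimal sketch (illustrative, not from the study) of a simple "barrier":
# flag slides whose colour statistics deviate strongly from the training data,
# so the interface can ask for a manual check instead of trusting the model.
import numpy as np

def stain_statistics(rgb_image: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation of a slide thumbnail."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def looks_out_of_distribution(rgb_image: np.ndarray,
                              training_mean: np.ndarray,
                              training_std: np.ndarray,
                              z_threshold: float = 4.0) -> bool:
    """Return True if any summary statistic lies far outside the training range."""
    stats = stain_statistics(rgb_image)
    z_scores = np.abs(stats - training_mean) / (training_std + 1e-8)
    return bool(np.any(z_scores > z_threshold))

# If a slide looks out of distribution, a cautious interface could withhold the
# AI suggestions for that slide and ask the pathologist to review it manually.
```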

In turn, this can make it hard for users to know whether they can trust AI. For AI tools to work in clinical environments, they must, of course, fit into healthcare workflows effectively and safely.

“AI-based support has to be designed so that the user doesn’t spend as much time checking the AI’s conclusions as it would take to do the task without any AI support at all.”

Putting the user in the driver’s seat

Together with his colleagues, Martin Lindvall has developed a human-AI interface for helping doctors to examine tissue samples. Here, AI works as an assistant for the human user rather than an autonomous agent that replaces the human. The machine learning component can help pathologists examine tissue samples of lymph nodes. Such lymph nodes are routinely extracted after the surgical removal of colon cancer. If the pathologist finds tumour cells in any of the lymph nodes, it can mean that cancer has spread to other parts of the body, in which case the patient is offered treatment.

“We chose this task because pathologists have told us that it’s relatively easy, but tedious and time-consuming. AI could have something to bring to the table here. The challenge lay in creating AI support that can help the process go faster. Usually, pathologists do these kinds of things very fast. They’re amazing”, says Martin Lindvall.

In the interface, which the researchers call “Rapid Assisted Visual Search” (or RAVS), the pathologist first gets an overview of the tissue. The AI then indicates several areas of suspected cancer. If the doctor finds no tumour cells in those areas, the sample is considered cancer-free. Martin Lindvall points out that there is a balance to strike between examining all tissue in detail and speeding up the process. The goal is for the doctor to feel confident in the result, speed up the diagnostic process and avoid incorrect decisions. Six pathologists have evaluated the interface, and the researchers presented their conclusions at the International Conference on Intelligent User Interfaces (IUI ’21).
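As a rough outline of this review flow (a sketch under assumptions, not the published implementation; the Region type and the callback names are hypothetical), the AI’s suggestions can be shown in order of suspicion, with the pathologist free to switch to a full manual read at any point:

```python
# An illustrative outline of a RAVS-style review loop. This is not the code used
# in the study; the data structures and callbacks are assumptions for clarity.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Region:
    x: int            # position of the suggested area on the slide
    y: int
    suspicion: float  # model score, higher means more suspicious

def review_lymph_node(suggestions: List[Region],
                      pathologist_confirms: Callable[[Region], bool],
                      wants_manual_review: Callable[[], bool],
                      full_manual_read: Callable[[], bool]) -> bool:
    """Return True if tumour cells are found in the lymph-node sample."""
    # Show the most suspicious areas first.
    for region in sorted(suggestions, key=lambda r: r.suspicion, reverse=True):
        if wants_manual_review():
            # The user stays in control and can ignore the AI entirely,
            # examining all the tissue as they normally would.
            return full_manual_read()
        if pathologist_confirms(region):
            return True  # tumour cells found in a suggested area
    # Nothing found in any suggested area: report the sample as tumour-free.
    return False
```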

One distinguishing aspect of the interface is that the researchers have made it possible for the user, at any point, to ignore the AI-generated suggestions and instead examine all the tissue as they normally would.

“Most users start out in the same way. They see what the AI suggests, but ignore it. Over time, however, they gain confidence in the AI, and start to use it more. So this interactive aspect of the system works as a safety barrier as well as a trust-building mechanism. The user is more in control compared to more autonomous AI products”, says Martin Lindvall.

The study concludes that the pathologists worked faster when using the RAVS interface. Martin Lindvall believes that this kind of interaction between people and assistive AI can play an essential role in speeding up the introduction of AI in medical decision-making, since it can improve both perceived and actual safety.

Furthermore, this kind of system can learn as it is being used. Since the human expert reviews all AI findings, the system can gradually improve. This opens up the possibility for interactive interfaces of this kind to act as stepping stones for more independent AI systems in the future.
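A minimal sketch of that feedback idea, assuming a simple log file (the format and any later retraining step are illustrative assumptions, not the system described in the study), could look like this:

```python
# Because the expert reviews every AI finding, each verdict can be stored as a
# labelled example and used later to improve the model. Illustrative sketch only.
import json
from pathlib import Path

FEEDBACK_LOG = Path("ravs_feedback.jsonl")  # hypothetical log file

def record_verdict(slide_id: str, region: dict, model_score: float, confirmed: bool) -> None:
    """Append one reviewed AI suggestion and the pathologist's verdict to the log."""
    entry = {"slide": slide_id, "region": region,
             "model_score": model_score, "label": int(confirmed)}
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def load_feedback() -> list:
    """Read back all logged verdicts, e.g. as extra training data for a later model version."""
    if not FEEDBACK_LOG.exists():
        return []
    with FEEDBACK_LOG.open() as f:
        return [json.loads(line) for line in f]
```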

The importance of “soft” values

A lot has happened in AI and medical imaging research during Martin Lindvall’s PhD at LiU.

“I started as a PhD student in 2016, and at that time, there were vanishingly few studies that applied deep learning to tissue samples. There are now several huge studies of this kind, with AI applications that have been shown to perform better than specialist doctors for certain tasks. It’s awe-inspiring. I’ve wondered before: ‘Is this just hype?’ But no, this is for real. Nevertheless, there are challenges. If you don’t take care of the ‘soft’ values, such as the user’s confidence in the system, then there’s a risk that it will take longer than necessary before we see these systems used in healthcare.”

Martin Lindvall has done his research at the Center for Medical Image Science and Visualization, and is employed at the medical technology company Sectra.

The study: Martin Lindvall, Claes Lundström and Jonas Löwgren (2021). Rapid Assisted Visual Search: Supporting Digital Pathologists with Imperfect AI. In Proceedings of the 26th International Conference on Intelligent User Interfaces (IUI ’21), April 14–17, 2021. https://doi.org/10.1145/3397481.3450681
