27 September 2024

The race between hackers trying to crack systems vital to the functioning of society and the cybersecurity experts defending them never stops. Researchers at Linköping and Umeå universities and KTH will now develop artificial intelligence that can detect hacker attacks and take action before any damage is done.

A part of the supercomputer that glows red.
The Air2 project, coordinated by Linköping University, is part of the cyber security investment by Wallenberg AI, Autonomous Systems and Software Program (WASP). Photographer: Thor Balkhed

Today’s society is increasingly connected, with more and more systems interacting with each other via cloud-based services. Many of these systems are vital to the functioning of society and can handle everything from cars, healthcare equipment, and payment systems to mining, traffic management and energy supply. Should a hacker for some reason want to damage an individual company or community services, these systems would be potential targets.

“These are systems that mustn’t go down. They must be robust and able to withstand attacks. But with the integration of AI into the systems and the number of systems that are supposed to communicate with each other, there are hundreds of thousands of different parameters that a hacker can exploit and attack. Then you must be able to identify the location of the attack and stop it before any damage is done,” says Simin Nadjm-Tehrani, professor of computer science at Linköping University.

Machine learning and reasoning

In order to succeed, the researchers intend to use AI alongside human operators to monitor system security. But that is easier said than done. According to the researchers, so-called deep learning models, which draw conclusions from patterns in large amounts of data, would not be sufficient on their own. Systems based on deep learning are currently unable to explain on what basis they recognise an attack. In addition, a hacker may attempt to manipulate the data on which the model is trained so that the attack flies under the radar.

Portrait Simin Nadjm-Tehrani.
Simin Nadjm-Tehrani, professor of computer science at Linköping University.
Photographer: Peter Modin

Therefore, deep learning models must be combined with what are known as reasoning models. What the researchers primarily have in mind is an AI that can map out the hundreds of thousands of possible routes an attacker could exploit, while developing a contingency plan for each unique attack and responding autonomously. The AI must also be able to prove that the response is safe to perform and does not itself harm the system. This combination is often referred to as neurosymbolic reasoning.

“It can be likened to a healthy person who is constantly monitored for all possible diseases where a cure can be administered at the first sign of symptoms,” says Simin Nadjm-Tehrani.
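The core idea described above, enumerating the routes an attacker could take and checking that a countermeasure is safe before executing it, can be illustrated with a toy sketch. All component names and the graph below are invented for illustration and do not reflect the project's actual models:

```python
# Hypothetical toy model: nodes are system components, directed edges
# are exploitable connections. Entirely invented for illustration.
EDGES = {
    "internet": ["webserver"],
    "webserver": ["appserver", "database"],
    "appserver": ["database", "scada"],
    "database": [],
    "scada": [],
}

def attack_paths(graph, src, dst, path=None):
    """Enumerate every simple attack route from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    routes = []
    for nxt in graph.get(src, []):
        if nxt not in path:  # avoid revisiting components
            routes += attack_paths(graph, nxt, dst, path)
    return routes

def reachable(graph, src, blocked):
    """Components reachable from src once `blocked` edges are cut."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack += [n for n in graph.get(node, [])
                  if (node, n) not in blocked]
    return seen

def safe_response(graph, paths, critical):
    """Find one edge to cut that blocks every attack route to the
    target while keeping all critical services reachable, i.e. a
    response that provably does not itself harm the system."""
    candidates = {(a, b) for p in paths for a, b in zip(p, p[1:])}
    for edge in sorted(candidates):
        cut = {edge}
        if "scada" not in reachable(graph, "internet", cut) and \
           all(c in reachable(graph, "webserver", cut) for c in critical):
            return edge
    return None

paths = attack_paths(EDGES, "internet", "scada")
print(paths)
print(safe_response(EDGES, paths, critical=["database"]))
```

Real attack graphs have vastly more parameters, which is why the project combines learned detection with symbolic reasoning rather than exhaustive search alone.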

Identifying a real attack

But it is important that the AI can distinguish a real attack from an anomalous but harmless pattern. In other words, the system must not “cry wolf” every time something unusual is found, but only in the case of a real attack.

“In the human example above, we can compare this to really being sick or jetlagged and tired. This requires different measures. For the measures to be effective, any response to an attack must also pinpoint the exact cause of the symptom. That’s what would make any direct action possible at all,” says Simin Nadjm-Tehrani.

WASP-project

The project, called Air2 (AI for Attack Identification, Response, and Recovery), is coordinated by Linköping University and is part of the investment in cyber security by Wallenberg AI, Autonomous Systems and Software Program (WASP). The project group is led by Simin Nadjm-Tehrani, professor at the Department of Computer and Information Science at LiU. Other project participants are Jendrik Seipp, associate professor, also at the Department of Computer and Information Science at LiU, Monowar Bhuyan, assistant professor at Umeå University and Rolf Stadler, professor at KTH. Together they will supervise six young researchers in the project.
