Today’s society is increasingly connected, with more and more systems interacting with each other via cloud-based services. Many of these systems are vital to the functioning of society, controlling everything from cars, healthcare equipment and payment systems to mining, traffic management and energy supply. Should a hacker want to damage an individual company or public services, these systems would be obvious targets.
“These are systems that mustn’t go down. They must be robust and able to withstand attacks. But with the integration of AI into the systems and the number of systems that are supposed to communicate with each other, there are hundreds of thousands of different parameters that a hacker can exploit and attack. Then you must be able to identify the location of the attack and stop it before any damage is done,” says Simin Nadjm-Tehrani, professor of computer science at Linköping University.
Machine learning and reasoning
To succeed, the researchers intend to use AI alongside humans to monitor system security. But that is easier said than done. According to the researchers, so-called deep learning models, which draw conclusions from patterns in large amounts of data, are not sufficient on their own. Systems based on deep learning are currently unable to explain on what basis they recognise an attack. In addition, a hacker may attempt to manipulate the data on which a model is trained so that the attack goes under the radar.
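To see why a bare score from a pattern-based detector is hard to act on, consider a minimal sketch (the metrics, sizes and thresholds are invented for illustration, not taken from the project): a statistical profile of normal telemetry yields an anomaly number with no explanation of what is wrong, and poisoned training data can shift the learned baseline so that the same attack later passes as normal.

```python
# Minimal illustration of two weaknesses of purely data-driven detection:
# no explanation, and sensitivity to poisoned training data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical telemetry: rows = observations, columns = system metrics.
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

# Learn a profile of "normal" behaviour (per-metric mean and spread).
mean, std = normal_traffic.mean(axis=0), normal_traffic.std(axis=0)

def anomaly_score(sample: np.ndarray) -> float:
    """Distance from the learned normal profile. A bare number:
    the model cannot say which parameter is under attack, or why."""
    return float(np.abs((sample - mean) / std).max())

attack = np.array([0.1, 6.0, -0.2, 0.3])   # one metric wildly off
print(anomaly_score(attack))                # high score, but no explanation

# Poisoning: an attacker injects outliers into the training set,
# inflating the learned spread so the same attack now looks normal.
poisoned = np.vstack([normal_traffic,
                      rng.normal(loc=0.0, scale=6.0, size=(300, 4))])
mean, std = poisoned.mean(axis=0), poisoned.std(axis=0)
print(anomaly_score(attack))                # much lower: under the radar
```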
Deep learning models must therefore be combined with what are known as reasoning models. What the researchers primarily have in mind is an AI that can map out all the hundreds of thousands of possible routes that an attacker could exploit, while developing a contingency plan for each unique attack and responding autonomously. The AI must also be able to prove that the response is safe to perform and does not itself harm the system. This combination is often referred to as neurosymbolic reasoning.
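The symbolic half of such a pipeline might, very roughly, look like the sketch below: a small attack graph (all component names are hypothetical) is searched for every route to a critical asset, a contingency response is planned per route, and each response is checked against a safety condition before it could ever be executed. The actual project would operate at far larger scale, with formal proofs rather than a simple invariant check.

```python
# Hypothetical attack graph: an edge means "from this foothold, the
# attacker can reach that component".
attack_graph = {
    "internet":         ["vpn_gateway", "mail_server"],
    "vpn_gateway":      ["scada_controller"],
    "mail_server":      ["workstation"],
    "workstation":      ["scada_controller"],
    "scada_controller": [],
}

critical = {"scada_controller"}   # services the response must never take down

def attack_paths(node="internet", path=()):
    """Enumerate every route from the entry point to a critical asset."""
    path = path + (node,)
    if node in critical:
        yield path
    for nxt in attack_graph.get(node, []):
        yield from attack_paths(nxt, path)

def plan_response(path):
    """Contingency plan for this route: isolate the last non-critical hop."""
    return ("isolate", path[-2])

def response_is_safe(action):
    """Safety check: the response itself must not harm a critical service."""
    _, target = action
    return target not in critical

for path in attack_paths():
    action = plan_response(path)
    assert response_is_safe(action), f"unsafe response for {path}"
    print(" -> ".join(path), "| response:", action)
```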
“It can be likened to a healthy person who is constantly monitored for all possible diseases, so that a cure can be administered at the first sign of symptoms,” says Simin Nadjm-Tehrani.
Identifying a real attack
But it is important that the AI can distinguish a real attack from a deviant but harmless pattern. In other words, the system must not “cry wolf” every time something unusual is found, but only raise the alarm in the case of a real attack.
“In the human example above, we can compare this to really being sick versus being jetlagged and tired. These call for different measures. For the measures to be effective, any response to an attack must also pinpoint the exact cause of the symptom. That is what would make any direct action possible at all,” says Simin Nadjm-Tehrani.
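One way to picture that triage step, as a minimal sketch with invented rules and component names: an anomaly is escalated only once benign explanations (the “jetlagged, not sick” cases) have been ruled out, and the alarm names the suspected component and metric so that a targeted response is possible at all.

```python
# Sketch of "don't cry wolf" triage: rule out benign explanations first,
# and localize the cause when escalating. All rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Anomaly:
    component: str
    metric: str
    value: float

BENIGN_EXPLANATIONS = [
    # (predicate, explanation) -- deviant but harmless patterns
    (lambda a: a.metric == "latency" and a.value < 250.0,
     "load spike within tolerated bounds"),
    (lambda a: a.component == "backup_server",
     "nightly backup window"),
]

def triage(anomaly: Anomaly):
    for predicate, explanation in BENIGN_EXPLANATIONS:
        if predicate(anomaly):
            return ("log_only", explanation)
    # Real alarm: name the exact component and metric, so the response
    # can target the cause rather than the whole system.
    return ("alarm",
            f"suspected attack on {anomaly.component}: "
            f"{anomaly.metric}={anomaly.value}")

print(triage(Anomaly("backup_server", "disk_io", 900.0)))     # benign: log only
print(triage(Anomaly("scada_controller", "latency", 800.0)))  # localized alarm
```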
WASP project
The project, called Air2 (AI for Attack Identification, Response, and Recovery), is coordinated by Linköping University and is part of the investment in cyber security by the Wallenberg AI, Autonomous Systems and Software Program (WASP). The project group is led by Simin Nadjm-Tehrani, professor at the Department of Computer and Information Science at LiU. The other project participants are Jendrik Seipp, associate professor, also at the Department of Computer and Information Science at LiU; Monowar Bhuyan, assistant professor at Umeå University; and Rolf Stadler, professor at KTH. Together they will supervise six young researchers in the project.