Cybersecurity research
Here you can find information about ongoing cybersecurity research projects at the Department of Computer and Information Science at Linköping University. The projects are listed by topic. You can also find recent theses in cybersecurity.
Brief facts about our funders, expertise and collaboration partners
Funders
ELLIIT, EU, Swedish Foundation for Strategic Research (SSF), Swedish Research Council (VR), Vinnova, Wallenberg AI, Autonomous Systems and Software Program (WASP), and others.
Researchers
Around ten principal investigators and a total of forty researchers work in the field of cybersecurity at the Department of Computer and Information Science at Linköping University.
Partners
Universities, research institutes, and companies in Sweden and abroad, such as RISE, Ericsson, LFV (Luftfartsverket, air traffic control), Saab, and Sectra Communications AB.
AI and security
About the topic
AI for security and security for AI
Projects
AI-powered attack identification, response and recovery (AIR2)
Topic: AI security; Critical infrastructure security; Network, IoT, and cloud security
Duration: 2024-2029
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP)
The goal of this 20 MSEK, five-year project, starting in April 2024, is to enhance the capability of AI-powered attack identification, response and recovery (AIR2) in future generations of networks, which are complex systems with thousands of configuration possibilities. It will focus on software-intensive, high-performing communication infrastructures in which:
- Prevention of cyberthreats is both anticipated and adapted over time,
- Detection of ongoing adverse scenarios is managed using well-understood components, including machine learning-based ones,
- Evolving scenarios include reactions (autonomous or partially autonomous) that can be understood and explained, despite the potential changes over time (concept drift) and considering the trade-offs involved.
Research leaders at LiU: Simin Nadjm-Tehrani (PI), Jendrik Seipp (co-PI)
Team members at LiU: Federica Uccello
Partners: AIR2 is a NEST project (building on Novelty, Excellence, Synergy, and Team) coordinated at LiU at the Department of Computer and Information Science, with participants from KTH (Rolf Stadler) and Umeå University (Monowar Bhuyan).
Projects on privacy-related attacks against facial recognition and other deep learning (DL) models
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP), The Swedish Research Council (VR), Graduate School in Computer Science (CUGS)
We have multiple projects on privacy-related attacks against facial recognition and other deep learning (DL) models, funded primarily via WASP, VR, and CUGS. Thus far we have focused on developing practical, high-quality image and multimodal defenses that frustrate automated recognition and privacy leakage—without degrading human utility. Our goals are to: (i) empower individuals with usable “privacy filters” for everyday sharing; (ii) enable dataset owners to release high-utility, anonymised corpora; and (iii) get ahead of next-gen risks where leakage accumulates across images, models, and time. Methodologically, we consider both attacks and defenses. For example, we first try to break systems using techniques that recover hidden face information or fool a model into accepting the wrong person (e.g., IdDecoder) to reveal weak points. We then design generative defenses that preserve realism while disrupting identity, including latent-space anonymisation (StyleID), semantics-aware adversarial editing (StyleAdv), and diffusion-based protection (DiffPrivate) that remains robust even against purifiers such as DiffPure. For scale and emerging modalities, we synthesise realistic people to protect datasets while maintaining downstream utility (RAD), mitigate structured cross-image leakage in vision-language models (“privacy chains”) via targeted perturbations (ChainShield), and we are extending these guarantees to immersive, multi-view/VR streaming so identities remain protected across angles, views, and time.
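As a much-simplified sketch of the adversarial-perturbation principle behind such attacks and defences: nudging an embedding against the direction a matcher relies on can push a confident match below its accept threshold. The linear scorer, weights, embedding, and threshold below are all invented for illustration; the actual systems (e.g., StyleAdv, DiffPrivate) operate on deep generative models.

```python
# Toy adversarial perturbation against a linear matcher score(x) = w . x.
# Shifting each feature by eps against the sign of its weight lowers the
# similarity score. All values here are hypothetical.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(w, x, eps):
    """Move each feature eps against the direction that supports a match."""
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]          # matcher weights (hypothetical)
x = [1.0, -1.0, 1.0]          # face embedding (hypothetical)
threshold = 1.0

adv = perturb(w, x, eps=0.5)
print(score(w, x) > threshold)    # True: original embedding matches
print(score(w, adv) > threshold)  # False: perturbed embedding no longer matches
```

The defence side of the projects adds the further constraint, absent from this sketch, that the perturbed image must remain realistic to human viewers.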
Research leader: Niklas Carlsson
Team members: Minh-Ha Le, Karol Wojtulewicz, Minxing Liu
Secure operation of uncontrolled and reliable computing on the edges (SOURCE)
Duration: 2024-2029
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP)
This project aims to leverage generative AI to address critical challenges related to secure resource allocation in dynamic edge scenarios. Researchers explore how to specify, train, and verify generative AI models for creating orchestration mechanisms and protocols. Their goals include detecting and mitigating attacks against intelligent Function as a Service (FaaS) systems and enabling runtime security verification for both manually and automatically generated orchestration mechanisms.
Research leaders at LiU (co-PI:s): Mikael Asplund, Fredrik Heintz
Team members: Christian Gustavsson, Animesh Thakur
Partner: Lund University
Where AI meets safety and security
Topic: AI security; Cyber-physical system security
Duration: 2022-2026
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP)
To guarantee that machine learning models yield outputs that are not only accurate but also robust, recent works propose formally verifying robustness properties of machine learning (ML) models. To be applicable to realistic safety-critical systems, the verification algorithms used need to manage the combinatorial explosion resulting from vast variations in the input domain, and be able to verify correctness properties derived from versatile and domain-specific requirements. Tools to achieve this are only beginning to emerge and are essential to safety qualification processes. In this project we look one step further and consider the process of safety assurance, i.e., the formal documentation of the absence of harm to humans and the environment. The evidence for claiming safety will be based on formalised models that not only provide transparency and accountability for ML-based systems, but also show resistance to security threats that impact safety.
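One standard building block in this area is interval bound propagation: pushing an input interval through the network layer by layer to certify that no perturbation within it can change the predicted class. The two-layer network and all weights below are made up for illustration; real verifiers for realistic models are far more sophisticated.

```python
# Toy interval bound propagation (IBP) for a two-layer ReLU network.
# Illustrative only; weights, inputs, and epsilon values are invented.

def interval_affine(lo, hi, W, b):
    """Propagate per-feature input intervals [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def verify_robust(x, eps, W1, b1, W2, b2, target):
    """Certify that every input within +/-eps of x keeps `target` as the top class."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = interval_relu(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Robust if the target's worst-case score beats every other class's best case.
    return all(lo[target] > hi[j] for j in range(len(lo)) if j != target)

W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[2.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
print(verify_robust([1.0, 0.0], 0.05, W1, b1, W2, b2, target=0))  # True: certified
print(verify_robust([1.0, 0.0], 0.5, W1, b1, W2, b2, target=0))   # False: not certified
```

The combinatorial explosion mentioned above shows up here as looseness: intervals widen at each layer, so naive IBP fails to certify many inputs that are in fact robust, motivating the more precise techniques the project studies.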
Research leader: Simin Nadjm-Tehrani
Team member: Valency Colaco
Critical infrastructure security
About the topic
Projects
Cybersecurity for resilient energy communities (Cyrec)
Duration: 2023-2026
Funding: Vinnova
The aim of this project is to enable secure deployment of energy communities. We investigate the security of cloud-based energy systems, which are quickly becoming a reality, as well as the next phase in the evolution of sustainable energy systems: energy communities. These have been identified by the EU as important for the transition to a more sustainable energy system, but are being held back by the risk of cyberattacks. We will develop new security methods that address the problems that arise when combining operational technology (OT) with cloud and IoT systems. The project also intends to develop new collaboration models that meet the needs of different stakeholders to uphold system-level safety and security.
Research leader: Mikael Asplund
Team members: Zeeshan Afzal, Roland Plaka
Partners: RISE, Sectra Communications AB, Emulate Energy AB, Utvecklingsklustret AB
MONAD - Monitoring and anomaly detection of data streams and decision processes in advanced digital ATS
Duration: 2024-2027
Funding: Swedish Transport Administration (Trafikverket)
The MONAD project aims to enhance cybersecurity in Air Traffic Service (ATS) systems by detecting anomalies and mitigating threats in critical air-ground communications like ADS-B and ADS-C.
The project combines AI techniques, including DGCN, LSTM-Autoencoders, and DNNs, with real-world data to build a robust anomaly detection framework.
A key outcome is a cyberattack simulator designed to train ATC personnel using realistic threat scenarios.
The solution integrates deep learning for anomaly detection, reinforcement learning for countermeasure generation, and Linux-based simulation tools to visualise and respond to attacks.
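The core detection idea can be illustrated in miniature: a model of "normal" traffic reconstructs or predicts each report, and reports with large reconstruction error are flagged. Below, a simple moving-average predictor stands in for the project's LSTM-Autoencoders, and the ADS-B-like altitude values are invented for the example.

```python
# Simplified reconstruction-error anomaly detection over a stream of
# ADS-B-like altitude reports. A moving average plays the role of the
# learned model; values and threshold are illustrative only.

def detect_anomalies(series, window=3, threshold=500.0):
    """Flag indices where a report deviates too far from the recent average."""
    flagged = []
    for i in range(window, len(series)):
        prediction = sum(series[i - window:i]) / window
        error = abs(series[i] - prediction)
        if error > threshold:
            flagged.append(i)
    return flagged

# A steady climb with one spoofed altitude report injected at index 6.
altitudes = [10000, 10100, 10200, 10300, 10400, 10500, 22000, 10700, 10800]
print(detect_anomalies(altitudes))  # [6, 7, 8]: the spoofed report, plus the windows it contaminates
```

The trailing false positives at indices 7 and 8 show why the real system needs learned sequence models rather than fixed-window statistics: a single injected report corrupts naive predictors for several steps.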
Research leader: Andrei Gurtov
Team members: Suleman Khan, Supathida Boonsong (LFV)
Partners: Luftfartsverket (LFV), affiliated with Automation Program II (Part C) and SEC-AIRSPACE (SESAR3)
SEC-AIRSPACE - Cyber security risk assessment in virtualised AIRSPACE scenarios and stakeholders' awareness of building resilient ATM
Duration: 2023-2026
Funding: EU SESAR Joint Undertaking (SESAR-JU), Digital European Sky programme
The SEC-AIRSPACE project aims to enhance cyber resilience in virtualised Air Traffic Management (ATM) systems through holistic risk assessment, personalised training, and validation in realistic scenarios. Its goals are to:
- Develop a reusable knowledge base, modeling guidelines, and dynamic risk analysis methodologies.
- Create tailored People Analytics-driven training frameworks to reduce human-related cyber risks.
- Design validation exercises for future communication infrastructure and virtualised ATS environments.
The methodology integrates threat modeling, cascading effect analysis, machine learning for trainee clustering, ROTI evaluation, and web-based tools and dashboards. These will be validated through structured use cases and dedicated exercises. The outcome will be a comprehensive risk assessment framework, actionable guidelines, and training solutions to strengthen cybersecurity and resilience in ATM operations.
Research leaders: Andrei Gurtov (PI), Gurjot Singh (co-PI)
Partners: Luftfartsverket (Sweden), Cefriel (Italy), Collins Aerospace (Italy), Deep Blue (Italy), German Aerospace Center DLR (Germany), SINTEF (Norway), Skyway (Spain), ZenaByte (Italy)
Cybersecurity didactics
About the topic
Projects
Cybersecurity academy for education and experience (CYCERONE)
Duration: 2025-2027
Funding: Cybercampus and Digital Europe
Cycerone, an initiative supported by the European Union in collaboration with leading universities and educational partners, is designed to address the critical need for enhanced cybersecurity skills across Europe. In an era where many organisations, particularly small and medium-sized enterprises (SMEs) and public administrations, face significant challenges due to a skills gap in cybersecurity, Cycerone seeks to empower these entities to protect themselves against the growing landscape of cyber threats.
Project leader (PI) at LiU: Mikael Asplund
Team members: August Ernstsson, Susan Harrington, Zeeshan Afzal
Partners: Aalto University (Finland), KTH Royal Institute of Technology (Sweden), RISE (Sweden), Lusófona University (Portugal), National Technical University of Athens (Greece), Riga Technical University (Latvia), Babeș-Bolyai University (Romania), Eötvös Loránd University (Hungary), Politecnico di Milano (Italy), University of Trento (Italy), Tampere University (Finland)
Digital4Business
Duration: 2024-2026
Funding: Digital Europe and Vinnova
The purpose of the Digital4Business project is to develop a revolutionary new European master's programme aimed at digital upskilling across many industries. D4B will promote long-term competitiveness and growth in European SMEs and companies, and develop a culture of excellence in advanced digital skills. LiU contributes to the project and the master's programme with expertise and courses in AI and cybersecurity.
Project leader (PI) at LiU: Fredrik Heintz
Team members (cybersecurity): Mikael Asplund, Gurjot Singh, Susan Harrington
Partners: Adecco Group, Akkodis, Digital Technology Skills, LHH, Matrix Internet, National College of Ireland, NOVA Information Management School (Portugal), Schuman Associates, Skillnet Ireland, Terawe Technologies Limited, University of Digital Science, University of Bologna (Italy)
Cyber-physical system security
About the topic
Projects
Reconciling safety and security in avionics platforms with next-generation multi-core processors
Topic: Cyber-physical system security; Physical-layer security
Duration: 2024-2026
Funding: Vinnova in collaboration with Saab Aeronautics
Using multi-core processors for safety-critical applications is in itself a challenge when timeliness requirements are strict, and adding security-related measures will require additional analysis. This project aims to take a step towards trustworthy (correct and secure) computing infrastructures, based on multi-core processors, that are both functionally safe and security-hardened. We aim to provide a new methodology to achieve time-deterministic computation or isolation of safety-critical functions from non-critical ones, while at the same time considering security aspects to ensure the absence of information leakage. Side channels and our proposed software- and hardware-based mitigations are analysed in conjunction with the safety aspects.
Research leader: Simin Nadjm-Tehrani
Team members: Andreas Wrisley (Industrial PhD student from Saab Aeronautics), Ingemar Söderquist (co-supervisor from Saab Aeronautics)
Partner: Saab Aeronautics
ScentUSV: Automated test scenario synthesis for verifying collision avoidance of autonomous vessels
Duration: 2024-2026
Funding: Office of Naval Research (ONR Global)
Safe traffic in open-sea encounters is governed by the International Regulations for Preventing Collisions at Sea (COLREGs). While COLREGs compliance has been demonstrated for unmanned surface vehicles (USVs) in both simulations and controlled field tests, existing efforts are limited to simple collision avoidance scenarios. However, there is a lack of effective techniques for assuring COLREGs compliance of USVs in complex traffic scenarios involving multiple vessels and/or static obstacles. Such complex scenarios can represent extremely rare combinations of events and special circumstances, which are unlikely to be covered by traditional simulations. The ScentUSV project will develop novel test scenario generation approaches for system-level assurance of COLREGs compliance for USVs in multi-vessel encounters.
Methodology: The project develops a model-based test generation approach for automatically deriving initial scenes of complex sea encounters involving multiple vessels by multi-step refinement. Our experiments involving synthetic and real-world test scenarios evaluate the relevance, diversity, completeness, scalability and speed of test scenarios.
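As a minimal sketch of the scene-generation starting point (not the project's multi-step refinement approach), one can sample candidate initial scenes and keep only those satisfying hard constraints such as minimum vessel separation. All parameters below are illustrative.

```python
# Rejection-sampling sketch of initial-scene generation for multi-vessel
# encounters: sample positions and headings, keep scenes that satisfy a
# minimum-separation constraint. Area, separation, and seed are invented.
import math
import random

def random_scene(n_vessels, area=10.0, min_sep=2.0, max_tries=1000, rng=None):
    """Sample initial vessel states (x, y, heading) with pairwise separation >= min_sep."""
    rng = rng or random.Random(42)
    for _ in range(max_tries):
        scene = [(rng.uniform(0, area), rng.uniform(0, area),
                  rng.uniform(0, 360)) for _ in range(n_vessels)]
        ok = all(math.dist(a[:2], b[:2]) >= min_sep
                 for i, a in enumerate(scene) for b in scene[i + 1:])
        if ok:
            return scene
    raise RuntimeError("no feasible scene found within max_tries")

scene = random_scene(4)
print(len(scene))  # 4 vessels, each (x, y, heading)
```

Pure rejection sampling scales poorly as constraints tighten, which is one motivation for the model-based refinement the project develops instead.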
Research leaders: Dániel Varró (PI), Ulf Kargén (co-PI)
Team member: Dominik Frey
TripleA: Attestation, authentication and assurance
Duration: 2021-2026
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP)
This project addresses the problem of trustworthy state in embedded devices. It combines methods for ensuring that devices are in their expected state (using a mechanism called remote attestation), as well as methods for updating the state when this is required. By considering these two mechanisms holistically, we are able to improve overall system security.
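The attestation half of the idea can be sketched as a challenge-response exchange: the verifier sends a fresh nonce, the device returns a MAC over its measured software state plus the nonce, and the verifier compares against the expected ("golden") state. Real schemes anchor the measurement in a hardware root of trust; the shared key and firmware states below are invented for the example.

```python
# Hedged sketch of symmetric-key remote attestation. A production design
# would bind the key to hardware; all values here are illustrative.
import hashlib
import hmac
import secrets

DEVICE_KEY = b"shared-attestation-key"  # provisioned out of band (assumption)
GOLDEN_STATE = hashlib.sha256(b"firmware-v1.2").digest()

def device_attest(nonce, firmware_image):
    """Device side: MAC over the fresh nonce and the measured firmware hash."""
    measurement = hashlib.sha256(firmware_image).digest()
    return hmac.new(DEVICE_KEY, nonce + measurement, hashlib.sha256).digest()

def verifier_check(nonce, evidence):
    """Verifier side: recompute the MAC for the expected (golden) state."""
    expected = hmac.new(DEVICE_KEY, nonce + GOLDEN_STATE, hashlib.sha256).digest()
    return hmac.compare_digest(evidence, expected)

nonce = secrets.token_bytes(16)
print(verifier_check(nonce, device_attest(nonce, b"firmware-v1.2")))   # True
print(verifier_check(nonce, device_attest(nonce, b"tampered-image")))  # False
```

The project's holistic angle enters where this sketch stops: when attestation reveals an unexpected state, a secure update mechanism must bring the device back to a trustworthy one.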
Research leaders: Mikael Asplund, Ahmad Usman
Formal methods for security
About the topic
Projects
Automating security assurance using formal methods
Duration: 2022-2027
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP)
When providing security solutions for critical systems, one must provide assurance arguments that the solutions are secure by design and do not introduce new, unexpected vulnerabilities. This research project is concerned with the (partial) automation of such security assurance cases, which could have an enormous potential to reduce the time to market for critical security solutions and to reduce the risk of undetected design flaws. We leverage existing work in formal protocol security verification and verification of cyber-physical systems, together with Sectra’s cutting-edge security expertise, to create new automated security assurance methods.
Research leaders: Mikael Asplund, Johannes Wilson
Partner: Sectra Communications AB
Protocol security verification using dynamic key structures
Duration: 2023-2028
Funding: ELLIIT
Provably secure communication solutions will be needed for the continued trust in future digital services. In this PhD project we propose a new approach to taming the inherent computational complexity of protocol security analysis by providing the means and the tools to leverage model structures (e.g., dynamic key dependencies) in models of security mechanisms and to use these structures to automate security analysis. The project is composed of three main tasks (i) automated model structure analysis, (ii) developing a theory on dependency relations, and (iii) modular protocol specification and verification.
Research leaders: Mikael Asplund, John Engberg
Partner: Lund University
SafeML: Machine learning with safety guarantees
Duration: 2022-2026
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP)
The absence of formally established correctness guarantees is a fundamental barrier to the safe adoption of machine learning (ML) in safety-critical systems, e.g., cyber-physical systems or smart medical implants. Current techniques have difficulty supplying formal guarantees to realistic ML-based safety-critical applications: they either provide weak guarantees (e.g., testing, or adversarial learning) or are in their infancy and cannot handle realistic ML models and applications. The project therefore aims to provide means to establish relevant and useful formal guarantees for realistic ML models and applications. In this project, we integrate adversarial machine learning techniques and well-founded verification approaches, with the purpose of enabling formal guarantees on the behaviours of ML-powered safety-critical systems. We particularly target safety and privacy for deep neural networks (DNNs) as adopted in safety-critical cyber-physical and medical systems. We approach the problem by proposing verification-friendly models, by developing new verification techniques, and by using them to formalise and reason about challenging properties such as patient privacy in medical applications.
Purpose and goals, methodology: Extend applicability of formal verification of DNNs in terms of scalability by proposing new techniques and in terms of properties by formulating and reasoning about new relevant ones.
Research leaders and team members: Ahmed Rezine (co-PI, LiU), Amir Aminifar (co-PI, Lund University), Anahita Baninajjar (PhD student, Lund University)
Partner: Lund University
Privacy
About the topic
Projects
Projects on anonymisation and privacy protection of images, multimedia, and user data in VR
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP), The Swedish Research Council (VR), Graduate School in Computer Science (CUGS)
We have multiple projects on anonymisation and privacy protection of images, multimedia, and user data in VR, including a WASP-funded student project and two students funded via VR/CUGS. In these projects we build privacy technology for images, video, and immersive VR that keeps people unrecognisable while the experience stays natural and useful. Our goals are to (i) let creators and platforms share or stream content safely; (ii) give users control over how they appear and what motion data they reveal; and (iii) make these protections work at scale for multi-camera, free-viewpoint, and 360° experiences. Methodologically, we combine three pillars. (1) Content-layer protection: realistic de-identification and attribute editing for faces and full scenes, with consistency across camera angles and over time, so anonymised people remain believable and downstream analytics still work. (2) Telemetry-layer protection in VR: a client-side approach that lightly perturbs head-motion data sent to servers while locally predicting the viewer’s true viewport to keep playback smooth—preserving Quality of Experience (QoE) even as we hide identifying motion patterns in tile-based streaming. (3) Systems optimisation: model-driven systems research on multi-view streaming and edge/cloud placement under tight latency and quality budgets, paired with measurement studies of real VR apps and platforms. These studies inform design and policy by mapping data flows (including for different age profiles) and evaluating privacy–utility tradeoffs, QoE impacts, and deployment overheads.
Research leader: Niklas Carlsson
Team members: Minh-Ha Le, Karol Wojtulewicz, Minxing Liu, Sheyda Mirzakhani
Projects on traffic analysis attacks and defences
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP), Swedish Foundation for Strategic Research (SSF), KIPL, Graduate School in Computer Science (CUGS)
We have several ongoing projects on traffic analysis attacks and defences, funded primarily via WASP, SSF, KIPL, and CUGS. In these projects, we investigate aspects such as how much private information can be inferred from encrypted traffic and how to harden systems against such leakage. Concretely, we study what an eavesdropper can learn (e.g., which specific news articles a user reads, or which live stream they watch via side-channel cues) and under what conditions such inference remains feasible. We also benchmark modern video fingerprinting (stress-testing attacks under real-world network variability and limited observation windows) and develop techniques that keep attacks effective in the wild. On the defence side, we design and evaluate practical, deployable countermeasures that balance privacy with bandwidth, delay, and user QoE; our work includes large-scale evaluations and the release of code/datasets to raise the bar for the community. We further explore ephemeral, per-connection defences that randomise traffic shaping to resist adaptive adversaries and are viable in real deployments. Methodologically, we combine longitudinal measurements, model-driven risk/utility analysis, and systems prototyping to quantify tradeoffs and guide operators (ISPs, CDNs, VPNs, data center operators) toward safer defaults.
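The fingerprinting principle at the heart of these attacks can be shown in miniature: even when payloads are encrypted, the sizes and timing of traffic bursts form a signature that can be matched against known content. The traces, fingerprints, and distance function below are invented; real attacks use far richer features and ML classifiers.

```python
# Toy traffic-fingerprinting sketch: summarise an encrypted trace by its
# download-burst sizes and match it to the closest known fingerprint.
# All traces and fingerprints are fabricated for illustration.

def burst_sizes(packet_bytes, gap_marker=0):
    """Collapse a packet-size sequence into burst totals (0 marks an idle gap)."""
    bursts, current = [], 0
    for size in packet_bytes:
        if size == gap_marker:
            if current:
                bursts.append(current)
            current = 0
        else:
            current += size
    if current:
        bursts.append(current)
    return bursts

def closest_match(trace, fingerprints):
    """Return the fingerprint name with smallest L1 distance to the trace's bursts."""
    feats = burst_sizes(trace)
    def dist(fp):
        pairs = list(zip(feats, fp))
        return (sum(abs(a - b) for a, b in pairs)
                + sum(feats[len(pairs):]) + sum(fp[len(pairs):]))
    return min(fingerprints, key=lambda name: dist(fingerprints[name]))

fingerprints = {"video_A": [1500, 9000, 4500], "video_B": [300, 700, 12000]}
observed = [500, 1000, 0, 4000, 5000, 0, 4500]  # only sizes/gaps are visible
print(closest_match(observed, fingerprints))  # video_A
```

Defences of the kind studied in these projects work by reshaping exactly these observable features, through padding, delaying, or randomised traffic shaping, at some cost in bandwidth, delay, and QoE.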
Research leader: Niklas Carlsson
Team members: David Hasselquist, Carl Magnus Bruhner, Somiya Kapoor, and Ethan Witwer
Partners: Tobias Pulls (Karlstad University), Niklas Johansson (Sectra Communications)
Security for networks, Internet of Things (IoT) and cloud services
About the topic
Projects
Adaptive software for the heterogeneous edge-cloud continuum (ASTECC)
Duration: 2022-2027
Funding: Swedish Foundation for Strategic Research (SSF)
The project investigates methods for the design, automated orchestration, and dynamic adaptation of software to enable its autonomous, efficient, and secure execution in dynamic, heterogeneous, distributed device-edge-cloud environments, i.e., in multi-provider, multi-service, and multi-criteria scenarios, without relying on a global resource manager. Secure communication and orchestration are key requirements for such capabilities, which is why they are a central part of the project.
Research leaders: Christoph Kessler (PI); Mikael Asplund, Niklas Carlsson, Zebo Peng, Soheil Samii (co-PIs). The cybersecurity-related subprojects in ASTECC are led by Niklas Carlsson and Mikael Asplund.
Team members: August Ernstsson, Sajad Khosravi, Xiaopeng Teng, Reyhane Falanji, Sebastian Litzinger, Somiya Kapoor, Yungang Pan
Partners: Ericsson, Saab, Sectra Communications, Aptiv
CyberSecDome
Duration: 2023-2026
Funding: Horizon Europe
CyberSecDome is a visionary European project that combines AI technology and virtual reality to revolutionize cybersecurity. The project’s mission is to predict and efficiently respond to cybersecurity threats, safeguarding digital infrastructure. With a focus on situational awareness, it offers real-time insights into incidents and risks, fostering collaborative responses across stakeholders. Privacy-aware information sharing enhances the project’s impact. LiU's main responsibility in the project is to develop methods for automated penetration testing.
Research leaders (at LiU): Mikael Asplund, Charilaos Skandylas
Partners: Maggioli, Technical University of Munich (Germany), Airbus Defence and Space Cyber Programmes, Athens International Airport (Greece), EIT Digital (digital innovation ecosystem), Hellenic Telecommunications Organisation S.A. (Greece), Institute Mines Telecom (France), AEGIS IT Research (Germany), Security Labs Consulting Limited (Ireland), Technical University of Crete (Greece), Anglia Ruskin University (UK), Telecommunication Systems Institute (Greece), Cyberalytics Ltd, ITML, Sphynx Technology Solutions (Switzerland)
Projects on bitcoins and blockchains
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP) including NTU-WASP collaboration, Faculty of Science and Engineering (LiTH)
These projects include a joint WASP-funded collaboration with NTU (Singapore) on blockchain theory and practice, where we study security, scalability, and novel applications. Thus far, our work has focused on how Bitcoin is used in illicit ecosystems and how evidence-driven methods can improve governance and risk controls. We combine large-scale blockchain measurement, graph/flow analysis, and longitudinal event studies to understand illicit categories (e.g., ransomware, tumblers, darknet markets) and their money-flow patterns, temporal dynamics, and dispersion paths. We analyse flows linked to sanctioned entities to assess sanction timing/effectiveness and exchange touchpoints, informing enforcement priorities and compliance practices. To support reproducible research and monitoring, we build shareable tools and datasets that map Bitcoin activity across onion services at scale (≈177k sites per snapshot), enabling timely ecosystem health checks and policy testing. The overarching aim is to translate empirical insight into practical guidance for regulators, exchanges, and investigators—strengthening financial integrity while preserving legitimate privacy.
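The flow-analysis component can be sketched as taint-style graph traversal: starting from a flagged address, follow outgoing transfers to map where funds could have dispersed. Real analyses weight flows by value and time and operate on billions of transactions; the toy graph below is invented.

```python
# Illustrative taint-style dispersion analysis over a toy transaction graph.
# Addresses and edges are fabricated for the example.
from collections import deque

transfers = {  # address -> addresses it sent funds to
    "flagged_addr": ["mixer_1"],
    "mixer_1": ["exch_A", "wallet_x"],
    "wallet_x": ["exch_B"],
    "exch_A": [],
}

def reachable_addresses(start, transfers):
    """Breadth-first traversal of possible dispersion paths from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in transfers.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen - {start})

print(reachable_addresses("flagged_addr", transfers))
# ['exch_A', 'exch_B', 'mixer_1', 'wallet_x']
```

In practice the exchange touchpoints surfaced by such traversals (here `exch_A` and `exch_B`) are what make the analysis actionable for regulators and compliance teams.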
Research leader: Niklas Carlsson
Team members: Alireza Mohammadinodooshan, David Hasselquist
Partners: Jun Zhao (NTU Singapore), Martin Arlitt (Corelight)
Web security
About the topic
Projects
Projects on empirical analysis and insights to improve the web certificate landscape
Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP), Faculty of Science and Engineering (LiTH)
We have ongoing projects on empirical analysis and insights into how to best improve the certificate landscape, funded primarily via WASP and internal funds. In these projects, we aim to deliver data-driven guidance for a healthier WebPKI by measuring certificate issuance, chaining, revocation, and operator practices at Internet scale. Our longitudinal studies quantify breakage risks, revocation effectiveness, and operational anti-patterns (e.g., wildcard use, chain volatility), and we release measurement tooling/datasets to foster reproducibility. Methods combine passive/active Internet measurements, longitudinal CT-log mining, and collaboration with ecosystem stakeholders to turn findings into actionable recommendations. A key thrust is revocation in practice: we track status lifecycles, compare replacement behavior after revocations, and surface inconsistent status handling and post-revocation usage—evidence that motivates transparency and protocol improvements (including revocation-transparency concepts that leverage existing CT logs). The overarching goal is to reduce user risk and misconfiguration-driven outages through evidence-based best practices, shorter time-to-fix for compromised keys, and open measurement that the community can build on.
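One of the measurement questions above, detecting post-revocation usage, reduces to joining dated certificate observations against revocation events. The records below are fabricated examples; the actual studies mine CT logs and Internet-scale scans.

```python
# Small sketch of a post-revocation-usage check: which certificates were
# still observed being served after their revocation date? All serials
# and dates are invented for illustration.
from datetime import date

revocations = {"cert-123": date(2024, 3, 1)}  # serial -> revocation date

observations = [  # (serial, date the cert was seen in a TLS handshake)
    ("cert-123", date(2024, 2, 20)),
    ("cert-123", date(2024, 3, 10)),
    ("cert-456", date(2024, 3, 12)),
]

def post_revocation_usage(observations, revocations):
    """Return serials observed in use strictly after their revocation date."""
    return sorted({serial for serial, seen in observations
                   if serial in revocations and seen > revocations[serial]})

print(post_revocation_usage(observations, revocations))  # ['cert-123']
```

Scaled up to millions of certificates, exactly this kind of join is what surfaces the inconsistent status handling and slow replacement behaviour the projects report.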
Research leader: Niklas Carlsson
Team members: Carl Magnus Bruhner, David Hasselquist
Partner: Martin Arlitt (Corelight)
Projects on social media analysis and the spread of fake, divisive, or unreliable news
We have ongoing projects on social media analysis and the spread of fake, divisive, or unreliable news, funded primarily via WASP, CUGS, and internal funds. In these projects, we study how bias, reliability, rhetoric, and platform visibility shape engagement, and thus also social media’s amplification of problematic news. Example findings thus far include that on X, rhetorical choices (e.g., risk-oriented vs. analytic language) move exposure-normalised engagement in opposite directions for unreliable vs. least-biased outlets, and hyperpartisan audiences react especially strongly to negative sentiment—with effects that differ by interaction depth (likes, retweets, replies, quotes). On Facebook, only about a third of news interactions happen in public; private spaces show deeper, class-dependent engagement—highlighting blind spots in public-only measurement. Complementary Instagram analyses show how format (albums/photos/videos) and concise messaging shape engagement over time, informing cross-platform comparisons. Methodologically, we assemble large, labeled datasets; control for visibility; use time-series and causal-style analyses; and release code/data where possible. The aim is to inform platform design and policy to reduce amplification loops for unreliable content, curb polarisation dynamics, and mitigate broader socio-economic harms linked to online cascades.
Research leader: Niklas Carlsson
Team members: Alireza Mohammadinodooshan, Sehrish Qummar
Theses in cybersecurity
