The project aims to extend today’s methods for assurance of safety-critical systems so that they apply to future systems with machine learning components.
Autonomous systems with machine learning components will inevitably be used in environments that can potentially harm humans or the environment.

The project will study formal verification techniques and develop novel methods that provide evidence that such future systems behave as intended. Properties of interest include robustness, decisiveness, and correctness with respect to the intended function.
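To make the robustness property concrete, here is a minimal sketch of abstraction-based verification for a single decision tree: an input region is represented as a box of intervals, and all leaves reachable from that box are enumerated. This is an illustrative toy in the spirit of the techniques studied here; the tree, the box representation, and the function names are assumptions for this example, not the project's actual tooling.

```python
# Illustrative sketch (hypothetical tree and helpers, not the project's tool).
# Internal node: ("split", feature_index, threshold, left, right)
# Leaf: ("leaf", class_label)
TREE = ("split", 0, 0.5,
        ("leaf", "A"),
        ("split", 1, 0.3, ("leaf", "A"), ("leaf", "B")))

def reachable_leaves(node, box):
    """Collect class labels of all leaves reachable from an input box,
    where box[i] = (lo, hi) is an interval for feature i."""
    if node[0] == "leaf":
        return {node[1]}
    _, i, t, left, right = node
    lo, hi = box[i]
    labels = set()
    if lo <= t:              # box overlaps the left branch (x_i <= t)
        labels |= reachable_leaves(left, box)
    if hi > t:               # box overlaps the right branch (x_i > t)
        labels |= reachable_leaves(right, box)
    return labels

def is_robust(node, box):
    """The box is robust if every input inside it yields the same class."""
    return len(reachable_leaves(node, box)) == 1

# A box confined to one subtree is robust; one straddling a split may not be.
print(is_robust(TREE, [(0.0, 0.4), (0.0, 1.0)]))  # True
print(is_robust(TREE, [(0.6, 0.9), (0.0, 1.0)]))  # False
```

Scaling such exhaustive reachability analysis from a single tree to large ensembles is exactly where abstraction and refinement become necessary, which motivates the publications listed below.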

Researchers

Publications

2020

John Törnblom (2020) Formal Verification of Tree Ensembles in Safety-Critical Applications
John Törnblom, Simin Nadjm-Tehrani (2020) Formal Verification of Input-Output Mappings of Tree Ensembles. Science of Computer Programming, Vol. 194.

2019

John Törnblom, Simin Nadjm-Tehrani (2019) An Abstraction-Refinement Approach to Formal Verification of Tree Ensembles. Computer Safety, Reliability, and Security: SAFECOMP 2019 Workshops (ASSURE, DECSoS, SASSUR, STRIVE, and WAISE), Turku, Finland, September 10, 2019, Proceedings, pp. 301-313.
John Törnblom, Simin Nadjm-Tehrani (2019) Formal Verification of Random Forests in Safety-Critical Applications. Formal Techniques for Safety-Critical Systems, pp. 55-71.

WASP research