Verification of Safety-critical and Learning-based Software

The project aims to extend today’s methods for assurance of safety-critical systems so that they apply to future systems with machine learning components.

Real-time security
Autonomous systems with machine learning components will inevitably be deployed in settings where their failures can harm people or the environment.

The project will study formal verification techniques and develop novel methods that provide evidence that such future systems behave as intended. Among the properties of interest are robustness, decisiveness, and correctness with respect to the intended function.
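To make the robustness property concrete, the sketch below certifies local robustness for a toy *linear* classifier. This example is illustrative only and not from the project itself: the model, the function name `certify_linear_robustness`, and all numbers are assumptions. For a linear model f(x) = Wx + b, the worst-case score difference between two classes under an L∞ perturbation of radius eps can be computed exactly, so the check is a (very small) formal verification rather than a test by sampling.

```python
def certify_linear_robustness(W, b, x, eps):
    """Return True iff the predicted class of the linear model f(x) = Wx + b
    provably cannot change under any L_inf perturbation of x up to eps.
    For class pair (pred, j) the worst-case margin is
      (w_pred - w_j) . x + (b_pred - b_j) - eps * ||w_pred - w_j||_1.
    (Toy illustration; not the project's actual method.)"""
    scores = [sum(w * xi for w, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    pred = max(range(len(scores)), key=scores.__getitem__)
    for j in range(len(scores)):
        if j == pred:
            continue
        diff = [wp - wj for wp, wj in zip(W[pred], W[j])]
        worst_margin = (sum(d * xi for d, xi in zip(diff, x))
                        + b[pred] - b[j]
                        - eps * sum(abs(d) for d in diff))
        if worst_margin <= 0:
            return False  # some allowed perturbation can flip the class
    return True

# Hypothetical 2-class, 2-feature model.
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
x = [2.0, 0.5]
print(certify_linear_robustness(W, b, x, eps=0.5))  # → True (margin 1.5 > 2*0.5)
print(certify_linear_robustness(W, b, x, eps=1.0))  # → False
```

For neural networks the same question is much harder; real tools relax or search the nonlinear layers (e.g. with interval bounds or SMT solving) instead of computing the margin in closed form.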
