Explaining outcomes of deep learning systems

In this project we aim to understand and analyse deep learning systems and to make their results explainable. Specifically, we explore a new technique based on symbolic representation of abstractions.

Deep learning systems have three characteristics that make them difficult to trust in critical applications. Being statistical, they cannot be deployed in contexts where worst-case performance must be guaranteed; their results can be difficult to interpret and come with no explanations; and they are notoriously fragile.

To address this, we explore the technique of symbolic representation of abstractions and will build new tools to verify the effectiveness of the method on the vision perception system of an autonomous vehicle.
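To make the idea of abstraction concrete, the sketch below is purely illustrative and is not the project's actual method: it assumes "abstraction" in the common sense of interval bounds propagated through one ReLU layer with NumPy, so that a worst-case statement covers a whole box of inputs rather than a set of sampled points. The layer weights and the input box are invented for the example.

    # Illustrative sketch only: interval abstraction of a single ReLU layer.
    import numpy as np

    def relu_layer_interval(lo, hi, W, b):
        """Propagate an input box [lo, hi] through y = ReLU(W x + b)."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        out_lo = W_pos @ lo + W_neg @ hi + b   # lower bound: W+ on lo, W- on hi
        out_hi = W_pos @ hi + W_neg @ lo + b   # upper bound is the mirror image
        return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

    # Hypothetical toy perception layer with 2 input features and 3 neurons.
    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(3, 2)), rng.normal(size=3)

    # One symbolic statement covers every input in the box [-0.1, 0.1]^2.
    lo, hi = np.full(2, -0.1), np.full(2, 0.1)
    out_lo, out_hi = relu_layer_interval(lo, hi, W, b)
    print("guaranteed output bounds per neuron:", list(zip(out_lo, out_hi)))

Such bounds can then be inspected or checked against a safety requirement, rather than relying on the statistical behaviour of individual test inputs.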

Symbolic representation

Researchers

External partners

Professor Carl Seger, Chalmers University of Technology, and Professor Liu Yang, Nanyang Technological University, Singapore.