Deep learning systems have three characteristics that make them difficult to trust in critical applications. Because they are statistical, they cannot be deployed in contexts where worst-case performance must be guaranteed. Their results are difficult to interpret and come with no explanations, and they are notoriously fragile.
In this project we explore a new technique based on symbolic representations of abstractions. We will build new tools to verify the effectiveness of this method on the vision-based perception system of an autonomous vehicle.