Probabilistic models, in which unobserved variables are treated as stochastic and dependencies between variables are encoded in joint probability distributions, are widely used in statistics and machine learning.
Probabilistic models come with many desirable properties: they enable reasoning about the uncertainties inherent to most data; they can be constructed hierarchically to build complex models from simple parts; they provide a natural safeguard against overfitting; and they allow for fully coherent inferences over complex structures from data.
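As a minimal illustration of the first property, uncertainty quantification, consider a conjugate Beta-Bernoulli model (a standard textbook example, not part of this project): the posterior over a success probability is itself a distribution whose spread shrinks as data accumulates. The function name and prior values below are illustrative choices.

```python
import math

def beta_posterior(successes, failures, a=1.0, b=1.0):
    """Beta-Bernoulli conjugate update: prior Beta(a, b) -> posterior Beta(a+s, b+f).

    Returns the posterior mean and standard deviation, so the model reports
    not just a point estimate but how uncertain that estimate is.
    """
    a_post = a + successes
    b_post = b + failures
    total = a_post + b_post
    mean = a_post / total
    var = (a_post * b_post) / (total ** 2 * (total + 1))
    return mean, math.sqrt(var)

# With little data the posterior is wide (high uncertainty)...
m_small, s_small = beta_posterior(3, 1)
# ...and narrows as more observations arrive at the same success rate.
m_large, s_large = beta_posterior(300, 100)
```

The same mechanism scales up: hierarchical priors compose such simple building blocks into complex models, and the prior acts as the overfitting safeguard mentioned above.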
Deep learning is a different branch of machine learning that has recently achieved remarkable success in a range of applications, including computer vision and natural language processing. In most cases, deep neural networks (DNNs) are trained discriminatively using supervised learning on large sets of annotated training data.
Unfortunately, the advancement of deep learning has come at a price: DNNs often lack many of the desirable properties of probabilistic models, such as principled uncertainty quantification and the ability to exploit structure through well-defined probabilistic priors.
In this project, we will develop theory and methods related to the interplay between probabilistic models and deep learning. We intend to develop both new models and new inference and learning algorithms for applications where unobserved variables are naturally characterized using, for instance, probabilistic graphical models or stochastic processes, while the data come from a domain where deep learning has been successful (e.g., images).
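One simple form such an interplay can take is letting a neural network parameterize the observation model of a probabilistic model. The sketch below is a hypothetical, untrained example (weights, sizes, and function names are illustrative, not the project's method): a small network maps an input to the mean and log-variance of a Gaussian likelihood, and the training objective becomes the Gaussian negative log-likelihood rather than a plain squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-hidden-layer network whose two outputs parameterize
# a Gaussian observation model p(y | x) = N(mu(x), sigma(x)^2).
W1 = rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 2)); b2 = np.zeros(2)

def gaussian_head(x):
    """Map inputs to a predictive mean and standard deviation."""
    h = np.tanh(x @ W1 + b1)
    out = h @ W2 + b2
    mu, log_var = out[:, 0], out[:, 1]
    return mu, np.exp(0.5 * log_var)  # exp keeps the std strictly positive

def gaussian_nll(y, mu, sigma):
    """Gaussian negative log-likelihood: the objective that replaces a
    squared-error loss once predictive uncertainty is modelled explicitly."""
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - mu)**2 / (2 * sigma**2))

x = rng.normal(size=(8, 1))
y = np.sin(x[:, 0])
mu, sigma = gaussian_head(x)
loss = gaussian_nll(y, mu, sigma)
```

In the applications targeted here, the DNN would similarly supply flexible likelihoods or feature extractors, while the probabilistic model (a graphical model or stochastic process) supplies the latent structure and coherent inference.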
We will consider applications to clinical decision support, active learning, weak supervision, and semi-supervised learning in the context of dynamical systems. However, the family of problems that involve such interplay between deep learning and probabilistic models is general, and we expect the results from the project to be widely applicable.