Background
Machine learning (ML), especially by means of deep learning, has made substantial progress over the last decade, for example in solving complex problems such as image classification, object detection, and natural language processing. However, the data-hungry nature of deep learning means that the full potential of a model is often inhibited by a lack of data. This problem is especially pronounced in medical imaging, where data is expensive to capture, relies on medical expertise for annotation, and is of a sensitive and protected nature. At the same time, for real-world use, medical ML requires large quantities of diverse training data to enable robust models that can handle unseen environments, where data characteristics differ from the training distribution.
Synthetically generated images can be used to improve image-based deep learning applications, both by increasing the amount of training data and by ensuring that different types of image content are included. Traditionally, computer graphics has been used for this purpose, but it requires explicit modeling of the image content. While this can in many cases be accomplished for natural images, it is difficult to model the complex biological content depicted in medical images. An alternative solution is to use deep learning for automatic generation of new image content, by means of generative adversarial networks (GANs). Over the last few years, research on GANs has progressed to the point that photo-realistic images can be generated in scenarios with narrow data distributions (cars, faces, etc.). This is promising for medical imaging, since medical data domains are in general narrow. At the same time, GANs have mostly been used to generate 2D images of limited resolution. In medical imaging, data modalities can be more challenging, such as 3D volumes in radiology, or giga-pixel whole slide images (WSIs) in digital pathology. Furthermore, it is difficult to control the content generated by GANs.
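To make the adversarial setup concrete, the sketch below computes the two standard GAN losses: the discriminator is trained to assign high probability to real images and low probability to generated ones, while the generator is trained (in the common non-saturating form) to make the discriminator assign high probability to its outputs. This is a minimal illustration, not code from the project; the discriminator outputs are hypothetical example values.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Discriminator objective: maximize log D(x) + log(1 - D(G(z))),
    # i.e. minimize the negative mean of those terms.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator objective: maximize log D(G(z)).
    return -np.mean(np.log(d_fake))

# Hypothetical discriminator probabilities for a small batch:
d_real = np.array([0.9, 0.8, 0.95])   # D's outputs on real images
d_fake = np.array([0.1, 0.2, 0.05])   # D's outputs on generated images

print(round(discriminator_loss(d_real, d_fake), 3))  # → 0.253
print(round(generator_loss(d_fake), 3))              # → 2.303
```

In an actual training loop, these two losses are minimized in alternation with respect to the discriminator and generator parameters; the difficulty the text points to is that this objective alone gives no handle on *which* image features the generator produces.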
The project aims to combine computer graphics and generative deep learning in order to produce high-quality synthetic image datasets with detailed control over the image content. The overarching goal is a data-centric perspective on medical deep learning, where generated content can improve performance and robustness in limited-data scenarios and aid in analyzing model performance under different types of variation. The project can be summarized in the following two sub-projects:
- Sub-project 1 – data synthesis: For radiology, the synthesis work focuses on improving GANs for 3D data, where existing methods are limited in resolution. For digital pathology, it focuses on extending GANs to whole slide images (WSIs), which are of extremely high resolution. A central aim is also to increase control over the generated images, so that the distribution of image features can be explicitly controlled.
- Sub-project 2 – data-centric learning: With the data synthesis in place, there are potential direct benefits in terms of increased performance of ML-assisted diagnosis. However, the project will also address more fundamental questions. With the means to explicitly control the features of generated data, a systematic analysis will be conducted to dissect which features contribute to performance and robustness, and how training data can be improved by better reflecting such features.
About the project
- The project started in 2022 and is expected to run until 2027.
- The project is supported by Centrum för Industriell Informationsteknologi (CENIIT).
- Industrial partners include Sectra and ContextVision.