WASP at Department of Science and Technology (ITN)


One of the research environments connected to WASP - Wallenberg AI, Autonomous Systems and Software Program at Linköping University (LiU), is located at the Department of Science and Technology (ITN) on Campus Norrköping.

The WASP-LiU-MIT research environment is located at the division of Media and Information Technology on Campus Norrköping. Our work draws on 20 years of research, focusing on visualization and interaction in the context of AI, autonomous systems and software.

We address topics like machine learning and human-automation collaboration in experimental and design-oriented research projects; our research aims to illuminate how people and partially autonomous technology can work together and co-exist in productive and appropriate ways.

For more information about the entire WASP at LiU, go up one level in the menu to WASP - Wallenberg AI, Autonomous Systems and Software Program. You will also find a contact list of all WASP researchers at LiU divided by department there.

Location and facilities

The MIT division is the research arm of the Visualization Center C. The center also comprises a public science center focusing on visualization for science communication, including a 4K fulldome theater with 100 seats for immersive visualization experiences. Many of the public exhibits originate in research projects at the MIT division, primarily in the units of Scientific visualization, Immersive visualization, Information visualization, and Visual learning and communication.

Moreover, we have developed a Decision Arena in the form of a conference room with interactive access to a 360-degree surrounding display, and most recently the Norrköping Design Arena with a computer graphics lab, a large-format 3D scanner and facilities for prototyping and co-design in interaction and visualization.



The supercomputer Berzelius photographed with fisheye lens.

Swedish AI research gets more muscle

The supercomputer Berzelius was inaugurated in the spring of 2021, and was then Sweden's fastest supercomputer for AI. Yet, more power is needed to meet the needs of Swedish AI research.

Cars on the road illustrate artificial intelligence that can identify distances between vehicles.

NEST - a multi-million investment in the WASP program

Last year, WASP took the decision to start a total of nine NEST projects. One of the projects will be based at LiU and will involve several LiU researchers taking on the biggest challenges in AI research.

Man pointing at a digital histology image on a big screen.

AI can help doctors work faster – but trust is crucial

For artificial intelligence (AI) to be helpful within healthcare, people and machines must work effectively together. A new study shows that doctors who use AI when examining tissue samples can work faster while still doing high-quality work.

Image synthesis and efficient data representations for visual machine learning

Driven by the high accuracy and performance of deep learning algorithms in computer vision tasks, we are investigating the production and use of highly realistic training datasets for computer vision applications.

Moreover, we explore variations of abstracted data representations in order to further understand how data features at different abstraction levels affect the learning process in deep neural networks, and how this can be used to optimize the data generation and usage.
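As a minimal sketch of this idea, the snippet below procedurally generates a toy labeled image set and produces an abstracted variant of each image by block-average downsampling and intensity quantization. The shapes, sizes and quantization scheme are illustrative assumptions, not the data or methods used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_image(label, size=32):
    """Procedurally generate one labeled toy image: label 0 = square, 1 = disc."""
    img = rng.normal(0.0, 0.1, (size, size))          # noisy background
    c, r = size // 2, size // 4
    yy, xx = np.mgrid[:size, :size]
    if label == 0:
        mask = (np.abs(yy - c) < r) & (np.abs(xx - c) < r)   # square
    else:
        mask = (yy - c) ** 2 + (xx - c) ** 2 < r ** 2        # disc
    img[mask] += 1.0
    return img

def abstract(img, factor=4, levels=4):
    """Abstracted variant of an image: block-average downsampling
    followed by quantization of intensities to a few levels."""
    s = img.shape[0] // factor
    small = img.reshape(s, factor, s, factor).mean(axis=(1, 3))
    span = small.max() - small.min() + 1e-9
    q = np.round((small - small.min()) / span * (levels - 1))
    return q / (levels - 1)

labels = rng.integers(0, 2, 100)
full = np.stack([synth_image(int(l)) for l in labels])
coarse = np.stack([abstract(im) for im in full])
print(full.shape, coarse.shape)   # (100, 32, 32) (100, 8, 8)
```

Training identical networks on `full` versus `coarse` would then expose how the abstraction level affects the learning process.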


Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications, Apostolia Tsirikoglou, Joel Kronander, Magnus Wrenninge, Jonas Unger

External partner

The publication and work were done in collaboration with Magnus Wrenninge, 7D Labs.



Visualization for understanding and developing machine learning

Machine learning in its various flavors is being deployed in a wide range of application domains, including autonomous vehicles, robot navigation, interaction systems and even medicine. As with most large-scale data-driven approaches, it is in most practical cases, e.g. in deep learning architectures, hard to analyze and understand what the underlying learned model is, how accurate it is, and exactly how it arrives at its solutions.

Read more about the project

In this project, we will develop interactive visualization methods making both the learning process and the efficacy of the solution transparent to developers and users. Using deep learning and computer vision tasks as the initial application domain we will take a holistic approach and aim to investigate ways of visualizing aspects of the training data, network structures, and inference results jointly in the same framework.

The visualization system will combine new and traditional data visualization methods with novel holistic approaches specific to visualization of machine learning systems and target the needs of both developers and systems architects. For developers, the visualizations will be integrated into an environment with tools for analyzing training data quality and feature variation, as well as the performance of the system.
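As a minimal sketch of the kind of raw material such a visualization environment could display, the following collects per-layer activation statistics (mean, spread, fraction of inactive units) from a toy network with random stand-in weights. The network and the chosen statistics are illustrative assumptions, not the project's actual tooling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical three-layer MLP; random weights stand in for a trained model.
layer_dims = [(64, 128), (128, 128), (128, 10)]
weights = [rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out))
           for fan_in, fan_out in layer_dims]

def forward_with_stats(x):
    """Forward pass that records per-layer activation statistics:
    the raw material a layer-wise visual debugging view would render."""
    stats = []
    for i, w in enumerate(weights):
        x = x @ w
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)                    # ReLU
        stats.append({
            "layer": i,
            "mean": float(x.mean()),
            "std": float(x.std()),
            "inactive_frac": float((x <= 0).mean()),  # non-positive activations
        })
    return x, stats

batch = rng.normal(0.0, 1.0, (256, 64))
out, stats = forward_with_stats(batch)
for s in stats:
    print(s)
```

A dashboard plotting these statistics over training iterations would let a developer spot, for example, layers whose units go permanently inactive.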

Enabling informed choices

Intuitive visual models will enable informed choices when data is collected and the architecture is trained. Another key challenge is to analyze and quantify the domain shift when training and validation data are from different sources. For systems architects, the tool-set will act as a visual analysis and debugging tool for architecture design and systems analysis. In our prototype system, we will include an integrated development environment with interfaces to, and visual debugging tools for, machine learning libraries such as TensorFlow. New tools for high-level architecture design and analysis are among the most important challenges in the development of next-generation machine learning algorithms.
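One simple way to quantify domain shift, shown here only as an illustrative baseline, is the maximum mean discrepancy with a linear kernel: the squared distance between the feature means of two data sources. The feature vectors below are random stand-ins for learned embeddings.

```python
import numpy as np

rng = np.random.default_rng(2)

def mmd_linear(x, y):
    """Squared maximum mean discrepancy with a linear kernel:
    ||mean(x) - mean(y)||^2, a simple proxy for distribution shift."""
    d = x.mean(axis=0) - y.mean(axis=0)
    return float(d @ d)

# Stand-in feature vectors (e.g. embeddings of samples from two sources).
source_a  = rng.normal(0.0, 1.0, (500, 16))
source_b  = rng.normal(0.5, 1.2, (500, 16))  # shifted acquisition statistics
source_a2 = rng.normal(0.0, 1.0, (500, 16))  # same domain as source_a

print(mmd_linear(source_a, source_b))   # clearly larger
print(mmd_linear(source_a, source_a2))  # near zero
```

Visualizing such a scalar (or its per-dimension contributions) across data sources gives a quick first read on whether training and validation sets actually come from the same distribution.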


Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafal K. Mantiuk, Jonas Unger (2017), HDR image reconstruction from a single exposure using deep CNNs, ACM Transactions on Graphics

External partner

School of Computer Science and Engineering, NTU, Singapore.

Contact: Jianmin Zheng asjmzheng@ntu.edu.sg

Visualization for understanding and developing machine learning - deep learning.


Orchestrating machine learning development in complex medical imaging environments

Even though machine learning algorithms (especially deep convolutional neural networks) have shown promising results in many medical image analysis tasks (such as segmentation, detection and location) there are still a lot of challenges that need to be solved before these advances can be used in clinical practice.

Read more about the project

One hurdle is robustness in accuracy. An algorithm trained on data from hospital A is likely not to perform as well on data from hospital B. In many cases, this drop in accuracy is so significant that the algorithm cannot be used.

To achieve more generalizable results, typically more diverse training data are needed. Getting enough good-quality data is (as in many fields) a challenge. Therefore, our research will aim to give a more in-depth understanding of the training data space, in order to control it better and improve generalizability, even with little data. This will be investigated through data augmentation and data simulation/generation through generative adversarial networks (GANs), among other approaches.
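A minimal sketch of the data augmentation direction (simple geometric variants only; GAN-based generation is beyond a short example) could look like this, where a single image patch yields several training variants:

```python
import numpy as np

def augment(img):
    """Yield simple geometric variants of one training image:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    yield img
    yield np.fliplr(img)
    yield np.flipud(img)
    for k in (1, 2, 3):
        yield np.rot90(img, k)

rng = np.random.default_rng(3)
patch = rng.random((64, 64))          # stand-in for a tissue image patch
augmented = list(augment(patch))
print(len(augmented))                 # 6 variants from a single patch
```

Real medical-imaging pipelines would typically add domain-appropriate perturbations as well, e.g. intensity and staining variation.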

Another challenge is to bridge the gap from developing one algorithm in a sandbox environment to developing and maintaining a wide range of machine learning solutions. A way to efficiently reuse components from the machine learning pipeline is necessary. Modularization and standardization of components will be investigated, with particular focus on training data and trained models, where methods developed for the training data space can be put to further use.

External partner

Sectra AB, employer for industrial PhD.

H&E-stained whole-slide images differ in appearance when originating from different medical centers. The images show clusters of image patches originating from five different centers, which ideally should be inseparable.


Interaction with autonomous agents for medical decision support

The aim of the research is to explore the interaction between humans and computers by the design of contexts that create efficient ensembles of medical practitioners and learning machines.

The last decade's advancements in machine learning (ML) have led to a dramatic increase in AI capabilities and the viability of learning by example. However, despite impressive technical advances and many successful research projects, machine learning algorithms for medical diagnostics are used in healthcare today only to a very small extent.

One challenge is that, for ML algorithms with less than 100% sensitivity and specificity, the clinical user needs effective means to assess the validity of results and incorporate this knowledge within the broader context of their diagnostic process.
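For concreteness, sensitivity and specificity are simple ratios over the confusion counts; the toy diagnostic outcomes below are invented purely for illustration.

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from binary labels: the two quantities a clinician weighs when judging
    how much to trust an algorithmic result."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fn = np.sum(y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)

# Toy outcomes for 10 cases: ground truth vs. model prediction.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sens_spec(truth, pred)
print(sens, spec)  # 0.75 0.8333...
```

Here one diseased case is missed (sensitivity 3/4) and one healthy case is flagged (specificity 5/6), which is exactly the trade-off the clinical user must be able to assess.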

The research explorations involve viewing this interaction as a process that unfolds over time, enabling reciprocal and continuous learning, as well as framing machine learning as a material in the design process and investigating the limits, extent and characteristics of the design space that this new material affords.

External partners

  • Sectra AB
  • Region Östergötland
  • Region Gävleborg
  • University of Leeds


Focal/peripheral data visualization for industrial control rooms

In the age of Industry 4.0, with increasing digitization of industry, the amount of operation and maintenance data delivered to the industrial control room operator grows manifold. In the scope of a PhD project, we are working on the problem of information overload.

The research has two major directions. The first direction is towards creating domain-specific big data visualizations.

The major principle that underpins this study is focal/peripheral data visualization, which incorporates visual methods to guide and focus the user's attention on the most important information.

The second direction is towards looking for alternative data visualization means, e.g. tangible devices and augmented reality, that can potentially reduce the visual and cognitive load on the user.
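A crude sketch of the focal/peripheral idea, under the simplifying assumption that "importance" is deviation from a setpoint (not the actual method studied in the project):

```python
import numpy as np

rng = np.random.default_rng(4)

def split_focal_peripheral(values, setpoints, k=3):
    """Rank process signals by relative deviation from their setpoints and
    route the k most deviating ones to the focal display; the rest stay
    peripheral. Illustrative heuristic only."""
    deviation = np.abs(values - setpoints) / (np.abs(setpoints) + 1e-9)
    order = np.argsort(deviation)[::-1]
    return order[:k], order[k:]

# 12 hypothetical control-room signals with nominal setpoints.
setpoints = np.full(12, 100.0)
values = setpoints + rng.normal(0.0, 2.0, 12)
values[5] += 30.0   # one signal drifts far off its setpoint
focal, peripheral = split_focal_peripheral(values, setpoints)
print(focal)        # index 5 should be among the focal signals
```

The focal set would then be rendered prominently, while peripheral signals are shown in a subdued, glanceable form.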
Visual methods guide the viewer. Photo: Veronika Domova


External partner

ABB Corporate Research, employer for industrial PhD.

Automatic pattern tracking and visualization for decision support

Monitoring autonomous systems plays a key role in understanding the systems’ functionality and in ensuring their safety. A fundamental challenge is the size, complexity and diversity of the data that is continuously generated and must be taken into account to make fast assessments and informed decisions.

Without appropriate data reduction and visual support this is hardly possible.

This project aims to establish methods for the selection, tracking and visualization of patterns in multifield data to support decision making based on monitoring data from autonomous systems.

The project will consist of two parts: the first lays the theoretical foundations for multifield pattern tracking; the second focuses on pattern selection, visualization and interaction with the tracking results.

Pattern extraction and tracking

Patterns of interest can exhibit complex shapes and structures and can be based on multiple data sources. Descriptors are explored for the extraction and tracking of patterns with variations, including geometric distortion, rotation, translation and scale, as well as variations related to background noise and partially missing data.
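As one classic, illustrative descriptor (not necessarily among those explored in the project), normalized cross-correlation locates a pattern regardless of additive offset and intensity scaling:

```python
import numpy as np

rng = np.random.default_rng(5)

def ncc_match(field, template):
    """Locate a pattern in a 2-D field via normalized cross-correlation,
    which is insensitive to additive offset and multiplicative scaling."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for i in range(field.shape[0] - th + 1):
        for j in range(field.shape[1] - tw + 1):
            w = field[i:i + th, j:j + tw]
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = float((w * t).mean())
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Plant a pattern at a known position in a noisy field, then recover it.
pattern = rng.random((8, 8))
field = rng.normal(0.0, 0.1, (40, 40))
field[12:20, 25:33] += 3.0 * pattern + 1.0   # scaled and offset copy
pos, score = ncc_match(field, pattern)
print(pos)  # (12, 25)
```

Tracking then amounts to repeating such a match per time step and linking the detections across frames; the project's descriptors would additionally handle rotation, scale and missing data.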

Visualization for pattern selection and analysis

Multiple linked visualizations of the individual fields will be provided for interactive selection and visual analysis of patterns of interest. When appropriate we will also provide a pattern editor to support pattern sketching. We will support online changes and refinement of pattern definitions. The goal of the visualization is to support the analysis of selected patterns, follow them over time and observe changes in size and expression while also providing the data context.