Information Visualization (iVis)

The iVis Group at Linköping University focuses on the exploratory analysis and visualization of typically large and complex information spaces, for example in environmental research, transportation systems, the social sciences, or artificial intelligence.


Our vision is to address the big data challenge through a combination of human-centered data analysis and interactive visualization, deriving meaning from data to support final decision making. Our research is highly relevant for both academia and industry, as each makes increasing use of data-intensive technologies.

We take a human-centered and problem-oriented visualization approach: human-centered visualization deals with the development of interactive visualization techniques that take user- and task-related information into account in order to explore and analyze complex data sets efficiently. This approach combines aspects of different research fields, such as information and scientific visualization, human-computer interaction, information design, and cognition, as well as the particular application field. Among all areas of visualization, we focus mainly on information visualization (abbr. InfoVis), which centers on the visualization of abstract data, e.g., hierarchical, networked, or tabular information sources. During the development of human-centered information visualization approaches and systems, user abilities and requirements, visualization tasks, tool functions, interactive features, and suitable visual representations are all taken into account equally.

In contrast to visualization, data mining and machine learning are traditionally computer-centered. To address the big data challenge more efficiently and to increase trust in the analytical results, however, we have to exploit the advantages of both approaches synergistically, which is the core idea of visual analytics. The design and implementation of visual analytics tools is one of the most promising ways to cope with the ever-increasing amount of data produced every day, enabling new insights and beneficial discoveries.

The purpose of research in these areas is to develop novel methods and tools that efficiently support analysts across various data domains. Our new visualization techniques enable them to solve difficult analytical problems (referring to the famous V's of Big Data) and to identify and extract meaningful information from the data while improving the speed, accuracy, and completeness of their understanding. We are engaged in a wide range of research aspects, encompassing the development of new algorithmic approaches for the extraction of patterns and relationships in data, the visual and auditory representation of these features, the use of machine learning approaches in visualization and vice versa, as well as the study of perceptual mechanisms and novel evaluation methodologies.

More information on our research and activities, including demonstrations of tools and systems, can be found on the iVis Group site.

Selected Research Areas

Explainable and Interpretable AI/ML

Contact: Andreas Kerren

Research in Machine Learning (ML) and Artificial Intelligence (AI) has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models become more and more complex, it also becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The interpretation and explanation of ML/AI models is currently a hot topic in the Information Visualization (InfoVis) community, with results showing that providing insights into ML models can lead to better predictions and improve the trustworthiness of the results. To accomplish this, we are investigating Visual Analytics (VA) methods to open the black boxes of various ML/AI models. Our research encompasses both unsupervised and supervised learning models.

Visual Analytics of Temporal Event Data

Contact: Katerina Vrotsou
There is today a vast and rapidly growing number of data-driven applications in society and industry producing temporal event data. Temporal event data comprise sequences of point or interval events occurring over time, for example, electronic health records, various types of tracking and monitoring data, and life history or daily time-use data. Effective analysis of this data can enable analysts to gain crucial understanding of complex and interconnected processes. For the examples above, such analyses can correspond to the study of patients’ medical records for diagnostics and treatment planning, analysis of monitoring events and alarms for process control and predictive maintenance, and the capturing of individuals’ activities for understanding behavioral patterns and social processes. With this as our starting point, we aim to conduct in this focus area innovative research at the intersection of temporal data mining and interactive visualization and produce visual analytics methods that facilitate human-centered analysis of large and complex temporal event data sets.

User-centered Evaluation

Contact: Camilla Forsell
This research direction focuses on user-centered evaluation in visualization and visual analytics. Studies are conducted to investigate how users perceive, perform, and gain insight when using visual representations of large and complex data sets. Current evaluation practice (still) relies heavily on theory and methods from other disciplines that predate modern visualization. Therefore, one strand of this research focuses on the development and validation of visualization-specific methodology, together with guidance on how to use these methods to achieve high-quality results. Specifically, instruments such as heuristics and questionnaires for both qualitative and quantitative studies are investigated.


Sonification

Contact: Niklas Rönnberg
Sonification, i.e., the transformation of data into sound or the mapping of data characteristics to sound parameters, is the auditory equivalent of visualization. It can be used as a complement to visualization, facilitating the interpretation of a visual representation or revealing new relationships in the data. Visual representations can be hard to comprehend, and their visual complexity can overload the visual cognitive system. By using sonification as a complement, it is possible to provide more information and, at the same time, ease the interpretation of the visual representation. For example, sonification can reduce visual misinterpretations caused by simultaneous brightness contrast, ease the understanding of density levels, or support the perception of different datasets in a visualization.
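The core mapping idea can be sketched in a few lines. The snippet below is an illustrative sketch only (the function name, pitch range, and density values are made up): it linearly rescales data values onto a frequency range, the basic step of parameter-mapping sonification. Real sonification designs also shape loudness, timbre, and timing, and actually synthesize the sound; here we only compute frequencies in hertz.

```python
def map_to_pitch(values, f_min=220.0, f_max=880.0):
    """Linearly rescale data values to frequencies in [f_min, f_max] Hz."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# Hypothetical density levels in a visualization: higher density -> higher pitch.
densities = [0.1, 0.4, 0.4, 0.9]
frequencies = map_to_pitch(densities)
```

Mapping to pitch in this way lets equal data values sound identical, so, for instance, two density levels that look different due to brightness contrast would still be heard as the same tone.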

Visual Text and Network Analytics

Contact: Andreas Kerren and Kostiantyn Kucher
Networks are among the most important and also most challenging data sets in information visualization. Visualization research does not focus solely on depicting networks attractively; their sheer size and complexity demand other solutions for displaying and exploring them. Our research addresses such issues by designing novel visualization approaches that provide filtering and advanced interaction possibilities, often in combination with computational methods such as network centralities or embedding techniques. A recent focus lies on multivariate and heterogeneous network visualization, both of which are crucial for many application domains. Similar visualization challenges arise with data sets consisting of vast amounts of texts and documents. We are developing text analytics tools that combine interactive visualization with natural language processing approaches. This combination makes it possible for people to make sense of large and dynamic text data and allows for exploration, control, and final evaluation of the analysis processes and results.
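As a small, self-contained illustration of the kind of computational method mentioned above, the sketch below computes degree centrality (one of the simplest network centralities) for a toy undirected graph; the node names are invented for the example. Centrality scores like these are commonly used to filter, rank, or size nodes in a network visualization.

```python
from collections import defaultdict

# Toy undirected graph given as an edge list; node names are made up.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]

def degree_centrality(edge_list):
    """Degree of each node divided by (n - 1), i.e., the fraction of
    all other nodes it is directly connected to."""
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

centrality = degree_centrality(edges)
# Node "a" connects to every other node, so its centrality is 1.0.
```

In an interactive tool, a threshold on such scores can filter a large network down to its structurally most important nodes before layout and rendering.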

Human-Centered Design for Human AI/Automation Interaction

Contact: Jonas Lundberg
In many mission-critical control systems, AI/Automation is currently being advanced toward higher levels of autonomy. This development can be found in application areas such as air traffic control, maritime traffic control, rail traffic control, emergency and crisis response, as well as industrial process control. In many cases, full autonomy cannot be, or has not yet been, achieved (e.g., due to unpredictable or changing environments). Human accountability is also often desirable or required in such systems. Here, we design for human involvement. Our research on Human-Centered Design for Human AI/Automation Interaction is based on cognitive (systems) engineering with a focus on controllable and verifiable AI/Automation systems. The research concerns three main areas: 1) designing interactive systems for control and verification (human-in-the-loop), 2) inclusion of societal stakeholders in technology development (society-in-the-loop), and 3) modelling interactions with AI/Automated systems (to support development for human/society-in-the-loop).




Publications

Tim Ziemer, Sara Lenzi, Niklas Rönnberg, Thomas Hermann, Roberto Bresin (2023). Introduction to the special issue on design and perception of interactive sonification. Journal on Multimodal User Interfaces.
Jimmy Hammarbäck, Jens Alfredson, Björn Johansson, Jonas Lundberg (2023). My synthetic wingman must understand me: modelling intent for future manned-unmanned teaming. Cognition, Technology & Work.
Elizaveta Kopacheva, Masoud Fatemi, Kostiantyn Kucher (2023). Using Social-Media-Network Ties for Predicting Intended Protest Participation in Russia. Online Social Networks and Media, Vol. 37-38, Article 100273.
M. Cocchioni, S. Bonelli, Carl Westin, C. Borst, Magnus Bång, B. Hilburn (2023). Learning for Air Traffic Management: guidelines for future AI systems. 12th EASN International Conference on "Innovation in Aviation & Space for Opening New Horizons", Article 012105.
Takanori Fujiwara, Tzu-Ping Liu (2023). Contrastive multiple correspondence analysis (cMCA): Using contrastive learning to identify latent subgroups in political parties. PLOS ONE, Vol. 18.
Peilin Yu, Aida Nordman, Lothar Meyer, Supathida Boonsong, Katerina Vrotsou (2023). Interactive Transformations and Visual Assessment of Noisy Event Sequences: An Application in En-Route Air Traffic Control. 2023 IEEE 16th Pacific Visualization Symposium (PacificVis), p. 92-101.
Takanori Fujiwara, Yun-Hsin Kuo, Anders Ynnerman, Kwan-Liu Ma (2023). Feature Learning for Nonlinear Dimensionality Reduction toward Maximal Extraction of Hidden Patterns. 2023 IEEE 16th Pacific Visualization Symposium (PacificVis), p. 122-131.
Aida Nordman, Lothar Meyer, Karl Johan Klang, Jonas Lundberg, Katerina Vrotsou (2023). Extraction of CD & R Work Phases from Eye-Tracking and Simulator Logs: A Topic Modelling Approach. Aerospace, Vol. 10, Article 595.
Veronika Domova, Katerina Vrotsou (2023). A Model for Types and Levels of Automation in Visual Analytics: A Survey, a Taxonomy, and Examples. IEEE Transactions on Visualization and Computer Graphics, Vol. 29, p. 3550-3568.
Katerina Vrotsou, Carlo Navarra, Kostiantyn Kucher, Igor Fedorov, Fredrik Schück, Jonas Unger, Tina-Simone Neset (2023). Towards a Volunteered Geographic Information-Facilitated Visual Analytics Pipeline to Improve Impact-Based Weather Warning Systems. Atmosphere, Vol. 14, Article 1141.