Explain yourself – Designing automation transparency for end users

Command deck on a freight ship with monitors and controls.
The transparent RESKILL predictor (bottom left display), visualizing how different GPS filter settings affect vessel movement predictions.
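
The predictor display couples a filter setting to a predicted track. As a minimal, hypothetical illustration of that coupling, the sketch below assumes a simple exponential smoothing filter and straight-line dead reckoning; all names and values are invented for illustration and are not the RESKILL implementation.

```python
# Hypothetical sketch of the idea behind the predictor display: how a GPS
# filter setting changes a movement prediction. Names, values, and the
# smoothing approach are illustrative assumptions, not RESKILL code.

def smooth(fixes, alpha):
    """Exponentially smooth a sequence of (x, y) GPS fixes.

    alpha near 1.0 trusts raw fixes (responsive but noisy);
    alpha near 0.0 trusts history (steady but sluggish).
    """
    sx, sy = fixes[0]
    for x, y in fixes[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
    return sx, sy

def predict(fixes, alpha, dt, horizon):
    """Dead-reckon a position 'horizon' seconds ahead of the last fix.

    Velocity is estimated from the last two smoothed positions,
    sampled dt seconds apart, then extrapolated linearly.
    """
    px, py = smooth(fixes[:-1], alpha)  # smoothed position one step back
    cx, cy = smooth(fixes, alpha)       # current smoothed position
    vx, vy = (cx - px) / dt, (cy - py) / dt
    return cx + vx * horizon, cy + vy * horizon

# A noisy, roughly eastward track sampled every 10 s (metres, local frame).
track = [(0, 0), (52, 3), (98, -4), (149, 5), (203, -2)]
for alpha in (0.3, 0.6, 0.9):
    x, y = predict(track, alpha, dt=10, horizon=60)
    print(f"alpha={alpha}: position in 60 s is roughly ({x:.0f}, {y:.0f})")
```

Raising alpha makes the prediction follow recent fixes closely but lets noise through; lowering it gives a steadier but more sluggish prediction. Making that trade-off visible to the operator is what a transparent predictor display is designed to do.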

As automation becomes more intelligent, it also becomes more difficult to understand. This project investigates how to design transparent automation that explains its behavior and recommendations to end users in safety-critical working environments.

Background

Advanced automation such as artificial intelligence (AI) is considered an important enabler in many social and industrial settings for achieving future goals in terms of safety, capacity, and sustainability. A core issue is how to facilitate teamwork between humans, especially end users, and their automation. This requires mutual understanding and the ability to provide feedback to one another. But current systems, with their complex structures, fast computing processes, big data management, and non-linear reasoning, are difficult to understand. Research on automation transparency aims to increase human understanding, trust, and acceptance of automation.

While there is a need for more transparent system designs, developing meaningful visualizations and explanations of how automation works and reasons is not a trivial undertaking, especially in safety-critical contexts where time to act is often limited. Despite a recent surge in transparency research, there is a shortage of empirical findings on which to base stable conclusions. Little guidance exists on how to design and apply transparency in human-machine interfaces. AI research on transparency has been criticized for focusing on how to build explanations while neglecting the underlying psychology and human interpretation of them.

Project description

This project aims to build knowledge on how transparency can be designed to support end users’ understanding of, interaction with, and collaboration with automation in safety-critical tasks. This will be achieved through applied research in several domains, involving end users to identify transparency requirements, evaluate design concepts and prototypes, and participate in experiments. The theoretical foundation builds on design theory from the fields of human factors, human-machine interaction, and information visualization. The project will generate human factors design guidelines on how to discover users’ requirements for transparency and apply them in interface design, how to adapt transparency based on users’ feedback (e.g., using eye tracking), and how active learning can support users’ exploration of transparency functions during training.

The project seeks to advance the field of automation transparency by:

  • Involving end users throughout the design process of transparent automation (i.e., a co-design approach).
  • Providing empirical results based on human-in-the-loop simulations with operational end users across several safety-critical working environments.
  • Providing guidance on how to apply human factors in the design of transparent automation to facilitate understanding and trust.
  • Exploring transparency mechanisms for user feedback and adaptation to the individual user.

The research relies on a co-design approach, involving end users to ensure that the derived transparency solutions support their understanding and address their requirements for transparency. In addition, unobtrusive methods will be used to gather feedback from users for adapting explanations, for example by using eye-tracking equipment. The design work starts with qualitative field studies to understand the domain, reviews of accident reports, and interviews with end users about automation challenges and their needs for understanding. The design process continues with translating the results into requirements for transparency along the four transparency dimensions. This marks the start of an iterative design process, with end users frequently involved in reviewing the process and testing prototypes.
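
As one concrete, and purely hypothetical, reading of the eye-tracking idea, the sketch below maps simple gaze statistics to a detail level for the next explanation. The field names, thresholds, and rules are illustrative assumptions, not the project's actual adaptation mechanism.

```python
# Hypothetical sketch: adapting explanation detail from eye-tracking data.
# Field names, thresholds, and rules are illustrative assumptions, not the
# project's actual adaptation mechanism.
from dataclasses import dataclass

@dataclass
class GazeStats:
    dwell_s: float    # total time spent looking at the explanation panel
    revisits: int     # how often gaze returned to the panel
    task_load: float  # rough 0..1 estimate of current workload

def explanation_level(g: GazeStats) -> str:
    """Map gaze behavior to a detail level for the next explanation.

    Long dwell with repeated revisits suggests the current explanation
    leaves questions open, so more detail is offered; a barely glanced-at
    panel, or a high-workload situation, keeps the explanation minimal.
    """
    if g.task_load > 0.8:
        return "minimal"   # protect attention when workload is high
    if g.dwell_s > 5.0 and g.revisits >= 3:
        return "detailed"  # user appears to be searching for answers
    if g.dwell_s < 1.0:
        return "minimal"   # explanation is being ignored
    return "standard"

print(explanation_level(GazeStats(dwell_s=6.2, revisits=4, task_load=0.4)))  # detailed
print(explanation_level(GazeStats(dwell_s=0.5, revisits=0, task_load=0.3)))  # minimal
```

In practice, such thresholds would need calibration per user and task, which is exactly the kind of tuning the iterative design loop with end users is meant to provide.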

About the project

Aim: The project seeks to design and investigate transparent automation designs for advanced support systems in safety-critical working environments, to better support operators’ understanding of, interaction with, and trust in their automation support systems.

  • Start: January 2022
  • Project leader: Carl Westin (contact details to the right)
  • The research is supported by Centrum för Industriell Informationsteknologi (CENIIT)
  • Industrial partners include the Swedish Maritime Administration, LFV, and ABB Sweden.

Related research projects:

RESKILL – transparency of ship movement prediction in maritime piloting and digital assistants in multiple remote tower operations. Funded by Trafikverket (2017-2021).
MAHALO – transparency and personalized decision support of an AI agent supporting air traffic controllers in traffic separation assurance. Funded by Horizon 2020 (2020-2022).
HAIKU – transparency of AI assistants for aviation applications including aircraft cockpits, Urban Air Mobility, digital towers, and airport safety management. Funded by Horizon 2020 (2022-2025).
EXPLAIN – transparency of AI support systems for industrial applications in pulping and mining. Funded by Eureka AI Cluster (2022-2025).