WASP Humanities and Society at IEI and TEMA

About WASP-HS at TEMA and IEI

The Marianne and Marcus Wallenberg Foundation has granted SEK 96 million to be shared by 16 research projects studying the impact of artificial intelligence and autonomous systems on our society and our behaviour. Two are conducted at LiU, at the Department of Thematic Studies (TEMA) and the Department of Management and Engineering (IEI).

The vision of WASP-HS (Humanities and Society) is to realise excellent research and develop competence on the consequences and challenges of artificial intelligence and autonomous systems for humanities and society.

The WASP-HS programme is planned to run 2019-2028 and forms an independent and parallel programme to WASP, while maintaining a close dialogue with the WASP programme. The Wallenberg Foundations are investing up to SEK 660 million in the WASP-HS research programme. Umeå University is the host of the programme.

WASP-HS includes an extensive national graduate school with up to 70 doctoral students, the creation of at least ten new research groups across Sweden, support for twelve visiting professors to strengthen Swedish research and networking activities, and a number of research projects.

A call for grants related to the consequences of autonomous systems, software and AI was issued in the spring of 2019. This call resulted in 16 funded projects focusing on ethical, societal and behavioural aspects of AI and autonomous systems. These 16 projects are distributed across nine Swedish universities and institutions. Read more on this page: https://wasp-hs.org/

The two LiU projects are:

  • Ethics and consequences of AI and caring robots
  • The Emergence of Complex Intelligent Systems and The Future of Management



Ethics and consequences of AI and caring robots

Robots have become part of our domestic life, and soon we will also meet them in the healthcare system. The project will use three case studies to examine whether, and if so how, we can build relationships with social robots based on trust, empathy and accountability.

Katherine Harrison with Pepper

About the project

Our fascination with robots is old. So are our misgivings. For more than a century, science fiction has warned us of the day they will take over. Social theorists have long been predicting the consequences of robots’ entrance into the workplace. Luddites have been warning of their impact on our lives and our relationships. And more nuanced examinations have probed the way we think of ourselves when we think of (and with) them. Yet, for many of us, robots in that stereotypical, personified form, as a unit we interact/intra-act with on an emotional level, have stayed in the realm of science fiction. We may have a robotic vacuum cleaner at home. We may have even given that vacuum cleaner a name. But an autonomous housekeeper robot who is part of the family (à la Rosie the robot maid in The Jetsons)? Not yet.

This is about to change. Robots are starting to enter our daily life. We and our children are going to be expected to interact with robots as they perform different kinds of care for and with us at different life stages. What will that do to how we – and how the robots – think of care? And how are we going to produce accountability, trust and empathy in the relational intra-actions we have, together?

This interdisciplinary project, funded by the Marianne and Marcus Wallenberg Foundation, is part of WASP-HS. It brings together robot designers, computer scientists, and science, technology and society (STS) theorists experienced in ethnographic studies of affective human-machine interactions. The team will explore three cases of robots in the iterative design/early testing phase.

Case study 1: Robot tutors, Social Robotics Lab, Uppsala
Case study 2: Robot interviewer, Furhat Robotics, Stockholm
Case study 3: Elder care robot, Machine Perception and Interaction Lab, Örebro

External partner

Ginevra Castellano, Senior lecturer, Social Robotics Lab, Uppsala University

Advisory team

  • Brian Cantwell Smith, Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, Canada
  • Lucy Suchman, Professor Emerita in the Sociology Department and the Centre for Science Studies at Lancaster University, UK
  • Erik Sandewall, Professor emeritus, Linköping University, Sweden
  • Gabriel Skantze, Chief Scientist at Furhat Robotics and Professor in Speech Communication and Technology, KTH, Stockholm
  • Jennifer Rhee, Associate Professor, English, Virginia Commonwealth University, USA

Networks

Operationalising ethics for AI: translation, implementation and accountability challenges

How can we intervene in continually evolving AI infrastructures to exploit their usefulness while preventing them from exploiting people?

Light painting of a woman's portrait, veins of fibre optic light passing through her face.

About the project

The most acute issues in AI development today can be mapped to three “gaps” in negotiating ethical and moral considerations: translation, implementation and accountability. Mired in the translation gap, many technologists struggle to recognize whether and how something may be, or may become, an ethical issue. Even where these issues are recognized and discussed as potentially ethically problematic, the implementation gap makes it difficult to address them in practice and in code, because there is a proliferation of tools but few clear routes to action. Finally, the accountability gap manifests in the lack of a clear accountability framework within the companies and organizations producing technologies, as well as among the stakeholders commissioning, implementing and using them. Operationalising ethics for AI brings together an experienced interdisciplinary team to address these three gaps.

Two case studies

  1. Explainability: AI-driven systems are often opaque, and it can be difficult to understand how and why decisions are made. Taking a critical look at the burgeoning field of explainable AI (XAI), this project asks: what should be explained, to whom, how, and for what purpose?
  2. Synthetic data: how can we maximise the usefulness of AI for spotting patterns in data sets while respecting concerns about security and privacy in sectors such as healthcare? Could synthetic data be the answer? This project will consider what fabricated realities are constructed through synthetic data, what they are expected to achieve, and for whom.

Studies like these about the ethical, economic, social and legal aspects that may be entailed by the ongoing technological shift in society are at the heart of the WASP-HS programme.


The Emergence of Complex Intelligent Systems and The Future of Management

How we approach AI and its implementation in complex systems will have a huge impact on our ability to benefit from AI, as well as to avoid negative consequences. AI is thus much more than technology.

Auguste Rodin's The Thinker and its modern version. Photo credit: Hans Andersen and iStock/mennovandijk.

About the project

Industrial firms and other organizations are important actors, and understanding how they can organize and coordinate activities both within and across organizations is essential for an AI-enhanced society to become a reality. This project supports the transformation of industry and society by exploring future management capabilities.

The activities are performed along three research themes:

  1. Decision making in the (partly) unknown. This theme explores future decision making in relation to the characteristics of complex intelligent systems. It will address aspects such as understanding the scope of a decision, who takes decisions, and where, when and how decisions are taken when they are made (at least partly) in the unknown and depend on emergent system behaviors that create unknown unknowns, i.e. unknowns that cannot be foreseen.
  2. Future organizational designs and interactions in ecosystems. This theme explores new perspectives on the links between system architecture and organizational design, to generate insights into future organizational designs and interactions in ecosystems for complex intelligent systems, which are characterized by layered system architectures, intelligent evolution of systems, and the emergence of new types of actors such as data factories.
  3. Management when system complexity is beyond human cognitive understanding. This theme explores the consequences of system complexity and artificial intelligence that exceed human cognitive understanding, in relation to design strategies based on increasingly intelligent representation tools, such as model-based systems engineering and visual analytics, building on a model- and data-driven representation logic.


Latest publications

2024

Youshan Yu, Nicolette Lakemond, Gunnar Holmberg (2024). Resilience in emerging complex intelligent systems: A case study of search and rescue. Journal of Contingencies and Crisis Management, Vol. 32, Article e12626 (article in journal).
Elinor Särner, Anna Yström, Nicolette Lakemond, Gunnar Holmberg (2024). Prospective Sensemaking in the Front End of Innovation of AI Projects. Research-Technology Management, Vol. 67, p. 72-83 (article in journal).
Elinor Särner, Anna Yström, Nicolette Lakemond, Gunnar Holmberg (2024). Utilizing AI in prospective sensemaking for desired futures: outlining near- and distant-future sensemaking in complex system development