Operationalising ethics for AI: translation, implementation and accountability challenges


How can we intervene in continually evolving AI infrastructures to exploit their usefulness while preventing them from exploiting people?

The most acute issues in AI development today can be mapped to three “gaps” in negotiating ethical and moral considerations: translation, implementation and accountability. Mired in the translation gap, many technologists struggle to recognise whether and how something may be, or may become, an ethical issue. Even where such issues are recognised and discussed as potentially ethically problematic, the implementation gap makes them difficult to address in practice and in code: there is a proliferation of tools but few clear routes to action. Finally, the accountability gap manifests as the lack of a clear accountability framework within the companies and organisations producing technologies, as well as among the stakeholders commissioning, implementing and using them. Operationalising ethics for AI brings together an experienced interdisciplinary team to address these three gaps.

Two case studies

  1. Explainability: AI-driven systems are often opaque, and it can be difficult to understand how and why decisions are made. Taking a critical look at the burgeoning field of explainable AI (XAI), this project asks: what should be explained, to whom, how, and for what purpose?
  2. Synthetic data: how can we maximise the usefulness of AI to spot patterns in data sets while respecting concerns about security and privacy in sectors such as healthcare? Could synthetic data be the answer? This project will consider what fabricated realities are constructed through synthetic data, what they are expected to achieve, and for whom.

Studies like these, examining the ethical, economic, social and legal aspects entailed by the ongoing technological shift in society, are at the heart of the WASP-HS programme.

Synthetic data workshop series

This workshop series will discuss the risks, possibilities and promises of synthetic data across different application areas. Given the increasingly complicated regulatory environment around data use and AI systems, an important question is what kinds of risks are addressed, created or made possible through synthetic data. While there is much excitement about synthetic data in the machine learning community, there is also apprehension and caution. A proliferation of synthetic data generation libraries and pipelines is becoming available to the technical community. These promise to overcome the triple challenges of privacy, bias and data scarcity, but they warrant a critical discussion about how and to what extent these challenges are actually addressed. We seek to discuss the current state of the art in synthetic data as well as the critical, legal and ethical issues these techniques may encounter.
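To make the basic idea concrete, the sketch below is a deliberately minimal toy illustration of synthetic data generation: fitting a simple statistical model (here, a multivariate Gaussian) to "real" records and sampling new records from it. All names and numbers are invented for illustration; production pipelines model far richer joint structure and add formal privacy guarantees, which is precisely where the workshop's critical questions arise.

```python
import numpy as np

def generate_synthetic(real_data: np.ndarray, n_samples: int,
                       seed: int = 0) -> np.ndarray:
    """Draw synthetic records from a Gaussian fitted to the real data.

    Toy illustration only: real synthetic-data generators model
    non-Gaussian joint distributions and offer privacy guarantees
    that a plain Gaussian fit does not.
    """
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)           # per-column means
    cov = np.cov(real_data, rowvar=False)   # joint covariance
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Hypothetical "real" patient-like records: (age, systolic blood pressure)
rng = np.random.default_rng(42)
real = rng.multivariate_normal([50, 120], [[100, 30], [30, 150]], size=500)

# Synthetic records mimic aggregate statistics without copying any record
synthetic = generate_synthetic(real, n_samples=500)
```

The appeal is that `synthetic` preserves patterns (means, correlations) useful for analysis while containing no actual individual's record; the open question, as the series discusses, is how far such fidelity can go before privacy or bias concerns resurface.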

For more information, contact Katherine Harrison or Irina Shklovski.

Workshop 1:
Synthetic data in the medical domain

Thursday 16 November 2023, 9-12,
University of Copenhagen (Amager Campus)

Workshop 2:
Synthetic data in smart cities/digital twins

Wednesday 29 November 2023, 9-12,
Linköping University (Campus Valla)
- in collaboration with the TEMA Data Lab.

Workshop 3:
Synthetic Data for Social Science Research

Wednesday 10 April, 13-15,
Linköping University (Campus Valla)
- in collaboration with the TEMA Data Lab.
