The most acute issues in AI development today can be mapped to three “gaps” in negotiating ethical and moral considerations: translation, implementation and accountability. Mired within the translation gap, many technologists struggle to recognise whether and how something may be, or may become, an ethical issue. Even where such issues are recognised and discussed as potentially ethically problematic, the implementation gap makes them difficult to address in practice and in code: there is a proliferation of tools but few clear routes to action. Finally, the accountability gap manifests in the absence of a clear accountability framework, both within the companies and organisations producing technologies and among the stakeholders commissioning, implementing and using them. Operationalising ethics for AI brings together an experienced interdisciplinary team to address these three gaps.
Two case studies
- Explainability: AI-driven systems are often opaque, and it can be difficult to understand how and why decisions are made. Taking a critical look at the burgeoning field of XAI, this project asks: what should be explained, to whom, how, and for what purpose?
- Synthetic data: How can we maximise the usefulness of AI for spotting patterns in data sets whilst respecting concerns about security and privacy in sectors such as healthcare? Could synthetic data be the answer? This project considers what fabricated realities are constructed through synthetic data, what they are expected to achieve, and for whom.
Studies like these, examining the ethical, economic, social and legal aspects entailed by the ongoing technological shift in society, are at the heart of the WASP-HS programme.