Does it matter if you don’t find your sexual preferences represented in a dropdown menu on a dating site? What happens when decisions about what constitutes “valid” data to be captured are based on technical limits to processing power or standardized algorithms? How might social or organizational assumptions about use and users exclude some people in the development of innovative digital technologies?
In my research I combine my personal fascination with digital media technologies with my political commitment to critically examining the impact of such technologies on bodies and lives. My work sits at the intersection of Science & Technology Studies, media studies, and feminist theory, bringing critical perspectives on normativity and knowledge production to studies of different digital media technologies.
Smart cities and companion (ro)bots are just two areas where technological development has important implications for recognition and livability. Technologies like these make big promises, but also have big consequences for how we can all live our lives. I develop material-discursive approaches rooted in interdisciplinary collaborations in order to explore these consequences.
New research projects
I am currently engaged in two research projects that build on and significantly develop my previous experience and knowledge:
Operationalising ethics for AI: translation, implementation and accountability challenges
The most acute issues in AI development today can be mapped to three “gaps” in negotiating ethical and moral considerations: translation, implementation and accountability. Mired within the translation gap, many technologists struggle to recognize whether and how something may be, or may become, an ethical issue. Even where such issues are recognized and discussed as potentially ethically problematic, the implementation gap makes it difficult to address them in practice and in code: there is a proliferation of tools but few clear routes to action. Finally, the accountability gap manifests in the lack of a clear accountability framework both within the companies and organizations producing technologies and among the stakeholders commissioning, implementing and using them. Operationalising ethics for AI brings together an experienced interdisciplinary team to address these three gaps.
2023-2028
Financed by the Marianne and Marcus Wallenberg Foundation
Robotic care practices: Creating trust, empathy and accountability in human-robot encounters
No longer science fiction, robots are entering our daily lives, performing different kinds of care for us and our children. This project examines attempts to program educational robots and recruitment assistant robots to produce relations of trust, empathy and accountability with humans. These relations are necessary for good social interactions, upon which the successful long-term adoption of these cognitive companions depends. Understanding how trust, empathy and accountability are created in these interactions thus represents both cutting-edge research and an important part of the perceived solution to a global shortage of workers prepared to carry out the time- and labour-intensive work of care. The project brings together developers from the Social Robotics Lab at Uppsala University, the FurHat company in Stockholm, and social science researchers from Linköping University, together with an international advisory board specialised in human-robot interactions. Through ethnography, developer interviews and video analysis of intra-actions with robots, we critically interrogate a tension between emotion and accountability in human-robot relations, one that stems from fundamentally different understandings of emotion and has wide-ranging consequences. At regular joint learning seminars over this four-year project, we will work together to develop nuanced, interdisciplinary understandings of emotion that can refine practical applications of robotic care.
2020-2024
Financed by the Marianne and Marcus Wallenberg Foundation