The Human Error Project
Type
fundamental research project
Acronym
HErr
Status
ongoing
Keywords
Algorithmic profiling
AI technologies
Data justice
Human rights
Description
We are living in a historical moment when every little detail of our lived experience is turned into a data point that AI systems and algorithms use to profile us, judge us and make decisions about us. These technologies are used everywhere. Health and education practitioners use them to ‘track risk factors’ or find ‘personalized solutions’. Employers, banks and insurers use them to judge clients or potential candidates. Even governments, the police and immigration officials use them to decide key issues about individual lives, from one’s right to asylum to one’s likelihood of committing a crime. The COVID-19 pandemic has only intensified these practices of technological surveillance and profiling.
AI systems and predictive analytics are often used to make data-driven decision-making more efficient and to ‘avoid human error’. Yet paradoxically, as recent research has shown, these technologies are defined by intrinsic ‘errors’, ‘biases’ and ‘inaccuracies’ when it comes to reading humans, which can lead to a variety of real-life harms and human rights abuses.
The Human Error Project: AI, Human Rights, and the Conflict over Algorithmic Profiling combines anthropological theory with critical data and AI research. It aims to investigate the fallacy of algorithms when it comes to reading humans by focusing on three distinct, albeit interconnected, dimensions of human error in algorithms:
· Algorithmic Bias – Algorithms and AI systems are human-made and will always be shaped by the cultural values and beliefs of the humans and societies that created them.
· Algorithmic Inaccuracy – Algorithms process data. Yet the data processed by algorithms is often the product of everyday human practices, which are messy, contradictory and taken out of context; hence algorithmic predictions are filled with inaccuracies, partial truths and misrepresentations.
· Algorithmic Unaccountability – Algorithms produce predictions that are often unexplainable. The fact that most of the algorithms used for profiling cannot be explained makes them unaccountable: how can we trust their decisions if we cannot explain them?
Our team will be working on different interconnected research projects. Prof. Barassi will be leading a two-year qualitative investigation – based on critical discourse analysis and in-depth interviews – into the conflicts over algorithmic profiling in Europe, funded by the HSG Basic Research Fund. Dr. Antje Scharenberg will be working on a postdoctoral research project investigating the challenges of algorithmic profiling for human agency. Ms. Marie Poux-Berthe will be working on a three-year PhD research project on digital media and technologies and the misconstruction of old age and ageing, and Ms. Rahi Patra will be focusing on her PhD research on health surveillance technologies, algorithmic bias and their implications for human rights and privacy.
We believe that understanding human errors in algorithms has become a top priority of our times, because these errors shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by newly emerging conflicts about what it means to be human.
Leader contributor(s)
Member contributor(s)
Funder(s)
Notes
Project overview: https://mcm.unisg.ch/en/forschung/forschungsprojekte/forschungsprojekte-media-and-culture/human-error-project
Project website: https://thehumanerrorproject.ch/
Project research report 1: https://thehumanerrorproject.ch/ai-errors-mapping-debate-european-media-report/
Division(s)
Eprints ID
247943
Publications (11 results)
Publication
AI and the Western Illusion of Human Nature: Anthropology's Fight against Human Reductionism and its Interdisciplinary Future
With the rise of AI-driven technologies, algorithms have replaced paperwork in the construction of social truths (Graeber, 2016); they build truths about who we are, our cultural worlds and our identities. Anthropologists have discussed the implications of big data as meaning construction (see Boellstorff and Maurer, 2015), the powerful discourses of algorithms as culture (Dourish, 2016; Seaver, 2017) or the multiple ways in which people negotiate data narratives in everyday life (Barassi, 2017, 2020; Pink et al., 2018; Dourish and Cruz, 2018). However, much more research is needed on the human reductionism implicit in these systems, and on the Western-centric and biased visions of human nature embedded in these technologies. This paper brings the findings of a three-year ethnographic project on the profiling of children from before birth (Child | Data | Citizen Project, 2016-2019) together with the findings of a (non-anthropological) research project aimed at analyzing the discourses around algorithmic profiling in Europe and the critical practices that are emerging against it (The Human Error Project, 2020 – ongoing). The paper will argue that anthropology has a fundamental role to play in the future of AI ethics research and the study of algorithmic profiling. The discipline reminds us that ideas of human nature are not only social and cultural but also political constructions (Sahlins, 2008; Graeber and Sahlins, 2017). Yet to succeed, it will need to build projects that are truly interdisciplinary and consider data structures, policies, as well as popular media discourses.
Type: conference keynote
Publication
AI errors, their human rights impacts and the role of mainstream media in Europe
Patra, Rahi
Over the last decades, European societies have been transformed by algorithmic logics and AI-driven technologies used to profile individuals and make data-driven decisions about their lives. Here, data and algorithmic profiling are used to make decision-making more efficient and to ‘avoid human error’. Paradoxically, when it comes to human profiling, recent research has shown that these technologies are filled with systemic inequalities, biases and inaccurate analyses of human practices and intentions (Barassi, 2020). The combination of bias, inaccuracy and unaccountability implies that AI systems will always be somehow fallacious in reading humans (Barassi, 2021). The ‘human error of AI’ can have profound impacts not only on human rights but on the future of our democracies. However, as Aradau and Blanke have shown (2021), little attention has been paid to the study of error and its political life. This paper argues that it is of pivotal importance to understand how societies make sense of and coexist with AI errors. One way in which we can do this is by investigating the role of mainstream media in framing the debate. This paper draws on a critical discourse analysis carried out between September 2020 and February 2022, which studied how different cases of AI errors were reported in mainstream media. We analyzed 520 articles with a focus on three of the most influential European countries when it comes to technological innovation (Germany, France, and the UK). In each of these countries, we monitored five key national newspapers (daily or weekly), balanced across the political spectrum. We also analyzed key articles from the United States, Switzerland, and other European countries, where they were relevant to defining the wider discourse on AI impacts in Europe. The articles were selected through keyword search. In this paper we will present four conclusions of our analysis: 1. Mainstream media discourses on AI errors were often defined by fatalism and resignation regarding the perceived inevitability of technological progress. 2. A majority of the AI errors reported concerned mis-readings of the human body/mind. 3. The reporting of AI errors varied from country to country and across the political spectrum. 4. The response to AI errors was frequently framed as a policy issue, and the voice of civil society and grassroots organizations was often excluded. By looking at how mainstream media frame the debate about AI errors, our aim is to shed light on a lack of critical responses to the problem, which can have profound implications for our democratic futures.
Type: conference speech
Publication
Type: conference paper
Publication
AI Errors in Health? The Problem of Scientific Bias and the Limits of Media Debate in Europe (2022-05-31)
Patra, Rahi
Type: conference speech
Publication
Ageist technologies, ageist societies? Understanding the discourse about old age and digital technologies in France
This paper explores the representation of older people and their relationship with digital technologies in French mainstream media and professional debates during the ongoing Covid-19 pandemic. Among the societal issues that the Covid-19 pandemic has revealed in recent years, the place of and care for older people has received significant attention from politicians, civil society organizations and professionals from the healthcare sector. The mainstream media played a significant role in highlighting the issue, and French people have increasingly relied on them to inform themselves. The nature of the problem at hand is twofold. On the one hand, academics have demonstrated how the pandemic has revealed the underlying ageism operating in industrial countries (Ayalon, 2020). Others have alerted us to how it fostered the harmful ideology of techno-solutionism (Milan, 2020). However, only a few have attempted to examine these issues together (Gallistl et al., 2021). Moreover, the issue at stake goes beyond the pandemic. The ageing of the population has been framed as causing multiple problems at the political, economic and social level, and digital technologies are increasingly promoted as solutions to any type of ‘problem’ (Morozov, 2014). Yet suggesting a digital answer to the societal challenge caused by the demographic transition is reductive and harmful for older people as well as their younger counterparts. Drawing on Stuart Hall’s theoretical work on the representation of ‘the Other’, this paper is situated at the intersection of Critical Age Studies (Hazan, 1994; Katz, 1996) and Science and Technology Studies (Turkle, 2011). It builds on the combined analysis of 200 French mainstream media articles related to the subject of old age and ageing and a digital ethnography of five events which took place in 2021 and 2022. The selected events gathered stakeholders with a political, economic or technological perspective on the subject of old age and ageing, with a national or European dimension. Based on the analysis of this data, the paper argues that the French discourse about older people and digital technologies contributes to both ageist representations of old age and fallacious expectations towards technologies.
Type: conference speech
Publication
Type: conference lecture
Publication
Type: journal article
Journal: Morals and Machines
Volume: 2
Issue: 1