The Human Error Project


We are living in a historical moment in which every detail of our lived experience is turned into a data point that AI systems and algorithms use to profile us, judge us and make decisions about us. These technologies are used everywhere. Health and education practitioners use them to ‘track risk factors’ or find ‘personalized solutions’. Employers, banks and insurers use them to assess clients or job candidates. Governments, the police and immigration officials use them to decide key questions about individual lives, from a person’s right to asylum to their likelihood of committing a crime. The COVID-19 pandemic has only intensified and exacerbated these practices of technological surveillance and profiling.

AI systems and predictive analytics are often deployed to make data-driven decision-making more efficient and to ‘avoid human error’. Yet, paradoxically, as recent research has shown, these technologies are defined by intrinsic ‘errors’, ‘biases’ and ‘inaccuracies’ when it comes to reading humans, which can lead to a variety of real-life harms and human rights abuses.

The Human Error Project: AI, Human Rights, and the Conflict over Algorithmic Profiling combines anthropological theory with critical data and AI research, and investigates the fallibility of algorithms when it comes to reading humans by focusing on three distinct, albeit interconnected, dimensions of human error in algorithms:
· Algorithmic Bias – Algorithms and AI systems are human made and will always be shaped by the cultural values and beliefs of the humans and societies that created them.
· Algorithmic Inaccuracy – Algorithms process data. Yet the data that algorithms process are often the product of everyday human practices, which are messy, contradictory and taken out of context; hence algorithmic predictions are filled with inaccuracies, partial truths and misrepresentations.
· Algorithmic Un-accountability – Algorithms produce predictions that are often unexplainable. The fact that most of the algorithms used for algorithmic profiling are unexplainable makes them unaccountable: how can we trust their decisions if we cannot explain them?
Our team will be working on different interconnected research projects. Prof. Barassi will lead a two-year qualitative investigation – based on critical discourse analysis and in-depth interviews – into the conflicts over algorithmic profiling in Europe, funded by the HSG Basic Research Fund. Dr. Antje Scharenberg will work on a postdoctoral research project investigating the challenges of algorithmic profiling for human agency. Ms. Marie Poux-Berthe will work on a three-year PhD project on digital media and technologies and the misconstruction of old age and ageing, and Ms. Rahi Patra will focus her PhD research on health surveillance technologies, algorithmic bias and their implications for human rights and privacy.

We believe that understanding human error in algorithms has become a top priority of our times, because these errors shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by newly emerging conflicts over what it means to be human.

Additional Information


Commencement Date unspecified
Acronym HErr
Contributors Barassi, Prof. Ph.D. Veronica (Project Manager); Scharenberg, Dr. Antje (Project Worker) & Di Salvo, Philip (Project Worker)
Datestamp 17 Dec 2020 15:38
Publications Scharenberg, Antje & Barassi, Veronica (2022) Algorithmic Resistance in Europe and the Question of Collective Agency. AoIR (Association of Internet Researchers) 2022 - Decolonising the Internet. Dublin.
Barassi, Veronica (2021) The Human Error in AI and the Conflicts over Algorithms. VIII STS Italia Conference: Dis/Entangling Technoscience - Vulnerability, Responsibility and Justice. Online.
Barassi, Veronica (2021) David Graeber, Bureaucratic Violence and the Critique of Surveillance Capitalism. Annals of the Fondazione Luigi Einaudi, LV, June 2021, 237-254. ISSN 2532-4969.
Barassi, Veronica (2021) Datafied Citizens in the Age of Coerced Digital Participation. Data Justice Conference: Civic Participation in the Datafied Society. Online.
Barassi, Veronica & Patra, Rahi (2022) AI Errors in Health? The Problem of Scientific Bias and the Limits of Media Debate in Europe. Morals and Machines, 2 (1), 34-43.
Keywords Algorithmic profiling; AI technologies; Data justice; Human rights
Funders HSG – Grundlagenforschungsfonds (GFF)
Id 247943
Project Status ongoing
Project Type fundamental research project