  • Publication
    AI errors, their human rights impacts and the role of mainstream media in Europe
    Over the last decades, European societies have been transformed by algorithmic logics and AI-driven technologies used to profile individuals and make data-driven decisions about their lives. Data and algorithmic profiling are used to make decision making more efficient and to ‘avoid human error’. Paradoxically, when it comes to human profiling, recent research has shown that these technologies are riddled with systemic inequalities, biases, and inaccurate analyses of human practices and intentions (Barassi, 2020). The combination of bias, inaccuracy, and unaccountability implies that AI systems will always be somehow fallacious in reading humans (Barassi, 2021). The ‘human error of AI’ can have profound impacts not only on human rights but also on the future of our democracies. Yet, as Aradau and Blanke (2021) have shown, little attention has been paid to the study of error and its political life. This paper argues that it is of pivotal importance to understand how societies make sense of and coexist with AI errors. One way to do this is by investigating the role of mainstream media in framing the debate. This paper draws on a critical discourse analysis carried out between September 2020 and February 2022, which studied how different cases of AI errors were reported in mainstream media. We analyzed 520 articles, focusing on three of the most influential European countries in technological innovation (Germany, France, and the UK). In each of these countries, we monitored five key national newspapers (daily or weekly), balanced across the political spectrum. We also analyzed key articles from the United States, Switzerland, and other European countries where they were relevant to defining the wider discourse on AI impacts in Europe. The articles were selected through keyword search. In this paper we present four conclusions of our analysis:
    1. Mainstream media discourses on AI errors were often defined by fatalism and resignation regarding the perceived inevitability of technological progress.
    2. A majority of the AI errors reported concerned mis-readings of the human body/mind.
    3. The reporting of AI errors varied from country to country and across the political spectrum.
    4. The response to AI errors was frequently framed as a policy issue, and the voice of civil society and grassroots organizations was often excluded.
    By looking at how mainstream media frame the debate about AI errors, we aim to shed light on the lack of critical responses to the problem, which can have profound implications for our democratic futures.