Resisting AI

Resisting AI: An Anti-fascist Approach to Artificial Intelligence
Author: Dan McQuillan
Language: English
Subjects: Artificial Intelligence
Publisher: Bristol University Press
Publication date: 2022
Pages: 190
ISBN: 978-1529213508

Resisting AI: An Anti-fascist Approach to Artificial Intelligence is a book on artificial intelligence (AI) by Dan McQuillan, published in 2022 by Bristol University Press.

Content

Resisting AI takes the form of an extended essay, [1] which counters optimistic visions of AI's potential by arguing that AI may best be seen as a continuation and reinforcement of bureaucratic forms of discrimination and violence, ultimately fostering authoritarian outcomes. [2] For McQuillan, AI's promise of objective calculability is antithetical to an egalitarian and just society. [3] [4] McQuillan uses the expression "AI violence" to describe how various actors, relying on opaque algorithms, can discriminate against categories of people in access to jobs, loans, medical care, and other benefits. [2]

The book suggests that AI has a political resonance with the soft eugenic approaches of modern welfare states to the valuation of life, [5] and that AI exhibits eugenic features in its underlying logic as well as in its technical operations. [5] The parallel is with historical eugenicists, who achieved savings for the state by sterilizing those deemed "defective" so that the state would not have to care for their offspring. [5]

McQuillan's analysis goes beyond the familiar critique of AI systems fostering precarious labour markets, addressing "necropolitics": the politics of who is entitled to live and who to die. [2] [6] Although McQuillan opens the book with a brief history of machine learning, with its need for "hidden and undercompensated labour", [6] he is concerned more with the social impacts of AI than with its technical aspects. [7] [6] McQuillan sees AI as the continuation of existing bureaucratic systems that already marginalize vulnerable groups, aggravated by the fact that AI systems trained on existing data are likely to reinforce existing discrimination (for example, when attempting to optimize welfare distribution based on existing data patterns), [7] ultimately creating a system of "self-reinforcing social profiling". [8]

In elaborating on the continuity between existing bureaucratic violence and AI, McQuillan draws on Hannah Arendt's concept of the thoughtless bureaucrat in Eichmann in Jerusalem: A Report on the Banality of Evil, which now becomes the algorithm that, lacking intent, cannot be held accountable and is thus endowed with an "algorithmic thoughtlessness". [9]

McQuillan defends the "fascist" in the title of the work by arguing that, while not all AI is fascist, this emerging technology of control may end up being deployed by fascist or authoritarian regimes. [10] For McQuillan, AI can support the diffusion of states of exception, being both a technology impossible to properly regulate and a mechanism for multiplying exceptions more widely. An example of a scenario where AI surveillance systems could take discrimination to a new extreme is the initiative to create LGBT-free zones in Poland. [11] [7]

Skeptical of ethical regulations as a means of controlling the technology, McQuillan proposes people's councils, workers' councils, and other forms of citizens' agency to resist AI. [7] A chapter titled "Post-Machine Learning" appeals for resistance via currents of thought from feminist science (standpoint theory), post-normal science (extended peer communities), and new materialism; McQuillan encourages the reader to question the meaning of "objectivity" and argues for the necessity of alternative ways of knowing. [12] Among the virtuous examples of resistance, possibly to be adopted by AI workers themselves, McQuillan notes [13] the Lucas Plan of the workers of Lucas Aerospace Corporation, [14] in which a workforce declared redundant took control and reoriented the enterprise toward socially useful products. [10]

McQuillan's work [15] warns against "watered-down forms of engagement" with AI, such as citizen juries, which superficially look like democratic deliberation but may actually obscure important decisions about AI that fall outside the purview of the engagement situation (McQuillan 2022, 128).

In an interview about the book, McQuillan defines himself as an "AI abolitionist". [16]

Reception

The book has been praised for "masterfully disassembl[ing] AI as an epistemological, social, and political paradigm", [17] and for its examination of how most of the data fed into "privatized AI infrastructure is 'amputated' from context or embodied experience and ultimately processed through crowdsourcing". [18]

On the critical side, a review in the academic journal Justice, Power and Resistance took exception to the "nightmarish visions of Big Brother" offered by McQuillan, and argued that while many elements of AI may pose concern, a critique should not be based on a caricature of what AI is, concluding that McQuillan's work is "less of a theory and more of a Manifesto". [3] Another review notes "a disconnect between the technical aspects of AI and the socio-political analysis McQuillan provides." [7]

Although published before the debate around ChatGPT and large language models heated up, the book has not lost relevance to the AI discussion. [19] It has been noted [20] for suggesting a link between beliefs in artificial intelligence and racialised and gendered visions of intelligence overall, whereby a certain type of rational, measurable intelligence is privileged, leading to "historical notions of hierarchies of being". [21]

The blog Reboot praised McQuillan for offering a theory of harm for AI (an account of why AI could end up hurting people and society) that goes beyond tackling in isolation specific predicted problems of AI-centric systems: bias, non-inclusiveness, exploitativeness, environmental destructiveness, opacity, and non-contestability. [12]

For Williamson, [22] educational policy could also approach AI following McQuillan's reading:

In his book Resisting AI, Dan McQuillan argues that "When we're thinking about the actuality of AI, we can't separate the calculations in the code from the social context of its application" .... McQuillan's particular concern is how many contemporary applications of AI are amplifying existing inequalities and injustices as well as deepening social divisions and instabilities. His book makes a powerful case for anticipating these effects and actively resisting them for the good of societies.

Videos [19] [23] and podcasts [1] [24] [25] with an interest in AI and emerging technology have discussed the book.


References

  1. Sadowski, Jathan; Ongweso, Edward Jr.; McQuillan, Dan (2022). "186. Refusing the Everyday Fascism of Artificial Intelligence (ft. Dan McQuillan)". This Machine Kills (Podcast). Retrieved 30 January 2024 via SoundCloud.
  2. Rossi, Nicola A. (12 July 2022). "Resisting AI – A Review". OrwellSociety.com.
  3. Selkälä, Toni (1 November 2022). "Healthily futile: A quest for a different AI". Justice, Power and Resistance. 5 (3): 322–330. doi:10.1332/PLPB9191.
  4. McKenna, Brian (28 July 2023). "Resisting AI". Computer Weekly.
  5. van Toorn, G.; Soldatić, K. (9 April 2024). "Disablism, racism and the spectre of eugenics in digital welfare". Journal of Sociology. SAGE Publications. doi:10.1177/14407833241244828. ISSN 1440-7833.
  6. Golumbia, David (1 October 2023). "Resisting AI: An Anti-fascist Approach to Artificial Intelligence, by Dan McQuillan". Critical AI. 1 (1–2). doi:10.1215/2834703x-10734967. S2CID 263647209.
  7. Stürmer, Milan; Carrigan, Mark (16 November 2023). "Resisting AI: An Anti-fascist Approach to Artificial Intelligence – review". LSE Impact Blog. London School of Economics and Political Science.
  8. Knowles, Bran; Fledderjohann, Jasmine; Richards, John T.; Varshney, Kush R. (June 2023). "Trustworthy AI and the Logics of Intersectional Resistance". FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery. pp. 172–182. doi:10.1145/3593013.3593986.
  9. McQuillan (2022), pp. 62–63.
  10. Klovig Skelton, Sebastian (1 June 2023). "AI interview: Dan McQuillan, critical computing expert". Computer Weekly.
  11. McQuillan (2022), pp. 75–77.
  12. Tang, Joice; McKane, Andrus; McQuillan, Dan (2022). "Resisting AI ft. Dan McQuillan". Reboot. Retrieved 3 March 2024.
  13. McQuillan (2022), pp. 126, 141.
  14. "The Lucas Plan: How Greens and trade unionists can unite in common cause". TheEcologist.org. 2 November 2016.
  15. Lysen, F.; Wyatt, S. (31 December 2024). "Refusing participation: hesitations about designing responsible patient engagement with artificial intelligence in healthcare". Journal of Responsible Innovation. 11 (1). Routledge. doi:10.1080/23299460.2023.2300161. ISSN 2329-9460.
  16. Kremakova, Milena; McQuillan, Dan (6 June 2023). "Dan McQuillan in conversation: Big data, deep learning, and hold the apocalypse". The Sociological Review Magazine. doi:10.51428/tsr.inuk8253.
  17. Cabello Fernández-Delgado, F. (14 January 2024). "Dan McQuillan, Resisting AI: An Anti-Fascist Approach to Artificial Intelligence". International Journal of Communication. 18: 4. ISSN 1932-8036.
  18. McQuillan (2022), p. 13.
  19. Sharpe, Oli (2023). "Book Summary and Review: Resisting AI by Dan McQuillan, a comment". Go Meta. Retrieved 30 January 2024 via YouTube.
  20. Brown, M. (2023). "Smoke screen". AWorkingLibrary.com. Retrieved 16 February 2024.
  21. McQuillan (2022), p. 90.
  22. Williamson, B. (1 March 2024). "The Social life of AI in Education". International Journal of Artificial Intelligence in Education. 34 (1): 97–104. doi:10.1007/s40593-023-00342-5. ISSN 1560-4306.
  23. Scarfe, Tim; McQuillan, Dan (2023). "#109 – Dr. Dan McQuillan – Resisting AI". Machine Learning Street Talk. Retrieved 30 January 2024 via YouTube.
  24. Wickham, Eric; McQuillan, Dan (9 March 2023). "Why We Must Resist AI w/ Dan McQuillan". Tech Won't Save Us (Podcast). Harbinger Media Network. Retrieved 30 January 2024 via Apple Podcasts.
  25. Edwards, Milo; Kesvani, Hussein; Avizandum, Alice; McQuillan, Dan; et al. (4 April 2023). "Dark Satanic Data Mills feat. Dan McQuillan". TrashFuture (Podcast). Retrieved 30 January 2024 via PodBean.