| Author | Dan McQuillan |
|---|---|
| Language | English |
| Subjects | Artificial Intelligence |
| Publisher | Bristol University Press |
| Publication date | 2022 |
| Pages | 190 |
| ISBN | 978-1529213508 |
Resisting AI: An Anti-fascist Approach to Artificial Intelligence is a book on artificial intelligence (AI) by Dan McQuillan, published in 2022 by Bristol University Press.
Resisting AI takes the form of an extended essay, [1] which counters optimistic visions of AI's potential by arguing that AI is best seen as a continuation and reinforcement of bureaucratic forms of discrimination and violence, ultimately fostering authoritarian outcomes. [2] For McQuillan, AI's promise of objective calculability is antithetical to an egalitarian and just society. [3] [4] McQuillan uses the expression "AI violence" to describe how – on the basis of opaque algorithms – various actors can discriminate against categories of people in access to jobs, loans, medical care, and other benefits. [2]
The book suggests that AI has a political resonance with the soft eugenic approaches to the valuation of life taken by modern welfare states, [5] and that AI exhibits eugenic features in its underlying logic as well as in its technical operations. [5] The parallel is with historical eugenicists, who sought savings for the state by sterilizing those deemed "defective" so that the state would not have to care for their offspring. [5]
McQuillan's analysis goes beyond the familiar critique of AI systems fostering precarious labour markets, addressing "necropolitics" – the politics of who is entitled to live and who to die. [2] [6] Although McQuillan opens the book with a brief history of machine learning – including its need for "hidden and undercompensated labour" [6] – he is concerned more with the social impacts of AI than with its technical aspects. [7] [6] McQuillan sees AI as the continuation of existing bureaucratic systems that already marginalize vulnerable groups, aggravated by the fact that AI systems trained on existing data are likely to reinforce existing discrimination – for example, when attempting to optimize welfare distribution based on existing data patterns [7] – ultimately creating a system of "self-reinforcing social profiling". [8]
In elaborating on the continuity between existing bureaucratic violence and AI, McQuillan draws on Hannah Arendt's figure of the thoughtless bureaucrat in Eichmann in Jerusalem: A Report on the Banality of Evil, which now becomes the algorithm that, lacking intent, cannot be held accountable, and is thus endowed with an "algorithmic thoughtlessness". [9]
McQuillan defends the "fascist" in the title of the work by arguing that, while not all AI is fascist, this emerging technology of control may end up being deployed by fascist or authoritarian regimes. [10] For McQuillan, AI can support the diffusion of states of exception, being a technology impossible to properly regulate and a mechanism for multiplying exceptions more widely. One scenario in which AI surveillance systems could take discrimination to a new extreme is the initiative to create LGBT-free zones in Poland. [11] [7]
Skeptical that ethical regulation can control the technology, McQuillan proposes people's councils, workers' councils, and other forms of citizens' agency as means to resist AI. [7] A chapter titled "Post-Machine Learning" makes an appeal for resistance via currents of thought from feminist science (standpoint theory), post-normal science (extended peer communities), and new materialism; McQuillan encourages the reader to question the meaning of "objectivity" and argues for the necessity of alternative ways of knowing. [12] Among the virtuous examples of resistance – possibly to be adopted by AI workers themselves – McQuillan notes [13] the Lucas Plan of the workers of Lucas Aerospace Corporation, [14] in which a workforce declared redundant took control, reorienting the enterprise toward socially useful products. [10]
The work of McQuillan [15] warns against "watered-down forms of engagement" with AI, such as citizen juries, which superficially look like democratic deliberation but may actually obscure important decisions about AI that lie outside the purview of the engagement situation (McQuillan 2022, 128).
In an interview about the book, McQuillan defines himself as an "AI abolitionist". [16]
The book has been praised because it "masterfully disassembles AI as an epistemological, social, and political paradigm", [17] and for its examination of how most of the data fed into privatized AI infrastructure is "amputated" from context or embodied experience and ultimately processed through crowdsourcing. [18]
On the critical side, a review in the academic journal Justice, Power and Resistance took exception to the "nightmarish visions of Big Brother" offered by McQuillan, arguing that while many elements of AI may raise concern, a critique should not rest on a caricature of what AI is, and concluding that McQuillan's work is "less of a theory and more of a Manifesto". [3] Another review notes "a disconnect between the technical aspects of AI and the socio-political analysis McQuillan provides." [7]
Although the book was published before the debate over ChatGPT and large language models heated up, it has not lost relevance to the AI discussion. [19] It is noted [20] for suggesting a link between beliefs in artificial intelligence and racialised and gendered visions of intelligence overall, whereby a certain type of rational, measurable intelligence is privileged, leading to "historical notions of hierarchies of being". [21]
The blog Reboot praised McQuillan for offering a theory of harm for AI (an account of why AI could end up hurting people and society) that does not merely encourage tackling in isolation specific predicted problems of AI-centric systems: bias, non-inclusiveness, exploitativeness, environmental destructiveness, opacity, and non-contestability. [12]
For [22], educational policy could also approach AI along the lines of McQuillan's reading:
In his book Resisting AI, Dan McQuillan argues that "When we're thinking about the actuality of AI, we can't separate the calculations in the code from the social context of its application". [...] McQuillan's particular concern is how many contemporary applications of AI are amplifying existing inequalities and injustices as well as deepening social divisions and instabilities. His book makes a powerful case for anticipating these effects and actively resisting them for the good of societies.
The book has also been discussed in videos [19] [23] and podcasts [1] [24] [25] devoted to AI and emerging technology.