Atlas of AI

First edition
Author: Kate Crawford
Language: English
Genre: Philosophy of artificial intelligence
Publisher: Yale University Press
Publication date: May 25, 2021
Media type: Paperback, hardback
Pages: 336 pp
ISBN: 9780300209570

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence is a book by Australian academic Kate Crawford. It is based on Crawford's research into the development and labor behind artificial intelligence, as well as AI's impact on the world.

Overview

The book is mainly concerned with the ethics of artificial intelligence.

Chapters 1 and 2 criticise Big Tech in general for its exploitation of Earth's resources, such as at the Thacker Pass lithium mine, and of human labor, such as in Amazon warehouses and on Amazon Mechanical Turk. Crawford also compares "TrueTime" in Google's Spanner with historical efforts to control time associated with colonialism.

Chapters 3 and 4 draw attention to the practice of building datasets without consent and of training on incorrect or biased data, with particular focus on ImageNet and on a failed Amazon project to classify job applicants.

Chapter 5 criticises affective computing for employing training sets that, although composed of natural images, were labelled by people trained in Paul Ekman's controversial emotional-expression research, in particular his Facial Action Coding System (FACS), which was based on posed images. Crawford implies that Affectiva's approach does not sufficiently attenuate the problems of FACS, and draws attention to the potentially inaccurate use of this technology in job interviews, without addressing claims that human bias is worse.

In Chapter 6, Crawford gives an overview of the intelligence agencies' surveillance software revealed in Edward Snowden's leaks, briefly comparing it with Cambridge Analytica and the military use of metadata, and recounts Google employees' objections to their unwitting involvement in Project Maven (which gave Google's image recognition a military use) before the project was moved to Palantir.

Chapter 7 criticises the common perception of AlphaGo as an otherworldly intelligence rather than the natural product of massive brute-force calculation carried out at environmental cost, and Chapter 8 discusses tech billionaires' fantasies of developing private spaceflight to escape resource depletion on Earth.

Reception

The book received positive reviews from critics, who singled out its exploration of issues like exploitation of labour and the environment, algorithmic bias, and false claims about AI's ability to recognize human emotion.[1][2]

The book was considered a seminal work by Anais Resseguier of AI and Ethics.[3] It was included on the year-end book lists of the Financial Times[4] and New Scientist,[5] and on the 2021 Choice Outstanding Academic Titles list.[6]

Data scientist and MIT Technology Review editor Karen Hao praised the book's description of the ethical concerns regarding the labor and history behind artificial intelligence.[7]

Sue Halpern of The New York Review wrote that the book shines a light on "dehumanizing extractive practices",[8] a sentiment echoed by Michael Spezio of Science.[9] Virginia Dignum of Nature positively compared the book's exploration of artificial intelligence to The Alignment Problem by Brian Christian.[10]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Eliezer Yudkowsky: American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Kate Crawford: Australian writer, composer, and academic

Kate Crawford is a researcher, writer, composer, producer and academic who studies the social and political implications of artificial intelligence. She is based in New York and is a principal researcher at Microsoft Research, co-founder and former director of research at the AI Now Institute at NYU, a visiting professor at the MIT Center for Civic Media, a senior fellow at the Information Law Institute at NYU, and an associate professor in the Journalism and Media Research Centre at the University of New South Wales. She is also a member of the World Economic Forum's Global Agenda Council on Data-Driven Development.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Kate Devlin: Northern Irish computer scientist, AI specialist

Kate Devlin, born Adela Katharine Devlin, is a Northern Irish computer scientist specialising in artificial intelligence and human–computer interaction (HCI). She is best known for her work on human sexuality and robotics; she was co-chair of the annual Love and Sex With Robots convention held in London in 2016 and founded the UK's first sex tech hackathon, held at Goldsmiths, University of London, in 2016. She is Senior Lecturer in Social and Cultural Artificial Intelligence in the Department of Digital Humanities at King's College London and the author of Turned On: Science, Sex and Robots, in addition to several academic papers.

Algorithmic bias: Technological phenomenon with social implications

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Joanna Bryson: Researcher and Professor of Ethics and Technology

Joanna Joy Bryson is a professor at the Hertie School in Berlin. She works on artificial intelligence, ethics, and collaborative cognition. She has been a British citizen since 2007.

Meredith Whittaker: American artificial intelligence research scientist

Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. She was formerly the Minderoo Research Professor at New York University (NYU), and the co-founder and faculty director of the AI Now Institute. She also served as a senior advisor on AI to Chair Lina Khan at the Federal Trade Commission. Whittaker was employed at Google for 13 years, where she founded Google's Open Research group and co-founded the M-Lab. In 2018, she was a core organizer of the Google Walkouts and resigned from the company in July 2019.

The AI Now Institute is an American research institute studying the social implications of artificial intelligence and policy research that addresses the concentration of power in the tech industry. AI Now has partnered with organizations such as the Distributed AI Research Institute (DAIR), Data & Society, the Ada Lovelace Institute, the New York University Tandon School of Engineering, the New York University Center for Data Science, the Partnership on AI, and the ACLU. AI Now has produced annual reports that examine the social implications of artificial intelligence. In 2021 and 2022, AI Now's leadership served as senior advisors on AI to Chair Lina Khan at the Federal Trade Commission. Its executive director is Amba Kak.

Rumman Chowdhury: Data scientist, AI specialist

Rumman Chowdhury is a Bangladeshi American data scientist, a business founder, and former responsible artificial intelligence lead at Accenture. She was born in Rockland County, New York. She is recognized for her contributions to the field of data science.

Timnit Gebru: Computer scientist

Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining. She is a co-founder of Black in AI, an advocacy group that has pushed for more Black roles in AI development and research. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

ACM Conference on Fairness, Accountability, and Transparency is a peer-reviewed academic conference series about ethics and computing systems. Sponsored by the Association for Computing Machinery, this conference focuses on issues such as algorithmic transparency, fairness in machine learning, bias, and ethics from a multi-disciplinary perspective. The conference community includes computer scientists, statisticians, social scientists, scholars of law, and others.

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

Algorithmic Justice League: Digital advocacy non-profit organization

The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness of the use of artificial intelligence (AI) and of the harms and biases that AI can pose to society. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action to mitigate the creation and deployment of biased AI systems. In 2021, Fast Company named the AJL one of the 10 most innovative AI companies in the world.

Rashida Richardson: American attorney and scholar

Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law and an attorney advisor to the Federal Trade Commission. She is also an assistant professor of law and political science at the Northeastern University School of Law and the Northeastern University Department of Political Science in the College of Social Sciences and Humanities.

Black in AI: Technology research organization

Black in AI, formally called the Black in AI Workshop, is a technology research organization and affinity group, founded by computer scientists Timnit Gebru and Rediet Abebe in 2017. It started as a conference workshop, later pivoting into an organization. Black in AI increases the presence and inclusion of Black people in the field of artificial intelligence (AI) by creating space for sharing ideas, fostering collaborations, mentorship, and advocacy.

The Alignment Problem: 2020 non-fiction book by Brian Christian

The Alignment Problem: Machine Learning and Human Values is a 2020 non-fiction book by the American writer Brian Christian. It is based on numerous interviews with experts trying to build artificial intelligence systems, particularly machine learning systems, that are aligned with human values.

Tess Posner: American social entrepreneur and AI diversity advocate

Tess Posner is an American social entrepreneur and musician known for her work in artificial intelligence advocacy and ethics, focusing on increasing equity and inclusion in technology.

The AAAI/ACM Conference on AI, Ethics, and Society (AIES) is a peer-reviewed academic conference series focused on societal and ethical aspects of artificial intelligence. The conference is jointly organized by the Association for Computing Machinery, namely the Special Interest Group on Artificial Intelligence (SIGAI), and the Association for the Advancement of Artificial Intelligence, and "is designed to shift the dynamics of the conversation on AI and ethics to concrete actions that scientists, businesses and society alike can take to ensure this promising technology is ushered into the world responsibly." The conference community includes lawyers, practitioners, and academics in computer science, philosophy, public policy, economics, human-computer interaction, and more.

References

  1. "Briefly Noted". The New Yorker. 5 May 2021. Retrieved 23 January 2022.
  2. Grossman, Wendy M. "Atlas of AI, book review: Mapping out the total cost of artificial intelligence". ZDNet. Retrieved 23 January 2022.
  3. Resseguier, Anais (25 October 2021). "Thinking AI with a hammer. Kate Crawford's Atlas of AI (2021)". AI and Ethics. 2: 247–248. doi: 10.1007/s43681-021-00115-7 . ISSN   2730-5961. S2CID   239934668.
  4. Thornhill, John (15 November 2021). "Best books of 2021: Technology". Financial Times. Retrieved 23 January 2022.
  5. Ings, Simon (1 December 2021). "The best books of 2021 - New Scientist's Christmas gift guide". New Scientist. Archived from the original on 1 December 2021. Retrieved 23 January 2022.
  6. "Atlas of AI by Kate Crawford". Yale Books UK. Retrieved 23 January 2022.
  7. "Stop talking about AI ethics. It's time to talk about power". MIT Technology Review. Retrieved 23 January 2022.
  8. Halpern, Sue. "The Human Costs of AI". ISSN   0028-7504 . Retrieved 23 January 2022.
  9. Spezio, Michael (16 April 2021). "AI empires". Science. 372 (6539): 246. Bibcode:2021Sci...372..246S. doi:10.1126/science.abh2250. S2CID   233245051.
  10. Dignum, Virginia (26 May 2021). "AI — the people and places that make, use and manage it". Nature. 593 (7860): 499–500. Bibcode:2021Natur.593..499D. doi: 10.1038/d41586-021-01397-x . S2CID   235216649.