Ethics of quantification

Ethics of quantification is the study of the ethical issues associated with visible or invisible forms of quantification, including algorithms, metrics and indicators, and statistical and mathematical modelling, as noted in a review of the sociology of quantification. [1]

According to Espeland and Stevens, [2] an ethics of quantification would naturally descend from a sociology of quantification, especially in an age where democracy, merit, participation, accountability and even ''fairness'' are assumed to be best discovered and appreciated via numbers. In his classic work Trust in Numbers, Theodore M. Porter [3] notes how numbers meet a demand for quantified objectivity, and may for this reason be used by bureaucracies or institutions to gain legitimacy and epistemic authority.

For Andy Stirling of the STEPS Centre at Sussex University, there is a rhetorical element in concepts such as 'expected utility', 'decision theory', 'life cycle assessment', 'ecosystem services', 'sound scientific decisions' and 'evidence-based policy'. The instrumental application of these techniques, and their use of quantification to convey an impression of accuracy, may raise ethical concerns. [4]

For Sheila Jasanoff these technologies of quantification can be labeled 'technologies of hubris', [5] whose function is to reassure the public while keeping the wheels of science and industry turning. The downside of the technologies of hubris is that they may generate overconfidence through an appearance of exhaustiveness; they can pre-empt political discussion by transforming a political problem into a technical one; and they remain fundamentally limited in processing what takes place outside their restricted range of assumptions. Jasanoff contrasts technologies of hubris with 'technologies of humility', [6] which admit the existence of ambiguity, indeterminacy and complexity, and strive to bring the ethical nature of problems to the surface. Technologies of humility are also sensitive to the need to alleviate known causes of people's vulnerability, to pay attention to the distribution of benefits and risks, and to identify the factors and strategies that may promote or inhibit social learning.

For Sally Engle Merry, who studied indicators of human rights, gender violence and sex trafficking, quantification is a technology of control; whether it is reformist or authoritarian depends on who has harnessed its power and for what purpose. She notes that a set of principles should be followed to make indicators less misleading and distorting. [7]

The field of algorithms and artificial intelligence is the regime of quantification where the discussion of ethics is most advanced; see, for example, Weapons of Math Destruction [8] by Cathy O'Neil. While objectivity and efficiency are positive properties associated with the use of algorithms, ethical issues arise because these tools often come in the form of black boxes: [9] algorithms have the power to act upon data and make decisions, yet they remain to a large extent beyond query. [8] [10] The existence of surveillance capitalism is the theme of Shoshana Zuboff's 2019 book. [11] A more militant reading of the dangers posed by artificial intelligence is Resisting AI: An Anti-fascist Approach to Artificial Intelligence [12] by Dan McQuillan.


References

  1. E. Popp Berman and D. Hirschman, "The Sociology of Quantification: Where Are We Now?," Contemp. Sociol., vol. 47, no. 3, pp. 257–266, 2018.
  2. W. N. Espeland and M. L. Stevens, "A sociology of quantification," Eur. J. Sociol., vol. 49, no. 3, pp. 401–436, 2008.
  3. T. M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton University Press, 1995.
  4. A. Stirling, "How politics closes down uncertainty," STEPS Centre, 2019.
  5. S. Jasanoff, "Technologies of humility: Citizen participation in governing science," Minerva, vol. 41, pp. 223–244, 2003.
  6. S. Jasanoff, "Technologies of humility," Nature, vol. 450, p. 33, 2007.
  7. S. Engle Merry, The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking. University of Chicago Press, 2016.
  8. C. O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Random House Publishing Group, 2016.
  9. J. Danaher et al., "Algorithmic governance: Developing a research agenda through the power of collective intelligence," Big Data Soc., vol. 4, no. 2, pp. 1–21, 2017.
  10. R. Kitchin, "Thinking critically about and researching algorithms," Inf. Commun. Soc., vol. 20, no. 1, pp. 14–29, 2017.
  11. S. Zuboff, "Surveillance Capitalism and the Challenge of Collective Action," New Labor Forum, vol. 28, no. 1, pp. 10–29, 2019. doi:10.1177/1095796018819461.
  12. D. McQuillan, Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press, 2022.