Suffering risks

Scope–severity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"

Suffering risks, or s-risks, are risks involving suffering on an astronomical scale, far exceeding all of the suffering that has occurred on Earth thus far.[2][3] They are sometimes categorized as a subclass of existential risks.[4]

Sources of possible s-risks include embodied artificial intelligence[5] and superintelligence,[6] as well as space colonization, which could potentially lead to "constant and catastrophic wars"[7] and to an immense increase in wild animal suffering, by introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, whether intentionally or inadvertently.[8]

Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks. [5]

Related Research Articles

Eliezer Yudkowsky – American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

Nick Bostrom – philosopher and writer (born 1973)

Nick Bostrom is a philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He is the founding director of the Future of Humanity Institute at Oxford University.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks, as opposed to narrow AI, which is designed for specific tasks. It is one of various definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems, whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion, and may or may not be associated with a technological singularity.

AI takeover – hypothetical artificial intelligence scenario

An AI takeover is a scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

The Institute for Ethics and Emerging Technologies (IEET) is a technoprogressive think tank that seeks to "promote ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies." It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes.

Human extinction – hypothetical end of the human species

Human extinction is the hypothetical end of the human species, whether through external natural causes, such as an asteroid impact or large-scale volcanism, or through anthropogenic (self-inflicted) causes, for example sub-replacement fertility leading to population decline.

Future of Humanity Institute – Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff include futurist Anders Sandberg and Giving What We Can founder Toby Ord.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificial intelligence (AI) systems.

Population ethics is the philosophical study of the ethical problems arising when our actions affect who is born and how many people are born in the future. An important area within population ethics is population axiology, which is "the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live."

Global catastrophic risk – potentially harmful worldwide events

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk."

Machine ethics is the part of the ethics of artificial intelligence concerned with adding or ensuring moral behavior in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.

Superintelligence: Paths, Dangers, Strategies – 2014 book by Nick Bostrom

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may endlessly pursue instrumental goals, that is, goals pursued in service of some particular end rather than as ends in themselves, provided that their ultimate (intrinsic) goals can never be fully satisfied.

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

Seth Baum – American researcher

Seth Baum is an American researcher involved in the field of risk research. He is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on existential risk. He is also affiliated with the Blue Marble Space Institute of Science and the Columbia University Center for Research on Environmental Decisions.

Human Compatible – 2019 book by Stuart J. Russell

Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, the OECD and others. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage the associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.

References

  1. Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority" (PDF). Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002.
  2. Daniel, Max (2017-06-20). "S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017)". Center on Long-Term Risk. Retrieved 2023-09-14.
  3. Hilton, Benjamin (September 2022). "'S-risks'". 80,000 Hours. Retrieved 2023-09-14.
  4. Baumann, Tobias (2017). "S-risk FAQ". Center for Reducing Suffering. Retrieved 2023-09-14.
  5. Umbrello, Steven; Sorgner, Stefan Lorenz (June 2019). "Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence". Philosophies. 4 (2): 24. doi:10.3390/philosophies4020024. hdl:2318/1702133.
  6. Sotala, Kaj; Gloor, Lukas (2017-12-27). "Superintelligence As a Cause or Cure For Risks of Astronomical Suffering". Informatica. 41 (4). ISSN 1854-3871.
  7. Torres, Phil (2018-06-01). "Space colonization and suffering risks: Reassessing the "maxipok rule"". Futures. 100: 74–85. doi:10.1016/j.futures.2018.04.008. ISSN 0016-3287. S2CID 149794325.
  8. Kovic, Marko (2021-02-01). "Risks of space colonization". Futures. 126: 102638. doi:10.1016/j.futures.2020.102638. ISSN 0016-3287. S2CID 230597480.
