Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. [2] [3] They are sometimes categorized as a subclass of existential risks. [4]
According to some scholars, s-risks warrant serious consideration because they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering. Thus, s-risks are considered a morally urgent matter, despite the potential benefits of these technologies. [5]
Sources of possible s-risks include embodied artificial intelligence [6] and superintelligence, [7] as well as space colonization, which could potentially lead to "constant and catastrophic wars" [8] and an immense increase in wild animal suffering by introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, either intentionally or inadvertently. [9]
Artificial intelligence is central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space. [10] Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds. [3] Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks. [6] Brian Tomasik has argued that astronomical suffering could emerge from solving the AI alignment problem incompletely. He argues for the possibility of a "near miss" scenario: a superintelligent AI that is slightly misaligned may be more likely to cause astronomical suffering than one that is completely unaligned. [11]
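Tomasik's "near miss" intuition can be illustrated with a toy model. Everything below, including the functional forms and numbers, is an illustrative assumption for exposition, not anything from the cited work:

```python
# Toy model of the "near miss" argument; all functional forms and constants
# here are illustrative assumptions, not taken from the cited sources.
def expected_suffering(alignment, scale=100.0):
    """Sketch: a fully unaligned AI (alignment=0) has little reason to
    instantiate sentient minds, and a fully aligned AI (alignment=1)
    avoids causing suffering; a nearly aligned AI may create many
    sentient minds while getting their welfare slightly wrong."""
    minds_created = alignment ** 4    # interest in sentient minds rises with alignment
    welfare_error = 1.0 - alignment   # residual misalignment
    return scale * minds_created * welfare_error

# Under these assumptions, expected suffering peaks at high-but-imperfect
# alignment rather than at zero alignment:
peak_value, peak_alignment = max(
    (expected_suffering(a / 100), a / 100) for a in range(101)
)
```

In this sketch the maximum falls at alignment 0.8, which is the shape of the claim: the worst outcomes sit near, but not at, full alignment.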
Space colonization could increase suffering by introducing wild animals to new environments, leading to ecological imbalances. In unfamiliar habitats, animals may struggle to survive, facing hunger, disease, and predation. These challenges, combined with unstable ecosystems, could cause population crashes or explosions, resulting in widespread suffering. The lack of natural predators or proper biodiversity on colonized planets could worsen the situation, mirroring Earth’s ecological problems on a larger scale and raising ethical concerns about the unintended propagation of immense animal suffering in new, unstable ecosystems.

Phil Torres argues that space colonization poses significant "suffering risks": expansion into space would lead to the creation of diverse species and civilizations with conflicting interests. These differences, combined with advanced weaponry and the vast distances between civilizations, would result in catastrophic and unresolvable conflicts. Strategies such as a "cosmic Leviathan" to impose order, or policies of deterrence, are unlikely to succeed given the physical limitations of space and the destructive power of future technologies. Torres therefore concludes that space colonization could create immense suffering and should be delayed or avoided altogether. [12]
David Pearce has argued that genetic engineering is a potential s-risk. Pearce argues that while technological mastery over the pleasure-pain axis and solving the hard problem of consciousness could lead to the eradication of suffering, it could also increase the contrast in the hedonic range that sentient beings could experience. He argues that these technologies might make it feasible to create states of "hyperpain" or "dolorium" involving levels of suffering beyond the human range. [13]
S-risk scenarios may arise from excessive criminal punishment, with precedents in both historical and modern penal systems. These risks escalate in situations such as warfare or terrorism, especially when advanced technology is involved, as conflict can amplify destructive tendencies like sadism, tribalism, and retributivism. War often intensifies these dynamics, with the possibility of catastrophic threats being used to force concessions. Agential s-risks are further aggravated by malevolent traits in powerful individuals, such as narcissism or psychopathy, as exemplified by 20th-century totalitarian dictators like Hitler and Stalin, whose actions inflicted widespread suffering. [14]
According to David Pearce, there are other potential s-risks that are more exotic, such as those posed by the many-worlds interpretation of quantum mechanics. [13]
To mitigate s-risks, efforts focus on researching and understanding the factors that exacerbate them, particularly in emerging technologies and social structures. Targeted strategies include promoting safe AI design, ensuring cooperation among AI developers, and modeling future civilizations to anticipate risks. Broad strategies may advocate for moral norms against large-scale suffering and stable political institutions. According to Anthony DiGiovanni, prioritizing s-risk reduction is essential, as it may be more manageable than other long-term challenges, while avoiding catastrophic outcomes could be easier than achieving an entirely utopian future. [15]
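DiGiovanni's prioritization claim is sometimes framed as an expected-value comparison between interventions. The sketch below uses entirely made-up placeholder numbers to show only the shape of that reasoning, not any actual estimate from the cited source:

```python
# Placeholder expected-value comparison; every number below is an
# illustrative assumption, not an estimate from the cited source.
def expected_impact(p_success, stakes):
    """Expected moral value of an intervention: tractability times stakes."""
    return p_success * stakes

# Averting a catastrophic outcome is modeled here as more tractable than
# steering civilization toward a fully utopian outcome:
avoid_catastrophe = expected_impact(p_success=0.01, stakes=1e9)
achieve_utopia = expected_impact(p_success=0.001, stakes=5e9)
```

Under these placeholder numbers the catastrophe-avoidance intervention scores higher, mirroring the argument that avoiding the worst outcomes may be more manageable than achieving the best ones.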
Induced amnesia has been proposed as a way to mitigate s-risks in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids. [16]
David Pearce's concept of "cosmic rescue missions" proposes sending probes to alleviate potential suffering in extraterrestrial environments. These missions aim to identify and mitigate suffering among hypothetical extraterrestrial life forms, ensuring that if life exists elsewhere, it is treated ethically. [17] However, challenges include the lack of confirmed extraterrestrial life, uncertainty about whether such life would be conscious, and questions of public support, with some environmentalists advocating non-interference and others prioritizing resource extraction. [18]
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now-dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Space and survival is the idea that the long-term survival of the human species and technological civilization requires the building of a spacefaring civilization that utilizes the resources of outer space, and that not doing this might lead to human extinction. A related observation is that the window of opportunity for doing this may be limited due to the decreasing amount of surplus resources that will be available over time as a result of an ever-growing population.
Human extinction or omnicide is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".
In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.
Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of Cyber Security Lab, in the department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.
The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.
Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe is a 2014 book by David Denkenberger and Joshua M. Pearce, published by Elsevier under its Academic Press imprint.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.
Existential risk studies (ERS) is a field of study focused on the definition and theorization of "existential risks", their ethical implications, and the related strategies of long-term survival. Existential risks are diversely defined by ERS theorists as global calamities with the capacity to induce the extinction of intelligent earthling life, such as humans, or at least a severe limitation of their potential. The field's development and expansion can be divided into waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism and longtermism.