Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. [2] [3] They are sometimes categorized as a subclass of existential risks. [4]
According to some scholars, s-risks warrant serious consideration as they are not extremely unlikely and can arise from unforeseen scenarios. Although they may appear speculative, factors such as technological advancement, power dynamics, and historical precedents indicate that advanced technology could inadvertently result in substantial suffering. Thus, s-risks are considered to be a morally urgent matter, despite the possibility of technological benefits. [5]
Sources of possible s-risks include embodied artificial intelligence [6] and superintelligence, [7] as well as space colonization, which could potentially lead to "constant and catastrophic wars" [8] and an immense increase in wild animal suffering by introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, either intentionally or inadvertently. [9]
Artificial intelligence is central to s-risk discussions because it may eventually enable powerful actors to control vast technological systems. In a worst-case scenario, AI could be used to create systems of perpetual suffering, such as a totalitarian regime expanding across space. Additionally, s-risks might arise incidentally, such as through AI-driven simulations of conscious beings experiencing suffering, or from economic activities that disregard the well-being of nonhuman or digital minds. [10] Steven Umbrello, an AI ethics researcher, has warned that biological computing may make system design more prone to s-risks. [6] Brian Tomasik has argued that astronomical suffering could emerge from an incomplete solution to the AI alignment problem. He argues for the possibility of a "near miss" scenario, in which a superintelligent AI that is slightly misaligned would be more likely to cause astronomical suffering than a completely unaligned AI. [11]
Space colonization could increase suffering by introducing wild animals to new environments. In unfamiliar habitats, animals may struggle to survive, facing hunger, disease, and predation, and unstable ecosystems could produce population crashes or explosions, resulting in widespread suffering. The lack of natural predators or adequate biodiversity on colonized planets could worsen the situation, reproducing Earth's ecological problems on a larger scale and raising ethical concerns about the unintended consequences of expansion. Phil Torres argues that space colonization poses significant "suffering risks": expansion into space would lead to the creation of diverse species and civilizations with conflicting interests, and these differences, combined with advanced weaponry and the vast distances between civilizations, would result in catastrophic and unresolvable conflicts. Strategies such as a "cosmic Leviathan" to impose order, or policies of deterrence, are unlikely to succeed given the physical limitations of space and the destructive power of future technologies. Torres therefore concludes that space colonization could create immense suffering and should be delayed or avoided altogether. [12]
David Pearce has argued that genetic engineering is a potential s-risk. Pearce argues that while technological mastery over the pleasure-pain axis and solving the hard problem of consciousness could lead to the eradication of suffering, it could also widen the hedonic range that sentient beings are able to experience. He argues that these technologies might make it feasible to create states of "hyperpain" or "dolorium" involving levels of suffering beyond the human range. [13]
S-risk scenarios may arise from excessive criminal punishment, which has precedents in both historical and modern penal systems. These risks escalate in situations such as warfare or terrorism, especially when advanced technology is involved, as conflict can amplify destructive tendencies like sadism, tribalism, and retributivism. War often intensifies these dynamics, including the possibility of catastrophic threats being used to force concessions. Agential s-risks are further aggravated by malevolent traits in powerful individuals, such as narcissism or psychopathy, as exemplified by totalitarian dictators like Hitler and Stalin, whose actions in the 20th century inflicted widespread suffering. [14]
According to David Pearce, there are other potential s-risks that are more exotic, such as those posed by the many-worlds interpretation of quantum mechanics. [13]
According to Tobias Baumann, s-risks can be grouped into three main categories: incidental s-risks, which arise as a side effect of pursuing other goals; agential s-risks, which are deliberately brought about by some agent; and natural s-risks, which occur without human involvement, such as wild animal suffering.
Baumann emphasizes that these examples are speculative and acknowledges the uncertainty of future developments. He also warns of availability bias, which can lead to overestimating the likelihood of certain scenarios, stressing the importance of considering a broad spectrum of potential s-risks. [5]
To mitigate s-risks, efforts focus on researching and understanding the factors that exacerbate them, particularly in emerging technologies and social structures. Targeted strategies include promoting safe AI design, ensuring cooperation among AI developers, and modeling future civilizations to anticipate risks. Broad strategies include promoting moral norms against large-scale suffering and building stable political institutions. According to Anthony DiGiovanni, prioritizing s-risk reduction is essential, as it may be more tractable than other long-term challenges: avoiding catastrophic outcomes could be easier than achieving an entirely utopian future. [15]
Induced amnesia has been proposed as a way to mitigate s-risks in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids. [16]
David Pearce's concept of "cosmic rescue missions" proposes the idea of sending probes to alleviate potential suffering in extraterrestrial environments. These missions aim to identify and mitigate suffering among hypothetical extraterrestrial life forms, ensuring that if life exists elsewhere, it is treated ethically. [17] However, challenges include the lack of confirmed extraterrestrial life, uncertainty about their consciousness, and public support concerns, with environmentalists advocating for non-interference and others focusing on resource extraction. [18]
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Space and survival is the idea that the long-term survival of the human species and technological civilization requires the building of a spacefaring civilization that utilizes the resources of outer space, and that not doing this might lead to human extinction. A related observation is that the window of opportunity for doing this may be limited due to the decreasing amount of surplus resources that will be available over time as a result of an ever-growing population.
The Great Filter is the idea that, in the development of life from the earliest stages of abiogenesis to reaching the highest levels of development on the Kardashev scale, there is a barrier to development that makes detectable extraterrestrial life exceedingly rare. The Great Filter is one possible resolution of the Fermi paradox.
Human extinction or omnicide is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".
In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.
Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of Cyber Security Lab, in the department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.
The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
Seth Baum is an American researcher involved in the field of risk research. He is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on existential risk. He is also affiliated with the Blue Marble Space Institute of Science and the Columbia University Center for Research on Environmental Decisions.
Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.
Existential risk studies (ERS) is a field of study focused on the definition and theorization of "existential risks", their ethical implications, and related strategies for long-term survival. Existential risks are variously defined by ERS theorists as global calamities with the capacity to cause the extinction of intelligent earthling life, such as humans, or at least a severe limitation of its potential. The field's development and expansion can be divided into waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism and longtermism.
The ethics of simulated suffering examines the moral, philosophical, and practical implications of creating simulations that might lead to experiences of suffering. As technology advances, especially in the fields of artificial intelligence (AI) and virtual reality, there is growing concern that complex simulations could create entities capable of experiencing suffering. This area of ethics, intersecting with AI ethics and effective altruism, raises significant questions about moral responsibility, risk management, and societal regulation.