Existential risk studies (ERS) is a field of study focused on the definition and theorization of "existential risks", their ethical implications and the related strategies of long-term survival. [1] [2] [3] [4] Existential risks are variously defined by ERS theorists as global calamities capable of causing the extinction of intelligent Earth-originating life, such as humans, or at least a severe limitation of its potential. [5] [6] The field's development and expansion can be divided into waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism and longtermism. [2]
The historical precursors of existential risk studies can be found in early 19th-century thought on human extinction and in the more recent models and theories of global catastrophic risk, which date mainly to the Cold War period, especially thinking about a hypothetical nuclear holocaust. [7] ERS emerged as a distinctive and unified field in the early 2000s, [2] experiencing rapid growth in academia [8] and among the general public with the publication of popular-oriented books. [1] The field has also fostered the creation of a number of foundations, research centers and think tanks, some of which have received substantial philanthropic funding [1] and gained prominence within prestigious universities.
The idea of existential risks has its prehistory in speculation on the possibility of human extinction. The prospect of extinction is itself a break from earlier religious and mythological eschatology insofar as it is conceived as an absolute and naturalistic event. [9] As such, human extinction is a recent invention in the intellectual history of calamity. [5]
Its major historical source lies in science fiction literature. Notably, Lord Byron was, according to reports, concerned that a comet impact could bring about the destruction of humanity, while his poem "Darkness" describes a future in which the Earth becomes lifeless. Mary Shelley's novel The Last Man provides another example of early naturalistic catastrophic imagination, telling the story of a man who lives through the death of the rest of humanity in the final decades of the 21st century, caused by a series of events including a worldwide plague. [5] The idea of the "last man" can itself be traced to an emerging genre of 19th-century literature, originating most probably with Jean-Baptiste Cousin de Grainville's work, also titled The Last Man, published in 1805, in which humanity lives through a crisis of infertility. A later rendition of this theme can be found in The Time Machine, published by H. G. Wells in 1895, in which a time traveller journeys 30 million years into the future and finds the Earth a cold and almost lifeless planet, owing to the cooling of the Sun. Around the same period, Wells wrote two other texts on extinction, this time as nonfiction essays, titled "On Extinction" (1893) and "The Extinction of Man" (1897). [5] In the 20th century, human extinction persisted as a theme in science fiction. Isaac Asimov not only concerned himself with the possibility of civilizational collapse in his Foundation trilogy, but also wrote a nonfiction book on the subject, titled A Choice of Catastrophes: The Disasters That Threaten Our World, published in 1979. [10]
Another precursory trend for existential risks is identifiable in the discourses of scientific concern about catastrophe that emerged primarily in reaction to the invention of nuclear weapons. These early responses attended especially to the possibility of atmospheric ignition, which was soon dismissed as implausible, as well as to radioactive contamination, which became a substantial and persistent theme in the discussion of possible catastrophic events. The risk posed by radioactive fallout prompted a rapid mobilization among scientists and intellectuals, notably exemplified by the 1955 Russell–Einstein Manifesto, which warned of the possibility of human extinction. As a consequence, the Pugwash Conferences on Science and World Affairs were established with the purpose of reducing the threat of armed conflict. A similar effort is exemplified by the creation of the Bulletin of the Atomic Scientists, gathering former members of the Manhattan Project. The Bulletin has also created and maintained the iconic Doomsday Clock with the purpose of tracking global catastrophic risk while representing it in a temporal fashion. [11]
The foundational moment of ERS can be dated to the publication of Nick Bostrom's 2002 essay "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". In this essay, Bostrom sought to frame human extinction as a topic of philosophical pertinence to the analytic and utilitarian traditions, mainly by dissociating it from past apocalyptic literature and by presenting a schematized and holistic review of possible threats to human survival or, more generally, to humanity's capacity to realize its own potential, as defined by him; this stands as the canonical definition of existential risk. [12] [13] [3] [14] He also paired the study of existential risks with the prospect of overcoming them through colossal technological development, which would allow long-term human survival through outer space colonization. [15] Most of the essay consists of his proposed classification of existential risks, composed of four categories idiomatically named "Bangs", "Crunches", "Shrieks" and "Whimpers", all inspired by T. S. Eliot's poem "The Hollow Men". [16]
The essay brought Bostrom significant academic recognition, helping him attain a professorship at Oxford University as well as, in 2005, the directorship of the now defunct Future of Humanity Institute, which he helped to found. [17] The Centre for the Study of Existential Risk was established at Cambridge University in 2012, prompting the creation of similar centres at other universities. [17]
This initial rendition of existential risks established what has been termed the 'first wave' of ERS. [14] It has been described as an instance of technological utopianism, defined by its expectation of, or, as Noah B. Taylor characterizes it, its "teleological momentum" toward, a posthuman vision of the future. [14]
Proponents of this wave of ERS placed their hope and faith in technology, particularly artificial intelligence, genetic engineering, and nanotechnology, to lead humanity into a posthuman state in which the divisions between the physical, virtual, mechanical, and biological blur. [14]
The second wave, or generation, of ERS was characterized by its effort to elaborate on Bostrom's foundational work, and was further distinguished by its growing relations and interaction with effective altruism. The emphasis on transhumanism is considered to have diminished during this period. [18]
After its relative institutional consolidation and the growing number of scholars engaged with the field, ERS became increasingly occupied with issues relating to the diversity of its constituency and the need for theoretical pluralism in its research. Some scholars of ERS focused on critical examinations of the "historically dominant" [1] approach within the field, termed by some the "techno-utopian approach". [1] This technological utopianism has formed the theoretical core of ERS, drawing substantial inspiration from transhumanism, longtermism and the current of utilitarianism known as total utilitarianism. The scholars most critical of this background have claimed that it suffers from intrinsic moral unreliability and methodological flaws, which in their view demonstrates the need for new frameworks for ERS, especially ones that strengthen democratic values perceived as lacking in the original formulation. [19]
The canonical definition of existential risk was proposed early on by Bostrom in his essay "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", which establishes it as a risk "(...) where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential", [20] implying a kind of risk that is both global and terminal. The definition was further elaborated by Bostrom in another essay, "Existential Risk Prevention as Global Priority", published in 2013. [13] It consequently excludes, or at least relates only indirectly to, forms of calamity and mass suffering that remain below the threshold established by ERS theorists. Genocides and enslavement are examples of these "local terminal risk[s]", while "global endurable risks" might range from moderate levels of global warming to threats to biodiversity and global economic recessions. [21] [20] In this sense, the 'existential' in existential risks is distinguished from other 'catastrophic' forms of risk, being essentially related to the concept of human potentiality also elaborated by Bostrom. [20] As the author himself explains:
"Tragic as such events are to the people immediately affected, in the big picture of things — from the perspective of humankind as a whole — even the worst of these catastrophes are mere ripples on the surface of the great sea of life." [22]
The perceived problems of this definition of existential risk, primarily relating to its scale, have led other scholars in the field to prefer a broader category that is less exclusively tied to posthuman expectations and extinction scenarios, such as "global catastrophic risks". Bostrom himself has partially incorporated this concept into his work, editing a book titled Global Catastrophic Risks, while still not abandoning the emphasis on the specificity of 'existential' risks, given their "pan-generational" rather than merely "endurable" dimension. Other prominent theorists of the field, such as Toby Ord, remain inclined toward the canonical transhumanist definition. [23]
Maximizing future value is a concept of ERS defined by Nick Bostrom which exerted an early and persistent influence on the field, especially in the stream of thought most closely related to the first wave or techno-utopian paradigm of existential risks. Bostrom summarized the concept in the "Maxipok rule", which he defined as "maximize the probability of an okay outcome, where an okay outcome is any outcome that avoids existential disaster". [24]
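Read as a decision criterion, the rule admits a minimal formalization; the notation below is not Bostrom's own but one plausible rendering of his prose definition, where $A$ stands for the set of available actions and "OK" denotes any outcome that avoids existential disaster:

$$a^{*} = \underset{a \in A}{\arg\max}\; P(\mathrm{OK} \mid a)$$

That is, among the options considered, the rule recommends whichever action makes an existential catastrophe least likely, regardless of how the non-catastrophic outcomes compare in other respects.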
In his foundational essay, Bostrom proposes four categories of risks according to their outcome, all dealing with some sort of limitation of potential. Under each category, the risks listed are organized in descending order of probability, starting with the outcome the author considers most probable. [25] [16]
Existential risk studies developed a substantial relationship with the philanthropic philosophy and community of effective altruism, effectively embracing many of its core ideas as well as attracting a number of effective altruists into the field. [26] The EA community has also contributed financially to the academic consolidation of ERS. [27]
Some scholars within the field of ERS have claimed the need for a more attentive examination of its original theoretical core and an opening toward theoretical pluralism that seeks to rectify the perceived methodological and moral flaws of this historically dominant approach. [19] This original theoretical base of ERS has been termed by some the "techno-utopian approach", in reference to the general idea of technological utopianism, and has been defined by its strong bonds with transhumanism, longtermism and so-called total utilitarianism. In this sense, the premises of the techno-utopian approach are manifested in three assumptions, not explicitly and fully shared by all its adherents: that a "(...) maximally technologically developed future could contain (and is defined in terms of) enormous quantities of utilitarian intrinsic value, particularly due to more fulfilling posthuman modes of living"; [19] that its failure would represent an "existential catastrophe"; [19] and, lastly, that the present moral obligation is to ensure the realization of this posthuman future, "(...) including through exceptional actions." [19] These assumptions are considered particularly essential to the canonical definition of existential risk. [13]
The technological utopianism paradigm of ERS is considered most visibly and influentially articulated in Nick Bostrom's foundational work, both his aforementioned 2002 and 2013 essays and his 2003 paper titled "Astronomical Waste". [13] Popular books by thinkers of existential risk, such as The Precipice, Superintelligence, and What We Owe the Future, have also raised the public profile of technological utopianism. [13]
Some scholars consider the concept of existential risk established within ERS to be excessively restrictive and narrow, which, they argue, discloses a colonialist attitude of neglect toward the history of genocides, especially the colonial genocide of indigenous peoples. Nick Bostrom, for example, explicitly states that the starting point for anthropogenic existential risks is the period after the end of World War II, with the invention of nuclear weapons. [28]
In a 2023 Global Policy article, Hobson and Corry write: [29]
The question is then: will existential security bring necessary emergency measures of a collective kind to bear on emerging catastrophic global threats, or erode ‘normal’ politics of domestic and international society (to the extent that these exist) and potentially legitimate a pursuit, not of global interests, but of a hegemonic set of interests posing as humanity? [...] [A]ny notion of collective global interest is inevitably already shot through with particular (geo)political positions and interests. The persistence of ‘the international’—the division of the social world into multiple uneven units (Rosenberg, 2006)—means that any universal category (of human or civilisation) will be partial or lodged in partial political communities. Legacies of violence and extinction perpetrated in the name of humanity and civilisation make for a bad track record. Added to the statist baggage of existing security practices and discourses, the potential violence of enacting security measures in the name of protecting a planetary or species category should therefore not be overlooked.
Theorists of ERS, most prominently Bostrom, have often claimed that 'existential risk' is an understudied subject in academic literature. In his 2013 essay "Existential Risk Prevention as Global Priority", Bostrom remarked that the Scopus database contained 900 papers on dung beetles but fewer than 50 papers when searching for "human extinction", which confirms, in Bostrom's view, the neglected state of research on this subject. [30]
However, other researchers have contested and criticized both the premises and conclusions of this claim and the particular experiment that Bostrom used to substantiate it. Joshua Schuster and Derek Woods noted that the same search, conducted in March 2020, did return a marginally higher number of papers on human extinction; yet the search for a commonly related term, "genocide", returned 7,166 papers. In a different database, JSTOR, the researchers found 66,809 results for "human extinction", 43,926 for "genocide" and 134,089 for "extinction". Moreover, searches for specific instances of existential risk, such as nuclear war or genetically engineered bioweapons, yield an enormous accumulation of research. Both authors claim that this difference is symptomatic of Bostrom's attachment to self-defined criteria and terms for this kind of theme, remaining, according to them, inattentive to the research around human rights and genocide prevention. [28]
Posthumanism or post-humanism is an idea in continental philosophy and critical theory responding to the presence of anthropocentrism in 21st-century thought. Posthumanization comprises "those processes by which a society comes to include members other than 'natural' biological human beings who, in one way or another, contribute to the structures, dynamics, or meaning of the society."
Transhumanism is a philosophical and intellectual movement that advocates the enhancement of the human condition by developing and making widely available new and future technologies that can greatly enhance longevity, cognition, and well-being.
Technological utopianism is any ideology based on the premise that advances in science and technology could and should bring about a utopia, or at least help to fulfill one or another utopian ideal.
Nick Bostrom is a Swedish philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.
Space and survival is the idea that the long-term survival of the human species and technological civilization requires the building of a spacefaring civilization that utilizes the resources of outer space, and that not doing this might lead to human extinction. A related observation is that the window of opportunity for doing this may be limited due to the decreasing amount of surplus resources that will be available over time as a result of an ever-growing population.
The Institute for Ethics and Emerging Technologies (IEET) is a technoprogressive think tank that seeks to "promote ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies." It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes.
Human extinction is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.
Differential technological development is a strategy of technology governance aiming to decrease risks from emerging technologies by influencing the sequence in which they are developed. On this strategy, societies would strive to delay the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones.
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".
In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.
Existential risk from AI refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
The Precipice: Existential Risk and the Future of Humanity is a 2020 non-fiction book by the Australian philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute in Oxford. It argues that humanity faces unprecedented risks over the next few centuries and examines the moral significance of safeguarding humanity's future.
Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. They are sometimes categorized as a subclass of existential risks.
Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.
Émile P. Torres is an American philosopher, intellectual historian, author, and postdoctoral researcher at Case Western Reserve University. Their research focuses on eschatology, existential risk, and human extinction. Along with computer scientist Timnit Gebru, Torres coined the acronym neologism "TESCREAL" to criticize what they see as a group of related philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.
TESCREAL is an acronym neologism proposed by computer scientist Timnit Gebru and philosopher Émile P. Torres that stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism". Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins. They say this is a movement that allows its proponents to use the threat of human extinction to justify expensive or detrimental projects. They consider it pervasive in social and academic circles in Silicon Valley centered around artificial intelligence. As such, the acronym is sometimes used to criticize a perceived belief system associated with Big Tech.
"Letter from Utopia" is a fictional letter written by philosopher Nick Bostrom in 2008. It depicts, what Bostrom describes as, "A vision of the future, from the future". In the essay, a posthuman in the far future writes to humanity in the deep past, describing how wonderful their utopian existence is and encouraging their ancestors to do everything they can to make their future possible. This includes conquering aging, sickness, and death, increasing human intelligence, and eliminating suffering in pursuit of pleasure. The essay is considered a manifesto of transhumanism.