Global catastrophic risk


Artist's impression of a major asteroid impact. An asteroid caused the extinction of the non-avian dinosaurs.

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, [2] even endangering or destroying modern civilization. [3] An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk". [4]


In the 21st century, a number of academic and non-profit organizations have been established to research global catastrophic and existential risks, formulate potential mitigation measures and either advocate for or implement these measures. [5] [6] [7] [8]

Definition and classification

Scope–severity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"

Defining global catastrophic risks

The term global catastrophic risk "lacks a sharp definition", and generally refers (loosely) to a risk that could inflict "serious damage to human well-being on a global scale". [10]

Humanity has suffered large catastrophes before. Some of these have caused serious damage but were only local in scope—e.g. the Black Death may have resulted in the deaths of a third of Europe's population, [11] 10% of the global population at the time. [12] Some were global, but were not as severe—e.g. the 1918 influenza pandemic killed an estimated 3–6% of the world's population. [13] Most global catastrophic risks would not be so intense as to kill the majority of life on Earth, but even if one did, the ecosystem and humanity would eventually recover (in contrast to existential risks).

Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional" scale. Posner highlights such events as worthy of special attention on cost–benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole. [14]

Defining existential risks

Existential risks are defined as "risks that threaten the destruction of humanity's long-term potential." [15] The instantiation of an existential risk (an existential catastrophe [16] ) would either cause outright human extinction or irreversibly lock in a drastically inferior state of affairs. [9] [17] Existential risks are a sub-class of global catastrophic risks, where the damage is not only global but also terminal and permanent, preventing recovery and thereby affecting both current and all future generations. [9]

Non-extinction risks

While extinction is the most obvious way in which humanity's long-term potential could be destroyed, there are others, including unrecoverable collapse and unrecoverable dystopia. [18] A disaster severe enough to cause the permanent, irreversible collapse of human civilization would constitute an existential catastrophe, even if it fell short of extinction. [18] Similarly, if humanity fell under a totalitarian regime with no chance of recovery, such a dystopia would also be an existential catastrophe. [19] Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction". [19] (George Orwell's novel Nineteen Eighty-Four suggests [20] an example. [21] ) A dystopian scenario shares the key features of extinction and unrecoverable collapse of civilization: before the catastrophe, humanity faced a vast range of bright futures to choose from; after the catastrophe, humanity is locked forever in a terrible state. [18]

Psychologist Steven Pinker has called existential risk a "useless category" that can distract from threats he considers real and solvable, such as climate change and nuclear war. [22]

Potential sources of risk

Potential global catastrophic risks are conventionally classified as anthropogenic or non-anthropogenic hazards. Examples of non-anthropogenic risks include an asteroid or comet impact event, a supervolcanic eruption, a natural pandemic, a lethal gamma-ray burst, a geomagnetic storm from a coronal mass ejection destroying electronic equipment, natural long-term climate change, hostile extraterrestrial life, and the Sun transforming into a red giant star and engulfing the Earth billions of years in the future. [23]

Arrangement of global catastrophic risks into three sets according to whether they are largely human-caused, human influences upon nature, or purely natural

Anthropogenic risks are those caused by humans and include those related to technology, governance, and climate change. Technological risks include the creation of artificial intelligence misaligned with human goals, biotechnology, and nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war and nuclear holocaust, [24] biological warfare and bioterrorism using genetically modified organisms, cyberwarfare and cyberterrorism destroying critical infrastructure like the electrical grid, or radiological warfare using weapons such as large cobalt bombs. Other global catastrophic risks include climate change, environmental degradation, extinction of species, famine as a result of non-equitable resource distribution, human overpopulation or underpopulation, crop failures, and non-sustainable agriculture.

Methodological challenges

Research into the nature and mitigation of global catastrophic risks and existential risks is subject to a unique set of challenges and, as a result, is not easily subjected to the usual standards of scientific rigour. [18] For instance, it is neither feasible nor ethical to study these risks experimentally. Carl Sagan expressed this with regard to nuclear war: "Understanding the long-term consequences of nuclear war is not a problem amenable to experimental verification". [25] Moreover, many catastrophic risks change rapidly as technology advances and background conditions, such as the geopolitical situation, shift. Another challenge is the general difficulty of accurately predicting the future over long timescales, especially for anthropogenic risks, which depend on complex human political, economic and social systems. [18] In addition to known and tangible risks, unforeseeable black swan extinction events may occur, presenting an additional methodological problem. [18] [26]

Lack of historical precedent

Humanity has never suffered an existential catastrophe, and if one were to occur it would necessarily be unprecedented. [18] Therefore, existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. [27] Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against its likelihood in the future, because every world that has experienced such an extinction event has gone unobserved by humanity. Regardless of how frequent civilization-ending events have been, no civilization observes existential catastrophes in its own history. [27] These anthropic issues may partly be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology. [9]
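This survivorship effect can be illustrated with a toy Monte Carlo simulation (a minimal sketch; the per-century extinction probabilities and time horizon below are arbitrary assumptions chosen for illustration, not estimates from the cited literature). Whatever the true underlying risk, every simulated history that still contains observers records zero past extinctions:

```python
import random

def surviving_histories(p_extinction_per_century, centuries=100, trials=100_000):
    """Simulate many possible histories and count those in which observers
    survive to look back at their past (i.e. extinction never occurred)."""
    survived = 0
    for _ in range(trials):
        if all(random.random() > p_extinction_per_century for _ in range(centuries)):
            survived += 1
    return survived

for p in (0.001, 0.01, 0.05):
    n = surviving_histories(p)
    # By construction, every surviving history contains zero past extinction
    # events, so observers in any of them see the same empty track record
    # even though the underlying per-century risk varies fifty-fold.
    print(f"p = {p:.3f}: {n} of 100,000 histories have surviving observers; "
          f"past extinctions observed in each: 0")
```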

To understand the dynamics of an unprecedented, unrecoverable global civilizational collapse (a type of existential risk), it may be instructive to study the various local civilizational collapses that have occurred throughout human history. [28] For instance, civilizations such as the Roman Empire have ended in a loss of centralized governance and a major civilization-wide loss of infrastructure and advanced technology. However, these examples suggest that societies are fairly resilient to catastrophe; for example, medieval Europe survived the Black Death without suffering anything resembling a civilizational collapse despite losing 25 to 50 percent of its population. [29]

Incentives and coordination

There are economic reasons that help explain why so little effort goes into global catastrophic risk reduction. First, such a catastrophe is speculative and may never happen, so many people focus on more pressing issues. Second, risk reduction is a global public good, so it can be expected to be undersupplied by markets. [9] Even if a large nation invested in risk mitigation measures, it would enjoy only a small fraction of the benefit of doing so. Furthermore, global catastrophic risk reduction is an intergenerational global public good: most of its hypothetical benefits would be enjoyed by future generations, and although these future people would perhaps be willing to pay substantial sums for risk reduction, no mechanism for such a transaction exists. [9]
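The undersupply argument can be made concrete with a simple worked example (a minimal sketch; all figures are illustrative assumptions rather than estimates from the cited sources). A measure that is clearly worthwhile for the world as a whole can still be unattractive to any single nation that bears its full cost while capturing only a small share of the benefit:

```python
# Illustrative numbers only (arbitrary units), not empirical estimates.
global_benefit = 1_000.0        # total value to the world of a risk-reduction measure
nation_share = 0.04             # the investing nation's share of the global benefit
cost_to_nation = 100.0          # full cost of the measure, borne by that nation alone

benefit_to_nation = global_benefit * nation_share

print(f"Worthwhile for the world?  {global_benefit > cost_to_nation}")      # True  (1000 > 100)
print(f"Worthwhile for the nation? {benefit_to_nation > cost_to_nation}")   # False (40 < 100)
```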

Cognitive biases

Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect. [30]

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as willing to prevent the deaths of 200,000 or 2,000 birds. [31] Similarly, people are often more concerned about threats to individuals than to larger groups. [30]

Eliezer Yudkowsky theorizes that scope neglect plays a role in public perception of existential risks: [32] [33]

Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking... People who would never dream of hurting a child hear of existential risk, and say, "Well, maybe the human species doesn't really deserve to survive".

All past predictions of human extinction have proven to be false. To some, this makes future warnings seem less credible. Nick Bostrom argues that the absence of human extinction in the past is weak evidence that there will be no human extinction in the future, due to survivor bias and other anthropic effects. [34]

Sociobiologist E. O. Wilson argued that: "The reason for this myopic fog, evolutionary biologists contend, is that it was actually advantageous during all but the last few millennia of the two million years of existence of the genus Homo... A premium was placed on close attention to the near future and early reproduction, and little else. Disasters of a magnitude that occur only once every few centuries were forgotten or transmuted into myth." [35]

Proposed mitigation

Multi-layer defense

Defense in depth is a useful framework for categorizing risk mitigation measures into three layers of defense: [36]

  1. Prevention: Reducing the probability of a catastrophe occurring in the first place. Example: Measures to prevent outbreaks of new highly infectious diseases.
  2. Response: Preventing the scaling of a catastrophe to the global level. Example: Measures to prevent escalation of a small-scale nuclear exchange into an all-out nuclear war.
  3. Resilience: Increasing humanity's resilience (against extinction) when faced with global catastrophes. Example: Measures to increase food security during a nuclear winter.

Human extinction is most likely when all three defenses are weak, that is, "by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against". [36]
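One way to see why weakness in all three layers matters most is to treat each layer as a filter that a potential catastrophe must pass through (a minimal sketch under an illustrative independence assumption; the probabilities below are arbitrary and are not taken from the cited paper):

```python
def p_extinction(p_fail_prevention: float,
                 p_fail_response: float,
                 p_fail_resilience: float) -> float:
    """A risk leads to extinction only if it is not prevented, not contained
    before scaling globally, and not survived; under the illustrative
    assumption that the layers fail independently, the combined probability
    is the product of the three failure probabilities."""
    return p_fail_prevention * p_fail_response * p_fail_resilience

print(p_extinction(0.1, 0.1, 0.1))  # strong layers: 0.001
print(p_extinction(0.9, 0.9, 0.9))  # weak layers:   0.729
```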

The unprecedented nature of existential risks poses a special challenge in designing risk mitigation measures since humanity will not be able to learn from a track record of previous events. [18]

Funding

Some researchers argue that both research and other initiatives relating to existential risk are underfunded. Nick Bostrom states that more research has been done on Star Trek, snowboarding, or dung beetles than on existential risks. Bostrom's comparisons have been criticized as "high-handed". [22] [37] As of 2020, the Biological Weapons Convention organization had an annual budget of US$1.4 million. [38]

Survival planning

Some scholars propose the establishment on Earth of one or more self-sufficient, remote, permanently occupied settlements specifically created for the purpose of surviving a global disaster. [39] [40] [41] Economist Robin Hanson argues that a refuge permanently housing as few as 100 people would significantly improve the chances of human survival during a range of global catastrophes. [39] [42]

Food storage has been proposed globally, but the monetary cost would be high. Furthermore, it would likely add to the current millions of deaths per year due to malnutrition. [43] In 2022, a team led by David Denkenberger compared the cost-effectiveness of resilient foods with that of artificial general intelligence (AGI) safety and found "~98-99% confidence" for a higher marginal impact of work on resilient foods. [44] Some survivalists stock survival retreats with multiple-year food supplies.

The Svalbard Global Seed Vault is buried 400 feet (120 m) inside a mountain on an island in the Arctic. It is designed to hold 2.5 billion seeds from more than 100 countries as a precaution to preserve the world's crops. The surrounding rock is −6 °C (21 °F) (as of 2015) but the vault is kept at −18 °C (0 °F) by refrigerators powered by locally sourced coal. [45] [46]

More speculatively, if society continues to function and if the biosphere remains habitable, calorie needs for the present human population might in theory be met during an extended absence of sunlight, given sufficient advance planning. Conjectured solutions include growing mushrooms on the dead plant biomass left in the wake of the catastrophe, converting cellulose to sugar, or feeding natural gas to methane-digesting bacteria. [47] [48]
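The scale of the challenge can be sketched with a back-of-envelope calculation (a minimal sketch; the population figure, per-capita calorie requirement, and energy density used below are rough assumptions for illustration, not values from the cited studies):

```python
population = 8_000_000_000        # assumed world population
kcal_per_person_per_day = 2_100   # assumed average daily requirement
kcal_per_kg_sugar = 4_000         # carbohydrate at roughly 4 kcal per gram

total_kcal_per_day = population * kcal_per_person_per_day
tonnes_sugar_per_day = total_kcal_per_day / kcal_per_kg_sugar / 1_000

print(f"Global demand: {total_kcal_per_day:.2e} kcal per day")
print(f"If met entirely by cellulose-derived sugar: "
      f"{tonnes_sugar_per_day:,.0f} tonnes per day")   # about 4.2 million tonnes
```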

Global catastrophic risks and global governance

Insufficient global governance creates risks in the social and political domain, but governance mechanisms develop more slowly than technological and social change. Governments, the private sector, and the general public have expressed concern about the lack of governance mechanisms for efficiently dealing with risks and for negotiating and adjudicating between diverse and conflicting interests. This concern is further underlined by an understanding of the interconnectedness of global systemic risks. [49] In the absence, or in anticipation, of global governance, national governments can act individually to better understand, mitigate and prepare for global catastrophes. [50]

Climate emergency plans

In 2018, the Club of Rome called for greater climate change action and published its Climate Emergency Plan, which proposes ten action points to limit global average temperature increase to 1.5 degrees Celsius. [51] Further, in 2019, the Club published the more comprehensive Planetary Emergency Plan. [52]

There is evidence that collectively engaging with the emotional experiences that arise when contemplating the vulnerability of the human species in the context of climate change allows these experiences to be adaptive. When collective engagement with and processing of emotional experiences is supportive, it can foster resilience, psychological flexibility, tolerance of emotional experiences, and community engagement. [53]

Space colonization

Space colonization has been proposed as an approach to improve the odds of surviving an extinction scenario. [54] Solutions of this scope may require megascale engineering.

Astrophysicist Stephen Hawking advocated colonizing other planets within the Solar System once technology progresses sufficiently, in order to improve the chance of human survival from planet-wide events such as global thermonuclear war. [55] [56]

Billionaire Elon Musk writes that humanity must become a multiplanetary species in order to avoid extinction. [57] His company SpaceX is developing technology he projects will be used to colonize Mars.

Organizations

The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of WWII. It studies risks associated with nuclear war and energy and famously maintains the Doomsday Clock established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology gone haywire at a global scale. It was founded by K. Eric Drexler who postulated "grey goo". [58] [59]

Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia. [60]

Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence, [61] with donors including Peter Thiel and Jed McCaleb. [62] The Nuclear Threat Initiative (est. 2001) seeks to reduce global nuclear, biological and chemical threats, and to contain damage after an event. [8] It maintains a nuclear material security index. [63] The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe. [64] Most of the research money funds projects at universities. [65] The Global Catastrophic Risk Institute (est. 2011) is a US-based non-profit, non-partisan think tank founded by Seth Baum and Tony Barrett. GCRI does research and policy work across various risks, including artificial intelligence, nuclear war, climate change, and asteroid impacts. [66] The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks. [67] [68] The Future of Life Institute (est. 2014) works to reduce extreme, large-scale risks from transformative technologies, as well as steer the development and use of these technologies to benefit all life, through grantmaking, policy advocacy in the United States, European Union and United Nations, and educational outreach. [7] Elon Musk, Vitalik Buterin and Jaan Tallinn are some of its biggest donors. [69] The Center on Long-Term Risk (est. 2016), formerly known as the Foundational Research Institute, is a British organization focused on reducing risks of astronomical suffering (s-risks) from emerging technologies. [70]

University-based organizations have included the Future of Humanity Institute (est. 2005), which researched questions about humanity's long-term future, particularly existential risk. [5] It was founded by Nick Bostrom and was based at Oxford University. [5] The Centre for the Study of Existential Risk (est. 2012) is a Cambridge University-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare. [6] All are man-made risks, as Huw Price explained to the AFP news agency, "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us." [71] Stephen Hawking was an acting adviser. The Millennium Alliance for Humanity and the Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities. [72] [73] It was founded by Paul Ehrlich, among others. [74] Stanford University also has the Center for International Security and Cooperation focusing on political cooperation to reduce global catastrophic risk. [75] The Center for Security and Emerging Technology was established in January 2019 at Georgetown's Walsh School of Foreign Service and focuses on policy research into emerging technologies, with an initial emphasis on artificial intelligence. [76] It received a grant of US$55 million from Good Ventures, as suggested by Open Philanthropy. [76]

Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called the Global Alert and Response (GAR) which monitors and responds to global epidemic crises. [77] GAR helps member states with training and coordination of response to epidemics. [78] The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program which aims to prevent and contain naturally generated pandemics at their source. [79] The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate which researches, on behalf of the government, issues such as bio-security and counter-terrorism. [80]


Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.

Nick Bostrom: Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

AI takeover: Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Space and survival: Idea that spacefaring is necessary for long-term human survival

Space and survival is the idea that the long-term survival of the human species and technological civilization requires the building of a spacefaring civilization that utilizes the resources of outer space, and that not doing this might lead to human extinction. A related observation is that the window of opportunity for doing this may be limited due to the decreasing amount of surplus resources that will be available over time as a result of an ever-growing population.

The Great Filter is the idea that, in the development of life from the earliest stages of abiogenesis to reaching the highest levels of development on the Kardashev scale, there is a barrier to development that makes detectable extraterrestrial life exceedingly rare. The Great Filter is one possible resolution of the Fermi paradox.

Human extinction: Hypothetical end of the human species

Human extinction or omnicide is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.

Differential technological development is a strategy of technology governance aiming to decrease risks from emerging technologies by influencing the sequence in which they are developed. On this strategy, societies would strive to delay the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones.

Future of Humanity Institute: Defunct Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.

Nuclear holocaust: Scenario of civilization collapse or human extinction by nuclear weapons

A nuclear holocaust, also known as a nuclear apocalypse, nuclear annihilation, nuclear armageddon, or atomic holocaust, is a theoretical scenario where the mass detonation of nuclear weapons causes widespread destruction and radioactive fallout. Such a scenario envisages large parts of the Earth becoming uninhabitable due to the effects of nuclear warfare, potentially causing the collapse of civilization, the extinction of humanity, and/or the termination of most biological life on Earth.

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

Global Catastrophic Risks (2008 non-fiction book)

Global Catastrophic Risks is a 2008 non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is a collection of essays from 26 academics written about various global catastrophic and existential risks.

The Precipice: Existential Risk and the Future of Humanity (2020 book by Toby Ord)

The Precipice: Existential Risk and the Future of Humanity is a 2020 non-fiction book by the Australian philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute in Oxford. It argues that humanity faces unprecedented risks over the next few centuries and examines the moral significance of safeguarding humanity's future.

Risk of astronomical suffering

Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. They are sometimes categorized as a subclass of existential risks.

Longtermism: Philosophical view which prioritises the long-term future

Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.

Global catastrophe scenarios: Scenarios in which a global catastrophe creates harm

Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.

Existential risk studies (ERS) is a field of study focused on the definition and theorization of "existential risks", their ethical implications and related strategies of long-term survival. Existential risks are variously defined by ERS theorists as global calamities capable of causing the extinction of Earth-originating intelligent life, such as humans, or at least a severe limitation of its potential. The field's development and expansion can be divided into waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism and longtermism.

References

  1. Schulte, P.; et al. (March 5, 2010). "The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous-Paleogene Boundary" (PDF). Science . 327 (5970): 1214–1218. Bibcode:2010Sci...327.1214S. doi:10.1126/science.1177265. PMID   20203042. S2CID   2659741.
  2. Bostrom, Nick (2008). Global Catastrophic Risks (PDF). Oxford University Press. p. 1.
  3. Ripple WJ, Wolf C, Newsome TM, Galetti M, Alamgir M, Crist E, Mahmoud MI, Laurance WF (November 13, 2017). "World Scientists' Warning to Humanity: A Second Notice". BioScience. 67 (12): 1026–1028. doi: 10.1093/biosci/bix125 . hdl: 11336/71342 .
  4. Bostrom, Nick (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology . 9.
  5. 1 2 3 "About FHI". Future of Humanity Institute . Retrieved August 12, 2021.
  6. 1 2 "About us". Centre for the Study of Existential Risk . Retrieved August 12, 2021.
  7. 1 2 "The Future of Life Institute". Future of Life Institute . Retrieved May 5, 2014.
  8. 1 2 "Nuclear Threat Initiative". Nuclear Threat Initiative . Retrieved June 5, 2015.
  9. 1 2 3 4 5 6 Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority" (PDF). Global Policy. 4 (1): 15–31. doi:10.1111/1758-5899.12002 via Existential Risk.
  10. Bostrom, Nick; Cirkovic, Milan (2008). Global Catastrophic Risks. Oxford: Oxford University Press. p. 1. ISBN   978-0-19-857050-9.
  11. Ziegler, Philip (2012). The Black Death. Faber and Faber. p. 397. ISBN   9780571287116.
  12. Muehlhauser, Luke (March 15, 2017). "How big a deal was the Industrial Revolution?". lukemuelhauser.com. Retrieved August 3, 2020.
  13. Taubenberger, Jeffery; Morens, David (2006). "1918 Influenza: the Mother of All Pandemics". Emerging Infectious Diseases. 12 (1): 15–22. doi:10.3201/eid1201.050979. PMC   3291398 . PMID   16494711.
  14. Posner, Richard A. (2006). Catastrophe: Risk and Response. Oxford: Oxford University Press. ISBN   978-0195306477. Introduction, "What is Catastrophe?"
  15. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity . New York: Hachette. ISBN   9780316484916. This is an equivalent, though crisper statement of Nick Bostrom's definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." Source: Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy. 4:15-31.
  16. Cotton-Barratt, Owen; Ord, Toby (2015), Existential risk and existential hope: Definitions (PDF), Future of Humanity Institute – Technical Report #2015-1, pp. 1–4
  17. Bostrom, Nick (2009). "Astronomical Waste: The opportunity cost of delayed technological development". Utilitas. 15 (3): 308–314. CiteSeerX   10.1.1.429.2849 . doi:10.1017/s0953820800004076. S2CID   15860897.
  18. 1 2 3 4 5 6 7 8 Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN   9780316484916.
  19. 1 2 Bryan Caplan (2008). "The totalitarian threat". Global Catastrophic Risks, eds. Bostrom & Cirkovic (Oxford University Press): 504–519. ISBN   9780198570509
  20. Glover, Dennis (June 1, 2017). "Did George Orwell secretly rewrite the end of Nineteen Eighty-Four as he lay dying?". The Sydney Morning Herald . Retrieved November 21, 2021. Winston's creator, George Orwell, believed that freedom would eventually defeat the truth-twisting totalitarianism portrayed in Nineteen Eighty-Four.
  21. Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg. Archived from the original on May 4, 2012. Retrieved August 12, 2021.
  22. 1 2 Kupferschmidt, Kai (January 11, 2018). "Could science destroy the world? These scholars want to save us from a modern-day Frankenstein". Science. AAAS. Retrieved April 20, 2020.
  23. Baum, Seth D. (2023). "Assessing natural global catastrophic risks". Natural Hazards. 115 (3): 2699–2719. Bibcode:2023NatHa.115.2699B. doi: 10.1007/s11069-022-05660-w . PMC   9553633 . PMID   36245947.
  24. Scouras, James (2019). "Nuclear War as a Global Catastrophic Risk". Journal of Benefit-Cost Analysis. 10 (2): 274–295. doi: 10.1017/bca.2019.16 .
  25. Sagan, Carl (Winter 1983). "Nuclear War and Climatic Catastrophe: Some Policy Implications". Foreign Affairs. Council on Foreign Relations. doi:10.2307/20041818. JSTOR   20041818 . Retrieved August 4, 2020.
  26. Jebari, Karim (2014). "Existential Risks: Exploring a Robust Risk Reduction Strategy" (PDF). Science and Engineering Ethics. 21 (3): 541–54. doi:10.1007/s11948-014-9559-3. PMID   24891130. S2CID   30387504 . Retrieved August 26, 2018.
  27. 1 2 Cirkovic, Milan M.; Bostrom, Nick; Sandberg, Anders (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Risk Analysis. 30 (10): 1495–1506. Bibcode:2010RiskA..30.1495C. doi:10.1111/j.1539-6924.2010.01460.x. PMID   20626690. S2CID   6485564.
  28. Kemp, Luke (February 2019). "Are we on the road to civilization collapse?". BBC . Retrieved August 12, 2021.
  29. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books. ISBN   9780316484893. Europe survived losing 25 to 50 percent of its population in the Black Death, while keeping civilization firmly intact
  30. 1 2 Yudkowsky, Eliezer (2008). "Cognitive Biases Potentially Affecting Judgment of Global Risks" (PDF). Global Catastrophic Risks: 91–119. Bibcode:2008gcr..book...86Y.
  31. Desvousges, W.H., Johnson, F.R., Dunford, R.W., Boyle, K.J., Hudson, S.P., and Wilson, N. 1993, Measuring natural resource damages with contingent valuation: tests of validity and reliability. In Hausman, J.A. (ed), Contingent Valuation:A Critical Assessment, pp. 91–159 (Amsterdam: North Holland).
  32. Bostrom 2013.
  33. Yudkowsky, Eliezer. "Cognitive biases potentially affecting judgment of global risks". Global catastrophic risks 1 (2008): 86. p.114
  34. "We're Underestimating the Risk of Human Extinction". The Atlantic. March 6, 2012. Retrieved July 1, 2016.
  35. Wilson, Edward O. (May 30, 1993). "Is Humanity Suicidal?". The New York Times Magazine.
  36. 1 2 Cotton-Barratt, Owen; Daniel, Max; Sandberg, Anders (2020). "Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter". Global Policy. 11 (3): 271–282. doi:10.1111/1758-5899.12786. ISSN   1758-5899. PMC   7228299 . PMID   32427180.
  37. "Oxford Institute Forecasts The Possible Doom Of Humanity". Popular Science. 2013. Retrieved April 20, 2020.
  38. Toby Ord (2020). The precipice: Existential risk and the future of humanity. Hachette Books. ISBN   9780316484893. The international body responsible for the continued prohibition of bioweapons (the Biological Weapons Convention) has an annual budget of $1.4 million - less than the average McDonald's restaurant
  39. 1 2 Matheny, Jason Gaverick (2007). "Reducing the Risk of Human Extinction" (PDF). Risk Analysis. 27 (5): 1335–1344. Bibcode:2007RiskA..27.1335M. doi:10.1111/j.1539-6924.2007.00960.x. PMID   18076500. S2CID   14265396. Archived from the original (PDF) on August 27, 2014. Retrieved May 16, 2015.
  40. Wells, Willard. (2009). Apocalypse when?. Praxis. ISBN   978-0387098364.
  41. Wells, Willard. (2017). Prospects for Human Survival. Lifeboat Foundation. ISBN   978-0998413105.
  42. Hanson, Robin. "Catastrophe, social collapse, and human extinction". Global catastrophic risks 1 (2008): 357.
  43. Smil, Vaclav (2003). The Earth's Biosphere: Evolution, Dynamics, and Change. MIT Press. p. 25. ISBN   978-0-262-69298-4.
  44. Denkenberger, David C.; Sandberg, Anders; Tieman, Ross John; Pearce, Joshua M. (2022). "Long term cost-effectiveness of resilient foods for global catastrophes compared to artificial general intelligence safety". International Journal of Disaster Risk Reduction. 73: 102798. Bibcode:2022IJDRR..7302798D. doi:10.1016/j.ijdrr.2022.102798.
  45. Lewis Smith (February 27, 2008). "Doomsday vault for world's seeds is opened under Arctic mountain". The Times Online. London. Archived from the original on May 12, 2008.
  46. Suzanne Goldenberg (May 20, 2015). "The doomsday vault: the seeds that could save a post-apocalyptic world". The Guardian . Retrieved June 30, 2017.
  47. "Here's how the world could end—and what we can do about it". Science. AAAS. July 8, 2016. Retrieved March 23, 2018.
  48. Denkenberger, David C.; Pearce, Joshua M. (September 2015). "Feeding everyone: Solving the food crisis in event of global catastrophes that kill crops or obscure the sun" (PDF). Futures. 72: 57–68. doi:10.1016/j.futures.2014.11.008. S2CID   153917693.
  49. "Global Challenges Foundation | Understanding Global Systemic Risk". globalchallenges.org. Archived from the original on August 16, 2017. Retrieved August 15, 2017.
  50. "Global Catastrophic Risk Policy". gcrpolicy.com. Archived from the original on August 11, 2019. Retrieved August 11, 2019.
  51. Club of Rome (2018). "The Climate Emergency Plan" . Retrieved August 17, 2020.
  52. Club of Rome (2019). "The Planetary Emergency Plan" . Retrieved August 17, 2020.
  53. Kieft, J.; Bendell, J (2021). "The responsibility of communicating difficult truths about climate influenced societal disruption and collapse: an introduction to psychological research". Institute for Leadership and Sustainability (IFLAS) Occasional Papers. 7: 1–39.
  54. "Mankind must abandon earth or face extinction: Hawking", physorg.com, August 9, 2010, retrieved January 23, 2012
  55. Malik, Tariq (April 13, 2013). "Stephen Hawking: Humanity Must Colonize Space to Survive". Space.com . Retrieved July 1, 2016.
  56. Shukman, David (January 19, 2016). "Hawking: Humans at risk of lethal 'own goal'". BBC News. Retrieved July 1, 2016.
  57. Ginsberg, Leah (June 16, 2017). "Elon Musk thinks life on earth will go extinct, and is putting most of his fortune toward colonizing Mars". CNBC.
  58. Fred Hapgood (November 1986). "Nanotechnology: Molecular Machines that Mimic Life" (PDF). Omni . Archived from the original (PDF) on July 27, 2013. Retrieved June 5, 2015.
  59. Giles, Jim (2004). "Nanotech takes small step towards burying 'grey goo'". Nature. 429 (6992): 591. Bibcode:2004Natur.429..591G. doi: 10.1038/429591b . PMID   15190320.
  60. Sophie McBain (September 25, 2014). "Apocalypse soon: the scientists preparing for the end times". New Statesman . Retrieved June 5, 2015.
  61. "Reducing Long-Term Catastrophic Risks from Artificial Intelligence". Machine Intelligence Research Institute . Retrieved June 5, 2015. The Machine Intelligence Research Institute aims to reduce the risk of a catastrophe, should such an event eventually occur.
  62. Angela Chen (September 11, 2014). "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. Retrieved June 5, 2015.
  63. Alexander Sehmar (May 31, 2015). "Isis could obtain nuclear weapon from Pakistan, warns India". The Independent. Archived from the original on June 2, 2015. Retrieved June 5, 2015.
  64. "About the Lifeboat Foundation". The Lifeboat Foundation. Retrieved April 26, 2013.
  65. Ashlee, Vance (July 20, 2010). "The Lifeboat Foundation: Battling Asteroids, Nanobots and A.I." New York Times . Retrieved June 5, 2015.
  66. "Global Catastrophic Risk Institute". gcrinstitute.org. Retrieved March 22, 2022.
  67. Meyer, Robinson (April 29, 2016). "Human Extinction Isn't That Unlikely". The Atlantic . Boston, Massachusetts: Emerson Collective. Retrieved April 30, 2016.
  68. "Global Challenges Foundation website". globalchallenges.org. Retrieved April 30, 2016.
  69. Nick Bilton (May 28, 2015). "Ava of 'Ex Machina' Is Just Sci-Fi (for Now)". New York Times . Retrieved June 5, 2015.
  70. "About Us". Center on Long-Term Risk. Retrieved May 17, 2020. We currently focus on efforts to reduce the worst risks of astronomical suffering (s-risks) from emerging technologies, with a focus on transformative artificial intelligence.
  71. Hui, Sylvia (November 25, 2012). "Cambridge to study technology's risks to humans". Associated Press. Archived from the original on December 1, 2012. Retrieved January 30, 2012.
  72. Scott Barrett (2014). Environment and Development Economics: Essays in Honour of Sir Partha Dasgupta. Oxford University Press. p. 112. ISBN   9780199677856 . Retrieved June 5, 2015.
  73. "Millennium Alliance for Humanity & The Biosphere". Millennium Alliance for Humanity & The Biosphere. Retrieved June 5, 2015.
  74. Guruprasad Madhavan (2012). Practicing Sustainability. Springer Science & Business Media. p. 43. ISBN   9781461443483 . Retrieved June 5, 2015.
  75. "Center for International Security and Cooperation". Center for International Security and Cooperation. Retrieved June 5, 2015.
  76. 1 2 Anderson, Nick (February 28, 2019). "Georgetown launches think tank on security and emerging technology". Washington Post. Retrieved March 12, 2019.
  77. "Global Alert and Response (GAR)". World Health Organization . Archived from the original on February 16, 2003. Retrieved June 5, 2015.
  78. Kelley Lee (2013). Historical Dictionary of the World Health Organization. Rowman & Littlefield. p. 92. ISBN   9780810878587 . Retrieved June 5, 2015.
  79. "USAID Emerging Pandemic Threats Program". USAID. Archived from the original on October 22, 2014. Retrieved June 5, 2015.
  80. "Global Security". Lawrence Livermore National Laboratory. Archived from the original on December 27, 2007. Retrieved June 5, 2015.

Further reading