Human extinction

Nuclear war is an often-predicted cause of the extinction of humankind (pictured: the explosion of the hydrogen bomb Ivy Mike).

Human extinction or omnicide is the hypothetical end of the human species, either by population decline due to external natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example through sub-replacement fertility.

Some of the many possible contributors to anthropogenic hazard are climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.

The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes. [2] [3] The likelihood of human extinction through humankind's own activities, however, is a current area of research and debate.

History of thought

Early history

Before the 18th and 19th centuries, the possibility that humans or other organisms could become extinct was viewed with scepticism. [4] It contradicted the principle of plenitude, the doctrine that all possible things exist. [4] The principle traces back to Aristotle and was an important tenet of Christian theology. [5] Ancient philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Marcion of Sinope was a proto-Protestant who advocated a form of antinatalism that could lead to human extinction. [6] [7] Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began to consider whether abstract worlds, including a world without humans, could exist. The astronomer Edmond Halley suggested that the extinction of the human race might be beneficial to the future of the world. [8]

The notion that species can become extinct gained scientific acceptance during the Age of Enlightenment in the 17th and 18th centuries, and by 1800 Georges Cuvier had identified 23 extinct prehistoric species. [4] The principle of plenitude was further undermined by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared no longer to exist, and by the development of theories of evolution. [5] In On the Origin of Species, Darwin discussed the extinction of species as a natural process and a core component of natural selection. [9] Notably, Darwin was skeptical of the possibility of sudden extinction, viewing it as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions, but rather reflected unrecognised gaps in the record. [9]

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. [4] In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Jean-Baptiste Cousin de Grainville's The Last Man). In 1863, a few years after Charles Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. Romantic authors and poets were particularly interested in the topic. [4] Lord Byron wrote about the extinction of life on Earth in his 1816 poem "Darkness", and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it. [4] Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague. [4] At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space. [4]

Atomic era

Castle Romeo nuclear test on Bikini Atoll

The invention of the atomic bomb prompted a wave of discussion among scientists, intellectuals, and the public at large about the risk of human extinction. [4] In a 1945 essay, Bertrand Russell wrote that "[T]he prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense." [10] In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind". [11] Rachel Carson's 1962 book Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which uses Bayesian reasoning to estimate the total number of humans who will ever exist.

The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact", and that nuclear war "imperils all of our descendants, for as long as there will be humans." [12]

Post-Cold War

John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats to the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided. [13] Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković and published in 2008, is a collection of essays from 26 academics on various global catastrophic and existential risks. [14] Toby Ord's 2020 book The Precipice: Existential Risk and the Future of Humanity argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology. [15]

Causes

Potential anthropogenic causes of human extinction include global thermonuclear war, deployment of a highly effective biological weapon, ecological collapse, runaway artificial intelligence, runaway nanotechnology (such as a grey goo scenario), overpopulation and increased consumption causing resource depletion and a concomitant population crash, population decline by choosing to have fewer children, and displacement of naturally evolved humans by a new species produced by genetic engineering or technological augmentation. Natural and external extinction risks include high-fatality-rate pandemic, supervolcanic eruption, asteroid impact, nearby supernova or gamma-ray burst, extreme solar flare, and alien invasion.

Humans (Homo sapiens sapiens) as a species may also be considered to have "gone extinct" simply by being replaced by distant descendants, whose continued evolution may produce new species or subspecies of Homo or of other hominids.

Without intervention by unexpected forces, the stellar evolution of the Sun is expected to make Earth uninhabitable, then destroy it. Depending on its ultimate fate, the entire universe may eventually become uninhabitable.

Probability

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks. [16] [13] [17] [2] [18] A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk. [2] Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were sufficiently high, then it would be highly unlikely that humanity would have survived as long as it has. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 per year (equivalent to 1 in 140 per century, on average). [2]
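
The logic of this bound can be illustrated with a simplified likelihood calculation (a sketch that assumes a constant annual risk; the cited study uses a more careful statistical treatment):

```python
# Illustrative sketch, not the cited paper's exact method: if humanity has
# survived T years under a constant annual extinction risk p, the probability
# of that track record is (1 - p)**T. Requiring this likelihood to stay above
# a small threshold eps gives the bound p <= 1 - eps**(1/T).

def annual_risk_bound(survival_years: float, likelihood_threshold: float) -> float:
    """Largest constant annual risk still compatible with the observed survival."""
    return 1 - likelihood_threshold ** (1 / survival_years)

T = 200_000  # minimum age of Homo sapiens, in years
for eps in (0.1, 1e-6):
    p = annual_risk_bound(T, eps)
    print(f"likelihood threshold {eps:g}: annual risk < 1 in {1 / p:,.0f}")
# These thresholds give bounds of roughly 1 in 87,000 and 1 in 14,500 per year,
# the same order of magnitude as the 1-in-14,000 figure cited above.
```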

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record. [16] For example, the probability of a comet or asteroid impact large enough to cause an impact winter leading to human extinction before the year 2100 has been estimated at one in a million. [19] [20] Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity. [21] The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction. [21] Famously, the Toba supervolcano may have almost wiped out humanity at the time of its last eruption (though this is contentious). [21] [22]
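
As a rough illustration (a figure not given in the cited sources), if such eruptions are modelled as arriving at a constant rate of one per 50,000 years, the chance of at least one occurring in any given century is about

$$1 - e^{-100/50000} \approx 0.2\%.$$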

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances. [2] Humanity has survived only 79 years since the creation of nuclear weapons, and for future technologies there is no track record at all. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils" [23] – a uniquely dangerous period in human history, beginning when humans first started posing risks to themselves through their own actions, during which humanity is subject to unprecedented levels of risk. [16] [24] Paleobiologist Olev Vinn has suggested that humans have inherited behavior patterns (IBPs) that are not fine-tuned for the conditions prevailing in technological civilization; some of these patterns may be highly incompatible with such conditions and have a high potential to induce self-destruction. They may include, for example, the responses of individuals seeking power over conspecifics in connection with harvesting and consuming energy. [25]

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates. [26]

Individual vs. species risks

Although existential risks are less manageable by individuals than, for example, health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" doomsday argument is accepted, it changes the most likely source of disasters, and hence the most efficient means of preventing them. They write: "...you should be more concerned that a large number of asteroids have not yet been detected than about the particular orbit of each one. You should not worry especially about the chance that some specific nearby star will become a supernova, but more about the chance that supernovas are more deadly to nearby life than we believe." [45]

Difficulty

Some scholars argue that certain scenarios, such as global thermonuclear war, would have difficulty eradicating every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach a very diverse set of places, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation. [46] In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war. [19] The existence of nuclear submarines, which can stay hundreds of meters deep in the ocean for potentially years at a time, should also be considered. Any number of events could lead to a massive loss of human life, but if the last few, most resilient humans (see minimum viable population) are unlikely to also die off, then that particular human extinction scenario may not seem credible. [47]

Ethics

Value of human life

Placard against omnicide at Extinction Rebellion (2018)

"Existential risks" are risks that threaten the entire future of humanity, whether by causing human extinction or by otherwise permanently crippling human progress. [3] Multiple scholars have argued based on the size of the "cosmic endowment" that because of the inconceivably large number of potential future lives that are at stake, even small reductions of existential risk have great value.

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment: [48]

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential – what humanity could expect to achieve if it survived. [16] From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good life is for future people). [16] :273 [49] On average, species survive for around a million years before going extinct, and Parfit points out that the Earth will remain habitable for around a billion years. [48] These might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years. [50] [16] :21 The potential that would be forgone were humanity to become extinct is therefore very large. Reducing existential risk by even a small amount would accordingly have very significant moral value. [3] [51]

Carl Sagan wrote in 1983: "If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise." [52]

Philosopher Robert Adams in 1989 rejected Parfit's "impersonal" views but spoke instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society – more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects." [53]

Philosopher Nick Bostrom argued in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies. [54]

Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: if Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10^16 (or 10,000,000,000,000,000) human lives of normal duration. [55] Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10^34 biological human life-years; and, if some humans were uploaded onto computers, could even support the equivalent of 10^54 cybernetic human life-years. [3]
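
The arithmetic behind Parfit's figure can be made explicit (a rough sketch assuming lives of about a century and a sustained population of one billion):

$$\frac{10^{9}\ \text{years} \times 10^{9}\ \text{people}}{10^{2}\ \text{years per life}} = 10^{16}\ \text{lives}.$$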

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking. [56] While these views are controversial, [19] [57] [58] even their proponents would agree that an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people. [59]

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity – it would destroy all cultural artifacts, languages, and traditions, and many of the things we value. [16] [60] So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided. [16] One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership...between those who are living, those who are dead, and those who are to be born". [61] If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to "pay it forward", and ensure that humanity's inheritance is passed down to future generations. [16] :49–51

Several economists have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. [62] Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes. [63]

Voluntary extinction

Motto of the Voluntary Human Extinction Movement

Some philosophers adopt the antinatalist position that human extinction would not be a bad thing, but a good thing. David Benatar argues that coming into existence is always a serious harm, and therefore it is better that people do not come into existence in the future. [64] Further, Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet, and for the planet itself, citing, for example, the omnicidal nature of human civilization. [65] [66] [67] The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement and the Church of Euthanasia, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation. [68]

In fiction

Jean-Baptiste Cousin de Grainville's 1805 science fantasy novel Le dernier homme (The Last Man), which depicts human extinction due to infertility, is considered the first modern apocalyptic novel and credited with launching the genre. [69] Other notable early works include Mary Shelley's 1826 The Last Man, depicting human extinction caused by a pandemic, and Olaf Stapledon's 1937 Star Maker, "a comparative study of omnicide". [4]

Some 21st-century pop-science works, including The World Without Us by Alan Weisman and the television specials Life After People and Aftermath: Population Zero, pose a thought experiment: what would happen to the rest of the planet if humans suddenly disappeared? [70] [71] A threat of human extinction, such as through a technological singularity (also called an intelligence explosion), drives the plot of innumerable science fiction stories; an influential early example is the 1951 film adaptation of When Worlds Collide. [72] Usually the extinction threat is narrowly avoided, but some exceptions exist, such as R.U.R. and Steven Spielberg's A.I. [73]

See also

Related Research Articles

Eliezer Yudkowsky – American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

Nick Bostrom – Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

AI takeover – Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Space and survival – Idea that spacefaring is necessary for long-term human survival

Space and survival is the idea that the long-term survival of the human species and technological civilization requires the building of a spacefaring civilization that utilizes the resources of outer space, and that not doing this might lead to human extinction. A related observation is that the window of opportunity for doing this may be limited due to the decreasing amount of surplus resources that will be available over time as a result of an ever-growing population.

The Great Filter is the idea that, in the development of life from the earliest stages of abiogenesis to reaching the highest levels of development on the Kardashev scale, there is a barrier to development that makes detectable extraterrestrial life exceedingly rare. The Great Filter is one possible resolution of the Fermi paradox.

Future of Humanity Institute – Defunct Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.

Nuclear holocaust – Scenario of civilization collapse or human extinction by nuclear weapons

A nuclear holocaust, also known as a nuclear apocalypse, nuclear annihilation, nuclear armageddon, or atomic holocaust, is a theoretical scenario where the mass detonation of nuclear weapons causes widespread destruction and radioactive fallout. Such a scenario envisages large parts of the Earth becoming uninhabitable due to the effects of nuclear warfare, potentially causing the collapse of civilization, the extinction of humanity, and/or the termination of most biological life on Earth.

Global catastrophic risk – Hypothetical global-scale disaster risk

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.

Superintelligence: Paths, Dangers, Strategies – 2014 book by Nick Bostrom

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

Global Catastrophic Risks (book) – 2008 non-fiction book

Global Catastrophic Risks is a 2008 non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is a collection of essays from 26 academics written about various global catastrophic and existential risks.

The Precipice: Existential Risk and the Future of Humanity – 2020 book about existential risks by Toby Ord

The Precipice: Existential Risk and the Future of Humanity is a 2020 non-fiction book by the Australian philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute in Oxford. It argues that humanity faces unprecedented risks over the next few centuries and examines the moral significance of safeguarding humanity's future.

End Times (book) – 2019 book by Bryan Walsh

End Times: A Brief Guide to the End of the World is a 2019 non-fiction book by journalist Bryan Walsh. The book discusses various risks of human extinction, including asteroids, volcanoes, nuclear war, global warming, pathogens, biotech, AI, and extraterrestrial intelligence. The book includes interviews with astronomers, anthropologists, biologists, climatologists, geologists, and other scholars. The book advocates strongly for greater action.

Longtermism – Philosophical view which prioritises the long-term future

Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.

Global catastrophe scenarios – Scenarios in which a global catastrophe creates harm

Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.

Existential risk studies (ERS) is a field of studies focused on the definition and theorization of "existential risks", its ethical implications and the related strategies of long-term survival. Existential risks are diversely defined as global kinds of calamity that have the capacity of inducing the extinction of intelligent earthling life, such as humans, or, at least, a severe limitation of their potential, as defined by ERS theorists. The field development and expansion can be divided in waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism and longtermism.

References

  1. Di Mardi (October 15, 2020). "The grim fate that could be 'worse than extinction'". BBC News. Retrieved November 11, 2020. When we think of existential risks, events like nuclear war or asteroid impacts often come to mind.
  2. Snyder-Beattie, Andrew E.; Ord, Toby; Bonsall, Michael B. (July 30, 2019). "An upper bound for the background rate of human extinction". Scientific Reports. 9 (1): 11054. Bibcode:2019NatSR...911054S. doi:10.1038/s41598-019-47540-7. ISSN 2045-2322. PMC 6667434. PMID 31363134.
  3. Bostrom 2013.
  4. Moynihan, Thomas (September 23, 2020). "How Humanity Came To Contemplate Its Possible Extinction: A Timeline". The MIT Press Reader. Retrieved October 11, 2020.
  5. Darwin, Charles; Costa, James T. (2009). The Annotated Origin. Harvard University Press. p. 121. ISBN 978-0674032811.
  6. Moll, S. (2010). The Arch-heretic Marcion. Wissenschaftliche Untersuchungen zum Neuen Testament. Mohr Siebeck. p. 132. ISBN   978-3-16-150268-2 . Retrieved June 11, 2023.
  7. Welchman, A. (2014). Politics of Religion/Religions of Politics. Sophia Studies in Cross-cultural Philosophy of Traditions and Cultures. Springer Netherlands. p. 21. ISBN   978-94-017-9448-0 . Retrieved June 11, 2023.
  8. Moynihan, T. (2020). X-Risk: How Humanity Discovered Its Own Extinction. MIT Press. p. 56. ISBN   978-1-913029-84-5 . Retrieved October 19, 2022.
  9. Raup, David M. (1995). "The Role of Extinction in Evolution". In Fitch, W. M.; Ayala, F. J. (eds.). Tempo And Mode in Evolution: Genetics And Paleontology 50 Years After Simpson. National Academies Press (US).
  10. Russell, Bertrand (1945). "The Bomb and Civilization". Archived from the original on August 7, 2020.
  11. Erskine, Hazel Gaudet (1963). "The Polls: Atomic Weapons and Nuclear Energy". The Public Opinion Quarterly. 27 (2): 155–190. doi:10.1086/267159. JSTOR   2746913.
  12. Sagan, Carl (January 28, 2009). "Nuclear War and Climatic Catastrophe: Some Policy Implications". doi:10.2307/20041818. JSTOR   20041818 . Retrieved August 11, 2021.
  13. Rees, Martin (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century – On Earth and Beyond. Basic Books. ISBN 0-465-06863-4.
  14. Bostrom, Nick; Ćirković, Milan M., eds. (2008). Global catastrophic risks. Oxford University Press. ISBN   978-0199606504.
  15. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. 4:15–31. ISBN   9780316484916. This is an equivalent, though crisper statement of Nick Bostrom's definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." Source: Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy.
  16. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. ISBN 9780316484916.
  17. Bostrom, Nick; Sandberg, Anders (2008). "Global Catastrophic Risks Survey" (PDF). FHI Technical Report #2008-1. Future of Humanity Institute.
  18. "Frequently Asked Questions". Existential Risk. Future of Humanity Institute . Retrieved July 26, 2013. The great bulk of existential risk in the foreseeable future is anthropogenic; that is, arising from human activity.
  19. Matheny, Jason Gaverick (2007). "Reducing the Risk of Human Extinction" (PDF). Risk Analysis. 27 (5): 1335–1344. Bibcode:2007RiskA..27.1335M. doi:10.1111/j.1539-6924.2007.00960.x. PMID 18076500. S2CID 14265396. Archived from the original (PDF) on August 27, 2014. Retrieved July 1, 2016.
  20. Asher, D.J.; Bailey, M.E.; Emel'yanenko, V.; Napier, W.M. (2005). "Earth in the cosmic shooting gallery" (PDF). The Observatory. 125: 319–322. Bibcode:2005Obs...125..319A.
  21. Rampino, M.R.; Ambrose, S.H. (2002). "Super eruptions as a threat to civilizations on Earth-like planets" (PDF). Icarus. 156 (2): 562–569. Bibcode:2002Icar..156..562R. doi:10.1006/icar.2001.6808. Archived from the original (PDF) on September 24, 2015. Retrieved February 14, 2022.
  22. Yost, Chad L.; Jackson, Lily J.; Stone, Jeffery R.; Cohen, Andrew S. (March 1, 2018). "Subdecadal phytolith and charcoal records from Lake Malawi, East Africa imply minimal effects on human evolution from the ~74 ka Toba supereruption". Journal of Human Evolution. 116: 75–94. Bibcode:2018JHumE.116...75Y. doi: 10.1016/j.jhevol.2017.11.005 . ISSN   0047-2484. PMID   29477183.
  23. Sagan, Carl (1994). Pale Blue Dot . Random House. pp. 305–6. ISBN   0-679-43841-6. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others are not so lucky or so prudent, perish.
  24. Parfit, Derek (2011). On What Matters Vol. 2. Oxford University Press. p. 616. ISBN   9780199681044. We live during the hinge of history ... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period.
  25. Vinn, O. (2024). "Potential incompatibility of inherited behavior patterns with civilization: Implications for Fermi paradox". Science Progress. 107 (3): 1–6. doi:10.1177/00368504241272491. PMC   11307330 . PMID   39105260.
  26. Rowe, Thomas; Beard, Simon (2018). "Probabilities, methodologies and the evidence base in existential risk assessments" (PDF). Working Paper, Centre for the Study of Existential Risk. Retrieved August 26, 2018.
  27. Gott, III, J. Richard (1993). "Implications of the Copernican principle for our future prospects". Nature . 363 (6427): 315–319. Bibcode:1993Natur.363..315G. doi:10.1038/363315a0. S2CID   4252750.
  28. Leslie 1996, p. 146.
  29. Rees, Martin (2004) [2003]. Our Final Century. Arrow Books. p. 9.
  30. Meyer, Robinson (April 29, 2016). "Human Extinction Isn't That Unlikely". The Atlantic . Boston, Massachusetts: Emerson Collective. Retrieved April 30, 2016.
  31. Grace, Katja; Salvatier, John; Dafoe, Allen; Zhang, Baobao; Evans, Owain (May 3, 2018). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv: 1705.08807 [cs.AI].
  32. Strick, Katie (May 31, 2023). "Is the AI apocalypse actually coming? What life could look like if robots take over". The Standard. Retrieved May 31, 2023.
  33. Purtill, Corinne. "How Close Is Humanity to the Edge?". The New Yorker. Retrieved January 8, 2021.
  34. "What are the chances of an AI apocalypse?". The Economist . July 10, 2023. Retrieved July 10, 2023.
  35. "A 30% Chance of AI Catastrophe: Samotsvety's Forecasts on AI Risks and the Impact of a Strong AI Treaty". Treaty on Artificial Intelligence Safety and Cooperation (TAISC). May 1, 2023. Retrieved May 1, 2023.
  36. "Will humans become extinct by 2100?". Metaculus . November 12, 2017. Retrieved March 26, 2024.
  37. Pielke, Jr., Roger (November 13, 2024). "Global Existential Risks". American Enterprise Institute . Retrieved December 17, 2024.
  38. Edwards, Lin (June 23, 2010). "Humans will be extinct in 100 years says eminent scientist". Phys.org . Retrieved January 10, 2021.
  39. Nafeez, Ahmed (July 28, 2020). "Theoretical Physicists Say 90% Chance of Societal Collapse Within Several Decades". Vice. Retrieved August 2, 2021.
  40. Bologna, M.; Aquino, G. (2020). "Deforestation and world population sustainability: a quantitative analysis". Scientific Reports. 10 (7631): 7631. arXiv: 2006.12202 . Bibcode:2020NatSR..10.7631B. doi:10.1038/s41598-020-63657-6. PMC   7203172 . PMID   32376879.
  41. Bostrom, Nick (2002), "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", Journal of Evolution and Technology, vol. 9, My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher.
  42. Whitmire, Daniel P. (August 3, 2017). "Implication of our technological species being first and early". International Journal of Astrobiology. 18 (2): 183–188. doi: 10.1017/S1473550417000271 .
  43. Leslie 1996, p. 139.
  44. Salotti, Jean-Marc (April 2022). "Human extinction by asteroid impact". Futures. 138: 102933. doi:10.1016/j.futures.2022.102933. S2CID 247718308.
  45. "Practical application", of the Princeton University paper: Philosophical Implications of Inflationary Cosmology, p. 39. Archived May 12, 2005, at the Wayback Machine .
  46. Wells, Willard. (2009). Apocalypse when?. Praxis. ISBN   978-0387098364.
  47. Tonn, Bruce; MacGregor, Donald (2009). "A singular chain of events". Futures. 41 (10): 706–714. doi:10.1016/j.futures.2009.07.009. S2CID   144553194. SSRN   1775342.
  48. Parfit, Derek (1984). Reasons and Persons. Oxford University Press. pp. 453–454.
  49. MacAskill, William; Yetter Chappell, Richard (2021). "Population Ethics | Practical Implications of Population Ethical Theories". Introduction to Utilitarianism. Retrieved August 12, 2021.
  50. Bostrom, Nick (2009). "Astronomical Waste: The opportunity cost of delayed technological development". Utilitas. 15 (3): 308–314. CiteSeerX   10.1.1.429.2849 . doi:10.1017/s0953820800004076. S2CID   15860897.
  51. Todd, Benjamin (2017). "The case for reducing existential risks". 80,000 Hours . Retrieved January 8, 2020.
  52. Sagan, Carl (1983). "Nuclear war and climatic catastrophe: Some policy implications". Foreign Affairs. 62 (2): 257–292. doi:10.2307/20041818. JSTOR   20041818. S2CID   151058846.
  53. Adams, Robert Merrihew (October 1989). "Should Ethics be More Impersonal? a Critical Notice of Derek Parfit, Reasons and Persons". The Philosophical Review. 98 (4): 439–484. doi:10.2307/2185115. JSTOR   2185115.
  54. Bostrom 2013, pp. 23–24.
  55. Parfit, D. (1984) Reasons and Persons. Oxford, England: Clarendon Press. pp. 453–454.
  56. Narveson, Jan (1973). "Moral Problems of Population". The Monist. 57 (1): 62–86. doi:10.5840/monist197357134. PMID   11661014.
  57. Greaves, Hilary (2017). "Discounting for Public Policy: A Survey". Economics & Philosophy. 33 (3): 391–439. doi:10.1017/S0266267117000062. ISSN   0266-2671. S2CID   21730172.
  58. Greaves, Hilary (2017). "Population axiology". Philosophy Compass. 12 (11): e12442. doi:10.1111/phc3.12442. ISSN   1747-9991.
  59. Lewis, Gregory (May 23, 2018). "The person-affecting value of existential risk reduction". www.gregoryjlewis.com. Retrieved August 7, 2020.
  60. Sagan, Carl (Winter 1983). "Nuclear War and Climatic Catastrophe: Some Policy Implications". Foreign Affairs. Council on Foreign Relations. doi:10.2307/20041818. JSTOR   20041818 . Retrieved August 4, 2020.
  61. Burke, Edmund (1999) [1790]. "Reflections on the Revolution in France" (PDF). In Canavan, Francis (ed.). Select Works of Edmund Burke Volume 2. Liberty Fund. p. 192.
  62. Weitzman, Martin (2009). "On modeling and interpreting the economics of catastrophic climate change" (PDF). The Review of Economics and Statistics. 91 (1): 1–19. doi:10.1162/rest.91.1.1. S2CID   216093786.
  63. Posner, Richard (2004). Catastrophe: Risk and Response. Oxford University Press.
  64. Benatar, David (2008). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press. p.  28. ISBN   978-0199549269. Being brought into existence is not a benefit but always a harm.
  65. Benatar, David (2008). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press. p.  224. ISBN   978-0199549269. Although there are many non-human species – especially carnivores – that also cause a lot of suffering, humans have the unfortunate distinction of being the most destructive and harmful species on earth. The amount of suffering in the world could be radically reduced if there were no more humans.
  66. Best, Steven (2014). "Conclusion: Reflections on Activism and Hope in a Dying World and Suicidal Culture". The Politics of Total Liberation: Revolution for the 21st Century. Palgrave Macmillan. p. 165. doi:10.1057/9781137440723_7. ISBN   978-1137471116. In an era of catastrophe and crisis, the continuation of the human species in a viable or desirable form, is obviously contingent and not a given or necessary good. But considered from the standpoint of animals and the earth, the demise of humanity would be the best imaginable event possible, and the sooner the better. The extinction of Homo sapiens would remove the malignancy ravaging the planet, destroy a parasite consuming its host, shut down the killing machines, and allow the earth to regenerate while permitting new species to evolve.
  67. May, Todd (December 17, 2018). "Would Human Extinction Be a Tragedy?". The New York Times . Human beings are destroying large parts of the inhabitable earth and causing unimaginable suffering to many of the animals that inhabit it. This is happening through at least three means. First, human contribution to climate change is devastating ecosystems ... Second, the increasing human population is encroaching on ecosystems that would otherwise be intact. Third, factory farming fosters the creation of millions upon millions of animals for whom it offers nothing but suffering and misery before slaughtering them in often barbaric ways. There is no reason to think that those practices are going to diminish any time soon. Quite the opposite.
  68. MacCormack, Patricia (2020). The Ahuman Manifesto: Activism for the End of the Anthropocene. Bloomsbury Academic. pp. 143, 166. ISBN   978-1350081093.
  69. Wagar, W. Warren (2003). "Review of The Last Man, Jean-Baptiste François Xavier Cousin de Grainville". Utopian Studies . 14 (1): 178–180. ISSN   1045-991X. JSTOR   20718566.
  70. "He imagines a world without people. But why?". The Boston Globe . August 18, 2007. Retrieved July 20, 2016.
  71. Tucker, Neely (March 8, 2008). "Depopulation Boom". The Washington Post . Retrieved July 20, 2016.
  72. Barcella, Laura (2012). The end: 50 apocalyptic visions from pop culture that you should know about – before it's too late. San Francisco, California: Zest Books. ISBN   978-0982732250.
  73. Dinello, Daniel (2005). Technophobia!: science fiction visions of posthuman technology (1st ed.). Austin, Texas: University of Texas press. ISBN   978-0-292-70986-7.

Sources

Further reading