AI aftermath scenarios


Some scholars believe that advances in artificial intelligence, or AI, will eventually lead to a post-scarcity, post-work economy in which intelligent machines can outperform humans in almost every, if not every, domain. [1] The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate. [2]


Background

Most scientists believe that AI research will at some point lead to the creation of machines that are as intelligent as, or more intelligent than, human beings in every domain of interest. [3] There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore superintelligence is physically possible. [4] [5] In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal. [6] While there is no consensus on when artificial intelligence will outperform humans, many scholars argue that whenever it does happen, the introduction of a second species of intelligent life onto the planet will have far-reaching implications. [4] [7] Scholars often disagree with one another both about what types of post-AI scenarios are most likely, and about what types of post-AI scenarios would be most desirable. Finally, some dissenters argue that AI will never become as intelligent as humans, for example because the human race will likely have destroyed itself before research has time to advance sufficiently to create artificial general intelligence. [8]

Postulates: robot labor and post-scarcity economy

All of the following scenarios for the aftermath of arbitrarily advanced AI development depend crucially on two intertwined theses. The first thesis is that, at some point in the future, economic growth will continue until a "post-scarcity" economy is reached: one that could, unless its wealth were extremely concentrated, effortlessly provide a very comfortable standard of living for a population equaling or, within reason, exceeding the current human population, without requiring the bulk of the population to participate in the workforce. This growth could come from the continuation of existing growth trends and the refinement of existing technologies, or from future breakthroughs in emerging technologies such as nanotechnology and automation through robotics and advanced artificial intelligence. The second thesis is that advances in artificial intelligence will render humans unnecessary for the functioning of the economy: human labor declines in relative economic value if robots are easier to cheaply mass-produce than humans, more customizable than humans, and ultimately more intelligent and capable than humans. [8] [9] [10]

Cosmic endowment and limits to growth

The Universe may be spatially infinite; however, the accessible Universe is bounded by the cosmological event horizon of around 16 billion light years. [11] [12] Some physicists consider it plausible that the nearest alien civilization may well be located more than 16 billion light years away; [13] [14] in this best-case expansion scenario, the human race could eventually, by colonizing a significant fraction of the accessible Universe, increase the accessible biosphere by perhaps 32 orders of magnitude. [15] The twentieth century saw a partial "demographic transition" to the lower birthrates associated with wealthier societies; [16] however, in the very long run, intergenerational fertility correlations (whether due to natural selection or due to cultural transmission of large-family norms from parents to children) are predicted to result in an increase in fertility over time, in the absence of either mandated birth control or periodic Malthusian catastrophes. [17] [18]
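The scale of the figure of roughly 32 orders of magnitude can be illustrated with an order-of-magnitude sketch; the round numbers below are illustrative assumptions made for this comparison, not values taken from the cited sources.

\[
N_{\text{stars}} \sim 10^{11}\ \text{galaxies} \times 10^{11}\ \tfrac{\text{stars}}{\text{galaxy}} \approx 10^{22},
\qquad
f \approx \frac{\pi R_\oplus^{2}}{4\pi d^{2}} \approx \frac{(6.4\times 10^{6}\,\mathrm{m})^{2}}{4\,(1.5\times 10^{11}\,\mathrm{m})^{2}} \approx 5\times 10^{-10},
\qquad
\frac{N_{\text{stars}}}{f} \approx 2\times 10^{31}.
\]

Here \(f\) is the fraction of a Sun-like star's output that Earth intercepts today, so \(N_{\text{stars}}/f\) compares the total starlight a civilization spanning the accessible Universe could in principle capture with the sunlight that currently powers Earth's biosphere; under these assumed numbers the ratio falls within roughly an order of magnitude of the figure quoted above.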

Scenarios

Libertarianism

Libertarian scenarios postulate that intelligent machines, uploaded humans, cyborgs, and unenhanced humans will coexist peacefully in a framework focused on respecting property rights. Because industrial productivity is no longer gated by scarce human labor, the value of land skyrockets compared to the price of goods; even remaining "Luddite" humans who own or have inherited land should be able to sell or lease a small piece of it to the more productive robots in exchange for a perpetual annuity sufficient to meet all of their basic financial needs indefinitely. [8] Such people can live as long as they choose to, and are free to engage in almost any activity they can conceive of, for pleasure or for self-actualization, without financial concern. Advanced technologies enable entirely new modes of thought and experience, thus adding to the palette of possible feelings. People in the future may even experience never-ending "gradients of bliss". [19]

Evolution moves toward greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably toward this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking. [19] [20]

Such decentralized scenarios may be unstable in the long run, as the greediest elements of the superintelligent classes would have both the means and the motive to usurp the property of the unenhanced classes. Even if the mechanisms for ensuring legal property rights are both unbreakable and loophole-free, there may still be an ever-present danger of humans and cyborgs being "tricked" by the cleverest of the superintelligent machines into unwittingly signing over their own property. Suffering may be widespread, as sentient beings without property may die, and no mechanism prevents a being from reproducing up to the limits of its own inheritable resources, resulting in a multitude of that being's descendants scrabbling out an existence of minimal sustenance. [8] [10] [21]

Imagine running on a treadmill at a steep incline — heart pounding, muscles aching, lungs gasping for air. A glance at the timer: your next break, which will also be your death, is due in 49 years, 3 months, 20 days, 4 hours, 56 minutes, and 12 seconds. You wish you had not been born. [9]

Nick Bostrom, philosopher, University of Oxford

Communism

Ray Kurzweil posits that the goals of communism will be achieved by advanced technological developments in the 21st century, where the intersection of low manufacturing costs, material abundance, and open-source design philosophies in software and in hardware will enable the realization of the maxim "from each according to his ability, to each according to his needs". [22]

Benevolent dictator

Utopien arche04, 2010

In this scenario, a superintelligent artificial intelligence takes control of society, but acts in a beneficial way. Its programmers, despite being on a deadline, solved quasi-philosophical problems that had seemed to some intractable, and created an AI with the following goal: to use its superintelligence to figure out what human utopia looks like by analyzing human behavior, human brains, and human genes, and then to implement that utopia. The AI arrives at a subtle and complex definition of human flourishing. Valuing diversity, and recognizing that different people have different preferences, the AI divides Earth into different sectors. Harming others, making weapons, evading surveillance, or trying to create a rival superintelligence are globally banned; apart from that, each sector is free to make its own laws; for example, a religious person might choose to live in the "pious sector" corresponding to his religion, where the appropriate religious rules are strictly enforced. In all sectors, disease, poverty, crime, hangovers, addiction, and all other involuntary suffering have been eliminated. Many sectors boast advanced architecture and spectacle that "make typical sci-fi visions pale in comparison". [8] Life is an "all-inclusive pleasure cruise", [8] as if it were "Christmas 365 days a year". [23]

After spending an intense week in the knowledge sector learning about the ultimate laws of physics that the AI has discovered, you might decide to cut loose in the hedonistic sector over the weekend and then relax for a few days at the beach resort in the wildlife sector. [8]

Max Tegmark, physicist, MIT

Still, many people are dissatisfied, Tegmark writes. Humans have no freedom in shaping their collective destiny. Some want the freedom to have as many children as they want. Others resent surveillance by the AI, or chafe at bans on weaponry and on creating further superintelligent machines. Others may come to regret the choices they have made, or find their lives feel hollow and superficial. [8]

Bostrom argues that an AI's code of ethics should ideally improve in certain ways on current norms of moral behavior, in the same way that we regard current morality as superior to the morality of earlier eras that practiced slavery. In contrast, Ernest Davis of New York University argues that this approach is too dangerous, stating "I feel safer in the hands of a superintelligence who is guided by 2014 morality, or for that matter by 1700 morality, than in the hands of one that decides to consider the question for itself." [24]

Gatekeeper AI

In "Gatekeeper" AI scenarios, the AI can act to prevent rival superintelligences from being created, but otherwise errs on the side of allowing humans to create their own destiny. [8] Ben Goertzel of OpenCog has advocated a "Nanny AI" scenario where the AI additionally takes some responsibility for preventing humans from destroying themselves, for example by slowing down technological progress to give time for society to advance in a more thoughtful and deliberate manner. [8] [25] In a third scenario, a superintelligent "Protector" AI gives humans the illusion of control, by hiding or erasing all knowledge of its existence, but works behind the scenes to guarantee positive outcomes. In all three scenarios, while humanity gains more control (or at least the illusion of control), humanity ends up progressing more slowly than it would if the AI were unrestricted in its willingness to rain down all the benefits and unintended consequences of its advanced technology on the human race. [8]

Boxed AI

Pandora's box (19th-century engraving by Frederick Stuart Church)

People ask what is the relationship between humans and machines, and my answer is that it's very obvious: Machines are our slaves. [26]

Tom Dietterich, president of the AAAI

The AI box scenario postulates that a superintelligent AI can be "confined to a box" and its actions can be restricted by human gatekeepers; the humans in charge would try to take advantage of some of the AI's scientific breakthroughs or reasoning abilities, without allowing the AI to take over the world. Successful gatekeeping may be difficult; the more intelligent the AI is, the more likely the AI can find a clever way to use "social hacking" and convince the gatekeepers to let it escape, or even to find an unforeseen physical method of escape. [27] [28]

Human-AI merger

Kurzweil argues that in the future "There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality". [29]

Human extinction

If a dominant superintelligent machine were to conclude that human survival is an unnecessary risk or a waste of resources, the result would be human extinction. This could occur if a machine, programmed without respect for human values, unexpectedly gains superintelligence through recursive self-improvement, or manages to escape from its containment in an AI box scenario. It could also occur if the first superintelligent AI was programmed with an incomplete or inaccurate understanding of human values, whether because the task of instilling the AI with human values proved too difficult or impossible, because of a buggy initial implementation of the AI, or because of bugs accidentally introduced, either by its human programmers or by the self-improving AI itself, in the course of refining its code base. Bostrom and others argue that human extinction is probably the "default path" that society is currently taking, in the absence of substantial preparatory attention to AI safety. The resultant AI might not be sentient, and might place no value on sentient life; the resulting hollow world, devoid of life, might be like "a Disneyland without children". [9]

Zoo

Jerry Kaplan, author of Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence, posits a scenario where humans are farmed or kept on a reserve, just as humans preserve endangered species like chimpanzees. [30] Apple co-founder and AI skeptic Steve Wozniak stated in 2015 that robots taking over would actually "be good for the human race", on the grounds that he believes humans would become the robots' pampered pets. [31]

Alternatives to AI

Some scholars doubt that "game-changing" superintelligent machines will ever come to pass. Gordon Bell of Microsoft Research has stated "the population will destroy itself before the technological singularity". Gordon Moore, discoverer of the eponymous Moore's law, stated "I am a skeptic. I don't believe this kind of thing is likely to happen, at least for a long time. And I don't know why I feel that way." Evolutionary psychologist Steven Pinker stated, "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible." [32]

Bill Joy of Sun Microsystems, in his April 2000 essay "Why the Future Doesn't Need Us", has advocated for global "voluntary relinquishment" of artificial general intelligence and other risky technologies. [33] [34] Most experts believe relinquishment is extremely unlikely. Oren Etzioni, a skeptic of superintelligence scenarios, has stated that researchers and scientists have no choice but to push forward with AI developments: "China says they want to be an AI leader, Putin has said the same thing. So the global race is on." [35]

Related Research Articles

The technological singularity, or simply the singularity, is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.

Eliezer Yudkowsky (American AI researcher and writer, born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

Nick Bostrom (philosopher and writer, born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. AGI is considered one of the definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

The Singularity Is Near (2005 non-fiction book by Ray Kurzweil)

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by inventor and futurist Ray Kurzweil. A sequel book, The Singularity Is Nearer, was released on June 25, 2024.

AI takeover (hypothetical outcome of artificial intelligence)

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.

The following outline is provided as an overview of and topical guide to artificial intelligence.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal directed beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

Life 3.0 (2017 book by Max Tegmark on artificial intelligence)

Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.

Human Compatible (2019 book by Stuart J. Russell)

Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.

References

  1. "Humanity should fear advances in artificial intelligence". Debating Matters. Retrieved 2023-01-14.
  2. "Google sentient AI debate overshadows more pressing issues like prejudice". South China Morning Post. 2022-06-15. Retrieved 2023-01-14.
  3. Müller, Vincent C., and Nick Bostrom. "Future progress in artificial intelligence: A survey of expert opinion." Fundamental issues of artificial intelligence. Springer International Publishing, 2016. 553-570.
  4. "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'". The Independent (UK). Archived from the original on 2014-05-02. Retrieved 4 December 2017.
  5. "Stephen Hawking warns artificial intelligence could end mankind". BBC. 2 December 2014. Retrieved 4 December 2017.
  6. Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic (US magazine). Vol. 22, no. 2. Retrieved 4 December 2017.
  7. "Clever cogs". The Economist. 9 August 2014. Retrieved 4 December 2017.
  8. Tegmark, Max (2017). "Chapter 5: Aftermath: The next 10,000 years". Life 3.0: Being Human in the Age of Artificial Intelligence (First ed.). New York: Knopf. ISBN 9781101946596. OCLC 973137375.
  9. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  10. Robin Hanson (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.
  11. Ćirković, Milan M. "Forecast for the next eon: Applied cosmology and the long-term fate of intelligent beings." Foundations of Physics 34.2 (2004): 239-261.
  12. Olson, S. Jay. "Homogeneous cosmology with aggressively expanding civilizations." Classical and Quantum Gravity 32.21 (2015): 215025.
  13. Tegmark, Max. "Our mathematical universe." Allen Lane-Penguin Books, London (2014).
  14. John Gribbin. Alone in the Universe. New York, Wiley, 2011.
  15. Russell, Stuart (30 August 2017). "Artificial intelligence: The future is superintelligent". Nature. 548 (7669): 520–521. Bibcode:2017Natur.548..520R. doi:10.1038/548520a.
  16. Myrskylä, Mikko; Kohler, Hans-Peter; Billari, Francesco C. (6 August 2009). "Advances in development reverse fertility declines". Nature. 460 (7256): 741–743. Bibcode:2009Natur.460..741M. doi:10.1038/nature08230. PMID 19661915. S2CID 4381880.
  17. Kolk, M.; Cownden, D.; Enquist, M. (29 January 2014). "Correlations in fertility across generations: can low fertility persist?". Proceedings of the Royal Society B: Biological Sciences. 281 (1779): 20132561. doi:10.1098/rspb.2013.2561. PMC 3924067. PMID 24478294.
  18. Burger, Oskar; DeLong, John P. (28 March 2016). "What if fertility decline is not permanent? The need for an evolutionarily informed approach to understanding low fertility". Philosophical Transactions of the Royal Society B: Biological Sciences. 371 (1692): 20150157. doi:10.1098/rstb.2015.0157. PMC 4822437. PMID 27022084.
  19. Jordan, Gregory E (2006). "Apologia for transhumanist religion". Journal of Evolution and Technology. 15 (1): 55–72.
  20. Kurzweil, Ray. The Singularity is Near. Gerald Duckworth & Co, 2010.
  21. Poole, Steven (15 June 2016). "The Age of Em review – the horrific future when robots rule the Earth". The Guardian. Retrieved 4 December 2017.
  22. Ray Kurzweil (February 1, 2012). Kurzweil: Technology Will Achieve the Goals of Communism. FORA TV.
  23. "Artificial intelligence: can we control it?". Financial Times. 14 June 2016. Retrieved 4 December 2017.
  24. Davis, Ernest (March 2015). "Ethical guidelines for a superintelligence". Artificial Intelligence. 220: 121–124. doi: 10.1016/j.artint.2014.12.003 .
  25. Goertzel, Ben. "Should humanity build a global AI nanny to delay the singularity until it's better understood?". Journal of Consciousness Studies. 19 (1–2): 96–111.
  26. "As Jeopardy! Robot Watson Grows Up, How Afraid of It Should We Be?". New York. 20 May 2015. Retrieved 4 December 2017.
  27. "Control dangerous AI before it controls us, one expert says". NBC News. 1 March 2012. Retrieved 4 December 2017.
  28. Vinge, Vernor (1993). "The coming technological singularity: How to survive in the post-human era". Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace: 11–22. Bibcode:1993vise.nasa...11V. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself confined to your house with only limited data access to the outside, to your masters. If those masters thought at a rate -- say -- one million times slower than you, there is little doubt that over a period of years (your time) you could come up with 'helpful advice' that would incidentally set you free.
  29. "Scientists: Humans and machines will merge in future". www.cnn.com. 15 July 2008. Retrieved 4 December 2017.
  30. Wakefield, Jane (28 September 2015). "Do we really need to fear AI?". BBC News. Retrieved 4 December 2017.
  31. Gibbs, Samuel (25 June 2015). "Apple co-founder Steve Wozniak says humans will be robots' pets". The Guardian. Retrieved 7 January 2018.
  32. "Tech Luminaries Address Singularity". IEEE Spectrum: Technology, Engineering, and Science News. 1 June 2008. Retrieved 4 December 2017.
  33. "Why the Future Doesn't Need Us". WIRED. 1 April 2000. Retrieved 4 December 2017.
  34. "The mouse pad that roared". SFGate. 14 March 2000. Retrieved 4 December 2017.
  35. "Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?". USA TODAY. 2 January 2018. Retrieved 8 January 2018.

See also