Differential technological development


Differential technological development is a strategy of technology governance aiming to decrease risks from emerging technologies by influencing the sequence in which they are developed. Under this strategy, societies would strive to delay the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones. [1] [2]


History of the idea

Differential technological development was initially proposed by philosopher Nick Bostrom in 2002, [1] and he applied the idea to the governance of artificial intelligence in his 2014 book Superintelligence: Paths, Dangers, Strategies. [3] Philosopher Toby Ord also endorsed the strategy in his 2020 book The Precipice: Existential Risk and the Future of Humanity, writing that "While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones." [2] [4]

Informal discussion

Paul Christiano believes that while accelerating technological progress appears to be one of the best ways to improve human welfare in the next few decades, [5] a faster rate of growth cannot be equally important for the far future because growth must eventually saturate due to physical limits. Hence, from the perspective of the far future, differential technological development appears more crucial. [6] [ unreliable source? ]
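This argument can be made concrete with a simple illustration (an informal sketch; the notation does not appear in the cited sources): suppose welfare along humanity's trajectory is a bounded, increasing function of time, $W(t)$, with long-run limit $W_{\max}$. Uniformly accelerating progress by some interval $\Delta$ gives the shifted trajectory $W_a(t) = W(t + \Delta)$, and

\[
\lim_{t \to \infty} W_a(t) = \lim_{t \to \infty} W(t + \Delta) = W_{\max},
\]

so acceleration leaves the long-run limit unchanged and only brings forward the date at which welfare approaches it. Changing the order in which technologies arrive, by contrast, can affect whether the trajectory survives to reach $W_{\max}$ at all, for instance by ensuring that protective technologies exist before the dangerous ones they are meant to guard against.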

Inspired by Bostrom's proposal, Luke Muehlhauser and Anna Salamon suggested a more general project of "differential intellectual progress", in which society advances its wisdom, philosophical sophistication, and understanding of risks faster than its technological power. [7] [ unreliable source? ] [8] [ unreliable source? ] Brian Tomasik has expanded on this notion. [9] [ unreliable source? ]


Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Mind uploading (hypothetical process of digitally emulating a brain)

Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.

Friendly artificial intelligence (AI to benefit humanity)

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.

Nick Bostrom (Swedish philosopher and writer, born 1973)

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University.

Singularitarianism (belief in an incipient technological singularity)

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

AI takeover (hypothetical artificial intelligence scenario)

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Human extinction (hypothetical end of the human species)

Human extinction is the hypothetical end of the human species, whether through natural causes such as population decline from sub-replacement fertility, an asteroid impact, or large-scale volcanism, or through anthropogenic destruction (self-extinction).

Future of Humanity Institute (Oxford interdisciplinary research centre)

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff include futurist Anders Sandberg and Giving What We Can founder Toby Ord.

Global catastrophic risk (potentially harmful worldwide events)

A global catastrophic risk or a doomsday scenario is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an "existential risk."

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Toby Ord (Australian philosopher, born 1979)

Toby David Godfrey Ord is an Australian philosopher. In 2009 he founded Giving What We Can, an international society whose members pledge to donate at least 10% of their income to effective charities, and he is a key figure in the effective altruism movement, which promotes using reason and evidence to help others as much as possible. He is a senior research fellow at the University of Oxford's Future of Humanity Institute, where his work focuses on existential risk. His book on the subject, The Precipice: Existential Risk and the Future of Humanity, was published in March 2020.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals (goals which are made in pursuit of some particular end, but are not the end goals themselves) without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

Existential risk from artificial general intelligence (hypothesized risk to human existence)

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe.

Human Compatible (2019 book by Stuart J. Russell)

Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.

The Precipice: Existential Risk and the Future of Humanity (2020 book about existential risks by Toby Ord)

The Precipice: Existential Risk and the Future of Humanity is a 2020 non-fiction book by the Australian philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute in Oxford. It argues that humanity faces unprecedented risks over the next few centuries and examines the moral significance of safeguarding humanity's future.

Longtermism (philosophical view which prioritises the long-term future)

Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and serves as a primary motivation for efforts that claim to reduce existential risks to humanity.

References

  1. Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology. 9.
  2. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. United Kingdom: Bloomsbury Publishing. p. 200. ISBN 978-1526600219.
  3. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 229–237. ISBN 978-0199678112.
  4. Purtill, Corinne (21 November 2020). "How Close Is Humanity to the Edge?". The New Yorker. Retrieved 27 November 2020.
  5. Muehlhauser, Anna Salamon. "RFID solutions".
  6. Christiano, Paul (15 October 2014). "On Progress and Prosperity". Effective Altruism Forum. Retrieved 21 October 2014.
  7. Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import" (PDF). pp. 18–19. Archived from the original (PDF) on 26 October 2014. Retrieved 29 November 2013.
  8. Muehlhauser, Luke (2013). Facing the Intelligence Explosion. Machine Intelligence Research Institute. Retrieved 29 November 2013.
  9. Tomasik, Brian (23 October 2013). "Differential Intellectual Progress as a Positive-Sum Project". Foundational Research Institute. Retrieved 18 February 2016.