Differential technological development

Differential technological development is a strategy of technology governance that aims to reduce risks from emerging technologies by influencing the sequence in which they are developed. Under this strategy, societies would strive to delay the development of harmful technologies and their applications while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones. [1] [2]

History of the idea

Differential technological development was first proposed by the philosopher Nick Bostrom in 2002, [1] and he applied the idea to the governance of artificial intelligence in his 2014 book Superintelligence: Paths, Dangers, Strategies. [3] The strategy was also endorsed by the philosopher Toby Ord, who writes in his 2020 book The Precipice: Existential Risk and the Future of Humanity that "While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones." [2] [4]

Informal discussion

Paul Christiano believes that while accelerating technological progress appears to be one of the best ways to improve human welfare in the next few decades, a faster rate of growth cannot be equally important for the far future because growth must eventually saturate due to physical limits. Hence, from the perspective of the far future, differential technological development appears more crucial. [5]

Inspired by Bostrom's proposal, Luke Muehlhauser and Anna Salamon suggested a more general project of "differential intellectual progress", in which society advances its wisdom, philosophical sophistication, and understanding of risks faster than its technological power. [6] [7] Brian Tomasik has expanded on this notion. [8]

Related Research Articles

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

<span class="mw-page-title-main">Nick Bostrom</span> Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">AI takeover</span> Hypothetical outcome of artificial intelligence

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advances in AI have made the concern more prominent. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

<span class="mw-page-title-main">Human extinction</span> Hypothetical end of the human species

Human extinction is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.

<span class="mw-page-title-main">Anders Sandberg</span> Swedish computer scientist, futurist, transhumanist, and philosopher

Anders Sandberg is a Swedish researcher, futurist and transhumanist. He holds a PhD in computational neuroscience from Stockholm University, and is a former senior research fellow at the Future of Humanity Institute at the University of Oxford.

<span class="mw-page-title-main">Future of Humanity Institute</span> Defunct Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.

<span class="mw-page-title-main">Global catastrophic risk</span> Hypothetical global-scale disaster risk

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

<span class="mw-page-title-main">Toby Ord</span> Australian philosopher (born 1979)

Toby David Godfrey Ord is an Australian philosopher. In 2009 he founded Giving What We Can, an international society whose members pledge to donate at least 10% of their income to effective charities, and he is a key figure in the effective altruism movement, which promotes using reason and evidence to improve the lives of others as much as possible.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothesized tendency for most sufficiently intelligent, goal-directed agents to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals (goals that are pursued in service of some particular end but are not ends in themselves) without limit, so long as their ultimate (intrinsic) goals are never fully satisfied.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

The Precipice: Existential Risk and the Future of Humanity (2020 book about existential risks by Toby Ord)

The Precipice: Existential Risk and the Future of Humanity is a 2020 non-fiction book by the Australian philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute in Oxford. It argues that humanity faces unprecedented risks over the next few centuries and examines the moral significance of safeguarding humanity's future.

<span class="mw-page-title-main">Risk of astronomical suffering</span> Risks of astronomical suffering

Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. They are sometimes categorized as a subclass of existential risks.

<span class="mw-page-title-main">Longtermism</span> Philosophical view which prioritises the long-term future

Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.

Existential risk studies (ERS) is a field of study focused on the definition and theorization of "existential risks", their ethical implications, and related strategies for long-term survival. Existential risks are variously defined by ERS theorists as global calamities capable of causing the extinction of intelligent life on Earth, such as humans, or at least a severe curtailment of its potential. The field's development and expansion can be divided into waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism, and longtermism.

References

  1. Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology. 9.
  2. Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. United Kingdom: Bloomsbury Publishing. p. 200. ISBN 978-1526600219.
  3. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 229–237. ISBN 978-0199678112.
  4. Purtill, Corinne (21 November 2020). "How Close Is Humanity to the Edge?". The New Yorker. Retrieved 27 November 2020.
  5. Christiano, Paul (15 October 2014). "On Progress and Prosperity". Effective Altruism Forum. Retrieved 21 October 2014.
  6. Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import" (PDF). pp. 18–19. Archived from the original (PDF) on 26 October 2014. Retrieved 29 November 2013.
  7. Muehlhauser, Luke (2013). "Facing the Intelligence Explosion". Machine Intelligence Research Institute. Retrieved 29 November 2013.
  8. Tomasik, Brian (23 October 2013). "Differential Intellectual Progress as a Positive-Sum Project". Foundational Research Institute. Retrieved 18 February 2016.