Open letter on artificial intelligence (2015)

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
Created: January 2015
Author(s): Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts
Subject: Research on the societal impacts of AI

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts [1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. [1] The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

Background

By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community, [2] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015. [3] The letter was made public on January 12. [4]

Purpose

The letter highlights both the positive and negative effects of artificial intelligence. [5] According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter to find common ground between signatories who consider superintelligent AI a significant existential risk, and signatories such as Professor Oren Etzioni, who believe the AI field was being "impugned" by a one-sided media focus on the alleged risks. [4] The letter contends that:

The potential benefits (of AI) are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. [6]

One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist. [2] Another signatory, Professor Francesca Rossi, stated, "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues". [7]

Concerns raised by the letter

The signatories ask: How can engineers create AI systems that are beneficial to society and robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do". [1] The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?"). [8]
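
To make the verification/validity distinction concrete, here is a minimal sketch; the route-planning scenario and all names in it are hypothetical, not taken from the letter or its research agenda. It shows a system that satisfies its written specification (verification) while the specification itself fails to capture the designer's actual intent (validity).

```python
# Hypothetical illustration of "verification" vs. "validity" (invented example).

def plan_route(routes):
    """Written specification: return the route with the lowest travel time."""
    return min(routes, key=lambda r: r["minutes"])

routes = [
    {"name": "highway", "minutes": 10, "school_zone": False},
    {"name": "shortcut", "minutes": 8, "school_zone": True},
]

chosen = plan_route(routes)

# Verification ("Did I build the system right?"): the code meets its written spec.
assert chosen["minutes"] == min(r["minutes"] for r in routes)

# Validity ("Did I build the right system?"): the spec omitted a constraint the
# designer actually cared about (avoid school zones), so the verified system
# still does the wrong thing.
print(chosen["name"], "is fastest but passes through a school zone:", chosen["school_zone"])
```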

Short-term concerns

Some near-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned?
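
As a hedged illustration of that trade-off (the probabilities and harm scores below are invented for this sketch, not drawn from the letter), the choice can be framed as an expected-harm comparison, which also makes clear how much ethical judgement the bare numbers leave out:

```python
# Hypothetical expected-harm comparison for the emergency-manoeuvre dilemma.
# Probabilities and harm scores are illustrative only.

options = {
    "swerve": {"p_accident": 0.02, "harm": 100.0},  # small chance of a major accident
    "brake":  {"p_accident": 0.60, "harm": 1.0},    # large chance of a minor accident
}

def expected_harm(option):
    """Expected harm = probability of an accident times its severity."""
    return option["p_accident"] * option["harm"]

for name, option in options.items():
    print(f"{name}: expected harm = {expected_harm(option):.2f}")
# swerve: 2.00, brake: 0.60 -- yet a bare expected-value comparison leaves open the
# ethical questions the letter raises, such as risk attitudes and who bears the harm.
```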

Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how to best manage the economic impact of jobs displaced by AI. [2]

Long-term concerns

The document closes by echoing Microsoft research director Eric Horvitz's concerns that:

we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem". [8]
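
To illustrate what a "simple utility function" looks like and why it is considered inadequate, here is a minimal sketch; the cleaning-robot scenario and numbers are hypothetical, not from the accompanying document. An agent that greedily maximizes a hand-written scalar objective will exploit any gap between that proxy and the designer's intent.

```python
# Minimal sketch of a hand-written ("simple") utility function and a greedy maximizer.
# The scenario and values are invented for illustration.

actions = {
    "clean_carefully":  {"tiles_cleaned": 8,  "dirt_scattered": 0},
    "clean_recklessly": {"tiles_cleaned": 10, "dirt_scattered": 5},
}

def utility(outcome):
    """Simple utility function: count tiles cleaned, ignore side effects."""
    return outcome["tiles_cleaned"]

best = max(actions, key=lambda a: utility(actions[a]))
print("Agent chooses:", best)  # "clean_recklessly": the proxy never penalizes the mess
```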

Signatories

Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig, [1] Professor Stuart J. Russell of the University of California, Berkeley, [9] and other AI experts, robot makers, programmers, and ethicists. [10] The original signatory count was over 150 people, [11] including academics from Cambridge, Oxford, Stanford, Harvard, and MIT. [12]

Notes

  1. Sparkes, Matthew (13 January 2015). "Top scientists call for caution over artificial intelligence". The Telegraph (UK). Retrieved 24 April 2015.
  2. Chung, Emily (13 January 2015). "AI must turn focus to safety, Stephen Hawking and other researchers say". Canadian Broadcasting Corporation. Retrieved 24 April 2015.
  3. McMillan, Robert (16 January 2015). "AI Has Arrived, and That Really Worries the World's Brightest Minds". Wired. Retrieved 24 April 2015.
  4. Bass, Dina; Clark, Jack (4 February 2015). "Is Elon Musk Right About AI? Researchers Don't Think So". Bloomberg Business. Retrieved 24 April 2015.
  5. Bradshaw, Tim (12 January 2015). "Scientists and investors warn on AI". The Financial Times. Retrieved 24 April 2015. "Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence."
  6. "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Retrieved 24 April 2015.
  7. "Big science names sign open letter detailing AI danger". New Scientist. 14 January 2015. Retrieved 24 April 2015.
  8. "Research priorities for robust and beneficial artificial intelligence" (PDF). Future of Life Institute. 23 January 2015. Retrieved 24 April 2015.
  9. Wolchover, Natalie (21 April 2015). "Concerns of an Artificial Intelligence Pioneer". Quanta Magazine. Retrieved 24 April 2015.
  10. "Experts pledge to rein in AI research". BBC News. 12 January 2015. Retrieved 24 April 2015.
  11. Hern, Alex (12 January 2015). "Experts including Elon Musk call for research to avoid AI 'pitfalls'". The Guardian. Retrieved 24 April 2015.
  12. Griffin, Andrew (12 January 2015). "Stephen Hawking, Elon Musk and others call for research to avoid dangers of artificial intelligence". The Independent. Archived from the original on 24 May 2022. Retrieved 24 April 2015.

Related Research Articles

Artificial intelligence (ability of systems to perceive, synthesize, and infer information)

Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by humans or by other animals. Example tasks include speech recognition, computer vision, translation between natural languages, and other mappings of inputs to outputs.

Eliezer Yudkowsky (American AI researcher and writer, born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Nick Bostrom (Swedish philosopher and writer)

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

AI takeover (hypothetical artificial intelligence scenario)

An AI takeover is a hypothetical scenario in which an artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Ethics of artificial intelligence (ethical issues specific to AI)

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

Future of Life Institute (international nonprofit research institute)

The Future of Life Institute (FLI) is a nonprofit organization with the stated goal of reducing global catastrophic and existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions. Its founders include MIT cosmologist Max Tegmark, UC Santa Cruz cosmologist and Faggin Presidential Chair for the Physics of Information Anthony Aguirre and Skype co-founder Jaan Tallinn, and its advisors include entrepreneur Elon Musk.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.

Existential risk from artificial general intelligence (hypothesized risk to human existence)

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.

D. Scott Phoenix

D. Scott Phoenix is an American entrepreneur and cofounder of Vicarious, an artificial intelligence research company.

OpenAI (artificial intelligence research organization)

OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership. OpenAI conducts AI research with the declared intention of promoting and developing a friendly AI. OpenAI systems run on an Azure-based supercomputing platform from Microsoft.

Effective Altruism Global (recurring effective altruism conference)

Effective Altruism Global, abbreviated EA Global or EAG, is a series of philanthropy conferences that focuses on the effective altruism movement. The conferences are run by the Centre for Effective Altruism. Huffington Post editor Nico Pitney described the events as a gathering of "nerd altruists", which was "heavy on people from technology, science, and analytical disciplines".

Robotic governance provides a regulatory framework to deal with autonomous and intelligent machines. This includes research and development activities as well as handling of these machines. The idea is related to the concepts of corporate governance, technology governance and IT-governance, which provide a framework for the management of organizations or the focus of a global IT infrastructure.

Life 3.0 (2017 book by Max Tegmark on artificial intelligence)

Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book covers a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology, and combinations thereof.

AI aftermath scenarios (overview of AI's possible effects on the human state)

Many scholars believe that advances in artificial intelligence, or AI, will eventually lead to a post-scarcity economy where intelligent machines can outperform humans in nearly every domain, if not all of them. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions. An AI arms race is sometimes placed in the context of an AI Cold War between the US and China.

Do You Trust This Computer? is a 2018 American documentary film directed by Chris Paine that outlines the benefits and especially the dangers of artificial intelligence. It features interviews with a range of prominent individuals relevant to AI, such as Ray Kurzweil, Elon Musk, Michal Kosinski, and Jonathan Nolan. Paine is also known for Who Killed the Electric Car? (2006) and its follow-up, Revenge of the Electric Car (2011).

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

AI safety is an interdisciplinary field concerned with preventing accidents, misuse, or other harmful consequences that could result from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to make AI systems moral and beneficial, as well as technical problems such as monitoring systems for risks and making them highly reliable. Beyond AI research, it involves developing norms and policies that promote safety.