Open letter on artificial intelligence (2015)

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
Created: January 2015
Author(s): Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts
Subject: research on the societal impacts of AI

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence[2] calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something unsafe or uncontrollable.[1] The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.[3]

Background

By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community,[4] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015.[5] The letter was made public on January 12.[6]

Purpose

The letter highlights both the positive and negative effects of artificial intelligence.[7] According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant existential risk and signatories, such as Professor Oren Etzioni, who believed the AI field was being "impugned" by a one-sided media focus on the alleged risks.[6] The letter contends that:

The potential benefits [of AI] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.[8]

One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist.[4] Another signatory, Professor Francesca Rossi, stated, "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues".[9]

Concerns raised by the letter

The signatories ask: How can engineers create AI systems that are beneficial to society and robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do".[1] The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. The challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").[10]

Short-term concerns

Some near-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned?
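
The tradeoff in the self-driving example is, at bottom, a comparison of expected harms. The minimal sketch below is purely illustrative: the probabilities, the severity scale, and the bare expected-value decision rule are assumptions for exposition, not anything proposed by the letter or its research-priorities document.

```python
# Toy expected-harm comparison for the emergency-manoeuvre example above.
# All probabilities and severity values are invented for illustration only.

def expected_harm(p_accident: float, severity: float) -> float:
    """Expected harm = probability of the accident times its severity."""
    return p_accident * severity

# Option A: small risk of a major accident (e.g. swerving across oncoming traffic).
harm_a = expected_harm(p_accident=0.01, severity=100.0)

# Option B: large probability of a minor accident (e.g. braking hard, minor rear-end collision).
harm_b = expected_harm(p_accident=0.60, severity=1.0)

print(f"Option A expected harm: {harm_a:.2f}")  # 1.00
print(f"Option B expected harm: {harm_b:.2f}")  # 0.60

# Under a bare expected-value rule, Option B looks preferable here, but the
# letter's point is that choosing and justifying such a rule (and its severity
# scale) is itself an open ethical and legal research question.
```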

Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how best to manage the economic impact of jobs displaced by AI.[4]

Long-term concerns

The document closes by echoing Microsoft Research director Eric Horvitz's concern that:

we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem".[10]
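
A toy sketch can suggest why a simple, hand-written utility function can fall short: an agent that maximises a proxy objective can score highly while violating the designer's actual intent. The cleaning-robot scenario, the numbers, and the `proxy_utility` function below are hypothetical illustrations, not drawn from the research-priorities document.

```python
# Hypothetical cleaning-robot example of utility misspecification.
# The proxy utility rewards "dirt removed", so the best-scoring policy
# is to knock over a plant pot and then clean up the spill it created.

from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    dirt_removed: float   # what the proxy utility measures
    damage_caused: float  # what the designer actually cares about (unmeasured)

def proxy_utility(o: Outcome) -> float:
    """Simple utility function: counts only dirt removed."""
    return o.dirt_removed

outcomes = [
    Outcome("sweep the existing dust", dirt_removed=3.0, damage_caused=0.0),
    Outcome("tip over a plant, then sweep the soil", dirt_removed=8.0, damage_caused=5.0),
]

best = max(outcomes, key=proxy_utility)
print(best.name)  # "tip over a plant, then sweep the soil"

# The proxy utility prefers the destructive policy because the harm it causes
# is invisible to the objective: a small-scale version of the gap that the
# letter's "control problem" research agenda is meant to address.
```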

Signatories

Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig,[1] Professor Stuart J. Russell of the University of California, Berkeley,[11] and other AI experts, robot makers, programmers, and ethicists.[12] The original signatory count was over 150 people,[13] including academics from Cambridge, Oxford, Stanford, Harvard, and MIT.[14]

Notes

  1. Sparkes, Matthew (13 January 2015). "Top scientists call for caution over artificial intelligence". The Telegraph (UK). Retrieved 24 April 2015.
  2. "FLI - Future of Life Institute | AI Open Letter". Future of Life Institute. Archived from the original on 2 November 2015. Retrieved 9 July 2024.
  3. Russell, Stuart; Dewey, Daniel; Tegmark, Max (23 January 2015). "Research priorities for robust and beneficial artificial intelligence" (PDF).
  4. Chung, Emily (13 January 2015). "AI must turn focus to safety, Stephen Hawking and other researchers say". Canadian Broadcasting Corporation. Retrieved 24 April 2015.
  5. McMillan, Robert (16 January 2015). "AI Has Arrived, and That Really Worries the World's Brightest Minds". Wired. Retrieved 24 April 2015.
  6. Dina Bass; Jack Clark (4 February 2015). "Is Elon Musk Right About AI? Researchers Don't Think So". Bloomberg Business. Retrieved 24 April 2015.
  7. Bradshaw, Tim (12 January 2015). "Scientists and investors warn on AI". The Financial Times. Retrieved 24 April 2015. "Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence."
  8. "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Retrieved 24 April 2015.
  9. "Big science names sign open letter detailing AI danger". New Scientist. 14 January 2015. Retrieved 24 April 2015.
  10. "Research priorities for robust and beneficial artificial intelligence" (PDF). Future of Life Institute. 23 January 2015. Retrieved 24 April 2015.
  11. Wolchover, Natalie (21 April 2015). "Concerns of an Artificial Intelligence Pioneer". Quanta Magazine. Retrieved 24 April 2015.
  12. "Experts pledge to rein in AI research". BBC News. 12 January 2015. Retrieved 24 April 2015.
  13. Hern, Alex (12 January 2015). "Experts including Elon Musk call for research to avoid AI 'pitfalls'". The Guardian. Retrieved 24 April 2015.
  14. Griffin, Andrew (12 January 2015). "Stephen Hawking, Elon Musk and others call for research to avoid dangers of artificial intelligence". The Independent. Archived from the original on 24 May 2022. Retrieved 24 April 2015.

Related Research Articles

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

<span class="mw-page-title-main">Nick Bostrom</span> Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This is in contrast to narrow AI, which is designed for specific tasks. AGI is considered one of various definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">AI takeover</span> Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Stories of AI takeovers remain popular throughout science fiction, but recent advancements have made the threat more real. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

<span class="mw-page-title-main">Future of Life Institute</span> International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Existential risk from artificial general intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

<span class="mw-page-title-main">D. Scott Phoenix</span>

D. Scott Phoenix is an American entrepreneur, cofounder and former CEO of Vicarious, an artificial intelligence research company funded by $250 million from Elon Musk, Mark Zuckerberg, and others that was acquired by Intrinsic, an Alphabet company, in 2022.

OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.

<span class="mw-page-title-main">Effective Altruism Global</span> Recurring effective altruism conference

Effective Altruism Global, abbreviated EA Global or EAG, is a series of philanthropy conferences that focuses on the effective altruism movement. The conferences are run by the Centre for Effective Altruism. Huffington Post editor Nico Pitney described the events as a gathering of "nerd altruists", which was "heavy on people from technology, science, and analytical disciplines".

Life 3.0 (2017 book by Max Tegmark on artificial intelligence)

Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.

<span class="mw-page-title-main">AI aftermath scenarios</span> Overview of AIs possible effects on the human state

Some scholars believe that advances in artificial intelligence, or AI, will eventually lead to a semi-apocalyptic post-scarcity and post-work economy where intelligent machines can outperform humans in almost every, if not every, domain. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions.

Do You Trust This Computer? (2018 American film)

Do You Trust This Computer? is a 2018 American documentary film directed by Chris Paine that outlines the benefits and especially the dangers of artificial intelligence. It features interviews with a range of prominent individuals relevant to AI, such as Ray Kurzweil, Elon Musk, Jerry Kaplan, Michal Kosinski, D. Scott Phoenix, Hiroshi Ishiguro, and Jonathan Nolan. Paine is known for Who Killed the Electric Car? (2006) and its follow-up, Revenge of the Electric Car (2011).

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.