Machine Intelligence Research Institute

Formation: 2000
Type: Nonprofit research institute
Purpose: Research into friendly artificial intelligence and the AI control problem
Location: Berkeley, California
Key people: Eliezer Yudkowsky
Website: intelligence.org

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

History

Yudkowsky at Stanford University in 2006

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI). [1] [2] [3] Yudkowsky, however, grew concerned that future AI systems could become superintelligent and pose risks to humanity, [1] and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which at the time were largely ignored by scientists in the field. [2]

Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism". [4] [5] In 2011, its offices were four apartments in downtown Berkeley. [6] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University, [7] and in the following month took the name "Machine Intelligence Research Institute". [8]

In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations. [3] [9]:327

In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI. [10] In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years. [11] [12]

In 2021, Vitalik Buterin donated several million dollars worth of Ethereum to MIRI. [13]

Research and approach

Nate Soares presenting an overview of the AI alignment problem at Google in 2016

MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly. [3] [14] [15]

MIRI researchers advocate early safety work as a precautionary measure. [16] However, MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner". [14] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware. [17]

MIRI aligns itself with the principles and objectives of the effective altruism movement. [18]


Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

<span class="mw-page-title-main">Friendly artificial intelligence</span> AI to benefit humanity

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

<span class="mw-page-title-main">Nick Bostrom</span> Swedish philosopher and writer (born 1973)

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He is the founding director of the Future of Humanity Institute at Oxford University.

<span class="mw-page-title-main">Singularitarianism</span> Belief in an incipient technological singularity

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium-term future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

<span class="mw-page-title-main">Artificial general intelligence</span> Hypothetical human-level or stronger AI

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

<span class="mw-page-title-main">AI takeover</span> Hypothetical artificial intelligence scenario

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

<span class="mw-page-title-main">Ethics of artificial intelligence</span> Ethical issues specific to AI

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use, and treat artificially intelligent systems, and a concern with the behavior of the machines themselves, which is the subject of machine ethics.

The Singularity Summit was the annual conference of the Machine Intelligence Research Institute. It was started in 2006 at Stanford University by Ray Kurzweil, Eliezer Yudkowsky, and Peter Thiel, and the subsequent summits in 2007, 2008, 2009, 2010, 2011, and 2012 were held in San Francisco, San Jose, New York City, San Francisco, New York City, and San Francisco, respectively. Speakers have included Sebastian Thrun, Rodney Brooks, Barney Pell, Marshall Brain, Justin Rattner, Peter Diamandis, Stephen Wolfram, Gregory Benford, Robin Hanson, Anders Sandberg, Juergen Schmidhuber, Aubrey de Grey, Max Tegmark, and Michael Shermer.

LessWrong (rationality-focused community blog)

LessWrong is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.

<span class="mw-page-title-main">Eric Horvitz</span> American computer scientist, and Technical Fellow at Microsoft

Eric Joel Horvitz is an American computer scientist and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA; Cambridge, MA; New York, NY; Montreal, Canada; Cambridge, UK; and Bangalore, India.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Roman Vladimirovich Yampolskiy is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo (2008). He is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.

Our Final Invention (2013 book by James Barrat)

Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level or super-human artificial intelligence. Those supposed risks include extermination of the human race.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

<span class="mw-page-title-main">Existential risk from artificial general intelligence</span> Hypothesized risk to human existence

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

References

  1. 1 2 "MIRI: Artificial Intelligence: The Danger of Good Intentions - Future of Life Institute". Future of Life Institute. 11 October 2015. Archived from the original on 28 August 2018. Retrieved 28 August 2018.
  2. 1 2 Khatchadourian, Raffi. "The Doomsday Invention". The New Yorker. Archived from the original on 2019-04-29. Retrieved 2018-08-28.
  3. 1 2 3 Waters, Richard (31 October 2014). "Artificial intelligence: machine v man". Financial Times. Archived from the original on 27 August 2018. Retrieved 27 August 2018.
  4. Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle . Archived from the original on 11 February 2011. Retrieved 12 October 2015.
  5. Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle . Archived from the original on 14 January 2016. Retrieved 12 October 2015.
  6. Kaste, Martin (January 11, 2011). "The Singularity: Humanity's Last Invention?". All Things Considered, NPR. Archived from the original on August 28, 2018. Retrieved August 28, 2018.
  7. "Press release: Singularity University Acquires the Singularity Summitt". Singularity University. 9 December 2012. Archived from the original on 27 April 2019. Retrieved 28 August 2018.
  8. "Press release: We are now the "Machine Intelligence Research Institute" (MIRI) - Machine Intelligence Research Institute". Machine Intelligence Research Institute. 30 January 2013. Archived from the original on 23 September 2018. Retrieved 28 August 2018.
  9. Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence . United States: Knopf. ISBN   978-1-101-94659-6.
  10. "Machine Intelligence Research Institute — General Support (2019)". Open Philanthropy Project. 2019-03-29. Archived from the original on 2019-10-08. Retrieved 2019-10-08.
  11. "Machine Intelligence Research Institute — General Support (2020)". Open Philanthropy Project. 10 March 2020. Archived from the original on April 13, 2020.
  12. Bensinger, Rob (April 27, 2020). "MIRI's largest grant to date!". MIRI. Archived from the original on April 27, 2020. Retrieved April 27, 2020.
  13. Maheshwari, Suyash (2021-05-13). "Ethereum creator Vitalik Buterin donates $1.5 billion in cryptocurrency to India COVID Relief Fund & other charities". MSN . Archived from the original on 2021-08-24. Retrieved 2023-01-23.
  14. 1 2 LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic . Archived from the original on 19 August 2015. Retrieved 12 October 2015.
  15. Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach . Prentice Hall. ISBN   978-0-13-604259-4.
  16. Sathian, Sanjena (4 January 2016). "The Most Important Philosophers of Our Time Reside in Silicon Valley". OZY. OZY. Archived from the original on 29 July 2018. Retrieved 28 July 2018.
  17. Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover . Archived from the original on 12 October 2015. Retrieved 12 October 2015.
  18. "AI and Effective Altruism". Machine Intelligence Research Institute. 2015-08-28. Archived from the original on 2019-10-08. Retrieved 2019-10-08.
