Machine Intelligence Research Institute

Formation: 2000
Type: Nonprofit research institute
Tax ID no.: 58-2565917
Purpose: Research into friendly artificial intelligence and the AI control problem
Location: Berkeley, California, United States
Key people: Eliezer Yudkowsky
Website: intelligence.org

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

History

Yudkowsky at Stanford University in 2006

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI). [1] [2] [3] However, Yudkowsky began to be concerned that AI systems developed in the future could become superintelligent and pose risks to humanity, [1] and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which were at the time largely ignored by scientists in the field. [2]

Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism". [4] [5] In 2011, its offices were four apartments in downtown Berkeley. [6] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University, [7] and in the following month took the name "Machine Intelligence Research Institute". [8]

In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations. [3] [9]:327

In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI. [10] In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years. [11] [12]

In 2021, Vitalik Buterin donated several million dollars worth of Ethereum to MIRI. [13]

Research and approach

Nate Soares presenting an overview of the AI alignment problem at Google in 2016

MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly. [3] [14] [15]

MIRI researchers advocate early safety work as a precautionary measure. [16] However, MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner". [14] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware. [17]

MIRI aligns itself with the principles and objectives of the effective altruism movement. [18]

Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

<span class="mw-page-title-main">AI takeover</span> Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

<span class="mw-page-title-main">Jaan Tallinn</span> Estonian programmer and investor

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

The Singularity Summit was the annual conference of the Machine Intelligence Research Institute. It was started in 2006 at Stanford University by Ray Kurzweil, Eliezer Yudkowsky, and Peter Thiel, and the subsequent summits in 2007, 2008, 2009, 2010, 2011, and 2012 have been held in San Francisco, San Jose, New York City, San Francisco, New York City, and San Francisco respectively. Some speakers have included Sebastian Thrun, Rodney Brooks, Barney Pell, Marshall Brain, Justin Rattner, Peter Diamandis, Stephen Wolfram, Gregory Benford, Robin Hanson, Anders Sandberg, Juergen Schmidhuber, Aubrey de Grey, Max Tegmark, and Michael Shermer.

LessWrong (rationality-focused community blog)

LessWrong is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

<span class="mw-page-title-main">Roman Yampolskiy</span> Latvian computer scientist (born 1979)

Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of Cyber Security Lab, in the department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.

Our Final Invention (2013 book by James Barrat)

Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level (AGI) or super-human (ASI) artificial intelligence. Those supposed risks include extermination of the human race.

<span class="mw-page-title-main">Vitalik Buterin</span> Canadian programmer (born 1994)

Vitaly Dmitrievich Buterin, better known as Vitalik Buterin, is a Canadian computer programmer and co-founder of Ethereum. Buterin became involved with cryptocurrency early in its inception, co-founding Bitcoin Magazine in 2011 and Dark Wallet in 2013 together with Amir Taaki and Cody Wilson. In 2015, Buterin deployed the Ethereum blockchain with Gavin Wood, Charles Hoskinson, Anthony Di Iorio, and Joseph Lubin.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

References

  1. 1 2 "MIRI: Artificial Intelligence: The Danger of Good Intentions - Future of Life Institute". Future of Life Institute. 11 October 2015. Archived from the original on 28 August 2018. Retrieved 28 August 2018.
  2. 1 2 Khatchadourian, Raffi. "The Doomsday Invention". The New Yorker. Archived from the original on 2019-04-29. Retrieved 2018-08-28.
  3. 1 2 3 Waters, Richard (31 October 2014). "Artificial intelligence: machine v man". Financial Times. Archived from the original on 27 August 2018. Retrieved 27 August 2018.
  4. Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle . Archived from the original on 11 February 2011. Retrieved 12 October 2015.
  5. Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle . Archived from the original on 14 January 2016. Retrieved 12 October 2015.
  6. Kaste, Martin (January 11, 2011). "The Singularity: Humanity's Last Invention?". All Things Considered, NPR. Archived from the original on August 28, 2018. Retrieved August 28, 2018.
  7. "Press release: Singularity University Acquires the Singularity Summitt". Singularity University. 9 December 2012. Archived from the original on 27 April 2019. Retrieved 28 August 2018.
  8. "Press release: We are now the "Machine Intelligence Research Institute" (MIRI) - Machine Intelligence Research Institute". Machine Intelligence Research Institute. 30 January 2013. Archived from the original on 23 September 2018. Retrieved 28 August 2018.
  9. Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence . United States: Knopf. ISBN   978-1-101-94659-6.
  10. "Machine Intelligence Research Institute — General Support (2019)". Open Philanthropy Project. 2019-03-29. Archived from the original on 2019-10-08. Retrieved 2019-10-08.
  11. "Machine Intelligence Research Institute — General Support (2020)". Open Philanthropy Project. 10 March 2020. Archived from the original on April 13, 2020.
  12. Bensinger, Rob (April 27, 2020). "MIRI's largest grant to date!". MIRI. Archived from the original on April 27, 2020. Retrieved April 27, 2020.
  13. Maheshwari, Suyash (2021-05-13). "Ethereum creator Vitalik Buterin donates $1.5 billion in cryptocurrency to India COVID Relief Fund & other charities". MSN. Archived from the original on 2021-08-24. Retrieved 2023-01-23.
  14. LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic. Archived from the original on 19 August 2015. Retrieved 12 October 2015.
  15. Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  16. Sathian, Sanjena (4 January 2016). "The Most Important Philosophers of Our Time Reside in Silicon Valley". OZY. Archived from the original on 29 July 2018. Retrieved 28 July 2018.
  17. Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover. Archived from the original on 12 October 2015. Retrieved 12 October 2015.
  18. "AI and Effective Altruism". Machine Intelligence Research Institute. 2015-08-28. Archived from the original on 2019-10-08. Retrieved 2019-10-08.
