Center for Human-Compatible Artificial Intelligence

Formation: 2016
Headquarters: Berkeley, California
Leader: Stuart J. Russell
Parent organization: University of California, Berkeley
Website: humancompatible.ai

The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell.[1][2] Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.

CHAI's faculty membership includes Russell, Pieter Abbeel and Anca Dragan from Berkeley, Bart Selman and Joseph Halpern from Cornell,[3] Michael Wellman and Satinder Singh Baveja from the University of Michigan, and Tom Griffiths and Tania Lombrozo from Princeton.[4] In 2016, the Open Philanthropy Project (OpenPhil) recommended that Good Ventures provide CHAI with $5,555,550 in support over five years.[5] CHAI has since received additional grants from OpenPhil and Good Ventures of over $12,000,000, including for collaborations with the World Economic Forum and Global AI Council.[6][7][8]

Research

CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior.[9] It has also worked on modeling human-machine interaction in scenarios where intelligent machines have an "off-switch" that they are capable of overriding.[10]
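The inverse-reinforcement-learning framing can be made concrete with a toy example. The sketch below is a hypothetical illustration, not CHAI code: a learner watches a demonstrator act in an invented three-state, two-action world and computes a Bayesian posterior over candidate reward functions, assuming the demonstrator is approximately (Boltzmann-)rational. All dynamics, hypotheses, and numbers are made up for the example.

```python
import numpy as np

# Hypothetical sketch of Bayesian inverse reinforcement learning (not CHAI's
# code): infer which reward function best explains observed behavior in a
# toy 3-state, 2-action world with invented deterministic dynamics.

n_states, n_actions = 3, 2

# Deterministic transitions: T[s, a] is the next state.
T = np.array([[1, 2],
              [0, 2],
              [2, 1]])

def q_values(reward, gamma=0.9, iters=200):
    """Q-values from value iteration under a candidate reward on states."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = reward[T] + gamma * V[T]   # Q[s, a] = R(s') + gamma * V(s')
        V = Q.max(axis=1)
    return Q

def log_likelihood(demos, reward, beta=5.0):
    """Boltzmann-rational demonstrator: P(a | s) proportional to exp(beta * Q)."""
    Q = q_values(reward)
    logp = beta * Q - np.logaddexp.reduce(beta * Q, axis=1, keepdims=True)
    return sum(logp[s, a] for s, a in demos)

# Three hypotheses about what the human values: state 0, 1, or 2.
hypotheses = [np.eye(3)[i] for i in range(3)]

# Observed (state, action) pairs: the demonstrator always heads for state 2.
demos = [(0, 1), (1, 1), (2, 0)]

posterior = np.exp([log_likelihood(demos, r) for r in hypotheses])
posterior /= posterior.sum()
print("posterior over reward hypotheses:", posterior.round(3))
# The posterior concentrates on "the human values state 2".
```

The off-switch result can be illustrated the same way. In the game-theoretic model studied by CHAI researchers, a robot uncertain about the utility U of its proposed action can act immediately, switch itself off, or defer to a human overseer; the model idealizes the human as perfectly informed, blocking the action exactly when it would be harmful. A minimal Monte Carlo comparison, again with invented numbers:

```python
import numpy as np

# Hypothetical illustration of the off-switch intuition (invented numbers):
# the robot's action has uncertain utility U; an idealized human overseer
# would block the action exactly when U < 0. Compare three policies.
rng = np.random.default_rng(1)
U = rng.normal(loc=0.2, scale=1.0, size=100_000)  # robot's belief over U

ev_act   = U.mean()                     # act immediately, ignoring the human
ev_off   = 0.0                          # switch itself off: utility 0
ev_defer = np.maximum(U, 0.0).mean()    # defer: human blocks only when U < 0

print(f"act: {ev_act:.3f}, off: {ev_off:.3f}, defer: {ev_defer:.3f}")
```

Deferring weakly dominates the other options because E[max(U, 0)] >= max(E[U], 0): the robot's uncertainty about human preferences is precisely what gives it an incentive to leave its off-switch usable.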

References

  1. Norris, Jeffrey (Aug 29, 2016). "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Retrieved Dec 27, 2019.
  2. Solon, Olivia (Aug 30, 2016). "The rise of robots: forget evil AI – the real risk is far more insidious". The Guardian. Retrieved Dec 27, 2019.
  3. Cornell University. "Human-Compatible AI". Retrieved Dec 27, 2019.
  4. Center for Human-Compatible Artificial Intelligence. "People". Retrieved Dec 27, 2019.
  5. Open Philanthropy Project (Aug 2016). "UC Berkeley — Center for Human-Compatible AI (2016)". Retrieved Dec 27, 2019.
  6. Open Philanthropy Project (Nov 2019). "UC Berkeley — Center for Human-Compatible AI (2019)". Retrieved Dec 27, 2019.
  7. "UC Berkeley — Center for Human-Compatible Artificial Intelligence (2021)". openphilanthropy.org.
  8. "World Economic Forum — Global AI Council Workshop". Open Philanthropy. April 2020. Archived from the original on 2023-09-01. Retrieved 2023-09-01.
  9. Conn, Ariel (Aug 31, 2016). "New Center for Human-Compatible AI". Future of Life Institute. Retrieved Dec 27, 2019.
  10. Bridge, Mark (June 10, 2017). "Making robots less confident could prevent them taking over". The Times.