Center for Human-Compatible Artificial Intelligence

Formation: 2016
Headquarters: Berkeley, California
Leader: Stuart J. Russell
Parent organization: University of California, Berkeley
Website: humancompatible.ai

The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley, focused on developing safety methods for advanced artificial intelligence (AI). The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell.[1][2] Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.

CHAI's faculty membership includes Russell, Pieter Abbeel, and Anca Dragan from Berkeley; Bart Selman and Joseph Halpern from Cornell;[3] Michael Wellman and Satinder Singh Baveja from the University of Michigan; and Tom Griffiths and Tania Lombrozo from Princeton.[4] In 2016, the Open Philanthropy Project (OpenPhil) recommended that Good Ventures provide CHAI with $5,555,550 in support over five years.[5] CHAI has since received additional grants of over $12,000,000 from OpenPhil and Good Ventures, including for collaborations with the World Economic Forum and Global AI Council.[6][7][8]

Research

CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior.[9] It has also worked on modeling human-machine interaction in scenarios where intelligent machines have an "off-switch" that they are capable of overriding, as sketched below.[10]
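A rough intuition for the off-switch result can be shown numerically. Suppose the machine is uncertain about the utility u that the human assigns to its proposed action: it can act immediately (or, equivalently, disable its off-switch), with expected payoff E[u], or defer to a human who switches it off whenever u < 0, with expected payoff E[max(u, 0)]. The sketch below is a minimal Monte Carlo illustration of that comparison, not CHAI's actual model; the Gaussian belief, the payoff structure, and the perfectly rational human overseer are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_values(mean, std, n_samples=100_000):
    """Monte Carlo estimate of the machine's expected payoff under two
    policies, given a Gaussian belief over the human's utility u for the
    proposed action (an illustrative modeling choice, not CHAI's model)."""
    u = rng.normal(mean, std, n_samples)
    act_now = u.mean()                 # act (or disable the switch): payoff E[u]
    defer = np.maximum(u, 0.0).mean()  # defer: a rational human vetoes u < 0, payoff E[max(u, 0)]
    return act_now, defer

# E[max(u, 0)] >= max(E[u], 0), so deferring never hurts an uncertain
# machine; the advantage shrinks as the machine grows more confident.
for mean, std in [(0.5, 1.0), (0.5, 0.1), (-0.2, 1.0)]:
    act_now, defer = policy_values(mean, std)
    print(f"belief N({mean:+.1f}, {std:.1f}): act_now={act_now:+.3f}, defer={defer:+.3f}")
```

Because E[max(u, 0)] is never less than max(E[u], 0), deferring to the human weakly dominates whenever the machine is uncertain about u, and the advantage vanishes as its confidence grows. This matches the reported finding that less confident robots are more willing to accept human oversight.[10]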

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or foster the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research focuses on how to bring about this behavior in practice and ensure it is adequately constrained.

The University of California, Berkeley College of Engineering is the engineering school of the University of California, Berkeley. The college occupies fourteen buildings on the northeast side of the main campus and also operates the 150-acre (61-hectare) Richmond Field Station. Established in 1931, the college is considered to be one of the most prestigious and selective engineering schools in both the nation and the world.

Peter Norvig, American computer scientist (born 1956)

Peter Norvig is an American computer scientist and Distinguished Education Fellow at the Stanford Institute for Human-Centered AI. He previously served as a director of research and search quality at Google. Norvig is the co-author, with Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the most popular textbook in the field of AI, used in more than 1,500 universities in 135 countries.

Stuart J. Russell, British computer scientist and author (born 1962)

Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, and from 2008 to 2011 was an adjunct professor of neurological surgery at the University of California, San Francisco. He holds the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, where he founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI). Russell is the co-author, with Peter Norvig, of Artificial Intelligence: A Modern Approach, the authoritative textbook of the field, used in more than 1,500 universities in 135 countries.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

AI takeover, hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Stories of AI takeovers are popular throughout science fiction, and recent advances have made the prospect more plausible. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Bart Selman is a Dutch-American professor of computer science at Cornell University. He is also co-founder and principal investigator of the Center for Human-Compatible Artificial Intelligence (CHAI) at the University of California, Berkeley, led by Stuart J. Russell, and co-chair of the Computing Community Consortium's 20-year roadmap for AI research.

Tshilidzi Marwala, South African engineer and university administrator

Tshilidzi Marwala is a South African artificial intelligence engineer, computer scientist, mechanical engineer, and university administrator. He is currently Rector of the United Nations University and UN Under-Secretary-General. In August 2023, Marwala was appointed to the United Nations scientific advisory council.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificial intelligence (AI) systems.

Andrew Ng, American artificial intelligence researcher

Andrew Yan-Tak Ng is a British-American computer scientist and technology entrepreneur focusing on machine learning and artificial intelligence (AI). Ng co-founded and headed Google Brain and was formerly Chief Scientist at Baidu, where he built the company's Artificial Intelligence Group into a team of several thousand people.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Jeff Dean, American computer scientist and software engineer

Jeffrey Adgate "Jeff" Dean is an American computer scientist and software engineer. Since 2018, he has led Google AI. He was appointed Alphabet's chief scientist in 2023 after a reorganization of Alphabet's AI-focused groups.

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

Open Philanthropy is a research and grantmaking foundation that makes grants based on the doctrine of effective altruism. It was founded as a partnership between GiveWell and Good Ventures. Its current chief executive officer is Alexander Berger, and its main funders are Cari Tuna and Dustin Moskovitz. Moskovitz says that their wealth, worth $16 billion, "belongs to the world. We intend not to have much when we die."

Mary-Anne Williams, Australian professor at UNSW who founded artificial intelligence programs

Mary-Anne Williams FTSE is the Michael J Crouch Chair for Innovation at the University of New South Wales (UNSW) in Sydney, Australia, based in the UNSW Business School.

Ashutosh Saxena is an Indian-American computer scientist, researcher, and entrepreneur known for his contributions to artificial intelligence and robotics. His research interests include deep learning and physical AI for autonomous systems. Saxena is the co-founder and CEO of Caspar.AI, which uses AI with data from ambient 3D radar sensors to predict more than 20 health and wellness markers for patients. Before Caspar.AI, Saxena co-founded Cognical Katapult, which provides a no-credit-required alternative to traditional financing for online and omnichannel retail. Before Katapult, Saxena was an assistant professor in the Computer Science Department and faculty director of the RoboBrain Project at Cornell University.

Pieter Abbeel, machine learning researcher at Berkeley

Pieter Abbeel is a professor of electrical engineering and computer sciences, director of the Berkeley Robot Learning Lab, and co-director of the Berkeley AI Research (BAIR) Lab at the University of California, Berkeley. He is also the co-founder of covariant.ai, a venture-funded start-up that aims to teach robots new, complex skills, and co-founder of Gradescope, an online grading system adopted by over 500 universities nationwide. He is best known for his research in robotics and machine learning, particularly deep reinforcement learning. In 2021, he joined AIX Ventures, a venture capital fund that invests in artificial intelligence startups, as an investment partner.

Human Compatible, 2019 book by Stuart J. Russell

Human Compatible: Artificial Intelligence and the Problem of Control is a 2019 non-fiction book by computer scientist Stuart J. Russell. It asserts that the risk to humanity from advanced artificial intelligence (AI) is a serious concern despite the uncertainty surrounding future progress in AI. It also proposes an approach to the AI control problem.

Kay Firth-Butterfield, law and AI ethics professor and author

Kay Firth-Butterfield is a lawyer, professor, and author specializing in the intersection of artificial intelligence, international relations, business, and AI ethics. She is the CEO of the Centre for Trustworthy Technology, a member of the World Economic Forum's Fourth Industrial Revolution Network. Before taking that position, she was head of AI and machine learning at the World Economic Forum. She was an adjunct professor of law at the University of Texas at Austin.

References

  1. Norris, Jeffrey (Aug 29, 2016). "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Retrieved Dec 27, 2019.
  2. Solon, Olivia (Aug 30, 2016). "The rise of robots: forget evil AI – the real risk is far more insidious". The Guardian. Retrieved Dec 27, 2019.
  3. Cornell University. "Human-Compatible AI". Retrieved Dec 27, 2019.
  4. Center for Human-Compatible Artificial Intelligence. "People". Retrieved Dec 27, 2019.
  5. Open Philanthropy Project (Aug 2016). "UC Berkeley — Center for Human-Compatible AI (2016)". Retrieved Dec 27, 2019.
  6. Open Philanthropy Project (Nov 2019). "UC Berkeley — Center for Human-Compatible AI (2019)". Retrieved Dec 27, 2019.
  7. "UC Berkeley — Center for Human-Compatible Artificial Intelligence (2021)". openphilanthropy.org.
  8. "World Economic Forum — Global AI Council Workshop". Open Philanthropy. April 2020. Archived from the original on 2023-09-01. Retrieved 2023-09-01.
  9. Conn, Ariel (Aug 31, 2016). "New Center for Human-Compatible AI". Future of Life Institute. Retrieved Dec 27, 2019.
  10. Bridge, Mark (June 10, 2017). "Making robots less confident could prevent them taking over". The Times.