David A. McAllester

Born: May 30, 1956 (age 67), United States
Alma mater: Massachusetts Institute of Technology
Known for: Artificial intelligence
Awards: AAAI Classic Paper Award (2010) [1]; International Conference on Logic Programming Test of Time Award (2014) [2]
Scientific career
Fields: Computer Science, Artificial Intelligence, Machine Learning
Institutions: Massachusetts Institute of Technology; Toyota Technological Institute at Chicago
Doctoral advisor: Gerald Sussman

David A. McAllester (born May 30, 1956) is an American computer scientist who is a professor and former chief academic officer at the Toyota Technological Institute at Chicago. He received his B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology in 1978, 1979, and 1987 respectively; his Ph.D. was supervised by Gerald Sussman. He was on the faculty of Cornell University for the academic year 1987–1988 and on the faculty of MIT from 1988 to 1995, and was a member of technical staff at AT&T Labs-Research from 1995 to 2002. He has been a fellow of the American Association for Artificial Intelligence since 1997. [3] He has written over 100 refereed publications.

McAllester's research areas include machine learning theory, the theory of programming languages, automated reasoning, AI planning, computer game playing (computer chess) and computational linguistics. A 1991 paper on AI planning [4] proved to be one of the most influential papers of the decade in that area. [5] A 1993 paper on computer game algorithms [6] influenced the design of the algorithms used in the Deep Blue chess system that defeated Garry Kasparov. [7] A 1998 paper on machine learning theory [8] introduced PAC-Bayesian theorems which combine Bayesian and non-Bayesian methods.
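PAC-Bayesian theorems bound the true error of a randomized ("Gibbs") classifier drawn from a learned posterior Q in terms of its empirical error and the divergence of Q from a prior P fixed before seeing the data. As an illustrative sketch (a commonly cited McAllester-style form, not a verbatim statement of the 1998 result): with probability at least 1 − δ over an i.i.d. sample of size m, simultaneously for all posteriors Q,

\[
L(Q) \;\le\; \hat{L}(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}},
\]

where L(Q) and \hat{L}(Q) are the expected true and empirical errors of the Gibbs classifier and KL(Q ∥ P) is the Kullback–Leibler divergence. The prior P supplies the Bayesian ingredient, while the guarantee itself holds uniformly over data distributions in the distribution-free PAC sense.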

Opinions on artificial intelligence

McAllester has voiced concerns about the potential dangers of artificial intelligence, telling the Pittsburgh Tribune-Review that it is inevitable that fully automated intelligent machines will be able to design and build smarter, better versions of themselves, an event known as the singularity. The singularity would enable machines to become infinitely intelligent and would pose an "incredibly dangerous scenario". McAllester estimates a 10 percent probability of the singularity occurring within 25 years and a 90 percent probability of it occurring within 75 years. [9] He served on the AAAI Presidential Panel on Long-Term AI Futures in 2009, [10] and considers the dangers of superintelligent AI worth taking seriously:

I am uncomfortable saying that we are ninety-nine per cent certain that we are safe for fifty years... That feels like hubris to me. [11]

He was later described as discussing the singularity at the panel in terms of two major milestones in artificial intelligence:

1) Operational Sentience: We can easily converse with computers. 2) The AI Chain Reaction: A computer that bootstraps itself to a better self. Repeat. [12]

McAllester has also written about friendly artificial intelligence on his blog. He argues that before machines become capable of programming themselves (potentially leading to the singularity), there will be a period in which they are only moderately intelligent, and that this period should make it possible to test giving them a purpose or mission that would render them safe to humans:

I personally believe that it is likely that within a decade agents will be capable of compelling conversation about the everyday events that are the topics of non-technical dinner conversations. I think this will happen long before machines can program themselves leading to an intelligence explosion. The early stages of artificial general intelligence (AGI) will be safe. However, the early stages of AGI will provide an excellent test bed for the servant mission or other approaches to friendly AI ... If there is a coming era of safe (not too intelligent) AGI then we will have time to think further about later more dangerous eras. [13]

Related Research Articles

<span class="mw-page-title-main">Artificial intelligence</span> Academic discipline today mostly overlapping with machine learning

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals. AI applications include advanced web search engines, recommendation systems, understanding human speech, self-driving cars, generative or creative tools, and competing at the highest level in strategic games.

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

<span class="mw-page-title-main">Friendly artificial intelligence</span> AI to benefit humanity

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

Ray Solomonoff was the inventor of algorithmic probability and its associated General Theory of Inductive Inference, and a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.

<span class="mw-page-title-main">Singularitarianism</span> Belief in an incipient technological singularity

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

<span class="mw-page-title-main">Artificial general intelligence</span> Hypothetical human-level or stronger AI

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent that could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">AI takeover</span> Hypothetical artificial intelligence scenario

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Michael Irwin Jordan is an American scientist, professor at the University of California, Berkeley and researcher in machine learning, statistics, and artificial intelligence.

Marcus Hutter is a Senior Scientist at DeepMind researching the mathematical foundations of artificial general intelligence. He is on leave from his professorship at the ANU College of Engineering and Computer Science of the Australian National University in Canberra, Australia. Hutter studied physics and computer science at the Technical University of Munich. In 2000 he joined Jürgen Schmidhuber's group at the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale in Manno, Switzerland. With others, he developed a mathematical theory of artificial general intelligence. His book Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability was published by Springer in 2005.

<span class="mw-page-title-main">Outline of artificial intelligence</span> Overview of and topical guide to artificial intelligence

The following outline is provided as an overview of and topical guide to artificial intelligence:

<span class="mw-page-title-main">Steve Omohundro</span> American computer scientist

Stephen Malvern Omohundro is an American computer scientist whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.

<span class="mw-page-title-main">Eric Horvitz</span> American computer scientist, and Technical Fellow at Microsoft

Eric Joel Horvitz is an American computer scientist, and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers, and should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

<span class="mw-page-title-main">Milind Tambe</span> American computer scientist

Milind Tambe is an Indian-American educator serving as Professor of Computer Science at Harvard University. He also serves as the director of the Center for Research on Computation and Society at Harvard University and the director of "AI for Social Good" at Google Research India.

<span class="mw-page-title-main">Existential risk from artificial general intelligence</span> Hypothesized risk to human existence

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or another irreversible global catastrophe.

<span class="mw-page-title-main">Glossary of artificial intelligence</span> List of definitions of terms and concepts commonly used in the study of artificial intelligence

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Thomas G. Dietterich is emeritus professor of computer science at Oregon State University. He is one of the pioneers of the field of machine learning. He served as executive editor of the journal Machine Learning (1992–98) and helped co-found the Journal of Machine Learning Research. In response to the media's attention on the dangers of artificial intelligence, Dietterich has been quoted for an academic perspective by a broad range of media outlets including National Public Radio, Business Insider, Microsoft Research, CNET, and The Wall Street Journal.

References

  1. "AAAI Classic Paper Award". AAAI. 2016. Retrieved 19 August 2016.
  2. "Pascal's paper stands the test of time". Australian National University. 23 April 2014. Retrieved 19 August 2016.
  3. "David McAllester biography". Toyota Technological Institute at Chicago. Retrieved 19 August 2016.
  4. McAllester, David; Rosenblitt, David (December 1991). "Systematic Nonlinear Planning" (PDF). Proceedings AAAI-91. AAAI: 634–639. Retrieved 19 August 2016.
  5. "Google Scholar Citations". Google Scholar . 2016. Retrieved 19 August 2016.
  6. McAllester, David; Yuret, Deniz (20 October 1993). "Alpha-Beta-Conspiracy Search". Draft. CiteSeerX   10.1.1.44.6969 .{{cite journal}}: Cite journal requires |journal= (help)
  7. Campbell, Murray S.; Joseph Hoane, Jr., A.; Hsu, Feng-hsiung (1999). "Search Control Methods in Deep Blue" (PDF). AAAI Technical Report SS-99-07. AAAI: 19–23. Archived from the original (PDF) on 14 September 2016. Retrieved 16 August 2016. To the best of our knowledge, the idea of separating the white and black depth computation was first suggested by David McAllester. A later paper (McAllester and Yuret 1993) derived an algorithm, ABC, from conspiracy theory (McAllester 1988).
  8. McAllester, David (1998). "Some PAC-Bayesian theorems". Proceedings of the eleventh annual conference on Computational learning theory - COLT '98. Association for Computing Machinery. pp. 230–234. CiteSeerX 10.1.1.21.1745. doi:10.1145/279943.279989. ISBN 978-1581130577. S2CID 53234792. Retrieved 19 August 2016.
  9. Cronin, Mike (2 November 2009). "Futurists' report reviews dangers of smart robots". Pittsburgh Tribune-Review. Retrieved 20 August 2016.
  10. "Asilomar Meeting on Long-Term AI Futures". Microsoft Research. 2009. Retrieved 20 August 2016.
  11. Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?". The New Yorker. Retrieved 23 August 2016.
  12. Fortnow, Lance (31 July 2009). "The Singularity". Computational Complexity. Retrieved 20 August 2016.
  13. McAllester, David (10 August 2014). "Friendly AI and the Servant Mission". Machine Thoughts. WordPress. Retrieved 20 August 2016.