Richard S. Sutton | |
---|---|
Sutton in 2021 | |
Born | 1957 or 1958 (age 67–68), Ohio, U.S. |
Citizenship | Canadian |
Education | Stanford University (BA); University of Massachusetts Amherst (MS, PhD) |
Known for | Temporal difference learning, Dyna, options, GQ(λ) |
Awards | AAAI Fellow (2001); President's Award, INNS (2003); Fellow of the Royal Society of Canada (2016); Turing Award (2024) |
Scientific career | |
Fields | Artificial intelligence, reinforcement learning |
Institutions | University of Alberta |
Thesis | Temporal Credit Assignment in Reinforcement Learning (1984) |
Doctoral advisor | Andrew Barto |
Doctoral students | David Silver, Doina Precup |
Website | incompleteideas.net |
Richard Stuart Sutton FRS FRSC (born 1957 or 1958) is a Canadian computer scientist. He is a professor of computing science at the University of Alberta, a fellow and chief scientific advisor at the Alberta Machine Intelligence Institute, and a research scientist at Keen Technologies. [1] Sutton is considered one of the founders of modern computational reinforcement learning. [2] In particular, he contributed to temporal difference learning and policy gradient methods. [3] He received the 2024 Turing Award with Andrew Barto. [4] [5]
Richard Sutton was born in either 1957 or 1958 [6] [7] in Ohio and grew up in Oak Brook, Illinois, a suburb of Chicago. [8]
Sutton received his B.A. in psychology from Stanford University in 1978 before earning an MS (1980) and a PhD (1984) in computer science from the University of Massachusetts Amherst under the supervision of Andrew Barto. His doctoral dissertation, Temporal Credit Assignment in Reinforcement Learning, introduced actor-critic architectures and temporal credit assignment. [9] [3]
He was influenced by Harry Klopf's work in the 1970s, which proposed that supervised learning is insufficient for AI or for explaining intelligent behavior, and that trial-and-error learning, driven by the "hedonic aspects of behavior", is necessary. This focused his interest on reinforcement learning. [10]
Sutton held a postdoctoral position at the University of Massachusetts Amherst in 1984. [11] He worked at GTE Laboratories in Waltham, Massachusetts, as a principal member of technical staff from 1985 to 1994, then returned to the University of Massachusetts Amherst as a senior research scientist. [12] He was a principal technical staff member at AT&T Labs Shannon Laboratory in Florham Park, New Jersey, from 1998 to 2002. [5] He has been a professor of computing science at the University of Alberta since 2003, where he helped establish the Reinforcement Learning and Artificial Intelligence Laboratory. [13] In 2017 he became a distinguished research scientist with DeepMind and helped launch DeepMind Alberta in Edmonton, a research office operated in close collaboration with the University of Alberta. [14] He was elected Fellow of the Royal Society of Canada in 2016 and Fellow of the Royal Society in 2021. [15] [16] [3] [17]
A former American, Sutton became a Canadian citizen in 2015. [17]
In the early 1980s at UMass, Sutton joined Andrew Barto in exploring the behavior of neurons in the human brain as the basis for human intelligence, a concept that had been advanced by computer scientist A. Harry Klopf. Sutton and Barto developed the mathematics of this concept and used it as a basis for artificial intelligence. The concept became known as reinforcement learning and went on to become a key part of artificial intelligence techniques. [18]
Barto and Sutton used Markov decision processes (MDPs) as the mathematical foundation for describing how agents (algorithmic entities) make decisions in a stochastic, or random, environment, receiving a reward after each action. Traditional MDP theory assumed that agents knew everything about the MDP when trying to maximize their cumulative rewards. Barto and Sutton's reinforcement learning techniques allowed both the environment dynamics and the rewards to be unknown, which let this category of algorithms be applied to a wide array of problems. [19]
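The practical force of that generalization is easiest to see in code. The following is a minimal sketch of tabular Q-learning, a temporal-difference control method from the family of algorithms this work founded: the agent improves its action-value estimates purely from sampled transitions and never consults a transition or reward model. The `env` object and its `reset`/`step`/`actions` interface are illustrative assumptions, not something specified in the sources cited above.

```python
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning sketch; `env` is a hypothetical episodic MDP
    exposing reset() -> state, step(action) -> (state, reward, done),
    and a list of discrete actions `env.actions`."""
    q = {}  # (state, action) -> estimated return, defaulting to 0.0

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: usually exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q.get((state, a), 0.0))

            next_state, reward, done = env.step(action)

            # Temporal-difference update: move Q(s, a) toward the bootstrapped
            # target r + gamma * max_a' Q(s', a'), using only this one sample.
            best_next = 0.0 if done else max(
                q.get((next_state, a), 0.0) for a in env.actions
            )
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

            state = next_state
    return q
```

Because the update touches only sampled transitions, the same few lines apply unchanged whether the unknown environment is a gridworld, a board game, or a control problem, which is precisely what made the model-free formulation so widely applicable.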
Sutton returned to Canada in the 2000s and continued working on reinforcement learning, which kept developing in academic circles until one of its first major real-world applications: Google's AlphaGo program, built on this concept, defeated the reigning human Go champion. [18] Barto and Sutton have been widely credited as pioneers of modern reinforcement learning, and the technique itself is foundational to the modern AI boom. [20]
In a 2019 essay, Sutton proposed the "bitter lesson", which criticized the field of AI research for failing to learn that "building in how we think we think does not work in the long run", arguing that "70 years of AI research [had shown] that general methods that leverage computation are ultimately the most effective, and by a large margin", beating efforts building on human knowledge about specific fields like computer vision, speech recognition, chess or Go. [21] [22]
Sutton argues that large language models are not capable of learning on the job, and so new model architectures are required to enable continual learning. [23] [non-primary source needed] He further argues that a special training phase will be unnecessary: the agent will learn on the fly, rendering large language models obsolete. [23]
In 2023, Sutton and John Carmack announced a partnership for the development of artificial general intelligence (AGI). [1]
Sutton has been a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) since 2001; [24] his nomination read: "For significant contributions to many topics in machine learning, including reinforcement learning, temporal difference techniques, and neural networks." [24] In 2003, he received the President's Award from the International Neural Network Society, [25] and in 2013, the Outstanding Achievement in Research award from the University of Massachusetts Amherst. [26] He received the 2024 Turing Award from the Association for Computing Machinery together with Andrew Barto; the award citation read: "For developing the conceptual and algorithmic foundations of reinforcement learning." [4] [27]
In 2016, Sutton was elected Fellow of the Royal Society of Canada. [28] In 2021, he was elected Fellow of the Royal Society of London. [29]
Sutton introduced temporal-difference methods for prediction and control, establishing convergence properties and practical algorithms. [30] He proposed integrated learning and planning through the Dyna architecture. [31] He co-developed the options framework for temporal abstraction in reinforcement learning. [32] He co-authored the first modern policy gradient formulation with function approximation. [33] [11] [5] [16]
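For reference, the first and last of these contributions can be stated compactly. What follows is a standard textbook rendering of the tabular TD(0) update and the policy gradient theorem, with notation assumed for illustration rather than quoted from the cited papers.

```latex
% TD(0) prediction: nudge the value estimate of the current state toward
% the one-step bootstrapped target R_{t+1} + \gamma V(S_{t+1}).
V(S_t) \leftarrow V(S_t) + \alpha \bigl[ R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \bigr]

% Policy gradient theorem: the gradient of the performance measure J with
% respect to policy parameters \theta, for a differentiable policy
% \pi(a \mid s, \theta), on-policy state distribution \mu, and action
% values q_\pi.
\nabla J(\theta) \propto \sum_{s} \mu(s) \sum_{a} q_\pi(s, a) \, \nabla_\theta \pi(a \mid s, \theta)
```

The bracketed quantity in the first line is the temporal-difference error, the learning signal that also drives actor-critic methods and the updates Dyna applies to simulated experience.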
His essay The Bitter Lesson summarized a view that general methods that scale with computation dominate domain-specific approaches in the long run. [34]
Year | Title | Venue or publisher | Notes |
---|---|---|---|
1988 | Learning to predict by the methods of temporal differences | Machine Learning 3, 9–44 | TD learning foundations [35] |
1990 | Neural Networks for Control | MIT Press | co-editor with W. T. Miller III and P. J. Werbos [36] |
1991 | Dyna, an integrated architecture for learning, planning, and reacting | ACM SIGART Bulletin | Early Dyna results [37] |
1998 | Reinforcement Learning: An Introduction | MIT Press | with Andrew G. Barto. First edition [38] |
1999 | Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning | Artificial Intelligence 112, 181–211 | Options framework with Doina Precup and Satinder Singh [39] |
2000 | Policy Gradient Methods for Reinforcement Learning with Function Approximation | NeurIPS 12 | Policy gradient theorem with function approximation [40] |
2010 | GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces | Technical report, University of Alberta | Off-policy TD with gradients, with H. R. Maei [41] |
2018 | Reinforcement Learning: An Introduction | MIT Press | with Andrew G. Barto. Second edition [42] |