| Thomas L. Dean | |
|---|---|
| Born | 1950 (age 71–72) |
| Nationality | American |
| Alma mater | Virginia Polytechnic Institute, Yale University |
| Known for | Anytime algorithms |
| Awards | AAAI Fellow (1994), [1] ACM Fellow (2009) [2] |
| Scientific career | |
| Fields | Computer science |
| Institutions | Google, Stanford University, Brown University |
| Thesis | Temporal Imagery: An Approach to Reasoning about Time for Planning and Problem Solving (1985) |
| Doctoral advisor | Drew McDermott |
| Website | cs |
Thomas L. Dean (born 1950) is an American computer scientist known for his work in robot planning, probabilistic graphical models, and computational neuroscience. He was one of the first to introduce ideas from operations research and control theory to artificial intelligence. [3] In particular, he introduced the idea of the anytime algorithm and was the first to apply the factored Markov decision process to robotics. [4] [5] He has authored several influential textbooks on artificial intelligence. [3] [6] [7]
He was a professor at Brown University from 1993 to 2007, holding roles including department chair, acting vice president for computing and information services, and deputy provost. [8] In 2006 he started working at Google, where he was instrumental in helping the Google Brain project get its start. He is currently an emeritus professor at Brown and a lecturer and research fellow at Stanford. [9]
Dean and Wellman's book Planning and Control [3] provided a much-needed bridge between research in AI on discrete-time symbolic methods for goal-directed planning and decision making, and continuous-time control-theoretic methods for robotics and industrial control systems. Basic control concepts including "observability", "stability", and "optimality" are introduced, and many of the most important theoretical results are presented and explained. In a book review in the Artificial Intelligence journal, James Hendler wrote that the book serves as a 'Rosetta Stone' for translation between the fields of robotics and AI. [10]
The term anytime algorithm was coined by Dean and Boddy in the late 1980s. [11] The focus of Dean and Boddy's work in this area was deliberation scheduling applied to time-dependent planning problems. Deliberation scheduling is the explicit allocation of resources to tasks (in most cases anytime algorithms) so as to maximize the total value of an agent's computation. [12] Time-dependent planning problems are planning problems in which the time available for responding to events varies from situation to situation. In addition to defining the basic concepts, Dean and Boddy provided theoretical analyses and applications in robotics and operations research. [13] [14] [15] [16] [17]
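The defining property of an anytime algorithm is that it can be interrupted at any moment and still return a usable answer, with answer quality improving as more computation time is granted. The Python sketch below is only an illustration of that contract, not code from Dean and Boddy's papers; the tour-improvement task, function name, and deadline parameter are hypothetical. It refines a travelling-salesman tour by 2-opt swaps until a caller-supplied deadline, which is the kind of interruptible routine a deliberation scheduler would allocate time to.

```python
import time

def anytime_tour_improvement(tour, distance, deadline):
    """Illustrative anytime routine: always holds a complete, valid tour and
    improves it via 2-opt swaps until the deadline, so it can be interrupted
    at any point and still return its best answer so far."""
    def tour_length(t):
        return sum(distance[t[i]][t[(i + 1) % len(t)]] for i in range(len(t)))

    best = list(tour)
    best_len = tour_length(best)
    improved = True
    while improved and time.monotonic() < deadline:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                if time.monotonic() >= deadline:
                    return best, best_len          # interrupted: current best is still valid
                candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                cand_len = tour_length(candidate)
                if cand_len < best_len:            # answer quality never decreases
                    best, best_len, improved = candidate, cand_len, True
    return best, best_len

# Hypothetical usage: grant the routine 50 milliseconds of deliberation time.
distance = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tour, length = anytime_tour_improvement([0, 1, 2, 3], distance, time.monotonic() + 0.05)
print(tour, length)
```

A deliberation scheduler would then choose how much time to grant each such routine so that the expected value of the resulting answers, net of the cost of the time spent computing them, is maximized.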
Dean played a leading role in the adoption of the framework of Markov decision processes (MDPs) as a foundational tool in artificial intelligence. In particular, he pioneered the use of AI representations and algorithms for "factoring" complex models and problems into weakly interacting subparts to improve computational efficiency. His work in state estimation emphasized temporal causal reasoning [18] [13] [19] and its integration with probabilistic graphical models. [20] [21] His work in control includes state-space partitioning, [22] [23] [24] [25] [26] hierarchical methods, [20] [21] and model minimization. [27] [28] [29] [14] This line of work is summarized in a highly influential paper written jointly with Craig Boutilier and Steve Hanks. [30]
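As a minimal, hypothetical sketch of what such factoring buys (not Dean's own representation or code; the variable names and probabilities below are invented), consider an MDP whose state is a vector of binary variables, where each next-state variable depends on only a few parent variables. The joint transition probability is then a product of small local factors, as in a dynamic Bayesian network, rather than a single table exponential in the number of state variables.

```python
from typing import Callable, Dict, Sequence, Tuple

State = Tuple[int, ...]  # an assignment to the binary state variables

class FactoredMDP:
    """Toy factored transition model: each next-state variable depends only on a
    small set of parent variables, so the joint transition probability factors
    into a product of local terms instead of one exponentially large table."""

    def __init__(self, num_vars: int,
                 parents: Dict[int, Sequence[int]],
                 cpts: Dict[int, Callable[[Tuple[int, ...], str], float]]):
        self.num_vars = num_vars
        self.parents = parents  # parents[i]: indices of the variables x_i' depends on
        self.cpts = cpts        # cpts[i](parent_values, action) = P(x_i' = 1 | parents, action)

    def transition_prob(self, s: State, action: str, s_next: State) -> float:
        prob = 1.0
        for i in range(self.num_vars):
            parent_values = tuple(s[j] for j in self.parents[i])
            p_one = self.cpts[i](parent_values, action)
            prob *= p_one if s_next[i] == 1 else 1.0 - p_one
        return prob

# Hypothetical three-variable domain: each variable depends on at most two others.
mdp = FactoredMDP(
    num_vars=3,
    parents={0: [0], 1: [0, 1], 2: [1, 2]},
    cpts={
        0: lambda pv, a: 0.9 if a == "charge" else 0.5 * pv[0],
        1: lambda pv, a: 0.8 if pv[0] else 0.2,
        2: lambda pv, a: 0.7 if pv[0] and pv[1] else 0.1,
    },
)
print(mdp.transition_prob((1, 0, 1), "charge", (1, 1, 1)))  # 0.9 * 0.8 * 0.1
```

Methods such as state-space partitioning, hierarchical decomposition, and model minimization exploit exactly this locality to plan without enumerating the full joint state space.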
Working with his collaborators James Allen and Yiannis Aloimonos, specialists in natural language processing and computer vision respectively, Dean wrote one of the first modern AI textbooks to incorporate probability theory, machine learning, and robotics, placing traditional AI topics such as symbolic reasoning and knowledge representation using the predicate calculus within a broader context. [6] The first and only edition, published in December 1994, initially competed with the first edition of Russell and Norvig's Artificial Intelligence: A Modern Approach, which came out in 1995, but was eclipsed by the second edition of the Russell and Norvig text released in 2003. [31]
As co-chair of the 1991 AAAI Conference, Dean organized a press event featuring mobile robots carrying trays of canapés while narrowly avoiding the participants. The coverage on the evening news was enthusiastically positive, and in 1992 Dean and Peter Bonasso, with feedback from the robotics community, created the AAAI Robotics Competition, featuring events in which robots performed tasks in home, office, and disaster settings. [32] [33] [34] The competition was still being held in 2010. [35]
After starting as a research scientist at Google, Dean was appointed as a consulting professor at Stanford and began teaching a course titled Computational Models of the Neocortex. Over the next fifteen years he invited top neuroscientists from all over the world to give talks and advise students working on class projects. Several of the classes resulted in papers coauthored with students that led to research projects at Google. [29] [36] [37] [38]
In an effort to create a team focusing on scalable computational neuroscience, Dean and his students at Stanford produced a white paper entitled Technology Prospects and Investment Opportunities for Scalable Neuroscience [29] that served as the basis for building a team of software engineers and computational neuroscientists focusing on connectomics. Early on, Dean worked with Christof Koch, the chief scientist of the Allen Institute for Brain Science, to develop a partnership, and hired Viren Jain from HHMI to serve as the technical lead for the project.
Dean and Jain expanded the team to more than ten software engineers and participated in the planning of the NIH BRAIN Initiative. As their computer vision and machine learning tools improved, the team sought out and developed additional partnerships with Gerry Rubin at HHMI's Janelia Research Campus, Jeff Lichtman at Harvard, and Winfried Denk at the Max Planck Institute of Neurobiology. Each of these collaborations led to high-accuracy, dense reconstructions of neural tissue samples from different organisms, repeatedly surpassing the state of the art in size and quality. [39] [40] [41] Viren Jain is currently the project manager and lead scientist for the ongoing effort at Google. The resulting data on brain connectivity has been publicly released, including the 'hemibrain' connectome, a highly detailed map of neuronal connectivity in the fly brain, [42] and the 'H01' dataset, a 1.4-petabyte rendering of a small sample of human brain tissue. [43]
Dean led some of the earliest investigations into the use of neural networks at Google, which directly led to the creation of the Google Brain project. He experimented with approaches for using hardware acceleration to overcome performance limitations in building industrial-scale web services, and collaborated with Dean Gaudet on the Google Infrastructure and Platforms Team to make the case for introducing graphics processing units (GPUs) into Google data centers. He worked closely with Vincent Vanhoucke, who led the perception research and speech recognition quality team, to demonstrate the value of GPUs for training and deploying deep neural network architectures in the cloud, focusing on speech recognition for Google Search by Voice.
Dean served as the Deputy Provost of Brown University from 2003 to 2005, as the chair of Brown's Computer Science Department from 1997 until 2002, and as the Acting Vice President for Computing and Information Services from 2001 until 2002. As Deputy Provost he helped develop and launch new multidisciplinary programs in genomics and the brain sciences and oversaw substantial changes in the medical school and university libraries.
Dean was named a fellow of AAAI in 1994 and an ACM fellow in 2009. He has served on the Executive Council of AAAI and the Computing Research Association Board of Directors. He was a recipient of an NSF Presidential Young Investigator Award in 1989. He served as program co-chair for the 1991 National Conference on Artificial Intelligence and the program chair for the 1999 International Joint Conference on Artificial Intelligence held in Stockholm. He was a founding member of the Academic Alliance of the National Center for Women and Information Technology and a former member of the IJCAI Inc. Board of Trustees.