Thomas Dean (computer scientist)

Thomas L. Dean
Thomas Dean in 2012
Born 1950 (age 71–72)
Nationality American
Alma mater Virginia Polytechnic Institute
Yale University
Known for Anytime algorithms
Awards AAAI Fellow (1994) [1]
ACM Fellow (2009) [2]
Scientific career
Fields Computer Science
Institutions Google
Stanford University
Brown University
Thesis Temporal Imagery: An Approach to Reasoning about Time for Planning and Problem Solving (1985)
Doctoral advisor Drew McDermott
Website cs.brown.edu/people/tdean/

Thomas L. Dean (born 1950) is an American computer scientist known for his work in robot planning, probabilistic graphical models, and computational neuroscience. He was one of the first to introduce ideas from operations research and control theory to artificial intelligence. [3] In particular, he introduced the idea of the anytime algorithm and was the first to apply the factored Markov decision process to robotics. [4] [5] He has authored several influential textbooks on artificial intelligence. [3] [6] [7]


He was a professor at Brown University from 1993 to 2007, holding roles including department chair, acting vice president for computing and information services, and deputy provost. [8] In 2006 he started working at Google, where he was instrumental in helping the Google Brain project get its start. He is currently an emeritus professor at Brown and a lecturer and research fellow at Stanford. [9]

Academic and Scientific Contribution

Artificial Intelligence

Control

Dean and Wellman's book Planning and Control [3] provided a much-needed bridge between AI research on discrete-time symbolic methods for goal-directed planning and decision making and continuous-time control-theoretic methods for robotics and industrial control systems. Basic control concepts, including observability, stability, and optimality, are introduced, and many of the most important theoretical results are presented and explained. In a book review in the Artificial Intelligence journal, James Hendler wrote that the book serves as a 'Rosetta Stone' for translation between the fields of robotics and AI. [10]

Anytime Algorithms

The term anytime algorithm was coined by Dean and Boddy in the late 1980s. [11] The focus of Dean and Boddy's work in this area was deliberation scheduling applied to time-dependent planning problems. Deliberation scheduling is the explicit allocation of computational resources to tasks (in most cases anytime algorithms) so as to maximize the total value of an agent's computation. [12] Time-dependent planning problems are planning problems in which the time available for responding to events varies from situation to situation. In addition to defining the basic concepts, Dean and Boddy provided theoretical analyses and applications in robotics and operations research. [13] [14] [15] [16] [17]
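The defining property of an anytime algorithm is that it always holds a usable answer, and the answer's quality improves the longer the algorithm is allowed to run before being interrupted. A minimal illustrative sketch (not drawn from Dean and Boddy's papers; the function name and the choice of Newton iteration for square roots are purely for illustration):

```python
import time

def anytime_sqrt(x, budget_seconds):
    """Anytime approximation of sqrt(x).

    A crude answer is available immediately; each Newton refinement
    step improves it, and the caller-imposed time budget plays the
    role of an interruption, as in deliberation scheduling.
    """
    estimate = x if x >= 1 else 1.0          # usable (if poor) answer from the start
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        estimate = 0.5 * (estimate + x / estimate)   # Newton refinement step
    return estimate                           # best answer found within the budget
```

A deliberation scheduler would then allocate time budgets across several such procedures according to their expected quality-versus-time profiles, giving more time where extra computation buys the most value.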

Markov Processes

Dean played a leading role in the adoption of the framework of Markov decision processes (MDPs) as a foundational tool in artificial intelligence. In particular, he pioneered the use of AI representations and algorithms for factoring complex models and problems into weakly interacting subparts to improve computational efficiency. His work in state estimation emphasized temporal causal reasoning [18] [13] [19] and the integration with probabilistic graphical models. [20] [21] His work in control includes state-space partitioning, [22] [23] [24] [25] [26] hierarchical methods, [20] [21] and model minimization. [27] [28] [29] [14] This line of work is summarized in a highly influential paper written jointly with Craig Boutilier and Steve Hanks. [30]
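The MDP framework underlying this work can be illustrated with textbook value iteration on a tiny explicit-state model. This is a generic sketch of the standard algorithm, not of Dean's factored or model-minimization methods, whose point is precisely to avoid enumerating states one by one:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Standard value iteration for a small explicit-state MDP.

    P[s][a] is a list of (next_state, probability) pairs;
    R[s][a] is the immediate reward for taking a in s;
    gamma is the discount factor. Returns the optimal value function.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected discounted return over actions
            best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:       # converged to within tolerance
            return V
```

In a factored MDP, the state is described by a set of variables (e.g. as a dynamic Bayesian network), and the transition model and value function are represented compactly over those variables rather than over the exponentially large flat state space this sketch enumerates.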

AI Textbook

Working with collaborators James Allen and Yiannis Aloimonos, specializing respectively in natural language processing and computer vision, Dean wrote one of the first modern AI textbooks to incorporate probability theory, machine learning, and robotics, placing traditional AI topics such as symbolic reasoning and knowledge representation in the predicate calculus within a broader context. [6] The first and only edition, published in December 1994, initially competed with the first edition of Russell and Norvig's Artificial Intelligence: A Modern Approach, which came out in 1995, but was eclipsed by the second edition of the Russell and Norvig text released in 2003. [31]

Robotics

As co-chair of the 1991 AAAI Conference, Dean organized a press event featuring mobile robots carrying trays of canapés while narrowly avoiding the participants. The coverage on the evening news was enthusiastically positive, and in 1992 Dean and Peter Bonasso, with feedback from the robotics community, created the AAAI Robotics Competition, featuring events aimed at showing off robots performing tasks in homes, offices, and disaster sites. [32] [33] [34] The competition was still being held in 2010. [35]

Computational Neuroscience

Stanford Course

After starting as a research scientist at Google, Dean was appointed a consulting professor at Stanford and began teaching a course titled Computational Models of the Neocortex. Over the next fifteen years he invited leading neuroscientists from around the world to give talks and advise students working on class projects. Several of the classes resulted in papers coauthored by students that led to research projects at Google. [29] [36] [37] [38]

Neuromancer Project

In an effort to create a team focusing on scalable computational neuroscience, Dean and his students at Stanford produced a white paper entitled Technology Prospects and Investment Opportunities for Scalable Neuroscience [29] that served as the basis for building a team of software engineers and computational neuroscientists focusing on connectomics. Early on, Dean worked with Christof Koch, chief scientist at the Allen Institute for Brain Science, to develop a partnership, and hired Viren Jain from HHMI to serve as the technical lead for the project.

Dean and Jain expanded the team to more than ten software engineers and participated in the planning of the NIH BRAIN Initiative. As their computer vision and machine learning tools improved, the team sought out and developed additional partnerships with Gerry Rubin at the HHMI Janelia Research Campus, Jeff Lichtman at Harvard, and Winfried Denk at the Max Planck Institute of Neurobiology. Each of these collaborations led to high-accuracy, dense reconstructions of neural tissue samples in different organisms, repeatedly surpassing the state of the art in size and quality. [39] [40] [41] Viren Jain is currently the project manager and lead scientist for the ongoing effort at Google. The resulting data on brain connectivity, including the 'hemibrain' connectome, a highly detailed map of neuronal connectivity in the fly brain, [42] and the 'H01' dataset, a 1.4-petabyte rendering of a small sample of human brain tissue, [43] have been publicly released.

Google Brain

Dean led some of the earliest investigations into the use of neural networks at Google, which directly led to the creation of the Google Brain project. He experimented with approaches for using hardware acceleration to overcome performance limitations in building industrial-scale web services, and collaborated with Dean Gaudet on the Google Infrastructure and Platforms Team to make the case for introducing graphics processing units (GPUs) in Google data centers. He worked closely with Vincent Vanhoucke, who led the perception research and speech recognition quality team, to demonstrate the value of GPUs for training and deploying deep neural network architectures in the cloud, focusing on speech recognition for Google Search by Voice.

Administrative and Professional Services

University Administration

Dean served as the Deputy Provost of Brown University from 2003 to 2005, as the chair of Brown's Computer Science Department from 1997 until 2002, and as the Acting Vice President for Computing and Information Services from 2001 until 2002. As Deputy Provost he helped develop and launch new multidisciplinary programs in genomics and the brain sciences as well as oversee substantial changes in the medical school and university libraries.

Professional Leadership

Dean was named a fellow of AAAI in 1994 and an ACM fellow in 2009. He has served on the Executive Council of AAAI and the Computing Research Association Board of Directors. He was a recipient of an NSF Presidential Young Investigator Award in 1989. He served as program co-chair for the 1991 National Conference on Artificial Intelligence and the program chair for the 1999 International Joint Conference on Artificial Intelligence held in Stockholm. He was a founding member of the Academic Alliance of the National Center for Women and Information Technology and a former member of the IJCAI Inc. Board of Trustees.

Related Research Articles

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

Reason maintenance is a knowledge representation approach to efficient handling of inferred information that is explicitly stored. Reason maintenance distinguishes between base facts, which can be defeated, and derived facts. As such it differs from belief revision which, in its basic form, assumes that all facts are equally important. Reason maintenance was originally developed as a technique for implementing problem solvers. It encompasses a variety of techniques that share a common architecture: two components—a reasoner and a reason maintenance system—communicate with each other via an interface. The reasoner uses the reason maintenance system to record its inferences and justifications of the inferences. The reasoner also informs the reason maintenance system which are the currently valid base facts (assumptions). The reason maintenance system uses the information to compute the truth value of the stored derived facts and to restore consistency if an inconsistency is derived.

Dynamic Bayesian network

A Dynamic Bayesian Network (DBN) is a Bayesian network (BN) which relates variables to each other over adjacent time steps. This is often called a Two-Timeslice BN (2TBN) because it says that at any point in time T, the value of a variable can be calculated from the internal regressors and the immediate prior value. DBNs were developed by Paul Dagum in the early 1990s at Stanford University's Section on Medical Informatics. Dagum developed DBNs to unify and extend traditional linear state-space models such as Kalman filters, linear and normal forecasting models such as ARMA and simple dependency models such as hidden Markov models into a general probabilistic representation and inference mechanism for arbitrary nonlinear and non-normal time-dependent domains.

Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.

James Hendler

James Alexander Hendler is an artificial intelligence researcher at Rensselaer Polytechnic Institute, United States, and one of the originators of the Semantic Web. He is a Fellow of the National Academy of Public Administration.

The following outline is provided as an overview of and topical guide to artificial intelligence:

Austin Tate

Austin Tate is Emeritus Professor of Knowledge-based systems in the School of Informatics at the University of Edinburgh. From 1985 to 2019 he was Director of AIAI in the School of Informatics at the University of Edinburgh.

Steve Omohundro

Stephen Malvern Omohundro is an American computer scientist whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making.

This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.

Michael L. Littman

Michael Lederman Littman is a computer scientist. He works mainly in reinforcement learning, but has done work in machine learning, game theory, computer networking, partially observable Markov decision process solving, computer solving of analogy problems and other areas. He is currently a University Professor of Computer Science at Brown University, where he has taught since 2012.

Leslie Pack Kaelbling is an American roboticist and the Panasonic Professor of Computer Science and Engineering at the Massachusetts Institute of Technology. She is widely recognized for adapting partially observable Markov decision process from operations research for application in artificial intelligence and robotics. Kaelbling received the IJCAI Computers and Thought Award in 1997 for applying reinforcement learning to embedded control systems and developing programming tools for robot navigation. In 2000, she was elected as a Fellow of the Association for the Advancement of Artificial Intelligence.

In probability theory, a Markov model is a stochastic model used to model randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it. Generally, this assumption enables reasoning and computation with the model that would otherwise be intractable. For this reason, in the fields of predictive modelling and probabilistic forecasting, it is desirable for a given model to exhibit the Markov property.

Sven Koenig (computer scientist)

Sven Koenig is a full professor in computer science at the University of Southern California. He received an M.S. degree in computer science from the University of California at Berkeley in 1991 and a Ph.D. in computer science from Carnegie Mellon University in 1997, advised by Reid Simmons.

Eric Horvitz

Eric Joel Horvitz is an American computer scientist, and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Google Brain is a deep learning artificial intelligence research team under the umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, Google Brain combines open-ended machine learning research with information systems and large-scale computing resources. The team has created tools such as TensorFlow, which allow for neural networks to be used by the public, with multiple internal AI research projects. The team aims to create research opportunities in machine learning and natural language processing.

Shlomo Zilberstein

Shlomo Zilberstein is an Israeli-American computer scientist. He is a Professor of Computer Science and Associate Dean for Research and Engagement in the College of Information and Computer Sciences at the University of Massachusetts, Amherst. He graduated with a B.A. in Computer Science summa cum laude from Technion – Israel Institute of Technology in 1982, and a Ph.D. in Computer Science from University of California at Berkeley in 1993, advised by Stuart J. Russell. He is known for his contributions to artificial intelligence, anytime algorithms, multi-agent systems, and automated planning and scheduling algorithms, notably within the context of Markov decision processes (MDPs), Partially Observable MDPs (POMDPs), and Decentralized POMDPs (Dec-POMDPs).

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Thomas G. Dietterich is emeritus professor of computer science at Oregon State University. He is one of the pioneers of the field of machine learning. He served as executive editor of Machine Learning (journal) (1992–98) and helped co-found the Journal of Machine Learning Research. In response to the media's attention on the dangers of artificial intelligence, Dietterich has been quoted for an academic perspective to a broad range of media outlets including National Public Radio, Business Insider, Microsoft Research, CNET, and The Wall Street Journal.

Outline of machine learning

The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.

References

  1. "Elected AAAI Fellows".
  2. "ACM Fellows".
  3. Dean, Thomas; Wellman, Michael (1991). Planning and Control. Morgan Kaufmann.
  4. "Four Googlers elected ACM Fellows". 2009.
  5. Thomas Dean publications indexed by Google Scholar
  6. Dean, Thomas; Allen, James; Aloimonos, Yiannis (1995). Artificial Intelligence: Theory and Practice. Addison-Wesley.
  7. Dean, Thomas (2004). Talking with Computers. Cambridge University Press.
  8. "Tom Dean academic biography".
  9. "Thomas L Dean, Stanford Bio".
  10. Hendler, James (1995). "Book Review: Planning and Control by Thomas Dean and Michael Wellman". Artificial Intelligence. 73: 379–386. doi:10.1016/0004-3702(95)90045-4.
  11. Dean, Thomas; Boddy, Mark (1988). "An Analysis of Time-Dependent Planning". Proceedings AAAI-88. Cambridge, Massachusetts: MIT Press. pp. 49–54.
  12. Garvey, Alan; Lesser, Victor (1994). "A survey of research in deliberative real-time artificial intelligence". Real-Time Systems. 6 (3): 317–347. doi:10.1007/BF01088630. S2CID   16566928.
  13. Dean, Thomas; Boddy, Mark (1987). "Incremental Causal Reasoning". Proceedings AAAI-87. Cambridge, Massachusetts: MIT Press. pp. 196–201.
  14. Dean, Thomas; Givan, Robert; Leach, Sonia (1997). "Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes". In Geiger, Dan; Shenoy, Prakesh Pundalik (eds.). Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence. San Francisco, California: Morgan Kaufmann Publishers. pp. 124–131.
  15. Dean, Thomas; Lin, Shieu-Hong (1995). "Decomposition Techniques for Planning in Stochastic Domains". Proceedings IJCAI-95. San Francisco, California: Morgan Kaufmann Publishers. pp. 1121–1127.
  16. Dean, Thomas; Kaelbling, Leslie; Kirman, Jak; Nicholson, Ann (1993). "Planning With Deadlines in Stochastic Domains". Proceedings AAAI-93. Cambridge, Massachusetts: MIT Press. pp. 574–579.
  17. Dean, Thomas; Kaelbling, Leslie; Kirman, Jak; Nicholson, Ann (1995). "Planning Under Time Constraints in Stochastic Domains". AIJ. 76: 35–74.
  18. Dean, Thomas; Kanazawa, Keiji (1989). "A Model for Reasoning About Persistence and Causation". Computational Intelligence. 5 (2): 142–150. doi:10.1111/j.1467-8640.1989.tb00324.x. S2CID   57798167.
  19. Dean, Thomas; Kanazawa, Keiji (1988). "Probabilistic Causal Reasoning". Proceedings of the Canadian Society for Computational Studies of Intelligence. pp. 125–132.
  20. Dean, Thomas; Kanazawa, Keiji (1989). "Persistence and Probabilistic Inference". IEEE Transactions on Systems, Man, and Cybernetics. 19: 574–585. doi:10.1109/21.31063.
  21. Dean, T.; Kirman, J.; Kanazawa, K. (1992). "Probabilistic Network Representations of Continuous-Time Stochastic Processes for Applications in Planning and Control". In Hendler, James (ed.). Proceedings of the First International Conference on Artificial Intelligence Planning Systems (ICAPS-92). San Francisco, California: Morgan Kaufmann Publishers. pp. 273–274.
  22. Dean, Thomas; Firby, R. James; Miller, David P. (1988). "Hierarchical Planning Involving Deadlines, Travel Time and Resources (Also appears in Readings in Planning (Morgan Kaufmann), edited by James Allen, James Hendler, and Austin Tate, and in Autonomous Mobile Robots: Control, Planning, and Architecture (IEEE Computer Society Press), edited by S. S. Iyengar and Alberto Elfes)". CIJ. 4: 381–398.
  23. Hauskrecht, Milos; Meuleau, Nicolas; Boutilier, Craig; Kaelbling, Leslie Pack; Dean, Thomas (1998). "Hierarchical solution of Markov decision processes using macro-actions". Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI-98). San Francisco, California: Morgan Kaufmann Publishers. pp. 220–229.
  24. Kim, Kee-Eung; Dean, Thomas (2003). "Solving Factored Markov Decision Processes Using Non-homogeneous Partitions". AIJ. 147: 225–251.
  25. Kim, Kee-Eung; Meuleau, Nicolas; Dean, Thomas (2000). "Approximate Solutions to Factored Markov Decision Processes via Greedy Search in the Space of Finite State Controllers". Proceedings of the 5th International Conference on Artificial Intelligence Planning Systems (ICAPS-2000). Menlo Park, California: AAAI Press. pp. 323–330.
  26. Littman, Michael; Dean, Thomas; Kaelbling, Leslie (1995). "On the Complexity of Solving Markov Decision Problems". Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence. San Francisco, California: Morgan Kaufmann Publishers. pp. 394–402.
  27. Dean, Thomas; Givan, Robert (1997). "Model Minimization in Markov Decision Processes". Proceedings AAAI-97. Cambridge, Massachusetts: MIT Press. pp. 106–111.
  28. Dean, Thomas; Givan, Robert; Kim, Kee-Eung (1998). "Solving Planning Problems with Large State and Action Spaces". Proceedings of the 4th International Conference on Artificial Intelligence Planning Systems (ICAPS-98). pp. 102–110.
  29. Dean, Thomas; Ahanonu, Biafra; Chowdhury, Mainak; Datta, Anjali; Esteva, Andre; Eth, Daniel; Redmon, Nobie; Rumyantsev, Oleg; Tarter, Ysis (2013). "On the Technology Prospects and Investment Opportunities for Scalable Neuroscience". arXiv: 1307.7302 [q-bio.NC].
  30. Boutilier, Craig; Dean, Thomas; Hanks, Steven (1999). "Decision-Theoretic Planning: Structural Assumptions and Computational Leverage". Journal of Artificial Intelligence Research. 11: 1–94. doi:10.1613/jair.575. S2CID   5297450.
  31. Furbach, Ulrich (2003). "AI - A Multiple Book Review". Artificial Intelligence. 145 (1–2): 379–386. doi:10.1016/S0004-3702(03)00011-0.
  32. https://worldwidescience.org/topicpages/a/aaai+robot+competition.html
  33. Dean, Thomas; Bonasso, R. Peter (1993). "1992 AAAI Robot Exhibition and Competition". AI Magazine. 14: 35–48.
  34. Dean, Thomas; Bonasso, R. Peter (1997). "A Retrospective on the AAAI Robot Competitions". AI Magazine. 18: 11–23.
  35. Anderson, Monica; Chernova, Sonia; Dodds, Zachary; Thomaz, Andrea L.; Touretsky, David (2011). "Report on the AAAI 2010 Robot Exhibition". AI Magazine. 32 (3): 109–118.
  36. Dean, Thomas (2017). "Inferring Mesoscale Models of Neural Computation". arXiv: 1710.05183 [q-bio.NC].
  37. Dean, Thomas; Chiang, Maurice; Gomez, Marcus; Gruver, Nate; Hindy, Yousef; Lam, Michelle; Lu, Peter; Sanchez, Sophia; Saxena, Rohun; Smith, Michael (2018). "Amanuensis: The Programmer's Apprentice". arXiv: 1807.00082 [q-bio.NC].
  38. Dean, Thomas; Fan, Chaofei; Lewis, Francis E.; Sano, Megumi (2019). "Biological Blueprints for Next Generation AI Systems". arXiv: 1912.00421 [q-bio.NC].
  39. Januszewski, Michal; Kornfeld, Jörgen; Li, Peter H; Pope, Art; Blakely, Tim; Lindsey, Larry; Maitin-Shepard, Jeremy B; Tyka, Mike; Denk, Winfried; Jain, Viren (2018). "High-Precision Automated Reconstruction of Neurons with Flood-filling Networks". Nature Methods. 15 (8): 605–610. doi:10.1038/s41592-018-0049-4. PMID   30013046. S2CID   49863171.
  40. Shapson-Coe, Alexander; Januszewski, Michał; Berger, Daniel R.; Pope, Art; Wu, Yuelong; Blakely, Tim; Schalek, Richard L.; Li, Peter; Wang, Shuohong; Maitin-Shepard, Jeremy (2021). "A connectomic study of a petascale fragment of human cerebral cortex". bioRxiv.
  41. Xu, C. Shan; Januszewski, Michal; Lu, Zhiyuan; Takemura, Shin-ya; Hayworth, Kenneth J.; Huang, Gary; Shinomiya, Kazunori; Maitin-Shepard, Jeremy; Ackerman, David; Berg, Stuart (2020). "A Connectome of the Adult Drosophila Central Brain". bioRxiv.
  42. "Releasing the Drosophila Hemibrain Connectome — The Largest Synapse-Resolution Map of Brain Connectivity". January 2020.
  43. "A Browsable Petascale Reconstruction of the Human Cortex". June 2021.