| Josh Tenenbaum | |
|---|---|
| Born | 21 August 1972 |
| Citizenship | United States |
| Alma mater | Yale University, MIT |
| Known for | Bayesian cognitive science |
| Awards | MacArthur Fellowship (2019) |
| **Scientific career** | |
| Fields | Artificial intelligence, Cognitive science |
| Institutions | Stanford University, MIT |
| Thesis | *A Bayesian Framework for Concept Learning* (1999) |
| Doctoral advisor | Whitman Richards |
| Doctoral students | Thomas L. Griffiths, Rebecca Saxe |
Joshua Brett Tenenbaum (Josh Tenenbaum) is Professor of Computational Cognitive Science at the Massachusetts Institute of Technology. [2] He is known for contributions to mathematical psychology and Bayesian cognitive science. According to the MacArthur Foundation, which named him a MacArthur Fellow in 2019, "Tenenbaum is one of the first to develop and apply probabilistic and statistical modeling to the study of human learning, reasoning, and perception, and to show how these models can explain a fundamental challenge of cognition: how our minds understand so much from so little, so quickly." [3]
Tenenbaum grew up in California. His mother was a teacher [4] and his father is Internet commerce pioneer Jay Martin Tenenbaum. [5]
His research direction was strongly influenced by his parents' interest in teaching and learning, and later by interactions with the cognitive psychologist Roger Shepard during his years at Yale. [4]
Tenenbaum received his undergraduate degree in physics from Yale University in 1993 and his Ph.D. from MIT in 1999. [2] His work focuses on analyzing probabilistic inference as the engine of human cognition and as a means of developing machine learning.
At MIT, Tenenbaum is a professor of computational cognitive science and a member of CSAIL, MIT’s Computer Science and Artificial Intelligence Laboratory. He leads MIT's Computational Cognitive Science lab and also heads an AI project called the MIT Quest for Intelligence. [6] [7]
In 2018, R&D Magazine named Tenenbaum its "Innovator of the Year." [4]
In 2019, Tenenbaum was named a MacArthur Fellow. The MacArthur webpage describes his work as follows: "Combining computational models with behavioral experiments to shed light on human learning, reasoning, and perception, and exploring how to bring artificial intelligence closer to the capabilities of human thinking." [3]
Tenenbaum's recent research includes teaching AI systems to imitate human face-recognition methods [7] and programming AI to understand cause and effect. [8]
Tenenbaum maintains a list of his publications on his MIT web page and on Google Scholar.
Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition. Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning, and from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses". It encompasses all aspects of intellectual functions and processes such as: perception, attention, thought, intelligence, the formation of knowledge, memory and working memory, judgment and evaluation, reasoning and computation, problem solving and decision making, and comprehension and production of language. Imagination is also considered a cognitive process because it involves thinking about possibilities. Cognitive processes use existing knowledge and discover new knowledge.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory. Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research.
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. The formalized models can be used to further refine a comprehensive theory of cognition and as a useful artificial intelligence program. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990.
Dedre Dariel Gentner is an American cognitive and developmental psychologist. She is the Alice Gabriel Twight Professor of Psychology at Northwestern University. She is a leading researcher in the study of analogical reasoning.
Computational cognition is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach which develops computational models based on experimental results. It seeks to understand the basis behind the human method of processing information. Early on, computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology.
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology. In the 2000s and 2010s the view has resurfaced in analytic philosophy.
Embodied cognitive science is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity; the formation of a common set of general principles of intelligent behavior; and the experimental use of robotic agents in controlled environments.
Ron Sun is a cognitive scientist who has made significant contributions to computational psychology and other areas of cognitive science and artificial intelligence. He is currently professor of cognitive sciences at Rensselaer Polytechnic Institute, and was formerly the James C. Dowell Professor of Engineering and Professor of Computer Science at the University of Missouri. He received his Ph.D. in 1992 from Brandeis University.
Daniela L. Rus is a roboticist and computer scientist, Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology.
Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.
The Troland Research Awards are an annual prize given by the United States National Academy of Sciences to two researchers in recognition of psychological research on the relationship between consciousness and the physical world. The areas where these award funds are to be spent include but are not limited to areas of experimental psychology, the topics of sensation, perception, motivation, emotion, learning, memory, cognition, language, and action. The award preference is given to experimental work with a quantitative approach or experimental research seeking physiological explanations.
Embodied cognition is the theory that many features of cognition, whether human or otherwise, are shaped by aspects of an organism's entire body. Sensory and motor systems are seen as fundamentally integrated with cognitive processing. The cognitive features include high-level mental constructs and performance on various cognitive tasks. The bodily aspects involve the motor system, the perceptual system, the bodily interactions with the environment (situatedness), and the assumptions about the world built into the organism's functional structure.
Alan Yuille is a Bloomberg Distinguished Professor of Computational Cognitive Science with appointments in the departments of Cognitive Science and Computer Science at Johns Hopkins University. Yuille develops models of vision and cognition for computers, intended for creating artificial vision systems. He studied under Stephen Hawking at Cambridge University on a PhD in theoretical physics, which he completed in 1981.
Intuitive statistics, or folk statistics, refers to the cognitive phenomenon where organisms use data to make generalizations and predictions about the world. This can be a small amount of sample data or training instances, which in turn contribute to inductive inferences about either population-level properties, future data, or both. Inferences can involve revising hypotheses, or beliefs, in light of probabilistic data that inform and motivate future predictions. The informal tendency for cognitive animals to intuitively generate statistical inferences, when formalized with certain axioms of probability theory, constitutes statistics as an academic discipline.
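The belief-revision process described above can be illustrated as a minimal Bayesian update. The hypotheses, priors, and likelihoods below are hypothetical choices made for this sketch, not taken from the article:

```python
# Toy Bayesian belief revision: which of two coins produced the data?
# All numbers here are illustrative.

def posterior(prior, likelihoods, data):
    """Update beliefs over hypotheses given a sequence of observations."""
    beliefs = dict(prior)
    for obs in data:
        # Weight each hypothesis's belief by the likelihood of obs under it,
        # then renormalize so the beliefs again sum to 1.
        beliefs = {h: p * likelihoods[h][obs] for h, p in beliefs.items()}
        total = sum(beliefs.values())
        beliefs = {h: p / total for h, p in beliefs.items()}
    return beliefs

prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"H": 0.5, "T": 0.5},
    "biased": {"H": 0.9, "T": 0.1},  # the biased coin favors heads
}
# A short run of heads shifts belief strongly toward the biased coin.
print(posterior(prior, likelihoods, "HHHH"))
```

After four heads, the posterior on the biased coin is roughly 0.91, showing how a small sample can drive a confident inference.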
The Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, Cambridge, Massachusetts, United States, engages in fundamental research in the areas of brain and neural systems, and cognitive processes. The department is within the School of Science at the MIT and began initially as the Department of Psychology founded by the psychologist Hans-Lukas Teuber in 1964. In 1986 the MIT Department of Psychology merged with the Whittaker College integrating Psychology and Neuroscience research to form the Department of Brain and Cognitive Sciences.
Neuro-symbolic AI integrates neural and symbolic AI architectures to address the complementary strengths and weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling. As argued by Valiant and many others, the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient machine learning models. Gary Marcus argues that "we cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning." Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol-manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only machinery that we know of that can manipulate such abstract knowledge reliably is the apparatus of symbol-manipulation."
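As a rough illustration of the neural–symbolic division of labor described above, the sketch below pairs a stand-in perception model (a hard-coded scorer, not a real network) with a symbolic rule layer. All names, rules, and scores are invented for the example:

```python
# Toy neuro-symbolic pipeline: a (stubbed) perception model produces symbol
# probabilities, and a symbolic rule layer reasons over the extracted symbol.

def perceive(image):
    """Stand-in for a neural classifier mapping an input to symbol scores."""
    # A real system would run a trained network; we fake confident output.
    lookup = {"img_cat": {"cat": 0.92, "dog": 0.08},
              "img_dog": {"cat": 0.15, "dog": 0.85}}
    return lookup[image]

# Symbolic knowledge: simple IS-A rules over the extracted symbols.
rules = {"cat": "mammal", "dog": "mammal"}

def classify_and_reason(image, threshold=0.5):
    scores = perceive(image)
    symbol = max(scores, key=scores.get)
    if scores[symbol] < threshold:
        return None  # abstain when perception is too uncertain
    return (symbol, rules[symbol])  # symbolic inference over the symbol

print(classify_and_reason("img_cat"))  # -> ('cat', 'mammal')
```

The point of the sketch is the interface: the learned component handles noisy perception, while the symbolic component applies abstract, human-readable knowledge to whatever symbol perception extracts.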
Thomas L. Griffiths is an Australian academic who is the Henry R. Luce Professor of Information Technology, Consciousness, and Culture at Princeton University. He studies human decision-making and its connection to problem-solving methods in computation. His book with Brian Christian, Algorithms to Live By: The Computer Science of Human Decisions, was named one of the "Best Books of 2016" by MIT Technology Review.
Tenenbaum’s scientific work currently focuses on two areas: describing the structure, content, and development of people’s common-sense theories, especially intuitive physics and intuitive psychology, and understanding how people are able to learn and generalize new concepts, models, theories, and tasks from very few examples, a capability known as “one-shot learning.”
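One-shot concept learning of this kind is often illustrated in the Bayesian concept-learning literature with a "number game": given a few example numbers, a learner infers which rule generated them, and the "size principle" makes smaller hypotheses consistent with the data more likely. The hypothesis space and uniform prior below are hypothetical choices for this sketch:

```python
# Illustrative "number game": infer the concept behind a few examples.
# Hypotheses are sets of integers from 1 to 100; choices are illustrative.

hypotheses = {
    "even":        {n for n in range(1, 101) if n % 2 == 0},
    "multiple_10": {n for n in range(1, 101) if n % 10 == 0},
    "power_of_2":  {1, 2, 4, 8, 16, 32, 64},
}

def concept_posterior(examples, hypotheses):
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):
            # Size principle: likelihood (1/|h|)^n strongly favors the
            # smallest hypothesis consistent with the examples.
            scores[name] = (1.0 / len(h)) ** len(examples)
        else:
            scores[name] = 0.0  # inconsistent hypotheses are ruled out
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Three examples suffice to make "powers of 2" far more likely than "even".
print(concept_posterior([16, 8, 2], hypotheses))
```

With just three examples, the posterior concentrates almost entirely on "powers of 2", which is the sense in which such models "understand so much from so little, so quickly."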
Meanwhile, his son Josh Tenenbaum, PhD ‘98, has followed his father’s footsteps to MIT. He’s an assistant professor in the Department of Brain and Cognitive Sciences.
For instance, in 2015 he and two other researchers created computer programs capable of learning to recognize new handwritten characters, as well as certain objects in images, after seeing just a few examples. This is important because the best machine-learning programs typically require huge quantities of training data.
'What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world,' says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).
The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting.