James Frederick Allen | |
---|---|
Born | 1950 (age 73–74) |
Alma mater | University of Toronto (Ph.D., 1979) |
Known for | TRIPS (An Integrated Intelligent Problem-Solving Assistant); PLOW (A Collaborative Task Learning Agent) |
Awards | AAAI Fellow (1990, founding) [1] |
Scientific career | |
Fields | Artificial intelligence; natural language processing and understanding; computational linguistics |
Institutions | University of Rochester; IHMC |
Thesis | A plan-based approach to speech act recognition (1979) |
Academic advisors | C. Raymond Perrault |
Notable students | Garrison Cottrell; Henry Kautz; Diane Litman |
Website | www |
James Frederick Allen (born 1950) is an American computational linguist recognized for his contributions to temporal logic, in particular Allen's interval algebra. He is interested in knowledge representation, commonsense reasoning, and natural language understanding, believing that "deep language understanding can only currently be achieved by significant hand-engineering of semantically-rich formalisms coupled with statistical preferences". [2] He is the John H. Dessauer Professor of Computer Science at the University of Rochester. [3]
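Allen's interval algebra defines thirteen basic relations that can hold between two time intervals: before, meets, overlaps, starts, during, finishes, their six inverses, and equals. As a minimal illustration, the Python sketch below classifies which relation holds between two concrete intervals; the function name and the endpoint-pair representation are illustrative assumptions here, not taken from any published implementation.

```python
# A minimal sketch of Allen's thirteen interval relations.
# Intervals are (start, end) pairs with start < end; names like
# allen_relation are our own, chosen for this illustration.

def allen_relation(i1, i2):
    """Return the basic Allen relation holding from i1 to i2."""
    s1, e1 = i1
    s2, e2 = i2
    if e1 < s2:
        return "before"            # i1 entirely precedes i2
    if e2 < s1:
        return "after"             # inverse of before
    if e1 == s2:
        return "meets"             # i1 ends exactly where i2 starts
    if e2 == s1:
        return "met-by"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 == s2:
        return "starts" if e1 < e2 else "started-by"
    if e1 == e2:
        return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2:
        return "during"            # i1 strictly inside i2
    if s1 < s2 and e2 < e1:
        return "contains"
    # Only the two partial-overlap cases remain at this point.
    return "overlaps" if s1 < s2 else "overlapped-by"

assert allen_relation((1, 3), (3, 5)) == "meets"
assert allen_relation((2, 4), (1, 5)) == "during"
```

In the algebra proper, these relations are used symbolically: a temporal reasoner composes and propagates them as constraints between intervals whose exact endpoints are unknown, rather than computing them from numeric timestamps as this sketch does.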
Allen received his Ph.D. from the University of Toronto in 1979, under the supervision of C. Raymond Perrault, [4] [5] [6] after which he joined the faculty at Rochester. [7] At Rochester, he was department chair from 1987 to 1990, directed the Cognitive Science Program from 1992 to 1996, and co-directed the Center for the Sciences of Language from 1996 to 1998. [7] He served as Editor-in-Chief of Computational Linguistics from 1983 to 1993. [7] [8] [9] Since 2006 he has also been associate director of the Florida Institute for Human and Machine Cognition. [7] [10]
The TRIPS project is a long-term research effort to build generic technology for dialogue systems (both spoken and text-based "chat"), integrating natural language processing, collaborative problem solving, and dynamic context-sensitive language modeling. This contrasts with data-driven machine-learning approaches, which first require corpora (training data) to be collected and annotated. [11]
The PLOW agent is a system that learns executable task models from a single collaborative learning session. It integrates a wide range of AI technologies, including deep natural language understanding, knowledge representation and reasoning, dialogue systems, planning/agent-based systems, and machine learning. The paper describing PLOW won the Outstanding Paper Award at AAAI in 2007. [12]
Allen is the author of the textbook Natural Language Understanding (Benjamin-Cummings, 1987; 2nd ed., 1995). [13] [14]
He is also the co-author with Henry Kautz, Richard Pelavin, and Josh Tenenberg of Reasoning About Plans (Morgan Kaufmann, 1991). [15]
He was elected a founding fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 1990. [1] [16]