William Aaron Woods | |
---|---|
Born | June 17, 1942 |
Alma mater | Ohio Wesleyan University; Harvard University |
Known for | KL-ONE, [1] semantic networks, knowledge representation and reasoning [2] |
Awards | Association for Computational Linguistics Lifetime Achievement Award [3] |
Scientific career | |
Institutions | Alphabet; Sun Microsystems [4]; ITA Software; BBN Technologies [5][6]; ON Technology; Applied Expert Systems, Inc. [7]; Ohio Wesleyan University; Harvard University [8] |
Thesis | Semantics for a Question Answering System (1968) |
Doctoral advisor | Susumu Kuno [9] |
Doctoral students | Steven Salzberg [9], Bonnie Webber [9], Ronald J. Brachman |
Website | www |
William Aaron Woods (born June 17, 1942), generally known as Bill Woods, is a researcher in natural language processing, continuous speech understanding, knowledge representation, and knowledge-based search technology. He is currently a Software Engineer at Google. [10]
Woods received a bachelor's degree from Ohio Wesleyan University (1964) and a master's degree (1965) and Ph.D. (1968) in applied mathematics from Harvard University, where he then served as an assistant professor and later as Gordon McKay Professor of the Practice of Computer Science.
Woods built one of the first natural language question answering systems (LUNAR) to answer questions about the Apollo 11 Moon rocks for the NASA Manned Spacecraft Center while he was at Bolt Beranek and Newman (BBN) [5] in Cambridge, Massachusetts. At BBN, he was a principal scientist and manager of the Artificial Intelligence Department in the 1970s and early 1980s. He was the principal investigator for BBN's early work in natural language processing and knowledge representation and for its first project in continuous speech understanding. Subsequently, he was chief scientist for Applied Expert Systems and principal technologist for ON Technology, both Cambridge start-ups. In 1991, he joined Sun Microsystems Laboratories as a principal scientist and distinguished engineer, and in 2007, he joined ITA Software as a distinguished software engineer. Google acquired ITA in 2011, and he has worked at Google since.
Woods' 1975 paper "What's in a Link" [11] is a widely cited [12] critical review of early work in semantic networks. The paper has been cited in the context of querying and natural language processing approaches that draw on semantic networks and general knowledge modeling, and it attempts to clarify notions of meaning and semantics in computational systems. Woods further elaborated on these issues and how they relate to contemporary systems in "Meaning and Links" (2007).
Woods has received many honors, including the Association for Computational Linguistics Lifetime Achievement Award. [3]
Knowledge representation and reasoning is a field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or having a natural-language dialog. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.
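A minimal sketch of the reasoning side, assuming a toy representation: facts are strings, each rule pairs a set of preconditions with a conclusion, and forward chaining applies the rules until nothing new follows. The medical facts and rule names here are invented for illustration.

```python
# Forward-chaining sketch: facts plus IF-THEN rules, applied repeatedly
# until no new conclusions appear. Facts and rules are hypothetical.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived possible_flu and recommend_rest
```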
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with giving computers the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation, and computational linguistics, a subfield of linguistics. Data is typically collected in text corpora and processed with rule-based, statistical, or neural approaches from machine learning and deep learning.
A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. It is often used as a form of knowledge representation: a directed or undirected graph whose vertices represent concepts and whose edges represent semantic relations between concepts, mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Standardized semantic networks are typically expressed as semantic triples.
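A minimal sketch of such a network, with invented concept names: triples encode the edges, and the query walks "is-a" edges transitively, the inheritance-style lookup semantic networks are typically used for.

```python
# Toy semantic network stored as (subject, relation, object) triples.
triples = [
    ("canary", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("bird", "has-part", "wings"),
    ("canary", "color", "yellow"),
]

def isa_closure(concept):
    """Return every concept reachable from `concept` via is-a edges."""
    found, frontier = set(), [concept]
    while frontier:
        node = frontier.pop()
        for s, r, o in triples:
            if s == node and r == "is-a" and o not in found:
                found.add(o)
                frontier.append(o)
    return found

print(isa_closure("canary"))  # {'bird', 'animal'}
```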
Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.
John Florian Sowa is an American computer scientist, an expert in artificial intelligence and computer design, and the inventor of conceptual graphs.
Computational semiotics is an interdisciplinary field that applies, conducts, and draws on research in logic, mathematics, the theory and practice of computation, formal and natural language studies, the cognitive sciences generally, and semiotics proper. The term encompasses both the application of semiotics to computer hardware and software design and, conversely, the use of computation for performing semiotic analysis. The former focuses on what semiotics can bring to computation; the latter on what computation can bring to semiotics.
Semantic similarity is a metric defined over a set of documents or terms in which the distance between items reflects the likeness of their meaning or semantic content, as opposed to lexicographical similarity. Semantic similarity measures are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts, or instances, through a numerical description obtained by comparing the information supporting their meaning or describing their nature. The term semantic similarity is often confused with semantic relatedness: semantic relatedness includes any relation between two terms, while semantic similarity includes only "is a" relations. For example, "car" is similar to "bus" but is also related to "road" and "driving".
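One common family of measures scores similarity by path distance in an "is a" taxonomy. A minimal sketch, with an invented taxonomy and an illustrative 1/(1 + distance) score; note that "car" and "bus" come out more similar than the merely related "car" and "road".

```python
# Path-based similarity over a toy "is-a" taxonomy: concepts that sit
# closer together in the hierarchy are judged more similar.
parent = {
    "car": "vehicle", "bus": "vehicle",
    "vehicle": "artifact", "road": "artifact",
    "artifact": "entity", "driving": "activity", "activity": "entity",
}

def ancestors(c):
    chain = [c]
    while c in parent:
        c = parent[c]
        chain.append(c)
    return chain

def path_similarity(a, b):
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)     # lowest shared ancestor
    dist = pa.index(common) + pb.index(common)  # edges via that ancestor
    return 1.0 / (1.0 + dist)

print(path_similarity("car", "bus"))   # 0.33: both are vehicles
print(path_similarity("car", "road"))  # 0.25: related but less similar
```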
Wallace "Wally" Feurzeig was an American computer scientist who was co-inventor, with Seymour Papert and Cynthia Solomon, of the programming language Logo, and a well-known researcher in artificial intelligence (AI).
In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.
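A small illustration of the idea, assuming a made-up corpus and window size: each word's meaning is approximated by its co-occurrence counts, and words are compared by the cosine of their count vectors.

```python
# Statistical-semantics sketch: co-occurrence counts from a tiny corpus,
# compared with cosine similarity. Corpus and window size are invented.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2
cooc = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[w][corpus[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

print(cosine(cooc["cat"], cooc["dog"]))  # high: similar contexts
```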
Daniel Gureasko Bobrow was an American computer scientist who created the oft-cited artificial intelligence program STUDENT, with which he earned his PhD, worked at BBN Technologies (BBN), and then was a Research Fellow in the Intelligent Systems Laboratory of the Palo Alto Research Center.
STUDENT is an early artificial intelligence program that solves algebra word problems. It was written in Lisp by Daniel G. Bobrow for his 1964 PhD thesis and was designed to read and solve the kind of word problems found in high school algebra books. The program is often cited as an early accomplishment of AI in natural language processing.
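A toy sketch in the spirit of STUDENT, not Bobrow's Lisp implementation: one invented sentence pattern is matched, translated into an equation, and solved for the unknown.

```python
# Match a fixed word-problem pattern, translate it to arithmetic, solve.
import re

def solve(problem):
    # "If x plus 3 equals 7, what is x?"  ->  x = 7 - 3
    m = re.match(r"If (\w+) plus (\d+) equals (\d+), what is \1\?", problem)
    if not m:
        raise ValueError("pattern not recognized")
    var, addend, total = m.group(1), int(m.group(2)), int(m.group(3))
    return var, total - addend

print(solve("If x plus 3 equals 7, what is x?"))  # ('x', 4)
```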
A frame is an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations".
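A minimal sketch of the idea, with invented frames: a frame bundles named slots, and a more specific frame inherits default slot values from a more general one, the classic "stereotyped situation with defaults" behavior.

```python
# Minimal frame sketch: slots with inherited defaults. Names invented.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)  # fall back to the default
        raise KeyError(slot)

restaurant = Frame("restaurant-visit", seats=True, pays_at_end=True)
fast_food = Frame("fast-food-visit", parent=restaurant, pays_at_end=False)

print(fast_food.get("seats"))        # True, inherited default
print(fast_food.get("pays_at_end"))  # False, overridden locally
```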
James Frederick Allen is an American computational linguist recognized for his contributions to temporal logic, in particular Allen's interval algebra. He is interested in knowledge representation, commonsense reasoning, and natural language understanding, believing that "deep language understanding can only currently be achieved by significant hand-engineering of semantically-rich formalisms coupled with statistical preferences". He is the John H. Dessaurer Professor of Computer Science at the University of Rochester.
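Allen's interval algebra classifies how two time intervals relate by comparing their endpoints. A small sketch covering its thirteen relations, assuming each interval is a (start, end) pair with start < end:

```python
# Classify the Allen relation between intervals a and b.
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((1, 3), (3, 6)))  # meets
print(allen_relation((2, 5), (4, 8)))  # overlaps
```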
Carl Eddie Hewitt was an American computer scientist who designed the Planner programming language for automated planning and the actor model of concurrent computation, which have been influential in the development of logic, functional and object-oriented programming. Planner was the first programming language based on procedural plans invoked using pattern-directed invocation from assertions and goals. The actor model influenced the development of the Scheme programming language, the π-calculus, and served as an inspiration for several other programming languages.
The outline of natural-language processing provides an overview of and topical guide to the field.
A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, subclasses, properties, and restrictions on allowable values. The classifier determines whether the various declarations are logically consistent and, if not, highlights the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent, the classifier can assert additional information based on the input; for example, it can add information about existing classes or create additional classes. This differs from traditional inference engines, which trigger on the IF-THEN conditions of rules. Classifiers are also similar to theorem provers in that their input and output are expressed in first-order logic. Classifiers originated with KL-ONE frame languages. They are increasingly significant now that they form part of the enabling technology of the Semantic Web: modern classifiers leverage the Web Ontology Language, and the models they analyze and generate are called ontologies.
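A toy illustration of the subsumption step only, far simpler than KL-ONE: each class is defined by an invented set of required properties, and one class subsumes another when its requirements are a proper subset of the other's.

```python
# Toy classifier sketch: infer subsumption from property definitions.
definitions = {
    "person": {"animate"},
    "parent": {"animate", "has-child"},
    "mother": {"animate", "has-child", "female"},
}

def classify(defs):
    """Return every inferred (more general, more specific) pair."""
    inferred = []
    for a, props_a in defs.items():
        for b, props_b in defs.items():
            if a != b and props_a < props_b:  # proper subset => subsumes
                inferred.append((a, b))
    return inferred

print(classify(definitions))
# [('person', 'parent'), ('person', 'mother'), ('parent', 'mother')]
```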
In natural language processing, a word embedding is a representation of a word used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words closer in the vector space are expected to be similar in meaning. Word embeddings can be obtained using language modeling and feature learning techniques, in which words or phrases from the vocabulary are mapped to vectors of real numbers.
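A small illustration with hand-made three-dimensional vectors (real embeddings are learned and have hundreds of dimensions): similarity between words is the cosine of the angle between their vectors.

```python
# Toy embeddings compared by cosine similarity. Vectors are invented.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))  # higher: related words
print(cosine(emb["king"], emb["apple"]))  # lower: unrelated words
```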
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
A semantic decomposition is an algorithm that breaks down the meanings of phrases or concepts into less complex concepts. The result of a semantic decomposition is a representation of meaning. This representation can be used for tasks, such as those related to artificial intelligence or machine learning. Semantic decomposition is common in natural language processing applications.
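A minimal sketch, assuming an invented lexicon: each concept expands into simpler concepts, recursively, until only designated primitives remain.

```python
# Dictionary-based semantic decomposition into primitive concepts.
decomposition = {
    "bachelor": ["unmarried", "adult", "male"],
    "adult":    ["person", "grown"],
}
primitives = {"unmarried", "male", "person", "grown"}

def decompose(concept):
    if concept in primitives or concept not in decomposition:
        return [concept]
    parts = []
    for sub in decomposition[concept]:
        parts.extend(decompose(sub))
    return parts

print(decompose("bachelor"))  # ['unmarried', 'person', 'grown', 'male']
```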
Madeleine Ashcraft Bates is a researcher in natural language processing who worked at BBN Technologies in Cambridge, Massachusetts from the early 1970s to the late 1990s. She was president of the Association for Computational Linguistics in 1985, and co-editor of the book Challenges in Natural Language Processing (1993).