STUDENT is an early artificial intelligence program that solves algebra word problems. It was written in Lisp by Daniel G. Bobrow for his 1964 PhD thesis (Bobrow 1964) and was designed to read and solve the kind of word problems found in high school algebra books.[1] The program is often cited as an early accomplishment of AI in natural language processing.
Within Project MAC at MIT, the STUDENT system was an early example of question-answering software, distinctive in combining natural language processing with symbolic programming.[2] Other early attempts at solving algebra story problems were also realized with 1960s hardware and software, for example the Philips, Baseball, and Synthex systems.[3]
STUDENT accepts an algebra story written in English as input and generates a number as output. This is realized with a layered pipeline of pattern-transformation heuristics. First, the English sentences are converted into kernel sentences, each of which carries a single piece of information. The kernel sentences are then converted into mathematical expressions.[4] The knowledge base that supports the transformation contains 52 facts.[clarification needed][5]
STUDENT is a rule-based system with logical inference.[6] Its rules are pre-programmed by the software developer and parse the restricted natural language of the problem statements by pattern matching.
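The Python sketch below conveys the flavor of this pipeline: each sentence is reduced to a kernel sentence of the form "noun phrase is expression", operator words are rewritten into arithmetic, and the resulting equations are solved by substitution. It is a hypothetical reconstruction for illustration only, not Bobrow's program (which was written in Lisp); the rewrite rules, variable naming, and solver are all assumptions.

```python
import re

# Illustrative rewrite rules (assumed, not Bobrow's actual rule set).
OPERATOR_REWRITES = [
    (r"\btwice ", "2 * "),
    (r"\b(\d+) times ", r"\1 * "),
]

def to_kernel(sentence):
    """Split a sentence at 'is' into (noun phrase, value expression)."""
    noun, value = sentence.split(" is ", 1)
    return noun.strip(), value.strip()

def to_expression(phrase, variables):
    """Rewrite operator words, then replace known noun phrases by variables."""
    for pattern, repl in OPERATOR_REWRITES:
        phrase = re.sub(pattern, repl, phrase)
    # Try longer noun phrases first so the most specific match wins.
    for noun in sorted(variables, key=len, reverse=True):
        phrase = phrase.replace(noun, variables[noun])
    return phrase

def solve(story, question_noun):
    kernels = [to_kernel(s.strip()) for s in story.split(".") if s.strip()]
    variables = {noun: f"v{i}" for i, (noun, _) in enumerate(kernels)}
    equations = {variables[n]: to_expression(v, variables) for n, v in kernels}
    env = {}
    for _ in equations:  # repeated substitution; assumes acyclic definitions
        for var, expr in equations.items():
            try:
                env[var] = eval(expr, {}, env)
            except NameError:
                pass  # depends on a variable not yet computed
    return env[variables[question_noun]]

story = ("the number of oranges is 3. "
         "the number of apples is twice the number of oranges")
print(solve(story, "the number of apples"))  # -> 6
```

Bobrow's actual system used a much richer set of transformation rules and could handle simultaneous linear equations; the sketch only captures the overall shape of the pipeline.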
More powerful techniques for natural language processing, such as machine learning, came into use later as hardware grew more capable, and eventually displaced simpler rule-based systems in popularity.[7]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
Knowledge representation (KR) aims to model information in a structured manner in order to formally represent it as knowledge in knowledge-based systems. Knowledge representation and reasoning (KRR, KR&R, or KR²) additionally aims to understand, reason about, and interpret knowledge. KRR is widely used in the field of artificial intelligence (AI) with the goal of representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or holding a natural-language dialog. KR incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. KRR also incorporates findings from logic to automate various kinds of reasoning.
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language, and is thus closely related to information retrieval, knowledge representation, and computational linguistics, a subfield of linguistics. Data is typically collected in text corpora and processed with rule-based, statistical, or neural approaches from machine learning and deep learning.
Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
Peter Norvig is an American computer scientist and Distinguished Education Fellow at the Stanford Institute for Human-Centered AI. He previously served as a director of research and search quality at Google. Norvig is the co-author with Stuart J. Russell of the most popular textbook in the field of AI: Artificial Intelligence: A Modern Approach used in more than 1,500 universities in 135 countries.
In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls to pure functions and returning the cached result when the same inputs occur again. Memoization has also been used in other contexts, such as in simple mutually recursive descent parsing. It is a type of caching, distinct from other forms of caching such as buffering and page replacement. In the context of some logic programming languages, memoization is also known as tabling.
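As a brief illustration, here is memoization in Python using the standard library's functools.lru_cache; the Fibonacci function is a stock example chosen for illustration, not drawn from the text above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache results keyed by the argument tuple
def fib(n):
    """Pure function: the same input always yields the same output."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Without the cache this recursion is exponential in n; with it, each
# fib(k) is computed once, so fib(100) returns immediately.
print(fib(100))  # 354224848179261915075
```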
General Problem Solver (GPS) is a computer program created in 1957 by Herbert A. Simon, J. C. Shaw, and Allen Newell, intended to work as a universal problem-solver machine. In contrast to the earlier Logic Theorist project, GPS works with means–ends analysis.
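To give a sense of means–ends analysis, the toy Python sketch below selects an operator whose effects supply a missing goal, recursively achieves that operator's preconditions, and then applies it. The operators and the greedy control strategy are invented for illustration and are far simpler than the actual GPS.

```python
# Toy means-ends analysis (hypothetical operators; no loop protection).
def achieve(state, goals, ops):
    for goal in goals:
        if goal in state:
            continue
        # Pick an operator that adds the missing goal to the state.
        op = next((o for o in ops if goal in o["adds"]), None)
        if op is None or not achieve(state, op["preconds"], ops):
            return False
        state |= op["adds"]
        print("executing:", op["action"])
    return True

ops = [
    {"action": "drive to shop", "preconds": {"car works"}, "adds": {"at shop"}},
    {"action": "repair car", "preconds": {"have money"}, "adds": {"car works"}},
]
achieve({"have money"}, {"at shop"}, ops)
# executing: repair car
# executing: drive to shop
```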
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
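A minimal sketch of that control cycle, with an invented toy problem and two invented knowledge sources, might look like this in Python:

```python
# Toy blackboard: independent specialists each post a partial solution
# whenever their precondition matches the current blackboard state.
blackboard = {"raw": "  Hello WORLD  "}

def stripper(bb):
    """Fires when raw text is present but not yet stripped."""
    if "raw" in bb and "stripped" not in bb:
        bb["stripped"] = bb["raw"].strip()
        return True
    return False

def lowercaser(bb):
    """Fires when stripped text is present but no final solution yet."""
    if "stripped" in bb and "solution" not in bb:
        bb["solution"] = bb["stripped"].lower()
        return True
    return False

knowledge_sources = [lowercaser, stripper]  # registration order is irrelevant

# Control loop: keep firing any source that can contribute something.
while "solution" not in blackboard:
    if not any(ks(blackboard) for ks in knowledge_sources):
        break  # no specialist can act; problem unsolved as posed
print(blackboard.get("solution"))  # -> 'hello world'
```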
Wallace "Wally" Feurzeig was an American computer scientist who was co-inventor, with Seymour Papert and Cynthia Solomon, of the programming language Logo, and a well-known researcher in artificial intelligence (AI).
The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
Grammar induction is the process in machine learning of learning a formal grammar from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is that branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs.
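For a concrete, if tiny, instance: state-merging algorithms for inferring regular grammars (such as RPNI) typically start from a prefix tree acceptor built from positive example strings, sketched below in Python with made-up sample data.

```python
# Build a prefix tree acceptor (PTA): one state per observed prefix,
# accepting exactly the sample strings. Grammar-induction algorithms
# then generalize by merging compatible states (not shown here).
def prefix_tree_acceptor(samples):
    states = {"": False}  # maps prefix -> is it an accepting state?
    for word in samples:
        for i in range(1, len(word) + 1):
            states.setdefault(word[:i], False)
        states[word] = True
    return states

pta = prefix_tree_acceptor(["ab", "abb", "b"])
for prefix in sorted(pta):
    print(repr(prefix), "accepting" if pta[prefix] else "")
```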
The following outline is provided as an overview of and topical guide to artificial intelligence:
Paradigms of AI Programming: Case Studies in Common Lisp (ISBN 1-55860-191-0) is a well-known programming book by Peter Norvig about artificial intelligence programming using Common Lisp.
Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations".
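In modern terms, a frame resembles a record of slots with default values that a specific instance can override. The restaurant example below is a hypothetical illustration in Python, not taken from the original frames literature.

```python
# A frame bundles default slot values for a stereotyped situation;
# an instance inherits those defaults and overrides only what differs.
restaurant_frame = {
    "is_a": "commercial establishment",
    "serves": "food",
    "payment_expected": True,
}

# Instance frame: defaults flow in, specifics are overridden or added.
sushi_bar = {**restaurant_frame, "serves": "sushi", "seating": "counter"}

print(sushi_bar["payment_expected"])  # True, inherited default
print(sushi_bar["serves"])            # 'sushi', overridden slot
```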
The following outline is provided as an overview of and topical guide to natural-language processing:
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
LISP is a university textbook on the Lisp programming language, written by Patrick Henry Winston and Berthold Klaus Paul Horn. It was first published in 1981, and the third edition of the book was released in 1989. The book is intended to introduce the Lisp programming language and its applications.