The Automated Mathematician (AM) is one of the earliest successful discovery systems. [1] It was created by Douglas Lenat in Lisp, [2] and in 1977 led to Lenat being awarded the IJCAI Computers and Thought Award. [3]
AM worked by generating and modifying short Lisp programs, which were then interpreted as defining various mathematical concepts; [4] for example, a program that tested equality between the lengths of two lists was considered to represent the concept of numerical equality, while a program that produced a list whose length was the product of the lengths of two other lists was interpreted as representing the concept of multiplication. The system had elaborate heuristics for choosing which programs to extend and modify, based on the experiences of working mathematicians in solving mathematical problems.
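For illustration, the following minimal Python sketch (AM itself manipulated Lisp programs; these functions are hypothetical stand-ins) shows how such short programs can be read as mathematical concepts: a test for equal list lengths behaves like numerical equality, and a function that pairs up the elements of two lists produces a list whose length is the product of their lengths.

```python
# A minimal sketch (in Python rather than AM's Lisp) of how short list-manipulating
# programs can be interpreted as mathematical concepts by an external observer.

def same_length(xs, ys):
    """Tests whether two lists have the same length.
    Read as a concept: equality of the numbers len(xs) and len(ys)."""
    return len(xs) == len(ys)

def all_pairs(xs, ys):
    """Builds a list containing one element per (x, y) pair.
    Read as a concept: its length is len(xs) * len(ys), i.e. multiplication."""
    return [(x, y) for x in xs for y in ys]

# The observer interprets the programs numerically:
assert same_length([1, 2, 3], ['a', 'b', 'c'])           # "3 = 3"
assert len(all_pairs([1, 2], ['a', 'b', 'c'])) == 2 * 3  # "2 x 3 = 6"
```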
Lenat claimed that the system was composed of hundreds of data structures called "concepts," together with hundreds of "heuristic rules" and a simple flow of control: "AM repeatedly selects the top task from the agenda and tries to carry it out. This is the whole control structure!" Yet the heuristic rules were not always represented as separate data structures; some had to be intertwined with the control flow logic. Some rules had preconditions that depended on the history, or otherwise could not be represented in the framework of the explicit rules. [5]
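The control regime described in the quotation can be sketched as a simple agenda loop. The sketch below is a hypothetical simplification in Python, not AM's actual Lisp implementation; the task format and the example heuristic are illustrative assumptions.

```python
import heapq

# A hypothetical sketch of an agenda-based control loop in the style Lenat describes:
# repeatedly select the top task from a priority queue and carry it out, letting
# heuristic rules propose follow-up tasks.

def run_agenda(initial_tasks, heuristics, steps=100):
    # heapq is a min-heap, so priorities are negated to pop the highest-priority task first
    agenda = [(-priority, task) for priority, task in initial_tasks]
    heapq.heapify(agenda)
    while agenda and steps > 0:
        steps -= 1
        _, task = heapq.heappop(agenda)      # "select the top task from the agenda"
        for heuristic in heuristics:
            for priority, new_task in heuristic(task):
                heapq.heappush(agenda, (-priority, new_task))

# Illustrative heuristic: turn any "examine C" task into a lower-priority follow-up task.
def propose_examples(task):
    if task.startswith("examine "):
        return [(5, "find examples of " + task[len("examine "):])]
    return []

run_agenda([(10, "examine primes")], [propose_examples])
```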
What's more, the published versions of the rules often involve vague terms that are not defined further, such as "If two expressions are structurally similar, ..." (Rule 218) or "... replace the value obtained by some other (very similar) value..." (Rule 129). [6]
Another source of information is the user, via Rule 2: "If the user has recently referred to X, then boost the priority of any tasks involving X." Thus, it appears quite possible that much of the real discovery work is buried in unexplained procedures. [7]
Lenat claimed that the system had rediscovered both Goldbach's conjecture and the fundamental theorem of arithmetic. Later critics accused Lenat of over-interpreting the output of AM. In his paper "Why AM and Eurisko appear to work", Lenat conceded that any system that generated enough short Lisp programs would generate ones that could be interpreted by an external observer as representing equally sophisticated mathematical concepts. However, he argued that this property was in itself interesting, and that a promising direction for further research would be to look for other languages in which short random strings were likely to be useful. [8]
This intuition was the basis of AM's successor Eurisko, which attempted to generalize the search for mathematical concepts to the search for useful heuristics. [9]
Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted. This is contrasted with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables semantic reasoners to perform human-like reasoning and be less "brittle" when confronted with novel situations.
In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of AI software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
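The division of labour between knowledge base and inference engine can be made concrete with a small sketch. The facts and if-then rules below are invented for illustration; the forward-chaining loop simply applies rules to known facts until nothing new can be deduced.

```python
# A minimal sketch of the expert-system split described above: a knowledge base of
# facts and if-then rules, and an inference engine that applies the rules to the
# known facts until no new facts can be deduced. The rules themselves are hypothetical.

facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),   # IF fever AND cough THEN possible flu
    ({"possible_flu"}, "recommend_rest"),           # IF possible flu THEN recommend rest
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # deduce a new fact
                changed = True
    return facts

print(infer(facts, rules))  # {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```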
Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.
Planner is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).
Douglas Bruce Lenat was an American computer scientist and researcher in artificial intelligence who was the founder and CEO of Cycorp, Inc. in Austin, Texas.
John McCarthy was an American computer scientist and cognitive scientist. He was one of the founders of the discipline of artificial intelligence. He co-authored the document that coined the term "artificial intelligence" (AI), developed the programming language family Lisp, significantly influenced the design of the language ALGOL, popularized time-sharing, and invented garbage collection.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.
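Complementing the forward-chaining sketch given earlier, the following hedged Python sketch illustrates backward chaining over the same kind of rule base: the engine starts from a goal and recursively checks whether it is a known fact or the conclusion of a rule whose conditions can themselves be proved. The facts and rules are again illustrative assumptions.

```python
# A sketch of backward chaining: start from a goal and recursively ask whether it is
# a known fact or the conclusion of a rule whose conditions can themselves be proved.

facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

def prove(goal, facts, rules, seen=frozenset()):
    if goal in facts:
        return True
    if goal in seen:                      # avoid cycles between rules
        return False
    for conditions, conclusion in rules:
        if conclusion == goal and all(
            prove(c, facts, rules, seen | {goal}) for c in conditions
        ):
            return True
    return False

print(prove("recommend_rest", facts, rules))  # True
```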
Eurisko is a discovery system written by Douglas Lenat in RLL-1, a representation language itself written in the Lisp programming language. A sequel to Automated Mathematician, it consists of heuristics, i.e. rules of thumb, including heuristics describing how to use and change its own heuristics. Lenat was frustrated by Automated Mathematician's constraint to a single domain and so developed Eurisko; his frustration with the effort of encoding domain knowledge for Eurisko in turn led to his development of Cyc. Lenat envisioned ultimately coupling the Cyc knowledge base with the Eurisko discovery engine.
Dendral was a project in artificial intelligence (AI) of the 1960s, and the computer software expert system that it produced. Its primary aim was to study hypothesis formation and discovery in science. For that, a specific task in science was chosen: help organic chemists in identifying unknown organic molecules, by analyzing their mass spectra and using knowledge of chemistry. It was done at Stanford University by Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi, along with a team of highly creative research associates and students. It began in 1965 and spans approximately half the history of AI research.
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: theoretical foundations and analysis, the use of computer technology to aid logicians, and the use of concepts from logic for computer applications.
The IJCAI Computers and Thought Award is presented every two years by the International Joint Conference on Artificial Intelligence (IJCAI), recognizing outstanding young scientists in artificial intelligence. It was originally funded with royalties received from the book Computers and Thought, and is currently funded by IJCAI.
In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.
Woodrow Wilson "Woody" Bledsoe was an American mathematician, computer scientist, and prominent educator. He is one of the founders of artificial intelligence (AI), making early contributions in pattern recognition, facial recognition, and automated theorem proving. He continued to make significant contributions to AI throughout his long career. One of his influences was Frank Rosenblatt.
Alan Richard Bundy is a professor at the School of Informatics at the University of Edinburgh, known for his contributions to automated reasoning, especially to proof planning, the use of meta-level reasoning to guide proof search.
A hyper-heuristic is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.
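As a hedged illustration of a selection hyper-heuristic, the sketch below keeps a running score for two toy low-level heuristics and picks between them in proportion to how often each has improved the current solution; the problem (maximising the number of 1s in a bit string) and both heuristics are invented for the example.

```python
import random

# A hedged sketch of a simple selection hyper-heuristic: rather than committing to one
# low-level heuristic, the system keeps a score for each and chooses among them based
# on how often they have improved the solution so far.

def hill_climb(solution):          # low-level heuristic 1: flip one random bit
    i = random.randrange(len(solution))
    return solution[:i] + [1 - solution[i]] + solution[i + 1:]

def restart(solution):             # low-level heuristic 2: random restart
    return [random.randint(0, 1) for _ in solution]

def hyper_heuristic(objective, solution, steps=1000):
    heuristics = [hill_climb, restart]
    scores = [1.0 for _ in heuristics]                 # running reward per heuristic
    best = solution
    for _ in range(steps):
        idx = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[idx](best)
        if objective(candidate) > objective(best):     # reward heuristics that improve
            best = candidate
            scores[idx] += 1.0
    return best

# Example: maximise the number of 1s in a bit string.
print(hyper_heuristic(sum, [0] * 20))
```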
Carl Eddie Hewitt was an American computer scientist who designed the Planner programming language for automated planning and the actor model of concurrent computation, which have been influential in the development of logic, functional and object-oriented programming. Planner was the first programming language based on procedural plans invoked using pattern-directed invocation from assertions and goals. The actor model influenced the development of the Scheme programming language, the π-calculus, and served as an inspiration for several other programming languages.
Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative and often recursive programs from incomplete specifications, such as input/output examples or constraints.
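A hedged sketch of the idea in its simplest form: enumerate candidate programs from a tiny, hand-written hypothesis space and return the first one consistent with every given input/output example. The candidate space below is an illustrative assumption; real inductive programming systems search much richer, often recursive, program spaces.

```python
# A hedged sketch of inductive programming in its simplest form: enumerate candidate
# programs and return the first one consistent with all input/output examples.

CANDIDATES = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
    ("x - 1", lambda x: x - 1),
]

def synthesize(examples):
    for name, program in CANDIDATES:
        if all(program(inp) == out for inp, out in examples):
            return name
    return None

# Incomplete specification: a few input/output pairs.
print(synthesize([(2, 4), (3, 6), (5, 10)]))  # "x * 2"
```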
Computational heuristic intelligence (CHI) refers to specialized programming techniques in computational intelligence. These techniques have the express goal of avoiding computational intractability, such as NP-hard problems, by using human-like techniques. They are best summarized as the use of exemplar-based methods (heuristics) rather than rule-based methods (algorithms); hence the term is distinct from the more conventional computational algorithmic intelligence, or symbolic AI. An example of a CHI technique is the encoding specificity principle of Tulving and Thompson. In general, CHI principles are problem-solving techniques used by people rather than techniques programmed into machines; it is this key distinction that justifies the use of the term in a field already replete with confusing neologisms. Note that the legal systems of all modern human societies employ both heuristics drawn from individual trial records and legislated statutes (rules) as regulatory guides.
A discovery system is an artificial intelligence system that attempts to discover new scientific concepts or laws. The aim of discovery systems is to automate scientific data analysis and the scientific discovery process. Ideally, an artificial intelligence system should be able to search systematically through the space of all possible hypotheses and yield the hypothesis - or set of equally likely hypotheses - that best describes the complex patterns in data.
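As a hedged illustration of such a hypothesis search, the sketch below tries a small space of candidate laws of the form y = a * x^k, scores each against the data by squared error, and returns the best-fitting one; the data and the restricted hypothesis space are assumptions made for the example.

```python
# A hedged sketch of the hypothesis search described above: systematically try a small
# space of candidate laws (here, y = a * x**k for integer k) and keep the one that best
# fits the observed data. Real discovery systems search far larger hypothesis spaces.

def best_law(data, exponents=range(-3, 4)):
    best = None
    for k in exponents:
        # estimate the constant a by averaging y / x**k over the data
        a = sum(y / (x ** k) for x, y in data) / len(data)
        error = sum((y - a * x ** k) ** 2 for x, y in data)
        if best is None or error < best[0]:
            best = (error, k, a)
    _, k, a = best
    return f"y = {a:.2f} * x^{k}"

# Toy data: y proportional to x squared.
observations = [(1, 2.0), (2, 8.0), (3, 18.0), (4, 32.0)]
print(best_law(observations))  # approximately "y = 2.00 * x^2"
```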