In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
By the everyday usage definition of the phrase, all computer systems are reasoning systems in that they all automate some type of logic or decision. In typical use in the information technology field, however, the phrase is usually reserved for systems that perform more complex kinds of reasoning: not, for example, systems that carry out fairly straightforward reasoning such as calculating a sales tax or customer discount, but systems that make logical inferences about a medical diagnosis or a mathematical theorem. Reasoning systems operate in two modes: interactive and batch processing. Interactive systems interact with the user to ask clarifying questions or otherwise allow the user to guide the reasoning process. Batch systems take in all the available information at once and generate the best answer possible without user feedback or guidance.[1]
Reasoning systems have a wide field of application that includes scheduling, business rule processing, problem solving, complex event processing, intrusion detection, predictive analytics, robotics, computer vision, and natural language processing.
The first reasoning systems were theorem provers: systems that represent axioms and statements in first-order logic and then use rules of logic, such as modus ponens, to infer new statements. Another early class of reasoning system was the general problem solver. These were systems such as the General Problem Solver designed by Newell and Simon. General problem solvers attempted to provide a generic planning engine that could represent and solve structured problems. They worked by decomposing problems into smaller, more manageable sub-problems, solving each sub-problem, and assembling the partial answers into one final answer. Another example of a general problem solver was the SOAR family of systems.
In practice, these theorem provers and general problem solvers were seldom useful for practical applications and required specialized users with knowledge of logic. The first practical application of automated reasoning was expert systems. Expert systems focused on much more narrowly defined domains than general problem solving, such as medical diagnosis or analyzing faults in an aircraft. Expert systems also focused on more limited implementations of logic: rather than attempting to implement the full range of logical expressions, they typically focused on modus ponens implemented via IF-THEN rules. Focusing on a specific domain and allowing only a restricted subset of logic improved the performance of such systems so that they were practical for use in the real world and not merely as research demonstrations, as most previous automated reasoning systems had been. The engines used for automated reasoning in expert systems were typically called inference engines; those used for more general logical inference are typically called theorem provers.[2]
With the rise in popularity of expert systems, many new types of automated reasoning were applied to diverse problems in government and industry. Some, such as case-based reasoning, were offshoots of expert systems research. Others, such as constraint satisfaction algorithms, were also influenced by fields such as decision technology and linear programming. A completely different approach, based not on symbolic reasoning but on a connectionist model, has also been extremely productive. This latter type of automated reasoning is especially well suited to pattern matching and signal detection problems such as text searching and face matching.
The term reasoning system can be applied to almost any kind of sophisticated decision support system, as illustrated by the specific areas described below. However, the most common use of the term implies the computer representation of logic. Various implementations demonstrate significant variation in terms of systems of logic and formality. Most reasoning systems implement variations of propositional and symbolic (predicate) logic. These variations may be mathematically precise representations of formal logic systems (e.g., FOL), or extended and hybrid versions of those systems (e.g., Courteous logic[3]). Reasoning systems may explicitly implement additional logic types (e.g., modal, deontic, temporal logics). However, many reasoning systems implement imprecise and semi-formal approximations to recognised logic systems. These systems typically support a variety of procedural and semi-declarative techniques in order to model different reasoning strategies. They emphasise pragmatism over formality and may depend on custom extensions and attachments in order to solve real-world problems.
Many reasoning systems employ deductive reasoning to draw inferences from available knowledge. These inference engines support forward reasoning or backward reasoning to infer conclusions via modus ponens. The recursive reasoning methods they employ are termed 'forward chaining' and 'backward chaining', respectively. Although reasoning systems widely support deductive inference, some systems employ abductive, inductive, defeasible and other types of reasoning. Heuristics may also be employed to determine acceptable solutions to intractable problems.
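As an informal sketch of goal-driven ('backward chaining') deduction, the following Python fragment uses made-up facts and rules (not any particular engine's format): to prove a goal, it looks for a rule whose conclusion matches the goal and recursively proves that rule's premises, bottoming out in the known facts.

```python
# Minimal backward-chaining sketch over propositional facts and rules.
# Facts and rule contents are illustrative, not from any real system.

facts = {"fever", "cough"}

# Each rule: (premises, conclusion) -- the conclusion holds if all premises hold.
rules = [
    ({"fever", "cough"}, "flu"),
    ({"flu"}, "prescribe_rest"),
]

def prove(goal, facts, rules, seen=frozenset()):
    """Return True if `goal` follows from the facts via the rules (modus ponens)."""
    if goal in facts:
        return True
    if goal in seen:                      # guard against circular rules
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
            prove(p, facts, rules, seen | {goal}) for p in premises
        ):
            return True
    return False

print(prove("prescribe_rest", facts, rules))  # True: fever & cough -> flu -> rest
```

Forward chaining works in the opposite direction, repeatedly applying rules to the fact base until no new conclusions appear (see the match-fire sketch under rule engines below).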
Reasoning systems may employ the closed world assumption (CWA) or open world assumption (OWA). The OWA is often associated with ontological knowledge representation and the Semantic Web. Different systems exhibit a variety of approaches to negation. As well as logical or bitwise complement, systems may support existential forms of strong and weak negation including negation-as-failure and 'inflationary' negation (negation of non-ground atoms). Different reasoning systems may support monotonic or non-monotonic reasoning, stratification and other logical techniques.
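As a rough illustration of negation-as-failure under the closed-world assumption, and of the non-monotonic behaviour it produces, here is a small Python sketch with made-up predicates:

```python
# Sketch of negation-as-failure under the closed-world assumption.
# The predicates (bird, penguin, flies) are illustrative only.

def flies(animal, kb):
    # Default rule: birds fly unless they are known (in the KB) to be penguins.
    # "not in kb" is negation-as-failure: absence of evidence counts as falsity.
    return ("bird", animal) in kb and ("penguin", animal) not in kb

kb = {("bird", "tweety")}
print(flies("tweety", kb))       # True under the closed-world assumption

kb.add(("penguin", "tweety"))    # new knowledge arrives...
print(flies("tweety", kb))       # ...and retracts the earlier conclusion (non-monotonic)
```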
Many reasoning systems provide capabilities for reasoning under uncertainty. This is important when building situated reasoning agents which must deal with uncertain representations of the world. There are several common approaches to handling uncertainty. These include the use of certainty factors, probabilistic methods such as Bayesian inference or Dempster–Shafer theory, multi-valued ('fuzzy') logic and various connectionist approaches. [4]
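As an illustration of one common probabilistic approach, the following sketch applies Bayes' rule to a diagnostic-test scenario; the prevalence, sensitivity and false-positive figures are assumed purely for illustration.

```python
# Bayesian updating sketch with assumed (illustrative) numbers.

prior = 0.01           # P(condition)             -- assumed prevalence
sensitivity = 0.95     # P(positive | condition)  -- assumed
false_positive = 0.05  # P(positive | no condition) -- assumed

# Bayes' rule: P(condition | positive) =
#   P(positive | condition) * P(condition) / P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # ~0.161: a positive test raises, but does not settle, the question
```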
This section provides a non-exhaustive and informal categorisation of common types of reasoning system. These categories are not absolute. They overlap to a significant degree and share a number of techniques, methods and algorithms.
Constraint solvers solve constraint satisfaction problems (CSPs). They support constraint programming. A constraint is a condition that must be met by any valid solution to a problem. Constraints are defined declaratively and applied to variables within given domains. Constraint solvers use search, backtracking and constraint propagation techniques to find solutions and to determine optimal ones. They may employ forms of linear and nonlinear programming. They are often used to perform optimization within highly combinatorial problem spaces. For example, they may be used to calculate optimal scheduling, design efficient integrated circuits or maximise productivity in a manufacturing process.[5]
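A minimal illustration of backtracking search over declaratively stated constraints is the toy map-colouring CSP below; the variables, domains and constraints are made up, and real solvers add propagation and search heuristics on top of this basic scheme.

```python
# Tiny backtracking constraint solver: colour a map so that neighbours differ.

neighbours = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
colors = ["red", "green", "blue"]

def solve(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(neighbours):          # every variable assigned
        return assignment
    var = next(v for v in neighbours if v not in assignment)
    for color in colors:                            # try each value in the domain
        if all(assignment.get(n) != color for n in neighbours[var]):
            result = solve({**assignment, var: color})
            if result:
                return result
    return None                                     # dead end: backtrack

print(solve())  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```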
Theorem provers use automated reasoning techniques to determine proofs of mathematical theorems. They may also be used to verify existing proofs. In addition to academic use, typical applications of theorem provers include verification of the correctness of integrated circuits, software programs, engineering designs, etc.
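The toy sketch below conveys the flavour of mechanically checking that a propositional statement is valid, by exhaustive truth-table enumeration; real theorem provers use far more sophisticated calculi (resolution, tableaux, etc.), so this is only an illustration of the idea.

```python
# Toy propositional "prover": check validity by enumerating truth assignments.
from itertools import product

def is_valid(formula, variables):
    """True if the formula holds under every assignment of truth values."""
    return all(
        formula(**dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# Modus ponens as a formula: ((p -> q) and p) -> q, where p -> q is (not p or q).
modus_ponens = lambda p, q: not ((not p or q) and p) or q
print(is_valid(modus_ponens, ["p", "q"]))   # True: the inference pattern is valid
```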
Logic programs (LPs) are software programs written using programming languages whose primitives and expressions provide direct representations of constructs drawn from mathematical logic. An example of a general-purpose logic programming language is Prolog. LPs represent the direct application of logic programming to solve problems. Logic programming is characterised by highly declarative approaches based on formal logic, and has wide application across many disciplines.
Rule engines represent conditional logic as discrete rules. Rule sets can be managed and applied separately from other functionality. They have wide applicability across many domains. Many rule engines implement reasoning capabilities. A common approach is to implement production systems to support forward or backward chaining. Each rule ('production') binds a conjunction of predicate clauses to a list of executable actions.
At run-time, the rule engine matches productions against facts and executes ('fires') the associated action list for each match. If those actions remove or modify any facts, or assert new facts, the engine immediately re-computes the set of matches. Rule engines are widely used to model and apply business rules, to control decision-making in automated processes and to enforce business and technical policies.
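A minimal sketch of this match-fire cycle is shown below; the facts and business rules are hypothetical, and the "actions" simply assert new facts rather than running arbitrary code as a production-quality engine would.

```python
# Minimal match-fire production loop: each rule pairs a condition over the
# fact set with facts it asserts when it fires. Rules are illustrative.

facts = {"order_total_over_100", "repeat_customer"}

rules = [
    # (name, condition over facts, facts asserted when the rule fires)
    ("loyalty_discount", {"repeat_customer"}, {"apply_5_percent_discount"}),
    ("free_shipping", {"order_total_over_100", "apply_5_percent_discount"},
     {"offer_free_shipping"}),
]

changed = True
while changed:                        # keep matching until no rule adds anything new
    changed = False
    for name, condition, assertions in rules:
        if condition <= facts and not assertions <= facts:
            facts |= assertions       # "fire" the rule
            changed = True

print(sorted(facts))
# ['apply_5_percent_discount', 'offer_free_shipping',
#  'order_total_over_100', 'repeat_customer']
```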
Deductive classifiers arose slightly later than rule-based systems and were a component of a new type of artificial intelligence knowledge representation tool known as frame languages. A frame language describes the problem domain as a set of classes, subclasses, and relations among the classes. It is similar to the object-oriented model. Unlike object-oriented models, however, frame languages have a formal semantics based on first-order logic.
They utilize this semantics to provide input to the deductive classifier. The classifier in turn can analyze a given model (known as an ontology) and determine if the various relations described in the model are consistent. If the ontology is not consistent the classifier will highlight the declarations that are inconsistent. If the ontology is consistent the classifier can then do further reasoning and draw additional conclusions about the relations of the objects in the ontology.
For example, it may determine that an object is actually a subclass or instance of additional classes beyond those described by the user. Classifiers are an important technology in analyzing the ontologies used to describe models in the Semantic Web.[6][7]
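The toy Python sketch below illustrates the basic idea, inferring implied subclass relations by transitive closure and flagging a simple inconsistency; the class names are invented, and real classifiers for description-logic languages such as OWL are far more expressive.

```python
# Toy classifier sketch: infer implied subclass relations and flag a simple
# inconsistency (a class subsumed by two classes declared disjoint).

subclass_of = {               # directly declared subclass relations
    "Penguin": {"Bird"},
    "Bird": {"Animal"},
    "Robot": {"Machine"},
}
disjoint = {("Animal", "Machine")}

def ancestors(cls):
    """All classes `cls` is (directly or indirectly) a subclass of."""
    found = set()
    frontier = set(subclass_of.get(cls, ()))
    while frontier:
        parent = frontier.pop()
        if parent not in found:
            found.add(parent)
            frontier |= subclass_of.get(parent, set())
    return found

print(ancestors("Penguin"))   # {'Bird', 'Animal'}: an inferred, not declared, relation

def consistent(cls):
    anc = ancestors(cls) | {cls}
    return not any(a in anc and b in anc for a, b in disjoint)

subclass_of["Cyborg"] = {"Animal", "Machine"}
print(consistent("Cyborg"))   # False: Cyborg would be both Animal and Machine
```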
Machine learning systems evolve their behavior over time based on experience. This may involve reasoning over observed events or example data provided for training purposes. For example, machine learning systems may use inductive reasoning to generate hypotheses for observed facts. Learning systems search for generalised rules or functions that yield results in line with observations and then use these generalisations to control future behavior.
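As a small illustration of inductive generalisation (in the spirit of the classic Find-S algorithm, with made-up attributes and examples), the sketch below generalises a conjunctive hypothesis just enough to cover all positive observations:

```python
# Inductive generalisation sketch: generalise a conjunctive hypothesis so that
# it covers every positive example. Attribute names and data are illustrative.

positives = [
    {"sky": "sunny", "temp": "warm", "wind": "strong"},
    {"sky": "sunny", "temp": "hot",  "wind": "strong"},
]

hypothesis = dict(positives[0])          # start from the first positive example
for example in positives[1:]:
    for attr, value in example.items():
        if hypothesis.get(attr) != value:
            hypothesis[attr] = "?"       # "?": any value is acceptable here

print(hypothesis)  # {'sky': 'sunny', 'temp': '?', 'wind': 'strong'}
```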
Case-based reasoning (CBR) systems provide solutions to problems by analysing similarities to other problems for which known solutions already exist. Case-based reasoning uses the top (superficial) levels of similarity, namely the object, feature, and value criteria. This distinguishes case-based reasoning from analogical reasoning, which uses only the "deep" similarity criterion, i.e. relationships or even relationships of relationships, and need not find similarity on the shallower levels. This difference makes case-based reasoning applicable only among cases of the same domain, because similar objects, features, and/or values must belong to the same domain, while the "deep" similarity criterion of relationships makes analogical reasoning applicable across domains where only the relationships are similar between the cases. CBR systems are commonly used in customer/technical support and call centre scenarios and have applications in industrial manufacture, agriculture, medicine, law and many other areas.
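A minimal sketch of the retrieval step is shown below, scoring stored cases by how many surface features they share with the new problem and reusing the best match's solution; the case base is invented, and a full CBR system would also adapt and retain cases.

```python
# Minimal case-based retrieval: pick the stored case whose surface features
# best match the new problem and reuse its solution. Cases are invented.

case_base = [
    ({"symptom": "no_power", "device": "laptop"},  "check power adapter"),
    ({"symptom": "overheat", "device": "laptop"},  "clean cooling fan"),
    ({"symptom": "no_power", "device": "printer"}, "check fuse"),
]

def retrieve(problem):
    def similarity(case_features):
        # Count feature/value pairs shared with the new problem.
        return sum(problem.get(k) == v for k, v in case_features.items())
    return max(case_base, key=lambda case: similarity(case[0]))

problem = {"symptom": "no_power", "device": "laptop", "age_years": 3}
features, solution = retrieve(problem)
print(solution)   # "check power adapter" -- the most similar stored case
```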
A procedural reasoning system (PRS) uses reasoning techniques to select plans from a procedural knowledge base. Each plan represents a course of action for achievement of a given goal. The PRS implements a belief-desire-intention model by reasoning over facts ('beliefs') to select appropriate plans ('intentions') for given goals ('desires'). Typical applications of PRS include management, monitoring and fault detection systems.
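A toy sketch of this style of plan selection appears below: given a goal and a set of beliefs, the agent adopts the first plan whose goal matches and whose precondition the beliefs satisfy. The plan library is invented, and real PRS implementations manage intentions, interleaving and failure recovery as well.

```python
# Toy BDI-style plan selection: choose a plan for the current goal whose
# precondition is satisfied by the agent's beliefs. The plan library is invented.

plans = [
    {"goal": "pump_running", "precondition": {"power_on"},
     "body": ["open_valve", "start_pump"]},
    {"goal": "pump_running", "precondition": set(),
     "body": ["switch_on_power", "open_valve", "start_pump"]},
]

def select_plan(goal, beliefs):
    for plan in plans:                              # plans are tried in order
        if plan["goal"] == goal and plan["precondition"] <= beliefs:
            return plan["body"]                     # adopted as an intention
    return None

print(select_plan("pump_running", beliefs={"power_on"}))   # shorter plan applies
print(select_plan("pump_running", beliefs=set()))          # falls back to the longer plan
```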
In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.
Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses, such as H :- B1, ..., Bn., which are read declaratively as logical implications: H is true if B1 and ... and Bn are all true.
Planner is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).
Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, who contradistinguished abduction from induction.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
A non-monotonic logic is a formal logic whose conclusion relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences, i.e., a kind of inference in which reasoners draw tentative conclusions, enabling reasoners to retract their conclusion(s) based on further evidence. Most studied formal logics have a monotonic entailment relation, meaning that adding a formula to the hypotheses never produces a pruning of its set of conclusions. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. Monotonic logics cannot handle various reasoning tasks such as reasoning by default, abductive reasoning, some important approaches to reasoning about knowledge, and similarly, belief revision.
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: the theoretical foundations and analysis of computation, the use of computer technology to aid logicians, and the use of concepts from logic in computer applications.
A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge-based systems is an attempt to represent knowledge explicitly, together with a reasoning system that allows them to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine.
Loom is a knowledge representation language developed by researchers in the artificial intelligence research group at the University of Southern California's Information Sciences Institute. The leader of the Loom project and primary architect for Loom was Robert MacGregor. The research was primarily sponsored by the Defense Advanced Research Projects Agency (DARPA).
In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.
F-logic is a knowledge representation and ontology language. F-logic combines the advantages of conceptual modeling with object-oriented, frame-based languages and offers a declarative, compact and simple syntax, as well as the well-defined semantics of a logic-based language.
Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics, and to a lesser extent computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify correct and incorrect inferences. Logicians study the criteria for the evaluation of arguments.
A deductive language is a computer programming language in which the program is a collection of predicates ('facts') and rules that connect them. Such a language is used to create knowledge-based systems or expert systems, which can deduce answers to problem sets by applying the rules to the facts they have been given. An example of a deductive language is Prolog, or its database-query cousin, Datalog.
Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including non-axiomatic reasoning systems, and probabilistic logic networks.
In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain.
A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, sub-classes, properties, and restrictions on allowable values. The classifier determines whether the various declarations are logically consistent and, if not, will highlight the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent, the classifier can then assert additional information based on the input; for example, it can add information about existing classes, create additional classes, etc. This differs from traditional inference engines that fire on IF-THEN conditions in rules. Classifiers are also similar to theorem provers in that both their input and output are expressed in first-order logic. Classifiers originated with KL-ONE frame languages. They are increasingly significant now that they form part of the enabling technology of the Semantic Web. Modern classifiers leverage the Web Ontology Language. The models they analyze and generate are called ontologies.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.