A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. Knowledge-based systems were the focus of early artificial intelligence researchers in the 1980s. The term can refer to a broad range of systems. However, all knowledge-based systems have two defining components: an attempt to represent knowledge explicitly, called a knowledge base, and a reasoning system that allows them to derive new knowledge, known as an inference engine.
The knowledge base contains domain-specific facts and rules [1] about a problem domain (rather than knowledge implicitly embedded in procedural code, as in a conventional computer program). In addition, the knowledge may be structured by means of a subsumption ontology, frames, conceptual graph, or logical assertions. [2]
The inference engine uses general-purpose reasoning methods to infer new knowledge and to solve problems in the problem domain. Most commonly, it employs forward chaining or backward chaining. Other approaches include the use of automated theorem proving, logic programming, blackboard systems, and term rewriting systems such as Constraint Handling Rules (CHR). These more formal approaches are covered in detail in the article on knowledge representation and reasoning.
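As a concrete illustration of these two components, the following minimal sketch (in Python, with hypothetical facts and rules not drawn from any real system) pairs a knowledge base of explicit assertions with a forward-chaining inference engine that applies rules until no new facts can be derived:

```python
# A minimal knowledge-based system: a knowledge base of explicit facts
# plus if-then rules, and a forward-chaining inference engine.
# All facts and rules here are illustrative placeholders.

facts = {"has_fever", "has_rash"}

# Each rule is a pair: (set of premises, conclusion).
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "order_blood_test"),
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose premises all hold until
    no rule can add a new fact."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'has_fever', 'has_rash', 'suspect_measles', 'order_blood_test'}
```

Because the rules live in data rather than in procedural code, new knowledge can be added without modifying the engine, which is the defining property of the architecture.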
The term "knowledge-based system" was often used interchangeably with "expert system", possibly because almost all of the earliest knowledge-based systems were designed for expert tasks. However, these terms tell us about different aspects of a system:
Today, virtually all expert systems are knowledge-based, whereas knowledge-based system architecture is used in a wide range of types of system designed for a variety of tasks.
The first knowledge-based systems were primarily rule-based expert systems. These represented facts about the world as simple assertions in a flat database and used domain-specific rules to reason about these assertions and to add to them. One of the most famous of these early systems was Mycin, a program for medical diagnosis.
Representing knowledge explicitly via rules had several advantages. Domain experts could often define and maintain the rules themselves rather than working through a programmer; the system could record how it reached a conclusion and use that record to explain its results to users; and, because the knowledge was decoupled from the inference engine, the system could draw conclusions that its developers had not anticipated.
Later architectures for knowledge-based reasoning, such as the BB1 blackboard architecture (a blackboard system), [4] allowed the reasoning process itself to be affected by new inferences, providing meta-level reasoning. BB1 allowed the problem-solving process itself to be monitored. Different kinds of problem-solving (e.g., top-down, bottom-up, and opportunistic) could be selectively mixed based on the current state of problem solving. In essence, the problem-solver was used both to solve the domain-level problem and to solve its own control problem, which could depend on the former.
Other examples of knowledge-based system architectures supporting meta-level reasoning are MRS [5] and SOAR.
In the 1980s and 1990s, in addition to expert systems, other applications of knowledge-based systems included real-time process control, [6] intelligent tutoring systems, [7] and problem-solvers for specific domains such as protein structure analysis, [8] construction-site layout, [9] and computer system fault diagnosis. [10]
As knowledge-based systems became more complex, the techniques used to represent the knowledge base became more sophisticated and included logic, term-rewriting systems, conceptual graphs, and frames.
Frames, for example, are a way of representing world knowledge using techniques analogous to object-oriented programming: classes and subclasses, hierarchies and relations between classes, and behavior attached to objects. With the knowledge base more structured, reasoning could occur not only through independent rules and logical inference but also through interactions within the knowledge base itself. For example, procedures stored as daemons on objects could fire when an object's slots changed and could replicate the chaining behavior of rules. [11]
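To make the analogy concrete, here is a rough Python sketch of a frame with slots and an attached daemon. The Frame class and the water example are invented for illustration and do not follow the syntax of any actual frame language.

```python
# A simplified frame with slots and "daemons": procedures that fire
# when a slot value is set, mimicking the chaining behavior of rules.
# Purely illustrative; real frame languages differ in detail.

class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.daemons = {}   # slot name -> list of procedures

    def attach_daemon(self, slot, proc):
        self.daemons.setdefault(slot, []).append(proc)

    def set(self, slot, value):
        self.slots[slot] = value
        for proc in self.daemons.get(slot, []):
            proc(self, value)   # daemon fires on update

# Hypothetical example: setting 'temperature' triggers a daemon that
# derives a 'state' slot, much as a rule would.
def classify_state(frame, temp):
    frame.set("state", "boiling" if temp >= 100 else "liquid")

water = Frame("water")
water.attach_daemon("temperature", classify_state)
water.set("temperature", 120)
print(water.slots)   # {'temperature': 120, 'state': 'boiling'}
```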
Another advancement in the 1990s was the development of special-purpose automated reasoning systems called classifiers. Rather than statically declaring the subsumption relations in a knowledge base, a classifier allows the developer to simply declare facts about the world and deduces the relations itself. In this way a classifier can also play the role of an inference engine. [12]
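The idea can be illustrated with a toy structural-subsumption check (the class definitions below are invented): classes are described by the properties they require, and one class is deduced to subsume another whenever its requirements are a subset of the other's, so the subclass hierarchy never has to be declared.

```python
# A toy classifier: classes are defined by sets of required properties,
# and subsumption is deduced rather than declared. This is a crude
# stand-in for the structural subsumption used by real classifiers.

definitions = {
    "animal":  {"alive"},
    "bird":    {"alive", "has_feathers"},
    "penguin": {"alive", "has_feathers", "flightless"},
}

def subsumes(general, specific):
    """A subsumes B if every property required by A is required by B."""
    return definitions[general] <= definitions[specific]

# Deduce the subclass hierarchy from the definitions alone.
for a in definitions:
    for b in definitions:
        if a != b and subsumes(a, b):
            print(f"{b} is a kind of {a}")
# Prints that bird and penguin are kinds of animal,
# and that penguin is a kind of bird.
```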
A more recent advancement of knowledge-based systems has been the adoption of these technologies, especially a kind of logic called description logic, for systems that use the internet. The internet often has to deal with complex, unstructured data that cannot be relied on to fit a specific data model. The technology of knowledge-based systems, and especially the ability to classify objects on demand, is well suited to such systems. The model for these kinds of knowledge-based internet systems is known as the Semantic Web. [13]
Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works.
In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than conventional procedural code. Expert systems were among the first truly successful forms of AI software. They were first created in the 1970s and proliferated in the 1980s, when they were widely regarded as the future of AI, before the advent of successful artificial neural networks. An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and which can include explanation and debugging abilities.
Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals and works backward to determine what facts must be asserted so that the goals can be achieved.
Forward chaining is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.
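For contrast with the forward-chaining sketch above, a minimal backward chainer can be written over the same hypothetical rule format: it starts from a goal and recursively checks whether the goal is either a known fact or the conclusion of a rule whose premises can themselves be proved.

```python
# Minimal backward chaining over the same (premises, conclusion) rule
# format used in the forward-chaining sketch. Illustrative only.

facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "order_blood_test"),
]

def prove(goal, facts, rules, seen=frozenset()):
    """True if the goal is a known fact, or the conclusion of a rule
    whose premises can all be proved. 'seen' guards against cycles."""
    if goal in facts:
        return True
    if goal in seen:
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
            prove(p, facts, rules, seen | {goal}) for p in premises
        ):
            return True
    return False

print(prove("order_blood_test", facts, rules))   # True
```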
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: the theoretical foundations and analysis of computation, the use of computer technology to aid logicians, and the use of concepts from logic in computer applications.
A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
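The control structure can be sketched in a few lines: independent knowledge sources inspect a shared blackboard and contribute a partial solution whenever their preconditions match its current state. The two knowledge sources and the arithmetic task below are invented purely to show the loop.

```python
# A bare-bones blackboard loop: independent knowledge sources watch a
# shared blackboard and contribute partial solutions when their
# preconditions match. All sources and data are hypothetical.

blackboard = {"raw_text": "12 + 30"}

def tokenizer(bb):
    if "raw_text" in bb and "tokens" not in bb:
        bb["tokens"] = bb["raw_text"].split()
        return True
    return False

def evaluator(bb):
    if "tokens" in bb and "result" not in bb:
        a, op, b = bb["tokens"]
        if op == "+":
            bb["result"] = int(a) + int(b)
            return True
    return False

knowledge_sources = [tokenizer, evaluator]

# Control loop: keep cycling until no source can contribute.
progress = True
while progress:
    progress = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard["result"])   # 42
```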
In artificial intelligence, model-based reasoning refers to an inference method used in expert systems based on a model of the physical world. With this approach, the main focus of application development is developing the model. Then at run time, an "engine" combines this model knowledge with observed data to derive conclusions such as a diagnosis or a prediction.
A "production system" is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior, but it also includes the mechanism necessary to follow those rules as the system responds to states of the world. Those rules, termed productions, are a basic representation found useful in automated planning, expert systems and action selection.
Semantic integration is the process of interrelating information from diverse sources, for example calendars and to-do lists, email archives, presence information, documents of all sorts, contacts, search results, and advertising and marketing relevance derived from them. In this regard, semantics focuses on the organization of and action upon information by acting as an intermediary between heterogeneous data sources, which may conflict not only in structure but also in context or value.
A deductive language is a computer programming language in which the program is a collection of predicates ('facts') and rules that connect them. Such a language is used to create knowledge-based systems or expert systems, which can deduce answers to problem sets by applying the rules to the facts they have been given. An example of a deductive language is Prolog, or its database-query cousin, Datalog.
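In the spirit of Datalog, the following sketch (written in Python rather than an actual deductive language, with invented family data) encodes parent facts and the recursive rule that an ancestor's ancestor is an ancestor, then evaluates it bottom-up to a fixpoint.

```python
# Datalog-flavoured deduction in Python: parent/2 facts plus the rules
#   ancestor(X,Y) :- parent(X,Y).
#   ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
# evaluated bottom-up. The family data is invented for illustration.

parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

ancestor = set(parent)          # ancestor(X,Y) :- parent(X,Y).
changed = True
while changed:                  # iterate until a fixpoint is reached
    changed = False
    for (x, y) in parent:
        for (y2, z) in set(ancestor):
            if y == y2 and (x, z) not in ancestor:
                ancestor.add((x, z))   # the recursive rule fires
                changed = True

print(sorted(ancestor))
# alice is an ancestor of bob, carol, and dave; bob of carol and dave; etc.
```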
Norms can be considered from different perspectives in artificial intelligence in the effort to create computers and computer software that are capable of intelligent behaviour.
Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including non-axiomatic reasoning systems, and probabilistic logic networks.
In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain.
In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law. Legal expert systems employ a rule base or knowledge base and an inference engine to accumulate, reference and produce expert knowledge on specific subjects within the legal domain.
A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, subclasses, properties, and restrictions on allowable values. The classifier determines whether the various declarations are logically consistent and, if not, highlights the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent, the classifier can then assert additional information based on the input. For example, it can add information about existing classes, create additional classes, etc. This differs from traditional inference engines that trigger on the IF-THEN conditions of rules. Classifiers are also similar to theorem provers in that both take input and produce output expressed in first-order logic. Classifiers originated with KL-ONE frame languages. They are increasingly significant now that they form part of the enabling technology of the Semantic Web. Modern classifiers leverage the Web Ontology Language. The models they analyze and generate are called ontologies.
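A toy rendering of the consistency-checking side, with invented declarations: each class restricts a property to a set of allowed values, a subclass inherits its parents' restrictions by intersection, and a class whose combined restrictions leave no allowed value is flagged as inconsistent.

```python
# A toy consistency check in the spirit of a deductive classifier:
# each class restricts a property to a set of allowed values; a class
# that inherits incompatible restrictions is flagged. All declarations
# are invented for illustration.

restrictions = {
    "vehicle":    {"wheels": {2, 3, 4}},
    "motorcycle": {"wheels": {2}},
    "car":        {"wheels": {4}},
}
parents = {"motorcycle": ["vehicle"], "car": ["vehicle"],
           "impossible": ["motorcycle", "car"]}   # inherits both 2 and 4

def effective(cls):
    """Intersect a class's own restrictions with everything inherited."""
    combined = dict(restrictions.get(cls, {}))
    for p in parents.get(cls, []):
        for prop, allowed in effective(p).items():
            combined[prop] = combined.get(prop, allowed) & allowed
    return combined

for cls in ["motorcycle", "car", "impossible"]:
    bad = [p for p, allowed in effective(cls).items() if not allowed]
    if bad:
        print(f"{cls} is inconsistent on: {bad}")   # 'impossible' on wheels
    else:
        print(f"{cls} is consistent")
```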
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.