Reason maintenance

Reason maintenance [1][2] is a knowledge representation approach to the efficient handling of inferred information that is explicitly stored. Reason maintenance distinguishes between base facts, which can be defeated, and derived facts. As such it differs from belief revision, which, in its basic form, assumes that all facts are equally important. Reason maintenance was originally developed as a technique for implementing problem solvers. [2] It encompasses a variety of techniques that share a common architecture: [3] two components, a reasoner and a reason maintenance system, communicate with each other via an interface. The reasoner uses the reason maintenance system to record its inferences and the justifications of (the "reasons" for) those inferences. The reasoner also informs the reason maintenance system which base facts (assumptions) are currently valid. The reason maintenance system uses this information to compute the truth values of the stored derived facts and to restore consistency if an inconsistency is derived.
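
This split can be made concrete with a small sketch. The following is a minimal illustration only; the class and method names (RMS, justify, set_assumptions, believed) are assumptions made for this example, not a standard interface:

```python
# A minimal sketch of the reasoner / reason-maintenance-system split.
# All names (RMS, justify, set_assumptions, believed) are illustrative,
# not a standard API.

class RMS:
    """Records justifications and derives which facts are currently believed."""

    def __init__(self):
        self.justifications = {}  # fact -> list of antecedent sets ("reasons")
        self.assumptions = set()  # currently valid base facts, set by the reasoner

    def justify(self, fact, antecedents):
        """The reasoner records an inference: antecedents jointly support fact."""
        self.justifications.setdefault(fact, []).append(frozenset(antecedents))

    def set_assumptions(self, assumptions):
        """The reasoner declares which base facts currently hold."""
        self.assumptions = set(assumptions)

    def believed(self):
        """A fact is believed if it is an assumption or has at least one
        justification whose antecedents are all believed (fixed point)."""
        held = set(self.assumptions)
        changed = True
        while changed:
            changed = False
            for fact, reasons in self.justifications.items():
                if fact not in held and any(r <= held for r in reasons):
                    held.add(fact)
                    changed = True
        return held

rms = RMS()
rms.justify("wet_grass", ["rain"])
rms.set_assumptions(["rain"])
assert "wet_grass" in rms.believed()
rms.set_assumptions([])                   # retract the base fact...
assert "wet_grass" not in rms.believed()  # ...and the derived fact lapses
```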

Truth maintenance systems

A truth maintenance system, or TMS, is a knowledge representation method that represents both beliefs and their dependencies, together with an algorithm, the "truth maintenance algorithm", that manipulates and maintains those dependencies. The name truth maintenance refers to the ability of these systems to restore consistency.

A truth maintenance system maintains consistency between previously believed knowledge and currently believed knowledge in the knowledge base (KB) through revision. If the currently believed statements contradict knowledge in the KB, then the KB is updated with the new knowledge. It may happen that retracted data come to be believed again, so that the earlier knowledge is once more required in the KB; if that knowledge was retained, the same inferences need not be retraced. The use of a TMS avoids such retracing: it keeps track of contradictory data with the help of a dependency record. This record reflects the retractions and additions, which keeps the inference engine (IE) aware of its current belief set.

Each statement having at least one valid justification is made a part of the current belief set. When a contradiction is found, the statement(s) responsible for the contradiction are identified and the records are appropriately updated. This process is called dependency-directed backtracking.
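
A rough sketch of this contradiction handling, reusing the illustrative RMS above (the culprit-selection policy shown, dropping an arbitrary supporting assumption, is a simplification for this example):

```python
# Sketch of dependency-directed backtracking: when a contradiction node is
# believed, trace its support back to the underlying assumptions and retract
# one of them, rather than blindly re-enumerating assumption sets.
# Illustrative only; builds on the RMS sketch above.

def supporting_assumptions(rms, fact, seen=None):
    """Collect assumptions reachable through currently valid reasons for fact."""
    seen = set() if seen is None else seen
    if fact in seen:
        return set()
    seen.add(fact)
    if fact in rms.assumptions:
        return {fact}
    held = rms.believed()
    support = set()
    for reason in rms.justifications.get(fact, []):
        if reason <= held:  # only trace justifications that currently hold
            for antecedent in reason:
                support |= supporting_assumptions(rms, antecedent, seen)
    return support

def resolve_contradiction(rms, contradiction="FALSE"):
    """Retract culprit assumptions until the contradiction is no longer believed.
    Choosing which culprit to drop is a policy decision; here it is arbitrary."""
    while contradiction in rms.believed():
        culprits = supporting_assumptions(rms, contradiction)
        if not culprits:
            break
        rms.assumptions.discard(culprits.pop())
```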

The TMS algorithm maintains the records in the form of a dependency network. Each node in the network is an entry in the KB (a premise, an antecedent, an inference rule, and so on). Each arc of the network represents an inference step through which the node was derived.

A premise is a fundamental belief that is assumed to be true. Premises need no justifications. The set of premises is the basis from which justifications for all other nodes are derived.

There are two types of justification for a node; a sketch of a support-list justification follows the list:

  1. Support list (SL)
  2. Conditional proof (CP)
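
As a rough illustration of a support-list justification in the style of Doyle's justification-based TMS: each SL justification carries an in-list (nodes that must be believed) and an out-list (nodes that must be disbelieved). The naive fixed-point labeling below ignores the subtleties of non-monotonic labelings (such as odd loops) that a real TMS must handle; all names are illustrative.

```python
# Illustrative support-list (SL) justification. A node is labeled IN when some
# justification has all of its in-list nodes IN and all out-list nodes OUT.
from dataclasses import dataclass, field

@dataclass
class Justification:
    in_list: frozenset   # nodes that must be believed (IN)
    out_list: frozenset  # nodes that must be disbelieved (OUT)

@dataclass
class Node:
    name: str
    justifications: list = field(default_factory=list)

def label(nodes):
    """Naive fixed-point labeling: return the set of IN node names."""
    IN = set()
    changed = True
    while changed:
        changed = False
        for node in nodes:
            if node.name not in IN and any(
                j.in_list <= IN and not (j.out_list & IN)
                for j in node.justifications
            ):
                IN.add(node.name)
                changed = True
    return IN

# "Birds fly" as a default: flies(tweety) is IN unless abnormal(tweety) is IN.
bird = Node("bird(tweety)", [Justification(frozenset(), frozenset())])  # premise
flies = Node("flies(tweety)", [Justification(frozenset({"bird(tweety)"}),
                                             frozenset({"abnormal(tweety)"}))])
print(label([bird, flies]))  # {'bird(tweety)', 'flies(tweety)'}
```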

Many kinds of truth maintenance systems exist. Two major types are single-context and multi-context truth maintenance. In single-context systems, consistency is maintained among all facts in memory (the KB), corresponding to the notion of consistency found in classical logic. Multi-context systems support paraconsistency by restricting consistency to a subset of facts in memory, a context, according to the history of logical inference. This is achieved by tagging each fact or deduction with its logical history. Multi-agent truth maintenance systems perform truth maintenance across multiple memories, often located on different machines. de Kleer's assumption-based truth maintenance system (ATMS, 1986) was utilized in systems based upon KEE on the Lisp Machine. The first multi-agent TMS was created by Mason and Johnson. It was a multi-context system. Bridgeland and Huhns created the first single-context multi-agent system.
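
One common reading of the ATMS idea is that each derived fact carries a label: the minimal sets of assumptions (environments) under which it holds, so that queries can be answered relative to any context. A toy sketch under that reading, with all names chosen for illustration:

```python
# Toy sketch of ATMS-style labels. Each fact's label is the set of minimal
# assumption environments under which the fact is derivable. Illustrative only.

def minimize(envs):
    """Keep only minimal environments (drop proper supersets)."""
    return {e for e in envs if not any(other < e for other in envs)}

def combine(antecedent_labels):
    """Label of a consequent: union every combination of antecedent environments."""
    result = {frozenset()}
    for label in antecedent_labels:
        result = {e | f for e in result for f in label}
    return minimize(result)

# Assumptions A, B; rules: A -> p, B -> p, and p -> q.
label_p = minimize({frozenset({"A"}), frozenset({"B"})})
label_q = combine([label_p])
print(label_q)  # {frozenset({'A'}), frozenset({'B'})}: q holds in either context
```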

Related Research Articles

Cyc

Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted. This is contrasted with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables semantic reasoners to perform human-like reasoning and be less "brittle" when confronted with novel situations.

Epistemology

Epistemology is the branch of philosophy concerned with knowledge. Epistemologists study the nature, origin, and scope of knowledge, epistemic justification, the rationality of belief, and various related issues. Epistemology is considered one of the four main branches of philosophy, along with ethics, logic, and metaphysics.

Douglas Lenat

Douglas Bruce Lenat is the CEO of Cycorp, Inc. of Austin, Texas, and has been a prominent researcher in artificial intelligence; he was awarded the biennial IJCAI Computers and Thought Award in 1976 for creating the machine learning program AM. He has worked on machine learning, knowledge representation, "cognitive economy", blackboard systems, and what he dubbed in 1984 "ontological engineering". He has also worked on military simulations and numerous projects for US government, military, intelligence, and scientific organizations. In 1980, he published a critique of conventional random-mutation Darwinism. He authored a series of articles in the journal Artificial Intelligence exploring the nature of heuristic rules.

Case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems. An auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents, or a judge who creates case law, is using case-based reasoning. So, too, an engineer copying working elements of nature is treating nature as a database of solutions to problems. Case-based reasoning is a prominent type of analogy making.

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
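
For instance, with made-up numbers for a minimal two-node disease/symptom network, the diagnostic probability follows directly from Bayes' rule:

```python
# Minimal two-node Bayesian network: Disease -> Symptom, made-up probabilities.
p_disease = 0.01                     # P(D = true), the prior
p_symptom = {True: 0.9, False: 0.1}  # P(S = true | D)

# Bayes' rule: P(D | S) = P(S | D) * P(D) / P(S)
p_s = p_symptom[True] * p_disease + p_symptom[False] * (1 - p_disease)
p_d_given_s = p_symptom[True] * p_disease / p_s
print(f"P(disease | symptom) = {p_d_given_s:.3f}")  # about 0.083
```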

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular premises to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, who contradistinguished abduction from induction.

A non-monotonic logic is a formal logic whose entailment relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences, that is, inferences in which reasoners draw tentative conclusions that may be retracted in light of further evidence. Most studied formal logics have a monotonic entailment relation, meaning that adding a formula to a theory never prunes its set of conclusions. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. A monotonic logic cannot handle various reasoning tasks such as reasoning by default, abductive reasoning, some important approaches to reasoning about knowledge, and, similarly, belief revision.
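
The classic "birds fly" default illustrates this in miniature; the function below is an illustrative toy, not a general non-monotonic prover:

```python
# Non-monotonicity in miniature: enlarging the set of facts can shrink the
# set of conclusions, which cannot happen under a monotonic entailment relation.
def conclusions(facts):
    concl = set(facts)
    if "bird" in facts and "penguin" not in facts:  # defeasible default
        concl.add("flies")
    return concl

assert "flies" in conclusions({"bird"})
assert "flies" not in conclusions({"bird", "penguin"})  # conclusion retracted
```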

Default logic is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.

In artificial intelligence (AI), commonsense reasoning is a human-like ability to make presumptions about the type and essence of ordinary situations humans encounter every day. These assumptions include judgments about the nature of physical objects, taxonomic properties, and people's intentions. A device that exhibits commonsense reasoning might be capable of drawing conclusions that are similar to humans' folk psychology and naive physics.

Belief revision is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.

In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. Representing it is currently an unsolved problem in artificial general intelligence and is a focus of the Allen Institute for Artificial Intelligence. The first AI program to address commonsense knowledge was Advice Taker, proposed in 1959 by John McCarthy.

Raymond Reiter was a Canadian computer scientist and logician. He was one of the founders of the field of non-monotonic reasoning with his work on default logic, model-based diagnosis, closed world reasoning, and truth maintenance systems. He also contributed to the situation calculus.

In philosophical logic, defeasible reasoning is a kind of reasoning that is rationally compelling, though not deductively valid. It usually occurs when a rule is given, but there may be specific exceptions to the rule, or subclasses that are subject to a different rule. Defeasibility is found in literatures that are concerned with argument and the process of argument, or heuristic reasoning.

SNePS is a knowledge representation, reasoning, and acting (KRRA) system developed and maintained by Stuart C. Shapiro and colleagues at the State University of New York at Buffalo.

In artificial intelligence, a procedural reasoning system (PRS) is a framework for constructing real-time reasoning systems that can perform complex tasks in dynamic environments. It is based on the notion of a rational agent or intelligent agent using the belief–desire–intention software model.

David A. McAllester is an American computer scientist who is Professor and former chief academic officer at the Toyota Technological Institute at Chicago. He received his B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology in 1978, 1979, and 1987 respectively. His Ph.D. was supervised by Gerald Sussman. He was on the faculty of Cornell University for the academic year 1987–1988 and on the faculty of MIT from 1988 to 1995. He was a member of technical staff at AT&T Labs-Research from 1995 to 2002. He has been a fellow of the American Association for Artificial Intelligence since 1997. He has written over 100 refereed publications.

DSSim is an ontology mapping system conceived to achieve a certain level of the envisioned machine intelligence on the Semantic Web. The main driving factor behind its development was to provide an alternative to existing heuristic- or machine-learning-based approaches: a multi-agent approach that makes use of uncertain reasoning. The system provides a possible way to establish machine understanding over Semantic Web data through multi-agent beliefs and conflict resolution.

The Provenance Markup Language is an interlingua for representing and sharing knowledge about how information published on the Web was asserted from information sources and/or derived from Web information by intelligent agents. The language was initially developed in support of the DARPA Agent Markup Language, with the goal of explaining how automated theorem provers (ATPs) derive conclusions from a set of axioms. Information, inference steps, inference rules, and agents are the main building blocks of the language. In the context of an inference step, information can play the role of antecedent or conclusion. Information can also play the role of axiom, which is basically a conclusion with no antecedents. PML uses the broad philosophical definition of agent, as opposed to any other more specific definition of agent.

Explainable AI (XAI) is artificial intelligence (AI) in which the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers cannot explain why an AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even if there is no legal right or regulatory requirement; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, the aim of XAI is to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.

Detection of fake news online is important in today's society, as fresh news content is rapidly being produced as a result of the abundance of available technology. Claire Wardle has identified seven main categories of fake news, and within each category the fake news content can be visual and/or linguistic. In order to detect fake news, both linguistic and non-linguistic cues can be analyzed using several methods. While many of these methods of detecting fake news are generally successful, they have some limitations.

References

  1. Doyle, J.: The ins and outs of reason maintenance. In: Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), Volume 1, pp. 349–351. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1983)
  2. Doyle, J.: Truth maintenance systems for problem solving. Tech. Rep. AI-TR-419, Department of Electrical Engineering and Computer Science, MIT (1978)
  3. McAllester, D.A.: Truth maintenance. In: Proceedings of AAAI-90 (1990)
