In computer science, the well-founded semantics is a three-valued semantics for logic programming, which gives a precise meaning to general logic programs.
The well-founded semantics was defined by Van Gelder et al. in 1988. [1] [2] The Prolog system XSB has implemented the well-founded semantics since 1997. [3] [4]
The well-founded semantics assigns a unique model to every general logic program. However, instead of assigning each proposition only true or false, it adds a third truth value, unknown, representing ignorance. [1]
A simple example is the logic program that encodes two propositions a and b, and in which a must be true whenever b is not and vice versa:

a :- not(b).
b :- not(a).

In the well-founded semantics, neither a nor b is true or false; both have the truth value unknown. In the two-valued stable model semantics, by contrast, there are two stable models: one in which a is true and b is false, and one in which b is true and a is false.
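To make the contrast concrete, the following Python sketch (an illustration only, not code from any of the cited systems) enumerates all interpretations of the two-rule program and checks which of them are stable models using the Gelfond-Lifschitz reduct.

from itertools import chain, combinations

# Each rule is written as (head, positive_body, negative_body).
program = [
    ("a", [], ["b"]),  # a :- not(b).
    ("b", [], ["a"]),  # b :- not(a).
]
atoms = {"a", "b"}

def least_model_of_reduct(program, candidate):
    """Least model of the Gelfond-Lifschitz reduct of `program` w.r.t. `candidate`."""
    # Delete rules whose negated atoms intersect the candidate, drop the
    # remaining negative literals, and compute the least fixpoint of the rest.
    rules = [(head, pos) for head, pos, neg in program if not (set(neg) & candidate)]
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# A candidate interpretation is stable iff it equals the least model of its reduct.
stable_models = [set(c) for c in powerset(atoms)
                 if least_model_of_reduct(program, set(c)) == set(c)]
print(stable_models)  # the two stable models, {'a'} and {'b'} (in either order)

The well-founded model of the same program, in which both atoms are unknown, can be computed with the alternating-fixpoint-style sketch given further below.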
Stratified logic programs have a two-valued well-founded model, in which every proposition is either true or false; this model coincides with the program's unique stable model. The well-founded semantics can be viewed as a three-valued version of the stable model semantics. [5]
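As an illustration (the program and its stratification below are a hypothetical example, not taken from the cited sources), the following Python sketch evaluates a stratified program bottom-up, stratum by stratum. Because negation is only ever applied to atoms that have already been fully decided, every atom ends up either true or false.

# Hypothetical stratified program, in the rule notation used above:
#   p :- not(q).   (there is no rule for q, so q sits in a lower stratum and is false)
#   r :- p.
# Rules are (head, positive_body, negative_body), grouped by stratum.
strata = [
    [],                   # stratum 0: no rule for q, so q is false
    [("p", [], ["q"])],   # stratum 1: p :- not(q).
    [("r", ["p"], [])],   # stratum 2: r :- p.
]

true_atoms = set()
for rules in strata:
    changed = True
    while changed:        # least fixpoint within the current stratum;
        changed = False   # negated atoms were already decided in lower strata
        for head, pos, neg in rules:
            if set(pos) <= true_atoms and not (set(neg) & true_atoms) and head not in true_atoms:
                true_atoms.add(head)
                changed = True

print(true_atoms)  # {'p', 'r'}: q is false, p and r are true, so no atom is unknown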
In 1989, Van Gelder proposed an algorithm for computing the well-founded semantics of a propositional logic program; its running time is quadratic in the size of the program. [6] As of 2001, no general subquadratic algorithm for the problem was known. [7]
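One way to compute the well-founded model is sketched below in Python; it follows an alternating-fixpoint style of construction, but it is a naive, readable reference implementation rather than the quadratic algorithm of the cited paper, and the names and program representation are illustrative. The operator gamma maps a set of atoms assumed true to the least model of the Gelfond-Lifschitz reduct with respect to that set; iterating gamma twice from the empty set converges to the well-founded true atoms, and one further application of gamma yields the atoms that are true or unknown.

def gamma(program, assumed_true):
    """Least model of the reduct of `program` w.r.t. `assumed_true`."""
    rules = [(head, pos) for head, pos, neg in program
             if not (set(neg) & assumed_true)]
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def well_founded(program, atoms):
    """Return (true, unknown, false) atoms under the well-founded semantics."""
    true = set()
    while True:
        nxt = gamma(program, gamma(program, true))  # one step of the squared operator
        if nxt == true:
            break
        true = nxt
    true_or_unknown = gamma(program, true)
    return true, true_or_unknown - true, atoms - true_or_unknown

# The two-rule example from above: nothing is true or false, a and b are unknown.
example = [("a", [], ["b"]), ("b", [], ["a"])]
print(well_founded(example, {"a", "b"}))          # -> (set(), {'a', 'b'}, set())

# A stratified program (p :- not(q).  r :- p.) gets a total, two-valued model.
stratified = [("p", [], ["q"]), ("r", ["p"], [])]
print(well_founded(stratified, {"p", "q", "r"}))  # -> ({'p', 'r'}, set(), {'q'})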