Golem is an inductive logic programming algorithm developed by Stephen Muggleton and Cao Feng in 1990. [1] It uses the technique of relative least general generalisation proposed by Gordon Plotkin, leading to a bottom-up search through the subsumption lattice. [2] In 1992, shortly after its introduction, Golem was considered the only inductive logic programming system capable of scaling to tens of thousands of examples. [3]
Golem takes as input a definite program B as background knowledge together with sets of positive and negative examples, denoted E⁺ and E⁻ respectively. The overall idea is to construct the least general generalisation of E⁺ with respect to the background knowledge. However, if B is not merely a finite set of ground atoms, then this relative least general generalisation may not exist. [4] Therefore, rather than using B directly, Golem uses the set of all ground atoms that can be resolved from B in at most h resolution steps. An additional difficulty is that if E⁻ is non-empty, the least general generalisation of E⁺ may entail a negative example. In this case, Golem generalises different subsets of E⁺ separately to obtain a program of several clauses. [2] Golem also employs some restrictions on the hypothesis space, ensuring that relative least general generalisations are polynomial in the number of training examples. Golem demands that all variables in the head of a clause also appear in a literal of the clause body; that the number of substitutions needed to instantiate existentially quantified variables introduced in a literal is bounded; and that the depth of the chain of substitutions needed to instantiate such a variable is also bounded. [3]
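At the heart of this construction is Plotkin's least general generalisation (anti-unification) of two atoms, which replaces mismatching subterms by variables while reusing the same variable for the same pair of subterms. A minimal sketch in Python, assuming atoms are represented as nested tuples (the representation and function name are illustrative, not Golem's actual implementation):

```python
# Minimal sketch of Plotkin's least general generalisation (anti-unification)
# of two atoms/terms. Terms are nested tuples such as ('parent', 'ann', 'mary');
# introduced variables are strings like 'X0'. Illustrative only.

def lgg(t1, t2, table=None, counter=None):
    """Least general generalisation of two terms."""
    if table is None:
        table, counter = {}, [0]
    # Identical subterms generalise to themselves.
    if t1 == t2:
        return t1
    # Compound terms with the same functor and arity: generalise argument-wise.
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    # Otherwise replace the mismatching pair by a variable, reusing the same
    # variable for the same pair so that co-occurrences are preserved.
    if (t1, t2) not in table:
        table[(t1, t2)] = f"X{counter[0]}"
        counter[0] += 1
    return table[(t1, t2)]

# Example: lgg(('dau','mary','ann'), ('dau','eve','tom')) == ('dau', 'X0', 'X1')
```

A clause-level relative lgg applies this operation pairwise to compatible literals, sharing the variable table across the whole clause.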
The following example about learning definitions of family relations uses abbreviated predicate and constant symbols. It starts from background knowledge (cf. picture) consisting of ground facts for the parent and female relations, from positive examples given as ground facts for the daughter relation, and from the trivial proposition true to denote the absence of negative examples. The relative least general generalisation of the positive examples is now computed as follows to obtain a definition of the daughter relation.
The resulting Horn clause is the hypothesis h obtained by Golem. Informally, the clause reads "x is called a daughter of y if y is the parent of x and x is female", which is a commonly accepted definition.
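For concreteness, a hedged reconstruction with illustrative family facts (the concrete constants below are assumptions made for the sake of the example, not necessarily those of the original figure): each positive example is combined with the relevant background literals into a ground clause, and the least general generalisation of the two clauses is then taken literal-wise.

```latex
% Illustrative ground clauses built from assumed facts
% par(ann, mary), fem(mary), par(tom, eve), fem(eve)
% and positive examples dau(mary, ann), dau(eve, tom).
\begin{align*}
C_1 &= \mathit{dau}(\mathit{mary},\mathit{ann}) \leftarrow \mathit{par}(\mathit{ann},\mathit{mary}) \land \mathit{fem}(\mathit{mary})\\
C_2 &= \mathit{dau}(\mathit{eve},\mathit{tom}) \leftarrow \mathit{par}(\mathit{tom},\mathit{eve}) \land \mathit{fem}(\mathit{eve})\\
\mathrm{lgg}(C_1, C_2) &= \mathit{dau}(x_1, x_2) \leftarrow \mathit{par}(x_2, x_1) \land \mathit{fem}(x_1)
\end{align*}
```

Anti-unifying the constant pairs (mary, eve) to x₁ and (ann, tom) to x₂ consistently across all literals yields the generalised clause.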
Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical rather than mathematical induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.
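In the common learning-from-entailment setting, with background knowledge B, positive and negative examples E⁺ and E⁻, and hypothesis H, this requirement is usually stated as:

```latex
% Completeness: together with B, the hypothesis entails every positive example.
% Consistency: it entails no negative example.
\begin{align*}
\text{Completeness:} \quad & B \land H \models e \quad \text{for all } e \in E^{+}\\
\text{Consistency:}  \quad & B \land H \not\models e \quad \text{for all } e \in E^{-}
\end{align*}
```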
In boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or a cluster concept. As a normal form, it is useful in automated theorem proving.
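For example, the following formula is in disjunctive normal form (an arbitrary illustrative instance):

```latex
(A \land \lnot B \land \lnot C) \lor (\lnot D \land E \land F) \lor (A \land E)
```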
In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs. As a canonical normal form, it is useful in automated theorem proving and circuit theory.
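For example, the following formula is in conjunctive normal form (again an arbitrary illustrative instance):

```latex
(A \lor \lnot B) \land (\lnot C \lor D \lor E) \land (\lnot A \lor \lnot D)
```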
In mathematical logic, a universal quantification is a type of quantifier, a logical constant which is interpreted as "given any", "for all", or "for any". It expresses that a predicate can be satisfied by every member of a domain of discourse. In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate within the scope of a universal quantifier is true of every value of a predicate variable.
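For example, the statement that every natural number is unchanged by adding zero is written with a universal quantifier as:

```latex
\forall n \in \mathbb{N}.\; n + 0 = n
```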
Exclusive or (also called exclusive disjunction, exclusive alternation, logical non-equivalence, or logical inequality) is a logical operator whose negation is the logical biconditional. With two inputs, XOR is true if and only if the inputs differ. With multiple inputs, XOR is true if and only if the number of true inputs is odd.
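This parity behaviour can be made concrete in a short sketch (illustrative code, not tied to any particular library):

```python
from functools import reduce
from operator import xor

def multi_xor(*inputs: bool) -> bool:
    """XOR of any number of Boolean inputs: true iff an odd number are true."""
    return reduce(xor, inputs, False)

assert multi_xor(True, False) is True                # two inputs: they differ
assert multi_xor(True, True) is False                # two inputs: they agree
assert multi_xor(True, True, True) is True           # three true inputs: odd count
assert multi_xor(True, True, False, False) is False  # two true inputs: even count
```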
Relevance logic, also called relevant logic, is a kind of non-classical logic requiring the antecedent and consequent of implications to be relevantly related. They may be viewed as a family of substructural or modal logics. It is generally, but not universally, called relevant logic by British and, especially, Australian logicians, and relevance logic by American logicians.
In axiomatic set theory and the branches of mathematics and philosophy that use it, the axiom of infinity is one of the axioms of Zermelo–Fraenkel set theory. It guarantees the existence of at least one infinite set, namely a set containing the natural numbers. It was first published by Ernst Zermelo as part of his set theory in 1908.
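One common formulation of the axiom in the language of set theory is:

```latex
\exists I \, \bigl( \varnothing \in I \;\land\; \forall x \, ( x \in I \rightarrow x \cup \{x\} \in I ) \bigr)
```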
In mathematics, a Heyting algebra (also known as pseudo-Boolean algebra) is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b of implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting (1930) to formalize intuitionistic logic.
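A small concrete instance is a finite chain: meet and join are minimum and maximum, and a → b is 1 when a ≤ b and b otherwise. The sketch below checks the defining adjunction on the three-element chain {0, ½, 1} (illustrative code):

```python
from itertools import product

# The three-element chain 0 < 1/2 < 1 viewed as a Heyting algebra.
# Meet is min, join is max; implication is 1 when a <= b, else b.
elements = [0.0, 0.5, 1.0]

def meet(a, b):
    return min(a, b)

def implies(a, b):
    return 1.0 if a <= b else b

# Check the defining adjunction: (c ∧ a) <= b  iff  c <= (a → b).
for a, b, c in product(elements, repeat=3):
    assert (meet(c, a) <= b) == (c <= implies(a, b))
print("Adjunction holds on the three-element chain.")
```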
In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time.
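The standard polynomial-time procedure builds an implication graph (each clause a ∨ b yields the implications ¬a → b and ¬b → a) and checks strongly connected components; the formula is satisfiable iff no variable shares a component with its own negation. A self-contained sketch using Kosaraju's algorithm (illustrative code):

```python
def two_sat(num_vars, clauses):
    """Decide 2-SAT via the implication graph and strongly connected components.
    Clauses are pairs of nonzero ints: k means x_k, -k means ¬x_k."""
    n = 2 * num_vars                      # one node per literal
    node = lambda lit: 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
    neg = lambda v: v ^ 1                 # node of the complementary literal

    graph = [[] for _ in range(n)]
    rgraph = [[] for _ in range(n)]
    for a, b in clauses:                  # (a ∨ b)  gives  ¬a → b  and  ¬b → a
        for u, v in ((neg(node(a)), node(b)), (neg(node(b)), node(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    # First pass: record vertices in order of completed DFS (iterative).
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        stack = [(s, iter(graph[s]))]
        seen[s] = True
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(graph[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()

    # Second pass: label components on the reversed graph in reverse finish order.
    comp, label = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], label
        while stack:
            u = stack.pop()
            for v in rgraph[u]:
                if comp[v] == -1:
                    comp[v] = label
                    stack.append(v)
        label += 1

    # Satisfiable iff no variable is in the same component as its negation.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(num_vars))

# (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x1) is satisfiable (x1 = x2 = True) ...
print(two_sat(2, [(1, 2), (-1, 2), (-2, 1)]))             # True
# ... whereas adding (¬x1 ∨ ¬x2) makes the formula unsatisfiable.
print(two_sat(2, [(1, 2), (-1, 2), (-2, 1), (-1, -2)]))   # False
```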
In computer science, program synthesis is the task of constructing a program that provably satisfies a given high-level formal specification. In contrast to program verification, the program is to be constructed rather than given; however, both fields make use of formal proof techniques, and both comprise approaches of different degrees of automation. In contrast to automatic programming techniques, specifications in program synthesis are usually non-algorithmic statements in an appropriate logical calculus.
In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked with a code assertion. Knowing its invariant(s) is essential in understanding the effect of a loop.
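A minimal illustration in Python, with the invariant checked by an assertion on each iteration (illustrative code):

```python
def sum_list(values):
    """Sum a list while asserting the loop invariant: before (and after) each
    iteration, `total` equals the sum of the first `i` elements processed."""
    total = 0
    i = 0
    while i < len(values):
        assert total == sum(values[:i])   # loop invariant
        total += values[i]
        i += 1
    assert total == sum(values)           # invariant plus exit condition
    return total

print(sum_list([3, 1, 4, 1, 5]))  # 14
```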
CARINE (Computer Aided Reasoning Engine) is a first-order classical logic automated theorem prover. It was initially built for the study of the enhancement effects of the strategies delayed clause-construction (DCC) and attribute sequences (ATS) in a depth-first search based algorithm. CARINE's main search algorithm is semi-linear resolution (SLR) which is based on an iteratively-deepening depth-first search (also known as depth-first iterative-deepening (DFID)) and used in theorem provers like THEO. SLR employs DCC to achieve a high inference rate, and ATS to reduce the search space.
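CARINE's actual search is considerably more elaborate; the following is only a generic sketch of iteratively-deepening depth-first search, with a hypothetical `expand` function supplying successors (illustrative code):

```python
def dfid(root, is_goal, expand, max_depth=50):
    """Generic iteratively-deepening depth-first search: run depth-limited DFS
    with limits 0, 1, 2, ... until a goal is found or max_depth is reached.
    `expand` maps a node to its successors (assumed, illustrative interface)."""
    def dls(node, limit):
        if is_goal(node):
            return node
        if limit == 0:
            return None
        for child in expand(node):
            found = dls(child, limit - 1)
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):
        result = dls(root, limit)
        if result is not None:
            return result
    return None

# Tiny usage example: search for 6 by expanding n -> [2n, 2n + 1] from root 1.
print(dfid(1, lambda n: n == 6, lambda n: [2 * n, 2 * n + 1]))  # 6
```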
Independence-friendly logic is an extension of classical first-order logic (FOL) by means of slashed quantifiers of the form (∃x/V) and (∀x/V), where V is a finite set of variables. The intended reading of (∃x/V) is "there is an x which is functionally independent from the variables in V". IF logic allows one to express more general patterns of dependence between variables than those which are implicit in first-order logic. This greater level of generality leads to an actual increase in expressive power; the set of IF sentences can characterize the same classes of structures as existential second-order logic.
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
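In its propositional form, the rule combines two clauses containing a complementary pair of literals into their resolvent; for example, resolving P ∨ Q with ¬P ∨ R on P yields Q ∨ R:

```latex
% Resolving the clauses (P ∨ Q) and (¬P ∨ R) on P yields the resolvent (Q ∨ R).
\frac{P \lor Q \qquad \lnot P \lor R}{Q \lor R}
```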
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
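A small brute-force illustration (illustrative code; the formula below is a standard example in which at most three of the four clauses can be satisfied simultaneously):

```python
from itertools import product

# Brute-force MAX-SAT on a tiny CNF formula, for illustration only.
# A clause is a tuple of nonzero integer literals: k means x_k, -k means ¬x_k.
clauses = [(1, 2), (1, -2), (-1, 2), (-1, -2)]
num_vars = 2

best = 0
for bits in product([False, True], repeat=num_vars):
    value = {i + 1: bits[i] for i in range(num_vars)}
    satisfied = sum(
        any(value[abs(l)] if l > 0 else not value[abs(l)] for l in clause)
        for clause in clauses
    )
    best = max(best, satisfied)

print(best)  # 3 — no assignment satisfies all four clauses
```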
In mathematical logic, a tautology is a formula or assertion that is true in every possible interpretation. An example is "x=y or x≠y". Similarly, "either the ball is green, or the ball is not green" is always true, regardless of the colour of the ball.
In computer science, conflict-driven clause learning (CDCL) is an algorithm for solving the Boolean satisfiability problem (SAT). Given a Boolean formula, the SAT problem asks for an assignment of variables so that the entire formula evaluates to true. The internal workings of CDCL SAT solvers were inspired by DPLL solvers. The main difference between CDCL and DPLL is that CDCL's backjumping is non-chronological.
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world.
In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula—whose incidence graph consisting of variables and clauses can be embedded on a plane—can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
Theta-subsumption is a decidable relation between two first-order clauses that guarantees that one clause logically entails the other. It was first introduced by John Alan Robinson in 1965 and has become a fundamental notion in inductive logic programming. Deciding whether a given clause θ-subsumes another is an NP-complete problem.
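For illustration, if clauses are represented as sets of literals, a candidate substitution can simply be applied and checked by set inclusion; for instance, p(X) ∨ ¬q(X) θ-subsumes p(a) ∨ ¬q(a) ∨ ¬r(a) under θ = {X ↦ a}. Finding such a θ is the hard part; verifying a given one, as in the sketch below (illustrative representation and code), is straightforward:

```python
# A literal is (sign, predicate, args); variables are uppercase strings.
# Clause C θ-subsumes clause D iff some substitution θ makes Cθ a subset of D.

def apply_subst(literal, theta):
    sign, pred, args = literal
    return (sign, pred, tuple(theta.get(a, a) for a in args))

def theta_subsumes_with(c, d, theta):
    """Check whether the given substitution witnesses that C θ-subsumes D."""
    return {apply_subst(lit, theta) for lit in c} <= set(d)

C = [(True, 'p', ('X',)), (False, 'q', ('X',))]                        # p(X) ∨ ¬q(X)
D = [(True, 'p', ('a',)), (False, 'q', ('a',)), (False, 'r', ('a',))]  # p(a) ∨ ¬q(a) ∨ ¬r(a)
print(theta_subsumes_with(C, D, {'X': 'a'}))                           # True
```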