Probabilistic logic programming is a programming paradigm that combines logic programming with probabilities.
Most approaches to probabilistic logic programming are based on the distribution semantics, [1] which splits a program into a set of probabilistic facts and a logic program and defines a probability distribution over interpretations of the Herbrand universe of the program. The distribution semantics underlies many languages, such as Probabilistic Horn Abduction, PRISM, Independent Choice Logic, probabilistic Datalog, Logic Programs with Annotated Disjunctions, ProbLog, P-log, and CP-logic. While the number of languages is large, all share a common approach, so that there are transformations with linear complexity that can translate one language into another. [2]
Under the distribution semantics, a probabilistic logic program is interpreted as a set of independent probabilistic facts (ground atomic formulas annotated with a probability) and a logic program which can use the probabilistic facts in the bodies of its clauses. The probability of any assignment of truth values to the groundings of the formulas associated with probabilistic facts is given by the product of their probabilities; this is equivalent to assuming the choices of probabilistic facts to be independent random variables. [1] [3]
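As an illustration, the following minimal Python sketch (using two hypothetical probabilistic facts, rain and sprinkler, not taken from any particular system) enumerates all truth value assignments and computes the probability of each as a product of the facts' annotated probabilities:

```python
from itertools import product

# Hypothetical ground probabilistic facts and their annotated probabilities.
facts = {"rain": 0.3, "sprinkler": 0.5}

def world_probability(choice):
    """Probability of one truth value assignment (world): the product of
    p for each true fact and (1 - p) for each false one, reflecting the
    assumption that the facts are independent random variables."""
    prob = 1.0
    for fact, p in facts.items():
        prob *= p if choice[fact] else 1 - p
    return prob

worlds = [dict(zip(facts, values))
          for values in product([True, False], repeat=len(facts))]
for w in worlds:
    print(w, world_probability(w))

# Being a product measure, the world probabilities sum to one.
assert abs(sum(world_probability(w) for w in worlds) - 1.0) < 1e-9
```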
If the logic program resulting from a choice of truth values for the probabilistic facts is stratified, it has a unique minimal Herbrand model, which can be seen as the unique interpretation associated with that choice of truth values. [1]
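For a positive program, the minimal Herbrand model can be computed bottom-up by iterating the immediate-consequence operator to a fixpoint. The sketch below does this for a small hypothetical ground program (rule and fact names are illustrative only):

```python
# A positive (negation-free) ground program as (head, body) pairs.
rules = [
    ("wet", ["rain"]),
    ("wet", ["sprinkler"]),
    ("slippery", ["wet"]),
]
facts = {"rain"}

def least_model(facts, rules):
    """Iterate the immediate-consequence operator until no rule adds
    a new atom; the resulting fixpoint is the least Herbrand model."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

print(least_model(facts, rules))  # {'rain', 'wet', 'slippery'}
```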
Important subclasses of stratified programs are positive programs, which do not use negation, but may be recursive, and acyclic programs, which may use negation but have no recursive dependencies. [1]
The stable model semantics underlying answer set programming gives meaning to unstratified programs by allocating potentially more than one answer set to every truth value assignment of the probabilistic facts. This raises the question of how to distribute the probability mass across the answer sets. [4] [5]
The probabilistic logic programming language P-log resolves this by dividing the probability mass equally among the answer sets, following the principle of indifference. [4] [6]
Alternatively, probabilistic answer set programming under the credal semantics allocates a credal set to every query. Its lower probability bound is defined by only considering those truth value assignments of the probabilistic facts for which the query is true in every answer set of the resulting program (cautious reasoning); its upper probability bound is defined by considering those assignments for which the query is true in some answer set (brave reasoning). [4] [5]
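The difference between the two treatments can be made concrete with a small hypothetical example: a probabilistic fact 0.4::a together with the disjunctive rule b ; c :- a, whose answer sets are hard-coded below rather than computed by an actual answer set solver:

```python
# Answer sets of the program for each truth value of the fact a.
p_a = 0.4
answer_sets = {
    True:  [{"a", "b"}, {"a", "c"}],  # a true: two answer sets
    False: [set()],                   # a false: one (empty) answer set
}

def plog_probability(query):
    """P-log style: each world's mass is split equally among its answer sets."""
    total = 0.0
    for a_true, sets in answer_sets.items():
        w = p_a if a_true else 1 - p_a
        total += w * sum(query in s for s in sets) / len(sets)
    return total

def credal_bounds(query):
    """Credal semantics: the lower bound counts worlds where the query holds
    in every answer set (cautious reasoning), the upper bound those where it
    holds in some answer set (brave reasoning)."""
    lower = sum((p_a if a else 1 - p_a) for a, sets in answer_sets.items()
                if all(query in s for s in sets))
    upper = sum((p_a if a else 1 - p_a) for a, sets in answer_sets.items()
                if any(query in s for s in sets))
    return lower, upper

print(plog_probability("b"))  # 0.4 * 1/2 = 0.2
print(credal_bounds("b"))     # (0.0, 0.4)
```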
Under the distribution semantics, a probabilistic logic program defines a probability distribution over interpretations of its predicates on its Herbrand universe. The probability of a ground query is then obtained from the joint distribution of the query and the worlds: it is the sum of the probabilities of the worlds in which the query is true. [2] [7] [8]
The problem of computing the probability of queries is called (marginal) inference. Solving it by computing all the worlds and then identifying those that entail the query is impractical as the number of possible worlds is exponential in the number of ground probabilistic facts. [2] In fact, already for acyclic programs and atomic queries, computing the conditional probability of a query given a conjunction of atoms as evidence is #P-complete. [9]
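The naive approach can nevertheless be spelled out; the sketch below (reusing the hypothetical rain/sprinkler program from above) enumerates all worlds, computes the least model of each, and sums the probabilities of those entailing the query. The loop over all truth value combinations is exactly the exponential blow-up described above:

```python
from itertools import product

facts = {"rain": 0.3, "sprinkler": 0.5}
rules = [("wet", ["rain"]), ("wet", ["sprinkler"])]

def least_model(true_facts):
    """Least Herbrand model of the positive program given the chosen facts."""
    model = set(true_facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def marginal(query):
    """Sum the probabilities of all worlds whose least model contains
    the query; visits 2^n worlds for n probabilistic facts."""
    total = 0.0
    for values in product([True, False], repeat=len(facts)):
        choice = dict(zip(facts, values))
        weight = 1.0
        for f, p in facts.items():
            weight *= p if choice[f] else 1 - p
        if query in least_model({f for f in facts if choice[f]}):
            total += weight
    return total

print(marginal("wet"))  # 1 - (1 - 0.3) * (1 - 0.5) = 0.65
```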
Usually, exact inference is performed by resorting to knowledge compilation: a propositional theory and a query are compiled into a “target language”, which is then used to answer queries in polynomial time. The compilation becomes the main computational bottleneck, but considerable effort has been devoted to the development of efficient compilers. Compilation methods differ in the compactness of the target language and in the class of queries and transformations that they support in polynomial time. [2]
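The following sketch is not a real knowledge compiler (actual systems target representations such as BDDs, d-DNNFs, or SDDs); it only illustrates the underlying idea of Shannon expansion with memoization, which computes the weighted model count of a query's explanations while implicitly building a decision-diagram-like structure. The explanations and probabilities are the hypothetical ones used above:

```python
from functools import lru_cache

# Probabilities of the hypothetical ground probabilistic facts.
probs = {"rain": 0.3, "sprinkler": 0.5}

@lru_cache(maxsize=None)
def wmc(dnf):
    """Weighted model count of a DNF: a frozenset of clauses, each clause a
    frozenset of (variable, polarity) literals. Branching on one variable per
    step mirrors the decision nodes of a binary decision diagram, and
    memoization shares identical sub-diagrams."""
    if not dnf:
        return 0.0                        # empty disjunction: false
    if frozenset() in dnf:
        return 1.0                        # a fully satisfied clause: true
    var = next(iter(next(iter(dnf))))[0]  # pick a variable to branch on

    def condition(value):
        out = set()
        for clause in dnf:
            if (var, not value) in clause:
                continue                  # clause falsified under this branch
            out.add(clause - {(var, value)})
        return frozenset(out)

    p = probs[var]
    return p * wmc(condition(True)) + (1 - p) * wmc(condition(False))

# Explanations of the query "wet": {rain} or {sprinkler}.
explanations = frozenset({frozenset({("rain", True)}),
                          frozenset({("sprinkler", True)})})
print(wmc(explanations))  # 0.3 + 0.5 - 0.3 * 0.5 = 0.65
```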
Since the cost of exact inference may be very high, approximate algorithms have been developed. They either compute subsets of possibly incomplete explanations or use random sampling. In the first approach, a subset of the explanations provides a lower bound on the probability and the set of partially expanded explanations provides an upper bound. In the second approach, the truth of the query is repeatedly checked in ordinary logic programs sampled from the probabilistic program; the probability of the query is then estimated by the fraction of samples in which it succeeds. [2] [10]
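A minimal sampling sketch for the same hypothetical program: each iteration samples an ordinary logic program by fixing every probabilistic fact, checks the query in it, and the estimate is the fraction of successes:

```python
import random

facts = {"rain": 0.3, "sprinkler": 0.5}

def entails_wet(world):
    # The sampled ordinary program: wet :- rain.  wet :- sprinkler.
    return world["rain"] or world["sprinkler"]

def estimate(n=100_000, seed=0):
    """Monte Carlo estimate of P(wet) as the fraction of sampled worlds
    that entail the query."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        world = {f: rng.random() < p for f, p in facts.items()}
        if entails_wet(world):
            hits += 1
    return hits / n

print(estimate())  # close to the exact value 0.65
```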
Probabilistic inductive logic programming aims to learn probabilistic logic programs from data. This includes parameter learning, which estimates the probability annotations of a program while the clauses themselves are given by the user, and structure learning, in which the clauses themselves are induced by the probabilistic inductive logic programming system. [2]
Common approaches to parameter learning are based on expectation–maximization or gradient descent, while structure learning can be performed by searching the space of possible clauses under a variety of heuristics. [2]
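As a toy illustration of gradient-based parameter learning (many systems use expectation–maximization instead), the sketch below keeps the structure wet :- rain. wet :- sprinkler. fixed, assumes 0.5::sprinkler is known, and learns the probability of rain from observed truth values of wet by gradient ascent on the Bernoulli log-likelihood; all numbers are made up for the example:

```python
# Observed truth values of the query wet: true in 70% of the cases.
observations = [True] * 70 + [False] * 30

def p_wet(p_rain):
    # P(wet) = 1 - (1 - p_rain) * (1 - 0.5), by inclusion-exclusion.
    return 1 - (1 - p_rain) * 0.5

p = 0.5        # initial guess for the probability of rain
lr = 0.01      # learning rate
for _ in range(2000):
    q = p_wet(p)
    # Derivative of the log-likelihood w.r.t. p, using dq/dp = 0.5.
    grad = sum((1 / q if wet else -1 / (1 - q)) * 0.5 for wet in observations)
    p = min(max(p + lr * grad / len(observations), 1e-6), 1 - 1e-6)

print(p)  # approaches 0.4, since P(wet) = 0.5 + 0.5 * p equals 0.7 there
```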
Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:

A :- B1, ..., Bn.

and are read declaratively as logical implications: A if B1 and ... and Bn.
Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical rather than mathematical induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.
Deductive reasoning is the mental process of drawing deductive inferences. An inference is deductively valid if its conclusion follows logically from its premises, i.e. it is impossible for the premises to be true and the conclusion to be false.
Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference has traditionally been divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference, abduction, is sometimes distinguished from induction, notably by Charles Sanders Peirce.
A non-monotonic logic is a formal logic whose entailment relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences: inferences in which reasoners draw tentative conclusions and may retract them in the light of further evidence. Most studied formal logics have a monotonic entailment relation, meaning that adding a formula to the hypotheses never reduces the set of conclusions. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. Monotonic logics cannot handle various reasoning tasks such as reasoning by default, abductive reasoning, some important approaches to reasoning about knowledge, and, similarly, belief revision.
Inductive reasoning refers to any method of reasoning in which broad generalizations or principles are derived from a body of observations. It contrasts with deductive reasoning, where the conclusion of an argument is certain given that the premises are correct; the truth of the conclusion of an inductive argument is at best probable, based upon the evidence given.
Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
Datalog is a declarative logic programming language. While it is syntactically a subset of Prolog, Datalog generally uses a bottom-up rather than top-down evaluation model. This difference yields significantly different behavior and properties from Prolog. It is often used as a query language for deductive databases. Datalog has been applied to problems in data integration, networking, program analysis, and more.
Formal epistemology uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.
A Markov logic network (MLN) is a probabilistic logic which applies the ideas of a Markov network to first-order logic, defining probability distributions on possible worlds on any given domain.
Probabilistic logic involves the use of probability and logic to deal with uncertain situations. Probabilistic logic extends traditional logic truth tables with probabilistic expressions. A difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities they provide, such as defined in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals.
Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics, and to a lesser extent of computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes to specialized analyses of reasoning, such as reasoning about probability and causality. One of the aims of logic is to distinguish correct from incorrect inferences. Logicians study the criteria for the evaluation of arguments.
In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain.
A probabilistic logic network (PLN) is a conceptual, mathematical and computational approach to uncertain inference; inspired by logic programming, but using probabilities in place of crisp (true/false) truth values, and fractional uncertainty in place of crisp known/unknown values. In order to carry out effective reasoning in real-world circumstances, artificial intelligence software must robustly handle uncertainty. However, previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN is able to encompass within uncertain logic such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.
Probabilistic programming (PP) is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general purpose programming in order to make the former easier and more widely applicable. It can be used to create systems that help make decisions in the face of uncertainty.
In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative and often recursive programs from incomplete specifications, such as input/output examples or constraints.
ProbLog is a probabilistic logic programming language that extends Prolog with probabilities. It minimally extends Prolog by adding the notion of a probabilistic fact, which combines the idea of logical atoms and random variables. As in Prolog, a ProbLog program can be queried for an atom; but while Prolog returns the truth value of the queried atom, ProbLog returns the probability of it being true.
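Assuming the problog Python package is installed, a query can be evaluated along the following lines (the API shown follows the package's documented usage, but details may differ between versions):

```python
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.3::rain.
0.5::sprinkler.
wet :- rain.
wet :- sprinkler.
query(wet).
""")

# Ground and compile the program, then evaluate each query atom.
result = get_evaluatable().create_from(model).evaluate()
print(result)  # e.g. {wet: 0.65}
```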
In artificial intelligence, a sentential decision diagram (SDD) is a type of knowledge representation used in knowledge compilation to represent Boolean functions. SDDs can be viewed as a generalization of the influential ordered binary decision diagram (OBDD) representation, by allowing decisions on multiple variables at once. Like OBDDs, SDDs allow for tractable Boolean operations, while being exponentially more succinct. For this reason, they have become an important representation in knowledge compilation.
As of 3 February 2024, this article is derived in whole or in part from Riguzzi, Fabrizio; Bellodi, Elena; Zese, Riccardo (2014). "A History of Probabilistic Inductive Logic Programming". Frontiers in Robotics and AI. 1. doi:10.3389/frobt.2014.00006. The copyright holder has licensed the content in a manner that permits reuse under CC BY-SA 3.0 and GFDL. All relevant terms must be followed.