Probabilistic logic

Probabilistic logic (also probability logic and probabilistic reasoning) involves the use of probability and logic to deal with uncertain situations. Probabilistic logic extends traditional logic truth tables with probabilistic expressions. A difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in the case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities they provide, as modeled in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals.

Logical background

There are numerous proposals for probabilistic logics. Very roughly, they can be categorized into two different classes: those logics that attempt to make a probabilistic extension to logical entailment, such as Markov logic networks, and those that attempt to address the problems of uncertainty and lack of evidence (evidentiary logics).
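
As a concrete illustration of the entailment-oriented view, Nilsson-style probabilistic logic treats probabilistic entailment as a linear-programming problem over possible worlds: given probabilities for premise sentences, one asks for the tightest bounds on the probability of a conclusion. The following sketch is illustrative only; the propositions, the premise probabilities, and the use of scipy are assumptions made for this example, not part of any particular cited system.

```python
import numpy as np
from scipy.optimize import linprog

# Possible worlds over two propositions, listed as (A, B) truth values.
worlds = [(True, True), (True, False), (False, True), (False, False)]

def indicator(sentence):
    """Vector marking the worlds in which a sentence holds."""
    return np.array([1.0 if sentence(a, b) else 0.0 for a, b in worlds])

# Premise constraints: P(A) = 0.8 and P(A -> B) = 0.9, with world
# probabilities summing to 1 (probabilistic entailment as linear programming).
A_eq = np.vstack([indicator(lambda a, b: a),
                  indicator(lambda a, b: (not a) or b),
                  np.ones(len(worlds))])
b_eq = np.array([0.8, 0.9, 1.0])
query = indicator(lambda a, b: b)  # the conclusion B

bounds = [(0.0, 1.0)] * len(worlds)
lower = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
upper = -linprog(-query, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(lower, upper)  # approximately 0.7 and 0.9
```

With these premise probabilities the bounds come out as 0.7 ≤ P(B) ≤ 0.9, matching the hand calculation max(0, P(A) + P(A → B) − 1) ≤ P(B) ≤ P(A → B).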

That the concept of probability can have different meanings may be understood by noting that, despite the mathematization of probability in the Enlightenment, mathematical probability theory remains, to this very day, entirely unused in criminal courtrooms when evaluating the "probability" of the guilt of a suspected criminal.[1]

More precisely, in evidentiary logic, there is a need to distinguish the objective truth of a statement from our decision about the truth of that statement, which in turn must be distinguished from our confidence in its truth: thus, a suspect's real guilt is not necessarily the same as the judge's decision on guilt, which in turn is not the same as assigning a numerical probability to the commission of the crime, and deciding whether it is above a numerical threshold of guilt. The verdict on a single suspect may be guilty or not guilty with some uncertainty, just as the flipping of a coin may be predicted as heads or tails with some uncertainty. Given a large collection of suspects, a certain percentage may be guilty, just as the probability of flipping "heads" is one-half. However, it is incorrect to take this law of averages with regard to a single criminal (or single coin-flip): the criminal is no more "a little bit guilty" than predicting a single coin flip to be "a little bit heads and a little bit tails": we are merely uncertain as to which it is. Expressing uncertainty as a numerical probability may be acceptable when making scientific measurements of physical quantities, but it is merely a mathematical model of the uncertainty we perceive in the context of "common sense" reasoning and logic. Just as in courtroom reasoning, the goal of employing uncertain inference is to gather evidence to strengthen the confidence of a proposition, as opposed to performing some sort of probabilistic entailment.

Historical context

Historically, attempts to quantify probabilistic reasoning date back to antiquity. There was a particularly strong interest starting in the 12th century, with the work of the Scholastics, with the invention of the half-proof (so that two half-proofs are sufficient to prove guilt), the elucidation of moral certainty (sufficient certainty to act upon, but short of absolute certainty), the development of Catholic probabilism (the idea that it is always safe to follow the established rules of doctrine or the opinion of experts, even when they are less probable), the case-based reasoning of casuistry, and the scandal of Laxism (whereby probabilism was used to give support to almost any statement at all, it being possible to find an expert opinion in support of almost any proposition).[1]

Modern proposals

Below is a list of proposals for probabilistic and evidentiary extensions to classical and predicate logic.

Related Research Articles

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

In propositional logic, modus ponens, also known as modus ponendo ponens, implication elimination, or affirming the antecedent, is a deductive argument form and rule of inference. It can be summarized as "P implies Q. P is true. Therefore, Q must also be true."

In propositional logic, modus tollens (MT), also known as modus tollendo tollens and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens is a mixed hypothetical syllogism that takes the form of "If P, then Q. Not Q. Therefore, not P." It is an application of the general truth that if a statement is true, then so is its contrapositive. The form shows that inference from "P implies Q" to "the negation of Q implies the negation of P" is a valid argument.
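
Because both argument forms are valid purely in virtue of their structure, their validity can be checked mechanically by enumerating all truth assignments. The short sketch below does this; the helper names are introduced only for the illustration.

```python
from itertools import product

def implies(p, q):
    # Material implication: "P implies Q" is false only when P is true and Q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    """An argument form is valid iff every truth assignment (possible world)
    that makes all premises true also makes the conclusion true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(premise(p, q) for premise in premises))

# Modus ponens: P -> Q, P, therefore Q.
print(is_valid([implies, lambda p, q: p], lambda p, q: q))          # True
# Modus tollens: P -> Q, not Q, therefore not P.
print(is_valid([implies, lambda p, q: not q], lambda p, q: not p))  # True
```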

The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.

Abductive reasoning is a form of logical inference that seeks the simplest and most likely conclusion from a set of observations. It was formulated and advanced by American philosopher Charles Sanders Peirce beginning in the latter half of the 19th century.

Deductive reasoning is the mental process of drawing deductive inferences. An inference is deductively valid if its conclusion follows logically from its premises, i.e. it is impossible for the premises to be true and the conclusion to be false.

The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory (DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. First introduced by Arthur P. Dempster in the context of statistical inference, the theory was later developed by Glenn Shafer into a general framework for modeling epistemic uncertainty—a mathematical theory of evidence. The theory allows one to combine evidence from different sources and arrive at a degree of belief that takes into account all the available evidence.
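
The central combination mechanism in DST is Dempster's rule, which multiplies the masses two independent sources assign to subsets of the frame of discernment, discards the mass that falls on the empty intersection (the conflict), and renormalizes. The sketch below is a minimal illustration with made-up sensor numbers; the function and variable names are assumptions for the example only.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions, each given as a
    dict mapping frozenset subsets of the frame of discernment to masses
    summing to 1."""
    combined, conflict = {}, 0.0
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        intersection = x & y
        if intersection:
            combined[intersection] = combined.get(intersection, 0.0) + mx * my
        else:
            conflict += mx * my  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical example: two sensors reporting on whether a target is a bird or a plane.
frame = frozenset({"bird", "plane"})
m_sensor1 = {frozenset({"bird"}): 0.6, frame: 0.4}  # 0.4 left uncommitted
m_sensor2 = {frozenset({"bird"}): 0.7, frozenset({"plane"}): 0.1, frame: 0.2}
print(combine(m_sensor1, m_sensor2))
```

Renormalizing away the conflict mass is precisely the step that can yield the counter-intuitive fusion results mentioned in the introduction when the sources are highly conflicting.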

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce, who contradistinguished abduction from induction.

The term inductive reasoning refers to any method of reasoning in which broad generalizations or principles are derived from a body of observations. It is distinct from deductive reasoning, where the conclusion of a deductive argument is certain given that the premises are correct; in contrast, the truth of the conclusion of an inductive argument is at best probable, based upon the evidence given.

Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. The theory thereby aims to represent the available knowledge more accurately. Imprecision is also useful for dealing with expert elicitation.
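
One minimal way to picture a partial probability specification is as a lower/upper probability pair rather than a single number, with the vacuous interval [0, 1] expressing complete ignorance. The sketch below is only a toy illustration (the class and the example figures are assumptions); full imprecise-probability theories work with sets of distributions, lower previsions, and related structures.

```python
from dataclasses import dataclass

@dataclass
class ProbInterval:
    """A lower/upper probability pair for a single event."""
    lower: float
    upper: float

    def complement(self):
        # Bounds for "not A" follow directly from the bounds for A.
        return ProbInterval(1.0 - self.upper, 1.0 - self.lower)

    def is_vacuous(self):
        # The interval [0, 1] commits to nothing at all about the event.
        return self.lower == 0.0 and self.upper == 1.0

# An expert willing to say only "between 20% and 60% likely".
rain = ProbInterval(0.2, 0.6)
print(rain.complement())  # ProbInterval(lower=0.4, upper=0.8)
```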

Formal epistemology uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.

The lottery paradox arises from Henry E. Kyburg Jr. considering a fair 1,000-ticket lottery that has exactly one winning ticket. If that much is known about the execution of the lottery, it is then rational to accept that some ticket will win.

Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics, and to a lesser extent of computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify correct and incorrect inferences. Logicians study the criteria for the evaluation of arguments.

Subjective logic is a type of probabilistic logic that explicitly takes epistemic uncertainty and source trust into account. In general, subjective logic is suitable for modeling and analysing situations involving uncertainty and relatively unreliable sources. For example, it can be used for modeling and analysing trust networks and Bayesian networks.
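
In subjective logic the basic object is an opinion, which splits probability mass into belief, disbelief, and uncertainty components together with a base rate; the explicit uncertainty mass is what allows lack of evidence and source reliability to be represented. The sketch below shows a binomial opinion and its projected probability P = b + a·u; it is a simplified illustration of Jøsang's formalism, and the class layout and numbers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial opinion: belief, disbelief, uncertainty and base rate,
    with belief + disbelief + uncertainty = 1."""
    belief: float       # b: mass supporting the proposition
    disbelief: float    # d: mass against the proposition
    uncertainty: float  # u: uncommitted mass (epistemic uncertainty)
    base_rate: float    # a: prior probability used in the absence of evidence

    def __post_init__(self):
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9

    def projected_probability(self):
        # The uncertainty mass is apportioned according to the base rate.
        return self.belief + self.base_rate * self.uncertainty

# A weakly supported claim from a source we only partly trust.
claim = Opinion(belief=0.3, disbelief=0.1, uncertainty=0.6, base_rate=0.5)
print(claim.projected_probability())  # 0.6
```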

A probabilistic logic network (PLN) is a conceptual, mathematical and computational approach to uncertain inference; inspired by logic programming, but using probabilities in place of crisp (true/false) truth values, and fractional uncertainty in place of crisp known/unknown values. In order to carry out effective reasoning in real-world circumstances, artificial intelligence software must robustly handle uncertainty. However, previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN is able to encompass within uncertain logic such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.

Uncertain inference was first described by C. J. van Rijsbergen as a way to formally define the relationship between a query and a document in information retrieval. This formalization is a logical implication with an attached measure of uncertainty.

Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the science of deductively valid inferences or logical truths. It studies how conclusions follow from premises due to the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. It examines arguments expressed in natural language while formal logic uses formal language. When used as a countable noun, the term "a logic" refers to a logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics.

Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved.
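
The conditionalization norm mentioned above can be made concrete with a short Bayes' rule computation: a piece of evidence E confirms a theory T exactly when conditioning on E raises T's probability. The numbers in the sketch below are made up purely for illustration.

```python
def conditionalize(prior, likelihood, likelihood_alt):
    """Posterior P(T | E) by Bayes' rule, where `likelihood` is P(E | T) and
    `likelihood_alt` is P(E | not T)."""
    numerator = likelihood * prior
    return numerator / (numerator + likelihood_alt * (1.0 - prior))

# Evidence that is three times as likely under the theory as under its negation.
prior = 0.2
posterior = conditionalize(prior, likelihood=0.9, likelihood_alt=0.3)
print(posterior > prior, round(posterior, 3))  # True 0.429
```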

References

  1. James Franklin, The Science of Conjecture: Evidence and Probability before Pascal, The Johns Hopkins Press, 2001. ISBN 0-8018-7109-3.
  2. Nilsson, N. J., 1986, "Probabilistic logic," Artificial Intelligence 28(1): 71–87.
  3. Jøsang, A., Subjective Logic: A Formalism for Reasoning Under Uncertainty, Springer Verlag, 2016.
  4. Jøsang, A. and McAnally, D., 2004, "Multiplication and Comultiplication of Beliefs," International Journal of Approximate Reasoning 38(1): 19–51.
  5. Jøsang, A., 2008, "Conditional Reasoning with Subjective Logic," Journal of Multiple-Valued Logic and Soft Computing 15(1): 5–38.
  6. Jøsang, A., "Generalising Bayes' Theorem in Subjective Logic," 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2016), Baden-Baden, Germany, 2016.
  7. Gerla, G., 1994, "Inferences in Probability Logic," Artificial Intelligence 70(1–2): 33–52.
  8. Riveret, R.; Baroni, P.; Gao, Y.; Governatori, G.; Rotolo, A.; Sartor, G., 2018, "A Labelling Framework for Probabilistic Argumentation," Annals of Mathematics and Artificial Intelligence 83: 221–287.
  9. Kohlas, J., and Monney, P. A., 1995, A Mathematical Theory of Hints: An Approach to the Dempster–Shafer Theory of Evidence, Vol. 425 in Lecture Notes in Economics and Mathematical Systems, Springer Verlag.
  10. Haenni, R., 2005, "Towards a Unifying Theory of Logical and Probabilistic Reasoning," ISIPTA'05, 4th International Symposium on Imprecise Probabilities and Their Applications: 193–202. Archived from the original (PDF) on 2006-06-18.
  11. Ruspini, E. H., Lowrance, J., and Strat, T., 1992, "Understanding evidential reasoning," International Journal of Approximate Reasoning 6(3): 401–424.

Further reading