Fully probabilistic design

Decision making (DM) can be seen as the purposeful choice of action sequences; it also covers control, the purposeful choice of input sequences. As a rule, it runs under randomness, uncertainty and incomplete knowledge. A range of prescriptive theories have been proposed for making optimal decisions under these conditions. They optimise a sequence of decision rules, i.e. mappings from the available knowledge to the possible actions; this sequence is called a strategy or policy. Among these theories, Bayesian DM is a broadly accepted, axiomatically based theory for designing an optimal decision strategy. It describes random, uncertain or incompletely known quantities as random variables, i.e. by their joint probability, which expresses belief in their possible values. The strategy that minimises the expected loss (or, equivalently, maximises the expected reward) expressing the decision-maker's goals is then taken as optimal. While the probabilistic description of beliefs is uniquely and deductively driven by the rules for joint probabilities, the composition and decomposition of the loss function have no such universally applicable formal machinery.
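
As a minimal sketch (the symbols below are chosen here for exposition and are not fixed by any cited source), write b for the closed-loop behaviour, f(b|S) for its joint probability under a strategy S, and L for the loss; the Bayesian-optimal strategy is then

```latex
% Illustrative notation: b = closed-loop behaviour, S = strategy,
% L = loss function, f(b|S) = joint probability of b under S.
\[
  S^{\mathrm{opt}} \in \arg\min_{S} \, \mathbb{E}\!\left[ L(b) \right]
                  =   \arg\min_{S} \int L(b)\, f(b \mid S)\, \mathrm{d}b .
\]
```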

Fully probabilistic design (of decision strategies or control, FPD) removes this drawback by expressing the DM goals, too, as an "ideal" probability, which assigns high (low) values to desired (undesired) behaviours of the closed DM loop formed by the influenced part of the world and by the employed strategy. FPD has an axiomatic basis and contains Bayesian DM as a restricted subpart. [1] [2] FPD has a range of theoretical consequences [3] [4] and, importantly, has been successfully applied in quite diverse domains. [5]
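
Concretely (keeping the illustrative notation above), FPD is usually formulated as minimising the Kullback–Leibler divergence of the closed-loop distribution from the ideal one, here an ideal density f^I encoding the DM goals:

```latex
% Sketch of the FPD criterion: minimise the Kullback-Leibler divergence
% of the closed-loop distribution f(.|S) from the ideal distribution f^I.
\[
  S^{\mathrm{FPD}} \in \arg\min_{S}
    D\!\left( f(\cdot \mid S) \,\middle\|\, f^{I} \right)
  = \arg\min_{S} \int f(b \mid S)\,
      \ln \frac{f(b \mid S)}{f^{I}(b)} \,\mathrm{d}b .
\]
```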

Related Research Articles

The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
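
The updating step referred to above is Bayes' theorem itself: for a hypothesis H and evidence E,

```latex
% Bayes' theorem: the posterior is the prior reweighted by the likelihood.
\[
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},
  \qquad \text{i.e.} \quad
  \text{posterior} \propto \text{likelihood} \times \text{prior}.
\]
```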

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
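
A minimal sketch of the disease–symptom example, with invented numbers (the 1% prevalence, 90% sensitivity and 5% false-positive rate are assumptions for illustration only), querying a two-node network Disease → Symptom by exact enumeration:

```python
# Two-node Bayesian network Disease -> Symptom, with illustrative
# (invented) probabilities; the query is answered by exact enumeration.

p_disease = 0.01                      # prior P(D = true)
p_symptom_given = {True: 0.90,        # P(S = true | D = true)
                   False: 0.05}       # P(S = true | D = false)

# The joint probability factorises along the DAG: P(D, S) = P(D) * P(S | D).
joint_true = p_disease * p_symptom_given[True]
joint_false = (1.0 - p_disease) * p_symptom_given[False]

# Posterior P(D = true | S = true), normalising over the observed evidence.
posterior = joint_true / (joint_true + joint_false)
print(f"P(disease | symptom) = {posterior:.3f}")   # ~0.154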

Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.

Decision theory is a branch of applied probability theory concerned with making decisions by assigning probabilities to the relevant factors and numerical consequences to the possible outcomes.

Decision analysis (DA) is the discipline comprising the philosophy, methodology, and professional practice necessary to address important decisions in a formal manner. Decision analysis includes many procedures, methods, and tools for identifying, clearly representing, and formally assessing important aspects of a decision; for prescribing a recommended course of action by applying the maximum expected-utility axiom to a well-formed representation of the decision; and for translating the formal representation of a decision and its corresponding recommendation into insight for the decision maker, and other corporate and non-corporate stakeholders.
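
A minimal sketch of the maximum expected-utility axiom mentioned above, with invented probabilities and utilities: the recommended action is the one whose utility, averaged over the uncertain states, is largest.

```python
# Illustrative (invented) decision problem: pick the action maximising
# expected utility over uncertain future states.

states = {"boom": 0.3, "steady": 0.5, "bust": 0.2}   # P(state)
utility = {                                          # u(action, state)
    "invest": {"boom": 100, "steady": 30, "bust": -80},
    "hold":   {"boom": 10,  "steady": 10, "bust": 10},
}

def expected_utility(action):
    """Average the action's utility over the state distribution."""
    return sum(p * utility[action][s] for s, p in states.items())

best = max(utility, key=expected_utility)
print(best, expected_utility(best))   # invest: 0.3*100 + 0.5*30 + 0.2*(-80) = 29.0
```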

In game theory, a player's strategy is any of the options they can choose in a setting where the outcome depends not only on their own actions but also on the actions of others. The discipline mainly concerns how a player's actions in a game affect the behavior or actions of the other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy and Battleship. A player's strategy determines the action the player will take at any stage of the game. In studying game theory, economists analyse decisions through a more rational lens than the psychological or sociological perspectives that other disciplines apply to the relationships between the decisions of two or more parties.

In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions, payoffs, strategies and "types" of players are thus common knowledge. Complete information is the concept that each player in the game is aware of the sequence, strategies, and payoffs throughout gameplay. Given this information, the players have the ability to plan accordingly based on the information to maximize their own strategies and utility at the end of the game.

In game theory, a Bayesian game is a game that models the outcome of player interactions using aspects of Bayesian probability. Bayesian games are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects, such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."

Formal epistemology uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.

Probabilistic logic involves the use of probability and logic to deal with uncertain situations. Probabilistic logic extends traditional logic truth tables with probabilistic expressions. A difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities they provide, such as defined in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals.

The transferable belief model (TBM) is an elaboration on the Dempster–Shafer theory (DST), a mathematical model used to evaluate the probability that a given proposition is true from other propositions that are assigned probabilities. It was developed by Philippe Smets, who proposed his approach as a response to Zadeh's example against Dempster's rule of combination. In contrast to the original DST, the TBM adopts the open-world assumption, which relaxes the requirement that all possible outcomes be known. Under the open-world assumption, Dempster's rule of combination is adapted so that there is no normalization. The underlying idea is that the probability mass pertaining to the empty set indicates an unexpected outcome, e.g. belief in a hypothesis outside the frame of discernment. This adaptation violates the probabilistic character of the original DST and also of Bayesian inference. Therefore, the authors replaced notions such as probability masses and probability updates with terms such as degrees of belief and transfer, giving rise to the name of the method: the transferable belief model.
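
A minimal sketch of the TBM's unnormalised conjunctive combination, using Zadeh-style numbers (the hypotheses and masses are invented for illustration); note that, unlike Dempster's original rule, the conflicting mass remains on the empty set:

```python
# Unnormalised (TBM-style) conjunctive combination of two mass functions,
# each represented as {frozenset of hypotheses: mass}.

from itertools import product

def tbm_combine(m1, m2):
    """Combine two mass functions without normalisation (open world)."""
    combined = {}
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        key = a & b                      # set intersection; may be empty
        combined[key] = combined.get(key, 0.0) + w1 * w2
    return combined

# Zadeh-style example: the sources agree only on an unlikely hypothesis.
m1 = {frozenset({"A"}): 0.99, frozenset({"C"}): 0.01}
m2 = {frozenset({"B"}): 0.99, frozenset({"C"}): 0.01}
result = tbm_combine(m1, m2)
print(result[frozenset()])               # ~0.9999: conflict stays on the empty set
print(result[frozenset({"C"})])          # ~0.0001: the only surviving hypothesis
```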

Jean-François Mertens (1946–2012) was a Belgian game theorist and mathematical economist.

A construction by Jean-François Mertens and Shmuel Zamir implements John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes the player's feasible strategies and payoffs, as well as a probability distribution over the other players' types.

Stochastic scheduling concerns scheduling problems involving random attributes, such as random processing times, random due dates, random weights, and stochastic machine breakdowns. Major applications arise in manufacturing systems, computer systems, communication systems, logistics and transportation, and machine learning, among others.

Probabilistic numerics is a scientific field at the intersection of statistics, machine learning and applied mathematics, where tasks in numerical analysis including finding numerical solutions for integration, linear algebra, optimisation and differential equations are seen as problems of statistical, probabilistic, or Bayesian inference.
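
A crude sketch of this viewpoint: the unknown value of an integral is treated as an uncertain quantity, and the numerical routine returns a distribution over it (here a Monte Carlo mean with its standard error) rather than a bare point estimate. Full probabilistic numerics would typically place, e.g., a Gaussian-process prior on the integrand; this simplification only conveys the framing.

```python
# Estimate an integral while quantifying the uncertainty of the estimate:
# a simplified stand-in for the probabilistic-numerics viewpoint.

import math
import random

def estimate_integral(f, n=10_000, seed=0):
    """Estimate the integral of f over [0, 1] and its standard error."""
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)      # estimate and its standard error

mu, se = estimate_integral(math.exp)     # true value: e - 1 ~ 1.71828
print(f"{mu:.4f} +/- {se:.4f}")
```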

References

  1. Kárný, Miroslav; Kroupa, Tomáš (2012). "Axiomatisation of fully probabilistic design". Information Sciences. 186 (1): 105–113. doi:10.1016/j.ins.2011.09.018.
  2. "Fully Probabilistic Design of Dynamic Decision Strategies for Imperfect Participants in Market Scenarios". Institute of Information Theory and Automation. Retrieved 2014-09-01.
  3. Kárný, Miroslav; Guy, Tatiana V. (2006). "Fully probabilistic control design". Systems and Control Letters. 55 (4): 259–265. doi:10.1016/j.sysconle.2005.08.001.
  4. Kárný, Miroslav; Guy, Tatiana V. (2014). "On the Origins of Imperfection and Apparent Non-Rationality". Springer.
  5. Quinn, Anthony; Ettler, Pavel; Jirsa, Ladislav; Nagy, Ivan; Nedoma, Petr (2003). "Probabilistic advisory systems for data-intensive applications". International Journal of Adaptive Control and Signal Processing. 17 (2): 133–148. doi:10.1002/acs.743. S2CID 57215238.
