Bayesian programming

Bayesian programming is a formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available.

Edwin T. Jaynes proposed that probability could be considered as an alternative and an extension of logic for rational reasoning with incomplete and uncertain information. In his foundational book Probability Theory: The Logic of Science [1] he developed this theory and proposed what he called "the robot," which was not a physical device, but an inference engine to automate probabilistic reasoning: a kind of Prolog for probability instead of logic. Bayesian programming [2] is a formal and concrete implementation of this "robot".

Bayesian programming may also be seen as an algebraic formalism to specify graphical models such as, for instance, Bayesian networks, dynamic Bayesian networks, Kalman filters or hidden Markov models. Indeed, Bayesian Programming is more general than Bayesian networks and has a power of expression equivalent to probabilistic factor graphs. [3]

Formalism

A Bayesian program is a means of specifying a family of probability distributions.

The constituent elements of a Bayesian program are presented below: [4]

  1. A program is constructed from a description and a question.
  2. A description is constructed using some specification (π) as given by the programmer and an identification or learning process for the parameters not completely specified by the specification, using a data set (δ).
  3. A specification is constructed from a set of pertinent variables, a decomposition and a set of forms.
  4. Forms are either parametric forms or questions to other Bayesian programs.
  5. A question specifies which probability distribution has to be computed.

Description

The purpose of a description is to specify an effective method of computing a joint probability distribution on a set of variables {X_1, X_2, …, X_N} given a set of experimental data δ and some specification π. This joint distribution is denoted as: P(X_1 ∧ X_2 ∧ ⋯ ∧ X_N | δ ∧ π). [5]

To specify preliminary knowledge π, the programmer must undertake the following:

  1. Define the set of relevant variables {X_1, …, X_N} on which the joint distribution is defined.
  2. Decompose the joint distribution (break it into relevant independent or conditional probabilities).
  3. Define the forms of each of these distributions (e.g., for each variable, one of the list of standard probability distributions).

Decomposition

Given a partition of {X_1, …, X_N} containing K subsets, K variables L_1, …, L_K are defined, each corresponding to one of these subsets. Each variable L_k is obtained as the conjunction of the variables belonging to the k-th subset. Recursive application of Bayes' theorem leads to:

P(X_1 ∧ ⋯ ∧ X_N | δ ∧ π) = P(L_1 | δ ∧ π) × P(L_2 | L_1 ∧ δ ∧ π) × ⋯ × P(L_K | L_{K−1} ∧ ⋯ ∧ L_1 ∧ δ ∧ π)

Conditional independence hypotheses then allow further simplifications. A conditional independence hypothesis for variable L_k is defined by choosing some variables among the variables appearing in the conjunction L_{k−1} ∧ ⋯ ∧ L_1, labelling R_k as the conjunction of these chosen variables and setting:

P(L_k | L_{k−1} ∧ ⋯ ∧ L_1 ∧ δ ∧ π) = P(L_k | R_k ∧ δ ∧ π)

We then obtain:

P(X_1 ∧ ⋯ ∧ X_N | δ ∧ π) = ∏_{k=1}^{K} P(L_k | R_k ∧ δ ∧ π)

Such a simplification of the joint distribution as a product of simpler distributions is called a decomposition, derived using the chain rule.

This ensures that each variable appears at most once on the left of a conditioning bar, which is the necessary and sufficient condition for writing mathematically valid decompositions.[ citation needed ]
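As a concrete illustration (not from the source; all numbers are made up), a decomposition with one conditional independence hypothesis can be checked numerically: with P(A ∧ B ∧ C) = P(A) × P(B | A) × P(C | B), i.e. C assumed independent of A given B, the product of the three simpler tables still defines a valid joint distribution.

```python
from itertools import product

p_a = {0: 0.3, 1: 0.7}                      # P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1},          # P(B | A)
               1: {0: 0.4, 1: 0.6}}
p_c_given_b = {0: {0: 0.5, 1: 0.5},          # P(C | B): A is dropped by hypothesis
               1: {0: 0.2, 1: 0.8}}

# Joint built as the product of the three simpler distributions
joint = {(a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
         for a, b, c in product((0, 1), repeat=3)}

# A valid decomposition still defines a proper joint distribution:
assert abs(sum(joint.values()) - 1.0) < 1e-12
```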

Forms

Each distribution P(L_k | R_k ∧ δ ∧ π) appearing in the product is then associated with either a parametric form (i.e., a function f_μ(L_k)) or a question to another Bayesian program.

When it is a form f_μ(L_k), in general μ is a vector of parameters that may depend on R_k or δ or both. Learning takes place when some of these parameters are computed using the data set δ.

An important feature of Bayesian programming is this capacity to use questions to other Bayesian programs as components of the definition of a new Bayesian program. P(L_k | R_k ∧ δ ∧ π) is then obtained by inferences done by another Bayesian program defined by its own specification and data. This is similar to calling a subroutine in classical programming and provides an easy way to build hierarchical models.

Question

Given a description (i.e., P(X_1 ∧ ⋯ ∧ X_N | δ ∧ π)), a question is obtained by partitioning {X_1, …, X_N} into three sets: the searched variables, the known variables and the free variables.

The three variables Searched, Known and Free are defined as the conjunction of the variables belonging to these sets.

A question is defined as the set of distributions:

P(Searched | Known ∧ δ ∧ π)

made of as many "instantiated questions" as the cardinality of Known, each instantiated question being the distribution:

P(Searched | Known = known ∧ δ ∧ π)

Inference

Given the joint distribution P(Searched ∧ Known ∧ Free | δ ∧ π), it is always possible to compute any possible question using the following general inference:

P(Searched | Known ∧ δ ∧ π)
  = Σ_Free P(Searched ∧ Free | Known ∧ δ ∧ π)
  = Σ_Free P(Searched ∧ Free ∧ Known | δ ∧ π) / P(Known | δ ∧ π)
  = Σ_Free P(Searched ∧ Free ∧ Known | δ ∧ π) / Σ_{Searched, Free} P(Searched ∧ Free ∧ Known | δ ∧ π)

where the first equality results from the marginalization rule, the second results from Bayes' theorem and the third corresponds to a second application of marginalization. The denominator is a normalization term and can be replaced by a constant Z.

Theoretically, this allows any Bayesian inference problem to be solved. In practice, however, the cost of computing P(Searched | Known ∧ δ ∧ π) exhaustively and exactly is too great in almost all cases.

Replacing the joint distribution by its decomposition, we get:

P(Searched | Known ∧ δ ∧ π) = (1/Z) × Σ_Free ∏_{k=1}^{K} P(L_k | R_k ∧ δ ∧ π)

which is usually a much simpler expression to compute, as the dimensionality of the problem is considerably reduced by the decomposition into a product of lower dimension distributions.
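The general inference rule above can be sketched as a brute-force routine over a small joint table. A minimal sketch, not an efficient implementation; the variable indices and the example numbers are illustrative, not from the source.

```python
def infer(joint, searched, known):
    """Brute-force P(Searched | Known): joint maps value tuples to
    probabilities, searched is the index of the searched variable, and
    known maps variable indices to their observed values; all remaining
    indices are free and get summed out."""
    scores = {}
    for assignment, p in joint.items():
        if any(assignment[i] != v for i, v in known.items()):
            continue                          # inconsistent with Known
        s = assignment[searched]
        scores[s] = scores.get(s, 0.0) + p    # marginalize the free variables
    z = sum(scores.values())                  # the normalization constant Z
    return {s: p / z for s, p in scores.items()}

# Tiny illustrative joint over (Spam, Word): what is P(Spam | Word = 1)?
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
posterior = infer(joint, searched=0, known={1: 1})
```

The cost of this routine is exponential in the number of variables, which is exactly why the decomposition into a product of lower-dimensional distributions matters in practice.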

Example

Bayesian spam detection

The purpose of Bayesian spam filtering is to eliminate junk e-mails.

The problem is very easy to formulate. E-mails should be classified into one of two categories: non-spam or spam. The only available information to classify the e-mails is their content: a set of words. Using these words without taking the order into account is commonly called a bag of words model.

The classifier should furthermore be able to adapt to its user and to learn from experience. Starting from an initial standard setting, the classifier should modify its internal parameters when the user disagrees with its decision. It will hence adapt to the user's criteria to differentiate between non-spam and spam. It will improve its results as it encounters more and more classified e-mails.

Variables

The variables necessary to write this program are as follows:

  1. Spam: a binary variable, false if the e-mail is not spam and true otherwise.
  2. W_0, W_1, …, W_{N−1}: N binary variables, where N is the size of the dictionary. W_n is true if the n-th word of the dictionary is present in the text.

These binary variables sum up all the information about an e-mail.

Decomposition

Starting from the joint distribution P(Spam ∧ W_0 ∧ ⋯ ∧ W_{N−1}) and applying Bayes' theorem recursively, we obtain:

P(Spam ∧ W_0 ∧ ⋯ ∧ W_{N−1}) = P(Spam) × P(W_0 | Spam) × P(W_1 | W_0 ∧ Spam) × ⋯ × P(W_{N−1} | W_{N−2} ∧ ⋯ ∧ W_0 ∧ Spam)

This is an exact mathematical expression.

It can be drastically simplified by assuming that the probability of appearance of a word knowing the nature of the text (spam or not) is independent of the appearance of the other words. This is the naive Bayes assumption and this makes this spam filter a naive Bayes model.

For instance, the programmer can assume that:

P(W_n | W_{n−1} ∧ ⋯ ∧ W_0 ∧ Spam) = P(W_n | Spam)

to finally obtain:

P(Spam ∧ W_0 ∧ ⋯ ∧ W_{N−1}) = P(Spam) × ∏_{n=0}^{N−1} P(W_n | Spam)

This kind of assumption is known as the naive Bayes assumption. It is "naive" in the sense that the independence between words is clearly not completely true. For instance, it completely neglects that the appearance of pairs of words may be more significant than isolated appearances. However, the programmer may assume this hypothesis and may develop the model and the associated inferences to test how reliable and efficient it is.
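One way to see what the hypothesis buys, sketched here with illustrative arithmetic: the number of free parameters needed to specify the model drops from exponential to linear in the dictionary size N.

```python
def exact_params(n_words):
    # full joint table over Spam plus n_words binary variables:
    # 2^(N+1) entries, minus one for the sum-to-one constraint
    return 2 ** (n_words + 1) - 1

def naive_params(n_words):
    # P(Spam) plus P(W_n | Spam) for each word and each value of Spam
    return 1 + 2 * n_words

# even a ten-word dictionary shows the gap:
assert exact_params(10) == 2047
assert naive_params(10) == 21
```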

Parametric forms

To be able to compute the joint distribution, the programmer must now specify the distributions appearing in the decomposition:

  1. P(Spam) is a prior defined, for instance, by a uniform distribution: P([Spam = true]) = P([Spam = false]) = 0.5.
  2. Each of the N forms P(W_n | Spam) may be specified using Laplace's rule of succession (a pseudocount-based smoothing technique to counter the zero-frequency problem of words never seen before):

P([W_n = true] | [Spam = false]) = (1 + a_f^n) / (2 + a_f)
P([W_n = true] | [Spam = true]) = (1 + a_t^n) / (2 + a_t)

where a_f^n stands for the number of appearances of the n-th word in non-spam e-mails and a_f stands for the total number of non-spam e-mails. Similarly, a_t^n stands for the number of appearances of the n-th word in spam e-mails and a_t stands for the total number of spam e-mails.
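The smoothing rule above can be sketched in a couple of lines; the counts below are illustrative.

```python
def laplace(word_count, email_count):
    """Laplace rule of succession: one pseudocount on each of the two
    outcomes (word present / word absent)."""
    return (1 + word_count) / (2 + email_count)

# a word seen in 3 of 8 e-mails of a class: (1 + 3) / (2 + 8) = 0.4
p = laplace(3, 8)

# a word never seen in a class still gets a non-zero probability,
# which is the whole point of the smoothing:
assert laplace(0, 100) > 0
```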

Identification

The forms are not yet completely specified because the parameters a_f^n, a_f, a_t^n and a_t (for n = 0, …, N − 1) have no values yet.

The identification of these parameters could be done either by batch processing a series of classified e-mails or by an incremental updating of the parameters using the user's classifications of the e-mails as they arrive.

Both methods could be combined: the system could start with initial standard values of these parameters issued from a generic database, then some incremental learning customizes the classifier to each individual user.
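Either way, identification reduces to maintaining the four kinds of counters behind the Laplace forms, which can be updated one e-mail at a time. A minimal sketch; the class and field names are assumptions, not from the source.

```python
class Counts:
    """Running counters for incremental identification of the spam filter."""
    def __init__(self, n_words):
        self.spam_emails = 0
        self.ham_emails = 0
        self.spam_word = [0] * n_words   # appearances of word n in spam
        self.ham_word = [0] * n_words    # appearances of word n in non-spam

    def update(self, present, is_spam):
        """present: booleans, one per dictionary word; is_spam: the user's
        (re)classification of the e-mail."""
        if is_spam:
            self.spam_emails += 1
            words = self.spam_word
        else:
            self.ham_emails += 1
            words = self.ham_word
        for n, w in enumerate(present):
            if w:
                words[n] += 1
```

Batch processing is then just a loop of `update` calls over a classified corpus, and customization to a user continues the same loop on the user's own decisions.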

Question

The question asked to the program is: "what is the probability for a given text to be spam, knowing which words appear and which do not appear in this text?" It can be formalized as:

P(Spam | w_0 ∧ ⋯ ∧ w_{N−1})

which can be computed as follows:

P(Spam | w_0 ∧ ⋯ ∧ w_{N−1}) = [ P(Spam) × ∏_{n=0}^{N−1} P(w_n | Spam) ] / [ Σ_Spam P(Spam) × ∏_{n=0}^{N−1} P(w_n | Spam) ]

The denominator is a normalization constant. It is not necessary to compute it to decide if we are dealing with spam. For instance, an easy trick is to compute the ratio:

P([Spam = true] | w_0 ∧ ⋯ ∧ w_{N−1}) / P([Spam = false] | w_0 ∧ ⋯ ∧ w_{N−1}) = [ P([Spam = true]) / P([Spam = false]) ] × ∏_{n=0}^{N−1} [ P(w_n | [Spam = true]) / P(w_n | [Spam = false]) ]

This computation is faster and easier because it requires only the product of N + 1 ratios and avoids the normalization sum.
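The ratio trick can be sketched as follows, assuming the word probability tables have already been identified; all names and numbers are illustrative.

```python
def spam_ratio(present, p_spam, p_w_spam, p_w_ham):
    """Ratio P(Spam=true | words) / P(Spam=false | words) under the naive
    decomposition. present: booleans, one per dictionary word;
    p_w_spam[n] / p_w_ham[n]: P(word n present | spam / non-spam)."""
    ratio = p_spam / (1.0 - p_spam)          # prior ratio
    for n, w in enumerate(present):
        num = p_w_spam[n] if w else 1.0 - p_w_spam[n]
        den = p_w_ham[n] if w else 1.0 - p_w_ham[n]
        ratio *= num / den                   # one ratio per word
    return ratio                             # > 1: "spam" is more probable

# one-word dictionary, uniform prior: word present and 4x likelier in spam
r = spam_ratio([True], 0.5, [0.8], [0.2])
```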

Bayesian program

The Bayesian spam filter program is completely defined by the variables, the decomposition, the parametric forms, the identification process and the question specified above.

Bayesian filter, Kalman filter and hidden Markov model

Bayesian filters (often called recursive Bayesian estimation) are generic probabilistic models for time-evolving processes. Numerous models are particular instances of this generic approach, for instance the Kalman filter or the hidden Markov model (HMM).

Variables

  • Variables S^0, …, S^T are a time series of state variables considered to be on a time horizon ranging from 0 to T.
  • Variables O^0, …, O^T are a time series of observation variables on the same horizon.

Decomposition

The decomposition is based:

  • on P(S^t | S^{t−1}), called the system model, transition model or dynamic model, which formalizes the transition from the state at time t − 1 to the state at time t;
  • on P(O^t | S^t), called the observation model, which expresses what can be observed at time t when the system is in state S^t;
  • on an initial state at time 0: P(S^0 ∧ O^0).

The joint distribution then decomposes as:

P(S^0 ∧ ⋯ ∧ S^T ∧ O^0 ∧ ⋯ ∧ O^T) = P(S^0 ∧ O^0) × ∏_{t=1}^{T} [ P(S^t | S^{t−1}) × P(O^t | S^t) ]

Parametric forms

The parametric forms are not constrained and different choices lead to different well-known models: see Kalman filters and hidden Markov models just below.

Question

The typical question for such models is: what is the probability distribution for the state at time t + k, knowing the observations from instant 0 to t, i.e. P(S^{t+k} | O^0 ∧ ⋯ ∧ O^t)?

The most common case is Bayesian filtering, where k = 0: it searches for the present state, knowing past observations.

However, it is also possible to do prediction (k > 0), to extrapolate a future state from past observations, or to do smoothing (k < 0), to recover a past state from observations made either before or after that instant.

More complicated questions may also be asked as shown below in the HMM section.

Bayesian filters have a very interesting recursive property, which contributes greatly to their attractiveness: P(S^t | O^0 ∧ ⋯ ∧ O^t) may be computed simply from P(S^{t−1} | O^0 ∧ ⋯ ∧ O^{t−1}) with the following formula:

P(S^t | O^0 ∧ ⋯ ∧ O^t) ∝ P(O^t | S^t) × Σ_{S^{t−1}} [ P(S^t | S^{t−1}) × P(S^{t−1} | O^0 ∧ ⋯ ∧ O^{t−1}) ]

Another interesting point of view for this equation is to consider that there are two phases: a prediction phase and an estimation phase:

  • During the prediction phase, the state is predicted using the dynamic model and the estimation of the state at the previous moment:

    P(S^t | O^0 ∧ ⋯ ∧ O^{t−1}) = Σ_{S^{t−1}} [ P(S^t | S^{t−1}) × P(S^{t−1} | O^0 ∧ ⋯ ∧ O^{t−1}) ]

  • During the estimation phase, the prediction is either confirmed or invalidated using the last observation:

    P(S^t | O^0 ∧ ⋯ ∧ O^t) ∝ P(O^t | S^t) × P(S^t | O^0 ∧ ⋯ ∧ O^{t−1})
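The two phases can be sketched for discrete states; a minimal sketch, with illustrative transition and observation matrices.

```python
def filter_step(belief, trans, obs, observation):
    """One prediction/estimation cycle of a discrete Bayesian filter.
    belief[i] = P(S^{t-1} = i | past observations);
    trans[i][j] = P(S^t = j | S^{t-1} = i); obs[i][o] = P(O^t = o | S^t = i)."""
    n = len(belief)
    # prediction phase: push the previous belief through the dynamic model
    predicted = [sum(belief[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    # estimation phase: weight the prediction by the last observation
    weighted = [predicted[j] * obs[j][observation] for j in range(n)]
    z = sum(weighted)                       # normalization constant
    return [w / z for w in weighted]

# two states, sticky dynamics, fairly reliable sensor (illustrative numbers)
belief = filter_step([0.5, 0.5],
                     [[0.9, 0.1], [0.1, 0.9]],
                     [[0.8, 0.2], [0.2, 0.8]],
                     observation=0)
```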

Bayesian program

Kalman filter

The very well-known Kalman filters [6] are a special case of Bayesian filters.

They are defined by the following Bayesian program:

  • Variables S^0, …, S^T and O^0, …, O^T are continuous.
  • The transition model P(S^t | S^{t−1}) and the observation model P(O^t | S^t) are both specified using Gaussian distributions with means that are linear functions of the conditioning variables.

With these hypotheses and by using the recursive formula, it is possible to solve the inference problem analytically to answer the usual question. This leads to an extremely efficient algorithm, which explains the popularity of Kalman filters and the number of their everyday applications.
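Under these hypotheses, one filtering cycle reduces to closed-form updates of a mean and a variance. A minimal one-dimensional sketch, assuming identity dynamics; the noise variances q and r and the data are illustrative.

```python
def kalman_step(mean, var, z, q=1.0, r=1.0):
    """One predict/update cycle of a 1-D Kalman filter with identity
    dynamics: transition noise variance q, observation noise variance r,
    observation z. Returns the posterior (mean, variance)."""
    # prediction phase: the state carries over, uncertainty grows by q
    mean_p, var_p = mean, var + q
    # estimation phase: fold in the observation z
    k = var_p / (var_p + r)                 # Kalman gain
    return mean_p + k * (z - mean_p), (1 - k) * var_p

# start at mean 0 with variance 1, then observe z = 2
m, v = kalman_step(0.0, 1.0, 2.0)
```

The update is exact, not approximate, which is what makes the Gaussian-linear special case so cheap compared to the general sum over states.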

When there are no obvious linear transition and observation models, it is still often possible, using a first-order Taylor expansion, to treat these models as locally linear. This generalization is commonly called the extended Kalman filter.

Hidden Markov model

Hidden Markov models (HMMs) are another very popular specialization of Bayesian filters.

They are defined by the following Bayesian program:

  • Variables are treated as being discrete.
  • The transition model P(S^t | S^{t−1}) and the observation model P(O^t | S^t) are both specified using probability matrices.
  • The question most frequently asked of HMMs is: what is the most probable series of states that leads to the present state, knowing the past observations?

This particular question may be answered with a specific and very efficient algorithm called the Viterbi algorithm.
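A compact sketch of the Viterbi recursion for discrete states; the model matrices and the observation sequence below are illustrative assumptions.

```python
def viterbi(obs_seq, init, trans, obs):
    """Most probable state sequence for the observations. init[i] = P(S^0=i),
    trans[i][j] = P(S^t=j | S^{t-1}=i), obs[i][o] = P(O^t=o | S^t=i)."""
    n = len(init)
    # delta[i]: probability of the best path ending in state i
    delta = [init[i] * obs[i][obs_seq[0]] for i in range(n)]
    back = []                                # backpointers, one list per step
    for o in obs_seq[1:]:
        prev = delta
        step = [max(range(n), key=lambda i: prev[i] * trans[i][j])
                for j in range(n)]
        delta = [prev[step[j]] * trans[step[j]][j] * obs[j][o]
                 for j in range(n)]
        back.append(step)
    best = max(range(n), key=lambda j: delta[j])
    path = [best]
    for step in reversed(back):              # walk the backpointers
        path.append(step[path[-1]])
    return path[::-1]

# sticky two-state chain with a fairly reliable sensor (illustrative)
path = viterbi([0, 0, 1], [0.5, 0.5],
               [[0.9, 0.1], [0.1, 0.9]],
               [[0.8, 0.2], [0.2, 0.8]])
```

The maximization plays the role of the sum in the filtering recursion: instead of marginalizing over the previous state, Viterbi keeps only the best predecessor.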

The Baum–Welch algorithm, which estimates the transition and observation matrices from observation sequences by expectation-maximization, has been developed for HMMs.

Applications

Academic applications

Since 2000, Bayesian programming has been used to develop both robotics applications and life sciences models. [7]

Robotics

In robotics, Bayesian programming was applied to autonomous robotics, [8] [9] [10] [11] [12] robotic CAD systems, [13] advanced driver-assistance systems, [14] robotic arm control, mobile robotics, [15] [16] human-robot interaction, [17] human-vehicle interaction (Bayesian autonomous driver models), [18] [19] [20] [21] [22] video game avatar programming and training [23] and real-time strategy games (AI). [24]

Life sciences

In life sciences, Bayesian programming was used in vision to reconstruct shape from motion, [25] to model visuo-vestibular interaction [26] and to study saccadic eye movements; [27] in speech perception and control to study early speech acquisition [28] and the emergence of articulatory-acoustic systems; [29] and to model handwriting perception and control. [30]

Pattern recognition

Bayesian program learning has potential applications in voice recognition and synthesis, image recognition and natural language processing. It employs the principles of compositionality (building abstract representations from parts), causality (building complexity from parts) and learning to learn (using previously recognized concepts to ease the creation of new concepts). [31]

Possibility theories

The comparison between probabilistic approaches (not only Bayesian programming) and possibility theories continues to be debated.

Possibility theories like, for instance, fuzzy sets, [32] fuzzy logic [33] and possibility theory [34] are alternatives to probability for modeling uncertainty. Their proponents argue that probability is insufficient or inconvenient to model certain aspects of incomplete/uncertain knowledge.

The defense of probability is mainly based on Cox's theorem, which starts from four postulates concerning rational reasoning in the presence of uncertainty. It demonstrates that the only mathematical framework that satisfies these postulates is probability theory. The argument is that any approach other than probability necessarily violates one of these postulates.

Probabilistic programming

The purpose of probabilistic programming is to unify the scope of classical programming languages with probabilistic modeling (especially Bayesian networks) to deal with uncertainty while profiting from the programming languages' expressiveness to encode complexity.

Extended classical programming languages include logical languages as proposed in Probabilistic Horn Abduction, [35] Independent Choice Logic, [36] PRISM, [37] and ProbLog which proposes an extension of Prolog.

There are also extensions of functional programming languages (essentially Lisp and Scheme) such as IBAL or CHURCH. The underlying programming languages can be object-oriented, as in BLOG and FACTORIE, or more standard, as in CES and FIGARO. [38]

The purpose of Bayesian programming is different. Jaynes' precept of "probability as logic" argues that probability is an extension of and an alternative to logic above which a complete theory of rationality, computation and programming can be rebuilt. [1] Bayesian programming attempts to replace classical languages with a programming approach based on probability that considers incompleteness and uncertainty.

The precise comparison between the semantics and power of expression of Bayesian and probabilistic programming is an open question.


References

  1. Jaynes, E. T. (10 April 2003). Probability Theory: The Logic of Science. Cambridge University Press. ISBN 978-1-139-43516-1.
  2. Bessiere, Pierre; Mazer, Emmanuel; Manuel Ahuactzin, Juan; Mekhnacha, Kamel (20 December 2013). Bayesian Programming. CRC Press. ISBN   978-1-4398-8032-6.
  3. "Expression Graphs: Unifying Factor Graphs and Sum-Product Networks" (PDF). bcf.usc.edu.
  4. "Probabilistic Modeling and Bayesian Analysis" (PDF). ocw.mit.edu.
  5. "Bayesian Networks" (PDF). cs.brandeis.edu.
  6. Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering. 82: 33–45. doi:10.1115/1.3662552. S2CID   1242324.
  7. Bessière, Pierre; Laugier, Christian; Siegwart, Roland (15 May 2008). Probabilistic Reasoning and Decision Making in Sensory-Motor Systems. Springer Science & Business Media. ISBN   978-3-540-79006-8.
  8. Lebeltel, O.; Bessière, P.; Diard, J.; Mazer, E. (2004). "Bayesian Robot Programming" (PDF). Advanced Robotics. 16 (1): 49–79. doi:10.1023/b:auro.0000008671.38949.43. S2CID   18768468.
  9. Diard, J.; Gilet, E.; Simonin, E.; Bessière, P. (2010). "Incremental learning of Bayesian sensorimotor models: from low-level behaviours to large-scale structure of the environment" (PDF). Connection Science. 22 (4): 291–312. Bibcode:2010ConSc..22..291D. doi:10.1080/09540091003682561. S2CID   216035458.
  10. Pradalier, C.; Hermosillo, J.; Koike, C.; Braillon, C.; Bessière, P.; Laugier, C. (2005). "The CyCab: a car-like robot navigating autonomously and safely among pedestrians". Robotics and Autonomous Systems. 50 (1): 51–68. CiteSeerX   10.1.1.219.69 . doi:10.1016/j.robot.2004.10.002.
  11. Ferreira, J.; Lobo, J.; Bessière, P.; Castelo-Branco, M.; Dias, J. (2012). "A Bayesian Framework for Active Artificial Perception" (PDF). IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 99 (2): 1–13. doi:10.1109/TSMCB.2012.2214477. PMID   23014760. S2CID   1808051.
  12. Ferreira, J. F.; Dias, J. M. (2014). Probabilistic Approaches to Robotic Perception. Springer. ISBN   978-3-319-02005-1.
  13. Mekhnacha, K.; Mazer, E.; Bessière, P. (2001). "The design and implementation of a Bayesian CAD modeler for robotic applications". Advanced Robotics. 15 (1): 45–69. CiteSeerX   10.1.1.552.3126 . doi:10.1163/156855301750095578. S2CID   7920387.
  14. Coué, C.; Pradalier, C.; Laugier, C.; Fraichard, T.; Bessière, P. (2006). "Bayesian Occupancy Filtering for Multitarget Tracking: an Automotive Application" (PDF). International Journal of Robotics Research. 25 (1): 19–30. doi:10.1177/0278364906061158. S2CID   13874685.
  15. Vasudevan, S.; Siegwart, R. (2008). "Bayesian space conceptualization and place classification for semantic maps in mobile robotics". Robotics and Autonomous Systems. 56 (6): 522–537. CiteSeerX   10.1.1.149.4189 . doi:10.1016/j.robot.2008.03.005.
  16. Perrin, X.; Chavarriaga, R.; Colas, F.; Seigwart, R.; Millan, J. (2010). "Brain-coupled interaction for semi-autonomous navigation of an assistive robot". Robotics and Autonomous Systems. 58 (12): 1246–1255. doi:10.1016/j.robot.2010.05.010.
  17. Rett, J.; Dias, J.; Ahuactzin, J-M. (2010). "Bayesian reasoning for Laban Movement Analysis used in human-machine interaction". International Journal of Reasoning-Based Intelligent Systems. 2 (1): 13–35. CiteSeerX   10.1.1.379.6216 . doi:10.1504/IJRIS.2010.029812.
  18. Möbus, C.; Eilers, M.; Garbe, H.; Zilinski, M. (2009). "Probabilistic and Empirical Grounded Modeling of Agents in (Partial) Cooperative Traffic Scenarios" (PDF). In Duffy, Vincent G. (ed.). Digital Human Modeling. Second International Conference, ICDHM 2009, San Diego, CA, USA. Lecture Notes in Computer Science. Vol. 5620. Springer. pp. 423–432. doi: 10.1007/978-3-642-02809-0_45 . ISBN   978-3-642-02808-3.
  19. Möbus, C.; Eilers, M. (2009). "Further Steps Towards Driver Modeling according to the Bayesian Programming Approach". In Duffy, Vincent G. (ed.). Digital Human Modeling. Second International Conference, ICDHM 2009, San Diego, CA, USA. Lecture Notes in Computer Science. Vol. 5620. Springer. pp. 413–422. CiteSeerX   10.1.1.319.2067 . doi:10.1007/978-3-642-02809-0_44. ISBN   978-3-642-02808-3.
  20. Eilers, M.; Möbus, C. (2010). "Lernen eines modularen Bayesian Autonomous Driver Mixture-of-Behaviors (BAD MoB) Modells" (PDF). In Kolrep, H.; Jürgensohn, Th. (eds.). Fahrermodellierung - Zwischen kinematischen Menschmodellen und dynamisch-kognitiven Verhaltensmodellen. Fortschrittsbericht des VDI in der Reihe 22 (Mensch-Maschine-Systeme). Düsseldorf, Germany: VDI-Verlag. pp. 61–74. ISBN   978-3-18-303222-8.
  21. Eilers, M.; Möbus, C. (2011). "Learning the Relevant Percepts of Modular Hierarchical Bayesian Driver Models Using a Bayesian Information Criterion". In Duffy, V.G. (ed.). Digital Human Modeling. LNCS 6777. Heidelberg, Germany: Springer. pp. 463–472. doi: 10.1007/978-3-642-21799-9_52 . ISBN   978-3-642-21798-2.
  22. Eilers, M.; Möbus, C. (2011). "Learning of a Bayesian Autonomous Driver Mixture-of-Behaviors (BAD-MoB) Model". In Duffy, V.G. (ed.). Advances in Applied Digital Human Modeling. LNCS 6777. Boca Raton, USA: CRC Press, Taylor & Francis Group. pp. 436–445. ISBN   978-1-4398-3511-1.
  23. Le Hy, R.; Arrigoni, A.; Bessière, P.; Lebetel, O. (2004). "Teaching Bayesian Behaviours to Video Game Characters" (PDF). Robotics and Autonomous Systems. 47 (2–3): 177–185. doi:10.1016/j.robot.2004.03.012. S2CID   16415524.
  24. Synnaeve, G. (2012). Bayesian Programming and Learning for Multiplayer Video Games (PDF).
  25. Colas, F.; Droulez, J.; Wexler, M.; Bessière, P. (2008). "A unified probabilistic model of the perception of three-dimensional structure from optic flow". Biological Cybernetics. 97 (5–6): 461–77. CiteSeerX   10.1.1.215.1491 . doi:10.1007/s00422-007-0183-z. PMID   17987312. S2CID   215821150.
  26. Laurens, J.; Droulez, J. (2007). "Bayesian processing of vestibular information". Biological Cybernetics. 96 (4): 389–404. doi:10.1007/s00422-006-0133-1. PMID   17146661. S2CID   18138027.
  27. Colas, F.; Flacher, F.; Tanner, T.; Bessière, P.; Girard, B. (2009). "Bayesian models of eye movement selection with retinotopic maps" (PDF). Biological Cybernetics. 100 (3): 203–214. doi: 10.1007/s00422-009-0292-y . PMID   19212780. S2CID   5906668.
  28. Serkhane, J.; Schwartz, J-L.; Bessière, P. (2005). "Building a talking baby robot A contribution to the study of speech acquisition and evolution" (PDF). Interaction Studies. 6 (2): 253–286. doi:10.1075/is.6.2.06ser.
  29. Moulin-Frier, C.; Laurent, R.; Bessière, P.; Schwartz, J-L.; Diard, J. (2012). "Adverse conditions improve distinguishability of auditory, motor and percep-tuo-motor theories of speech perception: an exploratory Bayesian modeling study" (PDF). Language and Cognitive Processes. 27 (7–8): 1240–1263. doi:10.1080/01690965.2011.645313. S2CID   55504109.
  30. Gilet, E.; Diard, J.; Bessière, P. (2011). Sporns, Olaf (ed.). "Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters". PLOS ONE. 6 (6): e20387. Bibcode:2011PLoSO...620387G. doi: 10.1371/journal.pone.0020387 . PMC   3106017 . PMID   21674043.
  31. "New algorithm helps machines learn as quickly as humans". www.gizmag.com. 2016-01-22. Retrieved 2016-01-23.
  32. Zadeh, L.A. (June 1965). "Fuzzy sets". Information and Control . San Diego. 8 (3): 338–353. doi: 10.1016/S0019-9958(65)90241-X . ISSN   0019-9958. Zbl   0139.24606. Wikidata   Q25938993.
  33. Zadeh, L.A. (September 1975). "Fuzzy logic and approximate reasoning". Synthese . Springer. 30 (3–4): 407–428. doi:10.1007/BF00485052. ISSN   0039-7857. OCLC   714993477. S2CID   46975216. Wikidata   Q57275767.
  34. Dubois, D.; Prade, H. (2001). "Possibility Theory, Probability Theory and Multiple-Valued Logics: A Clarification" (PDF). Ann. Math. Artif. Intell. 32 (1–4): 35–66. doi:10.1023/A:1016740830286. S2CID   10271476.
  35. Poole, D. (1993). "Probabilistic Horn abduction and Bayesian networks". Artificial Intelligence. 64: 81–129. doi:10.1016/0004-3702(93)90061-F.
  36. Poole, D. (1997). "The Independent Choice Logic for modelling multiple agents under uncertainty". Artificial Intelligence. 94 (1–2): 7–56. doi: 10.1016/S0004-3702(97)00027-1 .
  37. Sato, T.; Kameya, Y. (2001). "Parameter learning of logic programs for symbolic-statistical modeling" (PDF). Journal of Artificial Intelligence Research. 15 (2001): 391–454. arXiv: 1106.1797 . Bibcode:2011arXiv1106.1797S. doi:10.1613/jair.912. S2CID   7857569. Archived from the original (PDF) on 2014-07-12. Retrieved 2015-10-18.
  38. figaro on GitHub

Further reading