The false confidence theorem [1] suggests that an analyst should not be willing to assign a high degree of belief to an assertion on the basis of low-quality data. High belief here would be unjustified; the confidence placed in the assertion could be said to be false.
Avoiding false confidence requires that assigning a low degree of belief to an assertion does not, by itself, imply a high degree of belief in its alternative. Additive measures of belief such as probability do not protect an analyst from making such an error, owing to probability dilution. [2] This also affects applications such as Bayesian model calibration, where aleatory and epistemic uncertainty are modelled using additive measures of belief. [3]
The motivating example is that of predicting the trajectories of two satellites to determine whether they will collide. [1] Action must be taken to avoid a collision if one is likely; otherwise the satellites can be left to pass at a safe distance. A collision is predicted if the passing distance is smaller than the combined radius of the two satellites.
False confidence is most apparent when the data available for predicting the trajectory of each satellite is noisy or limited in quantity. This produces a predictive distribution for the passing distance with higher variance, which, for a given true passing distance, reduces the belief assigned to a collision. However, since probability is additive, the beliefs assigned to collision and non-collision must sum to 1, so low-quality data also increases the belief assigned to non-collision.
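A minimal numerical sketch of this dilution effect, assuming a Gaussian predictive distribution for the passing distance (the variable names, the combined radius of 1, and the noise levels are illustrative choices, not values from the cited work):

```python
import numpy as np
from scipy.stats import norm

# Illustrative one-dimensional collision problem: the predictive distribution
# for the passing (miss) distance is Gaussian, centred on the true passing
# distance, with standard deviation sigma reflecting data quality.
true_miss = 0.0          # hypothetical true passing distance (a collision course)
combined_radius = 1.0    # hypothetical combined radius of the two satellites

for sigma in [0.5, 2.0, 10.0, 50.0]:   # noisier data -> larger sigma
    # Probability that the passing distance falls inside the collision zone
    p_collision = (norm.cdf(combined_radius, loc=true_miss, scale=sigma)
                   - norm.cdf(-combined_radius, loc=true_miss, scale=sigma))
    # Additivity forces the remaining belief onto "no collision"
    p_no_collision = 1.0 - p_collision
    print(f"sigma={sigma:5.1f}  P(collision)={p_collision:.3f}  "
          f"P(no collision)={p_no_collision:.3f}")
```

As sigma grows, the probability assigned to a collision shrinks and, by additivity, the probability assigned to a safe pass grows towards 1, even though the assumed true passing distance is zero.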
Using probability as a measure of belief assumes that the passing distance is indeed a random variable. If this were true, it would be justified to conclude that the increased variance supports the prediction of non-collision. However, the variance here stems from epistemic uncertainty, and this does not physically affect the trajectory of the satellite. Noisier data does not mean that the satellite has more capacity to change trajectory.
A method of belief assignment for an assertion regarding an unknown value can be said to be free from false confidence if it satisfies the Martin-Liu validity criterion. [4]
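One way of stating the criterion, writing $\theta$ for the unknown value, $X$ for the data, $A$ for any assertion that does not contain $\theta$, and $b_X(A)$ for the belief assigned to $A$ on the basis of $X$, is

$$ \sup_{\theta \notin A} \mathsf{P}_{X \mid \theta}\!\left( b_X(A) \geq 1 - \alpha \right) \leq \alpha \qquad \text{for all } \alpha \in (0, 1]. $$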
This explicitly limits the probability of assigning a high degree of belief to false assertions.
Non-additive measures of belief are required to protect against false confidence. Inferential models are constructed specifically to satisfy the Martin-Liu validity criterion. [5] Consonant confidence structures also satisfy this criterion, [6] as do confidence distributions, so long as the assertion of interest is an interval. [7]
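As a rough illustration of how a non-additive, consonant belief assignment behaves in the collision example, the sketch below builds a possibility contour from the two-sided p-value of a Gaussian estimate of the passing distance. This is only a schematic of the general idea; the function names, grid, and parameter values are invented for illustration and are not taken from the cited constructions.

```python
import numpy as np
from scipy.stats import norm

def plausibility_contour(d, x_obs, sigma):
    """Possibility contour for the true passing distance d, built from the
    two-sided p-value of a Gaussian estimate x_obs with standard deviation sigma."""
    c = norm.cdf(d, loc=x_obs, scale=sigma)   # confidence distribution evaluated at d
    return 1.0 - np.abs(2.0 * c - 1.0)        # peaks at 1 where d equals x_obs

def belief(assertion_mask, grid, x_obs, sigma):
    """Consonant belief in an assertion: one minus the peak plausibility of its complement."""
    pl = plausibility_contour(grid, x_obs, sigma)
    return 1.0 - pl[~assertion_mask].max()

grid = np.linspace(-200.0, 200.0, 400001)     # candidate passing distances
combined_radius = 1.0                         # illustrative combined radius
collision = np.abs(grid) < combined_radius    # assertion: the satellites collide

x_obs = 0.0                                   # estimate puts the satellites on a collision course
for sigma in [0.5, 2.0, 10.0, 50.0]:          # noisier data -> larger sigma
    bel_col = belief(collision, grid, x_obs, sigma)
    bel_miss = belief(~collision, grid, x_obs, sigma)
    print(f"sigma={sigma:5.1f}  Bel(collision)={bel_col:.3f}  "
          f"Bel(no collision)={bel_miss:.3f}")
```

Unlike the additive case, noisy data drives the belief in a collision towards zero without inflating the belief in a safe pass; the remaining weight is simply left uncommitted between the two assertions.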