In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. This inequality provides an upper bound on the probability of occurrence of at least one of a countable number of events in terms of the individual probabilities of the events. Boole's inequality is named for its discoverer, George Boole.[1]
Formally, for a countable set of events A1, A2, A3, ..., we have
\[
\mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i\right) \le \sum_{i=1}^{\infty} \mathbb{P}(A_i).
\]
In measure-theoretic terms, Boole's inequality follows from the fact that a measure (and certainly any probability measure) is σ-sub-additive. Thus Boole's inequality holds not only for probability measures \(\mathbb{P}\), but more generally when \(\mathbb{P}\) is replaced by any finite measure.
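As a numerical illustration (not part of the original statement), the following Python sketch estimates \(\mathbb{P}\big(\bigcup_i A_i\big)\) and \(\sum_i \mathbb{P}(A_i)\) by Monte Carlo for three overlapping interval events; the intervals, seed, and sample size are arbitrary choices made for the example.

```python
import random

# Monte Carlo check of the union bound for three overlapping events
# A_i = "a uniform draw from [0, 1) falls in the interval I_i".
random.seed(0)
intervals = [(0.0, 0.3), (0.2, 0.5), (0.6, 0.7)]  # A_1, A_2, A_3
trials = 100_000

union_hits = 0
individual_hits = [0] * len(intervals)
for _ in range(trials):
    x = random.random()
    in_any = False
    for k, (lo, hi) in enumerate(intervals):
        if lo <= x < hi:
            individual_hits[k] += 1
            in_any = True
    union_hits += in_any

p_union = union_hits / trials
union_bound = sum(h / trials for h in individual_hits)
print(f"P(union) ~ {p_union:.3f} <= sum of P(A_i) ~ {union_bound:.3f}")
# Expected: P(union) is about 0.6, while the bound is about 0.7
# (the overlap of the first two intervals is counted twice in the sum).
```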
Proof
Proof using induction
Boole's inequality may be proved for finite collections of events using the method of induction.[citation needed]
For the base case n = 1, it follows that
\[
\mathbb{P}(A_1) \le \mathbb{P}(A_1).
\]
For the case n, we assume that
\[
\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \le \sum_{i=1}^{n} \mathbb{P}(A_i).
\]
Since \(\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)\), and because the union operation is associative, we have
\[
\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) = \mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) + \mathbb{P}(A_{n+1}) - \mathbb{P}\left(\left(\bigcup_{i=1}^{n} A_i\right) \cap A_{n+1}\right).
\]
Since \(\mathbb{P}\left(\left(\bigcup_{i=1}^{n} A_i\right) \cap A_{n+1}\right) \ge 0\) by the first axiom of probability, it follows that
\[
\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) \le \mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) + \mathbb{P}(A_{n+1}) \le \sum_{i=1}^{n} \mathbb{P}(A_i) + \mathbb{P}(A_{n+1}) = \sum_{i=1}^{n+1} \mathbb{P}(A_i).
\]
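As a concrete instance of the inductive step (an illustrative addition; the fair-die events are assumptions made for the example), take \(A_1 = \{1, 2\}\) and \(A_2 = \{2, 3\}\) on a fair six-sided die:
\[
\mathbb{P}(A_1 \cup A_2) = \mathbb{P}(A_1) + \mathbb{P}(A_2) - \mathbb{P}(A_1 \cap A_2) = \tfrac{2}{6} + \tfrac{2}{6} - \tfrac{1}{6} = \tfrac{1}{2} \le \tfrac{2}{6} + \tfrac{2}{6} = \tfrac{2}{3}.
\]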
Bonferroni inequalities
Boole's inequality may be generalized to find upper and lower bounds on the probability of finite unions of events. These bounds are known as the Bonferroni inequalities. For events A1, ..., An, define
\[
S_j := \sum_{1 \le i_1 < \cdots < i_j \le n} \mathbb{P}(A_{i_1} \cap \cdots \cap A_{i_j}), \qquad 1 \le j \le n.
\]
Then, for odd K,
\[
\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \le \sum_{j=1}^{K} (-1)^{j-1} S_j,
\]
and for even K the reverse inequality holds. The inequalities follow from the inclusion–exclusion principle, and Boole's inequality is the special case of K = 1. Since the proof of the inclusion–exclusion principle requires only the finite additivity (and nonnegativity) of \(\mathbb{P}\), the Bonferroni inequalities hold more generally when \(\mathbb{P}\) is replaced by any finite content, in the sense of measure theory.
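As an illustration (not part of the original article), the following Python sketch computes the partial sums \(S_j\) for three overlapping events on a fair die, which are arbitrary choices for the example, and compares the resulting Bonferroni bounds with the exact probability of the union.

```python
from itertools import combinations
from fractions import Fraction

# Bonferroni partial sums S_j for three events on a fair six-sided die.
# The events A_1, A_2, A_3 are arbitrary choices for the example.
omega_size = 6
events = [{1, 2, 3}, {2, 3, 4}, {4, 5}]  # A_1, A_2, A_3

def prob(s):
    """Probability of a subset of the sample space under the uniform measure."""
    return Fraction(len(s), omega_size)

n = len(events)
S = {j: sum(prob(set.intersection(*combo)) for combo in combinations(events, j))
     for j in range(1, n + 1)}

exact = prob(set.union(*events))
for K in range(1, n + 1):
    bound = sum((-1) ** (j - 1) * S[j] for j in range(1, K + 1))
    kind = "upper" if K % 2 == 1 else "lower"
    print(f"K={K}: alternating partial sum = {bound} ({kind} bound); exact P(union) = {exact}")
```

Here K = 1 reproduces Boole's inequality, K = 2 gives a lower bound, and K = 3 recovers the inclusion–exclusion formula exactly.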
Proof for odd K
Let \(E = B_1 \cap B_2 \cap \cdots \cap B_n\), where \(B_i = A_i\) or \(B_i = A_i^{c}\) for each \(i = 1, \dots, n\). These events \(E\) partition the sample space, and for each such \(E\) and every \(i\), \(E\) is either contained in \(A_i\) or disjoint from it.
If \(E = A_1^{c} \cap A_2^{c} \cap \cdots \cap A_n^{c}\), then \(E\) contributes 0 to both sides of the inequality.
Otherwise, assume \(E\) is contained in exactly \(L\) of the \(A_i\). Then \(E\) contributes exactly \(\mathbb{P}(E)\) to the left side of the inequality, while it contributes
\[
\sum_{j=1}^{K} (-1)^{j-1} \binom{L}{j} \mathbb{P}(E)
\]
to the right side of the inequality. However, by Pascal's rule, this is equal to
\[
\sum_{j=1}^{K} (-1)^{j-1} \left( \binom{L-1}{j-1} + \binom{L-1}{j} \right) \mathbb{P}(E),
\]
which telescopes to
\[
\left( 1 + \binom{L-1}{K} \right) \mathbb{P}(E) \ge \mathbb{P}(E).
\]
Thus, the inequality holds for all events \(E\), and so by summing over \(E\), we obtain the desired inequality:
\[
\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \le \sum_{j=1}^{K} (-1)^{j-1} S_j.
\]
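To see the telescoping explicitly (an illustrative expansion, not part of the original proof), take K = 3:
\[
\sum_{j=1}^{3} (-1)^{j-1}\left(\binom{L-1}{j-1}+\binom{L-1}{j}\right)
= \left[\binom{L-1}{0}+\binom{L-1}{1}\right] - \left[\binom{L-1}{1}+\binom{L-1}{2}\right] + \left[\binom{L-1}{2}+\binom{L-1}{3}\right]
= 1 + \binom{L-1}{3},
\]
so only the first and last binomial coefficients survive.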
Example
Suppose that you are estimating five parameters based on a random sample, and you can control the accuracy of each estimate separately. If you want all five estimates to be good simultaneously with probability at least 95%, what should you require of each individual estimate?
Requiring each individual estimate to be good with probability 95% is not enough, because the event "all five are good" is a subset of each event "estimate i is good". We can use Boole's inequality to solve this problem. Passing to the complement of the event "all five are good" turns the requirement into a condition on the probability that at least one estimate is bad:
P(at least one estimate is bad) ≤ P(A1 is bad) + P(A2 is bad) + P(A3 is bad) + P(A4 is bad) + P(A5 is bad),
so it suffices to make the right-hand side at most 0.05.
One way is to make each of the five terms equal to 0.05/5 = 0.01, that is, 1%. In other words, you have to guarantee that each estimate is good with probability 99% (for example, by constructing a 99% confidence interval) in order for all five estimates to be good simultaneously with probability at least 95%. This is called the Bonferroni method of simultaneous inference.
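A minimal Python sketch of this recipe follows; the normal-approximation intervals, point estimates, and standard errors are assumptions made for the illustration, not part of the article.

```python
import math

def normal_quantile(p, lo=-10.0, hi=10.0, tol=1e-10):
    """Standard normal quantile via bisection on the CDF (avoids external libraries)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

def bonferroni_normal_cis(means, std_errors, alpha=0.05):
    """Simultaneous normal-approximation CIs at family-wise level 1 - alpha."""
    m = len(means)
    per_test_alpha = alpha / m                   # Boole's inequality splits the error budget
    z = normal_quantile(1 - per_test_alpha / 2)  # two-sided critical value
    return [(mu - z * se, mu + z * se) for mu, se in zip(means, std_errors)]

# Hypothetical point estimates and standard errors for the five parameters.
estimates = [1.2, 0.4, -0.7, 2.1, 0.0]
std_errs = [0.1, 0.2, 0.15, 0.3, 0.05]
for k, ci in enumerate(bonferroni_normal_cis(estimates, std_errs), start=1):
    # With alpha = 0.05 and m = 5, each interval is built at the 99% level.
    print(f"parameter {k}: individual 99% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```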
References
Bonferroni, Carlo E. (1936), "Teoria statistica delle classi e calcolo delle probabilità" [Statistical theory of classes and calculation of probabilities], Pubbl. D. R. Ist. Super. Di Sci. Econom. E Commerciali di Firenze (in Italian), 8: 1–62, Zbl 0016.41103
Dohmen, Klaus (2003), Improved Bonferroni Inequalities via Abstract Tubes. Inequalities and Identities of Inclusion–Exclusion Type, Lecture Notes in Mathematics, vol. 1826, Berlin: Springer-Verlag, pp. viii+113, ISBN 3-540-20025-8, MR 2019293, Zbl 1026.05009