In the statistical theory of estimation, the German tank problem consists of estimating the maximum of a discrete uniform distribution from sampling without replacement. In simple terms, suppose there exists an unknown number of items which are sequentially numbered from 1 to N. A random sample of these items is taken and their sequence numbers observed; the problem is to estimate N from these observed numbers.
The problem can be approached using either frequentist inference or Bayesian inference, leading to different results. Estimating the population maximum based on a single sample yields divergent results, whereas estimation based on multiple samples is a practical estimation question whose answer is simple (especially in the frequentist setting) but not obvious (especially in the Bayesian setting).
The problem is named after its historical application by Allied forces in World War II to the estimation of the monthly rate of German tank production from very limited data. This exploited the manufacturing practice of assigning and attaching ascending sequences of serial numbers to tank components (chassis, gearbox, engine, wheels), with some of the tanks eventually being captured in battle by Allied forces.
The adversary is presumed to have manufactured a series of tanks marked with consecutive whole numbers, beginning with serial number 1. Additionally, regardless of a tank's date of manufacture, history of service, or the serial number it bears, each serial number issued up to the point in time when the analysis is conducted is assumed equally likely to be among those revealed to analysis.
Assuming tanks are assigned sequential serial numbers starting with 1, suppose that four tanks are captured and that they have the serial numbers: 19, 40, 42 and 60.
A frequentist approach (using the minimum-variance unbiased estimator) predicts the total number of tanks produced will be N ≈ 74.
A Bayesian approach (using a uniform prior over the integers in [k, Ω) for any suitably large Ω) predicts that the median number of tanks produced will be very similar to the frequentist prediction, N ≈ 74.5,
whereas the Bayesian mean predicts that the number of tanks produced would be N ≈ 89.
Let N equal the total number of tanks produced, m equal the highest serial number observed, and k equal the number of tanks captured (the sample size).
The frequentist prediction is calculated as:
$$N \approx m + \frac{m}{k} - 1 = 60 + \frac{60}{4} - 1 = 74.$$
The Bayesian median is calculated as the value of n at which the posterior credibility that N exceeds n (given below) falls to ½:
$$N \approx 74.5.$$
The Bayesian mean is calculated as:
$$N \approx \frac{(m-1)(k-1)}{k-2} = \frac{59 \times 3}{2} = 88.5 \approx 89.$$
These Bayesian quantities are derived from the Bayesian posterior distribution:
$$\Pr(N = n \mid m, k) = \begin{cases} 0 & \text{if } n < m, \\[4pt] \dfrac{k-1}{k}\,\dfrac{\binom{m-1}{k-1}}{\binom{n}{k}} & \text{if } n \ge m. \end{cases}$$
This probability mass function has a positive skewness, related to the fact that there are at least 60 tanks. Because of this skewness, the mean may not be the most meaningful estimate. The median in this example is 74.5, in close agreement with the frequentist formula. Using Stirling's approximation, the posterior may be approximated by a decaying power of n,
$$\Pr(N = n \mid m, k) \approx \begin{cases} 0 & \text{if } n < m, \\[4pt] \dfrac{(k-1)\,m^{k-1}}{n^{k}} & \text{if } n \ge m, \end{cases}$$
which results in the following approximation for the median:
$$N \approx m\,2^{1/(k-1)} \approx m + \frac{m \ln 2}{k-1},$$
and the following approximations for the mean and standard deviation:
$$N \approx \mu \pm \sigma, \qquad \mu \approx \frac{(k-1)\,m}{k-2}, \qquad \sigma \approx \frac{m}{k-2}\sqrt{\frac{k-1}{k-3}}.$$
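To make the example concrete, the quantities above can be reproduced numerically. The following is a minimal Python sketch (function names are illustrative; Python 3.8+ is assumed for math.comb) that computes the frequentist estimate and the exact Bayesian posterior summaries for the serial numbers 19, 40, 42 and 60.

```python
from math import comb

serials = [19, 40, 42, 60]
m, k = max(serials), len(serials)                  # m = 60, k = 4

# Frequentist minimum-variance unbiased estimate: sample maximum plus average gap.
print(m + m / k - 1)                               # 74.0

def posterior_tail(n):
    """Pr(N > n | m, k) = C(m-1, k-1) / C(n, k-1) for n >= m."""
    return comb(m - 1, k - 1) / comb(n, k - 1)

print(posterior_tail(74), posterior_tail(75))      # straddles 1/2, so the median is about 74.5

def posterior_pmf(n):
    """Pr(N = n | m, k) = ((k-1)/k) * C(m-1, k-1) / C(n, k) for n >= m."""
    return (k - 1) / k * comb(m - 1, k - 1) / comb(n, k)

# Posterior mean by truncated summation; converges to (m-1)(k-1)/(k-2) = 88.5, i.e. about 89.
print(sum(n * posterior_pmf(n) for n in range(m, 100_000)))
```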
During the course of the Second World War, the Western Allies made sustained efforts to determine the extent of German production and approached this in two major ways: conventional intelligence gathering and statistical estimation. In many cases, statistical analysis substantially improved on conventional intelligence. In some cases, conventional intelligence was used in conjunction with statistical methods, as was the case in estimation of Panther tank production just prior to D-Day.
The allied command structure had thought the Panzer V (Panther) tanks seen in Italy, with their high velocity, long-barreled 75 mm/L70 guns, were unusual heavy tanks and would only be seen in northern France in small numbers, much the same way as the Tiger I was seen in Tunisia. The US Army was confident that the Sherman tank would continue to perform well, as it had versus the Panzer III and Panzer IV tanks in North Africa and Sicily. [lower-alpha 1] Shortly before D-Day, rumors indicated that large numbers of Panzer V tanks were being used.
To determine whether this was true, the Allies attempted to estimate the number of tanks being produced. To do this, they used the serial numbers on captured or destroyed tanks. The principal numbers used were gearbox numbers, as these fell in two unbroken sequences. Chassis and engine numbers were also used, though their use was more complicated. Various other components were used to cross-check the analysis. Similar analyses were done on wheels, which were observed to be sequentially numbered (i.e., 1, 2, 3, ..., N). [2] [lower-alpha 2] [3] [4]
The analysis of tank wheels yielded an estimate for the number of wheel molds that were in use. A discussion with British road wheel makers then estimated the number of wheels that could be produced from this many molds, which yielded the number of tanks that were being produced each month. Analysis of wheels from two tanks (32 road wheels each, 64 road wheels total) yielded an estimate of 270 tanks produced in February 1944, substantially more than had previously been suspected. [5]
German records after the war showed production for the month of February 1944 was 276. [6] [lower-alpha 3] The statistical approach proved to be far more accurate than conventional intelligence methods, and the phrase "German tank problem" became accepted as a descriptor for this type of statistical analysis.
Estimating production was not the only use of this serial-number analysis. It was also used to understand German production more generally, including number of factories, relative importance of factories, length of supply chain (based on lag between production and use), changes in production, and use of resources such as rubber.
According to conventional Allied intelligence estimates, the Germans were producing around 1,400 tanks a month between June 1940 and September 1942. Applying the formula below to the serial numbers of captured tanks, the number was calculated to be 246 a month. After the war, captured German production figures from the ministry of Albert Speer showed the actual number to be 245. [3]
Estimates for some specific months are given as: [7]
Month | Statistical estimate | Intelligence estimate | German records |
---|---|---|---|
June 1940 | 169 | 1,000 | 122 |
June 1941 | 244 | 1,550 | 271 |
August 1942 | 327 | 1,550 | 342 |
Similar serial-number analysis was used for other military equipment during World War II, most successfully for the V-2 rocket. [8]
Factory markings on Soviet military equipment were analyzed during the Korean War, and by German intelligence during World War II. [9]
In the 1980s, some Americans were given access to the production line of Israel's Merkava tanks. The production numbers were classified, but the tanks had serial numbers, allowing estimation of production. [10]
The formula has been used in non-military contexts, for example to estimate the number of Commodore 64 computers built, where the result (12.5 million) matches the low-end estimates. [11]
To confound serial-number analysis, serial numbers can be excluded, or usable auxiliary information reduced. Alternatively, serial numbers that resist cryptanalysis can be used. This is most effectively done by randomly choosing numbers without replacement from a list that is much larger than the number of objects produced, or by producing random numbers and checking them against the list of already assigned numbers; collisions are likely to occur unless the number of digits possible is more than twice the number of digits in the number of objects produced (where the serial number can be in any base); see birthday problem. [lower-alpha 4] For this, a cryptographically secure pseudorandom number generator may be used. All these methods require a lookup table (or breaking the cypher) to back out from serial number to production order, which complicates use of serial numbers: a range of serial numbers cannot be recalled, for instance, but each must be looked up individually, or a list generated.
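As a rough illustration (not a documented historical practice), the scheme just described might look like the following Python sketch, which uses the standard secrets module; the function name and parameters are hypothetical.

```python
import secrets

def issue_serials(count, digits=12):
    """Issue `count` distinct random serial numbers drawn from 10**digits possibilities.

    Keeping `digits` well above twice the number of digits in `count` makes
    birthday-problem collisions unlikely, but each candidate is still checked
    against the numbers already assigned.
    """
    assigned, in_order = set(), []
    while len(in_order) < count:
        candidate = secrets.randbelow(10 ** digits)
        if candidate not in assigned:        # reject the occasional collision
            assigned.add(candidate)
            in_order.append(candidate)
    # Recovering production order from a serial number now requires this lookup table.
    lookup = {serial: position + 1 for position, serial in enumerate(in_order)}
    return in_order, lookup

serials, lookup = issue_serials(1_000)
```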
Alternatively, sequential serial numbers can be encrypted with a simple substitution cipher, which allows easy decoding, but is also easily broken by frequency analysis: even if starting from an arbitrary point, the plaintext has a pattern (namely, numbers are in sequence). One example is given in Ken Follett's novel Code to Zero, where the encryption of the Jupiter-C rocket serial numbers is given by:
H | U | N | T | S | V | I | L | E | X |
---|---|---|---|---|---|---|---|---|---|
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 0 |
The key here is the word Huntsville with repeated letters omitted (HUNTSVILE), followed by X to get a 10-letter key. [12] The rocket number 13 was therefore "HN", and the rocket number 24 was "UT".
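The substitution amounts to a fixed digit-to-letter table; a minimal Python sketch of the encoding (helper names are illustrative):

```python
# Digits 1..9 and 0 map to the letters of HUNTSVILE followed by X, as in the table above.
ENCODE = dict(zip("1234567890", "HUNTSVILEX"))
DECODE = {letter: digit for digit, letter in ENCODE.items()}

def encode(number: int) -> str:
    return "".join(ENCODE[d] for d in str(number))

def decode(ciphertext: str) -> int:
    return int("".join(DECODE[c] for c in ciphertext))

assert encode(13) == "HN" and encode(24) == "UT"
assert decode("HN") == 13
```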
For point estimation (estimating a single value for the total, $\widehat{N}$), the minimum-variance unbiased estimator (MVUE, or UMVU estimator) is given by: [lower-alpha 5]
$$\widehat{N} = m\left(1 + \frac{1}{k}\right) - 1 = m + \frac{m}{k} - 1,$$
where m is the largest serial number observed (sample maximum) and k is the number of tanks observed (sample size). [10] [13] Note that once a serial number has been observed, it is no longer in the pool and will not be observed again.
This has a variance [10]
$$\operatorname{var}\!\left(\widehat{N}\right) = \frac{1}{k}\,\frac{(N-k)(N+1)}{k+2} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N,$$
so the standard deviation is approximately N/k, the expected size of the gap between sorted observations in the sample.
The formula may be understood intuitively as the sample maximum plus the average gap between observations in the sample. The sample maximum is chosen as the initial estimator, due to being the maximum likelihood estimator, [lower-alpha 6] and the gap is added to compensate for the negative bias of the sample maximum as an estimator for the population maximum, [lower-alpha 7] giving
$$\widehat{N} = m + \frac{m-k}{k} = m + \frac{m}{k} - 1.$$
This can be visualized by imagining that the observations in the sample are evenly spaced throughout the range, with additional observations just outside the range at 0 and N + 1. If starting with an initial gap between 0 and the lowest observation in the sample (the sample minimum), the average gap between consecutive observations in the sample is $(m-k)/k$; the $-k$ appears because the observations themselves are not counted in computing the gap between observations. [lower-alpha 8] A derivation of the expected value and the variance of the sample maximum is given in the article on the discrete uniform distribution.
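The bias correction can also be checked by simulation. The following Python sketch (with an illustrative true value N = 250 and sample size k = 4) draws serial numbers without replacement and confirms that the estimator's average is close to N while its spread is of the order N/k.

```python
import random

def mvue(sample):
    """Sample maximum plus the average gap: m + m/k - 1."""
    m, k = max(sample), len(sample)
    return m + m / k - 1

def simulate(N=250, k=4, trials=100_000, seed=0):
    rng = random.Random(seed)
    estimates = [mvue(rng.sample(range(1, N + 1), k)) for _ in range(trials)]
    mean = sum(estimates) / trials
    spread = (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5
    return mean, spread

print(simulate())   # mean close to 250; standard deviation on the order of N/k
```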
This philosophy is formalized and generalized in the method of maximum spacing estimation; a similar heuristic is used for plotting positions in a Q–Q plot, which places the sample points at k / (n + 1), evenly spaced on the uniform distribution, with a gap at the end.
Instead of, or in addition to, point estimation, interval estimation can be carried out, such as confidence intervals. These are easily computed, based on the observation that the probability that k observations in the sample will fall in an interval covering p of the range (0 ≤ p ≤ 1) is $p^k$ (assuming in this section that draws are with replacement, to simplify computations; if draws are without replacement, this overstates the likelihood, and intervals will be overly conservative).
Thus the sampling distribution of the quantile of the sample maximum is the graph $x^{1/k}$ from 0 to 1: the p-th to q-th quantile of the sample maximum m are the interval $[p^{1/k}N,\, q^{1/k}N]$. Inverting this yields the corresponding confidence interval for the population maximum of $[m/q^{1/k},\, m/p^{1/k}]$.
For example, taking the symmetric 95% interval p = 2.5% and q = 97.5% for k = 5 yields $0.025^{1/5} \approx 0.48$ and $0.975^{1/5} \approx 0.995$, so the confidence interval is approximately $[1.005m,\, 2.08m]$. The lower bound is very close to m, thus more informative is the asymmetric confidence interval from p = 5% to 100%; for k = 5 this yields $0.05^{1/5} \approx 0.55$ and the interval $[m,\, 1.82m]$.
More generally, the (downward biased) 95% confidence interval is $[m,\, m/0.05^{1/k}] = [m,\, m \cdot 20^{1/k}]$. For a range of k values, with the UMVU point estimator (plus 1 for legibility) for reference, this yields the table below (a short computational sketch follows the table):
k | Point estimate | Confidence interval |
---|---|---|
1 | 2m | [m, 20m] |
2 | 1.5m | [m, 4.5m] |
5 | 1.2m | [m, 1.82m] |
10 | 1.1m | [m, 1.35m] |
20 | 1.05m | [m, 1.16m] |
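The table can be reproduced directly from the interval formulas above; a minimal Python sketch (function names are illustrative):

```python
def conservative_interval(m, k, coverage=0.95):
    """Downward-biased interval [m, m * (1/(1 - coverage))**(1/k)], from Pr(max <= p*N) ~ p**k."""
    return m, m * (1.0 / (1.0 - coverage)) ** (1.0 / k)

def two_sided_interval(m, k, p=0.025, q=0.975):
    """Invert the quantiles of the sample maximum: [m / q**(1/k), m / p**(1/k)]."""
    return m / q ** (1.0 / k), m / p ** (1.0 / k)

for k in (1, 2, 5, 10, 20):
    _, upper = conservative_interval(1.0, k)   # expressed as a multiple of m
    print(k, round(upper, 2))                  # 20.0, 4.47, 1.82, 1.35, 1.16

print(two_sided_interval(60, 5))               # about (60.3, 125.5) for m = 60, k = 5
```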
Immediate observations are:
- For small samples, the confidence interval is very wide, reflecting great uncertainty in the estimate.
- The width of the interval shrinks rapidly as the sample size k grows.
- In every case, the sample maximum m is the lower bound of the interval, and the point estimate lies well inside it.
Note that m/k cannot be used naively (or rather (m + m/k − 1)/k) as an estimate of the standard error SE, as the standard error of an estimator is based on the population maximum (a parameter), and using an estimate to estimate the error in that very estimate is circular reasoning.
The Bayesian approach to the German tank problem [14] is to consider the posterior probability Pr(N = n | M = m, K = k) that the number of enemy tanks N is equal to n, when the number of observed tanks K is equal to k, and the maximum observed serial number M is equal to m.
The answer to this problem depends on the choice of prior for N. One can proceed using a proper prior over the positive integers, e.g., the Poisson or negative binomial distribution, for which a closed formula for the posterior mean and posterior variance can be obtained. [15] Below, we will instead adopt a bounded uniform prior.
For brevity, in what follows, Pr(N = n | M = m, K = k) is written (n | m, k).
The rule for conditional probability gives
$$(n \mid m, k) = \frac{(m \mid n, k)\,(n \mid k)}{(m \mid k)}.$$
The expression
$$(m \mid n, k) = \Pr(M = m \mid N = n, K = k)$$
is the conditional probability that the maximum serial number observed, M, is equal to m, when the number of enemy tanks, N, is known to be equal to n, and the number of enemy tanks observed, K, is known to be equal to k.
It is
$$(m \mid n, k) = \binom{m-1}{k-1}\binom{n}{k}^{-1}[k \le m][m \le n],$$
where $\binom{n}{k}$ is a binomial coefficient and $[k \le m]$ is an Iverson bracket.
The expression can be derived as follows: $(m \mid n, k)$ answers the question: "What is the probability of a specific serial number m being the highest number observed in a sample of k tanks, given there are n tanks in total?"
One can think of the sample of size k to be the result of k individual draws without replacement. Assume m is observed on draw number d. The probability of this occurring is:
$$\frac{m-1}{n} \cdot \frac{m-2}{n-1} \cdots \frac{m-d+1}{n-d+2} \cdot \frac{1}{n-d+1} \cdot \frac{m-d}{n-d} \cdots \frac{m-k+1}{n-k+1} = \frac{(m-1)!\,(n-k)!}{(m-k)!\,n!}.$$
As can be seen from the right-hand side, this expression is independent of d and therefore the same for each $d \le k$. As m can be drawn on k different draws, the probability of any specific m being the largest one observed is k times the above probability:
$$(m \mid n, k) = k\,\frac{(m-1)!\,(n-k)!}{(m-k)!\,n!} = \binom{m-1}{k-1}\binom{n}{k}^{-1}.$$
The expression
$$(m \mid k) = \Pr(M = m \mid K = k)$$
is the probability that the maximum serial number is equal to m once k tanks have been observed but before the serial numbers have actually been observed.
The expression $(m \mid k)$ can be re-written in terms of the other quantities by marginalizing over all possible n:
$$(m \mid k) = \sum_{n} (m \mid n, k)\,(n \mid k).$$
We assume that k is fixed in advance, so that we do not have to consider any distribution over k; thus, our prior for N can depend on k.
The expression
$$(n \mid k) = \Pr(N = n \mid K = k)$$
is the credibility that the total number of tanks, N, is equal to n when the number of tanks observed is known to be k, but before the serial numbers have been observed. Assume that it is some discrete uniform distribution
$$(n \mid k) = (\Omega - k)^{-1}[k \le n < \Omega].$$
The upper limit $\Omega$ must be finite, because the function
$$f(n) = \lim_{\Omega \to \infty}(\Omega - k)^{-1}[k \le n < \Omega]$$
is not a mass distribution function. Our result below will not depend on $\Omega$.
Provided that $k \le m \le n < \Omega$, so that the prior is consistent with the observed data:
$$(n \mid m, k) = \frac{(m \mid n, k)\,(n \mid k)}{\sum_{n'}(m \mid n', k)\,(n' \mid k)} = \frac{\binom{n}{k}^{-1}}{\sum_{n'=m}^{\Omega-1}\binom{n'}{k}^{-1}}.$$
As $\Omega \to \infty$, the summation approaches $\sum_{n=m}^{\infty}\binom{n}{k}^{-1} = \frac{k}{k-1}\binom{m-1}{k-1}^{-1}$ (which is finite if k ≥ 2). Thus, for suitably large $\Omega$, we have
$$(n \mid m, k) \approx \begin{cases} 0 & \text{if } n < m, \\[4pt] \dfrac{k-1}{k}\,\dfrac{\binom{m-1}{k-1}}{\binom{n}{k}} & \text{if } n \ge m. \end{cases}$$
For k ≥ 1 the mode of the distribution of the number of enemy tanks is m.
For k ≥ 2, the credibility that the number of enemy tanks is equal to n, is
$$(n \mid m, k) = \frac{k-1}{k}\,\frac{\binom{m-1}{k-1}}{\binom{n}{k}}\,[m \le n].$$
The credibility that the number of enemy tanks, N, is greater than n, is
$$\Pr(N > n \mid m, k) = \begin{cases} 1 & \text{if } n < m, \\[4pt] \dfrac{\binom{m-1}{k-1}}{\binom{n}{k-1}} & \text{if } n \ge m. \end{cases}$$
For k ≥ 3, N has the finite mean value:
$$\frac{(m-1)(k-1)}{k-2}.$$
For k ≥ 4, N has the finite standard deviation:
$$\frac{\sqrt{(k-1)(m-1)(m-k+1)}}{(k-2)\sqrt{k-3}}.$$
These formulas are derived below.
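For instance, the tail formula above yields a one-sided Bayesian credible bound directly; a minimal Python sketch (names illustrative, Python 3.8+ for math.comb):

```python
from math import comb

def tail(n, m, k):
    """Pr(N > n | m, k): equals 1 for n < m, else C(m-1, k-1) / C(n, k-1)."""
    return 1.0 if n < m else comb(m - 1, k - 1) / comb(n, k - 1)

def credible_upper_bound(m, k, credibility=0.95):
    """Smallest n with Pr(N > n | m, k) <= 1 - credibility."""
    n = m
    while tail(n, m, k) > 1.0 - credibility:
        n += 1
    return n

print(credible_upper_bound(60, 4))   # 95% Bayesian upper bound on N for the example data
```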
The following binomial coefficient identity is used below for simplifying series relating to the German tank problem:
$$\sum_{n=m}^{\infty}\frac{1}{\binom{n}{k}} = \frac{k}{k-1}\,\frac{1}{\binom{m-1}{k-1}}.$$
This sum formula is somewhat analogous to the integral formula
$$\int_{m}^{\infty}\frac{\mathrm{d}x}{x^{k}} = \frac{1}{k-1}\,\frac{1}{m^{k-1}}.$$
These formulas apply for k > 1.
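The identity can be spot-checked numerically, for example with a truncated sum in Python (the cutoff is arbitrary):

```python
from math import comb

m, k = 60, 4
lhs = sum(1 / comb(n, k) for n in range(m, 200_000))   # truncated series
rhs = k / ((k - 1) * comb(m - 1, k - 1))               # closed form
print(lhs, rhs)                                        # agree to many digits
```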
Observing one tank randomly out of a population of n tanks gives the serial number m with probability 1/n for m ≤ n, and zero probability for m > n. Using Iverson bracket notation this is written
$$(m \mid n) = \Pr(M = m \mid N = n, K = 1) = \frac{[m \le n]}{n}.$$
This is the conditional probability mass distribution function of m.
When considered a function of n for fixed m this is a likelihood function
$$\mathcal{L}(n) = \frac{[n \ge m]}{n}.$$
The maximum likelihood estimate for the total number of tanks is N0 = m, clearly a biased estimate since the true number can be more than this, potentially many more, but cannot be fewer.
The marginal likelihood (i.e. marginalized over all models) is infinite, being a tail of the harmonic series,
$$\sum_{n}\mathcal{L}(n) = \sum_{n=m}^{\infty}\frac{1}{n} = \infty,$$
but
$$\sum_{n<\Omega}\mathcal{L}(n) = \sum_{n=m}^{\Omega-1}\frac{1}{n} = H_{\Omega-1} - H_{m-1},$$
where $H_n$ is the harmonic number.
The credibility mass distribution function depends on the prior limit $\Omega$:
$$(n \mid m) = \Pr(N = n \mid M = m, K = 1) = \frac{[m \le n < \Omega]}{n\,(H_{\Omega-1} - H_{m-1})}.$$
The mean value of N is
$$\sum_{n} n\,(n \mid m) = \sum_{n=m}^{\Omega-1}\frac{1}{H_{\Omega-1} - H_{m-1}} = \frac{\Omega - m}{H_{\Omega-1} - H_{m-1}}.$$
If two tanks rather than one are observed, then the probability that the larger of the observed two serial numbers is equal to m, is
$$(m \mid n) = \Pr(M = m \mid N = n, K = 2) = [m \le n]\,\frac{m-1}{\binom{n}{2}} = [m \le n]\,\frac{2(m-1)}{n(n-1)}.$$
When considered a function of n for fixed m this is a likelihood function
$$\mathcal{L}(n) = [n \ge m]\,\frac{2(m-1)}{n(n-1)}.$$
The total likelihood is
$$\sum_{n}\mathcal{L}(n) = 2(m-1)\sum_{n=m}^{\infty}\frac{1}{n(n-1)} = 2(m-1)\,\frac{1}{m-1} = 2,$$
and the credibility mass distribution function is
$$(n \mid m) = \Pr(N = n \mid M = m, K = 2) = \frac{\mathcal{L}(n)}{\sum_{n'}\mathcal{L}(n')} = [n \ge m]\,\frac{m-1}{n(n-1)}.$$
The median $\tilde{N}$ satisfies
$$\sum_{n=\tilde{N}}^{\infty}(n \mid m) = \frac{1}{2},$$
so
$$\frac{m-1}{\tilde{N}-1} = \frac{1}{2},$$
and so the median is
$$\tilde{N} = 2m - 1,$$
but the mean value of N is infinite:
$$\mu = \sum_{n} n\,(n \mid m) = \sum_{n=m}^{\infty}\frac{m-1}{n-1} = \infty.$$
The conditional probability that the largest of k observations taken from the serial numbers {1, ..., n} is equal to m, is
$$(m \mid n, k) = \Pr(M = m \mid N = n, K = k) = [k \le m \le n]\,\frac{\binom{m-1}{k-1}}{\binom{n}{k}}.$$
The likelihood function of n is the same expression
$$\mathcal{L}(n) = [n \ge m]\,\frac{\binom{m-1}{k-1}}{\binom{n}{k}}.$$
The total likelihood is finite for k ≥ 2:
$$\sum_{n}\mathcal{L}(n) = \binom{m-1}{k-1}\sum_{n=m}^{\infty}\frac{1}{\binom{n}{k}} = \binom{m-1}{k-1}\,\frac{k}{k-1}\,\frac{1}{\binom{m-1}{k-1}} = \frac{k}{k-1}.$$
The credibility mass distribution function is
$$(n \mid m, k) = \Pr(N = n \mid M = m, K = k) = \frac{\mathcal{L}(n)}{\sum_{n'}\mathcal{L}(n')} = [n \ge m]\,\frac{k-1}{k}\,\frac{\binom{m-1}{k-1}}{\binom{n}{k}}.$$
The complementary cumulative distribution function is the credibility that N > x:
$$\Pr(N > x \mid m, k) = \begin{cases} 1 & \text{if } x < m, \\[4pt] \dfrac{\binom{m-1}{k-1}}{\binom{x}{k-1}} & \text{if } x \ge m. \end{cases}$$
The cumulative distribution function is the credibility that N ≤ x:
$$\Pr(N \le x \mid m, k) = 1 - \Pr(N > x \mid m, k) = \begin{cases} 0 & \text{if } x < m, \\[4pt] 1 - \dfrac{\binom{m-1}{k-1}}{\binom{x}{k-1}} & \text{if } x \ge m. \end{cases}$$
The order of magnitude of the number of enemy tanks is
$$\mu = \sum_{n} n\,(n \mid m, k) = \sum_{n=m}^{\infty} n\,\frac{k-1}{k}\,\frac{\binom{m-1}{k-1}}{\binom{n}{k}} = \frac{(m-1)(k-1)}{k-2}.$$
The statistical uncertainty is the standard deviation $\sigma$, satisfying the equation
$$\sigma^{2} + \mu^{2} = \sum_{n} n^{2}\,(n \mid m, k).$$
So
$$\sigma^{2} + \mu^{2} = \frac{(k-1)(m-1)(m-2)}{k-3} + \frac{(m-1)(k-1)}{k-2},$$
and
$$\sigma = \frac{\sqrt{(k-1)(m-1)(m-k+1)}}{(k-2)\sqrt{k-3}}.$$
The variance-to-mean ratio is simply
$$\frac{\sigma^{2}}{\mu} = \frac{m-k+1}{(k-3)(k-2)}.$$
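These closed forms can likewise be checked against truncated sums of the posterior; a short Python sketch (the truncation point is arbitrary, and the second moment converges slowly):

```python
from math import comb, sqrt

m, k = 60, 4
norm = (k - 1) / k * comb(m - 1, k - 1)

mean = second = 0.0
for n in range(m, 2_000_000):
    p = norm / comb(n, k)          # Pr(N = n | m, k)
    mean += n * p
    second += n * n * p

print(mean, sqrt(second - mean ** 2))                   # numerical moments
print((m - 1) * (k - 1) / (k - 2),                      # closed-form mean: 88.5
      sqrt((k - 1) * (m - 1) * (m - k + 1)) / ((k - 2) * sqrt(k - 3)))  # about 50.2
```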