Fermi problem

A Fermi problem (or Fermi quiz, Fermi question, Fermi estimate), also known as an order-of-magnitude problem (or order-of-magnitude estimate, order estimation), is an estimation problem in physics or engineering education, designed to teach dimensional analysis or the approximation of extreme scientific calculations. Fermi problems are usually back-of-the-envelope calculations. The estimation technique is named after physicist Enrico Fermi, who was known for his ability to make good approximate calculations with little or no actual data. Fermi problems typically involve making justified guesses about quantities and their variance or lower and upper bounds. In some cases, order-of-magnitude estimates can also be derived using dimensional analysis.

Historical background

An example is Enrico Fermi's estimate of the strength of the atomic bomb that detonated at the Trinity test, based on the distance traveled by pieces of paper he dropped from his hand during the blast. Fermi's estimate of 10 kilotons of TNT was well within an order of magnitude of the now-accepted value of 21 kilotons.[1][2][3]

Examples

Fermi questions are often extreme in nature and usually cannot be solved using common mathematical or scientific information alone.

Example questions given by the official Fermi Competition:

"If the mass of one teaspoon of water could be converted entirely into energy in the form of heat, what volume of water, initially at room temperature, could it bring to a boil? (litres)."

"How much does the Thames River heat up in going over the Fanshawe Dam? (Celsius degrees)."

"What is the mass of all the automobiles scrapped in North America this month? (kilograms)." [4] [5]

Possibly the most famous Fermi question is the Drake equation, which seeks to estimate the number of intelligent civilizations in the galaxy. The basic question of why, if there were a significant number of such civilizations, human civilization has never encountered any others is called the Fermi paradox.[6]
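
The Drake equation multiplies a chain of estimated factors, exactly in the Fermi style. A minimal sketch with illustrative parameter values; none of the numbers below is canonical, and published estimates for each factor vary by orders of magnitude:

```python
# Drake equation sketch: N = R* * fp * ne * fl * fi * fc * L
# Every value below is an assumed, illustrative guess.

R_star = 2      # average rate of star formation in the galaxy, per year
f_p = 0.5       # fraction of stars with planets
n_e = 1         # habitable planets per star that has planets
f_l = 0.3       # fraction of habitable planets on which life arises
f_i = 0.1       # fraction of those that develop intelligence
f_c = 0.1       # fraction of those that emit detectable signals
L = 10_000      # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations: {N:.0f}")  # ~30 with these guesses
```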

Advantages and scope

Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results. While the estimate is almost certainly not exact, it is also a simple calculation that allows for easy error checking and for finding faulty assumptions if the figure produced is far beyond what might reasonably be expected. By contrast, precise calculations can be extremely complex, but with the expectation that the answer they produce is correct. The far larger number of factors and operations involved can obscure a very significant error, either in the mathematical process or in the assumptions the equation is based on, yet the result may still be assumed to be right because it has been derived from a precise formula that is expected to yield good results. Without a reasonable frame of reference, it is seldom clear whether a result is acceptably precise or is several orders of magnitude (tens or hundreds of times) too big or too small. A Fermi estimate gives a quick, simple way to obtain this frame of reference for what might reasonably be expected to be the answer.

As long as the initial assumptions in the estimate are reasonable quantities, the result obtained will give an answer within the same scale as the correct result, and if not, it gives a basis for understanding why this is the case. For example, suppose a person was asked to determine the number of piano tuners in Chicago. If their initial estimate told them there should be a hundred or so, but the precise answer tells them there are many thousands, then they know they need to find out why there is this divergence from the expected result. They would look first for errors, then for factors the estimate did not take into account: does Chicago have a number of music schools or other places with a disproportionately high ratio of pianos to people? Whether close or very far from the observed results, the context the estimate provides gives useful information both about the process of calculation and about the assumptions that have been used to look at the problem.
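
A sketch of the classic piano-tuner chain in Python. Every input is an assumed round number of the kind described above; none comes from actual Chicago data:

```python
# Classic Fermi chain: how many piano tuners are in Chicago?
# All inputs are assumed round numbers, not real statistics.

population = 9e6             # people in greater Chicago (assumed)
people_per_household = 2     # persons per household (assumed)
piano_fraction = 1 / 20      # households with a regularly tuned piano (assumed)
tunings_per_year = 1         # tunings per piano per year (assumed)

tunings_per_day = 4          # jobs one tuner can do in a day (assumed)
work_days = 250              # working days per year (assumed)

pianos = population / people_per_household * piano_fraction
demand = pianos * tunings_per_year       # tunings needed per year
capacity = tunings_per_day * work_days   # tunings one tuner supplies per year

tuners = demand / capacity
print(f"Estimated piano tuners: {tuners:.0f}")  # ~225 with these guesses
```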

Fermi estimates are also useful in approaching problems where the optimal choice of calculation method depends on the expected size of the answer. For instance, a Fermi estimate might indicate whether the internal stresses of a structure are low enough that it can be accurately described by linear elasticity, or whether the estimate already bears a significant relationship in scale to some other value: for example, whether a structure will be over-engineered to withstand loads several times greater than the estimate.
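
A minimal sketch of that first use, assuming illustrative figures for the load, cross-section, and material; the numbers are placeholders, not engineering data:

```python
# Order-of-magnitude check: is a steel column safely in the
# linear-elastic regime? All figures are assumed for illustration.

load = 1e5               # N, roughly a 10-tonne load (assumed)
side = 0.1               # m, side of a square cross-section (assumed)
area = side ** 2         # m^2

stress = load / area     # Pa; ~1e7 Pa here
yield_strength = 2.5e8   # Pa, order of magnitude for mild steel

ratio = stress / yield_strength
print(f"stress/yield ~ {ratio:.2f}")  # ~0.04: far below yield, so a
                                      # linear-elastic model is reasonable
```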

Although Fermi calculations are often not accurate, as there may be many problems with their assumptions, this sort of analysis does indicate what to look for to get a better answer. For the above example, one might try to find a better estimate of the number of pianos tuned by a piano tuner in a typical day, or look up an accurate number for the population of Chicago. It also gives a rough estimate that may be good enough for some purposes: if a person wants to start a store in Chicago that sells piano tuning equipment, and calculates that they need 10,000 potential customers to stay in business, they can reasonably assume that the above estimate is far enough below 10,000 that they should consider a different business plan (and, with a little more work, they could compute a rough upper bound on the number of piano tuners by considering the most extreme reasonable values that could appear in each of their assumptions).
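
That parenthetical upper bound can be made concrete by pushing each factor to its most generous plausible extreme and multiplying through. A sketch, reusing the hypothetical piano-tuner inputs from above:

```python
# Rough upper bound: take the most extreme reasonable value of each factor.
# All bounds are assumed for illustration.

population_hi = 1e7            # generous metro population
people_per_household_lo = 1.5  # smaller households mean more pianos
piano_fraction_hi = 1 / 5      # one household in five with a piano
tunings_per_year_hi = 2        # every piano tuned twice a year
capacity_lo = 2 * 200          # a slow tuner: 2 jobs/day, 200 days/year

upper_bound = (population_hi / people_per_household_lo
               * piano_fraction_hi * tunings_per_year_hi / capacity_lo)
print(f"Upper bound on tuners: {upper_bound:.0f}")  # ~6700, still below 10,000
```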

Explanation

Fermi estimates generally work because the estimations of the individual terms are often close to correct, and overestimates and underestimates help cancel each other out. That is, if there is no consistent bias, a Fermi calculation that involves the multiplication of several estimated factors (such as the number of piano tuners in Chicago) will probably be more accurate than might be first supposed.

In detail, multiplying estimates corresponds to adding their logarithms; thus one obtains a sort of Wiener process or random walk on the logarithmic scale, which diffuses as √n (in the number of terms n). In discrete terms, the number of overestimates minus underestimates will have a binomial distribution. In continuous terms, if one makes a Fermi estimate of n steps, each with standard deviation σ units on the log scale from the actual value, then the overall estimate will have standard deviation σ√n, since the standard deviation of a sum scales as √n in the number of summands.

For instance, if one makes a 9-step Fermi estimate, at each step overestimating or underestimating the correct number by a factor of 2 (i.e., with a standard deviation of a factor of 2 on the log scale), then after 9 steps the standard error will have grown by a logarithmic factor of √9 = 3, so 2³ = 8. Thus one will expect to be within 1/8 to 8 times the correct value – within an order of magnitude, and much less than the worst case of erring by a factor of 2⁹ = 512 (about 2.71 orders of magnitude). If one has a shorter chain or estimates more accurately, the overall estimate will be correspondingly better.
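
A quick Monte Carlo check of this argument, assuming each step independently over- or underestimates by a factor of 2 with equal probability:

```python
import random

# Simulate 9-step Fermi estimates in which every step is off by a factor
# of 2, up or down with equal probability, and look at the total error.
# This mirrors the random-walk-on-the-log-scale argument above.

def fermi_error(steps=9):
    factor = 1.0
    for _ in range(steps):
        factor *= random.choice([2.0, 0.5])  # overestimate or underestimate
    return max(factor, 1 / factor)           # total error as a factor >= 1

trials = sorted(fermi_error() for _ in range(100_000))
print(f"median error factor: {trials[len(trials) // 2]:.0f}")  # 8, i.e. 2**3
print(f"worst error factor:  {max(trials):.0f}")               # 512, i.e. 2**9
```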

Related Research Articles

A histogram is a visual representation of the distribution of quantitative data. The term was first introduced by Karl Pearson. To construct a histogram, the first step is to "bin" the range of values, that is, divide the entire range of values into a series of intervals, and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) are adjacent and are typically of equal size.
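
A minimal sketch of the binning procedure just described, using arbitrary sample data and an arbitrary bin count:

```python
# Build a histogram by hand: bin the range, then count values per bin.
# The data and the number of bins are illustrative choices.

data = [2.1, 2.4, 3.0, 3.3, 3.7, 4.2, 4.4, 5.1, 5.8, 6.5]
num_bins = 4

lo, hi = min(data), max(data)
width = (hi - lo) / num_bins                      # equal-size, adjacent bins

counts = [0] * num_bins
for x in data:
    i = min(int((x - lo) / width), num_bins - 1)  # clamp the max into last bin
    counts[i] += 1

for i, c in enumerate(counts):
    print(f"[{lo + i * width:.2f}, {lo + (i + 1) * width:.2f}): {'#' * c}")
```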

Two numbers are within an order of magnitude of each other if their ratio is between 1/10 and 10. In other words, the two numbers are within a factor of 10 of each other. This concept helps articulate that 1000 and 1010 are close to each other in the same way that 1.00 and 1.01 are, even though the absolute differences are 10 and 0.01, respectively.

Standard deviation

In statistics, the standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used in the determination of what constitutes an outlier and what does not.

Monte Carlo method

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, physicist Stanislaw Ulam, was inspired by his uncle's gambling habits.
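
A textbook illustration of the idea, estimating π by random sampling; the sample count below is an arbitrary choice:

```python
import random

# Monte Carlo sketch: estimate pi by sampling random points in the unit
# square and counting how many land inside the inscribed quarter circle.

def estimate_pi(samples=1_000_000):
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples  # quarter-circle area ratio, times 4

print(f"pi ~ {estimate_pi():.3f}")  # converges slowly, as 1/sqrt(n)
```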

The Fermi energy is a concept in quantum mechanics usually referring to the energy difference between the highest and lowest occupied single-particle states in a quantum system of non-interacting fermions at absolute zero temperature. In a Fermi gas, the lowest occupied state is taken to have zero kinetic energy, whereas in a metal, the lowest occupied state is typically taken to mean the bottom of the conduction band.

In physics, screening is the damping of electric fields caused by the presence of mobile charge carriers. It is an important part of the behavior of charge-carrying fluids, such as ionized gases, electrolytes, and charge carriers in electronic conductors. In a fluid with a given permittivity ε, composed of electrically charged constituent particles, each pair of particles with charges q₁ and q₂ interacts through the Coulomb force, F = q₁q₂ r̂ /(4πε|r|²), where the vector r is the relative position between the charges. This interaction complicates the theoretical treatment of the fluid. For example, a naive quantum mechanical calculation of the ground-state energy density yields infinity, which is unreasonable. The difficulty lies in the fact that even though the Coulomb force diminishes with distance as 1/r², the average number of particles at each distance r is proportional to r², assuming the fluid is fairly isotropic. As a result, a charge fluctuation at any one point has non-negligible effects at large distances.

In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one important context is numerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation.

Random walk

In mathematics, a random walk, sometimes known as a drunkard's walk, is a stochastic process that describes a path that consists of a succession of random steps on some mathematical space.

Hand-waving is a pejorative label for attempting to be seen as effective – in word, reasoning, or deed – while actually doing nothing effective or substantial. It is often applied to debating techniques that involve fallacies, misdirection and the glossing over of details. It is also used academically to indicate unproven claims and skipped steps in proofs, with some specific meanings in particular fields, including literary criticism, speculative fiction, mathematics, logic, science and engineering.

An approximation is anything that is intentionally similar but not exactly equal to something else.

In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, the regression coefficient in a regression, the mean difference, or the risk of a particular event happening. Effect sizes are a complementary tool for statistical hypothesis testing, and play an important role in power analyses to assess the sample size required for new experiments. Effect sizes are fundamental in meta-analyses, which aim to provide a combined effect size based on data from multiple studies. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.

Free body diagram

In physics and engineering, a free body diagram is a graphical illustration used to visualize the applied forces, moments, and resulting reactions on a body in a given condition. It depicts a body or connected bodies with all the applied forces, moments, and reactions that act on them. The body may consist of multiple internal members, or be a compact body. A series of free bodies and other diagrams may be necessary to solve complex problems. Sometimes, in order to calculate the resultant force graphically, the applied forces are arranged as the edges of a polygon of forces or force polygon.

A back-of-the-envelope calculation is a rough calculation, typically jotted down on any available scrap of paper such as an envelope. It is more than a guess but less than an accurate calculation or mathematical proof. The defining characteristic of back-of-the-envelope calculations is the use of simplified assumptions.

Sample size determination or estimation is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power. In complex studies, different sample sizes may be allocated, such as in stratified surveys or experimental designs with multiple treatment groups. In a census, data is sought for an entire population, hence the intended sample size is equal to the population. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.

Guesstimate is an informal English portmanteau of guess and estimate, first used by American statisticians in 1934 or 1935. It is defined as an estimate made without using adequate or complete information, or, more strongly, as an estimate arrived at by guesswork or conjecture. Like the words estimate and guess, guesstimate may be used as a verb or a noun. A guesstimate may be a first rough approximation pending a more accurate estimate, or it may be an educated guess at something for which no better information will become available.

In psychology, number sense is the term used for the hypothesis that some animals, particularly humans, have a biologically determined ability that allows them to represent and manipulate large numerical quantities. The term was popularized by Stanislas Dehaene in his 1997 book The Number Sense, though it was originally coined by the mathematician Tobias Dantzig in his 1930 text Number: The Language of Science.

Spherical cow

The spherical cow is a humorous metaphor for highly simplified scientific models of complex phenomena. Originating in theoretical physics, the metaphor refers to physicists' tendency to develop toy models that reduce a problem to the simplest form imaginable, making calculations more feasible, even if the simplification hinders the model's application to reality.

68–95–99.7 rule

In statistics, the 68–95–99.7 rule, also known as the empirical rule, and sometimes abbreviated 3sr, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: approximately 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively.
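
A quick empirical check of the rule on standard normal samples; the sample size is an arbitrary choice:

```python
import random

# Check the 68-95-99.7 rule empirically with standard normal samples.
samples = [random.gauss(0, 1) for _ in range(100_000)]

for k in (1, 2, 3):
    within = sum(1 for x in samples if abs(x) <= k) / len(samples)
    print(f"within {k} standard deviation(s): {within:.1%}")  # ~68.3%, 95.4%, 99.7%
```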

In statistics and in particular statistical theory, unbiased estimation of a standard deviation is the calculation from a statistical sample of an estimated value of the standard deviation of a population of values, in such a way that the expected value of the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics since its need is avoided by standard procedures, such as the use of significance tests and confidence intervals, or by using Bayesian analysis.

Estimation

Estimation is the process of finding an estimate or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available. Typically, estimation involves "using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter". The sample provides information that can be projected, through various formal or informal processes, to determine a range most likely to describe the missing information. An estimate that turns out to be incorrect will be an overestimate if the estimate exceeds the actual result and an underestimate if the estimate falls short of the actual result.

References

  1. "A Backward Glance: Eyewitnesses to Trinity" (PDF). Nuclear Weapons Journal (2). Los Alamos National Laboratory: 45. 2005. Retrieved 27 October 2022.
  2. Von Baeyer, Hans Christian (September 1988). "How Fermi Would Have Fixed It". The Sciences. 28 (5): 2–4. doi:10.1002/j.2326-1951.1988.tb03037.x.
  3. Von Baeyer, Hans Christian (2001). "The Fermi Solution". The Fermi Solution: Essays on Science. Dover Publications. pp. 3–12. ISBN   9780486417073. OCLC   775985788.
  4. Weinstein, L.B. (2012). "Fermi Questions". Old Dominion University. Retrieved 27 October 2022.
  5. Fermi Questions. Richard K Curtis. 2001.
  6. Ćirković, Milan M. (10 May 2018). The Great Silence: Science and Philosophy of Fermi's Paradox. Oxford University Press. ISBN   9780199646302.

Further reading

The following books contain many examples of Fermi problems with solutions:

There are or have been a number of university-level courses devoted to estimation and the solution of Fermi problems. The materials for these courses are a good source for additional Fermi problem examples and material about solution strategies: