An approximation is anything that is intentionally similar but not exactly equal to something else.
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. [1] Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. [2] It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
Approximation usually occurs when an exact form or an exact numerical value is unknown or difficult to obtain. However, some known form may exist that represents the real form closely enough that no significant deviation is found. For example, 1.5 × 10⁶ means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between 1,450,000 and 1,550,000); this is in contrast to the notation 1.500 × 10⁶, which means that the true value is 1,500,000 to the nearest thousand (implying that the true value is somewhere between 1,499,500 and 1,500,500).
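The interval interpretation of significant figures can be sketched in a few lines of Python. The helper `round_sig` is an illustrative name, not a standard library function; it rounds to a given number of significant figures.

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

# Any value between 1,450,000 and 1,550,000 rounds to 1.5 x 10^6
# at 2 significant figures:
print(round_sig(1_480_000, 2))  # 1500000
print(round_sig(1_549_999, 2))  # 1500000

# At 4 significant figures (1.500 x 10^6) the implied interval is much tighter:
print(round_sig(1_499_700, 4))  # 1500000
print(round_sig(1_504_000, 4))  # 1504000
```

Both 1,480,000 and 1,549,999 collapse to the same 2-significant-figure value, while the 4-figure rounding distinguishes values only a few thousand apart.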
Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results. [3] Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
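The last point, that some decimal numbers have no finite binary representation, is easy to demonstrate: one tenth is stored as the nearest representable 64-bit float, so arithmetic on it carries a small approximation error.

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so the stored double is only an
# approximation of one tenth; Decimal reveals the exact stored value.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print(0.1 + 0.2 == 0.3)                # False: both sides carry rounding error
print(abs((0.1 + 0.2) - 0.3) < 1e-9)   # True: compare with a tolerance instead
```

This is why floating-point comparisons are normally made with a tolerance rather than exact equality.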
Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum k/2 + k/4 + k/8 + ... + k/2ⁿ is asymptotically equal to k. No consistent notation is used throughout mathematics: some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal, whereas other texts use the symbols the other way around.
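A short numerical check of an asymptotic statement of this kind: the partial sums of k/2 + k/4 + ... + k/2ⁿ approach k as n grows (k = 10 is an arbitrary example value).

```python
# Partial sums of the geometric series k/2 + k/4 + ... + k/2**n
# approach k as n grows, so the sum is asymptotically equal to k.
k = 10.0
for n in (5, 10, 20):
    s = sum(k / 2**i for i in range(1, n + 1))
    print(n, s)  # 9.6875, then 9.990..., then 9.99999...
```

The relative error shrinks geometrically, which is exactly what "asymptotically equal" promises.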
The approximately equals sign, ≈, was introduced by British mathematician Alfred Greenhill. [4]
The following symbols are used in LaTeX markup:

≈ (\approx), usually to indicate approximation between numbers.
≉ (\not\approx), usually to indicate that numbers are not approximately equal.
≃ (\simeq), usually to indicate asymptotic equivalence between functions.
∼ (\sim), usually to indicate proportionality between functions.
≅ (\cong), usually to indicate congruence between figures.
≂ (\eqsim), usually to indicate that two quantities are equal up to constants.
⪅ (\lessapprox) and ⪆ (\gtrapprox), usually to indicate that either the inequality holds or the two values are approximately equal.

Symbols used to denote items that are approximately equal are wavy or dotted equals signs. [5]
U+223C ∼ TILDE OPERATOR | Also sometimes used to indicate proportionality.
U+223D ∽ REVERSED TILDE | Also sometimes used to indicate proportionality.
U+2243 ≃ ASYMPTOTICALLY EQUAL TO | A combination of "≈" and "=", used to indicate asymptotic equality.
U+2245 ≅ APPROXIMATELY EQUAL TO | Another combination of "≈" and "=", used to indicate isomorphism or congruence.
U+2246 ≆ APPROXIMATELY BUT NOT ACTUALLY EQUAL TO
U+2247 ≇ NEITHER APPROXIMATELY NOR ACTUALLY EQUAL TO
U+2248 ≈ ALMOST EQUAL TO
U+2249 ≉ NOT ALMOST EQUAL TO
U+224A ≊ ALMOST EQUAL OR EQUAL TO | Another combination of "≈" and "=", used to indicate equivalence or approximate equivalence.
U+2250 ≐ APPROACHES THE LIMIT | Can be used to represent the approach of a variable, y, to a limit. [6]
U+2252 ≒ APPROXIMATELY EQUAL TO OR THE IMAGE OF | Used like "≈" or "≃" in Japan, Taiwan, and Korea.
U+2253 ≓ IMAGE OF OR APPROXIMATELY EQUAL TO | A reversed variation of U+2252 ≒ APPROXIMATELY EQUAL TO OR THE IMAGE OF.
U+225F ≟ QUESTIONED EQUAL TO
U+2A85 ⪅ LESS-THAN OR APPROXIMATE
U+2A86 ⪆ GREATER-THAN OR APPROXIMATE
Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.
The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work. [7] The old theory becomes an approximation to the new theory.
Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes.
Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other. [8] An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.
The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions.
The most common versions of philosophy of science accept that empirical measurements are always approximations — they do not perfectly represent what is being measured.
Within the European Union (EU), "approximation" refers to a process through which EU legislation is implemented and incorporated within Member States' national laws, despite variations in the existing legal framework in each country. Approximation is required as part of the pre-accession process for new member states, [9] and as a continuing process when required by an EU Directive. Approximation is a key word generally employed within the title of a directive, for example the Trade Marks Directive of 16 December 2015 serves "to approximate the laws of the Member States relating to trade marks". [10] The European Commission describes approximation of law as "a unique obligation of membership in the European Union". [9]
In mathematics, the logarithm to base b is the inverse function of exponentiation with base b. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 1000 = 10³, the logarithm base 10 of 1000 is 3, or log₁₀(1000) = 3. The logarithm of x to base b is denoted log_b(x), or without parentheses, log_b x. When the base is clear from the context or is irrelevant it is sometimes written log x.
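The inverse relationship between logarithm and exponentiation can be checked directly with Python's standard `math` module; note that the two-argument form `math.log(x, b)` is itself computed by an approximate division of natural logarithms.

```python
import math

# log_b(x) is the exponent to which b must be raised to produce x.
print(math.log10(1000))        # 3.0: since 10**3 == 1000
print(math.log(1000, 10))      # computed as ln(1000)/ln(10), so it can be off
                               # in the last bit (e.g. 2.9999999999999996)
print(10 ** math.log10(1000))  # 1000.0: exponentiation inverts the logarithm
```

The dedicated `math.log10` is both faster and more accurate than the generic two-argument form for base 10.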
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics, numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x₀ for a root of f. If f satisfies certain assumptions and the initial guess is close, then x₁ = x₀ − f(x₀)/f′(x₀) is a better approximation of the root than x₀. The process is repeated, xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ), until a sufficiently precise value is reached.
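The iteration xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) can be sketched as follows; the target function, tolerance, and starting guess are illustrative choices, here picked so the method converges to √2.

```python
# A minimal sketch of Newton's method, applied to f(x) = x**2 - 2
# so that the iteration converges to sqrt(2).

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / fprime(x)
        if abs(f(x)) < tol:
            return x
    raise RuntimeError("did not converge")

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # 1.4142135623730951, i.e. sqrt(2)
```

Starting from 1.0 the iterates are 1.5, 1.41666..., 1.4142156..., illustrating the method's quadratic convergence near a simple root.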
The number π is a mathematical constant that is the ratio of a circle's circumference to its diameter, approximately equal to 3.14159. The number π appears in many formulae across mathematics and physics. It is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as 22/7 are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an equation involving only finite sums, products, powers, and integers. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of π appear to be randomly distributed, but no proof of this conjecture has been found.
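Two classical rational approximations of π, 22/7 and 355/113, illustrate how such fractions trade simplicity for accuracy; the comparison below uses the standard library only.

```python
import math
from fractions import Fraction

# Compare two well-known historical rational approximations of pi.
for frac in (Fraction(22, 7), Fraction(355, 113)):
    err = abs(float(frac) - math.pi)
    print(frac, float(frac), err)

# 22/7 is accurate to about 2 decimal places; 355/113 to about 6,
# despite having only a 3-digit denominator.
```

No such fraction can ever be exact, since π is irrational.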
The method of least squares is a parameter estimation method in regression analysis based on minimizing the sum of the squares of the residuals made in the results of each individual equation.
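For a straight-line fit, minimizing the sum of squared residuals has a closed-form solution; the helper name `fit_line` and the sample data below are illustrative, not from the text.

```python
# Least-squares fit of a line y = a*x + b, minimizing the sum of
# squared residuals via the standard closed-form formulas.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Noisy samples of y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)
print(a, b)  # close to the true slope 2 and intercept 1
```

The fitted parameters approximate the underlying line even though no single data point lies exactly on it.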
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter ε. The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of ε usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms: the solution to the known problem and the 'first order' perturbation correction.
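A toy illustration (the equation and coefficients below are chosen for the example, not taken from the text): the positive root of x² + εx − 1 = 0 has solvable part x² − 1 = 0 with root x₀ = 1, and matching powers of ε in x = x₀ + εx₁ + ε²x₂ gives x₁ = −1/2 and x₂ = 1/8.

```python
import math

# Perturbation expansion of the positive root of x**2 + eps*x - 1 = 0.
eps = 0.1
exact = (-eps + math.sqrt(eps**2 + 4)) / 2    # exact positive root
first_order = 1 - eps / 2                      # truncate after two terms
second_order = 1 - eps / 2 + eps**2 / 8        # keep the eps**2 correction too

print(exact, first_order, second_order)
# Each extra term shrinks the error by roughly a factor of eps.
```

Truncating after the first-order correction already gives three correct digits here, which is the usual practical appeal of the method.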
In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral. The term numerical quadrature is more or less a synonym for "numerical integration", especially as applied to one-dimensional integrals. Some authors refer to numerical integration over more than one dimension as cubature; others take "quadrature" to include higher-dimensional integration.
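The composite trapezoidal rule is one of the simplest such quadrature algorithms; the sketch below applies it to an integral whose exact value is known, so the approximation error is visible.

```python
# Composite trapezoidal rule: approximate the integral of f over [a, b]
# by summing the areas of n trapezoids.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The integral of x**2 over [0, 1] is exactly 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0, 1000))  # ~0.333333 (error ~1.7e-7)
```

Doubling n cuts the error by about a factor of four, reflecting the rule's O(h²) accuracy.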
In mathematics, analytic number theory is a branch of number theory that uses methods from mathematical analysis to solve problems about the integers. It is often said to have begun with Peter Gustav Lejeune Dirichlet's 1837 introduction of Dirichlet L-functions to give the first proof of Dirichlet's theorem on arithmetic progressions. It is well known for its results on prime numbers and additive number theory.
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
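A uniform quantizer, the simplest case, maps each input onto a grid of fixed step size by rounding; `quantize` is an illustrative helper, not a library function.

```python
# Uniform quantization: map real values onto a grid of step size `step`
# by rounding -- the basic operation behind digital signal representation.

def quantize(x, step):
    return round(x / step) * step

signal = [0.12, 0.49, 0.51, 0.87]
quantized = [quantize(x, 0.25) for x in signal]
print(quantized)  # [0.0, 0.5, 0.5, 0.75]
```

Each sample moves by at most half a step; that difference is the quantization error, and it is irreversible, which is why quantization underlies lossy compression.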
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior.
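A classic example of such limiting behavior is Stirling's approximation, n! ~ √(2πn)(n/e)ⁿ: the ratio of the two sides tends to 1 as n grows, which the short check below makes concrete.

```python
import math

# Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)**n.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    print(n, math.factorial(n) / stirling(n))
# ratios ~1.017, ~1.008, ~1.004 -- approaching 1 as n grows
```

The absolute error actually grows with n, but the relative error vanishes; asymptotic equality is a statement about the ratio, not the difference.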
In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.
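In modern form the method brackets a root between two test values of opposite sign and repeatedly replaces one endpoint with the x-intercept of the secant line through them; the function and tolerance below are example choices.

```python
# False position (regula falsi): keep a sign-changing bracket [a, b] and
# move one endpoint to the secant line's x-intercept each step.

def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the secant line
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Root of x**3 - x - 2 between 1 and 2 (true root ~1.5213797)
print(regula_falsi(lambda x: x**3 - x - 2, 1.0, 2.0))
```

Unlike Newton's method it needs no derivative, and the bracket guarantees the iterates stay around the root, at the cost of slower (linear) convergence.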
In mathematics, linearization is finding the linear approximation to a function at a given point. The linear approximation of a function is the first order Taylor expansion around the point of interest. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, economics, and ecology.
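The first-order Taylor expansion f(x) ≈ f(a) + f′(a)(x − a) is easy to demonstrate for the square-root function near a = 4, where f(4) = 2 and f′(4) = 1/(2√4) = 0.25 (the function and point are example choices).

```python
import math

# Linearization of sqrt at a = 4: f(x) ~ f(a) + f'(a)*(x - a).
def sqrt_linearized(x, a=4.0):
    return math.sqrt(a) + (x - a) / (2 * math.sqrt(a))

print(sqrt_linearized(4.1))  # 2.025
print(math.sqrt(4.1))        # 2.0248456...; the linear estimate is close near a
```

The approximation degrades as x moves away from a, which is why linearization is a local tool, as in the stability analysis of equilibria mentioned above.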
Methods of computing square roots are algorithms for approximating the non-negative square root of a positive real number. Since all square roots of natural numbers, other than of perfect squares, are irrational, square roots can usually only be computed to some finite precision: these methods typically construct a series of increasingly accurate approximations.
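The oldest such method is Heron's (Babylonian) method, which repeatedly averages a guess x with a/x; it is Newton's method applied to f(x) = x² − a.

```python
# Heron's method for square roots: average x with a/x each step.

def heron_sqrt(a, x=1.0, iterations=8):
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

print(heron_sqrt(2))   # 1.4142135623730951
print(heron_sqrt(10))  # 3.1622776601683795
```

A handful of iterations from a crude starting guess already reaches full double precision, thanks to the quadratic convergence inherited from Newton's method.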
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series.
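A concrete instance: the [1/1] Padé approximant of exp(x) is (1 + x/2)/(1 − x/2), whose power series matches exp's Taylor series through the x² term, yet it is typically more accurate than the degree-2 Taylor polynomial itself (the evaluation point below is an arbitrary example).

```python
import math

# [1/1] Pade approximant of exp(x) versus its degree-2 Taylor polynomial.
def pade_exp(x):
    return (1 + x / 2) / (1 - x / 2)

def taylor2_exp(x):
    return 1 + x + x * x / 2

x = 0.5
print(math.exp(x))      # 1.6487...
print(pade_exp(x))      # 1.6666...
print(taylor2_exp(x))   # 1.625
# |pade - exp| < |taylor - exp| at this point
```

This extra accuracy from a rational form, at no extra series data, is the appeal of Padé approximants.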
Approximations for the mathematical constant pi in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926. It is most useful for accelerating the convergence of a sequence that is converging linearly. A precursor form was known to Seki Kōwa and applied to the rectification of the circle, i.e., to the calculation of π.
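The accelerated sequence is x̂ₙ = xₙ − (xₙ₊₁ − xₙ)² / (xₙ₊₂ − 2xₙ₊₁ + xₙ). The demonstration below uses the linearly convergent iteration xₙ₊₁ = 0.5xₙ + 1, whose fixed point is 2 (an example chosen so the error is exactly geometric).

```python
# Aitken's delta-squared process applied to x_{n+1} = 0.5*x_n + 1,
# which converges linearly to the fixed point 2.

def aitken(x0, x1, x2):
    denom = x2 - 2 * x1 + x0
    return x0 - (x1 - x0) ** 2 / denom

x = [0.0]
for _ in range(3):
    x.append(0.5 * x[-1] + 1)

print(x)                         # [0.0, 1.0, 1.5, 1.75] -- still far from 2
print(aitken(x[0], x[1], x[2]))  # 2.0: exact, because the error here is
                                 # exactly geometric
```

For sequences whose error is only approximately geometric, the accelerated value is not exact but converges much faster than the original.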
In numerical analysis, Gauss–Legendre quadrature is a form of Gaussian quadrature for approximating the definite integral of a function. For integrating over the interval [−1, 1], the rule takes the form ∫₋₁¹ f(x) dx ≈ Σᵢ wᵢ f(xᵢ), where the nodes xᵢ are the roots of the nth Legendre polynomial and the wᵢ are the corresponding quadrature weights.
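The 2-point rule is simple enough to write out: its nodes are ±1/√3 with weights 1, and with n points the rule is exact for polynomials up to degree 2n − 1, so the 2-point rule integrates cubics exactly.

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes +/- 1/sqrt(3), weights 1.
def gauss_legendre_2(f):
    node = 1 / math.sqrt(3)
    return f(-node) + f(node)

# The integral of x**3 + x**2 over [-1, 1] is exactly 2/3; the odd cubic
# part integrates to zero and the rule reproduces this by symmetry.
print(gauss_legendre_2(lambda x: x**3 + x**2))  # 0.6666666...
```

Two function evaluations suffice where a trapezoidal rule would need many, which is why Gaussian rules are preferred for smooth integrands.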
A mathematical constant is a number whose value is fixed by an unambiguous definition, often referred to by a special symbol, or by mathematicians' names to facilitate using it across multiple mathematical problems. Constants arise in many areas of mathematics, with constants such as e and π occurring in such diverse contexts as geometry, number theory, statistics, and calculus.
The proper generalized decomposition (PGD) is an iterative numerical method for solving boundary value problems (BVPs), that is, partial differential equations constrained by a set of boundary conditions, such as Poisson's equation or Laplace's equation.
In mathematics, quasilinearization is a technique which replaces a nonlinear differential equation or operator equation with a sequence of linear problems, which are presumed to be easier, and whose solutions approximate the solution of the original nonlinear problem with increasing accuracy. It is a generalization of Newton's method; the word "quasilinearization" is commonly used when the differential equation is a boundary value problem.