Pythagorean addition


In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. According to the Pythagorean theorem, for a triangle with sides $a$ and $b$, this length can be calculated as

$$a \oplus b = \sqrt{a^2 + b^2},$$


where $\oplus$ denotes the Pythagorean addition operation. [1]

This operation can be used in the conversion of Cartesian coordinates to polar coordinates. It also provides a simple notation and terminology for some formulas when its summands are complicated; for example, the energy-momentum relation in physics becomes

$$E = mc^2 \oplus pc.$$

It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature. [2]

Applications

Figure: Example of Pythagorean addition of independent errors using vector addition of orthogonal vectors

Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function to convert from Cartesian coordinates to polar coordinates: [3] [4]

$$r = x \oplus y = \sqrt{x^2 + y^2}, \qquad \theta = \operatorname{atan2}(y, x).$$
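For instance, a minimal sketch of this conversion in Python, using the standard library's math.hypot and math.atan2 (the function name cartesian_to_polar and the sample point are illustrative, not from any particular library):

```python
import math

def cartesian_to_polar(x, y):
    """Convert Cartesian coordinates (x, y) to polar coordinates (r, theta)."""
    r = math.hypot(x, y)       # Pythagorean addition: sqrt(x**2 + y**2)
    theta = math.atan2(y, x)   # angle in radians
    return r, theta

print(cartesian_to_polar(3.0, 4.0))   # (5.0, 0.9272952180016122)
```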

If measurements have independent errors $\Delta x_1, \Delta x_2, \ldots, \Delta x_n$ respectively, the quadrature method gives the overall error

$$\Delta x = \Delta x_1 \oplus \Delta x_2 \oplus \cdots \oplus \Delta x_n = \sqrt{\Delta x_1^2 + \Delta x_2^2 + \cdots + \Delta x_n^2},$$

whereas the upper limit of the overall error is

$$\Delta x = \Delta x_1 + \Delta x_2 + \cdots + \Delta x_n$$

if the errors were not independent. [5]

This is equivalent to finding the magnitude of the resultant of adding orthogonal vectors, each with magnitude equal to one of the uncertainties, using the Pythagorean theorem.

In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives six digital numbers of shot noise, three of dark current noise and two of Johnson–Nyquist noise under a specific condition, the overall noise is

$$6 \oplus 3 \oplus 2 = \sqrt{6^2 + 3^2 + 2^2} = 7$$

digital numbers, [6] showing the dominance of larger sources of noise.
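Since Python 3.8, math.hypot accepts any number of arguments, so this quadrature sum can be reproduced directly (a sketch using the example's noise values):

```python
import math

# Independent noise sources, in digital numbers (values from the example above)
shot_noise = 6.0
dark_current_noise = 3.0
johnson_nyquist_noise = 2.0

total_noise = math.hypot(shot_noise, dark_current_noise, johnson_nyquist_noise)
print(total_noise)   # 7.0, since sqrt(36 + 9 + 4) = sqrt(49)
```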

Properties

The operation $\oplus$ is associative and commutative, [7] and

$$x \oplus y \oplus z = \sqrt{x^2 + y^2 + z^2}.$$

This means that the real numbers under $\oplus$ form a commutative semigroup.

The real numbers under $\oplus$ are not a group, because $\oplus$ can never produce a negative number as its result, whereas each element of a group must be the result of combining itself with the identity element under the group operation. On the non-negative numbers, it is still not a group, because Pythagorean addition of one number with a second positive number can only increase the first number, so no positive number can have an inverse element. Instead, it forms a commutative monoid on the non-negative numbers, with zero as its identity.
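These properties can be checked numerically, for instance in Python (a sketch only; because of floating-point rounding, associativity holds only approximately):

```python
import math

a, b, c = 2.0, 3.0, 6.0

# Commutativity: a ⊕ b == b ⊕ a
print(math.hypot(a, b) == math.hypot(b, a))    # True

# Associativity: (a ⊕ b) ⊕ c ≈ a ⊕ (b ⊕ c), up to rounding error
left = math.hypot(math.hypot(a, b), c)
right = math.hypot(a, math.hypot(b, c))
print(math.isclose(left, right))               # True; both are approximately 7

# Zero is the identity on the non-negative reals: a ⊕ 0 == a
print(math.hypot(a, 0.0) == a)                 # True
```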

Implementation

Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square root function on the sum of two squares, but hypot avoids problems that occur when squaring very large or very small numbers. If calculated using the natural formula,

$$r = \sqrt{x^2 + y^2},$$

the squares of very large or small values of $x$ and $y$ may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and overflow. The hypot function was designed to calculate the result without causing this problem. [8]
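For instance, a short Python sketch contrasting the natural formula with math.hypot on values whose squares overflow or underflow an IEEE 754 double (the sample values are illustrative):

```python
import math

# Large inputs: the squares overflow to infinity under IEEE 754 doubles.
x, y = 3e200, 4e200
print(math.sqrt(x * x + y * y))   # inf  (x*x overflows)
print(math.hypot(x, y))           # 5e+200, the correct result

# Small inputs: the squares underflow to zero.
x, y = 3e-200, 4e-200
print(math.sqrt(x * x + y * y))   # 0.0  (x*x underflows)
print(math.hypot(x, y))           # 5e-200, the correct result
```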

If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN). [9]
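Python's math.hypot exhibits this behaviour on platforms that follow IEEE 754 (a quick illustrative check, assuming CPython 3.8 or later; it is not a normative statement of the standard):

```python
import math

inf, nan = float("inf"), float("nan")

print(math.hypot(inf, 2.0))   # inf
print(math.hypot(inf, nan))   # inf: an infinite input dominates even a NaN
print(math.hypot(nan, 2.0))   # nan
```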

Since C++17, there has been an additional three-argument overload of hypot for 3D calculations, computing $\sqrt{x^2 + y^2 + z^2}$. [10]

Calculation order

The difficulty with the naive implementation is that $x^2 + y^2$ may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that $|x| \ge |y|$, and then to use the equivalent form

$$x \oplus y = |x| \sqrt{1 + \left(\frac{y}{x}\right)^2}.$$

The computation of $y/x$ cannot overflow unless both $x$ and $y$ are zero. If $y/x$ underflows, the final result is equal to $|x|$, which is correct within the precision of the calculation. The square root is taken of a value between 1 and 2. Finally, the multiplication by $|x|$ cannot underflow, and overflows only when the result is too large to represent. [8] This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root. Typically, the implementation is slower than the naive one by a factor of 2.5 to 3. [11]
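A sketch of this calculation order in Python (illustrative only; the helper name hypot_scaled is not from any library, and production implementations such as the one in [8] handle rounding and special values more carefully):

```python
import math

def hypot_scaled(x, y):
    """Compute sqrt(x*x + y*y) without intermediate overflow, by factoring out |x|."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x                       # ensure x >= y
    if x == 0.0:
        return 0.0                        # both inputs are zero
    t = y / x                             # in [0, 1]; cannot overflow
    return x * math.sqrt(1.0 + t * t)     # sqrt argument is between 1 and 2

print(hypot_scaled(3e200, 4e200))   # 5e+200, computed without overflow
```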

More complex implementations avoid this cost by dividing the inputs into more cases: for instance, returning the larger value directly when it is much larger than the smaller one, scaling both inputs down by a power of two when they are so large that squaring would overflow, scaling them up when squaring would underflow, and falling back to the naive formula otherwise.

However, this implementation can be extremely slow when it causes branch mispredictions because of the different cases. Additional techniques allow the result to be computed more accurately, e.g. with error less than one ulp. [8]
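One possible shape of such a case analysis, sketched in Python with hypothetical power-of-two thresholds and scale factors (this is not the algorithm of [8], only an illustration of the scaling idea):

```python
import math

# Hypothetical scale factors, chosen as powers of two so that scaling is exact
# for IEEE 754 doubles.
BIG = 2.0 ** 500      # above this, squaring risks overflow
SMALL = 2.0 ** -500   # below this, squaring risks underflow
SCALE = 2.0 ** 600

def hypot_cases(x, y):
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x
    if x == 0.0:
        return 0.0
    if x > BIG:
        # Scale both inputs down, then scale the result back up.
        return math.sqrt((x / SCALE) ** 2 + (y / SCALE) ** 2) * SCALE
    if x < SMALL:
        # Scale both inputs up, then scale the result back down.
        return math.sqrt((x * SCALE) ** 2 + (y * SCALE) ** 2) / SCALE
    # Mid-range inputs: the naive formula is safe and avoids any division.
    return math.sqrt(x * x + y * y)

print(hypot_cases(1e300, 1e300))   # about 1.414e+300, without overflow
```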

Programming language support

The function is present in many programming languages and libraries, including CSS, [12] C++11, [13] D, [14] Go, [15] JavaScript (since ES2015), [16] Julia, [17] Java (since version 1.5), [18] Kotlin, [19] MATLAB, [20] PHP, [21] Python, [22] Ruby, [23] Rust, [24] and Scala. [25]

See also

Alpha max plus beta min algorithm, a high-speed approximation of the square root of the sum of two squares


References

  1. Moler, Cleve; Morrison, Donald (1983). "Replacing square roots by Pythagorean sums". IBM Journal of Research and Development. 27 (6): 577–581. CiteSeerX 10.1.1.90.5651. doi:10.1147/rd.276.0577.
  2. Johnson, David L. (2017). "12.2.3 Addition in Quadrature". Statistical Tools for the Comprehensive Practice of Industrial Hygiene and Environmental Health Sciences. John Wiley & Sons. p. 289. ISBN 9781119143017.
  3. "SIN (3M): Trigonometric functions and their inverses". Unix Programmer's Manual: Reference Guide (4.3 Berkeley Software Distribution Virtual VAX-11 Version ed.). Department of Electrical Engineering and Computer Science, University of California, Berkeley. April 1986.
  4. Beebe, Nelson H. F. (2017). The Mathematical-Function Computation Handbook: Programming Using the MathCW Portable Software Library. Springer. p. 70. ISBN 9783319641102.
  5. Schneider, D. B. (1962). "Error Analysis in Measuring Systems". Proceedings of the 1962 Standards Laboratory Conference. p. 94.
  6. Bushberg, J. T.; et al. The Essential Physics of Medical Imaging, section 10.2.7. Wolters Kluwer Health.
  7. Falmagne, Jean-Claude (2015). "Deriving meaningful scientific laws from abstract, "gedanken" type, axioms: five examples". Aequationes Mathematicae. 89 (2): 393–435. doi:10.1007/s00010-015-0339-1. MR 3340218. S2CID 121424613.
  8. Borges, Carlos F. (2021). "Algorithm 1014: An Improved Algorithm for hypot(x, y)". ACM Transactions on Mathematical Software. 47 (1): 9:1–9:12. arXiv:1904.09481. doi:10.1145/3428446. S2CID 230588285.
  9. Fog, Agner (2020-04-27). "Floating point exception tracking and NAN propagation" (PDF). p. 6.
  10. Common mathematical functions std::hypot, std::hypotf, std::hypotl
  11. Measured on ARM and x64 (Intel and AMD) for different compilers with maximum optimization for 32 bit and 64 bit floats.
  12. Cimpanu, Catalin. "CSS to get support for trigonometry functions". ZDNet. Retrieved 2019-11-01.
  13. "Hypot - C++ Reference".
  14. "STD.math - D Programming Language".
  15. "Math package - math - PKG.go.dev".
  16. "Math.hypot() - JavaScript | MDN".
  17. "Mathematics · the Julia Language".
  18. "Math (Java 2 Platform SE 5.0)".
  19. "hypot - Kotlin Programming Language". Kotlin. Retrieved 2018-03-19.
  20. "Square root of sum of squares (Hypotenuse) - MATLAB hypot - MathWorks Benelux".
  21. "PHP: Hypot - Manual".
  22. "Math — Mathematical functions — Python 3.9.7 documentation".
  23. "Module: Math (Ruby 3.0.2)".
  24. "F64 - Rust".
  25. "Scala Standard Library 2.13.6 - scala.math".
