In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. According to the Pythagorean theorem, for a triangle with sides $a$ and $b$, this length can be calculated as $a \oplus b = \sqrt{a^2 + b^2},$ where $\oplus$ denotes the Pythagorean addition operation. [1]
This operation can be used in the conversion of Cartesian coordinates to polar coordinates. It also provides a simple notation and terminology for some formulas when its summands are complicated; for example, the energy-momentum relation in physics becomes $E = mc^2 \oplus pc.$ It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature; [2] it is related to the quadratic mean or "root mean square".
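As a concrete illustration, here is a minimal Python sketch of the operation; the helper name pythagorean_add is chosen for this example, and the standard library's math.hypot computes the same quantity:

    import math

    def pythagorean_add(a, b):
        """Pythagorean addition: a (+) b = sqrt(a**2 + b**2)."""
        return math.sqrt(a * a + b * b)

    print(pythagorean_add(3.0, 4.0))  # 5.0
    print(math.hypot(3.0, 4.0))       # 5.0, the library form of the same operation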
Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function to convert from Cartesian coordinates $(x, y)$ to polar coordinates $(r, \theta)$: [3] [4] $r = \sqrt{x^2 + y^2} = x \oplus y, \qquad \theta = \operatorname{atan2}(y, x).$
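In Python, for example, this conversion can be written with math.hypot and math.atan2; the function name to_polar is illustrative:

    import math

    def to_polar(x, y):
        """Convert Cartesian coordinates (x, y) to polar coordinates (r, theta)."""
        r = math.hypot(x, y)      # r = sqrt(x**2 + y**2)
        theta = math.atan2(y, x)  # angle in radians, in (-pi, pi]
        return r, theta

    print(to_polar(1.0, 1.0))  # (1.4142135623730951, 0.7853981633974483)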
If measurements $X, Y, \dots$ have independent errors $\Delta X, \Delta Y, \dots$ respectively, the quadrature method gives the overall error $\Delta Q = \Delta X \oplus \Delta Y \oplus \cdots = \sqrt{\Delta X^2 + \Delta Y^2 + \cdots},$ whereas the upper limit of the overall error is $\Delta U = \Delta X + \Delta Y + \cdots$ if the errors were not independent. [5]
This is equivalent to finding the magnitude of the resultant of adding orthogonal vectors, each with magnitude equal to the corresponding uncertainty, using the Pythagorean theorem.
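A small sketch of the quadrature rule in Python, with illustrative uncertainty values:

    import math

    # Independent uncertainties of two measured quantities (illustrative values).
    dx, dy = 3.0, 4.0

    combined = math.hypot(dx, dy)  # quadrature sum: sqrt(dx**2 + dy**2)
    worst_case = dx + dy           # upper limit if the errors were not independent

    print(combined)    # 5.0
    print(worst_case)  # 7.0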
In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives six digital numbers of shot noise, three of dark current noise and two of Johnson–Nyquist noise under a specific condition, the overall noise is $6 \oplus 3 \oplus 2 = \sqrt{6^2 + 3^2 + 2^2} = 7$ digital numbers, [6] showing the dominance of larger sources of noise.
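The worked figure can be checked directly; the variable names below are illustrative:

    import math

    # Noise contributions, in digital numbers, from the example above.
    shot, dark, johnson = 6.0, 3.0, 2.0

    total = math.sqrt(shot**2 + dark**2 + johnson**2)
    print(total)  # 7.0, dominated by the largest source (the shot noise)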
The root mean square of a finite set of $n$ numbers is just their Pythagorean sum, normalized to form a generalized mean by dividing by $\sqrt{n}$.
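A short sketch of this relationship; the helper rms is illustrative, and passing more than two values to math.hypot requires Python 3.8 or later:

    import math

    def rms(values):
        """Root mean square: the Pythagorean sum of the values divided by sqrt(n)."""
        return math.hypot(*values) / math.sqrt(len(values))

    print(rms([3.0, 4.0]))  # ≈ 3.5355, i.e. sqrt((9 + 16) / 2)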
The operation $\oplus$ is associative and commutative, [7] and $\sqrt{x^2 + y^2 + z^2} = x \oplus y \oplus z.$ This means that the real numbers under $\oplus$ form a commutative semigroup.
The real numbers under $\oplus$ are not a group, because $\oplus$ can never produce a negative number as its result, whereas each element of a group must be obtainable as a result of the group operation (for instance, by combining the element with the identity). On the non-negative numbers, it is still not a group, because Pythagorean addition of a positive number to another number can only increase that number, so no positive number can have an inverse element. Instead, $\oplus$ forms a commutative monoid on the non-negative numbers, with zero as its identity element.
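These properties can be checked numerically; the values below are illustrative, and isclose is used because associativity holds only up to floating-point rounding:

    import math

    a, b, c = 3.0, 4.0, 12.0

    # Commutativity and associativity (the latter up to rounding error).
    print(math.hypot(a, b) == math.hypot(b, a))            # True
    print(math.isclose(math.hypot(math.hypot(a, b), c),
                       math.hypot(a, math.hypot(b, c))))   # True

    # Zero acts as the identity, and the result is never negative.
    print(math.hypot(a, 0.0))  # 3.0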
Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right-angle triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square root function on the sum of two squares, but hypot avoids problems that occur when squaring very large or very small numbers. If calculated using the natural formula $r = \sqrt{x^2 + y^2},$ the squares of very large or small values of $x$ and $y$ may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and overflow. The hypot function was designed to calculate the result without causing this problem. [8]
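A small Python demonstration of the problem; the magnitudes are chosen only to force overflow and underflow of the intermediate squares:

    import math

    # Overflow: the squares exceed the double-precision range (about 1.8e308),
    # so the naive formula loses the answer entirely.
    x, y = 3e200, 4e200
    print(math.sqrt(x * x + y * y))  # inf
    print(math.hypot(x, y))          # ≈ 5e+200

    # Underflow: the squares round to zero even though the result is an
    # ordinary representable number.
    x, y = 3e-200, 4e-200
    print(math.sqrt(x * x + y * y))  # 0.0
    print(math.hypot(x, y))          # ≈ 5e-200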
If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN). [9]
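For example, CPython's math.hypot follows this convention:

    import math

    inf, nan = float("inf"), float("nan")

    print(math.hypot(inf, 2.0))  # inf
    print(math.hypot(inf, nan))  # inf: an infinite input forces an infinite result
    print(math.hypot(nan, 2.0))  # nan: with no infinity present, NaN propagates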
Since C++17, there has been an additional hypot function for 3D calculations: [10] $\operatorname{hypot}(x, y, z) = \sqrt{x^2 + y^2 + z^2}.$
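Python's math.hypot likewise accepts more than two arguments since Python 3.8, so the same three-dimensional length can be computed in one call; the coordinates below are illustrative:

    import math

    # hypot(x, y, z) = sqrt(x**2 + y**2 + z**2)
    print(math.hypot(2.0, 3.0, 6.0))  # 7.0 (up to rounding), since 4 + 9 + 36 = 49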
The difficulty with the naive implementation is that $x^2 + y^2$ may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that $|x| \ge |y|$, and then to use the equivalent form $\sqrt{x^2 + y^2} = |x| \sqrt{1 + (y/x)^2}.$
The computation of $y/x$ cannot overflow unless both $x$ and $y$ are zero. If $y/x$ underflows, the final result is equal to $|x|$, which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by $|x|$ cannot underflow, and overflows only when the result is too large to represent. [8] This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root. Typically, the implementation is slower than the naive formula by a factor of 2.5 to 3. [11]
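A sketch of this technique in Python, for illustration only; production library implementations additionally handle infinities, NaNs and other special cases:

    import math

    def hypot_scaled(x, y):
        """Compute sqrt(x*x + y*y) as |x| * sqrt(1 + (y/x)**2) with |x| >= |y|."""
        x, y = abs(x), abs(y)
        if x < y:            # ensure x holds the larger magnitude
            x, y = y, x
        if x == 0.0:         # both inputs are zero
            return 0.0
        t = y / x            # t lies in [0, 1], so t * t cannot overflow
        return x * math.sqrt(1.0 + t * t)

    print(hypot_scaled(3e200, 4e200))    # ≈ 5e+200, no overflow
    print(hypot_scaled(3e-200, 4e-200))  # ≈ 5e-200, no harmful underflow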
More complex implementations avoid this by dividing the inputs into more cases: when one input is much larger than the other, the result is simply the larger magnitude, to within machine precision; when squaring the inputs risks overflow or underflow, both inputs are first rescaled by a suitable power of two, so that the scaling itself introduces no rounding error; and otherwise the naive formula $\sqrt{x^2 + y^2}$ is safe to use directly.
However, this implementation can be extremely slow when its case distinctions cause incorrect branch predictions. Additional techniques allow the result to be computed more accurately, e.g. to less than one ulp. [8]
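For concreteness, a sketch of the case-splitting idea in Python; the thresholds and the scale factor are illustrative choices, not the values used by any particular library:

    import math

    BIG = 1e150       # above this, squaring an input risks overflow
    SMALL = 1e-150    # below this, squaring an input risks harmful underflow
    SCALE = 2.0**600  # a power of two, so rescaling is exact

    def hypot_cases(x, y):
        x, y = abs(x), abs(y)
        if x < y:
            x, y = y, x
        if x == 0.0:
            return 0.0
        if x > BIG:    # rescale down before squaring
            xs, ys = x / SCALE, y / SCALE
            return SCALE * math.sqrt(xs * xs + ys * ys)
        if x < SMALL:  # rescale up before squaring
            xs, ys = x * SCALE, y * SCALE
            return math.sqrt(xs * xs + ys * ys) / SCALE
        return math.sqrt(x * x + y * y)  # the naive formula is safe here

    print(hypot_cases(3e300, 4e300))    # ≈ 5e+300
    print(hypot_cases(3e-300, 4e-300))  # ≈ 5e-300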
The function is present in many programming languages and libraries, including CSS, [12] C++11, [13] D, [14] Fortran (since Fortran 2008), [15] Go, [16] JavaScript (since ES2015), [17] Julia, [18] Java (since version 1.5), [19] Kotlin, [20] MATLAB, [21] PHP, [22] Python, [23] Ruby, [24] Rust, [25] and Scala. [26]
Metafont has Pythagorean addition and Pythagorean subtraction as built-in operations, under the symbols ++ and +-+ respectively.