Difference of two squares

In mathematics, the difference of two squares is a squared (multiplied by itself) number subtracted from another squared number. Every difference of squares may be factored according to the identity

$$a^2 - b^2 = (a + b)(a - b)$$

in elementary algebra.

Proof

The proof of the factorization identity is straightforward. Starting from the right-hand side, apply the distributive law to get

$$(a + b)(a - b) = a^2 + ba - ab - b^2.$$

By the commutative law, the middle two terms cancel:

$$ba - ab = 0,$$

leaving

$$(a + b)(a - b) = a^2 - b^2.$$

The resulting identity is one of the most commonly used in mathematics. Among many uses, it gives a simple proof of the AM–GM inequality in two variables.
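
One such argument, sketched briefly: for non-negative real numbers $a$ and $b$, the identity applied to $\tfrac{a + b}{2}$ and $\tfrac{a - b}{2}$ gives

$$\left(\frac{a + b}{2}\right)^2 - \left(\frac{a - b}{2}\right)^2 = \frac{\bigl((a + b) + (a - b)\bigr)\bigl((a + b) - (a - b)\bigr)}{4} = ab,$$

so $\left(\tfrac{a + b}{2}\right)^2 \ge ab$ and hence $\tfrac{a + b}{2} \ge \sqrt{ab}$.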

The proof holds in any commutative ring.

Conversely, if this identity holds in a ring R for all pairs of elements a and b, then R is commutative. To see this, apply the distributive law to the right-hand side of the equation and get

$$a^2 + ba - ab - b^2.$$

For this to be equal to $a^2 - b^2$, we must have

$$ba - ab = 0$$

for all pairs a, b, so R is commutative.

Geometrical demonstrations


The difference of two squares can also be illustrated geometrically as the difference of two square areas in a plane. In the diagram, the shaded part represents the difference between the areas of the two squares, i.e. $a^2 - b^2$. The area of the shaded part can be found by adding the areas of the two rectangles: $a(a - b) + b(a - b)$, which can be factorized to $(a + b)(a - b)$. Therefore, $a^2 - b^2 = (a + b)(a - b)$.

Another geometric proof proceeds as follows: We start with the figure shown in the first diagram below, a large square with a smaller square removed from it. The side of the entire square is a, and the side of the small removed square is b. The area of the shaded region is $a^2 - b^2$. A cut is made, splitting the region into two rectangular pieces, as shown in the second diagram. The larger piece, at the top, has width a and height a − b. The smaller piece, at the bottom, has width a − b and height b. Now the smaller piece can be detached, rotated, and placed to the right of the larger piece. In this new arrangement, shown in the last diagram below, the two pieces together form a rectangle, whose width is $a + b$ and whose height is $a - b$. This rectangle's area is $(a + b)(a - b)$. Since this rectangle came from rearranging the original figure, it must have the same area as the original figure. Therefore, $a^2 - b^2 = (a + b)(a - b)$.

Uses

Factorization of polynomials and simplification of expressions

The formula for the difference of two squares can be used for factoring polynomials that contain the square of a first quantity minus the square of a second quantity. For example, the polynomial $x^4 - 1$ can be factored as follows:

$$x^4 - 1 = (x^2 + 1)(x^2 - 1) = (x^2 + 1)(x + 1)(x - 1).$$

As a second example, the first two terms of $x^2 - y^2 + x - y$ can be factored as $(x + y)(x - y)$, so we have:

$$x^2 - y^2 + x - y = (x + y)(x - y) + x - y = (x - y)(x + y + 1).$$

Moreover, this formula can also be used for simplifying expressions:

$$(a + b)^2 - (a - b)^2 = 4ab.$$
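
These manipulations can be checked mechanically; the short sketch below assumes the SymPy library is available and is only an illustration:

```python
# Verify the factorizations above with SymPy.
from sympy import symbols, factor, expand

x, y, a, b = symbols('x y a b')

print(factor(x**4 - 1))                 # (x - 1)*(x + 1)*(x**2 + 1)
print(factor(x**2 - y**2 + x - y))      # (x - y)*(x + y + 1)
print(expand((a + b)**2 - (a - b)**2))  # 4*a*b
```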

Complex number case: sum of two squares

The difference of two squares is used to find the linear factors of the sum of two squares, using complex number coefficients.

For example, the complex roots of $z^2 + 4$ can be found using difference of two squares:

$$z^2 + 4 = z^2 - 4i^2 = (z + 2i)(z - 2i)$$

(since $i^2 = -1$).

Therefore, the linear factors are $(z + 2i)$ and $(z - 2i)$.

Since the two factors found by this method are complex conjugates, we can use this in reverse as a method of multiplying a complex number by its conjugate to obtain a real number. This is used to obtain real denominators in complex fractions.[1]
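
For example (the numbers here are only illustrative), multiplying numerator and denominator by the conjugate realises a complex denominator:

$$\frac{1}{3 + 4i} = \frac{1}{3 + 4i} \cdot \frac{3 - 4i}{3 - 4i} = \frac{3 - 4i}{3^2 - (4i)^2} = \frac{3 - 4i}{9 + 16} = \frac{3 - 4i}{25}.$$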

Rationalising denominators

The difference of two squares can also be used in the rationalising of irrational denominators.[2] This is a method for removing surds from expressions (or at least moving them); it applies to division by certain combinations involving square roots.

For example, the denominator of $\dfrac{5}{\sqrt{3} + 4}$ can be rationalised as follows:

$$\frac{5}{\sqrt{3} + 4} = \frac{5}{\sqrt{3} + 4} \cdot \frac{\sqrt{3} - 4}{\sqrt{3} - 4} = \frac{5(\sqrt{3} - 4)}{(\sqrt{3})^2 - 4^2} = \frac{5(\sqrt{3} - 4)}{3 - 16} = \frac{5(4 - \sqrt{3})}{13}.$$

Here, the irrational denominator $\sqrt{3} + 4$ has been rationalised to $13$.

Mental arithmetic

The difference of two squares can also be used as an arithmetical shortcut. If two numbers whose average is easily squared are multiplied, the difference of two squares can be used to compute their product.

For example:

$$27 \times 33 = (30 - 3)(30 + 3).$$

Using the difference of two squares, $27 \times 33$ can be restated as

$$a^2 - b^2 = 30^2 - 3^2,$$

which is $900 - 9 = 891$.
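
A minimal sketch of the same shortcut in code (the function name is illustrative, not a standard one):

```python
# x * y = m^2 - d^2, where m is the average of x and y and d is half their
# difference; this requires x and y to have the same parity.
def product_via_squares(x: int, y: int) -> int:
    assert (x + y) % 2 == 0, "x and y must have the same parity"
    m, d = (x + y) // 2, abs(x - y) // 2
    return m * m - d * d

print(product_via_squares(27, 33))  # 891, computed as 30**2 - 3**2
```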

Difference of two consecutive perfect squares

The difference of two consecutive perfect squares is the sum of the two bases n and n + 1. This can be seen as follows:

$$(n + 1)^2 - n^2 = ((n + 1) + n)((n + 1) - n) = 2n + 1.$$

Therefore, the difference of two consecutive perfect squares is an odd number. Similarly, the difference of two arbitrary perfect squares is calculated as follows:

$$(n + k)^2 - n^2 = ((n + k) + n)((n + k) - n) = k(2n + k).$$

Therefore, the difference of two even perfect squares is a multiple of 4 and the difference of two odd perfect squares is a multiple of 8.
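
These divisibility claims follow from the same identity. Writing even bases as $2m$, $2n$ and odd bases as $2m + 1$, $2n + 1$,

$$(2m)^2 - (2n)^2 = 4(m + n)(m - n), \qquad (2m + 1)^2 - (2n + 1)^2 = 4(m + n + 1)(m - n),$$

and in the odd case $m + n + 1$ and $m - n$ have opposite parity, so their product is even and the difference is divisible by 8.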

Galileo's law of odd numbers


A ramification of the difference of consecutive squares, Galileo's law of odd numbers states that the distance covered by an object falling without resistance in uniform gravity in successive equal time intervals is linearly proportional to the odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length.

From the equation for uniform linear acceleration, the distance covered

$$s = ut + \tfrac{1}{2}at^2$$

for initial speed $u = 0$, constant acceleration $a$ (the acceleration due to gravity without air resistance), and time elapsed $t$, it follows that the distance $s$ is proportional to $t^2$ (in symbols, $s \propto t^2$); thus the distances from the starting point are consecutive squares for integer values of time elapsed.[3]
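
The law then follows from the difference of consecutive squares: the distance covered during the $n$-th equal time interval is

$$s(n) - s(n - 1) = \tfrac{1}{2}a\left(n^2 - (n - 1)^2\right) = \tfrac{1}{2}a\,(2n - 1),$$

which runs through the odd multiples $1, 3, 5, \ldots$ of the distance $\tfrac{1}{2}a$ covered in the first interval.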

Factorization of integers

Several algorithms in number theory and cryptography use differences of squares to find factors of integers and detect composite numbers. A simple example is the Fermat factorization method, which considers the sequence of numbers $x_i = a_i^2 - N$, for $a_i = \lceil\sqrt{N}\rceil + i$. If one of the $x_i$ equals a perfect square $b^2$, then $N = a_i^2 - b^2 = (a_i + b)(a_i - b)$ is a (potentially non-trivial) factorization of $N$.
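
A minimal sketch of the method for an odd integer $N$ (illustrative code, not an optimised implementation):

```python
# Fermat's factorization: search for a with a^2 - N a perfect square,
# since then N = a^2 - b^2 = (a + b)(a - b).
from math import isqrt

def fermat_factor(N: int) -> tuple[int, int]:
    """Return a pair (p, q) with p * q == N, for odd N > 1."""
    a = isqrt(N)
    if a * a < N:
        a += 1                 # a = ceil(sqrt(N))
    while True:
        x = a * a - N          # candidate difference of squares
        b = isqrt(x)
        if b * b == x:         # x is a perfect square b^2
            return a + b, a - b
        a += 1

print(fermat_factor(5959))     # (101, 59), since 5959 = 80**2 - 21**2
```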

This trick can be generalized as follows. If $x^2 \equiv y^2$ mod $N$ and $x \not\equiv \pm y$ mod $N$, then $N$ is composite with non-trivial factors $\gcd(x - y, N)$ and $\gcd(x + y, N)$. This forms the basis of several factorization algorithms (such as the quadratic sieve) and can be combined with the Fermat primality test to give the stronger Miller–Rabin primality test.
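
A sketch of how such a congruence yields factors (the numbers below are only an example):

```python
# If x^2 ≡ y^2 (mod N) but x is not congruent to ±y (mod N),
# then gcd(x - y, N) and gcd(x + y, N) are non-trivial factors of N.
from math import gcd

N, x, y = 91, 10, 3                    # 10**2 = 100 ≡ 9 = 3**2 (mod 91)
assert (x * x - y * y) % N == 0
print(gcd(x - y, N), gcd(x + y, N))    # 7 13
```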

Generalizations

Vectors a (purple), b (cyan) and a + b (blue) are shown with arrows

The identity also holds in inner product spaces over the field of real numbers, such as for the dot product of Euclidean vectors:

$$\mathbf{a} \cdot \mathbf{a} - \mathbf{b} \cdot \mathbf{b} = (\mathbf{a} + \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}).$$

The proof is identical. For the special case that a and b have equal norms (which means that their dot squares are equal), this demonstrates analytically that the two diagonals of a rhombus are perpendicular: the left side of the equation is zero, so the right side must be zero as well, and thus the vector sum a + b (one diagonal of the rhombus) dotted with the vector difference a − b (the other diagonal) must equal zero, which indicates that the diagonals are perpendicular.
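
In symbols, when $\|\mathbf{a}\| = \|\mathbf{b}\|$ (the sides of a rhombus),

$$(\mathbf{a} + \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}) = \|\mathbf{a}\|^2 - \|\mathbf{b}\|^2 = 0,$$

so the diagonals $\mathbf{a} + \mathbf{b}$ and $\mathbf{a} - \mathbf{b}$ are orthogonal.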

Difference of two nth powers

Visual proof of the differences between two squares and two cubes

If a and b are two elements of a commutative ring R, then

$$a^n - b^n = (a - b)\left(a^{n-1} + a^{n-2}b + a^{n-3}b^2 + \cdots + ab^{n-2} + b^{n-1}\right).$$
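
For the first two cases this gives the familiar factorizations (with $n = 2$ being the difference of two squares itself):

$$a^2 - b^2 = (a - b)(a + b), \qquad a^3 - b^3 = (a - b)(a^2 + ab + b^2).$$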

History

Historically, the Babylonians used the difference of two squares to calculate multiplications.[4]

For example:

93 × 87 = 90² − 3² = 8091

64 × 56 = 60² − 4² = 3584

Notes

  1. "Complex or imaginary numbers", TheMathPage.com, retrieved 22 December 2011.
  2. "Multiplying Radicals", TheMathPage.com, retrieved 22 December 2011.
  3. R. P. Olenick et al., The Mechanical Universe: Introduction to Mechanics and Heat.
  4. "Babylonian mathematics".

