Trigonometric polynomial

In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series.

Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are also used in the discrete Fourier transform.
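
As a concrete illustration of this use, the following sketch (in Python with NumPy; the degree, sample count, and test function are arbitrary choices made here for illustration, not taken from the article) builds a trigonometric interpolant of a periodic function from equally spaced samples via the discrete Fourier transform.

import numpy as np

# Interpolate a 2*pi-periodic function by a trigonometric polynomial of
# degree N using the discrete Fourier transform of 2N + 1 samples.
N = 8                                # degree of the interpolant (arbitrary)
M = 2 * N + 1                        # number of equally spaced samples
x = 2 * np.pi * np.arange(M) / M     # sample points in [0, 2*pi)
f = np.exp(np.sin(x))                # an arbitrary smooth 2*pi-periodic test function

c = np.fft.fft(f) / M                                             # DFT coefficients
n = np.where(np.arange(M) <= N, np.arange(M), np.arange(M) - M)   # frequencies -N..N

def T(t):
    # Evaluate the interpolating trigonometric polynomial at the points t.
    return np.real(sum(c[k] * np.exp(1j * n[k] * t) for k in range(M)))

print(np.max(np.abs(T(x) - f)))      # ~1e-15: the interpolant matches the samples

The exponential form used here is the one given in the formal definition below; the interpolant reproduces the sampled values exactly, up to rounding error.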

The term trigonometric polynomial for the real-valued case can be seen as using the analogy: the functions sin(nx) and cos(nx) are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of e^{ix}, i.e., Laurent polynomials in z under the change of variables z = e^{ix}.
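
To make the analogy concrete, here is a short worked restatement using only Euler's formula (standard identities, not drawn from the cited sources). With z = e^{ix},

\cos(nx) = \frac{z^n + z^{-n}}{2}, \qquad \sin(nx) = \frac{z^n - z^{-n}}{2i},

so every finite linear combination of the functions sin(nx) and cos(nx) becomes a Laurent polynomial \sum_{n=-N}^{N} c_n z^n restricted to the unit circle |z| = 1.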

Formal definition

Any function T of the form

T(x) = a_0 + \sum_{n=1}^{N} a_n \cos(nx) + \sum_{n=1}^{N} b_n \sin(nx), \qquad x \in \mathbb{R},

with coefficients a_n, b_n ∈ ℂ and at least one of the highest-degree coefficients a_N and b_N non-zero, is called a complex trigonometric polynomial of degree N. [1] Using Euler's formula the polynomial can be rewritten as

T(x) = \sum_{n=-N}^{N} c_n e^{inx}, \qquad x \in \mathbb{R},

with c_n ∈ ℂ.
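
The relation between the two sets of coefficients follows from Euler's formula (a routine computation, stated here for concreteness rather than quoted from the cited sources):

c_0 = a_0, \qquad c_n = \frac{a_n - i b_n}{2}, \qquad c_{-n} = \frac{a_n + i b_n}{2}, \qquad 1 \le n \le N.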

Analogously, letting coefficients a_n, b_n ∈ ℝ, and at least one of a_N and b_N non-zero or, equivalently, c_N ≠ 0 and c_{-n} = \overline{c_n} for all 0 ≤ n ≤ N, then

t(x) = a_0 + \sum_{n=1}^{N} a_n \cos(nx) + \sum_{n=1}^{N} b_n \sin(nx), \qquad x \in \mathbb{R},

is called a real trigonometric polynomial of degree N. [2] [3]
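
The equivalence of the two forms can be checked numerically; the following minimal sketch (with arbitrarily chosen coefficients for a polynomial of degree 2) compares the cos/sin form with the exponential form obtained from the coefficient relations above.

import numpy as np

# A real trigonometric polynomial of degree 2 with arbitrary real coefficients.
a0, a, b = 1.0, np.array([0.5, -2.0]), np.array([3.0, 0.25])
x = np.linspace(0.0, 2.0 * np.pi, 1000)

# Cos/sin form: a_0 + sum_n (a_n cos(nx) + b_n sin(nx)).
t_real = a0 + sum(a[n - 1] * np.cos(n * x) + b[n - 1] * np.sin(n * x) for n in (1, 2))

# Exponential form: c_0 = a_0, c_n = (a_n - i b_n)/2, c_{-n} = conjugate of c_n.
c = {0: a0}
for n in (1, 2):
    c[n] = (a[n - 1] - 1j * b[n - 1]) / 2
    c[-n] = np.conj(c[n])
t_exp = sum(c[n] * np.exp(1j * n * x) for n in range(-2, 3))

print(np.max(np.abs(t_real - t_exp.real)))   # ~1e-15: the two forms agree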

Fejér–Riesz theorem

The Fejér–Riesz theorem states that every positive real trigonometric polynomial t, satisfying t(x) > 0 for all x ∈ ℝ, can be represented as the square of the modulus of another (usually complex) trigonometric polynomial q such that: [4] [5]

t(x) = |q(x)|^2.
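
A small worked instance (elementary algebra, not drawn from the cited references): the real trigonometric polynomial t(x) = 5/4 + cos x is strictly positive, and

\frac{5}{4} + \cos x = \left| 1 + \tfrac{1}{2} e^{ix} \right|^2,

so the theorem is witnessed by q(x) = 1 + e^{ix}/2, a complex trigonometric polynomial of the same degree.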

Properties

A trigonometric polynomial can be considered a periodic function on the real line, with period some divisor of 2π, or as a function on the unit circle.

A basic result is that the trigonometric polynomials are dense in the space of continuous functions on the unit circle, with the uniform norm; [6] this is a special case of the Stone–Weierstrass theorem. More concretely, for every continuous function f and every ε > 0, there exists a trigonometric polynomial T such that |f(z) − T(z)| < ε for all z. Fejér's theorem states that the arithmetic means of the partial sums of the Fourier series of f converge uniformly to f, provided f is continuous on the circle, thus giving an explicit way to find an approximating trigonometric polynomial T.
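
The following sketch illustrates Fejér's construction numerically (the test function, the degrees shown, and the grid size are arbitrary choices for this example; the Fourier coefficients are approximated with the FFT rather than computed exactly).

import numpy as np

# Cesaro (Fejer) means of the Fourier series of a continuous 2*pi-periodic function.
M = 4096
x = 2 * np.pi * np.arange(M) / M
f = np.abs(np.sin(x)) ** 1.5          # continuous and periodic, but not smooth
c = np.fft.fft(f) / M                 # approximate Fourier coefficients

def fejer_mean(N, t):
    # sigma_N(t) = sum over |n| <= N of (1 - |n|/(N + 1)) c_n e^{int}
    total = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        total += (1 - abs(n) / (N + 1)) * c[n % M] * np.exp(1j * n * t)
    return total.real

for N in (4, 16, 64):
    print(N, np.max(np.abs(fejer_mean(N, x) - f)))   # uniform error shrinks as N grows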

A trigonometric polynomial of degree N has a maximum of 2N roots in any interval [a, a + 2π) with a in R, unless it is the zero function. [7]
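
A standard example showing the bound is attained: T(x) = sin(Nx) is a trigonometric polynomial of degree N, and it vanishes in [0, 2π) exactly at the 2N points x = kπ/N for k = 0, 1, …, 2N − 1.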

Notes

Related Research Articles

Binomial coefficient – Number of subsets of a given size

In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written \binom{n}{k}. It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n; this coefficient can be computed by the multiplicative formula \binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k(k-1)\cdots 1}.

In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the polynomial (x + y)^n into a sum involving terms of the form a x^b y^c, where the exponents b and c are nonnegative integers with b + c = n, and the coefficient a of each term is a specific positive integer depending on n and b. For example, for n = 4,

(x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4.

Complex number – Number with a real and an imaginary part

In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i^2 = −1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols ℂ or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world.

In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f has a root at w, then f(z)/(z − w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.

In mathematics, a polynomial is a mathematical expression consisting of indeterminates and coefficients that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. An example of a polynomial of a single indeterminate x is x^2 − 4x + 7. An example with three indeterminates is x^3 + 2xyz^2 − yz + 1.

Trigonometric functions – Functions of an angle

In mathematics, the trigonometric functions are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis.

In mathematics, de Moivre's formula states that for any real number x and integer n it holds that

(cos x + i sin x)^n = cos(nx) + i sin(nx).

Fourier series – Decomposition of periodic functions into sums of simpler sinusoidal forms

A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series, but not all trigonometric series are Fourier series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions.

Root of unity – Number that has an integer power equal to 1

In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform.

Cayley–Hamilton theorem – Every square matrix over a commutative ring satisfies its own characteristic equation

In linear algebra, the Cayley–Hamilton theorem states that every square matrix over a commutative ring satisfies its own characteristic equation.

Chebyshev polynomials – Polynomial sequence

The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as T_n(x) and U_n(x). They can be defined in several equivalent ways, one of which starts with trigonometric functions:

T_n(cos θ) = cos(nθ)   and   U_n(cos θ) sin θ = sin((n + 1)θ).

Spherical harmonics – Special mathematical functions defined on the surface of a sphere

In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. The table of spherical harmonics contains a list of common spherical harmonics.

A generalized Fourier series is the expansion of a square integrable function into a sum of square integrable orthogonal basis functions. The standard Fourier series uses an orthonormal basis of trigonometric functions, and the series expansion is applied to periodic functions. In contrast, a generalized Fourier series uses any set of orthogonal basis functions and can apply to any square integrable function.

In mathematics, an almost periodic function is, loosely speaking, a function of a real number that is periodic to within any desired level of accuracy, given suitably long, well-distributed "almost-periods". The concept was first studied by Harald Bohr and later generalized by Vyacheslav Stepanov, Hermann Weyl and Abram Samoilovitch Besicovitch, amongst others. There is also a notion of almost periodic functions on locally compact abelian groups, first studied by John von Neumann.

In mathematics, smooth functions and analytic functions are two very important types of functions. One can easily prove that any analytic function of a real argument is smooth. The converse is not true: there exist smooth functions that are not analytic.

In mathematics, trigonometric interpolation is interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and cosines of given periods. This form is especially suited for interpolation of periodic functions.

Marcel Riesz – Hungarian mathematician

Marcel Riesz was a Hungarian mathematician, known for work on summation methods, potential theory, and other parts of analysis, as well as number theory, partial differential equations, and Clifford algebras. He spent most of his career in Lund, Sweden.

Trigonometric series – Infinite sum of sines and cosines

In mathematics, a trigonometric series is an infinite series of the form

A_0 + \sum_{n=1}^{\infty} \bigl(A_n \cos(nx) + B_n \sin(nx)\bigr),

where x is the variable and the coefficients A_n, B_n are constants.

Sine and cosine – Fundamental trigonometric functions

In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side that is opposite that angle to the length of the longest side of the triangle, and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle θ, the sine and cosine functions are denoted sin(θ) and cos(θ).

Polynomial matrix spectral factorization, also called the matrix Fejér–Riesz theorem, is a tool used to study the matrix decomposition of polynomial matrices. Polynomial matrices are widely studied in the fields of systems theory and control theory and have seen other uses relating to stable polynomials. In stability theory, spectral factorization has been used to find determinantal matrix representations for bivariate stable polynomials and real zero polynomials.

References