Principal part

In mathematics, the principal part has several independent meanings but usually refers to the negative-power portion of the Laurent series of a function.

Laurent series definition

The principal part at $z = a$ of a function

$$f(z) = \sum_{k=-\infty}^{\infty} a_k (z - a)^k$$

is the portion of the Laurent series consisting of terms with negative degree.[1] That is,

$$\sum_{k=1}^{\infty} a_{-k} (z - a)^{-k}$$

is the principal part of $f$ at $a$. If the Laurent series has an inner radius of convergence of $0$, then $f$ has an essential singularity at $a$ if and only if the principal part is an infinite sum. If the inner radius of convergence is not $0$, then $f$ may be regular at $a$ despite the Laurent series having an infinite principal part.
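As a concrete illustration (not part of the original article), the following Python sketch, assuming SymPy is available, expands $f(z) = 1/(z^2(z - 1))$ about $z = 0$ and collects the terms of negative degree; the example function and the truncation order are arbitrary choices.

from sympy import symbols, series

z = symbols('z')
f = 1 / (z**2 * (z - 1))

# Laurent expansion about z = 0, truncated with an O(z**4) error term.
expansion = series(f, z, 0, 4).removeO()

# The principal part: keep only the terms whose degree in z is negative.
principal_part = sum(t for t in expansion.as_ordered_terms()
                     if t.as_coeff_exponent(z)[1] < 0)

print(expansion)       # -1/z**2 - 1/z - 1 - z - z**2 - z**3 (printed order may differ)
print(principal_part)  # -1/z - 1/z**2

Here the principal part is the finite sum $-z^{-2} - z^{-1}$, consistent with the pole of order 2 (rather than an essential singularity) at $z = 0$.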

Other definitions

Calculus

Consider the difference between the differential of a function and its actual increment. For a differentiable function $y = f(x)$,

$$\frac{\Delta y}{\Delta x} = f'(x) + \varepsilon, \qquad\text{hence}\qquad \Delta y = f'(x)\,\Delta x + \varepsilon\,\Delta x = dy + \varepsilon\,\Delta x,$$

where $\varepsilon \to 0$ as $\Delta x \to 0$. The differential $dy = f'(x)\,\Delta x$ is sometimes called the principal (linear) part of the function increment $\Delta y$.
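A short numerical sketch of this statement, in plain Python with illustrative values only: the remainder $\Delta y - dy$ vanishes faster than $\Delta x$, so $dy$ really is the dominant, linear part of the increment.

import math

def f(x):
    return math.sin(x)      # any smooth function works; sin is an arbitrary choice

def fprime(x):
    return math.cos(x)

x = 1.0
for dx in (1e-1, 1e-2, 1e-3, 1e-4):
    delta_y = f(x + dx) - f(x)   # actual increment
    dy = fprime(x) * dx          # principal (linear) part
    remainder = delta_y - dy     # here O(dx**2), i.e. o(dx)
    print(f"dx={dx:.0e}  remainder={remainder:.3e}  remainder/dx={remainder / dx:.3e}")

The printed ratio remainder/dx shrinks roughly in proportion to dx, confirming that the non-linear part of the increment is negligible compared with $dy$ as $\Delta x \to 0$.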

Distribution theory

The term principal part is also used for certain kinds of distributions having a singular support at a single point.
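The article does not spell out an example, but the archetype of such a distribution is the Cauchy principal value p.v.$(1/x)$, whose singular support is the single point $x = 0$. The sketch below, assuming SciPy is available, evaluates its action on a smooth test function numerically; the test function $e^x$ and the interval $[-1, 1]$ are arbitrary illustrative choices.

import math
from scipy.integrate import quad

def phi(x):
    return math.exp(x)   # a smooth test function (illustrative choice)

# <p.v.(1/x), phi> = principal value of the integral of phi(x)/x over [-1, 1];
# quad with weight='cauchy' integrates phi(x)/(x - wvar) as a principal value.
value, abs_err = quad(phi, -1.0, 1.0, weight='cauchy', wvar=0.0)
print(value)   # about 2.1145, finite despite the non-integrable singularity at 0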

References

  1. Laurent. 16 October 2016. ISBN 9781467210782. Retrieved 31 March 2016.