Bessel potential

In mathematics, the Bessel potential is a potential (named after Friedrich Wilhelm Bessel) similar to the Riesz potential but with better decay properties at infinity.

If s is a complex number with positive real part, then the Bessel potential of order s is the operator

$$(I - \Delta)^{-s/2},$$

where Δ is the Laplace operator and the fractional power is defined using Fourier transforms.

Yukawa potentials are particular cases of Bessel potentials for $s = 2$ in the 3-dimensional space.
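
(A short check, not part of the original text: for $s = 2$ the Bessel potential is $(I - \Delta)^{-1}$, i.e. convolution with the Green's function of $I - \Delta$. On $\mathbb{R}^3$ this Green's function is

$$G_2(x) = \frac{e^{-|x|}}{4\pi\,|x|},$$

the Yukawa potential with unit mass; the same value follows from the modified-Bessel representation of the kernel given below, using $K_{1/2}(r) = \sqrt{\pi/(2r)}\,e^{-r}$.)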

Representation in Fourier space

The Bessel potential acts by multiplication on the Fourier transforms: for each $\xi \in \mathbb{R}^d$,

$$\mathcal{F}\big((I - \Delta)^{-s/2} u\big)(\xi) = \frac{\mathcal{F}u(\xi)}{\big(1 + 4\pi^2 |\xi|^2\big)^{s/2}},$$

where the Fourier transform is taken with the convention $\mathcal{F}u(\xi) = \int_{\mathbb{R}^d} u(x)\, e^{-2\pi i\, x\cdot\xi}\,\mathrm{d}x$.
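
The multiplier form suggests a straightforward spectral implementation. The following sketch (my own illustration, not from the article; the grid, box size, and test function are arbitrary choices) applies the Bessel potential to samples of a function on a periodic grid with NumPy's FFT:

```python
import numpy as np

def bessel_potential(u, s, L=20.0):
    """Apply (I - Laplacian)^(-s/2) to samples u of a function on a periodic
    cube of side L (same number of points per axis), using the Fourier
    multiplier (1 + 4*pi^2*|xi|^2)^(-s/2)."""
    n = u.shape[0]
    # Frequencies in cycles per unit length, matching the convention
    # F u(xi) = integral of u(x) exp(-2*pi*i*x.xi) dx.
    freqs = np.fft.fftfreq(n, d=L / n)
    grids = np.meshgrid(*([freqs] * u.ndim), indexing="ij")
    xi_squared = sum(g**2 for g in grids)
    multiplier = (1.0 + 4.0 * np.pi**2 * xi_squared) ** (-s / 2.0)
    return np.real(np.fft.ifftn(np.fft.fftn(u) * multiplier))

# Example: Bessel potential of order s = 1.5 of a Gaussian bump in 2D.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
v = bessel_potential(np.exp(-(X**2 + Y**2)), s=1.5, L=L)
```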

Integral representations

When $s > 0$, the Bessel potential on $\mathbb{R}^d$ can be represented by

$$(I - \Delta)^{-s/2} u = G_s * u,$$

where the Bessel kernel $G_s$ is defined for $x \in \mathbb{R}^d \setminus \{0\}$ by the integral formula [1]

$$G_s(x) = \frac{1}{(4\pi)^{s/2}\,\Gamma(s/2)} \int_0^\infty e^{-\frac{\pi |x|^2}{y}}\, e^{-\frac{y}{4\pi}}\, y^{\frac{s-d}{2}}\, \frac{\mathrm{d}y}{y}.$$

Here $\Gamma$ denotes the Gamma function. The Bessel kernel can also be represented for $x \in \mathbb{R}^d \setminus \{0\}$ by [2]

$$G_s(x) = \frac{e^{-|x|}}{(2\pi)^{\frac{d-1}{2}}\, 2^{\frac{s}{2}}\, \Gamma\!\left(\frac{s}{2}\right) \Gamma\!\left(\frac{d-s+1}{2}\right)} \int_0^\infty e^{-|x| t} \left(t + \frac{t^2}{2}\right)^{\frac{d-s-1}{2}} \mathrm{d}t.$$

This last expression can be written more succinctly in terms of a modified Bessel function of the second kind, [3] from which the potential gets its name:

$$G_s(x) = \frac{1}{2^{\frac{d+s-2}{2}}\, \pi^{\frac{d}{2}}\, \Gamma\!\left(\frac{s}{2}\right)}\, K_{\frac{d-s}{2}}(|x|)\, |x|^{\frac{s-d}{2}}.$$

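The two expressions for the kernel can be cross-checked numerically. The snippet below is my own sketch (not from the cited references); it evaluates the integral formula with adaptive quadrature and compares it with the modified-Bessel-function form, using SciPy's `kv` for $K_\nu$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

def kernel_via_integral(r, s, d):
    """G_s at radius r from the integral representation."""
    integrand = lambda y: np.exp(-np.pi * r**2 / y - y / (4 * np.pi)) * y**((s - d) / 2 - 1)
    value, _ = quad(integrand, 0, np.inf)
    return value / ((4 * np.pi)**(s / 2) * gamma(s / 2))

def kernel_via_bessel_K(r, s, d):
    """G_s at radius r from the modified Bessel function K_{(d-s)/2}."""
    c = 2**((d + s - 2) / 2) * np.pi**(d / 2) * gamma(s / 2)
    return kv((d - s) / 2, r) * r**((s - d) / 2) / c

# Both representations should agree, e.g. for d = 3, s = 1.4, |x| = 0.7 ...
print(kernel_via_integral(0.7, 1.4, 3), kernel_via_bessel_K(0.7, 1.4, 3))
# ... and for d = 3, s = 2 they reduce to the Yukawa kernel exp(-r)/(4*pi*r).
print(kernel_via_bessel_K(0.7, 2.0, 3), np.exp(-0.7) / (4 * np.pi * 0.7))
```
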
Asymptotics

At the origin, one has, as $|x| \to 0$, [4]

$$G_s(x) = \frac{\Gamma\!\left(\frac{d-s}{2}\right)}{2^{s}\, \pi^{\frac{d}{2}}\, \Gamma\!\left(\frac{s}{2}\right)}\, |x|^{s-d}\, \big(1 + o(1)\big) \qquad \text{if } 0 < s < d,$$

$$G_d(x) = \frac{1}{2^{d-1}\, \pi^{\frac{d}{2}}\, \Gamma\!\left(\frac{d}{2}\right)}\, \ln\frac{1}{|x|}\, \big(1 + o(1)\big),$$

$$G_s(x) = \frac{\Gamma\!\left(\frac{s-d}{2}\right)}{2^{d}\, \pi^{\frac{d}{2}}\, \Gamma\!\left(\frac{s}{2}\right)}\, \big(1 + o(1)\big) \qquad \text{if } s > d.$$

In particular, when $0 < s < d$ the Bessel potential behaves asymptotically as the Riesz potential.
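
(A brief comparison, not in the original text: with the usual normalization of the Riesz potential as convolution with the kernel

$$I_s(x) = \frac{\Gamma\!\left(\frac{d-s}{2}\right)}{2^{s}\, \pi^{\frac{d}{2}}\, \Gamma\!\left(\frac{s}{2}\right)}\, |x|^{s-d}, \qquad 0 < s < d,$$

the first asymptotic above says precisely that $G_s(x) = I_s(x)\,(1 + o(1))$ as $|x| \to 0$.)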

At infinity, one has, as $|x| \to \infty$, [5]

$$G_s(x) = \frac{e^{-|x|}}{2^{\frac{d+s-1}{2}}\, \pi^{\frac{d-1}{2}}\, \Gamma\!\left(\frac{s}{2}\right)}\, |x|^{\frac{s-d-1}{2}}\, \big(1 + o(1)\big).$$

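This exponential decay is what distinguishes the Bessel kernel from the Riesz kernel. As a quick numerical sanity check of the decay rate (again my own sketch, reusing SciPy's `kv`), one can compare the exact kernel with the leading-order expression at a moderately large radius:

```python
import numpy as np
from scipy.special import gamma, kv

d, s, r = 3, 1.4, 12.0

# Exact kernel from the modified Bessel function representation.
exact = kv((d - s) / 2, r) * r**((s - d) / 2) / (
    2**((d + s - 2) / 2) * np.pi**(d / 2) * gamma(s / 2))

# Leading-order behaviour as r -> infinity.
leading = np.exp(-r) * r**((s - d - 1) / 2) / (
    2**((d + s - 1) / 2) * np.pi**((d - 1) / 2) * gamma(s / 2))

print(exact / leading)  # ratio should be close to 1 and tend to 1 as r grows
```
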
See also

References

  1. Stein, Elias (1970). Singular integrals and differentiability properties of functions. Princeton University Press. Chapter V, eq. (26). ISBN 0-691-08079-8.
  2. N. Aronszajn; K. T. Smith (1961). "Theory of Bessel potentials I". Ann. Inst. Fourier. 11: 385–475, (4,2). doi:10.5802/aif.116.
  3. N. Aronszajn; K. T. Smith (1961). "Theory of Bessel potentials I". Ann. Inst. Fourier. 11: 385–475. doi:10.5802/aif.116.
  4. N. Aronszajn; K. T. Smith (1961). "Theory of Bessel potentials I". Ann. Inst. Fourier. 11: 385–475, (4,3). doi:10.5802/aif.116.
  5. N. Aronszajn; K. T. Smith (1961). "Theory of Bessel potentials I". Ann. Inst. Fourier. 11: 385–475. doi:10.5802/aif.116.