Nevanlinna function

In mathematics, in the field of complex analysis, a Nevanlinna function is a complex function that is analytic on the open upper half-plane and has non-negative imaginary part there. A Nevanlinna function maps the upper half-plane to itself or to a real constant,[1] but is not necessarily injective or surjective. Functions with this property are sometimes also known as Herglotz, Pick or R functions.
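
As a simple illustrative check of the definition, consider N(z) = −1/z: for Im z > 0,

\operatorname{Im}\left( -\frac{1}{z} \right) = \operatorname{Im}\left( -\frac{\bar{z}}{|z|^{2}} \right) = \frac{\operatorname{Im} z}{|z|^{2}} > 0 ,

so −1/z maps the upper half-plane into itself and is therefore a Nevanlinna function.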

Integral representation

Every Nevanlinna function N admits a representation

N(z) = C + Dz + \int_{\mathbb{R}} \left( \frac{1}{\lambda - z} - \frac{\lambda}{1 + \lambda^{2}} \right) d\mu(\lambda), \qquad z \in \mathbb{H},

where C is a real constant, D is a non-negative constant, ℍ is the upper half-plane, and μ is a Borel measure on ℝ satisfying the growth condition

\int_{\mathbb{R}} \frac{d\mu(\lambda)}{1 + \lambda^{2}} < \infty .
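
For example, the Nevanlinna function N(z) = −1/z has this representation with C = 0, D = 0 and μ = δ₀, the unit point mass at λ = 0 (which trivially satisfies the growth condition), since

0 + 0 \cdot z + \int_{\mathbb{R}} \left( \frac{1}{\lambda - z} - \frac{\lambda}{1 + \lambda^{2}} \right) d\delta_{0}(\lambda) = \frac{1}{0 - z} - \frac{0}{1 + 0} = -\frac{1}{z} .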

Conversely, every function of this form turns out to be a Nevanlinna function. The constants in this representation are related to the function N via

C = \operatorname{Re}\bigl(N(i)\bigr), \qquad D = \lim_{y \to \infty} \frac{N(iy)}{iy},

and the Borel measure μ can be recovered from N by employing the Stieltjes inversion formula (related to the inversion formula for the Stieltjes transformation):

\mu\bigl((\lambda_{1}, \lambda_{2}]\bigr) = \lim_{\delta \to 0^{+}} \lim_{\varepsilon \to 0^{+}} \frac{1}{\pi} \int_{\lambda_{1} + \delta}^{\lambda_{2} + \delta} \operatorname{Im}\bigl(N(\lambda + i\varepsilon)\bigr) \, d\lambda .
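
Continuing the example N(z) = −1/z: here Re(N(i)) = Re(i) = 0 and N(iy)/(iy) = 1/y² → 0 as y → ∞, consistent with C = D = 0, and the inversion formula recovers the unit mass of δ₀. The Python sketch below illustrates this numerically (the test function, interval and grid are illustrative choices, not part of the standard theory):

    import numpy as np

    def N(z):
        # N(z) = -1/z: a Nevanlinna function with C = 0, D = 0 and
        # representing measure mu = delta_0 (unit point mass at 0).
        return -1.0 / z

    def recovered_mass(a, b, eps, n=200_000):
        # Midpoint-rule approximation of (1/pi) * integral_a^b Im N(x + i*eps) dx.
        dx = (b - a) / n
        x = a + (np.arange(n) + 0.5) * dx
        return np.sum(np.imag(N(x + 1j * eps))) * dx / np.pi

    # As eps decreases, the recovered mass of (-1, 1] tends to 1, the mass
    # that delta_0 assigns to any interval containing 0.
    for eps in (1e-1, 1e-2, 1e-3, 1e-4):
        print(f"eps = {eps:g}: mass on (-1, 1] ~ {recovered_mass(-1.0, 1.0, eps):.6f}")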

A very similar representation of such functions is also called the Poisson representation.[2]

Examples

Some elementary examples of Nevanlinna functions follow (with appropriately chosen branch cuts in the first three). (z can be replaced by z − a for any real number a.)

  - z^p with 0 ≤ p ≤ 1
  - −z^p with −1 ≤ p ≤ 0
  - log(z)
  - tan(z) (an example that is not injective)

These are injective but when p does not equal 1 or −1 they are not surjective and can be rotated to some extent around the origin, such as i(z/i)^p with |p| ≤ 1.
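
The role of the branch cut can be seen from the argument: with the principal branch and 0 < p ≤ 1,

0 < \arg z < \pi \quad \Longrightarrow \quad \arg\bigl(z^{p}\bigr) = p \arg z \in (0, p\pi) \subseteq (0, \pi) ,

so z^p has positive imaginary part throughout the upper half-plane; the cases −z^p and log(z) follow from similar computations with the argument.
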
A Möbius transformation (az + b)/(cz + d) is a Nevanlinna function if (sufficient but not necessary) a, b, c and d can be taken to be real numbers with ad − bc > 0, after multiplying all four by a common complex factor if necessary; these are exactly the transformations of this kind that map the real axis to itself and the upper half-plane to itself. One may then add any constant in the upper half-plane, and move the pole into the lower half-plane, giving new values for the parameters. Example: (iz + i − 2)/(z + 1 + i) = i − 1/(z + 1 + i) is a Nevanlinna function.
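
The defining property of these examples can be spot-checked numerically. The Python sketch below samples points in the upper half-plane and verifies that each listed function has non-negative imaginary part there (the particular branches, sample ranges and tolerance are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    # Random sample points z = x + iy with y > 0 in the open upper half-plane.
    z = rng.uniform(-10, 10, 5000) + 1j * rng.uniform(1e-6, 10, 5000)

    examples = {
        "z**0.5 (principal branch)": lambda z: z**0.5,
        "-z**(-0.5) (principal branch)": lambda z: -z**(-0.5),
        "log(z) (principal branch)": np.log,
        "tan(z)": np.tan,
        "(i*z + i - 2)/(z + 1 + i)": lambda z: (1j * z + 1j - 2) / (z + 1 + 1j),
    }

    for name, f in examples.items():
        w = f(z)
        # A Nevanlinna function has non-negative imaginary part on the upper
        # half-plane; allow a tiny tolerance for floating-point rounding.
        ok = bool(np.all(np.imag(w) >= -1e-12))
        print(f"{name:32s} Im >= 0 on all sampled points: {ok}")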

References

  1. A real number is not considered to be in the upper half-plane.
  2. See for example Section 4, "Poisson representation", in Louis de Branges (1968). Hilbert Spaces of Entire Functions. Prentice-Hall. ASIN B0006BUXNM. De Branges gives a form for functions whose real part is non-negative in the upper half-plane.

General