In mathematics, in the area of complex analysis, Carlson's theorem is a uniqueness theorem which was discovered by Fritz David Carlson. Informally, it states that two different analytic functions which do not grow very fast at infinity cannot coincide at the integers. The theorem may be obtained from the Phragmén–Lindelöf theorem, which is itself an extension of the maximum-modulus theorem.
Carlson's theorem is typically invoked to defend the uniqueness of a Newton series expansion. Carlson's theorem has generalized analogues for other expansions.
Assume that f satisfies the following three conditions. The first two conditions bound the growth of f at infinity, whereas the third one states that f vanishes on the non-negative integers:

1. f is an entire function of exponential type, meaning that |f(z)| ≤ C e^{τ|z|} for all z and some real constants C, τ;
2. there exists c < π such that |f(iy)| ≤ C e^{c|y|} for all real y;
3. f(n) = 0 for every non-negative integer n.
Then f is identically zero.
The first condition may be relaxed: it is enough to assume that f is analytic in Re z > 0, continuous in Re z ≥ 0, and satisfies |f(z)| ≤ C e^{τ|z|} on Re z ≥ 0 for some real values C, τ.
To see that the second condition is sharp, consider the function f(z) = sin(πz). It vanishes on the integers; however, it grows exponentially on the imaginary axis with a growth rate of c = π, and indeed it is not identically zero.
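For concreteness (a standard computation, supplementary to the article's text): on the imaginary axis, sin(πiy) = i sinh(πy), so |sin(πiy)| = sinh(π|y|) ~ e^{π|y|}/2 as |y| → ∞. The growth rate along the imaginary axis is therefore exactly π, so the strict inequality c < π in the second condition cannot be weakened to c ≤ π.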
A result, due to Rubel (1956), relaxes the condition that f vanish on the integers. Namely, Rubel showed that the conclusion of the theorem remains valid if f vanishes on a subset A ⊂ {0, 1, 2, ...} of upper density 1, meaning that

  lim sup_{n→∞} |A ∩ {0, 1, ..., n−1}| / n = 1.
This condition is sharp, meaning that the theorem fails for sets A of upper density smaller than 1.
Suppose f(z) is a function that possesses all finite forward differences Δ^n f(0). Consider then the Newton series

  g(z) = ∑_{n=0}^∞ (z choose n) Δ^n f(0),

where (z choose n) is the binomial coefficient and Δ^n f(0) is the n-th forward difference. By construction, one then has that f(k) = g(k) for all non-negative integers k, so that the difference h(k) = f(k) − g(k) = 0. This is one of the conditions of Carlson's theorem; if h obeys the others, then h is identically zero, and the finite differences for f uniquely determine its Newton series. That is, if a Newton series for f exists, and the difference satisfies the Carlson conditions, then f is unique.
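As a concrete illustration of this construction (a minimal numerical sketch, not taken from the article; the sample polynomial and all helper names are chosen here purely for the example), the forward differences of a polynomial give a terminating Newton series that reproduces f at the non-negative integers:

```python
# A minimal numerical sketch (not from the article): build the Newton series
# g(z) = sum_n (z choose n) * Delta^n f(0) from forward differences of a
# sample function, and check that g reproduces f at non-negative integers.

def forward_differences(values):
    """Return [Delta^0 f(0), Delta^1 f(0), ...] from values = [f(0), f(1), ...]."""
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return diffs

def binom(z, n):
    """Generalized binomial coefficient (z choose n) = z(z-1)...(z-n+1) / n!."""
    result = 1.0
    for k in range(n):
        result *= (z - k) / (k + 1)
    return result

def newton_series(diffs, z):
    """Evaluate g(z) = sum_n (z choose n) * Delta^n f(0) (a finite sum here)."""
    return sum(binom(z, n) * d for n, d in enumerate(diffs))

# Example: a cubic polynomial, whose higher forward differences vanish, so the
# Newton series is finite and agrees with f at (indeed beyond) the integers.
f = lambda k: k**3 - 2*k + 1
diffs = forward_differences([f(k) for k in range(6)])
print([newton_series(diffs, k) for k in range(6)])   # matches [f(0), ..., f(5)]
```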
In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.
In number theory, a Carmichael number is a composite number n which in modular arithmetic satisfies the congruence relation b^n ≡ b (mod n) for all integers b.
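The congruence can be checked by brute force. The following short sketch (illustrative only; the helper name is invented here) verifies it for the smallest Carmichael number, 561 = 3 · 11 · 17:

```python
# A short illustrative check (not from the article): verify the defining
# congruence b^n ≡ b (mod n) for the smallest Carmichael number, 561 = 3·11·17.

def is_carmichael(n):
    """True iff n is composite and b**n ≡ b (mod n) holds for every integer b."""
    if n < 4:
        return False
    if all(n % d for d in range(2, int(n**0.5) + 1)):
        return False                       # n is prime, hence excluded
    return all(pow(b, n, n) == b for b in range(n))

print(is_carmichael(561))   # True
print(is_carmichael(341))   # False: 341 = 11·31 is only a Fermat pseudoprime to base 2
```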
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z)/(z − w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.
In mathematics, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann.
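The rate in question is π(x) ~ x / ln x, where π(x) counts the primes up to x. A small numerical comparison (an illustrative sketch, not from the article) using a sieve of Eratosthenes:

```python
# A quick numerical illustration (not from the article) of the rate
# pi(x) ~ x / ln x: count primes with a sieve and compare.
from math import log

def prime_count(n):
    """pi(n): number of primes <= n, via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, prime_count(N), round(N / log(N)))
```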
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.
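A minimal sketch of the basic idea (illustrative only): taking a = 0 and b = h, the forward difference quotient approximates the derivative, with error shrinking roughly linearly in h:

```python
# A minimal sketch (not from the article): the forward difference quotient
# (f(x + h) - f(x)) / h, i.e. a = 0 and b = h, approximates the derivative.
from math import sin, cos

def forward_difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    approx = forward_difference_quotient(sin, x, h)
    print(h, approx, abs(approx - cos(x)))   # the error shrinks roughly like h
```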
In mathematics, a modular form is a (complex) analytic function on the upper half-plane that satisfies a certain kind of functional equation with respect to the group action of the modular group, and also satisfies a growth condition.
In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 defined on the upper half-plane of complex numbers, that is, where the imaginary part is positive. It also occurs in bosonic string theory.
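Numerically, η can be evaluated from its product expansion η(τ) = e^{πiτ/12} ∏_{n≥1} (1 − q^n) with q = e^{2πiτ}. The following sketch (illustrative, not from the article) evaluates it at τ = i, where the known closed form Γ(1/4)/(2π^{3/4}) ≈ 0.768225 provides a check:

```python
# An illustrative evaluation (not from the article) of the product expansion
# eta(tau) = exp(pi*i*tau/12) * prod_{n>=1} (1 - q^n) with q = exp(2*pi*i*tau),
# at tau = i, where everything is real: q = exp(-2*pi), prefactor exp(-pi/12).
from math import exp, pi

q = exp(-2 * pi)
eta = exp(-pi / 12)
for n in range(1, 50):      # the product converges extremely quickly
    eta *= 1 - q ** n
print(eta)                  # ≈ 0.768225, the known value Gamma(1/4) / (2 * pi**0.75)
```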
In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.
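A quick numerical check of the formula (an illustrative sketch, not from the article), using a Gaussian f(x) = e^{−ax²}, whose Fourier transform under the convention f̂(ξ) = ∫ f(x) e^{−2πixξ} dx is f̂(ξ) = √(π/a) e^{−π²ξ²/a}; the two lattice sums then agree:

```python
# A numerical sanity check (illustrative, not from the article): for the
# Gaussian f(x) = exp(-a x^2), the Fourier transform under the convention
# F(xi) = ∫ f(x) exp(-2*pi*i*x*xi) dx is F(xi) = sqrt(pi/a) * exp(-pi^2 xi^2 / a),
# and the Poisson summation formula asserts sum_n f(n) = sum_k F(k).
from math import exp, pi, sqrt

a = 2.0
f = lambda x: exp(-a * x * x)
F = lambda xi: sqrt(pi / a) * exp(-(pi ** 2) * xi * xi / a)

lhs = sum(f(n) for n in range(-50, 51))
rhs = sum(F(k) for k in range(-50, 51))
print(lhs, rhs)             # the two sums agree to machine precision
```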
In mathematics, in particular in the theory of modular forms, a Hecke operator, studied by Erich Hecke (1937a,1937b), is a certain kind of "averaging" operator that plays a significant role in the structure of vector spaces of modular forms and more general automorphic representations.
The Ramanujan tau function, studied by Ramanujan (1916), is the function τ(n) defined by the identity

  ∑_{n≥1} τ(n) q^n = q ∏_{n≥1} (1 − q^n)^24 = η(z)^24,

where q = e^{2πiz} with Im z > 0 and η is the Dedekind eta function.
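The first few values τ(1), ..., τ(6) = 1, −24, 252, −1472, 4830, −6048 can be read off by expanding the product. A short sketch (illustrative only; the helper name is invented here):

```python
# A short sketch (not from the article; tau_coefficients is an ad-hoc helper):
# read tau(n) off the q-expansion of q * prod_{m>=1} (1 - q^m)^24.

def tau_coefficients(N):
    """Return [tau(1), ..., tau(N)] by multiplying out the truncated product."""
    poly = [0] * (N + 1)          # poly[i] = coefficient of q^i
    poly[0] = 1
    for m in range(1, N + 1):
        for _ in range(24):       # multiply 24 times by (1 - q^m), truncating at q^N
            for i in range(N, m - 1, -1):
                poly[i] -= poly[i - m]
    return [poly[n - 1] for n in range(1, N + 1)]   # shift by the leading factor q

print(tau_coefficients(6))        # [1, -24, 252, -1472, 4830, -6048]
```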
In complex analysis, the Phragmén–Lindelöf principle, first formulated by Lars Edvard Phragmén (1863–1937) and Ernst Leonard Lindelöf (1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic function f on an unbounded domain Ω when an additional condition constraining the growth of f on Ω is given. It is a generalization of the maximum modulus principle, which is only applicable to bounded domains.
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive.
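Explicitly, Λ(n) = log p when n = p^k for a prime p and integer k ≥ 1, and Λ(n) = 0 otherwise. A minimal sketch (illustrative only) of a direct computation:

```python
# An illustrative sketch (not from the article) of the standard definition:
# Lambda(n) = log p if n = p^k for a prime p and k >= 1, and 0 otherwise.
from math import log

def von_mangoldt(n):
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:                    # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0   # prime power iff nothing is left
    return 0.0

print([round(von_mangoldt(n), 3) for n in range(1, 11)])
# [0.0, 0.693, 1.099, 0.693, 1.609, 0.0, 1.946, 0.693, 1.099, 0.0]
```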
In mathematics, in the area of complex analysis, Nachbin's theorem is commonly used to establish a bound on the growth rates for an analytic function. This article provides a brief review of growth rates, including the idea of a function of exponential type. Classification of growth rates based on type helps provide a finer tool than big O or Landau notation, since a number of theorems about the analytic structure of the bounded function and its integral transforms can be stated. In particular, Nachbin's theorem may be used to give the domain of convergence of the generalized Borel transform, given below.
In complex analysis, a branch of mathematics, a holomorphic function is said to be of exponential type C if its growth is bounded by the exponential function e^{C|z|} for some real-valued constant C as |z| → ∞. When a function is bounded in this way, it is then possible to express it as certain kinds of convergent summations over a series of other complex functions, as well as understanding when it is possible to apply techniques such as Borel summation, or, for example, to apply the Mellin transform, or to perform approximations using the Euler–Maclaurin formula. The general case is handled by Nachbin's theorem, which defines the analogous notion of Ψ-type for a general function Ψ(z) as opposed to e^z.
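For instance, the function sin(πz) considered earlier in this article is entire and of exponential type π.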
In mathematics, a local martingale is a type of stochastic process, satisfying the localized version of the martingale property. Every martingale is a local martingale; every bounded local martingale is a martingale; in particular, every local martingale that is bounded from below is a supermartingale, and every local martingale that is bounded from above is a submartingale; however, in general a local martingale is not a martingale, because its expectation can be distorted by large values of small probability. In particular, a driftless diffusion process is a local martingale, but not necessarily a martingale.
In mathematics, Lindelöf's theorem is a result in complex analysis named after the Finnish mathematician Ernst Leonard Lindelöf. It states that a holomorphic function on a half-strip in the complex plane that is bounded on the boundary of the strip and does not grow "too fast" in the unbounded direction of the strip must remain bounded on the whole strip. The result is useful in the study of the Riemann zeta function, and is a special case of the Phragmén–Lindelöf principle. Also, see Hadamard three-lines theorem.
Anatoly Alexeyevich Karatsuba was a Russian mathematician working in the fields of analytic number theory, p-adic numbers and Dirichlet series.
In arithmetic combinatorics, the Erdős–Szemerédi theorem states that for every finite set A of integers, at least one of A + A, the set of pairwise sums, or A · A, the set of pairwise products, forms a significantly larger set. More precisely, the Erdős–Szemerédi theorem states that there exist positive constants c and ε such that for any non-empty set A of natural numbers,

  max(|A + A|, |A · A|) ≥ c |A|^{1+ε}.
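A tiny illustration (not from the article): for the arithmetic progression A = {1, ..., 8}, the sumset stays small but the product set is already much larger than A:

```python
# A tiny illustration (not from the article): for the arithmetic progression
# A = {1, ..., 8}, the sumset stays small but the product set is much larger.
A = set(range(1, 9))
sumset = {a + b for a in A for b in A}
productset = {a * b for a in A for b in A}
print(len(A), len(sumset), len(productset))   # 8 15 30
```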
In mathematics, Sobolev spaces for planar domains are one of the principal techniques used in the theory of partial differential equations for solving the Dirichlet and Neumann boundary value problems for the Laplacian in a bounded domain in the plane with smooth boundary. The methods use the theory of bounded operators on Hilbert space. They can be used to deduce regularity properties of solutions and to solve the corresponding eigenvalue problems.
Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota in his direct method approach to soliton equations, based on expressing them in an equivalent bilinear form.