Carleman matrix

In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.

Definition

The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as

$$M[f]_{jk} = \frac{1}{k!}\left[\frac{d^k}{dx^k}\,(f(x))^j\right]_{x=0},$$

so as to satisfy the (Taylor series) equation

$$(f(x))^j = \sum_{k=0}^{\infty} M[f]_{jk}\, x^k.$$

For instance, the computation of $f(x)$ by

$$f(x) = \sum_{k=0}^{\infty} M[f]_{1,k}\, x^k$$

simply amounts to the dot product of row 1 of $M[f]$ with the column vector $\left[1, x, x^2, x^3, \ldots\right]^{\tau}$.

The entries of $M[f]$ in the next row give the 2nd power of $f(x)$:

$$(f(x))^2 = \sum_{k=0}^{\infty} M[f]_{2,k}\, x^k,$$

and, in order to also have the zeroth power of $f(x)$ in $M[f]$, we adopt row 0, which contains zeros everywhere except the first position, so that

$$(f(x))^0 = 1 = \sum_{k=0}^{\infty} M[f]_{0,k}\, x^k = 1 + \sum_{k=1}^{\infty} 0 \cdot x^k.$$

Thus, the dot product of $M[f]$ with the column vector $\left[1, x, x^2, \ldots\right]^{\tau}$ yields the column vector $\left[1, f(x), (f(x))^2, \ldots\right]^{\tau}$, i.e.,

$$M[f] \cdot \left[1, x, x^2, x^3, \ldots\right]^{\tau} = \left[1, f(x), (f(x))^2, (f(x))^3, \ldots\right]^{\tau}.$$
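As a concrete illustration, the truncated matrix can be computed symbolically. The following Python sketch (ours, not part of the standard treatment; it assumes SymPy is available) builds the upper-left $N \times N$ block of $M[f]$ by Taylor-expanding the powers of $f$ around $0$:

```python
# A minimal sketch of a truncated Carleman matrix, assuming SymPy is available.
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """Upper-left N x N block of M[f]: entry (j, k) is the coefficient of x^k in (f(x))^j."""
    M = sp.zeros(N, N)
    for j in range(N):
        # Taylor-expand (f(x))^j around x = 0, keeping terms below x^N.
        p = sp.series(f**j, x, 0, N).removeO()
        for k in range(N):
            M[j, k] = p.coeff(x, k)
    return M

M = carleman(sp.sin(x), 5)
print(M.row(1))   # row 1 holds the Taylor coefficients of sin: [0, 1, 0, -1/6, 0]
```

Row $j$ of the result holds the truncated Taylor coefficients of $(f(x))^j$, so multiplying by the column vector of powers of $x$ reproduces the powers of $f$ up to the truncation order.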

Generalization

A generalization of the Carleman matrix of a function $f(x)$ can be defined around any point $x_0$:

$$M[f]_{x_0} = M_x[x - x_0]\, M[f]\, M_x[x - x_0]^{-1},$$

or $M[f]_{x_0} = M[g]$, where $g(x) = f(x + x_0) - x_0$. This allows the matrix power to be related as

$$(M[f]_{x_0})^n = M_x[x - x_0]\, M[f]^n\, M_x[x - x_0]^{-1}.$$
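In code, the second form is the most direct: the Carleman matrix of $f$ developed around $x_0$ is the ordinary Carleman matrix of the conjugated function $g$. A short continuation of the sketch above (again purely illustrative):

```python
# M[f]_{x0} computed as the plain Carleman matrix of g(x) = f(x + x0) - x0.
x0 = sp.Rational(1, 2)
f = sp.sin(x)
g = f.subs(x, x + x0) - x0
M_shifted = carleman(g, 5)   # Carleman matrix of sin developed around x0 = 1/2
```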

General Series

Another way to generalize the construction even further is to think about a general series in the following way:

Let $f(x) = \sum_n c_n(f)\,\psi_n(x)$ be a series approximation of $f(x)$, where $\{\psi_n(x)\}_n$ is a basis of the space containing $f(x)$.

Assuming that $\{\psi_n(x)\}_n$ is also a basis for $\psi_m(f(x))$, we can define $G[f]_{mn} = c_n(\psi_m \circ f)$, so that $\psi_m \circ f = \sum_n G[f]_{mn}\,\psi_n$. We can now prove that $G[g \circ f] = G[g] \cdot G[f]$, if we assume that $\{\psi_n(x)\}_n$ is also a basis for $g(x)$ and $g(f(x))$.

Let $g(x)$ be such that $\psi_l \circ g = \sum_m G[g]_{lm}\,\psi_m$, where $G[g]_{lm} = c_m(\psi_l \circ g)$. Now

$$\psi_l(g(f(x))) = \sum_m G[g]_{lm}\,\psi_m(f(x)) = \sum_m G[g]_{lm} \sum_n G[f]_{mn}\,\psi_n(x) = \sum_n \left(\sum_m G[g]_{lm}\, G[f]_{mn}\right)\psi_n(x).$$

Comparing the first and the last term, and from $\{\psi_n(x)\}_n$ being a basis for $g(x)$, $f(x)$ and $g(f(x))$, it follows that

$$G[g \circ f]_{ln} = \sum_m G[g]_{lm}\, G[f]_{mn}, \qquad \text{i.e.} \qquad G[g \circ f] = G[g] \cdot G[f].$$
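This construction is straightforward to mirror in code. In the sketch below (illustrative only; the names `g_matrix`, `psi` and `coeff` are ours, not standard), the basis and the coefficient functional are supplied as functions, and with the monomial basis and Taylor coefficients the result reduces to `carleman()` from the Definition section:

```python
# G[f][m, n] = c_n(psi_m o f) for a user-supplied basis `psi` and
# coefficient functional `coeff` (both illustrative placeholders).
def g_matrix(f, psi, coeff, N):
    G = sp.zeros(N, N)
    for m in range(N):
        composed = psi(m).subs(x, f)   # psi_m(f(x))
        for n in range(N):
            G[m, n] = coeff(composed, n)
    return G

# The monomial basis with Taylor coefficients recovers the Carleman matrix:
mono = lambda n: x**n
taylor = lambda h, n: sp.series(h, x, 0, n + 1).removeO().coeff(x, n)
assert g_matrix(sp.sin(x), mono, taylor, 5) == carleman(sp.sin(x), 5)
```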

Examples

Rederive (Taylor) Carleman Matrix

If we set $\psi_n(x) = x^n$ we recover the Carleman matrix. Because

$$f(x) = \sum_n c_n(f)\, x^n,$$

the coefficient $c_n(f)$ must be the $n$-th coefficient of the Taylor series of $f$. Therefore

$$c_n(f) = \frac{1}{n!}\left[\frac{d^n}{dx^n}\, f(x)\right]_{x=0},$$

and hence

$$G[f]_{mn} = c_n(f^m) = \frac{1}{n!}\left[\frac{d^n}{dx^n}\,(f(x))^m\right]_{x=0},$$

which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.)

Carleman Matrix For Orthonormal Basis

If $\{e_n\}_n$ is an orthonormal basis for a Hilbert space with a defined inner product $\langle \cdot, \cdot \rangle$, we can set $\psi_n = e_n$, and $c_n(f)$ will be $\langle f, e_n \rangle$. Then $G[f]_{mn} = \langle e_m \circ f,\, e_n \rangle$.

Carleman Matrix for Fourier Series

If $e_n(x) = e^{inx}$, we obtain the analogue for Fourier series. Let $\hat{c}_n$ and $\hat{G}$ denote the Carleman coefficient and matrix in the Fourier basis. Because the basis is orthogonal, we have

$$\hat{c}_n(f) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, e^{-inx}\, dx.$$

The matrix entries are therefore $\hat{G}[f]_{mn} = \hat{c}_n(e^{imf})$, that is,

$$\hat{G}[f]_{mn} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{imf(x)}\, e^{-inx}\, dx.$$
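Numerically, these entries can be approximated by quadrature. A sketch (illustrative; it assumes NumPy and SciPy are available and that $f$ is real and $2\pi$-periodic):

```python
# Approximate G_hat[f][m, n] = (1/2pi) * integral of exp(i*m*f(x) - i*n*x)
# over [-pi, pi], integrating real and imaginary parts separately.
import numpy as np
from scipy.integrate import quad

def fourier_carleman_entry(f, m, n):
    integrand = lambda t: np.exp(1j * (m * f(t) - n * t))
    re, _ = quad(lambda t: integrand(t).real, -np.pi, np.pi)
    im, _ = quad(lambda t: integrand(t).imag, -np.pi, np.pi)
    return (re + 1j * im) / (2 * np.pi)

entry = fourier_carleman_entry(np.sin, 1, 1)   # entry (1, 1) for f = sin
```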

Properties

Carleman matrices satisfy the fundamental relationship

$$M[f \circ g] = M[f]\, M[g],$$

which makes the Carleman matrix $M$ a (direct) representation of $f$. Here the term $f \circ g$ denotes the composition of functions, $f(g(x))$.

Other properties include:

- $M[f^n] = M[f]^n$, where $f^n$ is an iterated function, and
- $M[f^{-1}] = M[f]^{-1}$, where $f^{-1}$ is the inverse function (if the Carleman matrix is invertible).
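The fundamental relationship can be verified directly on truncated matrices: when $f(0) = g(0) = 0$ the Carleman matrices are triangular, so the $N \times N$ blocks compose exactly. A continuation of the earlier sketch:

```python
# Check M[f o g] = M[f] * M[g] on truncated blocks (exact when f(0) = g(0) = 0).
f = x + x**2
g = 2*x + x**3
assert carleman(f.subs(x, g), 6) == carleman(f, 6) * carleman(g, 6)
```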

Examples

The Carleman matrix of a constant is:

$$M[a] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ a & 0 & 0 & \cdots \\ a^2 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

The Carleman matrix of the identity function is:

$$M_x[x] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

The Carleman matrix of a constant addition is:

$$M_x[a + x]_{jk} = \binom{j}{k} a^{j-k}$$

The Carleman matrix of the successor function is equivalent to the binomial coefficient:

$$M_x[1 + x]_{jk} = \binom{j}{k}$$

The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials:

$$M_x[\log(1 + x)]_{jk} = s(k, j)\,\frac{j!}{k!}$$

The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind scaled by factorials:

$$M_x[-\log(1 - x)]_{jk} = |s(k, j)|\,\frac{j!}{k!}$$

The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials:

$$M_x[\exp(x) - 1]_{jk} = S(k, j)\,\frac{j!}{k!}$$

The Carleman matrix of exponential functions is:

$$M_x[\exp(ax)]_{jk} = \frac{(ja)^k}{k!}$$

The Carleman matrix of a constant multiple is:

$$M_x[cx]_{jk} = c^j\,\delta_{jk}$$

The Carleman matrix of a linear function is:

$$M_x[a + cx]_{jk} = \binom{j}{k} a^{j-k} c^k$$

The Carleman matrix of a function $f(x) = \sum_{k=1}^{\infty} f_k x^k$ is:

$$M[f] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ 0 & f_1 & f_2 & \cdots \\ 0 & 0 & f_1^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

The Carleman matrix of a function $f(x) = f_0 + \sum_{k=1}^{\infty} f_k x^k$ is:

$$M[f] = \begin{pmatrix} 1 & 0 & 0 & \cdots \\ f_0 & f_1 & f_2 & \cdots \\ f_0^2 & 2 f_0 f_1 & f_1^2 + 2 f_0 f_2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
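Two of these closed forms are easy to spot-check with the `carleman()` sketch from the Definition section (illustrative; `stirling` is SymPy's Stirling-number function, whose default kind is 2):

```python
from sympy import binomial, factorial
from sympy.functions.combinatorial.numbers import stirling

N = 6
M_succ = carleman(1 + x, N)            # successor function
assert all(M_succ[j, k] == binomial(j, k)
           for j in range(N) for k in range(N))

M_exp = carleman(sp.exp(x) - 1, N)     # exp(x) - 1
assert all(M_exp[j, k] == stirling(k, j) * factorial(j) / factorial(k)
           for j in range(N) for k in range(N))
```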

The Bell matrix or the Jabotinsky matrix of a function $f(x)$ is defined as [1] [2] [3]

$$B[f]_{jk} = \frac{1}{j!}\left[\frac{d^j}{dx^j}\,(f(x))^k\right]_{x=0},$$

so as to satisfy the equation

$$(f(x))^k = \sum_{j=0}^{\infty} B[f]_{jk}\, x^j.$$

These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials. [4] The Bell matrix is the transpose of the Carleman matrix and satisfies

$$B[f \circ g] = B[g]\, B[f],$$

which makes the Bell matrix $B$ an anti-representation of $f$.
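In terms of the earlier sketch, the Bell matrix is simply the transpose, and the reversed composition order can be checked the same way (illustrative continuation, again with $f(0) = g(0) = 0$):

```python
# B[f] = M[f]^T, and B[f o g] = B[g] * B[f] on truncated blocks.
def bell(f, N):
    return carleman(f, N).T

f = x + x**2
g = 2*x + x**3
assert bell(f.subs(x, g), 6) == bell(g, 6) * bell(f, 6)
```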

Notes

  1. Knuth, D. (1992). "Convolution Polynomials". The Mathematica Journal. 2 (4): 67–78. arXiv:math/9207221. Bibcode:1992math......7221K.
  2. Jabotinsky, Eri (1953). "Representation of functions by matrices. Application to Faber polynomials". Proceedings of the American Mathematical Society. 4 (4): 546–553. doi:10.1090/S0002-9939-1953-0059359-0. ISSN 0002-9939.
  3. Lang, W. (2000). "On generalizations of the Stirling number triangles". Journal of Integer Sequences. 3 (2.4): 1–19. Bibcode:2000JIntS...3...24L.
  4. Jabotinsky, Eri (1947). "Sur la représentation de la composition de fonctions par un produit de matrices. Application à l'itération de e^x et de e^x−1". Comptes rendus de l'Académie des Sciences. 224: 323–324.
