In linear algebra, a circulant matrix is a square matrix in which all rows are composed of the same elements and each row is rotated one element to the right relative to the preceding row. It is a particular kind of Toeplitz matrix.
In numerical analysis, circulant matrices are important because they are diagonalized by a discrete Fourier transform, and hence linear equations that contain them may be quickly solved using a fast Fourier transform.[1] They can be interpreted analytically as the integral kernel of a convolution operator on the cyclic group $C_n$ and hence frequently appear in formal descriptions of spatially invariant linear operations. This property is also critical in modern software-defined radios, which use orthogonal frequency-division multiplexing (OFDM) to spread the symbols (bits) using a cyclic prefix. This enables the channel to be represented by a circulant matrix, simplifying channel equalization in the frequency domain.
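As a concrete illustration of the cyclic-prefix remark, the following sketch (assuming NumPy; the block length, channel taps, and prefix length are illustrative choices, not taken from any source) shows that once the prefix is stripped, the channel acts as a circular convolution, so one-tap equalization in the DFT domain recovers the transmitted block:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                        # block length (illustrative)
h = np.array([1.0, 0.5, 0.25])               # channel impulse response (illustrative)
x = rng.standard_normal(N)                   # one transmitted block (time domain)

L = len(h) - 1                               # cyclic prefix length >= channel memory
tx = np.concatenate([x[-L:], x])             # prepend cyclic prefix
rx = np.convolve(tx, h)[L:L + N]             # channel = linear convolution; drop prefix

# With the prefix removed, the channel matrix is circulant, so it is
# diagonalized by the DFT and equalization is a pointwise division.
H = np.fft.fft(h, N)                         # channel frequency response
x_hat = np.fft.ifft(np.fft.fft(rx) / H).real
print(np.allclose(x_hat, x))                 # True
```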
In cryptography, a circulant matrix is used in the MixColumns step of the Advanced Encryption Standard.
An $n \times n$ circulant matrix $C$ takes the form
$$C = \begin{pmatrix}
c_0 & c_{n-1} & \cdots & c_2 & c_1 \\
c_1 & c_0 & c_{n-1} & & c_2 \\
\vdots & c_1 & c_0 & \ddots & \vdots \\
c_{n-2} & & \ddots & \ddots & c_{n-1} \\
c_{n-1} & c_{n-2} & \cdots & c_1 & c_0
\end{pmatrix}$$
or the transpose of this form (by choice of notation). If each $c_i$ is a $p \times p$ square matrix, then the $np \times np$ matrix $C$ is called a block-circulant matrix.
A circulant matrix is fully specified by one vector, $c$, which appears as the first column (or row) of $C$. The remaining columns (and rows, resp.) of $C$ are each cyclic permutations of the vector $c$ with offset equal to the column (or row, resp.) index, if lines are indexed from $0$ to $n-1$. (Cyclic permutation of rows has the same effect as cyclic permutation of columns.) The last row of $C$ is the vector $c$ shifted by one in reverse.
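The following minimal sketch (assuming NumPy; the helper name is just for this example) builds such a matrix by cyclically shifting the first-column vector, matching the description above; SciPy's scipy.linalg.circulant does the same thing:

```python
import numpy as np

def circulant_from_first_column(c):
    """Circulant matrix whose k-th column is c cyclically shifted down by k."""
    c = np.asarray(c)
    return np.column_stack([np.roll(c, k) for k in range(len(c))])

C = circulant_from_first_column([1, 2, 3, 4])
print(C)
# [[1 4 3 2]
#  [2 1 4 3]
#  [3 2 1 4]
#  [4 3 2 1]]
```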
Different sources define the circulant matrix in different ways, for example as above, or with the vector $c$ corresponding to the first row rather than the first column of the matrix, and possibly with a different direction of shift (which is sometimes called an anti-circulant matrix).
The polynomial $f(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1}$ is called the associated polynomial of the matrix $C$.
The normalized eigenvectors of a circulant matrix are the Fourier modes, namely,
$$v_j = \frac{1}{\sqrt{n}} \left(1, \omega^j, \omega^{2j}, \ldots, \omega^{(n-1)j}\right)^{\mathsf{T}}, \quad j = 0, 1, \ldots, n-1,$$
where $\omega = \exp\!\left(\tfrac{2\pi i}{n}\right)$ is a primitive $n$-th root of unity and $i$ is the imaginary unit.
(This can be understood by realizing that multiplication with a circulant matrix implements a convolution. In Fourier space, convolutions become multiplication. Hence the product of a circulant matrix with a Fourier mode yields a multiple of that Fourier mode, i.e. it is an eigenvector.)
The corresponding eigenvalues are given by
$$\lambda_j = c_0 + c_{n-1} \omega^j + c_{n-2} \omega^{2j} + \cdots + c_1 \omega^{(n-1)j}, \quad j = 0, 1, \ldots, n-1.$$
As a consequence of the explicit formula for the eigenvalues above, the determinant of a circulant matrix can be computed as
$$\det C = \prod_{j=0}^{n-1} \left(c_0 + c_{n-1} \omega^j + c_{n-2} \omega^{2j} + \cdots + c_1 \omega^{(n-1)j}\right).$$
Since taking the transpose does not change the eigenvalues of a matrix, an equivalent formulation is
$$\det C = \prod_{j=0}^{n-1} \left(c_0 + c_1 \omega^j + c_2 \omega^{2j} + \cdots + c_{n-1} \omega^{(n-1)j}\right) = \prod_{j=0}^{n-1} f(\omega^j).$$
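As a quick numerical sanity check of the determinant formula (a sketch assuming NumPy and SciPy; the entries of $c$ are arbitrary):

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([2.0, -1.0, 0.5, 3.0])
C = circulant(c)                           # circulant matrix with first column c

det_direct = np.linalg.det(C)
det_eigen = np.prod(np.fft.fft(c)).real    # product of the eigenvalues (the DFT of c)
print(np.isclose(det_direct, det_eigen))   # True
```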
The rank of a circulant matrix $C$ is equal to $n - d$, where $d$ is the degree of the polynomial $\gcd\!\left(f(x), x^n - 1\right)$.[2]
There are important connections between circulant matrices and the DFT matrices. In fact, it can be shown that
$$C = F_n^{-1} \operatorname{diag}(F_n c)\, F_n,$$
where $F_n$ is the $n \times n$ DFT matrix and $c$ is the first column of $C$. The eigenvalues of $C$ are given by the product $F_n c$. This product can be readily calculated by a fast Fourier transform.[3]
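A small sketch of this factorization (assuming NumPy and SciPy; the vector $c$ is arbitrary) builds the DFT matrix explicitly and checks both the identity $C = F_n^{-1} \operatorname{diag}(F_n c)\, F_n$ and that $F_n c$ is just the FFT of the first column:

```python
import numpy as np
from scipy.linalg import circulant

n = 5
c = np.random.default_rng(1).standard_normal(n)
C = circulant(c)

# DFT matrix F_n with entries exp(-2*pi*i*j*k/n)
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * rows * cols / n)

reconstructed = np.linalg.inv(F) @ np.diag(F @ c) @ F
print(np.allclose(C, reconstructed))          # C = F^{-1} diag(F c) F
print(np.allclose(F @ c, np.fft.fft(c)))      # eigenvalues via one FFT
```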
Circulant matrices can be interpreted geometrically, which explains the connection with the discrete Fourier transform.
Consider vectors in $\mathbb{R}^n$ as functions on the integers with period $n$ (i.e., as periodic bi-infinite sequences $\ldots, a_0, a_1, \ldots, a_{n-1}, a_0, a_1, \ldots$) or, equivalently, as functions on the cyclic group of order $n$ (denoted $C_n$ or $\mathbb{Z}/n\mathbb{Z}$), geometrically on (the vertices of) the regular $n$-gon: this is a discrete analog of periodic functions on the real line or circle.
Then, from the perspective of operator theory, a circulant matrix is the kernel of a discrete integral transform, namely the convolution operator for the function $(c_0, c_1, \ldots, c_{n-1})$; this is a discrete circular convolution. The formula for the convolution of the functions $(b_i) := (c_i) * (a_i)$ is
$$b_k = \sum_{i=0}^{n-1} a_i c_{k-i}$$
(recall that the sequences are periodic) which is the product of the vector $(a_i)$ by the circulant matrix for $(c_i)$.
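The identity can be checked numerically; the sketch below (assuming NumPy and SciPy; the vectors are arbitrary) compares the direct circular-convolution sum, the circulant matrix–vector product, and the DFT-domain pointwise product:

```python
import numpy as np
from scipy.linalg import circulant

a = np.array([1.0, 2.0, 3.0, 4.0])
c = np.array([0.5, -1.0, 0.0, 2.0])
n = len(a)

# b_k = sum_i a_i c_{k-i}, with indices taken modulo n
b_direct = np.array([sum(a[i] * c[(k - i) % n] for i in range(n)) for k in range(n)])

b_matrix = circulant(c) @ a                               # circulant matrix product
b_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(a)).real  # convolution theorem

print(np.allclose(b_direct, b_matrix), np.allclose(b_direct, b_fft))
```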
The discrete Fourier transform then converts convolution into multiplication, which in the matrix setting corresponds to diagonalization.
The $C^*$-algebra of all circulant matrices with complex entries is isomorphic to the group $C^*$-algebra of $\mathbb{Z}/n\mathbb{Z}$.
For a symmetric circulant matrix $C$ one has the extra condition that $c_{n-i} = c_i$. Thus it is determined by $\lfloor n/2 \rfloor + 1$ elements.
The eigenvalues of any real symmetric matrix are real. The corresponding eigenvalues become
$$\lambda_j = c_0 + 2 c_1 \,\Re\, \omega^j + 2 c_2 \,\Re\, \omega^{2j} + \cdots + 2 c_{n/2-1} \,\Re\, \omega^{(n/2-1)j} + c_{n/2} \omega^{(n/2)j}$$
for $n$ even, and
$$\lambda_j = c_0 + 2 c_1 \,\Re\, \omega^j + 2 c_2 \,\Re\, \omega^{2j} + \cdots + 2 c_{(n-1)/2} \,\Re\, \omega^{((n-1)/2)j}$$
for $n$ odd, where $\Re\, z$ denotes the real part of $z$. This can be further simplified by using the fact that $\Re\, \omega^{kj} = \cos(2\pi kj/n)$ and $\omega^{(n/2)j} = (-1)^j$ depending on $j$ being even or odd.
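A short check of this (a sketch assuming NumPy and SciPy; the entries are arbitrary, chosen with $n = 6$ so that $c_{n-i} = c_i$ holds):

```python
import numpy as np
from scipy.linalg import circulant

# Symmetric circulant: c_{n-i} = c_i (here n = 6, determined by floor(n/2)+1 = 4 values)
c = np.array([4.0, 1.0, -2.0, 7.0, -2.0, 1.0])
C = circulant(c)

lam = np.fft.fft(c)                    # eigenvalues
print(np.allclose(C, C.T))             # the matrix is symmetric
print(np.allclose(lam.imag, 0.0))      # and its eigenvalues are real
```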
Symmetric circulant matrices belong to the class of bisymmetric matrices.
The complex version of the circulant matrix, ubiquitous in communications theory, is usually Hermitian. In this case $c_{n-i} = c_i^*$ for $i \le n/2$, and its determinant and all eigenvalues are real.
If $n$ is even the first two rows necessarily take the form
$$\begin{pmatrix} r_0 & z_1 & z_2 & r_3 & z_2^* & z_1^* \\ z_1^* & r_0 & z_1 & z_2 & r_3 & z_2^* \end{pmatrix}$$
(illustrated here for $n = 6$), in which the first element $r_3$ in the top second half-row is real.
If $n$ is odd we get
$$\begin{pmatrix} r_0 & z_1 & z_2 & z_2^* & z_1^* \\ z_1^* & r_0 & z_1 & z_2 & z_2^* \end{pmatrix}$$
(illustrated here for $n = 5$).
Tee [5] has discussed constraints on the eigenvalues for the Hermitian condition.
Given a matrix equation
$$C \mathbf{x} = \mathbf{b},$$
where $C$ is a circulant matrix of size $n$, we can write the equation as the circular convolution
$$\mathbf{c} \star \mathbf{x} = \mathbf{b},$$
where $\mathbf{c}$ is the first column of $C$, and the vectors $\mathbf{c}$, $\mathbf{x}$ and $\mathbf{b}$ are cyclically extended in each direction. Using the circular convolution theorem, we can use the discrete Fourier transform to transform the cyclic convolution into component-wise multiplication
$$F_n(\mathbf{c} \star \mathbf{x}) = F_n(\mathbf{c})\, F_n(\mathbf{x}) = F_n(\mathbf{b})$$
so that
$$\mathbf{x} = F_n^{-1} \left[ \left( \frac{(F_n \mathbf{b})_{\nu}}{(F_n \mathbf{c})_{\nu}} \right)_{\nu = 0, \ldots, n-1} \right].$$
This algorithm is much faster than the standard Gaussian elimination, especially if a fast Fourier transform is used.
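A minimal sketch of this FFT-based solver (assuming NumPy and SciPy; the data are random, and it is assumed that no DFT coefficient of $\mathbf{c}$ vanishes):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(2)
n = 8
c = rng.standard_normal(n)             # first column of C
b = rng.standard_normal(n)             # right-hand side
C = circulant(c)

# x = F^{-1}[ F(b) / F(c) ]: elementwise division in the DFT domain,
# valid as long as no DFT coefficient of c (eigenvalue of C) is zero.
x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

print(np.allclose(C @ x, b))           # solves C x = b in O(n log n)
```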
In graph theory, a graph or digraph whose adjacency matrix is circulant is called a circulant graph/digraph. Equivalently, a graph is circulant if its automorphism group contains a full-length cycle. The Möbius ladders are examples of circulant graphs, as are the Paley graphs for fields of prime order.
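For instance, the Möbius ladder on $2k$ vertices is the circulant graph with connection set $\{\pm 1, k\}$; a small sketch (assuming NumPy, with $k = 4$ chosen for illustration) builds its circulant adjacency matrix:

```python
import numpy as np

k = 4
n = 2 * k                                   # Möbius ladder on 2k vertices
first_column = np.zeros(n, dtype=int)
first_column[[1, k, n - 1]] = 1             # neighbours at offsets +-1 and k (mod n)

# The adjacency matrix is circulant: every column is a cyclic shift of the first.
A = np.column_stack([np.roll(first_column, s) for s in range(n)])
print(np.array_equal(A, A.T))               # undirected graph: A is symmetric
print(A.sum(axis=1))                        # every vertex has degree 3
```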