The local inverse is a kind of inverse function or matrix inverse used in image and signal processing, as well as in other general areas of mathematics.
The concept of a local inverse came from interior reconstruction of CT images. One interior reconstruction method first approximately reconstructs the image outside the ROI (region of interest), and then subtracts the re-projection data of this outside image from the original projection data; this corrected data is then used to make a new reconstruction inside the ROI. The idea can be widened to a full inverse: instead of directly making an inverse, the unknowns outside of the local region can be inverted first. The data contributed by these outside unknowns is then recalculated and subtracted from the original data, and the inverse is taken inside the local region using this newly produced data.
This concept is a direct extension of local tomography, the generalized inverse, and iterative refinement methods. It is used to solve inverse problems with incomplete input data, similarly to local tomography. However, the concept of the local inverse can also be applied to complete input data.
Assume there are $A$, $B$, $x$ and $y$ that satisfy
$$ A x = y. $$
Here $BA$ is not equal to $I$, but is close to $I$, where $I$ is the identity matrix. Examples of matrices of the type $B$ are the filtered back-projection operator in image reconstruction and the inverse with regularization. In this case the following is an approximate solution:
$$ x_0 = B y. $$
A better solution for $x$ can be found as follows:
$$ x_1 = x_0 + B\,(y - A x_0). $$
In the above formula the inside part of the initial solution $x_0$ is useless, hence only the outside part $g_0 = [x_0]_g = [B y]_g$ is re-projected and subtracted, and the inside part is reconstructed anew:
$$ f_1 = \big[\, B\,( y - A_g\, g_0 )\,\big]_f . $$
In the same way, there is
$$ g_1 = \big[\, B\,( y - A_f\, f_1 )\,\big]_g . $$
In the above the solution is divided into two parts, $x = \begin{bmatrix} f \\ g \end{bmatrix}$, where $f$ is inside the ROI, i.e. the FOV (field of view), and $g$ is outside the FOV. Correspondingly, $A = [\,A_f \ \ A_g\,]$ is split into its ROI and outside columns, and $[\,\cdot\,]_f$ and $[\,\cdot\,]_g$ denote the parts of a vector inside and outside the ROI.
The two parts can be extended to many parts, in which case the extended method is referred to as the sub-region iterative refinement method.[1]
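The refinement iteration above is easy to check numerically. The following sketch (all names are illustrative; $B$ is taken as a regularized inverse, one of the examples of a matrix of the type $B$ mentioned above) shows the error of $x_0 = By$ shrinking under $x_{k+1} = x_k + B(y - Ax_k)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = 3.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # well-conditioned system
B = np.linalg.inv(A.T @ A + 0.5 * np.eye(n)) @ A.T              # regularized inverse: B A is close to I, not equal

x_true = rng.standard_normal(n)
y = A @ x_true

x = B @ y                           # approximate solution x0 = B y
for _ in range(20):                 # refinement: x_{k+1} = x_k + B (y - A x_k)
    x = x + B @ (y - A @ x)

print(np.linalg.norm(B @ y - x_true))   # error of the one-shot solution x0
print(np.linalg.norm(x - x_true))       # error after refinement (much smaller)
```

The iteration converges here because every eigenvalue of $I - BA$ lies strictly between 0 and 1 for this choice of $B$.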
Assume $A$, $B$, $C$, and $D$ are known matrices; $x$ and $y$ are unknown vectors; $b$ is a known vector; $c$ is an unknown vector; and they satisfy
$$ \begin{bmatrix} b \\ c \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. $$
Here $b$ is the measured data inside the FOV, $c$ is the unmeasured data, $x$ is the object inside the ROI and $y$ is the object outside. We are interested in determining x. What is a good solution?
Here $R$ is, or is close to, the inverse of the full matrix
$$ \begin{bmatrix} A & B \\ C & D \end{bmatrix}. $$
The local inverse algorithm is as follows:
(1) An extrapolated version of the unknown data $c$ is obtained by
$$ c_0 = \mathrm{Ext}(b), $$
where $\mathrm{Ext}$ denotes a data extrapolation from the known data $b$.
(2) An approximate version of $y$ is calculated by
$$ y_0 = \left[ R \begin{bmatrix} b \\ c_0 \end{bmatrix} \right]_y, $$
where $[\,\cdot\,]_y$ denotes the part of the result corresponding to $y$.
(3) A correction for $y_0$ is done by
$$ y_1 = y_0 + \delta y, $$
where $\delta y$ is, for example, a constant (DC) offset.
(4) A corrected function for the known data is calculated by
$$ b_1 = b - B y_1. $$
(5) An extrapolated function for the unknown data is obtained by
$$ c_1 = \mathrm{Ext}(b_1). $$
(6) A local inverse solution is obtained:
$$ x_1 = \left[ R \begin{bmatrix} b_1 \\ c_1 \end{bmatrix} \right]_x. $$
In the above algorithm, the data are extrapolated two times (steps (1) and (5)); these extrapolations are used to overcome the data truncation problem. There is one correction for $y$ (step (3)). This correction can be a constant correction, which corrects the DC value of $y$, or a linear correction according to prior knowledge about $y$. This algorithm can be found in the following reference.[2]
In the example of that reference,[3] a constant correction is made. A more complicated correction, for example a linear correction, might achieve better results.
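A much-simplified numerical sketch of the above steps is given below. It is only an illustration under strong assumptions: the matrices are random stand-ins, the two extrapolation steps are skipped, and the outside object is taken to be nearly constant, so that the constant (DC) correction of step (3), with an assumed known DC value, does most of the work. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_y, m = 20, 30, 40                        # sizes of x, y and of the known data b

A = rng.standard_normal((m, n_x))               # known block acting on x (inside ROI)
B = rng.standard_normal((m, n_y))               # known block acting on y (outside ROI)
x_true = rng.standard_normal(n_x)
y_true = 2.0 + 0.1 * rng.standard_normal(n_y)   # nearly flat outside object
b = A @ x_true + B @ y_true                     # the measured (known) data

x_naive = np.linalg.pinv(A) @ b                 # ignore the outside object entirely

# Steps (2)+(3), simplified: replace the approximate outside object by its
# DC-corrected version, assuming the DC value 2.0 is known a priori.
y_corr = np.full(n_y, 2.0)
b_corr = b - B @ y_corr                         # step (4): subtract the re-projection of y
x_local = np.linalg.pinv(A) @ b_corr            # step (6): solve inside the ROI

print(np.linalg.norm(x_naive - x_true))         # large truncation-type error
print(np.linalg.norm(x_local - x_true))         # much smaller after the correction
```

The improvement here comes entirely from subtracting a good estimate of the outside contribution $By$ before inverting inside the ROI, which is the core of the method described above.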
Shuang-ren Zhao defined a local inverse[2] to solve the above problem. First consider the simplest solution, based on the correct data
$$ b_x = b - B y, $$
or
$$ b_x = A x. $$
Here $b_x$ is the correct data, in which there is no influence of the outside object function $y$. From this data it is easy to get the correct solution,
$$ x = A^{-1} b_x. $$
Here $A^{-1} b_x$ is a correct (or exact) solution for the unknown $x$, which means it satisfies $A x = b_x$ exactly. In case $A$ is not a square matrix or has no inverse, the generalized inverse can be applied:
$$ x = A^{+} b_x. $$
Since $y$ is unknown, if it is set to $0$, an approximate solution is obtained:
$$ x_0 = A^{+} b. $$
In the above solution the result is related to the unknown vector $y$, since
$$ x_0 = A^{+} b = A^{+} A x + A^{+} B y. $$
Because $y$ can have any value, the term $A^{+} B y$ produces very strong artifacts in the result.
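A small numerical check of this artifact term (a sketch; the matrices are random stand-ins, and `numpy.linalg.pinv` plays the role of the generalized inverse):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))      # system matrix for the ROI part x
B = rng.standard_normal((40, 30))      # system matrix for the outside part y
x, y = rng.standard_normal(20), rng.standard_normal(30)
b = A @ x + B @ y                      # measured data

x0 = np.linalg.pinv(A) @ b             # approximate solution with y set to 0

# For this full-column-rank A, pinv(A) @ A = I, so the error is exactly
# the truncation-artifact term pinv(A) @ B @ y.
print(np.allclose(x0 - x, np.linalg.pinv(A) @ B @ y))   # True
```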
These kinds of artifacts are referred to as truncation artifacts in the field of CT image reconstruction. In order to minimize them in the solution, a special matrix $Q$ is considered, which satisfies
$$ Q B = 0 $$
and thus satisfies
$$ Q b = Q A x + Q B y = Q A x. $$
Solving the above equation with the generalized inverse gives
$$ x_1 = (Q A)^{+} Q\, b. $$
Here $(Q A)^{+}$ is the generalized inverse of $Q A$, and $x_1$ is a solution for $x$. It is easy to find a matrix Q which satisfies $Q B = 0$; specifically, $Q$ can be written as the following:
$$ Q = I - B B^{+}. $$
This matrix is referred to as the transverse projection of $B$, where $B^{+}$ is the generalized inverse of $B$. The matrix $B^{+}$ satisfies
$$ B B^{+} B = B, $$
from which it follows that
$$ Q B = (I - B B^{+})\, B = B - B B^{+} B = 0. $$
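The transverse projection and its defining property are easy to check numerically (a sketch with a random $B$; again `numpy.linalg.pinv` serves as the generalized inverse):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((40, 30))
Bp = np.linalg.pinv(B)              # generalized inverse of B
Q = np.eye(40) - B @ Bp             # transverse projection of B

print(np.allclose(B @ Bp @ B, B))   # B B^+ B = B
print(np.allclose(Q @ B, 0.0))      # hence Q B = 0
```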
It is easy to prove that $Q Q = Q$:
$$ Q Q = (I - B B^{+})(I - B B^{+}) = I - 2 B B^{+} + B\,(B^{+} B B^{+}) = I - B B^{+} = Q, $$
using the property $B^{+} B B^{+} = B^{+}$, and hence
$$ Q Q Q = Q. $$
Hence $Q$ is also the generalized inverse of $Q$. That means
$$ Q^{+} = Q. $$
Hence,
$$ x_1 = (Q A)^{+} Q\, b, $$
or, since it can be shown that $(Q A)^{+} Q = (Q A)^{+}$ when $Q$ is an orthogonal projection,
$$ x_1 = (Q A)^{+} b. $$
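The idempotence of $Q$, its self-inverse property, and the resulting simplification can likewise be checked numerically (random stand-in matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 20))
B = rng.standard_normal((40, 30))
Q = np.eye(40) - B @ np.linalg.pinv(B)

print(np.allclose(Q @ Q, Q))                 # Q Q = Q
print(np.allclose(np.linalg.pinv(Q), Q))     # Q^+ = Q
QAp = np.linalg.pinv(Q @ A)
print(np.allclose(QAp @ Q, QAp))             # (QA)^+ Q = (QA)^+
```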
The matrix
$$ L = (Q A)^{+} Q $$
is referred to as the local inverse of the matrix $A$. Using the local inverse instead of the generalized inverse or the inverse can avoid artifacts from unknown input data. Considering
$$ L B = (Q A)^{+} Q B = 0, $$
it follows that
$$ x_1 = L\, b = L A x + L B y = L A x = (Q A)^{+} Q A\, x. $$
Hence $x_1$ is only related to the correct data $b_x = A x$. The error of this solution can be calculated as
$$ x_1 - x = \left[ (Q A)^{+} (Q A) - I \right] x. $$
This kind of error is called the bowl effect. The bowl effect is not related to the unknown object $y$; it is only related to the correct data $b_x$.
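Putting the pieces together (same random setup as above; all names illustrative), one can verify that the local inverse solution depends only on the correct data and that its error is exactly the bowl term, and compare its size with the truncation artifacts of the generalized-inverse solution:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 20))
B = rng.standard_normal((40, 30))
x = rng.standard_normal(20)
y = rng.standard_normal(30)
b = A @ x + B @ y

Q = np.eye(40) - B @ np.linalg.pinv(B)   # transverse projection of B
L = np.linalg.pinv(Q @ A) @ Q            # local inverse of A
x1 = L @ b

print(np.allclose(x1, L @ (A @ x)))      # x1 depends only on the correct data A x
bowl = (np.linalg.pinv(Q @ A) @ (Q @ A) - np.eye(20)) @ x
print(np.allclose(x1 - x, bowl))         # the error is exactly the bowl effect
trunc = np.linalg.pinv(A) @ B @ y        # truncation artifacts of x0 = A^+ b
print(np.linalg.norm(bowl), np.linalg.norm(trunc))
```

In this unstructured random example the bowl effect need not be smaller than the truncation artifacts; whether the inequality below holds depends on the problem.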
In case the contribution of the bowl effect to the solution is smaller than that of the truncation artifacts, i.e.
$$ \left\| \left[ (Q A)^{+} (Q A) - I \right] x \right\| < \left\| A^{+} B y \right\|, $$
the local inverse solution $x_1$ is better than $x_0 = A^{+} b$ for this kind of inverse problem. Using $L$ instead of $A^{+}$ or $A^{-1}$, the truncation artifacts are replaced by the bowl effect. This result is the same as in local tomography; hence the local inverse is a direct extension of the concept of local tomography.
It is well known that the solution given by the generalized inverse is a minimal $L_2$-norm solution. From the above derivation it is clear that the solution given by the local inverse is a minimal $L_2$-norm solution subject to the condition that the influence of the unknown object $y$ is $0$. Hence the local inverse is also a direct extension of the concept of the generalized inverse.
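In formulas, the two statements above can be restated as minimal-norm least-squares problems, using the notation of this section:
$$ x_0 = A^{+} b \quad \text{minimizes } \|x\|_2 \text{ among the minimizers of } \|A x - b\|_2, $$
$$ x_1 = (Q A)^{+} Q\, b \quad \text{minimizes } \|x\|_2 \text{ among the minimizers of } \|Q A x - Q b\|_2, $$
where the factor $Q$ removes the influence of the unknown object $y$, since $Q B = 0$.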