In applied mathematics, Wahba's problem, first posed by Grace Wahba in 1965, seeks to find a rotation matrix (special orthogonal matrix) between two coordinate systems from a set of (weighted) vector observations. Solutions to Wahba's problem are often used in satellite attitude determination utilising sensors such as magnetometers and multi-antenna GPS receivers. The cost function that Wahba's problem seeks to minimise is as follows:

J(R) = \frac{1}{2} \sum_{k=1}^{N} a_k \left\| w_k - R v_k \right\|^2

where w_k is the k-th 3-vector measurement in the reference frame, v_k is the corresponding k-th 3-vector measurement in the body frame and R is a 3 × 3 rotation matrix between the coordinate frames.[1] The a_k are an optional set of weights for each observation.
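For concreteness, the cost can be evaluated directly from this definition. The following is a minimal sketch assuming NumPy; the function name wahba_cost and the array layout (one observation per row) are choices made here, not part of the original formulation:

```python
import numpy as np

def wahba_cost(R, w, v, a=None):
    """Evaluate J(R) = 0.5 * sum_k a_k * ||w_k - R v_k||^2.

    R : (3, 3) candidate rotation matrix
    w : (N, 3) reference-frame observations w_k (one per row)
    v : (N, 3) corresponding body-frame observations v_k
    a : (N,) optional non-negative weights (defaults to all ones)
    """
    w, v = np.asarray(w, float), np.asarray(v, float)
    a = np.ones(len(w)) if a is None else np.asarray(a, float)
    residuals = w - v @ np.asarray(R, float).T   # row k holds w_k - R v_k
    return 0.5 * np.sum(a * np.sum(residuals**2, axis=1))
```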
A number of solutions to the problem have appeared in the literature, notably Davenport's q-method,[2] QUEST and methods based on the singular value decomposition (SVD). Several methods for solving Wahba's problem are discussed by Markley and Mortari.
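As an illustration of how Davenport's q-method can be organised, the following is a minimal sketch (the function name, the use of NumPy, and the assumption of unit observation vectors are choices made here, not prescribed by the references). It builds the attitude profile matrix B, assembles Davenport's 4 × 4 K matrix, and takes the eigenvector belonging to its largest eigenvalue as the optimal quaternion:

```python
import numpy as np

def davenport_q_method(w, v, a=None):
    """Sketch of Davenport's q-method for Wahba's problem.

    w : (N, 3) unit vectors in the reference frame
    v : (N, 3) corresponding unit vectors in the body frame
    a : (N,) optional weights
    Returns a 3x3 rotation matrix R such that w_k ~ R v_k.
    """
    w, v = np.asarray(w, float), np.asarray(v, float)
    a = np.ones(len(w)) if a is None else np.asarray(a, float)

    # Attitude profile matrix B = sum_k a_k * w_k v_k^T
    B = (a[:, None] * w).T @ v
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])

    # Davenport's K matrix; its dominant eigenvector is the optimal quaternion
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma

    _, eigvec = np.linalg.eigh(K)
    q = eigvec[:, -1]              # eigenvector of the largest eigenvalue
    rho, q4 = q[:3], q[3]          # vector part, scalar part

    # Quaternion-to-matrix formula consistent with the K matrix above,
    # so that the returned R maps body-frame vectors onto reference-frame vectors
    rx = np.array([[0.0, -rho[2], rho[1]],
                   [rho[2], 0.0, -rho[0]],
                   [-rho[1], rho[0], 0.0]])
    return (q4**2 - rho @ rho) * np.eye(3) + 2 * np.outer(rho, rho) - 2 * q4 * rx
```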
This is an alternative formulation of the orthogonal Procrustes problem (to obtain that formulation, take the vectors multiplied by the square roots of the corresponding weights as the columns of two matrices with N columns). An elegant derivation of the solution, in about one and a half pages, can be found in [3].
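Concretely, if W and V denote the 3 × N matrices whose k-th columns are \sqrt{a_k}\, w_k and \sqrt{a_k}\, v_k respectively (these matrix symbols are introduced here only for this sketch), the cost above can be rewritten as

J(R) = \frac{1}{2} \sum_{k=1}^{N} \left\| \sqrt{a_k}\, w_k - R\, \sqrt{a_k}\, v_k \right\|^2 = \frac{1}{2} \left\| W - R V \right\|_F^2

where \|\cdot\|_F is the Frobenius norm, which is the orthogonal Procrustes objective restricted to rotation matrices.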
One solution can be found using a singular value decomposition (SVD).

1. Obtain a matrix B as follows:

B = \sum_{k=1}^{N} a_k w_k v_k^{T}

2. Find the singular value decomposition of B:

B = U S V^{T}

3. The rotation matrix is simply:

R = U M V^{T}

where M = \operatorname{diag}(1,\ 1,\ \det(U)\det(V)).
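The three steps above translate directly into a short routine. The following is a minimal sketch assuming NumPy; the function name wahba_svd and the example data are illustrative only:

```python
import numpy as np

def wahba_svd(w, v, a=None):
    """SVD solution to Wahba's problem (sketch).

    w : (N, 3) unit vectors in the reference frame
    v : (N, 3) corresponding unit vectors in the body frame
    a : (N,) optional weights
    Returns the rotation matrix R minimising 0.5 * sum_k a_k ||w_k - R v_k||^2.
    """
    w, v = np.asarray(w, float), np.asarray(v, float)
    a = np.ones(len(w)) if a is None else np.asarray(a, float)

    # Step 1: B = sum_k a_k * w_k v_k^T
    B = (a[:, None] * w).T @ v

    # Step 2: singular value decomposition B = U S V^T
    U, S, Vt = np.linalg.svd(B)

    # Step 3: R = U M V^T with M = diag(1, 1, det(U) det(V)),
    # which forces det(R) = +1 (a proper rotation rather than a reflection)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt

# Example: recover a known rotation from two noiseless observations
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
v = np.array([[1., 0., 0.], [0., 1., 0.]])
w = v @ R_true.T                               # w_k = R_true v_k
print(np.allclose(wahba_svd(w, v), R_true))    # True
```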
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix into a rotation, followed by a rescaling followed by another rotation. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any matrix. It is related to the polar decomposition.
In mathematics, particularly in linear algebra, a skew-symmetric matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition A^{T} = -A.
In mathematics, a Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j: a_{ij} = \overline{a_{ji}}.
In linear algebra, a QR decomposition, also known as a QR factorization or QU factorization, is a decomposition of a matrix A into a product A = QR of an orthonormal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares (LLS) problem and is the basis for a particular eigenvalue algorithm, the QR algorithm.
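As a small illustration of that use (a sketch assuming NumPy; the data values are arbitrary), a linear least squares problem min_x ||Ax - b|| can be solved by factoring A = QR and back-substituting:

```python
import numpy as np

# Fit a line through three points by least squares via QR
A = np.array([[1., 1.], [1., 2.], [1., 3.]])   # design matrix [1, x]
b = np.array([1., 2., 2.])                     # observations
Q, R = np.linalg.qr(A)                         # A = Q R, Q has orthonormal columns
x = np.linalg.solve(R, Q.T @ b)                # solve R x = Q^T b
print(x)                                       # least squares coefficients
```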
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P^2 = P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once. It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, the matrix

R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}

rotates points in the xy-plane counterclockwise through an angle θ about the origin of a two-dimensional Cartesian coordinate system.
In geometry, Euler's rotation theorem states that, in three-dimensional space, any displacement of a rigid body such that a point on the rigid body remains fixed, is equivalent to a single rotation about some axis that runs through the fixed point. It also means that the composition of two rotations is also a rotation. Therefore the set of rotations has a group structure, known as a rotation group.
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off.
In mathematics, a Euclidean distance matrix is an n×n matrix representing the spacing of a set of n points in Euclidean space. For points x_1, x_2, …, x_n in k-dimensional space ℝ^k, the elements of their Euclidean distance matrix A are given by squares of distances between them. That is,

A_{ij} = \| x_i - x_j \|^2.
In mathematics, particularly in the theory of spinors, the Weyl–Brauer matrices are an explicit realization of a Clifford algebra as a matrix algebra of 2⌊n/2⌋ × 2⌊n/2⌋ matrices. They generalize the Pauli matrices to n dimensions, and are a specific construction of higher-dimensional gamma matrices. They are named for Richard Brauer and Hermann Weyl, and were one of the earliest systematic constructions of spinors from a representation theoretic standpoint.
The Kabsch algorithm, also known as the Kabsch-Umeyama algorithm, named after Wolfgang Kabsch and Shinji Umeyama, is a method for calculating the optimal rotation matrix that minimizes the RMSD between two paired sets of points. It is useful for point-set registration in computer graphics, and in cheminformatics and bioinformatics to compare molecular and protein structures.
The orthogonal Procrustes problem is a matrix approximation problem in linear algebra. In its classical form, one is given two matrices A and B and asked to find an orthogonal matrix Ω which most closely maps A to B. Specifically, the orthogonal Procrustes problem is the optimization problem

\Omega^{*} = \arg\min_{\Omega} \| \Omega A - B \|_{F} \quad \text{subject to} \quad \Omega^{T} \Omega = I

where \|\cdot\|_F denotes the Frobenius norm.
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
Non-linear least squares is the form of least squares analysis used to fit a set of m observations with a model that is non-linear in n unknown parameters (m ≥ n). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) the probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, (v) Box–Cox transformed regressors.
The TRIAD method is the earliest published algorithm for determining spacecraft attitude, first introduced by Harold Black in 1964. Given the knowledge of two vectors in the reference and body coordinates of a satellite, the TRIAD algorithm obtains the direction cosine matrix relating the two frames. Harold Black played a key role in the development of the guidance, navigation, and control of the U.S. Navy's Transit satellite system at the Johns Hopkins Applied Physics Laboratory. TRIAD represented the state of practice in spacecraft attitude determination before the advent of Wahba's problem and its several optimal solutions. Covariance analysis for Black's solution was subsequently provided by Markley.
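A minimal sketch of the TRIAD construction, assuming NumPy and two non-parallel unit observation vectors (the function name and the reference-to-body convention b ≈ A r are choices made here for illustration):

```python
import numpy as np

def triad(r1, r2, b1, b2):
    """TRIAD sketch: direction cosine matrix A with b ~ A r.

    r1, r2 : unit vectors of two directions in the reference frame
    b1, b2 : the same physical directions measured in the body frame
    Returns the 3x3 direction cosine matrix from reference to body axes.
    """
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)

    def unit(x):
        return x / np.linalg.norm(x)

    # Orthonormal triad built from the reference-frame observations
    t1r = unit(r1)
    t2r = unit(np.cross(r1, r2))
    t3r = np.cross(t1r, t2r)

    # The same triad built from the body-frame observations
    t1b = unit(b1)
    t2b = unit(np.cross(b1, b2))
    t3b = np.cross(t1b, t2b)

    # A maps each reference triad vector onto its body-frame counterpart
    return np.column_stack([t1b, t2b, t3b]) @ np.column_stack([t1r, t2r, t3r]).T
```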
In mathematics, a rigid transformation is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points.
L1-norm principal component analysis (L1-PCA) is a general method for multivariate data analysis. L1-PCA is often preferred over standard L2-norm principal component analysis (PCA) when the analyzed data may contain outliers, as it is believed to be robust.
The quaternion estimator algorithm (QUEST) is an algorithm designed to solve Wahba's problem, which consists of finding a rotation matrix between two coordinate systems from two sets of observations sampled in each system respectively. The key idea behind the algorithm is to express the loss function for Wahba's problem as a quadratic form, then use the Cayley–Hamilton theorem and the Newton–Raphson method to efficiently solve the resulting eigenvalue problem and construct a numerically stable representation of the solution.