Triad method


The TRIAD method is the earliest published algorithm for determining spacecraft attitude; it was first introduced by Harold Black in 1964. [1] [2] [3] Given the knowledge of two vectors in the reference and body coordinates of a satellite, the TRIAD algorithm obtains the direction cosine matrix relating the two frames. Harold Black played a key role in the development of the guidance, navigation, and control of the U.S. Navy's Transit satellite system at the Johns Hopkins Applied Physics Laboratory. TRIAD represented the state of practice in spacecraft attitude determination before the advent of Wahba's problem [4] and its several optimal solutions. Covariance analysis for Black's solution was subsequently provided by Markley. [5]


Summary

Firstly, one considers the linearly independent reference vectors $\vec{R}_1$ and $\vec{R}_2$. Let $\vec{r}_1, \vec{r}_2$ be the corresponding measured directions of the reference unit vectors as resolved in a body-fixed frame of reference. They are then related by the equations

$\vec{R}_i = A\,\vec{r}_i, \qquad i = 1, 2 \qquad (1)$

where $A$ is a rotation matrix (sometimes also known as a proper orthogonal matrix, i.e., $A A^{T} = I$ and $\det A = +1$). $A$ transforms vectors in the body-fixed frame into the frame of the reference vectors. Among other properties, rotation matrices preserve the length of the vector they operate on. Note that the direction cosine matrix $A$ also transforms the cross product vector, written as

$\vec{R}_1 \times \vec{R}_2 = A \left( \vec{r}_1 \times \vec{r}_2 \right). \qquad (2)$

TRIAD proposes an estimate of the direction cosine matrix $A$ as a solution to the linear system of equations given by

$A \begin{bmatrix} \vec{r}_1 & \vdots & \vec{r}_2 & \vdots & \vec{r}_1 \times \vec{r}_2 \end{bmatrix} = \begin{bmatrix} \vec{R}_1 & \vdots & \vec{R}_2 & \vdots & \vec{R}_1 \times \vec{R}_2 \end{bmatrix} \qquad (3)$

where the symbols $\vdots$ have been used to separate different column vectors.
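In the noise-free case, equation (3) can be solved directly by inverting the matrix of body-frame columns. The following is a minimal NumPy sketch of that direct solution (the function name `triad_raw` is ours, and `numpy` is assumed available); with noisy measurements the result is generally not orthogonal, which motivates the normalization procedure described next.

```python
import numpy as np

def triad_raw(R1, R2, r1, r2):
    """Direct, noise-free solution of Eq. (3):
    A [r1 : r2 : r1 x r2] = [R1 : R2 : R1 x R2]."""
    body = np.column_stack((r1, r2, np.cross(r1, r2)))
    ref = np.column_stack((R1, R2, np.cross(R1, R2)))
    # Requires r1, r2 linearly independent so that `body` is invertible
    return ref @ np.linalg.inv(body)
```

With exact (noise-free) measurements this recovers the true rotation, but it offers no guarantee of orthogonality once the measured vectors are perturbed.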

The solution presented above works well in the noise-free case. However, in practice, $\vec{r}_1, \vec{r}_2$ are noisy and the orthogonality condition of the attitude matrix (or the direction cosine matrix) is not preserved by the above procedure. TRIAD incorporates the following elegant procedure to redress this problem. To this end, one defines the unit vectors

$\hat{S} = \frac{\vec{R}_1}{|\vec{R}_1|} \qquad (4)$

$\hat{s} = \frac{\vec{r}_1}{|\vec{r}_1|} \qquad (5)$

and

$\hat{M} = \frac{\vec{R}_1 \times \vec{R}_2}{|\vec{R}_1 \times \vec{R}_2|} \qquad (6)$

$\hat{m} = \frac{\vec{r}_1 \times \vec{r}_2}{|\vec{r}_1 \times \vec{r}_2|} \qquad (7)$

to be used in place of the first two columns of equation (3). Their cross product is used as the third column in the linear system of equations, obtaining a proper orthogonal matrix for the spacecraft attitude given by the following:

$A \begin{bmatrix} \hat{s} & \vdots & \hat{m} & \vdots & \hat{s} \times \hat{m} \end{bmatrix} = \begin{bmatrix} \hat{S} & \vdots & \hat{M} & \vdots & \hat{S} \times \hat{M} \end{bmatrix} \qquad (8)$

While the normalizations in equations (4)-(7) are not necessary, they have been carried out to achieve a computational advantage in solving the linear system of equations in (8). Thus an estimate of the spacecraft attitude is given by the proper orthogonal matrix

$\hat{A} = \begin{bmatrix} \hat{S} & \vdots & \hat{M} & \vdots & \hat{S} \times \hat{M} \end{bmatrix} \begin{bmatrix} \hat{s} & \vdots & \hat{m} & \vdots & \hat{s} \times \hat{m} \end{bmatrix}^{T}. \qquad (9)$

Note that computational efficiency has been achieved in this procedure by replacing the matrix inverse with a transpose. This is possible because the matrices involved in computing attitude are each composed of a TRIAD of orthonormal basis vectors. "TRIAD" derives its name from this observation.
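The full procedure can be sketched in a few lines of NumPy (a minimal illustration; the function name `triad` is ours, and `numpy` is assumed available):

```python
import numpy as np

def triad(R1, R2, r1, r2):
    """TRIAD attitude estimate: returns A such that R ~ A r
    (body frame -> reference frame), per Eq. (9)."""
    # Reference-frame triad, Eqs. (4) and (6), plus their cross product
    S = R1 / np.linalg.norm(R1)
    M = np.cross(R1, R2)
    M /= np.linalg.norm(M)
    Delta = np.column_stack((S, M, np.cross(S, M)))
    # Body-frame triad, Eqs. (5) and (7), plus their cross product
    s = r1 / np.linalg.norm(r1)
    m = np.cross(r1, r2)
    m /= np.linalg.norm(m)
    Gamma = np.column_stack((s, m, np.cross(s, m)))
    # Eq. (9): since Gamma has orthonormal columns, its inverse is its transpose
    return Delta @ Gamma.T
```

Note that no matrix inversion appears anywhere: the transpose in the last line is exactly the efficiency gain described above.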

TRIAD Attitude Matrix and Handedness of Measurements

It is of consequence to note that the TRIAD method always produces a proper orthogonal matrix irrespective of the handedness of the reference and body vectors employed in the estimation process. This can be shown as follows. In matrix form, equation (9) reads

$\hat{A} = \Delta\,\Gamma^{T} \qquad (10)$

where $\Delta = \begin{bmatrix} \hat{S} & \vdots & \hat{M} & \vdots & \hat{S} \times \hat{M} \end{bmatrix}$ and $\Gamma = \begin{bmatrix} \hat{s} & \vdots & \hat{m} & \vdots & \hat{s} \times \hat{m} \end{bmatrix}.$ Note that if the columns of $\Gamma$ form a left-handed triad, then the columns of $\Delta$ are also left-handed because of the one-to-one correspondence between the vectors. This is because of the simple fact that, in Euclidean geometry, the angle between any two vectors remains invariant to coordinate transformations. Therefore, the determinant $\det \Gamma$ is $+1$ or $-1$ depending on whether its columns are right-handed or left-handed, respectively (similarly, $\det \Delta = \det \Gamma$). Taking the determinant on both sides of the relation in Eq. (10), one concludes that

$\det \hat{A} = \det \Delta \, \det \Gamma = +1. \qquad (11)$

This is quite useful in practical applications since the analyst is always guaranteed a proper orthogonal matrix irrespective of the nature of the reference and measured vector quantities.
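This guarantee is easy to check numerically. The sketch below (helper name `triad_frame` is ours; `numpy` assumed available) builds the triads of equations (4)-(8) from arbitrary, even mutually inconsistent, vector pairs and verifies that the resulting estimate is always proper orthogonal:

```python
import numpy as np

def triad_frame(v1, v2):
    """Orthonormal triad built from two vectors, as in Eqs. (4)-(7)."""
    a = v1 / np.linalg.norm(v1)
    b = np.cross(v1, v2)
    b /= np.linalg.norm(b)
    return np.column_stack((a, b, np.cross(a, b)))

rng = np.random.default_rng(42)
for _ in range(100):
    # Arbitrary "measurement" pairs: not unit, not orthogonal, not consistent
    R1, R2, r1, r2 = rng.normal(size=(4, 3))
    A_est = triad_frame(R1, R2) @ triad_frame(r1, r2).T
    assert np.isclose(np.linalg.det(A_est), 1.0)    # always proper, Eq. (11)
    assert np.allclose(A_est @ A_est.T, np.eye(3))  # always orthogonal
```

The determinant comes out $+1$ in every trial because each triad matrix uses the cross product of its first two (orthonormal) columns as its third column, making it right-handed by construction.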

Applications

TRIAD was used as the attitude determination technique for processing the telemetry data from the Transit satellite system (used by the U.S. Navy for navigation). The principles of the Transit system gave rise to the Global Positioning System satellite constellation. In an application problem, the reference vectors are usually known directions (e.g., stars, the Earth's magnetic field, the gravity vector). The body-fixed vectors are the measured directions as observed by an on-board sensor (e.g., a star tracker, a magnetometer). With advances in micro-electronics, attitude determination algorithms such as TRIAD have found their place in a variety of devices (e.g., smartphones, cars, tablets, and UAVs) with a broad impact on modern society.

See also

Wahba's problem
Quaternion estimator algorithm (QUEST)
Spacecraft attitude control
Gram–Schmidt process

References

  1. Black, Harold (July 1964). "A Passive System for Determining the Attitude of a Satellite". AIAA Journal. 2 (7): 1350–1351. Bibcode:1964AIAAJ...2.1350B. doi:10.2514/3.2555.
  2. Black, Harold (July–August 1990). "Early Developments of Transit, the Navy Navigation Satellite System". Journal of Guidance, Control and Dynamics. 13 (4): 577–585. Bibcode:1990JGCD...13..577B. doi:10.2514/3.25373.
  3. Markley, F. Landis (1999). "Attitude Determination Using Two Vector Measurements". 1999 Flight Mechanics Symposium: 2.
  4. Wahba, Grace (July 1966). "A Least Squares Estimate of Satellite Attitude, Problem 65.1". SIAM Review. 8: 385–386. doi:10.1137/1008080.
  5. Markley, Landis (April–June 1993). "Attitude Determination Using Vector Observations: A Fast Optimal Matrix Algorithm" (PDF). The Journal of Astronautical Sciences. 41 (2): 261–280. Retrieved April 18, 2012.