Triad method


The TRIAD method is the earliest published algorithm for determining spacecraft attitude; it was first introduced by Harold Black in 1964. [1] [2] [3] Given knowledge of two vectors in the reference and body coordinates of a satellite, the TRIAD algorithm obtains the direction cosine matrix relating the two frames. Harold Black played a key role in the development of the guidance, navigation, and control of the U.S. Navy's Transit satellite system at the Johns Hopkins Applied Physics Laboratory. TRIAD represented the state of practice in spacecraft attitude determination before the advent of Wahba's problem [4] and its several optimal solutions. Covariance analysis for Black's solution was subsequently provided by Markley. [5]


Summary

Firstly, one considers the linearly independent reference vectors $\vec{R}_1$ and $\vec{R}_2$. Let $\vec{r}_1, \vec{r}_2$ be the corresponding measured directions of the reference unit vectors as resolved in a body-fixed frame of reference. They are then related by the equations,

$$\vec{R}_i = A \vec{r}_i \tag{1}$$

for $i = 1, 2$, where $A$ is a rotation matrix (sometimes also known as a proper orthogonal matrix, i.e., $A A^{T} = I$, $\det A = +1$). $A$ transforms vectors in the body-fixed frame into the frame of the reference vectors. Among other properties, rotation matrices preserve the length of the vectors they operate on. Note that the direction cosine matrix $A$ also transforms the cross product vector, written as,

$$\vec{R}_1 \times \vec{R}_2 = A \left( \vec{r}_1 \times \vec{r}_2 \right) \tag{2}$$

TRIAD proposes an estimate of the direction cosine matrix $A$ as a solution to the linear system of equations given by

$$A \begin{bmatrix} \vec{r}_1 & \vdots & \vec{r}_2 & \vdots & \vec{r}_1 \times \vec{r}_2 \end{bmatrix} = \begin{bmatrix} \vec{R}_1 & \vdots & \vec{R}_2 & \vdots & \vec{R}_1 \times \vec{R}_2 \end{bmatrix} \tag{3}$$

where the symbols $\vdots$ have been used to separate the different column vectors.
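In the noise-free case, equation (3) can be solved directly by inverting the matrix of stacked body-frame columns. A minimal NumPy sketch, using an illustrative rotation (30° about the z-axis) to stand in for the true attitude:

```python
import numpy as np

# Hypothetical true attitude: a 30-degree rotation about the z-axis.
theta = np.radians(30.0)
A_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])

# Two linearly independent body-frame vectors and their reference-frame images.
r1 = np.array([1.0, 0.0, 0.0])
r2 = np.array([0.0, 1.0, 1.0])
R1, R2 = A_true @ r1, A_true @ r2

# Stack the columns of equation (3) and solve the linear system for A.
body = np.column_stack([r1, r2, np.cross(r1, r2)])
ref = np.column_stack([R1, R2, np.cross(R1, R2)])
A_est = ref @ np.linalg.inv(body)

print(np.allclose(A_est, A_true))  # True: exact recovery without noise
```

With noisy measurements this direct inversion no longer yields an orthogonal matrix, which motivates the normalization procedure that follows.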

The solution presented above works well in the noise-free case. However, in practice, $\vec{r}_1, \vec{r}_2$ are noisy and the orthogonality condition of the attitude matrix (or the direction cosine matrix) is not preserved by the above procedure. TRIAD incorporates the following elegant procedure to redress this problem. To this end, one defines the unit vectors

$$\hat{S} = \frac{\vec{R}_1}{\left| \vec{R}_1 \right|} \tag{4}$$

$$\hat{M} = \frac{\vec{R}_1 \times \vec{R}_2}{\left| \vec{R}_1 \times \vec{R}_2 \right|} \tag{5}$$

and

$$\hat{s} = \frac{\vec{r}_1}{\left| \vec{r}_1 \right|} \tag{6}$$

$$\hat{m} = \frac{\vec{r}_1 \times \vec{r}_2}{\left| \vec{r}_1 \times \vec{r}_2 \right|} \tag{7}$$

to be used in place of the first two columns of equation (3). Their cross product is used as the third column in the linear system of equations, obtaining a proper orthogonal matrix for the spacecraft attitude given by the following:

$$A \begin{bmatrix} \hat{s} & \vdots & \hat{m} & \vdots & \hat{s} \times \hat{m} \end{bmatrix} = \begin{bmatrix} \hat{S} & \vdots & \hat{M} & \vdots & \hat{S} \times \hat{M} \end{bmatrix} \tag{8}$$

While the normalizations of equations (4)–(7) are not necessary, they have been carried out to achieve a computational advantage in solving the linear system of equations in (8). Thus an estimate of the spacecraft attitude is given by the proper orthogonal matrix

$$\hat{A} = \begin{bmatrix} \hat{S} & \vdots & \hat{M} & \vdots & \hat{S} \times \hat{M} \end{bmatrix} \begin{bmatrix} \hat{s} & \vdots & \hat{m} & \vdots & \hat{s} \times \hat{m} \end{bmatrix}^{T} \tag{9}$$

Note that computational efficiency has been achieved in this procedure by replacing the matrix inverse with a transpose. This is possible because the matrices involved in computing attitude are each composed of a TRIAD of orthonormal basis vectors. "TRIAD" derives its name from this observation.
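The full procedure of equations (4)–(9) can be sketched in a few lines of NumPy; the function name and the randomly generated test rotation below are illustrative, not part of the original formulation:

```python
import numpy as np

def triad(R1, R2, r1, r2):
    """TRIAD estimate of the attitude matrix A, with R_i = A r_i.

    R1, R2 are reference-frame observations; r1, r2 are the
    corresponding body-frame measurements.
    """
    # Reference triad: equations (4)-(5) plus their cross product.
    S = R1 / np.linalg.norm(R1)
    M = np.cross(R1, R2) / np.linalg.norm(np.cross(R1, R2))
    ref = np.column_stack([S, M, np.cross(S, M)])
    # Body triad: equations (6)-(7) plus their cross product.
    s = r1 / np.linalg.norm(r1)
    m = np.cross(r1, r2) / np.linalg.norm(np.cross(r1, r2))
    body = np.column_stack([s, m, np.cross(s, m)])
    # Equation (9): the inverse of a matrix with orthonormal
    # columns is simply its transpose.
    return ref @ body.T

# Check against a hypothetical known rotation with slightly noisy body vectors.
rng = np.random.default_rng(0)
A_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
A_true *= np.linalg.det(A_true)  # force det = +1 (proper rotation)
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.5])
A_est = triad(A_true @ r1, A_true @ r2, r1 + 1e-6 * rng.normal(size=3), r2)
print(np.allclose(A_est @ A_est.T, np.eye(3)))  # True: estimate stays orthogonal
```

Note that, unlike the direct inversion of equation (3), the estimate remains exactly orthogonal even though the body measurements are noisy; the noise instead shows up as a small attitude error.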

TRIAD Attitude Matrix and Handedness of Measurements

It is of consequence to note that the TRIAD method always produces a proper orthogonal matrix, irrespective of the handedness of the reference and body vectors employed in the estimation process. This can be shown as follows. In matrix form, equation (8) is given by

$$\hat{A}\, \Gamma_{\text{body}} = \Gamma_{\text{ref}} \tag{10}$$

where $\Gamma_{\text{body}} := \begin{bmatrix} \hat{s} & \vdots & \hat{m} & \vdots & \hat{s} \times \hat{m} \end{bmatrix}$ and $\Gamma_{\text{ref}} := \begin{bmatrix} \hat{S} & \vdots & \hat{M} & \vdots & \hat{S} \times \hat{M} \end{bmatrix}$. Note that if the columns of $\Gamma_{\text{body}}$ form a left-handed triad, then the columns of $\Gamma_{\text{ref}}$ are also left-handed because of the one-one correspondence between the vectors. This is because of the simple fact that, in Euclidean geometry, the angle between any two vectors remains invariant to coordinate transformations. Therefore, the determinant $\det \Gamma_{\text{body}}$ is $+1$ or $-1$ depending on whether its columns are right-handed or left-handed respectively (similarly, $\det \Gamma_{\text{ref}} = \det \Gamma_{\text{body}}$). Taking the determinant on both sides of the relation in Eq. (10), one concludes that

$$\det \hat{A} = \frac{\det \Gamma_{\text{ref}}}{\det \Gamma_{\text{body}}} = +1 \tag{11}$$

This is quite useful in practical applications since the analyst is always guaranteed a proper orthogonal matrix irrespective of the nature of the reference and measured vector quantities.
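This guarantee is easy to verify numerically: even when the two vector pairs are random and not related by any rotation at all, the TRIAD construction still returns a proper orthogonal matrix. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def make_triad(a, b):
    # Orthonormal triad built from two vectors, as in equations (4)-(7).
    u = a / np.linalg.norm(a)
    w = np.cross(a, b)
    w = w / np.linalg.norm(w)
    return np.column_stack([u, w, np.cross(u, w)])

# Random, mutually inconsistent vector pairs: no rotation maps one pair
# onto the other, yet det(A) is always +1.
rng = np.random.default_rng(1)
for _ in range(100):
    R1, R2, r1, r2 = rng.normal(size=(4, 3))
    A = make_triad(R1, R2) @ make_triad(r1, r2).T
    assert np.isclose(np.linalg.det(A), 1.0)
print("det(A) = +1 in all trials")
```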

Applications

TRIAD was used as an attitude determination technique to process telemetry data from the Transit satellite system (used by the U.S. Navy for navigation). The principles of the Transit system gave rise to the Global Positioning System satellite constellation. In an application problem, the reference vectors are usually known directions (e.g., stars, the Earth's magnetic field, the gravity vector). Body-fixed vectors are the measured directions as observed by an on-board sensor (e.g., a star tracker or magnetometer). With advances in micro-electronics, attitude determination algorithms such as TRIAD have found their place in a variety of devices (e.g., smartphones, cars, tablets, UAVs) with a broad impact on modern society.


References

  1. Black, Harold (July 1964). "A Passive System for Determining the Attitude of a Satellite". AIAA Journal. 2 (7): 1350–1351. Bibcode:1964AIAAJ...2.1350.. doi:10.2514/3.2555.
  2. Black, Harold (July–August 1990). "Early Developments of Transit, the Navy Navigation Satellite System". Journal of Guidance, Control and Dynamics. 13 (4): 577–585. Bibcode:1990JGCD...13..577B. doi:10.2514/3.25373.
  3. Markley, F. Landis (1999). "Attitude Determination Using Two Vector Measurements". 1999 Flight Mechanics Symposium: 2 via ResearchGate.
  4. Wahba, Grace (July 1966). "A Least Squares Estimate of Satellite Attitude, Problem 65.1". SIAM Review. 8: 385–386. doi:10.1137/1008080.
  5. Markley, Landis (April–June 1993). "Attitude Determination Using Vector Observations: A Fast Optimal Matrix Algorithm" (PDF). The Journal of the Astronautical Sciences. 41 (2): 261–280. Retrieved April 18, 2012.