The invariant extended Kalman filter (IEKF) (not to be confused with the iterated extended Kalman filter) was first introduced as a version of the extended Kalman filter (EKF) for nonlinear systems possessing symmetries (or invariances), [1] then generalized and recast as an adaptation to Lie groups of the linear Kalman filtering theory. [2] Instead of using a linear correction term based on a linear output error, the IEKF uses a geometrically adapted correction term based on an invariant output error; in the same way, the gain matrix is not updated from a linear state error but from an invariant state error. The main benefit is that the gain and covariance equations have reduced dependence on the estimated value of the state. In some cases they converge to constant values on a much bigger set of trajectories than is the case for the EKF, which results in better convergence of the estimate.
Consider a system whose state is encoded at time step $n$ by an element $\chi_n$ of a Lie group $G$ and whose dynamics has the following shape: [3]

$$\chi_{n+1} = \phi_n(\chi_n) \cdot \omega_n$$

where $\phi_n$ is a group automorphism of $G$, $\cdot$ is the group operation and $\omega_n$ an element of $G$. The system is supposed to be observed through a measurement $y_n$ having the following shape:

$$y_n = \chi_n \cdot b$$

where $b$ belongs to a vector space $\mathcal{Y}$ endowed with a left action of the elements of $G$, denoted again by $\cdot$ (which cannot create confusion with the group operation as the second member of the operation is an element of $\mathcal{Y}$, not $G$). Alternatively, the same theory applies to a measurement defined by a right action:

$$y_n = b \cdot \chi_n$$

The invariant extended Kalman filter is an observer $\hat{\chi}_n$ defined by the following equations if the measurement function is a left action:

$$\hat{\chi}_{n+1|n} = \phi_n(\hat{\chi}_n) \cdot \omega_n$$

$$\hat{\chi}_{n+1} = \hat{\chi}_{n+1|n} \cdot \exp\!\big(K_{n+1}\,(\hat{\chi}_{n+1|n}^{-1} \cdot y_{n+1} - b)\big)$$

where $\exp$ is the exponential map of $G$ and $K_{n+1}$ is a gain matrix to be tuned through a Riccati equation.

If the measurement function is a right action, then the updated state is defined as:

$$\hat{\chi}_{n+1} = \exp\!\big(K_{n+1}\,(y_{n+1} \cdot \hat{\chi}_{n+1|n}^{-1} - b)\big) \cdot \hat{\chi}_{n+1|n}$$
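To make the left-action case concrete, the following minimal Python sketch implements one propagation and update step on the group $G = SO(3)$, with the measurement $y_n = \chi_n \cdot b$ realized as a matrix-vector product; the function names and the assumption that the gain $K$ is already available from the Riccati recursion are illustrative choices, not taken from the cited references.

```python
import numpy as np

def skew(w):
    """Map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(xi):
    """Exponential map of SO(3) (Rodrigues formula)."""
    theta = np.linalg.norm(xi)
    if theta < 1e-12:
        return np.eye(3) + skew(xi)
    A = skew(xi / theta)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def iekf_step(R_hat, Omega, y, b, K):
    """One IEKF step on G = SO(3) with left-action measurement y = chi . b.

    Dynamics chi_{n+1} = phi_n(chi_n) . omega_n, with phi_n taken as the
    identity automorphism and omega_n = Omega a rotation increment; K is a
    3x3 gain assumed to come from the Riccati recursion (not shown here)."""
    R_pred = R_hat @ Omega                    # hat_chi_{n+1|n}
    innovation = R_pred.T @ y - b             # hat_chi_{n+1|n}^{-1} . y_{n+1} - b
    return R_pred @ so3_exp(K @ innovation)   # hat_chi_{n+1}
```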
The discrete-time framework above was first introduced for continuous-time dynamics of the shape:

$$\frac{d}{dt}\chi_t = f_{u_t}(\chi_t)$$

where the vector field $f_{u_t}$ verifies at any time $t$ the relation: [2]

$$f_{u_t}(a \cdot b) = a \cdot f_{u_t}(b) + f_{u_t}(a) \cdot b - a \cdot f_{u_t}(\mathrm{Id}) \cdot b$$

where the identity element of the group is denoted by $\mathrm{Id}$, and the short-hand notation $a \cdot v$ (resp. $v \cdot a$) is used for the left translation $dL_a(v)$ (resp. the right translation $dR_a(v)$) of a tangent vector $v \in T_b G$, where $T_b G$ denotes the tangent space to $G$ at $b$. This leads to more involved computations than the discrete-time framework, but the properties are similar.
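As a quick illustration (an example chosen here, not taken from the cited references), the left-invariant attitude dynamics $\frac{d}{dt}\chi_t = f_{u_t}(\chi_t) = \chi_t\,(\omega_t)_\times$ on $G = SO(3)$, where $(\omega_t)_\times$ denotes the skew-symmetric matrix of the angular velocity $\omega_t$, verifies this relation: the left-hand side is $f_{u_t}(a \cdot b) = a\,b\,(\omega_t)_\times$, while the right-hand side is

$$a \cdot f_{u_t}(b) + f_{u_t}(a) \cdot b - a \cdot f_{u_t}(\mathrm{Id}) \cdot b = a\,b\,(\omega_t)_\times + a\,(\omega_t)_\times\,b - a\,(\omega_t)_\times\,b = a\,b\,(\omega_t)_\times,$$

so the two sides coincide.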
The main benefit of invariant extended Kalman filtering is the behavior of the invariant error variable, whose definition depends on the type of measurement. For left actions we define a left-invariant error variable as:

$$e_n = \hat{\chi}_n^{-1} \cdot \chi_n$$

while for right actions we define a right-invariant error variable as:

$$e_n = \chi_n \cdot \hat{\chi}_n^{-1}$$

Indeed, replacing $\chi_{n+1}$, $\hat{\chi}_{n+1|n}$ and $\hat{\chi}_{n+1}$ by their values, we obtain for left actions, after some algebra:

$$e_{n+1} = \exp\!\big(-K_{n+1}\,\big((\omega_n^{-1}\,\phi_n(e_n)\,\omega_n) \cdot b - b\big)\big)\;\omega_n^{-1}\,\phi_n(e_n)\,\omega_n$$

and for right actions:

$$e_{n+1} = \phi_n(e_n)\,\exp\!\big(-K_{n+1}\,\big(b \cdot \phi_n(e_n) - b\big)\big)$$
We see the estimated value of the state is not involved in the equation satisfied by the error variable, a property of linear Kalman filtering that the classical extended Kalman filter does not share, but the similarity with the linear case actually goes much further. Let $\xi_n$ be a linear version of the error variable, defined by the identity:

$$e_n = \exp(\xi_n)$$

Then, considering the propagation step alone and letting $F_n$ be defined by the first-order Taylor expansion of the propagated error, we actually have: [2]

$$\xi_{n+1|n} = F_n\,\xi_n$$

In other words, there are no higher-order terms: the dynamics is exactly linear for the error variable $\xi_n$. This result and the independence of the error dynamics from the estimate are at the core of the theoretical properties and practical performance of the IEKF. [2]
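A small numerical check of this log-linear property on $G = SO(3)$ (an illustrative example, assuming $\phi_n$ is the identity automorphism, so that the propagated left-invariant error is $e_{n+1|n} = \omega_n^{-1}\,e_n\,\omega_n$ and $F_n = \mathrm{Ad}_{\omega_n^{-1}}$, which for $SO(3)$ acts on rotation vectors as $\Omega_n^{T}$):

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
xi = rng.normal(size=3)                                       # error vector xi_n, not infinitesimal
Omega = Rotation.from_rotvec(rng.normal(size=3)).as_matrix()  # input increment omega_n
e = Rotation.from_rotvec(xi).as_matrix()                      # error e_n = exp(xi_n)
lhs = Omega.T @ e @ Omega                                     # propagated error omega_n^{-1} e_n omega_n
rhs = Rotation.from_rotvec(Omega.T @ xi).as_matrix()          # exp(F_n xi_n) with F_n = Omega^T
print(np.allclose(lhs, rhs))                                  # True: no higher-order terms
```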
Most physical systems possess natural symmetries (or invariances), i.e. there exist transformations (e.g. rotations, translations, scalings) that leave the system unchanged. From a mathematical and engineering viewpoint, it makes sense that a filter well designed for the considered system should preserve the same invariance properties. The idea behind the IEKF is to modify the EKF equations so as to take advantage of the symmetries of the system.
Consider the system

$$\dot{x} = f(x,u) + M(x)w, \qquad y = h(x,u) + N(x)v$$

where $w$ and $v$ are independent white Gaussian noises. Consider a Lie group $G$ with identity $e$, and (local) transformation groups $(\varphi_g, \psi_g, \rho_g)$, $g \in G$, acting respectively on the state, the input and the output, and write $(X, U, Y) = \big(\varphi_g(x), \psi_g(u), \rho_g(y)\big)$. The previous system with noise is said to be invariant if it is left unchanged by the action of the transformation groups $(\varphi_g, \psi_g, \rho_g)$; that is, if

$$\dot{X} = f(X,U) + M(X)w, \qquad Y = h(X,U) + N(X)v$$
Since it is a symmetry-preserving filter, the general form of an IEKF reads [4]

$$\frac{d}{dt}\hat{x} = f(\hat{x},u) + W(\hat{x})\,\bar{K}\big(I(\hat{x},u)\big)\,E(\hat{x},u,y)$$

where $E(\hat{x},u,y)$ is an invariant output error (different from the usual output error $\hat{y} - y$), $W(\hat{x}) = \big(w_1(\hat{x}),\dots,w_n(\hat{x})\big)$ is an invariant frame, $I(\hat{x},u)$ is an invariant vector, and $\bar{K}$ is a gain matrix.
To analyze the error convergence, an invariant state error $\eta(\hat{x},x)$ is defined, which is different from the standard linear state error $\hat{x} - x$, since the standard linear error usually does not preserve the symmetries of the system.
Given the considered system and the associated transformation group, there exists a constructive method to determine $E(\hat{x},u,y)$, $W(\hat{x})$ and $I(\hat{x},u)$, based on the moving frame method.
Similarly to the EKF, the gain matrix $\bar{K}$ is determined from the equations [5]

$$\bar{K} = P C^{T} N^{-1}, \qquad \frac{d}{dt}P = A P + P A^{T} + M - P C^{T} N^{-1} C P$$

where the matrices $A$ and $C$ depend here only on the known invariant vector $I(\hat{x},u)$, rather than on $(\hat{x},u)$ as in the standard EKF. This much simpler dependence and its consequences are the main interests of the IEKF. Indeed, the matrices $A$ and $C$ are then constant on a much bigger set of trajectories (so-called permanent trajectories) than just the equilibrium points, as is the case for the EKF. Near such trajectories, we are back to the "true", i.e. linear, Kalman filter, where convergence is guaranteed. Informally, this means the IEKF converges in general at least around any slowly varying permanent trajectory, rather than just around any slowly varying equilibrium point as for the EKF.
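A minimal sketch of this gain computation, assuming the Riccati equation takes the standard Kalman-Bucy form written above and integrating it with a simple Euler step (the function name, the integration scheme and the interpretation of $M$ and $N$ as process and measurement noise covariances are illustrative choices):

```python
import numpy as np

def iekf_gain_step(P, A, C, M, N, dt):
    """One Euler step of the continuous-time Riccati equation and the gain.

    P: invariant-error covariance; A, C: matrices evaluated on the invariant
    vector I(x_hat, u) only (unlike the EKF, where they depend on the estimate
    itself); M, N: process and measurement noise covariances."""
    N_inv = np.linalg.inv(N)
    P_dot = A @ P + P @ A.T + M - P @ C.T @ N_inv @ C @ P
    P_next = P + dt * P_dot
    K_bar = P_next @ C.T @ N_inv   # gain applied to the invariant output error E
    return P_next, K_bar
```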
Invariant extended Kalman filters are for instance used in attitude and heading reference systems. In such systems the orientation, velocity and/or position of a moving rigid body, e.g. an aircraft, are estimated from different embedded sensors, such as inertial sensors, magnetometers, GPS or sonars. The use of an IEKF naturally leads [5] to consider the multiplicative quaternion error (the product of one attitude quaternion with the inverse of the other), which is often used as an ad hoc trick to preserve the unit-norm constraint of the quaternion. The benefits of the IEKF compared to the EKF have been shown experimentally for a large set of trajectories. [6]
A major application of the invariant extended Kalman filter is inertial navigation, which fits the framework after embedding of the state (consisting of the attitude matrix $R$, the velocity vector $v$ and the position vector $x$) into the Lie group $SE_2(3)$ [7] defined by the group operation:

$$(R_1, v_1, x_1) \cdot (R_2, v_2, x_2) = (R_1 R_2,\; R_1 v_2 + v_1,\; R_1 x_2 + x_1)$$
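A common way to realize this group in code is to embed $(R, v, x)$ into $5 \times 5$ matrices, so that ordinary matrix multiplication reproduces the group operation above. The sketch below uses a generic $SE_k(3)$ embedding (the helper name and the test values are illustrative) and checks the operation numerically:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def embed_se_k3(R, vectors):
    """Embed (R, t_1, ..., t_k) into SE_k(3) as a (3+k) x (3+k) matrix,
    so that matrix multiplication yields (R1 R2, R1 t2_i + t1_i)."""
    k = len(vectors)
    chi = np.eye(3 + k)
    chi[:3, :3] = R
    for i, t in enumerate(vectors):
        chi[:3, 3 + i] = t
    return chi

# Check of the SE_2(3) group operation on two arbitrary elements
R1 = Rotation.from_rotvec([0.1, -0.2, 0.3]).as_matrix()
R2 = Rotation.from_rotvec([-0.3, 0.1, 0.2]).as_matrix()
v1, x1 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
v2, x2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])
chi = embed_se_k3(R1, [v1, x1]) @ embed_se_k3(R2, [v2, x2])
print(np.allclose(chi[:3, :3], R1 @ R2))       # True
print(np.allclose(chi[:3, 3], R1 @ v2 + v1))   # True
print(np.allclose(chi[:3, 4], R1 @ x2 + x1))   # True
```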
The problem of simultaneous localization and mapping (SLAM) also fits the framework of invariant extended Kalman filtering after embedding of the state (consisting of the attitude matrix $R$, the position vector $x$ and a sequence of $K$ static feature points $p^{1},\dots,p^{K}$) into the Lie group $SE_{K+1}(3)$ (or $SE_{K+1}(2)$ for planar systems) [7] defined by the group operation:

$$(R_1, x_1, p_1^{1},\dots,p_1^{K}) \cdot (R_2, x_2, p_2^{1},\dots,p_2^{K}) = (R_1 R_2,\; R_1 x_2 + x_1,\; R_1 p_2^{1} + p_1^{1},\;\dots,\; R_1 p_2^{K} + p_1^{K})$$
The main benefit of the invariant extended Kalman filter in this case is solving the problem of false observability. [7]
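The same $SE_k(3)$ embedding sketched above covers the SLAM state: stacking the position and the $K$ feature points as extra columns yields an element of $SE_{K+1}(3)$, for example (hypothetical landmark values, reusing embed_se_k3, R1 and x1 from the previous sketch):

```python
# SLAM state (R, x, p^1, ..., p^K) with K = 2 hypothetical landmarks
landmarks = [np.array([2.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0])]
chi_slam = embed_se_k3(R1, [x1] + landmarks)   # a (4 + K) x (4 + K) = 6 x 6 matrix
```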