A radar tracker is a component of a radar system, or an associated command and control (C2) system, that associates consecutive radar observations of the same target into tracks. It is particularly useful when the radar system is reporting data from several different targets or when it is necessary to combine the data from several different radars or other sensors.
A classical rotating air surveillance radar system detects target echoes against a background of noise. It reports these detections (known as "plots") in polar coordinates representing the range and bearing of the target. In addition, noise in the radar receiver will occasionally exceed the detection threshold of the radar's constant false alarm rate detector and be incorrectly reported as targets (known as false alarms). The role of the radar tracker is to monitor consecutive updates from the radar system (which typically occur once every few seconds, as the antenna rotates) and to determine those sequences of plots belonging to the same target, whilst rejecting any plots believed to be false alarms. In addition, the radar tracker is able to use the sequence of plots to estimate the current speed and heading of the target. When several targets are present, the radar tracker aims to provide one track for each target, with the track history often being used to indicate where the target has come from.
When multiple radar systems are connected to a single reporting post, a multiradar tracker is often used to monitor the updates from all of the radars and form tracks from the combination of detections. In this configuration, the tracks are often more accurate than those formed from single radars, as a greater number of detections can be used to estimate the tracks. In addition to associating plots, rejecting false alarms and estimating heading and speed, the radar tracker also acts as a filter, in which errors in the individual radar measurements are smoothed out. In essence, the radar tracker fits a smooth curve to the reported plots and, if done correctly, can increase the overall accuracy of the radar system. A multisensor tracker extends the concept of the multiradar tracker to allow the combination of reports from different types of sensor - typically radars, secondary surveillance radars (SSR), identification friend or foe (IFF) systems and electronic support measures (ESM) data.
A radar track will typically contain the following information:
In addition, and depending on the application or tracker sophistication, the track will also include:
There are many different mathematical algorithms used for implementing a radar tracker, of varying levels of sophistication. However, they all perform steps similar to the following every time the radar updates:
Perhaps the most important step is the updating of tracks with new plots. All trackers will implicitly or explicitly take account of a number of factors during this stage, including:
Using this information, the radar tracker attempts to update the track by forming a weighted average of the current reported position from the radar (which has unknown errors) and the last predicted position of the target from the tracker (which also has unknown errors). The tracking problem is made particularly difficult for targets with unpredictable movements (i.e. unknown target movement models), non-Gaussian measurement or model errors, non-linear relationships between the measured quantities and the desired target coordinates, detection in the presence of non-uniformly distributed clutter, missed detections or false alarms. In the real world, a radar tracker typically faces a combination of all of these effects; this has led to the development of an increasingly sophisticated set of algorithms to resolve the problem. Due to the need to form radar tracks in real time, usually for several hundred targets at once, the deployment of radar tracking algorithms has typically been limited by the available computational power.
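As a minimal illustration of this weighted averaging (a sketch, not taken from the original text), the following fuses a predicted and a measured target range in one dimension, trusting each in inverse proportion to its assumed error variance; all numerical values are hypothetical.

```python
# Minimal sketch: fuse a predicted and a measured target position in one
# dimension by inverse-variance weighting. All values are hypothetical.
def fuse(predicted, var_pred, measured, var_meas):
    # Trust the measurement in proportion to how uncertain the prediction is.
    gain = var_pred / (var_pred + var_meas)
    estimate = predicted + gain * (measured - predicted)
    variance = (1.0 - gain) * var_pred  # the fused estimate is more certain than either input
    return estimate, variance

# Tracker predicts 10.0 km (variance 0.5); the radar reports 10.4 km (variance 0.2).
print(fuse(10.0, 0.5, 10.4, 0.2))       # the estimate lies closer to the more accurate source
```

The Kalman filter described later in this article generalises exactly this idea to vector states and covariance matrices.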
In this step of the processing, the radar tracker seeks to determine which plots should be used to update which tracks. In many approaches, a given plot can only be used to update one track. However, in other approaches a plot can be used to update several tracks, recognising the uncertainty in knowing to which track the plot belongs. Either way, the first step in the process is to update all of the existing tracks to the current time by predicting their new position based on the most recent state estimate (e.g. position, heading, speed, acceleration, etc.) and the assumed target motion model (e.g. constant velocity, constant acceleration, etc.). Having updated the estimates, it is possible to try to associate the plots to tracks.
This can be done in a number of ways:
Once a track has been associated with a plot, it moves to the track smoothing stage, where the track prediction and associated plot are combined to provide a new, smoothed estimate of the target location.
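One simple (and far from the only) way of carrying out the association is gated nearest-neighbour assignment. The sketch below is illustrative only: it assumes two-dimensional Cartesian plot and prediction positions and a fixed gate radius, both of which are simplifications of what a real tracker would use.

```python
import numpy as np

# Sketch of gated nearest-neighbour plot-to-track association.
# Each track prediction and each plot is a 2-D Cartesian position;
# the gate radius and the example data are illustrative assumptions.
def associate(track_predictions, plots, gate=5.0):
    assignments = {}                 # track index -> plot index
    used_plots = set()
    for t_idx, pred in enumerate(track_predictions):
        distances = [np.linalg.norm(np.asarray(pred) - np.asarray(p)) for p in plots]
        # Consider only plots inside the gate that have not already been assigned.
        candidates = [(d, p_idx) for p_idx, d in enumerate(distances)
                      if d <= gate and p_idx not in used_plots]
        if candidates:
            d, p_idx = min(candidates)
            assignments[t_idx] = p_idx
            used_plots.add(p_idx)
    unassociated_plots = [i for i in range(len(plots)) if i not in used_plots]
    return assignments, unassociated_plots

tracks = [(0.0, 0.0), (20.0, 5.0)]
plots = [(1.0, 0.5), (19.0, 6.0), (50.0, 50.0)]   # the last plot may start a new track
print(associate(tracks, plots))
```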
Having completed this process, a number of plots will remain unassociated with existing tracks and a number of tracks will remain without updates. This leads to the steps of track initiation and track maintenance.
Track initiation is the process of creating a new radar track from an unassociated radar plot. When the tracker is first switched on, all the initial radar plots are used to create new tracks, but once the tracker is running, only those plots that could not be used to update an existing track are used to spawn new tracks. Typically a new track is given the status of tentative until plots from subsequent radar updates have been successfully associated with the new track. Tentative tracks are not shown to the operator and so they provide a means of preventing false tracks from appearing on the screen - at the expense of some delay in the first reporting of a track. Once several updates have been received, the track is confirmed and displayed to the operator. The most common criterion for promoting a tentative track to a confirmed track is the "M-of-N rule", which states that during the last N radar updates, at least M plots must have been associated with the tentative track - with M=3 and N=5 being typical values. More sophisticated approaches may use a statistical approach in which a track becomes confirmed when, for instance, its covariance matrix falls to a given size.
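A sketch of the M-of-N confirmation logic is given below, with M=3 and N=5 as mentioned above; representing the hit history as one flag per radar update is an implementation assumption.

```python
from collections import deque

# Minimal sketch of the M-of-N track confirmation rule (M = 3, N = 5).
# The hit history records, for each of the last N radar updates, whether a
# plot was associated with the tentative track.
class TentativeTrack:
    def __init__(self, m=3, n=5):
        self.m = m
        self.history = deque(maxlen=n)
        self.confirmed = False

    def update(self, plot_associated: bool):
        self.history.append(plot_associated)
        if not self.confirmed and sum(self.history) >= self.m:
            self.confirmed = True
        return self.confirmed

track = TentativeTrack()
for hit in [True, False, True, True]:     # third associated plot within the window
    print(track.update(hit))              # becomes True once M of the last N updates hit
```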
A variation of the joint probabilistic data association (JPDA) approach, known as the joint integrated probabilistic data association (JIPDA) filter, also estimates the probability of target existence, and so handles plot-to-track association, track initiation and track maintenance.[1]
Track maintenance is the process in which a decision is made about whether to end the life of a track. If a track was not associated with a plot during the plot-to-track association phase, then there is a chance that the target may no longer exist (for instance, an aircraft may have landed or flown out of radar cover). Alternatively, however, there is a chance that the radar may have just failed to see the target at that update, but will find it again on the next update. Common approaches to deciding on whether to terminate a track include:
In this important step, the latest track prediction is combined with the associated plot to provide a new, improved estimate of the target state as well as a revised estimate of the errors in this prediction. There is a wide variety of algorithms, of differing complexity and computational load, that can be used for this process.
An early tracking approach used the alpha beta filter, which assumed fixed covariance errors and a constant-speed, non-maneuvering target model to update tracks.
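As an illustration, a one-dimensional alpha beta tracker might look like the following sketch; the fixed gains, update interval and measurement sequence are illustrative assumptions.

```python
# Sketch of a one-dimensional alpha beta tracker: fixed gains and a
# constant-velocity target model. Gains and measurements are illustrative.
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    x, v = measurements[0], 0.0            # initial position and velocity estimates
    estimates = []
    for z in measurements[1:]:
        # Predict forward one radar update assuming constant velocity.
        x_pred = x + v * dt
        # Correct the prediction with the new plot using the fixed gains.
        residual = z - x_pred
        x = x_pred + alpha * residual
        v = v + (beta / dt) * residual
        estimates.append((x, v))
    return estimates

print(alpha_beta_track([0.0, 1.1, 1.9, 3.2, 4.0]))
```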
The role of the Kalman filter is to take the current known state (i.e. position, heading, speed and possibly acceleration) of the target and predict the new state of the target at the time of the most recent radar measurement. In making this prediction, it also updates its estimate of its own uncertainty (i.e. errors) in this prediction. It then forms a weighted average of this prediction of state and the latest measurement of state, taking account of the known measurement errors of the radar and its own uncertainty in the target motion models. Finally, it updates its estimate of its uncertainty of the state estimate. A key assumption in the mathematics of the Kalman filter is that the measurement equations (i.e. the relationship between the radar measurements and the target state) and the state equations (i.e. the equations for predicting a future state based on the current state) are linear.
The Kalman filter assumes that the measurement errors of the radar, the errors in its target motion model and the errors in its state estimate are all zero-mean with known covariance. This means that each of these sources of error can be represented by a covariance matrix. The mathematics of the Kalman filter is therefore concerned with propagating these covariance matrices and using them to form the weighted sum of prediction and measurement.
In situations where the target motion conforms well to the underlying model, the Kalman filter tends to become "overconfident" of its own predictions and to start to ignore the radar measurements. If the target then manoeuvres, the filter will fail to follow the manoeuvre. It is therefore common practice when implementing the filter to arbitrarily increase the magnitude of the state estimate covariance matrix slightly at each update to prevent this.
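The sketch below shows one predict/update cycle of such a filter for a one-dimensional constant-velocity target, with the process noise matrix Q playing the role of the covariance inflation just described. The matrices and noise levels are illustrative assumptions, not values from the original text.

```python
import numpy as np

# Sketch of one Kalman filter predict/update cycle for a one-dimensional
# constant-velocity target. The state is [position, velocity]; the radar
# measures position only. Q is the process noise that keeps the filter from
# becoming overconfident. All numerical values are illustrative.
dt = 4.0                                   # seconds between radar updates
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # the measurement picks out position
Q = np.diag([0.1, 0.01])                   # process (motion model) noise
R = np.array([[25.0]])                     # radar measurement noise variance

x = np.array([[0.0], [100.0]])             # state estimate: 0 m, 100 m/s
P = np.diag([100.0, 25.0])                 # state estimate covariance

def kalman_step(x, P, z):
    # Predict to the time of the new measurement, inflating P with Q.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weighted average of prediction and measurement.
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = kalman_step(x, P, np.array([[410.0]]))   # plot received at 410 m
print(x.ravel(), np.diag(P))
```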
The multiple hypothesis tracker (MHT) allows a track to be updated by more than one plot at each update, spawning multiple possible tracks. As each radar update is received, every possible track can potentially be updated with every new plot. Over time, the track branches into many possible directions. The MHT calculates the probability of each potential track and typically only reports the most probable of all the tracks. For reasons of finite computer memory and computational power, the MHT typically includes some approach for deleting the least likely potential track updates. The MHT is designed for situations in which the target motion model is very unpredictable, as all potential track updates are considered. For this reason, it is popular for problems of ground target tracking in Airborne Ground Surveillance (AGS) systems.
The MHT calculates not only the probabilities of previously detected tracks, but also the probabilities of false alarms and new tracks. As a result, it can handle several parts of the radar tracking problem at once, i.e. plot-to-track association, smoothing and initialization. MHT can be implemented as either hypothesis-oriented or track-oriented. The track-oriented form (TOMHT) organizes its hypotheses into trees, with one tree per track, and can retain multiple frames of history. TOMHT typically does not handle initialization and termination.[1]
The interacting multiple model (IMM) estimator can be used by either the MHT or the JPDAF. The IMM uses two or more Kalman filters which run in parallel, each using a different model for target motion or errors. It forms an optimal weighted sum of the outputs of all the filters and is able to rapidly adjust to target maneuvers. While the MHT or JPDAF handles the association and track maintenance, the IMM helps the MHT or JPDAF obtain a filtered estimate of the target position.
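In outline, the IMM output is formed by mixing the per-model estimates according to the model probabilities. The sketch below shows only that combination step; the per-model states, covariances and probabilities are assumed to come from two parallel Kalman filters and the numbers are illustrative.

```python
import numpy as np

# Sketch of the IMM output combination step: two parallel filters (e.g. a
# constant-velocity model and a manoeuvre model) each supply a state estimate,
# a covariance and a model probability; the IMM output is their mixture.
# All numerical values are illustrative.
def imm_combine(states, covariances, probabilities):
    p = np.asarray(probabilities) / np.sum(probabilities)
    x_mix = sum(pi * xi for pi, xi in zip(p, states))
    # The mixed covariance includes the spread of the per-model estimates.
    P_mix = sum(pi * (Pi + np.outer(xi - x_mix, xi - x_mix))
                for pi, xi, Pi in zip(p, states, covariances))
    return x_mix, P_mix

x_cv = np.array([100.0, 50.0])             # constant-velocity model estimate
x_man = np.array([110.0, 40.0])            # manoeuvre model estimate
P_cv = np.diag([4.0, 1.0])
P_man = np.diag([9.0, 4.0])
print(imm_combine([x_cv, x_man], [P_cv, P_man], [0.7, 0.3]))
```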
Non-linear tracking algorithms use a non-linear filter to cope with situations where the measurements have a non-linear relationship to the final track coordinates, where the errors are non-Gaussian, or where the motion update model is non-linear. The most common non-linear filters are:
The EKF is an extension of the Kalman filter to cope with cases where the relationship between the radar measurements and the track coordinates, or the track coordinates and the motion model, is non-linear. In this case, the relationship between the measurements and the state is of the form h = f(x) (where h is the vector of measurements, x is the target state and f(.) is the function relating the two). Similarly, the relationship between the future state and the current state is of the form x(t+1) = g(x(t)) (where x(t) is the state at time t and g(.) is the function that predicts the future state). To handle these non-linearities, the EKF linearises the two non-linear equations using the first term of the Taylor series and then treats the problem as the standard linear Kalman filter problem. Although conceptually simple, the filter can easily diverge (i.e. gradually perform worse and worse) if the state estimate about which the equations are linearised is poor.
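For a typical radar non-linearity, the measurement function maps a Cartesian position to range and bearing, and the EKF replaces the linear measurement matrix with the Jacobian of that function evaluated at the current state estimate. The sketch below illustrates this linearisation step; the two-state, position-only model is a simplifying assumption.

```python
import numpy as np

# Sketch of the linearisation step used by the EKF for a common radar
# non-linearity: the measurement function maps a Cartesian state (x, y) to
# range and bearing, and its Jacobian, evaluated at the current state
# estimate, replaces the linear measurement matrix H of the Kalman filter.
def h(state):
    x, y = state
    return np.array([np.hypot(x, y), np.arctan2(y, x)])   # range, bearing

def jacobian_h(state):
    x, y = state
    r = np.hypot(x, y)
    return np.array([[ x / r,     y / r   ],
                     [-y / r**2,  x / r**2]])

state_estimate = np.array([3000.0, 4000.0])   # illustrative position in metres
print(h(state_estimate))            # predicted measurement
print(jacobian_h(state_estimate))   # H used in the linearised update
```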
The unscented Kalman filter and particle filters are attempts to overcome the problem of linearising the equations.
The UKF attempts to improve on the EKF by removing the need to linearise the measurement and state equations. It avoids linearization by representing the mean and covariance information in the form of a set of points, called sigma points. These points, which represent a distribution with specified mean and covariance, are then propagated directly through the non-linear equations, and the resulting updated samples are then used to calculate a new mean and variance. This approach then suffers none of the problems of divergence due to poor linearisation and yet retains the overall computational simplicity of the EKF.
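The sketch below illustrates the idea: a mean and covariance are represented by a small symmetric set of sigma points, the points are pushed through a non-linear (range/bearing) function, and a new mean and covariance are recomputed from the results. The equally weighted point set used here is a simplification; practical UKF implementations use additional tuning weights.

```python
import numpy as np

# Sketch of the unscented transform: represent a mean and covariance by a
# small symmetric set of sigma points, push the points through a non-linear
# function and recompute the mean and covariance from the results.
def sigma_points(mean, cov):
    n = len(mean)
    L = np.linalg.cholesky(n * cov)               # scaled matrix square root
    points = [mean + L[:, i] for i in range(n)] + \
             [mean - L[:, i] for i in range(n)]
    return np.array(points)                       # 2n points, weight 1/(2n) each

def unscented_transform(mean, cov, f):
    pts = sigma_points(mean, cov)
    mapped = np.array([f(p) for p in pts])
    new_mean = mapped.mean(axis=0)
    diffs = mapped - new_mean
    new_cov = diffs.T @ diffs / len(mapped)
    return new_mean, new_cov

# Example: propagate a Cartesian position estimate through a range/bearing
# measurement function (all values are illustrative).
f = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
print(unscented_transform(np.array([3000.0, 4000.0]), np.diag([100.0, 100.0]), f))
```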
The particle filter could be considered as a generalisation of the UKF. It makes no assumptions about the distributions of the errors in the filter and neither does it require the equations to be linear. Instead it generates a large number of random potential states ("particles") and then propagates this "cloud of particles" through the equations, resulting in a different distribution of particles at the output. The resulting distribution of particles can then be used to calculate a mean or variance, or whatever other statistical measure is required. The resulting statistics are used to generate the random sample of particles for the next iteration. The particle filter is notable in its ability to handle multi-modal distributions (i.e. distributions where the PDF has more than one peak). However, it is computationally very intensive and is currently unsuitable for most real-world, real-time applications.[citation needed]
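A sketch of one particle filter cycle for a one-dimensional position track is shown below; the motion model, noise levels, particle count and plot value are all illustrative assumptions.

```python
import numpy as np

# Sketch of one particle filter cycle for a one-dimensional position track:
# propagate a cloud of particles through the motion model, weight each
# particle by the likelihood of the new radar plot, then resample.
rng = np.random.default_rng(0)
n_particles = 1000
particles = rng.normal(loc=0.0, scale=10.0, size=n_particles)  # initial position cloud

def particle_step(particles, plot, velocity=100.0, dt=1.0,
                  motion_noise=2.0, meas_noise=5.0):
    # Predict: move every particle forward with the motion model plus noise.
    predicted = particles + velocity * dt + rng.normal(0.0, motion_noise, particles.size)
    # Weight: likelihood of the radar plot given each particle (Gaussian errors assumed).
    weights = np.exp(-0.5 * ((plot - predicted) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw a new, equally weighted cloud in proportion to the weights.
    return rng.choice(predicted, size=predicted.size, p=weights)

particles = particle_step(particles, plot=103.0)
print(particles.mean(), particles.std())   # estimated position and its spread
```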
Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.
In statistics and control theory, Kalman filtering is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement, by estimating a joint probability distribution over the variables for each time-step. The filter can be constructed as a mean squared error minimiser; an alternative derivation shows how the filter relates to maximum likelihood statistics. The filter is named after Rudolf E. Kálmán.
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
The fast Kalman filter (FKF), devised by Antti Lange (born 1941), is an extension of the Helmert–Wolf blocking (HWB) method from geodesy to safety-critical real-time applications of Kalman filtering (KF) such as GNSS navigation up to the centimeter-level of accuracy and satellite imaging of the Earth including atmospheric tomography.
Video tracking is the process of locating a moving object over time using a camera. It has a variety of uses, some of which are: human-computer interaction, security and surveillance, video communication and compression, augmented reality, traffic control, medical imaging and video editing. Video tracking can be a time-consuming process due to the amount of data that is contained in video. Adding further to the complexity is the possible need to use object recognition techniques for tracking, a challenging problem in its own right.
Data assimilation is a mathematical discipline that seeks to optimally combine theory with observations. There may be a number of different goals sought – for example, to determine the optimal state estimate of a system, to determine initial conditions for a numerical forecast model, to interpolate sparse observation data using knowledge of the system being observed, to set numerical parameters based on training a model from observed data. Depending on the goal, different solution methods may be used. Data assimilation is distinguished from other forms of machine learning, image analysis, and statistical methods in that it utilizes a dynamical model of the system being analyzed.
In probability theory, statistics, and machine learning, recursive Bayesian estimation, also known as a Bayes filter, is a general probabilistic approach for estimating an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model. The process relies heavily upon mathematical concepts and models that are theorized within a study of prior and posterior probabilities known as Bayesian statistics.
The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems, and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter.
GPS/INS is the use of GPS satellite signals to correct or calibrate a solution from an inertial navigation system (INS). The method is applicable for any GNSS/INS system.
An alpha beta filter is a simplified form of observer for estimation, data smoothing and control applications. It is closely related to Kalman filters and to linear state observers used in control theory. Its principal advantage is that it does not require a detailed system model.
In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well-defined transition models, the EKF has been considered the de facto standard in the theory of nonlinear state estimation, navigation systems and GPS.
The joint probabilistic data association filter (JPDAF) is a statistical approach to the problem of plot association in a target tracking algorithm. Like the probabilistic data association filter (PDAF), rather than choosing the most likely assignment of measurements to a target, the JPDAF takes an expected value, which is the minimum mean square error (MMSE) estimate for the state of each target. At each time, it maintains its estimate of the target state as the mean and covariance matrix of a multivariate normal distribution. However, unlike the PDAF, which is only meant for tracking a single target in the presence of false alarms and missed detections, the JPDAF can handle multiple target tracking scenarios. A derivation of the JPDAF is given in the literature.
Covariance intersection (CI) is an algorithm for combining two or more estimates of state variables in a Kalman filter when the correlation between them is unknown.
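In the usual notation (the symbols here are illustrative, not taken from the original text), covariance intersection fuses two estimates a and b with covariances A and B into an estimate c with covariance C according to

C^{-1} = \omega A^{-1} + (1 - \omega) B^{-1}
C^{-1} c = \omega A^{-1} a + (1 - \omega) B^{-1} b

where the weight \omega \in [0, 1] is typically chosen to minimise the trace or determinant of C. Because no assumption is made about the cross-correlation between the two estimates, the fused covariance is guaranteed to be consistent (not overconfident).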
The invariant extended Kalman filter (IEKF) (not to be confused with the iterated extended Kalman filter) was first introduced as a version of the extended Kalman filter (EKF) for nonlinear systems possessing symmetries (or invariances), then generalized and recast as an adaptation to Lie groups of the linear Kalman filtering theory. Instead of using a linear correction term based on a linear output error, the IEKF uses a geometrically adapted correction term based on an invariant output error; in the same way the gain matrix is not updated from a linear state error, but from an invariant state error. The main benefit is that the gain and covariance equations have reduced dependence on the estimated value of the state. In some cases they converge to constant values on a much bigger set of trajectories than is the case for the EKF, which results in a better convergence of the estimation.
The unscented transform (UT) is a mathematical function used to estimate the result of applying a given nonlinear transformation to a probability distribution that is characterized only in terms of a finite set of statistics. The most common use of the unscented transform is in the nonlinear projection of mean and covariance estimates in the context of nonlinear extensions of the Kalman filter. Its creator Jeffrey Uhlmann explained that "unscented" was an arbitrary name that he adopted to avoid it being referred to as the “Uhlmann filter.”
The Schmidt–Kalman Filter is a modification of the Kalman filter for reducing the dimensionality of the state estimate, while still considering the effects of the additional state in the calculation of the covariance matrix and the Kalman gains. A common application is to account for the effects of nuisance parameters such as sensor biases without increasing the dimensionality of the state estimate. This ensures that the covariance matrix will accurately represent the distribution of the errors.
In signal processing, a kernel adaptive filter is a type of nonlinear adaptive filter. An adaptive filter is a filter that adapts its transfer function to changes in signal properties over time by minimizing an error or loss function that characterizes how far the filter deviates from ideal behavior. The adaptation process is based on learning from a sequence of signal samples and is thus an online algorithm. A nonlinear adaptive filter is one in which the transfer function is nonlinear.
In target tracking, the multi-fractional order estimator (MFOE) is an alternative to the Kalman filter. The MFOE is focused strictly on simple and pragmatic fundamentals along with the integrity of mathematical modeling. Like the KF, the MFOE is based on the least squares method (LSM) invented by Gauss and the orthogonality principle at the center of Kalman's derivation. Optimized, the MFOE yields better accuracy than the KF and subsequent algorithms such as the extended KF and the interacting multiple model (IMM). The MFOE is an expanded form of the LSM, which effectively includes the KF and ordinary least squares (OLS) as subsets. OLS has been revolutionized for application in econometrics. The MFOE also intersects with signal processing, estimation theory, economics, finance, statistics, and the method of moments. The MFOE offers two major advances: (1) minimizing the mean squared error (MSE) with fractions of estimated coefficients, and (2) describing the effect of deterministic OLS processing of statistical inputs.
Masreliez theorem describes a recursive algorithm within the technology of the extended Kalman filter, named after the Swedish-American physicist John Masreliez, who is its author. The algorithm estimates the state of a dynamic system with the help of often incomplete measurements marred by distortion.
The smoothing problem is the problem of estimating an unknown probability density function recursively over time using incremental incoming measurements. It is one of the main problems defined by Norbert Wiener. A smoother is an algorithm that implements a solution to this problem, typically based on recursive Bayesian estimation. The smoothing problem is closely related to the filtering problem, both of which are studied in Bayesian smoothing theory.