Location estimation in sensor networks

Location estimation in wireless sensor networks is the problem of estimating the location of an object from a set of noisy measurements. These measurements are acquired in a distributed manner by a set of sensors.

Use

Many civilian and military applications require monitoring that can identify objects in a specific area, such as monitoring the front entrance of a private house with a single camera. Areas that are large relative to the objects of interest often require multiple sensors (e.g., infrared detectors) at multiple locations. A centralized observer or computer application monitors the sensors. Power and bandwidth constraints on the communication call for an efficient design of the sensors, the transmission scheme, and the processing.

The CodeBlue system [1] of Harvard University is an example in which a large number of sensors distributed throughout hospital facilities allows staff to locate a patient in distress. In addition, the sensor array enables online recording of medical information while allowing the patient to move around. Military applications (e.g., locating an intruder in a secured area) are also good candidates for deploying a wireless sensor network.

Setting

[Figure: location estimation in a wireless sensor network]

Let $\theta$ denote the position of interest. A set of $N$ sensors acquire measurements $x_n = \theta + w_n$ contaminated by an additive noise $w_n$ obeying some known or unknown probability density function (PDF). The sensors transmit their messages to a central processor. The $n$th sensor encodes $x_n$ by a function $m_n(x_n)$. The application processing the data applies a pre-defined estimation rule $\hat\theta = f\big(m_1(x_1), \ldots, m_N(x_N)\big)$. The set of message functions $m_n$ and the fusion rule $f$ are designed to minimize the estimation error, for example the mean squared error (MSE), $E\|\theta - \hat\theta\|^2$.

Ideally, sensors transmit their measurements exactly to the processing center, that is $m_n(x_n) = x_n$. In this setting, the maximum likelihood estimator (MLE) $\hat\theta = \frac{1}{N}\sum_{n=1}^{N} x_n$ is an unbiased estimator whose MSE is $E\|\theta - \hat\theta\|^2 = \operatorname{var}(\hat\theta) = \frac{\sigma^2}{N}$, assuming white Gaussian noise $w_n \sim \mathcal{N}(0, \sigma^2)$. The next sections suggest alternative designs when the sensors are bandwidth-constrained to transmitting a single bit, that is $m_n(x_n) \in \{0, 1\}$.
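As a point of reference for the factors quoted in the following sections, here is a minimal simulation of this unconstrained baseline, with illustrative values for $\theta$, $\sigma$ and $N$:

```python
import numpy as np

# Unconstrained baseline: each sensor forwards its raw measurement,
# m_n(x_n) = x_n, and the center applies the Gaussian MLE (sample mean).
rng = np.random.default_rng(0)

theta, sigma, N = 1.3, 0.5, 10_000   # illustrative ground truth and sizes
x = theta + sigma * rng.standard_normal(N)

theta_hat = x.mean()                 # MLE; its variance is sigma^2 / N
print(theta_hat)
```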

Known noise PDF

A system with Gaussian noise $w_n \sim \mathcal{N}(0, \sigma^2)$ can be designed as follows: [2]

$$m_n(x_n) = I(x_n > \tau) = \begin{cases} 1, & x_n > \tau \\ 0, & x_n \le \tau \end{cases}$$

$$\hat\theta = \tau - F^{-1}\!\left(\frac{1}{N}\sum_{n=1}^{N} m_n(x_n)\right), \qquad F(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \int_x^{\infty} e^{-w^2/2\sigma^2}\, dw$$

Here $\tau$ is a parameter leveraging prior knowledge of the approximate location of $\theta$. In this design, the random value of $m_n(x_n)$ is Bernoulli-distributed with parameter $q = F(\tau - \theta)$. The processing center averages the received bits to form an estimate $\hat q$ of $q$, which is then used to find an estimate of $\theta$. It can be verified that for the optimal (and infeasible) choice $\tau = \theta$, the variance of this estimator is $\frac{\pi\sigma^2}{2N}$, which is only $\pi/2$ times the variance of the MLE without a bandwidth constraint. The variance increases as $\tau$ deviates from the real value of $\theta$, but it can be shown that as long as $|\tau - \theta| \sim \sigma$, the factor in the MSE remains approximately 2. Choosing a suitable value for $\tau$ is a major disadvantage of this method, since the model does not assume prior knowledge of the approximate location of $\theta$. A coarse estimate can be used to overcome this limitation; however, it requires additional hardware in each of the sensors.
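A minimal simulation of this one-bit design, assuming the threshold $\tau$ comes from coarse prior knowledge and using illustrative values for $\theta$, $\sigma$ and $N$, might look as follows:

```python
import numpy as np
from scipy.stats import norm

# One-bit design for known Gaussian noise: sensors send I(x_n > tau),
# the center inverts the bit-average through the noise tail function F.
rng = np.random.default_rng(0)

theta = 1.3     # unknown location (ground truth for the simulation)
sigma = 0.5     # known noise standard deviation
tau = 1.0       # threshold from coarse prior knowledge of theta
N = 10_000

x = theta + sigma * rng.standard_normal(N)
bits = (x > tau).astype(float)             # messages m_n(x_n)

q_hat = bits.mean()                        # estimates q = F(tau - theta)
# F is the survival function of N(0, sigma^2), so F^{-1} = sigma * isf.
theta_hat = tau - sigma * norm.isf(q_hat)
print(theta_hat)   # variance ~ (pi/2) * sigma^2 / N when tau is near theta
```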

A system design with an arbitrary (but known) noise PDF can be found in [3]. In this setting it is assumed that both $\theta$ and the noise $w_n$ are confined to some known interval $[-U, U]$. The estimator of [3] also reaches an MSE which is a constant factor times $\frac{\sigma^2}{N}$. In this method, the prior knowledge of $U$ replaces the parameter $\tau$ of the previous approach.

Unknown noise parameters

A noise model may sometimes be available while the exact PDF parameters are unknown (e.g., a Gaussian PDF with unknown $\sigma$). The idea proposed in [4] for this setting is to use two thresholds $\tau_1$ and $\tau_2$, such that $N/2$ sensors are designed with $m_n(x_n) = I(x_n > \tau_1)$, and the other $N/2$ sensors use $m_n(x_n) = I(x_n > \tau_2)$. The processing center estimation rule is generated as follows:

$$\hat q_1 = \frac{2}{N} \sum_{n=1}^{N/2} m_n(x_n), \qquad \hat q_2 = \frac{2}{N} \sum_{n=N/2+1}^{N} m_n(x_n)$$

$$\hat\theta = \frac{F^{-1}(\hat q_1)\,\tau_2 - F^{-1}(\hat q_2)\,\tau_1}{F^{-1}(\hat q_1) - F^{-1}(\hat q_2)}, \qquad F(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-w^2/2}\, dw$$

As before, prior knowledge is necessary to set the values of $\tau_1$ and $\tau_2$ in order to obtain an MSE within a reasonable factor of the unconstrained MLE variance.
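A sketch of this two-threshold design, under the same hedges (illustrative $\tau_1$, $\tau_2$, $\theta$, $\sigma$ and $N$; the estimator never uses the true $\sigma$), could be:

```python
import numpy as np
from scipy.stats import norm

# Two-threshold design for Gaussian noise with unknown sigma: half the
# sensors threshold at tau1, half at tau2; the center solves for theta.
rng = np.random.default_rng(1)

theta, sigma = 1.3, 0.5      # ground truth; sigma is unknown to the center
tau1, tau2 = 0.8, 1.8        # distinct thresholds from coarse prior knowledge
N = 100_000

x = theta + sigma * rng.standard_normal(N)
q1_hat = (x[: N // 2] > tau1).mean()       # bit-average of the tau1 group
q2_hat = (x[N // 2:] > tau2).mean()        # bit-average of the tau2 group

a, b = norm.isf(q1_hat), norm.isf(q2_hat)  # F^{-1} for the standard normal
# From tau1 - theta = a*sigma and tau2 - theta = b*sigma:
theta_hat = (a * tau2 - b * tau1) / (a - b)
sigma_hat = (tau2 - tau1) / (b - a)        # the same two equations give sigma
print(theta_hat, sigma_hat)
```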

Unknown noise PDF

The system design of [3] also covers the case in which the structure of the noise PDF is unknown. The following model is considered for this scenario:

$$\theta \in [-U, U]$$

$$x_n = \theta + w_n, \quad n = 1, \ldots, N$$

where $w_n$ is a zero-mean noise confined to $[-U, U]$ with an otherwise unknown PDF. In addition, the message functions are limited to have the form

$$m_n(x_n) = \begin{cases} 1, & x_n \in S_n \\ 0, & x_n \notin S_n \end{cases}$$

where each $S_n$ is a subset of $[-2U, 2U]$. The fusion estimator is also restricted to be linear, i.e. $\hat\theta = \sum_{n=1}^{N} \alpha_n m_n(x_n)$.

The design should set the decision intervals $S_n$ and the coefficients $\alpha_n$. Intuitively, one would allocate a group of sensors to encode the first bit of (a shifted and normalized version of) their measurements by setting their decision interval to the region where that bit equals one, allocate another group of sensors to encode the second bit, and so on (see the sketch below). It can be shown that these decision intervals, with a corresponding set of coefficients $\alpha_n$, produce a universal $\delta$-unbiased estimator, which is an estimator satisfying $|E[\theta - \hat\theta]| < \delta$ for every possible value of $\theta \in [-U, U]$ and for every noise PDF obeying the model above. In fact, this intuitive design of the decision intervals is also optimal in the following sense: the number of sensors it requires to satisfy the universal $\delta$-unbiased property is close to the number required by an optimal (and more complex) design of the decision intervals; that is, the number of sensors is nearly optimal. It is also argued in [3] that if the targeted MSE $E\|\theta - \hat\theta\|^2 \le \epsilon^2$ uses a small enough $\epsilon$, then this design requires only a factor of 4 more sensors to achieve the same variance as the MLE in the unconstrained-bandwidth setting.
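The following sketch illustrates the bit-allocation idea under stated assumptions: a uniform noise stands in for the unknown PDF, and the number of bits $K$ and sensors per bit are illustrative choices rather than the paper's exact allocation:

```python
import numpy as np

# Universal 1-bit design: each sensor transmits one bit of the binary
# expansion of its own (shifted, normalized) measurement; the fusion
# center combines the group averages linearly.
rng = np.random.default_rng(2)

U = 1.0
theta = 0.4                     # unknown location, theta in [-U, U]
K = 12                          # number of encoded bits (~ log2(4U/delta))
N_per_bit = 2_000               # sensors allocated to each bit (illustrative)
w = rng.uniform(-U, U, size=(K, N_per_bit))   # zero-mean noise in [-U, U]
x = theta + w                                 # measurements in [-2U, 2U]

t = (x + 2 * U) / (4 * U)       # normalize measurements into [0, 1)

# Group i transmits the i-th bit of t: b_i = floor(t * 2^i) mod 2. Its
# decision interval S_n is the union of subintervals where this bit is 1.
theta_hat = 0.0
for i in range(1, K + 1):
    bits = np.floor(t[i - 1] * 2**i).astype(int) % 2   # one bit per sensor
    theta_hat += 2.0**-i * bits.mean()                 # linear fusion weight
theta_hat = 4 * U * theta_hat - 2 * U                  # undo the normalization

print(theta_hat)   # bias bounded by ~4U * 2^-K for any admissible noise PDF
```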

Additional information

The design of the sensor array requires optimizing the power allocation as well as minimizing the communication traffic of the entire system. The design suggested in [5] incorporates probabilistic quantization in the sensors and a simple optimization program that is solved in the fusion center only once. The fusion center then broadcasts a set of parameters to the sensors that allows them to finalize the design of their messaging functions so as to meet the energy constraints. Another work employs a similar approach to address distributed detection in wireless sensor arrays. [6]
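As a rough illustration of the probabilistic-quantization idea, here is a minimal single-bit sketch; the energy-optimized, multi-bit scheme of [5] is more involved, and the range $[a, b]$, noise parameters and sensor count below are illustrative assumptions:

```python
import numpy as np

# Probabilistic 1-bit quantization: randomizing the sent bit makes it an
# unbiased representation of the measurement, so simple averaging works.
rng = np.random.default_rng(3)

a, b = -2.0, 2.0                # known measurement range [a, b]
theta, sigma, N = 0.7, 0.4, 50_000
x = np.clip(theta + sigma * rng.standard_normal(N), a, b)

# Each sensor sends 1 with probability (x - a)/(b - a), so that
# E[bit | x] = (x - a)/(b - a) holds exactly.
bits = (rng.random(N) < (x - a) / (b - a)).astype(float)

theta_hat = a + (b - a) * bits.mean()   # unbiased fusion by averaging
print(theta_hat)
```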

Related Research Articles

<span class="mw-page-title-main">Allan variance</span> Measure of frequency stability in clocks and oscillators

The Allan variance (AVAR), also known as two-sample variance, is a measure of frequency stability in clocks, oscillators and amplifiers. It is named after David W. Allan and expressed mathematically as . The Allan deviation (ADEV), also known as sigma-tau, is the square root of the Allan variance, .

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.

In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss. The fact that MSE is almost always strictly positive is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. In machine learning, specifically empirical risk minimization, MSE may refer to the empirical risk, as an estimate of the true MSE.

<span class="mw-page-title-main">Expectation–maximization algorithm</span> Iterative method for finding maximum likelihood estimates in statistical models

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of gaussians, or to solve the multiple linear regression problem.

<span class="mw-page-title-main">Cramér–Rao bound</span> Lower bound on variance of an estimator

In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic parameter. The result is named in honor of Harald Cramér and C. R. Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance.

In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.

<span class="mw-page-title-main">Chiral model</span> Model of mesons in the massless quark limit

In nuclear physics, the chiral model, introduced by Feza Gürsey in 1960, is a phenomenological model describing effective interactions of mesons in the chiral limit (where the masses of the quarks go to zero), but without necessarily mentioning quarks at all. It is a nonlinear sigma model with the principal homogeneous space of a Lie group as its target manifold. When the model was originally introduced, this Lie group was the SU(N), where N is the number of quark flavors. The Riemannian metric of the target manifold is given by a positive constant multiplied by the Killing form acting upon the Maurer–Cartan form of SU(N).

In statistics, the theory of minimum norm quadratic unbiased estimation (MINQUE) was developed by C. R. Rao. MINQUE is a theory alongside other estimation methods in estimation theory, such as the method of moments or maximum likelihood estimation. Similar to the theory of best linear unbiased estimation, MINQUE is specifically concerned with linear regression models. The method was originally conceived to estimate heteroscedastic error variance in multiple linear regression. MINQUE estimators also provide an alternative to maximum likelihood estimators or restricted maximum likelihood estimators for variance components in mixed effects models. MINQUE estimators are quadratic forms of the response variable and are used to estimate a linear function of the variances.

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:

<span class="mw-page-title-main">Continuous uniform distribution</span> Uniform distribution on an interval

In probability theory and statistics, the continuous uniform distributions or rectangular distributions are a family of symmetric probability distributions. Such a distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds. The bounds are defined by the parameters, and which are the minimum and maximum values. The interval can either be closed or open. Therefore, the distribution is often abbreviated where stands for uniform distribution. The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable under no constraint other than that it is contained in the distribution's support.

In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity, that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of maximum likelihood estimation.

In statistics, the delta method is a method of deriving the asymptotic distribution of a random variable. It is applicable when the random variable being considered can be defined as a differentiable function of a random variable which is asymptotically Gaussian.

In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. However, M-estimators are not inherently robust, as is clear from the fact that they include maximum likelihood estimators, which are in general not robust. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation.

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function. Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation.

In statistics, the bias of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased; see bias versus consistency for more.

In mathematics, an elliptic hypergeometric series is a series Σcn such that the ratio cn/cn−1 is an elliptic function of n, analogous to generalized hypergeometric series where the ratio is a rational function of n, and basic hypergeometric series where the ratio is a periodic function of the complex number n. They were introduced by Date-Jimbo-Kuniba-Miwa-Okado (1987) and Frenkel & Turaev (1997) in their study of elliptic 6-j symbols.

<span class="mw-page-title-main">Wrapped normal distribution</span>

In probability theory and directional statistics, a wrapped normal distribution is a wrapped probability distribution that results from the "wrapping" of the normal distribution around the unit circle. It finds application in the theory of Brownian motion and is a solution to the heat equation for periodic boundary conditions. It is closely approximated by the von Mises distribution, which, due to its mathematical simplicity and tractability, is the most commonly used distribution in directional statistics.

<span class="mw-page-title-main">Maximum spacing estimation</span> Method of estimating a statistical models parameters

In statistics, maximum spacing estimation (MSE or MSP), or maximum product of spacing estimation (MPS), is a method for estimating the parameters of a univariate statistical model. The method requires maximization of the geometric mean of spacings in the data, which are the differences between the values of the cumulative distribution function at neighbouring data points.

The multi-fractional order estimator (MFOE) is a straightforward, practical, and flexible alternative to the Kalman filter (KF) for tracking targets. The MFOE is focused strictly on simple and pragmatic fundamentals along with the integrity of mathematical modeling. Like the KF, the MFOE is based on the least squares method (LSM) invented by Gauss and the orthogonality principle at the center of Kalman's derivation. Optimized, the MFOE yields better accuracy than the KF and subsequent algorithms such as the extended KF and the interacting multiple model (IMM). The MFOE is an expanded form of the LSM, which effectively includes the KF and ordinary least squares (OLS) as subsets. OLS is revolutionized in for application in econometrics. The MFOE also intersects with signal processing, estimation theory, economics, finance, statistics, and the method of moments. The MFOE offers two major advances: (1) minimizing the mean squared error (MSE) with fractions of estimated coefficients and (2) describing the effect of deterministic OLS processing of statistical inputs

In statistics, the Innovation Method provides an estimator for the parameters of stochastic differential equations given a time series of observations of the state variables. In the framework of continuous-discrete state space models, the innovation estimator is obtained by maximizing the log-likelihood of the corresponding discrete-time innovation process with respect to the parameters. The innovation estimator can be classified as a M-estimator, a quasi-maximum likelihood estimator or a prediction error estimator depending on the inferential considerations that want to be emphasized. The innovation method is a system identification technique for developing mathematical models of dynamical systems from measured data and for the optimal design of experiments.

References

  1. "Archived copy". Archived from the original on 2008-04-30. Retrieved 2008-04-30.{{cite web}}: CS1 maint: archived copy as title (link)
  2. Ribeiro, Alejandro; Georgios B. Giannakis (March 2006). "Bandwidth-constrained distributed estimation for wireless sensor Networks-part I: Gaussian case". IEEE Transactions on Signal Processing. 54 (3): 1131. Bibcode:2006ITSP...54.1131R. doi:10.1109/TSP.2005.863009. S2CID   16223482.
  3. 1 2 3 4 Luo, Zhi-Quan (June 2005). "Universal decentralized estimation in a bandwidth constrained sensor network". IEEE Transactions on Information Theory. 51 (6): 2210–2219. doi:10.1109/TIT.2005.847692. S2CID   11574873.
  4. Ribeiro, Alejandro; Georgios B. Giannakis (July 2006). "Bandwidth-constrained distributed estimation for wireless sensor networks-part II: unknown probability density function". IEEE Transactions on Signal Processing. 54 (7): 2784. Bibcode:2006ITSP...54.2784R. doi:10.1109/TSP.2006.874366. S2CID   11410878.
  5. Xiao, Jin-Jun; Andrea J. Goldsmith (June 2005). "Joint estimation in sensor networks under energy constraint". IEEE Transactions on Signal Processing.
  6. Xiao, Jin-Jun; Zhi-Quan Luo (August 2005). "Universal decentralized detection in a bandwidth-constrained sensor network". IEEE Transactions on Signal Processing. 53 (8): 2617. Bibcode:2005ITSP...53.2617X. doi:10.1109/TSP.2005.850334. S2CID   8072065.