Sensor fusion

[Figure: Eurofighter sensor fusion]

Sensor fusion is the process of combining sensor data or data derived from disparate sources so that the resulting information has less uncertainty than would be possible if these sources were used individually. For instance, one could potentially obtain a more accurate location estimate of an indoor object by combining multiple data sources such as video cameras and WiFi localization signals. The term uncertainty reduction in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints). [1] [2]
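As a minimal illustration of the stereoscopic-vision example, the depth of a point seen by two rectified cameras can be recovered from its disparity between the two images. The following Python sketch assumes a pinhole stereo pair; the focal length and baseline values are illustrative assumptions, not taken from the article:

```python
# Minimal sketch of stereoscopic depth recovery: two 2-D views are
# "fused" into depth. Assumes rectified pinhole cameras; the focal
# length and baseline below are illustrative, not from the article.
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A feature appearing 20 px apart in the left/right images is ~4.2 m away.
print(depth_from_disparity(20.0))  # 4.2
```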


The data sources for a fusion process are not required to originate from identical sensors. One can distinguish direct fusion, indirect fusion, and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and historical sensor values, while indirect fusion uses information sources such as a priori knowledge about the environment and human input.

Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.

Examples of sensors

Algorithms

Sensor fusion is a term that covers a number of methods and algorithms, including:

  - Central limit theorem
  - Kalman filter [3]
  - Bayesian networks [4]
  - Dempster–Shafer theory
  - Convolutional neural networks
  - Gaussian processes [5]

Example calculations

Two example sensor fusion calculations are illustrated below.

Let $z_1$ and $z_2$ denote two sensor measurements with noise variances $\sigma_1^2$ and $\sigma_2^2$, respectively. One way of obtaining a combined measurement $z_3$ is to apply inverse-variance weighting, which is also employed within the Fraser–Potter fixed-interval smoother, namely [6]

$$z_3 = \sigma_3^2 \left( \sigma_1^{-2} z_1 + \sigma_2^{-2} z_2 \right),$$

where $\sigma_3^2 = \left( \sigma_1^{-2} + \sigma_2^{-2} \right)^{-1}$ is the variance of the combined estimate. It can be seen that the fused result is simply a linear combination of the two measurements weighted by their respective inverse noise variances.
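A minimal Python sketch of this calculation follows; the measurement values are illustrative:

```python
def inverse_variance_fusion(z1, var1, z2, var2):
    """Fuse two noisy measurements of the same quantity by
    inverse-variance weighting: z3 = var3 * (z1/var1 + z2/var2),
    with var3 = (1/var1 + 1/var2)^-1."""
    var3 = 1.0 / (1.0 / var1 + 1.0 / var2)
    z3 = var3 * (z1 / var1 + z2 / var2)
    return z3, var3

# Two sensors observe the same quantity with different noise levels:
z3, var3 = inverse_variance_fusion(z1=10.2, var1=1.0, z2=9.0, var2=4.0)
print(z3, var3)  # 9.96, 0.8
```

Note that the fused variance (0.8) is smaller than either input variance, which is exactly the uncertainty reduction described above.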

Another (equivalent) method to fuse two measurements is to use the optimal Kalman filter. Suppose that the data is generated by a first-order system and let $P_k$ denote the solution of the filter's Riccati equation. By applying Cramer's rule within the gain calculation it can be found that the filter gain is given by [citation needed]

$$\mathbf{L}_k = \begin{bmatrix} \dfrac{\sigma_2^2 P_k}{\sigma_2^2 P_k + \sigma_1^2 P_k + \sigma_1^2 \sigma_2^2} & \dfrac{\sigma_1^2 P_k}{\sigma_2^2 P_k + \sigma_1^2 P_k + \sigma_1^2 \sigma_2^2} \end{bmatrix}.$$

By inspection, when the first measurement is noise free, the filter ignores the second measurement and vice versa. That is, the combined estimate is weighted by the quality of the measurements.
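A sketch of the same two-sensor fusion using the standard Kalman update equations is given below; the system parameters and measurement values are illustrative assumptions, not from the source:

```python
import numpy as np

a, q = 0.95, 0.1            # state transition and process-noise variance
var1, var2 = 0.5, 2.0       # the two sensors' noise variances
H = np.ones((2, 1))         # both sensors measure the scalar state directly
R = np.diag([var1, var2])

x, P = 0.0, 1.0             # state estimate and its error variance
z = np.array([1.1, 0.7])    # one pair of simultaneous measurements

x_pred, P_pred = a * x, a * P * a + q            # predict
S = H * P_pred @ H.T + R                         # innovation covariance
L = (P_pred * H.T) @ np.linalg.inv(S)            # 1x2 Kalman gain
x = x_pred + (L @ (z - x_pred)).item()           # fused update
P = (1.0 - L.sum()) * P_pred

# L works out to [var2*P_pred, var1*P_pred] / (var2*P_pred + var1*P_pred
# + var1*var2), matching the closed form above: each measurement is
# weighted by the *other* sensor's noise variance.
print(np.round(L, 3), round(x, 3), round(P, 3))
```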

Centralized versus decentralized

In sensor fusion, centralized versus decentralized refers to where the fusion of the data occurs. In centralized fusion, the clients simply forward all of the data to a central location, and some entity at the central location is responsible for correlating and fusing the data. In decentralized fusion, the clients take full responsibility for fusing the data. "In this case, every sensor or platform can be viewed as an intelligent asset having some degree of autonomy in decision-making." [7]

Multiple combinations of centralized and decentralized systems exist.
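The distinction can be sketched as follows: with independent Gaussian noise and inverse-variance weighting, forwarding raw readings to a fusion center and fusing locally before forwarding summaries yield the same estimate. The readings below are illustrative:

```python
def fuse(pairs):
    """Inverse-variance fusion of (value, variance) pairs."""
    weights = [1.0 / var for _, var in pairs]
    var = 1.0 / sum(weights)
    return var * sum(z / v for z, v in pairs), var

readings = [(10.2, 1.0), (9.0, 4.0), (10.5, 2.0)]

# Centralized: every node forwards its raw reading to one fusion center.
central = fuse(readings)

# Decentralized: node A fuses its two local sensors first and forwards
# only the (estimate, variance) summary, which is combined with node B.
node_a = fuse(readings[:2])
decentral = fuse([node_a, readings[2]])

print(central, decentral)  # identical results for this linear case
```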

Another classification of sensor configuration refers to the coordination of information flow between sensors. [8] [9] These mechanisms provide a way to resolve conflicts or disagreements and to allow the development of dynamic sensing strategies. Sensors are in a redundant (or competitive) configuration if each node delivers independent measures of the same properties. This configuration can be used for error correction when comparing information from multiple nodes, and redundant strategies are often used with high-level fusion in voting procedures. [10] [11] A complementary configuration occurs when multiple information sources supply different information about the same features. This strategy is used for fusing information at the raw-data level within decision-making algorithms; complementary features are typically applied in motion recognition tasks with neural networks, [12] [13] hidden Markov models, [14] [15] support-vector machines, [16] clustering methods, and other techniques. [16] [15] Cooperative sensor fusion uses the information extracted by multiple independent sensors to provide information that would not be available from any single sensor; for example, sensors attached to adjacent body segments can be fused to estimate the angle between them. Cooperative information fusion can be used in motion recognition, [17] gait analysis, and motion analysis. [18] [19] [20]
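As a sketch of the redundant (competitive) configuration, high-level fusion by majority voting can mask a faulty node; the labels below are illustrative:

```python
from collections import Counter

def majority_vote(decisions):
    """Redundant high-level fusion: the label reported by the most
    nodes wins, so a faulty minority is out-voted."""
    label, _ = Counter(decisions).most_common(1)[0]
    return label

# Three redundant nodes classify the same posture; one node is faulty.
print(majority_vote(["sitting", "sitting", "standing"]))  # "sitting"
```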

Levels

There are several categories or levels of sensor fusion that are commonly used. [21] [22] [23] [24] [25] [26] In the widely used JDL model these are:

  - Level 0 – Data alignment
  - Level 1 – Entity assessment (e.g. signal/feature/object)
  - Level 2 – Situation assessment
  - Level 3 – Impact assessment
  - Level 4 – Process refinement (i.e. sensor management)
  - Level 5 – User refinement

The sensor fusion level can also be defined based on the kind of information used to feed the fusion algorithm. [27] More precisely, sensor fusion can be performed by fusing raw data coming from different sources, extracted features, or even decisions made by single nodes.
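The three options can be sketched as follows for two hypothetical body-worn accelerometer nodes; all names and values here are illustrative:

```python
import numpy as np

# Hypothetical 100-sample, 3-axis accelerometer windows from two nodes.
acc_a = np.random.randn(100, 3)
acc_b = np.random.randn(100, 3)

# Raw-data-level fusion: the streams are merged before any processing.
raw_fused = np.hstack([acc_a, acc_b])                       # (100, 6)

# Feature-level fusion: each node extracts features; the features merge.
features = np.concatenate([acc_a.mean(axis=0), acc_a.std(axis=0),
                           acc_b.mean(axis=0), acc_b.std(axis=0)])

# Decision-level fusion: each node classifies on its own and only the
# per-node decisions are combined (here by simple agreement).
decision_a, decision_b = "walking", "walking"
final = decision_a if decision_a == decision_b else "uncertain"

print(raw_fused.shape, features.shape, final)
```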

Applications

One application of sensor fusion is GPS/INS, where Global Positioning System and inertial navigation system data are fused using various methods, e.g. the extended Kalman filter. This is useful, for example, in determining the attitude of an aircraft using low-cost sensors. [32] Another example is using the data fusion approach to determine the traffic state (low traffic, traffic jam, medium flow) using roadside-collected acoustic, image, and other sensor data. [33] In the field of autonomous driving, sensor fusion is used to combine the redundant information from complementary sensors in order to obtain a more accurate and reliable representation of the environment. [34]
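A deliberately simplified one-dimensional stand-in for GPS/INS fusion is sketched below: inertial acceleration drives the prediction (dead reckoning) and occasional GPS position fixes correct the accumulated drift. A real system would use an extended Kalman filter over full three-dimensional position and attitude states; all parameter values here are illustrative:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity transition
B = np.array([dt**2 / 2, dt])           # how acceleration enters the state
H = np.array([[1.0, 0.0]])              # GPS measures position only
Q = 1e-3 * np.eye(2)                    # process noise (INS error growth)
R = np.array([[4.0]])                   # GPS noise variance (~2 m sigma)

x, P = np.zeros(2), np.eye(2)
for step in range(50):
    ins_accel = 0.5                     # reading from the inertial unit
    x = F @ x + B * ins_accel           # INS dead-reckoning prediction
    P = F @ P @ F.T + Q
    if step % 10 == 9:                  # a GPS fix arrives at 1 Hz
        gps_pos = x[0] + np.random.randn() * 2.0
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([gps_pos]) - H @ x)
        P = (np.eye(2) - K @ H) @ P

print(x, np.diag(P))  # position variance stays bounded despite INS drift
```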

Although technically not a dedicated sensor fusion method, modern convolutional neural network (CNN) based methods can simultaneously process many channels of sensor data (such as hyperspectral imaging with hundreds of bands [35]) and fuse relevant information to produce classification results.
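A minimal PyTorch sketch of this idea: the first convolution of a small network mixes all input channels, so a many-band input is fused implicitly. Layer sizes and the band count are illustrative, not taken from the cited work:

```python
import torch
import torch.nn as nn

class MultiChannelNet(nn.Module):
    """Tiny CNN whose first convolution mixes all spectral bands."""
    def __init__(self, in_bands: int = 100, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1),  # fuses bands
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One 7x7 spatial patch with 100 spectral bands -> per-class scores.
scores = MultiChannelNet()(torch.randn(1, 100, 7, 7))
print(scores.shape)  # torch.Size([1, 9])
```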

Related Research Articles

<span class="mw-page-title-main">Nonlinear dimensionality reduction</span> Projection of data onto lower-dimensional manifolds

Nonlinear dimensionality reduction, also known as manifold learning, is any of various related techniques that aim to project high-dimensional data onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.

<span class="mw-page-title-main">Quantization (signal processing)</span> Process of mapping a continuous set to a countable set

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.

Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text. A matrix containing word counts per document is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.

Wireless sensor networks (WSNs) refer to networks of spatially dispersed and dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind.

The scale-invariant feature transform (SIFT) is a computer vision algorithm to detect, describe, and match local features in images, invented by David Lowe in 1999. Applications include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.

<span class="mw-page-title-main">Linear discriminant analysis</span> Method used in statistics, pattern recognition, and other fields

Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

Recurrent neural networks (RNNs) are a class of artificial neural networks for sequential data processing. Unlike feedforward neural networks, which process data in a single pass, RNNs processes data across multiple time steps, making them well-adapted for modelling and processing text, speech, and time series.

In computer networking, linear network coding is a program in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations.

The structural similarityindex measure (SSIM) is a method for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos. It is also used for measuring the similarity between two images. The SSIM index is a full reference metric; in other words, the measurement or prediction of image quality is based on an initial uncompressed or distortion-free image as reference.

Machine olfaction is the automated simulation of the sense of smell. An emerging application in modern engineering, it involves the use of robots or other automated systems to analyze air-borne chemicals. Such an apparatus is often called an electronic nose or e-nose. The development of machine olfaction is complicated by the fact that e-nose devices to date have responded to a limited number of chemicals, whereas odors are produced by unique sets of odorant compounds. The technology, though still in the early stages of development, promises many applications, such as: quality control in food processing, detection and diagnosis in medicine, detection of drugs, explosives and other dangerous or illegal substances, disaster response, and environmental monitoring.

<span class="mw-page-title-main">Long short-term memory</span> Artificial recurrent neural network architecture used in deep learning

Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at dealing with the vanishing gradient problem present in traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models and other sequence learning methods. It aims to provide a short-term memory for RNN that can last thousands of timesteps, thus "long short-term memory". It is applicable to classification, processing and predicting data based on time series, such as in handwriting, speech recognition, machine translation, speech activity detection, robot control, video games, and healthcare.

Mean shift is a non-parametric feature-space mathematical analysis technique for locating the maxima of a density function, a so-called mode-seeking algorithm. Application domains include cluster analysis in computer vision and image processing.

Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the isometric property, which is sufficient for sparse signals. Compressed sensing has applications in, for example, MRI where the incoherence condition is typically satisfied.

Location estimation in wireless sensor networks is the problem of estimating the location of an object from a set of noisy measurements. These measurements are acquired in a distributed manner by a set of sensors.

In the mathematical theory of artificial neural networks, universal approximation theorems are theorems of the following form: Given a family of neural networks, for each function from a certain function space, there exists a sequence of neural networks from the family, such that according to some criterion. That is, the family of neural networks is dense in the function space.

The Brooks–Iyengar algorithm or FuseCPA Algorithm or Brooks–Iyengar hybrid algorithm is a distributed algorithm that improves both the precision and accuracy of the interval measurements taken by a distributed sensor network, even in the presence of faulty sensors. The sensor network does this by exchanging the measured value and accuracy value at every node with every other node, and computes the accuracy range and a measured value for the whole network from all of the values collected. Even if some of the data from some of the sensors is faulty, the sensor network will not malfunction. The algorithm is fault-tolerant and distributed. It could also be used as a sensor fusion method. The precision and accuracy bound of this algorithm have been proved in 2016.

<span class="mw-page-title-main">Betweenness centrality</span> Measure of a graphs centrality, based on shortest paths

In graph theory, betweenness centrality is a measure of centrality in a graph based on shortest paths. For every pair of vertices in a connected graph, there exists at least one shortest path between the vertices such that either the number of edges that the path passes through or the sum of the weights of the edges is minimized. The betweenness centrality for each vertex is the number of these shortest paths that pass through the vertex.

Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of hidden nodes need to be tuned. These hidden nodes can be randomly assigned and never updated, or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are usually learned in a single step, which essentially amounts to learning a linear model.

<span class="mw-page-title-main">Event detection for WSN</span>

Wireless sensor networks (WSN) are a spatially distributed network of autonomous sensors used for monitoring an environment. Energy cost is a major limitation for WSN requiring the need for energy efficient networks and processing. One of major energy costs in WSN is the energy spent on communication between nodes and it is sometimes desirable to only send data to a gateway node when an event of interest is triggered at a sensor. Sensors will then only open communication during a probable event, saving on communication costs. Fields interested in this type of network include surveillance, home automation, disaster relief, traffic control, health care and more.

A graph neural network (GNN) belongs to a class of artificial neural networks for processing data that can be represented as graphs.

References

  1. Elmenreich, W. (2002). Sensor Fusion in Time-Triggered Systems, PhD Thesis (PDF). Vienna, Austria: Vienna University of Technology. p. 173.
  2. Haghighat, Mohammad Bagher Akbari; Aghagolzadeh, Ali; Seyedarabi, Hadi (2011). "Multi-focus image fusion for visual sensor networks in DCT domain". Computers & Electrical Engineering. 37 (5): 789–797. doi:10.1016/j.compeleceng.2011.04.016. S2CID   38131177.
  3. Li, Wangyan; Wang, Zidong; Wei, Guoliang; Ma, Lifeng; Hu, Jun; Ding, Derui (2015). "A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks". Discrete Dynamics in Nature and Society. 2015: 1–12. doi: 10.1155/2015/683701 . ISSN   1026-0226.
  4. Badeli, Vahid; Ranftl, Sascha; Melito, Gian Marco; Reinbacher-Köstinger, Alice; Von Der Linden, Wolfgang; Ellermann, Katrin; Biro, Oszkar (2021-01-01). "Bayesian inference of multi-sensors impedance cardiography for detection of aortic dissection". COMPEL - the International Journal for Computation and Mathematics in Electrical and Electronic Engineering. 41 (3): 824–839. doi:10.1108/COMPEL-03-2021-0072. ISSN   0332-1649. S2CID   245299500.
  5. Ranftl, Sascha; Melito, Gian Marco; Badeli, Vahid; Reinbacher-Köstinger, Alice; Ellermann, Katrin; von der Linden, Wolfgang (2019-12-31). "Bayesian Uncertainty Quantification with Multi-Fidelity Data and Gaussian Processes for Impedance Cardiography of Aortic Dissection". Entropy. 22 (1): 58. doi: 10.3390/e22010058 . ISSN   1099-4300. PMC   7516489 . PMID   33285833.
  6. Maybeck, S. (1982). Stochastic Models, Estimating, and Control. River Edge, NJ: Academic Press.
  7. N. Xiong; P. Svensson (2002). "Multi-sensor management for information fusion: issues and approaches". Information Fusion. 3 (2): 163–186.
  8. Durrant-Whyte, Hugh F. (2016). "Sensor Models and Multisensor Integration". The International Journal of Robotics Research. 7 (6): 97–113. doi:10.1177/027836498800700608. ISSN   0278-3649. S2CID   35656213.
  9. Galar, Diego; Kumar, Uday (2017). eMaintenance: Essential Electronic Tools for Efficiency. Academic Press. p. 26. ISBN   9780128111543.
  10. Li, Wenfeng; Bao, Junrong; Fu, Xiuwen; Fortino, Giancarlo; Galzarano, Stefano (2012). "Human Postures Recognition Based on D-S Evidence Theory and Multi-sensor Data Fusion". 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012). pp. 912–917. doi:10.1109/CCGrid.2012.144. ISBN   978-1-4673-1395-7. S2CID   1571720.
  11. Fortino, Giancarlo; Gravina, Raffaele (2015). "Fall-MobileGuard: a Smart Real-Time Fall Detection System". Proceedings of the 10th EAI International Conference on Body Area Networks. doi:10.4108/eai.28-9-2015.2261462. ISBN   978-1-63190-084-6. S2CID   38913107.
  12. Tao, Shuai; Zhang, Xiaowei; Cai, Huaying; Lv, Zeping; Hu, Caiyou; Xie, Haiqun (2018). "Gait based biometric personal authentication by using MEMS inertial sensors". Journal of Ambient Intelligence and Humanized Computing. 9 (5): 1705–1712. doi:10.1007/s12652-018-0880-6. ISSN   1868-5137. S2CID   52304214.
  13. Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar (2017). "IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion". Sensors. 17 (12): 2735. Bibcode:2017Senso..17.2735D. doi: 10.3390/s17122735 . ISSN   1424-8220. PMC   5750784 . PMID   29186887.
  14. Guenterberg, E.; Yang, A.Y.; Ghasemzadeh, H.; Jafari, R.; Bajcsy, R.; Sastry, S.S. (2009). "A Method for Extracting Temporal Parameters Based on Hidden Markov Models in Body Sensor Networks With Inertial Sensors" (PDF). IEEE Transactions on Information Technology in Biomedicine. 13 (6): 1019–1030. doi:10.1109/TITB.2009.2028421. ISSN   1089-7771. PMID   19726268. S2CID   1829011.
  15. Parisi, Federico; Ferrari, Gianluigi; Giuberti, Matteo; Contin, Laura; Cimolin, Veronica; Azzaro, Corrado; Albani, Giovanni; Mauro, Alessandro (2016). "Inertial BSN-Based Characterization and Automatic UPDRS Evaluation of the Gait Task of Parkinsonians". IEEE Transactions on Affective Computing. 7 (3): 258–271. doi:10.1109/TAFFC.2016.2549533. ISSN   1949-3045. S2CID   16866555.
  16. Gao, Lei; Bourke, A.K.; Nelson, John (2014). "Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems". Medical Engineering & Physics. 36 (6): 779–785. doi:10.1016/j.medengphy.2014.02.012. ISSN   1350-4533. PMID   24636448.
  17. Xu, James Y.; Wang, Yan; Barrett, Mick; Dobkin, Bruce; Pottie, Greg J.; Kaiser, William J. (2016). "Personalized Multilayer Daily Life Profiling Through Context Enabled Activity Classification and Motion Reconstruction: An Integrated System Approach". IEEE Journal of Biomedical and Health Informatics. 20 (1): 177–188. doi: 10.1109/JBHI.2014.2385694 . ISSN   2168-2194. PMID   25546868. S2CID   16785375.
  18. Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona (2015). "A Novel Adaptive, Real-Time Algorithm to Detect Gait Events From Wearable Sensors". IEEE Transactions on Neural Systems and Rehabilitation Engineering. 23 (3): 413–422. doi:10.1109/TNSRE.2014.2337914. hdl: 11311/865739 . ISSN   1534-4320. PMID   25069118. S2CID   25828466.
  19. Wang, Zhelong; Qiu, Sen; Cao, Zhongkai; Jiang, Ming (2013). "Quantitative assessment of dual gait analysis based on inertial sensors with body sensor network". Sensor Review. 33 (1): 48–56. doi:10.1108/02602281311294342. ISSN   0260-2288.
  20. Kong, Weisheng; Wanning, Lauren; Sessa, Salvatore; Zecca, Massimiliano; Magistro, Daniele; Takeuchi, Hikaru; Kawashima, Ryuta; Takanishi, Atsuo (2017). "Step Sequence and Direction Detection of Four Square Step Test" (PDF). IEEE Robotics and Automation Letters. 2 (4): 2194–2200. doi:10.1109/LRA.2017.2723929. ISSN   2377-3766. S2CID   23410874.
  21. Rethinking JDL Data Fusion Levels
  22. Blasch, E., Plano, S. (2003) “Level 5: User Refinement to aid the Fusion Process”, Proceedings of the SPIE, Vol. 5099.
  23. J. Llinas; C. Bowman; G. Rogova; A. Steinberg; E. Waltz; F. White (2004). Revisiting the JDL data fusion model II. International Conference on Information Fusion. CiteSeerX   10.1.1.58.2996 .
  24. Blasch, E. (2006) "Sensor, user, mission (SUM) resource management and their interaction with level 2/3 fusion" International Conference on Information Fusion.
  25. "Harnessing the full power of sensor fusion -". 3 April 2024.
  26. Blasch, E., Steinberg, A., Das, S., Llinas, J., Chong, C.-Y., Kessler, O., Waltz, E., White, F. (2013) "Revisiting the JDL model for information Exploitation," International Conference on Information Fusion.
  27. Gravina, Raffaele; Alinia, Parastoo; Ghasemzadeh, Hassan; Fortino, Giancarlo (2017). "Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges". Information Fusion. 35: 68–80. doi:10.1016/j.inffus.2016.09.005. ISSN   1566-2535. S2CID   40608207.
  28. Gao, Teng; Song, Jin-Yan; Zou, Ji-Yan; Ding, Jin-Hua; Wang, De-Quan; Jin, Ren-Cheng (2015). "An overview of performance trade-off mechanisms in routing protocol for green wireless sensor networks". Wireless Networks. 22 (1): 135–157. doi:10.1007/s11276-015-0960-x. ISSN   1022-0038. S2CID   34505498.
  29. Chen, Chen; Jafari, Roozbeh; Kehtarnavaz, Nasser (2015). "A survey of depth and inertial sensor fusion for human action recognition". Multimedia Tools and Applications. 76 (3): 4405–4425. doi:10.1007/s11042-015-3177-1. ISSN   1380-7501. S2CID   18112361.
  30. Banovic, Nikola; Buzali, Tofi; Chevalier, Fanny; Mankoff, Jennifer; Dey, Anind K. (2016). "Modeling and Understanding Human Routine Behavior". Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16. pp. 248–260. doi:10.1145/2858036.2858557. ISBN   9781450333627. S2CID   872756.
  31. Maria, Aileni Raluca; Sever, Pasca; Carlos, Valderrama (2015). "Biomedical sensors data fusion algorithm for enhancing the efficiency of fault-tolerant systems in case of wearable electronics device". 2015 Conference Grid, Cloud & High Performance Computing in Science (ROLCG). pp. 1–4. doi:10.1109/ROLCG.2015.7367228. ISBN   978-6-0673-7040-9. S2CID   18782930.
  32. Gross, Jason; Yu Gu; Matthew Rhudy; Srikanth Gururajan; Marcello Napolitano (July 2012). "Flight Test Evaluation of Sensor Fusion Algorithms for Attitude Estimation". IEEE Transactions on Aerospace and Electronic Systems. 48 (3): 2128–2139. Bibcode:2012ITAES..48.2128G. doi:10.1109/TAES.2012.6237583. S2CID   393165.
  33. Joshi, V., Rajamani, N., Takayuki, K., Prathapaneni, N., Subramaniam, L. V. (2013). Information Fusion Based Learning for Frugal Traffic State Sensing. Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence.
  34. Mircea Paul, Muresan; Ion, Giosan; Sergiu, Nedevschi (2020-02-18). "Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation". Sensors. 20 (4): 1110. Bibcode:2020Senso..20.1110M. doi: 10.3390/s20041110 . PMC   7070899 . PMID   32085608.
  35. Ran, Lingyan; Zhang, Yanning; Wei, Wei; Zhang, Qilin (2017-10-23). "A Hyperspectral Image Classification Framework with Spatial Pixel Pair Features". Sensors. 17 (10): 2421. Bibcode:2017Senso..17.2421R. doi: 10.3390/s17102421 . PMC   5677443 . PMID   29065535.
  36. Haghighat, Mohammad; Abdel-Mottaleb, Mohamed; Alhalabi, Wadee (2016). "Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition". IEEE Transactions on Information Forensics and Security. 11 (9): 1984–1996. doi:10.1109/TIFS.2016.2569061. S2CID   15624506.