Machine olfaction is the automated simulation of the sense of smell. An emerging application in modern engineering, it involves the use of robots or other automated systems to analyze airborne chemicals. Such an apparatus is often called an electronic nose or e-nose. The development of machine olfaction is complicated by the fact that e-nose devices to date have responded to a limited number of chemicals, whereas odors are produced by unique sets of (potentially numerous) odorant compounds. The technology, though still in the early stages of development, promises many applications, such as: [1] quality control in food processing, detection and diagnosis in medicine, [2] detection of drugs, explosives and other dangerous or illegal substances, [3] disaster response, and environmental monitoring.
One type of proposed machine olfaction technology is via gas sensor array instruments capable of detecting, identifying, and measuring volatile compounds. However, a critical element in the development of these instruments is pattern analysis, and the successful design of a pattern analysis system for machine olfaction requires a careful consideration of the various issues involved in processing multivariate data: signal-preprocessing, feature extraction, feature selection, classification, regression, clustering, and validation. [4] Another challenge in current research on machine olfaction is the need to predict or estimate the sensor response to aroma mixtures. [5] Some pattern recognition problems in machine olfaction such as odor classification and odor localization can be solved by using time series kernel methods. [6]
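The pattern-analysis stage described above can be illustrated with a deliberately small sketch. This is not any specific e-nose algorithm: the four-sensor readings, odor classes and nearest-centroid classifier are illustrative stand-ins for the preprocessing, feature-extraction and classification steps a real system would implement far more carefully.

```python
# Minimal sketch of an e-nose pattern-analysis pipeline:
# normalize raw sensor-array readings, then classify by nearest centroid.
# All data and class names here are hypothetical.

def normalize(reading):
    """Baseline-correct and scale a raw sensor-array reading to [0, 1]."""
    base = min(reading)
    span = (max(reading) - base) or 1.0
    return [(r - base) / span for r in reading]

def classify(sample, training):
    """Nearest-centroid classification over normalized readings."""
    sample = normalize(sample)
    best_label, best_dist = None, float("inf")
    for label, examples in training.items():
        n = len(examples)
        centroid = [sum(normalize(e)[i] for e in examples) / n
                    for i in range(len(sample))]
        dist = sum((s - c) ** 2 for s, c in zip(sample, centroid))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy 4-sensor training data for two odor classes.
training = {
    "coffee":  [[0.1, 0.9, 0.4, 0.2], [0.0, 1.0, 0.5, 0.1]],
    "ethanol": [[0.8, 0.2, 0.1, 0.9], [0.9, 0.1, 0.2, 1.0]],
}
print(classify([0.05, 0.95, 0.45, 0.15], training))  # coffee
```

A production pipeline would add the remaining steps listed above (feature selection, regression, clustering and validation), but the shape, converting multivariate sensor responses into a class label, is the same.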
There are three basic detection techniques, using conductive-polymer odor sensors (polypyrrole), tin-oxide gas sensors, or quartz-crystal microbalance sensors. [citation needed] Such instruments generally comprise (1) an array of sensors of some type, (2) the electronics to interrogate those sensors and produce digital signals, and (3) data-processing and user-interface software.
The entire system is a means of converting complex sensor responses into a qualitative output profile of the volatile compound (or complex mixture of chemical volatiles) that makes up a smell.
Conventional electronic noses are not analytical instruments in the classical sense and very few claim to be able to quantify an odor. These instruments are first 'trained' with the target odor and then used to 'recognize' smells so that future samples can be identified as 'good' or 'bad'.
Research into alternative pattern recognition methods for chemical sensor arrays has proposed biologically inspired solutions that address the dimensionality gap between artificial and biological olfaction. This approach involves creating unique algorithms for information processing. [7]
Electronic noses are able to discriminate between odors and volatiles from a wide range of sources. The list below shows just some of the typical applications for electronic nose technology – many are backed by research studies and published technical papers.
Odor localization is a combination of quantitative chemical odor analysis and path-searching algorithms, and environmental conditions play a vital role in localization quality. Different methods are being researched for various purposes and in different real-world conditions.
Odor localization is the technique and process of locating a volatile chemical source in an environment containing one or several odors. It is vitally important for all living beings for both finding sustenance and avoiding danger. Unlike the other basic human senses, the sense of smell is entirely chemical-based. However, in comparison with the other dimensions of perception, detection of odor faces additional problems due to the complex dynamic equations of odor and unpredictable external disturbances such as wind.
Odor localization technology shows promise in many applications. [8] [1]
The earliest instrument for specific odor detection was a mechanical nose developed in 1961 by Robert Wighton Moncrieff. The first electronic nose was created by W. F. Wilkens and J. D. Hartman in 1964. [9] Larcombe and Halsall discussed the use of robots for odor sensing in the nuclear industry in the early 1980s, [10] and research on odor localization began in the early 1990s. Odor localization is now a fast-growing field: various sensors have been developed and a variety of algorithms have been proposed for diverse environments and conditions.
Mechanical odor localization can be executed via the following three steps: (1) search for the presence of a volatile chemical; (2) search for the position of the source with an array of odor sensors and certain algorithms; and (3) identify the tracked odor source (odor recognition).
Odor localization methods are often classified according to odor dispersal modes in a range of environmental conditions. These modes can generally be divided into two categories: diffusion-dominated fluid flow and turbulence-dominated fluid flow. These have different algorithms for odor localization, discussed below.
Tracking and localization methods for diffusion-dominated fluid flow – which is mostly used in underground odor localization – must be designed so that olfaction machinery can operate in environments in which fluid motion is dominated by viscosity. This means that diffusion leads to the dispersal of odor flow, and the concentration of odor decreases from the source as a Gaussian distribution. [11]
The diffusion of chemical vapor through soil without an external pressure gradient is often modeled by Fick's second law:

$$\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial d^{2}}$$

where D is the diffusion constant, d is distance in the diffusion direction, C is chemical concentration and t is time.
Assuming the chemical odor flow disperses in only one direction with a uniform cross-section profile, the odor concentration at distance d and time t is related to the source concentration by

$$C_{d,t} = C_{0}\exp\!\left(-\frac{d^{2}}{4Dt}\right)$$

where $C_{0}$ is the odor source concentration. This is the simplest dynamic equation in odor-detection modeling, ignoring external wind and other disturbances. Under the diffusion-dominated propagation model, different algorithms were developed to locate an odor source by simply tracking chemical concentration gradients.
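A minimal numeric sketch of this diffusion-dominated dispersal: concentration relative to the source falls off as a Gaussian in distance and spreads out over time. The diffusion constant used here is purely illustrative.

```python
from math import exp

def concentration_ratio(d, t, D):
    """C(d,t)/C0 for diffusion-dominated dispersal with no wind:
    a Gaussian profile in distance d that spreads as time t grows."""
    return exp(-d * d / (4.0 * D * t))

# At the source the ratio is 1; it decays with distance and
# recovers at a fixed point as time passes.
print(concentration_ratio(0.00, 100.0, 1e-6))
print(concentration_ratio(0.01, 100.0, 1e-6))
print(concentration_ratio(0.01, 400.0, 1e-6))
```

A gradient-tracking robot exploits exactly this monotonic decay: moving toward higher readings moves it toward the source.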
A simple tracking method is the E. coli algorithm. [12] In this process, the odor sensor simply compares concentration information from different locations. The robot moves along repeated straight lines in random directions. When the current odor reading is improved compared to the previous one, the robot continues on its current path; when it is worse, the robot backtracks and then moves in another random direction. This method is simple and efficient; however, the length of the path is highly variable, and missteps increase with proximity to the source. [further explanation needed]
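The run-and-tumble behavior described above can be sketched in a few lines. The concentration field, step size and iteration budget below are illustrative assumptions, not part of the published algorithm.

```python
import random
from math import cos, sin, pi

def e_coli_search(concentration, start, step=0.5, iters=200, seed=0):
    """Run-and-tumble search: keep the current heading while readings
    improve; tumble to a random new heading when they worsen."""
    rng = random.Random(seed)
    pos, heading = list(start), rng.uniform(0, 2 * pi)
    last = concentration(pos)
    for _ in range(iters):
        nxt = [pos[0] + step * cos(heading), pos[1] + step * sin(heading)]
        c = concentration(nxt)
        if c >= last:          # improved: keep going straight
            pos, last = nxt, c
        else:                  # worse: stay put and tumble
            heading = rng.uniform(0, 2 * pi)
    return pos

# Toy field: concentration decays with squared distance from source (3, 4).
field = lambda p: 1.0 / (1.0 + (p[0] - 3) ** 2 + (p[1] - 4) ** 2)
end = e_coli_search(field, [0.0, 0.0])
print(end)
```

Because steps are only accepted when the reading improves, the robot's concentration is non-decreasing, which also exposes the weakness noted above: near the source, a fixed step overshoots and most headings fail, so progress stalls.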
Another method based on the diffusion model is the hex-path algorithm, developed by R. Andrew Russell for underground chemical odor localization with a buried probe controlled by a robotic manipulator. [12] [13] The probe moves at a certain depth along the edges of a closely packed hexagonal grid. At each junction n there are two candidate paths, left and right, and the robot takes the path that leads to the higher odor concentration, based on the concentration information from the previous two junction states, n−1 and n−2. In the 3D version of the hex-path algorithm, the dodecahedron algorithm, the probe moves in a path that corresponds to closely packed dodecahedra, so that at each state point there are three possible path choices.
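A much-simplified sketch of the hex-path idea follows. For brevity it chooses a branch by greedily comparing the two candidate endpoints directly, rather than using the previous two junction readings as Russell's algorithm does, and the field and parameters are invented for the example.

```python
from math import cos, sin, pi

def hex_path_search(concentration, start, edge=1.0, steps=40):
    """Walk hexagon edges, turning 60 degrees left or right toward the
    branch whose endpoint has the higher concentration; remember the
    best-smelling junction visited."""
    pos, heading = list(start), 0.0
    best_c, best_pos = concentration(pos), list(pos)
    for _ in range(steps):
        branches = []
        for turn in (pi / 3, -pi / 3):     # left / right branch
            h = heading + turn
            p = [pos[0] + edge * cos(h), pos[1] + edge * sin(h)]
            branches.append((concentration(p), p, h))
        c, pos, heading = max(branches, key=lambda b: b[0])
        if c > best_c:
            best_c, best_pos = c, list(pos)
    return best_pos

field = lambda p: 1.0 / (1.0 + (p[0] - 5) ** 2 + (p[1] - 2) ** 2)
found = hex_path_search(field, [0.0, 0.0])
print(found)
```

The alternating ±60° choices trace edges of a hexagonal tiling, which is what bounds the probe's movement cost per junction in the buried-probe setting.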
In turbulence-dominated fluid flow, localization methods are designed to deal with background fluid (wind or water) flow as turbulence interruption. Most of the algorithms under this category are based on plume modeling (Figure 1). [14]
Plume dynamics are based on Gaussian models, which are based on the Navier–Stokes equations. The simplified boundary condition of the Gaussian-based model is

$$\frac{\partial C}{\partial t} = D_{x}\frac{\partial^{2} C}{\partial x^{2}} + D_{y}\frac{\partial^{2} C}{\partial y^{2}} - a_{x}\frac{\partial C}{\partial x} - a_{y}\frac{\partial C}{\partial y}$$

where $D_{x}$ and $D_{y}$ are diffusion constants, and $a_{x}$ and $a_{y}$ are the linear wind velocities in the x and y directions, respectively. Additionally assuming that the environment is uniform and the plume source is constant, the reading of the i-th robot sensor at the t-th detection time point is

$$z_{t,i} = g_{i}\sum_{k=1}^{K}\frac{A_{k}}{\|r_{i}-s_{k}\|^{\kappa}} + w_{t,i}$$

where $z_{t,i}$ is the t-th sample of the i-th sensor, $g_{i}$ is its gain factor, $A_{k}$ is the k-th source intensity, $s_{k}$ is the location of the k-th source, $r_{i}$ is the location of the i-th sensor, $\kappa$ is the plume attenuation parameter, and $w_{t,i}$ is background noise satisfying $w_{t,i}\sim N(0,\sigma^{2})$. Under plume modeling, different algorithms can be used to localize the odor source.
A simple algorithm that can be used for location estimation is the triangulation method (Figure 2). Considering the odor detection equation above for a single source, the sensor-to-source distances can be organized on one side of the equation, ignoring the noise:

$$\|r_{i}-s\|^{\kappa} = \frac{g_{i}A}{z_{i}}, \qquad i = 1,\dots,N.$$

Each reading thus constrains the source to a circle of radius $(g_{i}A/z_{i})^{1/\kappa}$ around sensor i, and intersecting these circles yields the source position s.
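A noise-free numerical sketch of this triangulation idea for a single source: each reading under the attenuation model z = gA/d^κ gives a distance, and subtracting the resulting circle equations pairwise leaves a small linear system. The sensor layout, gain and κ = 2 below are illustrative.

```python
def source_from_readings(sensors, readings, g=1.0, A=1.0, kappa=2.0):
    """Triangulate one odor source from three noise-free readings:
    z_i = g*A/d_i**kappa  =>  d_i = (g*A/z_i)**(1/kappa)."""
    d = [(g * A / z) ** (1.0 / kappa) for z in readings]
    (x1, y1), (x2, y2), (x3, y3) = sensors
    # Subtracting the circle equations gives a 2x2 linear system in (x, y).
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d[0] ** 2 - d[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d[0] ** 2 - d[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    # Cramer's rule for the intersection point.
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Noise-free synthetic check: source at (2, 3), sensors on a triangle.
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
readings = [1.0 / ((sx - 2.0) ** 2 + (sy - 3.0) ** 2) for sx, sy in sensors]
est = source_from_readings(sensors, readings)
print(est)  # (2.0, 3.0)
```

With real, noisy readings the circles no longer intersect in a single point, which is exactly the gap the least-squares formulation below the triangulation method is meant to close.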
The least square method (LSM) is a slightly more complicated algorithm for odor localization. The LSM version of the odor-tracking model is given by

$$\bar z_{i} = \frac{g_{i}A}{d_{i}^{\kappa}} + \bar w_{i}$$

where $d_{i}$ is the Euclidean distance between the i-th sensor node and the plume source, given by

$$d_{i} = \|r_{i}-s\| = \sqrt{(x_{i}-x_{s})^{2}+(y_{i}-y_{s})^{2}}.$$
The main difference between the LSM algorithm and the direct triangulation method is the treatment of noise: in LSM, noise is considered, and the odor source location is estimated by minimizing the squared error. The nonlinear least square problem is given by

$$\hat s = \arg\min_{s}\sum_{i=1}^{N}\left(\bar z_{i} - \frac{g_{i}A}{\|r_{i}-s\|^{\kappa}}\right)^{2}$$

where $\hat s$ is the estimated source location and $\bar z_{i}$ is the average of multiple measurements at sensor i, given by

$$\bar z_{i} = \frac{1}{T}\sum_{t=1}^{T} z_{t,i}.$$
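A dependency-free sketch of this nonlinear least-squares estimate. A coarse-to-fine grid search stands in for a proper Gauss–Newton or Levenberg–Marquardt solver, and the sensor layout, gain, intensity and attenuation values are illustrative.

```python
def lsm_localize(sensors, z_bar, g=1.0, A=1.0, kappa=2.0,
                 span=10.0, levels=5):
    """Minimize sum_i (z_bar_i - g*A/||r_i - s||**kappa)**2 over the
    source position s by coarse-to-fine grid search."""
    def sse(s):
        err = 0.0
        for (x, y), z in zip(sensors, z_bar):
            d2 = (x - s[0]) ** 2 + (y - s[1]) ** 2
            if d2 == 0.0:          # candidate sits exactly on a sensor
                return float("inf")
            err += (z - g * A / d2 ** (kappa / 2.0)) ** 2
        return err
    best, step = (span / 2.0, span / 2.0), span / 10.0
    for _ in range(levels):
        cands = [(best[0] + i * step, best[1] + j * step)
                 for i in range(-10, 11) for j in range(-10, 11)]
        best = min(cands, key=sse)   # keep lowest squared error
        step /= 10.0                 # refine around the current best
    return best

# Noise-free synthetic averaged readings from a source at (6, 4).
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
z_bar = [1.0 / ((x - 6.0) ** 2 + (y - 4.0) ** 2) for x, y in sensors]
est = lsm_localize(sensors, z_bar)
print(est)  # ~ (6.0, 4.0)
```

Unlike triangulation, this estimator degrades gracefully when the averaged readings contain residual noise, since it never requires the distance circles to intersect exactly.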
Another method based on plume modeling is maximum likelihood estimation (MLE). In this odor localization method, several matrices are defined as follows:

$$Z = [\bar z_{1},\dots,\bar z_{N}]^{T},\quad G = \operatorname{diag}(g_{1},\dots,g_{N}),\quad [D(\theta)]_{ik} = \frac{1}{\|r_{i}-s_{k}\|^{\kappa}},\quad A = [A_{1},\dots,A_{K}]^{T},\quad W = [\bar w_{1},\dots,\bar w_{N}]^{T}$$

where $\theta$ collects the unknown source positions and intensities. With these matrices, the plume-based odor detection model can be expressed with the following equation:

$$Z = G\,D(\theta)\,A + W.$$

Then MLE can be applied to the model to form the probability density function

$$p(Z\mid\theta) = \left(\frac{2\pi\sigma^{2}}{T}\right)^{-N/2}\exp\!\left(-\frac{T}{2\sigma^{2}}\,\|Z - G\,D(\theta)\,A\|^{2}\right)$$

where $\hat\theta$, the maximizer of this density, gives the estimated odor source position, and the log-likelihood is maximized by minimizing the cost function

$$L(\theta) = \|Z - G\,D(\theta)\,A\|^{2}.$$

The maximum likelihood parameter estimate of $\theta$ can therefore be calculated by minimizing $L(\theta)$, and the position of the odor source can be estimated by solving

$$\hat\theta = \arg\min_{\theta}\,\|Z - G\,D(\theta)\,A\|^{2}.$$
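A sketch of this maximum likelihood approach for a single source. Under Gaussian noise, the intensity that maximizes the likelihood for a fixed candidate position has a closed form (an ordinary least-squares coefficient), so only the position needs to be searched; the grid, gains and readings below are synthetic.

```python
def mle_localize(sensors, Z, g, kappa=2.0, grid=()):
    """Concentrated ML estimate for one source under Gaussian noise,
    model Z = G D(s) A + W.  For each candidate position s the optimal
    intensity is A* = (d.z)/(d.d) with d_i = g_i/||r_i - s||**kappa;
    keep the s (and its A*) minimizing the residual ||Z - A* d||^2."""
    best = None
    for s in grid:
        d, ok = [], True
        for (x, y), gi in zip(sensors, g):
            r2 = (x - s[0]) ** 2 + (y - s[1]) ** 2
            if r2 == 0.0:          # candidate coincides with a sensor
                ok = False
                break
            d.append(gi / r2 ** (kappa / 2.0))
        if not ok:
            continue
        A = sum(di * zi for di, zi in zip(d, Z)) / sum(di * di for di in d)
        resid = sum((zi - A * di) ** 2 for di, zi in zip(d, Z))
        if best is None or resid < best[0]:
            best = (resid, s, A)
    return best[1], best[2]

# Synthetic noise-free readings from a source at (3, 5) with intensity 4.
sensors = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0), (8.0, 8.0)]
g = [1.0, 1.0, 1.0, 1.0]
Z = [4.0 / ((x - 3.0) ** 2 + (y - 5.0) ** 2) for x, y in sensors]
grid = [(x * 0.5, y * 0.5) for x in range(17) for y in range(17)]
s_hat, A_hat = mle_localize(sensors, Z, g, grid=grid)
print(s_hat, A_hat)
```

With noise-free data this recovers both the position and the intensity; with noisy averaged readings the same residual is, up to constants, the negative log-likelihood being minimized.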
In 2007, a strategy called infotaxis was proposed in which a mental model is created utilizing previously collected information about where a smell's source is likely to be. The robot moves in a direction that maximizes information. [15] Infotaxis is designed for tracking in turbulent environments. It has been implemented as a partially observable Markov decision process [16] with a stationary target in a two-dimensional grid. [17]
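A toy illustration of the infotaxis idea on a grid: maintain a Bayesian belief over source locations and move to the neighboring cell that minimizes expected posterior entropy, i.e. maximizes expected information gain. The binary detection model and 5×5 grid are invented for the example.

```python
import math

def entropy(belief):
    """Shannon entropy of the source-location belief."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def update(belief, pos, hit, detect_prob):
    """Bayes update of the belief after one binary reading at pos."""
    post = {}
    for cell, p in belief.items():
        q = detect_prob(pos, cell)
        post[cell] = p * (q if hit else 1.0 - q)
    z = sum(post.values()) or 1.0
    return {c: p / z for c, p in post.items()}

def infotaxis_move(belief, pos, neighbors, detect_prob):
    """Choose the neighbor whose expected posterior entropy is lowest."""
    def expected_entropy(nxt):
        p_hit = sum(p * detect_prob(nxt, c) for c, p in belief.items())
        h_hit = entropy(update(belief, nxt, True, detect_prob))
        h_miss = entropy(update(belief, nxt, False, detect_prob))
        return p_hit * h_hit + (1.0 - p_hit) * h_miss
    return min(neighbors(pos), key=expected_entropy)

# Uniform prior over a 5x5 grid; detection odds fall off with distance.
cells = [(x, y) for x in range(5) for y in range(5)]
belief = {c: 1.0 / len(cells) for c in cells}
detect = lambda pos, src: math.exp(-((pos[0] - src[0]) ** 2
                                     + (pos[1] - src[1]) ** 2))
neighbors = lambda p: [(p[0] + dx, p[1] + dy)
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= p[0] + dx < 5 and 0 <= p[1] + dy < 5]
move = infotaxis_move(belief, (2, 2), neighbors, detect)
print(move)
```

The full strategy repeats this observe-update-move loop; in turbulence, where readings are sparse and intermittent, maximizing information gain outperforms simple gradient climbing.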