Optical correlator

An optical correlator is an optical computer for comparing two signals by utilising the Fourier transforming properties of a lens.[1] It is commonly used in optics for target tracking and identification.

Introduction

The correlator takes an input signal and multiplies it by a filter in the Fourier domain. An example is the matched filter, which is equivalent to cross-correlating the input with a reference signal.

The cross correlation, or correlation plane, $c(x,y)$, of a 2D signal $r(x,y)$ with $s(x,y)$ is

$$c(x,y) = r(x,y) * s^{*}(-x,-y)$$

where $*$ denotes convolution and $s^{*}$ is the complex conjugate of $s$. This can be re-expressed in Fourier space as

$$C(\xi,\eta) = R(\xi,\eta)\, S^{*}(\xi,\eta)$$

where capital letters denote the Fourier transforms of the corresponding lower-case signals. The correlation plane is then obtained by inverse Fourier transforming $C(\xi,\eta)$.
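
As a numerical check on this identity, the correlation plane can be computed with FFTs. The following is a minimal NumPy sketch; the function name and toy arrays are illustrative, not part of the original article.

    import numpy as np

    def correlation_plane(r, s):
        # Cross-correlation of 2D arrays via the identity c = IFFT(R * conj(S)).
        R = np.fft.fft2(r)
        S = np.fft.fft2(s)
        c = np.fft.ifft2(R * np.conj(S))
        return np.fft.fftshift(c)  # move the zero-shift term to the array centre

    # Toy test: an input that is a shifted copy of the reference should
    # produce a correlation peak at the corresponding offset.
    rng = np.random.default_rng(0)
    s = rng.standard_normal((64, 64))           # reference signal
    r = np.roll(s, shift=(5, -3), axis=(0, 1))  # input = shifted reference
    c = correlation_plane(r, s)
    peak = np.unravel_index(np.argmax(np.abs(c)), c.shape)
    print(peak)  # offset of the peak from the centre recovers the (5, -3) shift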

Implementation

According to Fresnel diffraction theory, a convex lens of focal length $f$ produces the exact Fourier transform, at a distance $f$ behind the lens, of an object placed a distance $f$ in front of it. So that complex amplitudes are multiplied, the light source must be coherent, typically a laser. The input signal and filter are commonly written onto a spatial light modulator (SLM).
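
Concretely, with coherent illumination at wavelength $\lambda$, the field $U_f$ in the back focal plane is a scaled Fourier transform of the field $U_o$ in the front focal plane (a standard Fourier-optics result; the prefactor convention varies between texts):

$$U_f(u,v) = \frac{1}{i\lambda f} \iint U_o(x,y)\, e^{-i\frac{2\pi}{\lambda f}(xu + yv)}\, \mathrm{d}x\, \mathrm{d}y,$$

so the point $(u,v)$ in the output plane samples the spatial frequency $(\xi,\eta) = (u/\lambda f,\, v/\lambda f)$ of the input.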

A typical arrangement is the 4f correlator. The input signal is written to an SLM, which is illuminated with a laser. A first lens Fourier transforms the input, and the transform is modulated by a second SLM containing the filter. A second lens Fourier transforms the result again, and the correlation output is captured on a camera.
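
Under idealised assumptions (thin lenses, monochromatic plane-wave illumination, scale factors and apertures ignored), this chain can be mimicked numerically by treating each lens as a forward Fourier transform; since the second transform is forward rather than inverse, the camera records a coordinate-inverted correlation plane. All names below are illustrative.

    import numpy as np

    def four_f_correlator(input_plane, filter_plane):
        # Idealised 4f chain: lens 1 -> filter SLM -> lens 2.
        # Both lenses apply forward transforms, so the output plane is the
        # correlation plane with inverted coordinates (up to a scale factor).
        field = np.fft.fft2(input_plane)  # first lens: Fourier plane
        field = field * filter_plane      # modulation by the filter SLM
        return np.fft.fft2(field)         # second lens: output plane

    # Matched filter for a toy reference: the conjugate of its spectrum.
    s = np.zeros((64, 64))
    s[20:30, 20:30] = 1.0                 # toy reference image
    H = np.conj(np.fft.fft2(s))           # matched filter
    out = four_f_correlator(s, H)         # autocorrelation peaks at zero shift
    print(np.unravel_index(np.argmax(np.abs(out)), out.shape))  # -> (0, 0)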

Filter design

Many filters have been designed for use with an optical correlator. Some address hardware limitations; others optimize a merit function or achieve invariance under a certain transformation.

Matched filter

The matched filter maximizes the signal-to-noise ratio and is simply obtained by taking as the filter the complex conjugate of the Fourier transform of the reference signal $s(x,y)$, that is, $H(\xi,\eta) = S^{*}(\xi,\eta)$.
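
For additive white noise, the optimality follows from a textbook Cauchy–Schwarz argument (a standard derivation, not spelled out in the original article): the output signal-to-noise ratio satisfies

$$\mathrm{SNR} \propto \frac{\left| \iint H(\xi,\eta)\, S(\xi,\eta)\, \mathrm{d}\xi\, \mathrm{d}\eta \right|^{2}}{\iint |H(\xi,\eta)|^{2}\, \mathrm{d}\xi\, \mathrm{d}\eta} \le \iint |S(\xi,\eta)|^{2}\, \mathrm{d}\xi\, \mathrm{d}\eta,$$

with equality exactly when $H(\xi,\eta) \propto S^{*}(\xi,\eta)$.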

Phase-only filter

The phase-only filter[2] is easier to implement given the modulation limitations of many SLMs, and it has been shown to be more discriminating than the matched filter.
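
In the notation above, writing the reference spectrum as $S(\xi,\eta) = |S(\xi,\eta)|\, e^{i\phi(\xi,\eta)}$, the phase-only filter discards the amplitude of the matched filter and keeps only its phase:

$$H_{\mathrm{POF}}(\xi,\eta) = \frac{S^{*}(\xi,\eta)}{|S(\xi,\eta)|} = e^{-i\phi(\xi,\eta)}.$$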

References

  1. A. VanderLugt, "Signal detection by complex spatial filtering," IEEE Transactions on Information Theory, vol. 10, 1964, pp. 139–145.
  2. J. L. Horner and P. D. Gianino, "Phase-only matched filtering," Applied Optics, vol. 23, 1984, pp. 812–816.