Signal transfer function

The signal transfer function (SiTF) is a measure of the signal output versus the signal input of a system such as an infrared system or sensor. The SiTF has many general applications; in the field of image analysis in particular, it gives a measure of the noise of an imaging system and thus yields one assessment of its performance.[1]

SiTF evaluation

In evaluating the SiTF curve, the signal input and signal output are measured differentially; that is, the differential of the input signal and the differential of the output signal are calculated and plotted against each other. An operator, using computer software, defines an arbitrary area, with a given set of data points, within the signal and background regions of the output image of the infrared sensor, i.e. the unit under test (UUT) (see the half-moon target image below). The average signal and background are calculated by averaging the data of each arbitrarily defined region. A second-order polynomial curve is fitted to the data of each line, and the polynomial is then subtracted from the average signal and background data to yield the new signal and background. The difference of the new signal and background data is taken to yield the net signal. Finally, the net signal is plotted versus the signal input, which is expressed within the UUT's own spectral response (e.g. correlated color temperature, pixel intensity, etc.). The slope of the linear portion of this curve is then found using the method of least squares.[2]
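
The procedure lends itself to a short numerical sketch. The Python/NumPy outline below is only one possible reading of the steps above: the function names, the `(row_slice, col_slice)` region convention, the per-row ("line") averaging, and the choice to retain each region's mean level when the fitted second-order polynomial is subtracted are all assumptions of this illustration, not details taken from the cited test methodology.

```python
import numpy as np

def new_data(region_pixels):
    """'New' data for one region of one output image: average each line (row),
    fit a second-order polynomial to the line averages, and subtract it while
    retaining the mean level (an assumption of this sketch)."""
    line_means = region_pixels.mean(axis=1)
    x = np.arange(line_means.size)
    trend = np.polyval(np.polyfit(x, line_means, 2), x)
    return line_means - trend + line_means.mean()

def net_signal(frame, signal_region, background_region):
    """Net signal for one UUT output image: difference of the new signal and
    new background data, averaged over the lines of the operator-defined
    regions. Regions are (row_slice, col_slice) tuples."""
    new_sig = new_data(frame[signal_region])
    new_bkg = new_data(frame[background_region])
    return new_sig.mean() - new_bkg.mean()

def sitf_points(frames, signal_region, background_region):
    """One net-signal (output) value per frame; each frame corresponds to one
    signal input level within the UUT's spectral response."""
    return np.array([net_signal(f, signal_region, background_region)
                     for f in frames])
```

Plotting the values returned by `sitf_points` against the corresponding signal inputs gives the SiTF curve whose linear portion is fitted below.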

SiTF curve

Example graph of the SiTF. The red line is the SiTF: a line fitted to the linear portion of the signal output versus the signal input of an infrared sensor.

Half-moon target. On the left, an image of the background region; on the right, an image of the signal region. Using specialized software, an operator arbitrarily defines an area of evaluation in both regions to be used in determining the signal transfer function.

The net signal is calculated from the average signal and background, as in the calculations for the signal-to-noise ratio in imaging. The SiTF curve is then given by the signal output data (the net signal data) plotted against the signal input data (see the example SiTF graph above). All the data points in the linear region of the SiTF curve can be used in the method of least squares to find a linear approximation. Given $n$ data points $(x_i, y_i)$, a best-fit line parameterized as $y = mx + b$ is given by:[3]

$$
m = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^{2} - \left( \sum_{i=1}^{n} x_i \right)^{2}}, \qquad
b = \frac{\sum_{i=1}^{n} y_i - m \sum_{i=1}^{n} x_i}{n}.
$$
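
As a worked check on these formulas, the short sketch below (Python with NumPy; the function and array names, the boolean `linear_mask` used to select the linear portion, and the synthetic saturating curve are illustrative assumptions) evaluates the same sums directly; for unweighted points the result agrees with `numpy.polyfit(x, y, 1)`.

```python
import numpy as np

def sitf_fit(signal_in, net_signal, linear_mask):
    """Least-squares line y = m*x + b over the linear portion of the SiTF curve.

    signal_in   -- signal input values, within the UUT's spectral response
    net_signal  -- corresponding net signal (output) values
    linear_mask -- boolean mask selecting the linear portion of the curve
    """
    x = np.asarray(signal_in, dtype=float)[linear_mask]
    y = np.asarray(net_signal, dtype=float)[linear_mask]
    n = x.size

    # Closed-form least-squares slope and intercept (the same sums as above).
    m = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x * x) - np.sum(x) ** 2)
    b = (np.sum(y) - m * np.sum(x)) / n
    return m, b

# Synthetic example: a response that is linear, then saturates.
signal_in = np.linspace(0.0, 10.0, 21)
net_signal = np.clip(2.0 * signal_in + 1.0, None, 15.0)   # slope 2, intercept 1, clipped at 15
linear = net_signal < 15.0                                 # keep only the unsaturated points
print(sitf_fit(signal_in, net_signal, linear))             # approximately (2.0, 1.0); the slope is the SiTF
```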

See also

Minimum resolvable contrast
Optical transfer function
Signal-to-noise ratio (imaging)

References

  1. Tom L. Williams (1998). The Optical Transfer Function of Imaging Systems. CRC Press. ISBN 0-7503-0599-1.
  2. Electro Optical Industries, Inc. (2005). EO TestLab Methodology. In Education.
  3. Aboufadel, E. F., Goldberg, J. L., & Potter, M. C. (2005). Advanced Engineering Mathematics (3rd ed.). New York, NY: Oxford University Press.