Filter design


Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which may be conflicting. The purpose is to find a realization of the filter that meets each of the requirements to a sufficient degree to make it useful.


The filter design process can be described as an optimization problem where each requirement contributes to an error function that should be minimized. Certain parts of the design process can be automated, but normally an experienced electrical engineer is needed to get a good result.

The design of digital filters is a deceptively complex topic. [1] Although filters are easily understood and calculated, the practical challenges of their design and implementation are significant and are the subject of advanced research.

Typical design requirements

Typical requirements considered in the design process include the frequency function, phase and group delay, the impulse response, causality, stability, locality, and computational complexity; each is discussed below.

The frequency function

An important parameter is the required frequency response. In particular, the steepness and complexity of the response curve are deciding factors for the filter order and feasibility.

A first-order recursive filter will only have a single frequency-dependent component. This means that the slope of the frequency response is limited to 6 dB per octave. For many purposes, this is not sufficient. To achieve steeper slopes, higher-order filters are required.
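The following sketch (assuming NumPy and SciPy are available; the sampling rate and cutoff are arbitrary illustrative values) compares the roll-off of a first-order and a fourth-order low-pass Butterworth design, showing how the slope steepens with filter order.

```python
# Minimal sketch: roll-off versus filter order (values are illustrative).
import numpy as np
from scipy import signal

fs = 48000.0      # sampling rate in Hz (assumed for the example)
cutoff = 1000.0   # cutoff frequency in Hz (assumed for the example)

for order in (1, 4):
    b, a = signal.butter(order, cutoff, btype="low", fs=fs)
    w, h = signal.freqz(b, a, worN=4096, fs=fs)
    mag_db = 20 * np.log10(np.abs(h) + 1e-12)
    # Attenuation one and two octaves above the cutoff; well above the cutoff
    # the slope approaches roughly 6 dB per octave for each filter order.
    at_2fc = mag_db[np.argmin(np.abs(w - 2 * cutoff))]
    at_4fc = mag_db[np.argmin(np.abs(w - 4 * cutoff))]
    print(f"order {order}: {at_2fc:5.1f} dB at 2*fc, {at_4fc:5.1f} dB at 4*fc")
```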

In relation to the desired frequency function, there may also be an accompanying weighting function, which describes, for each frequency, how important it is that the resulting frequency function approximates the desired one. The larger the weight, the more important a close approximation is.

Typical examples of frequency functions are low-pass, high-pass, band-pass, and band-stop (notch) responses.

Phase and group delay

The impulse response

There is a direct correspondence between the filter's frequency function and its impulse response: the former is the Fourier transform of the latter. That means that any requirement on the frequency function is a requirement on the impulse response, and vice versa.
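A small numerical check of this correspondence, assuming NumPy and SciPy; the 21-tap FIR filter used here is only a placeholder:

```python
# Minimal sketch: the frequency response equals the DTFT of the impulse response.
import numpy as np
from scipy import signal

taps = signal.firwin(21, 0.25)                    # some FIR impulse response
w, h = signal.freqz(taps, worN=512, whole=True)   # frequency response on [0, 2*pi)
h_fft = np.fft.fft(taps, n=512)                   # DTFT samples via zero-padded FFT
print(np.allclose(h, h_fft))                      # True: same frequency function
```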

However, in certain applications it may be the filter's impulse response that is explicit and the design process then aims at producing as close an approximation as possible to the requested impulse response given all other requirements.

In some cases it may even be relevant to consider a frequency function and impulse response of the filter which are chosen independently from each other. For example, we may want both a specific frequency function of the filter and that the resulting filter have as small an effective width in the signal domain as possible. The latter condition can be realized by considering a very narrow function as the wanted impulse response of the filter even though this function has no relation to the desired frequency function. The goal of the design process is then to realize a filter which tries to meet both of these contradicting design goals as much as possible. An example is high-resolution audio, in which the frequency response (magnitude and phase) for steady-state signals (sums of sinusoids) is the primary filter requirement, while an unconstrained impulse response may cause unexpected degradation due to time spreading of transient signals. [2] [3]

Causality

In order to be implementable, any time-dependent filter (operating in real time) must be causal: the filter response only depends on the current and past inputs. A standard approach is to leave this requirement until the final step. If the resulting filter is not causal, it can be made causal by introducing an appropriate time-shift (or delay). If the filter is a part of a larger system (which it normally is) these types of delays have to be introduced with care since they affect the operation of the entire system.
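As a small sketch of this idea (assuming NumPy), a symmetric smoothing kernel centered on n = 0 is non-causal; implementing it with taps shifted to current and past samples makes it causal at the cost of a fixed delay of (N − 1)/2 samples:

```python
# Minimal sketch: making a symmetric (non-causal) FIR kernel causal via a shift.
import numpy as np

h_noncausal = np.array([0.25, 0.5, 0.25])   # taps conceptually at n = -1, 0, +1
x = np.zeros(16)
x[4] = 1.0                                  # unit impulse at n = 4

# Implementing the kernel causally (taps at n = 0, 1, 2) delays the output.
y = np.convolve(x, h_noncausal, mode="full")[: x.size]
print(np.argmax(y))                         # 5: the peak arrives (N - 1)/2 = 1 sample late
```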

Filters that do not operate in real time (e.g. for image processing) can be non-causal. This allows, for example, the design of zero-delay recursive filters, where the group delay of a causal filter is canceled by its Hermitian non-causal counterpart.
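For offline data, forward-backward filtering is one common way to realize such a zero-delay (zero-phase) recursive filter; a minimal sketch, assuming SciPy and an arbitrary test signal:

```python
# Minimal sketch: zero-phase filtering of offline data with filtfilt.
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)   # noisy 5 Hz tone

b, a = signal.butter(4, 20, btype="low", fs=fs)
y_causal = signal.lfilter(b, a, x)     # causal run: output is delayed (group delay)
y_zero = signal.filtfilt(b, a, x)      # forward + backward run: no net delay,
                                       # but requires the whole signal (non-causal)
```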

Stability

A stable filter ensures that every bounded input signal produces a bounded filter response. A filter which does not meet this requirement may in some situations prove useless or even harmful. Certain design approaches can guarantee stability, for example by using only feed-forward circuits such as an FIR filter. On the other hand, filters based on feedback circuits have other advantages and may therefore be preferred, even if this class of filters includes unstable filters. In this case, the filters must be carefully designed in order to avoid instability.
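For a discrete IIR filter, stability can be checked by inspecting the poles of its transfer function, which must all lie strictly inside the unit circle; a minimal sketch assuming NumPy and SciPy:

```python
# Minimal sketch: bounded-input bounded-output stability check via pole locations.
import numpy as np
from scipy import signal

b, a = signal.butter(6, 0.3)            # 6th-order low-pass, normalized cutoff
poles = np.roots(a)                     # roots of the denominator polynomial
print("stable:", bool(np.all(np.abs(poles) < 1.0)))
```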

Locality

In certain applications we have to deal with signals which contain components which can be described as local phenomena, for example pulses or steps, which have certain time duration. A consequence of applying a filter to a signal is, in intuitive terms, that the duration of the local phenomena is extended by the width of the filter. This implies that it is sometimes important to keep the width of the filter's impulse response function as short as possible.

According to the uncertainty relation of the Fourier transform, the product of the width of the filter's impulse response function and the width of its frequency function must exceed a certain constant. This means that any requirement on the filter's locality also implies a bound on its frequency function's width. Consequently, it may not be possible to simultaneously meet requirements on the locality of the filter's impulse response function as well as on its frequency function. This is a typical example of contradicting requirements.

Computational complexity

A general desire in any design is that the number of operations (additions and multiplications) needed to compute the filter response is as low as possible. In certain applications, this desire is a strict requirement, for example due to limited computational resources, limited power resources, or limited time. The last limitation is typical in real-time applications.

There are several ways in which a filter can have different computational complexity. For example, the order of a filter is more or less proportional to the number of operations. This means that by choosing a low order filter, the computation time can be reduced.

For discrete filters the computational complexity is more or less proportional to the number of filter coefficients. If the filter has many coefficients, for example in the case of multidimensional signals such as tomography data, it may be relevant to reduce the number of coefficients by removing those which are sufficiently close to zero. In multirate filters, the number of coefficients can be reduced by taking advantage of the signal's bandwidth limits: the input signal is downsampled (e.g. to its critical frequency) and upsampled after filtering.
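A minimal sketch of this multirate idea, assuming SciPy and a deliberately over-sampled test signal; the decimation factor of 4 is arbitrary:

```python
# Minimal sketch: filter a band-limited signal at a reduced rate.
import numpy as np
from scipy import signal

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 100 * t)              # band-limited well below fs / 2

x_low = signal.decimate(x, 4)                # anti-alias filter + downsample by 4
# ... any further filtering done here runs at fs / 4, at roughly a quarter
#     of the operations per second ...
x_back = signal.resample_poly(x_low, up=4, down=1)   # return to the original rate
```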

Another issue related to computational complexity is separability, that is, if and how a filter can be written as a convolution of two or more simpler filters. In particular, this issue is of importance for multidimensional filters, e.g., 2D filters which are used in image processing. In this case, a significant reduction in computational complexity can be obtained if the filter can be separated as the convolution of one 1D filter in the horizontal direction and one 1D filter in the vertical direction. A result of the filter design process may, e.g., be to approximate some desired filter as a separable filter or as a sum of separable filters.
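The saving can be seen in a short sketch (assuming NumPy and SciPy; the Gaussian kernel and random image are placeholders): for an N x N separable kernel, two 1D passes cost roughly 2N multiplications per pixel instead of N².

```python
# Minimal sketch: a separable 2D filter applied as two 1D convolutions.
import numpy as np
from scipy import signal

image = np.random.rand(128, 128)
g = signal.windows.gaussian(9, std=2.0)
g /= g.sum()                                        # 1D Gaussian kernel

kernel_2d = np.outer(g, g)                          # equivalent 9 x 9 2D kernel
direct = signal.convolve2d(image, kernel_2d, mode="same")

rows = signal.convolve2d(image, g.reshape(1, -1), mode="same")        # horizontal pass
separable = signal.convolve2d(rows, g.reshape(-1, 1), mode="same")    # vertical pass

print(np.allclose(direct, separable))               # True (up to rounding)
```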

Other considerations

It must also be decided how the filter is going to be implemented, for example as an analog filter or as a digital filter; the two cases are discussed below.

Analog filters

The design of linear analog filters is for the most part covered in the linear filter section.

Digital filters

Digital filters are classified into one of two basic forms, according to how they respond to a unit impulse:

  • Finite impulse response, or FIR, filters express each output sample as a weighted sum of the last N input samples, where N is the order of the filter. FIR filters are normally non-recursive, meaning they do not use feedback and as such are inherently stable. Moving average and CIC filters are examples of FIR filters that are normally implemented recursively (using feedback). If the FIR coefficients are symmetrical (often the case), then such a filter is linear phase, so it delays signals of all frequencies equally, which is important in many applications. It is also straightforward to avoid overflow in an FIR filter. The main disadvantage is that they may require significantly more processing and memory resources than cleverly designed IIR variants. FIR filters are generally easier to design than IIR filters: the Parks-McClellan filter design algorithm (based on the Remez algorithm) is one suitable method for designing quite good filters semi-automatically. (See Methodology.)
  • Infinite impulse response, or IIR, filters are the digital counterpart to analog filters. Such a filter contains internal state, and the output and the next internal state are determined by a linear combination of the previous inputs and outputs (in other words, they use feedback, which FIR filters normally do not). In theory, the impulse response of such a filter never dies out completely, hence the name IIR, though in practice this is not true given the finite resolution of computer arithmetic. IIR filters normally require less computing resources than an FIR filter of similar performance. However, due to the feedback, high-order IIR filters may have problems with instability, arithmetic overflow, and limit cycles, and require careful design to avoid such pitfalls. Additionally, since the phase shift is inherently a non-linear function of frequency, the time delay through such a filter is frequency-dependent, which can be a problem in many situations. Second-order IIR filters are often called 'biquads', and a common implementation of higher-order filters is to cascade biquads (a sketch of such a cascade follows this list). A useful reference for computing biquad coefficients is the RBJ Audio EQ Cookbook.
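As mentioned in the list above, a minimal sketch of a cascaded-biquad implementation, assuming SciPy; the order, cutoff, and sampling rate are arbitrary:

```python
# Minimal sketch: an 8th-order IIR low-pass realized as four cascaded biquads.
import numpy as np
from scipy import signal

fs = 48000.0
sos = signal.butter(8, 2000.0, btype="low", fs=fs, output="sos")  # second-order sections
print(sos.shape)                  # (4, 6): one row [b0, b1, b2, a0, a1, a2] per biquad

x = np.random.randn(1024)         # placeholder input
y = signal.sosfilt(sos, x)        # run the biquad cascade sample by sample
```

Cascading second-order sections keeps each stage's coefficients well conditioned, which is why it is usually preferred over a single high-order difference equation.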

Sample rate

Unless the sample rate is fixed by some outside constraint, selecting a suitable sample rate is an important design decision. A high rate will require more in terms of computational resources, but less in terms of anti-aliasing filters. Interference and beating with other signals in the system may also be an issue.

Anti-aliasing

For any digital filter design, it is crucial to analyze and avoid aliasing effects. Often, this is done by adding analog anti-aliasing filters at the input and output, thus avoiding any frequency component above the Nyquist frequency. The complexity (i.e., steepness) of such filters depends on the required signal-to-noise ratio and the ratio between the sampling rate and the highest frequency of the signal.

Theoretical basis

Parts of the design problem relate to the fact that certain requirements are described in the frequency domain while others are expressed in the time domain and that these may conflict. For example, it is not possible to obtain a filter which has both an arbitrary impulse response and an arbitrary frequency function. Other effects which relate the time and frequency domains are described in the following subsections.

The uncertainty principle

As stated by the Gabor limit, an uncertainty principle, the product of the width of the frequency function and the width of the impulse response cannot be smaller than a specific constant. This implies that if a specific frequency function is requested, corresponding to a specific frequency width, the minimum width of the filter in the signal domain is set. Vice versa, if the maximum width of the response is given, this determines the smallest possible width in the frequency domain. This is a typical example of contradictory requirements where the filter design process may try to find a useful compromise.

The variance extension theorem

Let $\sigma_s^2$ be the variance of the input signal and let $\sigma_f^2$ be the variance of the filter. The variance of the filter response, $\sigma_r^2$, is then given by

$\sigma_r^2 = \sigma_s^2 + \sigma_f^2$

This means that $\sigma_r \geq \sigma_f$, which implies that the localization of various features such as pulses or steps in the filter response is limited by the filter width in the signal domain. If a precise localization is requested, we need a filter of small width in the signal domain and, via the uncertainty principle, its width in the frequency domain cannot be arbitrarily small.

Discontinuities versus asymptotic behaviour

Let f(t) be a function and let F(ω) be its Fourier transform. There is a theorem which states that if the lowest-order derivative of F which is discontinuous is of order n, then f has an asymptotic decay like $1/|t|^{n+1}$.

A consequence of this theorem is that the frequency function of a filter should be as smooth as possible to allow its impulse response to have a fast decay, and thereby a short width.

Methodology

One common method for designing FIR filters is the Parks-McClellan filter design algorithm, based on the Remez exchange algorithm. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of N coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as you can get to the desired response given that you can use only N coefficients. This method is particularly easy in practice and at least one text [4] includes a program that takes the desired filter and N and returns the optimum coefficients. One possible drawback to filters designed this way is that they contain many small ripples in the passband(s), since such a filter minimizes the peak error.
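A minimal sketch of such an equiripple design using SciPy's remez function; the band edges, weights, and filter length below are illustrative, not taken from the text above:

```python
# Minimal sketch: Parks-McClellan / Remez exchange low-pass FIR design.
import numpy as np
from scipy import signal

fs = 8000.0
numtaps = 73                               # number of coefficients (order + 1)
bands = [0, 1000, 1500, fs / 2]            # passband up to 1 kHz, stopband from 1.5 kHz
desired = [1, 0]                           # desired gain in each band
weight = [1, 10]                           # penalize stopband errors 10x more

taps = signal.remez(numtaps, bands, desired, weight=weight, fs=fs)

w, h = signal.freqz(taps, worN=4096, fs=fs)
stop = np.abs(h[w >= 1500])
print("peak stopband gain: %.1f dB" % (20 * np.log10(stop.max())))
```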

Another method of finding a discrete FIR filter is the filter optimization described in Knutsson et al., which minimizes the integral of the square of the error, instead of its maximum value. In its basic form this approach requires that an ideal frequency function of the filter is specified together with a frequency weighting function and a set of coordinates in the signal domain where the filter coefficients are located.

An error function is defined as

$\varepsilon = \| W \cdot (F_I - \mathcal{F}\{f\}) \|^2$

where f is the discrete filter, $F_I$ is the requested (ideal) frequency function, W is the frequency weighting function, and $\mathcal{F}$ is the discrete-time Fourier transform defined on the specified set of coordinates. The norm used here is, formally, the usual norm on $L^2$ spaces. This means that $\varepsilon$ measures the deviation between the requested frequency function of the filter, $F_I$, and the actual frequency function of the realized filter, $\mathcal{F}\{f\}$. However, the deviation is also subject to the weighting function W before the error function is computed.

Once the error function is established, the optimal filter is given by the coefficients which minimize $\varepsilon$. This can be done by solving the corresponding least squares problem. In practice, the $L^2$ norm has to be approximated by means of a suitable sum over discrete points in the frequency domain. In general, however, these points should be significantly more numerous than the number of coefficients in the signal domain to obtain a useful approximation.
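The following sketch illustrates this weighted least-squares idea on a dense frequency grid. It is not the exact formulation from Knutsson et al.; the coefficient positions, ideal response, and weights are invented for illustration, and only NumPy is assumed.

```python
# Minimal sketch: weighted least-squares FIR design on a dense frequency grid.
import numpy as np

n = np.arange(-15, 16)                       # coefficient positions (31 taps)
w = np.linspace(-np.pi, np.pi, 1024)         # grid with far more points than taps
F = np.exp(-1j * np.outer(w, n))             # DTFT matrix: F @ h is the realized response

ideal = (np.abs(w) <= 0.3 * np.pi).astype(float)        # ideal low-pass frequency function
weight = np.where(np.abs(w) <= 0.4 * np.pi, 1.0, 10.0)  # weight stopband deviations more

# Minimize || weight * (F @ h - ideal) ||^2 over the coefficients h.
A = weight[:, None] * F
b = weight * ideal
h, *_ = np.linalg.lstsq(A, b, rcond=None)
h = h.real           # imaginary part is numerically zero for this symmetric setup
```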

Simultaneous optimization in both domains

The previous method can be extended to include an additional error term related to a desired filter impulse response in the signal domain, with a corresponding weighting function. The ideal impulse response can be chosen independently of the ideal frequency function and is in practice used to limit the effective width and to remove ringing effects of the resulting filter in the signal domain. This is done by choosing a narrow ideal filter impulse response function, e.g., an impulse, and a weighting function which grows fast with the distance from the origin, e.g., the distance squared. The optimal filter can still be calculated by solving a simple least squares problem and the resulting filter is then a "compromise" which has a total optimal fit to the ideal functions in both domains. An important parameter is the relative strength of the two weighting functions which determines in which domain it is more important to have a good fit relative to the ideal function.
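Continuing the previous sketch under the same assumptions, the extension simply stacks a second, signal-domain residual (a weighted pull toward a narrow ideal impulse response) onto the frequency-domain one before solving the joint least-squares problem:

```python
# Minimal sketch: joint least-squares fit in the frequency and signal domains.
import numpy as np

n = np.arange(-15, 16)
w = np.linspace(-np.pi, np.pi, 1024)
F = np.exp(-1j * np.outer(w, n))

ideal_freq = (np.abs(w) <= 0.3 * np.pi).astype(float)   # ideal frequency function
weight_freq = np.ones_like(w)

ideal_time = (n == 0).astype(float)                     # narrow ideal impulse response
weight_time = 0.05 * n.astype(float) ** 2               # grows with distance from origin

# Stack both weighted residuals, weight_freq*(F h - ideal_freq) and
# weight_time*(h - ideal_time), and solve one least-squares problem.
A = np.vstack([weight_freq[:, None] * F, np.diag(weight_time)])
b = np.concatenate([weight_freq * ideal_freq, weight_time * ideal_time])
h, *_ = np.linalg.lstsq(A, b, rcond=None)
h = h.real
```

The relative scale of weight_time against weight_freq plays the role of the relative strength of the two weighting functions mentioned above.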


Related Research Articles

Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant in which case they can be analyzed exactly using LTI system theory revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies, they are sometimes known as frequency filters.

Digital filter

In signal processing, a digital filter is a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is typically an electronic circuit operating on continuous-time analog signals.

Wavelet

A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are often referred to as "brief oscillations". A taxonomy of wavelets has been established, based on the number and direction of their pulses. Wavelets are imbued with specific properties that make them useful for signal processing.

An adaptive filter is a system with a linear filter that has a transfer function controlled by variable parameters and a means to adjust those parameters according to an optimization algorithm. Because of the complexity of the optimization algorithms, almost all adaptive filters are digital filters. Adaptive filters are required for some applications because some parameters of the desired processing operation are not known in advance or are changing. The closed loop adaptive filter uses feedback in the form of an error signal to refine its transfer function.

In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where it simplifies mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components so that the overall response is as flat (uniform) as possible across the system's bandwidth. In control systems, such as a vehicle's cruise control, it may be used to assess system stability, often through the use of Bode plots. Systems with a specific frequency response can be designed using analog and digital filters.

In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely.

Infinite impulse response (IIR) is a property applying to many linear time-invariant systems that are distinguished by having an impulse response which does not become exactly zero past a certain point, but continues indefinitely. This is in contrast to a finite impulse response (FIR) system, in which the impulse response does become exactly zero at times t > T for some finite T, thus being of finite duration. Common examples of linear time-invariant systems are most electronic and digital filters. Systems with this property are known as IIR systems or IIR filters.

In signal processing, linear phase is a property of a filter where the phase response of the filter is a linear function of frequency. The result is that all frequency components of the input signal are shifted in time by the same constant amount, which is referred to as the group delay. Consequently, there is no phase distortion due to the time delay of frequencies relative to one another.

Filter bank

In signal processing, a filter bank is an array of bandpass filters that separates the input signal into multiple components, each one carrying a single frequency sub-band of the original signal. One application of a filter bank is a graphic equalizer, which can attenuate the components differently and recombine them into a modified version of the original signal. The process of decomposition performed by the filter bank is called analysis; the output of analysis is referred to as a subband signal with as many subbands as there are filters in the filter bank. The reconstruction process is called synthesis, meaning reconstitution of a complete signal resulting from the filtering process.

Gaussian blur

In image processing, a Gaussian blur is the result of blurring an image by a Gaussian function.

Gabor transform

The Gabor transform, named after Dennis Gabor, is a special case of the short-time Fourier transform. It is used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. The function to be transformed is first multiplied by a Gaussian function, which can be regarded as a window function, and the resulting function is then transformed with a Fourier transform to derive the time-frequency analysis. The window function means that the signal near the time being analyzed will have higher weight. The Gabor transform of a signal x(t) is defined by

$G_x(\tau, \omega) = \int_{-\infty}^{\infty} x(t)\, e^{-\pi (t-\tau)^2} e^{-j\omega t}\, dt$

Gaussian filter

In electronics and signal processing, mainly in digital signal processing, a Gaussian filter is a filter whose impulse response is a Gaussian function. Gaussian filters have the properties of having no overshoot to a step function input while minimizing the rise and fall time. This behavior is closely connected to the fact that the Gaussian filter has the minimum possible group delay. A Gaussian filter will have the best combination of suppression of high frequencies while also minimizing spatial spread, being the critical point of the uncertainty principle. These properties are important in areas such as oscilloscopes and digital telecommunication systems.

In digital signal processing, a cascaded integrator–comb (CIC) is an optimized class of finite impulse response (FIR) filter combined with an interpolator or decimator.

A digital delay line is a discrete element in a digital filter, which allows a signal to be delayed by a number of samples. Delay lines are commonly used to delay audio signals feeding loudspeakers to compensate for the speed of sound in air, and to align video signals with accompanying audio, called audio-to-video synchronization. Delay lines may compensate for electronic processing latency so that multiple signals leave a device simultaneously despite having different pathways.

In signal processing, a filter is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies or frequency bands. However, filters do not exclusively act in the frequency domain; especially in the field of image processing many other targets for filtering exist. Correlations can be removed for certain frequency components and not for others without having to act in the frequency domain. Filters are widely used in electronics and telecommunication, in radio, television, audio recording, radar, control systems, music synthesis, image processing, computer graphics, and structural dynamics.

Two dimensional filters have seen substantial development effort due to their importance and high applicability across several domains. In the 2-D case the situation is quite different from the 1-D case, because the multi-dimensional polynomials cannot in general be factored. This means that an arbitrary transfer function cannot generally be manipulated into a form required by a particular implementation. The input-output relationship of a 2-D IIR filter obeys a constant-coefficient linear partial difference equation from which the value of an output sample can be computed using the input samples and previously computed output samples. Because the values of the output samples are fed back, the 2-D filter, like its 1-D counterpart, can be unstable.

Sonar signal processing

Sonar systems are generally used underwater for range finding and detection. Active sonar emits an acoustic signal, or pulse of sound, into the water. The sound bounces off the target object and returns an “echo” to the sonar transducer. Unlike active sonar, passive sonar does not emit its own signal, which is an advantage for military vessels. But passive sonar cannot measure the range of an object unless it is used in conjunction with other passive listening devices. Multiple passive sonar devices must be used for triangulation of a sound source. No matter whether active sonar or passive sonar, the information included in the reflected signal can not be used without technical signal processing. To extract the useful information from the mixed signal, some steps are taken to transfer the raw acoustic data.

A two-dimensional (2D) adaptive filter is very much like a one-dimensional adaptive filter in that it is a linear system whose parameters are adaptively updated throughout the process, according to some optimization approach. The main difference between 1D and 2D adaptive filters is that the former usually take signals with respect to time as input, which implies causality constraints, while the latter handle signals with two dimensions, like x-y coordinates in the space domain, which are usually non-causal. Moreover, just like 1D filters, most 2D adaptive filters are digital filters, because of the complex and iterative nature of the algorithms.

In signal processing it is useful to simultaneously analyze the space and frequency characteristics of a signal. While the Fourier transform gives the frequency information of the signal, it is not localized. This means that we cannot determine which part of a signal produced a particular frequency. It is possible to use a short time Fourier transform for this purpose, however the short time Fourier transform limits the basis functions to be sinusoidal. To provide a more flexible space-frequency signal decomposition several filters have been proposed. The Log-Gabor filter is one such filter that is an improvement upon the original Gabor filter. The advantage of this filter over the many alternatives is that it better fits the statistics of natural images compared with Gabor filters and other wavelet filters.

A transfer function filter uses the transfer function and the convolution theorem to produce a filter. An example of such a filter using a finite impulse response, together with its application to real-world data, is discussed in that article.

References

  1. Valdez, M.E. "Digital Filters". GRM Networks. Retrieved 13 July 2020.
  2. Story, Mike (September 1997). "A Suggested Explanation For (Some Of) The Audible Differences Between High Sample Rate And Conventional Sample Rate Audio Material" (PDF). dCS Ltd. Archived (PDF) from the original on 28 November 2009.
  3. Robjohns, Hugh (August 2016). "MQA Time-domain Accuracy & Digital Audio Quality". soundonsound.com. Sound On Sound. Archived from the original on 10 March 2023.
  4. Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-914101-4.