Guided filter

The guided filter is an edge-preserving smoothing image filter. Like the bilateral filter, it can filter out noise or texture while retaining sharp edges. [1]


Compared with the bilateral filter, the guided filter has two advantages. First, the bilateral filter has high computational complexity, while the guided filter does not use overly complicated calculations and has linear computational complexity. Second, owing to its mathematical model, the bilateral filter sometimes produces unwanted gradient reversal artifacts that distort the image. Because the guided filter is based on a local linear combination of the guidance image, its output is necessarily consistent with the gradient direction of the guidance image, so gradient reversal does not occur.


One key assumption of the guided filter is that the relation between the guidance image $I$ and the filtering output $q$ is linear. Suppose that $q$ is a linear transformation of $I$ in a window $\omega_k$ centered at the pixel $k$.

In order to determine the linear coefficients $(a_k, b_k)$, constraints from the filtering input $p$ are required. Model the output $q$ as the input $p$ minus some unwanted components $n$, such as noise or textures.

The following is the basic model of the guided image filter:

$$ q_i = a_k I_i + b_k, \quad \forall i \in \omega_k \qquad (1) $$

$$ q_i = p_i - n_i \qquad (2) $$

In the above formulas:

$q_i$ is the output pixel;
$p_i$ is the input pixel;
$n_i$ is the noise component of the pixel;
$I_i$ is the guidance image pixel;
$(a_k, b_k)$ are linear coefficients assumed to be constant in $\omega_k$.

The reason to define $q$ as a linear combination of $I$ is that the boundary of an object is related to its gradient. The local linear model ensures that $q$ has an edge only if $I$ has an edge, since $\nabla q = a \, \nabla I$.

Subtracting (2) from (1) gives formula (3); at the same time, define a cost function (4):

$$ n_i = p_i - q_i = p_i - a_k I_i - b_k \qquad (3) $$

$$ E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - p_i)^2 + \epsilon \, a_k^2 \right) \qquad (4) $$

In the above formula:

$\epsilon$ is a regularization parameter penalizing large $a_k$;
$\omega_k$ is a window centered at the pixel $k$.
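The minimization of (4) is not spelled out in the text; as a brief sketch, setting the partial derivatives of the cost function to zero yields

```latex
\frac{\partial E}{\partial b_k}
  = 2 \sum_{i \in \omega_k} \left( a_k I_i + b_k - p_i \right) = 0
  \quad\Longrightarrow\quad
  b_k = \bar{p}_k - a_k \mu_k ,
\qquad
\frac{\partial E}{\partial a_k}
  = 2 \sum_{i \in \omega_k} \left( \left( a_k I_i + b_k - p_i \right) I_i + \epsilon \, a_k \right) = 0 .
```

Substituting the expression for $b_k$ into the second equation and solving for $a_k$ gives the closed-form solution.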

The cost function's solution is given by:

$$ a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon} \qquad (5) $$

$$ b_k = \bar{p}_k - a_k \mu_k \qquad (6) $$

In the above formulas:

$\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $\omega_k$;
$|\omega|$ is the number of pixels in $\omega_k$;
$\bar{p}_k$ is the mean of $p$ in $\omega_k$.

After obtaining the linear coefficients $(a_k, b_k)$, the filtering output $q_i$ can be calculated by (1). Since a pixel $i$ is covered by many overlapping windows $\omega_k$, the values of $q_i$ computed in different windows are averaged, giving

$$ q_i = \bar{a}_i I_i + \bar{b}_i \qquad (7) $$

where $\bar{a}_i$ and $\bar{b}_i$ are the averages of $a_k$ and $b_k$ over all windows containing $i$.
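As a numerical illustration (not from the original text; the toy window values are invented), the closed-form coefficients for a single window can be computed directly with NumPy:

```python
import numpy as np

# Toy 3x3 window of a guidance image I containing a vertical edge,
# and a filtering input p = I plus some noise (hypothetical values).
I_w = np.array([[10., 10., 50.],
                [10., 10., 50.],
                [10., 10., 50.]])
p_w = I_w + np.array([[ 1., -1.,  2.],
                      [-2.,  0., -1.],
                      [ 1.,  1., -1.]])
eps = 4.0                      # regularization parameter

mu_k     = I_w.mean()          # mean of I in the window
sigma2_k = I_w.var()           # variance of I in the window
p_bar_k  = p_w.mean()          # mean of p in the window

# Closed-form minimizer of the cost function (4):
a_k = ((I_w * p_w).mean() - mu_k * p_bar_k) / (sigma2_k + eps)
b_k = p_bar_k - a_k * mu_k

q_w = a_k * I_w + b_k          # linear model (1) applied in this window
```

Because this window straddles a strong edge, $\sigma_k^2 \gg \epsilon$ and $a_k$ comes out close to 1, so the output in this window closely follows the guidance image and the edge is preserved.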


By definition, the algorithm can be written as:

Algorithm 1. Guided Filter

input: filtering input image $p$, guidance image $I$, window radius $r$, regularization $\epsilon$

output: filtering output $q$

1. mean_I = f_mean(I)
   mean_p = f_mean(p)
   corr_I = f_mean(I .* I)
   corr_Ip = f_mean(I .* p)

2. var_I = corr_I − mean_I .* mean_I
   cov_Ip = corr_Ip − mean_I .* mean_p

3. a = cov_Ip ./ (var_I + ε)
   b = mean_p − a .* mean_I

4. mean_a = f_mean(a)
   mean_b = f_mean(b)

5. q = mean_a .* I + mean_b

f_mean is a mean filter with a wide variety of O(N) time methods.
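The steps above can be sketched in Python/NumPy (an illustrative transcription, not code from the article; `scipy.ndimage.uniform_filter` stands in for the O(N) mean filter `f_mean`, and the demo image is invented):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Grayscale guided filter following Algorithm 1.

    I   : guidance image, 2-D float array
    p   : filtering input image, same shape as I
    r   : window radius; the mean filter uses a (2r+1) x (2r+1) box
    eps : regularization parameter
    """
    f_mean = lambda x: uniform_filter(x, size=2 * r + 1, mode='reflect')

    mean_I  = f_mean(I)
    mean_p  = f_mean(p)
    corr_I  = f_mean(I * I)
    corr_Ip = f_mean(I * p)

    var_I  = corr_I - mean_I * mean_I    # variance of I in each window
    cov_Ip = corr_Ip - mean_I * mean_p   # covariance of I and p

    a = cov_Ip / (var_I + eps)           # per-window linear coefficients
    b = mean_p - a * mean_I

    mean_a = f_mean(a)                   # average coefficients over windows
    mean_b = f_mean(b)

    return mean_a * I + mean_b           # output q

# Self-guided smoothing of a noisy step edge (I = p)
rng = np.random.default_rng(0)
step = np.tile(np.repeat([0.0, 1.0], 16), (32, 1))   # 32x32 vertical edge
noisy = step + 0.05 * rng.standard_normal(step.shape)
q = guided_filter(noisy, noisy, r=4, eps=0.01)
```

In the flat regions the noise variance is far below `eps`, so they are smoothed, while the step itself has variance far above `eps` and survives the filtering.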


When the guidance image is identical to the filtering input, $I = p$, the guided filter acts as an edge-preserving smoothing operator: it filters out noise in the input image while preserving clear edges.

Specifically, the parameter $\epsilon$ of the guided filter defines what counts as a "flat patch" or a "high variance patch". Patches with variance much lower than $\epsilon$ are smoothed, and patches with variance much higher than $\epsilon$ are preserved. The role of the range variance $\sigma_r^2$ in the bilateral filter is similar to that of $\epsilon$ in the guided filter: both determine which edges or high-variance patches should be kept and which noise or flat patches should be smoothed out.
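In this self-guided case, equation (5) reduces to $a_k = \sigma_k^2 / (\sigma_k^2 + \epsilon)$, which makes the variance criterion explicit. A small numerical sketch (the patch values are made up for illustration):

```python
import numpy as np

eps = 0.01  # regularization parameter

# Self-guided case (I = p): a_k = var_k / (var_k + eps)
flat_patch = 0.5 + 0.02 * np.array([1., -1., 0., 1., -1., 1., 0., -1., 0.])  # low variance
edge_patch = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1.])                  # high variance

a_flat = flat_patch.var() / (flat_patch.var() + eps)
a_edge = edge_patch.var() / (edge_patch.var() + eps)

# a_flat is near 0: the output is close to the window mean (smoothing).
# a_edge is near 1: the output is close to the input (edge preserved).
print(a_flat, a_edge)
```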

When using the bilateral filter, artifacts may appear near edges because the pixel values change abruptly there. These artifacts are inherent to the bilateral filter and hard to avoid, since edges appear in almost all kinds of images.

The guided filter performs better at avoiding gradient reversal; moreover, in some cases it can be guaranteed that gradient reversal does not occur.

Due to the local linear model $q_i = \bar{a}_i I_i + \bar{b}_i$, it is possible to transfer structure from the guidance image $I$ to the output $q$. This property enables special filtering-based applications such as feathering, matting and dehazing.


See also

Bilateral filter
Non-local means
Kuwahara filter