

**Local regression** or **local polynomial regression**,^{[1]} also known as **moving regression**,^{[2]} is a generalization of the moving average and polynomial regression.^{[3]} Its most common methods, initially developed for scatterplot smoothing, are **LOESS** (**locally estimated scatterplot smoothing**) and **LOWESS** (**locally weighted scatterplot smoothing**), both pronounced /ˈloʊɛs/. They are two strongly related non-parametric regression methods that combine multiple regression models in a *k*-nearest-neighbor-based meta-model. In some fields, LOESS is known and commonly referred to as the Savitzky–Golay filter,^{[4]}^{[5]} which was proposed 15 years before LOESS.


LOESS and LOWESS thus build on "classical" methods, such as linear and nonlinear least squares regression. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility of nonlinear regression. It does this by fitting simple models to localized subsets of the data to build up a function that describes the deterministic part of the variation in the data, point by point. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data.

The trade-off for these features is increased computation. Because it is so computationally intensive, LOESS would have been practically impossible to use in the era when least squares regression was being developed. Most other modern methods for process modeling are similar to LOESS in this respect. These methods have been consciously designed to use our current computational ability to the fullest possible advantage to achieve goals not easily achieved by traditional approaches.

A smooth curve through a set of data points obtained with this statistical technique is called a **loess curve**, particularly when each smoothed value is given by a weighted quadratic least squares regression over the span of values of the *y*-axis scattergram criterion variable. When each smoothed value is given by a weighted linear least squares regression over the span, this is known as a **lowess curve**; however, some authorities treat **lowess** and loess as synonyms^{[ citation needed ]}.

In 1964, Savitzky and Golay proposed a method equivalent to LOESS, which is commonly referred to as the Savitzky–Golay filter. William S. Cleveland rediscovered the method in 1979 and gave it a distinct name. The method was further developed by Cleveland and Susan J. Devlin (1988). LOWESS is also known as locally weighted polynomial regression.

At each point in the range of the data set, a low-degree polynomial is fitted to a subset of the data, using only explanatory-variable values near the point whose response is being estimated. The polynomial is fitted using weighted least squares, giving more weight to points near the point whose response is being estimated and less weight to points further away. The value of the regression function for the point is then obtained by evaluating the local polynomial using the explanatory-variable values for that data point. The LOESS fit is complete after regression function values have been computed for each of the data points. Many details of this method, such as the degree of the polynomial model and the weights, are flexible. The range of choices for each part of the method and typical defaults are briefly discussed next.

The **subsets** of data used for each weighted least squares fit in LOESS are determined by a nearest neighbors algorithm. A user-specified input to the procedure called the "bandwidth" or "smoothing parameter" determines how much of the data is used to fit each local polynomial. The smoothing parameter, $\alpha$, is the fraction of the total number *n* of data points that are used in each local fit. The subset of data used in each weighted least squares fit thus comprises the $n\alpha$ points (rounded to the next largest integer) whose explanatory variables' values are closest to the point at which the response is being estimated.^{[6]}
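The subset-selection step can be sketched in Python. This is an illustrative helper, not taken from any particular library; the name `loess_subset` and its arguments are invented for this example.

```python
import math

import numpy as np

def loess_subset(x, x0, alpha):
    """Indices of the ceil(alpha * n) data points whose explanatory
    values are closest to the estimation point x0."""
    x = np.asarray(x, dtype=float)
    q = math.ceil(alpha * len(x))        # subset size, rounded to the next largest integer
    order = np.argsort(np.abs(x - x0))   # points sorted by distance to x0
    return np.sort(order[:q])
```

For example, with *n* = 10 evenly spaced points and a smoothing parameter of 0.3, the local fit at 4.0 uses only its three nearest neighbours.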

Since a polynomial of degree $\lambda$ requires at least $\lambda + 1$ points for a fit, the smoothing parameter $\alpha$ must be between $(\lambda + 1)/n$ and 1, with $\lambda$ denoting the degree of the local polynomial.

$\alpha$ is called the smoothing parameter because it controls the flexibility of the LOESS regression function. Large values of $\alpha$ produce the smoothest functions that wiggle the least in response to fluctuations in the data. The smaller $\alpha$ is, the closer the regression function will conform to the data. Using too small a value of the smoothing parameter is not desirable, however, since the regression function will eventually start to capture the random error in the data.

The local polynomials fit to each subset of the data are almost always of first or second degree; that is, either locally linear (in the straight-line sense) or locally quadratic. Using a zero-degree polynomial turns LOESS into a weighted moving average. Higher-degree polynomials would work in theory, but yield models that are not really in the spirit of LOESS. LOESS is based on the ideas that any function can be well approximated in a small neighborhood by a low-order polynomial and that simple models can be fit to data easily. High-degree polynomials would tend to overfit the data in each subset and are numerically unstable, making accurate computations difficult.
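The zero-degree case can be made concrete with a short sketch: with a degree-0 polynomial, the weighted least squares fit collapses to a weighted average of the responses in the subset. The helper name `loess_degree0` is invented, and tri-cube weights (discussed later in the article) are assumed.

```python
import numpy as np

def loess_degree0(x, y, x0, alpha=0.5):
    """Zero-degree local fit: a tri-cube-weighted moving average of the
    alpha-span nearest neighbours of x0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    q = int(np.ceil(alpha * len(x)))
    idx = np.argsort(np.abs(x - x0))[:q]                  # nearest-neighbour subset
    d = np.abs(x[idx] - x0) / np.abs(x[idx] - x0).max()   # distances scaled to [0, 1]
    w = np.where(d < 1.0, (1.0 - d**3)**3, 0.0)           # tri-cube weights
    return np.sum(w * y[idx]) / np.sum(w)
```

A sanity check on the sketch: a constant series is smoothed back to the same constant, as any moving average should do.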

As mentioned above, the weight function gives the most weight to the data points nearest the point of estimation and the least weight to the data points that are furthest away. The use of the weights is based on the idea that points near each other in the explanatory variable space are more likely to be related to each other in a simple way than points that are further apart. Following this logic, points that are likely to follow the local model best influence the local model parameter estimates the most. Points that are less likely to actually conform to the local model have less influence on the local model parameter estimates.

The traditional weight function used for LOESS is the tri-cube weight function,

$$w(d) = \begin{cases} \left(1 - |d|^3\right)^3 & \text{if } |d| < 1 \\ 0 & \text{if } |d| \geq 1 \end{cases}$$

where *d* is the distance of a given data point from the point on the curve being fitted, scaled to lie in the range from 0 to 1.^{[6]}

However, any other weight function that satisfies the properties listed in Cleveland (1979) could also be used. The weight for a specific point in any localized subset of data is obtained by evaluating the weight function at the distance between that point and the point of estimation, after scaling the distance so that the maximum absolute distance over all of the points in the subset of data is exactly one.

Consider the following generalisation of the linear regression model with a metric $w(x,z)$ on the target space $\mathbb{R}^m$ that depends on two parameters, $x, z \in \mathbb{R}^p$. Assume that the linear hypothesis is based on $p$ input parameters and that, as customary in these cases, we embed the input space $\mathbb{R}^p$ into $\mathbb{R}^{p+1}$ as $x \mapsto \hat{x} := (1, x)$, and consider the following *loss function*

$$\operatorname{RSS}_x(A) = \sum_{i=1}^{N} \left(y_i - A \hat{x}_i\right)^{T} w_i(x) \left(y_i - A \hat{x}_i\right).$$

Here, $A$ is an $m \times (p+1)$ real matrix of coefficients, $w_i(x) := w(x_i, x)$, and the subscript *i* enumerates input and output vectors from a training set. Since $w$ is a metric, it is a symmetric, positive-definite matrix and, as such, there is another symmetric matrix $h$ such that $w = h^2$. The above loss function can be rearranged into a trace by observing that $y^T w y = (hy)^T (hy) = \operatorname{tr}\!\left(h y\, y^T h\right)$. By arranging the vectors $y_i$ and $\hat{x}_i$ into the columns of an $m \times N$ matrix $Y$ and a $(p+1) \times N$ matrix $X$ respectively, the above loss function can then be written as

$$\operatorname{RSS}_x(A) = \operatorname{tr}\!\left( (Y - AX)\, W\, (Y - AX)^T \right)$$

where $W$ is the square diagonal $N \times N$ matrix whose entries are the $w_i(x)$s. Differentiating with respect to $A$ and setting the result equal to 0 one finds the extremal matrix equation

$$A\, X W X^T = Y W X^T.$$

Assuming further that the square matrix $X W X^T$ is non-singular, the loss function attains its minimum at

$$A(x) = Y W X^T \left( X W X^T \right)^{-1}.$$

A typical choice for $w(x,z)$ is the Gaussian weight

$$w(x,z) = \exp\!\left( -\frac{\|x - z\|^2}{2\alpha^2} \right).$$
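The closed-form minimiser can be checked numerically with a direct transcription into numpy. This is a sketch under the stated assumptions (scalar Gaussian weights on the diagonal of W); the function and variable names are invented for the example.

```python
import numpy as np

def local_linear_matrix(X_in, Y_out, x0, alpha=1.0):
    """Evaluate the local prediction A(x) (1, x0) with
    A(x) = Y W X^T (X W X^T)^{-1}.

    X_in: (N, p) input vectors; Y_out: (N, m) output vectors; x0: (p,) query."""
    N, p = X_in.shape
    Xhat = np.vstack([np.ones(N), X_in.T])        # (p+1, N); columns are (1, x_i)
    Y = Y_out.T                                   # (m, N); columns are y_i
    # Gaussian weights w(x_i, x0) = exp(-||x_i - x0||^2 / (2 alpha^2))
    w = np.exp(-np.sum((X_in - x0) ** 2, axis=1) / (2.0 * alpha ** 2))
    W = np.diag(w)                                # (N, N) diagonal weight matrix
    A = Y @ W @ Xhat.T @ np.linalg.inv(Xhat @ W @ Xhat.T)  # (m, p+1)
    return A @ np.concatenate(([1.0], x0))        # prediction at x0
```

Because an exactly linear relationship lies in the span of the local model, predicting from data generated by $y = 3x + 2$ returns $3x_0 + 2$ at any query point, whatever the weights.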

As discussed above, the biggest advantage LOESS has over many other methods is that fitting a model to the sample data does not begin with the specification of a function. Instead the analyst only has to provide a smoothing parameter value and the degree of the local polynomial. In addition, LOESS is very flexible, making it ideal for modeling complex processes for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make LOESS one of the most attractive of the modern regression methods for applications that fit the general framework of least squares regression but which have a complex deterministic structure.

Although it is less obvious than for some of the other methods related to linear least squares regression, LOESS also accrues most of the benefits typically shared by those procedures. The most important of those is the theory for computing uncertainties for prediction and calibration. Many other tests and procedures used for validation of least squares models can also be extended to LOESS models ^{[ citation needed ]}.

LOESS makes less efficient use of data than other least squares methods. It requires fairly large, densely sampled data sets in order to produce good models. This is because LOESS relies on the local data structure when performing the local fitting. Thus, LOESS provides less complex data analysis in exchange for greater experimental costs.^{ [6] }

Another disadvantage of LOESS is that it does not produce a regression function that is easily represented by a mathematical formula. This can make it difficult to transfer the results of an analysis to other people, who would need both the data set and software for LOESS calculations in order to reproduce the regression function. In nonlinear regression, on the other hand, it is only necessary to write down a functional form in order to provide estimates of the unknown parameters and the estimated uncertainty. Depending on the application, this could be either a major or a minor drawback to using LOESS. In particular, the simple form of LOESS cannot be used for mechanistic modelling, where fitted parameters specify particular physical properties of a system.

Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can then be phrased as a non-causal finite impulse response filter). LOESS is also prone to the effects of outliers in the data set, like other least squares methods. There is an iterative, robust version of LOESS [Cleveland (1979)] that can be used to reduce LOESS' sensitivity to outliers, but too many extreme outliers can still overcome even the robust method.
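The robustness step of Cleveland (1979) down-weights points with large residuals between fitting iterations. The bisquare form and the "six times the median absolute residual" scaling follow the paper, but the helper below is an illustrative sketch, not library code.

```python
import numpy as np

def robustness_weights(residuals):
    """Bisquare robustness weights delta_i = B(e_i / (6 s)), s = median |e|,
    where B(u) = (1 - u^2)^2 for |u| < 1 and 0 otherwise."""
    e = np.abs(np.asarray(residuals, dtype=float))
    s = np.median(e)                  # median absolute residual
    u = e / (6.0 * s)
    return np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)
```

These weights multiply the local least squares weights in the next fitting pass, and a few such passes usually suffice; a point whose residual exceeds six times the median absolute residual is dropped entirely. (The `lowess` function in statsmodels exposes the number of robustifying iterations through its `it` argument.)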

The method of **least squares** is a standard approach in regression analysis to approximate the solution of overdetermined systems by minimizing the sum of the squares of the residuals made in the results of each individual equation.

In statistics, **Deming regression**, named after W. Edwards Deming, is an errors-in-variables model which tries to find the line of best fit for a two-dimensional dataset. It differs from the simple linear regression in that it accounts for errors in observations on both the *x*- and the *y*- axis. It is a special case of total least squares, which allows for any number of predictors and a more complicated error structure.

**Curve fitting** is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data.

In statistical modeling, **regression analysis** is a set of statistical processes for estimating the relationships between a dependent variable and one or more independent variables. The most common form of regression analysis is linear regression, in which one finds the line that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line that minimizes the sum of squared differences between the true data and that line. For specific mathematical reasons, this allows the researcher to estimate the conditional expectation of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters or estimate the conditional expectation across a broader collection of non-linear models.

In statistics, **nonlinear regression** is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations.

In statistics, **ordinary least squares** (**OLS**) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the given dataset and those predicted by the linear function of the independent variable.

In statistics, the number of **degrees of freedom** is the number of values in the final calculation of a statistic that are free to vary.

**Weighted least squares** (**WLS**), also known as **weighted linear regression**, is a generalization of ordinary least squares and linear regression in which knowledge of the variance of observations is incorporated into the regression. WLS is also a specialization of generalized least squares.

In statistics, a **generalized additive model (GAM)** is a generalized linear model in which the linear response variable depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.

In statistics, the **projection matrix**, sometimes also called the **influence matrix** or **hat matrix**, maps the vector of response values to the vector of fitted values. It describes the influence each response value has on each fitted value. The diagonal elements of the projection matrix are the leverages, which describe the influence each response value has on the fitted value for that same observation.

**Non-linear least squares** is the form of least squares analysis used to fit a set of *m* observations with a model that is non-linear in *n* unknown parameters (*m* ≥ *n*). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) the probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, (v) Box-Cox transformed regressors.

A **kernel smoother** is a statistical technique to estimate a real valued function as the weighted average of neighboring observed data. The weight is defined by the kernel, such that closer points are given higher weights. The estimated function is smooth, and the level of smoothness is set by a single parameter.

In statistics, **multivariate adaptive regression splines** (**MARS**) is a form of regression analysis introduced by Jerome H. Friedman in 1991. It is a non-parametric regression technique and can be seen as an extension of linear models that automatically models nonlinearities and interactions between variables.

**Smoothing splines** are function estimates, $\hat{f}(x)$, obtained from a set of noisy observations $y_i$ of the target $f(x_i)$, in order to balance a measure of goodness of fit of $\hat{f}(x_i)$ to $y_i$ with a derivative-based measure of the smoothness of $\hat{f}(x)$. They provide a means for smoothing noisy data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including the case where $f(x)$ is a vector quantity.

In statistics, **polynomial regression** is a form of regression analysis in which the relationship between the independent variable *x* and the dependent variable *y* is modelled as an *n*th degree polynomial in *x*. Polynomial regression fits a nonlinear relationship between the value of *x* and the corresponding conditional mean of *y*, denoted E(*y* |*x*). Although *polynomial regression* fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(*y* | *x*) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.

In statistics, **projection pursuit regression (PPR)** is a statistical model developed by Jerome H. Friedman and Werner Stuetzle which is an extension of additive models. This model adapts the additive models in that it first projects the data matrix of explanatory variables in the optimal direction before applying smoothing functions to these explanatory variables.

**Linear least squares** (**LLS**) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.

In statistics, several **scatterplot smoothing** methods are available to fit a function through the points of a scatterplot to best represent the relationship between the variables.

In statistics, **linear regression** is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called *simple linear regression*; for more than one, the process is called **multiple linear regression**. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

In statistics, the class of **vector generalized linear models** (**VGLMs**) was proposed to enlarge the scope of models catered for by generalized linear models (**GLMs**). In particular, VGLMs allow for response variables outside the classical exponential family and for more than one parameter. Each parameter can be transformed by a *link function*. The VGLM framework is also large enough to naturally accommodate multiple responses; these are several independent responses each coming from a particular statistical distribution with possibly different parameter values.

- ↑ Fox & Weisberg 2018, Appendix.
- ↑ Harrell 2015, p. 29.
- ↑ Garimella 2017.
- ↑ "Savitzky–Golay filtering – MATLAB sgolayfilt". *Mathworks.com*.
- ↑ "scipy.signal.savgol_filter — SciPy v0.16.1 Reference Guide". *Docs.scipy.org*.
- ↑ NIST, "LOESS (aka LOWESS)", section 4.1.4.4, *NIST/SEMATECH e-Handbook of Statistical Methods* (accessed 14 April 2017).

- Cleveland, William S. (1979). "Robust Locally Weighted Regression and Smoothing Scatterplots". *Journal of the American Statistical Association*. **74** (368): 829–836. doi:10.2307/2286407. JSTOR 2286407. MR 0556476.
- Cleveland, William S. (1981). "LOWESS: A program for smoothing scatterplots by robust locally weighted regression". *The American Statistician*. **35** (1): 54. doi:10.2307/2683591. JSTOR 2683591.
- Cleveland, William S.; Devlin, Susan J. (1988). "Locally-Weighted Regression: An Approach to Regression Analysis by Local Fitting". *Journal of the American Statistical Association*. **83** (403): 596–610. doi:10.2307/2289282. JSTOR 2289282.
- Fox, John; Weisberg, Sanford (2018). "Appendix: Nonparametric Regression in R" (PDF). *An R Companion to Applied Regression* (3rd ed.). SAGE. ISBN 978-1-5443-3645-9.
- Friedman, Jerome H. (1984). "A Variable Span Smoother" (PDF). LCS Technical Report 5, SLAC PUB-3466. Laboratory for Computational Statistics, Stanford University.
- Garimella, Rao Veerabhadra (22 June 2017). "A Simple Introduction to Moving Least Squares and Local Regression Estimation". doi:10.2172/1367799. OSTI 1367799.
- Harrell, Frank E., Jr. (2015). *Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis*. Springer. ISBN 978-3-319-19425-7.


- Local Regression and Election Modeling
- Smoothing by Local Regression: Principles and Methods (PostScript Document)
- NIST Engineering Statistics Handbook Section on LOESS
- Local Fitting Software
- Scatter Plot Smoothing
- R: Local Polynomial Regression Fitting The Loess function in R
- R: Scatter Plot Smoothing The Lowess function in R
- The supsmu function (Friedman's SuperSmoother) in R
- Quantile LOESS – A method to perform local regression on a **quantile** moving window (with R code)
- Nate Silver, How Opinion on Same-Sex Marriage Is Changing, and What It Means – sample of LOESS versus linear regression

- Fortran implementation
- C implementation (from the R project)
- Lowess implementation in Cython by Carl Vogel
- Python implementation (in Statsmodels)
- LOESS Smoothing in Excel
- LOESS implementation in pure Julia
- JavaScript implementation
- Java implementation

This article incorporates public domain material from the National Institute of Standards and Technology website https://www.nist.gov .

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
