**Segmented regression**, also known as **piecewise regression** or **broken-stick regression**, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval. Segmented regression analysis can also be performed on multivariate data by partitioning the various independent variables. Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions. The boundaries between the segments are *breakpoints*.


**Segmented linear regression** is segmented regression whereby the relations in the intervals are obtained by linear regression.

Segmented linear regression with two segments separated by a *breakpoint* can be useful to quantify an abrupt change in the response function (Yr) to a varying influential factor (**x**). The breakpoint can be interpreted as a *critical*, *safe*, or *threshold* value beyond or below which (un)desired effects occur. The breakpoint can be important in decision making.^{[1]}

The figures illustrate some of the results and regression types obtainable.

A segmented regression analysis is based on a set of (**y**, **x**) data, in which **y** is the dependent variable and **x** the independent variable.

The least squares method is applied to each segment separately, so that the two regression lines fit the data as closely as possible while minimizing the *sum of squares of the differences* (SSD) between observed (**y**) and calculated (Yr) values of the dependent variable. This results in the following two equations:

- Yr = A_{1}·**x** + K_{1} for **x** < BP (breakpoint)
- Yr = A_{2}·**x** + K_{2} for **x** > BP (breakpoint)

where:

- Yr is the expected (predicted) value of **y** for a given value of **x**;
- A_{1} and A_{2} are the regression coefficients (the slopes of the line segments);
- K_{1} and K_{2} are the *regression constants* (the intercepts at the **y**-axis).

The data may show many types of trends;^{[2]} see the figures.
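As an illustration, the per-segment least-squares fit described above can be sketched in Python at a known breakpoint. This is a minimal sketch with illustrative names and synthetic data, not the ILRI implementation:

```python
import numpy as np

def fit_segments(x, y, bp):
    """Ordinary least squares applied separately to each segment:
    Yr = A1*x + K1 for x < bp, and Yr = A2*x + K2 for x >= bp.
    Returns (A1, K1, A2, K2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    left = x < bp
    A1, K1 = np.polyfit(x[left], y[left], 1)    # slope, intercept of left segment
    A2, K2 = np.polyfit(x[~left], y[~left], 1)  # slope, intercept of right segment
    return A1, K1, A2, K2

# synthetic data with a breakpoint at x = 5: flat, then declining
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.where(x < 5, 2.0, 2.0 - 0.3 * (x - 5)) + rng.normal(0, 0.05, x.size)
A1, K1, A2, K2 = fit_segments(x, y, bp=5.0)
```

Here the left segment recovers a near-zero slope (A_{1} ≈ 0) and the right segment a clearly negative one (A_{2} ≈ −0.3), mirroring the horizontal-then-declining pattern of the mustard example below.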

The method also yields two coefficients of determination (R²), one per segment:

- R_{1}^{2} = 1 − Σ (**y** − Yr)² / Σ (**y** − Y_{a1})² for **x** < BP (breakpoint)
- R_{2}^{2} = 1 − Σ (**y** − Yr)² / Σ (**y** − Y_{a2})² for **x** > BP (breakpoint)

where:

- Σ (**y** − Yr)² is the minimized SSD per segment, and
- Y_{a1} and Y_{a2} are the average values of **y** in the respective segments.

To determine the most suitable trend, statistical tests must be performed to ensure that the trend is reliable (significant).

When no significant breakpoint can be detected, one must fall back on a regression without breakpoint.

For the blue figure at the right, which gives the relation between mustard yield (Yr = Ym, t/ha) and soil salinity (**x** = Ss, expressed as the electrical conductivity EC of the soil solution in dS/m), it was found that:^{[3]}

BP = 4.93, A_{1} = 0, K_{1} = 1.74, A_{2} = −0.129, K_{2} = 2.38, R_{1}^{2} = 0.0035 (insignificant), R_{2}^{2} = 0.395 (significant) and:

- Ym = 1.74 t/ha for Ss < 4.93 (breakpoint)
- Ym = −0.129 Ss + 2.38 t/ha for Ss > 4.93 (breakpoint)

indicating that soil salinities < 4.93 dS/m are safe, while soil salinities > 4.93 dS/m reduce the yield by 0.129 t/ha per unit increase of soil salinity.

The figure also shows confidence intervals and uncertainty as elaborated hereunder.
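Plugging the fitted values into the two segment equations gives a simple prediction rule. A small sketch (the function name is illustrative; the numbers are the fitted values quoted above):

```python
def mustard_yield(ss, bp=4.93, a1=0.0, k1=1.74, a2=-0.129, k2=2.38):
    """Predicted mustard yield Ym (t/ha) as a function of soil
    salinity Ss (dS/m), using the segmented relation from the text."""
    return a1 * ss + k1 if ss < bp else a2 * ss + k2

# below the breakpoint the yield is constant; above it, it declines
safe_yield = mustard_yield(3.0)     # 1.74 t/ha
reduced_yield = mustard_yield(8.0)  # 2.38 - 0.129*8 = 1.348 t/ha
```

Note that the two segments nearly join at the breakpoint: at Ss = 4.93 the right-hand equation gives 2.38 − 0.129 × 4.93 ≈ 1.744 t/ha, close to the constant left-hand value of 1.74 t/ha.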

The following *statistical tests* are used to determine the type of trend:

- significance of the breakpoint (BP), by expressing BP as a function of the *regression coefficients* A_{1} and A_{2} and the means Y_{1} and Y_{2} of the **y**-data and the means X_{1} and X_{2} of the **x**-data (left and right of BP), using the laws of propagation of errors in additions and multiplications to compute the standard error (SE) of BP, and applying Student's t-test;
- significance of A_{1} and A_{2}, applying Student's t-distribution and the *standard error* (SE) of A_{1} and A_{2};
- significance of the difference of A_{1} and A_{2}, applying Student's t-distribution and the SE of their difference;
- significance of the difference of Y_{1} and Y_{2}, applying Student's t-distribution and the SE of their difference.
- A more formal statistical approach to test for the existence of a breakpoint is via the pseudo score test, which does not require estimation of the segmented line.^{[4]}

In addition, use is made of the correlation coefficient of all data (Ra), the coefficient of determination or coefficient of explanation, confidence intervals of the regression functions, and analysis of variance (ANOVA).^{[5]}
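One of the listed tests, the significance of the difference between the two segment slopes, can be sketched as follows. This is an illustrative implementation, not the exact ILRI procedure: the pooled degrees of freedom and the use of scipy's per-segment slope standard errors are my own assumptions.

```python
import numpy as np
from scipy import stats

def slopes_differ(x1, y1, x2, y2, alpha=0.05):
    """Student's t-test for the difference of the two segment slopes
    A1 and A2, using the standard error of their difference."""
    r1 = stats.linregress(x1, y1)
    r2 = stats.linregress(x2, y2)
    se_diff = np.hypot(r1.stderr, r2.stderr)  # SE of A1 - A2
    t = (r1.slope - r2.slope) / se_diff
    df = len(x1) + len(x2) - 4                # two parameters fitted per segment
    p = 2 * stats.t.sf(abs(t), df)            # two-sided p-value
    return p < alpha, p

# synthetic segments: flat on the left, declining on the right
rng = np.random.default_rng(2)
x1 = np.linspace(0, 5, 25)
x2 = np.linspace(5, 10, 25)
y1 = 2.0 + rng.normal(0, 0.05, x1.size)
y2 = 2.0 - 0.3 * (x2 - 5) + rng.normal(0, 0.05, x2.size)
differ, p = slopes_differ(x1, y1, x2, y2)
```

With a clear kink as in this synthetic example, the slope difference is highly significant, which supports retaining the segmented model over a single regression line.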

The coefficient of determination for all data (Cd), which is to be maximized under the conditions set by the significance tests, is found from:

- Cd = 1 − Σ (**y** − Yr)² / Σ (**y** − Y_{a})²

where Yr is the expected (predicted) value of **y** according to the former regression equations and Y_{a} is the average of all **y** values.

The Cd coefficient ranges from 0 (no explanation at all) to 1 (full explanation, perfect match).

In a pure, unsegmented, linear regression, the values of Cd and Ra^{2} are equal. In a segmented regression, Cd needs to be significantly larger than Ra^{2} to justify the segmentation.

The optimal value of the breakpoint may be found such that the Cd coefficient is maximum.
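The search for the Cd-maximizing breakpoint can be sketched as a simple grid search over candidate breakpoints. Function names, the candidate grid (midpoints of consecutive **x** values), and the synthetic data are illustrative assumptions:

```python
import numpy as np

def cd(x, y, bp):
    """Coefficient of determination for all data,
    Cd = 1 - SSD / sum((y - Ya)^2), for the two-segment
    least-squares fit with breakpoint bp."""
    left = x < bp
    yr = np.empty_like(y)
    for mask in (left, ~left):
        a, k = np.polyfit(x[mask], y[mask], 1)
        yr[mask] = a * x[mask] + k
    return 1.0 - np.sum((y - yr) ** 2) / np.sum((y - y.mean()) ** 2)

def best_breakpoint(x, y, n_min=3):
    """Grid search over midpoints of consecutive x values,
    keeping at least n_min observations in each segment."""
    xs = np.sort(np.unique(x))
    mids = (xs[:-1] + xs[1:]) / 2
    candidates = mids[n_min - 1 : len(xs) - n_min]
    return max(candidates, key=lambda b: cd(x, y, b))

# synthetic data with a kink at x = 5
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = np.where(x < 5, 2.0, 2.0 - 0.3 * (x - 5)) + rng.normal(0, 0.05, x.size)
bp = best_breakpoint(x, y)
```

In line with the text, one would additionally check that Cd at the chosen breakpoint clearly exceeds Ra² of the unsegmented regression before accepting the segmentation.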

Segmented regression is often used to detect the range over which an explanatory variable (X) has no effect on the dependent variable (Y), while beyond that range there is a clear response, be it positive or negative. The no-effect range may be found at the initial part of the X domain or, conversely, at its final part. For the "no effect" analysis, the least squares method for segmented regression^{[6]} may not be the most appropriate technique, because the aim is to find the longest stretch over which the Y-X relation can be considered to have zero slope, while beyond that range the slope is significantly different from zero; the best value of that slope is not material. The method to find the no-effect range is progressive partial regression:^{[7]} the range is extended in small steps until the regression coefficient becomes significantly different from zero.
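A minimal sketch of progressive partial regression follows. The implementation and names are my own (scipy's slope t-test stands in for the ILRI significance test), and the data are noiseless synthetic values:

```python
import numpy as np
from scipy.stats import linregress

def no_effect_range(x, y, alpha=0.05, n_start=5):
    """Progressive partial regression: starting from the lowest x
    values, extend the window one observation at a time until the
    fitted slope becomes significantly different from zero
    (two-sided p < alpha). Returns the largest x still accepted
    as inside the no-effect range."""
    order = np.argsort(x)
    xs, ys = np.asarray(x, float)[order], np.asarray(y, float)[order]
    last_safe = xs[n_start - 1]
    for n in range(n_start, len(xs) + 1):
        res = linregress(xs[:n], ys[:n])
        if res.pvalue < alpha:
            return last_safe
        last_safe = xs[n - 1]
    return last_safe

# flat response up to x = 5, declining beyond it
x = np.arange(0, 10.01, 0.25)
y = np.where(x < 5, 2.0, 2.0 - 0.3 * (x - 5))
safe_up_to = no_effect_range(x, y)
```

Because the window keeps growing from the low end, the estimate marks the end of the longest initial stretch with a statistically flat slope; as the text notes, this generally differs from the least-squares breakpoint for the same data.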

In the next figure the breakpoint is found at X = 7.9, while for the same data (see the blue figure above for mustard yield) the least squares method yields a breakpoint at only X = 4.9. The latter value is lower, but the fit of the data beyond the breakpoint is better. Hence, which method should be employed depends on the purpose of the analysis.

**See also**

- Least squares
- Pearson correlation coefficient
- Logistic regression
- Spearman's rank correlation coefficient
- Piecewise linear function
- Regression analysis
- Nonlinear regression
- Coefficient of determination
- Ordinary least squares
- Glossary of probability and statistics
- Simple linear regression
- Contrast (statistics)
- Omnibus test
- Groundwater model
- Scheffé's method
- Polynomial regression
- CumFreq
- SegReg
- Linear regression
- Salt tolerance of crops

**References**

1. *Frequency and Regression Analysis*. Chapter 6 in: H.P. Ritzema (ed., 1994), *Drainage Principles and Applications*, Publication 16, pp. 175-224, International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands. ISBN 90-70754-33-9.
2. *Drainage research in farmers' fields: analysis of data*. Part of the project "Liquid Gold" of the International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands.
3. R.J. Oosterbaan, D.P. Sharma, K.N. Singh and K.V.G.K. Rao, 1990, *Crop production and soil salinity: evaluation of field data from India by segmented linear regression*. In: Proceedings of the Symposium on Land Drainage for Salinity Control in Arid and Semi-Arid Regions, February 25 to March 2, 1990, Cairo, Egypt, Vol. 3, Session V, pp. 373-383.
4. Muggeo, V.M.R. (2016). "Testing with a nuisance parameter present only under the alternative: a score-based approach with application to segmented modelling". *Journal of Statistical Computation and Simulation*. **86** (15): 3059-3067. doi:10.1080/00949655.2016.1149855.
5. *Statistical significance of segmented linear regression with break-point using variance analysis and F-tests*.
6. *Segmented regression analysis*, International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands.
7. *Partial regression analysis*, International Institute for Land Reclamation and Improvement (ILRI), Wageningen, The Netherlands.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
