Proportional hazards model

Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. For example, taking a drug may halve one's hazard rate for a stroke occurring, or, changing the material from which a manufactured component is constructed may double its hazard rate for failure. Other types of survival models such as accelerated failure time models do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated).

Background

Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted $\lambda_0(t)$, describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding.

The proportional hazards condition [1] states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time $t$, while the baseline hazard may vary. Note, however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of $\lambda_0(t)$. The covariate is not restricted to binary predictors; in the case of a continuous covariate $x$, it is typically assumed that the hazard responds exponentially; each unit increase in $x$ results in proportional scaling of the hazard.

The Cox model

Introduction

Sir David Cox observed that if the proportional hazards assumption holds (or, is assumed to hold) then it is possible to estimate the effect parameter(s), denoted $\beta$ below, without any consideration of the full hazard function. This approach to survival data is called application of the Cox proportional hazards model, [2] sometimes abbreviated to Cox model or to proportional hazards model. [3] However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky. [4] [5]

Let $X_i = (X_{i1}, \ldots, X_{ip})$ be the realized values of the p covariates for subject i. The hazard function for the Cox proportional hazards model has the form

$$\lambda(t \mid X_i) = \lambda_0(t)\exp(\beta_1 X_{i1} + \cdots + \beta_p X_{ip}) = \lambda_0(t)\exp(X_i \cdot \beta).$$

This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi. Note that between subjects, the baseline hazard $\lambda_0(t)$ is identical (has no dependency on i). The only difference between subjects' hazards comes from the scaling factor $\exp(X_i \cdot \beta)$.

Why it is called "proportional"

To start, suppose we only have a single covariate, $x$, and therefore a single coefficient, $\beta_1$. Consider the effect of increasing $x$ by 1:

$$\lambda(t \mid x + 1) = \lambda_0(t)\exp(\beta_1 (x + 1)) = \lambda_0(t)\exp(\beta_1 x)\exp(\beta_1) = \lambda(t \mid x)\exp(\beta_1).$$

We can see that increasing a covariate by 1 scales the original hazard by the constant $\exp(\beta_1)$. Rearranging things slightly, we see that:

$$\frac{\lambda(t \mid x + 1)}{\lambda(t \mid x)} = \exp(\beta_1).$$

The right-hand-side is constant over time (no term has a $t$ in it). This relationship, a constant ratio of hazards at all times, is called a proportional relationship.

More generally, consider two subjects, i and j, with covariates $X_i$ and $X_j$ respectively. Consider the ratio of their hazards:

$$\frac{\lambda(t \mid X_i)}{\lambda(t \mid X_j)} = \frac{\lambda_0(t)\exp(X_i \cdot \beta)}{\lambda_0(t)\exp(X_j \cdot \beta)} = \exp\big((X_i - X_j) \cdot \beta\big).$$

The right-hand-side isn't dependent on time, as the only time-dependent factor, $\lambda_0(t)$, was cancelled out. Thus the ratio of hazards of two subjects is a constant, i.e. the hazards are proportional.

Absence of an intercept term

Often there is an intercept term (also called a constant term or bias term) used in regression models. The Cox model lacks one because the baseline hazard, $\lambda_0(t)$, takes its place. Let's see what would happen if we did include an intercept term anyway, denoted $\beta_0$:

$$\lambda(t \mid X_i) = \lambda_0(t)\exp(\beta_0 + X_i \cdot \beta) = \lambda_0(t)\exp(\beta_0)\exp(X_i \cdot \beta) = \lambda_0^*(t)\exp(X_i \cdot \beta),$$

where we've redefined $\lambda_0^*(t) = \lambda_0(t)\exp(\beta_0)$ to be a new baseline hazard. Thus, the baseline hazard incorporates all parts of the hazard that are not dependent on the subjects' covariates, which includes any intercept term (which is constant for all subjects, by definition).

Likelihood for unique times

The Cox partial likelihood, shown below, is obtained by using Breslow's estimate of the baseline hazard function, plugging it into the full likelihood and then observing that the result is a product of two factors. The first factor is the partial likelihood shown below, in which the baseline hazard has "canceled out". The second factor is free of the regression coefficients and depends on the data only through the censoring pattern. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios.

The likelihood of the event to be observed occurring for subject i at time Yi can be written as:

$$L_i(\beta) = \frac{\lambda(Y_i \mid X_i)}{\sum_{j: Y_j \ge Y_i} \lambda(Y_i \mid X_j)} = \frac{\theta_i}{\sum_{j: Y_j \ge Y_i} \theta_j},$$

where $\theta_j = \exp(X_j \cdot \beta)$ and the summation is over the set of subjects j where the event has not occurred before time Yi (including subject i itself). Obviously 0 < Li(β) ≤ 1. This is a partial likelihood: the effect of the covariates can be estimated without the need to model the change of the hazard over time.

Treating the subjects as if they were statistically independent of each other, the joint probability of all realized events [6] is the following partial likelihood, where the occurrence of the event is indicated by $C_i = 1$:

$$L(\beta) = \prod_{i: C_i = 1} L_i(\beta) = \prod_{i: C_i = 1} \frac{\theta_i}{\sum_{j: Y_j \ge Y_i} \theta_j}.$$

The corresponding log partial likelihood is

$$\ell(\beta) = \sum_{i: C_i = 1} \left( X_i \cdot \beta - \log \sum_{j: Y_j \ge Y_i} \theta_j \right).$$

This function can be maximized over β to produce maximum partial likelihood estimates of the model parameters.

The partial score function is

$$\ell'(\beta) = \sum_{i: C_i = 1} \left( X_i - \frac{\sum_{j: Y_j \ge Y_i} \theta_j X_j}{\sum_{j: Y_j \ge Y_i} \theta_j} \right),$$

and the Hessian matrix of the partial log likelihood is

$$\ell''(\beta) = -\sum_{i: C_i = 1} \left( \frac{\sum_{j: Y_j \ge Y_i} \theta_j X_j X_j^\top}{\sum_{j: Y_j \ge Y_i} \theta_j} - \frac{\left(\sum_{j: Y_j \ge Y_i} \theta_j X_j\right) \left(\sum_{j: Y_j \ge Y_i} \theta_j X_j^\top\right)}{\left(\sum_{j: Y_j \ge Y_i} \theta_j\right)^2} \right).$$

Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the negative Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.
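
As an illustration of the machinery above, the following is a minimal sketch of Newton-Raphson maximization of the partial likelihood using only NumPy; it assumes unique event times, and the function and variable names are illustrative.

```python
# Minimal sketch: Newton-Raphson for the Cox partial log likelihood (unique times).
# X: (n, p) covariate matrix; Y: observed times; C: boolean event indicators.
import numpy as np

def cox_newton_raphson(X, Y, C, n_iter=25):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        theta = np.exp(X @ beta)                   # theta_j = exp(X_j . beta)
        score = np.zeros(p)
        hess = np.zeros((p, p))
        for i in range(n):
            if not C[i]:
                continue                           # censored: no contribution
            at_risk = Y >= Y[i]                    # risk set at time Y_i
            t, x = theta[at_risk], X[at_risk]
            s0 = t.sum()                           # sum of theta_j
            s1 = x.T @ t                           # sum of theta_j X_j
            s2 = x.T @ (t[:, None] * x)            # sum of theta_j X_j X_j^T
            score += X[i] - s1 / s0                # term of the score function
            hess -= s2 / s0 - np.outer(s1, s1) / s0**2  # term of the Hessian
        beta = beta - np.linalg.solve(hess, score)  # Newton-Raphson step
    cov = np.linalg.inv(-hess)                      # approximate var-cov matrix
    return beta, cov
```

Square roots of the diagonal of the returned covariance matrix give the approximate standard errors mentioned above.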

Likelihood when there exist tied times

Several approaches have been proposed to handle situations in which there are ties in the time data. Breslow's method describes the approach in which the procedure described above is used unmodified, even when ties are present. An alternative approach that is considered to give better results is Efron's method. [7] Let $t_j$ denote the unique event times, let $H_j$ denote the set of indices i such that $Y_i = t_j$ and $C_i = 1$, and let $m_j = |H_j|$. Efron's approach maximizes the following partial likelihood:

$$L(\beta) = \prod_j \frac{\prod_{i \in H_j} \theta_i}{\prod_{\ell = 0}^{m_j - 1} \left( \sum_{i: Y_i \ge t_j} \theta_i - \frac{\ell}{m_j} \sum_{i \in H_j} \theta_i \right)}.$$

The corresponding log partial likelihood is

$$\ell(\beta) = \sum_j \left( \sum_{i \in H_j} X_i \cdot \beta - \sum_{\ell = 0}^{m_j - 1} \log \phi_{j,\ell} \right),$$

the score function is

$$\ell'(\beta) = \sum_j \left( \sum_{i \in H_j} X_i - \sum_{\ell = 0}^{m_j - 1} \frac{Z_{j,\ell}}{\phi_{j,\ell}} \right),$$

and the Hessian matrix is

$$\ell''(\beta) = -\sum_j \sum_{\ell = 0}^{m_j - 1} \left( \frac{Z'_{j,\ell}}{\phi_{j,\ell}} - \frac{Z_{j,\ell} Z_{j,\ell}^\top}{\phi_{j,\ell}^2} \right),$$

where

$$\phi_{j,\ell} = \sum_{i: Y_i \ge t_j} \theta_i - \frac{\ell}{m_j} \sum_{i \in H_j} \theta_i, \qquad Z_{j,\ell} = \sum_{i: Y_i \ge t_j} \theta_i X_i - \frac{\ell}{m_j} \sum_{i \in H_j} \theta_i X_i, \qquad Z'_{j,\ell} = \sum_{i: Y_i \ge t_j} \theta_i X_i X_i^\top - \frac{\ell}{m_j} \sum_{i \in H_j} \theta_i X_i X_i^\top.$$

Note that when Hj is empty (all observations with time tj are censored), the summands in these expressions are treated as zero.
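
As a hedged sketch of the formulas above (the function name and layout are illustrative), Efron's log partial likelihood can be evaluated directly in NumPy:

```python
# Sketch: Efron's log partial likelihood in the presence of tied event times.
# X: (n, p) covariates; Y: observed times; C: boolean event indicators.
import numpy as np

def efron_log_partial_likelihood(beta, X, Y, C):
    theta = np.exp(X @ beta)               # theta_i = exp(X_i . beta)
    ll = 0.0
    for t_j in np.unique(Y[C]):            # unique event times t_j
        H = (Y == t_j) & C                 # H_j: indices of events tied at t_j
        m = H.sum()                        # m_j = |H_j|
        risk_sum = theta[Y >= t_j].sum()   # sum of theta over the risk set
        tie_sum = theta[H].sum()           # sum of theta over H_j
        ll += (X[H] @ beta).sum()          # sum of X_i . beta over H_j
        for l in range(m):                 # l = 0, ..., m_j - 1
            ll -= np.log(risk_sum - (l / m) * tie_sum)
    return ll
```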

Examples

Below are some worked examples of the Cox model in practice.

A single binary covariate

Suppose the endpoint we are interested in is patient survival during a 5-year observation period after a surgery. Patients can die within the 5-year period, and we record when they died, or patients can live past 5 years, and we only record that they lived past 5 years. The surgery was performed at one of two hospitals, A or B, and we would like to know if the hospital location is associated with 5-year survival. Specifically, we would like to know the relative increase (or decrease) in hazard from a surgery performed at hospital A compared to hospital B. Provided is some (fake) data, where each row represents a patient: T is how long the patient was observed for before death or 5 years (measured in months), and C denotes if the patient died in the 5-year period. We have encoded the hospital as a binary variable denoted X: 1 if from hospital A, 0 from hospital B.

hospital   X   T (months)   C (died)
B          0   60           False
B          0   32           True
B          0   60           False
B          0   60           False
B          0   60           False
A          1    4           True
A          1   18           True
A          1   60           False
A          1    9           True
A          1   31           True
A          1   53           True
A          1   17           True

Our single-covariate Cox proportional hazards model looks like the following, with $\beta$ representing the hospital's effect, and i indexing each patient:

$$\lambda(t \mid X_i) = \lambda_0(t)\exp(\beta X_i).$$

Using statistical software, we can estimate $\beta$ to be 2.12. The hazard ratio is the exponential of this value, $\exp(2.12) \approx 8.32$. To see why, consider the ratio of hazards, specifically:

$$\frac{\lambda(t \mid X = 1)}{\lambda(t \mid X = 0)} = \frac{\lambda_0(t)\exp(\beta \cdot 1)}{\lambda_0(t)\exp(\beta \cdot 0)} = \exp(\beta).$$

Thus, the hazard ratio of hospital A to hospital B is $\exp(2.12) \approx 8.32$. Putting aside statistical significance for a moment, we can make a statement saying that patients in hospital A are associated with an 8.3x higher risk of death occurring in any short period of time compared to hospital B.
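
As a hedged sketch, the fit above can be reproduced with the open-source lifelines library (the exact estimate can depend on the tie-handling method the software uses):

```python
# Sketch: fitting the single-binary-covariate Cox model to the (fake) hospital data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "X": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],           # 1 = hospital A, 0 = hospital B
    "T": [60, 32, 60, 60, 60, 4, 18, 60, 9, 31, 53, 17],  # months observed
    "C": [0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1],            # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="C")  # maximizes the partial likelihood
cph.print_summary()                           # coef for X ~ 2.12; exp(coef) ~ 8.3
```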

There are important caveats to mention about the interpretation:

  1. An 8.3x higher risk of death does not mean that 8.3x more patients will die in hospital A: survival analysis examines how quickly events occur, not simply whether they occur.
  2. More specifically, "risk of death" is a measure of a rate. A rate has units, like meters per second. However, a relative rate does not: a bicycle can go two times faster than another bicycle (the reference bicycle), without specifying any units. Likewise, the risk of death (rate of death) in hospital A is 8.3 times higher (faster) than the risk of death in hospital B (the reference group).
  3. The inverse quantity, $\exp(-2.12) \approx 0.12$, is the hazard ratio of hospital B relative to hospital A.
  4. We haven't made any inferences about probabilities of survival between the hospitals. This is because we would need an estimate of the baseline hazard rate, $\lambda_0(t)$, as well as our $\beta$ estimate. However, standard estimation of the Cox proportional hazards model does not directly estimate the baseline hazard rate.
  5. Because we have ignored the only time-varying component of the model, the baseline hazard rate, our estimate of $\beta$ is timescale-invariant. For example, if we had measured time in years instead of months, we would get the same estimate.
  6. It is tempting to say that the hospital caused the difference in hazards between the two groups, but since our study is not causal (that is, we do not know how the data was generated), we stick with terminology like "associated".

A single continuous covariate

To demonstrate a less traditional use case of survival analysis, the next example will be an economics question: what is the relationship between a company's price-to-earnings ratio (P/E) on their first IPO anniversary and their future survival? More specifically, if we consider a company's "birth event" to be their first IPO anniversary, and any bankruptcy, sale, going private, etc. as a "death" event for the company, we'd like to know the influence of the companies' P/E ratio at their "birth" (first IPO anniversary) on their survival.

Provided is a (fake) dataset with survival data from 12 companies: T represents the number of days between first IPO anniversary and death (or an end date of 2022-01-01, if the company did not die). C represents if the company died before 2022-01-01 or not. P/E represents the company's price-to-earnings ratio at its first IPO anniversary.

Co.   1-year IPO date   Death date*   C       T (days)   P/E
0     2000-11-05        2011-01-22    True    3730        9.7
1     2000-12-01        2003-03-30    True     849       12.0
2     2011-01-05        2012-03-30    True     450        3.0
3     2010-05-29        2011-02-22    True     269        5.3
4     2005-06-23        2022-01-01    False   6036       10.8
5     2000-06-10        2002-07-24    True     774        6.3
6     2011-07-11        2014-05-01    True    1025       11.6
7     2007-09-27        2022-01-01    False   5210       10.3
8     2006-07-30        2010-06-03    True    1404        8.0
9     2000-07-13        2001-07-19    True     371        4.0
10    2013-06-10        2018-10-10    True    1948        5.9
11    2011-07-16        2014-08-15    True    1126        8.3

* For companies that had not died by the study end (C = False), the date shown is the end of observation, 2022-01-01.

Unlike the previous example where there was a binary variable, this dataset has a continuous variable, P/E; however, the model looks similar:

$$\lambda(t \mid x_i) = \lambda_0(t)\exp(\beta x_i),$$

where $x_i$ represents a company's P/E ratio. Running this dataset through a Cox model produces an estimate of the value of the unknown $\beta$, which is −0.34. Therefore, an estimate of the entire hazard is:

$$\hat\lambda(t \mid x_i) = \hat\lambda_0(t)\exp(-0.34\, x_i).$$

Since the baseline hazard, $\lambda_0(t)$, was not estimated, the entire hazard cannot be calculated. However, consider the ratio of the hazards of companies i and j:

$$\frac{\lambda(t \mid x_i)}{\lambda(t \mid x_j)} = \frac{\lambda_0(t)\exp(-0.34\, x_i)}{\lambda_0(t)\exp(-0.34\, x_j)} = \exp\big(-0.34\,(x_i - x_j)\big).$$

All terms on the right are known, so calculating the ratio of hazards between companies is possible. Since there is no time-dependent term on the right (all terms are constant), the hazards are proportional to each other. For example, the hazard ratio of company 5 to company 2 is $\exp(-0.34\,(6.3 - 3.0)) \approx 0.33$. This means that, within the interval of study, company 5's risk of "death" is 0.33 ≈ 1/3 as large as company 2's risk of death.
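
To make the arithmetic concrete, here is a small sketch using the fitted value of −0.34 and the P/E values from the table:

```python
# Sketch: pairwise hazard ratio between companies 5 and 2, given beta = -0.34.
import numpy as np

beta = -0.34
pe = {2: 3.0, 5: 6.3}                # P/E ratios taken from the table above
hr = np.exp(beta * (pe[5] - pe[2]))  # exp(beta * (x_5 - x_2))
print(round(hr, 2))                  # ~0.33, i.e. about 1/3 of the risk
```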

There are important caveats to mention about the interpretation:

  1. The hazard ratio is the quantity $\exp(\beta)$, which is $\exp(-0.34) \approx 0.71$ in the above example. From the last calculation above, an interpretation of this is as the ratio of hazards between two "subjects" whose variables differ by one unit: if $x_i - x_j = 1$, then $\lambda(t \mid x_i)/\lambda(t \mid x_j) = \exp(-0.34)$. The choice of "differ by one unit" is a convenience, as it communicates precisely the value of $\exp(\beta)$.
  2. The baseline hazard is recovered when the scaling factor equals 1, i.e. when $\exp(-0.34\, x) = 1$, which occurs at $x = 0$.

Can we interpret the baseline hazard as the hazard of a "baseline" company whose P/E happens to be 0? This interpretation of the baseline hazard as the "hazard of a baseline subject" is imperfect, as the covariate value 0 may itself be impossible. In this application, a P/E of 0 is meaningless (it means the company's stock price is 0, i.e., it is "dead"). A more appropriate interpretation would be "the hazard when all variables are nil".

  3. It is tempting to interpret a value like $\lambda_0(t)\exp(-0.34\, x_i)$ as the hazard of company i. However, consider what this actually represents: implicitly, a ratio of hazards comparing company i's hazard to that of an imaginary baseline company with 0 P/E. As explained above, a P/E of 0 is impossible in this application, so this comparison is meaningless in this example. Ratios between plausible hazards are meaningful, however.

Time-varying predictors and coefficients

Extensions to time-dependent variables, time-dependent strata, and multiple events per subject can be incorporated via the counting process formulation of Andersen and Gill. [8] One example of the use of hazard models with time-varying regressors is estimating the effect of unemployment insurance on unemployment spells. [9] [10]
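
As a hedged sketch of the counting-process data layout (all column names and values here are illustrative, not from the studies cited), lifelines provides a fitter for start/stop-formatted data:

```python
# Sketch: time-varying covariates via the counting-process (start/stop) format.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per interval on which a subject's covariate values are constant.
episodes = pd.DataFrame({
    "id":    [1, 1, 2],
    "start": [0, 6, 0],        # interval start time
    "stop":  [6, 10, 8],       # interval end time
    "z":     [0.0, 1.0, 0.5],  # covariate value over the interval
    "event": [0, 1, 0],        # did the event occur at `stop`?
})

ctv = CoxTimeVaryingFitter()
ctv.fit(episodes, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```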

In addition to allowing time-varying covariates (i.e., predictors), the Cox model may be generalized to time-varying coefficients as well. That is, the proportional effect of a treatment may vary with time; e.g. a drug may be very effective if administered within one month of morbidity, and become less effective as time goes on. The hypothesis of no change with time (stationarity) of the coefficient may then be tested. Details and software (R package) are available in Martinussen and Scheike (2006). [11] [12]

In this context, it could also be mentioned that it is theoretically possible to specify the effect of covariates by using additive hazards, [13] i.e. specifying

$$\lambda(t \mid X_i) = \lambda_0(t) + X_i \cdot \beta.$$

If such additive hazards models are used in situations where (log-)likelihood maximization is the objective, care must be taken to restrict $\lambda(t \mid X_i)$ to non-negative values. Perhaps as a result of this complication, such models are seldom seen. If the objective is instead least squares, the non-negativity restriction is not strictly required.

Specifying the baseline hazard function

The Cox model may be specialized if a reason exists to assume that the baseline hazard follows a particular form. In this case, the baseline hazard is replaced by a given function. For example, assuming the hazard function to be the Weibull hazard function gives the Weibull proportional hazards model.
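
For instance, under one common parametrization (conventions vary across texts and software), a Weibull baseline hazard with shape $p > 0$ and rate $\lambda > 0$ gives

$$\lambda_0(t) = p \lambda t^{p-1}, \qquad \lambda(t \mid X_i) = p \lambda t^{p-1} \exp(X_i \cdot \beta).$$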

Incidentally, using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards and accelerated failure time assumptions.

The generic term parametric proportional hazards models can be used to describe proportional hazards models in which the hazard function is specified. The Cox proportional hazards model is sometimes called a semiparametric model by contrast.

Some authors use the term Cox proportional hazards model even when specifying the underlying hazard function, [14] to acknowledge the debt of the entire field to David Cox.

The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model.

Relationship to Poisson models

There is a relationship between proportional hazards models and Poisson regression models which is sometimes used to fit approximate proportional hazards models in software for Poisson regression. The usual reason for doing this is that calculation is much quicker. This was more important in the days of slower computers but can still be useful for particularly large data sets or complex problems. Laird and Olivier (1981) [15] provide the mathematical details. They note, "we do not assume [the Poisson model] is true, but simply use it as a device for deriving the likelihood." McCullagh and Nelder's [16] book on generalized linear models has a chapter on converting proportional hazards models to generalized linear models.
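
As a hedged sketch of the idea in its simplest form, with a single time interval (i.e., a constant baseline hazard; Laird and Olivier's piecewise-exponential version splits follow-up into several intervals), a Poisson GLM with log follow-up time as an offset reproduces the exponential proportional hazards model:

```python
# Sketch: exponential PH model fit as Poisson regression with a log-time offset.
import numpy as np
import statsmodels.api as sm

X = np.array([[0.0], [0.0], [1.0], [1.0], [1.0]])  # toy covariate
T = np.array([60.0, 32.0, 4.0, 18.0, 9.0])         # follow-up times
C = np.array([0, 1, 1, 1, 1])                      # event indicators

design = sm.add_constant(X)                        # intercept = log baseline rate
fit = sm.GLM(C, design, family=sm.families.Poisson(),
             offset=np.log(T)).fit()
print(fit.params)                                  # slope ~ log hazard ratio
```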

Under high-dimensional setup

In high-dimensional settings, where the number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) proposed a Lasso procedure for the proportional hazards regression parameter. [17] The Lasso estimator of the regression parameter β is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint.

There has been theoretical progress on this topic recently. [18] [19] [20] [21]
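
As a hedged sketch, an L1-penalized Cox fit is available in lifelines, whose `penalizer` and `l1_ratio` arguments control the penalty strength and the L1 mixing proportion (an elastic-net-style implementation rather than Tibshirani's original algorithm):

```python
# Sketch: an L1-penalized (Lasso-type) Cox regression with lifelines.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                               # example dataset shipped with lifelines
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # pure L1 penalty
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                             # some coefficients shrunk toward 0
```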

Software implementations

Cox regression is widely available; for example, R's survival package provides coxph, Python's lifelines provides CoxPHFitter, and the Wolfram Language provides CoxModelFit. [22]

Notes

  1. Breslow, N. E. (1975). "Analysis of Survival Data under the Proportional Hazards Model". International Statistical Review / Revue Internationale de Statistique. 43 (1): 45–57. doi:10.2307/1402659. JSTOR 1402659.
  2. Cox, David R. (1972). "Regression Models and Life-Tables". Journal of the Royal Statistical Society, Series B. 34 (2): 187–220. JSTOR 2985181. MR 0341758.
  3. Kalbfleisch, John D.; Schaubel, Douglas E. (2023). "Fifty Years of the Cox Model". Annual Review of Statistics and Its Application. 10 (1): 1–23. Bibcode:2023AnRSA..10....1K. doi:10.1146/annurev-statistics-033021-014043. ISSN 2326-8298.
  4. Reid, N. (1994). "A Conversation with Sir David Cox". Statistical Science. 9 (3): 439–455. doi:10.1214/ss/1177010394.
  5. Cox, D. R. (1997). Some remarks on the analysis of survival data. The First Seattle Symposium of Biostatistics: Survival Analysis.
  6. "Each failure contributes to the likelihood function", Cox (1972), page 191.
  7. Efron, Bradley (1977). "The Efficiency of Cox's Likelihood Function for Censored Data". Journal of the American Statistical Association. 72 (359): 557–565. doi:10.1080/01621459.1977.10480613. JSTOR 2286217.
  8. Andersen, P.; Gill, R. (1982). "Cox's regression model for counting processes, a large sample study". Annals of Statistics. 10 (4): 1100–1120. doi:10.1214/aos/1176345976. JSTOR 2240714.
  9. Meyer, B. D. (1990). "Unemployment Insurance and Unemployment Spells". Econometrica. 58 (4): 757–782. doi:10.2307/2938349. JSTOR 2938349.
  10. Bover, O.; Arellano, M.; Bentolila, S. (2002). "Unemployment Duration, Benefit Duration, and the Business Cycle". The Economic Journal. 112 (479): 223–265. doi:10.1111/1468-0297.00034. S2CID 15575103.
  11. Martinussen, T.; Scheike, T. (2006). Dynamic Regression Models for Survival Data. Springer. doi:10.1007/0-387-33960-4. ISBN 978-0-387-20274-7.
  12. "timereg: Flexible Regression Models for Survival Data". CRAN.
  13. Cox, D. R. (1997). Some remarks on the analysis of survival data. The First Seattle Symposium of Biostatistics: Survival Analysis.
  14. Bender, R.; Augustin, T.; Blettner, M. (2005). "Generating survival times to simulate Cox proportional hazards models". Statistics in Medicine. 24 (11): 1713–1723. doi:10.1002/sim.2369. PMID 16680804. S2CID 43875995.
  15. Laird, Nan; Olivier, Donald (1981). "Covariance Analysis of Censored Survival Data Using Log-Linear Analysis Techniques". Journal of the American Statistical Association. 76 (374): 231–240. doi:10.2307/2287816. JSTOR 2287816.
  16. McCullagh, P.; Nelder, J. A. (2000). "Chapter 13: Models for Survival Data". Generalized Linear Models (Second ed.). Boca Raton, Florida: Chapman & Hall/CRC. ISBN 978-0-412-31760-6. (Second edition 1989; first CRC reprint 1999.)
  17. Tibshirani, R. (1997). "The Lasso method for variable selection in the Cox model". Statistics in Medicine. 16 (4): 385–395. CiteSeerX 10.1.1.411.8024. doi:10.1002/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3. PMID 9044528.
  18. Bradić, J.; Fan, J.; Jiang, J. (2011). "Regularization for Cox's proportional hazards model with NP-dimensionality". Annals of Statistics. 39 (6): 3092–3120. arXiv:1010.5233. doi:10.1214/11-AOS911. PMC 3468162. PMID 23066171.
  19. Bradić, J.; Song, R. (2015). "Structured Estimation in Nonparametric Cox Model". Electronic Journal of Statistics. 9 (1): 492–534. arXiv:1207.4510. doi:10.1214/15-EJS1004. S2CID 88519017.
  20. Kong, S.; Nan, B. (2014). "Non-asymptotic oracle inequalities for the high-dimensional Cox regression via Lasso". Statistica Sinica. 24 (1): 25–42. arXiv:1204.1992. doi:10.5705/ss.2012.240. PMC 3916829. PMID 24516328.
  21. Huang, J.; Sun, T.; Ying, Z.; Yu, Y.; Zhang, C. H. (2013). "Oracle inequalities for the lasso in the Cox model". The Annals of Statistics. 41 (3): 1142–1165. arXiv:1306.4847. doi:10.1214/13-AOS1098. PMC 3786146. PMID 24086091.
  22. "CoxModelFit". Wolfram Language & System Documentation Center.
