Targeted Maximum Likelihood Estimation (TMLE), also referred to more accurately as Targeted Minimum Loss-Based Estimation, is a general statistical estimation framework for causal inference and semiparametric models. TMLE combines ideas from maximum likelihood estimation, semiparametric efficiency theory, and machine learning. It was introduced by Mark J. van der Laan and colleagues in the mid-2000s as a method that yields asymptotically efficient plug-in estimators while allowing the use of flexible, data-adaptive algorithms, such as ensemble machine learning, for nuisance parameter estimation. [1] [2]
TMLE is used in epidemiology, biostatistics, and the social sciences to estimate causal effects in observational and experimental studies. Extensions of TMLE include Longitudinal TMLE (LTMLE) for time-varying treatments and confounders. Variations in how the targeting step is carried out have produced further versions, such as Collaborative TMLE (CTMLE) and Adaptive TMLE, aimed at improved finite-sample performance and automated variable selection.
The TMLE framework was first described by van der Laan and Rubin (2006) as a general approach for constructing efficient plug-in estimators of smooth features of the data density, demonstrated in the context of causal inference and missing data problems. [1] It was developed to address limitations of traditional doubly robust methods, such as Augmented Inverse Probability Weighting (AIPW), by respecting the plug-in principle: the target parameter is a function of a data density that is an element of the statistical model. TMLE estimates the data density, or the relevant parts of it, with machine learning, and targets these machine learning fits before plugging them into the target parameter mapping. In this manner, a TMLE always respects global model constraints and satisfies known bounds, such as the requirement that the target parameter be a probability. [3]
Since its introduction, TMLE has been developed in a series of theoretical and applied papers, culminating in book-length treatments of the method and its applications to survival analysis, adaptive designs, and longitudinal data. [2] [4]
At its core, TMLE is a two-step estimation procedure: (1) an initial estimation step, in which the relevant parts of the data distribution (such as the outcome regression and the treatment mechanism) are estimated with flexible, data-adaptive methods; and (2) a targeting step, in which the initial fit is updated along a fluctuation submodel so that the resulting plug-in estimator solves the efficient influence function estimating equation for the target parameter.
This approach balances the bias–variance trade-off by combining data-adaptive estimation with semiparametric efficiency theory. TMLE is doubly robust, meaning it remains consistent if either the outcome model or the treatment model is consistently estimated. [6]
Here we explain the TMLE of the average treatment effect of a binary treatment on an outcome, adjusting for baseline covariates. Consider $n$ i.i.d. observations $O_i = (W_i, A_i, Y_i)$ from a distribution $P_0$, where $W$ are baseline covariates, $A \in \{0, 1\}$ is a binary treatment, and $Y$ is an outcome. Let $\bar{Q}_0(A, W) = E_0[Y \mid A, W]$ represent the outcome model and $g_0(A \mid W) = P_0(A \mid W)$ represent the propensity score.
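For concreteness, the following Python sketch simulates data with this structure; the covariate dimension, coefficients, and sample size are arbitrary illustrative choices, not part of the method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Baseline covariates W, binary treatment A, binary outcome Y
W = rng.normal(size=(n, 2))
g_true = 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1])))  # true P(A = 1 | W)
A = rng.binomial(1, g_true)
Q_true = 1 / (1 + np.exp(-(-0.5 + A + 0.6 * W[:, 0] + 0.4 * W[:, 1])))  # true E[Y | A, W]
Y = rng.binomial(1, Q_true)
```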
The average treatment effect (ATE) is given by
$$\psi_0 = E_0\big[\bar{Q}_0(1, W) - \bar{Q}_0(0, W)\big].$$
A basic TMLE for the ATE proceeds as follows:
Step 1: Estimate initial models. Obtain estimates $\bar{Q}^0_n$ of $\bar{Q}_0$ and $g_n$ of $g_0$, often using flexible methods such as Super Learner.
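A minimal illustration of Step 1 in Python, substituting plain logistic regressions from scikit-learn for Super Learner (in practice an ensemble learner would be used); the data are re-simulated as above so the block runs standalone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Re-simulate the data from the setup sketch so this block is self-contained
rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1]))))
Y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + A + 0.6 * W[:, 0] + 0.4 * W[:, 1]))))

# Initial outcome model Q^0_n(A, W): regress Y on (A, W)
XA = np.column_stack([A, W])
Q_model = LogisticRegression().fit(XA, Y)
Q0_A = Q_model.predict_proba(XA)[:, 1]                                 # Q^0_n(A_i, W_i)
Q0_1 = Q_model.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]   # Q^0_n(1, W_i)
Q0_0 = Q_model.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]  # Q^0_n(0, W_i)

# Propensity score g_n(1 | W): regress A on W
g_model = LogisticRegression().fit(W, A)
g1 = g_model.predict_proba(W)[:, 1]
```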
Step 2: Compute the clever covariate. Define:
$$H(A, W) = \frac{A}{g_n(1 \mid W)} - \frac{1 - A}{g_n(0 \mid W)}.$$
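In code, with illustrative treatment indicators and fitted propensities standing in for the output of Step 1:

```python
import numpy as np

# Illustrative values for A_i and g_n(1 | W_i)
A  = np.array([1, 0, 1, 1, 0])
g1 = np.array([0.70, 0.40, 0.80, 0.60, 0.30])

# Clever covariate H(A, W) = A / g_n(1 | W) - (1 - A) / g_n(0 | W)
H = A / g1 - (1 - A) / (1 - g1)

# Evaluated at A = 1 and at A = 0, needed for the update in Step 4
H1 = 1 / g1
H0 = -1 / (1 - g1)
```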
Step 3: Estimate the fluctuation parameter. Fit an intercept-free logistic regression of $Y$ on $H(A, W)$ with $\operatorname{logit} \bar{Q}^0_n(A, W)$ as offset. This yields $\hat{\varepsilon}$, the MLE that solves the score equation:
$$\sum_{i=1}^{n} H(A_i, W_i)\big(Y_i - \bar{Q}^1_n(A_i, W_i)\big) = 0,$$
where $\bar{Q}^1_n$ denotes the updated estimate defined in Step 4.
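A sketch of the fluctuation fit using a statsmodels GLM with an offset; the five observations are illustrative values carried over from the previous sketch.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative outcomes, clever covariate, and initial predictions Q^0_n(A_i, W_i)
Y  = np.array([1, 0, 0, 1, 1])
H  = np.array([1.43, -1.67, 1.25, 1.67, -1.43])
Q0 = np.array([0.80, 0.30, 0.70, 0.60, 0.40])

offset = np.log(Q0 / (1 - Q0))  # logit of the initial estimate, held fixed

# Intercept-free logistic regression of Y on H with the offset
flux = sm.GLM(Y, H.reshape(-1, 1), family=sm.families.Binomial(),
              offset=offset).fit()
eps_hat = flux.params[0]  # fitted fluctuation parameter epsilon-hat
```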
Step 4: Update the initial estimate. Apply the fluctuation ("blip") to obtain the targeted estimate:
$$\operatorname{logit} \bar{Q}^1_n(A, W) = \operatorname{logit} \bar{Q}^0_n(A, W) + \hat{\varepsilon} H(A, W).$$
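The corresponding update in code, using scipy's expit/logit; the initial predictions, propensities, and epsilon are illustrative values carried over from the previous sketches.

```python
import numpy as np
from scipy.special import expit, logit

# Illustrative initial predictions, propensities, and fitted epsilon
Q0_1 = np.array([0.82, 0.55, 0.74, 0.66, 0.47])  # Q^0_n(1, W_i)
Q0_0 = np.array([0.40, 0.22, 0.31, 0.28, 0.19])  # Q^0_n(0, W_i)
g1   = np.array([0.70, 0.40, 0.80, 0.60, 0.30])  # g_n(1 | W_i)
eps_hat = 0.05

# Targeted update on the logit scale, with H evaluated at A = 1 and A = 0
Q1_1 = expit(logit(Q0_1) + eps_hat * (1 / g1))         # Q^1_n(1, W_i)
Q1_0 = expit(logit(Q0_0) + eps_hat * (-1 / (1 - g1)))  # Q^1_n(0, W_i)
```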
Step 5: Compute the TMLE. The ATE estimate is:
$$\hat{\psi}_n = \frac{1}{n} \sum_{i=1}^{n} \big[\bar{Q}^1_n(1, W_i) - \bar{Q}^1_n(0, W_i)\big].$$
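The plug-in step is then a single average over the targeted predictions (illustrative values carried over):

```python
import numpy as np

# Targeted predictions Q^1_n(1, W_i) and Q^1_n(0, W_i) from Step 4
Q1_1 = np.array([0.83, 0.56, 0.75, 0.67, 0.49])
Q1_0 = np.array([0.39, 0.21, 0.30, 0.27, 0.18])

psi_hat = np.mean(Q1_1 - Q1_0)  # plug-in ATE estimate
```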
Inference. The efficient influence function (EIF) for the ATE is:
$$D^*(O) = H(A, W)\big(Y - \bar{Q}(A, W)\big) + \bar{Q}(1, W) - \bar{Q}(0, W) - \psi.$$
The variance is estimated by $\hat{\sigma}^2_n = \frac{1}{n} \sum_{i=1}^{n} \hat{D}^*(O_i)^2$, where $\hat{D}^*$ is the EIF evaluated at the estimated nuisance parameters, yielding Wald-type confidence intervals $\hat{\psi}_n \pm z_{1-\alpha/2}\, \hat{\sigma}_n / \sqrt{n}$. [2]
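A sketch of the EIF-based inference, reusing the illustrative quantities from the step sketches above:

```python
import numpy as np

# Illustrative quantities carried over from the previous sketches
Y    = np.array([1, 0, 0, 1, 1])
A    = np.array([1, 0, 1, 1, 0])
g1   = np.array([0.70, 0.40, 0.80, 0.60, 0.30])
Q1_1 = np.array([0.83, 0.56, 0.75, 0.67, 0.49])
Q1_0 = np.array([0.39, 0.21, 0.30, 0.27, 0.18])

n = len(Y)
Q1_A = np.where(A == 1, Q1_1, Q1_0)       # Q^1_n(A_i, W_i)
H = A / g1 - (1 - A) / (1 - g1)           # clever covariate
psi_hat = np.mean(Q1_1 - Q1_0)            # TMLE point estimate

# Estimated efficient influence function at each observation
eif = H * (Y - Q1_A) + (Q1_1 - Q1_0) - psi_hat

se = np.sqrt(np.var(eif) / n)                     # sigma_n / sqrt(n)
ci = (psi_hat - 1.96 * se, psi_hat + 1.96 * se)   # 95% Wald interval
```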
Remark. For continuous outcomes, a linear fluctuation may be used instead. For bounded continuous outcomes, the logistic fluctuation (after rescaling $Y$ to $[0, 1]$) is often preferred for improved finite-sample performance. [7]
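A sketch of the rescaling for a bounded continuous outcome; taking the bounds from the observed minimum and maximum is one common choice, and psi_star below is a placeholder standing in for an ATE estimate produced on the rescaled data.

```python
import numpy as np

# Bounded continuous outcome, mapped to [0, 1] before a logistic fluctuation
Y = np.array([12.0, 55.3, 30.1, 80.7, 44.9])
a, b = Y.min(), Y.max()        # known or empirical bounds
Y_star = (Y - a) / (b - a)     # rescaled outcome in [0, 1]

# ... run the logistic-fluctuation TMLE on Y_star to obtain psi_star ...
psi_star = 0.12                # placeholder for the ATE on the [0, 1] scale
psi_hat = psi_star * (b - a)   # map the estimate back to the original scale
```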
TMLE has been applied in epidemiology, biostatistics, and the social sciences, including analyses of survival outcomes, longitudinal data with time-varying treatments and confounders, and adaptive trial designs. [2] [4]
Several R packages implement TMLE and related methods, including tmle for point-treatment effects, ltmle for longitudinal TMLE, and SuperLearner for ensemble estimation of nuisance parameters.