Missing data

In statistics, missing data, or missing values, occur when no data value is stored for the variable in an observation. Missing data are a common occurrence and can have a significant effect on the conclusions that can be drawn from the data.

Missing data can occur because of nonresponse: no information is provided for one or more items or for a whole unit ("subject"). Some items are more likely to generate a nonresponse than others: for example items about private subjects such as income. Attrition is a type of missingness that can occur in longitudinal studies—for instance studying development where a measurement is repeated after a certain period of time. Missingness occurs when participants drop out before the test ends and one or more measurements are missing.

Data often are missing in research in economics, sociology, and political science because governments or private entities choose not to, or fail to, report critical statistics, [1] or because the information is not available. Sometimes missing values are caused by the researcher—for example, when data collection is done improperly or mistakes are made in data entry. [2]

These forms of missingness fall into different types, with different impacts on the validity of conclusions drawn from research: missing completely at random, missing at random, and missing not at random. Missing data can be handled similarly to censored data.

Types

Understanding the reasons why data are missing is important for handling the remaining data correctly. If values are missing completely at random, the data sample is likely still representative of the population. But if the values are missing systematically, analysis may be biased. For example, in a study of the relation between IQ and income, if participants with an above-average IQ tend to skip the question ‘What is your salary?’, analyses that do not take this MAR pattern (see below) into account may falsely fail to find a positive association between IQ and salary. Because of these problems, methodologists routinely advise researchers to design studies to minimize the occurrence of missing values. [2] Graphical models can be used to describe the missing data mechanism in detail. [3] [4]

[Figure: Missing not at random] The graph shows the probability distributions of the estimates of the expected intensity of depression in the population (60 cases). The true population is a standardised normal distribution and the non-response probability is a logistic function of the intensity of depression. The conclusion: the more data are missing (MNAR), the more biased the estimates are, and the intensity of depression in the population is underestimated.
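The simulation the caption describes can be sketched in a few lines (an illustrative sketch, not the original code: the sample size, seed, and logistic slope are assumptions). Scores are drawn from a standard normal, the non-response probability is a logistic function of the score, and the complete-case mean underestimates depression:

```python
import math
import random
import statistics

random.seed(0)

# True population: standardised normal depression scores
scores = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def p_nonresponse(x: float, slope: float = 1.5) -> float:
    # Logistic non-response probability, increasing with depression intensity
    return 1.0 / (1.0 + math.exp(-slope * x))

# MNAR: whether a score is recorded depends on the score itself
observed = [x for x in scores if random.random() >= p_nonresponse(x)]

true_mean = statistics.mean(scores)     # close to 0 by construction
naive_mean = statistics.mean(observed)  # biased downward
```

Because the most depressed participants are the most likely to drop out, the complete-case mean lies well below the true population mean, which is exactly the underestimation shown in the figure.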

Missing completely at random

Values in a data set are missing completely at random (MCAR) if the events that lead to any particular data-item being missing are independent both of observable variables and of unobservable parameters of interest, and occur entirely at random. [5] When data are MCAR, the analysis performed on the data is unbiased; however, data are rarely MCAR.

In the case of MCAR, the missingness of data is unrelated to any study variable: thus, the participants with completely observed data are in effect a random sample of all the participants assigned a particular intervention. With MCAR, the random assignment of treatments is assumed to be preserved, but that is usually an unrealistically strong assumption in practice. [6]

Missing at random

Missing at random (MAR) occurs when the missingness is not completely random, but can be fully accounted for by variables for which there is complete information. [7] Since MAR is an assumption that is impossible to verify statistically, we must rely on its substantive reasonableness. [8] An example is that males are less likely to fill in a depression survey, but this has nothing to do with their level of depression once sex is accounted for. Depending on the analysis method, such data can still induce parameter bias due to contingently empty cells (the cell for males with very high depression may have zero entries). However, if the parameter is estimated with full information maximum likelihood, MAR will provide asymptotically unbiased estimates. [citation needed]
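The depression-survey example can be put in a short simulation (the group means, missingness rates, and sample size are illustrative assumptions, not values from the literature). Because missingness depends only on the fully observed variable (sex), conditioning on it and reweighting removes the bias of the naive complete-case mean:

```python
import random
import statistics

random.seed(1)

rows = []  # (male, depression, responded)
for _ in range(20_000):
    male = random.random() < 0.5
    depression = random.gauss(0.6 if male else 0.0, 1.0)  # hypothetical group difference
    p_miss = 0.6 if male else 0.1  # MAR: response depends on sex only, not on depression
    rows.append((male, depression, random.random() >= p_miss))

true_mean = statistics.mean(d for _, d, _ in rows)
naive_mean = statistics.mean(d for _, d, obs in rows if obs)  # complete cases: biased

# Adjust: estimate within each sex among respondents, then reweight by population shares
groups = (True, False)
group_mean = {g: statistics.mean(d for m, d, obs in rows if obs and m == g) for g in groups}
group_share = {g: sum(1 for m, _, _ in rows if m == g) / len(rows) for g in groups}
adjusted_mean = sum(group_mean[g] * group_share[g] for g in groups)
```

The reweighted estimate recovers the population mean because, within each sex, the respondents are a random subsample; the naive mean is pulled toward the better-responding group.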

Missing not at random

Missing not at random (MNAR) (also known as nonignorable nonresponse) is data that are neither MAR nor MCAR (i.e. the value of the missing variable is related to the reason it is missing). [5] To extend the previous example, this would occur if men failed to fill in a depression survey because of their level of depression.

Samuelson and Spirer (1992) discussed how missing and/or distorted data about demographics, law enforcement, and health could be indicators of patterns of human rights violations. They gave several fairly well documented examples. [9]

Structured missingness

Missing data can also arise in subtle ways that are not well accounted for in classical theory. An increasingly encountered problem is that data may not be MAR, and the missing values instead exhibit an association or structure, either explicitly or implicitly. Such missingness has been described as ‘structured missingness’. [10]

Structured missingness commonly arises when combining information from multiple studies, each of which may vary in its design and measurement set and therefore only contain a subset of variables from the union of measurement modalities. In these situations, missing values may relate to the various sampling methodologies used to collect the data or reflect characteristics of the wider population of interest, and so may impart useful information. For instance, in a health context, structured missingness has been observed as a consequence of linking clinical, genomic and imaging data. [10]

The presence of structured missingness may be a hindrance to making effective use of data at scale, through both classical statistical and current machine learning methods. For example, the reasons why some data are missing in patterns may carry inherent bias, with implications for the predictive fairness of machine learning models. Furthermore, established methods for dealing with missing data, such as imputation, do not usually take into account the structure of the missing data, so new formulations are needed to deal with structured missingness appropriately and effectively. Finally, characterising structured missingness within the classical framework of MCAR, MAR, and MNAR is a work in progress. [11]

Techniques of dealing with missing data

Missing data reduces the representativeness of the sample and can therefore distort inferences about the population. Generally speaking, there are three main approaches to handle missing data: (1) Imputation—where values are filled in the place of missing data, (2) omission—where samples with invalid data are discarded from further analysis and (3) analysis—by directly applying methods unaffected by the missing values. One systematic review addressing the prevention and handling of missing data for patient-centered outcomes research identified 10 standards as necessary for the prevention and handling of missing data. These include standards for study design, study conduct, analysis, and reporting. [12]
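The first two approaches can be contrasted on a toy table (a minimal sketch; the data and the mean-imputation rule are invented for illustration):

```python
rows = [
    (1.0, 10.0),
    (2.0, None),   # second field missing
    (3.0, 30.0),
    (4.0, None),
    (5.0, 50.0),
]

# (2) Omission: listwise deletion discards any row with a missing value
complete_rows = [r for r in rows if None not in r]

# (1) Imputation: fill each missing second field with the observed mean
observed_second = [b for _, b in rows if b is not None]
fill = sum(observed_second) / len(observed_second)
imputed_rows = [(a, b if b is not None else fill) for a, b in rows]
```

Omission shrinks the sample (three rows survive here), while imputation keeps all rows at the cost of introducing estimated values.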

In some practical applications, experimenters can control the level of missingness and prevent missing values before gathering the data. For example, in computer questionnaires, it is often not possible to skip a question: the question has to be answered before one can continue to the next. Missing values due to the participant are therefore eliminated by this type of questionnaire, though this method may not be permitted by an ethics board overseeing the research. In survey research, it is common to make multiple efforts to contact each individual in the sample, often sending letters to attempt to persuade those who have decided not to participate to change their minds. [13] :161–187 However, such techniques can either help or hurt in terms of reducing the negative inferential effects of missing data, because the kind of people who are willing to be persuaded to participate after initially refusing or not being home are likely to be significantly different from the kinds of people who will still refuse or remain unreachable after additional effort. [13] :188–198

In situations where missing values are likely to occur, the researcher is often advised to plan to use methods of data analysis that are robust to missingness. An analysis is robust when we are confident that mild to moderate violations of the technique's key assumptions will produce little or no bias or distortion in the conclusions drawn about the population.

Imputation

Some data analysis techniques are not robust to missingness and require one to "fill in", or impute, the missing data. Rubin (1987) argued that repeating imputation even a few times (five or fewer) enormously improves the quality of estimation. [2] For many practical purposes, 2 or 3 imputations capture most of the relative efficiency that could be captured with a larger number of imputations. However, a too-small number of imputations can lead to a substantial loss of statistical power, and some scholars now recommend 20 to 100 or more. [14] Any multiply-imputed data analysis must be repeated for each of the imputed data sets and, in some cases, the relevant statistics must be combined in a relatively complicated way. [2] Multiple imputation is not used in some disciplines, owing to a lack of training and to misconceptions about it. [15] Listwise deletion has also been used to handle missing data, but it has been found to introduce additional bias. [16] A beginner's guide provides step-by-step instructions for imputing data. [17]
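A minimal sketch of multiple imputation and the "relatively complicated" combination step (Rubin's rules, here for a mean). The imputation model, drawing each missing value at random from the observed values, is deliberately crude and only defensible under MCAR; real analyses would draw from a fitted model instead:

```python
import random
import statistics

random.seed(2)

# Toy data: 200 values, 40 of them missing (None)
data = [random.gauss(10.0, 2.0) for _ in range(200)]
for i in random.sample(range(len(data)), 40):
    data[i] = None

observed = [x for x in data if x is not None]
m = 20  # number of imputations

means, within_vars = [], []
for _ in range(m):
    # Crude stochastic imputation: draw each missing value from the observed values
    completed = [x if x is not None else random.choice(observed) for x in data]
    means.append(statistics.mean(completed))
    within_vars.append(statistics.variance(completed) / len(completed))

# Rubin's rules: pool the m estimates and their variances
pooled = statistics.mean(means)           # pooled point estimate
w = statistics.mean(within_vars)          # average within-imputation variance
b = statistics.variance(means)            # between-imputation variance
total_variance = w + (1 + 1 / m) * b      # total variance of the pooled estimate
```

The between-imputation term inflates the total variance, which is how multiple imputation propagates the extra uncertainty created by the missing values.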

The expectation-maximization algorithm is an approach in which values of the statistics which would be computed if a complete dataset were available are estimated (imputed), taking into account the pattern of missing data. In this approach, values for individual missing data-items are not usually imputed.
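A sketch of this idea for a bivariate normal model with y partially missing (the data-generating model, seed, and iteration count are illustrative assumptions). The E-step imputes the expected sufficient statistics, E[y] and E[y²] given x, rather than individual data points, and the M-step re-estimates the parameters:

```python
import random

random.seed(3)

# Toy data: y depends linearly on x; 40% of the y values are missing
n = 2_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [1.0 + 2.0 * x + random.gauss(0.0, 1.0) for x in xs]
missing = [random.random() < 0.4 for _ in range(n)]

# Initial parameter estimates for the bivariate normal model
mu_x = sum(xs) / n
var_x = sum((x - mu_x) ** 2 for x in xs) / n
n_obs = sum(1 for m in missing if not m)
mu_y = sum(y for y, m in zip(ys, missing) if not m) / n_obs
var_y, cov = 1.0, 0.0

for _ in range(50):
    # E-step: expected y and y^2 for missing rows, given x and current parameters
    beta = cov / var_x
    resid_var = max(var_y - beta * cov, 1e-9)  # conditional variance of y given x
    e_y = [y if not m else mu_y + beta * (x - mu_x)
           for x, y, m in zip(xs, ys, missing)]
    e_y2 = [y * y if not m else e * e + resid_var
            for y, e, m in zip(ys, e_y, missing)]
    # M-step: re-estimate parameters from the completed moments
    mu_y = sum(e_y) / n
    var_y = sum(e_y2) / n - mu_y ** 2
    cov = sum(x * e for x, e in zip(xs, e_y)) / n - mu_x * mu_y
```

At convergence, mu_y and the implied slope cov / var_x approach the maximum-likelihood values (here about 1 and 2), even though no individual missing y was ever singled out and filled in.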

Interpolation

In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points.
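For a series with internal gaps, linear interpolation fills each missing point from its nearest known neighbours. A minimal implementation (a sketch; it assumes the first and last values are present):

```python
def interpolate_missing(series):
    """Fill internal None gaps by linear interpolation between known neighbours."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(series):
        if v is None:
            lo = max(k for k in known if k < i)  # nearest known point to the left
            hi = min(k for k in known if k > i)  # nearest known point to the right
            t = (i - lo) / (hi - lo)
            filled[i] = filled[lo] + t * (filled[hi] - filled[lo])
    return filled
```

For example, interpolate_missing([1.0, None, 3.0, None, None, 9.0]) fills the gaps to give [1.0, 2.0, 3.0, 5.0, 7.0, 9.0].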

In the comparison of two paired samples with missing data, a test statistic that uses all available data without the need for imputation is the partially overlapping samples t-test. [18] This is valid under normality and assuming MCAR.

Partial deletion

Methods which involve reducing the data available to a dataset having no missing values include:

- Listwise deletion (complete-case analysis), which discards every case with any missing value
- Pairwise deletion (available-case analysis), which uses all cases available for each pair of variables

Full analysis

Methods which take full account of all information available, without the distortion resulting from using imputed values as if they were actually observed, include:

- Full information maximum likelihood estimation
- The expectation-maximization algorithm

Partial identification methods may also be used. [21]

Model-based techniques

Model-based techniques, often using graphs, offer additional tools for testing missing data types (MCAR, MAR, MNAR) and for estimating parameters under missing data conditions. For example, a test for refuting MAR/MCAR reads as follows:

For any three variables X, Y, and Z, where Z is fully observed while X and Y are partially observed, and where R_x and R_y denote the missingness indicators of X and Y (with R = 0 meaning observed), the data should satisfy: X ⫫ R_y | (R_x = 0, Z).

In words, the observed portion of X should be independent of the missingness status of Y, conditional on every value of Z. Failure to satisfy this condition indicates that the problem belongs to the MNAR category. [22]
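This condition can be checked empirically. In the sketch below (all distributions are invented, and X is taken as fully observed to keep the check simple), the missingness of Y depends on X itself, so within each stratum of Z the X values of the "Y missing" and "Y observed" groups differ, refuting MAR/MCAR:

```python
import random
import statistics

random.seed(4)

rows = []  # (z, x, y_missing)
for _ in range(20_000):
    z = random.randint(0, 1)  # fully observed covariate
    x = random.gauss(float(z), 1.0)
    # MNAR mechanism: Y's missingness depends on X, not just on Z
    y_missing = random.random() < (0.7 if x > z else 0.2)
    rows.append((z, x, y_missing))

def stratum_gap(z0):
    """Difference in mean X between Y-missing and Y-observed cases within a Z stratum."""
    miss = [x for z, x, m in rows if z == z0 and m]
    obs = [x for z, x, m in rows if z == z0 and not m]
    return statistics.mean(miss) - statistics.mean(obs)

gaps = [stratum_gap(z0) for z0 in (0, 1)]  # both clearly non-zero here
```

Had the data been MAR or MCAR given Z, both gaps would hover around zero; a formal analysis would replace the raw mean comparison with a proper two-sample test in each stratum.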

(Remark: These tests are necessary for variable-based MAR which is a slight variation of event-based MAR. [23] [24] [25] )

When data fall into the MNAR category, techniques are available for consistently estimating parameters when certain conditions hold in the model. [3] For example, if Y explains the reason for missingness in X and Y itself has missing values, the joint probability distribution of X and Y can still be estimated if the missingness of Y is random. The estimand in this case will be:

P(X, Y) = P(X | Y) P(Y) = P(X | Y, R_x = 0, R_y = 0) P(Y | R_y = 0)

where R_x = 0 and R_y = 0 denote the observed portions of their respective variables.

Different model structures may yield different estimands and different procedures of estimation whenever consistent estimation is possible. The preceding estimand calls for first estimating P(X | Y) from complete data and multiplying it by P(Y) estimated from cases in which Y is observed, regardless of the status of X. Moreover, in order to obtain a consistent estimate it is crucial that the first term be P(X | Y, R_x = 0, R_y = 0) as opposed to P(X | Y, R_x = 0).
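A discrete sketch of this estimation strategy (the binary distributions and missingness rates are invented). X's missingness depends on Y, and Y's missingness is random, so P(X | Y) is estimated from complete cases and P(Y) from all cases where Y is observed; the naive complete-case joint estimate is visibly biased:

```python
import random

random.seed(5)

n = 50_000
cases = []  # (x, y, x_missing, y_missing)
for _ in range(n):
    y = random.random() < 0.5
    x = random.random() < (0.8 if y else 0.3)
    x_missing = random.random() < (0.6 if y else 0.1)  # Y explains X's missingness
    y_missing = random.random() < 0.3                  # Y's missingness is random
    cases.append((x, y, x_missing, y_missing))

true_joint = sum(1 for x, y, _, _ in cases if x and y) / n  # for reference only

complete = [(x, y) for x, y, xm, ym in cases if not xm and not ym]
y_observed = [y for _, y, _, ym in cases if not ym]

# Estimand: P(X=1 | Y=1, R_x=0, R_y=0) * P(Y=1 | R_y=0)
p_x_given_y1 = (sum(1 for x, y in complete if x and y)
                / sum(1 for _, y in complete if y))
p_y1 = sum(y_observed) / len(y_observed)
consistent = p_x_given_y1 * p_y1

# Naive complete-case estimate of P(X=1, Y=1): biased, because Y=1 rows
# are underrepresented among complete cases
naive = sum(1 for x, y in complete if x and y) / len(complete)
```

Conditioning the first factor on both R_x = 0 and R_y = 0 is what the text stresses: given Y, X is independent of its own missingness, so complete cases estimate P(X | Y) correctly, while P(Y) must come from the wider Y-observed sample.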

In many cases model-based techniques permit the model structure to undergo refutation tests. [25] Any model which implies the independence between a partially observed variable X and the missingness indicator of another variable Y (i.e. R_y), conditional on R_x, can be submitted to the following refutation test: X ⫫ R_y | R_x = 0.

Finally, the estimands that emerge from these techniques are derived in closed form and do not require iterative procedures such as Expectation Maximization that are susceptible to local optima. [26]

A special class of problems appears when the probability of missingness depends on time. For example, in trauma databases the probability of losing data about the trauma outcome depends on the day after the trauma. In these cases, various non-stationary Markov chain models are applied. [27]
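A toy illustration of the idea (the per-day dropout probabilities are invented): with an absorbing "record lost" state whose transition probability changes from day to day, the chain is non-stationary, and the probability that an outcome is still recorded decays accordingly.

```python
# Hypothetical day-dependent (non-stationary) probability of losing the outcome record
drop_prob = {1: 0.02, 2: 0.05, 3: 0.10, 4: 0.08}

# Probability a record is still complete after each day ('lost' is absorbing)
still_observed = 1.0
trajectory = {}
for day in sorted(drop_prob):
    still_observed *= 1.0 - drop_prob[day]
    trajectory[day] = still_observed
```

A stationary chain would use a single dropout probability for every day; here the transition matrix differs by day, which is what "non-stationary" means in this context.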


References

  1. Messner SF (1992). "Exploring the Consequences of Erratic Data Reporting for Cross-National Research on Homicide". Journal of Quantitative Criminology. 8 (2): 155–173. doi:10.1007/bf01066742. S2CID   133325281.
  2. Hand, David J.; Adèr, Herman J.; Mellenbergh, Gideon J. (2008). Advising on Research Methods: A Consultant's Companion. Huizen, Netherlands: Johannes van Kessel. pp. 305–332. ISBN 978-90-79418-01-5.
  3. Mohan, Karthika; Pearl, Judea; Tian, Jin (2013). "Graphical Models for Inference with Missing Data". Advances in Neural Information Processing Systems 26. pp. 1277–1285.
  4. Karvanen, Juha (2015). "Study design in causal models". Scandinavian Journal of Statistics. 42 (2): 361–377. arXiv: 1211.2958 . doi:10.1111/sjos.12110. S2CID   53642701.
  5. Polit DF, Beck CT (2012). Nursing Research: Generating and Assessing Evidence for Nursing Practice, 9th ed. Philadelphia, USA: Wolters Kluwer Health, Lippincott Williams & Wilkins.
  6. Deng (2012-10-05). "On Biostatistics and Clinical Trials". Archived from the original on 15 March 2016. Retrieved 13 May 2016.
  7. "Home". Archived from the original on 2015-09-10. Retrieved 2015-08-01.
  8. Little, Roderick J. A.; Rubin, Donald B. (2002), Statistical Analysis with Missing Data (2nd ed.), Wiley .
  9. Samuelson, Douglas A.; Spirer, Herbert F. (1992-12-31), "Chapter 3. Use of Incomplete and Distorted Data in Inference About Human Rights Violations", Human Rights and Statistics, University of Pennsylvania Press, pp. 62–78, doi:10.9783/9781512802863-006, ISBN   9781512802863 , retrieved 2022-08-18
  10. Mitra, Robin; McGough, Sarah F.; Chakraborti, Tapabrata; Holmes, Chris; Copping, Ryan; Hagenbuch, Niels; Biedermann, Stefanie; Noonan, Jack; Lehmann, Brieuc; Shenvi, Aditi; Doan, Xuan Vinh; Leslie, David; Bianconi, Ginestra; Sanchez-Garcia, Ruben; Davies, Alisha (2023-01-25). "Learning from data with structured missingness". Nature Machine Intelligence. 5 (1): 13–23. doi:10.1038/s42256-022-00596-z. ISSN 2522-5839.
  11. Jackson, James; Mitra, Robin; Hagenbuch, Niels; McGough, Sarah; Harbron, Chris (2023-07-05), A Complete Characterisation of Structured Missingness, doi:10.48550/arXiv.2307.02650 , retrieved 2024-04-18
  12. Li, Tianjing; Hutfless, Susan; Scharfstein, Daniel O.; Daniels, Michael J.; Hogan, Joseph W.; Little, Roderick J.A.; Roy, Jason A.; Law, Andrew H.; Dickersin, Kay (2014). "Standards should be applied in the prevention and handling of missing data for patient-centered outcomes research: a systematic review and expert consensus". Journal of Clinical Epidemiology. 67 (1): 15–32. doi:10.1016/j.jclinepi.2013.08.013. PMC   4631258 . PMID   24262770.
  13. Stoop, I.; Billiet, J.; Koch, A.; Fitzgerald, R. (2010). Reducing Survey Nonresponse: Lessons Learned from the European Social Survey. Oxford: Wiley-Blackwell. ISBN 978-0-470-51669-0.
  14. Graham J.W.; Olchowski A.E.; Gilreath T.D. (2007). "How Many Imputations Are Really Needed? Some Practical Clarifications of Multiple Imputation Theory". Prevention Science. 8 (3): 208–213. CiteSeerX 10.1.1.595.7125. doi:10.1007/s11121-007-0070-9. PMID 17549635. S2CID 24566076.
  15. van Ginkel, Joost R.; Linting, Marielle; Rippe, Ralph C. A.; van der Voort, Anja (2020-05-03). "Rebutting Existing Misconceptions About Multiple Imputation as a Method for Handling Missing Data". Journal of Personality Assessment. 102 (3): 297–308. doi:10.1080/00223891.2018.1530680. hdl: 1887/138825 . ISSN   0022-3891. PMID   30657714. S2CID   58580667.
  16. van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). CRC Press.
  17. Woods, Adrienne D.; Gerasimova, Daria; Van Dusen, Ben; Nissen, Jayson; Bainter, Sierra; Uzdavines, Alex; Davis-Kean, Pamela E.; Halvorson, Max; King, Kevin M.; Logan, Jessica A. R.; Xu, Menglin; Vasilev, Martin R.; Clay, James M.; Moreau, David; Joyal-Desmarais, Keven (2023-02-23). "Best practices for addressing missing data through multiple imputation". Infant and Child Development. doi: 10.1002/icd.2407 . ISSN   1522-7227.
  18. Derrick, B; Russ, B; Toher, D; White, P (2017). "Test Statistics for the Comparison of Means for Two Samples That Include Both Paired and Independent Observations". Journal of Modern Applied Statistical Methods. 16 (1): 137–157. doi: 10.22237/jmasm/1493597280 .
  19. Chechik, Gal; Heitz, Geremy; Elidan, Gal; Abbeel, Pieter; Koller, Daphne (2008-06-01). "Max-margin Classification of incomplete data" (PDF). Neural Information Processing Systems: 233–240.
  20. Chechik, Gal; Heitz, Geremy; Elidan, Gal; Abbeel, Pieter; Koller, Daphne (2008-06-01). "Max-margin Classification of Data with Absent Features". The Journal of Machine Learning Research. 9: 1–21. ISSN   1532-4435.
  21. Tamer, Elie (2010). "Partial Identification in Econometrics" (PDF). Annual Review of Economics . 2 (1): 167–195. doi:10.1146/annurev.economics.050708.143401.
  22. Mohan, Karthika; Pearl, Judea (2014). "On the testability of models with missing data". Proceedings of AISTAT-2014, Forthcoming.
  23. Darwiche, Adnan (2009). Modeling and Reasoning with Bayesian Networks. Cambridge University Press.
  24. Potthoff, R.F.; Tudor, G.E.; Pieper, K.S.; Hasselblad, V. (2006). "Can one assess whether missing data are missing at random in medical studies?". Statistical Methods in Medical Research. 15 (3): 213–234. doi:10.1191/0962280206sm448oa. PMID   16768297. S2CID   12882831.
  25. Pearl, Judea; Mohan, Karthika (2013). Recoverability and Testability of Missing data: Introduction and Summary of Results (PDF) (Technical report). UCLA Computer Science Department, R-417.
  26. Mohan, K.; Van den Broeck, G.; Choi, A.; Pearl, J. (2014). "An Efficient Method for Bayesian Network Parameter Learning from Incomplete Data". Presented at Causal Modeling and Machine Learning Workshop, ICML-2014.
  27. Mirkes, E.M.; Coats, T.J.; Levesley, J.; Gorban, A.N. (2016). "Handling missing data in large healthcare dataset: A case study of unknown trauma outcomes". Computers in Biology and Medicine. 75: 203–216. arXiv: 1604.00627 . Bibcode:2016arXiv160400627M. doi:10.1016/j.compbiomed.2016.06.004. PMID   27318570. S2CID   5874067. Archived from the original on 2016-08-05.
