Missing data

In statistics, missing data, or missing values, occur when no data value is stored for the variable in an observation. Missing data are a common occurrence and can have a significant effect on the conclusions that can be drawn from the data.

Missing data can occur because of nonresponse: no information is provided for one or more items or for a whole unit ("subject"). Some items are more likely to generate nonresponse than others, for example items about sensitive subjects such as income. Attrition ("dropout") is a type of missingness that can occur in longitudinal studies, for instance studies of development in which a measurement is repeated after a certain period of time; missingness occurs when participants drop out before the study ends, so that one or more measurements are missing.

Data often are missing in research in economics, sociology, and political science because governments choose not to, or fail to, report critical statistics.[1] Sometimes missing values are caused by the researcher—for example, when data collection is done improperly or mistakes are made in data entry.[2]

Missing data can be classified into three types, with different impacts on the validity of conclusions drawn from research: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR).

Types of missing data

Understanding the reasons why data are missing is important for correctly handling the remaining data. If values are missing completely at random, the data sample is likely still representative of the population. But if the values are missing systematically, analysis may be biased. For example, in a study of the relation between IQ and income, if participants with an above-average IQ tend to skip the question "What is your salary?", analyses that do not take this missingness into account (the MAR pattern; see below) may falsely fail to find a positive association between IQ and salary. Because of these problems, methodologists routinely advise researchers to design studies to minimize the occurrence of missing values.[2] Graphical models[3][4] can be used to describe the missing data mechanism in detail.

As an illustration, consider estimating the expected intensity of depression in a population from samples of 60 cases, where the true population is a standardised normal distribution and the nonresponse probability is a logistic function of the intensity of depression. The more data are missing (MNAR), the more biased the estimates become: the intensity of depression in the population is underestimated.
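This setup can be reproduced with a short simulation. The logistic nonresponse mechanism and the sample size of 60 follow the description above; the seed and the number of replications are arbitrary choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sim = 60, 2000          # 60 cases per sample, as above; 2000 replications (arbitrary)

naive_means = []
for _ in range(n_sim):
    depression = rng.standard_normal(n)            # true scores: standardised normal
    p_nonresponse = 1 / (1 + np.exp(-depression))  # logistic in the intensity of depression
    observed = depression[rng.random(n) > p_nonresponse]
    if observed.size > 0:
        naive_means.append(observed.mean())

# The mean of the observed values systematically underestimates the true
# population mean of 0, because more depressed participants are more likely
# to be missing (MNAR).
print(np.mean(naive_means))
```

With this mechanism the average of the observed means settles around -0.4 rather than 0, illustrating the downward bias described above.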

Missing completely at random

Values in a data set are missing completely at random (MCAR) if the events that lead to any particular data-item being missing are independent both of observable variables and of unobservable parameters of interest, and occur entirely at random.[5] When data are MCAR, the analysis performed on the data is unbiased; however, data are rarely MCAR.

In the case of MCAR, the missingness of data is unrelated to any study variable: thus, the participants with completely observed data are in effect a random sample of all the participants assigned a particular intervention. With MCAR, the random assignment of treatments is assumed to be preserved, but that is usually an unrealistically strong assumption in practice.[6]

Missing at random

Missing at random (MAR) occurs when the missingness is not completely random, but can be fully accounted for by variables for which there is complete information.[7] MAR is an assumption that is impossible to verify statistically; we must rely on its substantive reasonableness.[8] An example is that males are less likely to fill in a depression survey, but this has nothing to do with their level of depression after accounting for maleness. Such data can still induce parameter bias in analyses due to the contingent emptiness of cells (for example, the cell for males with very high depression may have zero entries).

Missing not at random

Missing not at random (MNAR) (also known as nonignorable nonresponse) is data that is neither MAR nor MCAR (i.e. the value of the missing variable is related to the reason it is missing).[5] To extend the previous example, this would occur if men failed to fill in a depression survey because of their level of depression.

Techniques of dealing with missing data

Missing data reduce the representativeness of the sample and can therefore distort inferences about the population. Where possible, researchers should consider how to prevent missing data before the actual data gathering takes place. For example, computer questionnaires often make it impossible to skip a question: each question must be answered before the participant can continue to the next. This eliminates missing values due to the participant, though the method may not be permitted by an ethics board overseeing the research. In survey research, it is common to make multiple efforts to contact each individual in the sample, often sending letters to attempt to persuade those who have decided not to participate to change their minds.[9]:161–187 However, such techniques can either help or hurt in terms of reducing the negative inferential effects of missing data, because the kind of people who are willing to be persuaded to participate after initially refusing or not being home are likely to be significantly different from the kinds of people who will still refuse or remain unreachable after additional effort.[9]:188–198

In situations where missing data are likely to occur, the researcher is often advised to plan to use methods of data analysis that are robust to missingness. An analysis is robust when we are confident that mild to moderate violations of the technique's key assumptions will produce little or no bias or distortion in the conclusions drawn about the population.

Imputation

If it is known that the data analysis technique to be used is not robust to missing data, it is good to consider imputing the missing values. This can be done in several ways; multiple imputation is the recommended approach. Rubin (1987) argued that even a small number (5 or fewer) of repeated imputations enormously improves the quality of estimation.[2]

For many practical purposes, 2 or 3 imputations capture most of the relative efficiency that could be captured with a larger number of imputations. However, a too-small number of imputations can lead to a substantial loss of statistical power, and some scholars now recommend 20 to 100 or more.[10] Any multiply-imputed data analysis must be repeated for each of the imputed data sets and, in some cases, the relevant statistics must be combined in a relatively complicated way.[2]
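As a minimal sketch of multiple imputation with Rubin's pooling rules, consider estimating the mean of a variable y that is missing at random given a fully observed x. The data, mechanism, and number of imputations (m = 20) are invented here, and the imputation model is deliberately simplified (the regression parameters are not redrawn for each imputation, as a fully proper procedure would do):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical data: y depends on x; y is MAR given the fully observed x
n = 2000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)
miss = rng.random(n) < 1 / (1 + np.exp(-x))   # higher x -> y more often missing
y_obs = np.where(miss, np.nan, y)

m = 20                                         # number of imputations
cc = ~np.isnan(y_obs)                          # complete cases
# regression of y on x, fitted to the complete cases
b, a = np.polyfit(x[cc], y_obs[cc], 1)
resid_sd = np.std(y_obs[cc] - (a + b * x[cc]))

estimates, variances = [], []
for _ in range(m):
    y_imp = y_obs.copy()
    k = (~cc).sum()
    y_imp[~cc] = a + b * x[~cc] + rng.normal(0, resid_sd, k)  # stochastic imputation
    estimates.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / n)    # within-imputation variance of the mean

# Rubin's rules: pooled estimate, within- and between-imputation variance
qbar = np.mean(estimates)
W, B = np.mean(variances), np.var(estimates, ddof=1)
T = W + (1 + 1 / m) * B                        # total variance of the pooled estimate
```

Here the complete-case mean is clearly biased (observed y values come disproportionately from low-x participants), while the pooled multiply-imputed estimate lands near the true mean of 0, with T combining sampling variance and imputation uncertainty.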

Examples of imputations are listed below.

Partial imputation

The expectation-maximization algorithm is an approach in which values of the statistics which would be computed if a complete dataset were available are estimated (imputed), taking into account the pattern of missing data. In this approach, values for individual missing data-items are not usually imputed.
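The following is a minimal numpy sketch of this idea for a bivariate normal distribution in which one variable is partially missing at random given the other; the data-generating numbers are invented. The per-row conditional expectations are used only to build the expected sufficient statistics (means, variances, covariance); no imputed dataset is produced as output.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x1 = rng.standard_normal(n)
x2 = 1.0 + 0.8 * x1 + 0.6 * rng.standard_normal(n)
miss = rng.random(n) < 1 / (1 + np.exp(-x1))   # x2 is MAR given the observed x1
x2_obs = np.where(miss, np.nan, x2)

# EM for the mean and covariance of a bivariate normal with x2 partially missing.
# Start from the (biased) observed-data moments and iterate.
mu = np.array([x1.mean(), np.nanmean(x2_obs)])
var1 = x1.var()
var2 = np.nanvar(x2_obs)
cov12 = 0.0
for _ in range(50):
    # E-step: expected x2 and x2^2 for the missing rows, given x1
    beta = cov12 / var1
    resid = var2 - beta * cov12                # conditional variance of x2 given x1
    e2 = np.where(miss, mu[1] + beta * (x1 - mu[0]), x2_obs)
    e22 = np.where(miss, e2**2 + resid, x2_obs**2)
    # M-step: update the parameters from the expected sufficient statistics
    mu = np.array([x1.mean(), e2.mean()])
    var1 = x1.var()
    var2 = e22.mean() - mu[1]**2
    cov12 = (x1 * e2).mean() - mu[0] * mu[1]
```

The naive observed-data mean of x2 is biased downward (values of x2 are missing more often when x1 is high), whereas the EM estimates converge near the true mean (1.0) and covariance (0.8).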

Partial deletion

Methods which involve reducing the data available to a dataset having no missing values include:

  - Listwise (complete-case) deletion: records with one or more missing values are discarded, and the analysis uses only complete records.
  - Pairwise deletion (available-case analysis): each statistic is computed from all records that are complete on the variables that statistic involves.
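Two common partial-deletion approaches are listwise (complete-case) deletion and pairwise deletion; a minimal numpy sketch on an invented toy matrix contrasts them:

```python
import numpy as np

# toy data matrix with missing entries (np.nan); columns are variables
X = np.array([
    [1.0, 2.0, np.nan],
    [2.0, np.nan, 3.0],
    [3.0, 4.0, 5.0],
    [4.0, 5.0, 6.0],
])

# Listwise (complete-case) deletion: drop every row with any missing value
complete = X[~np.isnan(X).any(axis=1)]
print(complete.shape)        # (2, 3): only the two fully observed rows remain

# Pairwise deletion: each statistic uses all rows observed for that pair
def pairwise_cov(a, b):
    ok = ~np.isnan(a) & ~np.isnan(b)
    return np.cov(a[ok], b[ok])[0, 1]

c01 = pairwise_cov(X[:, 0], X[:, 1])   # uses 3 rows, not just the 2 complete ones
```

Pairwise deletion retains more information per statistic, but different statistics are then based on different subsets of the data, which can make a covariance matrix internally inconsistent.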

Full analysis

Methods which take full account of all information available, without the distortion resulting from using imputed values as if they were actually observed:

Interpolation

Main article: Interpolation

In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points.
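For an evenly sampled series, missing interior points can be filled by linear interpolation from their known neighbours. A minimal sketch with an invented time series:

```python
import numpy as np

# hypothetical evenly spaced time series with two interior gaps
t = np.arange(8.0)
y = np.array([0.0, 1.0, np.nan, 3.0, 4.0, np.nan, 6.0, 7.0])

missing = np.isnan(y)
# linear interpolation of the missing points from the known data points
y_filled = y.copy()
y_filled[missing] = np.interp(t[missing], t[~missing], y[~missing])
print(y_filled)   # [0. 1. 2. 3. 4. 5. 6. 7.]
```

Note that interpolation only constructs points within the range of the known data; it is not suitable for extrapolating beyond the first or last observed value.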

Model-based techniques

Model-based techniques, often using graphs, offer additional tools for testing missing data types (MCAR, MAR, MNAR) and for estimating parameters under missing data conditions. For example, a test for refuting MAR/MCAR reads as follows:

For any three variables X, Y, and Z where Z is fully observed and X and Y partially observed, the data should satisfy:

    X ⫫ R_y | (Z, R_x = 0)

where R_x and R_y are the missingness indicators of X and Y (R = 0 when the variable is recorded).

In words, the observed portion of X should be independent of the missingness status of Y, conditional on every value of Z. Failure to satisfy this condition indicates that the problem belongs to the MNAR category.[11]
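A rough numerical illustration of this refutation test, with invented mechanisms and a crude stratified mean comparison standing in for a formal conditional-independence test:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
z = rng.integers(0, 2, n)            # fully observed binary Z
x = z + rng.standard_normal(n)       # X depends on Z
rx = rng.random(n) < 0.2             # X is partially observed (True = missing)

# MAR mechanism: missingness of Y depends only on the observed Z
ry_mar = rng.random(n) < np.where(z == 1, 0.5, 0.1)
# MNAR mechanism: missingness of Y depends on X itself
ry_mnar = rng.random(n) < 1 / (1 + np.exp(-x))

def max_stratum_gap(ry):
    """Largest mean gap in observed X between R_y groups, within Z strata."""
    gaps = []
    for zv in (0, 1):
        keep = (z == zv) & ~rx       # observed portion of X in this Z stratum
        gaps.append(abs(x[keep & ry].mean() - x[keep & ~ry].mean()))
    return max(gaps)

print(max_stratum_gap(ry_mar))       # near 0: test passes, MAR not refuted
print(max_stratum_gap(ry_mnar))      # large gap: condition violated, MNAR
```

In practice one would replace the mean comparison with a proper statistical test of conditional independence, but the logic is the same: a systematic dependence of the observed X values on R_y within levels of Z refutes MAR/MCAR.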

(Remark: These tests are necessary for variable-based MAR, which is a slight variation of event-based MAR.[12][13][14])

When data fall into the MNAR category, techniques are available for consistently estimating parameters when certain conditions hold in the model.[3] For example, if Y explains the reason for missingness in X and Y itself has missing values, the joint probability distribution of X and Y can still be estimated if the missingness of Y is random. The estimand in this case will be:

    P(X, Y) = P(X | Y) P(Y)
            = P(X | Y, R_x = 0, R_y = 0) P(Y | R_y = 0)

where R_x = 0 and R_y = 0 denote the observed portions of their respective variables.

Different model structures may yield different estimands and different procedures of estimation whenever consistent estimation is possible. The preceding estimand calls for first estimating P(X | Y) from complete data and multiplying it by P(Y) estimated from cases in which Y is observed regardless of the status of X. Moreover, in order to obtain a consistent estimate it is crucial that the second term be P(Y | R_y = 0) as opposed to P(Y | R_x = 0, R_y = 0).
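This recovery recipe can be checked numerically with invented binary variables, where Y drives the missingness of X and Y itself is missing completely at random:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000
y = rng.random(n) < 0.5                      # P(Y=1) = 0.5
x = rng.random(n) < np.where(y, 0.8, 0.3)    # P(X=1|Y=1)=0.8, P(X=1|Y=0)=0.3
rx = rng.random(n) < np.where(y, 0.6, 0.1)   # X missing more often when Y=1 (MNAR for X)
ry = rng.random(n) < 0.3                     # Y missing completely at random

both = ~rx & ~ry                             # complete cases

# estimand: P(X=1, Y=1) = P(X=1 | Y=1, Rx=0, Ry=0) * P(Y=1 | Ry=0)
p_x1_given_y1 = (x & y & both).sum() / (y & both).sum()   # from complete cases
p_y1 = (y & ~ry).sum() / (~ry).sum()         # from all cases where Y is observed
recovered = p_x1_given_y1 * p_y1             # consistent for the true 0.8 * 0.5 = 0.4

# naive complete-case estimate of P(X=1, Y=1) is biased, because Rx depends on Y
naive = (x & y & both).sum() / both.sum()
```

The recovered estimate converges to the true joint probability of 0.4, while the naive complete-case estimate does not, because conditioning on R_x = 0 distorts the distribution of Y.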

In many cases model-based techniques permit the model structure to undergo refutation tests.[14] Any model which implies the independence between a partially observed variable X and the missingness indicator of another variable Y, conditional on Z (i.e. X ⫫ R_y | Z), can be submitted to the following refutation test: X ⫫ R_y | (Z, R_x = 0).

Finally, the estimands that emerge from these techniques are derived in closed form and do not require iterative procedures such as expectation-maximization, which are susceptible to local optima.[15]

A special class of problems appears when the probability of missingness depends on time. For example, in trauma databases the probability of losing data about the trauma outcome depends on the number of days since the trauma. In these cases, various non-stationary Markov chain models are applied.[16]

References

  1. Messner, S.F. (1992). "Exploring the Consequences of Erratic Data Reporting for Cross-National Research on Homicide". Journal of Quantitative Criminology. 8 (2): 155–173. doi:10.1007/bf01066742.
  2. Hand, David J.; Adèr, Herman J.; Mellenbergh, Gideon J. (2008). Advising on Research Methods: A Consultant's Companion. Huizen, Netherlands: Johannes van Kessel. pp. 305–332. ISBN 90-79418-01-3.
  3. Mohan, Karthika; Pearl, Judea; Tian, Jin (2013). Advances in Neural Information Processing Systems 26. pp. 1277–1285.
  4. Karvanen, Juha (2015). "Study design in causal models". Scandinavian Journal of Statistics. 42 (2): 361–377. doi:10.1111/sjos.12110.
  5. Polit, D.F.; Beck, C.T. (2012). Nursing Research: Generating and Assessing Evidence for Nursing Practice, 9th ed. Philadelphia, USA: Wolters Kluwer Health, Lippincott Williams & Wilkins.
  6. Deng. "On Biostatistics and Clinical Trials". Retrieved 13 May 2016.
  7. http://missingdata.lshtm.ac.uk/index.php?option=com_content&view=article&id=76%3Amissing-at-random-mar&catid=40%3Amissingness-mechanisms&Itemid=96
  8. Little, Roderick (2002). Statistical Analysis with Missing Data. Hoboken, NJ: Wiley. ISBN 978-0471183860.
  9. Stoop, I.; Billiet, J.; Koch, A.; Fitzgerald, R. (2010). Reducing Survey Nonresponse: Lessons Learned from the European Social Survey. Oxford: Wiley-Blackwell. ISBN 0-470-51669-0.
  10. Graham, J.W.; Olchowski, A.E.; Gilreath, T.D. (2007). "How Many Imputations Are Really Needed? Some Practical Clarifications of Multiple Imputation Theory". Prevention Science. 8 (3): 208–213. doi:10.1007/s11121-007-0070-9.
  11. Mohan, Karthika; Pearl, Judea (2014). "On the testability of models with missing data". Proceedings of AISTATS-2014, forthcoming.
  12. Darwiche, Adnan (2009). Modeling and Reasoning with Bayesian Networks. Cambridge University Press.
  13. Potthoff, R.F.; Tudor, G.E.; Pieper, K.S.; Hasselblad, V. (2006). "Can one assess whether missing data are missing at random in medical studies?". Statistical Methods in Medical Research. 15 (3): 213–234. doi:10.1191/0962280206sm448oa.
  14. Pearl, Judea; Mohan, Karthika (2013). Recoverability and Testability of Missing Data: Introduction and Summary of Results (Technical report R-417). UCLA Computer Science Department.
  15. Mohan, K.; Van den Broeck, G.; Choi, A.; Pearl, J. (2014). "An Efficient Method for Bayesian Network Parameter Learning from Incomplete Data". Presented at the Causal Modeling and Machine Learning Workshop, ICML-2014.
  16. Mirkes, E.M.; Coats, T.J.; Levesley, J.; Gorban, A.N. (2016). "Handling missing data in large healthcare dataset: A case study of unknown trauma outcomes". Computers in Biology and Medicine. 75: 203–216. doi:10.1016/j.compbiomed.2016.06.004.

This article is issued from Wikipedia (version of 11/18/2016). The text is available under the Creative Commons Attribution/Share-Alike license; additional terms may apply for the media files.