Empirical distribution function

Figure: the blue line shows an empirical distribution function, the black bars mark the samples from which it is built, and the gray curve is the true cumulative distribution function.

In statistics, an empirical distribution function is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value.

The empirical distribution function estimates the cumulative distribution function underlying the points in the sample, and it converges to that function with probability 1 according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.

Definition

Let (X1, …, Xn) be independent, identically distributed real random variables with the common cumulative distribution function F(t). Then the empirical distribution function is defined as[1][2]

$$\hat{F}_n(t) = \frac{\text{number of elements in the sample} \le t}{n} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{X_i \le t},$$

where $\mathbf{1}_A$ is the indicator of the event A. For a fixed t, the indicator $\mathbf{1}_{X_i \le t}$ is a Bernoulli random variable with parameter p = F(t); hence $n \hat{F}_n(t)$ is a binomial random variable with mean nF(t) and variance nF(t)(1 − F(t)). This implies that $\hat{F}_n(t)$ is an unbiased estimator for F(t).

However, in some textbooks,[3][4] the definition is given as

$$\hat{F}_n(t) = \frac{1}{n+1} \sum_{i=1}^{n} \mathbf{1}_{X_i \le t}.$$
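As a concrete illustration of both definitions, here is a minimal sketch in Python with NumPy; it is not part of the cited sources, and the function name `ecdf` and its `denominator_offset` parameter are choices made here for illustration:

```python
import numpy as np

def ecdf(sample, t, denominator_offset=0):
    # Evaluate the empirical distribution function of `sample` at points `t`.
    # denominator_offset=0 gives the standard 1/n definition;
    # denominator_offset=1 gives the 1/(n+1) variant used in some textbooks.
    sample = np.sort(np.asarray(sample, dtype=float))
    t = np.atleast_1d(t)
    n = sample.size
    # searchsorted with side="right" counts sample points <= each t.
    counts = np.searchsorted(sample, t, side="right")
    return counts / (n + denominator_offset)

# Example: with 5 observations, the function jumps by 1/5 at each data point.
x = [3.1, 0.2, 1.7, 4.4, 2.9]
print(ecdf(x, [0.0, 1.7, 2.0, 5.0]))  # [0.  0.4 0.4 1. ]
```

Sorting is only one convenient implementation; at a single point t, the direct count `(sample <= t).mean()` gives the same value.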

Asymptotic properties

Since the ratio (n+1) / n approaches 1 as n goes to infinity, the asymptotic properties of the two definitions that are given above are the same.

By the strong law of large numbers, the estimator $\hat{F}_n(t)$ converges to F(t) as n → ∞, almost surely, for every value of t:[1]

$$\hat{F}_n(t) \ \xrightarrow{\text{a.s.}}\ F(t);$$

thus the estimator $\hat{F}_n(t)$ is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t:[5]

$$\|\hat{F}_n - F\|_\infty \equiv \sup_{t \in \mathbb{R}} \big|\hat{F}_n(t) - F(t)\big| \ \xrightarrow{\text{a.s.}}\ 0.$$

The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between the empirical distribution and the assumed true cumulative distribution function F. Other norm functions may reasonably be used here instead of the sup-norm. For example, the L²-norm gives rise to the Cramér–von Mises statistic.
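As a minimal sketch of the Kolmogorov–Smirnov statistic (assuming SciPy is available; the standard normal null distribution below is an arbitrary example, not something prescribed by the cited sources):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)  # sample to be compared against a hypothesized F

# Kolmogorov-Smirnov statistic: sup_t |F_n(t) - F(t)| for F = N(0, 1).
result = stats.kstest(x, stats.norm.cdf)
print(result.statistic, result.pvalue)

# The same sup-norm computed by hand: for continuous F, the supremum is
# attained at the jump points of F_n, i.e., at the order statistics.
xs = np.sort(x)
n = xs.size
F = stats.norm.cdf(xs)
d_plus = np.max(np.arange(1, n + 1) / n - F)   # F_n exceeds F
d_minus = np.max(F - np.arange(0, n) / n)      # F falls below F_n
print(max(d_plus, d_minus))                    # equals result.statistic
```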

The asymptotic distribution can be further characterized in several different ways. First, the central limit theorem states that, pointwise, $\hat{F}_n(t)$ has an asymptotically normal distribution with the standard $\sqrt{n}$ rate of convergence:[1]

$$\sqrt{n}\big(\hat{F}_n(t) - F(t)\big) \ \xrightarrow{d}\ \mathcal{N}\big(0,\, F(t)\big(1 - F(t)\big)\big).$$
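This pointwise asymptotic normality can be checked by simulation; in the following sketch the Exp(1) distribution and the evaluation point t = 1 are arbitrary choices made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, t = 500, 2000, 1.0
F_t = 1.0 - np.exp(-t)  # true CDF of Exp(1) at t

# Each row is one replication; F_n(t) is the fraction of observations <= t.
samples = rng.exponential(size=(reps, n))
F_hat_t = (samples <= t).mean(axis=1)
z = np.sqrt(n) * (F_hat_t - F_t)

print(np.mean(z), np.var(z))   # mean ~ 0, variance ~ F(t)(1 - F(t))
print(F_t * (1.0 - F_t))       # theoretical asymptotic variance
```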

This result is extended by Donsker's theorem, which asserts that the empirical process $\sqrt{n}(\hat{F}_n - F)$, viewed as a function indexed by $t \in \mathbb{R}$, converges in distribution in the Skorokhod space $D[-\infty, +\infty]$ to the mean-zero Gaussian process $G_F = B \circ F$, where B is the standard Brownian bridge.[5] The covariance structure of this Gaussian process is

$$\operatorname{E}\big[G_F(t_1)\, G_F(t_2)\big] = F(t_1 \wedge t_2) - F(t_1)\, F(t_2).$$

The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding:[6]

$$\limsup_{n \to \infty} \frac{\sqrt{n}}{\ln^2 n} \Big\| \sqrt{n}\big(\hat{F}_n - F\big) - G_{F,n} \Big\|_\infty < \infty \quad \text{almost surely},$$

where the $G_{F,n}$ are suitably constructed versions of the limiting Gaussian process $G_F$, defined on the same probability space as the sample.

Alternatively, the rate of convergence of $\sqrt{n}(\hat{F}_n - F)$ can also be quantified in terms of the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of $\sqrt{n}\,\|\hat{F}_n - F\|_\infty$:[6]

$$\Pr\Big(\sqrt{n}\,\big\|\hat{F}_n - F\big\|_\infty > z\Big) \le 2 e^{-2 z^2}.$$
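Inverting the DKW bound gives a simple distribution-free confidence band around the empirical distribution function. The following sketch is our own (the helper name `dkw_band` is not from any cited source); it sets 2·exp(−2nε²) = α and solves for ε:

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    # DKW inequality: P(sup_t |F_n(t) - F(t)| > eps) <= 2 exp(-2 n eps^2).
    # Setting the right-hand side equal to alpha gives the half-width eps.
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    F_n = np.arange(1, n + 1) / n   # F_n evaluated at the order statistics
    lower = np.clip(F_n - eps, 0.0, 1.0)
    upper = np.clip(F_n + eps, 0.0, 1.0)
    return x, lower, upper

# With probability at least 1 - alpha, the band covers F(t) for every t.
x, lo, hi = dkw_band(np.random.default_rng(2).normal(size=100))
```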

In fact, Kolmogorov has shown that if the cumulative distribution function F is continuous, then the expression $\sqrt{n}\,\|\hat{F}_n - F\|_\infty$ converges in distribution to $\|B\|_\infty$, the sup-norm of the standard Brownian bridge, which has the Kolmogorov distribution that does not depend on the form of F.
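SciPy exposes this limiting law as `scipy.stats.kstwobign`. The sketch below, a simulation of our own using Uniform(0,1) samples so that F(t) = t, compares an empirical tail probability of $\sqrt{n}\,D_n$ with the Kolmogorov distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps = 1000, 2000

# sup_t |F_n(t) - t| for a Uniform(0,1) sample, attained at order statistics.
u = np.sort(rng.uniform(size=(reps, n)), axis=1)
grid = np.arange(1, n + 1) / n
d = np.maximum((grid - u).max(axis=1), (u - (grid - 1.0 / n)).max(axis=1))
stat = np.sqrt(n) * d

z = 1.36  # classic 5% critical value of the Kolmogorov distribution
print((stat > z).mean())      # simulated tail probability
print(stats.kstwobign.sf(z))  # limiting tail probability, about 0.049
```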

Another result, which follows from the law of the iterated logarithm, is that[6]

$$\limsup_{n \to \infty} \frac{\sqrt{n}\,\|\hat{F}_n - F\|_\infty}{\sqrt{2 \ln \ln n}} = \frac{1}{2} \quad \text{almost surely},$$

and

$$\liminf_{n \to \infty} \sqrt{2 n \ln \ln n}\; \big\|\hat{F}_n - F\big\|_\infty = \frac{\pi}{2} \quad \text{almost surely}.$$

Logical problems

Anecdotal evidence is often unscientific because various forms of cognitive bias may affect the collection or presentation of evidence. For instance, someone who feels he or she had an encounter with the supernatural may present a very vivid story, but this is not immediately falsifiable. Such a phenomenon can also happen to large groups of people through subjective validation. Anecdotal evidence is also frequently misinterpreted via the availability heuristic, which leads to an overestimation of prevalence. Where a cause can be easily linked to an effect, people overestimate the likelihood of the cause having that effect (availability). In particular, vivid, emotionally charged anecdotes seem more plausible and are given greater weight. A related issue is that it is usually impossible to assess, for every piece of anecdotal evidence, the rate of people not reporting that anecdotal evidence in the population.

Another way anecdotal evidence becomes unscientific is through fallacious reasoning such as the post hoc ergo propter hoc fallacy, the human tendency to assume that if one event happens after another, then the first must be the cause of the second. Another fallacy involves inductive reasoning. For instance, if an anecdote illustrates a desired conclusion rather than a logical conclusion, it is considered a faulty or hasty generalization.[7] Anecdotal evidence presented in this way as proof of a desired conclusion does not prove anything.[8]

In other cases, where some factor affects the probability of an outcome rather than uniquely determining it, selected individual cases prove nothing; e.g. "my grandfather smoked 40 a day until he died at 90" and "my sister never went near anyone who smoked but died of lung cancer". Anecdotes often refer to the exception, rather than the rule: "Anecdotes are useless precisely because they may point to idiosyncratic responses."[9]

More generally, a statistical correlation between things does not in itself prove that one causes the other (a causal link). A study found that television viewing was strongly correlated with sugar consumption, but this does not prove that viewing causes sugar intake (or vice versa).

In medicine anecdotal evidence is also subject to placebo effects:[10] it is well-established that a patient's (or doctor's) expectation can genuinely change the outcome of treatment. Only double-blind randomized placebo-controlled clinical trials can confirm a hypothesis about the effectiveness of a treatment independently of expectations.

By contrast, in science and logic, the "relative strength of an explanation" is based upon its ability to be tested or repeated, proven to be due to the stated cause, and verifiable under neutral conditions in a manner that other researchers will agree has been performed competently, and can check for themselves.

References

  1. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 265. ISBN 0-521-78450-6.
  2. PlanetMath.
  3. Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. Springer. p. 36, Definition 2.4. ISBN 978-1-4471-3675-0.
  4. Madsen, H.O., Krenk, S., Lind, N.C. (2006). Methods of Structural Safety. Dover Publications. pp. 148–149. ISBN 0486445976.
  5. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 266. ISBN 0-521-78450-6.
  6. van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. p. 268. ISBN 0-521-78450-6.
  7. Thompson, B. "Fallacies." Archived April 20, 2006, at the Wayback Machine.
  8. "Logic", via infidels.org.
  9. Sicherer, Scott H. (1999). "Food allergy: When and how to perform oral food challenges". Pediatric Allergy & Immunology. 10 (4): 226–234. doi:10.1034/j.1399-3038.1999.00040.x.
  10. Lee, D. (2005). "Evaluating Medications and Supplement Products", via MedicineNet.
