Lilliefors test

The Lilliefors test is an adaptation of the Kolmogorov-Smirnov one-sample test for use when the parameters of the theoretical distribution are estimated from the data rather than specified a priori.

The test statistic, D, is the same as in the Kolmogorov-Smirnov test - namely the maximum absolute difference between the empirical distribution function and the theoretical cumulative distribution function. However, the critical values with which D is compared are different. Because estimating the parameters from the data pulls the fitted distribution towards the sample, the maximum difference is smaller than it would be if the comparison were with a fully specified distribution. The null distribution of this test statistic was computed by Lilliefors using Monte Carlo methods, although analytical approximations have since been proposed.
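To make the statistic concrete, here is a minimal pure-Python sketch (not from the original article; the function name is illustrative) of computing D for a test of normality, with the mean and standard deviation estimated from the sample itself:

```python
import math

def normal_cdf(x, mu, sigma):
    # Normal cumulative distribution function via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def lilliefors_D(sample):
    """Lilliefors test statistic for normality: the maximum absolute
    difference between the empirical distribution function and a normal
    CDF whose mean and SD are estimated from the sample."""
    n = len(sample)
    xs = sorted(sample)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    d = 0.0
    for i, x in enumerate(xs):
        f = normal_cdf(x, mu, sigma)
        # The EDF steps from i/n to (i+1)/n at each order statistic,
        # so check the discrepancy on both sides of the step
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d
```

Note that D is computed exactly as in the one-sample Kolmogorov-Smirnov test; it is only the reference distribution for D that changes, because mu and sigma here come from the data.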

The Lilliefors test is most commonly used to test normality, although tabulated critical values are also available for the exponential distribution. The Lilliefors approach can in principle be applied to any continuous distribution using Monte Carlo methods. It is implemented in R as lillie.test in the package nortest, although the nortest documentation describes its performance as worse than that of either the Anderson-Darling test or the Cramer-von Mises test.
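The Monte Carlo approach mentioned above can be sketched in a few lines: simulate many samples from the fitted null distribution, recompute D (re-estimating the parameters each time), and take a high quantile of the simulated D values as the critical value. The sketch below, with illustrative function names, does this for the normal case; in practice one would use published tables or the analytical approximations rather than re-simulating.

```python
import math
import random

def lilliefors_D(sample):
    # Lilliefors statistic: max |EDF - fitted normal CDF|,
    # with mean and SD estimated from the sample
    n = len(sample)
    xs = sorted(sample)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    d = 0.0
    for i, x in enumerate(xs):
        f = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def lilliefors_critical_value(n, alpha=0.05, reps=2000, seed=1):
    """Approximate the Lilliefors critical value for sample size n by
    Monte Carlo: draw normal samples, compute D for each (parameters
    re-estimated per sample), and take the (1 - alpha) quantile of the
    simulated null distribution."""
    rng = random.Random(seed)
    ds = sorted(lilliefors_D([rng.gauss(0.0, 1.0) for _ in range(n)])
                for _ in range(reps))
    return ds[int((1 - alpha) * reps)]
```

The same recipe works for other continuous distributions: replace the normal CDF and the parameter estimates with those of the distribution being tested, and simulate from the fitted distribution instead of the standard normal.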

Although use of the Lilliefors distribution greatly improves on the uncorrected one-sample test, the test still has relatively low power to reject the null hypothesis. In particular, it remains relatively insensitive to discrepancies in the tails of the distribution.