Kolmogorov-Smirnov test: one- & two-sample, and related tests


One-sample Kolmogorov-Smirnov test

Worked example I

We base our first example on some data on sole horn moisture content from a study by Higuchi & Nagahata (2001) on cows with and without laminitis. We will assume for now that we wish to test this observed distribution against a reference normal distribution of mean 35.0 and standard deviation 2.0.

The observed cumulative relative frequencies (S(Y)) are obtained by dividing each rank (r) by the number of observations (n). The expected cumulative relative frequencies (F0(Y)) for these quantiles under a normal distribution are given by the area under the normal curve; they are readily obtained from R. Differences are then taken between observed and expected values (S(Y)i - F0(Y)i), and between the observed value for the previous variate and the expected value (S(Y)i-1 - F0(Y)i). The largest absolute difference across these two sets of differences is the test statistic d.

Moisture content of sole horn

Obs #   % moisture   Observed   Expected   S(Y)i - F0(Y)i   S(Y)i-1 - F0(Y)i
           (Y)        (S(Y))    (F0(Y))
  1       32.2        0.0833     0.0808        0.0025           -0.0808
  2       32.3        0.1667     0.0885        0.0782           -0.0052
  3       33.1        0.2500     0.1712        0.0788           -0.0044
  4       33.2        0.3333     0.1841        0.1492            0.0659
  5       33.3        0.4167     0.1977        0.2190            0.1357
  6       34.5        0.5000     0.4013        0.0987            0.0154
  7       35.2        0.5833     0.5398        0.0435           -0.0398
  8       35.3        0.6667     0.5596        0.1070            0.0237
  9       36.5        0.7500     0.7734       -0.0234           -0.1067
 10       36.8        0.8333     0.8159        0.0174           -0.0659
 11       37.0        0.9167     0.8413        0.0753           -0.0080
 12       37.6        1.0000     0.9032        0.0968            0.0134
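
For readers following along in R, the columns of the table can be reproduced along these lines (a minimal sketch; the variable names are ours):

    # Sole horn moisture content (Higuchi & Nagahata 2001), in rank order
    y  <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5,
            35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
    n  <- length(y)
    S  <- (1:n) / n                    # observed cumulative relative frequencies
    F0 <- pnorm(y, mean = 35, sd = 2)  # expected under the reference N(35, 2)
    S - F0                             # S(Y)i   - F0(Y)i
    c(0, S[-n]) - F0                   # S(Y)i-1 - F0(Y)i
    max(abs(c(S - F0, c(0, S[-n]) - F0)))   # d = 0.2190, at the 5th observation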

The process may be easier to follow on the first graph below. The red curve shows the expected cumulative relative frequencies from a normal distribution. Blue points show the observed cumulative relative frequencies. Red points show points on the cumulative normal curve equivalent to observed cumulative relative frequencies. Green points lie immediately before each step-up.

{Fig. 1}

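R's ks.test function carries out the same calculation directly; a minimal sketch, reusing the vector y defined above:

    # one-sample test against the fully specified N(35, 2)
    ks.test(y, "pnorm", mean = 35, sd = 2)   # reports D = 0.219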

A more efficient approach is shown in the second figure above. A correction factor (0.5/n) is subtracted from each observed cumulative relative frequency. Only one difference then has to be calculated for each observed cumulative relative frequency. Once the largest absolute difference is identified, the correction factor is added back on again to give d.
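
A sketch of this shortcut in R, reusing y, n, S and F0 from above:

    # subtract 0.5/n, take the single largest absolute difference,
    # then add 0.5/n back on
    max(abs((S - 0.5/n) - F0)) + 0.5/n   # d = 0.2190, as before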

We said at the start of this worked example that we were testing the observed distribution against a fully defined normal distribution (μ = 35, σ = 2). Usually, however, one is more interested in an omnibus test of normality - using the sample mean and standard deviation as estimates of the population parameters.

The Kolmogorov-Smirnov test should not be used to test such a hypothesis - but we will do it here in R in order to see why it is inappropriate. In this example the mean is 34.75 and the standard deviation is 1.92472.

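In R (again only a sketch), substituting the sample estimates for the population parameters:

    # estimating the parameters from the sample - not a valid use of ks.test,
    # but shown here to illustrate the problem
    mean(y)   # 34.75
    sd(y)     # 1.92472
    ks.test(y, "pnorm", mean = mean(y), sd = sd(y))   # D = 0.191, P = 0.7026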

The P-value we obtain is 0.7026 - which gives no indication of a significant deviation from normality. Let us now use three specialized tests of normality which allow for the fact that one is estimating the parameters from the sample.

  • Lilliefors test of normality
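    A sketch using lillie.test from the nortest package (this assumes nortest is installed; it also provides the cvm.test and ad.test functions used below):

        library(nortest)
        lillie.test(y)   # D = 0.191, P = 0.2628
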
    The maximum difference (D) is estimated in exactly the same way as previously.

    The test statistic is the same (D = 0.191), but the P-value is much lower at 0.2628.

  • Cramér-von Mises's test of normality
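    A sketch using cvm.test from the nortest package loaded above:

        cvm.test(y)   # W = 0.0591, P = 0.3606
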
    This test uses a different test statistic from Kolmogorov-Smirnov and Lilliefors.

    The test statistic (W) is 0.0591, with a P-value of 0.3606.

  • Anderson-Darling test of normality
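    A sketch using ad.test from the nortest package loaded above:

        ad.test(y)   # A = 0.3891, P = 0.3263
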
    This test uses yet another test statistic, one which gives more weight to the tails of the distribution. It is reputedly the most powerful of this family of tests.

    The test statistic (A) is 0.3891, with a P-value of 0.3263.

Conclusions

Whilst the P-value from the Kolmogorov-Smirnov test (0.7026) is not valid for the reasons stated, any of the other three tests could justifiably be used depending on which aspect of the distribution one is most interested in. None of them indicates a significant deviation from normality - although with such a small sample the deviation would have to be very marked to be detected.

Postscript: When there are several (appropriate) tests to choose from, it is very important to select the test a priori, and not just choose the one that gives the desired result. If you want to give more weight to the tails of the distribution, then select the Anderson-Darling test. If you want to give more weight to the centre of the distribution, then select the Lilliefors test.

 

Two-sample Kolmogorov-Smirnov test

Worked example II

Time (hours) from treatment to lambing

Untreated (U)   Treated (T)
     45              51
     87              71
    123              42
    120              37
     70              51
                     78
                     51
                     49
                     56
                     47
                     58
Mean = 89.0     Mean = 53.7

We use the same data on the effect of drug treatment on the length of time from treatment to lambing that we have used previously. An equal-variance t-test on the log transformed data gave a P-value of 0.00986, whilst an unequal-variance t-test on the raw data gave a non-significant P-value of 0.0823. A Wald-Wolfowitz test also gave a non-significant P-value.

The observed cumulative relative frequencies (S1(Y) and S2(Y)) are obtained, within each sample, by dividing rank (r) by the number of observations (n). Differences are then obtained between the two sets of observed values (S1(Y)i - S2(Y)i) at each observed value. The largest absolute difference is d.

{Fig. 2}

The maximum difference here is between the smallest of the five values in sample 1 (45, S = 1/5 = 0.2) and the ninth ranked value in sample 2 (58, S = 9/11 = 0.8182). Hence d = 0.6182. This is also the value given by R.

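A minimal sketch in R (the vector names are ours):

    # time (hours) from treatment to lambing
    untreated <- c(45, 87, 123, 120, 70)
    treated   <- c(51, 71, 42, 37, 51, 78, 51, 49, 56, 47, 58)
    ks.test(untreated, treated)   # D = 0.6182, P = 0.145, plus a warning about ties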

However, we do have a problem with the presence of ties - three observations in the treated group share the same reading (51). R warns that it cannot compute exact P-values when ties are present (although it still reports a P-value of 0.145).

Since in this case observations were rounded to the nearest hour, one possible way round this problem would be to jitter observations with the same readings - giving them randomly chosen values between 50.6 and 51.4. In this particular example, jittering does not affect the value of the test statistic, and enables R to give a (defensible) P-value of 0.125.
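
One way to jitter the tied readings in R (a sketch; the seed is arbitrary, and because the jittered values stay between 50.6 and 51.4 the ordering against the other observations is unchanged):

    set.seed(1)   # arbitrary; d is unaffected by the jittering here
    tied <- treated == 51
    treated[tied] <- runif(sum(tied), min = 50.6, max = 51.4)
    ks.test(untreated, treated)   # D = 0.6182, P = 0.125, no ties warning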

In other words, much like the unequal variance t-test and the Wald-Wolfowitz test, it suggests there is no significant difference in time to lambing between treated and untreated sheep. This reflects the lack of power of the Kolmogorov-Smirnov test to detect differences in distributions between two small samples.