Worked example I
We base our first example on some data on sole horn moisture content from a study by Higuchi & Nagahata (2001) on cows with and without laminitis. We will assume for now that we wish to test this observed distribution against a reference normal distribution of mean 35.0 and standard deviation 2.0.
The observed cumulative relative frequencies (S(Y)) are obtained by dividing rank (r) by the number of observations (n). Expected cumulative relative frequencies (F_{o}(Y)) for these quantiles from a normal distribution are given by the area under the normal curve; they are readily obtained from R. Differences are then calculated between each observed and expected value (S(Y)_{i} - F(Y)_{i}), and between the observed value of the previous variate and the expected value (S(Y)_{i-1} - F(Y)_{i}). The largest absolute difference in these two sets of differences is d.
Moisture content of sole horn

Obs #   % moisture (Y)   Observed S(Y)   Expected F_{o}(Y)   S(Y)_{i} - F(Y)_{i}   S(Y)_{i-1} - F(Y)_{i}
  1         32.2             0.0833          0.0808                0.0025               -0.0808
  2         32.3             0.1667          0.0885                0.0782               -0.0052
  3         33.1             0.2500          0.1712                0.0788               -0.0044
  4         33.2             0.3333          0.1841                0.1492                0.0659
  5         33.3             0.4167          0.1977                0.2190                0.1357
  6         34.5             0.5000          0.4013                0.0987                0.0154
  7         35.2             0.5833          0.5398                0.0435               -0.0398
  8         35.3             0.6667          0.5596                0.1070                0.0237
  9         36.5             0.7500          0.7734               -0.0234               -0.1067
 10         36.8             0.8333          0.8159                0.0174               -0.0659
 11         37.0             0.9167          0.8413                0.0753               -0.0080
 12         37.6             1.0000          0.9032                0.0968                0.0134
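The R code used for this step is not reproduced on this page, but the table can be recalculated with base R along these lines (a sketch; `Y` holds the twelve moisture values from the table):

```r
# Sole horn moisture values from the table above (Higuchi & Nagahata 2001)
Y <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5, 35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
n <- length(Y)

S  <- seq_len(n) / n                      # observed cumulative relative frequencies, r/n
F0 <- pnorm(sort(Y), mean = 35, sd = 2)   # expected under the reference N(35, 2)

d1 <- S - F0                # S(Y)_i     - F(Y)_i
d2 <- c(0, S[-n]) - F0      # S(Y)_{i-1} - F(Y)_i  (taking S(Y)_0 = 0)
d  <- max(abs(c(d1, d2)))   # largest absolute difference: 0.219
round(data.frame(Y = sort(Y), S, F0, d1, d2), 4)
```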
The process may be easier to follow on the first graph below. The red curve shows the expected cumulative relative frequencies from a normal distribution. Blue points show the observed cumulative relative frequencies. Red points show the points on the cumulative normal curve equivalent to the observed cumulative relative frequencies. Green points lie immediately before each step-up.
{Fig. 1}
A more efficient approach is shown in the second figure above. A correction factor (0.5/n) is subtracted from each observed cumulative relative frequency. Only one difference then has to be calculated for each observed cumulative relative frequency. Once the largest absolute difference is identified, the correction factor is added back on again to give d.
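This shortcut can be sketched in base R as follows (assuming `Y` holds the twelve moisture values from the table above):

```r
Y  <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5, 35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
n  <- length(Y)
cf <- 0.5 / n                             # correction factor
S  <- seq_len(n) / n                      # observed cumulative relative frequencies
F0 <- pnorm(sort(Y), mean = 35, sd = 2)   # expected under the reference N(35, 2)

d <- max(abs(S - cf - F0)) + cf   # one difference per observation, then add cf back
d                                 # 0.219, the same value as the two-difference method
```

The equivalence holds because S(Y)_{i} and S(Y)_{i-1} lie exactly 0.5/n above and below the corrected value, so the larger of the two original differences is always the corrected difference plus 0.5/n.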
We said at the start of this worked example that we were testing the observed distribution against a fully defined normal distribution (μ = 35, σ = 2). Usually, however, one is more interested in an omnibus test of normality, using the sample mean and standard deviation as estimates of the population parameters.
The Kolmogorov-Smirnov test should not be used to test such a hypothesis, but we will do it here in R in order to see why it is inappropriate. In this example the sample mean is 34.75 and the sample standard deviation is 1.92472.
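The original call is not shown on this page; a minimal call consistent with the output quoted below would be:

```r
Y <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5, 35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
# Plug the sample estimates in as if they were known population parameters
# (this is exactly what makes the test inappropriate here)
ks.test(Y, "pnorm", mean = mean(Y), sd = sd(Y))
# One-sample Kolmogorov-Smirnov test: D = 0.1911, p-value = 0.7026
```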
The P-value we obtain is 0.7026, which gives no indication of a significant deviation from normality. Let us now use three specialized tests of normality which allow for the fact that one is estimating the parameters from the sample.
Lilliefors test of normality
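The Lilliefors test is not in base R. The package used here is not named on the page; one common implementation (an assumption) is `lillie.test()` from the `nortest` package:

```r
library(nortest)   # install.packages("nortest") if not already installed
Y <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5, 35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
lillie.test(Y)
# Lilliefors (Kolmogorov-Smirnov) normality test: D = 0.1911, p-value = 0.2628
```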
The maximum difference (D) is estimated in exactly the same way as previously.
The test statistic is the same as for the Kolmogorov-Smirnov test above (D = 0.191), but the P-value is much lower, at 0.2628.
Cramér-von Mises test of normality
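Again the package is not named on the page; one implementation (an assumption) is `cvm.test()` from the `nortest` package:

```r
library(nortest)   # install.packages("nortest") if not already installed
Y <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5, 35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
cvm.test(Y)
# Cramer-von Mises normality test: W = 0.0591, p-value = 0.3606
```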
This test uses a different test statistic from the Kolmogorov-Smirnov and Lilliefors tests.
The test statistic (W) is 0.0591, with a P-value of 0.3606.
Anderson-Darling test of normality
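As before, the package is not named on the page; one implementation (an assumption) is `ad.test()` from the `nortest` package:

```r
library(nortest)   # install.packages("nortest") if not already installed
Y <- c(32.2, 32.3, 33.1, 33.2, 33.3, 34.5, 35.2, 35.3, 36.5, 36.8, 37.0, 37.6)
ad.test(Y)
# Anderson-Darling normality test: A = 0.3891, p-value = 0.3263
```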
This test uses yet another test statistic, one which gives more weight to the tails of the distribution. It is reputedly the most powerful of this family of tests.
The test statistic (A) is 0.3891, with a P-value of 0.3263.
Conclusions
Whilst the P-value from the Kolmogorov-Smirnov test (0.7026) is not valid for the reasons stated, any of the other three tests could justifiably be used, depending on which aspect of the distribution one is most interested in. None of them indicates a significant deviation from normality, although with such a small sample the deviation would have to be very marked to be detected.
Postscript: When there are several (appropriate) tests to choose from, it is very important to select the test a priori, and not just choose the one that gives the desired result. If you want to give more weight to the tails of the distribution, then select the Anderson-Darling test. If you want to give more weight to the centre of the distribution, then select the Lilliefors test.