"It has long been an axiom of mine that the little things are infinitely the most important"
General principles of Parametric Tests
Parametric tests use the properties of the normal distribution to assess whether you can reject the null hypothesis or not. Hence they assume that your data are drawn from a normal distribution, or that sample sizes are sufficiently large that the test statistic follows a normal distribution.
Let us begin by considering what happens if all the observations in your samples are from just one population. In other words, these observations are all measurements (Yi) of the same variable 'Y'. We will assume that this population is normally distributed about its population mean 'μ', with a standard deviation 'σ'. In this situation the sample observations are also likely to be distributed approximately normally.
Provided these assumptions are true, 95% of the observations in your samples will be less than 1.96 standard deviations from the population mean.
In other words, there is only a 5% probability that any one observation will deviate more than 1.96 standard deviations from its population mean. We can use this to develop our first parametric statistical test based on the standard normal distribution - the one-sample Z-test:
The one-sample Z-test
Let us say you have a single PCV observation and you wish to know whether it is likely that it was drawn from a normally distributed population with known mean and standard deviation. You can transform the observation (Yi) to a standard normal deviate (z) by subtracting the population mean (μ) and dividing by the population standard deviation of the observations (σ). Standard normal deviates follow the standard normal distribution.
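This transformation and the associated two-tailed P-value can be sketched in a few lines of Python, using only the standard library (the normal CDF is obtained from `math.erf`). The PCV observation, population mean and standard deviation below are made-up values for illustration:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def one_sample_z(y, mu, sigma):
    """Transform a single observation to a standard normal deviate (z)
    and return it together with its two-tailed P-value."""
    z = (y - mu) / sigma
    p_two_tailed = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_two_tailed

# Hypothetical example: a PCV observation of 22 tested against a
# population with mean 30 and standard deviation 5 (illustrative only).
z, p = one_sample_z(22, mu=30, sigma=5)
print(round(z, 3), round(p, 4))
```

Here z = (22 − 30)/5 = −1.6, which does not exceed the two-tailed critical value of ±1.96, so on these figures the null hypothesis would not be rejected.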
In order to know which critical value to use to compare with our z-value we need to be clear about our hypotheses. Let us adopt the following hypotheses:
You can reject the null hypothesis for any result falling within the rejection region of 5% (α) of the distribution. This rejection region is split into two extreme tails with each tail corresponding to 2.5% of the distribution. A result has to be more than 1.96 standard deviations above or below the mean in order to reject your null hypothesis. Since the normal distribution is symmetrical, the critical value is the same for each tail.
For any result falling within the remaining 95% of the distribution, you accept the null hypothesis. As you might expect, this portion of the distribution is known as the acceptance region (1 − α).
In some situations, however, you may have a different alternative hypothesis. You may know that it is only possible for your observation to come from a population with a lower mean than your known population. In this situation the alternative hypothesis is different:
This is therefore the situation where you are only interested in one tail of the distribution. For a one-tailed test, the critical value is 1.645 standard deviations from the mean. For your alternative hypothesis a result has to be more than 1.645 standard deviations below the mean in order to reject the null hypothesis. For any result falling within the remaining 95% of the distribution, you can accept the null hypothesis.
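A one-tailed decision rule of this kind can be sketched as follows (Python standard library only; the z values tested are arbitrary illustrative figures, chosen to fall just either side of the −1.645 cut-off):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def one_tailed_lower_test(z, alpha=0.05):
    """Lower-tailed Z-test: the one-tailed P-value is the area of the
    lower tail below z; H0 is rejected when that area is below alpha
    (equivalently, when z < -1.645 for alpha = 0.05)."""
    p_lower = norm_cdf(z)
    return p_lower, p_lower < alpha

# z = -1.7 lies below -1.645, so H0 is rejected;
# z = -1.5 does not, so H0 is retained.
print(one_tailed_lower_test(-1.7))
print(one_tailed_lower_test(-1.5))
```

Note that the whole 5% rejection region sits in the lower tail, which is why the cut-off (1.645) is less extreme than the two-tailed value (1.96).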
We have of course already met z values for individual observations in the form of z-scores.
The same reasoning for testing individual observations also applies to means. Thus the Z-test can be used to assess whether a sample mean is likely to have been drawn from a population with known mean and standard deviation. In this case the standard normal deviate is obtained by dividing the deviation of the sample mean from the population mean (μ) by the standard error of the mean (σ/√n).
Again for a two-tailed test you compare it with the critical value of +1.96 or −1.96. If it is greater than 1.96 or less than −1.96, you can conclude (at the 5% significance level) that the sample mean was not drawn from that population.
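The Z-test for a sample mean can be sketched in the same way; the only change from the single-observation case is that the standard error σ/√n replaces σ. The sample figures below are hypothetical:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_test_mean(ybar, mu, sigma, n):
    """Z-test for a sample mean: the deviation of the sample mean from
    mu is divided by the standard error of the mean, sigma/sqrt(n)."""
    z = (ybar - mu) / (sigma / sqrt(n))
    p_two_tailed = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_two_tailed

# Hypothetical figures: a sample of n = 25 with mean 32, tested against
# a population with mu = 30 and sigma = 5, so SE = 5/sqrt(25) = 1.
z, p = z_test_mean(32, mu=30, sigma=5, n=25)
print(round(z, 3), round(p, 4))
```

With these figures z = 2.0, which exceeds 1.96, so the null hypothesis would be rejected at the 5% level even though a single observation 2 units from the mean would not have been significant.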
Testing the significance of other statistics
The significance of various other statistics can be tested with the z-test provided they are distributed normally, are based on large samples and an estimate of their standard error is available. For example the distribution of the Kappa coefficient tends to normality, so we can test whether it deviates from 0 by dividing the value of the statistic by its standard error.
Note that some authorities advocate use of the z-test for Kappa irrespective of sample size, whilst others instead recommend use of the t-distribution for small samples.
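This large-sample test has the same shape whatever the statistic: divide the statistic by its standard error and refer the result to the standard normal distribution. A sketch for a Kappa coefficient, using hypothetical values for kappa and its standard error (how the standard error itself is estimated is not covered here):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_test_statistic(stat, se):
    """Generic large-sample z-test of whether a normally distributed
    statistic (here kappa) deviates from zero: z = statistic / SE."""
    z = stat / se
    p_two_tailed = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_two_tailed

# Hypothetical kappa coefficient and standard error (made-up values):
z, p = z_test_statistic(0.42, se=0.15)
print(round(z, 2), round(p, 4))
```

For small samples, where the t-distribution is recommended instead, the calculation is identical except that the P-value is taken from a t-distribution with the appropriate degrees of freedom.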