Equality of variance tests

The F-ratio test  Levene's test 

1:   The F-ratio test

The F-ratio test is used to test for equality of two variances. The null hypothesis is that the two sample variances (v1 and v2) come from independent random samples drawn from normal populations with the same population variance. The test statistic (F) is the ratio of the two sample variances where, traditionally, v1 is the larger of the two:

Algebraically speaking -

F = v1 / v2
where:
  • F is the F statistic with n1-1 and n2-1 degrees of freedom;
  • n1 and n2 are the numbers of observations in each sample;
  • v1 and v2 are the two sample variances.

The observed value of F is then compared with the F distribution with the appropriate degrees of freedom (see the sketch below).
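
For example, here is a minimal sketch of the calculation in R. The vectors x and y are simulated purely for illustration, and the object names are our own, not part of any standard:

    # Two-tailed F-ratio test 'by hand', on simulated data
    set.seed(1)
    x <- rnorm(12, mean = 10, sd = 2)    # sample 1 (n1 = 12)
    y <- rnorm(15, mean = 10, sd = 2)    # sample 2 (n2 = 15)

    v1 <- var(x);  n1 <- length(x)
    v2 <- var(y);  n2 <- length(y)

    Fratio <- v1 / v2    # with pf() there is no need to put the larger variance on top
    # two-tailed P-value: twice the smaller of the two tail areas
    P <- 2 * min(pf(Fratio, n1 - 1, n2 - 1),
                 pf(Fratio, n1 - 1, n2 - 1, lower.tail = FALSE))
    Fratio; P

    var.test(x, y)       # R's built-in equivalent, two-tailed by default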

    BEWARE:
    Most statistical tables only deal with the F distribution's upper tail. As a result, traditionally, v1 is assumed to be the larger variance - and any formulae are arranged accordingly, albeit rather confusingly.

    If you are using probability functions, such as those provided by R, those contortions are seldom needed - unless you use textbook stats formulae.

    When used prior to a t-test, there is usually no reason to anticipate which variance will be bigger. Hence a two-tailed test is carried out, with the alternative hypothesis being that the population variances are not equal (v1 ≠ v2). However, the test is most commonly used in analysis of variance where, if H0 is untrue, one of the variances is presumed in advance to be larger than the other. Hence tables of F are often only given in the one-tailed form. If this is the case, use the value given for P = 0.025 rather than P = 0.05.
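
    If you have no tables to hand, R's qf() function gives the critical values directly. A brief sketch (the degrees of freedom below are arbitrary, chosen only for illustration):

        df1 <- 11;  df2 <- 14     # hypothetical degrees of freedom (n1-1 and n2-1)
        # for a two-tailed test at P = 0.05 the upper critical value is the
        # one-tailed tabulated value for P = 0.025
        qf(0.975, df1, df2)       # upper critical value
        qf(0.025, df1, df2)       # lower critical value (needed if the smaller variance is on top)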


2:   Levene's test

There are a number of other tests for equality of variances. We will cover these in more detail when we compare several means using analysis of variance. However, you may encounter them in software packages as an optional check when you carry out a t-test, so we briefly introduce one of them here, namely Levene's test.

In Levene's test we first calculate the absolute deviations of the observations from their respective group means. All subsequent analysis is then done on the two sets of deviations (a worked sketch in R follows below):

  1. Measure the variability (using sums of squares) of the mean of each set of deviations about the overall mean of the deviations.
  2. Measure the variability (using sums of squares) of the individual deviations about their respective group means.
  3. Convert these sums of squares to 'variances' (usually called mean squares) by dividing each by its degrees of freedom.
  4. Use a standard F-ratio test to compare the two resulting variance estimates, with 1 and n-2 degrees of freedom (where n is the total number of observations).

All we are doing here is comparing the variability between the two group means of the deviations with the variability of the individual deviations within each group. In other words, this is an analysis of variance of the deviations.
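
As a rough illustration, Levene's test can be carried out 'by hand' in R as a one-way analysis of variance on the absolute deviations. This sketch reuses the simulated vectors x and y from the F-ratio example above, so it inherits those purely illustrative assumptions:

    # Levene's test as an ANOVA of the absolute deviations from the group means
    d     <- c(abs(x - mean(x)), abs(y - mean(y)))                # the two sets of deviations
    group <- factor(rep(c("x", "y"), c(length(x), length(y))))    # group labels
    anova(lm(d ~ group))    # F-ratio on 1 and n-2 degrees of freedom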

This test is usually considered more robust (that is, less sensitive to its assumptions not being met) than the F-ratio test. But there are still problems if you have unequal numbers of observations in each group. This is because the absolute deviations may be highly skewed, thus violating the assumptions of analysis of variance.

There is a modification of this test called the Brown-Forsythe test. Here the analysis of variance is carried out on the deviations from the group medians rather than the means. However, Glass and Hopkins (1996, p. 436) challenged the robustness of both Levene's test and this modification when sample sizes are unequal and variances are not homogeneous. Some statisticians therefore recommend using the unequal-variance t-test if there is any doubt about the homogeneity of variances, or if sample sizes differ markedly.
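
Again as a rough sketch, still using the illustrative x, y and group objects defined above: the Brown-Forsythe variant simply replaces the group means with the group medians, and the unequal-variance (Welch) t-test is what R's t.test() gives you by default. (The car package's leveneTest() function also uses the group medians unless told otherwise.)

    # Brown-Forsythe variant: deviations are taken from the group medians
    d.med <- c(abs(x - median(x)), abs(y - median(y)))
    anova(lm(d.med ~ group))

    # Unequal-variance (Welch) t-test, the default form of t.test() in R
    t.test(x, y)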