Non-orthogonal (unbalanced) factorial designs

There are two ways in which a factorial design can be unbalanced.

  1. The number of replications of each combination may vary.
    This may result from missing observations - say data on a particular replicate in an experiment are lost. This can be allowed for in a factorial ANOVA providing those missing observations are missing at random. We considered this aspect back in Unit 2, but just as a reminder, if one particular combination of treatment levels kills some of the patients, then those observations are clearly not missing at random!
  2. Some combinations may be missing altogether.
    This poses bigger difficulties and is most readily dealt with by ignoring the factorial nature of the design and analyzing the data with a one-factor ANOVA, treating each combination as a separate treatment (a minimal sketch of this approach follows the list). We will concentrate here on a factorial design with different numbers of replicates per combination.
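
For concreteness, here is a minimal R sketch of the approach mentioned in point 2: collapsing the surviving factor combinations into a single treatment factor and running a one-factor ANOVA. The data are simulated and the factor names (A, B) are hypothetical, so treat it as a pattern rather than a worked example.

    # simulate a 2 x 3 factorial with 4 replicates, then delete one combination entirely
    set.seed(1)
    dat <- expand.grid(A   = factor(c("a1", "a2")),
                       B   = factor(c("b1", "b2", "b3")),
                       rep = 1:4)
    dat$y <- rnorm(nrow(dat), mean = 10)
    dat <- subset(dat, !(A == "a2" & B == "b3"))   # combination a2:b3 missing altogether

    # treat each surviving A x B combination as one level of a single factor
    dat$comb <- interaction(dat$A, dat$B, drop = TRUE)
    summary(aov(y ~ comb, data = dat))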

If one calculates sums of squares for an unbalanced design in the same way as for a balanced design (in other words, sequential Type I SS), one (arguably) encounters a problem. Unlike with a balanced design, the sums of squares for the main effects (although not for the interaction or residual) will vary depending on the order in which the factors are entered into the model. This is because the design is no longer orthogonal, and as a result some of the explanatory variables may be positively or negatively correlated. Hence for unbalanced designs it is common to use adjusted sums of squares that are not affected by the order of factors in the model.
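
The order dependence is easy to demonstrate. In the sketch below (again simulated data and hypothetical factor names) the same unbalanced two-factor data set is fitted with the factors entered in each order; the sequential main-effect sums of squares differ, while the interaction and residual sums of squares do not.

    # simulate a 2 x 2 factorial with 5 replicates, then drop a few rows to unbalance it
    set.seed(2)
    dat <- expand.grid(A   = factor(c("a1", "a2")),
                       B   = factor(c("b1", "b2")),
                       rep = 1:5)
    dat$y <- rnorm(nrow(dat), mean = 10)
    dat <- dat[-(1:3), ]

    anova(lm(y ~ A * B, data = dat))   # A entered first
    anova(lm(y ~ B * A, data = dat))   # B entered first: main-effect SS change,
                                       # interaction and residual SS do not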

    Adjusted sums of squares are calculated for each variable after adjusting for the other variables in the statistical model. With Type II sums of squares (also termed HTO, or higher-level terms omitted, SS) adjustments are only made for the other variables at the same level - for example, in a two-factor experiment the main effect of A would be adjusted for the main effect of B but not for the A × B interaction. With Type III sums of squares (also termed HTI, or higher-level terms included, SS) the adjustment also includes interactions at higher levels.

    These adjusted sums of squares are calculated by carrying out a series of analyses using sequential sums of squares, with each term (main effects and interactions) placed last in the model formula in turn. When a term is placed last, it is automatically adjusted for the other terms entered before it. Considering Type II sums of squares first, in a two-factor experiment one would obtain SSAB from the model A + B + A×B, SSA from the model B + A, and SSB from the model A + B. SSA, SSB and SSAB are then used to build a composite table together with the residual sums of squares, which remain unchanged. The entries of this composite table will, however, no longer add up to the total sums of squares. Type III sums of squares are calculated in the same way, but with the higher-order interactions also included in the adjustment.
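
    As a rough sketch of this 'each term last' refitting, reusing the simulated unbalanced data frame dat built above (the car package is an add-on, not part of base R):

        # each table's last row is the sum of squares adjusted for the terms before it
        anova(lm(y ~ B + A, data = dat))         # last row: SSA adjusted for B (Type II)
        anova(lm(y ~ A + B, data = dat))         # last row: SSB adjusted for A (Type II)
        anova(lm(y ~ A + B + A:B, data = dat))   # last row: SSAB

        # the car add-on package produces the same adjusted tables directly;
        # Type III requires sum-to-zero contrasts
        # install.packages("car")
        library(car)
        Anova(lm(y ~ A * B, data = dat), type = 2)
        Anova(lm(y ~ A * B, data = dat,
                 contrasts = list(A = "contr.sum", B = "contr.sum")), type = 3)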

So far, so good. But which is the 'right' type of sums of squares to use? Well, statisticians have been arguing about this for many years, and we have to refer you to the literature (see below) to get the full flavour of the debate. The commonest recommendation (adopted by Doncaster & Davey (2007) and Quinn & Keough (2002)) is to use Type II sums of squares for models with fixed cross factors and Type III sums of squares for models with random cross factors.

But an alternative approach is gaining ground. We said above that obtaining different sums of squares, depending on the order in which terms are entered in the model, is arguably a problem. But a growing number of authorities (best reviewed by Hector et al. (2010)) argue that we should shift from searching for the 'right' ANOVA table towards presenting one or more models that best match the objectives of the analysis. For example, if one has two fixed factors - gender (A) and treatment (B) - it may be quite logical to assess the effect of treatment only after first taking account of gender. Comparing the results of different sequential analyses may tell one more than a single analysis. Users of R may well move in this direction simply because base R only provides sequential (Type I) sums of squares - Type II and Type III sums of squares are only available via an add-on package (such as car).
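
As an illustration of that gender-then-treatment strategy, here is a minimal sketch (simulated data, hypothetical variable names) of a sequential analysis with gender entered before treatment, so that the treatment sum of squares is adjusted for gender:

    # unequal numbers of males and females make the design unbalanced
    set.seed(3)
    trial <- data.frame(gender    = factor(rep(c("F", "M"), times = c(14, 10))),
                        treatment = factor(sample(c("control", "drug"), 24, replace = TRUE)))
    trial$response <- rnorm(24, mean = 5)

    # sequential (Type I) analysis: treatment assessed after gender
    anova(lm(response ~ gender + treatment + gender:treatment, data = trial))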