Fully replicated factorial ANOVA

Principles

We define a factorial design as one in which you have fully replicated measures on two or more crossed factors. Note that there are several other definitions of a factorial design in the literature; Sokal & Rohlf (1995), for example, define the term rather differently.

In a factorial design multiple independent effects are tested simultaneously. Each level of one factor is tested in combination with each level of the other(s), so the design is orthogonal. The analysis of variance aims to investigate both the independent and the combined effect of each factor on the response variable. The combined effect is investigated by assessing whether there is a significant interaction between the factors. If there is no interaction, the effect on the response variable of particular levels of two factors acting together will equal the sum of the responses to the same levels of the two factors in isolation. If interaction is present, the effect of the two factors together may be greater or less than the sum of the two in isolation. Interaction plots are a useful way to display such effects.

In the analysis one first calculates sums of squares for the main effects. These are the effects of each independent variable on the response variable, ignoring the effects of all other independent variables. Only then do we consider the interactions, which are the variability left over after the main effects have been accounted for. Since the sums of squares account for components in sequence they are known as sequential or Type I SS. This is important because (as we shall see below) there are other ways to calculate sums of squares, which we will encounter when we look at unbalanced designs.

When it comes to interpreting the results the process works in the opposite direction. One first examines the interaction terms. If an interaction is significant, one compares levels of one factor at each level of the other factor. These are (rather confusingly) known as the simple main effects. Generally the main effect (the average response to a factor irrespective of the other factor) is only of interest if the interaction is not significant, although if there is only slight interaction one may still be interested in the average level. One should always bear in mind that any factorial ANOVA has less power to detect an interaction than it has to detect a main effect. For example, for a 2 × 2 factorial one would need four times as many experimental units to have the same power to demonstrate the interaction as a main effect. This matters if the main interest is in detecting an interaction (see our example on detecting the interaction between temperature and carbon dioxide on plant growth).

Factorial treatment structures can be used in a completely randomized design or as part of a variety of other designs. We provide here the mathematical model and computational details for the designs we covered in the core text (the completely randomized and randomized complete block designs). We also consider the nested cross-factored design and (in a related topic) the thorny issue of non-orthogonal (unbalanced) factorial designs.
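To make the sequential (Type I) decomposition described above concrete, here is a minimal sketch in Python using statsmodels. The factors 'water' and 'fert', the simulated response and the effect sizes are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a fully replicated 2 x 2 factorial ANOVA with
# sequential (Type I) sums of squares; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
water = np.repeat(["low", "high"], 10)            # factor A, 2 levels
fert = np.tile(np.repeat(["none", "NPK"], 5), 2)  # factor B, 2 levels
# Response: additive effects plus a small interaction plus random error
y = (20 + 3 * (water == "high") + 5 * (fert == "NPK")
     + 2 * ((water == "high") & (fert == "NPK")) + rng.normal(0, 2, 20))
df = pd.DataFrame({"water": water, "fert": fert, "y": y})

# Fit the crossed model; anova_lm with typ=1 gives sequential (Type I) SS,
# so the main effects are fitted first and the interaction takes what is left.
model = ols("y ~ C(water) * C(fert)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))
```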
Factorial in completely randomized design

Fixed effects model

We first consider an experiment where we have two (or more) fixed factors. Treatment combinations are assigned at random to experimental units.
The model for analysis of variance of this design is given below.
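In conventional notation the fixed-effects (Model I) two-factor model for a completely randomized design can be written as follows (a standard sketch, with α and β denoting the effects of factors A and B):

```latex
% Fixed-effects two-factor ANOVA model for a completely randomized design,
% with i = 1..a levels of A, j = 1..b levels of B and k = 1..n replicates
% per treatment combination:
\[
Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijk},
\qquad \epsilon_{ijk} \sim N(0,\sigma^2)
\]
% where \alpha_i and \beta_j are the (fixed) main effects of factors A and B,
% (\alpha\beta)_{ij} is their interaction, and \epsilon_{ijk} is random error.
```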
The F-ratio for factor A is obtained by dividing MSA by MSError. The P-value for this F-ratio is obtained for a − 1 and ab(n − 1) degrees of freedom.

The F-ratio for factor B is obtained by dividing MSB by MSError. The P-value for this F-ratio is obtained for b − 1 and ab(n − 1) degrees of freedom.

The F-ratio for the A × B interaction is obtained by dividing MSA×B by MSError. The P-value for this F-ratio is obtained for (a − 1)(b − 1) and ab(n − 1) degrees of freedom.

It is not uncommon to find (usually after a certain amount of introspection) that one has only one independent observation of each combination rather than the anticipated 'n' replicated observations (see the nested cross-factored design below). One can analyze such designs if one assumes that one or more of the interaction terms is zero, and then uses that interaction term as the error. This is equivalent to the randomized complete block design, which is a two-way factorial ANOVA with only one replicate per cell.

Computational formulae

We will take a balanced experiment with 'a' levels of factor A and 'b' levels of factor B, with 'n' replicates of each factor combination. Factor A totals are denoted as TA1 to TAa, factor B totals as TB1 to TBb, subtotals (AB combinations) as T(AB)1 to T(AB)ab, and the grand total as G. The factor A, factor B, A × B interaction, error and total sums of squares are calculated as follows:
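Written out in the totals notation just defined, the standard computational forms are:

```latex
% Standard computational formulae for a balanced a x b factorial with n
% replicates per cell; G^2/(abn) is the usual correction factor.
\[
\begin{aligned}
SS_A &= \sum_{i=1}^{a}\frac{T_{A_i}^2}{bn}-\frac{G^2}{abn}\\
SS_B &= \sum_{j=1}^{b}\frac{T_{B_j}^2}{an}-\frac{G^2}{abn}\\
SS_{A\times B} &= \sum_{i=1}^{a}\sum_{j=1}^{b}\frac{T_{(AB)_{ij}}^2}{n}
                 -\frac{G^2}{abn}-SS_A-SS_B\\
SS_{Total} &= \sum_{i,j,k}Y_{ijk}^2-\frac{G^2}{abn}\\
SS_{Error} &= SS_{Total}-SS_A-SS_B-SS_{A\times B}
\end{aligned}
\]
```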
Mixed effects model

In field work a more common application of factorial ANOVA is the mixed effects model. This is sometimes used to analyze data from a generalized randomized block design, for example when completely randomized one-factor experiments are repeated in multiple locations (= blocks). This approach is recommended by Underwood, although others argue that treatment replicates should instead be regarded as nested within blocks. There appears to be no simple answer to this matter. We would tend to support the treatment-nested-within-block approach, but have to admit that most recent texts seem quite happy to treat it as a fully replicated factorial design. (Obviously if one has only one replicate of each treatment in each block the issue does not arise - you have to analyze it as a randomized complete block with all the necessary assumptions.)
The F-ratio for (fixed) factor A is obtained by dividing MSA by MSA×B. Note the MSA×B denominator, as this is the main difference between the fixed and mixed effects models. The P-value for this F-ratio is obtained for a − 1 and (a − 1)(b − 1) degrees of freedom.

The F-ratio for (random) factor B is obtained by dividing MSB by MSError. The P-value for this F-ratio is obtained for b − 1 and ab(n − 1) degrees of freedom.

The F-ratio for the A × B interaction is obtained by dividing MSA×B by MSError. The P-value for this F-ratio is obtained for (a − 1)(b − 1) and ab(n − 1) degrees of freedom.

If there is little evidence for interaction (in other words, if P > 0.25), then it is permissible to pool the interaction mean square with the error mean square to enable more powerful main effects tests of both A and B, with the pooled error mean square as the denominator. But if there is any indication of interaction, the F-ratio test for factor A over the A × B interaction is a quasi F-ratio and provides only an approximate test of the main effect.
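For reference, the expected mean squares under the restricted form of the balanced mixed model (A fixed, B random) show why these denominators differ; a standard sketch is given below. (Under the unrestricted form of the model, E(MSB) also contains the interaction component and factor B is instead tested over MSA×B.)

```latex
% Expected mean squares for the balanced two-factor mixed model
% (A fixed, B random), restricted form, and the F-ratios they imply:
\[
\begin{aligned}
E(MS_A) &= \sigma^2 + n\sigma^2_{AB} + \frac{bn\sum_i\alpha_i^2}{a-1}
  &\Rightarrow\ & F_A = MS_A/MS_{A\times B}\\
E(MS_B) &= \sigma^2 + an\sigma^2_{B}
  &\Rightarrow\ & F_B = MS_B/MS_{Error}\\
E(MS_{A\times B}) &= \sigma^2 + n\sigma^2_{AB}
  &\Rightarrow\ & F_{A\times B} = MS_{A\times B}/MS_{Error}\\
E(MS_{Error}) &= \sigma^2
\end{aligned}
\]
```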
Factorial in randomized blocks

Treatment combinations can also be assigned using a randomized block experimental design. Since blocks are assumed to be a random factor, this makes the overall model a mixed effects model.
Two models are in common use, which differ only in their assumptions about the existence or otherwise of block × treatment interaction effects:
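Written out (with blocks denoted S, matching the MSS×A and MSS×B notation below, and with s blocks each containing one replicate of each of the a × b treatment combinations), the two models can be sketched as:

```latex
% Model 1: block x treatment interactions retained; the three-way
% (S x A x B) term is assumed absent and provides the error.
\[
Y_{ijk} = \mu + S_k + \alpha_i + \beta_j + (\alpha\beta)_{ij}
          + (S\alpha)_{ik} + (S\beta)_{jk} + \epsilon_{ijk}
\]
% Model 2: no block x treatment interactions; all such terms are pooled
% into a single error term.
\[
Y_{ijk} = \mu + S_k + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijk}
\]
```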
Under the first model (block × treatment interactions retained), the F-ratio for (fixed) factor A is obtained by dividing MSA by MSS×A, and the F-ratio for (fixed) factor B by dividing MSB by MSS×B. The F-ratio for the A × B interaction is obtained by dividing MSA×B by MSError. Note that we do not have to assume there are no block × treatment interactions, but we do have to assume that there is no three-way interaction. If replication (the number of blocks) is inadequate, this model will have low power to identify treatment effects.
Under the second model (no block × treatment interactions), the F-ratio for (fixed) factor A is obtained by dividing MSA by MSError, and the F-ratio for (fixed) factor B by dividing MSB by MSError. The F-ratio for the A × B interaction is obtained by dividing MSA×B by MSError. Note that we now have to assume there are no block × treatment interactions, even though we can estimate those interactions. You will have more power to identify treatment effects, but that power is bought at the cost of adopting the Nelson approach!

Nested cross-factored design

In this design a number (n) of evaluation units are nested in each of the sampling or experimental units (factor C).
In the diagram we have twelve experimental units (C1 to C12) which are allocated at random to the treatment combinations A1B1, A1B2, A2B1 and A2B2. Observations are then made on each of three evaluation units within each experimental unit.
The F-ratios for factors A and B and the A × B interaction are obtained by dividing their respective mean squares by MSC(A×B). This design is a common cause of massive pseudoreplication when the evaluation units are mistakenly taken as the experimental (or sampling) units. In other words, the main effects and interaction are wrongly tested over the error mean square rather than the C(A × B) mean square. Unfortunately some authorities argue in favour of pooling the error mean square and the C(A × B) mean square if the P-value for C(A × B) is greater than some prespecified value, say 0.25. The effect of this recommendation is to make respectable those experiments where there is little or no genuine (independent) replication - simply because one has very low power for the test of C(A × B) if the number of true replicates (c) is very small. Hence we follow Hurlbert's point of view that this is simply pseudoreplication in another guise.
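A conventional sketch of the model for this design, with c experimental units nested within each of the a × b treatment combinations and n evaluation units per experimental unit, is:

```latex
% Nested cross-factored design: C (experimental units) nested within the
% A x B combinations; the evaluation units provide the residual error.
\[
Y_{ijkl} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + C_{k(ij)} + \epsilon_{l(ijk)}
\]
% A, B and A x B are tested over MS_C(AxB) with ab(c - 1) denominator
% degrees of freedom; C(A x B) itself is tested over MS_Error with
% abc(n - 1) degrees of freedom.
```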
Assumptions

The same assumptions as for a one-factor ANOVA must also hold for factorial ANOVA, namely:

- random sampling, or random allocation of experimental units to treatment combinations,
- independence of errors, both within and between treatment combinations,
- homogeneity of variances across the factor-level combinations,
- errors (residuals) approximately normally distributed.
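As a rough illustration of how these assumptions might be checked, the sketch below applies Levene's test for homogeneity of variances across the cells and the Shapiro-Wilk test to the residuals of the full factorial model; the data frame and factor names are again hypothetical.

```python
# Simple checks of homogeneity of variances and normality of residuals
# for a hypothetical 2 x 2 factorial data set.
import numpy as np
import pandas as pd
import scipy.stats as stats
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "water": np.repeat(["low", "high"], 10),
    "fert": np.tile(np.repeat(["none", "NPK"], 5), 2),
    "y": rng.normal(20, 2, 20),
})

# Levene's test for homogeneity of variances across the 2 x 2 cells
cells = [grp["y"].to_numpy() for _, grp in df.groupby(["water", "fert"])]
print(stats.levene(*cells))

# Shapiro-Wilk test on the residuals of the full factorial model
model = ols("y ~ C(water) * C(fert)", data=df).fit()
print(stats.shapiro(model.resid))
```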