Larger contingency tables
We deal with testing for independence/association in larger contingency tables in exactly the same way as we tested replicated samples for homogeneity in the section above. We can also partition the G or X² values to investigate what is happening within a large table. We will start with tables with multiple rows but only two columns (known as r×2 tables). With only two columns, this is equivalent to comparing several proportions.
Relationship between incidence of orf in farmworkers and number of dogs on farm

No. dogs   + orf   − orf   % affected   Odds ratio (95% CI)
None          1      79       1.25       1.0
1            25     135      15.63      14.63 (1.95–110.0)
2            29     141      17.06      16.25 (2.17–121.5)
3/4          19     115      14.18      13.05 (1.71–99.5)
4+           18      33      35.29      43.09 (5.52–336.1)

G (df = 4) = 32.09,  P = 0.000002

Our example here is taken from a cross-sectional study of risk factors for farmworkers contracting the disease orf. We have five percentages that we wish to compare, ranging from 1.25% to 35.29%. Overall significance of the association between the number of dogs present on a farm and the incidence of orf was assessed by the authors using Pearson's chi square; we have used G instead (see the bottom row of the table). The authors then compared each row with the 'control' (no dogs) using odds ratios and their associated confidence intervals. We will use a different approach and partition the G value to obtain more information from the table. However, as we will see, there are only certain ways we can do this without running into problems.
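The overall G value in the table can be reproduced with a few lines of plain Python. This is a minimal sketch: the `g_statistic` helper below is our own, not part of the original study, and it simply implements the usual formula G = 2 Σ O ln(O/E), with expected counts taken from the marginal totals.

```python
from math import log

def g_statistic(table):
    """Log-likelihood ratio (G) statistic for an r x c contingency table.

    G = 2 * sum(obs * ln(obs / exp)), with the expected count for each
    cell given by row_total * column_total / grand_total.  Cells with an
    observed count of zero contribute nothing to the sum.
    """
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return 2 * sum(o * log(o * n / (row_tot[i] * col_tot[j]))
                   for i, r in enumerate(table)
                   for j, o in enumerate(r) if o)

# Orf table: (+orf, -orf) counts for each number-of-dogs category
orf = [[1, 79], [25, 135], [29, 141], [19, 115], [18, 33]]
g = g_statistic(orf)   # ~32.09, with (5 - 1) * (2 - 1) = 4 degrees of freedom
```

The statistic is referred to a chi-square distribution with (rows − 1)(columns − 1) = 4 degrees of freedom, which gives the very small P-value quoted in the table.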
Partitioning the G statistic
When we make a series of comparisons within one data set, it is important that those comparisons are independent. Another term for this, which we will meet again, is orthogonal. What we mean by this is that a change in the outcome of one comparison does not affect the outcome of the other comparisons. We will discuss this further below, but first let us see how to identify a set of orthogonal comparisons for our 5×2 table.
The procedure is as follows: first compute G for the 2×2 table comprising the first two rows. Then compute G for those two rows combined versus row three; then for rows one to three combined versus the fourth row, and so on until the last row. This process is shown diagrammatically below:
No. dogs   + orf   − orf
None          1      79
1            25     135
2            29     141
3/4          19     115
4+           18      33

No. dogs    + orf   − orf
None or 1     26     214
2             29     141
3/4           19     115
4+            18      33

No. dogs          + orf   − orf
None or 1 or 2      55     355
3/4                 19     115
4+                  18      33

No. dogs                 + orf   − orf
None or 1 or 2 or 3/4      74     470
4+                         18      33

So why are these four comparisons orthogonal?
Square IA

No. dogs   + orf   − orf
None         10      70
1            16     144
Totals       26     214

G = 0.34

Square IB

No. dogs   + orf   − orf
None          1      79
1            25     135
Totals       26     214

G = 15.21

Let us take the first square and vary the frequencies, assuming that the total number of observations and the marginal totals are fixed. Square IB contains the observed data; Square IA shows a hypothetical set of cell frequencies with the same marginal totals. Despite the change in the individual cell frequencies (and consequently in the G statistic) from Square IA to Square IB, all the other comparisons above are unaffected, because only the column totals are carried over to the next comparison.
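This independence is easy to demonstrate numerically. In the sketch below (plain Python; the `g_statistic` helper is our own implementation of G = 2 Σ O ln(O/E), not from the source), changing the cell frequencies of the first square while holding its margins fixed changes the first comparison's G, but leaves the next comparison untouched, because that comparison only ever sees the column totals (26, 214).

```python
from math import log

def g_statistic(table):
    # G = 2 * sum(obs * ln(obs/exp)); expected counts from the margins
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return 2 * sum(o * log(o * n / (row_tot[i] * col_tot[j]))
                   for i, r in enumerate(table)
                   for j, o in enumerate(r) if o)

square_a = [[10, 70], [16, 144]]   # hypothetical frequencies (Square IA)
square_b = [[1, 79], [25, 135]]    # observed frequencies (Square IB)

# The first comparison's G depends on the individual cell frequencies ...
g_first_a = g_statistic(square_a)  # ~0.34
g_first_b = g_statistic(square_b)  # ~15.21

# ... but the next comparison pools the square into its column totals,
# which are (26, 214) in both versions, so its G is identical:
def second_comparison(square):
    pooled = [a + b for a, b in zip(*square)]  # column totals of the square
    return g_statistic([pooled, [29, 141]])

assert second_comparison(square_a) == second_comparison(square_b)
```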
Partition I

Comparisons      df   G       P-value
0 vs 1            1   15.21   <0.001
0–1 vs 2          1    3.27    0.071
0–2 vs 3/4        1    0.05    0.823
0–3/4 vs 4+       1   13.56   <0.001
Total             4   32.09
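The stepwise pooling that produces Partition I is mechanical enough to automate. The sketch below is our own plain-Python illustration (the `g_statistic` helper is not from the source): each step compares the rows pooled so far against the next row, then adds that row into the pool. A useful check is that the four component G values sum to the overall G for the whole table.

```python
from math import log

def g_statistic(table):
    # G = 2 * sum(obs * ln(obs/exp)); expected counts from the margins
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return 2 * sum(o * log(o * n / (row_tot[i] * col_tot[j]))
                   for i, r in enumerate(table)
                   for j, o in enumerate(r) if o)

rows = [[1, 79], [25, 135], [29, 141], [19, 115], [18, 33]]

# Partition I: rows pooled so far versus the next row, working downwards
parts = []
pooled = rows[0]
for nxt in rows[1:]:
    parts.append(g_statistic([pooled, nxt]))
    pooled = [a + b for a, b in zip(pooled, nxt)]

# parts ~ [15.21, 3.27, 0.05, 13.56]; they add up to the overall G of 32.09
```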

Partition II

Comparisons      df   G       P-value
0 vs 1–4+         1   21.37   <0.001
1 vs 2–4+         1    0.68    0.410
2 vs 3/4–4+       1    0.51    0.475
3/4 vs 4+         1    9.53    0.002
Total             4   32.09

Orthogonal comparisons are not all good news, though. When you look at the comparisons resulting from this partitioning, you may conclude that not all of them are terribly useful!
For example, why would you wish to compare the pooled 0/1/2 categories with the 3/4 category, other than that it happens to be part of the orthogonal 'set' of comparisons?
You can gain a measure of control over which comparisons you carry out by putting the rows in a different order. In this case a more informative partitioning of this table would start from the bottom of the table and work upwards, giving Partition II. The most significant difference was between no dogs on the farm (incidence of 1.25%) and 1–4+ dogs on the farm (incidence of 17.7%). The difference between there being 3/4 dogs and 4+ dogs should also be investigated further.
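Reordering costs nothing in code: the same stepwise pooling applied from the bottom of the table reproduces Partition II. As before, this is our own plain-Python sketch (the `g_statistic` helper is not from the source), and the components again sum to the overall G of 32.09.

```python
from math import log

def g_statistic(table):
    # G = 2 * sum(obs * ln(obs/exp)); expected counts from the margins
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return 2 * sum(o * log(o * n / (row_tot[i] * col_tot[j]))
                   for i, r in enumerate(table)
                   for j, o in enumerate(r) if o)

rows = [[1, 79], [25, 135], [29, 141], [19, 115], [18, 33]]

# Partition II: pool from the bottom row upwards, comparing each row
# against everything already pooled below it
parts = []
pooled = rows[-1]
for nxt in reversed(rows[:-1]):
    parts.append(g_statistic([nxt, pooled]))
    pooled = [a + b for a, b in zip(pooled, nxt)]
parts.reverse()  # report in the table's top-down order

# parts ~ [21.37, 0.68, 0.51, 9.53]
```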
Note, however, that you should decide which orthogonal comparisons you are going to make before looking at the data. Post-hoc selection of comparisons is always open to criticism, since we will no longer be operating at our specified probability level. In the real world, of course, one has to accept that in many studies the choice will be made after data inspection, and the choice of orthogonal comparisons will, so to speak, at least limit the damage.
If orthogonal comparisons really don't meet your needs, then you may need to make non-independent comparisons. This would be the case if you wish to compare the incidence with no dogs (the 'control' in this study) with the incidence at each other level. The problem then is that the more comparisons you make, the greater is your probability of falsely rejecting the null hypothesis. Instead you need to adjust the significance level using a Bonferroni correction, so that it is more difficult to reject the null hypothesis. We go into the reasoning behind this when we consider multiple comparisons of means in unit 11.
Algebraically speaking:

α' = α / [2(r − 1)]

where:
 α' is the adjusted probability level;
 α is the desired probability level (usually 0.05);
 r is the number of rows.

In our example we would therefore require non-orthogonal comparisons to be significant at probability levels of P = 0.00625 or less before we can accept them as being different.
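Putting the correction to work can be sketched as follows. This is our own plain-Python illustration (the `g_statistic` helper is not from the source): we compute the adjusted level α′ = 0.05/[2(5 − 1)] = 0.00625, then the G value for each 'no dogs versus level' comparison. Each G is then judged against the chi-square (1 df) critical value for α′, roughly 7.5, rather than the usual 3.84 for α = 0.05.

```python
from math import log

def g_statistic(table):
    # G = 2 * sum(obs * ln(obs/exp)); expected counts from the margins
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    return 2 * sum(o * log(o * n / (row_tot[i] * col_tot[j]))
                   for i, r in enumerate(table)
                   for j, o in enumerate(r) if o)

rows = {"none": [1, 79], "1": [25, 135], "2": [29, 141],
        "3/4": [19, 115], "4+": [18, 33]}

r = 5                                # number of rows in the table
alpha_adj = 0.05 / (2 * (r - 1))     # Bonferroni-adjusted level: 0.00625

# Non-orthogonal comparisons: 'no dogs' (the control) versus each level
g_vs_control = {k: g_statistic([rows["none"], v])
                for k, v in rows.items() if k != "none"}
```

For these data every control comparison comfortably exceeds the stricter criterion, so the Bonferroni correction does not change the qualitative conclusion here.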