InfluentialPoints.com
Kendall coefficient of concordance

Whilst the Kendall rank correlation coefficient is used to determine the association between just two variables measured in (or transformed to) ranks, the Kendall coefficient of concordance (W) is used to determine the association between k such variables. It is most commonly used to assess agreement among raters. The coefficient bears a linear relationship to the average Spearman rank correlation coefficient (r̄s) between all possible pairs of raters: r̄s = (kW − 1)/(k − 1).

Ranking of quality of strawberries

Farm   Rater A   Rater B   Rater C   Rater D   ΣR
1         8         7         8         8      31
2         4         3         2         3      12
3         2         1         5         4      12
4         3         4         6         2      15
5         5         5         7         5      22
6         1         2         1         1       5
7         6         6         4         6      22
8         7         8         3         7      25

We will take an example of four people (raters) ranking the quality of samples of strawberries grown on eight different farms from best (1) to worst (8).

Procedure

  1. If there are any tied observations, assign the average of the ranks they would have been assigned had no ties occurred.
  2. Find the sum of ranks (Rj) for each item being ranked.
  3. Sum these Rj and divide by the number of items being ranked (n) to give the mean value of the Rj.
  4. Calculate the sum of squares of the deviations of each Rj from the mean using SS = Σ[Rj − (ΣRj/n)]².
  5. Compute the Kendall coefficient of concordance (W) using:

Algebraically speaking -

W  =  SS / [ k²(n³ − n) / 12 ]

where
  • SS is the sum of squares of the deviations of each Rj from the mean,
  • n is the number of items being ranked,
  • k is the number of raters.

  6. For n > 7, the quantity k(n − 1)W is distributed as χ² with n − 1 degrees of freedom.
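The steps above are straightforward to script. Below is a minimal sketch in Python; the function name `kendall_w` and the list-of-lists rank matrix are our own choices, not part of the original procedure, and ties are assumed to have already been given averaged ranks.

```python
def kendall_w(ranks):
    """Kendall's coefficient of concordance W.

    `ranks` is a list of k lists, one per rater, each giving the
    ranks that rater assigned to the n items (ties already averaged).
    """
    k = len(ranks)                            # number of raters
    n = len(ranks[0])                         # number of items being ranked
    # Step 2: sum of ranks Rj received by each item
    rj = [sum(col) for col in zip(*ranks)]
    # Step 3: mean of the Rj
    mean_rj = sum(rj) / n
    # Step 4: sum of squared deviations of each Rj from the mean
    ss = sum((r - mean_rj) ** 2 for r in rj)
    # Step 5: W = SS / [k^2 (n^3 - n) / 12]
    return ss / (k ** 2 * (n ** 3 - n) / 12)
```

For n > 7 the statistic k(n − 1)W can then be referred to the χ² distribution with n − 1 degrees of freedom, as in step 6.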

Worked example I

For the data given above:

  1. Mean rank = 144/8 = 18
  2. Sum of squares = 169 + 36 + 36 + 9 + 16 + 169 + 16 + 49 = 500
  3. W = 500/[(504 × 16)/12] = 500/672 = 0.744
  4. Since n > 7 we compute k(n − 1)W as 20.83. R gives a P-value of 0.004 for this value, so we conclude there is a highly significant degree of concordance between the different raters. Examination of the data suggests that there is a high level of agreement on which farm produces the best strawberries (#6) and the worst (#1), with less agreement on the intermediates.
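As a check on the arithmetic, the worked example can be reproduced in pure Python, along with the linear relation to the average Spearman coefficient noted in the introduction. Since there are no ties here, each pairwise Spearman coefficient reduces to 1 − 6Σd²/(n(n² − 1)). The variable names are our own; this is a sketch, not the article's original computation (which used R for the P-value).

```python
from itertools import combinations

# Ranks given by raters A-D to the 8 farms (from the table above)
ranks = [
    [8, 4, 2, 3, 5, 1, 6, 7],   # rater A
    [7, 3, 1, 4, 5, 2, 6, 8],   # rater B
    [8, 2, 5, 6, 7, 1, 4, 3],   # rater C
    [8, 3, 4, 2, 5, 1, 6, 7],   # rater D
]
k, n = len(ranks), len(ranks[0])

rj = [sum(col) for col in zip(*ranks)]       # rank sums per farm
mean_rj = sum(rj) / n                        # = 144/8 = 18
ss = sum((r - mean_rj) ** 2 for r in rj)     # = 500
w = ss / (k ** 2 * (n ** 3 - n) / 12)        # = 500/672 = 0.744
chi2_stat = k * (n - 1) * w                  # = 20.83

# Average Spearman coefficient over all pairs of raters (no ties)
def spearman(x, y):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

r_bar = sum(spearman(x, y) for x, y in combinations(ranks, 2)) / 6
# r_bar agrees with (k*w - 1) / (k - 1), the linear relation to W
```

Running this gives r̄s ≈ 0.659, which matches (4 × 0.744 − 1)/3, confirming the relation between W and the average pairwise Spearman coefficient.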