The Log-normal Distribution

Although some measurements in biology do follow a normal distribution, many measurements show a more or less skewed distribution. Skewed distributions are especially common in counts of organisms, where mean values are low, the variance is large and values cannot be negative. Distributions of this sort often fit the lognormal distribution.

There are two ways of looking at a lognormal distribution:

  • It is the distribution of variable x, when the log of x is normal.
    Conversely:
  • It describes how the antilog of y is distributed, when y is normal.

Confusingly perhaps, although a lognormal distribution is specified by the parameters of the underlying normal distribution, its own summary parameters (mean, variance and so on) are different. Fortunately, with a little thought, it is easy enough to work out what is going on.
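
Both views, and the parameter correspondence, are easy to check by simulation. The sketch below is a minimal illustration in Python (NumPy and SciPy assumed; μ = 1 and σ = 0.5 are arbitrary values, not taken from the text): logging draws from SciPy's lognormal recovers a normal sample, while exponentiating normal draws can be tested against that same lognormal.

    import numpy as np
    from scipy import stats

    mu, sigma = 1.0, 0.5                      # mean and sd of the normal variable y

    # View 1: draw x from a lognormal; log_e(x) is then normal with mean mu and sd sigma.
    # SciPy writes the lognormal's parameters as shape s = sigma and scale = exp(mu).
    x = stats.lognorm.rvs(s=sigma, scale=np.exp(mu), size=5000, random_state=1)
    print(np.log(x).mean(), np.log(x).std())  # close to 1.0 and 0.5

    # View 2: draw y from a normal and back-transform; exp(y) should follow that lognormal.
    y = stats.norm.rvs(mu, sigma, size=5000, random_state=2)
    D, p = stats.kstest(np.exp(y), stats.lognorm(s=sigma, scale=np.exp(mu)).cdf)
    print(p)                                  # typically a large p: consistent with the lognormal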

  1. Rescaling a normally-distributed variable using a log transformation does not affect the rank of its values. So aside from this rescaling, order statistics such as the median are unchanged.

  2. However, because the left tail of a lognormal variable is compressed, it contains a higher density of values than the right tail - which shifts the mode leftwards of the median.

  3. Conversely, because a lognormal distribution has its greatest deviations to the right of the median, its mean must be to the right of the median.

The graph below illustrates these effects for 5000 normally-distributed values (variable y), and exactly the same set after it has been detransformed - that is, exponentiated - to give variable x.

{Fig. 1}

Notice that values in red are unchanged, and that we are using natural logs - so antilogₑ(x) = e^x = exp(x).
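
The ordering described in points 1 to 3 is easy to confirm numerically. The sketch below mirrors the figure's setup in Python (NumPy and SciPy assumed; μ = 1 and σ = 0.5 are arbitrary choices, since the figure's parameters are not stated), and uses the standard closed-form mode of a lognormal, exp(μ - σ²), a result not quoted above.

    import numpy as np
    from scipy import stats

    mu, sigma = 1.0, 0.5
    y = stats.norm.rvs(mu, sigma, size=5000, random_state=3)   # normally-distributed y
    x = np.exp(y)                                              # detransformed variable x

    # Point 1: the median is an order statistic, so it simply detransforms with the data
    print(np.exp(np.median(y)), np.median(x))                  # essentially identical, near exp(mu) = 2.72

    # Points 2 and 3: the mode sits left of the median, the mean sits right of it.
    # exp(mu - sigma**2) is the standard closed-form mode of a lognormal (an added
    # result, not quoted in the text above).
    print(np.exp(mu - sigma**2), np.median(x), x.mean())       # roughly 2.12 < 2.72 < 3.08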

Where logₑ(X) is normal, with mean μ and variance σ², you can obtain the summary parameters of X using the formulae below (where exp[X] = e^X).

  • Mean: μ(X) = exp[μ + 0.5σ²]

  • Variance: σ²(X) = exp[2μ + σ²][exp(σ²) - 1]

  • Skew: γ₁(X) = [exp(σ²) + 2]√[exp(σ²) - 1]

  • Kurtosis (excess): γ₂(X) = exp(4σ²) + 2exp(3σ²) + 3exp(2σ²) - 6
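
These formulae can be checked against SciPy's built-in lognormal moments. The sketch below assumes μ = 1 and σ = 0.5 (any values would do); note that SciPy's stats() reports excess kurtosis, matching γ₂ above.

    import numpy as np
    from scipy import stats

    mu, sigma = 1.0, 0.5

    # Summary parameters of X from the formulae above
    mean_x = np.exp(mu + 0.5 * sigma**2)
    var_x  = np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1)
    skew_x = (np.exp(sigma**2) + 2) * np.sqrt(np.exp(sigma**2) - 1)
    kurt_x = np.exp(4 * sigma**2) + 2 * np.exp(3 * sigma**2) + 3 * np.exp(2 * sigma**2) - 6

    # SciPy's lognormal with shape s = sigma and scale = exp(mu);
    # 'mvsk' = mean, variance, skew and (excess) kurtosis
    m, v, s, k = stats.lognorm.stats(s=sigma, scale=np.exp(mu), moments='mvsk')
    print(mean_x, m)   # both about 3.08
    print(var_x, v)
    print(skew_x, s)
    print(kurt_x, k)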

But if the standard deviation σ is small, then μ(X) ≅ exp[μ], σ²(X) ≅ σ²exp[2μ], and γ₁(X) ≅ γ₂(X) ≅ 0.
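
To see this limiting behaviour, the short sketch below (again assuming μ = 1, with SciPy providing the exact moments) shrinks σ and compares the exact values with the approximations:

    import numpy as np
    from scipy import stats

    mu = 1.0
    for sigma in (0.5, 0.1, 0.01):
        # exact mean, variance, skew and excess kurtosis from SciPy
        m, v, s, k = [float(t) for t in
                      stats.lognorm.stats(s=sigma, scale=np.exp(mu), moments='mvsk')]
        print(f"sigma={sigma}: mean={m:.4f} vs exp(mu)={np.exp(mu):.4f}, "
              f"var={v:.6f} vs sigma^2*exp(2mu)={sigma**2 * np.exp(2 * mu):.6f}, "
              f"skew={s:.4f}, excess kurtosis={k:.4f}")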