"It has long been an axiom of mine that the little things are infinitely the most important" (Sherlock Holmes)
The Computer Revolution

Data management has been revolutionised in the last twenty years by the widespread availability of computers and the associated software. As a result there have been rapid changes in how we analyse data, and how we interpret the results.

Until recently most of the techniques for analysing data had to be performed by hand, using calculators and printed statistical tables. This had a number of important effects:

  1. Analysis was very time-consuming, so researchers tended to prefer pre-planned analyses, and rarely 'explored' their data adequately.
  2. For the same reason, mathematical statisticians simplified the calculations required of researchers as much as possible. Standardized methods were preferred, and much of the logic behind tests was obscured by the mathematical shortcuts.
  3. Because the methods were complicated and laborious, researchers largely learnt a few 'cookbook' techniques for a small number of common tests. Anything unusual was taken to a professional statistician.
  4. Many types of analysis were simply impossible because the calculations were too massive. Where these had to be done, the data was taken to a specialist in data entry and computer programming, and special programs were written for the job. A computer was something that filled much of a building, cost a fortune, and was used by specialists.

Today most researchers either have a personal computer or have ready access to one. Sophisticated statistical and data management packages are widely available. Many people analyse their own data, and write their own papers or reports on a word processor.

Researchers are beginning to use complex analyses routinely on large data sets, and to analyse fully data that had previously only been summarised. The constraints of data input encourage researchers to gather data more systematically. Powerful statistical software packages can produce vast quantities of results in minutes, and research papers are becoming full of complex equations and statistical notation.

Despite this, surprisingly few researchers actually understand what these tests are doing, or what their results mean. Still fewer are aware of the limitations and assumptions implicit in the tests, or of how to choose the best test for their data. All too often results are presented that neither the author nor the audience understands. And 'in the land of the blind, the one-eyed man is king.'

But that is what 'revolutions' are all about!