Intentional bias and scientific fraud: Use & misuse

(data fabrication, misrepresentation of results, plagiarism, multiple publication, unjustified authorship, paradigm dominance, misconduct denial)

Statistics courses, especially for biologists, assume formulae = understanding and teach how to do statistics, but largely ignore what those procedures assume, and how their results mislead when those assumptions are unreasonable. The resulting misuse is, shall we say, predictable...

Use and Misuse

As with pseudoreplication, our heading 'use and misuse' is a bit of a misnomer here because intentional bias is always a misuse. Considerable research has been devoted to avoiding unintentional bias in gathering and analysing data, but much less has been written about avoiding intentional bias or fraud. Random errors and mistakes are relatively easy to allow for; intentional bias is not!

Effective detection of fraud is a very important component of prevention. There are three main ways by which fraud can be detected: by examination of the data produced, by collaborators 'blowing the whistle' on the perpetrators, and by regular data audits. Curiously enough, very little scientific fraud is detected by simple inspection of published work. This is because, in the absence of corroboration, even the most dubious results cannot be distinguished from simple incompetence (which is all too common, since many biologists learn only how to do statistics, not the underlying reasoning).
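
To illustrate what 'examination of the data produced' can involve, the sketch below shows one very simple screening check, a terminal-digit test: in genuine continuous measurements the final recorded digits are usually close to uniformly distributed, whereas fabricated values often show marked digit preference. This is only a minimal illustration (in Python, using NumPy and SciPy, with made-up readings); a suspicious result is grounds for closer scrutiny, never proof of fraud.

    import numpy as np
    from scipy.stats import chisquare

    def terminal_digit_test(values):
        # Chi-square goodness-of-fit test of the last digits against a
        # uniform distribution over 0-9. Needs a reasonably large sample
        # for the chi-square approximation to be trustworthy.
        last_digits = [int(str(abs(int(v)))[-1]) for v in values]
        observed = np.bincount(last_digits, minlength=10)
        expected = np.full(10, len(values) / 10)
        return chisquare(observed, expected)

    # Purely illustrative (made-up) blood pressure readings
    readings = [138, 142, 130, 140, 150, 128, 136, 144, 132, 140,
                138, 146, 140, 134, 152, 148, 130, 140, 126, 144]
    stat, p = terminal_digit_test(readings)
    print(f"chi-square = {stat:.2f}, p = {p:.3f}")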

The majority of known instances of fraud by individual scientists come from medical research. This is for a number of reasons: more research papers are published than in other disciplines, more money is involved (especially in drug testing), and the consequences of fraud are regarded as more serious. In addition, it is much easier to check up on the results of a clinical trial than on, for example, field ecological data, for which exact conditions can never be replicated. Despite these points, the high incidence of misconduct denial by administrative and publishing bodies will surprise many.

In veterinary research we found far fewer documented cases of data fabrication and misrepresentation of results, although both do occur, notably in attempts to prove the efficacy of certain forms of complementary medicine such as homoeopathy.

Similarly, in ecological and wildlife research, cases of data fabrication are rare (but not unknown); there is, however, much more evidence of the pervasive influence of paradigm dominance and of political and commercial pressure. Dominant paradigms have acted as straitjackets upon critical thinking in environmental matters, whilst political and corporate pressures on research into forest destruction, insecticides for crop protection, global warming and genetically modified foods have rendered large areas of research suspect, especially given that authors (unlike in medical research) do not have to declare their interests when publishing a paper.

 

What the statisticians say

For obvious reasons, there is a strong tendency for the scientific establishment to deny or 'down-play' such problems. However, most studies on the matter, ranging from Savan (1988) to Fanelli (2009), seem to indicate that scientific deception is a serious problem. It is very difficult to estimate the prevalence of fraud in science and, as might be expected in such a situation, estimates vary widely. Lock (1988) carried out one of the earliest (non-systematic) surveys by writing to professors of medicine and surgery in medical institutions in Britain, as well as to other scientists and journal editors.

There are several reviews of scientific fraud and misconduct. The best is probably by Broad & Wade (1985), who give examples from Galileo to recent times. Lock & Wells (1996) focus on medical fraud, which is summarised in Wells (1997). Grayson (1995) gives a guide to the literature on the topic. Levitas & Guy (1999) investigate the murky world of government statistics; they conclude that official statistics embody the interests of the state rather than those of its citizens. Kuhn (1962) produced one of the seminal works on the evolution of science, introducing the concept of paradigms as methodological, philosophical and social constructs which scientists use to guide their research.

Recent appraisals of scientific misconduct include Fanelli (2009), who focuses exclusively on data fabrication, Carneiro (2007), who looks at plagiarism, and Gotzsche (2006), who concluded that significant results in abstracts should generally be disbelieved. Gilbert (2009) highlights the problem of image manipulation in submitted articles. Montori et al. (2004) produced a user's guide to detecting misleading claims in clinical research reports, whilst Buyse et al. (1999) and Al-Marzouki et al. (2005) look at statistical methods for detecting data fabrication in clinical trials.
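
As a purely illustrative sketch (not the procedure of Buyse et al. or Al-Marzouki et al.), one idea behind such screening is that invented data tend to be 'too tidy': the spread of a baseline variable can be compared between centres or treatment arms, and an implausibly small variance in one of them flagged for follow-up. The example below assumes Python with NumPy and SciPy, and uses simulated values purely for demonstration.

    import numpy as np
    from scipy.stats import levene

    rng = np.random.default_rng(1)
    centre_a = rng.normal(loc=75, scale=12, size=40)  # simulated baseline weights (kg), plausible spread
    centre_b = rng.normal(loc=75, scale=3, size=40)   # simulated centre with suspiciously little spread

    # Levene's test for equality of variances; a small p-value flags the
    # discrepancy for follow-up - it does not demonstrate fabrication.
    stat, p = levene(centre_a, centre_b)
    print(f"Levene W = {stat:.2f}, p = {p:.4f}")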

A more realistic (if controversial) appraisal of the situation is given by McKee (2007), who cautions against the growing influence of the ideology of the religious right on science in the United States. Similarly, Martin (1992) notes that a narrow definition of fraud protects the scientific elites and the government and corporate interests that have the dominant influence on priorities in science. Martin (1993) argues that the scientific peer review system is ill suited to dealing with unorthodox and challenging views. List (1985) provides a more philosophical (if less illuminating) take on the matter.

Wilmshurst (2002) describes a case of individual and institutional corruption at King's College Hospital Medical School and the University of London, whilst Rennie & Riis (1998) give the views of a range of contributors on how medical fraud should be tackled. Graham & Dayton (2002) discuss the evolution of ecological ideas and highlight the dangers of increased specialization. Mahoney (1977) reports on an experimental study of confirmatory bias in the peer review system.

Wikipedia provides a section on scientific misconduct. Although its definition is quite broad, the section focuses almost exclusively on misconduct by individual scientists. Articles on individual alleged misconduct cases in biology include David Baltimore, Jacques Benveniste, Cyril Burt, Ranjit Chandra, John Darsee, Charles Dawson, Shinichi Fujimura, Hwang Woo-Suk, Roy Meadow, Luk Van Parijs, Eric Poehlman, Andrew Wakefield and the MMR vaccine controversy.

Brian Martin (1) (2) provides links to a series of his writings on plagiarism and scientific fraud, and on whistleblowing and suppression of dissent. Strongly recommended! The role of government and commercial interests in subverting authentic research and biasing research priorities is covered in a Guardian report and in a Wikipedia section on the global warming controversy. The Committee on Publication Ethics (2005) operates a comprehensive web site on research misconduct, with a variety of papers available on the topic. The web is also a rich source of 'pseudoscience', not least on the topic of scientific fraud. One such example is provided by J. Gordon Edwards, who falsely describes the environmental hazards of DDT (as well as global warming and mass extinctions) as a case of scientific fraud. A (rather weak) rebuttal is provided online by NEW-CUE, an environmental education organization.