Dr Bill Durodié

spiked

The precautionary principle assumes that prevention is better than cure
Contribution to the spiked debate, 'Fearing the unknown'

There is no agreed definition of the precautionary principle. One of the more authoritative versions comes from the 1992 Rio 'Earth' Summit. It contains a rather cumbersome triple negative, to the effect that not having evidence is not a justification for not taking action.

If we undo a couple of the knots, then as two negatives make a positive, we are left with 'action without evidence is justified'. That's it, in a nutshell. The precautionary principle is, above all else, an invitation to those without evidence, expertise or authority to shape and influence political debates. It achieves this by introducing supposedly ethical elements into the process of scientific, corporate and governmental decision-making.

For this, the precautionary principle relies largely upon a single assumption - that prevention is better than cure. This is also known as 'better safe than sorry'. While this may seem obvious, there is, in fact, precious little evidence for it. The problem with preventative measures is that they are, of necessity, general and long-lasting, whereas cures tend to be targeted and discrete.

What is more, it is possible to cure somebody, or something, without forming a moral judgement about that activity or person. But if your primary focus is upon precaution, then it is morally wrong not to take preventative measures. Therefore, the whole language of precaution is imbued with an excessively moralistic tone.

In fact, prevention is only better than cure if the probability of the particular problem you have in mind occurring is rather high, and if the proposed preventative measures are largely accurate or effective. But in the majority of debates about risk that we encounter today, neither of these conditions is actually met. Probabilities, on the whole, are pretty low - otherwise, society would already be diverting large amounts of resources and concern towards dealing with them. And there is little evidence that the precautionary measures taken actually work.

If you don't believe me, try doing the maths for yourself, with a problem affecting 10 per cent of a population, for which you have a screening method accurate in 80 per cent of cases. Both of these figures are significantly higher than you are likely to find in any real-life example, yet you will still be left with two-and-a-half times as many people misdiagnosed as are correctly identified as having the problem. Is this a price worth paying?
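
For readers who want to check the sums, here is a minimal sketch of that calculation (a Python fragment written for this page, assuming the 80 per cent accuracy applies equally to those who have the problem and those who do not, and taking a population of 1,000 purely for illustration):

# Worked example: 10% prevalence, 80% accurate screening, population of 1,000
population = 1000
prevalence = 0.10       # proportion who actually have the problem
accuracy = 0.80         # proportion of cases the screen gets right, either way

have_problem = population * prevalence            # 100 people
no_problem = population - have_problem            # 900 people

true_positives = have_problem * accuracy          # 80 correctly identified
false_negatives = have_problem * (1 - accuracy)   # 20 missed
false_positives = no_problem * (1 - accuracy)     # 180 wrongly flagged

misdiagnosed = false_positives + false_negatives  # 200 in total
print(misdiagnosed / true_positives)              # 2.5 - the ratio cited above

Note that the false positives alone (180 people wrongly flagged as having the problem) already outnumber the 80 who are correctly identified.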

Published on Spiked, 16 March 2004