The precautionary principle, as defined by Soren Holm and John Harris in the journal Nature in 1999, asserts:
When an activity raises threats of serious or irreversible harm to human health or the environment, precautionary measures that prevent the possibility of harm shall be taken even if the causal link between the activity and the possible harm has not been proven or the causal link is weak and the harm is unlikely to occur.
The version from the Wingspread Statement, 1998:
“When an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”
The precautionary principle has taken many forms, but these definitions capture the essence of most of them. Starting life as the German Vorsorgeprinzip (literally “precaution principle”), this rule assumed a role in institutional decision making in the North Sea conferences from 1984 to 1995, and in the deliberations leading to the Rio Declaration of 1992, the UN Framework Convention on Climate Change of 1992, and the Kyoto Protocol. Formulations of the principle do vary in some important ways. The fuzziness resulting from this lack of a standard definition causes trouble, but is also the very characteristic that appeals to advocates of technological and environmental regulation. They have come to favor the precautionary principle—in whatever form best helps them maneuver policies so as to further their goals.
In its most modest form, the principle urges us not to wait for scientific certainty before taking precautionary measures. Considered out of context, that policy is entirely reasonable. We rarely achieve the high standard of scientific certainty about the effects of our activities. But this fact applies just as much to actions in the form of restrictions, regulations, and prohibitions as to innovative and productive activities. By recognizing the frequent necessity to act or refrain from acting in conditions of uncertainty, we are not thereby committed to favoring a policy of restrictive precautionary measures. This message about certainty and action therefore tells us little. And the rest of the principle provides no further guidance about choosing under uncertainty.
Its roots in the German Vorsorgeprinzip mean that the common use of the principle goes well beyond urging preventative or prohibitory action based on inconclusive evidence. An attribute more central to the principle is the judgment that it is “better safe than sorry”. In other words, err on the side of caution. While this sentiment makes for a perfectly sound proverb, it provides a treacherous foundation for a principle to guide assessments of technological and environmental impacts. As a proverb, “better safe than sorry” is counterbalanced by opposing—but equally valid—proverbs, such as “he who hesitates is lost” or “make hay while the sun shines.”
Precautionary measures typically impose costs, burdens, and their own harms. Administering precautionary actions becomes especially dangerous when the principle says, or is interpreted as saying, that those actions are justified and required “if any possibility” of harm exists. In this (typical) interpretation, it becomes ridiculously easy to rationalize restrictive measures in the absence of any real evidence. Clearly, this pushes the principle far beyond dismissing the need for fully established cause-effect relationships.
Statements of the precautionary principle also vary in whether or not they specify that the principle deals with threats of serious or irreversible harm or damage. Problems arise with the usage of “serious” and “irreversible”, but at least this clause limits the application of the principle. More demanding versions of the principle, such as the widely quoted Wingspread Statement, call for precautionary measures to come into play even when the possible harm is not serious or irreversible.
Statements of the precautionary principle may include a cost-effectiveness clause. This happens all too rarely in practice, perhaps because most advocates of the principle aim to stop the targeted technology or activity, not to maximize welfare. The Rio Declaration of 1992 stands out by incorporating such a clause:
“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures.”
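For concreteness, here is a minimal sketch, in Python and with entirely invented numbers, of the kind of test a cost-effectiveness clause implies: compare the harm a precautionary measure can be expected to avert with what the measure itself costs. The figures and the simple multiplicative model are assumptions for illustration, not an analysis of any real case.

```python
# Hypothetical cost-effectiveness check for a precautionary measure.
# All inputs are invented for illustration only.

def expected_avoided_harm(p_harm: float, harm_cost: float, effectiveness: float) -> float:
    """Expected damage avoided: probability the harm occurs x size of the harm
    x fraction of that harm the measure would actually prevent."""
    return p_harm * harm_cost * effectiveness

p_harm = 0.05          # estimated probability the feared harm occurs at all
harm_cost = 2_000.0    # damage (in $ millions) if it does occur
effectiveness = 0.6    # fraction of the harm the measure would prevent
measure_cost = 150.0   # cost (in $ millions) of imposing the measure

benefit = expected_avoided_harm(p_harm, harm_cost, effectiveness)
print(f"Expected harm avoided: ${benefit:.0f}M vs. measure cost: ${measure_cost:.0f}M")
print("Cost-effective" if benefit > measure_cost else "Not cost-effective")
```

On these made-up inputs the measure fails the test; with a larger probability or a more damaging harm it would pass. The point is only that the comparison gets made at all.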
Some worthy attempts have been made to improve the principle by adding to it. In 2001, the European Environment Agency issued a report, “Late Lessons from Early Warnings”, which set out twelve accompanying guidelines. These included some excellent recommendations, such as “Identify and reduce interdisciplinary obstacles to learning” and “Systematically scrutinise the claimed justifications and benefits alongside the potential risks.” Unfortunately, advocates of the principle have not paid attention to these suggestions, and many of them co-exist uncomfortably with the main thrust of the principle. Another noteworthy attempt at amelioration is a May 2000 Science paper titled “Science and the Precautionary Principle”. This set out five “Guidelines for Application of the Precautionary Principle”: proportionality, nondiscrimination, consistency, cost-benefit examination, and examination of scientific developments.
With or without patches, the deeply flawed precautionary principle can cause trouble. It already has. The pervasive, profoundly restrictive force of the principle is all the more remarkable given its relative obscurity, especially outside Europe. Even among widely read people, a large majority do not recall ever having heard the term—although they have certainly heard “better safe than sorry”. Yet the dominant influence of the principle can be found everywhere.
Consider, for a start, the central role of the precautionary principle in shaping environmental policy in the European Union. The foundational Maastricht Treaty on the European Union states that “Community policy on the environment…shall be based on the precautionary principle and on the principles that preventive actions should be taken, that environmental damage should as a priority be rectified at source and that the polluter should pay.” The United Nations joined the precautionary bandwagon when the UN Biosafety Protocol led the way for other international treaties by incorporating the precautionary principle. Some other examples of the principle explicitly at work:
- Protocol on Substances that Deplete the Ozone Layer, Sept. 16, 1987, 26 ILM 1541.
- Second North Sea Declaration.
- Ministerial Declaration Calling for Reduction of Pollution, Nov. 25, 1987, 27 ILM 835.
- United Nations Environment Program.
- Nordic Council’s International Conference on Pollution of the Seas: Final Document Agreed to Oct. 18, 1989, in Nordic Action Plan on Pollution of the Seas, 99 app. V (1990).
- PARCOM Recommendation 89/1 - 22 June, 1989.
- The Contracting Parties to the Paris Convention for the Prevention of Marine Pollution from Land-Based Sources.
- Third North Sea Conference.
- Bergen Declaration on Sustainable Development.
- Second World Climate Conference.
- Bamako Convention on Transboundary Hazardous Waste into Africa.
- OECD Council Recommendation C(90)164 on Integrated Pollution Prevention and Control, January 1991.
- Helsinki Convention on the Protection and Use of Transboundary Watercourses and International Lakes.
- The Rio Declaration on Environment and Development, June 1992.
- Climate Change Conference (Framework Convention on Climate Change, May 9, 1992).
- UNCED Text on Ocean Protection.
- Energy Charter Treaty.
The influence of the principle has been felt in South America too. Transgenic crops were prohibited throughout Brazil beginning in 1998, when a judge applied the version of the principle included in the Rio Declaration on Environment and Development—a statement that came out of the 1992 Earth Summit held in Brazil.
The precautionary principle is followed even more widely than it might seem from official mentions, especially in the United States. We often find the principle being applied without disclosure or explicit acknowledgment. Perhaps this happens because the principle ably captures common intuitions that grow out of fear fed by lack of knowledge. Our first reaction to an apparent threat is usually: Stop it now! We may disregard the costs of stopping the threat. Our sense of urgency may blind us to considering whether we might have better options at our disposal.
When the United Kingdom faced the appalling, if over-inflated, menace of bovine spongiform encephalopathy (BSE), people quickly demanded that authorities require proof of virtually zero risk for any substance that might carry BSE contamination. Professor James Bridges, chair of the European Commission’s toxicology committee, referred to this “extreme precautionary approach in the context of other food risks” and noted that it had “involved enormous costs”. Of course, if such proof could be provided (which it surely cannot), and at low cost, the demand would be reasonable. But the actual reaction lacked any sense of proportionality and objective risk assessment.
In the United States, the President’s Council on Sustainable Development affirmed the precautionary principle, without using the term explicitly, in its statement:
There are certain beliefs that we as Council members share that underlie all of our agreements. We believe: (number 12) even in the face of scientific uncertainty, society should take reasonable actions to avert risks where the potential harm to human health or the environment is thought to be serious or irreparable.
The United States has made extensive use of precautionary prevention—sometimes quite sensibly—even if no mention is made of a principle. Sometimes precautionary prevention has been applied earlier in the US than in Europe. The European Environment Agency publication “Late Lessons from Early Warnings” notes four examples: the Delaney Clause in the Food, Drug, and Cosmetic Act, 1957–96, which banned animal carcinogens from the human food chain; a ban on the use of scrapie-infected sheep and goat meat in the animal and human food chain in the early 1970s; a ban on the use of chlorofluorocarbons (CFCs) in aerosols in 1977, several years before similar action in most of Europe; and a ban on the use of DES as a growth promoter in beef, 1972–79, nearly 10 years before the EU ban in 1987.
The most formidable manifestations of the precautionary principle in the US may be found in the regulatory practices of the FDA (Food and Drug Administration). It’s not the only US government agency applying the principle, usually without naming it—and without calculating its costs and benefits. The EPA (Environmental Protection Agency) bound itself to the principle in developing and enforcing regulations on synthetic chemicals. US regulators have taken an even more strongly precautionary approach than Europe to some kinds of risks, such as nuclear power, lead in gasoline, and the approval of new medicines—which takes us back to the FDA.
Precautionary FDA regulation may have the most drastic impact on human well-being of any mentioned so far. The FDA has successfully sought to extend its powers over the decades, first solidifying its authority to determine when a new medication could be considered safe, and later to determine when it could be considered effective. If the agency were using a purely rational approach to regulation—one that accurately aimed at maximizing human health—it would fully account for both the risks of approving a new medicine that might have damaging side-effects, and the dangers of withholding approval or delaying approval to a potentially beneficial medicine. In practice, this is far from the way the FDA operates.
In reality, the FDA consistently follows a path close to one that the precautionary principle would prescribe: It puts all its energies into minimizing the risk of approving a new drug that then goes on to cause harm. Very little energy goes into considering the potential benefits of making the new treatment available. Regulators can make mistakes on both sides of this balance.
If they approve a drug that turns out to be harmful, they have made a “Type I error”, as it is called in risk analysis. They might also make a Type II error by making a beneficial medication unavailable—by delaying it, rejecting it for consideration, failing to approve it, or wrongly withdrawing it from the market.
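To make that balance concrete, here is a minimal sketch in Python, using entirely invented probabilities and magnitudes, of the symmetric accounting a health-maximizing regulator would perform: the expected harm of approving a drug with a hidden side-effect sits in the same ledger as the expected harm of delaying a drug that would have helped patients. Nothing here reflects real drugs or real data; it only illustrates the bookkeeping.

```python
# Hypothetical expected-harm accounting for a drug-approval decision.
# All numbers are invented for illustration only.

def expected_harm(probability: float, people_affected: int, harm_per_person: float) -> float:
    # Expected harm = chance the bad outcome occurs x people exposed x harm each would suffer.
    return probability * people_affected * harm_per_person

# Type I error: the drug is approved and turns out to have a serious side-effect.
type1 = expected_harm(probability=0.02, people_affected=50_000, harm_per_person=1.0)

# Type II error: a genuinely beneficial drug is delayed for years, so patients
# who would have been helped go untreated in the meantime.
type2 = expected_harm(probability=0.70, people_affected=200_000, harm_per_person=0.5)

print(f"Expected harm from approval (Type I):  {type1:,.0f} harm-units")
print(f"Expected harm from delay    (Type II): {type2:,.0f} harm-units")
# On these made-up figures the invisible Type II harm dwarfs the visible Type I harm,
# yet only the Type I harm ever produces headlines or hearings.
```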
Both types of error are bad for the public. For the regulators, the risk of Type I errors looks much more frightening than the risk of Type II errors. If they make a Type II mistake and prevent a beneficial treatment from coming to market, few people will ever be aware of what has been lost. Probably the media will be silent, and Congress will join them. Regulators have little incentive to avoid Type II errors. But what of the prospect of making a Type I error? This is a regulator’s worst nightmare.
Suppose you are the regulator, and you approve a promising new drug that turns out to be another Thalidomide, causing horrible deformations in newborns. Or, consider what it felt like to be one of the regulators who approved the swine flu vaccine in 1976. The vaccine did its job, but turned out to cause temporary paralysis in some patients. Such a Type I error is immediately obvious and attains a high profile as lawyers, the media, the public, and eager politicians pile on, screaming at you with rage and blame. We’ve seen this more recently in the cases of Vioxx and Celebrex.
You will hardly be a happy official, and your career may be destroyed. You approved the drug according to your best judgment, but your error is not forgiven or forgotten. Given these asymmetrical incentives, regulators naturally err heavily on the side of caution. They go to great lengths to avoid Type I errors—a factor that has raised the cost of new drug development and approval into the hundreds of millions of dollars and added years to the process. (The only effective countervailing force in recent history has been the focused pressure of activists to speed approval of AIDS drugs.)
Regulators, then, will not make an objective, comprehensive, balanced assessment of both Type I and II risks. The overall outcome is a regulatory scheme driven by incentives that bias it strongly against new products and innovation. Some of the regulators themselves have recognized and publicly expressed these uneven pressures. Former FDA Commissioner Alexander Schmidt put it this way:
In all our FDA history, we are unable to find a single instance where a Congressional committee investigated the failure of FDA to approve a new drug. But, the times when hearings have been held to criticize our approval of a new drug have been so frequent that we have not been able to count them. The message to FDA staff could not be clearer. Whenever a controversy over a new drug is resolved by approval of the drug, the agency and the individuals involved likely will be investigated. Whenever such a drug is disapproved, no inquiry will be made. The Congressional pressure for negative action is, therefore, intense. And it seems to be ever increasing.
The writings of well-known prophets of gloom provide further evidence of the pervasiveness of precautionary thinking. Consider Bill Joy’s much-discussed essay in Wired, “Why the Future Doesn’t Need Us.” Joy proposed that we apply a precautionary approach to a limited number of technologies—but technologies with a powerful reach and impact. He labeled the inventions that frightened him as “GNR”, standing for genetic engineering, nanotechnology, and robotics. Joy focused on these three areas, but his fears apply to any form of technology endowed with the power of self-replication. In his manifesto, he warned of what he saw as immense new threats:
Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species… Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.
And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?
I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.
Just like other advocates of precautionary measures, Joy concluded with a call for restricting or relinquishing technology. Going further than many (at least in their public statements), Joy also called for “limiting our pursuit of certain kinds of knowledge.” He also mentioned that he saw many activists joining him as “the voices for caution and relinquishment…” I will return to Joy’s proposed precautionary measures and their effects near the end of the chapter. In a later chapter, I will consider the views of Leon Kass, Francis Fukuyama, and Michael Sandel, all of whom take precautionary approaches to enhancement technologies, which include Joy’s GNR trio.