Friday, December 24, 2010

My new responsibility as CEO of Alcor Foundation

After an exhaustive set of interviews and background checks, the board of directors of Alcor Life Extension Foundation has appointed me CEO, starting January 1, 2011. This is a challenging and exciting opportunity, especially since it's a kind of return -- I started a cryonics organization (with several friends) in England way back in 1986.

The official announcement is here: Alcor blog

Wednesday, September 1, 2010

The Perils of Precaution, full version

The combined six blog posts are now available in a single essay, with notes, on my main website:

Monday, August 23, 2010

Perils, Part 6: Fatal to the Future

The precautionary principle is ultra-conservative. “Conservative” here does not mean “right-wing”, nor does it refer to the Republican Party in the USA or the Conservative Party in England. I mean it in the most literal sense: that which conserves the existing order. Factions of widely differing agendas may share an interest in the status quo. In the USA, this makes sense of the unholy alliance of religious conservatives and extreme environmentalists in their attack on biotechnology.

The ultraconservative nature of the principle explains support for it both by environmentalists and large political and even commercial bodies. Some businesses are highly conservative and opposed to innovation—those who lack confidence in their ability to innovate or don’t want to bother. These organizations can use the principle to lock down the status quo, protecting their position from disruption by new and potentially superior technologies.

Sonja Boehmer-Christiansen, environmental policy specialist and editor of the journal Energy and Environment, noted that “Virtually all scientific and technological discoveries create, initially at least, powerful losers who can activate the prevailing ideological and political system against the new.” The precautionary principle serves as a pretext for activists with anti-technology and anti-business agendas. Once the principle of precaution is in place, defenders of what is need only raise the barest possibility of harm to block the creative activity of the forces of what could be.

Defenders of national economic interests (as they see them) can easily invoke the precautionary principle. The protracted dispute between the European Commission and the United States and Canada over restrictions on hormone-treated beef cattle is a case in point. The EC explicitly argued that the precautionary principle justifies restricting imports of U.S. and Canadian beef from cattle treated with particular growth hormones. Perhaps we shouldn’t be surprised that the World Trade Organization (WTO) comes under heavy attack from environmental precautionists, given that this body ruled in favor of the United States and Canada. The WTO pointed out that even the EC’s favored scientific studies failed to demonstrate a real or imminent harm when these hormones were used according to accepted animal husbandry practices. This finding has not stopped the EC from enforcing restrictions on hormone-treated beef. The European Commission has promised that it will not allow the precautionary principle to be abused. Apparently the EC believes that promises are made to be broken.

Organizational learning experts have converged on the view that we learn best by experimenting, by learning in action. This is why companies shelling out their own money for corporate learning programs now favor learning on the job and simulations rather than traditional classrooms or standalone online learning courses. The precautionary principle is fatal to the future because it prevents us from learning by experimenting. Earlier in the chapter, we saw that the principle would have blocked most of the scientific discoveries of the past, as well as the technologies they enabled. Scientific and medical research necessarily gets going before we have all the information. We learned about some blood groups, for example, only by doing transfusions.

The precautionary principle, by halting activity, reduces learning and reinforces uncertainty. When the FDA responded with excessive precaution to the 1999 death of a patient in a University of Pennsylvania gene therapy trial for a genetic disease, work in gene therapy throughout the country and beyond was set back by years. The FDA might instead have taken measures to ensure more thoroughly informed consent, or have put additional safeguards in place without halting all research. We will uncover a wider range of both potential harms and benefits through action learning—what organizational theorist Karl Weick has called “looking while leaping.” Allowing a diversity of directions for technological advancement produces more learning and problem-solving than a single direction imposed by a centralized policy-making institution.

Practically all advances with a scientific basis come with some risk. If the mere possibility of harm—to someone, somewhere, somehow—were held up as sufficient reason to stop activity, we would have to say goodbye to all medical, engineering, and technological advances. Nor can precautionists reasonably require innovators to demonstrate the “necessity” of any particular advance. For one thing, necessity is in the eye of the beholder. The extreme environmental activist will judge just about every technological advance to be “unnecessary”.

Besides, each technology invariably forms a bridge to later technologies with even greater benefits and lower costs. We complain about burning fossil fuels for energy, understandably enough. But they are far cleaner than burning wood, may well be made cleaner, and without them we would not be able to invest the resources and knowledge necessary for the transition to the solar-hydrogen-nuclear future.

We saw that, compared with regulations for traditional breeding techniques, the regulation of gene-spliced crops is inconsistent, arbitrary, and not apportioned to risk. This has the effect of slowing innovation in impoverished parts of the world. Crop breeders who use traditional techniques test thousands of new genetic variants every year. When it comes to gene-spliced crops, however, requiring regulatory review of each and every variant effectively stifles research conducted with the most advanced and precise methods.

Lying next to the almost-dead body of agricultural biotechnology we find medical biotechnology. Carl Djerassi, emeritus professor of chemistry at Stanford University, is the father of the modern contraceptive Pill. According to Djerassi, “The precautionary principle is also the principal reason why we still have no such [contraceptive] Pill for men.”

Talking of dead bodies, if the precautionary principle is used to block genetic modification of insects and bacteria, bodies killed by Chagas’ disease will continue to pile up. This disease—accompanied by a resurgence of malaria, dengue fever, and yellow fever—has erupted in Latin America, infecting 12 million to 18 million people out of the 90 million in the area. Once infected with the protozoan Trypanosoma cruzi, carried by several species of insects, between 10 and 30 percent of people develop chronic, life-threatening maladies such as heart failure. Already, 50,000 people die from invasion by this organism each year.

No vaccine or cure exists for Chagas’ disease. That could change if the precautionary principle is kept at bay. Scientists hope to augment conventional public health measures with genetically modified insects and bacteria. They want to use the “sterile-insect technique” to combat Chagas’ disease—but will governments mouthing the precautionary principle allow the release of these genetically modified bugs?

Many medical techniques and technologies now familiar to us, such as open-heart surgery and X-rays, had to pass through what Norman Levitt described as a “heroic stage.” This will be just as true of future medical technologies. Consistent application of the precautionary principle would halt developments in their heroic, experimental, poorly understood phase, preventing them from ever becoming standard techniques. Levitt goes on to say:

At a more basic level, research programs in molecular biology would have been badly crippled. The now-standard tricks associated with 'genetic engineering' - restriction enzymes and the polymerase chain reaction - would have had a difficult time making their way into the armamentarium of investigators.

Change happens regardless of the precautionary principle. If we stifle changes initiated by our brightest, most creative minds, we will be left with inadvertent changes. The direction of those changes is far more likely to be one that we don’t like. The asymmetrical nature of the precautionary principle ignores unchosen changes that have their origin in nature, chance, or the environment. But change, advancement, and progress that come from science are treated as the enemy.

As Ingo Potrykus, emeritus professor of Plant Sciences at the Swiss Federal Institute of Technology, and the inventor of Golden Rice, said: “The application of the precautionary principle in science is in itself basically anti-science. Science explores the unknown, and therefore can a priori not predict the outcome.”

The future is the realm of the unknown. We can do much better to understand, anticipate, and prepare for the possible futures that lie ahead, but a large element of the unknown will remain. If we are to continue improving the human condition—and possibly even move beyond it—we must remain open to the unknown. We must throw out the precautionary principle. Friends of the future will see how the principle would prevent us from developing and applying practically all of the emerging technologies for enhancing and transforming the human condition: genetic techniques, neuromedical implants, nanotechnology, biotechnology, machine intelligence, and so on. Had the precautionary principle been in effect at any time in the past, today would never have arrived. The precautionary principle is the enemy of extropy.

If this principle should be avoided by policymakers and executives making a decision about the development, deployment, regulation, or marketing of a new technology, what are the alternatives? They should start out by thinking about the kind of decision they are making, then identify the optimal way to make it. This requires a structured decision-making process. The wisdom of ultimate precaution turns out to be false. Real wisdom comes from structure.

Perils, Part 5: Failures of the Precautionary Principle

Having seen the precautionary principle in action, we can identify its shortcomings quickly. Before I get to two of the problems with especially strong anti-innovation effects, I want to examine eight other defects that render the precautionary principle capable of generating only dangerously distorted conclusions.

Failure of Objectivity: Any decision procedure adequate for handling the complexities of technological and environmental risks affecting multiple parties must be objective. Objectivity here means “following a structured, explicit procedure informed by the relevant fields of knowledge.” Those fields include risk analysis, economics, the psychology of decision making, and verified forecasting methods. In the absence of a well-designed, structured procedure, assessment and decision making will be distorted by cognitive and interest-based biases, emotional reactions, ungrounded public perceptions and pressures from lobbyists, and popular but unreliable approaches to analysis and forecasting. The precautionary principle does nothing to ensure that decision makers use reliable, objective procedures. Several of the points below detail ways in which the principle lacks objectivity.

Distracts from Greater Threats: The precautionary principle distracts citizens and policy makers from established, major threats to health. The heavy emphasis on taking precautionary measures for any proposed danger, no matter how speculative, draws attention away from any comparative assessment of risks and costs. The principle embodies the imperative to eliminate all risk from some proposed source, ignoring the background level of risk, and ignoring other sources of risk that may be more deserving of action. Environmental activists usually target human-caused effects while giving the destructive aspects of “nature” a free ride. Nature itself brings with it a risk of harms such as infection, hunger, famine, and environmental disruption.

We should apply our limited resources first to major risks that we know are real, not merely hypothetical. The more we attend to merely hypothetical threats to health and environment, the less money, time, and effort will remain to deal with substantial health problems that are highly probable or thoroughly established. The principle errs in focusing on future technological harms that might occur, while ignoring natural risks that are actually occurring.

Vague and Unclear: Everything about the principle is easily interpreted in differing ways. As the authors of a May 2000 paper in Science say, “its greatest problem, as a policy tool, is its extreme variability in interpretation.” The Treaty on European Union gives the principle great importance by referring to it, yet does not define it. Once given a specific interpretation, the principle is simple. Simplicity is the source of its appeal. Simplicity is a virtue—so long as it does not come at the expense of adequacy.

The precautionary principle is too simple. In versions that mention “irreversible harm”, no account is given of irreversibility. Most environmental changes can be reversed, though it may be costly to do so. Even when effects are truly irreversible, that fact alone does not make the changes significant. The principle lacks clarity also because it leaves us without any guidance in cases where resulting harm arrives along with benefits—and this is the rule rather than the exception. The principle leaves us in the dark as to how we should go about preventing harm. As we have seen, precautionary measures can themselves be harmful and costly.

Lack of Comprehensiveness: Any procedure that claims to be both rational and equitable in assessing the desirability of restrictions on productive human activity must be comprehensive. This means taking into account the interests of all affected parties with legitimate claims. It also means considering all reasonable alternative actions, including no action.

A comprehensive decision procedure balances the benefits of restricting an activity that brings with it possibly harmful side effects against two factors: the benefits of the activity in question, and the costs and risks of the restrictions, regulations, or prohibitions. If a proposal has been made to restrict a technology, responsible decision makers will estimate the opportunities lost by abandoning it. If needs that were being met by the technology or productive activity will be met by other means, the costs and risks of those alternatives should be estimated. When making these estimates, decision makers should carefully consider not only concentrated and immediate effects, but also widely distributed and follow-on effects.
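To make the balancing described above concrete, here is a deliberately toy sketch in Python. Every figure is hypothetical, invented purely to illustrate the comparison; a real assessment would estimate benefits and costs (including distributed and follow-on effects) from evidence:

```python
# Toy illustration of comprehensive balancing. All figures are hypothetical,
# in arbitrary units; they stand in for evidence-based estimates.
options = {
    "allow":    {"benefit": 100.0, "cost": 30.0},  # cost includes expected side effects
    "restrict": {"benefit": 40.0,  "cost": 25.0},  # cost includes lost opportunities
    "prohibit": {"benefit": 10.0,  "cost": 20.0},  # needs met by costlier alternatives
}

def net_benefit(name: str) -> float:
    """Net expected benefit of an option: benefits minus costs and risks."""
    option = options[name]
    return option["benefit"] - option["cost"]

# A comprehensive procedure compares every alternative, including no action,
# rather than weighing only the possible harms of the new technology.
best = max(options, key=net_benefit)
print(best, net_benefit(best))  # with these made-up numbers: allow 70.0
```

The point of the sketch is structural, not numerical: the precautionary principle, by contrast, evaluates only the "cost" column of the first row.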

The precautionary principle, with its typically agenda-driven, single-minded approach, fails the test of comprehensiveness. Officials and activists who use the principle routinely ignore, whether inadvertently or deliberately, the costs and side effects of regulations and prohibitions, as well as the potential benefits of a technology, both in the near term and as it might develop over time. As we saw in the case of drug regulation, regulators who start out doing something intended to be beneficial face incentives that encourage them to regulate excessively. The precautionary principle serves as a rationalization and an encouragement for regulators in making Type II errors.

Inappropriate Burden of Proof: The precautionary principle illegitimately shifts the burden of proof (“reverse onus”) by requiring innovators and producers to prove their innocence when anyone raises “threats of harm”. Activists enjoy a favored status since they can raise the prospect of precautionary measures with no more evidence than their fearful imagination. All they need to show is that a possibility of harm exists. No, not even this. All they need to show is that questions have been raised about the possibility of harm. Inventors and producers must then devote effort and resources to answering those questions.

Proponents of the principle portray it as a value-neutral procedure for deciding on policies. Yet it gives the trump card to the status quo and against productive activity and innovation by default—no real evidence is needed. Producing and innovating become de facto crimes whose perpetrators are guilty until proven innocent.

The content—even the very name—of the principle positions environmental activists as friends and protectors of the common citizen. By shifting the burden of proof, advocates of precaution position themselves as responsible protectors of humanity and the environment, while positioning advocates of proposed activities or new technologies as reckless. By using reverse onus, the activists can impose their preferences without providing evidence, and without being accountable for the results of overly-cautious policies.

To add to the fear-inducing power and chilling effect of the reverse onus, activists and regulators who invoke the precautionary principle invariably assume a worst-case scenario. Any release of chemicals into the environment might initiate a chain of events leading to a disaster. Genetically modified organisms might cause unanticipated, serious, and irreversible problems. By imagining the proposed technology or endeavor primarily in a worst-case scenario—while assuming that preventing action will have no disastrous consequences—the adherents of the principle immediately tilt the playing field in their favor. By combining reverse onus and catastrophic scenario-spinning, precautionists guarantee that managing perceptions of risk becomes more influential in policy-making than the reality of risk.

Asymmetrical: The precautionary principle inherently favors nature and the status quo over humanity and progress, while routinely ignoring the potential benefits of technology and innovation. As an example of this—meant only partly in jest—I would point to an odd dichotomy. Many environmentalists have been obsessed with opposing nuclear power. After all, to them, it represents advanced technology and humanity’s triumph over the vagaries of nature. At the same time, when have you ever heard environmentalists and fellow precautionists raising concerns and urging public action over the dangerous act of sunbathing? If this question seems a little peculiar, consider what sunbathers are doing: They are lying prone and helpless directly beneath a gigantic, unshielded nuclear fusion reactor!

The entirely serious point I want to make here is that the precautionary principle fails to treat natural and human threats on the same basis. Users of the principle routinely ignore the potential benefits of technology, in effect favoring nature over humanity. The principle does not account for the fact that the risks created by technological stagnation are at least as real as those of technological advancement. As biochemist Bruce Ames of UC Berkeley has demonstrated, almost all of our exposure to dangerous chemicals comes in the form of natural chemicals, such as aflatoxins in peanuts—which are among the most carcinogenic substances known. Yet fear and attention are primarily directed toward synthetic chemicals. A particular chemical has the same effects regardless of whether its source is natural or synthetic. Despite this, activists treat human-derived chemicals as guilty until proven innocent, and naturally occurring chemicals as innocent until proven guilty.

We can see an excellent example of the asymmetrical favoring of nature over humanity—in other words, the condemnation of conscious creative activity—in the wildly divergent attitudes of hardcore environmentalists and other precautionists toward gene-spliced crops and their more traditional counterparts. Proponents of the precautionary principle—if not utterly opposed to gene-spliced crops in their public pronouncements—encourage authorities to apply a heavy regulatory burden. When it comes to conventional crops, the same precautionary regulations are never urged. This makes sense, precautionists will say, because genetically modified crops introduce new and poorly understood risks. But is this true?

Gene-splicing is simply the most scientifically advanced method of generating new plant varieties. It is more precise, restricted, and predictable than the techniques devised earlier. Anti-technologists would have us believe that modern genetic techniques break radically with past practice. In reality, gene-splicing technology is a refinement of the less accurate, more uncertain techniques of the past. Those older methods—such as hybridization and induced-mutation breeding—lie behind the many new plant varieties introduced every year. No scientific review is required of them. No special labeling is required. Yet, because they are less precise, they bring potential hazards greater than those from the more precisely targeted method of gene-splicing.

Often these more traditional products result from “wide crosses”—applications of hybridization in which many genes are transferred from one species to another in a way that does not happen in nature. Wide crosses have many benefits such as introducing the hardiness of wild rice into cultivated rice, or integrating yellow dwarf virus tolerance and resistance into cultivated oats. But they could also lead to problems. By introducing thousands of foreign genes into an established plant variety, the result could be the accidental introduction of toxins or allergens, or qualities such as increased invasiveness in the field.

Similarly, induced-mutation breeding, a method in common use for the last 50 years, lacks the precision of gene-splicing. In this approach, crop plants are exposed to ionizing radiation or toxic chemicals in order to stimulate genetic mutations. No way exists to select the mutations; it is essentially a random process that leaves breeders with little idea of which mutations occurred or which ones produced a desired effect. Over the decades in which this method has been used, between one and two thousand mutation-bred plant varieties have been brought to market without any regulation to speak of. Although the results have typically been safe, problems have occasionally cropped up.

If the precautionary principle were actually a useful tool, we would expect to see stricter precautions being applied to these less accurate, less predictable types of genetic modification. But the opposite has been the case. Activists and regulators have put all their energies into severely regulating gene-spliced products out of all proportion to their risk. The regulators not only inflate the risks of gene-spliced or “genetically modified” foods, they ignore the ways in which the newer technique can actually reduce risks.

Aside from minimizing the dangers of introducing toxins, allergens, or other unwanted qualities, this technology makes it far easier to remove many natural allergens from the food supply. In addition, gene-spliced crops enable farmers to drastically reduce the use of pesticides. Unavailability of the better technology especially hurts farmers in poor countries, who suffer continual exposure to pesticides.

Fails to Accommodate Tradeoffs: The way in which the precautionary principle shifts the burden of proof is no accident. Many proponents of the principle fully intend its nature-deifying, human-denying values to force the innovator and producer onto a rocky path. Another consequence is the inability of the principle to handle tradeoffs between harm to humans and harm to the environment. Since unaltered nature is implicitly an absolute value in the principle, no tradeoffs are to be allowed. The precautionary principle is all about avoiding possible harm—specifically human-caused harm, and primarily harm to the environment—rather than respecting a wider set of values.

As the precautionary principle has come to be applied, all other values must bow to that of the precautionary activists. Anyone who expresses willingness to forego perfect environmental protection in favor of an easier life, greater health or wealth, or other values, needs to be shown the light according to the precautionist. In this way, the precautionary principle tends toward authoritarianism. If those poor, ignorant fools (no doubt blinded by the dark power of commercial advertising) cannot see what they should do, the activists will force them to do the right thing. Just as Lenin, Trotsky, and Stalin saw themselves as the vanguard of the proletariat, who knew the interests of the Russian workers far better than did the workers themselves, the precautionists wish to protect us from ourselves.

The precautionary principle, in its absolutist, univalued approach, conflicts with the more balanced approach to risk and harm derived from common law. Common law holds us liable for injuries we cause, with liability increasing along with foreseeable risk. By contrast, the precautionary principle bypasses liability and acts like a preliminary injunction—but without the involvement of a court. By doing this, the precautionary principle denies individuals and communities the freedom to make trade-offs in the way recognized by common-law approaches to risk and harm. No other values are admitted as reason not to pursue extreme precaution.

Vulnerable to Corruption: The inconsistent, discriminatory nature of precautionary regulations (as we saw in the case of gene-spliced foods) puts a kink in the rule of law. By giving regulators the power to insist on any degree of testing they choose, the precautionary principle opens up opportunities for corruption—undue influence, unfair targeting, and regulatory capture. It is the principle’s vagueness, inconsistency, and arbitrariness that appeals to regulators who enjoy expanding their powers and wielding them selectively. An increase in corruption and arbitrary regulatory power is further ensured by making precaution and prevention the default assumption.

Perils, Part 3: The Tyranny of Safety

The precautionary principle rides atop the wild horse that is our fundamental drive to avoid harm. I readily grant that caution is a perfectly sensible practice to adopt as we go about our lives. We get into trouble only when we elevate caution and cautionary measures to the status of an absolute principle—when we endow it with a crude veto power over all other values and over the use of maximum intelligence and creativity. Caution, like suspicion or anger or confidence, enjoys a legitimate place in our toolbox of responses. But it cannot by itself serve as a comprehensive, judicious, rational basis for making decisions about technological and environmental concerns.

There’s a simple but telling way to appreciate the threat to progress and human well-being posed by the precautionary principle: Take a look back at the scientific and technological achievements of the past, then ask: “Would these advances have been sanctioned or prohibited by the precautionary principle?” Consider a small sample of historical achievements that have improved human life:

The airplane: Planes crash, don’t they? Serious or irreversible harm results without doubt.

Antibiotics and sulfa drugs: Disallowed due to risk of side-effects.

Aspirin: Along with aspirin’s wide range of beneficial effects come some significant adverse side-effects. Today’s level of regulation—which falls well short of the precautionary principle—might deny approval to aspirin.

CAT scans: Disallowed from the start by precaution due to risk from X-rays.

Chlorine: A tremendous public health boon when used for disinfecting water, producing pharmaceuticals, and making pesticides. It’s also a poison gas.

The contraceptive pill: One of the most powerful forces for social change would have been banned due to its association with an elevated risk of some cancers.

DDT (dichlorodiphenyltrichloroethane): This oft-maligned substance, whose insecticidal properties were discovered in 1939 by Paul Hermann Müller (a discovery for which he won the Nobel Prize in medicine), saved the lives of millions threatened by malaria. Throughout the Mediterranean region, DDT transformed malaria from ugly reality to fading bad dream. In 1970, the National Academy of Sciences declared: “To only a few chemicals does man owe as great a debt as to DDT.”

Digitalis: When William Withering extracted digitalis from the foxglove plant (Digitalis purpurea) in 1780, he delivered the first effective drug in medicine. The precautionary principle might have locked down such a highly toxic substance, never allowing its highly beneficial effects on the heart to see the light of day.

Drugs: Do any medical drugs have a proven absence of side effects?

Electrification: Providing electricity to people across the land requires power plants and transmission lines and creates pollution. Each step of the way clearly violates the precautionary principle.

Energy: Production and use of fire, electricity, microwaves, and all forms of energy contravene the precautionary principle. The causal link between accidents with energy, or its misdirection, and resulting harm is clear. To prevent the possibility of harm, the principle would prohibit all forms of energy production capable of powering any useful work. Back to living in caves, without fire!

The Green Revolution: This has enormously boosted food production and averted famine throughout much of Asia and elsewhere. The Green Revolution would have been strangled in its crib by the precautionary principle. Genetically modified crops are running up against the principle, yet GM crops are created through a far more precise process. The crops of the Green Revolution were arrived at by randomly mutating seeds and selecting plants with enhanced characteristics. No guarantee could have been given against any possibility of serious harm to humans or ecosystems.

Knives: Enabled humans to eat, build shelter, and develop tools and cultural artifacts. Can also be used for destructive purposes. Say no more.

Nuclear research: The dangers of radiation, illustrated by Marie Curie’s death, would lead the precautionary principle to block the development of NMR imaging, nuclear power, and nuclear physics.

Open-heart surgery: This life-saving surgery might have been blocked early on, since it obviously carried a risk of causing death and opening a path for infections.

Organ transplants: Early recipients often died—a little sooner than they would have otherwise.

Penicillin: Dr. Gail Cardew of the Royal Institution in London has noted that this “wonder drug”, tested early on a human, turned out to be toxic to guinea pigs. A more precautionary approach at the time probably would not have allowed penicillin to be tried on humans.

The periodic table: Systematizes knowledge that can—and has—been used to make explosives for offensive purposes.

Physics research: Study of the principles of motion culminating in work by Newton might have been prohibited. That knowledge created the basis for ballistics.

Radar: If the precautionary ethos, rather than wartime necessity, had prevailed, we would never have enjoyed the benefits of radar. The microwaves emitted by high-powered radar can harm or kill a person standing in front of the antenna.

Railways: When travel by rail first became a real option, some critics warned that people would die when they exceeded 30 mph. Some early travelers attributed their real or imagined sickness to their railroad trip.

Vaccines for rabies, measles, polio, smallpox: Consider that, for instance, Salk’s polio vaccine was a live culture. The probability of protection brought with it a 5% risk of contracting the disease. All vaccines carry a small risk of harmful infection. If Jenner were experimenting with inoculation today, he would be attacked for transferring tissue across species boundaries, and his work would be shut down as contravening the precautionary principle.

X-rays: Before safe doses had been determined, early researchers into X-ray medicine died.

Just about every human activity could “raise threats of serious or irreversible harm to human health or the environment.” Had the precautionary principle been imposed throughout history, we would still be living poor, nasty, brutish lives—if humans still existed.

The precautionary principle, if applied to real innovation throughout our past, would have stifled progress. As many of the historical examples indicate, this means not only losing the benefits of creativity, but also suffering the natural harms that would continue and multiply unchecked. We need not look to the past to see harm being done. Patients who could benefit from xenotransplantation continue to suffer because of overblown fears about the possible transmission of porcine retroviruses. Neurological disease continues to run its devastating course while drugs that might help are blocked by people fearing a pharmacological “underclass”. Long-term storage of nuclear waste continues to be blocked by groups feeding fears of remote, theoretical risks, while they ignore the current, real problems they keep alive.

Recently, a study was published that found an increased incidence of multiple sclerosis (MS) among people who had received hepatitis B shots in the 3 years prior to disease onset. If advocates of the precautionary principle jump on this result, they could cause much needless suffering and death. The kind of fear that drives the principle will ignore the fact that this result comes from a single, unverified study. It will also ignore the crucial information that MS affects about 2.5 million people, but hepatitis B affects 350 million people.

Whether it’s in the name of the principle, or the simple visceral reaction that has the same effect, scared people turning down the hepatitis B vaccine would be a health disaster. Exactly this kind of unmeasured, fearful response to vaccines has already popped up many times, a recent example being parents who refuse to have their children vaccinated with a new combination vaccine.

Consider the case of the rotavirus vaccine. Rotavirus is the most common cause of severe diarrhea among children. According to the Centers for Disease Control, it results in the hospitalization of 55,000 children annually in the United States, and the death of 600,000 children annually worldwide. That’s the death of one child every minute. These children are pushed over a rocky path on the way to death, enduring vomiting and watery diarrhea along with fever and abdominal pain.

Although rotavirus was discovered three decades ago, no vaccine for this devastating disease was available until 1998. In 1999, after just nine months on the market, Wyeth Laboratories voluntarily pulled their vaccine, RotaShield, because it was associated with a slightly increased risk of intussusception (bowel obstruction). In wealthy countries, intussusception can usually be treated; where it cannot, the result can be severe illness or, occasionally, death.

No rotavirus vaccine was available anywhere for five years, even in the developing world, where one in every 250 children dies from the disease. The precautionary agency in this case was the Advisory Committee on Immunization Practices (ACIP). This is a horrifying example of caution that kills. Several years after the withdrawal, it remains unclear whether the RotaShield vaccine really causes additional bowel disorders. Even if it does, the risk is small. According to a report by NIH scientists in the Journal of Infectious Diseases, the vaccine might have led to 1 excess case per 32,000 vaccinated infants. More than ten times that number (around 1 in 3,000 infants during their first year) develop intussusception anyway. The same report found an overall decrease in intussusception among infants under a year old during the period of exposure to the rotavirus vaccine. The bottom line in the developing world: the precautionary blocking of a vaccine has killed millions of children.
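The two rates quoted above can be put side by side with a quick back-of-the-envelope sketch. The rates come from the cited NIH report; the cohort size is an illustrative assumption of mine, not a figure from the report:

```python
# Back-of-the-envelope comparison of the intussusception rates quoted above.
# The two rates come from the cited NIH report; the cohort size is an
# illustrative assumption, chosen only to make the numbers concrete.
excess_rate = 1 / 32_000    # estimated excess cases per vaccinated infant
baseline_rate = 1 / 3_000   # background rate in the first year of life

cohort = 1_000_000          # hypothetical number of vaccinated infants

excess_cases = cohort * excess_rate      # ~31 extra cases
baseline_cases = cohort * baseline_rate  # ~333 cases occurring anyway

print(f"Excess cases per million vaccinated: {excess_cases:.0f}")
print(f"Background cases per million:        {baseline_cases:.0f}")
print(f"Background-to-excess ratio:          {baseline_cases / excess_cases:.1f}x")
```

The ratio of roughly ten-to-one is what the text means by “more than ten times that number”: even if the excess risk is real, it is small against the background rate of the condition.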

Another upshot of this episode is noteworthy. For half a century, new pharmaceutical products have been introduced first in the United States and Europe, only later reaching the developing world. This is changing. In planning ahead for the launch of its vaccine, Rotarix, GlaxoSmithKline (GSK) held its half-dozen trials in developing world countries including Mexico, Brazil, South Africa, and Malaysia. GSK plans to make Rotarix available in Mexico right away. When it comes to a rotavirus vaccine, it is now the United States that is the third world country. After Wyeth’s experience (albeit with a differently-derived vaccine), GSK has no plans to request approval from the US Food and Drug Administration.

The point here is not that precautionary restrictions are never justified. It is not that unfettered innovation is always best. It is not that environmental concerns are to be dismissed. It is that an excessive focus on preventing one perceived problem can create even worse problems. Nor is this a counsel of despair. Better decision processes are available.

Sunday, August 22, 2010

Perils, Part 4: The Paradox of the Precautionary Principle

The rotavirus case illustrates what I call the paradox of the precautionary principle: The principle endangers us by trying too hard to safeguard us. It tries “too hard” by being obsessively preoccupied with a single value—safety. By focusing us on safety to an excessive degree, the principle distracts policymakers and the public from other dangers. The more confident we are in the principle, and the more enthusiastically we apply it, the greater the hazard to our health and our standard of living. The principle ends up causing harm by diverting attention, financial resources, public health resources, time, and research effort from more urgent and weighty risks.

Adding insult to injury, in practice this rule assumes that new prohibitions or regulations will result in no harm to human health or the environment. Unfortunately, well-intended interventions into complex systems invariably have unintended consequences. Only by closely examining possible ramifications can we determine whether or not the intervention is likely to make us better off. By single-mindedly enforcing the tyranny of safety, this principle can only distract decision makers from such an examination.

Our choices of modes of transport provide a simple example of the paradox of the precautionary principle. What image comes to mind when we hear the words “airplane crash”? A terrifying plunge, an enormous smash, hundreds of dead bodies, billows of black smoke. On hearing news of a spectacular plane crash, some travelers choose to go by car instead. (A smaller number of people won’t take a plane at any time, though they rarely claim this to be a calmly rational choice.) The same effect has been observed in the case of train accidents. Plane and train crashes are dramatic events that impress themselves on our minds, encouraging us to believe that those modes of travel are intolerably risky. The facts are otherwise—as most of us know, if only vaguely.

Consider that in 2000, the world’s commercial jet airlines suffered only 20 fatal accidents, yet they carried 1.09 billion people on 18 million flights. Even more remarkable, if you add up all the people who died in commercial airplane accidents in America over the last 60 years, the number you will arrive at is smaller than the number of people killed in U.S. car accidents in any three-month period. These totals are telling, but the most relevant figures compare fatalities per unit of distance traveled. By that measure, in the United States you will be 22 times safer traveling by commercial airline than by car. (This conclusion is from a 1993-95 study by the U.S. National Safety Council.)

Air travel has become much safer since 1950, bringing down the number of fatal accidents per million aircraft miles flown to 0.0005. To put your risk of death into proportion, consider that in 1997 commercial airlines made 8,157,000 departures, carried 598,895,000 passengers, and endured only 3 fatal accidents. While switching from road to air reduces your risk 22 times, if you were to switch from train to air, you would reduce your risk 12-fold.
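The 1997 figures quoted above imply a strikingly low per-departure risk. A quick sketch, using only the numbers given in the text:

```python
# Per-departure and per-passenger risk implied by the 1997 figures
# quoted in the text (departures, passengers carried, fatal accidents).
departures = 8_157_000
passengers = 598_895_000
fatal_accidents = 3

print(f"One fatal accident per {departures / fatal_accidents:,.0f} departures")
print(f"One fatal accident per {passengers / fatal_accidents:,.0f} passengers carried")
```

That works out to one fatal accident per roughly 2.7 million departures, which is why the per-mile comparisons that follow so strongly favor air travel.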

You’re considerably more likely to die while engaging in recreational boating or biking than while traveling by air. Given that you need to travel, by taking a precautionary approach that leads you to avoid the possibility of a spectacular air crash, you would be exposing yourself to a greater risk of injury or death. Be cautious with precaution!

In comparing the fatality rates of air travel and road travel, we have been comparing like with like. The paradox of the precautionary principle becomes even more dangerous when a preoccupation with one value, such as safety, distracts us from other values. We may be able to improve an outcome according to one measure, but it will often come at the cost of worsening an outcome according to a different measure. In that case we will face choices that the precautionary principle is poorly equipped to handle. To see this, consider the Kyoto protocol.

The Kyoto protocol is an international precautionary commitment to reduce the emission of gases suspected of causing global warming. Supporters of Kyoto typically favor forcing down emissions of these gases by raising fuel economy standards for cars and trucks. Enforcing similar fuel economy standards in the United States has pushed automakers to produce smaller, lighter, more vulnerable cars. According to a study by the Harvard School of Public Health, this results in an additional 2,000-4,000 highway deaths per year. In this case, we are buying some climate remediation at the cost of many lives. We are improving outcomes according to one measure but worsening them according to another.

Regulation of economic activity—whether precautionary in origin or not—involves a more general and well-established tradeoff. Known as the “income effect”, this tradeoff is shaped by a correlation between wealth and health. Implementing and complying with regulations imposes costs. Political scientist Aaron Wildavsky observed that poorer nations tend to have higher mortality rates than richer ones. This correlation is no coincidence. Wealthier people can eat more varied and nutritious diets, buy better health care, and reduce sources of stress (such as excessively long working hours) and thereby reduce consequences such as heart attacks, hypertension, depression, and suicide.

In counting up the anticipated benefits of regulations, we should therefore also consider what they may cost us—or cost poorer people in countries affected by international regulations. Some regulations will amount to a lousy deal. Although precise numbers are hard to pin down, a conservative estimate from the research suggests that the income effect leads to one additional death for every $7.25 million of regulatory costs. Many regulations impose costs in the tens of billions of dollars annually. That implies thousands of additional deaths per year. Safety is not free. Regulatory overkill can be just that.
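The step from “tens of billions” to “thousands of deaths” is simple division. A sketch using the $7.25 million estimate from the text; the $20 billion annual cost is an illustrative assumption for a single large regulation, not a figure from any study:

```python
# Income-effect arithmetic: one additional statistical death per
# $7.25 million of regulatory cost (estimate quoted in the text).
# The annual cost figure is an illustrative assumption.
cost_per_death = 7_250_000        # dollars of regulatory cost per induced death
annual_reg_cost = 20_000_000_000  # hypothetical: a $20 billion/year regulation

implied_deaths = annual_reg_cost / cost_per_death
print(f"Implied additional deaths per year: {implied_deaths:,.0f}")
```

At that assumed cost, the income effect alone implies on the order of 2,800 additional deaths per year from a single regulation.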

If only activists would appreciate this point as they move from opposing chlorine to opposing “endocrine disruptors” and phthalates (used to soften plastics). The story of the antichlorine campaign does not offer much hope. The price of precaution can be exorbitant, especially for developing countries. Toward the end of the 1980s, environmental activists had focused their attention on purging society of chlorinated compounds. As part of this campaign, activists spread disinformation in all directions. They worked especially hard to persuade water authorities in numerous countries that allowing chlorination of drinking water amounted to giving people cancer. In Peru, they succeeded. The consequences were dire.

Finding themselves in a budget crisis, Peruvian government officials saw in the cancer-risk claims a handy excuse to stop chlorinating the drinking water in many parts of the country. They could cover their backs by pointing to official reports from the US Environmental Protection Agency that had alleged that drinking chlorinated water was linked to elevated cancer risks. (The EPA later admitted that this connection was not “scientifically supportable.”) Soon afterwards, cholera—a disease that had been wiped out in Peru—returned in the epidemic of 1991-96. 800,000 people suffered and 6,000 died in Peru. The disease then spread to Colombia, Brazil, Chile, and Guatemala. Around 1.3 million people were afflicted, and 11,000 or more were killed.

The drinking water system had been deteriorating before this, so we cannot place the entire blame on the single decision to stop chlorinating. But chlorinating the water would probably have prevented the epidemic from getting started. Absence of the treatment certainly made the situation far worse. The high price paid for that precautionary measure is not unusual or surprising in poorer countries. The elimination of DDT further illustrates the point.

DDT had ended the terrible scourge of malaria in some third-world countries by the late 20th century by killing off malaria-carrying mosquitoes. But environmentalists targeted the pesticide, claiming that it might harm some birds and might possibly cause cancer. Malaria control efforts around the world quickly fell apart. This devastating disease is rapidly regaining strength in earth’s tropical regions. Malaria epidemics in 2000 alone killed over a million people and sickened 300 million. Once again, those least able to bear it were the ones to pay the high price for precautionary tunnel vision.

Have aggressive environmental activists learned from these experiences and changed course? Hardly. “Green at any price” seems to be their motto as they mutter speculations of doom while trying to strangle the technology of gene-spliced (or “genetically modified” or GM) crops. In this case, there may be hope. Late in 2004, both China and Britain looked set to approve gene-spliced crops, despite well-organized and funded opponents—opponents who don’t hesitate to destroy crops being grown for research. And in 2005, the FDA began loosening its restrictions on bioengineered rice.

If these countries open the way for this vital part of agricultural biotechnology, it will mean a reversal of years of public policy that has restricted research and development and raised its costs. The result should be to spur innovation and renew food productivity growth in the developing countries, ushering in a second Green Revolution.

These are just a few of the many cases illustrating the dangers of the precautionary principle. Environmental and technological activism that wields the precautionary principle, whether explicitly or implicitly, raises clear threats of harm to human health and well-being. If we apply the principle to itself, we arrive at the corollary to the Paradox of the Precautionary Principle:

According to the principle, since the principle itself is dangerous, we should take precautionary measures to prevent the use of the precautionary principle.

The severity of the precautionary principle’s threat certainly does not imply that we should take no actions to safeguard human health or the environment. Nor does it imply that we must achieve full scientific certainty (or its nearest real-world equivalent) before taking action. It does imply that we should keep our attention focused on established and highly probable risks, rather than on hypothetical and inflated risks. It also implies an obligation to assess the likely costs of enforcing precautionary restrictions on human activities. Clearly, we need a better way to assess potential threats to humans and the environment—and the consequences of our responses. In order to develop a suitable alternative, we first need to appreciate the full extent of flaws in the precautionary approach.

Perils, Part 2: Pervasive Precaution

Pervasive Precaution

The precautionary principle, as defined by Soren Holm and John Harris in Nature magazine in 1999, asserts:
When an activity raises threats of serious or irreversible harm to human health or the environment, precautionary measures that prevent the possibility of harm shall be taken even if the causal link between the activity and the possible harm has not been proven or the causal link is weak and the harm is unlikely to occur.

The version from the Wingspread Statement, 1998:
“When an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”

The precautionary principle has taken many forms, but these definitions capture the essence of most of them. Starting life as the German Vorsorgeprinzip (literally “precaution principle”), this rule assumed a role in institutional decision making in the North Sea conferences from 1984 to 1995, and in the deliberations leading to the Rio Declaration of 1992, the UN Framework Climate Convention of 1992, and the Kyoto Protocol. Formulations of the principle do vary in some important ways. The fuzziness resulting from this lack of a standard definition causes trouble, but is also the very characteristic that appeals to advocates of technological and environmental regulation. They have come to favor the precautionary principle—in whatever form best helps them maneuver policies so as to further their goals.

In its most modest form, the principle urges us not to wait for scientific certainty before taking precautionary measures. Considered out of context, that policy is entirely reasonable. We rarely achieve the high standard of scientific certainty about the effects of our activities. But this fact applies just as much to actions in the form of restrictions, regulations, and prohibitions as to innovative and productive activities. By recognizing the frequent necessity to act or refrain from acting in conditions of uncertainty, we are not thereby committed to favoring a policy of restrictive precautionary measures. This message about certainty and action therefore tells us little. And the rest of the principle provides no further guidance about choosing under uncertainty.

Its roots in the German Vorsorgeprinzip mean that the common use of the principle goes well beyond urging preventative or prohibitory action based on inconclusive evidence. An attribute more central to the principle is the judgment of “better safe than sorry”. In other words, err on the side of caution. While this sentiment makes for a perfectly sound proverb, it provides a treacherous foundation for a principle to guide assessments of technological and environmental impacts. As a proverb, “better safe than sorry” is counterbalanced by opposing—but equally valid—proverbs, such as “he who hesitates is lost”, or “make hay while the sun shines.”

Precautionary measures typically impose costs, burdens, and their own harms. Administering precautionary actions becomes especially dangerous when the principle says, or is interpreted as saying, that those actions are justified and required “if any possibility” of harm exists. In this (typical) interpretation, it becomes ridiculously easy to rationalize restrictive measures in the absence of any real evidence. Clearly, this pushes the principle far beyond dismissing the need for fully established cause-effect relationships.

Statements of the precautionary principle vary also in whether or not they specify that the principle deals with threats of serious or irreversible harm or damage. Problems arise with the usage of “serious” and “irreversible”, but at least this clause limits the application of the principle. More demanding versions of the principle, such as the widely-quoted Wingspread Statement, call for precautionary measures to come into play even when the possible harm is not serious or irreversible.

Statements of the precautionary principle may include a cost-effectiveness clause. This happens all too rarely in practice, perhaps because most advocates of the principle aim to stop the targeted technology or activity, not to maximize welfare. The Rio Declaration of 1992 stands out by incorporating such a clause:

“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures.”

Some worthy attempts have been made to improve the principle by adding to it. In 2001, the European Environment Agency issued a document, “Late Lessons from Early Warnings”, which offered twelve accompanying guidelines. These included some excellent recommendations, such as “Identify and reduce interdisciplinary obstacles to learning”, and “Systematically scrutinise the claimed justifications and benefits alongside the potential risks.” Unfortunately, advocates of the principle have not paid attention to these suggestions, and many of them co-exist uncomfortably with the main thrust of the principle. Another noteworthy attempt at amelioration is a May 2000 Science paper titled “Science and the Precautionary Principle”. This set out five “Guidelines for Application of the Precautionary Principle”: proportionality, nondiscrimination, consistency, cost-benefit examination, and examination of scientific developments.

With or without patches, the deeply flawed precautionary principle can cause trouble. It already has. The pervasive, profoundly restrictive force of the principle is all the more remarkable given its relative obscurity, especially outside Europe. Even among widely read people, a large majority do not recall ever having heard the term—although they have certainly heard “better safe than sorry”. Yet the dominant influence of the principle can be found everywhere.

Consider, for a start, the central role of the precautionary principle in shaping environmental policy in the European Union. The foundational Maastricht Treaty on the European Union states that “Community policy on the environment…shall be based on the precautionary principle and on the principles that preventive actions should be taken, that environmental damage should as a priority be rectified at source and that the polluter should pay.” The United Nations joined the precautionary bandwagon when the UN Biosafety Protocol led the way for other international treaties by incorporating the precautionary principle. Some other examples of the principle explicitly at work:

  • Protocol on Substances that Deplete the Ozone Layer, Sept. 16, 1987, 26 ILM 1541.
  • Second North Sea Declaration.
  • Ministerial Declaration Calling for Reduction of Pollution, Nov. 25, 1987, 27 ILM 835.
  • United Nations Environment Program.
  • Nordic Council’s International Conference on Pollution of the Seas: Final Document Agreed to Oct. 18, 1989, in Nordic Action Plan on Pollution of the Seas, 99 app. V (1990).
  • PARCOM Recommendation 89/1 - 22 June, 1989.
  • The Contracting Parties to the Paris Convention for the Prevention of Marine Pollution from Land-Based Sources.
  • Third North Sea Conference.
  • Bergen Declaration on Sustainable Development.
  • Second World Climate Conference.
  • Bamako Convention on Transboundary Hazardous Waste into Africa.
  • OECD Council Recommendation C(90)164 on Integrated Pollution Prevention and Control, January 1991.
  • Helsinki Convention on the Protection and Use of Transboundary Watercourses and International Lakes.
  • The Rio Declaration on Environment and Development, June 1992.
  • Climate Change Conference (Framework Convention on Climate Change, May 9, 1992).
  • UNCED Text on Ocean Protection.
  • Energy Charter Treaty.

The influence of the principle has been felt in South America too. Transgenic crops have been prohibited throughout Brazil since 1998, when a judge banned them based on an interpretation of the version of the principle included in the Rio Declaration on Environment and Development—a statement that came out of the 1992 Earth Summit, held in Brazil.

The precautionary principle is followed even more widely than it might seem from official mentions, especially in the United States. We often find the principle being applied without disclosure or explicit acknowledgment. Perhaps this happens because the principle ably captures common intuitions that grow out of fear fed by lack of knowledge. Our first reaction to an apparent threat is usually: Stop it now! We may disregard the costs of stopping the threat. Our sense of urgency may blind us to considering whether we might have better options at our disposal.

When the United Kingdom faced the appalling, if over-inflated, menace of bovine spongiform encephalopathy (BSE), people quickly demanded that authorities require proof of virtually zero risk for any substance that might carry BSE contamination. Professor James Bridges, chair of the European Commission’s toxicology committee, referred to this “extreme precautionary approach in the context of other food risks” and noted that it had “involved enormous costs”. Of course, if such proof could be provided (which it surely cannot) at a low cost, the demand would be reasonable. But the actual reaction lacked any sense of proportionality and objective risk assessment.

In the United States, the President’s Council on Sustainable Development affirmed the precautionary principle, without using the term explicitly, in its statement:
There are certain beliefs that we as Council members share that underlie all of our agreements. We believe: (number 12) even in the face of scientific uncertainty, society should take reasonable actions to avert risks where the potential harm to human health or the environment is thought to be serious or irreparable.

The United States has made extensive use of precautionary prevention—sometimes quite sensibly—even if no mention is made of a principle. Sometimes precautionary prevention has been applied earlier in the US than in Europe. The European Environment Agency publication “Late Lessons from Early Warnings” notes four examples: The Delaney Clause in the Food, Drug and Cosmetics Act, 1957–96, which banned animal carcinogens from the human food chain; a ban on the use of scrapie-infected sheep and goat meat in the animal and human food chain in the early 1970s; a ban on the use of chlorofluorocarbons (CFCs) in aerosols in 1977, several years before similar action in most of Europe; and a ban on the use of DES as a growth promoter in beef, 1972–79, nearly 10 years before the EU ban in 1987.

The most formidable manifestations of the precautionary principle in the US may be found in the regulatory practices of the FDA (Food and Drug Administration). It’s not the only US government agency applying the principle, usually without naming it—and without calculating its costs and benefits. The EPA (Environmental Protection Agency) bound itself to the principle in developing and enforcing regulations on synthetic chemicals. US regulators have taken an even more strongly precautionary approach than Europe to some kinds of risks, such as nuclear power, lead in gasoline, and the approval of new medicines—which takes us back to the FDA.

Precautionary FDA regulation may have the most drastic impact on human well-being of any mentioned so far. The FDA has successfully sought to extend its powers over the decades, first solidifying its authority to determine when a new medication could be considered safe, and later to determine when it could be considered effective. If the agency were using a purely rational approach to regulation—one that accurately aimed at maximizing human health—it would fully account for both the risks of approving a new medicine that might have damaging side-effects, and the dangers of withholding approval or delaying approval to a potentially beneficial medicine. In practice, this is far from the way the FDA operates.

In reality, the FDA consistently follows a path close to the one the precautionary principle would prescribe: it puts all its energies into minimizing the risk of approving a new drug that goes on to cause harm. Very little energy goes into considering the potential benefits of making the new treatment available. Regulators can make mistakes on both sides of this balance.

If they approve a drug that turns out to be harmful, they have made a “Type I error”, as it is called in risk analysis. They might also make a Type II error by making a beneficial medication unavailable—by delaying it, rejecting it for consideration, by failing to approve it, or by wrongly withdrawing it from the market.

Both types of error are bad for the public. For the regulators, however, the risk of a Type I error looks much more frightening than that of a Type II error. If they make a Type II mistake and prevent a beneficial treatment from coming to market, few people will ever be aware of what has been lost. The media will probably be silent, and Congress will join them. Regulators thus have little incentive to avoid Type II errors. But what of the prospect of making a Type I error? This is a regulator’s worst nightmare.
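The asymmetry can be made concrete with a toy calculation. All numbers here are hypothetical illustrations of the incentive structure described above, not data:

```python
# Toy model of the regulator's asymmetric incentives (all numbers are
# hypothetical). Even when a Type II error (blocking a good drug) costs
# the public more than a Type I error (approving a bad one), the
# regulator's personal incentives point the other way.
lives_lost_type1 = 100     # visible harm if a bad drug is approved
lives_lost_type2 = 1_000   # invisible harm if a good drug is blocked

blame_type1 = 1.00         # near-certain hearings, headlines, career damage
blame_type2 = 0.01         # almost no one ever notices

# The public interest and the regulator's incentive diverge:
print(lives_lost_type2 > lives_lost_type1)  # the public loses more to Type II
print(blame_type1 > blame_type2)            # the regulator fears Type I more
```

Under these illustrative numbers, a regulator minimizing personal blame rather than public harm will systematically over-avoid Type I errors, which is exactly the behavior the text goes on to describe.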

Suppose you are the regulator, and you approve a promising new drug that turns out to be another Thalidomide, causing horrible deformations in newborns. Or, consider what it felt like to be one of the regulators who approved the swine flu vaccine in 1976. The vaccine did its job, but turned out to cause temporary paralysis in some patients. Such a Type I error is immediately obvious and attains a high profile as lawyers, the media, the public, and eager politicians pile on, screaming at you with rage and blame. We’ve seen this more recently in the cases of Vioxx and Celebrex.

You will hardly be a happy official, and your career may be destroyed. You approved the drug according to your best judgment, but your error is neither forgiven nor forgotten. Given these asymmetrical incentives, regulators naturally err heavily on the side of caution. They go to great lengths to avoid Type I errors—a factor that has raised the cost of new drug development and approval into the hundreds of millions of dollars and added years to the process. (The only effective countervailing force in recent history has been the focused pressure of activists to speed approval of AIDS drugs.)

Regulators, then, will not make an objective, comprehensive, balanced assessment of both Type I and II risks. The overall outcome is a regulatory scheme driven by incentives that bias it strongly against new products and innovation. Some of the regulators themselves have recognized and publicly expressed these uneven pressures. Former FDA Commissioner Alexander Schmidt put it this way:

In all our FDA history, we are unable to find a single instance where a Congressional committee investigated the failure of FDA to approve a new drug. But, the times when hearings have been held to criticize our approval of a new drug have been so frequent that we have not been able to count them. The message to FDA staff could not be clearer. Whenever a controversy over a new drug is resolved by approval of the drug, the agency and the individuals involved likely will be investigated. Whenever such a drug is disapproved, no inquiry will be made. The Congressional pressure for negative action is, therefore, intense. And it seems to be ever increasing.

The writings of well-known prophets of gloom provide further evidence of the pervasiveness of precautionary thinking. Consider Bill Joy’s much-discussed essay in Wired, “Why the Future Doesn’t Need Us.” Joy proposed that we apply a precautionary approach to a limited number of technologies—but technologies with a powerful reach and impact. He labeled the inventions that frightened him as “GNR”, standing for genetic engineering, nanotechnology, and robotics. Joy focused on these three areas, but his fears apply to any form of technology endowed with the power of self-replication. In his manifesto, he warned of what he saw as immense new threats:

Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species… Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?
I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Just like other advocates of precautionary measures, Joy concluded with a call for restricting or relinquishing technology. Going further than many (at least in their public statements), Joy also called for “limiting our pursuit of certain kinds of knowledge.” He also mentioned that he saw many activists joining him as “the voices for caution and relinquishment…” I will return to Joy’s proposed precautionary measures and their effects near the end of the chapter. In a later chapter, I will consider the views of Leon Kass, Francis Fukuyama, and Michael Sandel, all of whom take precautionary approaches to enhancement technologies, which include Joy’s GNR trio.

The Perils of Precaution

This is the first in a series of entries of what is chapter 2 of my book-in-progress, The Proactionary Principle. I will post a new section of that chapter every day or two. Following that will be sections from chapter 4, The Proactionary Principle (my alternative to the precautionary principle).

When you hear the word “mother”, what comes to mind? If you are like me, mother stands for comfort, for love and, above all, for protecting and nurturing the young. Those lurid and repellent news stories of mothers who murder their children revolt and fascinate us precisely because they violate our expectations so brutally. We expect mothers to watch over their offspring, to safeguard them, to take precautionary measures. When we see mothers filling this age-old role, all feels right with the world.

But what if a mother is killing with kindness? What if, by protecting her child from a perceived danger, she is opening the door to a greater danger? What if the overprotective mother encourages thousands of other mothers to follow her example? How do we feel then? Her excessive—or misdirected—precaution now puts in peril a multitude of innocents.

In the process of writing this chapter, I came across a website that claims to reveal the truth about vaccines. The mother who runs this site had her son vaccinated with the usual treatments, starting at two months of age. At fifteen months, one week after being vaccinated for several dangerous conditions, the boy started having seizures. No definite connection was established with the vaccine, and reactions typically occur more quickly. This unfortunate woman is now devoting herself to broadcasting dire warnings about the vaccine menace. To the extent that the conviction of her personal voice succeeds in influencing others, she will be responsible for greatly raising the risk of serious illness in numerous children.

The mother had read a fact sheet explaining that a vaccine can cause serious allergic reactions, and induce seizures in 6 out of 10,000 cases. She writes that, “like so many of us, I never thought it meant my child.” This comment indicates a failing in the thinking of this mother—a failing that sparked such appalled outrage in me that little room was left for sympathy. First, she read about the small odds of an adverse reaction but ignored it—because it didn’t mean her child. (Why not? Because believing something comfortable was more important to her than seeing reality?) Then, after the misfortune of her son being one of those suffering adverse reactions (assuming the vaccine was the cause), she ignored the dangers for which the vaccine was prescribed and set about encouraging other women to refuse to vaccinate their children.

This aggressive ignorance typifies the danger of allowing caution without knowledge, and fear without objectivity, to drive our thinking and decision making. When we overly focus on avoiding specific dangers—or what we perceive to be dangers—we narrow our awareness, constrain our thinking, and distort our decisions.

Many factors conspire to warp our reasoning about risks and benefits as individuals. The bad news is that such foolish thinking has been institutionalized and turned into a principle. Zealous pursuit of precaution has been enshrined in the “precautionary principle”. Regulators, negotiators, and activists refer to and defer to this principle when considering possible restrictions on productive activity and technological innovation.

In this chapter, I aim to explain how the precautionary principle, and the mindset that underlies it, threaten our well-being and our future. The extropic advance of our civilization depends on keeping caution in perspective. We do need a healthy dose of caution, but caution must take its place as one value among many, not as the sole, all-powerful rule for making decisions about what we should and should not do.

I will show how the single-minded pursuit of precaution has the perverse effect of raising our risks. Then I’ll point out many ways in which the principle fails us as a guide to forming a future with care and courage. That will set the stage for an alternative principle—one explicitly designed for the task.

Our Endangered Future
Continued technological innovation and advance are essential for our progress as a species and as individuals, and for the survival of our core freedoms. Unfortunately, human minds do not find it natural or easy to reason accurately about risks arising from complex circumstances. As a result, technological progress is being threatened by fundamentalists of all kinds, anti-humanists, Luddites, primitivists, regulators, and the distorted perceptions to which we are all vulnerable. A clear case of this shortcoming is our reasoning about the introduction of new technologies and the balance of potential benefits and harms that result.

Most of us want to do two things at the same time: Protect our freedom to innovate technologically, and protect ourselves and our environment from excessive collateral damage. Our traditional thinking has shown itself not to be up to this task. If we are serious about achieving the right balance of progress and protection, we need help. Suppose your friend wanted to make your favorite meal for you, and you knew he was clueless about cooking. To improve the chances of enjoying a delicious feast, while minimizing wasted ingredients, damaged utensils, and hurt feelings, you might gently urge him to use a recipe. Reasoning about risk and benefit is similar. Only we call the recipe structured decision making.

One recipe for making decisions and forming policies about technological and environmental issues has become popular. This decision recipe is known by the catchy name of the precautionary principle. This principle falls far short of encouraging us to make decisions that are objective, comprehensive, and balanced. It falls so far short that cynics might wonder whether it was devised specifically to stifle technological advance and productive activity.

Regulators find the principle attractive because it provides a seemingly clear procedure with a bias towards the exercise of regulatory power. The precautionary principle’s characteristics suit it well for the political arena in which regulators, hardcore environmentalists, and anti-technology activists pursue their agendas. Their interests, and the nature of the principle, practically guarantee that no consideration is given to an alternate approach: making decision making less political and more open to other methods. With rare exceptions, political decisions ensure that for every winner there is a loser. That’s because political decisions are imposed by the winners on the losers. Decisions made outside the political process typically enable all sides to win because there are multiple possible outcomes rather than just one.

Some well-intended people who genuinely do share the goal of a healthy balance of progress with protection have attempted to salvage the precautionary principle. In the absence of a more appealing alternative, they hope to reframe it and hedge it so that it does the job.

Before setting out a positive alternative for making decisions about the deployment or restriction of new (or existing) technologies, I want to make completely clear why I consider the precautionary principle not only inadequate but dangerous.