Sunday, August 22, 2010

Perils, Part 4: The Paradox of the Precautionary Principle

The rotavirus case illustrates what I call the paradox of the precautionary principle: The principle endangers us by trying too hard to safeguard us. It tries “too hard” by being obsessively preoccupied with a single value—safety. By focusing us on safety to an excessive degree, the principle distracts policymakers and the public from other dangers. The more confident we are in the principle, and the more enthusiastically we apply it, the greater the hazard to our health and our standard of living. The principle ends up causing harm by diverting attention, financial resources, public health resources, time, and research effort from more urgent and weighty risks.

Adding insult to injury, in practice this rule assumes that new prohibitions or regulations will result in no harm to human health or the environment. Unfortunately, well-intended interventions into complex systems invariably have unintended consequences. Only by closely examining possible ramifications can we determine whether or not the intervention is likely to make us better off. By single-mindedly enforcing the tyranny of safety, this principle can only distract decision makers from such an examination.

Our choices of modes of transport provide a simple example of the paradox of the precautionary principle. What image comes to mind when we hear the words “airplane crash”? A terrifying plunge, an enormous smash, hundreds of dead bodies, billows of black smoke. On hearing news of a spectacular plane crash, some travelers choose to go by car instead. (A smaller number of people won’t take a plane at any time, though they rarely claim this to be a calmly rational choice.) The same effect has been observed in the case of train accidents. Plane and train crashes are dramatic events that impress themselves on our minds, encouraging us to believe that those modes of travel are intolerably risky. The facts are otherwise—as most of us know, if only vaguely.

Consider that in 2000, the world’s commercial jet airlines suffered only 20 fatal accidents, yet they carried 1.09 billion people on 18 million flights. Even more remarkable, if you add up all the people who died in commercial airplane accidents in America over the last 60 years, the number you will arrive at is smaller than the number of people killed in U.S. car accidents in any three-month period. These totals are telling, but the most relevant figures compare fatalities per unit of distance traveled. By that measure, in the United States you will be 22 times safer traveling by commercial airline than by car. (This conclusion is from a 1993-95 study by the U.S. National Safety Council.)

Air travel has become much safer since 1950, bringing down the number of fatal accidents per million aircraft miles flown to 0.0005. To put your risk of death into perspective, consider that in 1997 commercial airlines made 8,157,000 departures, carried 598,895,000 passengers, and endured only 3 fatal accidents. While switching from road to air reduces your risk 22 times, switching from train to air would reduce your risk 12-fold.
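As a sanity check, the 1997 figures quoted above can be turned into a per-flight risk with a few lines of arithmetic. This is only a sketch using the numbers already cited; it treats every departure as equally risky, which is of course a simplification:

```python
# Back-of-envelope arithmetic using the 1997 figures quoted above.
fatal_accidents = 3
departures = 8_157_000

# Chance that any given flight ends in a fatal accident.
risk_per_flight = fatal_accidents / departures
print(f"fatal accidents per departure: {risk_per_flight:.2e}")  # about 3.68e-07

# Equivalently: how many departures, on average, between fatal accidents?
flights_per_fatal_accident = departures // fatal_accidents
print(f"departures per fatal accident: {flights_per_fatal_accident:,}")  # 2,719,000
```

In other words, a randomly chosen commercial flight that year had roughly a one-in-2.7-million chance of ending in a fatal accident.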

You’re considerably more likely to die while engaging in recreational boating or biking than while traveling by air. Given that you need to travel, by taking a precautionary approach that leads you to avoid the possibility of a spectacular air crash, you would be exposing yourself to a greater risk of injury or death. Be cautious with precaution!

In comparing the fatality rates of air travel and road travel, we have been comparing like with like. The paradox of the precautionary principle becomes even more dangerous when a preoccupation with one value, such as safety, distracts us from other values. We may be able to improve an outcome according to one measure, but it will often come at the cost of worsening an outcome according to a different measure. In that case we will face choices that the precautionary principle is poorly equipped to handle. To see this, consider the Kyoto protocol.

The Kyoto protocol is an international precautionary commitment to reduce the emission of gases suspected of causing global warming. Supporters of Kyoto typically favor forcing down emissions of these gases by raising fuel economy standards for cars and trucks. Enforcing similar fuel economy standards in the United States has pushed automakers to come out with smaller, lighter, more vulnerable cars. According to a study by the Harvard School of Public Health, this results in an additional 2,000-4,000 highway deaths per year. In this case, we are buying some climate remediation at the cost of many lives. We are improving outcomes according to one measure but worsening them according to another measure.

Regulation of economic activity—whether precautionary in origin or not—involves a more general and well-established tradeoff. Known as the “income effect”, this tradeoff is shaped by a correlation between wealth and health. Implementing and complying with regulations imposes costs. Political scientist Aaron Wildavsky observed that poorer nations tend to have higher mortality rates than richer ones. This correlation is no coincidence. Wealthier people can eat more varied and nutritious diets, buy better health care, and reduce sources of stress (such as excessively long working hours) and thereby reduce consequences such as heart attacks, hypertension, depression, and suicide.

In counting up the anticipated benefits of regulations, we should therefore also consider what they may cost us—or cost poorer people in countries affected by international regulations. Some regulations will amount to a lousy deal. Although precise numbers are hard to pin down, a conservative estimate from the research suggests that the income effect leads to one additional death for every $7.25 million of regulatory costs. Many regulations impose costs in the tens of billions of dollars annually. That implies thousands of additional deaths per year. Safety is not free. Regulatory overkill can be just that.
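The arithmetic behind "thousands of additional deaths" is simple. In the sketch below, the $20 billion figure is an assumed illustration standing in for "tens of billions"; only the $7.25 million estimate comes from the research cited above:

```python
# Income-effect arithmetic: one statistical death per $7.25 million
# of regulatory cost (the conservative estimate cited above).
cost_per_induced_death = 7.25e6  # dollars

# Hypothetical annual regulatory cost in the "tens of billions" range.
annual_regulatory_cost = 20e9  # $20 billion -- an assumed illustration

induced_deaths = annual_regulatory_cost / cost_per_induced_death
print(f"implied additional deaths per year: {induced_deaths:,.0f}")  # about 2,759
```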

If only activists would appreciate this point as they move from opposing chlorine to opposing “endocrine disruptors” and phthalates (used to soften plastics). The story of the antichlorine campaign does not offer much hope. The price of precaution can be exorbitant, especially for developing countries. Toward the end of the 1980s, environmental activists had focused their attention on purging society of chlorinated compounds. As part of this campaign, activists spread disinformation in all directions. They worked especially hard to persuade water authorities in numerous countries that allowing chlorination of drinking water amounted to giving people cancer. In Peru, they succeeded. The consequences were dire.

Finding themselves in a budget crisis, Peruvian government officials saw in the cancer-risk claims a handy excuse to stop chlorinating the drinking water in many parts of the country. They could cover their backs by pointing to official reports from the US Environmental Protection Agency that had alleged that drinking chlorinated water was linked to elevated cancer risks. (The EPA later admitted that this connection was not “scientifically supportable.”) Soon afterwards, cholera—a disease that had been wiped out in Peru—returned in the epidemic of 1991-96. Some 800,000 people fell ill and 6,000 died in Peru. Then it spread to Colombia, Brazil, Chile, and Guatemala. Around 1.3 million people were afflicted, and 11,000 or more were killed by the disease.

The drinking water system had been deteriorating before this, so we cannot place the entire blame on the single decision to stop chlorinating. But chlorinating the water would probably have prevented the epidemic from getting started. Absence of the treatment certainly made the situation far worse. The high price paid for that precautionary measure is not unusual or surprising in poorer countries. The elimination of DDT further illustrates the point.

DDT ended the terrible scourge of malaria in some third-world countries by the late 20th century by wiping out malaria-carrying mosquitoes. But environmentalists targeted the pesticide, claiming that it might harm some birds and might possibly cause cancer. Malaria control efforts around the world quickly fell apart. This devastating natural affliction is rapidly regaining strength in earth’s tropical regions. Malaria epidemics in 2000 alone killed over a million people and sickened 300 million. Once again, those least able to bear it were the ones to pay the high price for precautionary tunnel vision.

Have aggressive environmental activists learned from these experiences and changed course? Hardly. “Green at any price” seems to be their motto as they mutter speculations of doom while trying to strangle the technology of gene-spliced (or “genetically modified” or GM) crops. In this case, there may be hope. Late in 2004, both China and Britain looked set to approve gene-spliced crops, despite well-organized and well-funded opponents—opponents who don’t hesitate to destroy crops being grown for research. And in 2005, the FDA began loosening its restrictions on bioengineered rice.

If these countries open the way for this vital part of agricultural biotechnology, it will mean a reversal of years of public policy that has restricted and raised the costs of research and development. The result should be to spur innovation and to renew food productivity growth in the developing countries, ushering in a second Green Revolution.

These are just a few of the many cases illustrating the dangers of the precautionary principle. Environmental and technological activism that wields the precautionary principle, whether explicitly or implicitly, raises clear threats of harm to human health and well-being. If we apply the principle to itself, we arrive at the corollary to the Paradox of the Precautionary Principle:

According to the principle, since the principle itself is dangerous, we should take precautionary measures to prevent the use of the precautionary principle.

The severity of the precautionary principle’s threat certainly does not imply that we should take no actions to safeguard human health or the environment. Nor does it imply that we must achieve full scientific certainty (or its nearest real-world equivalent) before taking action. It does imply that we should keep our attention focused on established and highly probable risks, rather than on hypothetical and inflated risks. It also implies an obligation to assess the likely costs of enforcing precautionary restrictions on human activities. Clearly, we need a better way to assess potential threats to humans and the environment—and the consequences of our responses. In order to develop a suitable alternative, we first need to appreciate the full extent of flaws in the precautionary approach.

Perils, Part 2: Pervasive Precaution

The precautionary principle, as defined by Soren Holm and John Harris in Nature magazine in 1999, asserts:
When an activity raises threats of serious or irreversible harm to human health or the environment, precautionary measures that prevent the possibility of harm shall be taken even if the causal link between the activity and the possible harm has not been proven or the causal link is weak and the harm is unlikely to occur.

The version from the Wingspread Statement, 1998:
“When an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”

The precautionary principle has taken many forms, but these definitions capture the essence of most of them. Starting life as the German Vorsorgeprinzip (literally “precaution principle”), this rule assumed a role in institutional decision making in the North Sea conferences from 1984 to 1995, and in the deliberations leading to the Rio Declaration of 1992, the UN Framework Climate Convention of 1992, and the Kyoto Protocol. Formulations of the principle do vary in some important ways. The fuzziness resulting from this lack of a standard definition causes trouble, but is also the very characteristic that appeals to advocates of technological and environmental regulation. They have come to favor the precautionary principle—in whatever form best helps them maneuver policies so as to further their goals.

In its most modest form, the principle urges us not to wait for scientific certainty before taking precautionary measures. Considered out of context, that policy is entirely reasonable. We rarely achieve the high standard of scientific certainty about the effects of our activities. But this fact applies just as much to actions in the form of restrictions, regulations, and prohibitions as to innovative and productive activities. By recognizing the frequent necessity to act or refrain from acting in conditions of uncertainty, we are not thereby committed to favoring a policy of restrictive precautionary measures. This message about certainty and action therefore tells us little. And the rest of the principle provides no further guidance about choosing under uncertainty.

Its roots in the German Vorsorgeprinzip mean that the common use of the principle goes well beyond urging preventative or prohibitory action based on inconclusive evidence. An attribute more central to the principle is the judgment of “better safe than sorry”. In other words, err on the side of caution. While this sentiment makes for a perfectly sound proverb, it provides a treacherous foundation for a principle to guide assessments of technological and environmental impacts. As a proverb, “better safe than sorry” is counterbalanced by opposing—but equally valid—proverbs, such as “he who hesitates is lost”, or “make hay while the sun shines.”

Precautionary measures typically impose costs, burdens, and their own harms. Administering precautionary actions becomes especially dangerous when the principle says, or is interpreted as saying, that those actions are justified and required “if any possibility” of harm exists. In this (typical) interpretation, it becomes ridiculously easy to rationalize restrictive measures in the absence of any real evidence. Clearly, this pushes the principle far beyond dismissing the need for fully established cause-effect relationships.

Statements of the precautionary principle vary also in whether or not they specify that the principle deals with threats of serious or irreversible harm or damage. Problems arise with the usage of “serious” and “irreversible”, but at least this clause limits the application of the principle. More demanding versions of the principle, such as the widely-quoted Wingspread Statement, call for precautionary measures to come into play even when the possible harm is not serious or irreversible.

Statements of the precautionary principle may include a cost-effectiveness clause. This happens all too rarely in practice, perhaps because most advocates of the principle aim to stop the targeted technology or activity, not to maximize welfare. The Rio Declaration of 1992 stands out by incorporating such a clause:

“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures.”

Some worthy attempts have been made to improve the principle by adding to it. In 2001, the European Environment Agency issued a document conveying “Late Lessons from Early Warnings”, accompanied by twelve guidelines. These included some excellent recommendations, such as “Identify and reduce interdisciplinary obstacles to learning”, and “Systematically scrutinise the claimed justifications and benefits alongside the potential risks.” Unfortunately, advocates of the principle have not paid attention to these suggestions, and many of them co-exist uncomfortably with the main thrust of the principle. Another noteworthy attempt at amelioration is a May 2000 Science paper titled “Science and the Precautionary Principle”. This set out five “Guidelines for Application of the Precautionary Principle”: Proportionality, nondiscrimination, consistency, cost-benefit examination, and examination of scientific developments.

With or without patches, the deeply flawed precautionary principle can cause trouble. It already has. The pervasive, profoundly restrictive force of the principle is all the more remarkable given its relative obscurity, especially outside Europe. Even among widely read people, a large majority do not recall ever having heard the term—although they have certainly heard “better safe than sorry”. Yet the dominant influence of the principle can be found everywhere.

Consider, for a start, the central role of the precautionary principle in shaping environmental policy in the European Union. The foundational Maastricht Treaty on the European Union states that “Community policy on the environment…shall be based on the precautionary principle and on the principles that preventive actions should be taken, that environmental damage should as a priority be rectified at source and that the polluter should pay.” The United Nations joined the precautionary bandwagon when the UN Biosafety Protocol led the way for other international treaties by incorporating the precautionary principle. Some other examples of the principle explicitly at work:

  • Protocol on Substances that Deplete the Ozone Layer, Sept. 16, 1987, 26 ILM 1541.
  • Second North Sea Declaration.
  • Ministerial Declaration Calling for Reduction of Pollution, Nov. 25, 1987, 27 ILM 835.
  • United Nations Environment Program.
  • Nordic Council’s Conference.
  • Nordic Council’s International Conference on Pollution of the Seas: Final Document Agreed to Oct. 18, 1989, in Nordic Action Plan on Pollution of the Seas, 99 app. V (1990).
  • PARCOM Recommendation 89/1 - 22 June, 1989.
  • The Contracting Parties to the Paris Convention for the Prevention of Marine Pollution from Land-Based Sources.
  • Third North Sea Conference.
  • Bergen Declaration on Sustainable Development.
  • Second World Climate Conference.
  • Bamako Convention on Transboundary Hazardous Waste into Africa.
  • OECD Council Recommendation C(90)164 on Integrated Pollution Prevention and Control, January 1991.
  • Helsinki Convention on the Protection and Use of Transboundary Watercourses and International Lakes.
  • The Rio Declaration on Environment and Development, June 1992.
  • Climate Change Conference (Framework Convention on Climate Change, May 9, 1992).
  • UNCED Text on Ocean Protection.
  • Energy Charter Treaty.

The influence of the principle has been felt in South America too. Transgenic crops have been prohibited throughout Brazil since 1998, when a judge imposed the ban based on an interpretation of the version of the principle included in the Rio Declaration on Environment and Development—a statement coming out of the 1992 Earth Summit held in Brazil.

The precautionary principle is followed even more widely than it might seem from official mentions, especially in the United States. We often find the principle being applied without disclosure or explicit acknowledgment. Perhaps this happens because the principle ably captures common intuitions that grow out of fear fed by lack of knowledge. Our first reaction to an apparent threat is usually: Stop it now! We may disregard the costs of stopping the threat. Our sense of urgency may blind us to considering whether we might have better options at our disposal.

When the United Kingdom faced the appalling, if over-inflated, menace of bovine spongiform encephalopathy (BSE), people quickly demanded that authorities require proof of virtually zero risk for any substance that might have BSE contamination. Professor James Bridges, chair of the European Commission’s toxicology committee, referred to this “extreme precautionary approach in the context of other food risks” and noted it had “involved enormous costs”. Of course, if such proof could be provided (which it surely cannot) and at a low cost, the demand would be reasonable. But the actual reaction lacks any sense of proportionality and objective risk assessment.

In the United States, the President’s Council on Sustainable Development affirmed the precautionary principle, without using the term explicitly, in its statement:
There are certain beliefs that we as Council members share that underlie all of our agreements. We believe: (number 12) even in the face of scientific uncertainty, society should take reasonable actions to avert risks where the potential harm to human health or the environment is thought to be serious or irreparable.

The United States has made extensive use of precautionary prevention—sometimes quite sensibly—even if no mention is made of a principle. Sometimes precautionary prevention has been applied earlier in the US than in Europe. The European Environment Agency publication “Late Lessons from Early Warnings” notes four examples: The Delaney Clause in the Food, Drug and Cosmetics Act, 1957–96, which banned animal carcinogens from the human food chain; a ban on the use of scrapie-infected sheep and goat meat in the animal and human food chain in the early 1970s; a ban on the use of chlorofluorocarbons (CFCs) in aerosols in 1977, several years before similar action in most of Europe; and a ban on the use of DES as a growth promoter in beef, 1972–79, nearly 10 years before the EU ban in 1987.

The most formidable manifestations of the precautionary principle in the US may be found in the regulatory practices of the FDA (Food and Drug Administration). It’s not the only US government agency applying the principle, usually without naming it—and without calculating its costs and benefits. The EPA (Environmental Protection Agency) bound itself to the principle in developing and enforcing regulations on synthetic chemicals. US regulators have taken an even more strongly precautionary approach than Europe to some kinds of risks, such as nuclear power, lead in gasoline, and the approval of new medicines—which takes us back to the FDA.

Precautionary FDA regulation may have the most drastic impact on human well-being of any mentioned so far. The FDA has successfully sought to extend its powers over the decades, first solidifying its authority to determine when a new medication could be considered safe, and later to determine when it could be considered effective. If the agency were using a purely rational approach to regulation—one that accurately aimed at maximizing human health—it would fully account for both the risks of approving a new medicine that might have damaging side-effects, and the dangers of withholding approval or delaying approval to a potentially beneficial medicine. In practice, this is far from the way the FDA operates.

In reality, the FDA consistently follows a path close to the one the precautionary principle would prescribe: It puts all its energies into minimizing the risk of approving a new drug that later turns out to cause harm. Very little energy goes into considering the potential benefits from making the new treatment available. Regulators can make mistakes on both sides of this balance.

If they approve a drug that turns out to be harmful, they have made a “Type I error”, as it is called in risk analysis. They might also make a Type II error by making a beneficial medication unavailable—by delaying it, refusing to consider it, failing to approve it, or wrongly withdrawing it from the market.

Both types of error are bad for the public. For the regulators, the risk of Type I errors looks much more frightening than Type II errors. If they make a Type II mistake and prevent a beneficial treatment coming to market, few people will ever be aware of what has been lost. Probably the media will be silent, and Congress will join them. Regulators have little incentive to avoid Type II errors. But what of the prospect of making a Type I error? This is a regulator’s worst nightmare.

Suppose you are the regulator, and you approve a promising new drug that turns out to be another Thalidomide, causing horrible deformations in newborns. Or, consider what it felt like to be one of the regulators who approved the swine flu vaccine in 1976. The vaccine did its job, but turned out to cause temporary paralysis in some patients. Such a Type I error is immediately obvious and attains a high profile as lawyers, the media, the public, and eager politicians pile on, screaming at you with rage and blame. We’ve seen this more recently in the cases of Vioxx and Celebrex.

You will hardly be a happy official, and your career may be destroyed. You approved the drug according to your best judgment, but your error is not forgiven or forgotten. Given these asymmetrical incentives, regulators naturally tend to err far on the side of being overly cautious. They go to great lengths to avoid Type I errors—a factor that has raised the cost of new drug development and approval into the hundreds of millions of dollars and added years to the process. (The only effective countervailing force in recent history has been the focused pressure of activists to speed approval of AIDS drugs.)
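The asymmetry can be made vivid with a toy model. Every number below is an assumption chosen purely for illustration; the point is only that equal harms to the public need not translate into equal pressures on the regulator:

```python
# Toy model (all values assumed) of why Type I errors loom larger for
# regulators than Type II errors, even when the public harm is identical.
lives_lost_type1 = 1_000  # deaths if a harmful drug is approved
lives_lost_type2 = 1_000  # deaths if a beneficial drug is delayed

# Visibility: how likely each error is to be traced to the regulator
# and publicly blamed (assumed values).
blame_probability_type1 = 0.9   # highly visible: lawsuits, hearings, headlines
blame_probability_type2 = 0.01  # nearly invisible: no identifiable victims

# From the public's standpoint the two errors are equally costly...
assert lives_lost_type1 == lives_lost_type2
# ...but from the regulator's standpoint they are not.
career_risk_type1 = lives_lost_type1 * blame_probability_type1
career_risk_type2 = lives_lost_type2 * blame_probability_type2
print(f"asymmetry toward blocking drugs: {career_risk_type1 / career_risk_type2:.0f}x")
```

Under these illustrative assumptions the regulator faces ninety times more personal exposure from approving a bad drug than from blocking a good one, so systematic over-caution is exactly what the incentives predict.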

Regulators, then, will not make an objective, comprehensive, balanced assessment of both Type I and II risks. The overall outcome is a regulatory scheme driven by incentives that bias it strongly against new products and innovation. Some of the regulators themselves have recognized and publicly expressed these uneven pressures. Former FDA Commissioner Alexander Schmidt put it this way:

In all our FDA history, we are unable to find a single instance where a Congressional committee investigated the failure of FDA to approve a new drug. But, the times when hearings have been held to criticize our approval of a new drug have been so frequent that we have not been able to count them. The message to FDA staff could not be clearer. Whenever a controversy over a new drug is resolved by approval of the drug, the agency and the individuals involved likely will be investigated. Whenever such a drug is disapproved, no inquiry will be made. The Congressional pressure for negative action is, therefore, intense. And it seems to be ever increasing.

The writings of well-known prophets of gloom provide further evidence of the pervasiveness of precautionary thinking. Consider Bill Joy’s much-discussed essay in Wired, “Why the Future Doesn’t Need Us.” Joy proposed that we apply a precautionary approach to a limited number of technologies—but technologies with a powerful reach and impact. He labeled the inventions that frightened him as “GNR”, standing for genetic engineering, nanotechnology, and robotics. Joy focused on these three areas, but his fears apply to any form of technology endowed with the power of self-replication. In his manifesto, he warned of what he saw as immense new threats:

Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species… Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?
I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Just like other advocates of precautionary measures, Joy concluded with a call for restricting or relinquishing technology. Going further than many (at least in their public statements), Joy also called for “limiting our pursuit of certain kinds of knowledge.” He also mentioned that he saw many activists joining him as “the voices for caution and relinquishment…” I will return to Joy’s proposed precautionary measures and their effects near the end of the chapter. In a later chapter, I will consider the views of Leon Kass, Francis Fukuyama, and Michael Sandel, all of whom take precautionary approaches to enhancement technologies, which include Joy’s GNR trio.

The Perils of Precaution

This is the first in a series of entries drawn from chapter 2 of my book-in-progress, The Proactionary Principle. I will post a new section of that chapter every day or two. Following that will be sections from chapter 4, The Proactionary Principle (my alternative to the precautionary principle).


When you hear the word “mother”, what comes to mind? If you are like me, mother stands for comfort, for love and, above all, for protecting and nurturing the young. Those lurid and repellent news stories of mothers who murder their children revolt and fascinate us precisely because they violate our expectations so brutally. We expect mothers to watch over their offspring, to safeguard them, to take precautionary measures. When we see mothers filling this age-old role, all feels right with the world.

But what if a mother is killing with kindness? What if, by protecting her child from a perceived danger, she is opening the door to a greater danger? What if the overprotective mother encourages thousands of other mothers to follow her example? How do we feel then? Her excessive—or misdirected—precaution now puts in peril a multitude of innocents.

In the process of writing this chapter, I came across a website that claims to reveal the truth about vaccines. The mother who runs this site had her son vaccinated with the usual treatments, starting at two months of age. At fifteen months, one week after being vaccinated for several dangerous conditions, the boy started having seizures. No definite connection was established with the vaccine, and reactions typically occur more quickly. This unfortunate woman is now devoting herself to broadcasting dire warnings about the vaccine menace. To the extent that the conviction of her personal voice succeeds in influencing others, she will be responsible for greatly raising the risk of serious illness in numerous children.

The mother had read a fact sheet explaining that a vaccine can cause serious allergic reactions, and induce seizures in 6 out of 10,000 cases. She writes that, “like so many of us, I never thought it meant my child.” This comment indicates a failing in the thinking of this mother—a failing that sparked such appalled outrage in me that little room was left for sympathy. First, she read about the small odds of an adverse reaction but ignored it—because it didn’t mean her child. (Why not? Because believing something comfortable was more important to her than seeing reality?) Then, after the misfortune of her son being one of those suffering adverse reactions (assuming the vaccine was the cause), she ignored the dangers for which the vaccine was prescribed and set about encouraging other women to refuse to vaccinate their children.

This aggressive ignorance typifies the danger of allowing caution without knowledge, and fear without objectivity, to drive our thinking and decision making. When we overly focus on avoiding specific dangers—or what we perceive to be dangers—we narrow our awareness, constrain our thinking, and distort our decisions.

Many factors conspire to warp our reasoning about risks and benefits as individuals. The bad news is that such foolish thinking has been institutionalized and turned into a principle. Zealous pursuit of precaution has been enshrined in the “precautionary principle”. Regulators, negotiators, and activists refer to and defer to this principle when considering possible restrictions on productive activity and technological innovation.

In this chapter, I aim to explain how the precautionary principle, and the mindset that underlies it, threaten our well-being and our future. The extropic advance of our civilization depends on keeping caution in perspective. We do need a healthy dose of caution, but caution must take its place as one value among many, not as the sole, all-powerful rule for making decisions about what we should and should not do.

I will show how the single-minded pursuit of precaution has the perverse effect of raising our risks. Then I’ll point out many ways in which the principle fails us as a guide to forming a future with care and courage. That will set the stage for an alternative principle—one explicitly designed for the task.

Our Endangered Future
Continued technological innovation and advance are essential to our progress as a species and as individuals, and to the survival of our core freedoms. Unfortunately, human minds do not find it natural or easy to reason accurately about risks arising from complex circumstances. As a result, technological progress is being threatened by fundamentalists of all kinds, anti-humanists, Luddites, primitivists, regulators, and the distorted perceptions to which we are all vulnerable. A clear case of this shortcoming is our reasoning about the introduction of new technologies and the balance of potential benefits and harms that result.

Most of us want to do two things at the same time: Protect our freedom to innovate technologically, and protect ourselves and our environment from excessive collateral damage. Our traditional thinking has shown itself not to be up to this task. If we are serious about achieving the right balance of progress and protection, we need help. Suppose your friend wanted to make your favorite meal for you, and you knew he was clueless about cooking. To improve the chances of enjoying a delicious feast, while minimizing wasted ingredients, damaged utensils, and hurt feelings, you might gently urge him to use a recipe. Reasoning about risk and benefit is similar. Only we call the recipe structured decision making.

One recipe for making decisions and forming policies about technological and environmental issues has become popular. This decision recipe is known by the catchy name of the precautionary principle. This principle falls far short of encouraging us to make decisions that are objective, comprehensive, and balanced. It falls so far short that cynics might wonder whether it was devised specifically to stifle technological advance and productive activity.

Regulators find the principle attractive because it provides a seemingly clear procedure with a bias towards the exercise of regulatory power. The precautionary principle’s characteristics suit it well for the political arena in which regulators, hardcore environmentalists, and anti-technology activists pursue their agendas. Their interests, and the nature of the principle, practically guarantee that no consideration is given to an alternate approach: making decision making less political and more open to other methods. With rare exceptions, political decisions ensure that for every winner there is a loser. That’s because political decisions are imposed by the winners on the losers. Decisions made outside the political process typically enable all sides to win because there are multiple outcomes rather than just one.

Some well-intended people who genuinely do share the goal of a healthy balance of progress with protection have attempted to salvage the precautionary principle. In the absence of a more appealing alternative, they hope to reframe it and hedge it so that it does the job.

Before setting out a positive alternative for making decisions about the deployment or restriction of new (or existing) technologies, I want to make completely clear why I consider the precautionary principle not only inadequate but dangerous.

Saturday, November 7, 2009

The Myth of Stagnation

The philosopher Bernard Williams once wrote a piece on “The Tedium of Immortality”. Although I have long thought his view reeked of sour grapes, he expressed similar sentiments to those I’ve heard many times over the years. “The Myth of Stagnation” is my rebuttal to those sentiments.

This is a slightly-edited excerpt from a chapter (“The Psychology of Forever”) I wrote back in 1996, but which has never been published. Although I might write some of it a little differently today, I haven’t changed my views about any of the ideas expressed here. You will find this essay along with related thoughts as a chapter in the forthcoming book, Death and Anti-Death Volume 7, edited by Charles Tandy.


"Growing old is no more than a bad habit which a busy man has no time to form."
André Maurois, The Art of Living, “The Art of Growing Old” (1940).

Life is good, some will grant. Life offers numerous paths and possibilities. But isn’t life good only because it is limited in length? If we lived indefinitely, potentially forever, wouldn’t we eventually stagnate, lose interest, become bored?

Certainly this belief has been pushed at us for centuries through stories, from Jonathan Swift’s Struldbruggs in Gulliver’s Travels (1726), Eugène Sue’s The Wandering Jew (1844-5), and Karel Čapek’s The Makropoulos Secret (1925), to more recent tales as presented in John Boorman’s 1974 movie Zardoz.

The world of Zardoz, set in the distant future, has been divided into two realms: the Vortex, where dwell the immortals, and the Outlands, home to the short-lived Brutals. The decadent, impotent immortals have lost their vitality. An especially intelligent Brutal, played by Sean Connery, invades the Vortex, introducing chaos, destroying their society, and returning the immortals to a natural state. That is: dead. Even in the heroic Highlander movie, the grand prize for the sole surviving immortal (“There can be only one!”) is wisdom-with-death.

I suspect this cultural tendency to see indefinite lifespan or potential immortality as a curse serves as a psychological defense against the historically undeniable fact of human mortality. So long as mortality was an unalterable part of the human condition, it was understandable if we fooled ourselves into believing that physical immortality would be dreadful. I am suggesting that mortality no longer need be accepted as inevitable. If indefinitely extended longevity is achievable, continuing to cling to the immortality-as-curse myth can only destroy us.

To begin uncovering the errors fueling opposition to extreme longevity, consider first the distinction between seeking immortality and seeking indefinite lifespan. Suppose we were to grant that we might become bored of life, whether it be centuries, millennia, or eons from now. We might even grant that boredom was inevitable given a sufficiently extended life. Granting these suppositions for now, what follows? Only that literal immortality—living forever—would not be desirable. But forever is infinitely longer than a billion years. If there were, in principle, some limit to the length of a stimulating, challenging, rewarding life, we could not know where it lies until we reached it.

If immortality should not be a goal, indefinitely long lifespan can be. If, one day, we find ourselves drained, if we can think of nothing more to do and our current activities seem pointless, we will have the option of ending our lives. Alternatively, we might change ourselves so radically that, although someone continues to live, it’s unclear that it’s us. But we cannot know in advance when we will reach that point. To throw away what may be a vastly long stretch of joyful living on the basis that forever must bring boredom and stagnation would be a terrible error.

Stagnation sets in when motion ceases. Motion, change, and growth form the core of living. We will stagnate if we either run out of the energy to stay in the flow of life, or if we exhaust all the possibilities. I suggest that while some people run out of energy at any age, doing so is not inevitable. I further suggest that life’s possibilities are literally unbounded. Certainly we can see this to be true for millennia to come.

Theoretical arguments from physics, cosmology, and computer science indicate that even true immortality and infinite variety cannot be ruled out. First, then, why do many people run out of energy and settle into a stagnant decline? If we survey the diversity of personalities around us, one thing will become clear: People get bored because they become boring.

Sadly, many people don’t wait for old age to become boring. The prospect of extended longevity repels them because even their current lives are dull. What makes them so weary? They do it to themselves, in several ways.



Wednesday, September 9, 2009

Why Catholics Should Support the Transhumanist Goal of Extended Life


(A talk being translated into Italian and delivered to a Catholic conference on "The idea of earthly immortality: a new challenge for theology", September 2009).

Intellectual honesty is extremely important to me. Therefore, I must say at the beginning that I am not religious. As the founder of modern transhumanism, I am a rationalist and do not see good reason to believe in the existence of a being who is omnipotent, omniscient, and perfectly good. At the same time, I have studied and understand religion in general and the Catholic faith in particular. I have studied and taught philosophy of religion for many years, including at Mount St. Mary’s in Brentwood, California, and have engaged in discussions with many Catholic philosophy students. In addition, I have enormous respect for St. Thomas Aquinas—undoubtedly the greatest of all Catholic theologians.

For Aquinas, faith and reason are compatible and should lead to the same answers, so long as we use our God-given reason carefully. This is a core part of Scholastic philosophy and its blending of revealed wisdom with Aristotle’s philosophy. Aristotle appeals to me for several reasons, the main one for our current purpose being his virtue ethics. It is from the perspective of a virtue ethics of human flourishing that I will argue that Catholics should adopt a generally favorable attitude toward transhumanism and, especially, the pursuit of greatly extended maximum life spans.

Catholic theologians and other thinkers have long been strong defenders of the sacredness of life. They have opposed terminating the life of fetuses and have resisted the resort to suicide. The core transhumanist goal of extended life in the physical realm is thoroughly consistent with this pro-life stance. I prefer the term “extended life” (or “indefinite life span” or “agelessness”) to the term “physical immortality”. I am far from sure that genuine immortality—living literally forever—is possible. Even if we live until the far-future decay or implosion of the universe, that falls infinitely short of forever. A trillion years is but an infinitesimal fraction of eternity.

Even if we succeed in fully understanding and conquering the aging process—as I believe we probably will in the coming decades—our life spans will continue to be limited by factors such as accident, murder, and wars. In a world without aging, we are likely to focus on continuing to reduce the death rate. But, for any period of time—whether a year, a century, or a millennium—we will face a certain probability of death. By “death”, I mean a permanent physical death; loss of personal continuity beyond the point where it can be restored by the medical science and practice of the day.
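The probabilistic point above can be sketched with a toy model. This is my own illustration, not part of the talk: it assumes a constant, independent annual risk of permanent death from non-aging causes, and the 1-in-1,000 annual risk figure is purely hypothetical.

```python
import math

# Toy model: a fixed, independent annual probability of permanent death
# from non-aging causes (accident, murder, war). The 0.001 figure below
# is a hypothetical, chosen only for illustration.
def survival_probability(annual_risk: float, years: int) -> float:
    """Chance of surviving `years` given a fixed annual risk of death."""
    return (1.0 - annual_risk) ** years

def median_lifespan(annual_risk: float) -> float:
    """Years until the survival probability falls to 50%."""
    return math.log(0.5) / math.log(1.0 - annual_risk)

p = 0.001  # hypothetical 1-in-1,000 annual risk
print(round(survival_probability(p, 100), 3))  # 0.905
print(round(median_lifespan(p)))               # 693
```

Even an ageless person in this model faces death eventually, but with a 1-in-1,000 annual risk, half of an ageless population would still be alive after roughly 693 years: indefinite life span without literal immortality.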

Literal physical immortality, then, is probably not an option. Agelessness or indefinite life may well be. A substantial and growing number of gerontologists see this as a realistic goal. In part, it’s for this reason that I say that immortality is not truly the goal for most of us as transhumanists. The goal is indefinite life spans. We aim to continually improve ourselves and enhance our capabilities. That makes degenerative aging and involuntary death our mortal enemies. We want to live for now and for the indefinite future. But we cannot know whether we will want to continue living far in the future. Perhaps, after centuries or millennia, we will choose to restore the aging process and allow our physical lives to reach an end. (I believe Catholic moral philosophy may not see this as suicide, but as choosing to move on to the afterlife.)

Transhumanists seek radically extended lives as part of a philosophy that affirms continuous improvement of ourselves, not only intellectually and emotionally but also morally and what might be called spiritually. This goal seems consistent with Catholic views about virtue and the duties of human beings to serve and glorify God. This would not be true if it were possible to point to passages in the Bible—especially in the literal text of the New Testament—that declared longer lives to be contrary to God’s will or to His plans for us. In fact, there is nothing in the Bible that rejects extended physical lives. The Bible appears to be neutral on the topic.

We might even interpret it to have a favorable attitude if we focus on the stated life spans of many early people in the Bible. A major effort to combat physical aging is run by the Methuselah Foundation—named after a man reputed to have lived for 969 years, narrowly exceeding the life spans of several others, including Jared (who lived to 962) and Noah (who lived to 950). The longest-lived person for whom we have reliable records in modern history was Jeanne Calment, who died at the age of 122 years, 164 days. The Bible mentions no fewer than 33 people who lived beyond the age of 123. Whether we take those ages literally or metaphorically, the Bible seems to suggest that our current life spans are not as long as those of people clearly favored by God. And why should a life span of 78 be any more privileged and accepted than historical life spans of 40 or even 30?

In other words, the average longevity of human beings has varied greatly over time. There is no reason to accept the current state of affairs as uniquely right or divinely commanded. The Catholic Church has no objection to the historical progression of science and technology that has gradually reduced the death rate and extended our lives. In fact, Catholics naturally stand behind efforts to alleviate the suffering of disease and aging, and to maintain and restore the healthy, flourishing physical being gifted to us by God.

The Catholic Church should have no problem supporting the extension not only of the average life span but also of the maximum life span. At least since Pius XII’s 1950 encyclical Humani Generis, it has become clear that there is no conflict between evolution and the doctrine of the faith regarding man and his vocation. As Pope John Paul II put it: “Today, more than a half-century after the appearance of that encyclical, some new findings lead us toward the recognition of evolution as more than a hypothesis.”

This is especially important in the current context, because the maximum human life span is a product of morally arbitrary evolution, not the result of any divine edict that has ever been communicated to us. Aging and biological senescence and death are the products of evolution. As such, they have no special moral status, whether naturalistic or divine. Aging is essentially a disease process. It results from the failure of our evolved biological mechanisms for cellular repair. We have been endowed with rational capacities unique in all the world. I can see no reason why we should not direct those rational faculties toward improving on what nature has so wonderfully but imperfectly developed. The goal, of course, is not an extended period of decrepitude but an extended period of healthy and vigorous life.

It would be enough for the Catholic Church to support anti-aging efforts simply by acknowledging that this involves relieving suffering and infirmity, and that senescence is not a divinely-commanded condition. But there are positive arguments for actively combating the ravages of aging and the inevitability of biological death. One of these might come from taking the lead of Jesus, who repeatedly urged us to “do as I have done”. Jesus did not look at physical weaknesses and sickness and say “My Father has commanded it. Accept your suffering and impending death.” On the contrary, Jesus made it a core part of his mission to heal the sick and even raise the dead.

This implies that, while suffering may have value, the kind of involuntary, guiltless suffering imposed by age-related illness and senescence is not inherently noble. We can grant that suffering might improve us and can have a valuable place in our lives, without accepting every kind of suffering. Suffering comes in many forms, so reducing or even eliminating suffering due to aging and death still allows plenty of room for a salutary or redemptive role for suffering.

Catholics faced with disease and suffering do not hesitate to support medical research even as they minister to the spiritual needs of victims. I believe that, as it becomes ever more feasible to prevent and reverse the diseases of aging, our moral responsibility to help in doing so becomes greater. Extending the maximum human life span has not seemed feasible until recent years. As more evidence accumulates showing that we can successfully combat aging and the inevitability of biological death, I would expect to see the Church actively supporting or conducting research.

A final observation: From a specifically Christian perspective, extending the maximum healthy life span of humans beyond the current limit of around 123 years would have another major benefit: It would give us more time to develop virtue, to do good works, to serve God, and to save souls. This alone should be reason enough to vigorously support the quest for ageless bodies and indefinite life spans. Few of even the most optimistic transhumanists expect the world to ever be perfect. To the extent that the world remains imperfect—and far inferior to Heaven—a longer existence in the physical world might perhaps be regarded as a milder form of purgatory. It can be seen as a divine blessing: an extended opportunity to improve ourselves, do good works to redeem ourselves, to glorify God, and to more fully earn a place in Heaven.

Saturday, August 8, 2009

Climate Consensus? Maybe, But About What?

(This is a slightly edited version of a post I made to the WTA-Talk email list on July 31, 2009.)

James Hughes posted [on the WTA-Talk email list] the results of one particular survey that reported apparently strong agreement on something or other. James especially highlights the figure of 97% agreement. That does indeed sound very impressive. I do think that such a tight consensus among a group of scientists would be something to give considerable epistemic weight to -- at least in the absence of major objections, say from a neighboring discipline. I'm perfectly willing to be persuaded that a consensus on some clear point exists that I currently disagree with. So far, however, I haven't been given sufficient reason to do so. Let's look a little more closely at this particular survey and my reasons for doubt.

>Two questions were key: Have mean global temperatures risen compared
>to pre-1800s levels, and has human activity been a significant
>factor in changing mean global temperatures?
>
>About 90 percent of the scientists agreed with the first question and 82 percent the second.

These numbers are lower than the most impressive one of 97%, but still high.

>The strongest consensus on the causes of global warming came from
>climatologists who are active in climate research, with 97 percent
>agreeing humans play a role.

Agreement was lower in certain groups:

>Petroleum geologists and meteorologists were among the biggest
>doubters, with only 47 percent and 64 percent, respectively,
>believing in human involvement.

Why would agreement be higher among the climatologists than among other scientists, including meteorologists and physicists? One plausible answer is that it's because the climatologists can make better judgments. (Although evidence-based forecasting shows that expert forecasts of future changes cannot be trusted with this kind of problem.) Another plausible answer is that groupthink is at work, as it is in so many areas of human activity. This is hardly an arbitrary suggestion, given all the accusations of "denial" and "planetary traitors" and the strong pressures being exerted against skeptics.

Of course there are other surveys, which produce different results. Climatologists are only one group qualified to answer these questions. But I'll set that aside here.

One question that comes to mind is: How were the people to be questioned selected? What percentage of the total does the 3,100 or so represent? From what I've seen, some 10,200 earth scientists were contacted. Only 3,100 replied. Now, these may be representative, or they may not be. Anyone with an academic background in the social sciences or statistics knows that samples can and often do misrepresent the whole. Given the thousands of scientists who have signed dissenting opinions, I'm not terribly confident that the percentages of respondents in this survey accurately represent the whole group. It seems, for instance, that earth scientists working in private industry were ignored. Given that government-funded scientists may have an incentive (above and beyond the obviously heavy peer pressure) to agree, the results may not give an accurate picture of all relevant scientists.

These questions come to mind especially because of the highly politicized nature of this discussion. Also, specifically, because of misrepresentations such as those seen with the IPCC report, where a small group of people claims to speak for a much larger group. (Compare the summary of the IPCC report to the actual details of the report...)

Other surveys have yielded different percentages. You can see that just from the Wikipedia article.

But, set aside these concerns.

Much more troubling are the questions and the conclusions so quickly drawn from them. Consider the questions. What exactly were those surveyed being asked?

1. "Have mean global temperatures risen compared to pre-1800s levels?"
1800 was around the time that we began to recover more quickly from the Little Ice Age. So what does this tell us? Not much about today or about human activity. It does show that climate scientists agree that the global temperature changes over time. Who is going to disagree with that?

2. Has human activity been a significant factor in changing mean global temperatures?
So, 82% said yes to this. Is this anything to get excited about? Should it impress those of us who are a bit skeptical about warming catastrophe stories? Suppose you are entirely certain that carbon dioxide released by humans is not the cause of global warming. You would still easily grant that global mean temperatures have risen due to the urban heat island effect.

In addition, the question is very vague, certainly if "significant" is taken in the sense of statistical significance (as it presumably is by these scientists). If those climate scientists believed that only 2% or 5% of observed warming could be attributed to human activity, they would still agree with that statement.

How many would still agree if the question was:
-- Do you agree that warming was almost certainly primarily due to human activity? (Not just "significant".)
-- Is global warming principally or quantifiably due to human activity?
-- Are you certain or almost certain that human activity would cause a degree of future warming that constituted a catastrophe?
-- Do you believe that large cuts in carbon dioxide would be effective or cost-effective?
-- Do you believe that the Kyoto Protocol is a sensible solution?

Claiming consensus -- even if entirely justified -- on such vague questions that few skeptics would disagree with is an easy victory that gets us nowhere with any discussion that matters. Once again, dumbing down the issue to a "consensus" of some vague kind isn't useful.

Aside from the foregoing points, I have to say that given the inaccuracy of climate models (as shown by comparing their outputs to past data), being impressed by a supposed (or even real) consensus of climate scientists doesn't look too different from relying on a consensus of astrologers. (I would have equally harsh things to say about economists, when they model whole economies...) Granted, that's overstating it. But not by a whole hell of a lot. Again, see my previous post pointing to an audit of the forecasting methodology of the IPCC report, which is considered the gold standard.

I just can't see climate modeling as having attained the status of a hard science at this stage. Even if there was a rock solid consensus on some point of interest (rather than on statements that I have no problem with at all), I would not feel rationally compelled to assent to it as I would, for instance, in the case of a consensus among particle physicists who tell me not to worry about strangelets as they start up the Large Hadron Collider.

My Current View of the Global Warming Controversy

In the raging debate over global warming (or climate change), each side contributes to polarization and misrepresentation of views. Too many of those who see themselves as part of the “consensus” about anthropogenic global warming (AGW) have a habit of ignoring the differences among those who disagree with them. These people are eager to slap the label “denier” and “anti-science” on the skeptics. (Both those labels have been applied to me by Mike Treder, who has consistently proven the most dishonest and arrogant example of what I’m talking about.)

We skeptics (okay, “planetary traitors” if you prefer) actually hold a wide range of views. I’m tired of being labeled a “denier” of some unspecified received truth. I do not deny, for instance, that there has been some global warming this century. My doubts about the claimed (and possibly real) consensus concern other beliefs. To set the record straight (and to make it a bit harder for people like Treder to misrepresent me), here are my current views, as of early August 2009:

• It’s highly probable that there has been some global warming this century—probably about 0.7 degrees C.

• The climate is dynamic and is continually changing. Further, it changes in different ways in different places. For instance, it may be warming in some areas while it cools in others. Local, specific examples are not good evidence for a global trend.

• There has been no warming over the past 12 years—despite continued human-related emissions.

• Over the next century, it’s extremely uncertain how much, if any, further GW/AGW to expect.

• Climate models are not proven reliable or accurate. Climate modeling is still an infant science.

• GW is far from the most urgent or important global issue for us to deal with. (See the work of The Copenhagen Consensus. Also see this [yes, it’s from a Cato blog, so start up your ad hominems].) Some problems that are more deserving of attention: Hunger, malaria, and unsafe water.

• Enormously expensive actions to reduce/stop GW by severely restricting CO2 emissions are premature. (Global wealth decades in the future will be far higher without such restrictions than with them; adaptation is vastly more efficient, to whatever extent it might be needed.)

• Given the considerable uncertainty (and also taking into account geopolitical and health considerations), it probably makes sense to move strongly toward nuclear power (and more solar, wind, and wave power where feasible) and to encourage or even subsidize research into alternative energy sources.

• The extent to which global warming is anthropogenic is much more uncertain than asserted by the “consensus”/orthodoxy.

• The extent to which a consensus actually exists is not clear, nor is it clear on what exactly the consensus agrees (beyond the fact of some warming over the last century and some contribution by human activities).

I reserve the right to change these views as I continue to study this interrelated set of complex issues.

Sunday, July 26, 2009

BMI: Badly Misleading Information

Metrics can be helpful in tracking progress and measuring adherence to processes. One process that is important to many people is that of losing weight or, more precisely, reducing their level of body fat. A simple metric for that purpose would certainly be helpful. At the same time, such a metric could help private and public agencies assess the prevalence and degree of obesity nationally and internationally. “But such a metric already exists!” you might exclaim. It’s the Body Mass Index (BMI).

For those of us still using bizarre imperial units, the BMI is calculated by taking your weight in pounds and your height in feet. You then multiply your weight by 4.88 and divide the result by your height squared. A 6’0” person weighing 200 lbs has a BMI of 27.1. So, yes, we have a simple metric, but this BMI is Balmy Metric Idiocy. It’s Badly Misleading Information.
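The calculation just described can be sketched in a few lines. (The 4.88 multiplier is simply the standard imperial constant 703 adapted to feet, since 703/144 ≈ 4.88; the usual formula divides weight in pounds by height in inches squared and multiplies by 703.)

```python
def bmi_imperial(weight_lb: float, height_ft: float) -> float:
    """BMI from pounds and feet: weight times ~4.88, divided by height
    in feet squared. (4.88 is 703/144, the standard constant converted
    from inches to feet.)"""
    return 4.88 * weight_lb / height_ft ** 2

# The example from the text: a 6-foot person weighing 200 pounds.
print(round(bmi_imperial(200, 6.0), 1))  # 27.1
```

Running the same function on the athlete discussed later (225 lbs, 6 feet) gives 30.5, reproducing both of the chapter's worked figures.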

The Proactionary Principle urges us, when making decisions, to strive for objectivity and to use evidence-based methods—not simply methods that are widely accepted and used. The great disparity between the high popularity of the BMI and its low level of objectivity and accuracy serves as an object lesson. It’s not just that millions of dieters use the BMI. It has been used and recommended for years by nutritionists, trainers, and official health, wellness, and fitness organizations. Governments are using it to define many millions of people as overweight and obese for the purposes of crafting health policy. The US National Institutes of Health (NIH) started using the BMI in 1985 to set cut-off points for weight and health.

So, what’s wrong with the BMI? I first realized one of its shortcomings when I ran the calculation for myself. I had been working on regaining some lost muscle mass. In doing so, I had put on a couple of pounds of fat along with the muscle. Despite the small gain, I knew that I was still fairly lean. This was confirmed by having the gym staff (on more than one occasion) use their more expensive version of the Tanita bioelectrical impedance scale I have in my bathroom. The result: 12.5% body fat. This was a bit lower than the result on my cheaper Tanita scale at home, but close. Given that result—and the fact that I could easily see my abdominals in the mirror—I should expect the BMI to come out clearly below 25, right?

Using the BMI calculator at MSNBC (and verified by my own calculation), I discovered that my BMI was 27.1. According to that, I was overweight. At the same time, the BMI calculator complained that my waist size was “not typical”. I take it that “not typical” means that I had more muscle than most people. That is one major problem with the BMI: It utterly fails to distinguish between fat and muscle.

Take a somewhat less typical example (but by no means an unknown one): an athlete or bodybuilder with 10% body fat, weighing 225 lbs and standing 6 feet tall. At that body fat level, the BMI should be no more than around 20 (the lower end of normal). In fact, it might well be under 20, since few people have that low a level of body fat. Instead, the BMI comes out as 30.5. The BMI is telling this highly conditioned, wonderfully lean athlete that he is in fact obese!

It’s true that the BMI is a pleasantly simple metric. Simplicity is good, but not at the expense of necessary accuracy and information. Because it considers only height and weight, the BMI doesn’t discriminate between fat, muscle, organ, and water. As such, it’s a foolish way to define normal, overweight, or obese. It doesn’t take body frame into account, making it blind to the differences between, say, taller men and shorter women. Studies show that the BMI does a particularly poor job when applied to children, especially when comparing children of differing ethnic groups. For instance, “Slight Sri Lankan children in Australia have more body fat than white Australian children with the same BMI.”

Another fatal weakness of the BMI is that it tells us little about people’s health status or probable future health. One reason for this is that it makes no distinction between the places where fat is stored on the body. It’s now known that abdominal fat is a better indicator of future health problems than fat in other areas, but the BMI is oblivious to this finding. The numbers of the BMI yield a misleadingly precise classification, despite the fact that it’s hard to see any difference in increased risk for premature death or serious illness between those who are of normal weight (BMIs of 20 to 25), overweight (25 to 30), and obese (30 to 40).

Risks only go up for those classified as underweight (BMI < 18) or as morbidly obese (BMI > 40). If you have a BMI between 25 and 26, you’re classified as overweight. Yet studies by Flegal at the US Centers for Disease Control found this group had the best longevity prospects. A study by Gronniger found that moderately obese men (as classified by the BMI) had the same mortality rate as men of “normal” weight.
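The conventional cutoffs can be sketched in a few lines, which makes their bright-line arbitrariness easy to see (I use the common 18.5 underweight threshold here; the exact boundary values vary slightly between sources):

```python
def bmi_category(bmi: float) -> str:
    """Map a BMI to the conventional labels. The cutoffs (18.5 / 25 / 30 / 40)
    are the standard ones -- sharp lines with no scientific basis behind them."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    if bmi < 40:
        return "obese"
    return "morbidly obese"

# The lean 30.5-BMI athlete lands in "obese"; a 25.1-BMI person is
# "overweight" despite having, per Flegal, the best longevity prospects.
print(bmi_category(30.5), bmi_category(25.1))
```

A tenth of a point moves you across a category boundary, yet nothing about your health changes at that line.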

The BMI is arbitrary in the way it classifies people as normal, overweight, and obese. No scientific basis has been found for labeling people as overweight or obese on the basis of their BMI. What the BMI really does is to codify someone’s subjective views of overweight and obesity into a pseudo-objective metric. I don’t say this to make things easier for fat people. Personally, I work at staying reasonably lean and I have a strong aversion to body fat in other people. My own arbitrary measures would be at least as strict as those embodied in the BMI—were I to attempt to force my preferences onto everyone else, under cover of science.

As I have argued in the context of critiquing the “precautionary principle”, activists like arbitrariness. Arbitrary measures and principles are easily manipulated by special interests. Politicians can use the arbitrariness of the BMI to hype a “war on fat” and to troll for votes by exaggerating health risks. The weight loss industry and those who sell weight loss drugs can do the same.

The BMI is a simple, slim measure, but it’s too simple to do the job. A better approach will, of necessity, be a little better filled out with information and wisdom. If you hadn’t considered these points before, now you know. Don’t be a Bloody Moronic Idiot by continuing to use the BMI.

Wednesday, July 22, 2009

6 Ways to Mismanage Risks

How did so many financial companies do such a poor job of risk management during the recent financial crisis? Numerous factors contributed to the problems, including (as I argued in an earlier blog entry) problematic government regulation. In a March 2009 Harvard Business Review article, René Stulz offers his own insightful take on “Six Ways Companies Mismanage Risk”.

As we’ve seen in responses to previous crises, organizations both public and private have not done well at making the kinds of changes that effectively prevent a different set of problems cropping up in future. Attention to the six problem areas Stulz discusses would probably help. These are:

1. Relying on historical data.
2. Focusing on narrow measures.
3. Overlooking knowable risks, such as those outside the class of risks normally associated with particular units, and those related to the hedging strategies used to manage risks already identified and assessed.
4. Overlooking concealed risks.
5. Failing to communicate.
6. Not managing in real time.

Stulz concludes by calling for “sustainable risk management”. This includes using scenario analysis to take into account catastrophic risks. You can find my more detailed review of Stulz’s article and a link to the article itself here.

Sunday, June 21, 2009

Singularity and Surge Scenarios

How fast will the future arrive? How will that future differ from the present? We need to have a good sense of the possible and plausible answers to those questions if we are to make smart decisions about technology, the economy, the environment, and other complex issues. The process of envisioning possible futures for the purpose of preparing more robust strategies is often called scenario planning. I prefer scenario learning or thinking, because scenarios foster prepared minds by “learning from the future”, and they provide a forum for integrating what has been learned into decision making.

It’s important to realize that scenario learning is not a forecasting method. Its purpose is not to pinpoint future events but to highlight large-scale forces that push the future in different directions. If we are to develop robust strategies, policies, and plans, we need a sufficiently diverse set of scenarios. In recent years, the success of the Singularity concept has narrowed the range of scenarios pondered in many discussions. The Singularity was conceived and developed by Vernor Vinge (inspired by I.J. Good’s 1965 thoughts on “the intelligence explosion”), Hans Moravec, and Damien Broderick. Over the last few years it has become strongly associated with the specific vision expounded in great detail by Ray Kurzweil.

Responses to Kurzweil’s bold and rich Singularity scenario have often been polarized. To some readers, the Singularity is obvious and inevitable. To others, the Singularity is a silly fantasy. My concern is that the very success of Kurzweil’s version of the Singularity has tended to restrict discussion to pro- and anti-Singularity scenarios. Just as the physical singularity of a black hole sucks in everything around it, the technological Singularity sucks in all discussion of possible futures. I’d like to open up the discussion by identifying a more diverse portfolio of futures.

We could chop up the possibilities in different ways, depending on what we take to be the driving forces and the fixed factors. I choose a 2 x 5 matrix that generates 10 distinct scenarios. The “5” part of the matrix refers to five degrees of change, from a regression or reversal of technological progress at one extreme to a full-blown Singularity of super-exponential change at the other. The “2” part of the matrix refers to outcomes that are either Voluntarist or Authoritarian. I make this distinction in terms of how the trajectory of change (or the lack of it) is brought about, whether by centralized direction or by a primarily emergent, distributed process, as well as by the form it ends up taking.

As a transhumanist, I’m especially interested in the difference between the Singularity and what I call the Surge. In other words, scenarios 9 and 10 compared to 7 and 8.

So, we have five levels of change, with each level having two very broadly defined types, as follows:


Level 1 is the realm of Regression (or Reversal) scenarios. In “U-Turn”, civilization voluntarily abandons some or all technology and the social structures technology makes possible. It’s hard to see this happening on a global level, but we can imagine it happening due to cultural exhaustion from the complexities of technologically advanced living (this is the “Mojo Lost” variant). A religion or philosophy might arise to translate this cultural response into action. In the “Hard Return” variant, a similar outcome might result from global war or from the advent of a global theocracy.

Level 2: Stationary. Bill Joy’s advocacy of relinquishing GNR (genetic, nano, robotic) technologies is a partial version of this, at least as Joy describes it. A more thorough relinquishment that attempted to eradicate the roots of dangerous technologies would have to be a partial Level 1 scenario. Some Amish communities embody a partial Stationary scenario, though most Amish are not averse to adopting new technologies that fit their way of life.

The Steady State scenario seems to me quite implausible. It involves everyone somehow voluntarily holding onto existing technology but developing no new technologies. This might be slightly more plausible if hypothesized for a far future time when science has nothing more to discover and all its applications have been developed. The Full Stop variant of the Stationary level of change is more plausible. Here, compulsion is used to maintain technology at a fixed level. Historically, the Western world (but not the Islamic world) experienced something very close to Full Stop during the Dark Ages, from around 500 AD to 1000 AD (perhaps until 1350 AD).

If extreme environmentalists were to have their way, we might see a version of Full Stop that I call Hard Green (or Green Totalitarianism) come about. A more voluntarist version of this might be called Stagnant Sustainability.

Level 3: Linear Progressive. This level of change might also be called “Boring Future”. It’s a scenario of slow, gradual advance in traditional areas that we see in most science fiction—especially SF on TV and in the movies. Technology advances and society changes at a linear pace. The recent past is a good guide to the near future. Most of us seem to have expectations that match Level 3. Kurzweil calls this the “intuitive linear” view. I don’t feel much need to distinguish the Voluntarist and Authoritarian versions, except to give them names: Strolling and Marching.

Level 4: Constrained Exponentially Progressive (Surge scenarios). This level of scenarios recognizes that technological progress (and often social progress or change) is not linear but exponential, at least some of the time and at least for many technologies and cultures. The past century is therefore not a good guide to the century to come. Overall, despite setbacks and slowdowns, change accelerates—technology surges ahead, sometimes then slowing down again before surging ahead once more. We can expect to see much more change between 2010 and 2060 than we saw between 1960 and 2010. To the extent that this change comes about without centralized control and direction, it’s a scenario of Emergent Surge. To the extent that a central plan pushes and shapes technological progress, it’s a Forced Surge.

Level 5: Super-exponentially Progressive (Singularity scenarios). The Singularity scenarios arise when we project the discontinuous arrival of superintelligence, or otherwise expect double-exponential progress. Yudkowsky’s “Friendly AI” is a clear instance of the Humanity-Positive Singularity, though not the only possible instance. There are other ways of distinguishing various Singularity scenarios. One way (going back to Vinge) is in terms of how the Singularity comes about: It might be due to the Internet “waking up”, augmentation of human biologically-based intelligence, human-technology integration, or the emergence of a singular AI before humans exceed the historical limits on their intellectual capabilities.

By defining and naming these scenarios, I hope to make it easier to discuss a fuller range of possibilities. We might use these scenarios (suitably fleshed out) as a starting point to consider various questions, such as: Is continued technological progress inevitable? Could we plausibly envision civilizations where progress halts or even reverses? What factors, causes, and decisions could lead to halting/stagnation or regression?

My own main interest, for now, lies in considering the differences between the Surge and the Singularity scenarios. They may not appear to be very different, but I believe there is quite a difference in the underlying view of economics and of social, psychological, and organizational factors. I will explore the Surge vs. Singularity issue more in a later post, and in the sixth chapter of my forthcoming book, The Proactionary Principle. I will consider, for instance, factors favoring a Surge rather than a Singularity, such as adoption rates, organizational inertia, cognitive biases, failure to achieve super-intelligent AI, sunk costs, activist opposition, and regulation and bureaucratically-imposed costs—nuclear power in the USA being a good example.