First, for those who aren’t familiar with the good Doctor: Doctor Who is a British TV series that originally began in 1963 and was restarted a few years ago after a long absence. It follows the adventures of a Time Lord who is rather fond of Earth and humans, and who often takes a human companion along. Early on, the series established his most persistent adversary and arch-villain as the only other surviving Time Lord, known as The Master.
In my view, the third season of the revived series is the best so far. Even so, there are some episodes I loathe, especially the one in which a (human) scientist finds a way to achieve physical immortality. Of course, he is portrayed as evil and power-mad.
The last two episodes of Season Three revolved around a conflict with The Master. I found myself thinking about the actions and inactions of the Doctor in terms of two moral paradigms that have historically been popular among philosophers: the deontological and consequentialist approaches. (I have long preferred a virtue-based ethics, but it’s hard to get very far away from elements of the other two models of morality.)
In case you don’t remember those Moral Philosophy classes you took years ago, here’s an exceedingly brief recap:
Deontological ethics: The right is prior to the good. In other words, actions are good only if they are the right kind of actions. That typically means that good actions are those that conform to some moral rule (such as Kant’s categorical imperative). The outcomes of an action do not (directly) determine its goodness.
http://en.wikipedia.org/wiki/Deontological_ethics
Consequentialist ethics: The good is prior to the right. In other words, the right action is the one that brings about the most good. (In the utilitarian version, that means bringing about the greatest happiness for the greatest number of people, or sentient beings.) For consequentialists, actions must be judged morally good or bad solely according to their outcomes, not according to motives or the character of the person carrying out the act.
http://en.wikipedia.org/wiki/Consequentialist
Doctor Who has always been portrayed as an unyieldingly moral person. In particular, he doesn’t kill, no matter how sensible a move that might seem, even in the direst of circumstances.
[SPOILER WARNING]
In the last two episodes of Season Three, The Master uses advanced alien technology to mesmerize the population, getting himself elected Prime Minister of Britain and then essentially taking control of the entire world. From this base, he intends to expand outward and conquer other races on other planets. With enormous help from his companion, Martha Jones, the Doctor succeeds in toppling The Master after a year of merciless, murderous, dictatorial rule. Everyone in the vicinity (all of whom have suffered badly) desperately urges the Doctor to kill The Master. (Being a Time Lord, The Master can regenerate after being killed, but the Doctor could overcome that ability.)
The Doctor will not kill The Master, even though that might seem an eminently sensible thing to do to prevent future evils should The Master escape. Instead, the Doctor forgives The Master and plans to watch over him, imprisoned in a (relatively) secure location. Once again, the Doctor displays an absolute refusal to take (intelligent) life. Given The Master’s enormous genius and driven psyche, it’s entirely possible he will escape and cause tremendous misery and destruction, but this seems not to weigh at all in the Doctor’s decision.
This suggests that the Doctor lives by a clearly Kantian or deontological moral code. His actions are limited according to what Robert Nozick termed “side-constraints”. Yet the Doctor also appears to be something of a consequentialist, in that he travels the universe and time eagerly engaging with trouble and righting wrongs. He’s also clearly a universalist, in that his moral concerns transcend species boundaries. Arguably, this is more a feature of consequentialism than of deontology. Utilitarian consequentialists count the pain and pleasure of all feeling beings equally in their moral calculations. Deontologists such as Kant limit the realm of moral concern to a narrow class of beings, usually those counted as capable of rationality.
The writers of this episode, however, either haven’t taken enough Moral Philosophy classes or else took an easy way out. Consider the Doctor’s great agony at The Master’s last move. The Master chooses to let himself die (after being shot by one of the nearby victims) in order to spite the Doctor. (The Doctor doesn’t want to be alone as the last of his race of Time Lords.) The Doctor’s personal desire not to be the last of his species is not universalist, but strongly favors a species-specific valuation of an immensely evil Time Lord over any number of individuals of other species. Nor is his response consequentialist. No principled consequentialist would put his own feelings above the implications of his actions for potentially billions of other sentient beings. It seems to me that the Doctor’s response in this final episode was a melodramatic one that betrays a quite clear deontic ethics.
The episode I detested, mentioned above, followed a long line of stories by numerous writers in taking exceedingly cheap shots at the goal of physical immortality. What underlies the Doctor’s opposition to immortality? Does it make any sense? He lives a long time himself (perhaps over 900 Earth years so far). What possible deontic objection could he have to a human being wanting to greatly extend his own life, and the lives of others who so choose? Even if the Doctor is a closet consequentialist, merely masquerading as a deontologist, he cites no consequentialist objection.
Among us regular human beings, I strongly suspect that much stated opposition to physical immortality (or superlongevity) is essentially rooted in resentment. If I can’t live indefinitely then, dammit, living indefinitely must be a bad thing. Having lived for centuries and apparently enjoyed most of that time, perhaps even Doctor Who lies to himself about this, being unable to face what he takes to be the inevitability of his eventual death.
Friday, March 28, 2008
The Proactionary Principle (March 2008)
The Proactionary Principle emerged out of a critical discussion of the precautionary principle during Extropy Institute’s Vital Progress Summit in 2004. We saw that the precautionary principle is riddled with fatal weaknesses. Not least among these is its strong bias toward the status quo and against the technological progress so vital to the continued survival and well-being of humanity.
Participants in the VP Summit understood that we need to develop and deploy new technologies to feed billions more people over the coming decades, to counter natural threats (from pathogens to environmental changes), and to alleviate human suffering from disease, damage, and the ravages of aging. We recognized the need to formulate an alternative, more sophisticated principle incorporating more extensive and accurate assessment of options while protecting our fundamental responsibility and liberty to experiment and innovate.
With input from some of those at the Summit, I developed the Proactionary Principle to embody the wisdom of structure. The Principle urges all parties to actively take into account all the consequences of an activity—good as well as bad—while apportioning precautionary measures to the real threats we face, all while appreciating the crucial role played by technological innovation and humanity’s evolving ability to adapt to and remedy any undesirable side-effects.
The exact wording of the Principle matters less than the ideas it embodies. The Principle is an inclusive, structured process for maximizing technological progress for human benefit while heightening awareness of potential side-effects and risks. In its briefest form, it says:
Progress should not bow to fear, but should proceed with eyes wide open.
More flatly stated:
Protect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.
Expanded to make room for some specifics:
Encourage innovation that is bold and proactive; manage innovation for maximum human benefit; think about innovation comprehensively, objectively, and with balance.
We can call this “the” Proactionary Principle so long as we realize that the underlying Principle is less like a sound bite than a set of nested Chinese boxes or Russian matryoshka dolls. If we pry open the lid of this introductory-level version of the Principle, we will discover ten component principles lying within:
1. Guard the Freedom to Innovate: Our freedom to innovate technologically is valuable to humanity. The burden of proof therefore belongs to those who propose measures to restrict new technologies. All proposed measures should be closely scrutinized.
2. Use Best Objective Methods: Use a decision process that is objective, structured, and explicit. Evaluate risks and generate alternatives and forecasts according to available science, not emotionally shaped perceptions, using the best-validated and most effective methods available; use explicit forecasting processes with rigorously structured inputs, and fully disclose the forecasting procedure; reduce biases by selecting disinterested experts, by using the devil’s advocate procedure with judgmental methods, and by using auditing procedures such as review panels.
3. Be Comprehensive: Consider all reasonable alternative actions, including no action. Estimate the opportunities lost by abandoning a technology, and take into account the costs and risks of substituting other credible options. When making these estimates, use systems thinking to carefully consider not only concentrated and immediate effects, but also widely distributed and follow-on effects, as well as the interaction of the factor under consideration with other factors.
4. Embrace Input: Take into account the interests of all potentially affected parties, and keep the process open to input from those parties or their legitimate representatives.
5. Simplify: Use methods that are no more complex than necessary, taking into account the other principles.
6. Prioritize and Triage: When choosing among measures to ameliorate unwanted side effects, prioritize decision criteria as follows:
· Give priority to reducing non-lethal threats to human health over threats limited to the environment (within reasonable limits);
· Give priority to reducing immediate threats over remote threats;
· Give priority to addressing known and proven threats to human health and environmental quality over hypothetical risks;
· Prefer the measure with the highest expectation value by giving priority to more certain over less certain threats, to irreversible or persistent impacts over transient impacts, and to proposals that are more likely to be accomplished with the available resources. (A brief illustrative sketch of this expectation-value weighting appears after the list.)
7. Apply Measures Proportionally: Consider restrictive measures only if the potential negative impact of an activity has both significant probability and severity. In such cases, if the activity also generates benefits, discount the impacts according to the feasibility of adapting to the adverse effects. If measures to limit technological advance do appear justified, ensure that the extent of those measures is proportionate to the extent of the probable effects, and that the measures are applied as narrowly as possible consistent with being effective.
8. Respect Diversity in Values: Recognize and respect the diversity of values among people, as well as the different weights they place on shared values. Whenever feasible, enable people to make reasonable, informed tradeoffs according to their own values.
9. Treat Symmetrically: Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks. Fully account for the benefits of technological advances.
10. Revisit and Refresh: Create a trigger to prompt decision makers to revisit the decision, far enough in the future that conditions may have changed significantly, but soon enough to take effective and affordable corrective action.
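To make the arithmetic behind the expectation-value criterion in component 6 a bit more concrete, here is a minimal sketch (in Python) of how such a weighting might be operationalized. It is purely illustrative and not part of the Principle itself: the Threat fields, the example numbers, and the multipliers for irreversibility and evidential status are my own assumptions, chosen only to show the basic idea of ranking threats by probability times severity, adjusted for persistence and certainty.

# A hypothetical, illustrative sketch only; not part of the Proactionary Principle.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    probability: float   # estimated chance the harm occurs (0 to 1)
    severity: float      # estimated harm if it does occur (arbitrary units)
    irreversible: bool   # persistent or irreversible impacts weigh more
    proven: bool         # known and proven vs. merely hypothetical

def expected_harm(threat: Threat) -> float:
    """Expectation value of a threat: probability times severity, with
    simple (assumed) multipliers for irreversibility and evidential status."""
    weight = 1.0
    if threat.irreversible:
        weight *= 2.0      # assumption: extra weight for persistent impacts
    if not threat.proven:
        weight *= 0.5      # assumption: discount merely hypothetical risks
    return threat.probability * threat.severity * weight

# Two made-up threats, ranked by expected harm (highest first).
threats = [
    Threat("proven, transient harm", probability=0.6, severity=10.0,
           irreversible=False, proven=True),
    Threat("hypothetical, persistent harm", probability=0.1, severity=50.0,
           irreversible=True, proven=False),
]
for t in sorted(threats, key=expected_harm, reverse=True):
    print(f"{t.name}: expected harm {expected_harm(t):.1f}")

Any real application would, of course, replace these stand-in numbers and weights with the structured, objective estimates called for in component 2.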