Sunday, June 21, 2009

Singularity and Surge Scenarios

How fast will the future arrive? How will that future differ from the present? We need to have a good sense of the possible and plausible answers to those questions if we are to make smart decisions about technology, the economy, the environment, and other complex issues. The process of envisioning possible futures for the purpose of preparing more robust strategies is often called scenario planning. I prefer scenario learning or thinking, because scenarios foster prepared minds by “learning from the future”, and they provide a forum for integrating what has been learned into decision making.

It’s important to realize that scenario learning is not a forecasting method. Its purpose is not to pinpoint future events but to highlight large-scale forces that push the future in different directions. If we are to develop robust strategies, policies, and plans, we need a sufficiently diverse set of scenarios. In recent years, the success of the Singularity concept has narrowed the range of scenarios pondered in many discussions. The Singularity was conceived and developed by Vernor Vinge (inspired by I.J. Good’s 1965 thoughts on “the intelligence explosion”), Hans Moravec, and Damien Broderick. Over the last few years it has become strongly associated with the specific vision expounded in great detail by Ray Kurzweil.

Responses to Kurzweil’s bold and rich Singularity scenario have often been polarized. To some readers, the Singularity is obvious and inevitable. To others, the Singularity is a silly fantasy. My concern is that the very success of Kurzweil’s version of the Singularity has tended to restrict discussion to pro- and anti-Singularity scenarios. Just as the physical singularity of a black hole sucks in everything around it, the technological Singularity sucks in all discussion of possible futures. I’d like to open up the discussion by identifying a more diverse portfolio of futures.

We could chop up the possibilities in different ways, depending on what we take to be the driving forces and the fixed factors. I choose a 2 x 5 matrix that generates 10 distinct scenarios. The “5” part of the matrix refers to five degrees of change, from a regression or reversal of technological progress at one extreme to a full-blown Singularity of super-exponential change at the other. The “2” part of the matrix refers to outcomes that are either Voluntarist or Authoritarian. I make this distinction both in terms of how the trajectory of change (or lack of it) is brought about, whether through a primarily emergent, distributed process or through centralized direction, and in terms of the form the outcome ends up taking.

As a transhumanist, I’m especially interested in the difference between the Singularity and what I call the Surge: in other words, between scenarios 9 and 10 and scenarios 7 and 8.
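Since the numbering matters for that comparison, here is a minimal Python sketch of the ten scenarios as a simple data structure. This is my own reconstruction from the prose below, not the original chart: the two Level 5 variant names are not given in the text, so generic labels stand in for them, and the ordering (Voluntarist before Authoritarian within each level) is an assumption chosen so that the Surge pair comes out as 7 and 8 and the Singularity pair as 9 and 10.

```python
# Reconstruction of the 2 x 5 scenario matrix as a data structure.
# Level 5's variant names are not given in the text, so generic labels stand in.
TYPES = ("Voluntarist", "Authoritarian")

LEVELS = [
    ("Regression",                      ("U-Turn", "Hard Return")),
    ("Stationary",                      ("Steady State", "Full Stop")),
    ("Linear Progressive",              ("Strolling", "Marching")),
    ("Constrained Exponential (Surge)", ("Emergent Surge", "Forced Surge")),
    ("Super-exponential (Singularity)", ("Voluntarist Singularity",      # name not given
                                         "Authoritarian Singularity")),  # name not given
]

number = 0
for level_name, variants in LEVELS:
    for type_name, variant in zip(TYPES, variants):
        number += 1
        print(f"{number:2d}. {variant:26} ({type_name} {level_name})")
```

Running this lists the scenarios in order, which is why the comparison above reads as 7 and 8 (the Surge pair) versus 9 and 10 (the Singularity pair).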

So, we have five levels of change, with each level having two very broadly defined types, as follows:

[Figure: the 2 x 5 scenario matrix, pairing the five levels of change with their Voluntarist and Authoritarian variants.]
Level 1 is the realm of Regression (or Reversal) scenarios. In “U-Turn”, civilization voluntarily abandons some or all technology and the social structures technology makes possible. It’s hard to see this happening on a global level, but we can imagine it arising from cultural exhaustion with the complexities of technologically advanced living (this is the “Mojo Lost” variant). A religion or philosophy might arise to translate this cultural response into action. In the “Hard Return” variant, a similar outcome might result from global war or from the advent of a global theocracy.

Level 2: Stationary. Bill Joy’s advocacy of relinquishing GNR (genetic, nano, robotic) technologies is a partial version of this, at least as Joy describes it. A more thorough relinquishment that attempted to eradicate the roots of dangerous technologies would have to be a partial Level 1 scenario. Some Amish communities embody a partial Stationary scenario, though most Amish are not averse to adopting new technologies that fit their way of life.

The Steady State scenario seems to me quite implausible. It involves everyone somehow voluntarily holding onto existing technology but developing no new technologies. This might be slightly more plausible if hypothesized for a far-future time when science has nothing more to discover and all its applications have been developed. The Full Stop variant of the Stationary level of change is more plausible. Here, compulsion is used to maintain technology at a fixed level. Historically, the Western world (but not the Islamic world) experienced something very close to Full Stop during the Dark Ages, from around 500 AD to 1000 AD (perhaps until 1350 AD).

If extreme environmentalists were to have their way, we might see a version of Full Stop that I call Hard Green (or Green Totalitarianism) come about. A more voluntarist version of this might be called Stagnant Sustainability.

Level 3: Linear Progressive. This level of change might also be called “Boring Future”. It’s the scenario of slow, gradual advance along familiar lines that we see in most science fiction, especially SF on TV and in the movies. Technology advances and society changes at a linear pace. The recent past is a good guide to the near future. Most of us seem to hold expectations that match Level 3. Kurzweil calls this the “intuitive linear” view. I don’t feel much need to distinguish the Voluntarist and Authoritarian versions, except to give them names: Strolling and Marching.

Level 4: Constrained Exponentially Progressive (Surge scenarios). Scenarios at this level recognize that technological progress (and often social progress or change) is not linear but exponential, at least some of the time and at least for many technologies and cultures. The past century is therefore not a good guide to the century to come. Overall, despite setbacks and slowdowns, change accelerates: technology surges ahead, sometimes slowing down again before surging ahead once more. We can expect to see much more change between 2010 and 2060 than we saw between 1960 and 2010. To the extent that this change comes about without centralized control and direction, it’s a scenario of Emergent Surge. To the extent that a central plan pushes and shapes technological progress, it’s a Forced Surge.
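To picture this constrained, surging pattern, here is a minimal toy sketch (my own illustration; the wave midpoints and ceilings are arbitrary assumptions, not data) that models overall capability as a sum of successive S-curves. Each technology wave saturates, but the next wave has a higher ceiling, so the envelope keeps climbing in a surge-slowdown-surge rhythm.

```python
import math

def logistic(t, midpoint, ceiling, steepness=0.4):
    """One S-curve: a technology wave that surges, then levels off."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def surge(t):
    """Capability as a sum of successive waves; each ceiling is higher,
    so growth slows as one wave saturates, then surges again."""
    waves = [(10, 1.0), (30, 3.0), (50, 9.0)]  # (midpoint year, ceiling): arbitrary
    return sum(logistic(t, m, c) for m, c in waves)

for year in range(0, 61, 10):
    print(f"year {year:2d}: capability index {surge(year):5.2f}")
```

The printed trajectory rises steeply around each wave’s midpoint and flattens between waves, yet over the whole span it grows roughly exponentially, which is the Surge in miniature.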

Level 5: Super-exponentially Progressive (Singularity scenarios). The Singularity scenarios arise when we project the discontinuous arrival of superintelligence, or otherwise expect double-exponential progress. Yudkowsky’s “Friendly AI” is a clear instance of the Humanity-Positive Singularity, though not the only possible instance. There are other ways of distinguishing various Singularity scenarios. One way (going back to Vinge) is in terms of how the Singularity comes about: it might be due to the Internet “waking up”, the augmentation of biologically based human intelligence, human-technology integration, or the emergence of a singular AI before humans exceed the historical limits on their intellectual capabilities.
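The numerical gap between Levels 3, 4, and 5 is easy to see with toy growth curves. In this sketch the rates are arbitrary assumptions, not forecasts: linear growth adds a fixed increment per year, exponential growth compounds at a fixed rate, and double-exponential growth compounds at a rate that itself grows exponentially, which is Kurzweil’s reading of the long-run trend data.

```python
import math

def linear(t, rate=0.05):
    """Level 3: a fixed increment each year."""
    return 1 + rate * t

def exponential(t, rate=0.05):
    """Level 4 (idealized, without the slowdowns): a fixed compound rate."""
    return math.exp(rate * t)

def double_exponential(t, rate=0.05, accel=0.02):
    """Level 5: the growth rate itself grows exponentially,
    i.e. dC/dt = rate * exp(accel * t) * C, with C(0) = 1."""
    return math.exp((rate / accel) * (math.exp(accel * t) - 1))

for years in (10, 25, 50, 100):
    print(f"after {years:3d} years: linear {linear(years):6.1f}x, "
          f"exponential {exponential(years):9.1f}x, "
          f"double-exponential {double_exponential(years):12.1f}x")
```

After a century the linear curve has merely sextupled while the double-exponential curve has grown by millions of times, which is the intuition behind treating Level 5 as a discontinuity rather than a faster version of Level 4.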

By defining and naming these scenarios, I hope to make it easier to discuss a fuller range of possibilities. We might use these scenarios (suitably fleshed out) as a starting point to consider various questions, such as: Is continued technological progress inevitable? Could we plausibly envision civilizations where progress halts or even reverses? What factors, causes, and decisions could lead to halting/stagnation or regression?

My own main interest, for now, lies in considering the differences between the Surge and the Singularity scenarios. They may not appear to be very different, but I believe there is quite a difference in the underlying view of economics and of social, psychological, and organizational factors. I will explore the Surge vs. Singularity issue more in a later post, and in the sixth chapter of my forthcoming book, The Proactionary Principle. I will consider, for instance, factors favoring a Surge rather than a Singularity, such as adoption rates, organizational inertia, cognitive biases, failure to achieve super-intelligent AI, sunk costs, activist opposition, and regulation and bureaucratically imposed costs (nuclear power in the USA being a good example).