How fast will the future arrive? How will that future differ from the present? We need to have a good sense of the possible and plausible answers to those questions if we are to make smart decisions about technology, the economy, the environment, and other complex issues. The process of envisioning possible futures for the purpose of preparing more robust strategies is often called scenario planning. I prefer scenario learning or thinking, because scenarios foster prepared minds by “learning from the future”, and they provide a forum for integrating what has been learned into decision making.
It’s important to realize that scenario learning is not a forecasting method. Its purpose is not to pinpoint future events but to highlight large-scale forces that push the future in different directions. If we are to develop robust strategies, policies, and plans, we need a sufficiently diverse set of scenarios. In recent years, the success of the Singularity concept has narrowed the range of scenarios pondered in many discussions. The Singularity was conceived and developed by Vernor Vinge (inspired by I.J. Good’s 1965 thoughts on “the intelligence explosion”), Hans Moravec, and Damien Broderick. Over the last few years it has become strongly associated with the specific vision expounded in great detail by Ray Kurzweil.
Responses to Kurzweil’s bold and rich Singularity scenario have often been polarized. To some readers, the Singularity is obvious and inevitable. To others, the Singularity is a silly fantasy. My concern is that the very success of Kurzweil’s version of the Singularity has tended to restrict discussion to pro- and anti-Singularity scenarios. Just as the physical singularity of a black hole sucks in everything around it, the technological Singularity sucks in all discussion of possible futures. I’d like to open up the discussion by identifying a more diverse portfolio of futures.
We could chop up the possibilities in different ways, depending on what we take to be the driving forces and the fixed factors. I choose a 2 x 5 matrix that generates 10 distinct scenarios. The “5” part of the matrix refers to five degrees of change, from a regression or reversal of technological progress at one extreme to a full-blown Singularity of super-exponential change at the other. The “2” part of the matrix refers to outcomes that are either Voluntarist or Authoritarian. I’m making this distinction in terms of how the trajectory of change (or lack of it) is brought about: either by a primarily emergent, distributed process (Voluntarist) or by centralized direction (Authoritarian), as well as in terms of the form it ends up taking.
As a transhumanist, I’m especially interested in the difference between the Singularity and what I call the Surge. In other words, scenarios 9 and 10 compared to 7 and 8.
So, we have five levels of change, with each level having two very broadly defined types, as follows:
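In outline, numbering the scenarios 1 through 10 (the Level 5 pair is labeled here simply by its Voluntarist/Authoritarian type):

Level 1, Regression: 1. U-Turn (Voluntarist) / 2. Hard Return (Authoritarian)
Level 2, Stationary: 3. Steady State (Voluntarist) / 4. Full Stop (Authoritarian)
Level 3, Linear Progressive: 5. Strolling (Voluntarist) / 6. Marching (Authoritarian)
Level 4, Constrained Exponentially Progressive: 7. Emergent Surge (Voluntarist) / 8. Forced Surge (Authoritarian)
Level 5, Super-exponentially Progressive: 9. Voluntarist Singularity / 10. Authoritarian Singularity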
Level 1 is the realm of Regression (or Reversal) scenarios. In “U-Turn”, civilization voluntarily abandons some or all technology and the social structures technology makes possible. It’s hard to see this happening on a global level, but we can imagine it arising from cultural exhaustion with the complexities of technologically advanced living (this is the “Mojo Lost” variant). A religion or philosophy might arise to translate this cultural response into action. In the “Hard Return” variant, a similar outcome might result from global war or from the advent of a global theocracy.
Level 2: Stationary. In these scenarios, technological development halts at roughly its current level but does not reverse. Bill Joy’s advocacy of relinquishing GNR (genetic, nano, robotic) technologies is a partial version of this, at least as Joy describes it. A more thorough relinquishment that attempted to eradicate the roots of dangerous technologies would have to be a partial Level 1 scenario. Some Amish communities embody a partial Stationary scenario, though most Amish are not averse to adopting new technologies that fit their way of life.
The Steady State scenario seems to me quite implausible. It involves everyone somehow voluntarily holding onto existing technology while developing no new technologies. This might be slightly more plausible if hypothesized for a far future time when science has nothing more to discover and all its applications have been developed. The Full Stop variant of the Stationary level of change is more plausible. Here, compulsion is used to maintain technology at a fixed level. Historically, the Western world (but not the Islamic world) experienced something very close to Full Stop during the Dark Ages, from around 500 AD to 1000 AD (perhaps until 1350 AD).
If extreme environmentalists were to have their way, we might see a version of Full Stop that I call Hard Green (or Green Totalitarianism) come about. A more voluntarist version of this might be called Stagnant Sustainability.
Level 3: Linear Progressive. This level of change might also be called “Boring Future”. It’s the scenario of slow, gradual advance in traditional areas that we see in most science fiction—especially SF on TV and in the movies. Technology advances and society changes at a linear pace. The recent past is a good guide to the near future. Most of us seem to have expectations that match Level 3. Kurzweil calls this the “intuitive linear” view. I don’t feel much need to distinguish the Voluntarist and Authoritarian versions, except to give them names: Strolling and Marching.
Level 4: Constrained Exponentially Progressive (Surge scenarios). This level of scenarios recognizes that technological progress (and often social progress or change) is not linear but exponential, at least some of the time and at least for many technologies and cultures. The past century is therefore not a good guide to the century to come. Overall, despite setbacks and slowdowns, change accelerates—technology surges ahead, sometimes then slowing down again before surging ahead once more. We can expect to see much more change between 2010 and 2060 than we saw between 1960 and 2010. To the extent that this change comes about without centralized control and direction, it’s a scenario of Emergent Surge. To the extent that a central plan pushes and shapes technological progress, it’s a Forced Surge.
Level 5: Super-exponentially Progressive (Singularity scenarios). The Singularity scenarios arise when we project the discontinuous arrival of superintelligence, or otherwise expect double-exponential progress. Yudkowsky’s “Friendly AI” is a clear instance of the Humanity-Positive Singularity, though not the only possible instance. There are other ways of distinguishing various Singularity scenarios. One way (going back to Vinge) is in terms of how the Singularity comes about: it might be due to the Internet “waking up”, augmentation of human biologically-based intelligence, human-technology integration, or the emergence of a singular AI before humans exceed the historical limits on their intellectual capabilities.
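To make the arithmetic behind Levels 3 through 5 concrete, here is a toy sketch in Python. The function names and growth rates are my own illustrative assumptions, chosen purely to show the shapes of the curves, not claims about any real technology:

```python
# Toy comparison of the growth patterns behind Levels 3, 4, and 5.
# All rates here are illustrative assumptions, not empirical claims.

def linear(t, rate=1.0):
    """Level 3 (Strolling/Marching): a fixed amount of progress per year."""
    return 1.0 + rate * t

def exponential(t, rate=0.10):
    """Level 4 (Surge): a fixed percentage of progress per year."""
    return (1.0 + rate) ** t

def double_exponential(t, rate=0.10):
    """Level 5 (Singularity): the exponent itself grows exponentially,
    as in claims of double-exponential progress."""
    return 2.0 ** ((1.0 + rate) ** t)

for years in (10, 25, 50):
    print(f"after {years:2d} years: "
          f"linear={linear(years):6.1f}  "
          f"exponential={exponential(years):6.1f}  "
          f"double-exponential={double_exponential(years):.3g}")
```

Even when the exponential and double-exponential curves share the same headline rate, the Level 5 trajectory leaves the others behind within a few decades; that runaway divergence is the quantitative difference between a Surge and a Singularity.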
By defining and naming these scenarios, I hope to make it easier to discuss a fuller range of possibilities. We might use these scenarios (suitably fleshed out) as a starting point to consider various questions, such as: Is continued technological progress inevitable? Could we plausibly envision civilizations where progress halts or even reverses? What factors, causes, and decisions could lead to halting/stagnation or regression?
My own main interest, for now, lies in considering the differences between the Surge and the Singularity scenarios. They may not appear to be very different, but I believe there is quite a difference in the underlying views of economics and of social, psychological, and organizational factors. I will explore the Surge vs. Singularity issue more in a later post, and in the sixth chapter of my forthcoming book, The Proactionary Principle. I will consider, for instance, factors favoring a Surge rather than a Singularity, such as adoption rates, organizational inertia, cognitive biases, failure to achieve super-intelligent AI, sunk costs, activist opposition, and regulation and bureaucratically-imposed costs—nuclear power in the USA being a good example.
Great analysis, Max, and I also look forward to a Surge.
I never believed too much in a hard-takeoff, exponential Singularity. As an engineer, I know that the real world is not simple, clear, and pristine like a mathematical equation, but often messy, chaotic, and greasy. So while I do expect an overall exponential trend, I expect one with roadblocks, false starts, backsteps, etc., which will result in a fractal curve rising at a rate somewhere between linear and exponential.
Let's hope this Surge will be spontaneously emergent and not forced.
I always found Ray's predictions way too optimistic and "clean". But I also find them refreshing against the often overcautious and defeatist attitude of some modern "transhumanists" (quotes intentional).
Thanks for linking to my blog (formerly Transumanar), which now has a new title and URL:
cosmi2le
http://cosmi2le.com/
Please update the link. Best, G.
Nice post. My primary concern is that the world runs a significant risk of undergoing a semi-permanent Full Stop, with a hint of Strolling.
This in turn describes very well what I call a Brave New World scenario, where technology - actually, a slightly more advanced technology than we have - is not relinquished but, on the contrary, employed to avoid any further progress, at least of the sort that can be defined as a "paradigm shift".
The underlying factors would both be cultural:
- the extinction of the "fundamentals" that, in certain eras, and especially in the recent past (1860-1960), allowed dramatic changes and accelerations in the rate of progress;
- the current obsession with security / safety / sustainability / "end of history", etc.
Of course, I may be wrong.
But, be that as it may, the very mission of transhumanism, as I see it, is to fight this trend.
The current obsession with security / safety / sustainability / "end of history", etc. is a typical disease of our old Western society. Most old persons grow over-worried and over-scared, and tend to avoid all risks. I am afraid we, as a society, are succumbing to the same syndrome.
A young friend who just spent some months in Africa told me a wonderful thing: that instead of fearing change, people there are hungry for it. Probably overstated, but a positive indication nonetheless.
Young cultures as opposed to old civilisations? :-)
But it is true, in some sense, that by now we are living on an overpopulated, but above all entirely explored and conquered, Earth.
This is why, if we really prize some features of past "dynamism", there is no alternative, IMHO, to some sort of expansion into outer space and, to a much lesser degree, into the oceanic depths.
I am a big supporter of expansion into outer space -- going back to space to stay -- for the reason you mention: to recover our past "dynamism".
At the same time I don't think space "belongs" to us flesh-and-blood humans 1.0: it is our robotic and uploaded mind children who will move beyond the solar system and spread to the cosmos.
But we need dynamism now to enable this future, and this is why space is so important: for the mental health of our species. We need to see people walking on the Moon again, and then Mars. The night sky itself, which everyone can see from everywhere, will be a constant and powerfully symbolic reminder of the cosmic future waiting for us.
"But we need dynamism now to enable this future, and this is why space is so important: for the mental health of our species."
Exactly. As in "societal projects". In any event, even in the case of the fifteenth- and sixteenth-century explorations, (most of) the inhabitants of the old world remained where they were, and this is of course very likely to be the case when we are speaking of deep gravitational wells, not to mention the other issues involved...
Nice exposé, but I wonder about the naming of the last two scenarios, for the Singularity.
It seems to me that a voluntarist singularity is not at all guaranteed to be positive (as in unfriendly AI), and an authoritarian one is not necessarily negative either (however unlikely you think that may be; an authoritarian might, for example, enforce the emergence of a friendly AI - e.g. in a Yudkowsky lab-dictatorship).
Hey Max
Just wanted to pass on my thanks. You're doing a great job for us Transhumanists/Immortalists.
I know it was a while ago, but I also wanted to say 'nice work' on Technocalyps: http://video.google.com/videoplay?docid=-7141762977713668208
Keep on rockin', mate... I'll be joining the fight soon. Just need a few more years to reach financial independence... after that, I'll be contributing to the movement.
Cheers
Dan