In 2000, Joy published an essay in Wired magazine which has become more and more influential over the years: Why the Future Doesn’t Need Us. I read this years ago (here’s my blog post about it on Gadgetopia, 13 years ago), but just re-read it for a new perspective.
It’s depressing. Still as depressing as it was a decade-and-a-half ago.
Joy explains that he’s afraid of the future. We’re doing things with technology that might get out of control. Specifically, he discusses a trio of technologies he calls GNR:
The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.
This is more dangerous than the technological basis of the Cold War, NBC (nuclear, biological, and chemical) weapons, because of the threat of self-replication:
Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once—but one bot can become many, and quickly get out of control.
He’s not optimistic:
But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable.
Not at all:
This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself—as well as to vast numbers of others.
He ends on a positive note that we can control these things, but only through deliberate action. We need to police GNR technology at the same level as we police NBC technology. But in the 15 years since he wrote this, our ability to reduce nuclear threats to the world, or even to compel countries to be transparent about them, has proven limited at best.
In fact, it seems the publication of the essay itself was designed to launch this discussion:
My immediate hope is to participate in a much larger discussion of the issues raised here, with people from many different backgrounds, in settings not predisposed to fear or favor technology for its own sake.
His closing sentiment is summed up with a callback to the discussion of nuclear war:
We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.