In a quaint Regency-era office overlooking London’s Russell Square, I cofounded a company called DeepMind with two friends, Demis Hassabis and Shane Legg, in the summer of 2010. Our goal, one that still feels as ambitious and crazy and hopeful as it did back then, was to replicate the very thing that makes us unique as a species: our intelligence.
To achieve this, we would need to create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity. Since such a system would benefit from the massively parallel processing of supercomputers and the explosion of vast new sources of data from across the open web, we knew that even modest progress toward this goal would have profound societal implications.
It certainly felt pretty far-out at the time.
But AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound.
Progress in one area of technology accelerates progress in the others, in a chaotic and cross-catalyzing process beyond anyone's direct control. It was clear that if we or others succeeded in replicating human intelligence, this wouldn't be just profitable business as usual but a seismic shift for humanity, inaugurating an era in which unprecedented opportunities would be matched by unprecedented risks. Now, alongside a host of technologies including synthetic biology, robotics, and quantum computing, a wave of fast-developing and extremely capable AI is starting to break. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.
As a builder of these technologies, I believe they can deliver an extraordinary amount of good. But without what I call containment, every other aspect of a technology, every discussion of its ethical shortcomings or of the benefits it could bring, is inconsequential. I see containment as an interlocking set of technical, social, and legal mechanisms constraining and controlling technology, working at every possible level: a means, in theory, of evading the dilemma of how we can keep control of the most powerful technologies in history. We urgently need watertight answers for how the coming wave can be controlled and contained, and for how the safeguards and affordances of the democratic nation-state, critical to managing these technologies and yet threatened by them, can be maintained. Right now no one has such a plan. That points to a future none of us wants, but it's one I fear is increasingly likely.
Given the immense ingrained incentives driving technology forward, containment does not, on the face of it, seem possible. And yet for all our sakes, containment must be possible.
It would seem that the key to containment is deft regulation at the national and supranational levels, balancing the need for progress against sensible safety constraints, spanning everything from tech giants and militaries to small university research groups and startups, all tied together in a comprehensive, enforceable framework. We've done it before, so the argument goes; look at cars, planes, and medicines. Isn't this how we manage and contain the coming wave?
If only it were that simple. Regulation is essential. But regulation alone is not enough. Governments should, on the face of it, be better primed for managing novel risks and technologies than ever before. National budgets for such things are generally at record levels. The truth, though, is that novel threats are exceptionally difficult for any government to navigate. That's not a flaw in the idea of government; it's an assessment of the scale of the challenge before us. Governments fight the last war, prepare for the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate.