Artificial intelligence researchers may wonder whether they’re in a modern-day arms race for more powerful AI systems. If so, who is it between? China and the US—or the handful of mostly US-based labs developing these systems?
It might not matter. One lesson from The Making of the Atomic Bomb is that imagined races are just as powerful a motivator as real ones. If an AI lab goes quiet, is that because it’s struggling to push the science forward, or is it a sign that something major is on the way?
When OpenAI released ChatGPT in November 2022, Google's management declared a "code red" for its AI strategy, and other labs doubled down on their efforts to bring products to the public. "The attention [OpenAI] got clearly created some level of race dynamics," says David Manheim, head of policy and research at the Association for Long Term Existence and Resilience in Israel.
More transparency between companies could help head off such dynamics. The US kept the Manhattan Project a secret from the USSR, only informing its Soviet ally of its devastating new weapon a week after the Trinity test. At the Potsdam conference on July 24, 1945, President Truman shrugged off his translator and sidled over to the Soviet premier to tell him the news. Joseph Stalin seemed unimpressed by the revelation, saying only that he hoped the US would make use of the weapon against the Japanese. In lectures he gave nearly 20 years later, Oppenheimer suggested that this was the moment the world lost the chance to avoid a deadly nuclear arms race after the war.
In July 2023, the White House secured a handful of voluntary commitments from AI labs that at least nodded toward some element of transparency. Seven AI companies, including OpenAI, Google, and Meta, agreed to have their systems tested by internal and external experts before their release and also to share information on managing AI risks with governments, civil society, and academia.
But if transparency is crucial, governments need to be specific about the kinds of dangers they're protecting against. Although the first atomic bombs were "of unusual destructive force"—to use Truman's phrase—the kind of citywide destruction they could wreak was not wholly unknown during the war. On the night of March 9–10, 1945, American bombers dropped more than 2,000 tons of incendiary bombs on Tokyo in a raid that killed more than 100,000 residents, roughly as many as died in the Hiroshima bombing. One of the main reasons Hiroshima and Nagasaki were chosen as the targets of the first atomic bombs was that they were two of the few Japanese cities that had not already been devastated by bombing raids. US generals thought it would be impossible to assess the destructive power of the new weapons if they were dropped on cities that were already gutted.
When US scientists visited Hiroshima and Nagasaki after the war, they saw that these two cities didn't look all that different from other cities that had been firebombed with more conventional weapons. "There was a general sense that, when you could fight a war with nuclear weapons, deterrence or not, you would need quite a few of them to do it right," Rhodes said recently on the podcast The Lunar Society. But the most powerful fusion weapons developed after the war were thousands of times more powerful than the fission bombs dropped on Japan. It was difficult to truly appreciate the scale of destruction stockpiled during the Cold War simply because the earlier weapons were so small by comparison.
There is an order-of-magnitude problem with AI, too. Biased algorithms and poorly implemented AI systems already threaten livelihoods and liberty today, particularly for people in marginalized communities. But the worst risks from AI lurk somewhere in the future. What is the real magnitude of risk that we're preparing for—and what can we do about it?