AI’s Oppenheimer moment
Plus: Reining in facial recognition in China, good ocean plastic news, space junk leaderboard, Europe’s climate backslide, NYT gets NZN award, paid subscriber perks, and more!
Christopher Nolan, director of the movie Oppenheimer, “sees parallels between the early nuclear age and the dawn of the AI era,” reports Christopher Grimes, who recently interviewed Nolan for the Financial Times. Indeed, writes Nathan Gardels in Noema, “some are already comparing OpenAI’s Sam Altman to the father of the atomic bomb.”
Here’s how Grimes summarizes Nolan’s take on the AI-nukes analogy: “Oppenheimer’s calls for international nuclear arms control faced stiff resistance from the US and other nations, which feared losing sovereignty. With AI, Nolan says, there are similar questions.”
Yes, there are similarities between the challenges of regulating artificial intelligence and regulating nuclear weapons—and the similarity Nolan highlights is an important one. But there are also differences, and they’re worth exploring, because they underscore a fact that should be influencing US foreign policy thinking but isn’t: There’s a sense in which AI is scarier than nuclear weapons.
I don’t mean that AI has more potential destructive power than the planet’s 13,000 nuclear warheads (though I’m not ready to completely dismiss the claim of extreme doomers that AI is an extinction-level risk). I mean that AI is much harder to control than nuclear weapons—and that this unruliness, combined with the amount of destructive potential AI does have, gives it a uniquely disturbing risk profile.
Before elaborating on the similarities and differences between nukes and AI, let me offer this plot spoiler for those who would just as soon cut to the chase: Whereas it was possible to keep nuclear war at bay—and even negotiate arms control pacts—amid a bitterly fought Cold War, keeping AI under control will require more harmonious relations among great powers than a Cold War permits.
Now for those similarities and differences:
With both nuclear weapons and AI, international regulation is warranted by the prospect of bad outcomes that transcend borders. All-out nuclear war would have been very bad for both the Soviet Union and the US—and the more warheads involved, the worse it would have been. That’s why it was in both nations’ interests to negotiate arms reductions (which have now cut the world’s nuclear stockpile from 64,000 warheads in 1985 to one fifth of that). And that’s why the two nations also agreed on various measures—such as the Anti-Ballistic Missile Treaty of 1972—that made it harder for either nation to imagine a nuclear war being “winnable” via a devastating first strike.
In the case of AI, the bad outcomes that could transcend borders are less obviously devastating but are more complex; they come in many different forms. One example would be some malicious hacker building a bot that jumps from computer to computer, getting more and more clever and destructive as it travels, and ultimately wreaking global havoc. It may well be possible someday to issue instructions as general as, “Try to take down the Internet,” or “Figure out a way to cripple as many communications satellites as possible,” and have an AI pursue those goals with focus, intelligence, and creativity.
There are lots of other examples—some involving bad actors, some involving accidents, some involving rogue AIs, and some, no doubt, that we don’t yet foresee. But many of them have this key property of potentially threatening any number of nations and thus giving all nations an interest in making sure that AI is governed effectively throughout Planet Earth.
When it comes to the challenge of doing this—the challenge of international regulation—the basic difference between AI and nukes persists: things are more complicated in the case of AI.
The great thing about a nuclear weapons program (from a regulatory perspective, I mean) is that it’s pretty big and conspicuous and involves exotic materials and technologies. So during the first Cold War, (1) there wasn’t much chance that non-government actors in the US or USSR were building a nuclear arsenal; and (2) verifying compliance with agreements between the US and USSR was, if sometimes a bit challenging, doable.
With AI, in contrast, most of the research, development, and deployment is being done by non-government actors—big companies like OpenAI and Google and Meta and Baidu, but also smaller and relatively obscure companies and research teams (some of which are taking advantage of open source large language models, like the one Meta recently released). So even assuming nations could reach agreement on international regulations—which is hard for many reasons, including the rapid evolution of the technology being regulated—enforcing them would be much harder than with nukes.
In a piece I wrote for the Washington Post two months ago (a piece that also elaborates on the harms unregulated AI could do), I summarized that challenge like this:
The effective international regulation of AI will call for fine-grained and intrusive monitoring… Ideally, there will be, in addition, the kind of organic transparency afforded by an atmosphere of economic engagement, cultural exchange and scientific collaboration.
I’d put special emphasis on the “organic transparency” part. It will be very hard to enforce a global system of AI laws and norms if there isn’t a robust informal sharing of information across borders. And that kind of sharing depends on benign transborder relationships—collegial or collaborative or just friendly ones—of the sort that strong economic, cultural, and scientific engagement brings.
Of course, in the current environment, a common reaction to such a prospect would be: “But that kind of engagement risks giving away our technical edge—watching the Chinese put our ideas to commercial and military use.”
Well, yes, cross-border engagement often brings that possibility. But that possibility is only considered prohibitive if your relations with the country in question are really bad—if the two nations actively wish each other harm. And it’s possible in principle to have relations with any given country that aren’t really bad. In fact, only a couple of decades ago, our relations with China and Russia were pretty good.
Returning to that world may seem like a heavy lift; it would entail real costs and risks. But the more subtle and complex and scary the transborder perils we face—that is, the graver and more challenging the threats to national security that can only be addressed through international cooperation—the more justified those costs and risks are.
One general tendency of technological evolution is to create more and more of those kinds of threats—threats that firmly align national self-interest with international regulation. Consider genetic engineering: You’d think that all the discussion of the Wuhan lab leak scenario—whether or not the leak actually happened—would get people talking vigorously about the international regulation of biotechnology.
But no. With biotech, AI, and various other technologies that pose serious transborder threats, it may take a lot of harrowing sequels before the foreign policy establishment gets the picture.
Major US media outlets have a spotty record when it comes to highlighting America’s role in elevating international tensions. So the New York Times and the Wall Street Journal deserve a pat on the back for yesterday’s homepage headlines about the Biden administration’s latest tightening of the screws on China’s tech sector.
And the New York Times gets more than a pat on the back. It gets NZN’s coveted cognitive empathy award—given periodically for outstanding achievement in understanding the perspective of a tribe other than your own. The piece accompanying the Times headline explicitly describes the disparity between how Washington and Beijing view this latest development: “Administration officials stressed that the move was tailored to guard national security, but China is likely to see it as part of a wider campaign to contain its rise.”
Attention NZN members! This week we bring you three paid subscriber perks:
An audio version of Bob’s essay “AI and the Noosphere, Part II.” The essay, published in written form last month, is the second installment in a series that views artificial intelligence from a cosmic perspective.
The Overtime segment of Bob’s conversation with political scientist Joshua Landis about the Middle East and US foreign policy. (If you’re a paid subscriber and you don’t yet have the special podcast feed that automatically gives you the full version of all podcasts, complete with Overtime, click the above link, then click “Listen on,” and follow the instructions.)
The latest edition of the Parrot Room, Bob’s after-hours conversation with arch-frenemy Mickey Kaus.
Over the past two decades, two kinds of things have multiplied profusely in Earth’s orbit: (1) satellites, which have grown in number from around 500 to more than 7,000; and (2) things that can collide with satellites—discarded rocket parts, defunct satellites, flotsam from past collisions, and so on. All told, there are around 14,000 pieces of debris big enough to track.
Now a site called Visual Capitalist, citing data from Space-Track.org, has created a graphic depiction of debris-creation over time and has also ranked countries in terms of culpability. Russia has added the most space junk, but the US is a close second. China had to settle for a bronze medal, but it finished first in the “biggest single act of junk creation” category; in 2007 Beijing tested an anti-satellite weapon by smashing one of its own weather satellites, creating some 3,500 pieces of debris.
Here are the ten nations that have contributed the most space junk along with the amount of junk they’ve contributed: