Are AI luminaries too freaked out by their creation?
Plus: The passing of pax Americana; creating climate-change-resistant animals; China sanctions update: still not working; spy whale spotted!—and more.
If you feel you’ve been spending too much time reading about the perils of artificial intelligence, this week brought good news: The latest AI warning issued by tech luminaries is the most concise ever—a mere 22 words.
The statement reads, in its entirety, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
You could generate more words than that just by listing the signatories who qualify as either AI pioneers (such as Geoffrey Hinton and Yoshua Bengio) or AI corporate titans (OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, et al.), or a cross between the two (like Ilya Sutskever, co-founder of OpenAI and its chief scientist). All told, this statement has a significantly higher AI eminence quotient than the last big warning, the one that called for a six-month pause on training the biggest large language models (and a lower clown quotient, since Elon Musk’s name isn’t affixed this time).
As for the substance of the statement: “Extinction” is a strong word! In AI alarm circles, “extinction from AI” connotes scenarios where the AI seizes power, deems humans a pesky nuisance, and assigns them a fate warranted by that status. Is that really the AI peril that most needs the world’s attention right now?
This question was raised by Princeton computer scientist Arvind Narayanan and two co-authors on Substack. “Extinction from AI,” they wrote, “implies an autonomous rogue agent.” But “what about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a ‘rogue human’ with AI’s assistance.”
That’s probably true. Even an extinction-level pandemic—listed in the statement as a risk separate from AI—could result from a rogue human using AI to help engineer a new virus. And it’s easy to imagine less cinematic but collectively momentous episodes of AI-assisted wrongdoing in the near future: dispatching an army of robot hackers that disable a series of power grids or a network of satellites, or just dispatching an army of social media bots that massively trigger the opposing tribe.
So why did all these tech luminaries focus on “extinction from AI”? For one thing, that is the threat that some of them (such as Hinton) seem genuinely haunted by. Also, the earlier statement that some of them signed—the six-month pause statement—did include less exotic scenarios (“Should we let machines flood our information channels with propaganda and untruth?”… “Should we automate away all the jobs, including the fulfilling ones?”) and failed to galvanize much political action. So maybe they figured they should turn up the volume this time.
Here’s a deeply challenging two-part truth about artificial intelligence: (1) Even if AI doesn’t constitute an extinction-level risk, or even a risk very close to extinction-level, it is definitely a very big risk, when you add up all the bad consequences it could have. (2) AI is an extremely hard technology to regulate, a technology whose effective governance requires not just very creative new national laws, but very creative new international laws. A regulatory scheme that significantly constrains AI’s downside without unduly constraining its upside would be a big ask even if the world’s systems of national governance were working well. Speaking as an American, I can confidently say that at least one of them isn’t.
So I don’t blame the signatories of the latest AI warning for wanting to grab people and shake them by the shoulders.
Max Tegmark, a professor at MIT who helped organize the six-month pause proposal, has compared the AI challenge to an asteroid hurtling toward Planet Earth—a textbook example of something that gives nations cause to focus on international cooperation at the expense of policy priorities that only yesterday seemed essential.
Tegmark seems to take the rogue AI threat seriously, but you don’t need to share that concern to embrace something like the asteroid metaphor. Even the more mundane disturbances that AI will bring could add up to a kind of epic meteorological event—a storm that hits the planet and leaves no nation untouched, roiling the social fabric and shaking the foundations of major institutions.
That may not be an existential threat, but it’s enough to make you want to press pause.
Attention NZN members! This week we bring you two paid subscriber perks:
1. The latest edition of “Earthling Unplugged,” Bob’s biweekly (more or less) conversation with NZN staffer Andrew Day about all the stories from the latest Earthling—plus some stories that didn’t make the cut but are worth talking about. You can access the conversation here or look for it (and this week’s other perk) in your paid-subscriber podcast feed.
2. A special edition of the Parrot Room, the after-hours weekly conversation that Bob normally has with Mickey Kaus. Mickey is AWOL this week, so Bob has arranged to talk with two old friends and colleagues from his days working at a daily newspaper, back before the internet was a thing. They’ll discuss how media has changed since then and various other topics, including (if they stick to the script) what a great guy Bob was and is.
And by the way, since Mickey is also unavailable for the weekly public Bob-and-Mickey podcast, we scoured the media landscape in search of someone who can fill Mickey’s shoes. We succeeded: The guest is the even-more-famous-than-Mickey New York Times columnist Thomas Friedman. He and Bob discuss Israel-Palestine, Russia-Ukraine, the Kosovo crisis, and the world’s descent into Cold War II.
Good news! In a recent speech, a veteran member of the US foreign policy establishment displayed levels of cognitive empathy not normally seen in those quarters. Fiona Hill, by looking at things from the vantage point of non-western nations, reframed the Ukraine War in a way that will usefully unsettle some of her colleagues.
Some see the war as a simple war of self-defense, others as a battle between democracy and autocracy, and others as a proxy war between Russia and the US that the US secretly welcomed. Hill, a national security official under Presidents Bush, Obama, and Trump, suggests another framing: “In the current geopolitical arena,” she says, the war is “a proxy for a rebellion by Russia and the ‘Rest’ against the United States”—a rebellion against American hegemony.
In Hill’s telling, decades of American militarism have left people around the world burned out on US leadership. And America’s armed interventions—especially the Iraq war—have eroded norms against transborder aggression that might otherwise have strengthened global opposition to Putin’s invasion. The result is a climate in which many nations, while not endorsing the invasion, nonetheless join Russia and China in hoping to see the US cut down to size.
Hill calls for a “diplomatic surge” to win non-westerners over and help end the Ukraine war. But she concedes this will be hard. India, Singapore, the Scandinavian nations, and a few others might have the goodwill needed to lead a push for peace—but America’s geopolitical capital seems to be running out. The war in Ukraine, she writes, “is perhaps the event that makes the passing of pax Americana apparent to everyone.”
It’s not every week that NZN chooses a Cold War II headline of the week. Then again, it’s not every week that brings us a headline like this: