Earthling: Will AI turn Trumpists into globalists?
Plus: Reckless Zelensky? China’s nuke buildup, plastic pollution solution, FBI’s Trump fail, this week’s NZN member benefits, and more!
Some lawmakers at this week’s Senate hearing on AI were struck by the spectacle they were witnessing: a corporate executive coming to Capitol Hill and asking for tougher regulation. “Sam Altman is night and day compared to other CEOs,” marveled Sen. Richard Blumenthal after Altman, who runs OpenAI, proposed a federal licensing requirement for companies that build powerful AIs.
Some observers offered a cynical explanation for Altman’s proposal: he wants to use regulation to fend off upstart competitors. But there are other explanations, including not-at-all-cynical ones, like caring about humankind (it’s possible!), and not-all-that-cynical ones, like sensing the tide of public opinion. A Reuters/Ipsos poll released this week found that 61 percent of Americans believe AI could threaten civilization. When most Americans consider your business model an apocalyptic peril, you need to change the story line.
In any event, from the Earthling’s point of view the prospect of federal AI regulation isn’t the most interesting thing to come out of this week’s hearing, and that 61 percent number isn’t the most interesting thing to come out of that survey. The more interesting things are, respectively:
(1) Altman proposed not just federal regulation but international regulation. “I think the US should lead here and do things first, but to be effective we do need something global,” he said. “I know it sounds naive to call for something like this, and it sounds really hard. There is precedent. We’ve done it before with the IAEA [International Atomic Energy Agency].” With Google CEO Sundar Pichai having already endorsed the idea of a treaty governing AI, the two most advanced AI companies in the world are now on the global governance bandwagon—along with lots of other people who have thought about the AI issue.
(2) Trump supporters are even more worried about AI than Americans generally. According to the Reuters/Ipsos survey, fully 70 percent of them believe AI could threaten civilization.
When you put these two things together—intense Trumpist suspicion of AI and an emerging consensus that AI can’t be effectively governed via national policy alone—you get the prospect of Trumpists embracing a new global governance initiative.
Which is a weird prospect! Extreme opposition to global governance has been a fixture of the nationalist right at least since half a century ago, when people in the John Birch Society started predicting the arrival of black helicopters sent by the United Nations to subjugate America. This ideological strand was sustained by the Christian right in the 1980s and 1990s, as reflected in televangelist Pat Robertson’s warnings about the “New World Order.” And Trump himself has picked up the torch—not to the point of scanning the skies for black helicopters, but to the point of explicitly denouncing global governance (and, while president, doing his part to weaken it).
So, yes, getting many Trumpists on the global governance bandwagon is a long shot. But it’s not an impossible shot. After all, it’s hard to find someone, regardless of ideology, who thinks AI can be effectively governed via domestic policy alone. At this week’s hearings even Lindsey Graham, in the course of a predictably skeptical allusion to global governance, acknowledged the international nature of the challenge. After Altman’s reference to the need for an international body to regulate AI, Graham said, “And you also agree that China’s doing AI research. Is that right? This world organization that doesn’t exist, maybe it will, but if you don’t do something about the China part of it, you’ll never quite get this right.”
It’s unclear what Graham’s idea of “doing something” about “the China part of it” is. Assuming (perhaps naively) that it doesn’t involve cruise missiles, it may be something like what Biden is already doing—pulling various levers to constrict China’s import of the most powerful microchips—except more so.
But Biden’s policy is no long-term solution. It’s just a long-term guarantee that China will develop an indigenous ability to make cutting-edge chips. (It’s also a way to increase the chances of war. By reducing China’s access to chips from the TSMC chip factory on Taiwan, Biden’s policy gives Beijing less reason to worry about the consequences of the factory being blown up or otherwise disabled as a result of an invasion; and more reason to want control of all the chipmaking knowledge that will remain in Taiwan even if the TSMC factory gets blown up.)
Wisely guiding the evolution of AI—which will probably mean slowing the evolution of AI—is a huge challenge. There are smart and well-informed people who think it’s basically impossible. But there are also smart and well-informed people—lots of them—who think that failing in this mission will be very bad for humankind, possibly even fatal.
So we have to try—try to forge not just sound policy at the national level but innovative policy at the international level, policy that involves extensive, even unprecedented, collaboration among nations. The recent history of global governance in realms like arms control and environmental regulation isn’t auspicious. But the biggest American political obstacle to progress in those areas—the nationalist right—may be at least a bit less of an obstacle when it comes to AI. And as all computer scientists know, sometimes a bit can make all the difference.
A common reply to people who worry about the Ukraine conflict going nuclear is that Russia has no reason to feel existentially threatened; Ukraine aspires to regain all its lost territory but not to send troops into Russia.
That claim fell into doubt this week after the Washington Post’s latest report on the “Discord leaks”—the trove of US intelligence documents that a national guardsman shared with a Discord discussion group. According to documents obtained by the Post, which summarize intercepted communications between President Zelensky and other Ukrainian officials, Zelensky suggested in late January that Ukrainian troops cross the border and “occupy unspecified Russian border cities” to “give Kyiv leverage in talks with Moscow.”
And in February, the Post says, Zelensky suggested that Ukraine blow up a pipeline that supplies NATO-member Hungary with Russian oil (in part to punish Putin-friendly Hungarian leader Viktor Orban) and use drones to attack military targets in Russia. Drone attacks of unknown origin did later hit the Kremlin in Moscow and oil facilities in western Russia.
Zelensky dismissed the latest reports as “fantasies,” but the Post said Pentagon officials didn’t dispute the authenticity of the documents it reported on.
Attention NZN members!
This week we bring you early access to a conversation with British psychologist Simon Baron-Cohen that will go public in a few weeks. The conversation is about empathy—both cognitive empathy and emotional empathy—and also about autism; Baron-Cohen did pioneering work showing that people on the autism spectrum have trouble with cognitive empathy. That finding, along with subsequent research, has lots of implications both for people on the spectrum and for neurotypical people.
You can find the conversation in your paid-subscriber podcast feed or access it here (where you can also set up a paid-subscriber feed if you don’t have one—just click “listen on,” copy the RSS link and paste it into your podcast app).
And, as usual, tonight the paid-subscriber feed will be populated by the latest edition of the Parrot Room, the ill-advisedly unconstrained conversation between Bob and Mickey Kaus that follows their (relatively) buttoned-up public podcast.
US officials are wrong about why China is beefing up its nuclear arsenal—and that could lead to trouble. That’s the upshot of an article published in the political science journal International Security.
China’s nuclear buildup could give it as many as 1,000 nuclear warheads by 2030, the Pentagon estimates (compared with 3,750 warheads in the US arsenal). Secretary of State Antony Blinken takes this to mean that “Beijing has sharply deviated from its decades-old nuclear strategy based on minimum deterrence”—that is, the strategy of having enough nukes to deter US attack.
But the study’s authors, who combed through Chinese-language materials in the course of exploring Beijing’s recent national security discourse, have a different explanation: