Sam Altman’s Big, Bad Idea
Plus: Egypt preps for exodus, Putin endorses Biden, Biden curses Bibi, AI updates, the rules-based order sham, and more!
This week Jensen Huang, co-founder and CEO of microchip maker NVIDIA, said that all nations should build their own high-powered large language models. That way, he said, they will have “sovereign AI.”
That way they will also make him even richer than he is. Training a big LLM takes tens of thousands of microchips that cost tens of thousands of dollars each. And NVIDIA, which now has the third highest market valuation in the world (behind Microsoft and Apple), dominates the AI chipmaking business.
At least for now. According to the Wall Street Journal, OpenAI CEO Sam Altman is seeking investors in “a wildly ambitious tech initiative that would boost the world’s chip-building capacity” and also boost its production of energy. (Huge amounts of power go into the training and mass use of LLMs.) One source told the Journal that Altman is trying to round up between five and seven trillion dollars—more than five percent of the world’s GDP.
So the king of AI hardware and the king of AI software agree: Planet Earth needs to devote more resources to AI than it’s already devoting.
But does it? Is accelerating the evolution of AI in the interest of the 8 billion people who aren’t Jensen Huang and aren’t Sam Altman?
AI can bring lots of wonderful things—cheaper, better medical care, leaps in economic productivity, new forms of creative expression—even, for some people, new and welcome forms of companionship. But those things tend to have a flip side; they’re ‘disruptive’ in both the good and bad senses of the term.
“Leaps in economic productivity,” for example, is often another name for “people losing their jobs.” Hence this headline in Monday’s Wall Street Journal: “AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.” Subhead: “Leaders say the fast-evolving technology means many jobs might never return.” And even if as many jobs are created as disappear, the transition will be wrenching for many workers and life-shattering for some.
So too with AIs-as-companions (which is already a thing): Yes, as with social media, we’ll eventually learn what the downside is, and presumably we’ll figure out how to handle that (even if that mission still isn’t accomplished in the case of social media). But meanwhile there will be some psychological carnage. AI companies, like social media companies, will naturally “optimize for engagement”—and we’ve seen how suboptimal that is.
And, of course, there’s the problem of AIs that, in the hands of bad actors, wreak havoc. This week researchers at the University of Illinois published a paper reporting that “LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback.” I don’t know what that means, but, given that the italics were in the original text, I’m pretty sure it’s not good.
With bad-actor AI, as with more legitimate AI that has bad side effects, we can eventually get things under control. In principle. But, as a practical matter, if lots of different AI disruptions hit us fast, a non-catastrophic transition could be hard to pull off.
Oddly, Sam Altman seems to agree with much of this analysis. In October, during an on-stage interview, he noted that, even if the age of AI brings more and better jobs to replace the old jobs, many of the people who lose the old ones will suffer. He even said, “The thing I think we do need to confront as a society is the speed at which this is going to happen.”
But note that word “confront.” Altman isn’t proposing that, faced with dangerously disruptive speed, we try to slow things down. He’s not even proposing that we not aggressively speed things up. In fact, he seems to think that, in some ways, speeding things up will solve the problem of things moving too fast. In that on-stage interview, he elaborated on how we can “confront” the speed of social transformation:
“One of the reasons that we feel so strongly about deploying this tech as we do [is that]… by putting this out in people’s hands and making this super widely available and getting billions of people to use ChatGPT, not only do people have the opportunity to think about what’s coming and participate in that conversation, but people use the tool to push the future forward.”
In short: It’s all good! But isn’t that what Silicon Valley told us last time around? Right before social media helped polarize our politics and spawn pathological subcultures and make adolescence even more stressfully weird than it used to be?
The good news is that some people are thinking seriously about the challenge of governing AI. This week saw the release of a paper called “Computing Power and the Governance of Artificial Intelligence” (whose authors include AI eminence Yoshua Bengio and also—credit where due—someone who works at OpenAI). The paper’s main point is that computing power, aka “compute”—which means, roughly speaking, the high-end chips NVIDIA makes and Altman wants to start making—is a key, even the key, policy lever when it comes to governing AI.
The paper is policy-agnostic; it’s not recommending anything in particular. It just explains how such things as the compute-intensive and energy-intensive process of training big LLMs, and the trackable supply chains involved in producing high-end chips, make the future development and deployment of AI amenable to various kinds of transparency and governance. The specifics are largely left to the reader’s imagination.
So let’s dream! Suppose the world’s governments got together and decided to slightly slow the evolution of AI. They might, for example, put a steep tax on advanced microchips—and put the revenue to related uses, like studying the AI “alignment” problem or steering some of AI’s brainpower toward solving problems faced by poorer nations, problems market forces alone wouldn’t address.
A global tax on advanced microchips would probably annoy both Jensen Huang and Sam Altman, but that’s not the biggest problem. The biggest problem is the very idea of getting the world’s governments together to talk seriously about an innovative policy. It’s hard enough to get governments together to talk about ending the wars they keep getting into!
This is humankind’s current problem, and possibly its fatal problem: Our political evolution hasn’t reached the level that our current technological evolution demands. I’m pretty sure the solution to this problem isn’t the acceleration of technological evolution. —RW
Note: This isn’t the first time I’ve pondered the paradox posed by Altman’s professed concerns about AI risk and his manifest commitment to accelerating AI’s evolution. For one hypothetical resolution of the paradox, see my “Sam Altman, Aspiring Messiah.”
The Egyptian government is building a “security zone,” surrounded by concrete walls more than 15 feet high, that could hold Palestinians who are pushed into Egypt by Israel’s impending assault on Rafah. The construction project, not acknowledged by the government until it was disclosed this week by an Egyptian NGO, encompasses eight square miles.
Egyptian President Abdel Fattah el-Sisi has so far rejected calls by Israeli officials to accept displaced Gazans. Tariq Kenney-Shawa of the Palestinian Policy Network told Middle East Eye that completion of the security zone would “encourage Israel to move ahead with its ground assault on Rafah because they will read it as a green light and see it as Egypt’s acquiescence.”
Kenney-Shawa said Sisi’s name “will be forever tarnished in the eyes of Palestinians and Arabs throughout the region” because Egypt will have been “complicit in the forced displacement of Palestinians from Gaza.” Forced displacement is a war crime under the Geneva Conventions.
Rafah is the only Gazan city not yet invaded by Israel. Around a million homeless Palestinians are now clustered in and around the city, in addition to its native population of 250,000. Kenney-Shawa said Gazans “will either have to endure Israel’s advance into Rafah, which is expected to be especially brutal, or be forced across the border into the Sinai, where they may never be allowed to return and will have to live in limbo for the foreseeable future.” He said Sisi also faces a dilemma: Either “keep the borders closed and watch thousands be massacred by Israeli forces or open the borders and be complicit in the ethnic cleansing of Gaza.”
Paid subscribers have early access to Bob’s Nonzero podcast conversation with AI expert (and generative AI skeptic) Gary Marcus, which will go public sometime next week. The two debated, among other things, whether large language models truly “understand” things.
Also, this week the Nonzero podcast aired two episodes, one with AI researcher Nora Belrose about AI doomerism, AI cognition, and other AI topics; and the other with journalist Leonid Ragozin on the Ukraine war and Putin’s worldview. Each episode includes an Overtime segment for paid subscribers.
Four AI updates:
Walmart, Starbucks, Delta, and other major companies are using AI tools made by the tech startup Aware to analyze employees’ messages on workplace communications platforms, CNBC reports. In 2023, Aware’s analytics AI surveilled 6.5 billion messages, tracking fluctuating worker sentiment and flagging toxic interactions. This tool doesn’t link messages to individual employees, but another Aware tool, eDiscovery, does. Amba Kak of the AI Now Institute criticized the use of AI to spy on workers. “It results in a chilling effect on what people are saying in the workplace,” she said. “These are as much worker rights issues as they are privacy issues.”
Hackers are using AI in their cyberattacks, according to research released this week by Microsoft and OpenAI. In dual blog posts, the two companies detailed ways they’ve detected and disrupted groups they say are state-backed, such as “Crimson Sandstorm” (Iran) and “Forest Blizzard” (Russia). The groups, according to the companies’ researchers, use large language models for things like intelligence gathering, improving their code, and setting up email phishing schemes. The researchers said no truly “significant attacks” were identified. (Also this week, OpenAI was the target of protests over the recent loosening of the firm’s policy against defense department collaboration.)