Sam Altman, aspiring savior
Plus: How the West screwed Ukraine, Israel-Palestine roils White House, the Sports Illustrated AI scandal, US and Russia unite to save killer robots, and more!
I think I’ve found the Sam Altman Rosetta Stone—the key to resolving a momentous paradox that was highlighted by the recent OpenAI drama (which, in case you somehow missed it, began with OpenAI’s board ousting Altman as CEO and ended with Altman, in a masterful jiu-jitsu move aided by OpenAI investor Microsoft, ousting most of the board and restoring himself to the throne).
Here are the two parts of the paradox:
On the one hand: According to mainstream conjecture, the board’s ousting of Altman was motivated partly by concerns that he was reckless—that he had put OpenAI on the fast track, pursuing a Zuckerbergesque “move fast and break things” course, rather than proceeding cautiously, with due respect for the risks posed by AI.
On the other hand: Altman is on record as being deeply concerned about those risks, including the possibility of a superintelligent AI someday choosing to extinguish humankind. And that record goes back many years, to before his and OpenAI’s prominence gave him reason to fake such concern. In 2015, 10 months before OpenAI came into existence, Altman began a blog post about “superhuman machine intelligence” (which I gather is an increment beyond “artificial general intelligence,” or AGI) like this: “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”
Of course, it’s possible that mainstream conjecture about the OpenAI board’s motivation is wrong; some observers think the AI safety question didn’t play a big role. But for present purposes that doesn’t matter, because a version of the Sam Altman paradox was visible before the recent drama and remains visible today. Whether or not the board was worried about how fast Altman was moving, the fact is that he was, and is, moving fast. For a guy who believes that advanced AI could be an existential threat to our species, he’s working pretty hard to make AI very advanced very soon.
And I don’t just mean he’s trying to keep ChatGPT ahead of rival large language models (though he is). He’s also accelerating AI progress in subtler but perhaps more important ways. OpenAI facilitates the creation of novel GPT variants via its API (Application Programming Interface)—in fact, via two kinds of APIs, one that lets developers build and market GPT plug-ins and one that lets big companies adapt GPT to their needs. Plus, Altman just unveiled—in a performance that drew comparisons to Steve Jobs’s showmanship—a kind of DIY bot designer, which lets individual users create versions of ChatGPT tailored to their specific needs.
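To make the “DIY bot” idea concrete: under the hood, a tailored version of ChatGPT is, at its simplest, a fixed system instruction prepended to every conversation sent to a generic chat-completion endpoint. Here’s a minimal sketch of that idea—the field names (`model`, `messages`, `role`, `content`) follow OpenAI’s chat format, but the model name and the example bot are purely illustrative:

```python
def build_chat_request(system_instruction: str, user_message: str,
                       model: str = "gpt-4") -> dict:
    """Assemble the JSON-serializable body for one chat-completion call.

    The "tailoring" lives entirely in the system message: the same
    underlying model behaves differently depending on the instruction
    it is given at the top of every conversation.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_instruction},
            {"role": "user", "content": user_message},
        ],
    }

# A hypothetical "DIY bot" defined by nothing more than a fixed persona:
tech_support_bot = build_chat_request(
    "You are a patient tech-support assistant for small businesses.",
    "My printer won't connect to Wi-Fi.",
)
```

The point being that the barrier to spinning up a new AI “variant” is now roughly this low—which is why each new distribution channel plausibly accelerates the overall ecosystem.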
And of course, the more useful AI variants there are, the more capital will be poured into the development of more powerful AIs and their variants. And so on. Hence the Sam Altman paradox: If you’re worried that technological evolution is already pushing us rapidly toward possible oblivion, accelerating the rate of mutation doesn’t seem like a great idea.
Which brings us to the paradox-resolving Rosetta Stone, the thing that may explain how Altman could be deeply and sincerely worried about the power of superintelligence yet deeply and sincerely committed to hastening the advent of superintelligence.
In the course of that 2015 blog post, Altman wrote this about government regulation of AI: “In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.”
In other words: Altman thinks that the headlong pursuit of artificial superintelligence can work out fine so long as the company that wins the race is run by a “good guy” and not a “bad guy.” And I’m guessing that, like the rest of humankind (including the bad guys), he considers himself a good guy.
And he probably is! (As humans go, I mean.) But I still have a problem with his logic.
There are basically two kinds of concerns about AI risk.
First are the relatively near term and concrete risks, such as job displacement and various destructive uses of AI by bad actors (ranging from disinformation to hacking to bioweapons creation to cult recruitment via mass-deployed charismatic chatbots). In principle, we can adapt to these challenges, but the faster they come at us, the more damage they’ll do before the adaptation—and the more likely they’ll be to so destabilize the world that out-and-out chaos ensues. I personally think that this category of risks is a strong argument for slowing AI evolution down (leaving aside for now the admittedly tough question of how practical such a thing is).
The second kind of concern is Altman’s 2015 concern—that AI evolves into a superintelligence that could survive, and even improve itself, without human intervention and so might choose to discard the human species. This strikes many people as the stuff of science fiction, but Silicon Valley seems to feature a disproportionate number of science fiction fans, and imaginative extrapolation into the future seems to be a common pastime there. In that 2015 blog post, Altman said that one of the main upsides of building a superintelligence was the possibility that it “could help us figure out how to upload ourselves, and we could live forever in computers.”
Now, you might think that this second concern, about a humankind-squashing superbrain, is, like the first concern, a reason to slow things down. You’re not alone. Eliezer Yudkowsky, the dean of sci-fi AI doomers, wants to reduce the rate of AI progress to roughly zero and keep it there. He thinks there’s basically no chance of our species escaping a horrible fate once AI reaches the superintelligence threshold.
But another school of thought holds that there must be a way to keep superintelligence aligned with human interests, and if we can just discover the formula for alignment, things will be fine. Altman seems to be in that school. And so far as I know he’s putting his money where his mouth is; OpenAI does reportedly devote a fair chunk of its budget to alignment research. Which is consistent with my theory about Altman’s self-conception: He thinks he’s the guy who can save the planet by winning the race to superintelligence and thus ensuring that it is aligned with human interests.
Am I saying Sam Altman has a messiah complex? Well, not in the clinical-psychology sense of the term—the sense in which Elon Musk seems to have one. And, actually, I’m not sure Altman would object to my saying he has a non-clinical version of a messiah complex. In a 2019 blog post called “How To Be Successful,” he laid out 13 tips on how to achieve “outlier success”—how to “make a huge amount of money or to create something important.” The second tip was, “Have almost too much self-belief.” He elaborated: “Self-belief is immensely powerful. The most successful people I know believe in themselves almost to the point of delusion.”
One problem with believing in yourself that much is that you could fail to ask yourself such questions as: “Wait, could my sprint toward my appointment with destiny have some bad side effects along the way? And if it turns out I’m not the guy who can save the planet, would that sprint therefore seem regrettable in retrospect?”
Of course, it’s not within the power of any one AI company to greatly accelerate, or greatly slow, the pace of AI’s evolution. And Altman, to his credit, has at least noted that AI calls not just for regulation but for international regulation. Still, only a handful of companies have superpowerful large language models, so you can imagine a few basic norms of self-restraint winning the assent of all key players. And I’m not aware of Altman getting very aggressive on the norm advocacy front.
Meanwhile, he does seem to be reveling in his role as the world’s AI alpha, the man who knows things about the future that aren’t visible to the rest of us. The day before the OpenAI board gave him his walking papers, he had said at a high-profile event that by the end of next year, “the model capability will have taken such a leap forward that no one expected.” In fact, “just in the last couple of weeks, I have gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward.”
At the risk of sounding like an intellectual snob: That’s an ignorant use of the term “veil of ignorance.” The term was coined by John Rawls in the course of outlining one of the most influential thought experiments in the history of political philosophy. Suppose, said Rawls, you were designing a society, and you knew you were going to live in it, but you didn’t know which role in it you would occupy. You couldn’t assume you’d get to be John Rawls, famous political philosopher; for all you know you’d be the guy who empties the trash can in Rawls’s office. Doing this—designing a society from behind a “veil of ignorance”—is the way to design a just society, Rawls said.
So if Sam Altman were making decisions about the future course of AI, and were working from behind a veil of ignorance, he couldn’t assume he’d be the world-famous CEO of OpenAI—the guy who commands the world’s attention, gets fresh admiration with each technical breakthrough, and even stands a chance of saving the world. He could just as well be the guy who lost his tech support job to a bespoke OpenAI bot.
I think it would be great if all the heads of all the big AI companies deployed the veil of ignorance whenever they think about the future they want to foster. But I’m sure that’s hard. And I doubt that believing in yourself “almost to the point of delusion” makes it easier.
—RW (Note: I discussed the OpenAI drama, and the question of how threatening artificial superintelligence is or isn’t, with tech writer Timothy B. Lee on the Nonzero Podcast this week.)
The New York Times reports on a push by some nations for the international regulation of killer robots—and on resistance to this push by bigger and more powerful nations. The US and Russia say international restrictions on these autonomous weapons, which make life-or-death decisions on the battlefield, aren’t needed. And China supports only minimal rules.
The piece quotes US Air Force Secretary Frank Kendall explaining why it wouldn’t make sense for America to unilaterally ban the weapons: “I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
He’s right. That’s why regulation has to be at the international level if it’s going to work.
If Kendall’s name sounds familiar, that may be because it appeared in NZN in 2021, after President Biden nominated him for his present position. The background information we provided then is worth repeating now that he is influentially advocating the untrammeled development of a whole new species of weaponry:
After retiring from military service in 1994, he worked as an executive at Raytheon, a major weapons maker, and later became a consultant for various defense companies. He came back into government as head of acquisitions at the Defense Department under President Obama, then, when Trump took office, quickly began working for a weapons manufacturer that had received a big Pentagon contract during his tenure.
The contract went to Northrop Grumman for production of the B-21 bomber. Since he left office, the firm has paid him over $700,000 in consulting fees, according to Eli Clifton of the Quincy Institute. Kendall also spent some of the Trump years working with Leidos, a major defense contractor that won a $4 billion IT contract with the Pentagon during Kendall’s time in government. He holds at least $500,000 in Leidos stock and has earned $125,000 per year for his work on the firm’s board, according to Clifton.
Did western nations obstruct a peace deal in the early months of the Russia-Ukraine war? Evidence to that effect keeps piling up.
This week’s data point: a television interview with David Arakhamia, leader of President Zelensky’s “Servant of the People” party, who was present at peace talks in Istanbul in late March of 2022, a month after Russia’s invasion. According to Arakhamia, Russia offered to end the war if Kyiv committed not to join NATO.
Arakhamia provided multiple reasons why the Ukrainian government rejected the draft agreement. One of them was its distrust of Moscow. Another one: In early April, Boris Johnson, then Britain’s prime minister, flew to Kyiv and told Ukrainian leaders to keep fighting rather than sign a peace deal.
Ukrainian media previously reported that Johnson had discouraged peace talks. Since then, former Israeli Prime Minister Naftali Bennett and former Chancellor of Germany Gerhard Schröder have said that objections from the US-led West derailed negotiations.
This week, former Zelensky adviser Oleksii Arestovych reflected on the missed opportunity: “Our war could have ended with the Istanbul agreements, and several hundreds of thousands of people would still be alive,” he wrote on the social media platform Telegram.
Four Israel-Palestine updates:
1. As the week-long truce in Gaza came to an end Thursday night, the Wall Street Journal reported that, when the war there is finally over, that won’t really be the end of the war. “Israel’s intelligence services are preparing to kill Hamas leaders around the world when the nation’s war in the Gaza Strip winds down, setting the stage for a yearslong campaign to hunt down militants responsible for the Oct. 7 massacres, Israeli officials said.”