The jobocalypse is near
Plus: Modi joins Obama and Trump in assassins’ club. China dove throws in towel. A climate solution that’s worse than the disease? And tons of AI news!
Sorry about being a day late with the Earthling! I was traveling all week, and on Thursday and Friday I was at one of those conferences where it’s hard to play hooky without getting caught. (Plus, it was a good conference, so I didn’t want to play hooky.) Next week, barring a calamity of greater-than-average magnitude, we’ll be back on the Friday schedule.
There are two basic schools of thought about the impact artificial intelligence will have on the job market:
(1) Though AI will do lots of things currently done by human workers, it will also create lots of new things for human workers to do—so there will be no net job loss and probably a net job gain.
(2) Though in the past new technologies have often brought net job gains, this time is different; the jobocalypse is near.
There are reputable economists on the optimistic side of this question, but last week Business Insider brought some unsettling news: Among the pessimists is “perhaps the world's leading expert on technology's effects on the economy.”
The good news is that this economist—Daron Acemoglu of MIT—thinks that averting the jobocalypse is possible if we get proactive. But we’d better hurry. "This is the last opportunity for us to wake up," he told Aki Ito, author of the Business Insider piece.
After studying centuries’ worth of economic data, Acemoglu has concluded that some technological revolutions create more jobs than they take and have a positive effect on wages, but some technological revolutions don’t. And he finds the early evidence about the AI revolution worrisome.
That’s not shocking when you think about it. After all, large language models (which aren’t the only kind of “generative” AI, let alone the only kind of AI) seem poised to take lots of different kinds of jobs: customer service, journalism, tutoring, and on and on. And the kinds of jobs that would seem most likely to be created by the AI wave—jobs in computer programming—turn out to be threatened by the LLMs, too! (And, no, “prompt engineer” isn’t likely to be a service that’s enduringly in demand.)
Of course, there’s the techno-optimist’s standard reply: The LLMs are just a tool that helps programmers (and journalists and so on) become “more productive.” True. But increased productivity is ultimately a euphemism for job loss unless the overall demand for the service in question grows. And, though the AI revolution may well boost the demand for computer programming, the boost may not be big enough to compensate for the automation of more and more kinds of programming.
Besides: Most people don’t have programming skills, and many people aren’t equipped to acquire those skills—especially the highest-end programming skills, which will presumably be the kind most resistant to automation.
I’m sure the picture is more complicated than I’m making it sound, and there are more rays of hope than I’m acknowledging. No doubt lots of people will be able to find some kind of work at some kind of wage. Still, Acemoglu’s current guess is that "a small number of people are going to be on top—they're going to design and use those technologies—and a very large number of people will only have marginal jobs, or not very meaningful jobs."
Again, Acemoglu can imagine a brighter future—but only if, as Ito puts it, “workers, policymakers, researchers, and maybe even a few high-minded tech moguls make it so.”
By the way, the term “jobocalypse” seems to have first gained currency a decade ago via a book of that title about the impact of robots on employment. The result of the robot revolution so far seems less than jobocalyptic, but Acemoglu has found evidence that robots have displaced lots of workers and ultimately depressed working-class wages.
If you want to see an argument that there’s a lot of human work that robots won’t master anytime soon, check out Timothy B. Lee’s piece in his Understanding AI newsletter from a few months ago (or watch me discuss the issue with him on the Nonzero podcast here or here).
Lee’s argument is persuasive insofar as it goes, but it only goes so far. After all, the automation of manual labor isn’t the big looming threat posed by the current wave of generative AI; the automation of mental labor is. And, though Lee thinks “there will continue to be plenty of work for human beings to do,” he’s making no guarantee about wages, and he acknowledges that the overall economic impact of AI may well be “even bigger than [the impact of] the Internet.”
One way or another, the effect of AI on the job market is going to be very “disruptive,” as they like to say in Silicon Valley. Even in the best-case scenario—where displaced workers find new jobs without big wage losses—displacement will be no picnic. And the displacement could be so widespread, and unfold so rapidly, as to be socially destabilizing, especially when combined with all the other impacts AI will have. I hope some congressional staffers are in touch with Acemoglu about policies that might help.
This week Washington Post reporter Greg Miller wrote the following about Canada’s allegation that the Indian government was behind the assassination of a Canadian Sikh on Canadian soil:
If confirmed, India would join Russia, Saudi Arabia, Iran and other countries credibly accused of plotting lethal attacks overseas against perceived adversaries, including their own citizens, in recent years, according to Western security officials and experts.
Also on that list, Miller might have added, is the United States. In 2011, President Obama ordered the assassination in Yemen of Anwar al-Awlaki, an American citizen (notwithstanding the US constitution’s reference to “due process of law”). And in 2020, President Trump ordered the assassination in Iraq of Qasem Soleimani, the commander of the Quds Force of Iran’s Revolutionary Guard.
To Miller’s credit, he does get around to mentioning the Soleimani and Awlaki killings. But that comes in the 21st paragraph—19 paragraphs after the list of outlaw nations from which the US was somehow omitted. Also to his credit, Miller writes this:
A former senior US intelligence official said: “This is [Indian Prime Minister] Modi looking at the world and saying to himself, ‘The United States conducts targeted killings outside of war zones. The Israelis do it. The Saudis do it. The Russians do it. Why not us?’ And none of the [nations] we just mentioned pay much of a price.”
But that paragraph also comes late in the piece—long after Miller has quoted former State Department official Daniel Benjamin saying gravely, “What concerns me is the slide of more and more governments into committing violence extraterritorially.”
Benjamin was the State Department’s counterterrorism coordinator when Obama decided to kill Awlaki on the grounds that his rhetoric could inspire terrorism in the US—a rationale strikingly like the one thought to have motivated the Indian government’s killing of Sikh activist Hardeep Singh Nijjar, who supported a separatist movement in India’s Punjab region.
But this bullet point on Benjamin’s résumé doesn’t keep him from now lamenting the “rise of states that are prepared to use violence, take chances and violate norms.”
Nowhere is irony deader than in the American foreign policy establishment.
5 AI updates:
ChatGPT is getting more animated! OpenAI announced a version of the chatbot (available only to paying customers) that will not only accept spoken input but respond in kind. And, in addition to hearing and speech, the chatbot is acquiring vision. If people show it a picture of the inside of their refrigerator, “the chatbot can give them a list of dishes they could cook with the ingredients they have,” reports the New York Times. (The hardworking team at the Nonzero Podcast wishes ChatGPT had gotten the gift of speech earlier. In March, we aired a conversation in which it evinced a capacity for very subtle cognitive empathy (aka perspective taking). Rendering that conversation in audio form required running written answers through text-to-speech software.)