The New Persuaders
Plus: Bibi beats Biden (again). How tribalism can save TikTok. America’s wanderlust epidemic. AI’s forbidden knowledge. And more!
A year ago—20 months before the coming presidential election—two AI researchers published a paper that looked at ways AI might shift “the balance of persuasive power.” They said that “increasingly anthropomorphic AI systems” could “allow personalized persuasion to be deployed at scale,” bringing newly powerful disinformation campaigns and other bad things.
The logic is straightforward. Traditionally, deploying an army of online influencers has cost real money, since “keyboard warriors” don’t work for free. With the advent of large language models, you can deploy thousands and thousands (and thousands, and so on…) of sophisticated chatbots that do work for free.
Last week researchers at two European universities provided evidence that the problem is even worse than that. Their study found that LLM-based chatbots, in addition to being cheap and verbally fluent, are better than humans at persuading humans—and better in a particularly ominous way.
The researchers, from Switzerland’s École Polytechnique Fédérale de Lausanne and Italy’s Fondazione Bruno Kessler, recruited 820 humans and one chatbot (GPT-4) and placed them in short, structured online debates. Half of the debates were human-on-human and half were human-on-bot. Before the debates, the humans were asked for their views on a given issue (abortion, the death penalty, arming Ukraine, etc.). Then they were randomly assigned to defend one side of the issue. They then debated a person or a bot (they weren’t told which), and afterwards they were asked again about their views on the issue.
In the bare-bones version of the experiment, the bots were only slightly better than humans at moving their debate opponents toward their position; the difference wasn’t big enough to reach statistical significance. But when bots were given demographic data about their debate opponents, they got much better at changing minds. Humans, given the same data about their opponents, got slightly worse at persuasion. The authors write, “Not only are LLMs able to effectively exploit personal information to tailor their arguments, but they succeed in doing so far more effectively than humans.”
The demographic data given to the chatbots wasn’t very extensive: just gender, age, ethnicity, education level, employment status, and political affiliation. That’s the scary part. There’s more voluminous and fine-grained information about us online, information that a bot could readily find and digest—our history of social media posts, for example.
Those archives are, for an astute LLM, the Rosetta stone of insidious influence. They could help a bot endear itself to us, thus gaining credibility that it could use at an opportune moment. Or, if in troll mode, the bot could taunt us with our contradictions and hypocrisies.
The AIs in this study couldn’t take advantage of all the tools that would be available to online persuader bots. As that 2023 paper noted, a chatbot could try out different angles, see which ones are most persuasive, and adjust its tactics accordingly. It might even convey its findings to comrade bots, generating big waves of successful rhetorical recalibration.
So will armies of persuasive AIs have a big impact on the 2024 election? Will voters fall prey to remarkably lifelike bots deployed by foreign governments, US political parties, and aspiring Silicon Valley oligarchs?
Maybe. But even if not, public awareness of that possibility will create its own form of disruption. In 2016 some Democratic voters attributed Hillary Clinton’s loss to the work of 100 human beings in a small building in St. Petersburg, Russia. Surely we can convince ourselves that millions of bots—each of them more persuasive than the average human being—are responsible for our candidate’s loss. And presumably we’ll convince ourselves that the bots were dispatched by the most convenient scapegoat—Russia, China, the DNC, the RNC, Zuckerberg, Elon. In this respect and others, AI could give America exactly the paranoia boost that it doesn’t need.
That 2023 paper about “the balance of persuasive power” wasn’t just about, or even mainly about, electoral politics. If you want to start a religious cult or recruit terrorists or find and chat up your next romantic conquest while you’re busy with the current one, your ship has come in.
And, needless to say, the bot revolution is good news for capitalism. AIs with access to your social media history or your web browsing history or your search history can presumably do a better job of getting you to buy stuff than even micro-targeted ads now do. And you don’t have to be duped into believing that the AIs are human for this to work. LLM bots are becoming trusted sources of guidance and even friendship for some people. This probably hasn’t escaped the attention of Google and Meta—both of which are world leaders in gathering personal information and converting it into ad revenue, and both of which are investing jillions of dollars in the development of large language models.
Obviously, the skills that can turn bots into insidious persuaders can also empower them to do valuable things, like provide helpful information and even sage counsel. Also obviously, we’ll adapt to the age of automated persuasion, developing laws and norms that help us limit its downside. And each of us can experiment with therapeutic and/or spiritual disciplines that help us preserve our autonomy amid the encroachment of AI—something I discussed with meditation researcher Kathryn Devaney on the Nonzero podcast this week.
So, all told, my prediction is: We’ll survive.
Still, the challenge of AI persuasion is going to get big pretty soon, and it’s going to be disruptive in the old-fashioned, not wholly positive sense of that word. And those are two things it has in common with various other adaptive challenges posed by AI, such as upheaval in the job market.
So the next time you hear Silicon Valley libertarians like Marc Andreessen or Peter Thiel complain that regulation slows technological progress, feel free to reply that sometimes slowing technological progress is a feature, not a bug. And maybe you should reply quickly, before some online being persuades you not to.
—RW
NZN’s graph of the week suggests that a growing number of Americans aren’t delighted with the state of the Union.
But there’s also, as Washington Post columnist Ishaan Tharoor notes, a glass-half-full interpretation:
NZN readers, feel free to opine on this question in our comments section: Why do you think Americans have become more open to settling down in a foreign country?
Three weeks ago President Biden said that an Israeli invasion of Rafah—the southern Gaza city where more than a million Palestinians are sheltering—would cross a “red line.” He seems to have changed his mind. The Wall Street Journal reports that, in private discussions with Israeli officials about a Rafah military operation, White House officials are now focusing “not on how to stop it, but on how to protect civilians during its rollout.”
The turnabout isn’t shocking. Within days of Biden setting his red line, the White House denied that he had done so. (Strangely, Peter Baker and Alan Yuhas of the New York Times reported that denial without also reporting that Biden had in fact done so.) Besides, Israeli Prime Minister Bibi Netanyahu has pretty consistently defied Gaza-related entreaties from the US, and the US has pretty consistently acquiesced.
Which raises a question: Why is Netanyahu so successful in his defiance? Given America’s leverage over Israel—billions of dollars in military assistance and crucial diplomatic support, for starters—why can’t Biden get Netanyahu to do what Biden wants? Last week, Harvard political scientist Stephen Walt, writing in Foreign Policy, suggested an answer: because the power at Biden’s disposal isn’t as great as it seems, especially when you consider the realities of American politics.
Walt says there are three things that can keep a powerful nation from using its support of a client state as leverage: (1) “if a client can get similar help from someone else,” (2) “if [the client] cares far more than its patron about the issues in dispute and is therefore willing to pay the price of reduced support,” and (3) “if the patron cannot reduce its support due to domestic or institutional constraints.”
Two out of three of those are a problem for Biden, says Walt. Though Israel would have trouble replacing America’s military aid and diplomatic protection, it cares about the situation in Gaza more than America does. And as for Biden’s domestic constraints: Walt literally wrote the book on how pro-Israel groups make US politicians pay a price for departing from the traditional American consensus on support for Israel.
But things can change—and both of the variables that have so far favored Netanyahu are changing.