Ending war via algorithm
Plus: Asymmetrical warfare against Trump; Bernie and Russia; Aldous Huxley; Buddhist Barbie; etc.
Welcome to another issue of NZN! This week I (1) ask whether artificial intelligence might help realize a version of Bertrand Russell’s seemingly crazy hope that someday pure logic could end war; (2) explain why I think the best way to fight Trump is via asymmetrical warfare; (3) continue my disorienting conversation about the nature of reality with cognitive scientist Donald Hoffman; and (4) share links to readings on such things as: Aldous Huxley’s perennial philosophy; Elizabeth Warren’s foreign policy; Bernie’s Russia policy; a new, mindful Barbie doll; apocalypse boot camp; and the quest to liberate Happy the elephant.
[Note: NZN is about to change. Over the next two months, as an experiment, I’ll be giving the newsletter a property that, I’ve noticed, a lot of other newsletters have: irregularity. It will no longer come out three Saturdays a month. Rather, it will come out… whenever it comes out. I hope this will sometimes lead to more timely commentary on current events than is possible with a Saturday-only publishing schedule. And I suspect it will lead to some issues of the newsletter being shorter (or conceivably longer) than the average issue has been. But, to be honest, I don’t know what it will lead to. I mainly just hope you’ll stick with me during this period and give us your feedback (nonzero.news@gmail.com). If you want to go ahead and share some thoughts now, you can use the comments section at the bottom of the web-based version of this issue. I’ll check in there within a few days to see if there are questions I can answer or misconceptions I can clear up or reassurances I can give. And if you want to see a video where I discuss the reasons for this experiment with NZN artist-in-residence Nikita Petrov, you can find that on our Patreon page (where you can also sign up to give financial support to NZN and other Nonzero Foundation endeavors—though you don’t have to do that to watch the video).]
Bertrand Russell’s not entirely crazy dream of ending war via logic
The truth, whatever it may be, is the same in England, France, and Germany, in Russia and in Austria. It will not adapt itself to national needs: it is in its essence neutral. It stands outside the clash of passions and hatreds, revealing, to those who seek it, the tragic irony of strife with its attendant world of illusions.
–from Russell’s essay “On Justice in War-Time”
Among the many things Bertrand Russell is known for are these two: (1) laying the foundations of “analytic philosophy,” which values clear expression and fine-grained analysis over grand theorizing; (2) disliking nationalism, especially in its belligerent forms. I’d never imagined a connection between the two, but the philosopher Alexander Klein, in an essay published this month, says there is one.
Russell, according to Klein, hoped that the rise of analytic philosophy would reduce the stature of grand philosophical paradigms with names like “German idealism” and “British idealism.” He wanted to “destroy a conception of philosophy as an articulation of a ‘national mind’,” Klein writes.
This may sound like a pretty roundabout way to combat nationalism—and it would have seemed especially ineffectual at the time Russell was doing some of his writing on the subject, as World War I was engulfing Europe. But, Klein says, there was a second sense in which Russell hoped analytic philosophy could discourage national conflict.
The methodology of analytic philosophy involves defining your terms with painstaking precision, thus crystallizing the meaning of propositions so they can be evaluated via strict logic. Russell’s “theoretical antidote to the irrational, sectarian vitriol between European nations,” writes Klein, “was to try to show how logic could function as an international language that could be used impartially and dispassionately to adjudicate disputes.” Well, that would be nice!
Russell isn’t the only very smart and very rational person to have hoped that smartness and rationality could save the world. Harvard psychologist Steven Pinker’s book Enlightenment Now argued that putting more faith in reason, and less in such unreasonable things as religion and post-modernism, could smooth the path to salvation. A few years earlier, Harvard psychologist/philosopher Joshua Greene argued in Moral Tribes that people could coexist peacefully if only they’d abandon primitive moral philosophies (including religiously based ones) and embrace utilitarianism, with its coolly rational goal of maximizing overall human happiness.
I’ve argued (in Wired and The Atlantic, respectively) that Pinker and Greene are in some ways naïve in their hopes for saving the world. It’s tempting to say the same thing about Russell, but I do think there’s an important sense in which his diagnosis of the problem is sharper than theirs. And, for that reason, Russell’s hope may be closer than their hopes to being realized, even if not exactly in the way he may have envisioned.
One thing I like about Russell’s diagnosis is the breadth of the accompanying indictment. He pins blame for war not just on intellectuals of one stripe or another, but on the entire intellectual class. He wrote, “In modern times, philosophers, professors and intellectuals generally undertake willingly to provide their respective governments with those ingenious distortions and those subtle untruths by which it is made to appear that all good is on one side and all wickedness on the other.”
Similarly, he doesn’t confine his suspicion of moral arguments to one kind of moral argument or another; any school of moral thought can be deployed on behalf of one tribe against another. Russell wrote, shortly after World War I began:
Ethics is essentially a product of the gregarious instinct, that is to say, of the instinct to cooperate with those who are to form our own group against those who belong to other groups. Those who belong to our own group are good; those who belong to hostile groups are wicked. The ends which are pursued by our own group are desirable ends, the ends pursued by hostile groups are nefarious. The subjectivity of this situation is not apparent to the gregarious animal, which feels that the general principles of justice are on the side of its own herd. When the animal has arrived at the dignity of the metaphysician, it invents ethics as the embodiment of its belief in the justice of its own herd.
That’s harsh! And it may sound too harsh as a description of modern-day philosophers, many of whom don’t come off as ardent nationalists. But Russell was writing when World War I had brought out the nationalist in just about everyone in Britain—somewhat as Pearl Harbor, and later the 9/11 attacks, would make American peaceniks a very rare breed.
The 9/11 attacks are especially instructive, because they led to a war that obviously made no logical sense. The premise of the war was that Iraq was building weapons of mass destruction, yet Iraq was letting UN weapons inspectors look anywhere they wanted to look—until the US ordered them to leave Iraq so that the invasion could begin! Crazy as this sounds, (1) it actually happened; and (2) among the invasion’s many highly intellectual supporters were representatives of all major schools of ethical thought: utilitarians, Aristotelians, Kantians, Christians, Jews, Muslims, Hindus, Buddhists, and on and on.
So I think Russell was right to see the problem as broad and deep. He deserves credit for not buying into Greene’s comforting belief that the problem is people who don’t subscribe to a particular ethical philosophy, or Pinker’s comforting belief that the problem is people who don’t subscribe to “Enlightenment values.”
But what about Russell’s comforting belief that the problem is people who don’t share his commitment to analytic philosophy—and what about his related hope for an “international language that could be used impartially and dispassionately to adjudicate disputes”?
I can’t tell, from Klein’s essay, how literal this hope of Russell’s was. But I wouldn’t put it past him to have hoped quite literally—to have imagined a day when, perhaps thanks to the further evolution of analytic philosophy, disputes would be settled via formal logic, a kind of logic that yields conclusions as inexorably as mathematical proofs.
Leaving aside for the moment the practicality of this hope, it has this much going for it: it recognizes that a recurring problem with intellectual tools is that they’re in the hands of human beings. The utilitarianism that Greene hopes will save the world—the idea that we should maximize total human well-being—looks great on paper (and in fact I’m a fan of it). But if you converted all the Hatfields and McCoys to utilitarianism, they’d still fight, because their tribal allegiances would trigger cognitive biases that would lead each side to feel that it was the more aggrieved, and that retribution was therefore in order. (All that would remain is for them to reconcile the ensuing violence with utilitarianism by arguing that retribution against transgressors is in the long run conducive to overall societal welfare because it discourages transgression. I suspect that’s the move the Hatfields were pondering when this picture was taken.)
Or, as the NRA might put it: Systems of ethical thought don’t kill people. People wielding systems of ethical thought kill people.
Russell seems to want to take the people out of the picture. If you had a sufficiently precise and rigorous system of language and logic, human intervention would, in a sense, not be needed. Two nations make their competing claims, their claims are plugged into the system, and the laws of logic do the rest: a judicial ruling basically just pops out, without the need for a judge who, being human, would be fallible.
Well, a century later, this system of language and logic doesn’t seem to exist. But humankind has developed a different system of language and logic that, once set in motion, requires no human intervention. It’s called a computer program.
Which raises a question: Is it too far-fetched to think that someday an AI could adjudicate international disputes? Fifteen years ago I might have said it was. But each year Google Assistant seems to do a better job of understanding my questions and answering them. And each year more and more highly skilled American workers see computers as a threat: paralegals, radiologists, sports writers... Can we be so sure that judges will be forever immune? Is it crazy to imagine a day when an AI can render a judgment about which side in a conflict started the trouble by violating international law?
Obviously, the technical problems are formidable. But if you solve them, you’ve done more or less what Russell wanted to do. You’ve removed human bias from the analytical process by putting algorithms in charge.
Of course, humans would design the algorithms, and there are kinds of biases you can build in at that level—features that, whether you mean them to or not, will wind up favoring certain kinds of countries over others. Then again, you can say the same about international law itself. Computers would at least lack one bias that will threaten to afflict international rulings so long as judges are human: favoring—even if unconsciously—one country over another just because of which country it is. “Russia,” “China,” “America” —none of those labels would tug at a computer’s heartstrings or stir its wrath, or trigger thoughts about how favoring the country could facilitate its career advancement.
This may well be fanciful. But one reason to think it’s not is that, even now, in what will presumably turn out to be the primordial days of AI, we can start the process of finding out! My homework assignment for AI geniuses with too much time on their hands: Design a program that scours the news around the world and lists things it deems violations of one particular precept of international law—the ban on transborder aggression. Just list all the cases where one country uses ground forces or missiles or drones or whatever to commit acts of violence in another country. I’m guessing this is doable with a pretty high level of accuracy.
Now, strictly speaking, not all of these acts of violence would violate international law. If they’re conducted in self-defense, or with the permission of the government of the country in question, that’s different. And, obviously, with time you’d want your AI to take such things into account. But for starters let’s keep the computer’s job simple: just note all the times when governments orchestrate violence beyond their borders. Then at least we’ll have, in the resulting list, a clearer picture of who started what, especially along fraught borders where strikes and counter-strikes are common.
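For the technically curious, here is one toy sketch of what a first pass might look like, in Python. Everything in it (the country list, the strike vocabulary, the sample headlines) is a made-up placeholder, and the matching is crude keyword-spotting rather than real language understanding; the point is only to suggest how modest the first step could be.

```python
# A deliberately crude sketch of the "homework assignment" above: scan headlines
# for signs of one country using armed force inside another country.
# All names and terms here are placeholders; a real system would need genuine
# event extraction, plus rules for self-defense and host-government consent.
import re

COUNTRIES = [
    "United States", "US", "Russia", "Turkey", "Israel", "Iran",
    "Saudi Arabia", "India", "Pakistan", "Syria", "Yemen", "Iraq",
]

# Words and phrases suggesting armed force was used (case-insensitive).
STRIKE_TERMS = r"\b(air ?strike|drone strike|missile strike|shell(?:s|ed|ing)?|bomb(?:s|ed|ing)?|raid)\b"

# Placeholder headlines standing in for a real news feed.
SAMPLE_HEADLINES = [
    "US drone strike kills three in Yemen, officials say",
    "Turkey shells Kurdish positions inside Syria",
    "Markets rally as trade talks resume",
]

def flag_cross_border_force(headline):
    """Return (actor, location) if the headline looks like one country
    using force in another; otherwise return None."""
    if not re.search(STRIKE_TERMS, headline, re.IGNORECASE):
        return None
    # Record each mentioned country along with where it appears in the headline.
    hits = []
    for country in COUNTRIES:
        match = re.search(rf"\b{re.escape(country)}\b", headline)
        if match:
            hits.append((match.start(), country))
    hits.sort()
    # Crude heuristic: the first country named is treated as the actor,
    # the last as the place where the violence occurred.
    if len(hits) >= 2 and hits[0][1] != hits[-1][1]:
        return hits[0][1], hits[-1][1]
    return None

if __name__ == "__main__":
    for headline in SAMPLE_HEADLINES:
        result = flag_cross_border_force(headline)
        if result:
            actor, location = result
            print(f"{actor} -> force used in {location}: {headline}")
```

Run on the sample headlines, this flags the first two and ignores the third. Obviously a heuristic this crude would misfire constantly on real news, but it shows the shape of the task: the hard part is the judgment calls layered on top, not the list-making itself.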
Of course, we could in principle have human beings compile such a list. But which human beings? Russell’s whole point is that there’s basically no one who can, with complete confidence, be trusted to do that job.
Certainly not the people who run our media. Last year the US launched an estimated 5,425 airstrikes—drone strikes plus strikes by piloted aircraft—in four countries (and that’s just the main four countries, not all of them). That’s roughly 15 airstrikes per day. How many days did you read about one of them? Can you even name the four countries? If someone had launched a single airstrike on an American town, don’t you think you’d have read about it? Apparently our interest in airstrikes is asymmetrical.
Russell wrote, about the runup to World War I: “Men of learning, who should be accustomed to the pursuit of truth in their daily work, might have attempted, at this time, to make themselves the mouthpiece of truth, to see what was false on their own side, what was valid on the side of their enemies.” Alas, “Allegiance to country has swept away allegiance to truth. Thought has become the slave of instinct, not its master.”
Yes, that’s what thought tends to do—not surprisingly when you reflect on the fact that we were created by natural selection. So maybe we should turn some of the thinking over to machines that weren’t created by natural selection. Even at this early stage of AI’s evolution computers might, via the objective assembly of lists, make people a bit more likely to see what is “false on their own side” and what is “valid on the side of their enemies.” And that would be a start.
To share the above piece, use this link.
Using asymmetrical warfare against Trump
This week, in my periodic role as obnoxious Twitter scold, I intemperately reprimanded famous Never-Trumper and #Resistance personage Tom Nichols (@RadioFreeTom), who had tweeted to his 328K followers something to the effect that Trump supporters don’t “care about anything but spite and resentment.”
Why the reprimand? In part for the same reason I reprimanded Nancy Pelosi in this newsletter two weeks ago, after she conspicuously tore up her copy of Trump’s State of the Union speech. As I put it then, “Maybe you should ask yourself not only whether lots of people in your tribe will love that gesture, but how the people who aren’t in your tribe will perceive it.” Pelosi’s gesture plays into Trump’s persecution narrative—and Nichols’s tweet plays into Trump’s narrative that snobby cosmopolitan elites hold his supporters in contempt.
In both cases, I think, what we see is our tribe taking Trump’s bait. He wants to enrage his detractors—us—so that we’ll do things that energize his supporters (by nourishing his narrative), thus making them more likely to get out and vote.
Which leads to a question so good that I wish I’d thought of it myself.
The question was posed by NZN reader Cary W., who, after reading what I said about Pelosi, wrote in an email: “So why is it that enraging detractors and energizing supporters is a politically beneficial tactic for Trump but a politically detrimental tactic for Democrats?” If it makes sense for Trump to do it, why doesn’t it make sense for Pelosi and Nichols to do it?
Well, one reason not to uncritically adopt Trump’s strategy is that, even for Trump, it has its downsides. Sure, when he energizes his supporters by enraging us, he makes them more likely to turn out in November. But he also energizes us, presumably making some of us more likely to turn out in November. If you’re angry enough at Trump to stand up and cheer when somebody tears up his speech, you’re probably angry enough to drive to a polling station.
Wouldn’t the best strategy for Trump—and the best strategy for anybody—be to energize your tribe without energizing the other tribe? The answer is so obviously yes as to lead to another question: Why doesn’t he pursue that strategy?
I’m not sure. Maybe because he lacks a vision that could energize people via inspiration alone, so he has to foment fear and hatred. Or maybe he just sees that fomenting fear and hatred, and goading opponents into reactions that feed the fear and hatred, is his special gift. You gotta work with the tools God gave you.
In any event, the theoretically ideal strategy is to energize your tribe without energizing the opposing tribe. Leaving aside the question of whether that’s a viable strategy for Trump, given his very special skill set, shouldn’t we at least consider the possibility that it’s a viable strategy for us? Shouldn’t we see if we can foster and sustain a widespread determination to vote Trump out of office without at the same time fostering and sustaining the rage that feeds his narrative and thus energizes his base?
I honestly don’t know if this is doable. It calls for cultivating a particular mindset that’s not easy to cultivate. (Mindfulness, regular NZN readers will not be surprised to hear, is something that I think can help.) And it calls for leaders—political leaders, social media leaders—who are skilled in deft inspiration, who can arouse deep concern and potent moral indignation over Trump’s policies and transgressions without igniting rampant rage toward Trump and his supporters. Above all, it calls for leaders who are willing to give this strategy a try.
Persuading them to do this will be a challenge. It’s hard to convince people to abandon things that get them (in the case of this particular Tom Nichols tweet) 1.9K retweets and 10.7K likes. But I’m not giving up. So I’ll be reprising, from time to time, my role as obnoxious scold. You gotta work with the tools God gave you.
To share the above piece, use this link.
Through the looking glass
In last week’s newsletter I introduced the strange worldview of Donald Hoffman, a cognitive scientist who believes that reality is radically unlike what we perceive it to be (an argument he made in his book The Case Against Reality). This week we offer the second part of my conversation with Hoffman, in which things get, if anything, stranger. We pick up the conversation where we left off last week: Hoffman had argued, on Darwinian grounds, that reality isn’t what it seems, without yet giving us his theory about what reality is.
DONALD HOFFMAN: I do have a theory, and I can discuss it with you, but I should point out that that theory is separate from the evolutionary conclusion.
The evolutionary conclusion is: we don't see reality as it is. The second step is: okay, now, as scientific theorists, what shall we propose as a new theory of that reality? And someone can buy my first proposal—that we don't see reality as it is—and not buy my proposal about the nature of reality…
ROBERT WRIGHT: And the proposal you have, there's an actual mathematical version of it, I think it has maybe seven variables or something like that. And we won't be able to get into that in any depth at all, but one interesting feature of it is I think you claim it's testable.
DONALD HOFFMAN: Right.

ROBERT WRIGHT: Before we get into that, I want to get a little more deeply into the question of, okay, if this is not the real world, what is the real world that this is a kind of reflection of?… And here's where things get weirder, as if things weren't weird enough, at least by my reckoning...
You can read the rest of this dialogue at nonzero.org.
In Tricycle, Karen Jensen critically assesses Breathe with Me Barbie, the new doll from Mattel that can assume the lotus position and give meditation guidance to kids, saying things like “Imagine your feelings are fluffy clouds.” Jensen isn’t too impressed but ends on a hopeful note: “How do we know that she isn’t capable of awakening?”
In Politico, David Siders explores Michael Bloomberg’s plan to emerge from an initially deadlocked Democratic convention with the nomination.
Seventy-five years after the publication of Aldous Huxley’s The Perennial Philosophy, Jules Evans, a scholar who as a teenager virtually deified Huxley, looks back on the book. Huxley said, as had “Perennialists” before him, that the world’s great spiritual traditions have a common core. For example: “Huxley suggests that the peak experience is the same in all traditions: a wordless, imageless encounter with the Pure Light of the divine.” I didn’t know, before reading this piece, that Huxley’s book was partly a response to World War II. “The reign of violence will never come to an end,” Huxley wrote, until more people recognize “the highest factor common to all the world religions.”
In an Atlantic piece on “authoritarian blindness,” Zeynep Tufekci argues that, however ironically, the Chinese government’s surveillance apparatus has impeded its view of the coronavirus epidemic.
If you’ve been wondering what it would be like to be a left-leaning woman at a mostly male, very right-wing gathering that, over a three-day weekend, prepares people for the impending collapse of civilization—well, your ship has come in. Lauren Groff, in a long Harper’s essay, observes the denizens of “Prepper Camp” in North Carolina with the air of detached irony you’d expect. I spent much of the piece wishing she’d interact more earnestly with them, and get some insight into their motivation; and near the end of the piece she does summon some cognitive empathy, and some self-critical reflection.
In the Nation, David Klion profiles Sasha Baker, head of Elizabeth Warren’s foreign policy team.
The US hasn’t properly accounted for $714 million worth of weapons and equipment it sent to Syrian proxy forces, according to a Defense Department inspector general report that is the subject of an article in the Military Times. These particular weapons were directed toward proxies fighting ISIS, and aren’t to be confused with the weapons sent to Syria as part of the secret $1 billion-plus CIA program to arm rebels in furtherance of Obama’s regime-change initiative. Some weapons from both programs wound up in the hands of ISIS and affiliates of al Qaeda.
A day before The Washington Post reported that US intelligence officials say Russia aims to boost Bernie Sanders’s presidential campaign, Ben Judah and David Adler argued in the Guardian that a President Sanders would be no friend of Vladimir Putin’s.
A judge has ruled that Happy the elephant, who lives alone on a one-acre plot at the Bronx Zoo, has not had her “personhood” violated, Sophia Chang reports in Gothamist. The ruling was a defeat for The Nonhuman Rights Project, which had sued the zoo in hopes of liberating Happy. The judge agreed that “Happy is more than just a legal thing, or property” and “should be treated with respect and dignity” and “may be entitled to liberty.” But, “we are constrained by the caselaw to find that Happy is not a ‘person’ and is not being illegally imprisoned.”
OK, that’s it! As soon as you finish singing our praises on social media (which I don’t think you should spend more than an hour or two on), feel free to tell us what you really think about us in the comment section below. And thanks to those of you who last week signed up for our Twitter feed and pushed us past (well past!) the 1,000 followers mark.
Illustrations by Nikita Petrov.