“A glow ripples outward from the first spark of conscious reflection. The point of ignition grows larger. The fire spreads in ever widening circles till finally the whole planet is covered with incandescence… [O]utside and above the biosphere, there is the noosphere… To a Martian capable of analyzing sidereal radiations psychically no less than physically, the first characteristic of our planet would be, not the blue of the seas or the green of the forests, but the phosphorescence of thought.”
—Pierre Teilhard de Chardin, The Phenomenon of Man (1955)
Arvind Narayanan is a computer science professor at Princeton who publishes a newsletter called AI Snake Oil and is writing a book by that name—and is, as you might guess, inclined to downplay the significance of recent developments in AI. An example of something he downplayed:
Earlier this year, computer scientists at Microsoft published a paper called “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” They argued that the latest version of OpenAI’s large language model was showing some of the hallmarks of “artificial general intelligence” or AGI. On Twitter, after someone paraphrased the paper’s thesis as “the sparks of AGI have been ignited,” Narayanan tweeted dismissively: “The ‘sparks’ of AGI were ignited a few hundred thousand years ago when our ancestors learned to control fire, setting us on a path to technological civilization.”
I agree that the roots of artificial intelligence run at least that deep—in fact, I’d say they run deeper. But my takeaway from this depth is in some ways the opposite of Narayanan’s: I think it underscores how epic a technological moment we’re living in.
Last month I made the claim that we can do a much better job of navigating the age of AI, and more clearly see its philosophical and even spiritual implications, if we view it in cosmic perspective. By which I meant viewing it through a wide lens in two senses:
(1) In a spatial sense—by thinking about how AI is going to fit into the whole global information processing system that humans have built and are enmeshed in: the “noosphere,” as this system was called by the paleontologist and Jesuit priest Pierre Teilhard de Chardin (who also called it a “planetary mind,” a “brain of brains,” the “thinking envelope of the Earth” and various other things). I think AI could catalyze a kind of climactic coalescence of the noosphere—the “crystallization of the noosphere,” as I put it last month.
(2) In a temporal sense—by stepping back and seeing how long this noosphere has been in the making, and how organically it grows out of the past, and how organically AI has grown out of the noosphere.
This widening of both the spatial and temporal angles, I claim, could prove salvific. It could make the difference between a future that vindicates AI doomers (a future somewhere on the spectrum between hellscape and extinction) and a future that’s way, way better than that: a future in which humans haven’t just survived but are flourishing—and have become better beings, more enlightened beings, in the process.
Since making that bold but vague claim I’ve been struggling with the question of how to make it less vague and more compelling—how to start explaining exactly what I mean and convince people that what I mean makes sense.
The good news, for me, is that this cosmic view of AI’s significance does have the potential to induce a kind of “Woah!” moment, when suddenly things fall into place and you’re seeing the forest, not the trees. It’s as if you were studying some colored tiles affixed to a wall and then you stepped back and saw more tiles and stepped back further and then suddenly realized you were looking at a mosaic, and all the tiles dissolved into a beautiful and coherent picture and you saw what it all means.
The problem is that, with a picture as big as the one I’m trying to convey, the most I can hope to get you to see in the course of a single not-too-long newsletter essay is a handful of tiles. So it could take a few of these essays to get to the “Woah” moment. (And even that assumes you’re not too woah-resistant, as some people, sadly, are.)
Well, we might as well get started! And I might as well give each handful of tiles a memorable label that summarizes much of its upshot. Today’s handful of tiles is called “Artificial intelligence isn’t artificial.”
This label is, for starters, an affirmation of Narayanan’s point: AI is rooted organically in our distant past; it is a natural outgrowth of the human quest. And, I believe, the same is true of the noosphere. Both artificial intelligence and the noosphere are expressions of human nature.
I should acknowledge that there’s a narrow sense in which you could accurately call AI and other technologies—including all the information technologies that form the infrastructure of the noosphere—“artificial.” The root art comes from the Latin word for skill, and the ficial part of artificial comes from the Latin word for “to make” (facere). And certainly human technologies are “made with skill.” (OK, OK, most human technologies.)
Still, the word “artificial” has always connoted something beyond its literal meaning; it has been taken to mean “not natural.” And when I say always, I mean always. The very first use of the word “artificial” recorded in the Oxford English Dictionary comes from 1382: “Not as by naturel order, bot by artificial ordre.” (No, I don’t know why ‘order’ is spelled two different ways.) It’s this deeply ingrained connotation of artificial—“not natural”—that I’m saying doesn’t apply to AI, or to the other technologies (fiber optics, smartphones, spreadsheets, and on and on) that are part of the noosphere.
I mean, why should it? Why should we equate “made with skill” and “not natural”? Birds’ nests and beavers’ dams are made with skill, but we think of them as natural. What’s the case for putting humans and humans alone in this category labeled “animals whose creations aren’t natural”? What’s the big difference between a beaver’s dam and Hoover Dam?
Granted, there are differences. A beaver dam is made out of materials that occur in nature—not “synthetic” recombinations of them—and it emerges pretty directly out of the beaver’s DNA. If we knew enough about beaver genes, we could presumably point to a number of them that are critically and directly involved in dam building—genes that wouldn’t have been preserved by natural selection had they not been conducive to dam building. There is no such constellation of “dam building genes” in humans, no evolutionarily ingrained tendency to build dams per se.
But humans do seem to have a genetically based, evolutionarily ingrained program for invention more broadly—for fiddling around with the material world and recognizing useful products of the fiddling. This genetically based inventiveness is one reason it makes sense to see technology—including artificial intelligence—as an expression of human nature.
But it’s not the only reason. The word “inventiveness,” by itself, is a long way from capturing how deeply technological evolution is rooted in, and propelled by, human nature. And it’s an especially long way from capturing how deeply the evolution of the noosphere—of the planetary mind, including the “artificial” intelligence that is emerging from it and will increasingly shape it—is rooted in human nature. If you want to appreciate how powerful the impetus behind the evolution of the noosphere has been all along, you have to appreciate two other parts of human nature.
The first of these is our intellectually collaborative nature. Consider, for example, the technological innovation that Narayanan singled out as seminal: the harnessing of fire.
As it happens, one other thinker who put a lot of emphasis on that threshold in our species’s history was Vladimir Vernadsky, the early-twentieth-century Russian scientist who, along with Teilhard de Chardin, championed the importance of the noosphere.
By the time Teilhard coined the term “noosphere,” Vernadsky had long been the world’s most influential champion of the concept of the biosphere. As a geochemist, he was very interested in the way the biosphere—which is to say, the web of life—had transformed the chemical composition of the Earth’s surface and also the planet’s energy flow: plants harnessed solar energy, animals harnessed plant energy, and so on. Naturally, then, he was also interested in the way the noosphere—the realm of human thought and invention—changed the planet’s chemical composition and energy flow. Which meant that the control of fire, one of the earliest big ideas to emerge from the noosphere, was a big deal to him.
The human control of fire, he wrote in a 1938 essay called “The Transition from the Biosphere to the Noosphere,” represents “the first instance in which a living organism takes possession of, and masters, one of the forces of nature… The surface of the planet was radically changed after that discovery. Everywhere sparkled, smoldered, and emerged a hearth of fire, wherever Man lived.”
Here is the way Vernadsky imagined this threshold being crossed: “That discovery was made in one, two, or possibly more places, and slowly spread among the peoples of the Earth. It seems that we are dealing here with a general process of great discoveries, in which it is not the mass action of mankind, smoothing and refining the details, but rather the expression of separate human individuals.”
I think that last passage is misleading—at least, if it’s taken to mean that, in every place where early humans mastered fire, a single individual deserves all the credit. I think pretty much all human invention is in one sense or another collaborative. For example:
Maybe, a million or so years ago, one of our ancestors found a way to transport a glowing ember from the ashes of a forest fire to his or her dwelling. Maybe somebody else—on that occasion or later (or earlier)—figured out that you could use twigs to turn the glow into fire. Maybe someone else thought of adding branches, and creating enough fire to keep people warm at night. And maybe someone else—days later, years later, generations later—discovered that if you crowd all the coals together before retiring for the night, you’ll still have hot coals in the morning, coals that twigs can turn into fire.
Isaac Newton famously wrote that, if he had “seen further” than others, he had done so by “standing on the shoulders of giants.” So had whoever first usefully crowded coals together.
Newton’s quote captures the fact that there can be a kind of intellectual collaboration among people who never directly communicate—who may even live at different times. But, of course, usually when we talk about collaboration we’re talking about people who are in contact with one another. Newton’s own breakthrough ideas about physics were catalyzed by the work of various contemporaries. He drew on data gathered by the astronomer John Flamsteed, and his thinking about gravity was stimulated by exchanges with Robert Hooke.
The evolution of the noosphere, of the social brain, consists largely in the evolution of the infrastructure for this kind of fruitful contact. Newton—living as he did after the invention of writing and of the printing press and of postal delivery—had an easier time finding people who shared his research interests, and could offer useful data or thoughts, than did the first person to crowd coals together. And today, after the invention of the telegraph, the telephone, and the Internet, scientists find it easier still—even when their interests are much narrower than Newton’s.
Thanks to the evolution of this infrastructure for collaboration, what was true when campfires first appeared has gotten only truer: Important inventions and discoveries tend not to emerge from one mind alone; there are lots of minds, some in the past, some in the present, that together figure things out. The greatest human mind has always been the social mind—the noosphere. And it keeps getting greater.
This mind draws a lot of energy from the two parts of human nature discussed above: our inventive nature and our collaborative inclination—our inclination to gravitate toward people who share our interests and extract useful information from them, perhaps sharing information in return. But to fully appreciate how naturally humans drive the social mind to greater and greater feats, you need to appreciate one other part of our nature: our competitiveness. Humans like new ideas and tools that can help them compete with other people or other groups, and this creates a persistent demand for intellectual innovation. People also like being thought of as the person who came up with a good idea—they compete for the status intellectual innovation can bring—and this stimulates the supply of new ideas.
Newton, for example, argued bitterly with Gottfried Wilhelm Leibniz over which of them was the true inventor of calculus. For that matter, the two intellectual catalysts of Newton’s mentioned above—Flamsteed and Hooke—had their issues with him over credit. Hooke complained that Newton had drawn on his ideas about gravity without acknowledging the debt. And Flamsteed wasn’t happy about Newton’s use of his data—understandably, since Newton got access to it without his permission and published it in a book without his permission. Flamsteed demonstrated his annoyance by gathering 300 copies of that book—most of the copies in existence—and burning them.
As for Newton’s own capacity for vengeance: After Hooke died, Newton became president of the Royal Society, and it is suspected that he used this position to ensure that a portrait of Hooke which had hung on the Society’s walls would be lost to posterity. In any event, it was during Newton’s tenure that the portrait disappeared forever.
And yet… Hooke and Newton had at one point carried on a cordial correspondence, sharing their thoughts about gravity, to the intellectual enrichment of both. No doubt they were in some sense competitors even then, but they were also collaborators.
These two forces—collaboration and competition—help drive scientific and technical progress in various contexts and at various levels. There is competition between big groups—like corporations or even whole nations—and there is intricate collaboration within big groups, not to mention productive competition within those groups. And so too with all kinds of smaller groups. But the main point for now is just that the dialectic between competition and inventive collaboration—between basic elements of human nature—has gotten us all the way from campfires to steam engines to nuclear energy, from spears to guns to nuclear bombs, from cuneiform to the printing press to the Internet.
Progress along the last of those three dimensions—progress in information technology—is special. Since this kind of technology is the infrastructure of the noosphere, of the social brain, each increment in its advance increases the speed with which innovation can occur broadly. And since some of the resulting innovations are in information technology itself, these advances can have a uniquely self-reinforcing effect. Advances in, say, energy technology or materials science can set the stage for further advance in those realms, since they bring a new set of challenges and typically entail an increase in relevant knowledge. But advances in information technology both set the stage for further advance and increase the efficiency with which the social mind can exploit that opportunity.
So the social brain would seem to have a natural tendency to upgrade itself at an accelerating rate. The bigger and faster it gets, the faster it gets bigger and faster. This helps explain why the digital age feels so dramatic and in some ways unsettling—why things seem to be moving faster and faster: because they are!
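To make that feedback loop a little more concrete, here is a minimal toy simulation (my own illustrative sketch, not anything from the essay, with arbitrary made-up parameters). One run compounds capability at a fixed rate; in the other, each gain also raises the rate of further gains, the way advances in information technology improve the very machinery of innovation:

```python
# A toy model of the "self-reinforcing" growth described above.
# All numbers here are arbitrary assumptions chosen for illustration.

def simulate(steps: int, feedback: bool) -> float:
    capability = 1.0  # arbitrary units of "social brain" capability
    rate = 0.05       # fractional improvement per step
    for _ in range(steps):
        capability *= 1 + rate  # ordinary compounding growth
        if feedback:
            # Gains in information technology also speed up innovation
            # itself, so the growth rate creeps upward each step.
            rate *= 1.01
    return capability

plain = simulate(200, feedback=False)
reinforced = simulate(200, feedback=True)
print(f"after 200 steps: fixed rate -> {plain:,.0f}; self-reinforcing -> {reinforced:,.0f}")
```

The point isn’t the particular numbers, only the shape of the curve: when improvements feed back into the rate of improvement, growth outruns ordinary exponential growth, which is one way to cash out “the bigger and faster it gets, the faster it gets bigger and faster.”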
Artificial intelligence brings a new kind of accelerant to the evolution of the social mind. So far the noosphere’s growing power has resulted mainly from hooking up more and more nodes—more human minds—in more and more efficient and productive ways. In AI we have a whole new kind of node—a node that can do lots of things faster than a human mind and can do some things that a human mind can’t do at all.
Adherents of the “singularity” paradigm say that the truly critical phase of accelerating change will come when AI starts improving itself—rewriting its own code to make itself smarter and smarter. But AI will deliver a major jolt before reaching the self-improvement threshold, just by introducing a qualitatively new ingredient to the noosphere. And I think this jolt will come very, very soon.
But however sudden the jolt feels, we’ll have a better chance of surviving it in good shape if we understand the sense in which it isn’t sudden. At least, that’s my contention—that we need to appreciate how gradually it has grown out of the distant past, how powerful and persistent the forces behind it are, and how firmly and deeply they are rooted in nature.
And when I say deeply, I mean deeply. I think the “natural roots” of the noosphere, and of AI, go back much further than the human mastery of fire, and much further than the coming of the human species. I think the noosphere was immanent in the biosphere itself (as Teilhard de Chardin believed, if for somewhat different reasons than mine). And all along—for hundreds of millions, even billions, of years—what was driving the eventual emergence of the noosphere was, in a certain generic sense, the dialectic between competition and collaboration.
I realize that the last couple of paragraphs may be a bit cryptic. But I’m afraid they’re going to have to stay that way for at least a few weeks. This piece is already pushing 3,000 words, which means this is no time to start in on a whole new handful of tiles. For now let me just say that I’ve delivered on the promise I made at the outset: I said that a single handful of tiles wouldn’t lead to a Woah moment—and it didn’t! More tiles to come.
Image by Clark McGillis
An equally valid definition of "artificial" would be "non-organic" (in the carbon sense). Using the word "natural" in opposition to "artificial" seems to me to be unhelpful, since human intelligence and activities (naturally) create both organic and non-organic things. Things in a garden that are alive are organic; things in a building that are not alive are often non-organic, and we think of them as artificial. Both come naturally from humans. I suggest that artificial intelligence is artificial because it is non-organic, in spite of the fact that artificial intelligence may be a natural result of the Big Bang, or more directly of Darwinian evolution. -- Esteban F.
In the past, I took an interest in the concept of the noosphere and dismissed it as too esoteric for my liking. I really love your re-interpretation of it, even though it did not take away any of my original contentions with the concept.
Any sufficiently advanced technology is indistinguishable from magic. Like fire, the flame of AI has captivated many hearts and minds. It moves, morphs, and dances in front of our eyes. Some argue it may, some day, be alive.
Technology is humanity’s attempt at creating magic. I’m convinced that when we finally succeed, we won’t know what to do with it.