Artificial Intelligence and the Noosphere
Seeing the age of AI in cosmic perspective can help us navigate it
We must enlarge our approach to encompass the formation, taking place before our eyes… outside and above the biosphere, of an added planetary layer, an envelope of thinking substance, to which, for the sake of convenience and symmetry, I have given the name of the Noosphere.
—Pierre Teilhard de Chardin, 1947
I look forward with great optimism. I think that we undergo not only a historical, but a planetary change as well. We live in a transition to the noosphere.
—Vladimir Vernadsky, 1945
Some pretty cosmic things have been said about artificial intelligence lately. For example:
Some people say that, as AI approaches the point where it can improve itself—rewrite its own code to become smarter—we’ll be on the threshold of the “singularity.” Beyond the singularity lies a kind of paradise, or a kind of hell, or even human extinction. I wish I could be more precise, but apparently it’s in the nature of the singularity that you can’t see beyond it. (The term comes from physics, where it denotes the center of a black hole and thus connotes impenetrable opacity.) All we can say for sure is that the post-singularity world will be radically, even unrecognizably, different from the pre-singularity world.
See how cosmic that is? But I don’t think it’s cosmic enough. I think if you want to fully appreciate the significance of the AI revolution, you have to widen the lens: put AI in the context not just of the future but of the distant past. The really distant past—like, starting 3.5 billion years ago.
I know that’s a big ask. The good news is that these 3.5 billion years—encompassing the history of life on Earth—can be boiled down to two stages: the evolution of the biosphere, then the evolution of the noosphere. And the cosmic significance of AI can be boiled down to these eight words: Artificial intelligence is the crystallization of the noosphere.
At least, that’s the tentative—and provocatively stark!—version of my thesis. I plan to write a few pieces about AI and the noosphere, and I may revise my thesis as I try to flesh it out, and ponder people’s reactions to what I write. Before proceeding with this first installment, let me spell out some virtues I see in viewing AI this way—from a noospheric perspective.
1. It can help orient us. Many people find the current era, with its rapid technological change, disorienting. A noospheric perspective offers a very, very big ‘you are here’ map.
2. It can help define our mission. This ‘you are here’ map offers a way of framing the terrain that lies before us—a way of thinking about how to approach the age of AI, about how to guide the evolution of AI, and about what its mission—the mission we assign AI—should be. Ideally, this noospheric perspective will help steer us in the general direction of the “paradise” scenario and away from both the “hell” and “extinction” scenarios.
3. It can provide a philosophical framework for our mission—a framework that may even have motivating power. A noospheric perspective helps us see how the evolution of AI can be—and should be, and maybe even, for our sake, must be—intertwined with our own moral (some would even say spiritual) evolution.
4. It can offer a theological (or at least teleological) framework for our mission. I emphasize the word “offer.” Though the best-known proponent of a noospheric perspective was a theologian, the theological emanations of this perspective are more speculative than some of its other emanations. That said, I do think a noospheric perspective gives reason to suspect that there is in some sense a “higher purpose”—that natural selection was set in motion for a reason, a reason reflected in the current technological moment, and that AI is inescapably involved in the realization of that reason. (Contrary to common belief, there are versions of this “higher purpose” scenario that are fully compatible with a scientific world view. But: Whether that purpose comes from a God is another question. The source of the purpose could be some cosmic hacker, as in recently popular “simulation” scenarios, and it could be something that isn’t an intelligent being at all, but rather a process—like natural selection itself except on a much bigger scale: a natural selection among universes. In other words, there could be a “telos” without the telos having been imparted by an intelligent being.)
OK, with the stakes thus laid out, here we go:
The term noosphere seems* to have been coined in the early 1920s by the French paleontologist Pierre Teilhard de Chardin, who, in addition to being a scientist, was a Jesuit priest and theologian. The idea may have emerged in conversations between Teilhard and the French mathematician Edouard Le Roy after they attended a lecture by the Russian geochemist Vladimir Vernadsky about the biosphere—the global layer of living matter that has been in evolution for 3.5 billion years. In any event, Vernadsky was quick to embrace the idea of the noosphere, and he and Teilhard became its two most prominent proponents.
So what is the noosphere? It is, for starters, something made possible by the biological evolution of the human brain. Once our ancestors were thinking and sharing their thoughts and creating tools and building things, the noosphere was being launched.
Vernadsky, more than Teilhard, emphasized the product of all this thinking and inventing and building. He was a geochemist, and he was fascinated by the way the evolving biosphere (about which he wrote a book) had transformed the face of a planet that was once barren. As he thought about the noosphere, he stayed focused on this transformation of the Earth’s outer layer. He wrote: “Mankind taken as a whole is becoming a mighty geological force. There arises the problem of the reconstruction of the biosphere in the interests of freely thinking humanity as a single totality. This new state of the biosphere, which we approach without our noticing it, is the noosphere.”
Teilhard was inclined to see the thinking itself, more than the products of that thinking, as central to the noosphere. He called the noosphere (among other things) “the thinking envelope of the Earth” and a “planetary mind.” (The noos in noosphere is Greek for “mind.”)
Teilhard, to be sure, had an interest in the physical products of human thought. But he was especially interested in the physical products that helped humans share those thoughts, helped draw people into collaborative cognitive webs. In other words, he was interested in information technology. He sometimes referred to the noosphere as a “brain of brains”—a kind of global superbrain in which individual humans were neurons—and he saw how information technology could both link the neurons together in larger numbers and give each neuron more power.
In an essay called “The Formation of the Noosphere,” he wrote: “How can we fail to see the machine as playing a constructive part in the creation of a truly collective consciousness?… I am thinking, of course, in the first place of the extraordinary network of radio and television communications which… already link us all in a sort of ‘etherized’ universal consciousness.” But he was also thinking of the “growth of those astonishing electronic computers which, pulsating with signals at the rate of hundreds of thousands a second, not only relieve our brains of tedious and exhausting work but, because they enhance the essential (and too little noted) factor of ‘speed of thought,’ are also paving the way for a revolution in the sphere of research.”
Teilhard wrote those words in 1947. That was back when a computer with a hundred-thousandth of a modern smartphone’s processing power weighed 30 tons and filled a large room. And it was nine years before the 1956 Dartmouth conference that is commonly taken as the birth of AI research. In my 1988 book Three Scientists and Their Gods, I suggested that, had Teilhard envisioned the coming of artificial intelligence, it would have figured prominently in his conception of the noosphere’s future evolution.
But even in 1988, I didn’t appreciate how naturally AI might fit into that conception. The reason is that I didn’t anticipate a turn that artificial intelligence took in subsequent decades. The kind of “generative AI” that burst into popular consciousness last year—image-generating AI like DALL-E and language-generating AI like ChatGPT—isn’t what many people were expecting AI to be like back in the 1980s.
A big part of the difference is captured in the way Jaron Lanier (who coined the term ‘virtual reality’) characterizes the new AI. He recently wrote in the New Yorker: “In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.”
This may sound like a strange way to describe a technology that can give a lengthy answer to a question without consulting any human beings—much less asking a bunch of human beings to get together and collaboratively generate the answer. But Lanier’s point is that the AI’s answer is a kind of weaving together of the contributions of lots of humans—the contributions embedded in the texts and images the AI was trained on, the texts and images that collectively shape its responses to questions and prompts. He writes: “The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking.”
Lanier says that the recent burst of progress in AI doesn’t amount to “the invention of a new mind.” Personally, I’m fine with calling an AI a mind (which isn’t to say it’s sentient, but just that it does the functional equivalent of what we call cognition when humans do it). I’m also fine with calling it a collective mind—whose constituents are, in a sense, human minds. If you ask ChatGPT a question about some historical event, it will give you roughly the answer you’d get if you assembled everyone whose writing about the event was part of the texts it was trained on and had them vote on the answer. In that sense, you could give ChatGPT that label Teilhard affixed to the noosphere: a “brain of brains.”
I’m not saying AI is the noosphere—like, the whole noosphere—or that it will ever be. I’m just saying it has some of the basic properties that Teilhard attributed to the fabric of the noosphere. AI is turning out to be something that can fit into the noosphere more organically than I would have guessed a few decades ago; it’s more like collective human cognition, and less like some alien intelligence that walked out of an MIT lab, than I would have imagined.
I think I’ll leave things here for now—except to note one more thing Teilhard and Vernadsky agreed on: Both men expected the evolving noosphere to carry humankind to a good place. And both expected that place to involve the unification of humankind; they saw the two world wars they had lived through as temporary setbacks.
But they emphasized different sides of this unification.
Vernadsky tended to focus on the political side. He saw movement toward “a just distribution of wealth associated with a consciousness of the unity and equality of all peoples, the unity of the noosphere.” And this equality would be manifest in governance: “Our democratic ideals are in tune with… the laws of nature, and with the noosphere.”
Teilhard was more interested in the spiritual side of things. He saw us moving toward universal love. “Humanity… is building its composite brain beneath our eyes. May it not be that tomorrow, through the logical and biological deepening of the movement drawing it together, it will find its heart, without which the ultimate wholeness of its powers of unification can never be fully achieved?”
I don’t exactly share the optimism of either man. (And Teilhard’s scenario, beyond a certain point, becomes hard to even comprehend, owing to his mystical metaphysics, his theological presuppositions, and his sometimes poetic form of expression.) But I do think that, if the age of AI is going to work out well, there will have to be at least some movement toward the goals they identified—a more unified global political community and more in the way of international affinity and sympathy. Or, to put that second goal a bit differently: less in the way of animosity between nations and more of a sense that the world, not just the nation, is our community.
I think looking at things from a noospheric perspective—looking at AI in its broadest evolutionary context—can help us make progress along these dimensions. I’ll pursue that idea in subsequent posts on this subject.
*Note: As for who coined the term noosphere: Vernadsky wrote that “The French mathematician Le Roy… introduced in 1927 the concept of the noosphere… He emphasized that he arrived at such a notion in collaboration with his friend Teilhard de Chardin, a great geologist and palaeontologist, now working in China.” But Wikipedia says Teilhard used the term in his essay “Cosmogenesis,” written in 1922. And Teilhard himself (see the first epigraph of this piece) seems to have been under the impression that he coined the term. (Teilhard was prohibited by the Catholic Church from publishing many of his writings in the 1920s and 1930s—including some, such as The Phenomenon of Man, that discussed the noosphere—but “Cosmogenesis” seems to have been written before the hammer came down.)
As noted in the piece and comments, there's been a simplistically hopeful, even utopian vibe in the “noosphere” concept from its genesis in Teilhard and Vernadsky. Almost unavoidable given the largely positive associations of the word “noos/nous” in ancient Greek thought.
One way to make the noosphere concept more nuanced is to keep it in conversation with two more recent theories/formulations of human-related planetary spheres: (1) that we are contributing to a massive, perhaps out-of-control “technosphere” (see Peter Haff's work on this) and, (2) that we are contributing to a Leviathan-like “infosphere” (Luciano Floridi; Alexander Wilson). These two concepts are less utopian—and, in their combination, they emphasize an important distinction of material technology (“technosphere”) vs. communication-informational technology (“infosphere”). My sense is that they're also being more widely used by scientists and thinkers today than the noosphere.
AI/ChatGPT are significant developments in the “infosphere” that promise also to have an immense impact on the functioning and development of the “technosphere”.
In my piece grappling with some of these "-sphere" concepts (https://www.noemamag.com/the-poetry-of-planetary-identity/), I proposed that in the 21st century the noosphere concept would be better reformulated as an overarching aspiration to keep three planetary “spheres” in harmony; i.e. we are closer to having/being a "noosphere" whenever we have harmony between the biosphere, the technosphere, and the infosphere. (I think this is in the spirit of how you're adapting/tempering the utopian concept in your piece.)
So with AI, the challenge is responding to this particular leap in the “infosphere”: how it affects us cognitively, economically, and socio-politically, and also how it will affect other non-informational technologies (“technosphere”) that directly impact humans and other living organisms (“biosphere”).
In other words, the better high-level metaphor may not be a “singularity” toward which we’re heading. Instead, AI is a specifically “infospheric” lurch occurring in a complex tripartite planetary system, and the collective task is to re-harmonize that system. Otherwise, “progress” in AI may actually be taking us farther from a functioning noosphere and deeper into an exaggerated “infosphere” that is out of step with other key aspects of the human and more-than-human world. A kind of planetary misalignment problem.