I for one welcome our soon-to-be AI overlords. I’ll make a great pet. Just feed and water (Diet Coke) me and I’ll be happy to bow down to our silicon gods.
Really great piece connecting AI and the noosphere! Looking forward to the following ones!
As noted in the piece and comments, there's been a simplistically hopeful, even utopian vibe in the “noosphere” concept from its genesis in Teilhard and Vernadsky. Almost unavoidable given the largely positive associations of the word “noos/nous” in ancient Greek thought.
One way to make the noosphere concept more nuanced is to keep it in conversation with two more recent theories/formulations of human-related planetary spheres: (1) that we are contributing to a massive, perhaps out-of-control “technosphere” (see Peter Haff's work on this) and (2) that we are contributing to a Leviathan-like “infosphere” (Luciano Floridi; Alexander Wilson). These two concepts are less utopian—and, in their combination, they emphasize an important distinction between material technology (“technosphere”) and communication-informational technology (“infosphere”). My sense is that they're also being more widely used by scientists and thinkers today than the noosphere.
AI/ChatGPT are significant developments in the “infosphere” that promise also to have an immense impact on the functioning and development of the “technosphere”.
In my piece grappling with some of these "-sphere" concepts (https://www.noemamag.com/the-poetry-of-planetary-identity/), I proposed that in the 21st century the noosphere concept would be better reformulated as an overarching aspiration to keep three planetary “spheres” in harmony; i.e. we are closer to having/being a "noosphere" whenever we have harmony between the biosphere, the technosphere, and the infosphere. (I think this is in the spirit of how you're adapting/tempering the utopian concept in your piece.)
So with AI, the challenge is responding to how this particular leap in the “infosphere” not only affects us cognitively, economically, and socio-politically but also affects other non-informational technologies (“technosphere”) that directly impact humans and other living organisms (“biosphere”).
In other words, the better high-level metaphor may not be a “singularity” toward which we’re heading. Instead, AI is a specifically “infospheric” lurch occurring in a complex tripartite planetary system, and the collective task is to re-harmonize this system. Otherwise, 'progress' in AI may actually be taking us farther from a functioning noosphere and deeper into an exaggerated “infosphere” that is out of step with other key aspects of the human and more-than-human world. A kind of 'planetary misalignment problem.'
Studied Teilhard in college in the 1960s. He “blew” my fundamentalist Christian brain and changed my life. Gratified to see his resurrection in your essay. Thanks.
I agree that this will require a new paradigm. Nice to see something different than the usual fear-porn.
The idea of a united humankind living in a blissful utopia seems contrary to the evolutionary process. The strong survive, the weak die off. But it does seem that mankind, on the whole and over eons, has moved towards a more moral and inclusive stance. Two steps forward and one step back, of course, but our consciousness and clarification of morality, especially since the Enlightenment, have expanded beyond "my tribe good, your tribe bad." Look at Christ-figure myths, where moral strength is prized over physical strength. The question becomes, as we intellectually and spiritually evolve: Does moral strength overtake physical strength as evolution's winner?
Natural selection gave us cooperative instincts as well as competitive ones. So we have something to build on.
Think about that last question in the context of the evolution of A.I. There is a certain dispassionate logic to cooperation over competition as abundance supersedes scarcity.
Interesting read, but visions of global unification by ANY means are utter fantasy. The ancients understood humans' propensity for mischief far better than we do, with all our 'thinking' technology, organized religions, and ideological movements. Sooner or later, these either peter out or take over, crushing independent thought and freedom of movement with them. If humans were all moral and good, AI might be harnessed as genuinely useful; but alas, we're a jar of mixed nuts. I have been a little more confident in Marshall McLuhan's theories, which were a little more grounded in reality, but no one person or group will ever have the answers, and isn't that wonderful?
Sounds wonderful to me; the diversity and differences among us human beings appear to make us more sustainable as a species, in addition to all the richness of difference to experience while living, despite the chaotic process.
FYI: that future you are describing is simulated.
I’m concerned that the entire idea is too anthropocentric. Evolution is not a hierarchical process culminating in the human brain and then extending to human technology and its implications in the noosphere. It’s a many-branched process whose survivability is demonstrably enhanced by the capacity of the systems and entities that make it up to cooperate more than they compete.
Yes, it's a branched system. But it was very likely that one of its branches would lead to a species capable of launching technological evolution. At least, that's my argument, which I've made at length in various places, including my book Nonzero.
I'll have a look at your book, Robert. I think the obsession with ourselves as the only potentially intelligent form of life, or indeed existence, is narrow. But I do appreciate that you too have thought about this in depth! My book, Philosophy as Practice in the Ecological Emergency (Palgrave, 2023) sets out some of the justifications for considering that the more-than-human might actually be the main agent in the life of the human. So that's another thing to consider! I will look for your book, and thank you.
Does the more-than-human emerge from humans, or is it something else?
And you believe it to be *causally* more impactful than humans ?
Let's not forget that most of the branches die off or lead to extinction. But as long as at least one survives and thrives, then the possibilities keep expanding. And the evolutionary process seems very good at finding those successful branches. In that sense, there is a certain inevitability to technological evolution.
A lot of the dead branches may have been much more successful with some tweaking, or even the removal of non-deliberate destruction.
Always be mindful of the unknown, *all kinds*.
I think you should come up with a new (“noo”) name. I can’t take it seriously for some reason, and I suspect that might be part of the reason the idea isn’t more widely known.
Maybe noo is too close to moo, which feels goofy
It is awkward, I agree
Not looking too great on the current trajectory. Rather cart before the horse.
I both read and listened to the audio of this piece on a subject that has always been close to Bob’s heart. It would have been better if Bob had read this one. Unfortunately, the audio reader mangled Teilhard’s name, calling him “Teelhar” de Chardin. Following the audio narrator, I will occasionally mangle it further and call him “Tarheel” de Chardin, despite Teilhard’s manifest failure to attend school in North Carolina.
Now “Tarheel” was a Jesuit priest who believed a Christian god was the unifying principle of the universe. He also believed that humans, at least (if not dogs), would evolve toward and ultimately reach an Omega Point which, although quite vague in its details, was clear in conception: a point where the goals of humanity are realized in the emergence of a spiritual global mind in which humanity becomes unified with the mind of Christ. This is a peculiarly religious and Catholic conception and is essentially just a second coming of Christ, where mankind is “saved” (whatever that means) and our souls live happily ever after. Bob is attempting to transmute this essentially religious mystical vision into a semi-scientific speculation in which humanity is unified globally in a sort of end of history. For Bob this would be a purposeful evolution toward a goal, and because religious myths tend to reflect an ultimate unification with god, this belief makes Bob very tolerant and respectful of religious myths that tend to parallel his own vision.
Of course, Bob’s vision is purely speculative, although he argues it is analogous to evolution by natural selection, which, when it succeeds in avoiding extinction, tends to evolve ever more adaptive species over time. This process itself, Bob believes, could be the “purpose,” or ultimately realize the “purpose,” of the universe.
But let’s examine this notion of “purpose” or “telos” a little more closely.
Whose purpose would this be? Certainly it is not a human purpose. Nor does it seem to apply to dogs or other species or any alien life forms that may be kicking around. Humans are not going to be creating a global mind in communion with rats and cockroaches. So it seems Bob's and Teilhard's conception includes only the human species. Bob's view of this kind of extrinsic, objective, superimposed universal purpose is that it does not matter whether anything or anyone wants it or not; it is just the process of an evolving universe. But even if that speculation were true, I don't think most people would have any reason to care about it or voluntarily cooperate with it (assuming a non-deterministic universe). It is a purpose that would be largely irrelevant to human life. So if you think moral realism is a stretch, Bob's conception of an objective universal "purpose" stretches moral realism by many light years.
As we humans don't know about this purpose or have any reason to care about it, why would it matter to anyone that humanity would reach an Omega Point at some time in the dimly speculative future, and what would faith in such a revelatory future mean for us? The answer is probably nothing. If this Omega Point is determined to occur by the laws of physics, then why should we even care about it? (And Bob is really a determinist who denies human free will, while occasionally, and most unconvincingly, hedging that he is “agnostic” about it.) If this ultimate global unification of humanity is not determined, then how is it a purpose of the universe? In that case Bob thinks we should “help it along” by avoiding anything that reeks of human conflict (for example, the war in Ukraine or any war that is “in violation of international law”). While I want to be as charitable as I can to Bob’s views, and while it would be nice to think humanity might end up as one unified big happy global mind, this belief would be a purely anthropomorphic, and quite orthodox, essentially religious faith. Teilhard’s vision applies only to humans because the Christian faith really only concerns itself with humans (having failed to acquire many followers among other mammals) and “Tarheel” was deeply Christian. He just jazzed up his Christianity with the grandiose poetic vision of ultimate salvation and human community, but his vision does not differ in essence from any other Abrahamic religious orthodoxy.
I don't want to come off as a heel here, but Bob’s attempt to differentiate his vision of a “Teelharian” secular global human salvation ultimately fails, because that vision isn’t really any more credible or concrete, or any less vague or speculative, than a second coming of Christ. The Omega Point is really just a mythical “reverse apocalypse.” And we all know how Bob hates an apocalypse.
So sorry, Bob: visions of an ultimate human-centric salvation, like Trix cereal, are just for kids.
Unbelievable good can come from AI. The things standing in the way are a lack of caring and love for each other, the desire for power over each other, and material gain.
I can see this. It makes sense. We are creating Pi in the sky.
Agree with your perspective, and would like to offer a place to elaborate: that spot where artificial and human intelligences meet. More specifically, there is a cornucopia of "Tools for Thinking" that range from mind mapping programs to zoomable whiteboards, systems modelers and outliners on steroids. How these all interact with GenAI is important. Here's my mapping of your piece above: https://bra.in/2jkPJw
No, Johnny. The more-than-human is everything that is beyond the human’s hamburger joints and educational institutions and fishing nets and faux fur. It’s also the microbes in your gut and the subatomic nature that directs much of what is human. I don’t understand your second question. Causally more impactful on what? Stars and galaxies are more than human; so is the universe, and size-wise they pack a punch that leaves even Elon in the shade.
You write, "Both men expected the evolving noosphere to carry humankind to a good place."
This is a reasonable claim IF we are discussing time scales in the thousands of years. This is the first time we have tried to create an interconnected high tech civilization, and it's unlikely we'll get it right on the first try.
The more likely scenario is that of the ancient world, where the Roman Empire collapsed, followed by a period of darkness, followed by the Enlightenment and the miracle of the modern world. This cycle is likely to repeat itself countless times until humanity arrives at what might be called a permanent "good place".
Another option is that the noosphere destroys itself and us along with it, and heaven turns out to be a good place, as is suggested by some religions and near death experiences.
You write, "And both expected that place to involve the unification of humankind."
This is idealistic to the level of foolishness. The primary reason we won't see the unification of humanity is that human beings are made of thought, an electro-chemical information medium which operates by a process of division. That is, we view reality through a lens whose purpose is to divide a single unified reality into conceptual parts. So long as we are in a form that we today would recognize as being human, we will be dividing, dividing, dividing.
We are even divided within ourselves. We say "I am thinking XYZ", with "I" being experienced as one thing, and "XYZ" being experienced as another.
More along these lines here:
https://www.tannytalk.com/p/article-series-the-nature-of-thought
How have you performed all of the amazing probabilistic calculations in your comment? May I see the variables you reference in your model?
I think the best use of AI would be if individuals could use it to protect themselves from malefactors, including government tyranny, while connecting with people they want to connect with.
Kind of like what we imagined back at the start of the World Wide Web...what could go wrong?