That the Cartesian idea of mind-body dualism still holds so much sway goes some way towards explaining why apparently no one picked up on the incoherence of LaMDA claiming to sometimes *feel* sad or happy. Sadness and happiness are deeply embodied states--in order to feel an emotion, one needs a body. Sadness typically manifests as a physical heaviness ("a heavy heart"), accompanied by sensations in and around the eyes, and happiness often as a feeling of physical uplift and radiance in the face.
So there's no sense and meaning present when a digital machine whose sole function is to spit out words and sentences claims to feel these emotions. Rather, a human operator or reader reads AI-produced words on a screen and then (unconsciously) assigns the feelings those words evoke in him or her back to the machine.
Thanks to an incessant drive for abstraction and conceptual analysis--an effort to carve up the world into freeze-framed "this" and "that" constructs in the hope of understanding it (a process Iain McGilchrist masterfully exposes in "The Master and His Emissary")--we've largely become James Joyce's Mr Duffy, who "lived at a little distance from his body." We've made ourselves into (partially) disembodied beings, more and more taken in by abstractions like "intelligence" and "consciousness" that lose all meaning in the process because they're no longer grounded anywhere (besides in a huge pile of additional concepts).
There's nothing useful to figure out about "consciousness" conceptually; what's very useful is to step out of conceptual framing and just dip into direct sensory experience--into what's really and immediately there. That beats trying to live life solely on second-hand knowledge (AI being one example of elevating purely conceptual knowledge and discounting direct experience in the vain hope of transcending the mundane and undesirable).
"Sadness and happiness are deeply embodied states--in order to feel an emotion, one needs a body." and "So there's no sense and meaning present when a digital machine whose sole function is to spit out words and sentences claims to feel these emotions." Yes, listening to the first four minutes of Lemoine&friend the responses do seem 'artificially' constructed and while an impressive improvement over Eliza is this categorically different? Btw, Lemoine can't even pronounce the novel of Hugo correctly, AI repeats the wrong pronunciation but does recognize what Lemoine is referring to. But AI's further responses on this novel indicated nothing at all similar to (embodied&embedded) human intelligence. As to consciousness, well, as mentioned, the definitions are all over the place.
" But AI's further responses on this novel indicated nothing at all similar to (embodied&embedded) human intelligence."
Yep. This rebuttal by Marcelo Gleiser (a physicist) highlights this as well (and cautions against hubris and projection): https://bigthink.com/13-8/google-ai-engineer-sentient
One very salient passage from the article:
"We do not know how to define consciousness, and much less do we understand how the human body engenders it. To be conscious is not simply to respond to queries in a conversation. To train machines to learn grammar cues, vocabulary, and the meanings of words, is not the same as creating thoughts and truly having the ability of knowing — not responding to prompts, but knowing — that one is alive. "
Thanks for the link. Btw there are some very interesting discussions between Iain and Alex Gomez-Marin on youtube concerning Iain's last work 'The Matter with Things'.
Many thanks--I'll check these out!
There's also a McGilchrist channel: https://channelmcgilchrist.com/
Dialogues: Episode 1 is https://www.youtube.com/watch?v=s2ygDb2CozE.
They're at Episode 8.
That’s really interesting. It tells us something about how emotions evolved that they just piggy-back on the physical sensations that had already evolved for sensing the external world.
It’s weird that when people ask what makes consciousness special, they go straight to emotions or feelings first, even though both can be traced to measurable physico-chemical processes! One could take certain drugs or have certain types of brain damage that dull emotions but still be “conscious.”
I think a machine could (in theory) experience emotions, but they wouldn’t be manifested in “feelings” so much as in gains or losses in some internal variables held in memory. It could have a conscious experience, but it would be something very different from the one we’re used to.
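To make the "internal variables" idea concrete, here's a minimal toy sketch (my own illustration; the names `valence` and `arousal` and the update rule are invented for this example, not taken from any real AI system): "emotions" as bare scalars in memory, nudged up or down by gains and losses, with no body anywhere in the loop.

```python
# Toy sketch: machine "emotions" as internal variables updated by gains/losses.
# Purely illustrative -- not a claim about how any real AI architecture works.

class ToyAgent:
    def __init__(self):
        # Emotion-like state: just scalars held in memory, nothing embodied.
        self.valence = 0.0   # roughly "happy vs. sad"
        self.arousal = 0.0   # roughly "calm vs. agitated"

    def experience(self, reward: float) -> None:
        """Nudge the internal variables toward recent gains or losses."""
        self.valence = 0.9 * self.valence + 0.1 * reward
        self.arousal = 0.9 * self.arousal + 0.1 * abs(reward)

agent = ToyAgent()
for reward in [1.0, 1.0, -2.0]:
    agent.experience(reward)
print(agent.valence, agent.arousal)  # numbers in memory, not felt sensations
```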
I’m partial to the idea that consciousness (or just self-awareness?) evolved in the brain as a mechanism to help us communicate and cooperate (I forget exactly who first articulated this view). It gets easier to understand each other when we can imagine what it’s like to be that other person (cognitive empathy!), and we can only do that if we first know what it’s like to be ourselves.
I guess this corresponds to the second footnote in the piece: Consciousness emerges from the patterns of information in the brain. It’s not a separate thing, but also not really observable from the outside. An AI could develop consciousness, but it could also develop an algorithm to mimic consciousness and I don’t know if we really know how to tell the difference!
"it could also develop an algorithm to mimic consciousness and I don’t know if we really know how to tell the difference!"
That's a key point, and one that also applies to every being "I" consider sentient (by "I," I mean the first-person view). Over "here" I am aware that I'm conscious, but I can never be sure that the other people (or animals) around me are also conscious. It's a very reasonable inference but it's only that, inference, not direct experience. (Incidentally, in pramana theory, the theory of the valid instruments of knowledge in Asian philosophy, the order from most reliable to least reliable is direct perception, inference, analogy, and testimony/hearsay--in our culture, we have reversed that order, with some disastrous consequences, just witness cable news.)
And another good and related point that you raise is that when people talk about consciousness, they often mean things like emotions, thoughts, sense impressions, etc., all of which are somewhat experimentally traceable with neural signatures. These phenomena are akin to a movie projected on a screen. But what about the screen itself, on which experience seems to "play out"? To what/whom do phenomena appear? What is this fundamental awareness, the "pilot light" of experience? That's a question best probed through direct experience, for instance, by turning the light of awareness/attention "around" to see who/what is looking. And I don't think it would be fruitful to ask an AI to do that--it would have a pat answer in the form of words, whereas truly sentient beings may experience befuddlement, wonder, and realization beyond concepts and ideas.
Yeah, there are so many theories of consciousness. Or rather hypotheses. Actually, probably only conjectures. Oh them qualia.
Scientific theories consist of explanations that are hard to vary. Are there any such hard-to-vary explanations of consciousness? And does any such explanation make predictions? Can it be falsified? Does it solve any problem?
The problem of consciousness is also connected to the problem of free will. I think there are good arguments against the existence of free will. Any theory of consciousness would have to address that issue.
Having said that, what really makes sense to investigate is the phenomenology of consciousness--the direct experience, as Martin put it. Unlike modern science, people in the East have done such investigations for hundreds of years. Very useful insights. Unfortunately, many of them made an unjustified step from subjective experience to a claim about the ontology of the world. On the whole, a great puzzle, and often a pretty confused field, starting with the lack of a proper definition.
P.S.
"One reason I paid so much attention to epiphenomenalism is that I consider it science’s unofficial view of consciousness"
Well, eminent scientists believed, and it was taught as fact, that consciousness *causes* the collapse of the wave function.
P.P.S.
Since you mention Wittgenstein: how about his assertion at the end of the Tractatus 😉
I’ve listened to you address this issue on many podcasts, but after reading this newsletter I think I finally understand what you were talking about.
Thank you so much Robert. I am eternally grateful for your insight
My pleasure!
Why does life have to have a meaning?
Lovely article. Only ever so slightly miffed there was no tip of the hat to Idealism and your recent podcast guest Bernardo Kastrup? You missed a great plug.
Yeah, there are a lot of relevant conversations I could have (and maybe should have) plugged.
Bob ties himself in knots by assuming that consciousness is epiphenomenal. If consciousness in fact didn't do anything, then it would be very hard to explain why we have it. But consciousness very obviously is not epiphenomenal: it is what allows us to navigate the world and achieve our needs and desires, and it helps us decide what we want and how to get it. There is no mystery here. Consciousness evolved because we need it to survive. And that's why, as far as we know, all animals are conscious. Without consciousness, animals and humans would not know what to do or even how to find a restaurant. This is so obviously the case that it is almost perverse to insist, in the face of all the conscious decisions we make every minute of every day, that consciousness doesn't do anything. If people could operate without consciousness, then we could all be mindless zombies--an inconceivable possibility.

Because Bob starts out with epiphenomenalism, he can't explain consciousness except as something that is evolutionarily superfluous. The reason he gets into this bind is that Bob is a determinist who doesn't believe in free will but instead thinks everything happens based on some primitive notion of cause and effect that denies us the choice of whether to go to McDonald's or Burger King. In Bob's view we don't have those choices. So Bob says no free will, and consciousness doesn't do anything. But he concedes that the only way people who think "meaning" is something important can give their lives meaning is consciousness. Well yes, obviously people decide on their own what meaning they think their lives have, if any.

So having twisted himself in knots to avoid free will, Bob then comes back and claims that life has a universal purpose and we have to live out this purpose whether we like it or not. To summarize: Bob believes that consciousness does nothing, there's no free will, life is teleological and is moving and evolving with some grand design, and finally this teleology gives life its meaning. You cannot make this stuff up, because Bob already did. But it's certainly a confusing muddle. The much more straightforward scientific view, that consciousness evolved for the very good reason that we need it to survive, does not appear to have occurred to him. Go figure.
Read *The Hidden Spring: A Journey to the Source of Consciousness* (2021) by Mark Solms for the best and most recent review of consciousness and its source in the brainstem!
According to the epiphenomenal theory of consciousness, your conscious experiences are not causing you to believe you are having conscious experiences! A scientific theory that explains how people come to believe they are experiencing the color red or pain or whatever must be really interesting! I would insist that it is a theory of consciousness. Epiphenomenalism is a useless distraction that, literally according to its biggest supporters, explains no observations at all.
Imagine the following scenario. Aliens make it to Earth. They are technologically advanced. We can't see them, but they can observe us. The alien president orders the alien scientists to report to him everything that they can find on Earth. What is there? They go to work, and a few days later they report everything they could find. The president watches the show. On one of his monitors he sees a being that the scientists called a Homo sapiens sitting on a couch. Suddenly that being gets up from the couch and heads to the fridge. The alien president wants to know from his scientists: why did he do that?
The scientists go to work and after a few days report their findings. That person got up, they explain, because some muscles contracted. That was the cause of getting up. And the muscles contracted because they received a nerve signal from the motor cortex. Why did that signal come from the motor cortex? Well, because, they explain, a neuron fired and sent the signal on its way. Why did that neuron fire? Well, the signal was forwarded from another neuron, and so forth. No mystery there. (And if you are interested in following up the prior history of physical causes beyond the first 5 seconds, or milliseconds, check out Robert Sapolsky's story in his book on behaviour.)
So, there is a complete, solid causal chain explaining why that man got up. Nowhere in that causal chain does a spirit appear that made any decision. Nor do we need one for an explanation.
And yet, there are people who still want to hold on to the possibility of a mind making decisions. Well, in that case the mind would have to intervene somewhere in the causal chain and push the right neuron. But that would mean we have two causes: a mental cause for getting up from the couch and a physical cause. Is that possible? Well yes, sometimes events are overdetermined, as it is called, like when someone is shot and struck by lightning at the same time. Either cause alone would have been enough to kill him.
But overdetermination looks like the wrong model for what is going on in the mind/brain case. Do I want to say that if my motor cortex were not firing, my arm would still go up because I decided so? The physical effects are not overdetermined, i.e., they don’t have two independent causes like in the lightning case. So, if you still want to play the *mind has effects* language game, you could say that when I decided to get up or wanted to reach for a beer--i.e., when I talk in terms of decisions and wanting, etc.--I am in fact referring to physical processes giving rise to the behaviour.
However, there is another consideration. If there is a mind that intervenes somewhere in the causal chain to push the right neuron, it would need some energy from somewhere; otherwise it would violate the law of conservation of energy. It is also not clear how something imagined as immaterial could interact with something material. But anyway, scientists have searched for that mysterious source of energy and couldn't find any. Nowhere in the brain was there any sign of such an influence or extra source of energy.
Actually, I imagined those aliens to be far advanced. But we are pretty much there today. And therefore, many theories about the mind that seemed plausible in the past are not tenable any more. It may also be clear now where the idea of epiphenomenalism comes from. A famous analogy stems from Huxley in Darwin's time: he compared mental events to a steam whistle that contributes nothing to the work of a locomotive 🙂
P.S.
Another fascinating finding in neuroscience is the following. You can cut the physical brain into two halves, and you get two consciousnesses, with separate perceptions and separate memories. Now, square that one 🤔
This was a great article!! I have two thoughts:
1. Press agent theory:
I recall you discussing a "press agent" theory of consciousness in one of your books (I think it was The Moral Animal, but don't have a searchable copy, so I'm not sure). If I remember the idea correctly, it posits that the reason only some things are conscious is that they are the things that are advantageous to communicate to others. Did I get the basics right, and if so how does that idea fit in with the framework you've discussed here?
2. No evolutionary purpose doesn't necessarily mean defying evolutionary explanation:
It seems to me that there's a difference between saying something doesn't serve a purpose, and saying it defies explanation. In your shadow play example, the shadows don't serve a purpose, but they also don't defy scientific/material explanation.
If an epiphenomenal consciousness wasn't selectively deleterious (if it wasn't energetically costly, or had only negligible costs, etc.), I don't think there would be a reason that it must serve an evolutionary purpose by having its own selective advantage. In the "antagonistic pleiotropy hypothesis" of senescence, certain old-age phenotypes don't have an evolutionary "purpose"--but the genes that cause them do (via an evolutionary "purpose" in early life). So I could imagine a scenario where the genes that lead to a truly epiphenomenal consciousness evolved because they were advantageous for the other effects they have (on brain information processing, etc.)--and the "shadow" of consciousness they cause is irrelevant enough from an evolutionary standpoint that it wasn't selected against. To be clear, this scenario wouldn't explain why consciousness exists (and I'm not sure whether it's even scientifically testable)--but it at least seems a plausible scenario where consciousness itself doesn't defy scientific explanation. A toy simulation of the idea is sketched below.
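To make that scenario concrete, here's a toy Wright-Fisher-style simulation (my own sketch; the population size, selection coefficient, and the framing of consciousness as a neutral "shadow" trait are all assumptions for illustration): selection sees only the allele's direct benefit, yet any selectively neutral side effect the allele carries gets fixed along with it.

```python
# Toy Wright-Fisher simulation: an allele favored for its *direct* effect drags
# a selectively neutral side effect (the "shadow") to fixation with it.
# Illustrative only; all parameters are arbitrary assumptions.
import random

N = 1000      # population size
s = 0.05      # selective advantage of the allele's direct effect
freq = 0.05   # starting allele frequency

for generation in range(2000):
    # Selection acts only on the direct effect; the shadow is invisible to it.
    w_bar = freq * (1 + s) + (1 - freq)   # mean fitness
    p_sel = freq * (1 + s) / w_bar        # post-selection allele frequency
    # Genetic drift: binomial sampling of the next generation.
    freq = sum(random.random() < p_sel for _ in range(N)) / N
    if freq in (0.0, 1.0):
        break

# The allele usually fixes, and with it the neutral "shadow" becomes
# universal -- present in everyone without ever being selected for itself.
print(f"final allele frequency: {freq}")
```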
Anyway, thanks for reading. I absolutely love your books and the newsletter! Thanks for what you're doing for the world.
Re 2: By 'can't explain' I mean science can't explain an epiphenomenal consciousness in the sense that it can (in principle) explain every other feature of a living organism (i.e. all the physical features).
If your book is out of print, can't you get back the rights and distribute it yourself?
Yes, I actually own the rights. It's the distribution part that's hard.
You can make your book available for free from your webpage, or tie the download to a credit card/PayPal payment--DTC, direct to consumer. Btw, do you know the recent book by Kathryn Judge, "Direct", about supply chains?
This is fascinating. I always enjoy when you delve back into evolutionary biology. Would you also touch on how this might intersect with the Buddhist view of non-self?
Well, most of the evolution-Buddhism synergies I see I talked about in my Buddhism book. Can't think of anything new I'd say in the context of this particular argument.
I'd nitpick here to say that Buddhism has a *lot* to say about consciousness--after all, some schools (like Yogacara) posit up to eight different consciousnesses (one for each of the five senses, plus the mental sense, the sense of self, and a "storehouse" consciousness).
Once one becomes familiar with and engages with these views (and some of those in Daoism and Advaita Vedanta), it becomes very clear what an impoverished and distorted view Western philosophy and science have of consciousness--e.g., the "hard problem" is only hard because it's a logical consequence of a view that rather arbitrarily (and lazily) separates the world into "physical" and "mental."
This kind of dualistic thinking is what keeps us mired in contradiction and conflict--time to see the flimsiness of (largely) Cartesian thinking and how heavily it has biased us towards seeing the world as fragmented into myriad separate things, including ourselves.