29 Comments
Mar 30, 2023 · edited Mar 30, 2023 · Liked by Robert Wright

I am not particularly surprised at ChatGPT’s cognitive empathy and theory of mind. It was designed to be a prediction machine, and humans do behave predictably. Cognitive empathy and theory of mind behaviours are described in the data ChatGPT was trained on.

I think the main difference between a living being and ChatGPT is that living beings are spurred to action based on their brain’s reward/punish feature. We do things because we “feel” like it in anticipation of the satisfaction (reward) at the end. Just like we can predict when and how salmon are likely to migrate up the river to spawn, but have no particular desire to join them in the river on their journey, ChatGPT should be able to predict what we “feel” and how we react in a situation without any interest in doing it itself. The danger in such an AI is when humans start letting ChatGPT do things based on its predictions.

An interesting exercise would be to feed ChatGPT whatever you have done so far of your cognitive empathy book and tell it to finish it off for you. Or have you already tried that?

Mar 30, 2023 · Liked by Robert Wright

Hi Bob (isn't that a drinking game?)

The emergent behavior evidenced by these large language models is simply astounding. Equally astounding is the obtuseness of those who refuse to contemplate the possibility that these machines are sentient, or at least could become sentient in the near future. I can only assume they have failed to grok the relevance of computational universality, and have not appreciated the incredible way complex behavior can emerge from simple systems such as cellular automata. I would refer them to the thoughts and work of my friend, colleague, and mentor Ed Fredkin (whom I suspect you know well, having literally written the book on him).

https://www.scribd.com/document/235105454/On-the-soul-Ed-Fredkin
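
If anyone wants to see the point about emergence concretely, here is a toy sketch of my own (in Python, nothing to do with Fredkin's actual models) of Wolfram's Rule 110, a one-dimensional cellular automaton whose three-cell update rule is known to produce strikingly intricate, even computationally universal, behavior:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours, via a fixed 8-entry lookup table encoded in the number 110.
RULE = 110

def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32):
    row = [0] * width
    row[width // 2] = 1              # start from a single "on" cell
    for _ in range(steps):
        print("".join("#" if c else " " for c in row))
        row = step(row)

if __name__ == "__main__":
    run()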

I took the liberty of replying to the newsletter in a personal email that I hope you have time to read. In any case, keep up the good work. I don't always agree with you about peace and war, but that's a matter for another day.

Mar 31, 2023 · Liked by Robert Wright

Next week's Nonzero newsletter: GPT-4 capable of writing Substack newsletters about GPT-4 being capable of cognitive empathy

Mar 30, 2023 · edited Mar 30, 2023 · Liked by Robert Wright

Bob, great, thought-provoking stuff as usual. On your “OK, It’s Time To Freak Out About AI” post I shared my concerns about how ChatGPT will disrupt a lot of white-collar services before people realize it, and I thought the following update might interest you: in the past week I’ve been using ChatGPT-4, and it helped me write an RFP for IT management for my company. With the parameters I gave it, it delivered text that got me 90% of the document I was looking for in ten seconds. I still need to do a little review and editing, but it literally saved me and my partners probably 2 or 3 hours of drafting the document ourselves, sending it back and forth to edit, etc.

After discussing it with them, we’ve decided that we have to invest time “playing around” with it and testing its capabilities on other aspects of our business (HR consulting). For example, I provided it with a 60+ page employee handbook and asked it to prepare a letter for employees who request a leave of absence, based on the policies in the document. The first result was good, but with a revised prompt from me it was great. What I’m learning is that the effectiveness still depends on the human knowing what they’re doing and what they’re asking the AI to do. I showed a colleague who’s been doing this for 30 years and her response was a literal jaw drop.
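
For anyone who wants to move that kind of experiment out of the chat window and into a script, here is roughly what it might look like against OpenAI's Python client as it stood in early 2023. This is only a sketch: the file name, prompt wording, and key handling are placeholders, and a real 60+ page handbook would have to be chunked or summarized to fit the model's context window.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Hypothetical handbook file; long documents need chunking/summarizing first.
with open("employee_handbook.txt") as f:
    handbook = f.read()

prompt = (
    "Using only the leave-of-absence policies in the handbook below, draft a "
    "letter to an employee who has requested a leave of absence. Cite the "
    "relevant policy sections.\n\n" + handbook
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are an HR consultant drafting employee communications."},
        {"role": "user", "content": prompt},
    ],
)

print(response["choices"][0]["message"]["content"])
```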


After reading this it struck me that a great way to test theory of mind (or probe the AI on the subject) is to ask it about comedy and humor. A lot of our humor and what we find funny relies on having a developed theory of mind. I just asked it a series of questions about the "Four Candles" sketch from the British "Two Ronnies" show from the 70s/80s. The sketch relies on the fact that Barker is in a hardware store asking for things like "fork handles," but due to his accent Corbett thinks he's asking for "four candles," and so on. "Who's On First" is another good example, where the humor relies on the audience knowing what each character means in context but also that the characters don't understand one another.
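
If you wanted to run that kind of probe systematically rather than one question at a time, a rough sketch might look like the following (Python, OpenAI's early-2023 client; the questions are just examples of mine, not the ones I actually asked):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Questions probing whether the model tracks what each character knows and intends.
questions = [
    "In the Two Ronnies' 'Four Candles' sketch, what does Barker actually want when he asks for 'four candles'?",
    "What does Corbett believe Barker is asking for, and why?",
    "Why does the audience find the misunderstanding funny?",
]

# Keep the running conversation so each answer can build on the previous ones.
messages = [{"role": "system", "content": "Answer questions about comedy sketches."}]
for q in questions:
    messages.append({"role": "user", "content": q})
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    answer = reply["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {q}\nA: {answer}\n")
```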


That was an interesting recording. I would like to know whether the training-data text contained a great deal of direct cognitive-empathy theory, or whether the model was able to infer a lot of human motivations from all the other text. I don't think data ingestion is a problem for a machine, so there's no reason why entire psychology and sociology textbooks couldn't be fed in.

While cognitive empathy is something worth developing in an AI model, it is worth considering how being a master of theory of mind also opens up the ability to manipulate. You mention conning as one negative example, but the discussions I've seen of what GPT-4 was like 'pre-alignment' are more alarming (like a complete lack of restraint in using things like fear).


This feels like logic, not empathy, no? For it to be empathy it needs to say, “I can see how someone can feel Schadenfreude, and I sympathise with them because *I can imagine myself feeling the same*.” It would not be empathy if it said, “I can see how someone can feel what is described as Schadenfreude, but I would never feel that, because it is not a logical feeling to have for a person who strives to be kind and moral.” The first would be empathy; the second would not. This connects to something I struggle to understand about theory of mind (ToM). I think later research shows that, for example, autistic people who are supposed to not do well with ToM do well on tests like the popcorn/chocolate one later in life. And yet they may not feel cognitive empathy, possibly for exactly the reason stated above, even while they feel emotional empathy. To put it yet another way: this is an example of logic, maybe of ToM, but not cognitive empathy, I think. Very interesting.


Did you ask the bot how it felt about the teacher/student interaction? If it can't tell you how it feels, it does not have cognitive empathy; it just understands what cognitive empathy is, and you can get that from a simple Google search.


One thing that really surprised me: how quickly I forgot that it wasn't actually Bob asking the questions, but a deepfake of his voice. If voice simulation is this good now, I suppose the difference will be indiscernible at some point down the road (like maybe next Wednesday).


It occurs to me that it may require a high degree of cognitive empathy to be able to use ChatGPT effectively. You have to be able to figure out why its responses are not as useful as you expected.


I asked ChatGPT if it used data labelled as empathy. The response:

As an AI language model, I do not have emotions, feelings, or subjective experiences, including empathy. However, I have been programmed to generate responses that are appropriate, informative, and helpful to the best of my ability, based on the input I receive.

While I do not have a specific label for empathy, I can provide information and guidance on the topic of empathy based on my training data and the patterns I have learned from it. If you have any questions or would like more information on the topic of empathy, I am happy to assist you to the best of my abilities.
