29 Comments

I am not particularly surprised at ChatGPT’s cognitive empathy and theory of mind. It was designed to be a prediction machine, and humans do behave predictably. Cognitive empathy and theory of mind behaviours are described in the data ChatGPT was trained on.

I think the main difference between a living being and ChatGPT is that living beings are spurred to action based on their brain’s reward/punish feature. We do things because we “feel” like it in anticipation of the satisfaction (reward) at the end. Just like we can predict when and how salmon are likely to migrate up the river to spawn, but have no particular desire to join them in the river on their journey, ChatGPT should be able to predict what we “feel” and how we react in a situation without any interest in doing it itself. The danger in such an AI is when humans start letting ChatGPT do things based on its predictions.

What would be an interesting exercise is to feed to ChatGPT whatever you have done so far of your cognitive empathy book and tell it to finish it off for you. Or have you already tried that?

Haven't tried it, no. And I can't say I'm optimistic about the outcome. But maybe when I've got more of the book written I'll give it a shot.

Ooh I can see the headlines

- "Cognitive Empathy: The path to global harmony", becomes New York times bestseller

- Unbelievably well written "Cognitive Empathy" surpasses 1 billion copies sold.

- Entire world's industrial capacity turned over to printing copies of "Cognitive Empathy"

Hi Bob (isn't that a drinking game?)

The emergent behavior evidenced by these large language models is simply astounding. Equally astounding is the obtuseness of those who refuse to contemplate the possibility that these machines are sentient, or at least could become sentient in the near future. I can only assume they have failed to grok the relevance of computational universality, and they have not appreciated the incredible way complex behavior can emerge from simple systems such as cellular automata. I would refer them to the thoughts and work of my friend, colleague, and mentor Ed Fredkin (who I suspect you know well, having literally written the book on him).

https://www.scribd.com/document/235105454/On-the-soul-Ed-Fredkin

I took the liberty of replying to the newsletter in a personal email that I hope you have time to read. In any case, keep up the good work. I don't always agree with you about peace and war, but that's a matter for another day.

Nonzero, I assume this is Bob?

I tried to email you by replying to the newsletter but I guess that doesn't work. I wanted to follow up on Ed Fredkin. I worked with him for many years; I think some of his ideas are very relevant today.

Next week's Nonzero newsletter: GPT-4 capable of writing Substack newsletters about GPT-4 being capable of cognitive empathy

Bob, great, thought-provoking stuff as usual. I previously shared my concerns about how ChatGPT will disrupt a lot of white-collar services, and how it will happen before people realize it, on your "OK, It's Time To Freak Out About AI" post, and thought the following update might interest you: In the past week I've been using ChatGPT-4 and it helped me write an RFP for IT management for my company. With the parameters I gave it, it delivered text that got me 90% of the document I was looking for in ten seconds. I still need to do a little review and editing, but it literally saved me and my partners probably 2 or 3 hours of drafting a document ourselves, sending it back and forth to edit, etc.

After discussing it with them we've decided that we have to invest time "playing around" with it and testing its capabilities on other aspects of our business (HR consulting). For example, I provided it with a 60+ page employee handbook and asked it to prepare a letter for employees who request a leave of absence based on the policies in the document. The first result was good, but with a revised prompt from me it was great. What I'm learning is that the effectiveness still depends on the human knowing what they're doing and what they're asking the AI to do. I showed a colleague who's been doing this for 30 years and her response was a literal jaw drop.
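
For anyone who wants to move that kind of document-plus-prompt workflow out of the chat window, here is a rough sketch of how it might look against the OpenAI API. The file name, prompt wording, and model name are illustrative assumptions, not details from the comment above:

```python
# Minimal sketch: feed a policy document to a GPT-4-class model and ask for
# a draft letter grounded in that document. Assumes the openai Python SDK
# (v1+) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Hypothetical file name; a very long handbook may need to be chunked
# or summarized to fit the model's context window.
with open("employee_handbook.txt", encoding="utf-8") as f:
    handbook = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are an HR assistant. Base every answer strictly "
                    "on the policy text provided by the user."},
        {"role": "user",
         "content": f"Policy document:\n{handbook}\n\n"
                    "Draft a letter to an employee who has requested a "
                    "leave of absence, citing the relevant policies."},
    ],
)

print(response.choices[0].message.content)
```

As the commenter notes, the quality of the output still hinges on the prompt: the system message and the request wording are where the human's knowledge of the task does the work.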

After reading this it struck me that a great way to test theory of mind (or probe the AI on the subject) is to ask it about comedy and humor. A lot of our humor and what we find funny relies on having a developed theory of mind. I just asked it a series of questions about the "Four Candles" sketch from the British "Two Ronnies" show from the 70s/80s. The sketch relies on the fact that Barker is in a hardware store asking for things like "fork handles," but due to his accent Corbett thinks he's asking for "four candles," and so on. "Who's On First" is another good example, where the humor relies on the audience knowing what each character means in context but also knowing that the characters don't understand one another.

Yes, this is a really good test, which even GPT-4 fails because it does not understand the end goal when it answers; it only pieces words together one after another based on probabilities. It is really bad at creating jokes for this reason: developing a good joke requires you to start with the punchline and work backwards, which is exactly the opposite of how these language models work. They don't actually know what they are saying.
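
For what it's worth, here is a minimal sketch of that word-by-word process, using GPT-2 from the Hugging Face transformers library as a small, openly available stand-in (GPT-4's weights aren't public, so this illustrates the general left-to-right mechanism, not GPT-4 itself):

```python
# Minimal sketch of autoregressive, token-by-token text generation.
# Each new token is sampled from a probability distribution conditioned
# only on the tokens generated so far.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Why did the chicken cross the road?"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits                     # scores over the whole vocabulary
    probs = torch.softmax(logits[0, -1], dim=-1)       # distribution for the NEXT token only
    next_id = torch.multinomial(probs, num_samples=1)  # sample one token
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)  # append it and repeat

print(tokenizer.decode(ids[0]))
```

Each pass through the loop looks only at what has been produced so far; nothing in it represents a punchline the model is working backwards from.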

That was an interesting recording. I would like to know if the training data text contained a great deal of direct cognitive empathy theory or if the model was able to infer a lot of human motivations from all the other text. I don't think data ingestion is a problem for a machine so there's no reason why entire psychology and sociology textbooks couldn't be fed in.

While cognitive empathy is something worth developing in an AI model, it is worth considering how being a master of theory of mind also opens up the ability to manipulate. You mention conning as one negative example, but the discussions I've seen of what GPT-4 was like 'pre-alignment' are more alarming (like a complete lack of restraint in using things like fear).

Yes, cognitive empathy can be used for ill as well as good--and for lots of kinds of ill. As for the training texts: I suspect the AI's cognitive empathy abilities come more from reading lots of texts about human interaction (including novels, presumably) than from reading texts about cognitive empathy per se. For example, I doubt there's a text on cognitive empathy that gets into an example precisely analogous to the embarrassed-student example in the excerpt. Yet the AI handled that well.

This feels like logic, not empathy, no? For it to be empathy it would need to say: “I can see how someone can feel Schadenfreude and I sympathise with them, as *I can imagine myself feeling the same*.” It would not be empathy if it said: “I can see how someone can feel what is described as Schadenfreude, but I would never feel that, because it is not a logical feeling to have for a person who strives to be kind and moral.” The first would be empathy; the second would not. This connects to something I struggle to understand about theory of mind (ToM). I think later research shows, for example, that autistic people who are supposed not to do well with ToM do well on tests like the popcorn/chocolate task later in life. And yet they may not feel cognitive empathy, possibly for exactly the reason stated above, while they may feel emotional empathy. To put it yet another way: this is an example of logic, maybe of ToM, but not cognitive empathy, I think. Very interesting.

Sympathizing isn't an inherent part of *cognitive* empathy (which just means understanding what thoughts and emotions someone is having), though it does tend to accompany *emotional* empathy (which involves identifying with the feelings someone is having).

Gotcha. Cognitive empathy is an ability to see another’s perspective.

Did you ask the bot how it felt about the teacher/student interaction? If it can't tell you how it feels, it does not have cognitive empathy; it just understands what it is, and you can get that from a simple Google search.

*Emotional* empathy requires feeling, because it entails sharing in and identifying with the feelings someone else is having. *Cognitive* empathy, as conventionally defined, doesn't require feeling.

That really wasn't my point, which is that having cognitive empathy and knowing what it is are very different. And ChatGPT does not understand it; it just knows the probable words associated with feelings (which is what you asked it to provide).

I don't think you have to understand what cognitive empathy is to practice it. A child can practice cognitive empathy without knowing what it is or being able to articulate a subjective experience of it.

My point is that ChatGPT does not understand anything about cognitive empathy. It just uses probabilities of words and labels to construct sentences.

Well sure, in that sense it doesn't really understand anything, let alone cognitive empathy. Ultimately it's all just 1s and 0s.

So then the question becomes: what is the most useful way to talk about how its process presents?

A fine conversation, yes.

My issue is that Bob titled this writeup "ChatGPT is capable of cognitive empathy". He either believes this or the title is clickbait, which would make the write-up the type of media that he so opposes. Either way, it worries me.

IMO it is a mistake to separate emotion from cognition. Even in your query you used both feeling and thinking. IMO emotion is the foundation of cognition, so they are inseparable. I don't see how separating them helps us understand ourselves and others.

One thing that really surprised me--how quickly I forgot that it wasn’t actually Bob asking the questions, but a deep fake of his voice. If voice-simulation is this good now, I suppose the difference will be indiscernible at some point down the road (like maybe next Wednesday).

It occurs to me that it may require a high degree of cognitive empathy to be able to effectively use ChatGPT. You have to be able to figure out why, when its responses are not as useful as expected.

I asked ChatGPT if it used data labelled as empathy. The response:

As an AI language model, I do not have emotions, feelings, or subjective experiences, including empathy. However, I have been programmed to generate responses that are appropriate, informative, and helpful to the best of my ability, based on the input I receive.

While I do not have a specific label for empathy, I can provide information and guidance on the topic of empathy based on my training data and the patterns I have learned from it. If you have any questions or would like more information on the topic of empathy, I am happy to assist you to the best of my abilities.
