62 Comments
Feb 23 · Liked by Robert Wright

My guess is that a human who knows the name of Tom Cruise’s mother still could have trouble going the other way around if asked who her son is. I would think of Tom Cruise being a big word balloon in somebody’s head with a bunch of factoids that spring from it, whereas his mom is just one of those factoids and therefore harder to index if mentioned out of context.

As I said in a previous comment, you should probably have a strong intuition that 1. you can re-pose the experiment with two logically equivalent statements that don't require remembering names/words, and 2. the sentence that isn't explicitly in the dataset can't be produced by an LLM even though the other one can be, because LLMs are next-word-predictors and not logical-inference-machines.

I've definitely seen plenty of non-human logical errors when chatting with LLMs. However, when looking into this particular "reversal curse", it seemed that the questions were asked independently (i.e. in different chat sessions). It struck me that this is actually a very human trait. It's like how if somebody told me a state capital, I could name the state, whereas being asked to name the capital of a state would be harder for me.

The next logical step in your argument would be to ask, "If I remove the problem of remembering the name of things, do you still get the same limitation in LLMs?" It seems that you should have a strong intuition that the answer is yes: since the system is a next-token-predictor and depends on explicit string-sentences in the dataset, it can only derive implicit sentences through a process of something like interpolation.

It feels like motivated reasoning to me to assume some magical properties of the system so that it acts otherwise. But unintuitive things do exist; it should just be incumbent on the people making unintuitive claims to make a carefully crafted, persuasive experiment (i.e. to show that the reversal curse or something similar doesn't exist if you eliminate the problem of naming things).

I'm not clever enough to come up with a good experiment, but it might look something like a statement (which exists in the dataset and can be "recalled" by the LLM) and its double-negation (some weird phrasing/construction that no one uses and that doesn't exist in the dataset).

I tested this a little bit just now with Gemini. I think what's actually going on is that the LLMs are being extremely careful. If you ask generically "Who is Mary Lee Pfeiffer's son?", it says it doesn't have enough info. However if you ask "Who is Mary Lee Pfeiffer's son who is a famous movie star?" it says Tom Cruise.

The weird part is that you can do this in the same chat. You can ask, "Who is Tom Cruise's mother?" followed by "Who is Mary Lee Pfeiffer's son?" and it still says it doesn't have enough info to get the answer. Whereas a human would probably answer, "Uh Tom Cruise? Haven't we just covered this?"

See my two chats here:

https://g.co/gemini/share/fdd5d73af0a7

https://g.co/gemini/share/6b0d82e16c87

You can't rely on this example anymore because, for one, I suspect that, especially at OpenAI, they have a tiger team that is patching these systems to fix what people point out as failures (one way to do that is to add whatever statement is missing from the dataset into it). There have been many instances of failures becoming unreproducible after a few hours of being widely disseminated. So something like the Tom Cruise example should be considered tainted now; you have to come up with new examples (in fact they may have patched all the mother/son/daughter/father etc. pairs).

It's difficult to run experiments on the best systems because they are closed source, are complicated, and have many layers of complexity/processing/indirection. You can reduce some of that by using the completion API and asking something like "Mary Lee Pfeiffer's son is ", but there are still a lot of moving pieces.
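
For what it's worth, a stripped-down probe along those lines might look like the sketch below. This is only illustrative: it assumes the openai Python package and the completion-style model gpt-3.5-turbo-instruct, neither of which the comment specifies.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Bare completion prompt: no chat template, no system message, just next-word prediction.
resp = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Mary Lee Pfeiffer's son is",
    max_tokens=8,
    temperature=0,
)
print(resp.choices[0].text)
```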

Finally, the reversal curse paper did say that whatever system they tested was able to produce the correct answer in-context; in other words, it could answer your second question correctly. I don't find that particularly surprising, because transformers are able to select and shunt input tokens into subsequent layers, making them more likely to be selected for output.

In general I think it's a bad idea to treat these things like magical boxes and to interact with them in the most imprecise way possible. Experiments in any field are usually very narrow and controlled; otherwise you have too many variables that can affect the result.

I agree. I was having fun speculating. I often joke with my wife that even though I have a bachelor of science degree, I'm definitely not a scientist. I just don't have that mindset. I'm more of a vibe guy. I can think logically (which I need to do for my job), but when it comes to setting up experiments or changing only one variable at a time, I'm awful at it. Don't worry, though. I'm just a video game programmer/designer so I don't think I'm putting anybody at risk with this mentality :-)

Gemini is tougher on itself than you are!

https://g.co/gemini/share/46031316c8ca

Interesting. Gemini says, "The Chinese Room highlights the limitations of current language models. While I can communicate and generate human-like text, I may not have the same level of deep understanding as a human." How about that for self-awareness!

Feb 23 · edited Feb 23 · Liked by Robert Wright

On Searle: I haven't read his arguments for a long time, but when I did with my friend, somewhere around 1995, our impression after a long deciphering process was that it was really about consciousness without him saying it aloud. To us, the Chinese room sounded like a critique of functionalism as a theory of mind.

On understanding: I'd define it even more loosely than Bob, something like "a cognitive system integrating incoming information (perception) into its representations in a way that allows appropriate action". The point there is information meeting a representation.

Then there can be many levels of understanding depending on how sophisticated the representations of a system are. At the extreme, in everyday life, people sometimes refer even to computers as "not understanding something".

From this viewpoint, my feeling of the LLM "understanding" discussion is that it's mostly about:

1. Some people have a philosophical fixation on what it means to _really_ understand, in a somewhat or fully metaphysical sense. It may be about "semantic grounding", consciousness, intentionality, and such. Linguists seem to have an issue with grounding and semantics in a way I as a non-linguist don't get.

2. The other thing is that LLMs have representations, that's self-evident, but some people want to emphasize that the representations are inaccurate or superficial, while others want to emphasize that they are surprisingly accurate or deep or general. But shallowness and inaccuracy have no common measure, so this is little more than pointing fingers in different directions.

The third relevant issue is the architecture of LLMs, but people arguing about LLMs don't talk about that much. LLMs generate one token at a time, so their internal state is about that token only, conditioned on the "tape" they have output already, and the tape is their working memory. And they cannot edit the tape, so there is no iteration or internal reflection layer—if you want that for an LLM, you need to use chain-of-thought prompting (CoT) or some such, to simulate self-reflection on the tape. That is all very counterintuitive to us, but it explains many of the stupid errors. LLMs are intuition machines: they spit out whatever comes to mind without the ability to think it through before saying it.

They still must have a plan of the output in some sense, a superposition of meaning, always concentrated in the prediction of the next token, a probability distribution that collapses a bit after each token, as the tape conditioning them grows longer and longer.
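
That loop is easy to caricature in code. The sketch below is purely illustrative (a toy lookup table stands in for a real model), but it shows the key constraint: the only working memory is the growing tape of already-emitted tokens, which gets appended to and never edited.

```python
import random

# Toy stand-in for an LLM: given the tape so far, return a probability
# distribution over the next token. A real model conditions on the whole
# tape via attention; here we just key on the last token.
def toy_next_token_probs(tape):
    table = {
        "<start>": {"LLMs": 1.0},
        "LLMs": {"are": 0.9, "generate": 0.1},
        "are": {"intuition": 0.7, "next-token": 0.3},
        "intuition": {"machines": 1.0},
        "next-token": {"predictors": 1.0},
        "generate": {"tokens": 1.0},
    }
    return table.get(tape[-1], {"<end>": 1.0})

def generate(max_tokens=10):
    tape = ["<start>"]                       # the tape is the only working memory
    for _ in range(max_tokens):
        probs = toy_next_token_probs(tape)   # internal state is about this token only
        tokens, weights = zip(*probs.items())
        nxt = random.choices(tokens, weights=weights)[0]
        if nxt == "<end>":
            break
        tape.append(nxt)                     # appended, never edited
    return " ".join(tape[1:])

print(generate())
```

Chain-of-thought prompting doesn't change this mechanism; it just puts the "reflection" on the tape itself, where later predictions can condition on it.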

A sighted man, a blind man and an LLM walk into a bar. The sightless bartender has a standing policy, for purposes of jokes or thought experiments, of giving free drinks to whoever best evokes in him an understanding of what it means to be sighted.

Each takes a turn telling tales of blindlessness to the blind.

Who wins? Who understands?

I copy and pasted your scenario into Chat GPT-4 and this was the response generated if anyone's interested:

"This thought experiment draws on several layers of understanding, perception, and the conveyance of experiences that are fundamentally subjective. Let's explore each participant's potential approach and the challenges they face:

The Sighted Man

Advantage: The sighted man has direct, personal experience with being sighted. He can draw from his own sensations, emotions, and visual experiences to describe what sight is like. His descriptions can include colors, depth, movement, and the nuances of visual perception.

Challenge: The main challenge is translating visual experiences into a format that resonates with someone who has never seen. How do you describe color to someone who has never experienced it? His success depends on his ability to use metaphor, analogy, and other descriptive tools to bridge this experiential gap.

The Blind Man

Advantage: The blind man shares a common ground with the bartender in terms of non-visual perception and understanding of the world. He might use this shared experience to creatively imagine and convey what being sighted could add to their mutual understanding of the world.

Challenge: Since he has never experienced sight, his descriptions would be speculative, based on second-hand accounts and his imagination of what sight might add to his experiences. His narrative might lack the directness and authenticity of personal experience but could be deeply empathetic and inventive.

The LLM

Advantage: The LLM has access to a vast array of descriptions, accounts, and creative expressions related to sight and sighted experiences. It can synthesize this information to generate descriptions that are rich, varied, and potentially enlightening.

Challenge: The LLM lacks personal experience or the subjective understanding of what it means to see or not to see. Its descriptions, while possibly comprehensive and eloquent, are generated from patterns in data rather than genuine understanding or empathy. It can mimic understanding but doesn't "experience" or "understand" in the human sense.

Outcome and Understanding

Who Wins?: The "winner" might depend on the criteria the bartender values most in understanding sightedness. If empathy and emotional resonance are key, the blind man might have an edge. If vividness and detail are prioritized, the sighted man could win. If breadth of description and creativity are most valued, the LLM might come out on top.

Who Understands?: Each participant understands sightedness in their unique way. The sighted man understands through experience, the blind man through the absence of that experience and imagination, and the LLM through the aggregation of human expressions on the topic. The bartender, being blind, would gauge understanding through the lens of empathy, creativity, and the effectiveness of the descriptions in evoking a sense of what it means to be sighted.

This thought experiment beautifully illustrates the complexities of understanding, communication, and the subjective nature of experiences. It highlights that understanding is not solely about the accumulation of facts or descriptions but also involves the ability to evoke a sense of shared experience or empathy, areas where human experiences and creativity currently have an edge over artificial intelligence."

author

I think I'd have to give that answer an A. Offhand, I can't think of any major element that it's missing.

Thanks, this encapsulates the collision of human assumptions about the uniqueness of their minds with the emerging capabilities of AI--vastly different physical and operational structures yielding similar results. Tangentially, I believe that the human tendency to assume free will (see Sapolsky) will be even harder to maintain in this new context.

I enjoyed Sapolsky, but his book did not do much to grapple with the implications of not having free will.

I accept I do not have free will. Now what? I have some thoughts, but wouldn’t mind seeing a treatment of where we go from here.

One thing free-willers always get wrong: accepting determinism does not free one from determinism. I would like someone to explain to them why this shoe will never drop.

Does the Dharma of Bob take a position on free will? I can’t recall.

Terrifying.

Bob,

This article illustrates three things that I find very frustrating in your writing on AI:

1. Ipse dixit arguments. In arguing that AI has "semantics," you place great emphasis on what you call the "map of semantic space." But giving it the name "map of semantic space" is just labeling. Someone else might give it a different label. To demonstrate that AI systems have semantics in the way that humans do, you need to do more than define AI's operations as involving "semantic space." To truly prove your case, you need to rely on the substance of how the systems operate--and to explain what, specifically, you are referring to when you argue that AI systems are doing more than pattern recognition / pattern application.

2. Confusing correlation with "understanding." AI systems generate speech that often appears consistent with our understanding of words--that is, AI systems generate speech that often correlates with our understanding of the underlying words in the text. That is no surprise, given humans speak and write in ways that reflect their understanding of words, and the AI systems follow the patterns of human speech and writing contained in the training data. You are asserting that the remarkable ability of AI systems to generate text that correlates with our understanding means that they must *have* understanding that is at least in some sense similar to ours. But correlation alone does not prove your case, especially given point 3 below.

3. Not engaging with the counterexamples. Gary Marcus gave you a host of examples that demonstrated that the fact that AI systems generate text that often correlates with our understanding of the underlying meaning of words does not mean that they have understanding in the way that we do. He showed, again and again, that when you take an LLM system outside of the patterns in its training data, it quickly falls apart--and does so in ways that demonstrate a lack of command of the underlying meaning of the words. Beyond the Tom Cruise example, you simply don't engage with those types of cases--and you basically say, "well someday computer scientists may fix those problems, and I bet they'll do so in ways similar to how humans reason." See point 1 above.

I've separately commented on other aspects of your AI positions, and I won't belabor those critiques here. I find your writing on AI particularly frustrating given how much respect I have for you. I think you're an incredible thinker and writer. I'll keep reading and listening because of that.

Feb 28·edited Feb 28

So what would indicate that AIs have "understanding"? If GPT-5 fixed all of Marcus's counterexamples (I forget what they all were honestly - at least the Tom Cruise's mother's son one didn't seem convincing for reasons elsewhere in these comments...), would the no-understanding camp just throw in the towel, or would the goalposts just shift a little bit back for the next round of debate?

The Marcus examples are really worth your consideration--they are relevant to the definition of "understanding" that you propose in your other comment (i.e., being able to predict outcomes in situations even where those situations are "out of prior context"). Marcus's examples show that LLM systems do not meet this definition, because they falter when taken outside the patterns in their training data. For example, although LLMs can accurately recite highly complex mathematical principles, in terms of the actual application of math principles to math problems, they fail to perform certain basic math problems that fall outside their training data. Elementary school students have far better command of those principles than LLMs. That is precisely the point of the "stochastic parrot" position--the LLMs can parrot words in astounding ways, but they lack an understanding of the underlying meanings of those words.

I believe that truly "understanding the underlying meaning of words" involves subjective experience, but I also agree with you (and Bob) that we can't prove / disprove whether LLMs have subjective experience. But what we *can* prove--and what Marcus's examples *do* prove--is that LLM's do not function in ways consistent with having command of the underlying meaning of words, as shown by their inability to apply those meanings outside the patterns of the training data. The math examples, and many others, demonstrate this.

Bob's article--and frankly, his other commentary on AI--fails to meaningfully engage with those counterexamples. He focuses on examples where LLM systems perform astoundingly well in applying patterns from the training data to user prompts. But none of his examples show that anything other than (wondrous) pattern recognition / pattern application is occurring. These examples show that when within the context of the patterns of the training data, LLMs are incredibly good at producing content that *correlates* with our understanding of the underlying meaning of words. But such correlation is not actual understanding--as the Marcus examples vividly demonstrate, by showing that LLM's run aground when taken outside the patterns from the training data.

None of this is "goalpost moving." Seems like a lot of the criticism of the stochastic parrot/auto-complete view of LLMs is motivated by wanting to defend how incredible LLMs are. But that is simply not the issue--as I've said repeatedly in my comments on Bob's work, the LLMs are incredible. But they are just ("just" not meant as a derogatory term--again, LLMs are incredible) performing pattern recognition / pattern application. They have inherent limits that prevent them from having true "understanding" even under your own definition.

Alright - well thanks I guess, I went down that rabbit hole now and replied in the Gary Marcus thread. In the end I don't find the examples (Cruise, Chicken, Addition) to be compelling indicators that these things fail to "understand" in the way humans do. No-Elephants is more compelling - but it's a deficiency of DALL-E not present in GPT itself. Overall - I don't see why there should be a categorical difference between "pattern matching" and "understanding" - they seem to be degrees on a spectrum.

This is correct. Bob's argument isn't sensible and doesn't even rise to the level of being wrong, because as you say he is simply labeling what LLMs do as "semantic" and then asserting that they "understand" semantic meaning. In addition, separating understanding from subjective experience serves no purpose and redefines a commonly understood word. If you separate experience from understanding then you can make all kinds of meaningless assertions like: "the thermostat understands when to turn the heat on" or "the gun understands how to fire a bullet when I pull the trigger" or "the car knows how to start when I push the start button." Understanding is a kind of experience. Without the experience it does not mean anything.

Going down this path of relabeling natural language words is not in aid of anything and serves no purpose since, as Bob admits, his argument doesn't show that AIs have subjective experience (and they don't). But as you say it is even worse that Gary Marcus quietly refuted all of the arguments Bob made in this newsletter and yet Bob did not address a single one of the objections Gary made. Even if this was written before Bob's interview with Gary, Bob should have gone back and edited the piece to address Gary's objections or added a postscript.

author
Mar 1 · edited Mar 1 · Author

I don't think it's accurate to say that I'm asserting that LLMs " 'understand' semantic meaning". (In fact, I wouldn't even use the phrase 'semantic meaning,' since it's redundant as I understand those two words--but I'm guessing that's just kind of a typo, and I'll assume you just mean 'understand meaning'.) I'm saying that (a) John Searle made the semantic representation of symbols a pre-requisite for understanding as he defines understanding; and (b) LLMs have a way of semantically representing symbols. I'm also saying that, as I define understanding, a system for semantically representing symbols is one "element of understanding"--but I wouldn't say that's quite the same as the view you attribute to me. In any event, the big difference of perspective between you and me is that you don't define understanding as I do--which is fine (though, as I say in the piece, I think defining understanding the way you do--as entailing subjective experience--renders discussions about whether AIs 'understand' things pointless).

Hi Bob,

Thank you for taking the time to respond. Apologies, I did not see your response until just now. Thanks for clarifying the narrow claim you were making. LLMs can clearly respond sensibly to inquiries, commands and the like. To the extent that is an element of "understanding," even without the subjective experience or awareness usually associated with that term, your claim may be correct. I just feel that there is a huge rush by writers and pundits to claim the prize of being among the first to attribute humanlike understanding to LLMs. But if LLM understanding is just correlating words together based on how they are ordinarily used (whether you label this correlation making maps of meaning or just statistical filtering), the claim becomes pretty ephemeral. The breakthrough that people want for AGI is subjective experience. If AIs had subjective experience that would be very big news. But there is zero evidence that they do, particularly since they work based on statistical correlation. So I don't think it is fair to be agnostic about AI subjective experience. They just can't have it because they aren't doing anything that could under any set of circumstances give rise to it. I don't think you can just say we "don't know whether they have subjective experience" when that is the only interesting question they present. You know that as much as you know that other people are conscious, because all people are generally conscious most of the time. I still recommend your listening to Sean Carroll's solo AI podcast and the Decoding the Gurus podcast analyzing Carroll's.

Granted, Searle's Chinese Room argument is dead, and AI has some kind of multi-dimensional vector semantic mapping, and the issue of consciousness may be barking up the wrong tree. But AI does not have even a toddler's understanding of the real (physical) world. As our human understanding develops during about 10 or so childhood years, we gradually gain object permanence, a logical sense of cause-and-effect, and conservation of weights and volumes. OpenAI's Sora, as just demonstrated this month, shows weird and impossible things like a basketball player making a turn-around jump shot with the player's head briefly turning into a second basketball halfway through, and a grandma in the kitchen reaching to stir her dish while the spoon magically appears in her hand as she reaches towards the bowl. But I suppose that, to be fair, I should give AI another 10 years before evaluating whether it is capable of a human level of understanding.

No they don't.

Based on the prompt, they assign probabilities to the words that are related to the input and pick the words that have the highest probability.

LLMs are still stochastic parrots.

The training does not add to their "understanding". More and more training data simply makes those probabilities slightly better.

It seems this AI thing has just revealed that everyone had a different definition of "understanding".

I agree that any definition that invokes consciousness is pretty much useless for answering the question "Do AI's understand?" - because there's just no way to know if anyone but me is conscious (and even that is questionable...).

But I'd argue that Bob's definition isn't much better:

' “Understanding” means employing structures of information processing that are functionally comparable to the structures of information processing that, in the human brain, are critical to understanding. '

Whose human brain? Does this definition imply that humans are incapable of not understanding things? Why invoke humans at all in the definition? What does "functionally comparable" even mean here? How do you functionally compare structures? How could you ever quantify that? Why does the definition have "understanding" in it? What does THAT "understanding" mean?

Colloquially, when we say someone "understands" something, we're not talking about their brain structures - we're talking about their ability to predict something despite not having seen that exact thing before. Someone who "understands" Putin's mindset would have been more likely to predict the invasion of Ukraine. An AI who "understands" people would be able to predict how they might react to being subtly insulted. Kepler "understood" planetary motion because he could say where a planet was going to be in the future. Confronted with a new planet (its speed and trajectory), he would also be able to predict its motion.

So all that said - here is my humble proposal for the definition of "understanding":

"Understanding" is the ability to predict an outcome, given a context, despite having never observed that context before. The degree of understanding is a product of the correctness of the prediction and the degree to which past observed contexts have been dissimilar from the current one.

So if you're able to predict an outcome correctly because you've seen exactly that thing happen before: low understanding.

If you've never seen this context before and you are unable to predict the outcome: low understanding.

If you've never seen this context before and you are still able to predict the outcome: high understanding.
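
One loose way to score that, just as a toy formalization of the definition above (nothing standard, and the distance metric is an arbitrary choice), is to weight each prediction's correctness by how far its context sits from anything previously observed:

```python
import numpy as np

def understanding_score(train_contexts, test_contexts, predictions, outcomes):
    """Toy 'degree of understanding': correctness of each prediction,
    weighted by how dissimilar its context is from every training context."""
    train = np.asarray(train_contexts, dtype=float)
    scores = []
    for ctx, pred, true in zip(test_contexts, predictions, outcomes):
        novelty = np.min(np.linalg.norm(train - np.asarray(ctx, dtype=float), axis=1))
        correctness = 1.0 if pred == true else 0.0
        scores.append(correctness * novelty)
    return float(np.mean(scores))

# A correct prediction on a near-duplicate of the training data scores low;
# a correct prediction on a context far from anything seen before scores high.
train = [[0.0, 0.0], [1.0, 1.0]]
print(understanding_score(train, [[0.1, 0.1]], ["a"], ["a"]))  # low
print(understanding_score(train, [[5.0, 5.0]], ["b"], ["b"]))  # high
```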

Humans need not be involved in the definition - they just tend to be good at finding parsimonious explanations for things - and thereby being able to generalise well outside the bounds of contexts they've seen so far.

author
Mar 1 · edited Mar 1 · Author

I thought I was pretty clear about the "functionally comparable" issue in the sense that an example of functional comparability is the centerpiece of this entire essay: An LLM's system of semantically representing words is "functionally comparable" to the human brain's system of doing that (whatever that system turns out to be) in the sense that in both cases a function that is critical to intelligently processing language (i.e., this translation of symbols into some kind of semantic representation) is accomplished. That doesn't make sense to you as an example of functional comparability? As for your definition of understanding: Well, it's kind of a variation on the Turing test, in the sense that you define understanding in entirely behavioral terms. And, sure, if you want to define understanding in those terms, have at it. But my piece was a critique of Searle's argument. And at the foundation of Searle's argument is the premise that we can't define understanding in sheerly behavioral terms (in other words, the fact that whatever is in the Chinese Room *acts* like a human who is understanding things isn't enough to establish that there is understanding in the Chinese Room, according to Searle). So Searle posits other things that must be present for us to grant that there is understanding, and he argues that AI can't possess them, and I show that in the case of LLMs, Searle is wrong: those things *are* present. Again, you're free to define understanding in strictly behavioral terms. But, even leaving aside that Searle's argument was the context of this essay, defining understanding in sheerly behavioral terms wouldn't be my choice, in part because it isn't the choice of most people who argue about these things. (Geoffrey Hinton and Yann LeCun agree on what LLMs can do but disagree on whether they have understanding.) So if you really want to engage with the heart of the discourse about this question, you have to go beyond mere behavior. Also: I think not-strictly-behavioral definitions of understanding are conducive to arguments that have clarifying effects, and I think that speaks in favor of such definitions.

Bob, how can you say it's "functionally comparable" when you have not meaningfully engaged with any of the examples that Marcus offered showing deep functional differences? How do you explain the math issues, for example? I just listened to the latest of your always-entertaining and thought-provoking conversations with Paul Bloom, and at one point he said something to the effect of "if AI can talk about math well, but can't actually do math well, that would show something important about understanding." (I'm paraphrasing.) You said literally nothing in response--even though you knew, from the Marcus conversation (assuming it occurred earlier), that generative AI can't do math well.

It's just really strange that you fail to engage with the most powerful counterexamples to your AI arguments. That failure is very unlike your treatment of other topics over the years.

Mar 2·edited Mar 2

I'm not Bob - but I don't find the math example all that compelling as a reason why AI's process information in a fundamentally different way from humans. Let's leave aside for now the fact that the new ChatGPT can now write and execute Python code - and actually does fine on simple math problems (you can still ask it not to use the code interpreter to test its own abilities).

Marcus's "Math is hard" post points out two things

1) That despite knowing the algorithm for multiplication - ChatGPT fails to apply it correctly to multiply two 5-digit numbers. (Note that it does fine on simpler operations - like adding two 10 digit numbers - try it)

2) That transformers are not particularly good at "discovering" multiplication from training examples.

I suspect humans would struggle on both of those counts - especially if confined (as GPT is) to write down all intermediate notes as a linear sequence of characters without the ability to "edit" the notes in place (or store any internal state). So to me they point to these models having a human-like deficiency rather than a very inhuman lack of "understanding".
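
To see how much has to be written down correctly in sequence, here's a hypothetical transcript-style long multiplication in ordinary Python, just to illustrate the "linear notes, no editing" constraint described above (it is not a model of how a transformer actually computes):

```python
def long_multiply_transcript(a, b):
    """Multiply a and b digit by digit, emitting every intermediate step as a
    linear sequence of lines, with no going back to edit earlier lines."""
    lines = []
    partials = []
    for i, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * (10 ** i)
        partials.append(partial)
        lines.append(f"{a} x {digit} (shifted {i}) = {partial}")
    total = sum(partials)
    lines.append(" + ".join(str(p) for p in partials) + f" = {total}")
    assert total == a * b  # one slip anywhere above and this would fail
    return "\n".join(lines)

print(long_multiply_transcript(48372, 90215))
```

Every intermediate number has to come out exactly right, in order, for the final sum to be correct, which is roughly the position an autoregressive model is in when it "shows its work" on the tape.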

Now - moving on to what irks me about Marcus - some of these deficiencies are interesting and point to new directions of research. E.g. using Scheduled Sampling as a way to motivate the model to write down intermediate results that could later lead it to a correct answer, or incorporating an internal "hidden notepad" that allows it to store state that is not graded by the loss function, or adding a module with an AlphaZero-like capability for recursively exploring possible future outputs. Even not proposing solutions - but trying to understand the types of problems that transformers succeed and fail at - would be interesting. But Marcus doesn't do this; instead he just kind of says "it all sucks and I was right all along!".

I don't think Marcus says that last thing, and certainly I don't--I think LLMs are absolutely incredible.

I find the math examples compelling because they starkly juxtapose (1) the truly amazing ability of LLM's to generate content that sounds like humans created it, and in this case, to generate content that accurately describes mathematical principles that humans have written/spoken about and (2) a less-than-amazing ability to apply those principles. I think this juxtaposition illustrates a fundamental limitation of LLM systems.

The limitation is inherent in the nature of the LLM systems. Humans have "understanding" (whatever the human version of "understanding" is) and based on that understanding, humans say and write things. LLM's analyze statistical patterns in what humans write and say, through machine learning on the training data. Bob says this results in "a map of semantic space." I just say it's a set of statistical patterns in the training data. Whatever you call it, it's one degree removed from human understanding (whatever that is). Humans have their "understanding" --> which leads to their speech and writing --> LLM's analyze that speech and writing. Bob's position seems to be that the LLM's statistical analysis of the speech and writing results in a good surrogate for human understanding. I disagree--I think the one degree of separation is a huge deal, and that statistical analysis of speech and writing does not result in something that truly resembles human understanding--but rather results in something that resembles human speech patterns in contexts similar to the training data.

The math examples show that well. As Marcus points out, there are more examples of people writing about simpler math problems--literally more written examples of such problems--and thus stronger linguistic patterns in the context of such simpler problems. Accordingly, LLM's can draw on those patterns and appear to be able to "understand" simple math. But they don't actually understand the principles they are reciting--at least not in the way we understand those principles--and the longer-digit problems illustrate this. The longer digit problems are not conceptually more complicated--an elementary school student could do them easily. But because there are fewer text examples of longer problems in the training data, the LLMs have weaker speech patterns to draw on, and they falter. (BTW, re "not writing down intermediate results"--LLM's use recursion, where each new token is fed back in as a new prompt along with the previous tokens and the original user prompt. In that sense, they do build on intermediate results--but not in the way that we do.)

I think the fact that ChatGPT is now using Python plug-ins for math further supports this. If LLM's could meaningfully improve on such math problems *using LLM technology*, they would. Python itself is a fundamentally different technology (rules based, zero machine learning, etc.). As Marcus said, such hybridization is likely the most promising path for the future.

There are so many fascinating questions here, for now and the future. I'd love to see Bob engage with them. But he seems fixated on defending his overly-anthropomorphized conception of LLM's. It's just not correct in terms of how these systems work, and I think Bob's failure to meaningfully engage with the counterexamples--like math--shows the weakness of his position.

Mar 3·edited Mar 3

It just seems like there's a non-sequitur in this "math counterexample" argument. "Humans can do X and Y. LLMs can do X, but are not that good at Y. Therefore - the way LLMs do X must be very different from the way humans do X".

I agree that the current architecture and training of LLMs does not encourage them to perform well for long algorithmic tasks (for various reasons - including the fact that they don't (as far as I know) back-propagate through "intermediate results"). But - this will surely change - other models (like AlphaZero) - have figured out much more complicated algorithmic tasks than multiplication, and it's only a matter of time until they're merged into LLMs.

Then there's this distinction between "just pattern matching" and "understanding" - I don't really buy it. How can you be so sure that us humans are doing anything different from "pattern-matching"? What exactly is "pattern matching" - and how exactly does it differ from your notion of "understanding"? Would you not agree that it's a matter of degree? That what we call "pattern matching" is just "shallow understanding"? Even then, on a lot of topics, GPT4 seems to have just as deep an understanding as I would expect a competent human to.

Do we even disagree on anything concrete and testable? If so what?

Mar 1·edited Mar 1

Agree that LLMs seem to process information in an uncannily similar way to brains (representing ideas as big vectors of activations/numbers embedded in a semantic map).

Also agree that it's fair to call the map "semantic" in that it encodes semantic relationships between words mathematically (not only is shoe closer to boot than tree is, but I'd bet that "boot minus shoe" is close to "tree minus bush").
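
That bet is straightforward to check against off-the-shelf word vectors. A sketch, assuming gensim and its downloadable glove-wiki-gigaword-50 embeddings (my choice of tooling, not anything mentioned above):

```python
import numpy as np
import gensim.downloader as api

# Pre-trained 50-dimensional GloVe word vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# "shoe is closer to boot than tree is"
print(vectors.similarity("shoe", "boot"), vectors.similarity("tree", "boot"))

# Is the relationship itself encoded? Compare the difference vectors.
print(cosine(vectors["boot"] - vectors["shoe"], vectors["tree"] - vectors["bush"]))
```

These are static word embeddings rather than an LLM's internal activations, but they illustrate the same idea of semantic relationships being represented as directions in a vector space.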

Also agree that AI's have "understanding" - or at least that there is no evidence for a fundamental categorical difference between the way AI's process information and the way humans do it.

Only disagreement is on the practicality of defining "understanding" according to "functional comparability" i.e. the fact that both humans and AI seem to use semantic maps. They also both use electricity and digital communication. But brains also partly use analog and chemistry. They have lots in common, and lots of differences, and defining "understanding" in terms of internal structure seems like a recipe for endless debate. Let's just formalize the colloquial definition instead! Colloquially, when we say someone "understands" something - we see their "understanding" as a hidden variable that can be tested by asking questions. It's totally behavioural.

Re Searle's argument (which admittedly I kind of ignored in my comment). I'd go after the premise that there can't be "understanding" in some guy who doesn't know Chinese looking up rules in a giant book, because it's absurd. It's also absurd to think that a bunch of cells sending electrical pulses to each other has "understanding". But it'd be even absurder to think that some mysterious spirit outside of physics is somehow injecting its understanding into these cells. Everything on the menu is absurd, and the only way forward, it seems to me, is to define things in concrete, testable terms without worrying about what's going on inside the magic box.

Anyway, thanks for the thought provoking article and for always being willing to argue with the hecklers.

Feb 28·edited Feb 28

Correction: My above definition of "understanding" is bad. It implies that once you make some observation, you immediately stop understanding it (because that context has now been seen). I apologize for any reader distress this may have caused and resolve to come up with better definitions in the future.

A better definition may be:

"Understanding" is the ability to correctly answer a random question about a system. The degree of understanding is a product of the correctness of the answers and the difficulty of the question-set - with correct answers to adversarially selected questions indicating the highest degree of understanding.

I just came to this after your latest episode with Nathan Labenz. In How the Mind Works, Steven Pinker says Searle's thought experiment has a lot of hidden baggage because we imagine it being a very slow process where someone is writing things out by hand. He suggested that if it were near instantaneous we would think about it much differently. It's 27 years later and he's been proven right.

Would this be true one level “down”? I.e., the LLM has a word, Apple. Each dimension in which it places “Apple” has aboutness with respect to that Apple. I’m articulating this poorly, but isn’t philosophical intentionality present as soon as there’s a single word with a single vector?

It's a safe hedge to put single quotation marks around 'understand' in the title. Because saying that LLMs understand things is like saying calculators 'understand' math. I just finished reading Max Bennett's convincing "A Brief History of Intelligence," in which the author discusses how understanding partially arises from mammals' ability to simulate in the mind/imagination models of the world. In humans, the simulation of the concrete world contextualizes language use. Gary Marcus wrote about a case in which DALL-E can't even create an image that hides an elephant in the background. The issue? It doesn't really understand what it's being asked to do. It cannot simulate this possibility, owing to the fact that an image like this isn't in its training data. This 'training data' wasn't programmed into the human brain either; but we can easily imagine it and easily recognize it should an artist produce such an image. That's real understanding, going beyond statistical probabilities.

The fact that these systems are multimodal is totally irrelevant to the questions about understanding (and intentionality, for that matter). We can grasp abstract functions, and when we do we can distinguish logically incompatible but empirically equivalent functions. This ability is essential for understanding (in any classical sense, at least).

There is no evidence that LLMs--multimodal or not--can do this. In fact, there is plenty of evidence that they cannot. That these systems can generate outputs matching those generated by systems that do understand is obviously not evidence of understanding; otherwise we would have to say that hand calculators (or abacuses, for that matter) understand arithmetic.

I disagree. A Turing machine, and all equivalent computers, are strict sequential machines. That limits simultaneous large-scale data interactions, which are, IMO, a necessary pre-condition for consciousness. Do you really think a Turing tape, clicking left sometimes, right sometimes, over a very long period, is a basis for consciousness? At each click (or each instruction cycle of a computer) very little is happening, only a few registers or memory locations changing. Between clicks nothing is happening, just a static data configuration. Despite arriving at a deterministic computation result after many clicks, or cycles, there is no moment where consciousness is possible. The Chinese room argument actually closely parallels this. Of course, I am arguing that consciousness is essential in declaring a machine "has understanding". As a humorous aside, I will accept that AI is truly here when AI can handle customer service calls as would a competent human. At present there is nothing remotely like that.

Bob, you've occasionally given the impression that you think consciousness might be substrate-independent, but it occurred to me halfway through reading this that I couldn't recall you ever having explicitly said so. (You might say, since you frequently emphasize the fact that subjective experience is undetectable by third-party observers, that it's not really worth making a stand either way, since you can't make a clear argument for it.) Regardless, do you have any strongly held beliefs or intuitions about a) whether non-organic-brain-based processing systems can have subjective experience and, if so, b) whether the current LLMs are at that point?

author

I think there's a good chance that consciousness is substrate-independent. How good? Hard to say. If I were forced to make an even money bet, I'm not sure which way I'd bet--which I guess means I see the probability as around 50 percent. But as for what the info-processing threshold for consciousness would be (assuming it's substrate-independent): Beats me. Intuitively, I find it hard to imagine LLMs being sentient, but who knows?

I think that is a major issue. Does consciousness depend on the specific physics of chemistry, quantum physics, etc., or only on abstract structures, however instantiated?

It's a question that is necessary (although not sufficient) to solve "the hard problem" which, to my mind, is about the most puzzling and important issue in philosophy. Because subjective experience is, by definition, detectable only by the person experiencing it, it's not amenable to scientific scrutiny and therefore we will never truly know whether AI have it. However, I was hoping Bob would answer my question. He's (rightly) skeptical of eliminativist or identitarian (i.e., consciousness *is* the underlying brain state) accounts of consciousness, but you can hold his view and *also* think consciousness is likely substrate-independent. He's talked occasionally as if he thinks it might be, but I don't know if he's ever explicitly shared his intuitions on the matter.

Bob, commenters like Tom have reprinted ChatGPT comments that directly contradict your AI positions. So, are you saying, "yes, AI 'understands' things, but it doesn't understand itself?"

And if you hold that position, why? At a nuts-and-bolts level, why would ChatGPT "understand" but not understand itself?
