5 Comments

This conversation reveals a fundamental misunderstanding of the word “understanding,” and that misunderstanding lies at the heart of the assumptions behind Bob’s attempted refutation of John Searle’s Chinese room thought experiment. In Bob’s view, your car “understands” that you aren’t wearing a seatbelt when it sounds an alarm, and your thermostat “understands” temperature when it turns on the heat once the reading falls to the 65 degrees it is set to. In fact, these machines don’t understand anything. They just switch on when a circuit closes.
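To make the thermostat point concrete, here is a minimal sketch of everything such a device “knows” (the names and setpoint are illustrative, not any real device’s firmware):

```python
# Hypothetical sketch: the entire "understanding" of a thermostat
# reduces to comparing one number against another.
SETPOINT_F = 65  # the temperature the thermostat is set to

def heat_should_run(current_temp_f: float) -> bool:
    # No decision, no comprehension: the circuit closes when the
    # reading drops below the setpoint, and stays open otherwise.
    return current_temp_f < SETPOINT_F

print(heat_should_run(63.0))  # True: circuit closes, heat turns on
print(heat_should_run(70.0))  # False: circuit stays open
```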

Similarly, LLMs know how to predict and produce sentences from the context of the questions put to them. LLMs are trained on massive amounts of internet data, and they mechanically correlate the words in the questions you ask them with words that are statistically correlated with those words in that data. Most of the time (when they aren’t hallucinating) their answers sound coherent to the humans reading them, but the models have no understanding of those words or of how they relate to things in the outside world, because they have zero access to the outside world. Thus an AI can identify an image of an apple by comparing it to similar images from its training data, but it knows nothing about what an apple tastes like, because it does not possess taste as a sense. It only knows what the internet data it was trained on says about apples.
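A toy model illustrates the correlation claim. The sketch below uses a bigram counter as a stand-in for a real neural LLM (the corpus and names are invented for illustration): its entire “knowledge” of apples is word co-occurrence statistics drawn from the text it was trained on.

```python
# Toy bigram "language model": predicts the next word purely from
# co-occurrence counts in its training text. Real LLMs use neural
# networks over tokens, but the objective, predicting the next token
# from context, is the same in spirit.
from collections import Counter, defaultdict

corpus = "an apple is a fruit . an apple is red . a fruit is sweet".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the statistically most frequent successor: no taste,
    # no grounding, just counts over text.
    return follows[word].most_common(1)[0][0]

print(predict_next("apple"))  # -> "is", because that is what the data says
```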

LLMs cannot compare anything to the real world, only to the internet data they are trained on, because they have no grounding in the world itself. Humans sense the world outside their bodies and analyze that input through conscious and unconscious thought. LLMs obviously lack consciousness and have no internal or external experience; they just statistically correlate data. Calling that process “understanding” is a complete perversion of the word. It is computation, not understanding, and consciousness does not operate through computation.

Computational functionalism is just behaviorism in disguise, and the human mind does not work the way computational functionalism describes.


Not gunna lie: I did not come away from this one feeling that I understood AI any better.


I learned that things could be good or they could be bad.


"AI Daily Brief" is another good one. Typically 12-15 minutes.


I love when my podcast worlds collide.
