0:00 Nathan's podcast, The Cognitive Revolution
3:26 The enduring enigmas of AI
7:15 Conceptualizing how LLMs conceptualize
15:58 Do AIs actually understand things?
32:44 Why AI doesn’t need to be superhuman to be revolutionary
43:16 Human vs. AI representations of the world
56:45 Thinking through high-dimensional AI brain space
1:04:58 Nathan on AI risk: Doomer? Accelerationist? Both?
1:21:35 The open source question and Cold War II
1:41:10 Do LLMs “naturally” learn human values?
1:44:56 Mamba: a new approach to building large language models

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Nathan Labenz (Cognitive Revolution, Waymark AI). Recorded August 6, 2024.
This conversation reveals a fundamental misunderstanding of the word “understanding” that lies at the heart of Bob’s attempted refutation of John Searle’s Chinese Room thought experiment. In Bob’s view, your car “understands” that you aren’t wearing a seatbelt when it sounds an alarm, and your thermostat “understands” temperature when it turns the heat on as the temperature falls to its 65-degree setting. In fact, these machines don’t understand anything; they just switch on when a circuit closes.
Similarly, LLMs know how to predict and produce sentences from the context of the questions put to them. LLMs are trained on massive amounts of internet data, and they mechanically correlate the words in the questions you ask them with the words that are statistically correlated with those words in that training data. Most of the time (when they aren’t hallucinating), their answers sound coherent to the humans reading them, but they have no understanding of those words or of how the words relate to things in the outside world, because they have zero access to the outside world. Thus AIs can identify an image of an apple by comparing it to similar images in their training data, but they know nothing about what an apple tastes like, because they have no sense of taste. They only know what the internet data they were trained on says about apples.
LLMs can’t compare anything to the real world, only to the internet data they were trained on, because they have no grounding in the world beyond that data. Humans sense the world outside their bodies and analyze that sensory data through conscious and unconscious thought. LLMs obviously lack consciousness and have no internal or external experience; they just statistically correlate data. Calling that process “understanding” is a complete perversion of the word. It’s computation, not understanding, and consciousness does not operate through computation.
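To make the “statistical correlation” picture above concrete, here is a minimal toy sketch of next-word prediction from raw co-occurrence counts. It is only an illustration of the idea, not how any production LLM works; real models learn distributed neural representations rather than bigram counts, and the corpus and function names here are invented for the example.

from collections import Counter, defaultdict

# Toy illustration only: predict the next word purely from bigram
# co-occurrence counts in a tiny made-up "training corpus".
# Real LLMs learn neural representations; this is a caricature of
# the "statistical correlation" idea described above.
corpus = "an apple is a fruit . an apple tastes sweet . a lemon tastes sour ."
tokens = corpus.split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation of `word` seen most often in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("apple"))   # a word that followed "apple" in the corpus
print(predict_next("tastes"))  # the model has counts, not any sense of taste

The point of the sketch is only that such a predictor can emit plausible continuations without any contact with the things the words refer to.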
Computational functionalism is just behaviorism in disguise. The human mind does not operate through computational functionalism.
Not gonna lie: I did not come away from this one feeling that I understood AI any better.
I learned that things could be good or they could be bad.
"AI Daily Brief" is another good one. Typically 12-15 minutes.
I love when my podcast worlds collide.