0:00 Nathan's podcast, The Cognitive Revolution
3:26 The enduring enigmas of AI
7:15 Conceptualizing how LLMs conceptualize
15:58 Do AIs actually understand things?
32:44 Why AI doesn’t need to be superhuman to be revolutionary
43:16 Human vs AI representations of the world
56:45 Thinking through high-dimensional AI brain space
1:04:58 Nathan on AI risk: Doomer? Accelerationist? Both?
1:21:35 The open source question and Cold War II
1:41:10 Do LLMs “naturally” learn human values?
1:44:56 Mamba: a new approach to building large language models
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Nathan Labenz (The Cognitive Revolution, Waymark AI). Recorded August 6, 2024.
Twitter: https://twitter.com/NonzeroPods
The Cognitive Revolution podcast: https://www.cognitiverevolution.ai/
Waymark AI: https://waymark.com/