This episode includes an Overtime segment that’s available to paid subscribers. If you’re not a paid subscriber, we of course encourage you to become one. If you are, you can access the complete conversation either via the audio and video on this post or by setting up the paid-subscriber podcast feed, which includes all exclusive audio content. To set that up, grab the RSS feed from this page (click the “...” icon on the audio player) and paste it into your favorite podcast app.
If you have trouble completing the process, check out this super-simple how-to guide.
0:00 Why didn’t OpenAI call its new o1 AI GPT?
12:03 Tim’s first impressions of o1
16:06 What’s the secret to o1’s better reasoning?
26:57 Inspecting AI introspection
30:07 Does o1’s capability for deception bring us closer to doom?
35:35 AI doomers’ Hollywood problem
40:39 Elon’s self-driving speed bumps
46:49 Heading to Overtime
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Timothy B. Lee (Full Stack Economics, Understanding AI). Recorded September 17, 2024.
Tim's Understanding AI newsletter on Substack:
Twitter: https://twitter.com/NonzeroPods
Overtime titles:
0:00 The trouble with almost-autonomous vehicles
3:54 Should you up your p(doom) in light of o1?
16:11 How much better can next-gen LLMs get?
25:02 The politics of California’s AI safety bill
33:56 Elon’s brain, Bob’s AI book, and the billionaire conspiracist class