3 Comments

Glenn asks Bob some very clear and pointed questions on AI and AGI. Bob’s answers suggest he is on the wrong track in his book on the subjects of AGI and superintelligence.

Artificial General Intelligence has always meant human-like, human-level intelligence. This does not simply refer to an ability to calculate better than humans can; even an abacus can do that, never mind the simplest calculator. Computers long ago surpassed human ability to process information. What AGI means, if it means anything at all, is sentience. Human intelligence is conscious, self-motivated, and directed by internal goals. It wants and cares about things. If AGI is to come to mean anything at all, it must be sentient. That would be a huge step, and one that may not be possible in principle. That is why AGI is so important to the AI community and why it would be such a vast improvement over what we have now. Simply adding computational power and data crunching to existing LLMs is not going to create AGI. AGI is a change in the kind and nature of artificial intelligence, not a change in the magnitude of computational capability.

AI commentators have recently begun trying to fudge this issue by not directly addressing the elephant in the room: sentience. But AGI doesn’t mean anything if it doesn’t include sentience similar to human consciousness. Just adding computational power would be more of what we have already seen, not an advance in kind, nor one that could lead to any kind of superintelligence.

Bob tries to dodge this issue by saying he can’t tell whether any AI is sentient, so he doesn’t think it is an issue. This agnosticism about sentience is obtuse in the extreme. No one has seriously suggested that current AIs could be sentient, because all they are doing is computation and statistical correlation. That is why AIs so often produce nonsense or hallucinations, which no one has been able to stamp out despite our largest companies spending hundreds of billions of dollars trying to do so.

Current AIs have no grounding in reality. They don’t experience anything directly; they just accumulate and correlate internet data using statistical algorithms. That is NOT how human intelligence works. Humans experience things directly through sense perception. We care about things, we have goals, and we try to thrive and survive. AIs do none of those things. Until they do (which may or may not ever occur), talk about “AGI” is meaningless. It is like talk about any other mythical religious theme. People who talk about AGI, like people who talk about “god,” “nirvana,” or any other mythical absolute, are talking about nothing at all.

Bob, you referred to the invasion of Iraq when discussing the US overreacting to 9/11.

Did you mean Afghanistan?

I ask because it seems like, in the case of Afghanistan at least, there’s a debate to be had.

The idea that invading Iraq was an overreaction to 9/11 seems pretty self-evident; for all the security and relevance it yielded, we might as well have invaded Guatemala.

I listened to the maddening conversation between Glenn and John McWhorter re: Coates’ new book about apartheid in the West Bank. McWhorter didn’t even read it, yet launched into a diatribe about how Coates is a bad-faith postmodern cultural Marxist. Sad to see the reactionary ignorance. McWhorter also thinks that the suspension of human rights can be justified by security concerns, which, I’m sorry to say, is fascist.

Which is all the more reason I'm glad I listened to this conversation! Happy holidays :)
