52 Comments
May 6, 2023 · edited May 6, 2023 · Liked by Robert Wright

The Bob-AI takes are generally insightful, but I submit that they would be more insightful with some minimal knowledge of concepts like vectors.

A vector is just a list of numbers, like [1.5, -2, 0.5]. So Hinton is saying a "thought" is just represented by a list of numbers in the model. You can add or subtract two vectors of the same length: [1.5, -2, 0.5] - [1, 0.5, 0] = [0.5, -2.5, 0.5].

The reason that matters: one of the first indicators that something freaky was going on in those language models, way back when they started training them (something Michal Kosinski alluded to in an earlier podcast), was this:

You train these models to predict the next word, and in the process, they learn an internal vector representation for every word (they turn each word into a list of 1000 numbers, and this mapping from word to vector evolves as they learn). Then, after learning, researchers looked at these vectors and asked "hey, what happens if you take [vector for king] - [vector for man] + [vector for woman]?" Guess what - the answer is really close to [vector for queen]. Same goes for London - England + France = Paris. So these things have learned analogies, even though all they were trained to do was predict the next word. Once you realize that these models are learning to structure knowledge like this even though they're not explicitly trained to, you start thinking "ok, maybe these are not just stochastic parrots after all."
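Here's a toy version of that arithmetic in code. The 3-dimensional vectors are invented for illustration (real embeddings are learned and have hundreds or thousands of dimensions), but the mechanics are exactly this:

```python
import numpy as np

# Made-up 3-dimensional "word vectors" -- real models learn these, and they
# have hundreds or thousands of dimensions, but the arithmetic is identical.
vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.9, 0.9]),
}

def cosine_similarity(a, b):
    """1.0 means the two vectors point in exactly the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman ...
result = vectors["king"] - vectors["man"] + vectors["woman"]

# ... lands closest to queen.
for word, vec in vectors.items():
    print(f"{word:>6}: {cosine_similarity(result, vec):.3f}")
```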

May 4, 2023 · Liked by Robert Wright

Good piece. François Chollet's book Deep Learning with Python, now in its 2nd ed. (Manning, 2021), has some accessible passages that help clarify the 'vector' notion you mention.


I think the best way to think about how a neural network's vector representation works is to think of it as a location, albeit a location in a very high-dimensional space. Each sentence it is trained on traces a path through this space, and each response builds a path through this space along a surface that approximates the various paths it was trained on. This surface is completely abstract, but it can be condensed into the words of a response. The response can't completely regenerate this abstract path because of the loss of information in the condensation process. The volume of this abstract space is vast beyond imagination. As the response drifts through this space, it is pulled forward by millions of distant gravitational forces from the impressions used to train it.


If the response is navigating a path where very little or contradictory training data has left an impression, you will get a "hallucinated" response.


If you want a very accessible explanation (minimal math knowledge required) of word embeddings in the form of a 15-minute video, this computerphile video featuring Rob Miles explains the concept beautifully: https://www.youtube.com/watch?v=gQddtTdmG_8


When a technology's capabilities are truly in good faith debate among invested and informed people, isn't it obvious we ought to be curious, if not concerned? Isn't it really as simple as that?


Truthfully, I find myself growing weary of the AI handwringing. It's very justified, imho, but I don't see it going anywhere meaningful.

If we were rational, we would stop developing AI, take our time sorting out our concerns, and develop a broad consensus on how to address those concerns. And then, when we'd done everything possible to make AI safe, we would revisit the question of whether we should continue development.

But we're not rational, we're only clever. And so we will race forward as fast as we possibly can while wringing our hands, crossing our fingers, and hoping it works out. And then, when some part of it doesn't work out, we'll realize it's too late to change our minds. So we'll become bored with the subject, and turn our attention to the creation of some other risky technology, piling up the risks one on top of another.

We've been through this already with nuclear weapons. It's frustrating to have to watch this movie again when we should already know how it's going to end.

May 8, 2023·edited May 8, 2023

I don't think rationality is the problem here. In a competitive system, the rational thing to do is to go full steam ahead with the AI/nuke/plague research, because you don't want to be the one left behind. It's a prisoner's dilemma.

The problem is that we're in an environment in which rational thinking can push us towards mutually-destructive outcomes. Which is why - as Wright rightly points out - we need to focus our efforts on aligning our objectives with those of competitors like China - so we don't fall into some horrible rationality-driven arms race that ends up destroying everything.


Thanks for your comment Peter. Can you expand on "aligning our objectives with those of competitors like China"?


Arms control treaties, trade agreements, easier visas and the like. Things that change the environment so that both parties benefit more by cooperating than by competing. Basically make the relationship with China more like the relationship with Europe.


Ok, sounds good in theory, but um, China is nothing like Europe. Our relationship with Europe works because we have shared values. We shouldn't be expecting China to ever treat us any better than they treat their own citizens.

I hear your larger point about competition. I would just add that with nukes and AI we started the race, which we had the option not to do. It wasn't rational to go full steam ahead in a self destructive direction until we had proof there was no other option.

I don't sense commercial development of AI in the US is really a prisoner's dilemma. It seems more like clever nerd boys who wish to become billionaires to me honestly.


Yeah, in our current system where 'innovation' outpaces regulation for the sake of profit, by the time anything is done about it Pandora's box will likely be opened. I hope some day we do form a rational society which can finally address these issues with consensus rather than profit.


"If we were rational, we would stop developing AI, take our time sorting out our concerns, and develop a broad consensus on how to address those concerns."

Yes, but this necessary deliberation/introspection goes much beyond the rational mind. To be wholly embedded in lived experience, it must also involve faculties like empathy and care for oneself and others, which the rational/conceptual mind merely views from a very utilitarian, emotionally distant perspective.


With all the breathless AGI coverage lately, I've actually been pondering the original thinking on this topic by the much-maligned philosopher Descartes, and his distinction between the material body and the immaterial mind or soul. This is usually dismissed because of the inability to describe how an immaterial mind could control or interact with a material body. It also goes against the strong reductionist bias of science. However, if you look at the history of scientific approaches to the mind and squint, you can see an interesting pattern. Because of the materialistic nature of science, all approaches to the mind require some kind of materialistic scaffolding to support their theories. That scaffolding tends to mirror our most cutting-edge scientific ideas and engineering challenges for automation and increasing productivity, and when the scaffolding is confused with the mind that is being modeled, it sets up a familiar cycle of hope, expectation, disappointment, and ultimately an expanded understanding of what it means to be sentient or conscious.

We start by thinking the mind is a mechanical machine, then think our mechanical inventions will destroy the world. This leads to new automation technology and the insight that the mind is not a gear-works, it is something else, something more. This cycle repeats for nuclear energy, computers and now neural networks. Powerful technology always comes before we fully understand or can control it. Tools can be used for good or evil.

The danger is not machine superintelligence, but the lizard brain of people and their leaders reactively caught in a self-destructive spiral. Could these things be zombies with all the potential of human behavior and no consciousness or sentience? Could they, on their own, develop a self with purpose, meaning, love and recognition of beauty? Or is it possible that, like the mathematics of infinity, we will only know the mind in the asymptotic limit of our increasingly immaterial scaffolding of automation, gradually building a bridge from Descartes' material body to an increasingly immaterial scaffolding of mind -- which is always a little out of reach? The question we are not asking is: how does all this change how we see sentience, and how we see ourselves, in what lies beyond our latest inventions?

May 4, 2023·edited May 4, 2023

If it truly understands, how do you explain this simple exchange:

ME: How many verbs or nouns exist in this sentence

ChatGPT: You didn't provide a sentence for me to analyze. Please provide a sentence, and I'll be happy to help you with your question. [wrong answer]

ME: I did provide a sentence

ChatGPT: I apologize, I did not see any sentence in your initial message. Could you please retype the sentence for me to analyze? [wrong response]

ME: How many verbs or nouns exist in this sentence

ChatGPT: I apologize for my previous message, as it seems that I misunderstood your question. In the sentence "How many verbs or nouns exist in this sentence," there are six nouns ("verbs," "nouns," "sentence," "many," "this," and "exist") and one verb ("exist"). [wrong answer]

author

It doesn't always respond intelligently. Then again, people don't always respond intelligently. The point is that when it responds intelligently that's usually because there is, within it, something you can call understanding--it's drawing on "concepts" that it's derived from linguistic patterns. And, btw, with each generation--3, 3.5, 4--it gets better at responding intelligently.


> Then again, people don't always respond intelligently

This is where I have a problem. We can't say "it'll be smarter than people" but at the same time say "people are sometimes dumb".

Generating text is not the same as thinking. It can generate text because it read all the text, pretty much, that was ever written. It's also programmed with some logic.

All this is different from thinking. I think people already have a certain definition of what intelligence and thinking are, and that's why they think this is also intelligence.


Thinking is literally generating text (or more accurately talking without talking). If you examine the neurons related to moving someone's mouth/tongue/etc. while they're thinking, you'll see the exact same patterns as if they were talking. The only difference is while thinking we just prevent those muscle commands from reaching our vocal tracts.

May 4, 2023·edited May 4, 2023

Disappointing… :(. It’s a bloody language model and it flunked understanding language!


For me, this is a big part of the mystery. It doesn't "understand" language well enough to accurately identify nouns and verbs, yet it can use language to carry on thoroughly coherent conversations about a wide range of topics.

author

It can identify nouns and verbs. It just got hung up on the meaning of 'this sentence'--it was expecting that phrase to be followed by the sentence in question. Here's an exchange I just had with ChatGPT-4:

Me: How many verbs or nouns exist in this sentence?

ChatGPT: You have not provided a sentence for me to analyze. Please provide a sentence and I will be happy to help you identify the number of verbs and nouns in it.

Me: How many verbs or nouns exist in this sentence: "How many verbs or nouns exist in this sentence"

ChatGPT: In the sentence "How many verbs or nouns exist in this sentence," there are:

2 verbs: "exist" and "do" (implicit in the question "how many")

3 nouns: "verbs," "nouns," and "sentence"

May 5, 2023 · Liked by Robert Wright

Interesting. So it was more of a processing glitch due to the self-referencing nature of the sentence. That makes more sense. Thanks.


Actually, no, that is not what it got hung up on if you read my entire conversation. It got the counts of nouns and verbs completely wrong. That was on 3.5. I tried it again on 4.0 and got this:

Me: How many verbs or nouns exist in the sentence "How many verbs or nouns exist in this sentence?"

ChatGPT4: In the sentence "How many verbs or nouns exist in this sentence?", there are 9 nouns: "verbs", "nouns", "sentence", "how", "many", "exist", "this", "in", and "sentence".

There are 3 verbs: "exist", "in", and "How" (as a form of the verb "to be" used to form a question).

That is beyond delusional.

It does not understand anything about language...

author

Weird that you got such different results from me. But, again, the point isn't that LLMs don't make weird mistakes, or that they don't have specific areas of weakness. The point is about what's going on inside the AI when it's delivering impressive, human-like results. Which, for me, has been around 98 percent of the time in the case of ChatGPT-4.


Making associations isn't really "using language," which implies an understanding of meaning.


Fascinating. I imagine a totally bottom-up intelligence would have a tough time with a concept like grammar.

At least in human intelligence, language itself is picked up innately, without direction. Meanwhile, reading and concepts like grammar are not particularly self-evident, necessitating a teacher deliberately imparting this knowledge.

Is there an effort to similarly send AIs through an equivalent of top-down schooling to complement their bottom-up training?

May 6, 2023 · edited May 6, 2023 · Liked by Robert Wright

Right, but would you be impressed if I told you that ChatGPT doesn't contain any computer code defining what a word, "noun," or "verb" is? There is nothing like this inside ChatGPT. It only recognizes "noun"/"verb" because it has encountered these groups of characters in its training data and developed an understanding of what these character groups ("noun"/"verb") represent.

If you were to request an OpenAI programmer to improve ChatGPT, specifically to answer your question better ("how many verbs are in this sentence"), it would be impossible to do so since there is no computer code that they could modify to enhance ChatGPT's ability to count verbs in a sentence. ChatGPT contains no code related to verbs or sentences. If the training corpus (text used to train GPT-4) were stripped of any mention of "noun" or "verb," ChatGPT would be unable to answer your question at all, as it would have never encountered the terms "noun" or "verb" in human text.
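For contrast, here is roughly what an explicitly programmed noun/verb counter looks like. I'm reaching for NLTK's off-the-shelf part-of-speech tagger purely as my own example of the conventional approach; this is the kind of code an engineer could open and patch, and nothing like it exists anywhere inside ChatGPT:

```python
# Conventional, inspectable code for counting nouns and verbs -- shown only
# as a contrast; an LLM contains no module like this anywhere.
import nltk

# Resource names can vary slightly between NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "How many verbs or nouns exist in this sentence?"
tags = nltk.pos_tag(nltk.word_tokenize(sentence))  # Penn Treebank tags

nouns = [word for word, tag in tags if tag.startswith("NN")]
verbs = [word for word, tag in tags if tag.startswith("VB")]
print("nouns:", nouns)  # an engineer could tweak these rules directly
print("verbs:", verbs)
```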

---

An approximation of ChatGPT: it's an Einstein-level genius sitting in a Chinese Room (https://en.wikipedia.org/wiki/Chinese_room). He has never been outside the room, possesses no senses of sight or hearing, and is incapable of movement or touch. He begins as a blank slate, devoid of any knowledge about the world, history, or humanity, and with no understanding of languages or basic mathematics, nothing.

However, this Einstein is able to examine any text handed to him from outside the room, even if he's never seen any text before. You then provide him with an internet data dump (all the text used to train ChatGPT). After studying it for some time, he becomes ready to respond to your text prompts. You can slide a piece of text into the room, and he will slip back a written response. This is what ChatGPT is (to some degree of approximation).

If you aren't unnerved by the fact that ChatGPT was able to process the initial text data dump and develop its own understanding to the extent that it comprehends your initial query ("How many verbs or nouns exist in this sentence") and somewhat responds to it, imagine the implications as we gradually enhance its interface within the Chinese Room. For instance, instead of providing text for study, we might offer live video and audio feeds, etc.


I think you are getting closer to understanding why Hinton is alarmed. My explanation is somewhat simplified, so please bear with me.

The most important thing to understand about transformer neural networks like GPT-4 is that before we begin training it on the (textual) corpus, we don't pre-program any knowledge into it. There is no computer code defining letters, words, commas, grammar, and so on. It doesn't have any computer code for English or any other language, nor does it contain any code teaching it that we communicate using words that form sentences (using grammatical rules), and so forth. We don't incorporate any formal logic or rules into it; it doesn't contain mathematics, implications, or any pre-programmed grammar. Despite Chomsky's life's work, it contains none of it: no universal grammar. In fact, Chomsky's work is irrelevant.

Before training starts, it is an ultimate blank slate. Except this blank slate is a 1000 IQ genius.

We simply begin by feeding it (GPT-4) random text, and it gradually starts to recognize patterns and understand the interconnectedness of words or symbols. As for how it grasps this information, what gets stored within its neural network, and the overall functioning of the system, we have no idea. We've just blindly approximated how our biological brains (neurons) work, and then magic happens.
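To make that concrete, here is a minimal sketch of the entire training signal, assuming a PyTorch-style setup. The tiny stand-in model and the random token ids are placeholders of mine; a real GPT is a deep transformer trained on trillions of tokens, but the objective is exactly this and nothing more:

```python
import torch
import torch.nn as nn

# Stand-in "language model": an embedding plus a linear layer.
# A real GPT is a deep transformer, but it is trained with the same objective.
vocab_size, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch of token ids standing in for raw text from the corpus.
tokens = torch.randint(0, vocab_size, (8, 33))    # 8 sequences of 33 tokens
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # target = the next token

logits = model(inputs)                            # (8, 32, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()

# That is the whole supervision signal: guess the next token.
# No grammar, parts of speech, or logic is ever written into the code.
```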

---

To illustrate my point, imagine an alien probe containing an 800GB data dump of the alien internet (in text form) landing on Earth. We could take an untrained GPT-4 program (one that hasn't been exposed to human text) and start training it on this alien internet data dump. It would work! GPT-4's remarkable 1000 IQ brain would gradually decipher alien languages, logic, facts, and alien history to the same extent that it understands human text, the human world, our history/politics. Essentially, we wouldn't need to alter anything. We'd obtain an alien version of GPT, or AlienGPT; however, we wouldn't be able to communicate with it, since we can't speak the alien language and wouldn't comprehend its textual responses (in the alien language) to us.

Nonetheless, in this hypothetical, I believe we could achieve intriguing results (with minor adjustments) by training GPT simultaneously on both the hypothetical alien and human corpora. Consequently, it would understand both human and alien languages/facts, and we could likely ask it questions about aliens in English, just as we can ask current ChatGPT about the Italian language or Italian history in English.


I don't know about Hinton, but this unnerves me. Admittedly, training these digital 1000 IQ brains is extremely hard; feeding them human text is an intricate process and necessitates an immense amount of software engineering. What is truly frightening, however, is that we don't program any intelligence or logic into them. This intelligence seems to arise miraculously as an emergent property. You might argue that you've tried ChatGPT and it sure as hell is not a 1000 IQ genius, but I am not referring to ChatGPT; I'm referring to the raw intelligence potential of the neural network brain that underlies it. This entity started out as a blank slate, devoid of knowledge about languages or even humanity. After being fed a human internet data dump (or whatever corpus it is trained on), it utilized its raw intellect to construct all knowledge from scratch. No human is remotely this intelligent.

Our primary obstacle lies in the interface with these neural networks, specifically how to engineer effective training regimes and extract useful information (training and output are two sides of the same coin). However, that raw intelligence is inside waiting to be unlocked/used, and once we develop a better interface, such as a video/audio connection, we will unlock more of its potential - and that's scary.

---

What's even more alarming is that it appears we can enhance the intelligence of these networks by increasing their size. While this requires more computational power and the improvement may not be linear, the notion remains unsettling. This was evident in the transition from GPT-3 to GPT-4, where we provided more neurons, resulting in increased intelligence and a better understanding of our world.


Do the programs only deal with written language, or can they "hear" spoken language and respond correctly when intonation, inflection, accentuation, etc. change a statement to a question, for instance?


The current "popular" AIs like ChatGPT are text based. However, there are "modules" than can add various forms of inputs. Obviously Siri, Alexa, etc are voice-based. I'm not aware of any projects that go as far as you suggest but I suspect there are research projects in that area.


Hinton is a HOMO, just like vonNeuman & Turing, which is why AI is now woke, cuck'd & homo

...

HINTON demands that there be “One World AI”

Directed by UK, woke & homo

To educate the worlds children into One Woke Common mind.

Microsoft & Google agree to send all new AI to UK first so they can verify that it is ‘UK-WOKE”.

https://www.zerohedge.com/technology/uk-get-early-or-priority-access-ai-models-google-and-openai


Loved your notes on connectionism vs. the symbolic crowd. Your call for us to get more comfortable using the word "understanding" when discussing these models is timely. The issue seems to be we keep referring to a variety of concepts under the label of "understanding" or "meaning". Hopefully in the near future we can standardize the jargon split between mathematicians, cognitive scientists, neuroscientists, programmers, etc. Right now it seems like the only people willing to commit themselves to a nuanced enough system are the philosophers, and who has that attention span? Very hard to have a public discussion if we've only had a few years for these concepts to seep into society.

Yes, the term stochastic parrots is being used to dismiss the power of the models, but it's not as dismissive as saying these models are doing "fancy autocomplete". If I remember correctly, the original paper acknowledged the power of the models and even hinted that in a couple years everyone would be terrified by how well they could reproduce language. The issue is the type of learning they do. I agree that nothing is stopping anyone from using the term "stochastic parrots" and discussing meaning in the same breath.

As a side note, I was sad to see Hinton's debut into the public eye as just "the guy who did AI for Google". So interesting that you got to talk to him back in the 80s!


I enjoyed reading your article and found it thought-provoking. While this is a little off topic, I would like to share my thoughts on the meaning behind the phrases "it is just" or "it is not just".

The use of these phrases largely depends on an individual's temperament. For instance, consider a painting. An art exhibition host might say, "it is not just paint on a canvas, it is a manifestation of the painter's inner beauty that conveys something profound". But a representative of scientific naturalism may analyze it and then quite objectively conclude "it is just paint on a canvas".

Bringing it back to LLMs, when you say that "LLMs are not just fancy-autocomplete", I am curious to know whether you are referring to their value from a scientific naturalist perspective or from the viewpoint of someone who wants to keep their audience engaged.

author

I mean that 'fancy autocomplete' fails to capture the sophistication of the LLMs' inner workings and therefore leads to an underestimation of their potential power.


I wrote a philosophy paper during my CS degree on the Chinese room thought experiment. It's where a Chinese character comes in and, following a rule book that emulates a 7-year-old Chinese boy's speech, a person draws another character and outputs it. Essentially the example is used for the "stochastic parrot" argument that these systems can never achieve intelligence, because there is no intelligence visible in the process of taking in information and putting it back out.

This quickly breaks down once we see that human neurons can each be represented by a Chinese room construction (even if randomness is involved via Brownian motion, a dice roll can easily be added without affecting the original argument). Once you put together a few billion of these thought experiments you get the human brain, which currently is our metric for intelligence.

This is exactly why I believe these systems can attain some form of intelligence, and why Hinton's fears are entirely justified. We've finally broken through the complexity barrier for AI, where its "Chinese room" neurons are beginning to do nearly the same thing we do: learn and understand.
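For what it's worth, here is how small each of those "Chinese room" units is, as a toy sketch with weights I made up (a real network learns its weights and stacks billions of these units):

```python
import random

def neuron(inputs, weights, bias, noise=0.0):
    """One 'Chinese room' unit: weigh the inputs, add a bias, optionally
    roll the dice, then either fire (1) or stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    total += random.uniform(-noise, noise)  # the optional dice roll
    return 1 if total > 0 else 0

# Three incoming signals, hand-picked weights -- purely illustrative.
print(neuron([1, 0, 1], weights=[0.5, -0.2, 0.3], bias=-0.6, noise=0.05))
```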


Christ, the histrionics. Not everyone can be the person playing the "I'm the only person taking AI seriously" card. The entire fucking industry is stretching the power of hyperbole, every day!

May 5, 2023·edited May 5, 2023

I see the misunderstanding. I was implying "English" when I wrote "language".


I think you are way too optimistic about what this means -- not saying it is not interesting for its own merit, but it's highly unreliable because it's too shallow. A longer post here: https://medium.com/@vaishakbelle/robert-wrights-thesis-of-geoff-hinton-s-worries-semantic-similarity-is-not-reliable-ae632df9d243


One of these days someone will prime a language model with some sort of simulated somatic experience, which will greatly enrich the model, since so much of our language references somatic experience. (For all I know it's already in there.)

On the topic of vectors, Andrej Karpathy has a totally mind-blowing series on how to construct a ChatGPT. If you have some familiarity with Python and a basic STEM level of math, it is pretty accessible.
