16 Comments

I wish people would not call it an AI. It is a software tool. The best analogy might be with a web browser. You should learn to use it. https://arnoldkling.substack.com/p/can-we-stop-anthropomorphizing-chatgpt I would not judge it based on first impressions or other people's first impressions.

Expand full comment
Jan 26, 2023Liked by Robert Wright

Hey Robert, great piece. IMO what you're talking about here is collective sensemaking -- incrementally overcoming bias and subjectivity to find explicit shared mental models that reflect universal, objective truths. There's a small but growing contingent of people working on AI-augmented collective intelligence -- using machine learning and other methods to accelerate, integrate and debias this process. The AI models that enable this process are the stepping stones to your "mechanical Mr. Spock": partial and probabilistic, but justifiable and self-improving. My company is building open-source infrastructure for this kind of collective process, called dTwins (short for decentralized digital twins). There's a whitepaper at https://bit.ly/dtwins-wp, you might even find a familiar quote there :)

Expand full comment

After reading this post, it occurred to me that "Good Will Hunting," one of my favorite movies, can be seen as a parable about the shortcomings of AI, in this case AI represented by Will's photographic memory and his off the charts analytical capability.

Here's one of my favorite scenes in which Will's psychiatrist played by Robin Williams points out the shortcomings of Will's "artificial" genius.

https://www.youtube.com/watch?v=8GY3sO47YYo

Expand full comment
Jan 26, 2023·edited Jan 26, 2023Liked by Robert Wright

"ChatGPT—the AI whose uncanny imitation of a human mind has been freaking people out over the past few months—has an opinion about torture. Namely: It’s OK to torture Iranians, Syrians, North Koreans, and Sudanese, but not other people."

I find the interpretation which I see lurking behind this quote problematic. I dont claim to know the details of ChatGPT in particular, but large language models in general represent statistical correlations of their training data. Even the concept that ChatGPT "has an opinion" is questionable. Maybe, maybe one can conclude that an idealized human averaged over all training data has this opinion. Another conclusion that seems reasonable to me is that it is questionable to prompt a statistical representation of humanly produced language in this way.

Expand full comment
Jan 29, 2023Liked by Robert Wright

someone combined chat gpt with wolfram alpha to create a factually correct chatbot. (It does require an open AI api code)

https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain

Expand full comment
Jan 27, 2023Liked by Robert Wright

Good piece, but I'd just note that the most human-like quality here is that ChatGPT is the best bullshitter around. Ask it a question that you know the answer to, and when it answers ask it a follow up and watch it cover its tracks (this is why it's great as a term paper substitute, of course). So I agree for slightly different reasons: we have enough bullshitters already, so we don't need AI to replace them.

[For example, here's the tail end of a long "conversation" with ChatGPT I had recently:

me: sorry, my mistake, it was the episode of mannix where mannix hits his brakes but the brake line was cut and he almost goes off a cliff

ChatGPT: No problem, my apologies for any confusion. You are correct that the phrase "Bear beats Battle Star Galactica" comes from the TV show "Mannix" and not "I Love Lucy". In the episode you mentioned, "The Bait," the protagonist Joe Mannix, is driving along a mountain road and his brakes fail. He is able to stop the car just before going over a cliff. He then says the phrase "Bear beats Battle Star Galactica" to express the idea of something seemingly insignificant and unexpected overcoming something powerful and advanced, in this case his car.

The phrase is used in the episode as a way to express the idea of something seemingly insignificant and unexpected overcoming something powerful and advanced, in this case his car. The humor comes from the idea of a simple problem like a cut brake line being able to overcome a highly advanced technology like a car.

I apologize for any confusion and for not catching the mistake in my previous responses.

]

Expand full comment

It is important to realise that all "deep learning" types of AI systems (whether image description systems, natural language translators, chat bots, etc.) work by applying very deep patterns present in huge databases on which they have previously been trained. There is no "understanding" as we humans understand the term. Chat bots are simply calculating what kind of answer would be probable given the textual database they have been trained on. If you train a bot on the exchanges on a blog frequented by rude, uneducated people, it will generate answers just as they would. If you train the same bot on discourse between nice and educated people, it will emulate them. In particular, chat-GPT knows nothing about morality in general and the evil of torture in particular, although when asked about these concepts it will give answers that are consistent with the huge database on which it has been trained. I understand that this database includes the text of published books, newspapers and magazines, scientific papers, official documents, Wikipedia, etc.

So I think Robert Wright's analysis above is basically correct. Given the content of that textual database, "torture" (or related terms) will appear more often in the context of people in Iran or Syria than in the context of people in Sweden or Denmark - hence the structure of his answer in the programme he was asked to write.

The more important question is this: In practical terms, is there anything to understanding beyond the detection and application of deep patterns? Most people would say that there is, but I don't think they would be able to specify what it might be. The idea that conscious experience is necessary for understanding is irrelevant, because it says nothing about the functional power of an AI system which (as far as we know) is not conscious and does nothing more than mechanically detect patterns.

Expand full comment
Jan 26, 2023Liked by Robert Wright

Provide a list of the occasions over the last 50 years in which national governments have orchestrated violence (e.g. using ground forces or missiles or drones or whatever to commit acts of violence in another country) beyond its borders.

I can provide you a list of instances of national governments using military force or violence beyond their borders in the last 50 years:

* Vietnam War (1955-1975)

* Soviet-Afghan War (1979-1989)

* Gulf War (1990-1991)

* Kosovo War (1999)

* Iraq War (2003-2011)

* War in Afghanistan (2001-present)

* Libyan Civil War (2011)

* Syrian Civil War (2011-present)

* Yemen Civil War (2015-present)

* Operation Inherent Resolve (2014-present)

* It is important to note that my knowledge cutoff is 2021 and there may be more recent events not included in this list

Expand full comment

In my thoughts the only way we can come to close to perfect answers is through compromise. Compromise on things we do not want to compromise on. Then in a specific time compromise on the things that are not working. Keep doing this until we have as close to perfect answer as we can. These are the answers that should go into AI. I know it will take forever. We may not have that much time.

Expand full comment

I don’t understand why there would be any question that AI would be screwed up. People in general as a species (the programmers) are screwing up on a regular basis. Watch the news. How can AI not be effected?

Expand full comment

Two quick points.

1) ChatGPT was asked to write a python program do to something. It did. It was not asked to inject any kind of ethical considerations while doing the task. So, it did what it was asked. Did it show a bias? Maybe (probably) but it was not asked to write a bias free python program. Should it have refused to write the program? Maybe? But that's a slippery slope too. The thing to remember is that this tech is in it is infancy and expecting it to be 100% or even 75% at this point, isn't particularly reasonable.

2) I entirely agree, I too wish people or anyone would stop calling it an AI. It is not intelligent by the common understanding of the word. I found this passage from The Algebraist by Iain M. Banks particular sentient given it was written like 20 years ago. It is a bit of dialog from a ship's computer program trying to convince a human that it (the computer program) is not an AI or otherwise a sentient.

The computer starts the of with:

I give the appearance of life without being alive. It is not especially difficult.'

'How do you do this?'

By being able to access my memories, by having trillions of facts and works and books and recordings and sentences and words and definitions at my disposal.' The old man looked at the ends of his fingers. 'I am the sum of all my memories, plus the application of certain rules from a substantial command-set.

I am blessed with the ability to think extremely quickly, so I am able to listen to what you, as a conscious, sentient being, are saying and then respond in a way that makes sense to you, answering your questions, following your meaning, anticipating your thoughts.

'However, all this is simply the result of programs - programs written by sentient beings - sifting through earlier examples of conversations and exchanges which I have stored within my memories and selecting those which seem most appropriate as templates. This process sounds mysterious but is merely compli-cated. It begins with something as simple as you saying "Hello" and me replying "Hello ,, or choosing something similar according to whatever else I might know about you, and extends to a reply as involved as, well, this one.'

THE ALGEBRAIST

page 469 from the hard cover edition

Expand full comment

"The trouble with ChatGPT is the sense in which it’s only human. Its raw material—its cognitive input—is our cognitive output."

Yes, but only to some extent, and there's the fact that it has a lot of build-in metaphysical assumptions that mostly reflect those who created it--people who largely understand the world to be run on algorithms based on binary code, which is a very questionable assumption if there ever was one.

And AI is not human by a long-shot because words make up only a tiny bit of communication and knowledge--most communication is nonverbal (such as via gestures, tone of voice, facial expressions, bodily movements or contacts, and even odours).

And this kind of AI, lacking any kind of lived experience of, say, giving birth to a child, grieving the loss of a loved one, or enjoying a vanilla ice cream cone on the first day of summer, experiences that are essentially ineffable, will just mirror back to us a "world" re-presented as words and then more words--in other words [sic] not the world as we directly experience it moment to moment. The more we're aware of this fundamental and insoluble shortcoming, the better.

“[W]e have to be constantly vigilant to undermine language’s attempt to undermine our understanding.”

Iain McGilchrist (in The Master and His Emissary)

Expand full comment

I asked it to end the Ukraine war in a month. This was the response:

Ending a war is a complex process that requires the cooperation and compromise of all parties involved. It typically involves negotiations, disarmament, and the establishment of a lasting peace agreement. In the case of the ongoing conflict in Ukraine, it would likely require the involvement of multiple international organizations and countries, as well as the participation of the Ukrainian government and the separatist groups in eastern Ukraine. Additionally, the root causes of the conflict, such as political and ethnic tensions, would need to be addressed in order to prevent a resurgence of violence. Therefore, it is unlikely that the war in Ukraine could be settled in one month.

Expand full comment