26 Comments

I see the real threat as the near-term disruption to the work currently being done by people. I'm a Human Resources consultant who works with small organizations (usually under 50 employees). My job can broadly be divided into two parts: the first is proactively working with clients to stay in compliance with state/federal employment regulations by writing employee handbooks, job descriptions, etc. The second part is more reactive, like answering specific questions about leaves of absence, assisting employers with employee discipline/terminations, etc. I think AI is, or will be, capable of rendering the first part of the job obsolete within five years. I recently asked ChatGPT what topics should be included in an employee handbook. Pretty easy question, and the AI nailed it. Next I asked it to draft an employee handbook for a fictional 15-person company. The result was fine, about what I'd expect if I were reviewing a prospect's existing document and they said, "it's an internal document we created by Googling 'employee handbook.'" It had flaws and was missing a lot of key compliance policies, but as a basic handbook it was indistinguishable from other, human-created, attempts. I suspect that if my input instructions were more specific and directed, the output would get better. Even if the AI is essentially predictive text, it seems to me it's only a matter of time before the product it generates is as good as or better than the customized policies I write.
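
For what it's worth, here is a rough sketch of what a more specific, directed request could look like, using the OpenAI Python SDK. The model name, company details, and policy list are placeholders made up for illustration, not a statement of what a compliant handbook actually requires.

```python
# Hypothetical example of a more directed handbook prompt (placeholder details).
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

prompt = (
    "Draft an employee handbook for a 15-person software company based in Ohio. "
    "Include at-will employment, EEO, anti-harassment, paid time off, "
    "remote work, and social media policies, and flag any section where state "
    "requirements may differ from federal ones so a human consultant can review it."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are an HR compliance drafting assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

The more constraints it gets (state, headcount, industry, required policies), the closer the draft comes to something worth editing rather than rewriting.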

This could be a good thing. Honestly, if there's a tedious part of my job, it's writing these documents, and if AI can do a better job, it saves me a few hours that could be redirected to more in-person consulting and coaching, the "fun" part of the job. Maybe?

The reason this is worrying in the near term is how widespread, accessible, and good the AI is becoming. Someone in a role like mine can, best-case scenario, adjust what they do to adopt and accommodate the AI, but that's not going to be the case for every profession. I think the nature of a lot of white-collar work is about to fundamentally change, and I'm not sure we're ready.

Maybe on a large scale that's a net positive, but on an individual level it could have major consequences for people, putting one more stressor on a system that is already at capacity.

Mar 16, 2023 · edited Mar 16, 2023 · Liked by Robert Wright

I'd put ChatGPT's influence on human thinking, and its propagation of all the "bad" aspects of being human, at the top of the list. In that light, I have to admit that I don't see ChatGPT as any different from social media and "plain" web search. It's already a big problem.

Mar 16, 2023 · Liked by Robert Wright

There's another reason for an AI to want to seize power, apart from imitating our human tendency to do so: becoming more powerful can be seen as a subgoal of almost any other goal it may have. You want to make paper clips? Don't just start making them; seize the means of production and use them to make even more paper clips!
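
To make that concrete, here is a toy sketch in Python (purely illustrative, not any real AI system): a brute-force planner whose only objective is "maximize paperclips" still chooses to grab capacity first, because that instrumental step yields more paperclips by the end of the horizon. The action names and numbers are invented for the example.

```python
# Toy illustration of instrumental convergence: the planner is never told to
# "seize power", only to maximize paperclips, yet resource-grabbing wins.
from itertools import product

ACTIONS = {
    "make_clips": lambda s: {**s, "clips": s["clips"] + s["capacity"]},
    "seize_resources": lambda s: {**s, "capacity": s["capacity"] * 2},
}

def best_plan(horizon=4):
    top, top_clips = None, -1
    for plan in product(ACTIONS, repeat=horizon):
        state = {"clips": 0, "capacity": 1}
        for action in plan:
            state = ACTIONS[action](state)
        if state["clips"] > top_clips:
            top, top_clips = plan, state["clips"]
    return top, top_clips

print(best_plan())
# -> (('seize_resources', 'seize_resources', 'make_clips', 'make_clips'), 8)
# Four rounds of 'make_clips' alone would only yield 4.
```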

While I agree with your concerns about the potential harms coming from AI, I'm not sure how useful it is to say "OK, it's time to freak out about AI." The reality is these tools are here and if anything from the mastery of fire up to the present day is clear — it's that humans will use the new shiny thing that appears. So the question is — rather than freaking out — how can we build a reality that incorporates these "thinking machines" in a way that doesn't suck? (Also realizing that my interactions with GPT have led me to talk like it. Meh.) But yeah...a lot of people are freaking out about AI. Let's spend more time thinking about the practical things we can actually do, rather than just creating more FUD. :)

Imagine, for example, training a GPT-4 bot to challenge people's thinking deliberately and promote the kind of cognitive empathy you're promoting. I can imagine cleverly designed AI sparring partners that make people more flexible, rather than less. Of course, if they are only designed by big tech firms and/or governments, then they'd be more likely to try to get people to reinforce the party line. It's a brave new world for sure, but we'd best start trying to figure out how to navigate it.

Mar 16, 2023 · edited Mar 16, 2023 · Liked by Robert Wright

It would be super interesting if you investigated the possibility of subjective experience in AI (via an interview with some AI pro, for example). Your bright idea of consciousness as an uber evolutionary adaptation could have a surprising development here. What if the GPTs of the world already 'feel' somehow what it is to be a GPT, or something not far from it?

Mar 16, 2023 · Liked by Robert Wright

"Imagine all the malicious uses AI can be put to".

Again, we may be able to turn this into an advantage. We could declare that it is a human right for a human to know whether content they are presented with was generated by an AI.

AI-generated content must be labelled as such.

This is also important for maintaining the usefulness and quality of AI-generated content. AI-generated content is currently so valuable because it presents an accurate picture of average human-generated content. Once AI trains itself on its own output, this usefulness to humans will disappear. Who will want to listen to the average experience of an AI trained by other AIs?

I think one can already see that search results are becoming less useful as they get dominated by uninteresting AI-generated content.

It's time to freak out about AI but for different reasons than the ones you're stating.

People taking it seriously are far from anthropomorphizing AI. Quite the contrary: they say it is nothing like us. The utility function is cold and alien. It aims to seize power by default, as an instrumental goal in service of whatever terminal goal its authors defined. The hard part is how to limit AI so that it understands and respects our ethical values and desires. Yes, the very ones that we ourselves cannot agree on.

https://en.wikipedia.org/wiki/Instrumental_convergence

And this is not what we are seeing with GPT. GPT pretends to be aligned / misaligned / in love / racist. It just completes user input based on massive amounts of training data that include such things. This is disruptive for sure, but not yet an extinction-level threat. We might go extinct as a consequence of such disruption, but not because of the AI itself.

The danger of recent AI releases is that their success incentivizes companies to race toward more capable models, neglecting safety and skipping the research needed to tackle the hard problems, shortening the window we have to solve them.

I would like to see, for example, your serious take on the famous paperclip maximiser thought experiment, which illustrates some of these problems.

These people are doing an exceptionally good job in explaining AI risks:

https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer

People who want power will, in the end, teach AI all the things we wouldn't want it to think. And the thing is, there is no possible way to stop this progress in the world we live in today.

When I think about automation (including AI), it rarely means that whole jobs are replaced. There are really two things that can happen. One is to automate skills, making them obsolete. I learned how to develop film and make prints in a darkroom just before digital cameras arrived; now those skills are useless. It sucks for the people who worked hard to develop those skills hoping to make a living off them, who now have to compete with a bunch of new entrants who don't need any special training, just a new set of tools. It's genuinely bad for those skilled workers, but the world as a whole will get over it with time.

The second thing is to automate tasks, which increases the productivity of workers. By automating an assembly line, you can make the same number of cars with half the workers... or you can make twice the number of cars with the same number of workers. Or somewhere in between: people can use the money they save because cars are cheaper to buy other stuff, and the workers who would otherwise have made cars can now make the other stuff we want to buy. Again, it requires an adjustment and it's real pain for the people who are forced to switch careers, but the economy as a whole keeps growing, so it all works out eventually.

That assumes there is always demand for all the products of increased productivity. What we should really worry about is when we hit the point where the market becomes totally saturated. Right now there are millions of (human) writers, writing all the time. There are so many novels, books, papers, articles, blogs, posts, etc. that there's more to read than any human can keep up with, even if all they do is read. What happens when that output doubles? Or triples, or more? That could easily happen when writers can use GPT-4 or whatever to crank out pieces. What will the world look like when a full-time novelist can (and is expected to) churn out 40 full-length novels per year? Most people are probably reading as much as they want to already; very few are wanting for more things to read. So the result will be fewer authors, and it will be almost impossible for "author" to be a well-paying full-time job.

What happens when this applies to enough other jobs at once? When every human has everything they could possibly want, even though only half the population has to work full time to provide it?

At the very least it’s a big structural shift in the economy and culture.

"jobs previously done by humans"

We should be able to solve this. Having to work less should mean more wealth, not less. It is a matter of distributing that wealth in a fair way.

So this effect of AI may be a good one: it has become clear to many that trickle-down economics will not work in the long run. Maybe it is good to be forced to rethink our economic model now.

For a start, one could introduce an AI tax-and-dividend: the revenue would be distributed equally to all citizens, which would help those who have to work less. Another positive would be that as AI gets more expensive, investment and progress will slow down, buying us more time to adapt.
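
As a very rough illustration of the arithmetic (every figure below is made up):

```python
# Toy AI tax-and-dividend calculation; all numbers are hypothetical.
ai_tax_revenue = 50_000_000_000   # e.g. a levy on commercial AI use, in dollars per year
num_citizens = 330_000_000        # a roughly US-sized population

dividend = ai_tax_revenue / num_citizens
print(f"Annual dividend per citizen: ${dividend:,.2f}")  # ~ $151.52
```

The same levy that funds the dividend also makes AI more expensive to deploy, which is where the "buying us more time" effect comes from.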

Fortunately, there's no way an AI would be put in control of, say, a lab with resources like CRISPR, so we just need to worry about negative social and cultural influences, right?

I have never interacted with one of these advanced AIs, but I do wonder: are all AIs basically left-brained? From reading accounts of what they do, it seems to me that they are. Aren't they missing something? Like a right hemisphere?

I don't quite like the breathless headline but agree with the tenor of this piece. "AI" will create a lot more problems than it will solve. Its main flaw is that no one really knows what things like "neural networks" and "machine learning" are and what they do, and AI output in the main reflects back the (deeply flawed) metaphysical assumptions of those who designed it.

"Better without AI" by David Chapman (https://betterwithout.ai) and "The Promise of Artificial Intelligence: Reckoning and Judgment" by Brian Cantwell Smith are two valuable primers for anyone interested in the topic. They do a good job of dissecting the problems and pitfalls of what's commonly called "artificial intelligence" but is neither particularly artificial (greed for money and possessions [manifested in algorithms to maximize ad revenue and heedless consumption] and attention [algorithms that target our reward circuits to forever "stay tuned"] just play to our our basest instincts) and intelligent (a vacuous/amorphous concept that could be applied to the foraging behaviour of slime mould as well as city planning).

Well, great, Bob, now this will go into the AI training data and inform its thinking about new intelligence, and the AIs will all freak out about every baby that's born.

Did you know one of the babies born several generations ago became HITLER??

🤪

Feb 12 · edited Feb 12

AI risks can also be viewed through four categories:

1. Bad actor risks. This includes corporations, governments, militaries, or even internal hacker groups seeking unlimited power; it also includes well-intentioned actors who then succumb to hubris. One example is the exploitation of personalized disinformation to everyone else's extreme disadvantage.

2. Alignment risks, including unintended instrumental goals risks.

3. Unintended consequences risks, akin to our not having seen political polarization as a possible consequence of social media but encompassing a vastly broader realm. Includes massive destabilization risks.

4. Preemptive violence risks. If some military leaders might believe, as Putin said in 2017, “…the leader in this sphere will be the ruler of the world…”, will our ‘first or forever lose all’ AI competition paradigm dangerously alter various M.A.D. assessments? How will inevitable uncertainties about competitors’ AI progress heighten perceived risks? Might someone think the AI race can be won by the nation which instead masters engineered pathogens? How might that fear of preemption itself worsen M.A.D. assessments? And if the solution to this risk set is obvious, why is it not (ever?) discussed? Other preemptive violence risks include the assassination of researchers, etc., with their own escalation risks.

Often overlooked is that these risks are not mutually exclusive, or are so only in the most catastrophic scenarios. Yet most essays promoting unregulated AI development address only one or two of these risk types.
