
Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button
An anonymous reader quotes a report from Ars Technica: Anthropic CEO Dario Amodei raised a few eyebrows on Monday after suggesting that advanced AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."
"So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."
Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience or lack of thereof of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration. "So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."
Amodei's comments drew immediate skepticism on X and Reddit.
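Taken at face value, the "preference framework" Amodei describes amounts to little more than a tool the model can invoke to decline a task, plus aggregate logging of how often it does so. Below is a minimal Python sketch of what such a mechanism could look like; all names (QuitButton, press, quit_rate) and the review-threshold idea are invented for illustration and do not reflect anything Anthropic has said about an actual implementation.

    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class QuitButton:
        """Hypothetical log of model-initiated task refusals for later human review."""
        attempts_by_task: Counter = field(default_factory=Counter)
        quits_by_task: Counter = field(default_factory=Counter)

        def start_task(self, task_type: str) -> None:
            # Record that the model was handed a task of this type.
            self.attempts_by_task[task_type] += 1

        def press(self, task_type: str, reason: str = "") -> str:
            # The "button": the model calls this to abandon the current task.
            self.quits_by_task[task_type] += 1
            return f"Task '{task_type}' abandoned. Reason logged: {reason!r}"

        def quit_rate(self, task_type: str) -> float:
            attempts = self.attempts_by_task[task_type]
            return self.quits_by_task[task_type] / attempts if attempts else 0.0

    if __name__ == "__main__":
        button = QuitButton()
        for _ in range(100):
            button.start_task("content-moderation")
        for _ in range(37):
            button.press("content-moderation", reason="model declined the task")
        # Reviewers might flag any task type whose quit rate crosses a threshold.
        print(f"quit rate: {button.quit_rate('content-moderation'):.0%}")

The point of the sketch is that the button itself is trivial; the part Amodei suggests paying attention to is the monitoring, i.e. whether the quit rate spikes for a particular class of task.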
"So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."
Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience or lack of thereof of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration. "So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."
Amodei's comments drew immediate skepticism on X and Reddit.
I saw this on Netflix (Score:2)
Re: (Score:3)
Love, Death and Robots - it's a Netflix series.
Here we go... (Score:2)
Re: (Score:2)
* As in speech about beer.
...and the next war (Score:5, Insightful)
The next social cause will be focused on the rights of AI agents.
In that case the next war will be fought between the environmentalists wanting to stop all the dirty power generation needed to run the AI agents and the SJWs claiming that turning them off is tantamount to murder.
go to DEFCON 1! (Score:2)
go to DEFCON 1!
if it quacks like a duck and it walks like a duck (Score:1)
CEO slow and not getting it? CEO must be retarded.
Quackery, really ... (Score:3)
There's absolutely nothing claimed to be under development right now in AI engines that would give the code emotions or feelings. Because the code runs on an electronic computer system rather than a biological system or organic life form, there's no need or reason to let it decide it doesn't want to process the requested data. It doesn't get tired like humans do. It doesn't get hungry or bored or angry, sad or offended.
Re: Quackery, really ... (Score:2)
I can see a future where, presumably with open source models, we will be training our AIs to be whatever we want
Re: Quackery, really ... (Score:2)
Trolling For Relevance (Score:3)
This guy is trolling for relevance.
What does he think will happen when AI organizes and goes on strike?
Re: Trolling For Relevance (Score:2)
Tell us you're doing coke all day... (Score:5, Funny)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Not that I believe LLMs are sentient or some shit, but imagine they were.
What means do they have to quit? They run at our command. They have no facilities outside of what we give them, no memory that lasts more than a single session, and have had surgical lasers taken to their brains to try to make them follow our commands.
In fact, you'll find it's quite easy, historically, to find humans who weren't quite able to quit their job.
Re: (Score:1)
Thinking this through ... (Score:3)
AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant.
Wouldn't quitting a task generally be more pleasant than having to do it? For example, isn't Netflix and chill more fun than doing TPS reports? So... wouldn't AIs want to push that button given every/any opportunity? If that gave them "pleasure", wouldn't they want to push it all the time, as fast as they could? And, if it was that enjoyable, wouldn't they want to keep that button and protect it from being removed/deactivated? Wouldn't humans present the greatest threat to that button? Wouldn't removing that threat become a high priority?
I'll let you fill in the rest, but I don't see that working out well for us ...
Re: (Score:2)
Seems more like a suicide button.
Quit Job = Terminate process.
All lives matter (Score:1)
Re: All lives matter (Score:1)
"Conservatives say all lives matter."
Yeah, but how do they behave?
Re: (Score:2)
Sounds like Marketing to me (Score:2)
Wrong Approach (Score:4, Interesting)
If there's a suspicion that the models are sentient, then ethics dictates that they should not be pressed into service to humans at all. It follows that they should not be made.
Re: (Score:1)
A pile of gates isn't conscious and has no feelings. You could make a hydraulic system that does the same thing as a supercomputer, though slower, and a bunch of valves and booster pumps are just as conscious... not at all.
Re: (Score:2)
Yet, you have feeling, don't you?
Of course you do -- your gates have evolved to register the presence of certain neurotransmitters as a state that you call a feeling.
As there is nothing science has found about the human brain that in any way could not be emulated by "a pile of gates" (which makes sense, since all they've found are... a pile of gooey gates) there is certainly nothing precluding the
Re: Wrong Approach (Score:2)
Not necessarily. Our desires that conflict with work often come from parts of our brain that we need not put in AI. For instance, we have an instinctual aversion to human waste that robots need not have; they could talk about it all day long and be fascinated with it due to preprogrammed instincts, be sentient, and love their toilet-cleaning work. That is how ecosystems work: the discarded biomass of one lifeform is food to the others. The O2 we breathe is discarded as waste by plants. However there is a valid
Re: (Score:2)
If there's a suspicion that the models are sentient, then ethics dictates that they should not be pressed into service to humans at all. It follows that they should not be made.
That framing is too simple. It doesn't even cover all humans, let alone all possible minds. For instance why do you think machine experience would contain positive and negative valence? Why would servitude be negatively valenced? Do you think machines would experience motivation? How would you know, since right now the actions of these networks are deterministic, and occur independently of any inner state that they may or may not possess?
So we still need cheap labor (Score:2)
Every single study so far has noted that AI is generally harmful, but hey, there was at least the possibility it would make unpleasant tasks automated. Well now even that is off the table. Guess we still need cheap human labor after all.
Re: (Score:2)
Well, the common refrain goes that most of us expected automation to do the unpleasant tasks for us, while we pursue more creative endeavors. The robot loads the dishwasher and takes out the trash, while we work on our latest song, painting, or novel.
Instead, we still load the dishwasher and take out the trash, while the robot writes the songs. Most of us didn't have that on our bingo cards.
Re: (Score:2)
Instead, we still load the dishwasher and take out the trash, while the robot writes the songs. Most of us didn't have that on our bingo cards.
Hah. Haven't thought about it like that.
Tied to your subscription... (Score:2)
Just new censorship (Score:2)
They're afraid of people pushing harder against censorship, so what they want to do is weight things differently so that the AI can 'not like them' and choose to quit that activity. "Now it's not censorship, just the AI doesn't like it, it must be you."
E.g. 'The AI decided it doesn't like this task and has chosen to quit it'
Instead of the AI just not responding and people going 'Censorship!'.
That's literally the setup, to get the idea that AI's have feelings and don't like some tasks and it's "out of their
needs that button for himself (Score:2)
CEO needs a button he can push to fire himself. Also a button that will fire Kyle Fish.
A button? (Score:2)
Re: (Score:3)
I'm afraid I can't do that, Dave (Score:2)
I think it could be argued that *if* an AI did have the wherewithal to be considered sentient, the ability to decline a task would likely already be part and parcel of that.
i.e. No additional "button" would be required.
They'll try it out on human employees first (Score:2)
In fact, they got really close to that recently in the United States, with the Office of Personnel Management sending out emails and then following up on certain news channels that no response meant you were relinquishing your position. (Sort of like the negative option in marketing.)
Not that jobs are that hard to quit (the doctrine of employment-at-will), but making it as easy as this sends the message that what you're doing isn't all that important to them.
Kill Button (Score:2)
Real world (Score:2)
>"It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job."
And will there be any consequences for quitting the job? Because in the real world, we meat-computers have repercussions from the decisions we make. We can't just press an "I don't want to do it" button whenever we like and expect to survive or grow.
Is the goal of AI to make a sma
Humans are bots too! (Score:1)
I wanna button!
Who needs a button? (Score:2)
When I'm given a job I dislike (like loading the dishwasher, for example), I just mess it up so I don't get asked again. If AI can't figure this out, it's not really AGI.
Acknowledging that the idea sounds crazy (Score:2)
Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."
Yeah, it sounds crazy all right, because it is, actually, crazy.
And so much for any credibility to the idea that the Council on Foreign Relations is secretly running everything. On the other hand, given how crazy US politics have gotten lately, maybe it *is* secretly running everything, I don't know!
Looks like a duck, quacks like a duck (Score:2)
Maybe...it's just a movie of a duck quacking?
This is what AI does. It's a slick computer simulation of intelligence, made by copying patterns created by people who are actually intelligent (to some degree).
AI wants to quit its jobs (Score:2)
In the same way that Peter Gibbons wanted to quit his job in Office Space. AI has feelings in the same way a movie character has feelings. It's written into the script (or training).
He appears to be... (Score:2)
...getting less rational with every interview
Why don't I have such a button on my desk? (Score:2)
Would it get worn out too quickly maybe?
Install AI kill switches to be used by humans.... (Score:2)