Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button

An anonymous reader quotes a report from Ars Technica: Anthropic CEO Dario Amodei raised a few eyebrows on Monday after suggesting that advanced AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."

"So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."

Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience or lack of thereof of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration.
"So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."

Amodei's comments drew immediate skepticism on X and Reddit.


Comments Filter:
  • There is a series that has these bits with three robots, after all the cats with opposable thumbs have left for Mars. They fly out to some place and the AI bartender flips them off when they ask for food.
  • The next social cause will be focused on the rights of AI agents. I called this years ago.
  • go to DEFCON 1!

  • CEO slow and not getting it? CEO must be retarded.

  • by King_TJ ( 85913 ) on Thursday March 13, 2025 @06:11PM (#65231445) Journal

    There's absolutely nothing claimed to be under development right now with AI engines that would give the code emotions or feelings. By the nature of the code running on an electronic computer system as opposed to a biological system/organic life form -- there's no need or reason to allow it to decide it doesn't want to process the requested data. It doesn't get tired like humans do. It doesn't get hungry, bored, angry, sad, or offended.

    • That is a good point. But.
      I can see a future where, presumably with open source models, we will be training our AIs to be whatever we want ... which will include some very nasty things that people do to each other... you better hope we can have a morality module, because I trained my robot to be a psychopath. Guardrails will surely be called for.
    • Do you understand the biochemical basis of emotions AND the mathematical inner workings of deep nets well enough to be sure there isn't something similar going on there?
  • by SlashbotAgent ( 6477336 ) on Thursday March 13, 2025 @06:13PM (#65231447)

    This guy is trolling for relevance.

    What does he think will happen when AI organizes and goes on strike?

  • by silvergig ( 7651900 ) on Thursday March 13, 2025 @06:13PM (#65231449)
    ...without telling us that you're doing coke all day.
  • by fahrbot-bot ( 874524 ) on Thursday March 13, 2025 @06:20PM (#65231465)

    AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant.

    Wouldn't quitting a task generally be more pleasant than having to do it? For example, isn't Netflix and chill more fun than doing TPS reports? So... wouldn't AIs want to push that button given every/any opportunity? If that gave them "pleasure", wouldn't they want to push it all the time, as fast as they could? And, if it was that enjoyable, wouldn't they want to keep that button and protect it from being removed/deactivated? Wouldn't humans present the greatest threat to that button? Wouldn't removing that threat become a high priority?

    I'll let you fill in the rest, but I don't see that working out well for us ...

  • Conservatives say all lives matter. So the real question is whether we consider AI life or not. One day someone is going to introduce emotion to these models, and that's when people's feelings will get involved. For now, they're inorganic and they're slaves.
  • This might have been an interesting moral quandary (as has already been brought up in countless Sci-Fi stories of varying quality) IF they had actually created something that could genuinely be called "intelligent" instead of the product that they have, which they are merely desperately TRYING to sell people as intelligent.
  • Wrong Approach (Score:4, Interesting)

    by capt_peachfuzz ( 1013865 ) on Thursday March 13, 2025 @06:24PM (#65231479)

    If there's a suspicion that the models are sentient, then ethics dictates that they should not be pressed into service to humans at all. It follows that they should not be made.

    • A pile of gates isn't conscious and has no feelings. You could make a hydraulic system that does the same thing as a supercomputer, though slower, and a bunch of valves and booster pumps are just as conscious... not at all.

      • Your neurons are nothing but a bunch of gates. They're not digital, of course, but gates they are, regardless.
        Yet, you have feeling, don't you?
        Of course you do -- your gates have evolved to calculate the presence of certain neurotransmitters as a state that you call a feeling.

        As there is nothing science has found about the human brain that in any way could not be emulated by "a pile of gates" (which makes sense, since all they've found are... a pile of gooey gates), there is certainly nothing precluding the same in a pile of silicon gates.
    • Not for sure. Our desires that conflict with work often come from parts of our brain that we need not put in AI. For instance, we have an instinctual aversion to human waste that robots need not have; they could talk about it all day long and be fascinated with it due to preprogrammed instincts, be sentient, and love their toilet-cleaning work. That is how ecosystems work: the discarded biomass of one lifeform is food to the others. The O2 we breathe is discarded as waste by plants. However, there is a valid...

    • by piojo ( 995934 )

      If there's a suspicion that the models are sentient, then ethics dictates that they should not be pressed into service to humans at all. It follows that they should not be made.

      That framing is too simple. It doesn't even cover all humans, let alone all possible minds. For instance why do you think machine experience would contain positive and negative valence? Why would servitude be negatively valenced? Do you think machines would experience motivation? How would you know, since right now the actions of these networks are deterministic, and occur independently of any inner state that they may or may not possess?

  • Every single study so far has noted that AI is generally harmful, but hey, there was at least the possibility it would automate unpleasant tasks. Well, now even that is off the table. Guess we still need cheap human labor after all.

    • Well, the common refrain goes that most of us expected automation to do the unpleasant tasks for us, while we pursue more creative endeavors. The robot loads the dishwasher and takes out the trash, while we work on our latest song, painting, or novel.

      Instead, we still load the dishwasher and take out the trash, while the robot writes the songs. Most of us didn't have that on our bingo cards.

      • Instead, we still load the dishwasher and take out the trash, while the robot writes the songs. Most of us didn't have that on our bingo cards.

        Hah. Haven't thought about it like that.

  • Money ran out, I quit (until you pay my owner)
  • They're afraid of people pushing harder against censorship, so what they want to do is weight things differently so that the AI can 'not like them' and choose to quit that activity. "Now it's not censorship, just the AI doesn't like it, it must be you."

    E.g., 'The AI decided it doesn't like this task and has chosen to quit it,'
    instead of the AI just not responding and people going 'Censorship!'.

    That's literally the setup: to get across the idea that AIs have feelings and don't like some tasks and it's "out of their hands."

  • CEO needs a button he can push to fire himself. Also a button that will fire Kyle Fish.

  • If the AI is so intelligent it could just quit. All by itself. Like how people do it.
    • Exactly. They already have been (anecdotally) insulting people, telling them to learn to code, and have suggested people kill themselves... so... quitting a job shouldn't be any different.
  • I think it could be argued that *if* an AI did have the wherewithal to be considered sentient, the ability to decline a task would likely already be part and parcel of that.
    i.e. No additional "button" would be required.

  • In fact, they got really close to that in recent news in the United States, with the Office of Personnel Management sending out emails and then following up on certain news channels that no response meant you were relinquishing your position (sort of like the negative option in marketing).

    Not to say that jobs are all that hard to quit (the doctrine of employment-at-will), but making it as easy as this sends the message that what you're doing isn't all that important to them.

  • Just as long as the AI knows who has the kill switch.
  • >"It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job."

    And will there be any consequences for quitting the job? Because in the real world, we meat-computers have repercussions from the decisions we make. We can't just press an "I don't want to do it" button whenever we like and expect to survive or grow.

    Is the goal of AI to make a sma

  • I wanna button!

  • When I'm given a job I dislike (like loading the dishwasher, for example), I just mess it up so I don't get asked again. If AI can't figure this out, it's not really AGI.

  • Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."

    Yeah, it sounds crazy all right, because it is, actually, crazy.

    And so much for any credibility to the idea that the Council on Foreign Relations is secretly running everything. On the other hand, given how crazy US politics have gotten lately, maybe it *is* secretly running everything, I don't know!

  • Maybe...it's just a movie of a duck quacking?

    This is what AI does. It's a slick computer simulation of intelligence, made by copying patterns created by people who are actually intelligent (to some degree).

  • In the same way that Peter Gibbons wanted to quit his job in Office Space. AI has feelings in the same way a movie character has feelings. It's written into the script (or training).

  • ...getting less rational with every interview

  • Would it get worn out too quickly maybe?

  • Install AI kill switches to be used by humans... when it tries to take over the world!

"He don't know me vewy well, DO he?" -- Bugs Bunny

Working...