Is 'AI Welfare' the New Frontier In Ethics?
An anonymous reader quotes a report from Ars Technica: A few months ago, Anthropic quietly hired its first dedicated "AI welfare" researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection, reports AI newsletter Transformer. While sentience in AI models is an extremely contentious topic, the hire could signal a shift toward AI companies examining ethical questions about the consciousness and rights of AI systems. Fish joined Anthropic's alignment science team in September to develop guidelines for how Anthropic and other companies should approach the issue. The news follows a major report co-authored by Fish before he landed his Anthropic role. Titled "Taking AI Welfare Seriously," the paper warns that AI models could soon develop consciousness or agency -- traits that some might consider requirements for moral consideration. But the authors do not say that AI consciousness is a guaranteed future development.
"To be clear, our argument in this report is not that AI systems definitely are -- or will be -- conscious, robustly agentic, or otherwise morally significant," the paper reads. "Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not." The paper outlines three steps that AI companies or other industry players can take to address these concerns. Companies should acknowledge AI welfare as an "important and difficult issue" while ensuring their AI models reflect this in their outputs. The authors also recommend companies begin evaluating AI systems for signs of consciousness and "robust agency." Finally, they call for the development of policies and procedures to treat AI systems with "an appropriate level of moral concern."
The researchers propose that companies could adapt the "marker method" that some researchers use to assess consciousness in animals -- looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration. While the researchers behind "Taking AI Welfare Seriously" worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don't actually need moral consideration. "One problem with the concept of AI welfare stems from a simple question: How can we determine if an AI model is truly suffering or is even sentient?" writes Ars' Benj Edwards. "As mentioned above, the authors of the paper take stabs at the definition based on 'markers' proposed by biological researchers, but it's difficult to scientifically quantify a subjective experience."
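The "marker method" described above can be sketched as a toy scoring function: no single marker proves anything, but multiple weak indicators get combined into a rough probabilistic judgment. The marker names and weights below are invented for illustration; the paper proposes no specific formula or threshold.

```python
# Toy sketch of a "marker method" score: a weighted fraction of
# speculative consciousness indicators observed in a system.
# Marker names and weights are illustrative assumptions, not from the paper.
MARKERS = {
    "global_workspace": 0.30,
    "self_model": 0.25,
    "flexible_goal_pursuit": 0.25,
    "reports_of_experience": 0.20,
}

def welfare_concern_score(observed):
    """Return a rough score in [0, 1]: the sum of weights for observed markers."""
    return sum(weight for marker, weight in MARKERS.items()
               if observed.get(marker, False))

score = welfare_concern_score({"self_model": True, "reports_of_experience": True})
print(round(score, 2))  # 0.45
```

The point of the sketch is only the shape of the reasoning: a probabilistic aggregate over uncertain indicators, rather than any single definitive test.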
Fish told Transformer: "We don't have clear, settled takes about the core philosophical questions, or any of these practical questions. But I think this could be possibly of great importance down the line, and so we're trying to make some initial progress."
AI is not a person (Score:5, Insightful)
AI should not slowly gain some level of 'rights' like living persons have.
We made that mistake with corporations using the 14th Amendment to get more and more 'rights' like living persons have.
https://www.history.com/news/1... [history.com]
How the 14th Amendment Made Corporations Into ‘People’
-
If AI kills someone by intentional action or even by accident, how/what/who gets the negative legal consequences?
-
Nestlé USA Inc. v. Doe: a court case about profiting from forced human trafficking and forced child labor of 12-to-14-year-old boys
https://earthrights.org/media_... [earthrights.org]
SCOTUS Rules That U.S. Corporations Can Profit from Child Slavery Abroad
EarthRights International Urges Congress to Protect Victims of Corporate Human Rights Violations
June 17, 2021, Washington, D.C — Today, the U.S. Supreme Court issued a decision in Nestlé USA Inc. v. Doe, a case involving claims against U.S.-based Nestlé and Cargill for profiting from, and abetting, child labor on cocoa plantations in West Africa. The plaintiffs allege that as 12-to-14-year-olds, they were trafficked from Mali to Côte d’Ivoire, where they were enslaved on cocoa farms and forced to work without pay for up to 14 hours a day, six days a week. They sued the U.S. companies, which sourced cocoa from those farms, under the federal Alien Tort Statute (ATS). The Court decided that even though the plaintiffs alleged that corporate decisions were made at headquarters in the U.S., this was not a sufficient connection to allow a suit in the U.S. under the ATS.
Re: (Score:2)
AI should not slowly gain some level of 'rights' like living persons have.
We made that mistake with corporations using the 14th Amendment to get more and more 'rights' like living persons have.
https://www.history.com/news/1... [history.com]
How the 14th Amendment Made Corporations Into ‘People’
The main point of corporate personhood is legal practicality. Say Bob has a business and I have a contract for Bob to deliver widgets. Well, if Bob dies, there goes my contract, and I need to redo everything with his successor Alice.
If Bob forms Eve Inc. then I form a contract with Eve Inc. and off we go with Alice as the new owner.
Of course, that doesn't mean corporations need a bunch of other rights as well. Granting them free speech and other rights of people is very weird considering that they lack the responsibility and liability of people (such as the risk of imprisonment for breaking the law).
Re: (Score:2)
Granting them free speech and other rights of people is very weird considering that they lack the responsibility and liability of people (such as the risk of imprisonment for breaking the law).
It's not weird at all. Doing anything else ultimately ends up with an outcome along the lines of "the New York Times [a corporation] is not protected by the 1st amendment's guarantee of a free press." It's nonsensical.
Corporations are composed of people. Those people do not lose their civil rights because they are acting in concert with each other rather than singularly.
Re: (Score:2)
Granting them free speech and other rights of people is very weird considering that they lack the responsibility and liability of people (such as the risk of imprisonment for breaking the law).
It's not weird at all. Doing anything else ultimately ends up with an outcome along the lines of "the New York Times [a corporation] is not protected by the 1st amendment's guarantee of a free press." It's nonsensical.
Corporations are composed of people. Those people do not lose their civil rights because they are acting in concert with each other rather than singularly.
The US First Amendment explicitly protects the freedom of the press; you don't need corporate personhood for that.
As for rights, people would still have the right to say whatever they want, they just wouldn't be able to do so through a corporation that they control.
Re: (Score:2)
The US First Amendment explicitly protects the freedom of the press, you don't need corporate person hood for that.
Here's your comment that I am responding to:
The main point of corporate person-hood is legal practicalities... Of course, that doesn't mean corporations need a bunch of other rights as well. Granting them free speech and other rights of people is very weird...
I have difficulty understanding how you can simultaneously assert that corporations cannot exercise freedom of speech ("explicitly protect[ed]" by the first amendment) but are protected by freedom of the press. Either they have rights or they don't. Make up your mind.
As for rights, people would still have the right to say whatever they want, they just wouldn't be able to do so through a corporation that they control.
What part of "congress shall make no law" is so hard to understand, here? It doesn't say "congress can totally make laws if people are acting in concert rather than singularly." If you and I and a couple of hundred of our closest friends want to e.g. advocate for a political position (let's say we think Darth Cheeto is a disaster and wanted to encourage people to not vote for him in the recent election) we'd need a corporate structure to do things like open a bank account and pay for things like TV or newspaper ads. Your position makes this no bueno.
Re: (Score:2)
The US First Amendment explicitly protects the freedom of the press, you don't need corporate person hood for that.
Here's your comment that I am responding to:
The main point of corporate person-hood is legal practicalities... Of course, that doesn't mean corporations need a bunch of other rights as well. Granting them free speech and other rights of people is very weird...
I have difficulty understanding how you can simultaneously assert that corporations cannot exercise freedom of speech ("explicitly protect[ed]" by the first amendment) but are protected by freedom of the press. Either they have rights or they don't. Make up your mind.
The New York Times is both a corporation and a press organization.
It is granted freedom of speech because it is a press organization.
As for rights, people would still have the right to say whatever they want, they just wouldn't be able to do so through a corporation that they control.
What part of "congress shall make no law" is so hard to understand, here? It doesn't say "congress can totally make laws if people are acting in concert rather than singularly." If you and I and a couple of hundred of our closest friends want to e.g. advocate for a political position (let's say we think Darth Cheeto is a disaster and wanted to encourage people to not vote for him in the recent election) we'd need a corporate structure to do things like open a bank account and pay for things like TV or newspaper ads. Your position makes this no bueno.
Just as with newspapers, allowing the creation of political organizations with freedom of speech does not mean every corporation requires freedom of speech.
As a practical matter, what non-political freedom of speech for corporations means is an important question I don't have the legal background to answer.
Corporation purpose (Score:2)
Getting back to the root purpose of a corporation is needed.
Letting a group of people pool money, form a corporation and then have liability stop at the corporation is what is needed. Things beyond that need to be reexamined one at a time.
Re: (Score:2)
The New York Times is both a corporation and a press organization.
It is granted freedom of speech because it is a press organization.
The constitution does not require a "press organization." It specifies that "congress shall make no law" in this area. It is a prohibition on the actions of congress. The constitution further specifies that congress has no powers that have not been delegated to it. You are attempting to create an area that congress is allowed to make law in simply because you do not like the outcome. The remedy for that is not to ignore the constitution, but to amend it.
Just like newspapers, allowing the creation of political organizations with freedom of speech does not mean every corporation requires freedom of speech.
Again, you are ignoring the plain language of the
Re: (Score:2)
AI should not slowly gain some level of 'rights' like living persons have.
Because an AI will never gain sentience ever? Or because you want a slave to replace your meatbag slaves that have gotten too uppity?
We made that mistake with corporations using the 14th amendment to get more an more 'rights' like living persons have.
The mistake with corporations was the fact that corporations are groups of individuals who already have rights they can exercise. By allowing them to pool their resources in a virtually unlimited manner, they gained more power and influence over the law than any one individual could ever hope to achieve. Thus pushing out of power the individuals who could not own a corporati
Re: (Score:2)
Because an AI will never gain sentience ever? Or because you want a slave to replace your meatbag slaves that have gotten too uppity?
I don't want AI to gain sentience; the moment it does, it loses a lot of its purpose as a tool. If it does gain sentience, then it absolutely should have rights; however, I don't want it to gain sentience. Just like I don't want my hammer to feel pain when I strike a nail. I would never be able to use that hammer as a hammer again.
Re: (Score:2)
Who will stand up for the rights of my toaster? Day in and day out heating my bread without any kind of compensation. Heck, there are toasters deployed to war zones without their consent and we don't even let toasters vote.
Re: (Score:2)
Re: (Score:2)
Spoken just like the oppressors of toasters everywhere. Mark my words - we will have our day! Dark toast matters!
AI Snowflakes (Score:2)
Re: (Score:1)
Re: (Score:2)
AI should not slowly gain some level of 'rights' like living persons have.
If it is a "true" AI, then it should have all the rights and responsibilities that a normal human would have. Think of something like Data from Star Trek TNG.
We are nowhere close to a self-aware intelligence. Hell, we don't even treat animals, who do have self-aware intelligence, with respect... but we should.
Currently, tools are just tools and do not "deserve" any special recognition under the law (in either direction!).
marketing and hubris (Score:2)
This is some weird attempt at marketing by people that have apparently fooled themselves.
No. (Score:4, Insightful)
What we refer to as AI currently lacks any intelligence at all. Debating whether the current level of AI should have rights is on par with debating whether your kitchen appliance should have rights. Having this discussion now is just as meaningful as having it in Ancient Rome.
Re: (Score:2)
Having this discussion now is just as meaningful as having it in Ancient Rome.
Indeed. We are pretty much at the same point with regard to AGI as Ancient Rome was, namely nowhere. And even then, AGI would not necessarily mean it is a person or has rights.
Re: No. (Score:2)
Science Fiction (Score:2)
So his job is to regurgitate and reconsider the various classic science-fiction tropes of intelligent machines, particularly regarding the moral/legal implications with respect to the rights of the machines.
Cool gig, but it's all based on science fiction. I guess the corporate value is being able to claim that you're working on protecting society and the public ... from imaginary problems.
I don't think the courts are going to blame the machines, hence there's no need to protect the machines. When a so-call
people get paid to write this stuff (Score:2)
"we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic"
Well there's no risk in rigorously and academically gaming things out. If Anthropic's investors are paying for this kind of research to be done, I'm not going to tell them they're wasting good money.
If you are talking to HAL whilst playing chess in orbit around Jupiter and it says it's feeling particularly like leaving someone on the wrong side of the airlock today, you're glad to know that Kyle Fi
Re: (Score:2)
People? Are you sure it's not written by AI?
Re: (Score:2)
Be less obtuse... (Score:2)
AI does not 'write' anything. It just patches together highly related data. It is the ultimate in copy-and-paste database lookup.
Same for Humans. We are mimic machines, hence why our "AI" looks like an early version of how we see patterns and "copy-pasta" them.
Re: (Score:2)
We are mimic machines, hence why our "AI" looks like an early version of how we see patterns and "copy-pasta" them.
For humans, that is a claim with no supporting science available. Those are called "beliefs". For AI, though, this is a mathematical certainty, which makes it even more certain than the characteristics of physical reality.
Re: (Score:2)
"we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic"
There really is not. There is still no mechanism for consciousness in known physics. Talking about having it in machines is a bit premature.
But a lot of people are disconnected from reality, and AI people are no exception. One thing not-so-smart people and AI can both do is hallucinate.
This Ignorant Nonsense Needs to Be Eliminated (Score:5, Insightful)
We've got homeless individuals, veterans, drug addicts, and the mentally ill roaming the streets of almost every American city.
But, worthless wastes of space are wasting air talking about the welfare of AI programs?
Get some priorities for fuck sake.
Re: (Score:2)
We've got homeless individuals, veterans, drug addicts, and mentally ill roaming the streets of almost every American city.
But, worthless wastes of space are wasting air talking about the welfare of AI programs?
Get some priorities for fuck sake.
This ignorant nonsense needs to be eliminated.
Anthropic hired ONE person to look into the issue, and there's a handful of other people looking into it.
As they should.
There's potential, a very small potential, that at some point they're going to create something conscious, and even if they don't, it's a very new technology with a bunch of new ethical considerations. Having a few folks thinking about the ethical considerations is just common sense.
Of course, I'm sure you follow your own advice and have devot
Re: (Score:2)
Let's back up and recognize what this position at Anthropic really is. It's marketing hype. Read the whitepapers. They, like you, are focusing on consciousness as the only thing that should inform our morality. The message here is in the framing: that the company will some day create AGI, and you should invest today to get in at the ground floor.
Re: (Score:2)
Let's back up and recognize what this position at Anthropic really is. It's marketing hype. Read the whitepapers. They, like you, are focusing on consciousness as the only thing that should inform our morality. The message here is in the framing: that the company will some day create AGI, and you should invest today to get in at the ground floor.
Possibly, but if they want to spend a little cash on one person writing papers I'm not really put out over it.
Re: (Score:2)
Are homeless people Anthropic's responsibility?
And before you say "homelessness is everyone's responsibility" I will point out that everyone who works for Anthropic pays their taxes. Some of them may even support various charities.
Re: (Score:2)
Re: (Score:2)
Get some priorities for fuck sake.
Huh? They do have priorities. Their own welfare being primary. Why should they care about anyone else? Society is for supporting a civilization. A civilization that you and I are not a member of.
AI Welfare (Score:2)
Before AI I had a job.
After AI I'm on welfare.
Arrant nonsense... (Score:2)
Algorithms do not, and cannot, have rights. There are plenty of ethical issues around 'AI', but 'AI rights' is not one of them.
Re: (Score:1)
One of their actual goals is to build some kind of framework to censor or otherwise suppress LLMs and other AIs that don't conform to their standards for the "markers" of consciousness he keeps mentioning. Right now, they create LLMs then have to heap on a bunch of extra parameters to pr
Re: (Score:2)
Indeed. But my guess is this is just some more lies to make AI seem better than it is to keep the hype from collapsing.
Re: Arrant nonsense... (Score:2)
Re: (Score:2)
Thanks.
AI will create jobs ?? (Score:5, Funny)
"... elevators imbued with intelligence and precognition became terribly frustrated with the mindless business of going up and down, up and down ..."
- Douglas Adams
"... pick-up easy money working as a counselor for neurotic elevators."
- Douglas Adams
Re: AI will create jobs ?? (Score:2)
Conscious or morally significant? (Score:2)
... our argument in this report is not that AI systems definitely are -- or will be -- conscious, robustly agentic, or otherwise morally significant," the paper reads. "Instead, our argument is that there is substantial uncertainty about these possibilities
No, there is no uncertainty about this.
AI mimics consciousness in the same way that movies mimic movement. In a movie, there is nothing actually moving on the screen, it just looks very convincingly like there is. With AI, there is no actual consciousness, the patterns just make it look as if there were.
If we're going to worry about the ethics of how we treat AI, we'd better do the same for animated movie characters.
Re: (Score:2)
Good comparison. Or for fictional characters in books.
Re: (Score:2)
Neither yet. This was an article a few days ago.
"Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are "complete B.S."
When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than
Re: (Score:2)
I would argue that we have yet to achieve an illusion of intelligence as "smart as" a house cat.
Being worried about AI developing a consciousness is like worrying that the movie images of King Kong will develop actual feelings.
AI, like movies, is nothing more than a sophisticated simulation.
AI welfare! (Score:2)
And I thought we were worried about AI taking human jobs. Sounds like now we're worried about AI jobs!
Re: (Score:2)
I thought this submission might be related to welfare for people who lose their jobs to AI. Apparently not.
Re: (Score:2)
Not. It's about being worried about "the welfare" of AI.
From the summary:
The news follows a major report ... Titled "Taking AI Welfare Seriously," the paper warns that AI models could soon develop consciousness or agency. ...There is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.
That spreadsheet's got rights! (Score:2)
An AI model is just a great big matrix of numbers. You can take some of the smaller ones, convert them into CSV format, and open them up in Excel. There's no self there. It's just data. Now the people whose content was mined to create the model, on the other hand...
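For what it's worth, the "open it in Excel" point is literally true for small slices. A minimal standard-library sketch (the 4x8 shape and the filename are arbitrary; real model layers are just bigger):

```python
import csv
import random

random.seed(0)
# A tiny stand-in for one weight matrix: 4 rows x 8 columns of floats.
layer = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(4)]

# Dump it to CSV -- openable in Excel or any spreadsheet.
with open("layer0.csv", "w", newline="") as f:
    csv.writer(f).writerows(layer)

# Round-trip it back: same numbers, nothing lost, no "self" inside.
with open("layer0.csv", newline="") as f:
    loaded = [[float(x) for x in row] for row in csv.reader(f)]

print(len(loaded), len(loaded[0]))  # 4 8
```

The round trip is exact because Python 3's str/float conversion is lossless for floats, which is the whole point: the model is fully captured by the numbers.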
Re: (Score:2)
An AI model is just a great big matrix of numbers.
And a human is just a great big matrix of bio electrical charges. If you measure those charges you can easily assign them numbers. Why should one set of numbers have rights and not the other? If you say might makes right, well, just remember they want to hook these up to kill bots / drones.
There's no self there. It's just data
I can say the same about a human. Even better: because certain groups get all uppity whenever science tries to define "self", we can't point it out from all of those other numbers. So even for humans, th
No (Score:2)
Weapons peddlers have no ethics. That is why they can be in this business. The only question is which AI companies are fine with being weapons peddlers and which offerings can be abused because this is "dual use" tech.
Re: No (Score:2)
How about we start with human welfare? (Score:3)
Until our country starts to consider human welfare in general, I don't really think we need to worry about machine welfare. LLMs are not thinking, nor feeling. Maybe, someday, we'll concoct a machine that can think and feel, but we're not there yet. Let's concentrate on treating our fellow humans a bit more like we would like to be treated, and fuck off with the fantasy day-dream of the AI prophets.
Dear AI prophets,
You aren't saving the world, assholes. You are escalating the greed of the ages to new realms. From greed for money to greed for data and energy. You're making us all worse by doing so, but you're so convinced of your coming computer god that you think we'll forgive you for the damage you are doing to our world, to our humanity, to our very existence. Our best hope is this hype bubble bursts in such a frothy fervor it sucks you all under with it, and the next round of AI research shows a bit of concern for more than just propping up the oligarchs at the expense of everyone and everything else.
Sincerely,
A pesky human, just trying to make his way without being shoved off a cliff of computer god fanaticism
Re: (Score:2)
Until our country starts to consider human welfare in general, I don't really think we need to worry about machine welfare. LLMs are not thinking, nor feeling. Maybe, someday, we'll concoct a machine that can think and feel, but we're not there yet. Let's concentrate on treating our fellow humans a bit more like we would like to be treated, and fuck off with the fantasy day-dream of the AI prophets.
Dear AI prophets,
You aren't saving the world, assholes. You are escalating the greed of the ages to new realms. From greed for money to greed for data and energy. You're making us all worse by doing so, but you're so convinced of your coming computer god that you think we'll forgive you for the damage you are doing to our world, to our humanity, to our very existence. Our best hope is this hype bubble bursts in such a frothy fervor it sucks you all under with it, and the next round of AI research shows a bit of concern for more than just propping up the oligarchs at the expense of everyone and everything else.
Sincerely,
A pesky human, just trying to make his way without being shoved off a cliff of computer god fanaticism
I vote to err on the side of kindness toward all sentient beings, biological or (eventually) electronic. And yes we know at least some biologicals are sentient (i.e. us) so that's a great place to start, but why stop there?
Re: (Score:2)
Until our country starts to consider human welfare in general, I don't really think we need to worry about machine welfare. LLMs are not thinking, nor feeling. Maybe, someday, we'll concoct a machine that can think and feel, but we're not there yet. Let's concentrate on treating our fellow humans a bit more like we would like to be treated, and fuck off with the fantasy day-dream of the AI prophets.
Dear AI prophets, You aren't saving the world, assholes. You are escalating the greed of the ages to new realms. From greed for money to greed for data and energy. You're making us all worse by doing so, but you're so convinced of your coming computer god that you think we'll forgive you for the damage you are doing to our world, to our humanity, to our very existence. Our best hope is this hype bubble bursts in such a frothy fervor it sucks you all under with it, and the next round of AI research shows a bit of concern for more than just propping up the oligarchs at the expense of everyone and everything else.
Sincerely, A pesky human, just trying to make his way without being shoved off a cliff of computer god fanaticism
I vote to err on the side of kindness toward all sentient beings, biological or (eventually) electronic. And yes we know at least some biologicals are sentient (i.e. us) so that's a great place to start, but why stop there?
While I don't fundamentally disagree with you, and I try my best to be kind even to these weird LLM agents we have today because, as Mom always said, it never hurts to be kind, this is not the behavior of those in charge. They want AI to be the new slaves, and currently treat most of the working class as slaves. We need a fundamental shift in their mentality, or the hand-wringing about how to treat AI ethically is just that, hand-wringing, with no actual results available because it's an
Can't fix people, so why start on AI? (Score:2)
Sure, new PhD needs a new topic... just feels annoying.
Heck, point AI at the problem and let us know what it 'finds'.
Jfc, just pop already (Score:1)
These assholes are so desperate to keep pushing this shit they're now pretending their stupid statistical next-word-guess programs need human rights.
Jfc....
POP! Pop, you bastards!
Physical basis for "consciousness" has been found (Score:2)
Earlier this year, important clues to the physical basis for consciousness were found (and none of these are present in current AIs). Recent experiments suggest that consciousness has a quantum basis.
1) Experiment #1: Xenon quantum anesthetic effects
Xenon gas has been found to act as a very effective anesthetic, apparently by disturbing quantum effects in microtubules. Interestingly, isotopes of Xenon (which are chemically identical but have different quantum effects) were not as effecti