
Comment Telling people how to live their lives (Score 1) 79

We've come full circle to the tech community deciding what's proper for our neighbors. ChatGPT is free to decide not to include adult content, and celebrity/CSAM material should absolutely be illegal, but "The proper use of AI is as a tool, not as a friend, lover or therapist, and especially not as an addiction" is how we get the government regulating how adults use the tools at their disposal.

Aside from CSAM and defamatory material, we don't have the right to decide what's proper for someone else.

Eventually, peer-to-peer training (Petals built on Hivemind, etc.) will lead the way.

Comment Re:Double standard (Score 5, Insightful) 38

The problem here is that developers can take responsibility for their actions while AI cannot. Humans make mistakes, and that's OK; best practice is not to simply can employees for messing up. Once is a mistake. Twice is an HR event. When someone does something dumb, we forgive, but we also insist that meaningful steps be taken to prevent the problem in the future. AI can't really take those steps because AI can't be accountable for "don't do it again." Taking down production because you dropped a table once is forgivable. Taking it down twice for the same reason is a different matter.

The developer can be held accountable. And if HR fails to hold them to account, HR is accountable. If HR isn't held accountable, leadership is. If leadership isn't held accountable, the board is. And if the board isn't held accountable, the stockholders have some hard decisions to make. If they choose not to make them, then it wasn't really that big a deal, was it?

But with an AI, the options are "we stop using AI" or "we live with the result."
