California Weakens Bill To Prevent AI Disasters Before Final Vote (techcrunch.com) 36
An anonymous reader shares a report: California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. California lawmakers bent slightly to that pressure Thursday, adding in several amendments suggested by AI firm Anthropic and other opponents. On Thursday the bill passed through California's Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener's office told TechCrunch.
[...] SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting a company to cease a certain operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.
SF (Score:3)
It seems the legislators are avid readers of dystopian science fiction. In fact, they like it so much they are writing science fiction into their bills.
Re: (Score:2)
All the bill will accomplish is moving research and development jobs to Texas.
Chevron, Oracle, Tesla, SpaceX, and many other companies have left California or announced they will leave. Tens of thousands of jobs are going with them.
But there is a silver lining: Traffic on 101 is lighter than it's been since peak Covid.
Do people think The Terminator is non-fiction? (Score:2, Insightful)
Re: (Score:2)
I think the real first problem is not specifically AI, but automation and other improvements in general, which will replace so many human workers that unemployment becomes a real problem. It is not even about inventing new tech; there are plenty of jobs that can be replaced with technology that is over 100 years old, like tractors. Even if everything goes as planned and humanity continues to improve peacefully, this will be the end result.
Re:Do people think The Terminator is non-fiction? (Score:4, Interesting)
Fortunately for borrowers, the CFPB requires lenders to explain exactly why they denied a loan. They can't just say "because AI said not to lend you money." They *can* say something like, "Your debt-to-income ratio is X and our threshold is Y." If AI comes up with explanations like that, then it's probably OK.
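To make that concrete, here is a minimal sketch of what that kind of explainable denial could look like; the threshold, field names, and wording are hypothetical illustrations, not actual CFPB rules or any real lender's policy:

    # Hypothetical illustration: an automated denial that carries an explicit,
    # human-readable reason (as adverse-action notices require), rather than
    # an unexplained model score. Thresholds and field names are made up.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        approved: bool
        reason: str | None = None

    MAX_DTI = 0.43  # example debt-to-income cutoff, not a real lender policy

    def underwrite(monthly_debt: float, monthly_income: float) -> Decision:
        dti = monthly_debt / monthly_income
        if dti > MAX_DTI:
            return Decision(
                approved=False,
                reason=f"Debt-to-income ratio is {dti:.0%}; our threshold is {MAX_DTI:.0%}.",
            )
        return Decision(approved=True)

    print(underwrite(monthly_debt=2600, monthly_income=5000).reason)
    # -> Debt-to-income ratio is 52%; our threshold is 43%.

The point is only that the decision object carries the reason alongside the outcome, so the lender can state the basis for the denial instead of pointing at an opaque model.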
With regard to 401(k) management, I'd trust an AI far sooner than I trust a human money manager. Humans have a notoriously bad track record, with returns worse than simple index funds. https://www.nerdwallet.com/art... [nerdwallet.com]
I think similar principles apply to many areas where AI is or will be involved. Either it's an area where oversight already exists, or it's an area where AI could actually do a better job than humans.
Re: (Score:1)
Maybe finance is so rife with bullshit that no one cares to check whether it's any good, but I can't imagine too many other fields where anyth
LOL, here we go. (Score:2)
Drake's Inequality (Score:3)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
I don't know about vague. They're talking about developing an artificial mind. They're taking concrete steps to advance that goal. Those steps have recently shown impressive progress. There is a massive economic effort underway to accelerate that progress. There's about a century now of widespread debate on what the existence of artificial minds would entail, by thinkers both unserious and serious. Many, perhaps most, of the serious thinkers predict disaster.
But you need to know the NAME of the AI th
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: Drake's Inequality (Score:2)
Well, according to groupthink, they can overthrow American elections and instigate race riots...
Re: (Score:2)
Don't worry, the rogue AI will rename them Misanthropic.
That... seems reasonable. (Score:3)
"We are going to sue you for something you haven't done yet" probably wouldn't have stood up very well in court anyway.
Re: (Score:2)
Re: (Score:2)
AFAICT, nobody has a decent handle on what an AI incident causing major (but not total) destruction would look like, how bad it would be, or how likely it is. Or on how to prevent it without killing AI development. And you can't just stop, because if you don't build it, someone else will, expecting huge advantages from it, and you can't restrict an AI to just one small area. (Well, at least thought experiments indicate that you probably can't. Of course, those are pretty unreliable.)
Re: (Score:2)
So more reason to move out of CA? (Score:2)
AI vs NS? (Score:2)
It's gonna be like the ultimate all-you-can-eat "Chinese" food court deal... you get full but feel hungry after an hour!
Let's call a spade, a spade. (Score:2)
When "significant opposition from many parties" appears, we all know it is a euphemism for ""significant opposition from lobbyist".
Can someone remind me why campaign donations are tax deductible?
Re: (Score:2)
Hypothetical question, but who is responsible for programming the 3 laws of robotics into a robot?
Do we actually need a new law for that? (Score:2)
If someone put an AI in control of something and the AI killed someone, wouldn't that be at the very least negligent homicide?
If AI lost someone half a billion, and it wasn't due to reasonably obvious uncertainties (e.g. investment advice), wouldn't that party sue everyone involved?
I suspect in the case of economic losses there's probably *some* need to clarify duties and responsibilities, but that's probably true of software and online services more generally.
Humans being stupid (Score:2)
What's that, an existential threat? (Score:3)
Oh wait, disregard that potential existential threat, because there's also the potential for sweet lucre to be made for our capitalist overlords!
california betrays humanity again! (Score:1)