AI

California Weakens Bill To Prevent AI Disasters Before Final Vote (techcrunch.com)

An anonymous reader shares a report: California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. California lawmakers bent slightly to that pressure on Thursday, adding several amendments suggested by AI firm Anthropic and other opponents. The bill passed through California's Appropriations Committee that day, a major step toward becoming law, with several key changes, Senator Wiener's office told TechCrunch.

[...] SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting a company to cease a certain operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.


Comments Filter:
  • by groobly ( 6155920 ) on Friday August 16, 2024 @12:49PM (#64711664)

    It seems that the legislature is full of avid readers of dystopic science fiction. In fact, they like it so much that they are writing science fiction into their bills.

    • All the bill will accomplish is moving research and development jobs to Texas.

      Chevron, Oracle, Tesla, SpaceX, and many other companies have left California or announced they will leave. Tens of thousands of jobs are going with them.

      But there is a silver lining: Traffic on 101 is lighter than it's been since peak Covid.

  • I think people should worry a lot more about AI turning you down for a mortgage or improperly managing your 401(k). It's not about sentience or an AI overloading a nuclear reactor; those processes will always have oversight. It's more about mundane business rules being interpreted and actioned incorrectly.
    • by dvice ( 6309704 )

      I think the real first problem is not specifically AI, but automation and other improvements in general, which will replace so many human workers that unemployment becomes a real problem. It is not even about inventing new tech; there are plenty of jobs that can be replaced with technology that is over 100 years old, like tractors. Even if everything goes as planned and humanity continues to improve peacefully, this will be the end result.

    • by Tony Isaac ( 1301187 ) on Friday August 16, 2024 @01:17PM (#64711784) Homepage

      Fortunately for borrowers, the CFPB requires lenders to explain exactly why they denied a loan. They can't just say "because AI said not to lend you money." They *can* say something like, "Your debt-to-income ratio is X and our threshold is Y." If AI comes up with explanations like that, then it's probably OK.

      With regard to 401(k) management, I'd trust an AI far sooner than I trust a human money manager. Humans have a notoriously bad track record, with returns worse than simple index funds. https://www.nerdwallet.com/art... [nerdwallet.com]

      I think similar principles apply to many areas where AI is or will be involved. Either it's an area where oversight already exists, or it's an area where AI could actually do a better job than humans. (A rough sketch of what an explainable denial could look like is below.)
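      To make the "explanation like that" idea concrete, here's a rough sketch of the kind of rule-plus-reason a lender's system could emit. Everything in it (the function name, the 43% cutoff, the numbers) is made up for illustration, not anything the CFPB actually prescribes:

      # Hypothetical sketch of a loan decision that always carries its own explanation.
      DTI_THRESHOLD = 0.43  # illustrative cutoff, not a regulatory figure

      def decide_loan(monthly_debt: float, monthly_income: float) -> tuple[bool, str]:
          """Return (approved, reason) so every denial comes with a stated reason."""
          dti = monthly_debt / monthly_income
          if dti > DTI_THRESHOLD:
              return False, (f"Denied: your debt-to-income ratio is {dti:.0%} "
                             f"and our threshold is {DTI_THRESHOLD:.0%}.")
          return True, f"Approved: your debt-to-income ratio is {dti:.0%}."

      print(decide_loan(2500, 5000))
      # -> (False, 'Denied: your debt-to-income ratio is 50% and our threshold is 43%.')

      Whether the decision comes from a hand-written rule or a model, the point is the same: the output has to map back to a reason the borrower can actually read.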

      • An AI that could find some deeper pattern to use for the purpose of decision making should be able to explain it. If it can't, there's no reason why the people employing the AI should trust that it's actually doing anything useful or beneficial to the bottom line as opposed to producing output no better than some random crank off the street could spit out for them.

        Maybe finance is so rife with bullshit that no one cares to check whether it's any good, but I can't imagine too many other fields where anyth
  • The injunctive petitions will come to be generated by AI, leading to lawfare arms races.
  • by Pseudonymous Powers ( 4097097 ) on Friday August 16, 2024 @12:58PM (#64711700)
    Ironic if it turns out that the principles of a company named Anthropic make it impossible for humans to exist in our universe.
  • by dmay34 ( 6770232 ) on Friday August 16, 2024 @01:03PM (#64711724)

    "We are going to sue you for something you haven't done yet" probably wouldn't have stood up very well in court anyway.

  • . . . only, this is a technical matter being discussed. Perhaps we should listen to the technorati on this subject - I suspect they know more about it than, say, Senator Stevens. At least, they don't think it's just a bunch of tubes.

    Yes, let's leave this up to the people technically qualified to understand the issues. That would be . . . me, come to think of it.

    • by HiThere ( 15173 )

      AFAICT, nobody has a decent handle on what an AI incident causing major (but not total) destruction would look like, how bad it would be, or how likely it is. Or on how to prevent it without killing AI. And you can't just kill AI, because if you don't build it, someone else will, it promises huge advantages, and you can't restrict an AI to just one small area. (Well, at least thought experiments indicate that you probably can't. Of course, those are pretty unreliable.)

  • TX and AZ will thank you for all the business migration. I don't think CA knows it's a state and not a country.
  • I, for one, welcome the cage fight between our Artificial Intelligence and Natural Stupidity overlords!

    It's gonna be like the ultimate all-you-can-eat "Chinese" food court deal... you get full but feel hungry after an hour!

  • When "significant opposition from many parties" appears, we all know it is a euphemism for ""significant opposition from lobbyist".
    Can someone remind me why campaign donations are tax deductible ?

  • If someone put an AI in control of something and the AI killed someone, wouldn't that be at the very least negligent homicide?

    If an AI lost someone half a billion dollars, and it wasn't due to reasonably obvious uncertainties (e.g. investment advice), wouldn't that party sue everyone involved?

    I suspect in the case of economic losses there's probably *some* need to clarify duties and responsibilities, but that's probably true of software and online services more generally.

  • Just like always.
  • Oh wait, disregard that potential existential threat, because there's also the potential for sweet lucre to be made for our capitalist overlords!

  • actual headline from a world not accustomed to doublespeak: california betrays humanity again!
