
OpenAI Employees Want Protections To Speak Out on 'Serious Risks' of AI (bloomberg.com)

A group of current and former employees from OpenAI and Google DeepMind are calling for protection from retaliation for sharing concerns about the "serious risks" of the technologies these and other companies are building. From a report: "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," according to a public letter, which was signed by 13 people who've worked at the companies, seven of whom included their names. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues."

In recent weeks, OpenAI has faced controversy about its approach to safeguarding artificial intelligence after dissolving one of its most high-profile safety teams and being hit by a series of staff departures. OpenAI employees have also raised concerns that staffers were asked to sign nondisparagement agreements tied to their shares in the company, potentially causing them to lose out on lucrative equity deals if they speak out against the AI startup. After some pushback, OpenAI said it would release past employees from the agreements.

  • These folks would actually trust a comment saying 'no, there won't be retaliation!' Really?
    • by narcc ( 412956 )

      There will be no retaliation because they're being paid to pretend to blow the whistle. It's just a lame marketing stunt.

      The only people in any danger from AI are investors who get in too late or hang on too long.

      • Yes, of course, I have to admit I hadn't thought about that; I had the fantasy that there might be some integrity left in the world.
        • by narcc ( 412956 )

          Hang on to that as long as you can.

          • Trust me, it's already gone - it just sometimes pops up in unusual instances. Decades of working in corporate America and public safety have stripped me of any illusions. I do have a few blind spots, though.
      • It's just a lame marketing stunt.

        Yes it is. "We have suckered people into giving us millions upon millions of dollars, and we want enough regulation to not only make those suckers believe their losses were reasonable, but also to convince them to give us even more money until the house of cards falls down."

    • by DarkOx ( 621550 )

      There *SHOULD* be retaliation. None of this is based on anything; it's literally just idle speculation and science fiction.

      It's one thing if your job is product safety and you are actively searching for risks and potential harms associated with the company's products. It's another if, in the course of your actual duties, you spot a potential safety hazard, operative word being 'spot,' and report it. It's still another if, after reporting it internally, you are ignored and you become a whistleblower or something.

      • One would presume that a whistleblower, when blowing the whistle, provides evidence, yes? Other than that, it's just gossip, supposition, and baseless speculation.
    • by sheph ( 955019 )
      Even if such protections existed, I don't think it's too hard to imagine that if someone gets too vocal, they'll get the Boeing treatment.
  • If you have to (Score:3, Insightful)

    by smooth wombat ( 796938 ) on Tuesday June 04, 2024 @11:52AM (#64522471) Journal

    Strong-arm employees who leave into not saying bad things about you, what are you trying to hide?

  • I would say the engineers are starting to worry about how much disregard for copyright is going around; that darn engineering ethics course was easy, but now, not so much.
    • I would say the engineers are starting to worry about how much disregard for copyright is going around; that darn engineering ethics course was easy, but now, not so much.

      Look at the way copyright is viewed around Slashdot. I have my doubts this is the root of their ethical concern. At heart, these engineers would be techies, and techies seem to abhor copyright as it exists.

      I just have this funny feeling it's not copyright that's giving them the heebie-jeebies.

  • start a union! (Score:5, Insightful)

    by Joe_Dragon ( 2206452 ) on Tuesday June 04, 2024 @12:05PM (#64522509)

    start a union!

  • Do people think we'd believe them for a second??? How stupid do you have to be!?
  • Well DUH!

    I think we have all seen the Terminator movies, The Matrix, I, Robot, WarGames, Colossus: The Forbin Project, and 2001: A Space Odyssey. We all know how this "AI" thing ends.

    There has been enough information coming out about people breaking out of the "security sandboxes" around the models, and about how the models are being molded to specific viewpoints. And lest we forget how the Microsoft one demanded the nuclear launch codes. If that didn't throw a big red flag over all of this, nothing will.

    • I think we have all seen the Terminator movies, The Matrix, I, Robot, WarGames, Colossus: The Forbin Project, and 2001: A Space Odyssey. We all know how this "AI" thing ends.

      You've watched WAY too much science fiction, and apparently know far too little about AI. Nothing in any of those movies is possible, since we have nothing even remotely resembling sentient AI, and likely never will. We have statistical modeling/pattern matching, and nothing more. The statistical models are only even halfway decent when trained to do a VERY specific task that lends itself to statistics. They completely fall flat when having to interpret the statistics, because the statistical models are not...

      • Generative AI, such as ChatGPT and friends, doesn't create anything. It predicts the next word or pixel based on the sum of the input into a neural net. (A toy sketch of next-word prediction follows this comment.) There is no creativity. Thus, by definition, it can be no more "dangerous" than the training data.

        The people screeching about the "dangers of AI!!!!" are basically the DEI department of the AI world. They add nothing of value but get paid too much to leech off the companies that were doing just fine without them. They have to keep everything at emer...
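
        A minimal sketch, purely illustrative, of what "predicts the next word based on the sum of the input" means: a toy bigram model that samples the next word from observed frequencies alone. The corpus here is made up, and real generative systems use neural networks over much longer contexts, but the basic loop of "predict the next token from statistics of the training data" is the same idea:

            import random
            from collections import Counter, defaultdict

            corpus = "the cat sat on the mat and the cat ate the rat".split()

            # Count how often each word follows each other word in the training data.
            successors = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                successors[prev][nxt] += 1

            def next_word(word):
                counts = successors[word]
                if not counts:
                    return None  # no observed successor; stop generating
                words, weights = zip(*counts.items())
                # Sample in proportion to observed frequency; no understanding involved.
                return random.choices(words, weights=weights)[0]

            # Generate a short continuation from a seed word.
            word, out = "the", ["the"]
            for _ in range(8):
                word = next_word(word)
                if word is None:
                    break
                out.append(word)
            print(" ".join(out))  # e.g. "the cat sat on the cat ate the rat"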

      • by DarkOx ( 621550 )

        Right, the risk of ongoing mayhem is virtually nil. None of the AI products that exist today has anything near the capability to become an active and competitive threat actor.

        Which is not to say it can't or won't do things we did not expect and don't like; this, though, is true of every form of mechanization and automation, especially anything that processes feedback. Bad inputs, poor operator choices, and failures of imagination lead to minor calamities. We put the big red 'Emergency Power Off' button o...

    • Any OpenAI researcher worried about LessWrong-style existential risk should not be stopped by a confidentiality agreement. It's clearly preferable to get sued than to be turned into paperclips.

  • The corporations have decided this is safe. Why? Number one: massive profit potential. And that, right there, is all that is absolutely required for any corporation to move forward with something. On top of that, it seems like a great way to replace humans (fallible, weak-willed, need things) with machines (PERFECT LITTLE AUTOMATONS AT LAST!). Which raises the potential to cut costs, which, again, leads to profit.

    The corporations have deemed this not only safe, but absolutely *VITAL* to their bottom line con...

  • by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Tuesday June 04, 2024 @12:55PM (#64522631) Journal

    The hype, and fear, over current AI is wildly overblown. Perhaps there are too many stories of computers magically acquiring sentience, as in the Terminator movies. I suspect we've grossly underestimated what it takes to achieve real, general intelligence.

    Being able to play chess better than any person, ever, is not enough, not even close. All that chess computers have really shown us is that chess is amenable to brute force computation (a toy sketch of such a search follows this comment). Ought to have known that all along. Some scientists really hoped that the ability to play chess would translate into, or perhaps derive from, general intelligence.

    Driving a car is another task that's been touted as AI. Not only is the way they do it not intelligent, they can't even do it reliably. They don't understand that they are driving a car; they only respond to visual stimuli. They go wrong in ways that a human would never go wrong.

    And finally, these LLMs. LLMs are not in the least intelligent. They merely bandy words; they don't understand them.

    The way OpenAI is behaving is so typical of these kinds of businesses. Exaggerate the capabilities of their products to the point of lying, as Tesla did with the self-driving they sell. Try to silence employees, to keep those lies from being exposed. As to the "serious risks", these are not risks that AI is going to get loose, no, these are risks that potential customers are going to believe the hype, and get hurt when the stuff can't perform at the level of expectations the sellers sold everyone on. Again, Tesla's self-driving is a case in point. When Tesla's AI misses, sometimes people die.
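
    A minimal sketch of the "brute force" point above: exhaustive game-tree search (negamax) solving a trivially small game. Single-pile Nim stands in for chess here only because its rules fit in a few lines; the pile size and move set are chosen for illustration. Chess engines apply the same search idea, plus pruning and heuristic evaluation, to a vastly larger tree:

        from functools import lru_cache

        # Single-pile Nim: players alternate taking 1-3 stones; whoever
        # takes the last stone wins. Small enough to search exhaustively.

        @lru_cache(maxsize=None)
        def negamax(stones):
            """Return +1 if the player to move wins with perfect play, else -1."""
            if stones == 0:
                return -1  # the previous player took the last stone and won
            # Try every legal move; our score is the negation of the opponent's.
            return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)

        def best_move(stones):
            """Pick the move that leaves the opponent the worst position."""
            moves = [t for t in (1, 2, 3) if t <= stones]
            return max(moves, key=lambda t: -negamax(stones - t))

        print(negamax(12))    # -1: pile sizes that are multiples of 4 are lost
        print(best_move(10))  # 2: take two, leaving the opponent a multiple of 4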

    • The way OpenAI is behaving is so typical of these kinds of businesses. Exaggerate the capabilities of their products to the point of lying, as Tesla did with the self-driving they sell. Try to silence employees, to keep those lies from being exposed. As to the "serious risks", these are not risks that AI is going to get loose, no, these are risks that potential customers are going to believe the hype, and get hurt when the stuff can't perform at the level of expectations the sellers sold everyone on. Again, Tesla's self-driving is a case in point. When Tesla's AI misses, sometimes people die.

      I think this is the fear most of us who have been paying attention would have. Someone, somewhere, will be sold a load of absolute dreck, put one of these unintelligent agents in charge of some bit of critical something, and bad things will ensue. AI won't kill us in its current form. It may be used by misinformed people to kill us, though. And honestly? With as dumb as some folks in the business sector are when they see profit potential just sitting there? I'd say there are valid reasons for concern.


    • Comment removed based on user account deletion
      • by zlives ( 2009072 )

        Only if you force it to really read all your content, which will never happen. Just ask my Copilot-enabled Windows devices; we are completely protected by Microsoft because they have made security their priority again.

        • Only if you force it to really read all your content, which will never happen. Just ask my Copilot-enabled Windows devices; we are completely protected by Microsoft because they have made security their priority again.

          Well, this here is a WTF wrapped in a WTF wrapped in an "Is this written by Copilot?" fog. I mean, just "protected by Microsoft because they have made security their priority again" is enough of a gag inducer that I needed to hold my wastebasket for a few moments, thinking I was gonna hurl my lunch.

          Whatever it is you're on? Could you pass some along? I'd like to feel that level of disconnect from reality for a bit.

    • by ljw1004 ( 764174 )

      The hype, and fear, over current AI is wildly overblown... I suspect we've grossly underestimated what it takes to achieve real, general intelligence.

      Your argument boils down to: (1) this is what the OpenAI people are scared of, (2) it is a stupid fear, (3) therefore the OpenAI people are stupid.

      I totally agree that the fear of sentient Terminator-style robots is stupid. But doesn't it seem likely that they've thought of other *sensible* fears that you haven't listed?

      For instance, the EU's stated concerns about so-called AI are that it will lead to violations of human rights, e.g., as we've seen in machine-driven sentencing in the judicial system, or polic...

    • by ljw1004 ( 764174 )

      Here are some other example "serious risks" that employees may wish to share:
      * "The self-driving AI technology is so seriously bad and we believe it should not be allowed onto the streets in its current form"
      * "The safety failures in our self-driving AI systems have been wrongly hidden from investigators and regulators"
      * "We see our company trying to sell autonomous flying solutions but we know the failure modes will be too severe"
      * "We have evidence that our AI self-driving software is vulnerable to e

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday June 04, 2024 @01:09PM (#64522691) Homepage Journal

    If they want to blow the whistle on actual malfeasance then they deserve protection for that.

    If their goal is to spread FUD that harms their employer, there is no reason their employer should support that. Literally none.

    There is a gray area in between. We ALL live there. Welcome to capitalism!

  • We have no idea what the "serious risks" are and we won't know until the lights go out or the internet stops or the water stops flowing.

    We don't know the potential or scope for harm, but I'm sure we're all going to learn about how easy it is to use AI to fuck shit up over the next 5 years or so.

    Maybe some asshole uses an AI attack against Wall Street and *boom*, the economy hits a wall. Or it's used to corrupt supply lines and manufacturing processes, medical records, test results, QA records, unit tests...

  • Except that AI can, at this time, replace customer service and low-level bureaucrats? Yes, that is a problem. But it is a pretty obvious one.
