
OpenAI CEO Suggests International Agency Like UN's Nuclear Watchdog Could Oversee AI

OpenAI CEO Sam Altman warned during a visit to the United Arab Emirates that AI poses an "existential risk" to humanity and suggested the establishment of an international agency, similar to the International Atomic Energy Agency (IAEA), to oversee AI. The Associated Press reports: "We face serious risk. We face existential risk," said Altman, 38. "The challenge that the world has is how we're going to manage those risks and make sure we still get to enjoy those tremendous benefits. No one wants to destroy the world." Altman made a point to reference the IAEA, the United Nations nuclear watchdog, as an example of how the world came together to oversee nuclear power. That agency was created in the years after the U.S. dropped atomic bombs on Japan at the end of World War II.

"Let's make sure we come together as a globe -- and I hope this place can play a real role in this," Altman said. "We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.' And I think we can do both. "I think in this case, it's a nuanced message 'cause it's saying it's not that dangerous today but it can get dangerous fast. But we can thread that needle."
  • Desperation.. (Score:2, Insightful)

    by jythie ( 914043 )
    Anyone else getting the feeling that companies that depend on people buying into AI are really trying to hype up how dangerous their product is?
    • by taustin ( 171655 )

      "The only bad publicity is your obituary."

    • Re: Desperation.. (Score:4, Insightful)

      by Bobknobber ( 10314401 ) on Wednesday June 07, 2023 @07:03PM (#63584586)

      Basically the same advertising tricks you see from doomsday cults and/or televangelists where they hype up the danger to draw attention and earn money. And unfortunately this is far too common in the tech industry.

      Personally I find it all tiresome. We have been through this phase so many times in history, yet we keep falling for it time and time again.
      Altman is doing this to advertise his product, build a legal moat around it, and maintain a clean PR image. Everything else is theatrics.

      If Altman really cared about transparency he would release GPT-4's training dataset to the public instead of building a walled garden around it. Surely the dataset is clean and contains no copyrighted and/or confidential data, if he cares about regulation that much. And surely the research papers about it have not been framed to hype up the model's abilities, if he truly cares about scientific research.

      • by gweihir ( 88907 )

        Indeed. This is yet another money-grab, and as usual they need to execute before the usual fanbois realize how restricted this product actually is in what it can do.

        • Still, even as incomplete as the tech may be, it can cause a ton of harm to people without proper legal, technical, and cultural oversight.

          Altman and his ilk are using the whole AI-apocalypse angle to distract people from the fact that he and many others are trying to normalize mass data scraping to make commercial products. That alone should be massively problematic for everyone, not just the creative industry. It might sound a bit on the Luddite side, but I just cannot accept LLM tech as is until the issue

    • It might also be a personal effort by people like Altman to secure a lucrative position in such a UN agency.

    • Re:Desperation.. (Score:4, Interesting)

      by narcc ( 412956 ) on Thursday June 08, 2023 @02:48AM (#63585124) Journal

      What's amazing is that it keeps working.

      First, GPT-2 was too dangerous to release [openai.com]. It seems laughable now, but it's still on their website.

      Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

      Then it was GPT-3 that was too dangerous to release [itpro.com] ... before offering us a free trial.

      The doomsday device known as GPT-4 is now available by subscription.

      They don't have anything new to terrify us with at the moment, but that doesn't mean they can't turn the fear dial up to 11.
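
      For the curious, the "trained simply to predict the next word" line quoted above maps to remarkably little code. A minimal sketch, assuming the openly released GPT-2 weights fetched through Hugging Face's transformers library (the "gpt2" model name is the public hub identifier; the prompt is just an illustrative choice):

          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          # Load the same GPT-2 weights once deemed too dangerous to release.
          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")
          model.eval()

          prompt = "Very dangerous technology, let's all put some"
          inputs = tokenizer(prompt, return_tensors="pt")

          with torch.no_grad():
              logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

          # The model's entire output: a score for every candidate next token.
          next_token_id = int(logits[0, -1].argmax())
          print(tokenizer.decode(next_token_id))

      Repeating that single next-token step is all the generation ever is, which is worth keeping in mind each time the "doomsday device" framing comes around again.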

    • It likely has something to do with this. [semianalysis.com]

      Regulation is the best defense of giant monopolists against small contenders, in particular F/OSS. Add to the mix that the dangers of AI aren't Skynet-like (i.e. from the "intelligence" of the capable-but-stupid model itself). They're from greedy and fast-and-loose corporations and applications, i.e. calculating opaque credit scores, replacing dialog where a person is reasonably expected at the other end of the line with AI which then fucks up (e.g. eating dis

    • by gweihir ( 88907 )

      Yes. Seems to be marketing, because the actual capabilities of ChatAI are not that impressive and not a new threat. That replacing low-level white-collar workers with automation would become possible has been expected for a decade or so now. There are basically no other real threats that did not already exist before. And "tremendous benefits"? That seems rather unlikely. Some small benefits, sure, but that is it.

    • It's a nice two-pronged attack on others interested in the technology.

      First? It's so very scary. OMG! IT COULD END HUMANITY! Do not touch this without our specialized guidance if you know what's good for you!

      Second? If they hype it enough, they'll get regulations, regulations they are DESPERATE to get implemented worldwide, so that they can prevent other players from joining the game that they've started.

      So by hyping the danger they're both creating fear of non-expert dabbling, and creating the proper envir

    • by Pieroxy ( 222434 )

      Here are some of the dangers of LLM: https://youtu.be/xoVJKj8lcNQ [youtu.be]

      Looks pretty real to me, especially given the fact that no one understands how they got to this point.

  • by Baron_Yam ( 643147 ) on Wednesday June 07, 2023 @07:05PM (#63584590)

    This isn't nuclear technology that requires difficult-to-source resources and a lot of effort to put together... 'AI' can run on standard computers that pretty much anyone with a room-temperature IQ and a manual can set up.

    There is no monitoring it. There is no deciding how it's going to be directed. Once it exists, someone, somewhere will do something bad with it and all we can really do is try to predict what, where, and when and try to plan to deal with that fact. Prevention of malicious or merely deleterious deployment of the technology is not possible.

    • Yeah, what they want is a watchdog agency keeping tabs on the LEGAL entities. That way you can't legally compete with them without having the overhead necessary to deal with a watchdog.
      They just want to raise that barrier to entry a little higher.

        Yep, they gave away the game in their statement: "We talk about the IAEA as a model." I.e., "We want an international agency accountable to no one to oversee global development of a product we want full control over." Let's also not discount the intentional conflating of AI with nuclear weapons, the whole purpose of which is to scare people into going along with their unaccountable governance body without question.

        This shit needs to be reined in. If there's one group I don't trust with overseeing AI, it's
    • by Rademir ( 168324 )

      > Once it exists, someone, somewhere will do something bad with it

      People have been doing harmful stuff with it for a while. Check out Cathy O'Neil's Weapons of Math Destruction.

    • Agreed. We're past the point of no return.
    • by AmiMoJo ( 196126 )

      Anyone with a computer can create malware, but in practice you have a relatively small number of companies that sell it to governments and corporations. The vendors and the customers are regulated.

  • Skeptical (Score:4, Insightful)

    by ZipNada ( 10152669 ) on Wednesday June 07, 2023 @07:14PM (#63584606)

    I am skeptical that this can be contained at all. This kind of AI is a lot easier to implement than making a plutonium bomb and building a delivery vehicle. There already are multiple open source LLM implementations that you can install and run on your laptop. How can anything like that be constrained?

    This is entering a realm that was science fiction just a short time ago. And in those kinds of stories there are often competing AIs, some malicious and some defensive. It evolves into a cyber-war that humans can't grasp at all. I'm thinking that the only chance we have is to seriously beef up the defense.
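
    The parent's point is easy to demonstrate. A minimal sketch, assuming Hugging Face's transformers library, with the small openly released "gpt2" model standing in for any of the laptop-sized open LLMs alluded to above:

        from transformers import pipeline

        # Downloads openly published weights and runs them locally on CPU;
        # no licence gate, registry, or special hardware is involved.
        generator = pipeline("text-generation", model="gpt2")
        result = generator("Regulating software that anyone can download is",
                           max_new_tokens=40)
        print(result[0]["generated_text"])

    Everything an agency could plausibly inspect happens upstream, at training time; once the weights are public, running them is just an ordinary program.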

    • by Tablizer ( 95088 )

      Indeed! All a watchdog could do is monitor for oddities and alert the authorities if something dodgy shows up. It's better than nothing, but probably won't stop smaller crooks and state-sponsored secret labs.

      Sarah Connor, please start packing...

    • Oh, yes. This technology wasn't possible when Westworld aired its last season, and look where we are now. Multiple sectors are being destabilised because no-one knows how to use the technology rather than be used by it (thinking of that lawyer who got scammed by AI citations that didn't exist). This will continue, because we as a species don't know how to think long-term. That's my ha'penny's worth, anyway.
    • The new organization would need a powerful AI to assist in identifying rogue AI's. What could possibly go wrong?
  • What would this International Agency do - meet every 6 months at some tropical conference venue to (retrospectively) oversee and certify (often open-source) software applications that have been patiently waiting, unreleased, since the last meeting?
    • What would they do?

      Simple - ban any country that isn't on perfect terms with the US from importing, owning or operating any AI related technologies.

      Which is why the suggestion won't work.

  • It's the only way to contain AI. It's a joke, folks! People don't seem to get jokes anymore.
  • ... from simply declaring themselves for-profits.

    That would be just.

    Fuck Sam Altman.
  • The UN "Watchdogs" are corrupt.

    https://thegrayzone.com/2019/1... [thegrayzone.com]

    Wealthy AI implementers can just buy them off.

  • Like most UN crap, it is an organization without control, with huge budgets spent on pointless perks for its few employees and zero useful results. Under the "nuclear watchdog" the nuclear industry managed to have its worst accidents, all due to lack of meaningful safety, and at the same time killed itself with useless PR. We also saw the worst nuclear proliferation in ages. So yeah, the same approach will work MIRACLES for the "AI threat".

  • - the Turing Heat.
  • ...generally ends up being a cluster-flop.

  • Kind of like the agency that allowed Iran to advance their nuclear program while the whole time stating they weren't? Yes, that should be very effective.
