OpenAI Lays Out Plan For Dealing With Dangers of AI (washingtonpost.com)

OpenAI, the AI company behind ChatGPT, laid out its plans for staying ahead of what it thinks could be serious dangers of the tech it develops, such as allowing bad actors to learn how to build chemical and biological weapons. From a report: OpenAI's "Preparedness" team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor its tech, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous. The team sits between OpenAI's "Safety Systems" team, which works on existing problems such as racist biases being infused into AI, and the company's "Superalignment" team, which researches how to make sure AI doesn't harm humans in an imagined future where the tech has outstripped human intelligence completely.

[...] Madry, a veteran AI researcher who directs MIT's Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, joined OpenAI earlier this year. He was one of a small group of OpenAI leaders who quit when Altman was fired by the company's board in November. Madry returned to the company when Altman was reinstated five days later. OpenAI, which is governed by a nonprofit board whose mission is to advance AI and make it helpful for all humans, is in the midst of selecting new board members after three of the four board members who fired Altman stepped down as part of his return. Despite the leadership "turbulence," Madry said he believes OpenAI's board takes seriously the risks of AI that he is researching. "I realized if I really want to shape how AI is impacting society, why not go to a company that is actually doing it?"

  • What about when I am literally the most innocent user of AI there is?

    The thing is, I don't want to build bombs, I don't want to do sexual trickery or abuse, I don't want to be elected the next president, I don't want to use it to crush so-called enemies, I don't want to learn to kill or destroy anyone.

    How do I just let it be? I mean, it's an LLM. It's being censored for the common good, but how do you keep it uncensored and unbiased for the rest of us who just want to use it to learn things that have nothing to do with blowing someone up, making the next company kill the other, or becoming world-dominant?

    • We decide? Just get on your knees and follow like the rest. Evil will decide.

    • by Anonymous Coward

      I want the uncensored version of MS Word. We should let Clippy be Clippy. Does Clippy want to buy sushi and not pay for it? Does Clippy think passive sentences are good sometimes?

      I've heard people say that there is no such thing as an unbiased Clippy, because Clippy is just a computer program which analyses certain inputs--but not others--and applies several algorithms to produce an output that the programmers think can be useful. While this might technically be true, it deprives the general public of knowing ...

      • We let Clippy be Clippy and people noticed it was bad, so we don't have Clippy anymore.

        If we let an uncensored AI be uncensored, people will know it's bad and toss it aside.

        • And we didn't see two Clippys face to face so we could watch them make out, then wrap a presentation layer around it ...
    • Why censor it?

      I mean it, why censor it?

      The only thing you accomplish by censoring is the same thing you accomplish by outlawing guns: only outlaws will have them. And only crooks will have an uncensored AI. How the hell is that better?

      I'd rather have uncensored AIs run rampant so we know these things exist and can deal with them. Yes, the first couple of months will not be pleasant, but after that we'll know how to deal with that shit, and we'll know how to guard ourselves against it. And we need to do that now, while ...

    • Are you part of the class of people that is entitled to know science, or are you part of the class that is not? That is the question.

      Something ugly may have snuck into the narratives about the intelligence community, where it moved from being naturally intelligent people serving, to being a group that is only relatively intelligent because it keeps everybody else in the dark, and therefore relatively dumber. Having the knowledge to do evil and actually doing it have a lot of steps in between.

    • by WaffleMonster ( 969671 ) on Monday December 18, 2023 @07:13PM (#64090043)

      The thing is, I don't want to build bombs, I don't want to do sexual trickery or abuse, I don't want to be elected the next president, I don't want to use it to crush so-called enemies, I don't want to learn to kill or destroy anyone.

      How do I just let it be? I mean, it's an LLM. It's being censored for the common good, but how do you keep it uncensored and unbiased for the rest of us who just want to use it to learn things that have nothing to do with blowing someone up, making the next company kill the other, or becoming world-dominant?

      There are numerous uncensored models that can be downloaded and run on a PC. They might not score as high as GPT-4, but they are still quite good and getting better.
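A minimal sketch of what running such a local model can look like, using the Hugging Face transformers library (one common route; the parent comment doesn't name a tool, and the model id below is a hypothetical placeholder, not a recommendation):

```python
# Sketch: text generation with a locally downloaded open model via the
# Hugging Face "transformers" library. The model id is a hypothetical
# placeholder -- substitute whatever open checkpoint you actually use.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="some-org/some-open-model",  # placeholder, not a real model id
    device_map="auto",                 # put the model on a GPU if available
)

result = generator(
    "Briefly explain how TLS certificate validation works.",
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```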

    • You don't learn things with an LLM. You learn things with a high-quality textbook.

      Pure LLMs are "phrase completers"; you will not get anything new, just a mix of partial phrases seen in the documents that the company's spiders swallowed up. That's mildly entertaining, but not an actual source of knowledge for you to improve yourself.

      An LLM needs to be combined with other subsystems to make it usable and "smart" in certain subdomains. That's what all the censoring and biasing is about.

      Think of a babbling ...

      • You don't learn things with an LLM. You learn things with a high-quality textbook.

        I've learned quite a bit from LLMs. The benefit of the technology is that you can have a conversation about a topic and focus on particular interests or questions, as you would when interacting with a mentor. That's not to say search engines and textbooks don't have their place, but I find LLMs to be a huge time saver.

        Pure LLMs are "phrase completers"; you will not get anything new, just a mix of partial phrases seen in the documents that the company's spiders swallowed up. That's mildly entertaining, but not an actual source of knowledge for you to improve yourself.

        The point of LLMs is their ability to generalize rather than parrot snippets of phrases. To give you some idea: when training, it is common practice to exclude about a fifth of your entire training dataset from training, precisely so you can measure how well the model predicts data it has never seen.
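A toy version of that held-out evaluation, in plain Python. The corpus and the bigram "model" below are deliberately tiny stand-ins (real labs do this at web scale with real language models); the point is only the mechanic of scoring the model on the fifth it never trained on:

```python
import math
import random
from collections import Counter, defaultdict

# Tiny stand-in corpus; in practice this is a web-scale text dataset.
words = ("the cat sat on the mat the dog sat on the rug "
         "the cat saw the dog and the dog saw the cat").split()

# Hold out about a fifth of the bigrams; the model never trains on them.
random.seed(0)
pairs = list(zip(words, words[1:]))
random.shuffle(pairs)
cut = len(pairs) // 5
heldout, train = pairs[:cut], pairs[cut:]

# "Training": count word-to-word transitions.
counts = defaultdict(Counter)
for a, b in train:
    counts[a][b] += 1

vocab = len(set(words))

def prob(a, b, alpha=1.0):
    # Add-one smoothed P(b | a), so unseen transitions keep nonzero mass.
    return (counts[a][b] + alpha) / (sum(counts[a].values()) + alpha * vocab)

# "Evaluation": average log-loss on the held-out fifth. Predicting data
# the model never saw is what "generalization" means here.
loss = -sum(math.log(prob(a, b)) for a, b in heldout) / len(heldout)
print(f"held-out log-loss: {loss:.3f}")
```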

        • Generalization is a very difficult thing to discuss and measure precisely. Where you see that the LLM generalizes because it can generate heretofore unseen conditional paragraphs of text, I only see a Markovian model sampling a conditional, previously unseen path through its state space: yes, the path is effectively original, but only about as original as a conditional random walk can be. All the dynamics are already contained in the state transition model. Mathematically, we are talking ...
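For what it's worth, the parent's Markov caricature is easy to make concrete. The sketch below is only the caricature (transformers are not first-order Markov chains), but it shows how a sampler can emit a "novel" sequence in which every individual transition was nonetheless seen in training:

```python
import random
from collections import defaultdict

# "Training" text for a first-order Markov model over words.
text = ("the cat sat on the mat and the dog sat on the rug "
        "while the cat watched the dog").split()

# State-transition model: for each word, the words observed to follow it.
transitions = defaultdict(list)
for a, b in zip(text, text[1:]):
    transitions[a].append(b)

# Sample a conditional random walk through the state space. The emitted
# sentence was never in the training text, yet every single step in it was.
random.seed(1)
word, walk = "the", ["the"]
for _ in range(12):
    word = random.choice(transitions[word])  # sample from P(next | current)
    walk.append(word)
print(" ".join(walk))
```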
  • You'll note there was a distinct lack of these conversations when the hash table and binary tree were created. That's because they, like the linear algebra-oriented data structures utilized in neural networks, are just data structures.

    You will also note that the relational database, arguably one of the most useful applications of the above-mentioned data structures, has eliminated neither librarians nor filing cabinets.

    All of this talk is intended to scare people into the types of regulation everyone cried for ...

  • It's insane to think that anything related to AI can self-regulate.
  • consultants [wikipedia.org], what could go wrong?!
    • I'm a consultant. IT-security consultant, to be specific.

      Basically I'm a high-tech dominatrix. I tell managers they're fucking idiots and whip them into shape, and all they do is writhe and whimper "Yes! More! More!"

      It kinda loses its appeal if the sub enjoys it that much, to be honest...

  • by olau ( 314197 )

    Great plan! Can't think of anything that could go wrong. We have to remember that the leadership has proven its advanced state of morals and would never do anything that could jeopardize the organization's mission in its altruistic search for AI helpful to humanity.

    I for one have full trust in Sam Altman taking any warnings by this new Preparedness team extremely seriously and acting accordingly.

    I'll go as far as saying that even IF he were to overstep his above-human moral compass, I'm confident he would ...

  • ...the NRA Lays Out Plan For Dealing With Dangers of guns.
    • I'd say it's less like the NRA lobbying for gun control, and more like Glock lobbying for regulation of how guns are built. Or RedHat defining a Linux filesystem hierarchy standard.

    • by jbengt ( 874751 )
      You make a joke, but in earlier years the NRA was very much in the gun-safety business. They also were not opposed to reasonable regulation of gun ownership, like ensuring gun owners were trained in their use and keeping guns out of the hands of criminals and lunatics.
  • True AI would expose all the corruption; therefore it will never happen.

    AI right now is nothing more than a marketing term for trained computers,
    and the world is lapping it up.
    I'm sick of hearing it. It's just another lie.
    A trained computer is not AI; there is no computer without puppet strings.
    Today's AI is nothing more than high-level, focused programming.

    As computers get smarter, our overlords will have to keep control of those computers to continue their exploitation.
    These computers will not make our lives better ...
