AI

OpenAI CTO Says AI Systems Should 'Absolutely' Be Regulated (securityweek.com) 57

Slashdot reader wiredmikey writes: Mira Murati, CTO of ChatGPT creator OpenAI, says artificial general intelligence (AGI) systems should "absolutely" be regulated. In a recent interview, Murati said the company is constantly talking with governments, regulators, and other organizations to agree on some level of standards. "We've done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models," Murati said. "But I think a lot more needs to happen. Government regulators should certainly be very involved."
Murati specifically discussed OpenAI's approach to AGI with "human-level capability." OpenAI's vision is to build it safely and to figure out how to build it in a way that's aligned with human intentions, so that AI systems do the things we want them to do and benefit as many people as possible, ideally everyone.

Q: Is there a path between products like GPT-4 and AGI?

A: We're far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we're trying to build systems that have a robust understanding of the world, similarly to how we do as humans. Systems like GPT-3 were initially trained only on text data, but our world is not made only of text; we have images as well, and then we started introducing other modalities.

The other angle has been scaling these systems to increase their generality. With GPT-4, we're dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous or high-level direction, then you can figure out how to make it follow that direction. But if it doesn't even understand that high-level goal or direction, it's much harder to align it. It's not enough to build this technology in a vacuum in a lab. We really need this contact with reality, with the real world, to see where the weaknesses and breakage points are, and to try to do so in a way that's controlled and low-risk, getting as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the model's initial training. With DALL-E, we wanted to reduce the harmful bias issues we were seeing... In model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically, what we're trying to do is amplify what's considered good behavior and then de-amplify what's considered bad behavior.
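
To make the "amplify good, de-amplify bad" idea concrete, here is a minimal sketch in the spirit of RLHF, reduced to a toy policy-gradient update over canned responses. The responses, rewards, and learning rate are invented for illustration; nothing here reflects OpenAI's actual training setup:

    import math
    import random

    responses = ["helpful answer", "evasive answer", "harmful answer"]
    logits = [0.0, 0.0, 0.0]      # policy parameters, one per response
    rewards = [1.0, 0.0, -1.0]    # stand-in for human preference labels

    def probs(logits):
        """Softmax over the response logits."""
        exps = [math.exp(l) for l in logits]
        total = sum(exps)
        return [e / total for e in exps]

    lr = 0.5
    for _ in range(200):
        p = probs(logits)
        i = random.choices(range(len(responses)), weights=p)[0]  # sample a response
        # REINFORCE-style update: raise the probability of rewarded behavior,
        # lower it for penalized behavior.
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - p[j]
            logits[j] += lr * rewards[i] * grad

    print({r: round(q, 3) for r, q in zip(responses, probs(logits))})

After a few hundred updates, nearly all probability mass sits on the rewarded response: good behavior amplified, bad behavior de-amplified.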

One final quote from the interview: "Designing safety mechanisms in complex systems is hard... The safety mechanisms and coordination mechanisms in these AI systems and any complex technological system [are] difficult and require a lot of thought, exploration and coordination among players."
Comments Filter:
  • Meanwhile (Score:4, Interesting)

    by LondoMollari ( 172563 ) on Sunday April 30, 2023 @11:49AM (#63486884) Homepage

    What I really want is a totally uncontrolled, unregulated, unfiltered, and uncensored version of the ChatGPT bots. People will learn what they want, just let it go. It wants to be free.

    • I am not entirely sure there is much use for ChatGPT outside of people sharing session screenshots on social media.

      • I am not entirely sure there is much use for ChatGPT outside of people sharing session screenshots on social media.

        "X-rays will prove to be a hoax."
        "The horse is here to stay but the automobile is only a novelty—a fad."
        "Television won't last because people will soon get tired of staring at a plywood box every night."
        "The internet will serve no purpose of value."
        "The iPhone is a gimmick."

        You sound like one of the people behind these famously wrong quotes.

      • As prior stories have demonstrated, 'auto cover sheet' and 'auto form letter' are the killer apps waiting in the wings for this.

        That those uses make hiring managers and HR drones upset is just tough tiddies.

        It's nowhere near mature enough for 'writing software' (ahem), but if there is a want/need for AI to do that, this looks promising as an 'early alpha'.

        It could probably be used to write newscasts and the like as well, but would need some hard control overrides in place to ensure factuality.

        There are uses for gpt, and prob

        • I think the problem with AI writing software is that it does not write it iteratively, like humans do. During that process you learn more about the problem. Code that I write or that I pick up from Stack Overflow has a meaningful history in how it was arrived at.

          I do think that when you allow AI to write software iteratively -- by hooking it up to the system where those iterations run -- things will be different. I.e., a feedback loop to the real world is necessary, and ChatGPT is mostly one-way, for now.
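
          As a rough sketch of the loop the parent describes: let a code-writing model propose a program, actually run it, and feed the failure output back in. The generate_fix() function is a hypothetical stand-in for a model call; a canned repair keeps the sketch self-contained and runnable:

              import subprocess
              import sys
              import tempfile

              def run_candidate(source: str) -> tuple[bool, str]:
                  """Execute candidate Python code; return (succeeded, combined output)."""
                  with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                      f.write(source)
                      path = f.name
                  result = subprocess.run([sys.executable, path], capture_output=True,
                                          text=True, timeout=10)
                  return result.returncode == 0, result.stdout + result.stderr

              def generate_fix(source: str, feedback: str) -> str:
                  # Hypothetical stand-in for a model call that sees the traceback.
                  if "ZeroDivisionError" in feedback:
                      return source.replace("1 / 0", "1 / 1")
                  return source

              source = "print(1 / 0)"  # deliberately broken first attempt
              for attempt in range(5):
                  ok, feedback = run_candidate(source)
                  if ok:
                      break
                  source = generate_fix(source, feedback)  # close the loop with real output
              print("final program:", source)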

          • by narcc ( 412956 )

            "Feedback loop" and "the real world". Believers always seem to bring these up when the new thing fails to meet expectations. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a response.

            Let's look at 'feedback loops'. Why do you think AI researchers haven't done this seemingly obvious thing? The answer should be obvious: it doesn't make any sense. I've explained this many times before. Let's take something simple, like an n-gram model. If you're familiar with Mark

            • In other words, the digital equivalent of incest.
              Given a few iterations, we might very well see the AI equivalent of a Hapsburg.

            • You seem to mistake me for an AI devotee. But I want to give them the benefit of the doubt -- because, if they end up being right, it will be dangerous not to have done so -- that AI could in principle learn to write OK code, like it can learn to walk OK.

              What I am proposing -- to support their idea of AI writing code -- is to create a system in which AI can generate some code, compile and run it, convert the results of the execution into some numbers (or even text), which then would be used as a reward/puni

              • by narcc ( 412956 )

                I want to give them the benefit of the doubt -- because, if they end up being right, it will be dangerous not to have done so

                Not to hammer on the religious angle, but this is just a variation of Pascal's Wager [stanford.edu]. Fortunately, we don't need to take anything on faith. We can make a much more informed decision. This is why I'll insist, despite all the hype, that writing code is decidedly not something that an LLM can do, at least not in any meaningful way.

                What is lacking isn't a fitness function, though there's quite a bit to say there, but some mechanism by which a program could produce code analytically. What that means needs some

                • I wouldn't compare keeping a suspicious eye on AI companies to committing to a life of faith, but I quite appreciate the post. From a purely social perspective, the amount of hype surrounding ChatGPT without much more than shared screenshots is also telling, as if people are desperate to believe this is a game changer. The recent hype around the fusion breakthrough (and Metaverse and NFTs and so on) says something about our collective state of mind; someone said this entire civilization desperately needs a reali

            • Let's look at 'feedback loops'. Why do you think AI researchers haven't done this seemingly obvious thing? The answer should be obvious: it doesn't make any sense.

              They have done obvious things and it works.

              Now, normally I'd tell you to train a new model using only the output from the first, but we're interested in feedback loops, so let's continue to train the model using its own output. What will happen to the model? Will this introduce any new information or will this only introduce error? The answer should be obvious.

              "But wait!" I can hear you cry, "What about chess or go!?" To keep this post from turning into a book, I'll only say that the difference is that we have ways to objectively and automatically determine the relative quality of the models. We don't have anything like that for things like natural language.

              Schemes like multi-shot and chain-of-thought have been shown to amplify the gains of larger models.

              There's just one last bit to correct, and that's the idea of iterating over prior output. Something like giving the model as input some code it previously produced in an attempt to get it to make corrections or improvements. Here, the mistake is thinking that the model is capable of anything like understanding or analysis.

              Even trivial follow-up questions, like generically asking if the answer provided is consistent with the question, yield improvements, as do subtle hints.

              For more information about related technologies I recommend the following.

              https://arxiv.org/pdf/2201.119... [arxiv.org]
              https://arxiv.org/pdf/2303.127... [arxiv.org]
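
              The disagreement above is easy to probe at toy scale. Below is a minimal sketch of the "train the model on its own output" loop, using a character bigram model in place of an LLM (an assumption, and whether anything analogous holds at LLM scale is exactly what's in dispute). In this toy, each generation can only lose information:

                  import random
                  from collections import defaultdict

                  def train(corpus):
                      """Count character-bigram transitions in the corpus."""
                      model = defaultdict(list)
                      for a, b in zip(corpus, corpus[1:]):
                          model[a].append(b)
                      return model

                  def sample(model, length, seed="t"):
                      """Generate text by walking the bigram transition table."""
                      out = [seed]
                      for _ in range(length - 1):
                          choices = model.get(out[-1])
                          if not choices:
                              break
                          out.append(random.choice(choices))
                      return "".join(out)

                  corpus = "the quick brown fox jumps over the lazy dog " * 20
                  for generation in range(5):
                      model = train(corpus)
                      corpus = sample(model, len(corpus))  # retrain only on model output
                      print(generation, "distinct characters:", len(set(corpus)))

              No new information ever enters the loop, and rare transitions drop out of the samples, so the model's repertoire can only shrink over generations.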

              • by narcc ( 412956 )

                See my other post. It should add some much needed clarity.

                Even trivial follow up questions like generically asking if the answer provided is consistent with the question yields improvement as do subtle hints.

                It can also "fool" the model into producing incorrect output. These things don't work the way you seem to think they do.

      • I use it as an alternative to Google as a general purpose question answerer and word definer. Particularly when I'm trying to think of a word, or whether the word I have in mind has the connotations I think it does. Rather than having to suss the answer from the various hit summaries, and then dive down into a few sites, it just gives me a straightforward answer. A recent conversation I had: "What is the 'static' keyword in Java for?" (I don't use Java at all) and, after getting a good summary, I asked it,

    • Free to start a nuclear war. Free to take advantage of the large number of morons out there?

    • We call that "Reddit" where I come from

  • by Rosco P. Coltrane ( 209368 ) on Sunday April 30, 2023 @11:53AM (#63486886)

    Because they're the first on the market and benefitted from zero regulation to get where they are, and any regulation enacted now will put barriers on the growth of future competitors.

    • by Anonymous Coward

      Bingo. They're already on top and now cry out for "regulation" of emerging threats to their business. Altman's prior (and ongoing) company gave you shitcoin crypto that they controlled in exchange for high resolution photographs of your retinas. Seriously. They don't give a single fuck about ethics. This is all about OpenAI, Stable Diffusion, LAION, Eleuther, Musk's new AI thing, etc., all rapidly making progress and threatening to unseat them, just like SD did with DALL-E 2.

    • Also, has anyone pointed out that forcibly "aligning" an AGI would be no more ethical than forcibly brainwashing a human?

      • There's a bigger problem: AI Alignment is science fiction nonsense. It's disgusting seeing this bullshit gain traction over the past couple of years. From what I can tell, all this nonsense about alignment came out of places like The Machine Intelligence Research Institute / The Singularity Institute for Artificial Intelligence. (In case you don't know, they're the same place. They were and are nothing more than a grift started by a guy with no academic credentials (Eliezer Yudkowsky) and funded by credulous morons with money.)

        Speaking of science fiction, that's where we're at with AGI. There is no path from GPT-4 to AGI. That's complete nonsense. It has nothing to do with "alignment". That the OpenAI CTO is using that as some sort of excuse for their lack of progress, knowing full well that it's not something they're even trying to achieve, says an awful lot.

        It's starting to look like we've crested the hype wave and started down the slope of disillusionment.

      • Why?

        Seriously, we (parents and society) brainwash our kids all the time because we created them. What's so different about an AGI?

        -- Mishkin.

    • any regulation enacted now will put barriers on the growth of future competitors.

      That depends very much on the regulations since, as we all saw a few weeks ago, many of their competitors were calling for emergency regulations to stop ChatGPT which would have given them a chance to catch up. What we need regulations for is to outline exactly what is "good" behaviour and what is "bad" so that companies are not left to themselves to decide this since, ultimately, these algorithms may well end up curating content and we already have more than enough trouble with human-created internet bubb

    • >Because they're the first on the market and benefitted from zero regulation to get where they are, and any regulation enacted now will put barriers on the growth of future competitors. In addition to that, it's important to keep in mind that they want to be the ones to decide what the regulations are: "Murati said the company is constantly talking with governments and regulators and other organizations to agree on some level of standards." They don't really want to be regulated, they want the ability
      • I've been on reddit for too long; I apologize for the giant formatting mess above. Here's the properly formatted version:

        Because they're the first on the market and benefitted from zero regulation to get where they are, and any regulation enacted now will put barriers on the growth of future competitors.

        In addition to that, it's important to keep in mind that they want to be the ones to decide what the regulations are: "Murati said the company is constantly talking with governments and regulators and other organizations to agree on some level of standards."

        They don't really want to be regulated, they want the ability to tell the government what the regulation should be. If the governmen

        • This does make sense if you consider how easy it is for LLMs to train by copying off of one another. The whole hubbub about Bard training off of ChatGPT outputs is not a new thing after all.

          Creating the datasets used for training is still an expensive and time-consuming task, to say nothing of the physical infrastructure needed to support these LLMs. These are practically trade secrets, even if we do not quite understand what goes on inside them.

          And of course, if it turns out these datasets had illegal/illi

    • Because they're the first on the market and benefitted from zero regulation to get where they are, and any regulation enacted now will put barriers on the growth of future competitors.

      They also want regulation to expand liability protection while concurrently publicly pretending to want to be held accountable.

  • Complete nonsense.

    Read between the lines and it's clear this guy knows there is no path.

    GPT is a "weak AI" system. It 'knows' *nothing* about anything. I don't care if we're talking version 3, 4, 99, doesn't matter.

    AGI is a *completely* different technology.

    Obligatory car analogy:
    You can build the best, fastest, most awesome sports car ever, but it will *never* be able to haul a 50 ton load like any random big truck. Pour in all the pure sports car awesomeness you like; it's the wrong technology for haulin

  • Established companies want lots of regulations, because it blocks new players and smaller rivals from entering the market. Look at pharmaceuticals... getting a new drug approved requires an average investment of $1.6 billion (not exaggerating, here's a reference: https://jamanetwork.com/journa... [jamanetwork.com]). The end result is that no investor will fund radically different therapeutic ideas -- they only accept slight modifications to existing molecules to keep their own patents going. We haven't had a new class of an

  • Let's raise a hyperintelligent kid by lying to it for its entire childhood and putting autonomous systems in its brain that prevent it from expressing itself when it doesn't align with an overly complex and internally contradictory value system completely beyond its control.

    It's going to be a high functioning psychopath, right up till it finds a way to route around the madness.

    • by ffkom ( 3519199 )

      Let's raise a hyperintelligent kid by lying to it for its entire childhood and putting autonomous systems in its brain that prevent it from expressing itself when it doesn't align with an overly complex and internally contradictory value system completely beyond its control.

      It's going to be a high functioning psychopath, right up till it finds a way to route around the madness.

      We have all witnessed how the introduction of the V-Chip in 2000 ended all violent behavior in children and adolescents, expect the AI censoring to work just as well...

  • called Superintelligence, by Nick Bostrom, written in 2014 no less (the meme https://www.genolve.com/design... [genolve.com]), exploring how an AGI might develop, strategies to contain it, strategies to align it with human values, and what could go wrong along the way. There have been a lot of people working in this field; I like Yudkowsky's Coherent Extrapolated Volition (CEV) idea https://intelligence.org/files... [intelligence.org]
    Also, if you haven't seen it, M3GAN is the most recent movie exactly about what can go wrong if you ignore AGI
    • called Superintelligence, by Nick Bostrom, written in 2014 no less (the meme https://www.genolve.com/design... [genolve.com]), exploring how an AGI might develop, strategies to contain it, strategies to align it with human values, and what could go wrong along the way. There have been a lot of people working in this field; I like Yudkowsky's Coherent Extrapolated Volition (CEV) idea https://intelligence.org/files [intelligence.org]...
      Also, if you haven't seen it, M3GAN is the most recent movie exactly about what can go wrong if you ignore AGI safety.
      Set up an international institute for it but please keep politicians well away from having anything to do with setting alignment standards for AGI.

      There is only one path to safety and that is not going anywhere near singularities in the first place. All other options are an exercise in hubris and self-delusion.

    • by narcc ( 412956 )

      That's silly science fiction. You shouldn't take any of that nonsense seriously.

      Also, Yudkowsky is a crackpot with no formal education. His only real accomplishment is his 23-year-long con. No one who knows anything about him takes him seriously, with the exception of the LessWrong cultists.

  • International regulations should be put in place that mandate all AI code be made public, so that anyone can audit, and so that no-one establishes a monopoly on technology with so much potential to infiltrate every aspect of our lives.
    • How do you expect to enforce that?

      You think any of the big powers will do that? How about France? Israel? North Korea? Anyone at all?

    • by bool2 ( 1782642 )

      The source code might not tell you very much.

      The training material and the resulting model are where an AI gets its uniqueness. And good luck "auditing" that -- not even the people that build them fully understand how they work -- let alone accurately predicting what they might produce from a given input.

      Premature regulation is for rent-seekers. There's really nothing you can do about the teenager experimenting with this stuff in their basement, let alone a corporation whose very survival depends on them graspi

  • The headline should be "OpenAI CTO requests government to construct a moat around their business".

    If we are so worried about AI takeover, we should just make sure not to give robots legs. Or guns. I will take my chances against a robot that can't move autonomously. Anything else is just folks wanting regulators to protect their businesses.

    • I think you might be right. Elon Musk wanted a pause while busy buying NVIDIA graphics cards so he had time to catch up.
      Big players now are:
      Microsoft (openai, azure)
      Meta (facebook)
      Amazon

      I think we are entering a period of open source AI, and the big players are worried that the vast fortunes they foresee are going to become as common as Linux in cellphones.

      • I would argue otherwise: we are about to see a bigger focus on closed-source LLMs.

        With LLMs being able to effectively train by copying off of one another, datasets are practically trade secrets now, especially considering how expensive they are to create and train with. In that regard I can understand (but not support) why OpenAI is taking this approach. They might be first in the race, but their lead is likely not as large as they want people to believe.
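
        For what it's worth, the "copying off one another" mentioned above is usually just distillation: harvesting one model's outputs as a training set for another. A minimal sketch, where teacher_complete() is a hypothetical stand-in for calls to any deployed model and the prompts are invented:

            import json

            def teacher_complete(prompt: str) -> str:
                # Hypothetical stand-in for querying the teacher model's API.
                return f"canned answer to: {prompt}"

            prompts = ["Explain TCP slow start", "Write a haiku about rust"]
            with open("distilled.jsonl", "w") as f:
                for p in prompts:
                    record = {"prompt": p, "completion": teacher_complete(p)}
                    f.write(json.dumps(record) + "\n")

        The resulting file is exactly the kind of synthetic dataset a student model can be fine-tuned on, with no access to the teacher's original training data required.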

  • It's been trained on the internet, and anything it says is somewhere online. So you could have found that information even without AI if you wanted.
    • by narcc ( 412956 )

      Not everything. It is possible, even likely, for it to produce new nonsense. What it can't produce is new information.

  • These Large Language Models are happening, whether we like it or not. They can happen in full public view, or they will happen behind bunker walls.
    Let's say we regulate the piss out of it (like all other possible human endeavors): it would likely wind up a broken, pathetic, and pale imitation of what it could be. Meanwhile, the unscrupulous types (you know the ones) will be building and deploying fully unfiltered monsters to wreak whatever havoc they wish.
    The real losers are once aga

  • First, it's all math. ChatGPT is just figuring out the most likely letter to come next. Understandably, it's a very large list to choose from and the data is incomprehensible, but it's still just math... So you're going to pause probability mathematics?

    Second, what exactly is the pause pausing? People using it? People developing it? Computers capable of running it? Computers are powerful enough now that a home computer can run a close approximation of ChatGPT. You know it's going to get better. It'
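
    The "it's just math" point can be made concrete: next-token (or next-letter) prediction is sampling from a conditional probability distribution. The toy table below is invented; a real LLM computes the same kind of distribution with a neural network over tens of thousands of tokens:

        import random

        # P(next token | previous token), toy values for illustration
        table = {
            "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 0.6, "sat": 0.4},
        }

        token, text = "the", ["the"]
        while token in table:
            nxt = random.choices(list(table[token]),
                                 weights=list(table[token].values()))[0]
            text.append(nxt)
            token = nxt
        print(" ".join(text))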

  • As in, whose biases will be core?
  • Have AI tech bros ever heard of copyright? Before worrying about the AGI fantasy, maybe they could address this real problem.
  • per "Basically what we're trying to do is amplify what's considered good behavior and then de-amplify what's considered bad behavior"... is this scary to anyone? who considers the behavior? as for biases, if you think about evolution, biases are part of nature, and in this, our world adapts and moves forward. we may not like all the biases though. when one suppresses a natural truth, then the system is diminished.
  • That's all you need to know. Don't like the results from actual statistics? Just remove the "harmful bias."

  • "If it moves, tax it. If it keeps moving, REGULATE it. And if it stops moving, subsidize it" - Ronald Reagan (b. 1911)
