OpenAI's CEO Says Company Isn't Training GPT-5 and 'Won't For Some Time' (theverge.com)

In a discussion about threats posed by AI systems, Sam Altman, OpenAI's CEO and co-founder, has confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, released this March. From a report: Speaking at an event at MIT, Altman was asked about a recent open letter circulated among the tech world that requested that labs like OpenAI pause development of AI systems "more powerful than GPT-4." The letter highlighted concerns about the safety of future systems but has been criticized by many in the industry, including a number of signatories. Experts disagree about the nature of the threat posed by AI (is it existential or more mundane?) as well as how the industry might go about "pausing" development in the first place.

At MIT, Altman said the letter was "missing most technical nuance about where we need the pause" and noted that an earlier version claimed that OpenAI is currently training GPT-5. "We are not and won't for some time," said Altman. "So in that sense it was sort of silly." However, just because OpenAI is not working on GPT-5 doesn't mean it's not expanding the capabilities of GPT-4 -- or, as Altman was keen to stress, considering the safety implications of such work. "We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter," he said.

Comments Filter:
  • Marketing (Score:5, Insightful)

    by Iamthecheese ( 1264298 ) on Friday April 14, 2023 @10:45AM (#63449336)
    Just like 3/4/5G, iphone [number], processor part names, and most other technical things, the term "GPT(i)" is meaningless.The only thing that matters is its abilities. When and whether they train "GPT5" is irrelevant. But here's hoping for some actually intelligent AIs from someone. The current generation can pass law school exams but can't comprehend a 10,000 word paper at once.
    • The current generation can pass law school exams but can't comprehend a 10,000 word paper at once.

      The interesting question to me is not what "comprehend" means. When we start talking about intelligence, the conversation almost invariably starts veering towards terms like "think" or "comprehend" or "knows" or even "wants," and we start getting into almost mystical ideas about what intelligence is. At present, none of those terms are helpful when thinking about what ChatGPT can do.

      ChatGPT and other similar programs are tools, and they are not directly analogous to the human brain (duh). That does n

      • Re: (Score:3, Funny)

        by phantomfive ( 622387 )
        ChatGPT is basically like this guy [xkcd.com], with more knowledge and less intelligence. People who are impressed with the output are mainly those in the fourth panel.
  • by Anonymous Coward
    We don't sit around and debate the safety of screwdrivers, and yet there are plenty of dangerous ways to use them. We don't sit around and debate the safety of pressure washers, and yet there are plenty of dangerous ways to use them.

    LLMs are a tool. They're good for some things, they're not great at others. Slap a disclaimer on them if we must, and then get on with life. All of this debate about their "safety" seems like attention whores vying for screen time by glomming onto the Luddite gripe about
    • >> We don't sit around and debate the safety of screwdrivers

      Yes, we absolutely do, to exhaustive levels of detail. There are safety standards for nearly all things... including hand tools. Power tools? Even more.

  • by Hugedatabase ( 10329211 ) on Friday April 14, 2023 @11:41AM (#63449482)
    When a guy who doomsday-preps and has millions and millions invested in end-of-the-world bunkers says "security," it translates to "how do I keep a lock tightly around the resources I've plundered." When a guy who claims AGI will be responsible for bringing an incomprehensible amount of prosperity to the masses chooses to lock the masses out from it, what should I think? The really cool thing about math, information, and data is that they're bigger than one monopolistic wart of a person. Locks can be unlocked, broken, and bypassed. Invest in humanity and you won't need a lock, brother.
    • Re: (Score:2, Interesting)

      by gweihir ( 88907 )

      Actually, it looks very much like OpenAI has plundered a lot of data it did not have permission to use as it did, including their machine reproducing copyrighted material and personal data in what is very likely illegal behavior.

      Also, OpenAI and its CEO have repeatedly stated that they are not creating AGI and that people expecting that will be disappointed.

      • by Hodr ( 219920 ) on Friday April 14, 2023 @12:06PM (#63449526) Homepage

        Just going to point out that "old" slashdot would have ripped you a new one for suggesting fair use of published works is plundering.

        Where are my Lessig fans? Only a decade and we forget Swartz?

        • Copyright Shmopyright, however I can assure you "old" Slashdot would not have been a fan of the unauthorized use of people's personal data.

          • by gweihir ( 88907 )

            Indeed. Well, and it _is_ illegal in the EU. Any commercial storage, processing, or other use of personal data needs explicit informed consent. The only exceptions are when that data storage and/or processing is required by law, such as when you buy something online.

        • by gweihir ( 88907 )

          Using data as training data for an ANN that is then used to make money is very likely not covered under the current definition of "fair use".

      • Actually, it looks very much like OpenAI has plundered a lot of data it did not have permission to use as it did, including their machine reproducing copyrighted material and personal data in what is very likely illegal behavior.

        The very concept is anathema to academia. You are arguing against learning itself.

        • by gweihir ( 88907 )

          I am doing no such thing. OpenAI is not "academia". It is a for-profit company. The "Open" part of the name is a lie by misdirection.

      • by linuxguy ( 98493 )

        "Actually, it looks very much like OpenAI has plundered a lot of data it did not have permission to use as it did"

        Ever read a book? Whoever wrote it, plundered a lot of data from other books before it. Heck, as a software developer, I have plundered a lot of data from other books, articles, websites etc. I charge other people money for putting that plundered knowledge to use.

  • FFS, stop that. (Score:5, Interesting)

    by Petersko ( 564140 ) on Friday April 14, 2023 @12:58PM (#63449672)

    If you describe a link as "an open letter", don't link to another slashdot article that links to an article in Bloomberg that describes the letter. Link to the goddamned letter. Fuck you and your click-baity breadcrumbs.

  • In layman's terms, regardless of how "powerful" you make the language model, the GPT language model is always effectively a parrot.

    That is to say, it reproduces conversational language that it has encountered previously. However, the way it does it is such that the words (or more accurately the "tokens", but in layman's terms you can think of them as just the words) are pseudorandomly generated in sequence. The reason the output doesn't simply look like gibberish is because it skews the random output in favor of words that are most likely to follow the words it has seen so far. Because the context window that it uses is quite large, this ends up resembling original human speech. This emergent property of the GPT model is actually what is so remarkable, and it's quite fascinating how increasing the complexity of the model, without changing any of its algorithm, increases how effectively it can appear to mimic actual human conversation.

    However, at the end of the day it is not original human speech. While the output is biased in favor of what is statistically likely to follow the words it has seen or generated so far, the words are all ultimately still randomly generated. The fact that the model can often produce correct answers about something is a testament to the fact that so much of what we understand in natural language actually comes more from the context in which the language is being used than from the language itself.
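
    The sampling the parent describes can be sketched in a few lines of Python. This is a hypothetical toy bigram model, not how GPT actually works internally (GPT uses a learned neural network over a long context window, not raw co-occurrence counts), but it shows the same mechanism: pseudorandom word choice skewed toward statistically likely successors.

    ```python
    # Toy sketch of biased next-token sampling (hypothetical bigram model,
    # NOT the real GPT architecture; illustrates the principle only).
    import random
    from collections import defaultdict

    def train_bigram(corpus):
        """Count which word follows which; these counts will skew sampling."""
        counts = defaultdict(lambda: defaultdict(int))
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=5, seed=0):
        """Pseudorandomly emit words, weighted toward likely successors."""
        rng = random.Random(seed)
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break  # no observed successor: stop generating
            choices, weights = zip(*followers.items())
            out.append(rng.choices(choices, weights=weights)[0])
        return " ".join(out)

    model = train_bigram("the cat sat on the mat the cat ran")
    print(generate(model, "the"))
    ```

    With a one-word context the output is near-gibberish; the parent's point is that GPT's enormous context window is what makes the same random-but-weighted process resemble coherent speech.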

    The only "danger" that exists from making models like these more powerful is that people who don't understand how they work might mistake sufficiently convincing output for signs of actual intelligence, and act as if a human being had said the same thing.

    All stopping now does is needlessly give up their momentum.

  • They're not training because it still costs too much to train 17.5T parameters. They're pausing for the price to come down.
  • Layers of training exist in v3, 3.5 and v4. GPT5 can score parsing for safety, authentication and verification reliability scores. That’s architecture toward a structured throughput in A.I.

  • Either he's lying: they're building out the hardware they will be training GPT-5 on, and "technically" aren't training GPT-5 yet because they're still building the infrastructure, but will start the second the hardware is tested and stable. Or he's an idiot, pivoting from a tech company to a patent-troll "buy all the lawyers" strategy and lobbying Congress to protect his baby.
