OpenAI's CEO Says Company Isn't Training GPT-5 and 'Won't For Some Time' (theverge.com)
In a discussion about threats posed by AI systems, Sam Altman, OpenAI's CEO and co-founder, has confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, released this March. From a report: Speaking at an event at MIT, Altman was asked about a recent open letter circulated among the tech world that requested that labs like OpenAI pause development of AI systems "more powerful than GPT-4." The letter highlighted concerns about the safety of future systems but has been criticized by many in the industry, including a number of signatories. Experts disagree about the nature of the threat posed by AI (is it existential or more mundane?) as well as how the industry might go about "pausing" development in the first place.
At MIT, Altman said the letter was "missing most technical nuance about where we need the pause" and noted that an earlier version claimed that OpenAI is currently training GPT-5. "We are not and won't for some time," said Altman. "So in that sense it was sort of silly." However, just because OpenAI is not working on GPT-5 doesn't mean it's not expanding the capabilities of GPT-4 -- or, as Altman was keen to stress, considering the safety implications of such work. "We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter," he said.
Marketing (Score:5, Insightful)
Re: (Score:2)
The current generation can pass law school exams but can't comprehend a 10,000 word paper at once.
The interesting question to me is not what "comprehend" means. When we start talking about intelligence, the conversation almost invariably starts veering towards terms like "think" or "comprehend" or "knows" or even "wants," and we start getting into almost mystical ideas about what intelligence is. At present, none of those terms are helpful when thinking about what chatGPT can do.
chatGPT and other similar programs are tools, and they are not directly analogous to the human brain (duh). That does n
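To put rough numbers on the 10,000-word claim quoted above, here is a back-of-the-envelope sketch in Python. The window size and the tokens-per-word ratio are illustrative assumptions (real tokenizers, and GPT-4's actual limits, differ), not published specs.

# Assumed figures, for illustration only.
CONTEXT_WINDOW = 4096    # hypothetical model limit, in tokens
TOKENS_PER_WORD = 1.33   # rule-of-thumb ratio for English text

def fits_in_context(word_count, window=CONTEXT_WINDOW):
    # Estimate the token count from the word count and compare
    # it against the model's context window.
    estimated_tokens = int(word_count * TOKENS_PER_WORD)
    return estimated_tokens <= window, estimated_tokens

ok, tokens = fits_in_context(10_000)
print(f"10,000 words ~ {tokens} tokens; fits in window: {ok}")
# Prints: 10,000 words ~ 13300 tokens; fits in window: False
# i.e. the paper has to be chunked or summarized before the model
# can "read" all of it in one pass.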
Re:Most AI funding comes from the Pentagon (Score:4, Insightful)
Once you have a fully autonomous military you can be reasonably sure the Pentagon will stop developing AI.
Exhausting Red Herring (Score:1)
LLMs are a tool. They're good for some things, they're not great at others. Slap a disclaimer on them if we must, and then get on with life. All of this debate about their "safety" seems like attention whores vying for screen time by glomming onto the Luddite gripe about
Re: (Score:2)
I believe you turned a genuine concern over AI safety into a jobs debate. Perhaps this is why AI is replacing low-skill workers.
Re: (Score:1)
>> We don't sit around and debate the safety of screwdrivers
Yes, we absolutely do, to exhaustive levels of detail. There are safety standards for nearly all things... including hand tools. Power tools? Even more.
Monopoly Men United (Score:3, Interesting)
Re: (Score:2, Interesting)
Actually, it looks very much like OpenAI has plundered a lot of data it did not have permission to use the way it did, including its models reproducing copyrighted material and personal data, in what is very likely illegal behavior.
Also, OpenAI and its CEO have repeatedly stated that they are not creating AGI and that people expecting that will be disappointed.
Re:Monopoly Men United (Score:5, Interesting)
Just going to point out that "old" slashdot would have ripped you a new one for suggesting fair use of published works is plundering.
Where are my Lessig fans? Only a decade on and we've already forgotten Swartz?
Re: (Score:2)
Copyright shmopyright; however, I can assure you "old" Slashdot would not have been a fan of the unauthorized use of people's personal data.
Re: (Score:2)
Indeed. Well, and it _is_ illegal in the EU. Any commercial storage, processing, or other use of personal data needs explicit informed consent. The only exceptions are when that data storage and/or processing is required by law, such as when you buy something online.
Re: (Score:1)
Using data as training data for an ANN that is then used to make money is very likely not covered under the current definition of "fair use".
Re: (Score:3)
Actually, it looks very much like OpenAI has plundered a lot of data it did not have permission to use the way it did, including its models reproducing copyrighted material and personal data, in what is very likely illegal behavior.
The very concept is anathema to academia. You are arguing against learning itself.
Re: (Score:2)
I am doing no such thing. OpenAI is not "academia". It is a for-profit company. The "Open" part of the name is a lie by misdirection.
Re: (Score:3)
"Actually, it looks very much like OpenAI has plundered a lot of data it did not have permission to use as it did"
Ever read a book? Whoever wrote it, plundered a lot of data from other books before it. Heck, as a software developer, I have plundered a lot of data from other books, articles, websites etc. I charge other people money for putting that plundered knowledge to use.
Re: (Score:2)
Stupid comparison is stupid. Really, could you get any more insightless and still vaguely stay on topic?
FFS, stop that. (Score:5, Interesting)
If you describe a link as "an open letter", don't link to another slashdot article that links to an article in Bloomberg that describes the letter. Link to the goddamned letter. Fuck you and your click-baity breadcrumbs.
Oh for fuck's sake protesters, get an education! (Score:3, Insightful)
In layman's terms: regardless of how "powerful" you make it, the GPT language model is always effectively a parrot.
That is to say, it reproduces conversational language that it has encountered previously. However, the way it does this is that the words (or more accurately the "tokens," but in layman's terms you can think of them as just the words) are pseudorandomly generated in sequence. The reason the output doesn't simply look like gibberish is that it skews the random output in favor of words that are most likely to follow the words it has seen so far. Because the context window it uses is quite large, this ends up resembling original human speech. This emergent property of the GPT model is what is actually so remarkable, and it's quite fascinating how increasing the complexity of the model, without changing its algorithm at all, increases how effectively it can appear to mimic actual human conversational language.
However, at the end of the day, it is not original human speech. While the output is biased in favor of what is statistically likely to follow the words it has seen or generated so far, the words are all ultimately still randomly generated. The fact that the model can often produce correct answers about something is a testament to how much of what we understand in natural language actually comes from the context in which the language is used rather than from the language itself.
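To make that pseudorandom-but-skewed loop concrete, here is a minimal sketch in Python. The tiny score table and the names (next_token_logits, VOCAB, sample) are illustrative inventions standing in for the real network, which scores every token in its vocabulary given the whole context window; this is a toy under those assumptions, not OpenAI's actual code.

import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context):
    # Hypothetical stand-in: a real model computes these scores with a
    # neural network conditioned on the entire context window, not just
    # the previous token.
    table = {
        "the": {"cat": 2.0, "mat": 1.5},
        "cat": {"sat": 2.5},
        "sat": {"on": 2.5},
        "on": {"the": 2.0},
        "mat": {".": 3.0},
    }
    scores = table.get(context[-1], {})
    return [scores.get(tok, -4.0) for tok in VOCAB]

def sample(logits, temperature=0.8):
    # Softmax turns raw scores into a probability distribution; drawing
    # from it is the pseudorandom-but-skewed step described above.
    weights = [math.exp(score / temperature) for score in logits]
    return random.choices(VOCAB, weights=weights)[0]

context = ["the"]
while context[-1] != "." and len(context) < 10:
    context.append(sample(next_token_logits(context)))
print(" ".join(context))

Each pass through the loop picks one token at random, weighted by how plausible it is as a continuation; lowering the temperature makes the output more deterministic, raising it makes it more gibberish-like, which is exactly the bias-versus-randomness trade-off described above.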
The only "danger" that exists from making models like these more powerful is that people who don't understand how they work might mistake sufficiently convincing output for signs of actual intelligence, and act as if a human being had said the same thing.
All stopping now does is needlessly give up their momentum.
GPT-5 parser (Score:2)
Training layers already exist in v3, v3.5, and v4. GPT-5 could score parsed output for safety, authentication, and verification reliability. That's an architecture moving toward structured throughput in AI.