AI

OpenAI Still Not Training GPT-5, Says Sam Altman (techcrunch.com)

OpenAI is still not training GPT-5, months after the Microsoft-backed startup pledged not to work on the successor to GPT-4 "for some time" after many industry executives and academics expressed concerns about the fast rate of advancement of Sam Altman's large language models. From a report: "We have a lot of work to do before we start that model," Altman, the chief executive of OpenAI, said at a conference hosted by the Indian newspaper Economic Times. "We're working on the new ideas that we think we need for it, but we are certainly not close to it to start."
  • by Junta ( 36770 ) on Wednesday June 07, 2023 @10:07AM (#63583178)

    Basically, we are pretty much past the point of diminishing returns with respect to current 'AI' methods, so further big advancement is mostly stalled, waiting on a new approach.

    Different stewards keeping their trained models private means that some people are further from the state of the art than others, but the best of breed as seen today is generally "as good as it gets" for now. Feeding more data into what we have today isn't making it appreciably better, just more impractical.

    In their view, the industry is now about packaging and integrating what has already been achieved, not about expecting to do fundamentally better than what we can demo today, at least until someone comes up with a categorically distinct approach.

    • by DavenH ( 1065780 ) on Wednesday June 07, 2023 @11:25AM (#63583364)

      Basically, we are pretty much past the point of diminishing returns with respect to current 'AI' methods, so further big advancement is mostly stalled, waiting on a new approach.

      There are miles to go with the current paradigm. Look up the OpenAI scaling-laws paper: there is roughly 10^5 more headroom in compute before the transformer architecture reaches its assessed modeling limit, the intrinsic entropy of language is unknown, and the performance curves haven't plateaued at all along data scale, compute, or model size, so your claim is simply false.
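      For reference, the fits in that paper are simple power laws. Here is a minimal sketch of the parameter-count law, with constants quoted from memory and meant only as illustration, not authoritative numbers:

          # Kaplan-et-al.-style scaling law: predicted test loss vs. model size,
          # L(N) = (N_c / N) ** alpha_N, loss in nats per token.
          ALPHA_N = 0.076   # fitted exponent for non-embedding parameters (approximate)
          N_C = 8.8e13      # fitted constant, in parameters (approximate)

          def predicted_loss(n_params: float) -> float:
              """Predicted cross-entropy loss for a model with n_params parameters."""
              return (N_C / n_params) ** ALPHA_N

          for n in (1e9, 1e11, 1e13):   # 1B, 100B, 10T parameters
              print(f"{n:.0e} params -> {predicted_loss(n):.2f} nats/token")

      The fitted curve keeps dropping smoothly for several more orders of magnitude of scale before the extrapolation stops being meaningful, which is where the rough 10^5 headroom figure comes from.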

      • Re: (Score:3, Insightful)

        by Junta ( 36770 )

        I have two thoughts on this:

        -I think we should be wary of OpenAI research. This is a for-profit business with a conflict of interest to the tune of 30 billion dollars riding on the desired conclusion, so they have a massive incentive to put out data that 'scientifically' shows they have a long, straightforward roadmap to ever-increasing heights. This is a company that has made its actionable intellectual property proprietary and confidential, so whatever research material it does release is pretty much directed by marketing.

        -Th

    • by MtHuurne ( 602934 ) on Wednesday June 07, 2023 @11:53AM (#63583456)

      I'm following llama.cpp [github.com] development and significant improvements are made there on a weekly basis. Not fundamental changes: it's mostly increased efficiency. But with inference becoming accessible to people without deep pockets, there are now many more people contributing to machine learning.
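      As a sense of how accessible inference has become, here is a minimal sketch assuming the llama-cpp-python bindings; the model path and prompt are hypothetical placeholders:

          # Run a 4-bit quantized LLaMA-family model locally via llama.cpp bindings.
          from llama_cpp import Llama

          llm = Llama(model_path="./models/7b-q4_0.bin", n_ctx=2048)  # quantized model file
          out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
          print(out["choices"][0]["text"])

      A 7B model quantized to 4 bits fits in a few gigabytes of RAM, which is why people without deep pockets can now iterate on this weekly.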

      On the training side there are also efficiency improvements, and public datasets are getting better. Additionally, fine-tuning an existing model is much cheaper than training one from scratch and can improve the output quite a bit. It seems that the limits of the architecture haven't been reached yet.
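      The fine-tuning point is easy to see with parameter-efficient methods like LoRA. A minimal sketch using the Hugging Face peft library, where the base model name is just an example:

          # Wrap a base causal LM with LoRA adapters; only the adapters get trained.
          from transformers import AutoModelForCausalLM
          from peft import LoraConfig, get_peft_model

          base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
          lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                            target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
          model = get_peft_model(base, lora)
          model.print_trainable_parameters()  # typically well under 1% of the weights

      Training a fraction of a percent of the weights is what makes tuning on a single GPU practical, versus the cluster-months needed to pretrain from scratch.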

      While I don't expect we'll see actual intelligence very soon, we are getting ever more useful and accessible language models, at a pretty incredible rate.

  • by ranton ( 36917 ) on Wednesday June 07, 2023 @10:53AM (#63583288)

    We're working on the new ideas that we think we need [to start training the GPT-5 model], but we are certainly not close to it to start.

    This basically just means they are doing research to determine how training will be done for GPT-5, but aren't ready to begin the training itself. So, truthfully, they are working on GPT-5; they just aren't training the model yet. It is a distinction without much of a difference. The only real insight is that we shouldn't expect GPT-5 in the next few months, but they are absolutely still working on it right now.

    • They are also continuing to refine GPT-4. It's not actually a fixed thing with "4" being a definite version number, so the distinction is somewhat arbitrary.
    • An additional message is, "Buy our product now and integrate with it, because we're not announcing anything that will make it worth waiting."

  • ... Is training Sam Altman?
  • There is no news.

    -Everything is going as planned, but sure, why not clickbait the nothingburger.
  • He is basically saying that there is still money to be made at the current model level and that they have not yet explored all that you can do with it. People and companies are using the data to make their own fine-tuned models with this LLM as a base.
    Also, they are exploring the limits of how to use the AI in terms of what people like or dislike, accept, or are scared of. They are trying to answer bigger societal-level questions before going on.

