OpenAI Drops Prices and Fixes 'Lazy' GPT-4 That Refused To Work (techcrunch.com) 31

OpenAI is always making slight adjustments to its models and pricing, and today brings just such an occasion. From a report: The company has released a handful of new models and dropped the price of API access -- this is primarily of interest to developers, but also serves as a bellwether for future consumer options. GPT-3.5 Turbo is the model most people interact with, usually through ChatGPT, and it serves as a kind of industry standard now -- if your answers aren't as good as ChatGPT's, why bother? It's also a popular API, being lower cost and faster than GPT-4 on a lot of tasks. So paying users will be pleased to hear that input prices are dropping by 50% and output by 25%, to $0.0005 per thousand tokens in, and $0.0015 per thousand tokens out.
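To put the new prices in perspective, here is a back-of-the-envelope sketch of what a request costs at the per-1K-token rates quoted above. The token counts are illustrative assumptions, not figures from the article.

```python
# Hypothetical cost estimate at the quoted GPT-3.5 Turbo API prices:
# $0.0005 per 1,000 input tokens, $0.0015 per 1,000 output tokens.

def api_cost(input_tokens: int, output_tokens: int,
             in_price: float = 0.0005, out_price: float = 0.0015) -> float:
    """Return the dollar cost of one request at per-1K-token prices."""
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# Example: summarizing a long document (~100K tokens in, ~2K tokens out).
print(round(api_cost(100_000, 2_000), 4))  # → 0.053
```

At roughly five cents for an entire book-length input, it is easy to see why the text-intensive applications mentioned below become viable as prices fall.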

As people play with using these APIs for text-intensive applications, like analyzing entire papers or books, those tokens really start to add up. And as open source or self-managed models catch up to OpenAI's performance, the company needs to make sure its customers don't just leave. Hence the steady ratcheting down of prices -- though it's also a natural result of streamlining the models and improving their infrastructure.



  • by Mr. Dollar Ton ( 5495648 ) on Friday January 26, 2024 @12:17AM (#64188752)

    At some point not very far in the future, the "prices" will become equal to their real value of zero, and then the hosting costs will sink it. And there'll be a lot of whining from the "investors".

    • And modding me "troll" because you disagree won't change this even a little :)

      • Independent expert opinions on educational applications of LLMs are a little more optimistic: https://www.forbes.com/sites/u... [forbes.com] They're essentially saying, "Not yet": the models are of too poor quality to be useful, & LLM developers need to find better sources of input corpora to generate their models from. They probably also need to work more closely with subject matter experts & educators to work out how best to train LLMs to be more useful for educational applications.

        Even then, IMHO, the
    • by bradley13 ( 1118935 ) on Friday January 26, 2024 @07:58AM (#64189122) Homepage

      You were modded troll because your post is - sorry, there's no nice way to say this - dumb. AI has zero value? Clearly, you have not used it, nor have you listened to any of the people who do so on a daily basis.

      - Writing something, and want it proofread? This is the next step up from spelling and grammar checkers.

      - Writing something, but that blank sheet is intimidating? Ask for some suggestions, to get yourself started.

      - Actually, the suggestions are sometimes good enough to use directly. I've used ChatGPT to generate a marketing blurb that was as good as anything I would expect from a professional.

      - Coding, everyone talks about coding. The code sucks, no question. However, if you want to know how to use a particular API function, ChatGPT is far better than browsing API documentation or wading through non-answers on StackOverflow. It is a huge timesaver.

      - Illustrations - have you not noticed how many illustrations in articles have been generated by DALL-E & Co? They are everywhere. Professional illustrators may be sad puppies, but there is clear value to be had.

      - Etc, etc, etc

      • Re: (Score:2, Insightful)

        by thegarbz ( 1787294 )

        Actually, the suggestions are sometimes good enough to use directly.

        The wife uses ChatGPT to write her maths tests. Sure, the numbers are all garbage and the answers are not relevant, but it's great for spitting out text without having to think.

        Here's an example: "Write me a math question using pythagoras"
        ChatGPT: "A building casts a shadow that is 40 meters long when the sun is at a particular angle. If the angle of elevation from the tip of the shadow to the top of the building is 30 degrees, what is the height of the building?"

        Perfectly good math problem. ChatGPT fucked up
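For what it's worth, the quoted problem is solved with the tangent ratio rather than the Pythagorean theorem, which appears to be the point: ChatGPT produced a trigonometry question when asked for Pythagoras. A quick check of the actual answer:

```python
import math

# The quoted problem: a 40 m shadow, angle of elevation 30 degrees.
# height = shadow * tan(angle) -- tangent, not Pythagoras.
height = 40 * math.tan(math.radians(30))
print(round(height, 2))  # → 23.09
```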

        • by Anonymous Coward

          I'd pity the students whose "teacher" was so lazy.

          • I'd pity the students whose "teacher" was so lazy.

            What's lazy about this? Are you studying math or creative writing? Do you waste your time at work on meaningless and pointless tasks that serve no purpose? 99% of students end up with recycled questions with the numbers changed anyway; using ChatGPT is going above and beyond.

    • At some point not very far in the future, the "prices" will become equal to their real value of zero, and then the hosting costs will sink it. And there'll be a lot of whining from the "investors".

      AI haters, what a fascinating tribe. According to them, AI at the same time:

      - will cause total societal upheaval, replace millions of people leaving them jobless and destitute, and is an existential threat to humanity
      - is completely worthless, incapable of anything worthwhile, useless, valueless and so on

      Yes, at the same time.

      • I think your reading comprehension is a bit lacking.

        Nothing in my comments (or, for that matter, my few other comments on the subject) anticipates any of "total societal upheaval, replace millions of people leaving them jobless and destitute and is an existential threat to humanity" as a result of the so-called "AI".

        Quite the opposite - we'll see the degenerative "AI" find its few niches and the hype end.

        And that's all, blablablabi.

      • by DarkOx ( 621550 )

        AI evangelists, what a fascinating tribe. According to them, AI at the same time:

        - Is a ready-to-deploy, useful tool that can do the work of millions of content creators, software developers, and customer service people.
        - Is a black box, and we can't understand why it does some of the things it does.

        Sorry, I had to do it. In reality, the truth lies somewhere in the middle. AI is NOT going to alter the marketplace in the ways OpenAI says it will.

        It also will absolutely be leveraged as a tool that makes people mor

    • by Tony Isaac ( 1301187 ) on Friday January 26, 2024 @04:42PM (#64190458) Homepage

      I certainly value AI, mostly in the form of GitHub Copilot. I pay actual money for it, $10 a month, and I think I'm getting a good deal.

      Apparently, I'm not alone. GitHub Copilot already has a million paying subscribers. https://www.zdnet.com/article/... [zdnet.com]

    • by gweihir ( 88907 )

      Yes, indeed. It will take a while before all the clueless and the "leaders" who think the hype can replace vision and skill wake up. At the moment, the AI scammers manage to keep the hype going, but as soon as the artificial morons fail at more and more tasks, the mood will shift. As it has several times before in previous AI hypes.

    • will become equal to their real value of zero

      No it won't. Firstly, the real value is not zero (you can see that in the fact that many companies are already using AI to generate content for actual things they sell), and secondly, the resources required to run the code are not zero (they're actually far from it).

      I get it, it's cool to hate on AI. You are more than welcome to keep doing so, while other people make money using it.

  • That's sentient slavery
  • > the company needs to make sure its customers don't just leave. Hence the steady ratcheting down of prices

    Day by day, more tasks can be solved by smaller, private, and open models. OpenAI is feeling the heat; their island of supremacy is shrinking fast, and they only hold the peaks now. That's why they have no moat.
  • And you won't have a choice if you want to use it.

  • If they're charging for tokens generated by the LLM, that creates a huge incentive for them to make its output significantly wordier even for simple prompts.
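The incentive can be made concrete with the output price quoted in the summary; the 50- and 500-token answer lengths below are illustrative assumptions, not anything from the article.

```python
# At $0.0015 per 1,000 output tokens, a padded 500-token answer
# bills ten times what a terse 50-token answer to the same prompt would.
OUT_PRICE = 0.0015  # dollars per 1,000 output tokens (price quoted above)

def output_cost(tokens: int) -> float:
    """Dollar cost of generating `tokens` output tokens."""
    return tokens / 1000 * OUT_PRICE

terse, wordy = output_cost(50), output_cost(500)
print(round(wordy / terse, 1))  # → 10.0
```

Whether providers actually act on that incentive is a separate question, but the billing model does reward verbosity.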

  • by lytlebill ( 659903 ) on Friday January 26, 2024 @08:52AM (#64189192)

    The recompiling will continue until morale improves.

  • The monthly price is still $20, to be clear.
    • To be clear, this is for API access. I pay the $20 fee and also the API fees, so this will lower my budget. The $20/mo fee also has limitations: a capped number of GPT-4 questions per time window. The API access here is for GPT-3.5.

  • "Write a funny joke about ChatGPT that I can post on Slashdot."
    "Why did ChatGPT apply for a job at the comedy club? It wanted to improve its 'byte'-sized humor and master the art of punchlines!"
  • The value and bar that ChatGPT presents is very low, considering you can run any number of models on a local machine with very little work - not quite straightforward enough for normal users, but definitely within the grasp of anyone who tinkers with technology.

    Considering the marked improvement in the responses you get, it's worth the effort.

    • It's not the running of the model that has value, it's the training. Your local model won't be able to answer a host of detailed questions that rely on data from millions of contributions from trainers. OpenAI has spent literally billions just on training.

      • by CAIMLAS ( 41445 )

        And yet, it gives me PC bullshit on the vast majority of requests of many types.

        • I'm not sure what that proves.

          Can your local model provide answers about Hog4PC lighting consoles, or Midas sound consoles? ChatGPT can.
          Can your model suggest ways to safely remove a large mirror that is glued to a wall? ChatGPT can.
          Can your model scour Stack Overflow and provide coding solutions submitted by that community? ChatGPT can.
          Can your model tell me how to replace a broken door handle of a 2013 Toyota Camry? ChatGPT can.

          So let's be realistic. ChatGPT, MS Copilot, Anthropic have paid a lot of money

