OpenAI Drops Prices and Fixes 'Lazy' GPT-4 That Refused To Work (techcrunch.com) 31
OpenAI is always making slight adjustments to its models and pricing, and today brings just such an occasion. From a report: The company has released a handful of new models and dropped the price of API access -- this is primarily of interest to developers, but also serves as a bellwether for future consumer options. GPT-3.5 Turbo is the model most people interact with, usually through ChatGPT, and it serves as a kind of industry standard now -- if your answers aren't as good as ChatGPT's, why bother? It's also a popular API, being lower cost and faster than GPT-4 on a lot of tasks. So paying users will be pleased to hear that input prices are dropping by 50% and output by 25%, to $0.0005 per thousand tokens in, and $0.0015 per thousand tokens out.
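For a rough sense of what those per-token rates mean in practice, here is a quick back-of-the-envelope estimate in Python. The token counts are made-up figures for illustration, and the pre-cut prices are simply back-calculated from the 50% and 25% reductions quoted above.

# Rough cost of one GPT-3.5 Turbo API call at the new rates.
# Token counts are illustrative, not from any real request.
NEW_INPUT_PER_1K = 0.0005   # USD per 1,000 input tokens
NEW_OUTPUT_PER_1K = 0.0015  # USD per 1,000 output tokens

# Previous rates, derived from the stated 50% and 25% cuts.
OLD_INPUT_PER_1K = NEW_INPUT_PER_1K / 0.5     # 0.0010
OLD_OUTPUT_PER_1K = NEW_OUTPUT_PER_1K / 0.75  # 0.0020

def call_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in USD for one request, given token counts and per-1K rates."""
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

# Example: summarizing a long paper -- say 50,000 tokens in, 1,000 tokens out.
new = call_cost(50_000, 1_000, NEW_INPUT_PER_1K, NEW_OUTPUT_PER_1K)
old = call_cost(50_000, 1_000, OLD_INPUT_PER_1K, OLD_OUTPUT_PER_1K)
print(f"new: ${new:.4f}  old: ${old:.4f}")  # new: $0.0265  old: $0.0520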
As people play with using these APIs for text-intensive applications, like analyzing entire papers or books, those tokens really start to add up. And as open source or self-managed models catch up to OpenAI's performance, the company needs to make sure its customers don't just leave. Hence the steady ratcheting down of prices -- though it's also a natural result of streamlining the models and improving their infrastructure.
The direction is quite clear. (Score:3, Insightful)
At some point not very far in the future, the "prices" will become equal to their real value of zero, and then the hosting costs will sink it. And there'll be a lot of whining from the "investors".
Re: (Score:1)
And modding me "troll" because you disagree won't change this even a little :)
Re: (Score:3)
Even then, IMHO, the
Re:The direction is quite clear. (Score:5, Insightful)
You were modded troll because your post is - sorry, there's no nice way to say this - your post is dumb. AI has zero value? Clearly, you have not used it, nor have you listened to any of the people who do so on a daily basis.
- Writing something, and want it proofread? This is the next step up from spelling and grammar checkers.
- Writing something, but that blank sheet is intimidating? Ask for some suggestions, to get yourself started.
- Actually, the suggestions are sometimes good enough to use directly. I've used ChatGPT to generate a marketing blurb that was as good as anything I would expect from a professional.
- Coding, everyone talks about coding. The code sucks, no question. However, if you want to know how to use a particular API function, ChatGPT is far better than browsing API documentation or wading through non-answers on StackOverflow. It is a huge timesaver. (A sketch of that kind of API call follows this list.)
- Illustrations - have you not noticed how many illustrations on articles have been generated by DALL-E & Co? They are everywhere. Professional illustrators may be sad puppies, but there is clear value to be had.
- Etc, etc, etc
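For the curious, here is a minimal sketch of asking that kind of API-usage question through the endpoint the article is pricing, using the current openai Python client; the model choice and the question string are just examples.

# Minimal chat completion request against GPT-3.5 Turbo (openai >= 1.0 client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Show a minimal example of Python's re.finditer."}],
)

print(resp.choices[0].message.content)
print(resp.usage)  # prompt_tokens / completion_tokens are what the per-1K pricing bills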
Re:The direction is quite clear. (Score:4, Insightful)
Re: (Score:2, Insightful)
Actually, the suggestions are sometimes good enough to use directly.
The wife uses ChatGPT to write her maths tests. Sure, the numbers are all garbage and the answers are not relevant, but it's great for spitting out text without having to think.
Here's an example: "Write me a math question using Pythagoras"
ChatGPT: "A building casts a shadow that is 40 meters long when the sun is at a particular angle. If the angle of elevation from the tip of the shadow to the top of the building is 30 degrees, what is the height of the building?"
Perfectly good math problem. ChatGPT fucked up
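For what it's worth, the generated question does work out, though it calls for the tangent ratio rather than the Pythagorean theorem the prompt asked for. A quick check in Python:

import math

# Building height from the generated problem:
# shadow length 40 m, angle of elevation 30 degrees.
height = 40 * math.tan(math.radians(30))
print(round(height, 1))  # 23.1 meters, i.e. 40 / sqrt(3)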
Re: (Score:1)
I'd pity the students whose "teacher" was so lazy.
Re: (Score:2)
I'd pity the students whose "teacher" was so lazy.
What's lazy about this? Are you studying math or creative writing? Do you waste your time at work on meaningless and pointless tasks that serve no purpose? 99% of students end up with recycled questions with the numbers changed anyway; using ChatGPT is going above and beyond.
Re: (Score:1)
At some point not very far in the future, the "prices" will become equal to their real value of zero, and then the hosting costs will sink it. And there'll be a lot of whining from the "investors".
AI haters, what a fascinating tribe. According to them, AI at the same time:
- will cause total societal upheaval, replace millions of people, leaving them jobless and destitute, and is an existential threat to humanity
- is completely worthless, incapable of anything worthwhile, useless, valueless and so on
Yes, at the same time.
Re: (Score:2)
I think your reading comprehension is a bit lacking.
Nothing in my comments (or, for that matter, my few other comments on the subject) anticipates any of "total societal upheaval, replace millions of people, leaving them jobless and destitute, and is an existential threat to humanity" as a result of the so-called "AI".
Quite the opposite - we'll see the degenerative "AI" find its few niches and the hype end.
And that's all, blablablabi.
Re: (Score:2)
AI evangelists, what a fascinating tribe. According to them, AI at the same time:
- Is a ready-to-deploy, useful tool that can do the work of millions of content creators, software developers, and customer service people.
- Is a black box, and we can't understand why it does some of the things it does.
Sorry, I had to do it. The reality is that the truth lies somewhere in the middle. AI is NOT going to alter the marketplace in the ways OpenAI says it will.
It also will absolutely be leveraged as a tool that makes people mor
Re:The direction is quite clear. (Score:4, Informative)
I certainly value AI, mostly in the form of GitHub Copilot. I pay actual money for it, $10 a month, and I think I'm getting a good deal.
Apparently, I'm not alone. GitHub Copilot already has a million paying subscribers. https://www.zdnet.com/article/... [zdnet.com]
Re: (Score:2)
Yes, indeed. It will take a while before all the clueless and the "leaders" who think the hype can replace vision and skill wake up. At the moment, the AI scammers manage to keep the hype going, but as soon as the artificial morons fail at more and more tasks, the mood will shift. As it has several times before in previous AI hypes.
Re: (Score:2)
will become equal to their real value of zero
No, it won't. Firstly, the real value is not zero (you can see that in the fact that many companies are already using AI to generate content for things they actually sell), and secondly, the resources required to run the code are not zero (they're actually far from zero).
I get it, it's cool to hate on AI. You are more than welcome to keep doing so while other people make money using it.
Re: (Score:2)
Yes, it will.
Value is completely orthogonal to making money.
Who told you I'm not making money from "AI"?
Forced work (Score:2)
no moat (Score:1)
Day by day, more tasks can be solved by smaller, private, and open models. OpenAI is feeling the heat; their island of supremacy is shrinking fast, and they only hold the peaks now. That's why they have no moat.
Eventually It Will Be Free (Score:1)
And you won't have a choice if you want to use it.
$0.0015 per thousand tokens out? (Score:2)
If they're charging for tokens generated by the LLM, that creates a huge incentive for them to make its output significantly wordier even for simple prompts.
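To put a rough number on that incentive, here is a purely illustrative comparison at the quoted output rate; the token counts are invented for the sake of the example.

# Illustrative only: what a padded reply earns vs. a concise one
# at $0.0015 per 1,000 output tokens.
OUTPUT_PER_1K = 0.0015  # USD, the output rate quoted in the summary

concise_tokens = 150  # a to-the-point answer (made-up figure)
padded_tokens = 600   # the same answer restated with boilerplate (made-up figure)

concise_cost = concise_tokens / 1000 * OUTPUT_PER_1K
padded_cost = padded_tokens / 1000 * OUTPUT_PER_1K
print(f"${concise_cost:.6f} vs ${padded_cost:.6f} per reply")  # $0.000225 vs $0.000900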
'Fixed' the lazy AI? (Score:5, Funny)
The recompiling will continue until morale improves.
You almost had us going (Score:1)
Re: (Score:2)
ChatGPT prompt: (Score:1)
Value (Score:2)
The value and the bar that ChatGPT presents are very low, considering you can run any number of models on a local machine with very little work - not quite straightforward enough for normal users, but definitely within grasp for anyone who tinkers with technology.
Considering the marked improvement of responses you get, it's worth the effort.
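As a sketch of the "very little work" involved, assuming you go the Hugging Face transformers route: the model name below is just an example, and any instruction-tuned open model that fits your hardware would do.

# Minimal local text generation with Hugging Face transformers.
# Model name is an example; swap in any open model your hardware can hold.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)

result = generator(
    "Explain, in one paragraph, what an API token is.",
    max_new_tokens=200,
    do_sample=True,
)
print(result[0]["generated_text"])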
Re: (Score:2)
It's not the running of the model that has value, it's the training. Your local model won't be able to answer a host of detailed questions that rely on data from millions of contributions from trainers. OpenAI has spent literally billions just on training.
Re: (Score:2)
And yet, it gives me PC bullshit on the vast majority of many types of requests.
Re: (Score:2)
I'm not sure what that proves.
Can your local model provide answers about Hog4PC lighting consoles, or Midas sound consoles? ChatGPT can.
Can your model suggest ways to safely remove a large mirror that is glued to a wall? ChatGPT can.
Can your model scour Stack Overflow and provide coding solutions submitted by that community? ChatGPT can.
Can your model tell me how to replace a broken door handle of a 2013 Toyota Camry? ChatGPT can.
So let's be realistic. ChatGPT, MS Copilot, Anthropic have paid a lot of money