
OpenAI's o1-pro Is the Company's Most Expensive AI Model Yet (techcrunch.com)
OpenAI has launched a more powerful version of its o1 "reasoning" AI model, o1-pro, in its developer API. From a report: According to OpenAI, o1-pro uses more compute than o1 to provide "consistently better responses." Currently, it's only available to select developers -- those who've spent at least $5 on OpenAI API services -- and it's pricey. Very pricey. OpenAI is charging $150 per million tokens (~750,000 words) fed into the model and $600 per million tokens generated by the model. That's twice the price of OpenAI's GPT-4.5 for input and 10x the price of regular o1.
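A quick back-of-the-envelope sketch of what those rates mean in practice. The per-token rates come from the summary above; the token counts in the example are hypothetical, just to show the arithmetic:

```python
# Cost estimate at the o1-pro API rates quoted above:
# $150 per million input tokens, $600 per million output tokens.
INPUT_RATE = 150.00 / 1_000_000   # USD per input token
OUTPUT_RATE = 600.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single o1-pro API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10k-token prompt with a 5k-token response
print(f"${request_cost(10_000, 5_000):.2f}")  # → $4.50
```

At these rates, even a modest prompt-and-response pair costs several dollars, which is the point the summary is making.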
Feels like artificial scarcity (Score:1)
Re: (Score:3)
Re: (Score:2)
Thanks for the interesting comments.
I was not familiar with CoreWeave -- so thanks for introducing it.
I looked it up and found this article from 2 days ago:
CoreWeave Is A Time Bomb
Edward Zitron
Mar 17, 2025
https://www.wheresyoured.at/co... [wheresyoured.at]
It is a long and detailed read, but I found it a very worthwhile use of 20 minutes. It is an insightful look at the current state of affairs of AI and the AI industry with a generally pessimistic view of the commercial viability of AI in the long run.
If anyone has a few min
Re: (Score:3)
I actually think he's got a bit of an ax to grind.
He reads Microsoft's cancellation of a roughly 14% datacenter expansion as MS losing faith in OpenAI as a business, when it's far more likely that they've simply seen a seismic shift in compute cost per unit of LLM performance. DeepSeek-style models require a tenth or less of the power of OpenAI's flagships.
The idea that OpenAI isn't working on matching that efficiency right now is eyeroll-inducing.
Re: (Score:2)
Interesting. You could be right. Always nice to hear another thoughtful point of view. Thanks.
Re: (Score:2)
They don't get paid per unit of LLM performance; they get paid per unit of compute time (and currently lose money on it). Efficiency is actively bad for them unless it drives up adoption by more than the amount saved, or unless it makes certain workloads amenable to moving off the specialized larger clusters they operate and onto smaller systems (the big GPU clusters are fairly involved HPC InfiniBand builds).
Re: (Score:2)
He's definitely not an industry optimist; but the efficiency thing is arguably part of CoreWeave's problem.
Yes, I followed his logic.
But the same can be said for a power company.
They don't get paid per unit of LLM performance
Well, they do.
The price is set per unit of time on a particular machine with known performance and specifications.
I.e., $42.99 per 1 hour of NGX H100 with 80GB of VRAM.
$50.44 per 1 hour of NGX H200 with 141GB of VRAM, etc.
There's no other way (that I can think of) to price something like that.
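The per-instance-hour model described above can be sketched in a few lines. The hourly rates are the commenter's figures; the helper function and node sizes are illustrative, not any provider's actual API:

```python
# Per-GPU-hour pricing, as described in the comment above.
# Rates are the commenter's figures, not quoted from a price list.
HOURLY_RATES = {
    "H100-80GB": 42.99,   # USD per GPU-hour
    "H200-141GB": 50.44,  # USD per GPU-hour
}

def rental_cost(sku: str, hours: float, gpus: int = 1) -> float:
    """Cost of renting `gpus` GPUs of a given SKU for `hours` hours."""
    return HOURLY_RATES[sku] * hours * gpus

# Example: an 8-GPU H100 node for 24 hours
print(round(rental_cost("H100-80GB", 24, gpus=8), 2))  # → 8254.08
```

Which is the commenter's point: the bill scales with machine-hours consumed, regardless of how much useful model output those hours produce.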
I'm not trying to argue that their business is good, by any means, but the idea that it's a fundamentally flawed model is... a pretty weak argument.
Re: (Score:2)
Why can't they sell stock and supply AI for free?
Re: (Score:2)
If the raging bloodbath that is CoreWeave's S-1 filing (CoreWeave being OpenAI's new pet compute vendor, now that the MS honeymoon has soured a bit) is anything to go by, it might genuinely be that expensive, possibly yet another product sold below cost. I assume that some of their high-hype models are at least experiments in trying to break even; but for a technology that is supposedly unleashing a tsunami of efficiency, there is shockingly little money in 'AI' once expenses are factored in. A good day to be Nvidia; and there's enough wheeling and dealing that anyone who knows how to be a transaction cost or gamble with other people's money has a decent shot; but it's genuinely impressive how far underwater all the nominal leaders are on their glorious new innovation.
High initial prices in a rapidly developing industry are not surprising; they are to be expected. Now is not the time to expect price competition, because features and products are still in rapid development. Only after the models have stabilized will we see price competition. This is the same for every industry, including software and hardware products.
Re: (Score:3)
Pandora's box isn't closing. [nu.edu]
One line in here stuck out to me, because I've witnessed its truth:
Only a third of consumers think they are using AI platforms, while actual usage is 77%.
There's no point fighting to stop AI adoption. It already happened.
Re: (Score:2)
Only a third of consumers think they are using AI platforms, while actual usage is 77%.
You have the weirdest definition of "AI platform usage".
Re: (Score:2)
You have the weirdest definition of "AI platform usage".
Shrug. You're literally interacting with ANNs running on NPUs and GPUs, at the edge and in datacenters, every single day.
Re: (Score:2)
Only if you're one.
Re: (Score:2)
Re: (Score:2)
Denial? Of what? A ridiculous claim made by a random troll on the internets?
Ohnoes!
Re: (Score:2)
On one hand, we've got reality. On the other, we've got your comfort zone.
Right now, those circles don't overlap, and you're trying to maintain the wall around your comfort zone. That's ok. Denial is a fundamentally human condition.
Re: (Score:1)
Re: (Score:2)
How much OpenAI stock do you have to own to pay for the AI?
Desperation? (Score:1)
Re: (Score:2)
I'm no OpenAI fan but there really isn't anything they're not doing in pursuit of AGI, an