OpenAI's o1-pro is the Company's Most Expensive AI Model Yet (techcrunch.com)

OpenAI has launched a more powerful version of its o1 "reasoning" AI model, o1-pro, in its developer API. From a report: According to OpenAI, o1-pro uses more computing than o1 to provide "consistently better responses." Currently, it's only available to select developers -- those who've spent at least $5 on OpenAI API services -- and it's pricey. Very pricey. OpenAI is charging $150 per million tokens (~750,000 words) fed into the model and $600 per million tokens generated by the model. That's twice the price of OpenAI's GPT-4.5 for input and 10x the price of regular o1.
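The quoted rates make request costs easy to work out. A minimal back-of-envelope sketch, using only the per-million-token figures from the article (the example request sizes are hypothetical):

```python
# Rates from the article: $150 per 1M input tokens, $600 per 1M output tokens.
O1_PRO_INPUT_PER_M = 150.0
O1_PRO_OUTPUT_PER_M = 600.0

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at o1-pro's published rates."""
    return (input_tokens / 1_000_000) * O1_PRO_INPUT_PER_M \
         + (output_tokens / 1_000_000) * O1_PRO_OUTPUT_PER_M

# A hypothetical 10k-token prompt with a 2k-token answer:
# 1.50 + 1.20 = $2.70 for a single request.
print(o1_pro_cost(10_000, 2_000))
```

At these rates, a million tokens in and a million tokens out -- roughly 750,000 words each way -- runs $750.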

Comments Filter:
    • If the raging bloodbath that is Coreweave's (OpenAI's new pet compute vendor, now that the MS honeymoon has soured a bit) S-1 filing is anything to go by, it might genuinely be that expensive, possibly yet another being sold below cost. I assume that some of their high-hype models are at least experiments in trying to break even; but for a technology that is supposedly unleashing a tsunami of efficiency there is shockingly little money in 'AI', once expenses are factored in. A good day to be Nvidia; and enough wheeling and dealing that anyone who knows how to be a transaction cost or gamble with other people's money has a decent shot; but it's genuinely impressive how far underwater all the nominal leaders are on their glorious new innovation.
      • Thanks for the interesting comments.
        I was not familiar with Coreweave - so thanks for introducing it.
        I looked it up and found this article from 2 days ago:

        CoreWeave Is A Time Bomb
        Edward Zitron
        Mar 17, 2025

        https://www.wheresyoured.at/co... [wheresyoured.at]

        It is a long and detailed read, but I found it a very worthwhile use of 20 minutes. It is an insightful look at the current state of affairs of AI and the AI industry with a generally pessimistic view of the commercial viability of AI in the long run.

        If anyone has a few min

        • I dunno, I read a few of the guy's other takes, and they're pretty bad.
          I actually think he's got a bit of an ax to grind.

          He sees MS canceling a +14% datacenter expansion as a sign it's losing faith in OpenAI as a business, when it's far more likely that they've simply seen a seismic shift in compute cost per unit of LLM performance. DeepSeek-style models require a tenth or less of the power of OpenAI's flagships.
          To think that OpenAI isn't working on matching that efficiency right now is eyerolling.
          • Interesting. You could be right. Always nice to hear another thoughtful point of view. Thanks.

          • He's definitely not an industry optimist; but the efficiency thing is arguably part of Coreweave's problem.

            They don't get paid per unit of LLM performance; they get paid per unit of compute time (and currently lose money on it). Efficiency is actively bad for them unless it drives up adoption by more than the amount saved; or if it makes certain workloads amenable to moving off the specialized larger clusters that they operate and onto smaller systems (the big GPU clusters are fairly involved HPC infini
            • He's definitely not an industry optimist; but the efficiency thing is arguably part of Coreweave's problem.

              Yes, I followed his logic.
              But the same can be said for a power company.

              They don't get paid per unit of LLM performance

              Well, they do.
              The price is set per unit of time on a particular machine with known performance and specifications.
              E.g., $42.99 per hour for an HGX H100 with 80GB of VRAM.
              $50.44 per hour for an HGX H200 with 141GB of VRAM, etc.
              There's no other way (that I can think of) to price something like that.

              I'm not trying to argue that their business is good, by any means, but the idea that it's a fundamentally flawed model is... a pretty weak argument.
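Both sides of this exchange come down to simple arithmetic: a rental provider's revenue is rate times hours billed, regardless of how much LLM output those hours produce. A minimal sketch using the hourly rates quoted in the comment above (the 100-hour job and the 10x efficiency gain are hypothetical):

```python
# Hourly rates quoted in the comment above.
H100_RATE = 42.99   # $/hr, H100 with 80GB of VRAM
H200_RATE = 50.44   # $/hr, H200 with 141GB of VRAM

def rental_revenue(rate_per_hr: float, hours: float) -> float:
    """Provider revenue: billed purely per unit of compute time."""
    return rate_per_hr * hours

# A hypothetical job that takes 100 GPU-hours today:
before = rental_revenue(H100_RATE, 100)   # $4,299.00
# The same job after a hypothetical 10x efficiency gain:
after = rental_revenue(H100_RATE, 10)     # $429.90
```

Same delivered output, 90% less billed: efficiency gains only help the provider if adoption grows enough to refill the freed-up hours.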

      • Why can't they sell stock and supply AI for free?

      • If the raging bloodbath that is Coreweave's (OpenAI's new pet compute vendor, now that the MS honeymoon has soured a bit) S-1 filing is anything to go by, it might genuinely be that expensive, possibly yet another being sold below cost. I assume that some of their high-hype models are at least experiments in trying to break even; but for a technology that is supposedly unleashing a tsunami of efficiency there is shockingly little money in 'AI', once expenses are factored in. A good day to be Nvidia; and enough wheeling and dealing that anyone who knows how to be a transaction cost or gamble with other people's money has a decent shot; but it's genuinely impressive how far underwater all the nominal leaders are on their glorious new innovation.

        High initial prices in a rapidly developing industry are not only unsurprising but certainly to be expected. Now is not the time to expect price competition because features and products are still in rapid development. Only after the models have stabilized will we see price competition. This is the same for every industry, including software and hardware products.

  • Honestly, this reeks of desperation. Deepseek matches their current performance, and is a lot cheaper. OpenAI doesn't know how to further improve their models, so they are throwing raw computing power at the problem.
    • Deepseek matches their current performance (in compute per output, not necessarily in output quality), but LLMs haven't really been getting better at any given level of compute. The only thing that has worked so far is throwing more compute at it, so I can't blame OpenAI for trying that. And I haven't seen evidence that throwing OpenAI-level compute at Deepseek's algorithms really improves things, though I'd be happy to be proven wrong.

      I'm no OpenAI fan but there really isn't anything they're not doing in pursuit of AGI, an
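The "only thing that works is more compute" point is usually framed as a power law with diminishing returns. A toy illustration -- the power-law form follows published LLM scaling-law work, but the exponent here is made up purely for illustration:

```python
# Toy scaling curve: loss ~ compute**(-alpha), diminishing returns.
# alpha = 0.05 is a made-up illustrative constant, not a measured value.
def toy_loss(compute: float, alpha: float = 0.05) -> float:
    """Hypothetical loss as a function of training compute."""
    return compute ** (-alpha)

# Doubling compute only shrinks the toy loss by a few percent:
ratio = toy_loss(2.0) / toy_loss(1.0)   # 2**-0.05, about 0.966
```

Under a curve like this, each constant-factor improvement in quality costs an exponentially larger compute bill, which is consistent with the eye-watering o1-pro pricing above.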

"It's when they say 2 + 2 = 5 that I begin to argue." -- Eric Pepke

Working...