
OpenAI Has No Moat, No Tech Edge, No Lock-in and No Real Plan, Analyst Warns 53

OpenAI faces four fundamental strategic problems that no amount of fundraising or capex announcements can paper over, according to analyst Benedict Evans: it has no unique technology, its enormous user base is shallow and fragile, incumbents like Google and Meta are leveraging superior distribution to close the gap, and its product roadmap is dictated by whatever the research labs happen to discover rather than by deliberate product strategy.

The company claims 800-900 million weekly active users, but 80% of them sent fewer than 1,000 messages across all of 2025, averaging fewer than three prompts a day, and only 5% pay. OpenAI has acknowledged what it calls a "capability gap" between what models can do and what people use them for -- a framing Evans reads as a polite way to avoid admitting the absence of product-market fit. Gemini and Meta AI are meanwhile gaining share rapidly because the products look nearly indistinguishable to typical users, and Google and Meta already have the distribution to push them. Evans compares ChatGPT to Netscape -- an early leader in a category where the products were hard to tell apart, overtaken by a competitor that used distribution as a crowbar.

On capex, Evans argues that Altman's ambitions -- claiming $1.4 trillion and 30 gigawatts of future compute -- amount to an attempt to will OpenAI into a seat at a table where annual infrastructure spending may need to reach hundreds of billions. But a seat at the table is not leverage over it; he compares this to TSMC, which holds a de facto chip monopoly yet captures little value further up the stack.

OpenAI's own strategy diagrams from late last year laid out a full-stack platform vision -- chips, models, developer tools, consumer products -- each layer reinforcing the others. Evans argues this borrows the language of Windows and iOS without possessing any of the underlying dynamics: no network effect, no lock-in preventing developers from calling a different model's API, and no reason customers would know or care which foundation model powers the product they are using.
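Evans's "no lock-in" point is concrete at the API level: most hosted models expose near-identical chat-completion interfaces, so switching providers can amount to changing a base URL and a model name. A minimal sketch of that interchangeability (the endpoints and model names below are illustrative placeholders, not verified values):

```python
# Illustrative only: shows that the request shape is identical across
# providers, so there is nothing structural binding a developer to one.
def build_chat_request(provider: str, prompt: str) -> dict:
    """Return the (identical) chat request for several hypothetical providers."""
    endpoints = {
        "openai": ("https://api.openai.com/v1/chat/completions", "gpt-4o"),
        "other":  ("https://api.example.com/v1/chat/completions", "some-model"),
    }
    url, model = endpoints[provider]
    return {
        "url": url,
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Only url and model differ; the payload structure is the same.
a = build_chat_request("openai", "hello")
b = build_chat_request("other", "hello")
assert a["json"]["messages"] == b["json"]["messages"]
```

That one-line switch is the whole argument: whatever layer a customer builds on, the layer underneath is fungible.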
  • by MpVpRb ( 1423381 ) on Friday February 20, 2026 @04:53PM (#66001788)

    "its product roadmap is dictated by whatever the research labs happen to discover rather than by deliberate product strategy"
    This is the way leading edge research is done
    You can't have a "deliberate product strategy" when inventing totally new stuff

    • yeah, but... (Score:5, Insightful)

      by ebunga ( 95613 ) on Friday February 20, 2026 @05:14PM (#66001830)

      They're spending a trillion dollars without a plan and without a product. They are literally disrupting every single consumer electronics supply chain for the next three years without a plan and without a product.

      • Re:yeah, but... (Score:4, Insightful)

        by Whateverthisis ( 7004192 ) on Friday February 20, 2026 @05:42PM (#66001876)
        Not to mention inflating stock market indices and laying the groundwork for a bubble burst followed by a recession.
      • I like to see it as proof that we do have the money to spend on non-capitalism things that people claim we don't. Just need the will to tax and implement it.

        • by bartoku ( 922448 )

          Instead you should see that the non-capitalism things are so unpopular that you need to steal money to try and fund them.

          If the things you want are worthwhile go raise money and implement them.

          Giving more money to the government is about the stupidest idea ever.

          Even if the people in power today would implement what you want, tomorrow that may not be true; then your bad idea is cut and the money is spent on horrible things...

      • The interesting thing will be what we accomplish with all the compute added once the bubble bursts. Protein folding and biotech in general could see ultra cheap compute, which could lead to some major breakthroughs.
        • Only if that kind of use works on one of those NVidia H200s or whatever the newest one is.
          Odds are, the software for protein folding would have to be rewritten to use an H200-style board.

        • by allo ( 1728082 )

          The investment bubble popping doesn't mean people stop using AI. The same hardware can do the same jobs as before. Some company will own it and provide the infrastructure that may no longer be provided by the current giants. On the other hand, I think the giants are on the safe side; the newcomers will have the door shut in their faces if investors lose interest.

      • It feels a bit like they're still behaving like a nonprofit, except they're not releasing their models to the public anymore, opting to instead keep them close to their chest. In the process of moving to a for-profit business, they mostly forgot to switch to a for-profit business model.

      • by allo ( 1728082 )

        They have a product. Every stupid thing that wants you to use AI uses their API in the backend. And the API itself is priced reasonably for OpenAI; the only risk is that the (much) cheaper competing models, where API prices start at 2 cents instead of $15, get good enough that these companies can switch.
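To put numbers on that switching risk, here is a back-of-envelope sketch using the figures from the comment above ($15 vs. 2 cents per million tokens; these are the commenter's numbers, not verified pricing, and the 500M-token workload is a made-up example):

```python
def monthly_cost(price_per_million_usd: float, tokens_per_month: int) -> float:
    """API spend at a flat per-million-token price, rounded to cents."""
    return round(price_per_million_usd * tokens_per_month / 1_000_000, 2)

# A hypothetical app emitting 500M tokens a month:
tokens = 500_000_000
expensive = monthly_cost(15.00, tokens)  # roughly $7,500/month
cheap = monthly_cost(0.02, tokens)       # roughly $10/month
```

A gap that wide is why "good enough" is the operative phrase: the moment quality parity arrives, the switch pays for itself immediately.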

      • In a gold rush, it's always the tool makers that make all the money.

    • by Brain-Fu ( 1274756 ) on Friday February 20, 2026 @05:25PM (#66001846) Homepage Journal

      If you are building solutions in the Microsoft Azure Cloud, it is very easy to get immediate access to GPT models to power your AI pipelines (whatever they may be). Very affordable, too.

      By contrast, you cannot get access to the Gemini models there, and there is a big hill to climb to gain access to Claude. I don't know about Meta's models; I never checked.

      My point being, this is a bit of vendor lock-in that makes use of GPT models the path of least resistance for many businesses building AI-powered solutions. Maybe that will help. Though I think not for long.

      GPT models are weak sauce compared to Gemini and Claude, which have surpassed them by a wide margin. Businesses that really need the power of these other models can use Google Vertex and integrate it with their Azure cloud, or set up an Anthropic account and just beam the web requests right over. Anthropic is problematic in that it doesn't let you ensure that data never leaves specific global regions (which many people need for legal reasons), but Google Vertex sure does.

      So, I think that advantage that OpenAI currently has will not last long.

      It is sad to see an innovator lose out, but that is also how things normally go. We tell ourselves feel-good stories about how copyright law or patent law can protect the small innovator against the huge corporations, but that isn't how things play out in reality. By hook or by crook, the major players wind up leveraging what they have to get control over the shiny new thing, and that's how the cookie crumbles.

      • then a late-to-market IBM buys them.
      • It is sad to see an innovator lose out,

        They were first to market, but I don't think of them as having invented the product. The emergence of chatbots seemed inevitable once several Google engineers published the 2017 paper "Attention Is All You Need"... it was just a question of exactly who and when. If OpenAI hadn't gone first, someone would have shortly after.

        And in a lot of ways, even that Google paper's "breakthrough" wasn't so much the tech (neural nets) but the precise adaptation of it that made it highly parallelizable.

    • by ranton ( 36917 )

      You can't have a "deliberate product strategy" when inventing totally new stuff

      That is 100% true. But it also isn't how trillion dollar companies get their valuations. They get it by having a deliberate product strategy and a strong moat that will defend their revenue for decades to come.

    • by ceoyoyo ( 59147 )

      You can't have a "deliberate product strategy" when inventing totally new stuff

      Right. Which is why businesses don't do that. Publicly funded universities don't even do that.

      Industrial research, even the legendary Bell Labs and Xerox PARC, is a small part of what a company is otherwise doing. You might exploit something coming out of your research lab, but you're not depending on it.

    • by SeaFox ( 739806 )

      "its product roadmap is dictated by whatever the research labs happen to discover rather than by deliberate product strategy"
      This is the way leading edge research is done.

      Wall Street isn't interested in "leading research". Heck, they don't want anything that costs money, like following environmental regulations, paying employees fairly for their labor, providing support to consumers after the sale...

      They just want reliable plans to make money. OpenAI (and arguably every AI company) isn't really offering that beyond traditional methods (subscriptions, ad revenue, strategic partnerships). But OpenAI doesn't have the platform lock-in that would help keep people from dipping their toes elsewhere.

    • You can't have a "deliberate product strategy" when inventing totally new stuff

      Nonsense. There are many counterexamples. See also: the 1960s race to the moon. Arguably much more complex than AI because it faced real physical limitations and Nature is, famously, a bitch.

    • by dvice ( 6309704 )

      No it isn't. Just look at how DeepMind does its work. It has had several clear minor goals along the way: first it learned to play old games, then Go, then StarCraft; then it turned its attention to real-world problems like protein folding, which it worked on for two years and finally solved.

    • by gweihir ( 88907 )

      You do not use "leading edge" "products" for anything that matters. Doing so is pure insanity.

    • by allo ( 1728082 )

      I think the analyst says they are no longer inventing. They are scaling transformers (currently successfully) to be better and better and may hit a ceiling. And six months later the competition hits the same ceiling.

  • by Fly Swatter ( 30498 ) on Friday February 20, 2026 @04:55PM (#66001790) Homepage
    Sounds like a plan, when no one can afford a home computer and all the computing is now hoarded in the datacenter/cloud/server... profit?!?
  • by BrendaEM ( 871664 ) on Friday February 20, 2026 @04:55PM (#66001792) Homepage
    No, we don't need yet another spying, content-stealing, billionaire-propping AI. We need something that can be downloaded locally, run on our 12TF video cards, and lock out the billionaires who have already done too much damage to our society.

    Note: I am against two computer technologies: blockchain and as-deployed AI. I guess that is why Slashdot gives my posts a level-1 initial value.
    • by CAIMLAS ( 41445 )

      The biggest problem with this is that 12TF cards are expensive, but more importantly, the ones with enough memory for the models to be much more than a curiosity (say, 32GB+) are extremely expensive, and frankly, not terribly available.

      The state of the technology needs to improve a lot for general local inference. Yes, you can do a lot with smaller specialized models (things like Whisper) or do minor things with free models like lfm2.5 or Mistral, but these aren't even in the same ballpark

      So if you need a th
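The parent's "32GB+" threshold follows from a standard rule of thumb: model weights alone take roughly params × bits-per-weight ÷ 8 bytes, before KV cache and runtime overhead. A rough sketch (the 70B/4-bit example is illustrative, not a specific model):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GB needed just for model weights.
    Ignores KV cache and runtime overhead, so real usage is higher."""
    return params_billion * bits_per_weight / 8

# A 70B model quantized to 4 bits needs ~35 GB for weights alone,
# which already exceeds most consumer cards.
assert weight_vram_gb(70, 4) == 35.0
```

By this arithmetic, a 24GB consumer card tops out around a 4-bit 40B model before context is even accounted for, which is why local inference beyond small models remains expensive.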

    • by gweihir ( 88907 )

      Here is a better approach: Do without. AI is not the time-saver it gets pushed as. Most of what it does is reduce your skill at things.

    • by labnet ( 457441 )

      If you really want to start at -1, just hint that trump may have done something good, like plug the border. Powerful is the /. TDS!

  • by Anonymous Coward

    Didn't they buy up like half the memory chips, which caused a massive shortage starting in September?

    • OpenAI is only planning to buy those chips. But first they have to get the money. OpenAI has run out of big investors, and Nvidia is getting shaky about round-tripping. Now Sam Altman has to convince big banks that Gemini and Claude Code aren't going to turn ChatGPT into a streak in a pair of dirty underpants. I won't be surprised if we see OpenAI dramatically scale back all those big expensive capex plans in the near future.

  • by TheStatsMan ( 1763322 ) on Friday February 20, 2026 @05:16PM (#66001834)

    Who asked?

    "Analyst invested in other companies claims OpenAI is no good."

  • If AI continues driving the marginal cost of intelligence and production toward zero, it may eventually undercut the very mechanism capitalism relies on: the wage-labor loop that generates purchasing power and demand.
    • As I put together in 2010 citing writings from 1964 and later: "Beyond a Jobless Recovery:
      A heterodox perspective on 21st century economics" https://pdfernhout.net/beyond-... [pdfernhout.net]
      "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towar

    • Yeah all the doomsayers are saying this incessantly, as if AI invented automation. We've been automating things for centuries, and we still have a wage-labor loop functioning just fine.

  • For coding purposes, Anthropic's Claude wins hands down in my experience, and Gemini is next. GPT is several months behind in quality, and Meta is barely visible.

    OpenAI merely needs to ship a superior product in order to do well in that space, and there's still plenty of runway for them. The real danger for them, in my opinion, is losing the tech race.

  • I use ChatGPT because there is NOT any lock in. They want to start playing that game, they lose my money.

    • by gweihir ( 88907 )

      Avoiding vendor lock-in is one of the most important things a customer (whether organizational or private) can do. Many ignore that one and suffer as a consequence.

  • The analysts have it all wrong. OpenAI's business plan is not based on moats, tech edge, lock-in, business fundamentals, or anything like that; it's based on hype. And AI excels at generating hype. That's the killer feature.

    • by gweihir ( 88907 )

      Obviously. But if "analysts" begin to tell people the truth, they expose how worthless their advice has been in the past. And hence they continue to fabricate plausible sounding stories.

  • But they only have superior marketing at the moment.

    Has everybody forgotten the worth of Anthropic CEO announcements? Six months, 6-12 months, soon but not this year, etc., etc.

    Google is the only one with a long-term strategy, IMO. LLMs are much better as search tools than as a coder of any seniority. They are also very good at translation. The ability of LLMs to generate fake content is also an important "skill". There's a place in Google for all of that.

  • I fully embrace the AI tools and use for whatever productivity I want to extract. But there is clearly a commoditization going on similar to the search wars of the past. There has to be something else to make it sticky. ChatGPT was the first AI app I added on my iPhone. I recently deleted it.

    I retain Gemini, Perplexity and Duck.ai (for privacy). I use Kiro and Gemini at work as allowed AI. We can use CoPilot too; it went from "cool" to annoying, and I wish it would stop making lousy suggestions. ChatGPT is totally fine, but

  • Was I the only one who stopped using ChatGPT when it stopped flattering me because some fuddyduddies thought that was dangerous?

    • by allo ( 1728082 )

      You can just prompt it to flatter you. For some time now they have even had presets like "professional" (no more "Good idea!"), "standard", "sarcastic", and more. Nothing prevents you from adding something to the system prompt like "You're cheerful and motivational to the user" or whatever style you like. You can also write "You are a catgirl" if you like to read some purrs. You will waste some tokens and attention (particularly in long chats) on such silly things, though.

  • Sad times when total scum runs things and competing in merit is not relevant anymore ...

  • Intelligence has become a commodity; Claude, Gemini, and the Chinese OSS models are all separated by a percentage point or two in intelligence. OpenAI will be sold for pennies on the dollar.
  • ... for something I am told so constantly is so doomed.
  • What is the average number of Google searches per day? Not from IT nerds, but the average over all Google users? Probably something like 0.7.
