
ChatGPT Nears 700 Million Weekly Users, Up 4x From Last Year 51

OpenAI's ChatGPT is on track to hit 700 million weekly active users, "up from 500 million in March, marking a more than fourfold year-over-year surge in growth," reports CNBC. From the report: The figure spans all ChatGPT artificial intelligence products -- free, Plus, Pro, Enterprise, Team, and Edu -- and comes as daily user messages surpassed three billion, according to the company. The growth rate is also accelerating, compared with 2.5 times year-over-year growth at this time last year. "Every day, people and teams are learning, creating, and solving harder problems," said Nick Turley, VP of product for ChatGPT, in announcing the benchmark.

OpenAI now has five million paying business users on ChatGPT, up from three million in June, as enterprises and educators increasingly integrate AI tools. [...] OpenAI's annual recurring revenue is now at $13 billion, up from $10 billion in June, with the company on track to surpass $20 billion by year-end. Even at a $300 billion valuation and $20 billion revenue run rate, OpenAI will need massive capital to support its global push.


Comments Filter:
  • I still haven't used it. Why am I not surprised so many people are willing to let an A.I. do their thinking for them?
    • I use it all the time, and it's been awesome.

      I am starting a small business. Not only does ChatGPT (via Copilot) answer a lot of my questions, but I also am keeping lots and lots of records in the system.

      In the past, I would have used something like Excel, and had a fairly rigid data entry format. It works of course, but data entry is more work that way. Now I just 'tell' Copilot what I am doing, while I am doing it. At the end of the day/week/month I can ask for a recap, and everything is formatted nic

      • Do you know that when you ask for a report it will give you false information at some point, and that the rate at which it gives you false information will increase over time? And if you're OK with that, why do you even need the report in the first place?

        "Hallucination" is an AI marketing term for false information or error.

        • Yes, I do know that I get false information at some point, and I am okay with that. It disappointed me at first, then I realized how to work with it. You need to remind AI of some things, and you need to keep it on track. Humans are a lot worse in this regard.

          I treat it largely like I would treat a highly qualified human assistant. This is also where I find the conversations to be very important. Every once in a while I just tell it to review the entire conversation and that will steer it back the ri

    • AI is a tool; the problem here is people giving up their privacy by using a cloud-based AI when DeepSeek exists. I can run that thing on my M1 MacBook.
    • by linuxguy ( 98493 )

      > Why I am not surprised so many people are willing to let an A.I. do their thinking for them?

      AI is just a tool, like a computer is a tool. If you were an accountant before computers arrived and refused to "let the machine do your work", you would get left behind. This isn't very different from that. AI is pretty darn good today. And it is significantly better than it was last year. And it will be significantly better next year.

      I have some older family members. Some of them adopt newer tech and others shun

      • As a tool it's pretty bad if you take 5 minutes to think about it. Computers are reliable and when they make mistakes it's an exception.

        When "A.I." makes an error we invent terms that make it seem like it wasn't an error: hallucination.

        Of course, if you know how LLMs work, you would know that those errors are features and that they cannot work without them, regardless of how much training data you feed into them.

      • There is a bit of circular logic in the AI argument. If you let a program do everything, including the thinking, how can you as the user know if the program is working correctly?

        With an actual tool, such as a hammer, I can confirm that the nail is really in the board. And the hammer isn't going to hallucinate and lie about a nonexistent nail or a nail that it shot into my face.

        As a tool, AI's own extensive capabilities are its biggest weakness. A weaselly little piece of software that will find any path to th

    • by sirber ( 891722 )

      Best ask AI why.

  • I don't care how much you want to shit on them. Those are some impressive numbers. Very impressive.

    • by gweihir ( 88907 )

      Impressive? Yes. In a good way? No.

    • I don't care how much you want to shit on them. Those are some impressive numbers. Very impressive.

      (Mark Z) ”Two billion on Facebook. Some, may call that impressive. I call that, a lot of dumb fucks.”

    • No, no, the 'bubble' is going to pop any time now! Half of Slashdot has been telling us that on each and every fucking AI article.

  • by xack ( 5304745 ) on Monday August 04, 2025 @05:25PM (#65566204)
    It looks like the AI takeover has already happened for many, and all their degrees and certifications will be "AI-tainted". This is much, much worse than the Wikipedia revolution that happened 20 years ago now. John Seigenthaler [wikipedia.org] must be turning in his grave from all the AI hallucinations out there.
    • It does worry me what kind of weird stuff people will start believing just because an AI told them that once. Sort of like the myth that "you swallow an average of four live spiders in your sleep each year" and other types of unproven nonsense that sounds plausible and is so horrible you want to believe it. We'll need a new category of nonsensical claims coming from AI, I guess.

      • People already believe unhinged and stupid things. The world is flat, the US was the first to ban slavery, there was no Moon landing, electric cars pollute more than ICE, climate doesn't change, there is no evolution, chemtrails are a secret weather experiment. Some of these are more extreme than others, all are easy to check with multiple sources or a little reasoning.

  • by locater16 ( 2326718 ) on Monday August 04, 2025 @05:29PM (#65566208)
    For basic idiot questions (hey, we all have them) "AI" is a good search engine. For anything even slightly complicated, "AI" can confidently bug out, and all the "thinking" and citations and other "fixes" are worthless. Just two weeks ago I had Gemini hallucinate an entire scientific paper and confidently provide links to the nonexistent source.

    This happens a lot. Need an explanation for learning something? AI will explain quaternions as just another spatial dimension (no). It'll give you wrong information on safe sunscreen ingredients, will hallucinate that you're going to die tomorrow from some disease instead of letting you come to that conclusion yourself when you tell your doctor that you have lupus, etc.

    In short, modern transformer-model "AI"s have become artificial politicians: some"one" that will spew utter nonsense straight to people's faces with the virtual equivalent of a nod and a smile, and people will believe it and blame everyone else when it's wrong. We already have more than enough of those, thank you "AI" companies, you worthless cunts.
    • Maybe a lot of other people aren't using them like you are trying to do. If you want to turn a bullet list into prose for example they are very good. If you want to tell ChatGPT to remember facts so you can remember them without matching a keyword, or getting a bunch of extraneous "facts," they are good for that.

      Personally I have also found them very good for summarizing issues and providing links (i.e. search+), but maybe that's not your experience.

      Also it does matter which AI we're talking about. I

      • by locater16 ( 2326718 ) on Monday August 04, 2025 @05:42PM (#65566246)
        Is AI good at summarizing, or do you just believe it's good because it's convenient? Is it good at providing links, or do you just believe it without checking? Is it really doing what it says it's doing, or do you just have a shortcut in your brain that says confident speech is probably right so you don't have to waste time thinking about it?
        • Oh I definitely use the links rather than the ai-generated text if I care.

          Is it missing something important in summarization? I can't say I've found that to be the case in retrospect, but yes it must happen. Is it more than would happen if I spent the same effort with a search engine, or less? I think it is less.

        • by narcc ( 412956 ) on Monday August 04, 2025 @07:08PM (#65566468) Journal

          Is AI good at summarizing

          It's astonishingly bad at summarizing text. It will ignore important details and 'hallucinate' others. Oh, and if the thing you want it to summarize isn't accessible or doesn't exist, it will still provide a 'summary'.

          or do you just believe it's good because it's convenient?

          The output looks really good if you don't bother to check it for accuracy.

          Is it really doing what it says it's doing

          They suck at summarizing text because they're not actually summarizing text. All these things do, all they can do, is next-token prediction. That's why it doesn't matter if there isn't any text to summarize. Next-token probabilities are produced the exact same way, no matter what the context happens to be.
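          That next-token loop can be made concrete with a toy sketch (my own illustration, with a stub standing in for a real transformer; this is not any actual OpenAI code):

```python
import math

# Toy sketch of "all they can do is next-token prediction": a stub
# scorer stands in for the model's forward pass, softmax turns scores
# into probabilities, and the highest-probability token is appended.
# Note the loop runs identically whether or not the "document" the
# prompt refers to actually exists.

VOCAB = ["the", "report", "says", "summary", "<eos>"]

def stub_logits(context):
    # Stand-in for a real model: assign each vocab token a score.
    return [len(token) + 0.1 * len(context) for token in VOCAB]

def softmax(logits):
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, max_new=3):
    tokens = list(context)
    for _ in range(max_new):
        probs = softmax(stub_logits(tokens))
        best = VOCAB[probs.index(max(probs))]   # greedy decoding
        tokens.append(best)
    return tokens
```

          The point of the stub: the decoding loop is indifferent to whether the prompt's premise is true; it only ever picks a plausible next token.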

          do you just have a shortcut in your brain that says confident speech is probably right so you don't have to waste time thinking about it?

          To be fair, I think we're all guilty of that. If not when it comes to AI generated nonsense, then a book or some other media. It's impossible for us to be experts in everything, so we all lean on expert opinion. We also tend to associate confidence with certainty, which is fine most of the time, provided we don't also mistake certainty for accuracy!

        • An LLM providing a summary isn't the same as summarization!

          To provide a real/meaningful summary (retaining the key points and discarding irrelevant detail) requires reasoning and understanding of the subject matter and context as well as some understanding of what is important. Humans can do this ...

          However, an LLM providing a summary is only PREDICTING what a summary would look like - it will give you something that looks like a summary in terms of form and length, with some references to the source materia

    • No, AI is not good for basic questions. Refer to literally every Google AI results summary ever.
    • It is dangerous to ask it something too specific. It will gladly fabricate a plausible response. For example, don't ask it about what episode of The Simpsons something happened in; it will often just make something up and cite a specific episode, but when you go watch that episode, it's completely wrong. It desperately wants to give you an answer to a question and will generate lie after lie in an attempt to make you happy.

      Of course, I'm anthropomorphizing a soulless algorithm here. It's remarkably good at

  • by silvergig ( 7651900 ) on Monday August 04, 2025 @05:41PM (#65566240)
    Are. They. Making. Any. Money?
    Doesn't matter if they have 20 trillion weekly users. Show me the Money.
    • Exactly. These numbers are only there to impress the gallery. Artificial numbers to inflate expectations.
    • They've definitely got revenue as the article stated. "OpenAI's annual recurring revenue is now at $13 billion, up from $10 billion in June, with the company on track to surpass $20 billion by year-end" and "five million paying business users". Nothing to sneeze at and the growth rate is 4x. People will invest in that.

      • by narcc ( 412956 ) on Monday August 04, 2025 @07:22PM (#65566488) Journal

        People will invest in that

        Only stupid people who don't know the difference between profit and revenue.

        Don't let the fact that these things are inescapable at the moment fool you. These things are absurdly expensive. Everyone is banking on the tech improving rapidly and the cost falling dramatically. Enjoy the affordable access while it lasts because it won't last long.

        • by radl33t ( 900691 )
          Are they really banking on the costs dropping? Surely they need to make up for the hundreds of billions already spent (and the trillions more they speak of). It seems likely that the frontier LLM AI business plan is to heavily subsidize our addiction and/or overreliance on these tools and then leverage this relationship to extract whatever revenue is necessary. Or it will collapse and shareholders will lose a lot of money. I read recently that the major firms' spending on AI is increasing much faster than th
        • by allo ( 1728082 )

          These things only get cheaper.

          Have a look at the price trends: https://epoch.ai/data-insights... [epoch.ai]

          • by narcc ( 412956 )

            Cheaper != cheap. New technology has made inference less expensive, sure, but not nearly enough to matter. Again, these things are expensive. OpenAI, for example, was forced to raise prices just to slow the bleeding. They are absolutely hemorrhaging money, even on their $200/month "pro" plan.

            • by allo ( 1728082 )

              But they ARE cheap. Have a look at the OpenRouter marketplace: https://openrouter.ai/models [openrouter.ai]

              At the end of 2024, this blog article compared GPT-3.5-level intelligence going from $60/1M tokens to $0.06/1M tokens: https://a16z.com/llmflation-ll... [a16z.com]
              Since then, prices have fallen even further: https://epoch.ai/data-insights... [epoch.ai]

              I could, just now, summarize 1 million tokens for 0.02 cents of input and, if we assume the summary to be 4k tokens, $0.00032 ($0.08/1M) of output using Mistral Small, which is one of the good medium-size models currently. I thin
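              As a sanity check on the arithmetic above (a sketch; the $0.08/1M output price is the figure quoted in the comment, and the input price is just a placeholder, not official Mistral pricing):

```python
# Per-token API pricing is quoted in dollars per 1M tokens, so cost is
# simply tokens / 1e6 * price. Prices here are the comment's quoted
# figures, not authoritative rates.

def token_cost(tokens, price_per_million):
    """Dollar cost of `tokens` tokens at `price_per_million` $/1M."""
    return tokens / 1_000_000 * price_per_million

# 4,000 output tokens at $0.08 per 1M tokens:
output_cost = token_cost(4_000, 0.08)
print(f"4k-token summary output: ${output_cost:.5f}")  # $0.00032
```

              The same helper works for the input side once you plug in whatever your provider actually charges per 1M input tokens.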

              • by narcc ( 412956 )

                Again, the price you are paying is not even close to the actual cost. This is why AI companies are burning money. Companies are not going to lose money forever so that you can enjoy cheap access to shitty chatbots.

                This isn't difficult.

                • by allo ( 1728082 )

                  Not all of the inference providers have VC money to burn. Some just provide the service at the cost that makes them profit. The loss is with the companies training models, not with the companies taking open models and providing inference.

                  • by narcc ( 412956 )

                    That's simply not true. AI Companies are losing money on inference. That's reality. You can continue to deny it if you like, but it won't change that simple fact.

                    • by allo ( 1728082 )

                      There is not much sense in discussing this further when you just claim things to be true, saying "trust me bro," even when they were disproved earlier in the conversation.
                      So I say: have a nice day, and see you in another Slashdot forum.

                    • by narcc ( 412956 )

                      LOL! You think you were "discussing" something?! What a joke!

                      Here in reality, you were saying a bunch of nonsense and ignoring easily verifiable facts. That's not a discussion. That's you denying reality.

                    • by allo ( 1728082 )

                      Now we arrived at name calling?

                      Maybe I should have stopped the discussion earlier.

                    • by narcc ( 412956 )

                      Where do you see any "name calling"?

                      You've got nothing. Pathetic.

                    • by allo ( 1728082 )

                      You're doing it again. Don't you notice yourself?

                    • by narcc ( 412956 )

                      Do you know what "name calling" is or is this a sad and pathetic attempt at gaslighting?

                      Get a fucking clue.

                      Figures. You have nothing so you have to resort to petty bullshit to make yourself feel better. What a joke!

                    • by allo ( 1728082 )

                      I think it's enough. Good bye.

  • So what (Score:4, Insightful)

    by paul_engr ( 6280294 ) on Monday August 04, 2025 @05:46PM (#65566254)
    The numbers still don't add up, and scale won't fix a problem with the practical limits of scale.
    • I see it as a genie out of the bottle now ... we're seeing the monetization of human ignorance and ingenuity simultaneously. Every style of dysfunction will pay money for their jollies. Look at the nerds who are marrying their LLM chatbots ... or something ... the markets for dysfunction will pay the bills even if the rest of commercial AI doesn't pay back pronto.
