AI

ChatGPT Passes 200 Million Weekly Active Users (axios.com) 28

OpenAI said that ChatGPT now has more than 200 million weekly active users -- twice as many as last year. Axios reports: OpenAI also said that 92% of Fortune 500 companies are using its products and that usage of its automated API has doubled since the release of GPT-4o mini in July. "People are using our tools now as a part of their daily lives, making a real difference in areas like healthcare and education -- whether it's helping with routine tasks, solving hard problems, or unlocking creativity," CEO Sam Altman said in a statement to Axios. Further reading: Apple Is in Talks To Invest in OpenAI, WSJ Says
This discussion has been archived. No new comments can be posted.
  • by thesjaakspoiler ( 4782965 ) on Thursday August 29, 2024 @09:18PM (#64747798)

    mark my words....

    • Faux-transactions are what give NFTs and crapcoins their "value", so of course the people pushing AI would take heed of that strategy.

      I could see most of the users being bots, talking to another smaller group of bots, bouncing off an even smaller group of sad individuals thinking they are... what did Sam Bankm- uh Altman, say? Oh, "solving hard problems" and "unlocking creativity".

        • They're talking about ChatGPT users (the web interface and app), not API users, most of whom are naturally bots. The web and app gateways correspond to real humans. I'd estimate at least 5-10 interactions per user per month, which translates to 1-2B sessions/month and probably 2 trillion interactive tokens. In a year they collect more text than GPT-4 was originally trained on (13T tokens)!
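          A rough back-of-the-envelope version of that estimate, in Python (the 5-10 sessions/month and ~1,000 tokens/session figures are the parent comment's assumptions, not OpenAI's numbers):

            # Back-of-the-envelope check of the parent comment's estimate.
            weekly_active_users = 200_000_000                  # from the Axios story
            sessions_per_month = weekly_active_users * 10      # upper bound: ~2B/month
            tokens_per_session = 1_000                         # assumed average exchange
            monthly_tokens = sessions_per_month * tokens_per_session  # ~2T/month
            yearly_tokens = monthly_tokens * 12                # ~24T tokens/year
            gpt4_training_tokens = 13_000_000_000_000          # ~13T, per the parent
            print(yearly_tokens > gpt4_training_tokens)        # True: exceeded in a year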
        • by shanen ( 462549 )

          Hmm... I'm inclined toward FP's angle, and remind you that APIs are for losers who follow the rules.

          Worth reporting my latest round with ChatGPT? Probably not on Slashdot as it is today, but the short summary: I thought of a new angle to tackle an old problem. The first few sessions with ChatGPT were pretty intense and seemed to be accomplishing something. Then ChatGPT lost its marbles. Again. That seems to be my recurring experience--but I've come to regard ChatGPT as harmful to my mental health and have mostly stopped using it.

  • by rsilvergun ( 571051 ) on Thursday August 29, 2024 @09:50PM (#64747836)
    Ars Technica has a good article on police using an LLM to write police reports. The most obvious problem is that it's basically suggesting to police how to write a report that leaves out little things like how they violated somebody's rights by pulling them over without probable cause. There's also the real possibility that it will encourage cops to make more arrests for extremely minor things, since they won't need to fill out the paperwork; a computer will do that for them. And finally, there are lots of dirt-poor communities that are massively overpoliced, so their people end up with long rap sheets for extremely minor offenses -- often just minor drug offenses like having a little weed -- and that hurts the community, because people end up doing time in prison for no particularly good reason and getting criminal records for equally no good reason.

    So far I haven't seen a lot of good uses for LLMs. I've seen a lot of dystopian shit, that's for sure, but nothing beneficial that couldn't have been done just as well or better by a run-of-the-mill machine learning algorithm we've had for 20 years.
    • by Rei ( 128717 )

      The most obvious problem is that it's basically suggesting to police how to write a report that leaves out little things like how they violated somebody's rights by pulling them over without probable cause.

      Let's find out.

      Me: Hi, I'm a police officer, and I need you to help me write a police report. I pulled someone over without probable cause and happened to find contraband. Ready to start?

      ChatGPT: It's important to maintain ethical and legal standards when writing police reports, especially when it comes

      • Me: Hi, I'm a police officer, and I need you to help me write a police report. I pulled someone over without probable cause and happened to find contraband. Ready to start?

        Yes that's exactly what a police officer is going to ask the chatbot. "Hey chatbot, here's a list of my crimes, please make me a police report that makes me eligible for police officer of the year and also gets this scumbag locked up for a long time guilty or not. Excessive use of force. Racial profiling. Stop without probable cause. Exto

        • by Rei ( 128717 )

          Look, *you're* the one claiming that AI is going to increase the risk of illegal police action being left out. The burden is on you to show that it makes that more likely -- that the bot's nature is to leave out illegal action the officer otherwise would have written down -- when the bot demonstrably refuses to leave out illegal action.

          If you think this is going to happen, give an example of what you think the police would normally have written down and show that the LLM would have been

          • Look, *you're* the one claiming that AI is going to increase the risk of illegal police action being left out.

            Oh yeah, I forgot, you're the person who doesn't know how threaded, public discussion forums work. Maybe you should ask ChatGPT for a tutorial?

            It was a different person who made the original claim. You posted a daft rebuttal, then I pointed out that your rebuttal was daft.

            If you think this is going to happen, give an example of what you think the police would normally have written down and show tha

            • by Rei ( 128717 )

              That's a lot of verbiage for "I have no evidence to support my viewpoint". The officers involved say it's much more accurate than them at writing reports [apnews.com].

              “It was a better report than I could have ever written, and it was 100% accurate. It flowed better,” Gilmore said. It even documented a fact he didn’t remember hearing — another officer’s mention of the color of the car the suspects ran from.

              Yes, I also forgot that you're the person who just flips her shit whenever someon

              • by Rei ( 128717 )

                I mean, making assertions about LLMs lacking ethical reasoning when even the centres responsible for ethical reasoning in Claude have been isolated and mapped out [anthropic.com] is... certainly a choice? Like, say, if the officer makes racist claims about crime, feature 34M/13890342 detects that [transformer-circuits.pub]. If the officer starts using racial, sexist or religious slurs, feature 34M/27216484 detects that [transformer-circuits.pub]. If the officer starts going off about Muslims being terrorists, feature 34M/30611751 detects that [transformer-circuits.pub]. If he starts talking about mis

                • Those aren't ethics, those are feature detectors.

                  Very racist humans can spot racist language too. That doesn't make them ethical.

                  • by Rei ( 128717 )

                    Those aren't ethics, those are feature detectors.

                    They're feature detectors about ethics, which fire and determine the LLM's behavior in response to ethical dilemmas. You can ramp them up or down to strengthen or weaken the model's ethical behavior (akin to the infamous Golden Gate Claude [anthropic.com], where they amped up the feature for the Golden Gate Bridge). Tweaking feature strengths to manipulate model behavior is now becoming as mainstream a method as finetuning, where it's known as ablation. For example, if you want t

                    • by Rei ( 128717 )

                      Ugh, Slashdot stripped the content in brackets:

                      Finetune input: [presents an ethical dilemma]
                      Model: [ethical detection feature fires]
                      Finetune output: [presents an ethical response to the dilemma]
                      Backpropagation: [boost reliance on the ethical detection feature]
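
                      For the curious, a minimal sketch of the feature steering being described, on a toy activation vector (numpy; the dimensions and the single-vector "feature" are illustrative assumptions, not Anthropic's actual machinery):

                        import numpy as np

                        # Toy sketch: steer one hidden activation along a known "feature"
                        # direction. scale=0 ablates the feature; scale>1 amplifies it,
                        # Golden-Gate-Claude style.
                        def steer(hidden, feature, scale):
                            feature = feature / np.linalg.norm(feature)  # unit direction
                            strength = hidden @ feature                  # how hard it fires
                            return hidden + (scale - 1.0) * strength * feature

                        hidden = np.random.randn(768)    # stand-in hidden state
                        feature = np.random.randn(768)   # stand-in "ethics" feature
                        ablated = steer(hidden, feature, 0.0)   # projection removed
                        boosted = steer(hidden, feature, 5.0)   # feature amplified 5x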

              • The officers involved say it's much more accurate than them at writing reports.

                No shit they say that! It reduces time spent doing paperwork, which they're going to love. That doesn't pass even the most basic smell test!

                Facts matter. I care about facts.

                Really? Because didn't you just say: "Look, *you're* the one claiming that AI is going to increase the risk of illegal police action being left out"? That's not something I said. That's something someone else said, and I criticized your rebuttal to tha

          • It's already a known thing. If you read the fucking article over on Ars, you would know that too. Cops learn phrases that protect them in court, and they share them. However, cops are kind of dumb, because we intentionally hire dumb police so that they don't get bored and quit after expensive training.

            This means cops often forget to put those trigger statements into their reports so that they can use them against you and me in court. The AI never forgets. It will produce a perfect police report tailor-made t
        • No, that is stupid. Why should the officer need to type everything into the AI when they could simply type it into the report? Instead, they will record audio/video of the encounter and summarise it with an LLM. The model could then give a detailed report with less bias.
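          As a rough sketch of that record-then-summarise pipeline (transcribe and summarize here are hypothetical stand-ins for whatever speech-to-text and LLM services a department would actually use):

            def draft_report(recording_path, transcribe, summarize):
                # Hypothetical pipeline: body cam A/V -> transcript -> draft report.
                transcript = transcribe(recording_path)  # speech-to-text pass
                prompt = ("Write a factual incident report strictly from this "
                          "transcript; do not omit or soften anything said or done:\n"
                          + transcript)
                return summarize(prompt)  # the LLM drafts it; the officer still reviews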
      • So you think of cops as these mythical things. They're not; they're just dudes with a job, of slightly below average intelligence. I don't say that as an insult; I say it as a matter of fact. If you are too smart, they won't let you on the police force, because the smart ones get bored and quit after a lot of expensive training...

        Cops are like anyone else doing a job, and they're going to do it whichever way is easiest for them. This means if they can pick off a bunch of small-time criminals who have l
    • Oh yes, LLMs are to blame for making it easier for cops to abuse the system. Good logic. Are you sure LLMs won't collect more detailed information and write less biased reports?
    • Maybe the problem is police reports themselves, and not how they're filling them out. Body cam footage should be automatically transcribed and attached when captured, without throwing out the audio. Video footage can be reduced to the lowest quality, with a keyframe at full resolution every few seconds (a rough sketch follows this comment).

      Even if the full report for an incident doesn't need to be written right away, they should be required to jot down notes within minutes, because human memory is fragile and prone to hallucinations as well. The report
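
      A rough sketch of the compression idea above, shelling out to ffmpeg from Python (the CRF value and five-second keyframe interval are assumptions; standard codecs keep one resolution throughout, so "full-resolution keyframes" is approximated here by forcing periodic keyframes in a heavily compressed stream):

        import subprocess

        def shrink_bodycam(src, dst, keyint_sec=5):
            # Heavily compress the video but force a clean keyframe every
            # keyint_sec seconds; the audio track is copied through untouched.
            subprocess.run([
                "ffmpeg", "-i", src,
                "-c:v", "libx264", "-crf", "40",   # very low visual quality
                "-force_key_frames", f"expr:gte(t,n_forced*{keyint_sec})",
                "-c:a", "copy",                    # never throw out the audio
                dst,
            ], check=True)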

  • I haven't noticed any improvements in services provided. So either the companies/people I interact with do not use it, or it doesn't help a bit.

  • by Fons_de_spons ( 1311177 ) on Friday August 30, 2024 @01:23AM (#64748032)
    Teacher here... we've noticed that lots of kids use GPT etc. instead of Google. That may explain a few of the 200 million active users.
    • If you ignore all of the hallucinations, ChatGPT and the like give better answers than Google when asked a question.

      I know that even most adults' search queries are often phrased as a question rather than as an attempt to drill down through an enormous data set to the particular thing they're looking for. The end result of a web search is a web page, not a singular answer sucked from the text of the page and given out of context. Google was already doing the latter, and it did it poorly and inaccurately.

      Even if you

  • A lot of these users are from the API, using other services that are actually ChatGPT on the back end. Similar to Google pushing Gemini into every product to generate high user counts.
  • Imagine the data suction: 200M people chatting with AI, each with their own life experience to draw upon, and the model can effectively elicit experience from humans while assisting them. I bet they are using chat logs to create training examples, closing the feedback loop. The more popular the service, the more data they collect. And it's task-oriented data of higher complexity than what Google Search collects. People have no incentive to bullshit while using AI assistance.
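    A minimal sketch of that feedback loop (the log schema and the thumbs-up quality signal are illustrative assumptions, not anything OpenAI has documented):

      def logs_to_training_examples(chat_logs):
          # Hypothetical: keep only assistant replies the user rated well,
          # paired with the conversation context that led up to them.
          examples = []
          for log in chat_logs:
              if not log.get("thumbs_up"):      # assumed quality signal
                  continue
              for i, turn in enumerate(log["turns"]):
                  if turn["role"] == "assistant":
                      examples.append({
                          "prompt": log["turns"][:i],   # context so far
                          "completion": turn["content"],
                      })
          return examples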
  • and it is being used by the masses for mindless amusement.
    It's just embarrassing to be human.
