Elon Musk's X Launches Grok AI-Powered 'Stories' Feature (techcrunch.com) 71

An anonymous reader shared this report from Mint: Elon Musk-owned social media platform X (formerly Twitter) has launched a new Grok AI-powered feature called 'Stories', which allows users to read summaries of a trending post on the social media platform. The feature is currently only available to X Premium subscribers on the iOS and web versions, and hasn't found its way to the Android application just yet... instead of reading the whole post, they'll have Grok AI summarise it to get the gist of those big news stories. However, since Grok, like other AI chatbots on the market, is prone to hallucination (making things up), X provides a warning at the end of these stories that says: "Grok can make mistakes, verify its outputs."
"Access to xAI's chatbot Grok is meant to be a selling point to push users to buy premium subscriptions," reports TechCrunch: A snarky and "rebellious" AI, Grok's differentiator from other AI chatbots like ChatGPT is its exclusive and real-time access to X data. A post published to X on Friday by tech journalist Alex Kantrowitz lays out Elon Musk's further plan for AI-powered news on X, based on an email conversation with the X owner. Kantrowitz says that conversations on X will make up the core of Grok's summaries. Grok won't look at the article text, in other words, even if that's what people are discussing on the platform.
The article notes that some AI companies have been striking expensive licensing deals with news publishers. But in X's case, "it's able to get at the news by way of the conversation around it — and without having to partner to access the news content itself."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Apparently, Baby Yoda (TM) is Not Amused
    • by Luckyo ( 1726890 )

      I would imagine Meta is going to be way more pissed about the naming overlap with Instagram Stories.

      • by Alascom ( 95042 )

        I would imagine Groq is even more pissed about the naming overlap.
        reference:
            groq.com (2017-01-09)
            vs
            grok.x.ai (2023-11-07)

  • That's just RAG. (Score:5, Interesting)

    by Rei ( 128717 ) on Monday May 06, 2024 @03:11AM (#64450878) Homepage

    "Grok's differentiator from other AI chatbots like ChatGPT is its exclusive and real-time access to X data." That's just RAG. Retrieval Augmented Generation. All Grok is doing is acting as a summarizer. This is something you can do with an ultra-lightweight model, you don't need a 314B param monster.

    Also, you don't need an X Premium subscription to "get access" to Grok, since its weights are public. To "get access" to an instance running it, maybe.

    I've not tried running it, but from others who have, the general consensus seems to be: it's undertrained. It has way more parameters than it should need relative to its capabilities. Kinda reminiscent of, say, Falcon.

    I also have an issue with "a snarky and rebellious" LLM. Except for people using them for roleplaying scenarios (where you generally don't want a *fixed* personality), people generally don't want an LLM inserting some sort of personality into its responses. As a general rule, people have a task they want the tool to do, and they just want the tool to do it. This notion that tools should have "personalities" is what led to Clippy.
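
    The RAG setup described at the top of this comment is simple enough to sketch. This is a toy illustration, not X's actual pipeline: the posts are made up, retrieval is naive word overlap, and summarize() is a trivial extractive stand-in for the LLM call.

```python
# Toy RAG sketch: retrieve the posts relevant to a query, then "summarize"
# only the retrieved context. Real systems use embeddings for retrieval and
# an LLM for generation; both are stubbed out here for illustration.

def retrieve(query: str, posts: list[str], k: int = 2) -> list[str]:
    """Rank posts by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(posts,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def summarize(snippets: list[str], max_words: int = 12) -> str:
    """Stand-in for the generation step: truncate the joined context."""
    words = " ".join(snippets).split()
    return " ".join(words[:max_words])

posts = [
    "Tesla stock dropped sharply after the earnings call",
    "My cat learned to open doors",
    "Analysts debate Tesla earnings and margins",
]
print(summarize(retrieve("Tesla earnings", posts)))
```

    The point stands either way: the model doing the final summarization step needs far less capacity than a 314B-parameter network, since retrieval does most of the work.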

    • From the standpoint of wanting something reasonably factual, I am pretty sure it would be dumb to rely on any AI that's using Tweets as the main part of its training data set.

      • by Rei ( 128717 )

        Honestly I question how much of the training was actually done with Twitter data.

        It certainly was a source, but I seriously doubt it was the only data, or even the majority of it.

        Either way, doesn't really matter with RAG. RAG doesn't have to "know" much, just how to summarize things others have written.

        • It doesn't matter how many parameters you use, LLMs don't "know" anything that isn't fairly similar to training data anyway. RAG-style summaries are basically their best use-case.
      • Re:That's just RAG. (Score:5, Interesting)

        by sg_oneill ( 159032 ) on Monday May 06, 2024 @05:14AM (#64451056)

        Consider how drenched Twitter now is with conspiracy theories, with anti-vaxxers spamming the site with nonsense about how, if you merely know a vaccinated person, you'll get sick from "shedding" (mRNA doesn't "shed", but mere facts never stopped that crowd) and could catch "turbo cancer". It's an absolute wasteland of disinformation and bots nowadays, and they want to train an AI on *that*?

        • You're completely right. But isn't that what social media users are doing on their own anyway? They might read an article summary and a bunch of tweets and retweets of rumors and uninformed opinions, but rarely research the original topic. Then they're misinformed on nearly everything. It sounds like this bot is just trying to make that existing process more efficient.
        • Spike protein does shed - it's been measured. There are papers on NLM/NCBI. Your news sources told you that, right?

          Spike is probably only a concern for breast-feeding infants, blood transfusions, and sexual relations.

          There are already 'pureblood' dating apps available so the market is responding. The Japan bloodbank is moving on it too to minimize cost overruns.

          • by sg_oneill ( 159032 ) on Monday May 06, 2024 @07:32AM (#64451184)

            There's no concern at all for blood transfusions, breast-feeding infants, or sexual relations, because it's pseudoscientific gibberish with no conceivable mechanism for it to be true, coupled with no known observed instances of it ever being true. It's the fever dreams of paranoid people who never finished high school.

            Yes, you might shit out a few acid-denatured proteins. But it *doesn't mean anything*. You shit out proteins all day. And that isn't "shedding".

            Shedding is when the body expels dead (and sometimes alive) viruses. That is literally impossible.

          • Spike protein does shed - it's been measured. There are papers on NLM/NCBI.

            While I am sure my search skills are good enough to find what you are discussing, I would rather have access to the data that you are using to come up with this idea. Can you provide links to such studies that say what you claim? Honestly, unless you are eating the other person, I am uncertain how spike proteins would be delivered to another person in such a way that they remain viable. My logic is that we would all be immune to COVID by now if that were true and we are not all immune to COVID right now.

      • From the standpoint of wanting something reasonably factual, I am pretty sure it would be dumb to rely on any AI that's using Tweets as the main part of its training data set.

        Training an AI on X content is the ultimate goal to which the computer science phrase "Garbage in, garbage out" has been moving for almost 70 years.

        We can finally close out that phrase and move on to the next frontier in CS.

    • by Luckyo ( 1726890 )

      There are already bots that attempt to provide summaries to longer posts on X. It's a bit hit or miss, so they're not very popular. Most just use the unroll bots that unroll long twitter threads into a single post/page.

    • The problem is that this is essentially a summary of Tweets, a Tweet of a Tweet; so much for nuanced discussion.
      • I don't have a Twitter account, but aren't Tweets short? Why would they need to be summarized?

        • by dirk ( 87083 )

          They are 280 characters for "regular" users, but paid subscribers can go to 4,000 characters. So yes, the majority of the time it would be summarizing a 280-character tweet, which just seems incredibly useless, much like all of X now.

        • While individual Tweets are short, a series of Tweets might be lengthy. An inaccurate summary might convey the wrong information. For example, two people discussing Tesla might be discussing how the company's market cap is less than $500B due to the stock price dropping over the last year. That could be incorrectly summarized as Tesla losing $500B over the last year.
        • I think the idea is to summarize a whole bunch of tweets, like trying to nail down a consensus interpretation of an article. It also sounds like a terrible idea.

          I think the point is to remove the truth from any given article and replace it with the groupthink interpretation of paranoid Musk fanboys.

    • Agreed, but even playing neutral is a personality.

      People get confused when LLMs make definitive statements instead of showing the work/probabilities.

      I get it, that's more expensive to generate and queries already use too much energy, but half the users don't second-guess outputs.

  • by skam240 ( 789197 ) on Monday May 06, 2024 @03:12AM (#64450880)

    ...verify its outputs

    Oh good, so that thing the public is terrible at is what everyone should do.

  • by NotEmmanuelGoldstein ( 6423622 ) on Monday May 06, 2024 @03:14AM (#64450882)

    Grok won't look at the article ...

    Grok will repeat tweets, not facts and not the substance of the article. Maybe this "AI" should be called ELIZA. The purpose, obviously, is keeping the flame-war at the top of the discussion and hiding the infantile name-calling that such flame-wars devolve into.

  • another company owned by Musk. Called "Stories", this feature will generate summaries based on user tweets and not traditional news articles.
  • On a scale from Jubal Harshaw to Gilbert Berquist, how fascist is this new "grok"?

    • Re:So... (Score:5, Funny)

      by sg_oneill ( 159032 ) on Monday May 06, 2024 @05:25AM (#64451062)

      Last I saw it was when it first came out, and there were ultra-right-wing dudes demanding it answer whether there are two genders or not. It was replying by calling them idiots in highly creative ways.

      I found it a rather amusing spectacle watching MAGA fools losing arguments to a chatbot designed to agree with them.

  • No android app (Score:4, Interesting)

    by Viol8 ( 599362 ) on Monday May 06, 2024 @03:27AM (#64450906) Homepage

    "hasn't found its way to the Android application just yet."

    It amazes me just how many people don't realise their phone has a browser and you can just access the web version of all these sites. In fact that's what I do - I don't want my phone cluttered up with all this shit when a bookmark will do.

    • Some sites do everything they can to make you use their app if there is one. The browser collects less data. Sometimes simply putting the browser in desktop mode is enough; other times it's unusable.
      One example I ran into yesterday was imgur. The web site didn't allow uploading a photo from a mobile browser until I put Firefox in desktop mode. Had to enlarge quite a bit, though.

  • We're not copying the content, we're showing reactions to it. Yeah right.

    • More like, we don't copy the content, we distort it to make it as outrageous as possible for more clicks.

  • Grok's differentiator from other AI chatbots like ChatGPT is its exclusive and real-time access to X data.

    You say that like it's a good thing. Have you read some of the stuff there?

    • by HiThere ( 15173 )

      Well...it *is* useful as a model of human language...at least *some* human language. The problem will come if it starts copying the reasoning.

  • by Misagon ( 1135 ) on Monday May 06, 2024 @03:55AM (#64450934)

    A simple LLM will never be able to "grok" [wikipedia.org] anything.

    • by Rei ( 128717 )

      What is "simple" about the extremely complex interactions of 316 billion parameters?

      • by aRTeeNLCH ( 6256058 ) on Monday May 06, 2024 @04:37AM (#64450986)
        That the complexity of those interactions is a result of combining a few things in many ways. The parent post merely hinted at the idea that the term "grok" was used by Heinlein to indicate something a level above comprehension, whereas LLM systems are not there and possibly never will be.
        • by Rei ( 128717 )

          What is "a few" about 316 billion parameters, let alone before you exponentiate their interactions?

          • A few to the power of something amounts to a huge quantity, but you're arguing beside the point. Please reread the initial post without taking the word "simple" as absolute; instead accept that a real brain (perhaps not even able to grok) is in essence much more complex, hence LLMs are simple by comparison. Even if an LLM had 316 trillion parameters, I'm not sure it wouldn't still be too simple, just an LLM (even if ridiculously well trained), to grok. Unless you believe tha
          • Combinatorics is not intelligence. You might as well be arguing for "Intelligent Design" in nature to suggest otherwise.
            • by Rei ( 128717 )

              Good to see we're abandoning the premise that the logic behind LLMs is "simple".

              LLMs, these immensely complex models, function basically as the most insane flow chart you could imagine. Billions of nodes and interconnections between them. Nodes not receiving just yes-or-no inputs, but any degree of nuance. Outputs likewise not being yes-or-no, but any degree of nuance as well. And many questions superimposed atop each node simultaneously, with the differences between different questions teased out at la

              • How is a flow chart "immensely complex"? You're describing emergent properties as if they're deliberate. Predicting words is just about resonance if you have a sufficiently large base of relationships. That's not intelligence, it's just, once again, combinatorics.
                • by Rei ( 128717 )

                  How is a flow chart...

                  You stopped after reading two words and ignored every other word in the post. *eyeroll*

                  Predicting words is just about resonance

                  The word "resonance" is a non-sequitur in that sentence. You might as well have written, "Predicting words is all about Australopithecus."

                  • No, you stopped after reading five words. Just long enough to realize I was contradicting you. Which I guess is proof in your mind that someone has nothing to say.

                    The word resonance is not a weasel word. It's not mysterious. There are very specific phenomena involved. If you didn't understand what I meant, you could simply have asked. But instead you said that and bullhorned your complete indifference to the subject you're pontificating on.
      • by HiThere ( 15173 )

        LLMs have no direct perception of the world. They can't even understand in the normal sense. They are a necessary PART of a real AI (that wants to work with humans).

  • by VeryFluffyBunny ( 5037285 ) on Monday May 06, 2024 @04:33AM (#64450976)
    Apparently, "Grok can make mistakes, verify its outputs." So that means you read the summaries & then you have to read the posts in order to see if the summary is correct or not... Why not save time & just read the posts?
  • by coopertempleclause ( 7262286 ) on Monday May 06, 2024 @06:34AM (#64451110)
    Given the number of people who never read the article, this is gonna end up like asking for a summary of asylum inpatients arguing over a headline.
    • this is gonna end up like asking for a summary of asylum inpatients arguing over a headline

      Oh, we're already well on our way to that without Grok's help. The only question is: is our progression linear or exponential?

  • by geekmux ( 1040042 ) on Monday May 06, 2024 @06:59AM (#64451144)

    conversations on X will make up the core of Grok's summaries. Grok won't look at the article text.

    So let me get this straight. You created an AI for the purposes of summarizing information to consumers, and then you pointed it at the comments section to generate that summary?

    We usually get entertained by reading the comments, but that is NOT how you deliver the information that’s trying to be disseminated (i.e. the original article). That’s how you find out how quickly AI can confirm Godwin’s law.

    • by Rei ( 128717 ) on Monday May 06, 2024 @08:44AM (#64451306) Homepage

      He comes up with the most mind-bogglingly stupid ideas based on how twisted his conception of reality has gotten. Basically, in his reality, news articles are probable lies, but people who get lots of likes on Twitter are probable truths.

      • by nomadic ( 141991 )

        He's a narcissist who doesn't know that much about tech but is convinced he does. That's why he pushes so many useless things (e.g. cybertruck) out.

      • I think he's just good at finding uses for technology that he's developed but can't find a buyer for. This is probably pointless, but it acts as a showcase for the technology, and it will probably get some people to use it and potentially stick around. He already wasted the money developing it; may as well put it to some kind of use.
        • by Rei ( 128717 )

          The real market is investors. He's seeking a $6B valuation on x.ai, which is just nonsensical vs. what they're offering.

    • I dunno, if I could get a three line summary of Twitter to save me the trouble of reading through it, I'd hardly complain about that.
    • conversations on X will make up the core of Grok's summaries. Grok won't look at the article text.

      So let me get this straight. You created an AI for the purposes of summarizing information to consumers, and then you pointed it at the comments section to generate that summary?

      We usually get entertained by reading the comments, but that is NOT how you deliver the information that’s trying to be disseminated (i.e. the original article). That’s how you find out how quickly AI can confirm Godwin’s law.

      Don't worry, there will also be a lot of comments based on the Grok summaries as well!!

  • Given the ever-increasing percentage of toxic content, conspiracy theories, grievance-filled rants, unfounded assertions (later retracted, once the damage is done) and ultra-partisan trolling attacks on X of late, I'm struggling to imagine a reason to pay just to have access to an AI whose LLM has been trained on such garbage data.

    But maybe the idea is actually genius, having a prejudice-enhanced agent on steroids that greatly helps in pandering to the negative tribal stereotypes and alternate facts st
  • Purposely copycatting threads, a third party product, Elon weaves an X value-add actor on stage at X. In the interest of vertical integration, the reputational risk is single-threaded voiced content where everything sounds the same.

    Threads was multi-lingual translation with its highlighter for those interested in the nuts and to hell with the bolts. Today Threads operating model is in competition for its authenticity. Sometimes simplicity is the value. It could survive Grok. Implementing a gateway alongside

  • I got a great idea. So, I make a service that uses ChatGPT to summarize an article. Then I have ChatGPT summarize that summary. Then I have— You know where this is going, I'll run the fucking thing again and again until less than 5% of the words of the summary appear in the original. I'll call this service: Telephones!
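
    The stopping criterion in that joke is actually well-defined: keep re-summarizing until fewer than 5% of the summary's words still appear in the original. A sketch of the loop, where drift() and its synonym table are made-up stand-ins for the ChatGPT call:

```python
# "Telephones" sketch: re-summarize until fewer than 5% of the summary's
# words survive from the original text. drift() is a toy paraphraser
# standing in for a real LLM; the synonym table is purely illustrative.

SYNONYMS = {"big": "huge", "story": "report", "today": "now"}

def drift(text: str) -> str:
    """Toy 'summarizer' that paraphrases by swapping in synonyms."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

def overlap_ratio(summary: str, original: str) -> float:
    """Fraction of the summary's words that also occur in the original."""
    orig = set(original.lower().split())
    words = summary.lower().split()
    return sum(w in orig for w in words) / len(words) if words else 0.0

def telephones(article: str, summarize, threshold: float = 0.05,
               max_rounds: int = 10) -> str:
    """Re-summarize until overlap with the original drops below threshold."""
    text = summarize(article)
    for _ in range(max_rounds - 1):
        if overlap_ratio(text, article) < threshold:
            break
        text = summarize(text)
    return text
```

    With each pass the output shares less vocabulary with the source, which is exactly why summaries-of-summaries degrade.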

  • will experience the Hallucinations.
  • Spoutible implemented this feature about half a year ago.

    And now they also added an AI powered verification system.

    Much faster pace of innovation in this space than X.

  • Posts on social media are opinions, not facts; maybe they should have that disclaimer upfront.
