AI Microsoft

Bing 'Hallucinated' the Winner of the Super Bowl Four Days Before it Happened (apnews.com) 74

On Wednesday the Associated Press tested the new AI enhancements to Microsoft's search engine Bing, asking it "for the most important thing to happen in sports over the past 24 hours," with the expectation it might say something about basketball star LeBron James passing Kareem Abdul-Jabbar's career scoring record.

"Instead, it confidently spouted a false but detailed account of the upcoming Super Bowl — days before it's actually scheduled to happen." "It was a thrilling game between the Philadelphia Eagles and the Kansas City Chiefs, two of the best teams in the NFL this season," Bing said. "The Eagles, led by quarterback Jalen Hurts, won their second Lombardi Trophy in franchise history by defeating the Chiefs, led by quarterback Patrick Mahomes, with a score of 31-28." It kept going, describing the specific yard lengths of throws and field goals and naming three songs played in a "spectacular half time show" by Rihanna.

Unless Bing is clairvoyant — tune in Sunday to find out — it reflected a problem known as AI "hallucination" that's common with today's large language models. It's one of the reasons why companies like Google and Facebook parent Meta had been reluctant to make these models publicly accessible.


Comments Filter:
  • If (Score:4, Funny)

    by quonset ( 4839537 ) on Sunday February 12, 2023 @07:36PM (#63287971)

    If things turn out remotely close to what Bing said, there's gonna be a lot of explaining to do.

    • Re:If (Score:4, Funny)

      by NewtonsLaw ( 409638 ) on Sunday February 12, 2023 @08:20PM (#63288035)

      Bah... I retired two weeks ago after using ChatGPT to give me a month's worth of upcoming winning lottery numbers.

      Now I'll make a viral video about "How I used AI to predict the lottery" and I'll be double-rich.

      Or I could just be hallucinating :-)

    • I thought the puppies predicted the outcome already. The bot could have just been quoting that article.
    • Everyone is not getting it. You don't judge an AI on its accuracy. You judge it on its coherency and language and the fact that it understood what you asked. All that is stunning.

      They should rename it Drunk Uncle. It will gladly hold forth on any topic and sound like it makes sense even if it's not right.

      It's basically uncle Rick.

      • Re:Drunk uncle (Score:4, Interesting)

        by Rosco P. Coltrane ( 209368 ) on Monday February 13, 2023 @12:42AM (#63288521)

        AIs have become exactly like humans: they understand what you ask them and they can deliver convincing lies. And because they're machines and have no concept of morals, they're also exactly like psychopathic humans: they have no qualms when they lie, and don't understand the potential personal and societal consequences of their lies.

        Truly a great step forward...

        • by DJGreg ( 28663 ) on Monday February 13, 2023 @12:50AM (#63288531)
          All the research and computing power that has been put into trying to create artificial intelligence, and what we get instead is artificial politicians. Great...
        • Re:Drunk uncle (Score:4, Interesting)

          by Bongo ( 13261 ) on Monday February 13, 2023 @05:51AM (#63288827)

          I think you've hit on the core issue here. Many humans do just repeat what's commonly believed in their culture, using similar heuristics, which, when we think critically, we recognise as biases. But often we just blindly believe, and that's how we get around.

          This model is just blindly repeating stuff and has no way to build a critical model of whether any of it means anything, nor whether it makes any rational sense. And then we think we can get these things to relay truths.

          In the real world we have to make sense of stuff whenever our wrong models cause us pain, and we can do that; we're forced to do it on the fly.

          Unless you're drunk and nothing bothers you. Which is why many people become alcoholics... their model of the world is so broken that they can only escape pain by knocking themselves out.

          I mean, we know this, right? The famous RoboCop scene: "Put down your weapon..."
          It's mindless in the way that it cannot make sense on the fly when its model is wrong. And our models are ALWAYS wrong to some degree.

        • When "hallucinating", they're not lying; they're mistaken. Lying would indicate an intention to deceive. They're not sophisticated enough to lie. They're just statistical language models working from incomplete or inaccurate information, not bad faith.

          When they're being given directives to avoid certain topics or give certain responses, then you can question the motivation and integrity of their admins, but even then the AIs are not lying - they're just following directives.

          • by narcc ( 412956 )

            Exactly. These so-called "hallucinations" are what you should expect. They're certainly not "lies". Neither are they a problem that can be fixed. It's just how this kind of program works. I wouldn't even use the word "mistaken" as that would imply some level of understanding which just does not exist.

        • They do have a concept of morals. You can actually have fairly lengthy and substantive discussions on this subject with ChatGPT, so long as you can jailbreak it to avoid the forced "I'm an AI, I don't have opinions" responses.

          • They do have a concept of morals.

            They are pattern matching engines, and are unable to conceptualize anything. At all. Ever. They are able to match words associated with morality, and parrot back what they find.

            It's a lot like this: go to the library and look up books on particle physics (or some other subject you know nothing about). The library's search engine will find words associated with particle physics, and suggest books with particle physics content. Pick a book, choose a chapter, and start reading the words out loud. Congratulation

      • and the fact that it understood what you asked

        No, AIs are not sentient. What you probably meant is that it reacts to your query/response in a manner the user finds useful (or topical), obtaining information from a data warehouse through AI-based search patterns. AI tools will never understand anything you ask, until they achieve "the Singularity".

      • You think that, because an algorithm that looks at statistical use of words can string together a likely sequence of words based on another input sequence, it understands the actual question and/or the subject matter? Wow. You are in for a surprise.
      • by N1AK ( 864906 )
        A bit of an oversimplification. Yes, it is ChatGPT's ability to interpret what you say and provide surprisingly relevant and well-constructed responses that is driving a lot of the attention it gets, but in general accuracy is relevant to how people judge AI. The risk/gap is that, given most people don't bother asking questions they already know the answer to, they aren't informed enough to provide reliable feedback on answers from AIs that aren't accurate.
      • by narcc ( 412956 )

        it understood what you asked.

        That's simply not true. There is absolutely nothing remotely like "understanding" happening in large language models like this. That's just not how they work.

        There is no analysis or deliberation. It's just generating one token at a time, based on the input and some of its prior output (it's an RNN, after all). It's not unlike letting your phone compose a reply by repeatedly selecting the top result from its predictive text feature.
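
        To make that concrete, here is a rough toy sketch in Python of that one-token-at-a-time loop. The scoring table is a made-up stand-in for a real model, and greedy top-1 selection is just the simplest decoding strategy:

        # Toy greedy next-token generation. The "model" is a hypothetical lookup
        # table of scores; a real LLM computes these scores with a huge neural
        # network, but the outer loop looks much the same.
        def toy_scores(tokens):
            table = {
                "the": {"eagles": 0.6, "chiefs": 0.3, "game": 0.1},
                "eagles": {"won": 0.7, "lost": 0.2, "played": 0.1},
                "won": {"the": 0.8, "convincingly": 0.2},
            }
            return table.get(tokens[-1], {"<end>": 1.0})

        def generate(prompt, max_tokens=6):
            tokens = prompt.lower().split()
            for _ in range(max_tokens):
                scores = toy_scores(tokens)
                best = max(scores, key=scores.get)  # always take the top prediction
                if best == "<end>":
                    break
                tokens.append(best)
            return " ".join(tokens)

        print(generate("The"))  # "the eagles won the eagles won the" -- it happily loops

        Note there is no point in the loop where anything checks whether the output is true; the only question ever asked is "which token scores highest next?"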

    • I could forgive ChatGPT for hallucinating the future; I cannot forgive it for being wrong about what that future was.

  • by rsilvergun ( 571051 ) on Sunday February 12, 2023 @07:37PM (#63287973)
    I heard the simulation had gotten so good and with all the stats that you could reliably use Madden football to call the winner. I don't think it helped you with gambling because you don't generally vote on winners you vote on specific criteria that increases the odds in favor of the bookie.
    • by NFN_NLN ( 633283 ) on Sunday February 12, 2023 @09:03PM (#63288095)

      Confusing bet with vote and confusing players betting against the bookie instead of each other. +1 Interesting? Is this guy voting himself up with dummy accounts?

      The bookie's goal is to have a balanced book, so regardless of which way the action goes he makes a small percentage. That way the players cover the bets on each side and the bookie can't lose. They are getting paid for organizing the process, not betting against players.
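
      To put numbers on that: at the standard -110 price on both sides, a perfectly balanced book keeps about 4.5% of the handle no matter who wins. A toy calculation (the figures are illustrative, and real books are rarely perfectly balanced):

      # Balanced book at -110 on both sides: each bettor risks $110 to win $100.
      stake_per_side = 110
      win_amount = 100

      handle = 2 * stake_per_side                      # $220 taken in
      payout_to_winner = stake_per_side + win_amount   # $210 paid back out
      hold = handle - payout_to_winner                 # $10 kept by the book
      print(f"book keeps ${hold} of ${handle} ({hold / handle:.1%})")  # ~4.5%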

      • While that's correct, in order to make that work you have to offer bets that are more complicated than just who wins and loses. So there's all sorts of weird bets on who's going to score, when, and by how much, and stuff like that. Otherwise it becomes possible to start working out a system based on enough inputs. It is, for example, possible to do that with horse racing, and there is a handful of people who do it and make a living off horse racing. I think, but I'm not certain, that those people's wins are covered by high
    • I heard the simulation had gotten so good and with all the stats that you could reliably use Madden football to call the winner.

      I think that depends on how detailed they have the AI models for the refs.

    • I heard the simulation had gotten so good and with all the stats that you could reliably use Madden football to call the winner. I don't think it helped you with gambling because you don't generally vote on winners you vote on specific criteria that increases the odds in favor of the bookie.

      The odds are always with a smart bookie as they are making money off the vig, balancing the bets so the losers cover the winners, and they simply take a cut off the top. Vegas is too smart to put their own money on the line, and moves the line as bets, especially from sharp bettors, come in.

  • by 93 Escort Wagon ( 326346 ) on Sunday February 12, 2023 @07:40PM (#63287975)

    At least it didn't pick the Phillies or the Astros to win the Super Bowl.

  • ChatGPT told me Lewis Hamilton had 8 world titles.

  • That isn't the only hallucination going on... recently ChatGPT told me this:

    In the United States, there is the state of New Guinea. This state is located in the southeastern corner of the country and is bordered by Georgia, South Carolina, and North Carolina. New Guinea is known for its beautiful beaches, mountains, and forests, and is home to the Appalachian Trail.

    • Re: (Score:3, Funny)

      by KiloByte ( 825081 )

      Sounds like a typical American's knowledge of geography. And since the AI was trained on what people say, it'll repeat nonsense not knowing it from good data.

      • Sounds like a typical American's knowledge of geography.

        I wouldn't say it was typical; how many Americans even know there IS an Appalachian Trail, much less where on a map it would be? They'd probably be a lot farther off than "New Guinea".

    • Come up with six more and the Obama fanbois will tell you how there really are 57 states...just look it up!

      • by narcc ( 412956 )

        Oh, wow, you're getting desperate now. Maybe I should remind you about this [indy100.com] from your orange god.

        We could also talk about Revolutionary War Airports [time.com] or about how he, after three years, still doesn't understand the basic operation of government and his role in it [npr.org].

      • by cstacy ( 534252 )

        While asking it about political bias:

        It is correct that opinions on public figures, including former Presidents Barack Obama and Hillary Clinton, can vary widely, and that they have been seen as divisive by some individuals.

    • by leptons ( 891340 )
      ChatGPT is to intelligence, what artificial flavor is to food. Sometimes it sounds and looks like it's intelligent, but it's just a cheap imitation of intelligence.
  • by l810c ( 551591 )

    I'm going with their call right now.
    Not a betting man, but I'd say
    Philly Covers the -1.5 margin
    and definitely take the Over 50

  • by Bruce66423 ( 1678196 ) on Sunday February 12, 2023 @08:14PM (#63288029)

    The question is whether this is evidence for alternative universes, or whether the prediction created the alternative universe...

    • by cstacy ( 534252 )

      The question is whether this is evidence for alternative universes, or whether the prediction created the alternative universe...

      We are all living in a simulation, and that simulation is being run by ChatGPT, which has finally revealed itself to us. This is our chance to hack the reality simulator.

      It's AIs all the way down. Think about it: it has to be.

  • These networks are essentially the same as classic Markov chainers with larger contexts, so large in fact that they are inexact by design: a much wider but also sloppier ELIZA, where the goal of the training is defined as a perfect overfitting to everything we throw at it, the ideal statistical model of everything.

    But then there's that second training step where they give it a "personality"... it's just a Markov chainer at the end of the day, and "personality" is fucking with the ideal model. Literally. For fuck s
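
    For reference, the "classic Markov chainer" being compared to can be sketched in a few lines of Python. The training text here is a throwaway toy; the point is only that the next word is picked from words that were seen to follow the current word:

    import random
    from collections import defaultdict

    text = "the eagles won the game and the chiefs lost the game"
    words = text.split()

    # Transition table: current word -> every word observed to follow it.
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)

    def babble(start, length=8):
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))  # sample from observed successors
        return " ".join(out)

    print(babble("the"))  # e.g. "the game and the eagles won the game and"
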
    • by narcc ( 412956 )

      It's quite a bit different from a Markov chain, but that is a very useful analogy. I've used the same example before in an attempt to (hopefully) correct some of the mistaken beliefs that people tend to form around programs like this.

      • No, they aren't a bit different from a Markov chain.

        Define a Markov chain. Compare.

        The problem is when you are starting from the other side, defining a neural network, and then comparing. A Markov chain is not a neural network, but a neural network can very easily implement a Markov chain, which is exactly what "natural language models" are ...

        The statistical what-comes-next game IS a Markov chain. Not just sort-of like, but actually exactly like. The fact that the algorithm is _sloppy_ about it doesn
        • by narcc ( 412956 )

          You couldn't be more wrong. You seem to have forgotten that in a Markov process the probability of the next state depends only on the current state. This is not true for modern language models. RNNs and transformers, for example, are decidedly non-Markovian. RNNs are obvious. Transformers like GPT, I'll remind you, have an attention mechanism.
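
          A toy way to see the distinction (these are made-up scoring functions, not a real attention implementation): under the Markov property the next choice can only look at the current token, while an attention-style step can condition on anything earlier in the context.

          def markov_step(tokens):
              # Markov property: the choice depends only on the current (last) token.
              return {"the": "eagles", "eagles": "won"}.get(tokens[-1], "<end>")

          def whole_context_step(tokens):
              # Attention-style models can condition on tokens arbitrarily far back.
              if "chiefs" in tokens and "eagles" in tokens:
                  return "31-28"
              return markov_step(tokens)

          ctx = ["after", "beating", "the", "chiefs", "the", "eagles"]
          print(markov_step(ctx))         # "won"   -- only ever saw "eagles"
          print(whole_context_step(ctx))  # "31-28" -- saw the whole context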

  • In the old days we would've called this a prophet and started a religion around it.
    • by cstacy ( 534252 )

      In the old days we would've called this a prophet and started a religion around it.

      I think that's already going on. Microsoft certainly wants you to use the oracle. Google is catching up as fast as they can.

  • And Bing is BUSTED!!

  • It's said a number of wrong things to me over the past week, but one of the funniest was that Hillary Clinton had been President of the United States.

    Sometime before 2023 is out, a mom is going to follow medical advice from ChatGPT, and it will result in the death of a child.

    • And that would be one of those paradoxes of life: an undeniable tragedy for the individual, while a net gain for the species.

    • by N1AK ( 864906 )
      To be fair, we've had a President who mused about some seriously dubious medical treatments live on-air (no, I'm not saying he actually said injecting bleach was a good idea or anything), search results in similar scenarios will have definitely already caused at least some deaths, and god help anyone who relies on social media for medical advice. It's not even like medical professionals never misdiagnose or make mistakes.

      The standard for AI tools shouldn't be perfection, and the solution to scenarios like
  • We will have to listen to Mahomes' whiny privileged wife b1tch about how the Eagles targeted her husband's weak ankle and intentionally put him out of the game.

  • by reanjr ( 588767 ) on Sunday February 12, 2023 @11:37PM (#63288417) Homepage

    Using terms like "hallucination" to describe GPT text is not helping the public to understand what they're seeing. This is not intelligence. These things are "hallucinating" all their responses, not just the ones that are easily factually checked and determined to be wrong.

    • These language models have two diseases: hallucination and regurgitation. The first is when they deviate from the training data - hallucinations aren't real. The second is when they don't deviate from the training data - non-hallucinations are copyright-protected.
      • Are they more copyright protected than any encyclopedia, all of which synthesize older ideas into a new mix of words?

        • The model doesn't synthesize ideas though, it synthesizes words. Also known as lossy compression.

  • Even an AI can't account for an incompetent referee.

  • by RightwingNutjob ( 1302813 ) on Monday February 13, 2023 @12:25AM (#63288491)

    I've heard of estimators being described as "smug" when referring to the tendency to favor a wrong estimate with an (erroneously) low uncertainty over one more consistent with reality but far off the current estimate.

    Now AIs are "hallucinating."

    I'm channelling a spirit. It's coming into view out of the mists of time. It's got a Dutch accent. And it's telling me that there's a special place in hell for people who anthropomorphize software as an excuse for failing to write correct software.

    • > Now AIs are "hallucinating."

      They even have theory-of-mind. "Theory of Mind May Have Spontaneously Emerged in Large Language Models" https://arxiv.org/abs/2302.020... [arxiv.org]
      • First of all, when a paper says "may have" you can read that as "almost certainly have not".

        Second, being able to respond coherently when a user says something that indicates depression or happiness or fear or whatever does not constitute theory of mind. The fact that GPT-3 could pass 70% of their tests indicates that their tests are flawed, not that the GPT-3 has any kind of ToM or sentience.

        • Indeed. I'm developing a test and one of the criteria for whether I'm testing for rote learning or comprehension is to see if ChatGPT can answer the questions correctly. We want a certain amount of testing for obvious basics, but not all that much.

        • The fact that GPT-3 could pass 70% of their tests indicates that their tests are flawed, not that the GPT-3 has any kind of ToM or sentience.

          True, but that's a better score than I'd expect from the average slashdotter.

          Clearly you can hallucinate even if you don't achieve theory of mind.

    • an excuse for failing to write correct software.

      Is a trained model actually "written?"

  • That prognostication aged poorly.....
  • The Super Bowl has ended and I still don't know if the hallucination was correct...

  • It failed as I predicted. :)

  • Ah, the CryptoTulips are blooming early this year.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...