Microsoft Claims Its New Tool Can Correct AI Hallucinations

An anonymous reader quotes a report from TechCrunch: Microsoft today revealed Correction, a service that attempts to automatically revise AI-generated text that's factually wrong. Correction first flags text that may be erroneous -- say, a summary of a company's quarterly earnings call that possibly has misattributed quotes -- then fact-checks it by comparing the text with a source of truth (e.g. uploaded transcripts). Correction, available as part of Microsoft's Azure AI Content Safety API (in preview for now), can be used with any text-generating AI model, including Meta's Llama and OpenAI's GPT-4o.

"Correction is powered by a new process of utilizing small language models and large language models to align outputs with grounding documents," a Microsoft spokesperson told TechCrunch. "We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance."
Experts caution that this tool doesn't address the root cause of hallucinations. "Microsoft's solution is a pair of cross-referencing, copy-editor-esque meta models designed to highlight and rewrite hallucinations," reports TechCrunch. "A classifier model looks for possibly incorrect, fabricated, or irrelevant snippets of AI-generated text (hallucinations). If it detects hallucinations, the classifier ropes in a second model, a language model, that tries to correct for the hallucinations in accordance with specified 'grounding documents.'"

Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech, has doubts about this. "It might reduce some problems," they said, "But it's also going to generate new ones. After all, Correction's hallucination detection library is also presumably capable of hallucinating." Mike Cook, a research fellow at Queen Mary University specializing in AI, added that the tool threatens to compound the trust and explainability issues around AI. "Microsoft, like OpenAI and Google, have created this issue where models are being relied upon in scenarios where they are frequently wrong," he said. "What Microsoft is doing now is repeating the mistake at a higher level. Let's say this takes us from 90% safety to 99% safety -- the issue was never really in that 9%. It's always going to be in the 1% of mistakes we're not yet detecting."
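
As an illustration of the detect-then-rewrite pattern TechCrunch describes, here is a toy, stdlib-only sketch; it is not Microsoft's implementation and does not call the Azure AI Content Safety API. The flag_ungrounded helper and the Contoso example are hypothetical, and a crude regex heuristic stands in for the trained classifier model.

    # Toy sketch of a detect-then-rewrite grounding check -- not Microsoft's
    # implementation and not the Azure AI Content Safety API.
    # The "classifier" here is a crude stand-in: it flags sentences containing
    # numbers or capitalized names that never appear in the grounding text.
    import re

    def flag_ungrounded(draft: str, grounding: str) -> list[str]:
        flagged = []
        for sentence in re.split(r"(?<=[.!?])\s+", draft):
            # Cheap-to-check "claims": numbers and capitalized tokens.
            claims = re.findall(r"\b(?:\d[\d.,%]*|[A-Z][a-z]+)\b", sentence)
            if any(claim not in grounding for claim in claims):
                flagged.append(sentence)
        return flagged

    transcript = "Contoso reported revenue of $4.1 billion, said CEO Jane Smith."
    summary = "Contoso reported revenue of $5.3 billion, said CEO John Doe."

    for span in flag_ungrounded(summary, transcript):
        # Correction would hand a flagged span to a second language model to be
        # rewritten against the grounding document; here we only surface it.
        print("Possibly ungrounded:", span)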

Comments Filter:
  • End game (Score:5, Funny)

    by Ol Olsoc ( 1175323 ) on Wednesday September 25, 2024 @08:08AM (#64815841)
    It still doesn't address what happens when AI only references itself and then determines what truth is. Which, of course, might be very far from the actual truth, but hey, it didn't hallucinate it; it's on every page. And 2+2=5
    • Re:End game (Score:5, Interesting)

      by DarkOx ( 621550 ) on Wednesday September 25, 2024 @08:26AM (#64815899) Journal

      It seems like you should be able to bolt natural language processing (NLP) onto the output of LLMs and have it locate the assertions of fact, then go look up things like dates and times and check that past-tense language is not used about future events, etc. Of course, if you are going to search the web or Wikipedia to check the LLM's work, you then have to NLP the results and see if you can match the tokens. It's by no means a 'simple' solution. (A rough sketch of the date-checking part is at the end of this comment.)

      Of course, then you are back to: what sources do you trust? What counts as 'evidence'? When do you have to prove a negative? Is 'police reports or it did not happen' a valid stance when there are self-described witnesses willing to go on television and say it did? We don't actually live in a world where 'fact checking' works the way the media says it does.

      I can make statements about crime stats based on FBI data that will be contradicted by BJS surveys and research done by the very same DOJ. What is true? You certainly can argue that if you did not feel it was important enough to file a report, the crime you were a victim of can't be important enough to count, so survey data is trumped by hard report counts. Someone else will say that people don't report things for all kinds of reasons they might still share in an anonymous survey.

      Ultimately, this is why real researchers and real journalists cite their sources, so readers can make their own judgments about how truth-y their assertions are and how valid their methodologies. I am not sure there will ever be a truly valid and unbiased way to have 'safe' general-use LLMs unless we somehow also overcome the technical and legal hurdles to enable them to produce a complete bibliography for their work; and there are many obstacles to that.
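
      A rough, stdlib-only sketch of the date-checking part of that idea, with plain regexes standing in for a real NLP pipeline (the verb list, ISO-date pattern, and example text are all made up for illustration): it flags sentences that use past-tense wording about a date that is still in the future.

      import re
      from datetime import date

      # Crude stand-ins for a real NLP pipeline: a small past-tense verb list
      # and ISO-format dates only, just to illustrate the shape of the check.
      PAST_TENSE = re.compile(r"\b(was|were|happened|occurred|announced|released)\b", re.I)
      ISO_DATE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

      def suspicious_sentences(text: str, today: date) -> list[str]:
          hits = []
          for sentence in re.split(r"(?<=[.!?])\s+", text):
              m = ISO_DATE.search(sentence)
              if m and PAST_TENSE.search(sentence):
                  mentioned = date(int(m[1]), int(m[2]), int(m[3]))
                  if mentioned > today:   # past-tense claim about a future date
                      hits.append(sentence)
          return hits

      llm_output = "The product was released on 2031-03-01. A follow-up is planned for 2031-06-01."
      print(suspicious_sentences(llm_output, date(2024, 9, 25)))
      # ['The product was released on 2031-03-01.']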

      • Re:End game (Score:5, Interesting)

        by dfghjk ( 711126 ) on Wednesday September 25, 2024 @08:50AM (#64815953)

        "I am not sure there really will ever be a truly valid and unbias way to have 'safe' general use LLMs unless we somehow also over come the technical and legal hurdles to enable them to produce a complete bibliography for their work; and there are many obstacles to that."

        That depends on what you mean by "safe general use LLMs". What do you think an LLM does?

        When you ask a person for testimony, you do not merely assume everything provided is factual. You make determinations using context. Why would you assume otherwise for LLMs, when the way they work is modeled after the very imperfections that cause the same issues with people? Why do you think there will be some "safe" way to overcome that? Why do you think that providing "a complete bibliography" would solve the problem when it does not with people?

        Everything AI generates is a "hallucination" by design. People want to believe that AI is something other than what it is, and these discussions are more about playing with words than advancing the state of the art. AI is 99% fraud and 1% hope that no one will notice or that someone will come up with solutions that the scientists can claim as their own. An AI architect understands almost nothing about the overall process; that's why there are still these claimed 200x improvements in function from single changes. It's a giant grift aimed at VCs that are equally uninformed; Sam Altman is the new Elon Musk. His claims of imminent breakthroughs are the exact same lies that Musk has told about self-driving for years.

        • Re:End game (Score:5, Insightful)

          by GonzoPhysicist ( 1231558 ) on Wednesday September 25, 2024 @12:37PM (#64816587)

          Everything AI generates is a "hallucination" by design.

          Everything we perceive is a hallucination too, but it tends to correlate with reality well enough to get by.

          • Yes, a universal example of human hallucination is that we time-shift sounds to be simultaneous with things we see, even though reaction times to the two are different.

            Also, we edit our memories in real time. For example, you can edit a witness's memory with a single word change, e.g., "How fast was the car going when it bumped/smashed into the other car?"

          • Everything AI generates is a "hallucination" by design.

            Everything we perceive is a hallucination too, but it tends to correlate with reality well enough to get by.

            You need to define hallucination, because I think my definition is a bit different from yours.

            In general, it is a false perception of sensory experiences that seem real but are not. The tie-in with AI is that the results are presented as facts but may not even be close to a fact. A good, if extreme, example: when asked to generate a World War Two German soldier, the AI generated a dark-skinned man of African descent and an Asian female Wehrmacht soldier.

            Since I was on an Ohm's law tangent in anothe

        • I think "hallucination" is a bad term for it. But people use "hallucination" because they think of the AI as a rational person that is thinking. But AI is not thinking, and it is not hallucinating; it is just a very fancy and advanced form of pattern matching. For every one of the right answers you could, with immense difficulty, discover how it got there from the original training data. But for every hallucination you could do the same thing. An LLM is not thinking, it doesn't know what an annual report

        • Everything AI generates is a "hallucination" by design.

          I think that is a super-broad statement. If I were to ask for the examination of Ohm's Law and who conceptualized it, a correct answer wouldn't be a hallucination, merely reportage.

          Sam Altman is the new Elon Musk. His claims of imminent breakthroughs are the exact same lies that Musk has told about self-driving for years.

          And that, my friend, is not a hallucination, but a statement of fact! 8^)

      • Sure, but I think this tech is just supposed to provide a statistical improvement, letting an LLM fact-check itself to reduce the rate of hallucinations. Even a 10% improvement would be significant.

        Considering how blatant some of the hallucinations are (remember, it's just fancy autocomplete), it shouldn't be too hard to improve on.

      • > We don't actually live in a world where 'fact checking' works the way the media says it does.

        Or, as Nietzsche said, there are no facts, only interpretations.

      • I can make statements about crime stats based on FBI data that will be contradicted by BJS surveys and research done by the very same DOJ. What is true? You certainly can argue that if you did not feel it was important enough to file a report, the crime you were a victim of can't be important enough to count, so survey data is trumped by hard report counts. Someone else will say that people don't report things for all kinds of reasons they might still share in an anonymous survey.

        Ultimately, this is why real researchers and real journalists cite their sources, so readers can make their own judgments about how truth-y their assertions are and how valid their methodologies.

        Stick with me while I set this up; it is directly related to AI LLMs.

        That reminds me of a survey, self reported, where young GenZ men and women were asked about relationships. Some 60 percent of men claimed they were not in a relationship nor looking for one, and a non-matching percentage of women claimed they were in a relationship.

        Now there could have been many reasons - perhaps more women are involved in Lesbian relationships, or counting casual dating and sex as a relationship. Perhaps dissem

    • Yup. It will start feeding itself as more and more content becomes AI generated. It is like a copy of a copy of a copy, ad infinitum, until it is unrecognizable.

    • So query using AI, then query using the old fashioned way, and when they disagree with each other always go with the old fashioned results. Which sort of implies it's just wasted time to use the AI...

    • It doesn't solve a lot of problems. But it does make a better search engine.
      • It doesn't solve a lot of problems. But it does make a better search engine.

        I would hope so. Lately, almost every search I do shows the same recent article across the first several pages of results.

        Teh Intertoobz is badly broken.

  • After they could not get the problem fixed, they now simply claim that they can "correct" it. Obviously, that is not true, because it is not possible. My guess is they have a database of previously observed hallucinations and maybe a bunch of gig-workers, and that is it. As soon as you ask something a bit less common, you get hallucinations full blast again, and those types of questions are where the problem mostly resides.

    Well, I guess more and more people are starting to see how incapable and useless LLMs actually are for general application.

      My guess is they have a database of previously observed hallucinations and maybe a bunch of gig-workers, and that is it. As soon as you ask something a bit less common, you get hallucinations full blast again, and those types of questions are where the problem mostly resides.

      I mean, it’s just a couple of edge cases to write Michael, how much could it cost? 10 dollars?

      Problem solved, thank you.

    • by dfghjk ( 711126 ) on Wednesday September 25, 2024 @08:59AM (#64815971)

      The problem is that LLMs do not do what their creators want to claim they do, based on what laymen imagine they do. No number of band-aids will fix that "problem".

      The joke is the term "hallucination". Everything produced is a "hallucination"; the term merely implies that results are sometimes bad, judged by how awful the consequences are in individual cases. It's a hallucination only when you don't like the result. Meanwhile, it is always just made-up shit.

      "Well, I guess more and more people are starting to see how incapable and useless LLMs actually are for general application."

      Neural networks are certainly promising; the problem is that this drives an assumption that someone smart enough has learned how to apply them to solve problems, when that clearly is not the case. An LLM appears to be little more than a NN made huge; there needs to be a lot more. A brain is not merely a large NN. It should also be understood that brains often make "inferences" that are wrong. The largest issue is the giant mismatch between capability and expectation, caused, of course, by the greed of the Sam Altmans of the world.

      • by gweihir ( 88907 )

        The problem is that LLMs do not do what their creators want to claim they do, based on what laymen imagine they do. No number of band-aids will fix that "problem".

        Indeed. But a lot of people are in deep denial about that. LLMs have no insight. They are just somewhat good at guessing, but cannot verify their guess. And that cannot be fixed.

  • I'll take hallucinations over a search engine that ignores my search and outputs hours of useless information.
    • It's not useless information. It's commercial information, which interpreted means advertising. It's just not useful to you.
  • Try our new AI, new and improved, now with more AI!
  • I hate the use of 'hallucinate'. That term implies something has gone wrong, but the model is doing exactly what it is designed to do. Nothing has gone wrong. Anyway, I wonder what will happen when someone turns 'Correction' on Microsoft's own marketing materials? Probably nothing, since it sounds like Microsoft is deciding which sources are considered truthful.
    • by Z00L00K ( 682162 )

      If you want to make creative weird images it's actually beneficial if the AI hallucinates.

    • "Hallucinate" is just plain wrong. It means to see something that's not really there. LLMs don't see. An LLM is delusional (believes something that's not true).
    • What term do you prefer? "Outputs garbage?" "Lies?" "Produces nonsense?" Hallucinate is the term we use to indicate that the LLM produced output that is not correct.
  • Pay to Win! (Score:4, Interesting)

    by MobileTatsu-NJG ( 946591 ) on Wednesday September 25, 2024 @08:45AM (#64815941)

    Basically this means AI will have a quality slider. More time/energy, more accurate results! Get a free quote today!

    Perhaps I'm already hopelessly biased in my views but I'm having trouble imagining this is what people had in mind.

  • This is what the flop of a dying fish looks like.

  • Multiple fine-tuned models working together and determining the best result. Of course you can't really stop "hallucinations" when the data doesn't exist or is polluted by tangible misinformation.

  • by ImprovOmega ( 744717 ) on Wednesday September 25, 2024 @10:40AM (#64816265)
    If you set the watchdog software the task of finding problems in output, it's going to start hallucinating problems and flagging issues where they don't exist. It's a band-aid that's going to quickly soak through without fixing the underlying bleed. You have to think of AIs rather like dogs: they are eager to please and fulfill what you "want" within the parameters given. And if facts don't support that, they'll work around the facts to spit out the desired output. It's an inherent flaw in goal-based programs.
    It's a problem with humans too, but the correction for it is negative feedback. Doing something outside of parameters results in punishment or otherwise undesirable outcomes. Lie on a resume and possibly lose your job. Cheat on your taxes and get fined. We don't have a negative feedback system for AI, and that's probably the only way to correct it, but I don't know what such a process would even look like for AI. And I don't think any of the current generation of AI engineers do either.

    The trick, as with humans, is not to make it too strict, or else it just learns to be better at hiding. If you lock down a teenager too much, they just rebel and get REALLY good at hiding their lives from their parents. It's a delicate balance to discipline a child in such a way that they correct their behavior instead of just learning how to get away with it. We really need to bring psychologists into the discussion, but with a specialty in machine learning. A hard combination to find.
  • Generative adversarial networks are what allow the creation of super-detailed images. One network generates images, the other decides whether or not they are AI-generated; training is done when the second one can't tell. It sounds like this is doing a similar thing for fact-checking. It's a decent idea.
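
    For anyone who hasn't seen the setup described above, here is a bare-bones, textbook GAN training loop as a toy 1-D PyTorch sketch (unrelated to Microsoft's Correction; the network sizes and data are arbitrary): the discriminator learns to tell real samples from generated ones, and the generator is trained to fool it.

    import torch
    import torch.nn as nn

    # Generator maps noise to samples; discriminator outputs a real/fake logit.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data: N(3, 0.5)
        fake = G(torch.randn(64, 8))

        # Train the discriminator: real -> 1, generated -> 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator to make the discriminator call its samples real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(float(G(torch.randn(1000, 8)).mean()))   # should drift toward ~3.0
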
    • by narcc ( 412956 )

      GANs are old and busted. Diffusion models are the new hotness.

      It sounds like this is doing a similar thing for fact-checking.

      The only similarity I can see is that one model is being used to judge the output of another. You won't be able to improve the models together automatically like you would with a GAN, if that's what you were hoping. Remember that the only reason we can improve the discriminator in a GAN is because we already know which images are real and which are generated. Also, determining the subjective quality of an image is a very different problem than

  • by nospam007 ( 722110 ) * on Wednesday September 25, 2024 @11:58AM (#64816489)

    I spent an hour today trying to get ChatGPT 4o to give me working raw HTML links; it continuously gave me links pointing to "404, page not found" even when I told it to check them first before posting.

    The couple that actually worked pointed to anything but what I asked for.
    I tried 5 dozen times.
    Something is seriously broken.

    • by narcc ( 412956 )

      It's almost like it generates text probabilistically and doesn't actually understand anything...

      • 'It's almost like it generates text probabilistically and doesn't actually understand anything...'

        No need to understand, I asked for an internet search and posting of the found HTML links.
        It was incapable of doing it, no matter how many times I tried.
        It fabricated links that never existed.

        Weeks ago it worked flawlessly.

    • Oh man, after repeating your request again and again, did it never occur to you that maybe the issue is on your side?

      -- If you asked it to process your inputs, the problem might be an overly long list.

      -- But if you asked ChatGPT to share wisdom with you, then I'll upset you a bit: ChatGPT lacks wisdom. I'd rather never rely on its pre-trained knowledge and instead use it as a data-processing tool for my own inputs.
  • So, they have a hallucination generation algorithm, followed by a hallucination detection algorithm, followed by *another* hallucination generation algorithm and that is supposed to stop hallucinations?

    How?

    The end result is still the output of a system that generates hallucinations.

    I suppose you could do turtles all the way down and see what that would get you.

  • Some people fear AI but for the wrong reasons. Government wants to control it so it can bullshit the models into insisting that what it says is the truth rather than what is factually accurate.

  • 1. Open perplexity.ai.
    2. Paste all your text into Perplexity.
    3. Add the following prompt: "Thoroughly evaluate on correctness."

    30 secs and voila! It always shows accuracy close to 100%. Jokes aside, this is currently the best fact-checker on the market.

"Nuclear war can ruin your whole compile." -- Karl Lehenbauer

Working...