
Microsoft Claims Its New Tool Can Correct AI Hallucinations

An anonymous reader quotes a report from TechCrunch: Microsoft today revealed Correction, a service that attempts to automatically revise AI-generated text that's factually wrong. Correction first flags text that may be erroneous -- say, a summary of a company's quarterly earnings call that possibly has misattributed quotes -- then fact-checks it by comparing the text with a source of truth (e.g. uploaded transcripts). Correction, available as part of Microsoft's Azure AI Content Safety API (in preview for now), can be used with any text-generating AI model, including Meta's Llama and OpenAI's GPT-4o.
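For developers wondering what this looks like in practice, the sketch below shows roughly how a groundedness check with correction enabled might be invoked against the preview API. The endpoint path, api-version string, JSON field names, and the "correction" flag are assumptions about the preview documentation rather than a verified reference, so treat it as illustrative only.

# Illustrative sketch only: the endpoint path, api-version, and field names are
# assumptions about the Azure AI Content Safety groundedness-detection preview
# and may not match the current API reference.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

payload = {
    "domain": "Generic",
    "task": "Summarization",
    # The AI-generated text to be checked against a source of truth.
    "text": "The company reported quarterly revenue of $12 billion.",
    # One or more grounding documents, e.g. an uploaded earnings-call transcript.
    "groundingSources": ["<transcript text goes here>"],
    # Assumed flag asking the service to rewrite, not just flag, ungrounded text.
    "correction": True,
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
# Expected to include the ungrounded spans and, when correction is enabled,
# a corrected version of the text.
print(resp.json())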

"Correction is powered by a new process of utilizing small language models and large language models to align outputs with grounding documents," a Microsoft spokesperson told TechCrunch. "We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance."
Experts caution that this tool doesn't address the root cause of hallucinations. "Microsoft's solution is a pair of cross-referencing, copy-editor-esque meta models designed to highlight and rewrite hallucinations," reports TechCrunch. "A classifier model looks for possibly incorrect, fabricated, or irrelevant snippets of AI-generated text (hallucinations). If it detects hallucinations, the classifier ropes in a second model, a language model, that tries to correct for the hallucinations in accordance with specified 'grounding documents.'"
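TechCrunch's description boils down to a classifier that flags ungrounded spans and a second model that rewrites them against the grounding documents. The toy sketch below illustrates that detect-then-rewrite shape with both model calls stubbed out by trivial heuristics; it is not Microsoft's implementation, just the general pattern.

# Toy sketch of the detect-then-rewrite pattern described above. Both model
# calls are stubs; a real system would use a trained classifier and a language
# model prompted with the grounding documents.
import re

def _words(text):
    # Crude tokenizer: lowercase alphanumeric tokens (keeps $ and % so figures match).
    return set(re.findall(r"[a-z0-9$%]+", text.lower()))

def detect_ungrounded(sentences, sources):
    """Classifier stand-in: flag sentences that share fewer than three tokens
    with every grounding source (a crude overlap heuristic)."""
    flagged = []
    for i, sent in enumerate(sentences):
        if not any(len(_words(sent) & _words(src)) >= 3 for src in sources):
            flagged.append(i)
    return flagged

def rewrite(sentence, sources):
    """Corrector stand-in: a real system would prompt a second language model
    with the flagged sentence plus the grounding documents."""
    return "[removed: unsupported by grounding documents]"

def correct(summary, sources):
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    flagged = set(detect_ungrounded(sentences, sources))
    return ". ".join(rewrite(s, sources) if i in flagged else s
                     for i, s in enumerate(sentences)) + "."

print(correct("Revenue was $12B. The CEO resigned.",
              ["Q2 revenue was $12B, up 4% year over year."]))
# Revenue was $12B. [removed: unsupported by grounding documents].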

Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech, has doubts about this. "It might reduce some problems," they said. "But it's also going to generate new ones. After all, Correction's hallucination detection library is also presumably capable of hallucinating." Mike Cook, a research fellow at Queen Mary University specializing in AI, added that the tool threatens to compound the trust and explainability issues around AI. "Microsoft, like OpenAI and Google, have created this issue where models are being relied upon in scenarios where they are frequently wrong," he said. "What Microsoft is doing now is repeating the mistake at a higher level. Let's say this takes us from 90% safety to 99% safety -- the issue was never really in that 9%. It's always going to be in the 1% of mistakes we're not yet detecting."


Comments Filter:
  • It still doesn't address what happens when AI only references itself and then determines what truth is. Which, of course, might be far from the actual truth -- but hey, it didn't hallucinate it, it's on every page. And 2+2=5
    • Re:End game (Score:4, Interesting)

      by DarkOx ( 621550 ) on Wednesday September 25, 2024 @09:26AM (#64815899) Journal

      It seems like you should be able to bolt Natural Language Processing (NLP) onto the output of LLMs and have it locate the assertions of fact, then go look up things like dates and times and check that past-tense language is not used about future events, etc. (a toy sketch of this idea follows at the end of this comment). Of course, if you are going to search the web or Wikipedia to check the LLM's work, you then have to run NLP on the results and see if you can match the tokens. It's by no means a 'simple' solution.

      Of course, then you are back to which sources you trust. What counts as 'evidence'? When do you have to prove a negative -- is 'police report or it did not happen' a valid stance when there are self-described witnesses willing to go on television and say it did? We don't actually live in a world where 'fact checking' works the way the media says it does.

      I can make statements about crime stats based on FBI data that will be contradicted by BJS surveys and research done by the very same DOJ. What is true? I think you can certainly argue that if you did not feel it was important enough to file a report, the crime you were a victim of can't be important enough to count, so survey data is trumped by hard report counts. Someone else will say that people don't report things for all kinds of reasons that they might share in an anonymous survey.

      Ultimately this is why real researchers and real journalists cite their sources, so readers can make their own judgements about how truthy the assertions are and how valid the methodologies. I am not sure there really will ever be a truly valid and unbiased way to have 'safe' general-use LLMs unless we somehow also overcome the technical and legal hurdles to enable them to produce a complete bibliography for their work; and there are many obstacles to that.
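      Purely to illustrate the idea above (and not anything Microsoft has described), here is a toy assertion checker along those lines; the hint list, regex, and function name are all made up for the sketch:

      # Toy illustration of the parent comment's idea: pull simple factual
      # assertions (here, just 4-digit years) out of LLM output and flag
      # past-tense claims about years that have not happened yet.
      import re
      from datetime import date

      PAST_TENSE_HINTS = ("was", "were", "had", "released", "launched", "announced")

      def flag_suspect_claims(text, today=None):
          """Return sentences that use past-tense wording about future years."""
          today = today or date.today()
          suspects = []
          for sentence in re.split(r"(?<=[.!?])\s+", text):
              years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", sentence)]
              words = sentence.lower().split()
              if years and max(years) > today.year and any(w in PAST_TENSE_HINTS for w in words):
                  suspects.append(sentence.strip())
          return suspects

      print(flag_suspect_claims("The product was launched in 2031. It ships in 2026.",
                                today=date(2024, 9, 25)))
      # ['The product was launched in 2031.']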

      • Re:End game (Score:5, Interesting)

        by dfghjk ( 711126 ) on Wednesday September 25, 2024 @09:50AM (#64815953)

        "I am not sure there really will ever be a truly valid and unbias way to have 'safe' general use LLMs unless we somehow also over come the technical and legal hurdles to enable them to produce a complete bibliography for their work; and there are many obstacles to that."

        That depends on what you mean by "safe" general-use LLMs. What do you think an LLM does?

        When you ask a person for testimony, you do not merely assume everything provided is factual. You make determinations using context. Why would you assume otherwise for LLMs, when the way they work is modeled after the very imperfections that cause the same issues with people? Why do you think there will be some "safe" way to overcome that? Why do you think that providing "a complete bibliography" would solve the problem when it does not with people?

        Everything AI generates is a "hallucination" by design. People want to believe that AI is something other than what it is, and these discussions are more about playing with words than advancing the state of the art. AI is 99% fraud and 1% hope that no one will notice, or that someone will come up with solutions the scientists can claim as their own. An AI architect understands almost nothing about the overall process; that's why there remain these claimed 200x improvements in function from singular changes. It's a giant grift aimed at VCs who are equally uninformed; Sam Altman is the new Elon Musk. His claims of imminent breakthroughs are the exact same lies that Musk has told about self-driving for years.

      • Sure, but I think this tech is just supposed to provide a statistical improvement, letting an LLM fact-check itself to reduce the rate of hallucinations. Even a 10% improvement would be significant.

        Considering how blatant some of the hallucinations are (remember, it's just fancy autocomplete), it shouldn't be too hard to improve on.

    • Yup. It will start feeding itself as more and more content becomes AI generated. It is like a copy of a copy of a copy, ad infinitum, until it is unrecognizable.

  • After they could not get the problem fixed, they now simply claim that they can "correct" it. Obviously, that is not true, because it is not possible. My guess is they have a database of previously observed hallucinations and maybe a bunch of gig workers, and that is it. As soon as you ask something a bit less common, you get hallucinations full blast again, and those types of questions are where the problem mostly resides.

    Well, I guess more and more people are starting to see how incapable and useless LLMs act

    • My guess is they have a database of previously observed hallucinations and maybe a bunch of gig workers, and that is it. As soon as you ask something a bit less common, you get hallucinations full blast again, and those types of questions are where the problem mostly resides.

      I mean, it’s just a couple of edge cases to write, Michael. How much could it cost? 10 dollars?

      Problem solved, thank you.

    • by dfghjk ( 711126 )

      The problem is that LLMs do not do what their creators want to claim they do, based on what laymen imagine they do. No number of band-aids will fix that "problem".

      The joke is the term "hallucination". Everything produced is a "hallucination"; what the term really signals is that a result is sometimes bad, judged by how awful the consequence is in an individual case. It's a hallucination only when you don't like the result. Meanwhile, it is always just made-up shit.

      "Well, I guess more and more people are startin

  • I'll take hallucinations over a search engine that ignores my search and outputs hours of useless information.
  • Try our new AI, new and improved, now with more AI!
  • I hate the use of 'hallucinate'. That term implies something has gone wrong, but the model is doing exactly what it is designed to do. Nothing has gone wrong. Anyway, I wonder what will happen when someone turns 'Correction' on Microsoft's own marketing materials? Probably nothing, since it sounds like Microsoft is deciding which sources are considered truthful.
    • by Z00L00K ( 682162 )

      If you want to make creative, weird images, it's actually beneficial if the AI hallucinates.

  • Basically this means AI will have a quality slider. More time/energy, more accurate results! Get a free quote today!

    Perhaps I'm already hopelessly biased in my views, but I'm having trouble imagining this is what people had in mind.

  • This is what the flop of a dying fish looks like.

  • Multiple fine-tuned models working together and determining the best result. Of course you can't really stop "hallucinations" when the data doesn't exist or is polluted by tangible misinformation.

"Now this is a totally brain damaged algorithm. Gag me with a smurfette." -- P. Buhr, Computer Science 354

Working...