Microsoft AI

Microsoft Takes Down a String of Embarrassing Travel Articles Created With 'Algorithmic Techniques' (businessinsider.com)

Microsoft took down a string of articles published by "Microsoft Travel" last week that included a bizarre recommendation for visitors to Ottawa to visit the Ottawa Food Bank and to "consider going into it on an empty stomach." From a report: The now-deleted article that included that recommendation -- "Headed to Ottawa? Here's what you shouldn't miss!" -- went viral after writer Paris Marx shared it as an example of an AI flop. The online chatter about the article, and the clearly offensive nature of the food bank recommendation, prompted Microsoft to issue a statement. The statement blamed a human.

"This article has been removed and we have identified that the issue was due to human error," a Microsoft spokesperson said. "The article was not published by an unsupervised AI. We combine the power of technology with the experience of content editors to surface stories. In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system. We are working to ensure this type of content isn't posted in future." It wasn't the AI that was the problem, it was the human. There was a "content editor" and they made a mistake. We all make mistakes, right? I might be more persuaded by that stance if that article, however egregious it was, were the only one. In fact, it was not. There were at least a handful of articles that made equally absurd if less offensive travel recommendations.

  • Sounds about (Score:5, Insightful)

    by ArchieBunker ( 132337 ) on Monday August 21, 2023 @03:08PM (#63786008)

    As useful as answers.microsoft.com

    That's on my personal blacklist for appearing in search results and never once delivering an answer that worked.

    • They're all canned responses from Microsoft MVPs

    • by ac22 ( 7754550 )

      That website is astonishingly unhelpful. Watch in amazement as the "volunteer mods" lead someone through a long-winded, pointless diagnostic script that has a 0% success rate.

      • If you need to fix a broken trackpad, the first thing you have to do is a "clean boot," of course.

        It won't fix anything but there are 67,000 search results for "clean boot" site:answers.microsoft.com

  • The answer to the question they imply is pretty clear on this one: the machine is flawed, but responsibility ultimately rests with the human operator whose entire job was to weed out that sort of flawed content. It doesn't matter if there was one article or a handful; any way you look at it, the human failed to manage the AI properly.

    More interesting than "who's at fault" is the question of "why did the fault happen?" I suspect it's one of two situations. Either the human operator got lazy and stopped doing their job, possibly because the AI was so good that they grew complacent... or the human operator was completely overwhelmed by an incredible volume of AI generated BS and this stuff slipped through the cracks as they were busy eliminating the truly bizarre stuff. Either way, it says something interesting about the state of these AIs!

    • by omnichad ( 1198475 ) on Monday August 21, 2023 @03:22PM (#63786038) Homepage

      You actually think there was a human review process? This was shoveled content of the lowest quality, generated in bulk. They probably spot-checked one or two before letting it run wild. Everyone forgets this is just a predictive text tool that makes convincing-looking text, not necessarily content that makes sense.

      There is no "proper" way to manage this other than to read every single thing it produces, and even then you'd only catch the glaringly obvious problems. If you actually go and fact-check it, you might as well just hire human writers.

      • I wish more people would read Gulliver's Travels. LLMs are essentially a real version of a device that was described in the story as a satirical literary device criticizing academia of the time. It's real "Hey, guys! We invented the Terror Cube from the beloved sci-fi novel Don't Build the Terror Cube!" energy.

    • Either way, it says something interesting about the state of these AIs!

      Nothing that hasn't been brought up a million times. The probable root causes are that the employee was part of a system which encourages productivity over failure prevention, that the employee was overworked, or that the employee was underqualified.
    • by g01d4 ( 888748 )

      completely overwhelmed by an incredible volume of AI generated BS

      I doubt it. I suspect this is similar to the self-driving car with a human monitor that hit a pedestrian a while back, only with less dire consequences here. Whether or not an "incredible volume" is input to the human filter, ideally the person who generated the prompt is doing the filtering.

    • by Anonymous Coward

      The answer to the question they imply is pretty clear on this one: the machine is flawed, but responsibility ultimately rests with the human operator whose entire job was to weed out that sort of flawed content.

      But what is the purpose of A.I. generated articles in the first place? A lot of people don't want to openly admit it, but the **only** purpose of A.I. generated articles is to save money by not having to pay human writers.

      But, if every article has to be reviewed by a human anyway, what's the point?

    • The answer to the question they imply is pretty clear on this one: the machine is flawed, but responsibility ultimately rests with the human operator whose entire job was to weed out that sort of flawed content.

      Therein lies the problem with LLM-generated content.

      Its output lies somewhere on a scale from blatantly wrong through subtly wrong to surprisingly accurate. The only way to tell is to have a human read it all, word for word, looking not only for egregious errors or visibly insulting material but also for things that aren't obvious. Is it actually faster to have a human fact-check, tone-check, and quality-check an article than it is to have one write it in the first place?

    • You're missing a third, more probable option:

      The human gets paid on the number of articles approved. The less review they do, the more they approve. The humans do very much care about their incentive, which isn't the quality of the published work. They probably got fired over this and replaced with someone with the same incentive structure, who will discover the previous employee's work method and resume a similar pattern of review. Microsoft gets to say they changed things, their bottom line doesn't change but they get
  • by byronivs ( 1626319 ) on Monday August 21, 2023 @03:22PM (#63786040) Journal
    We change the language so as not to poison the well. It's only "AI" when it works; this way we don't upset the most holy of VC and growth avenues. This stuffy old broken algorithm did it, not the fancy new AI. You can keep the money flowing to the hype.
    • The article was not published by an unsupervised AI.

      Note the weasel words. Any time you see something like this, it means that you just have to remove one adjective to get a true statement. The article was published by a supervised AI.

      We combine the power of technology with the experience of content editors

      the power of technology here is likely a large language model, and the content editors' experience consists of doing the same thing the week before.

      In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system

      In THIS case. Most of the wording was previously generated by AI and is just being recycled in this case through algorithmic techniques. The others were probably generated by a large language model.

      • Most of the wording was previously generated by AI and is just being recycled in this case through algorithmic techniques. The others were probably generated by a large language model.

        Psst, LLMs are algorithmic.

  • It's hard to imagine that a professional, well-compensated, experienced editor wouldn't notice those mistakes.

    Some companies seem to think that AI-generated copy will:

    1) Generate large amounts of copy very quickly
    2) Do it cheaply with as little oversight as possible
    3) Allow you to fire those expensive editors and journalists

    Whatever humans still remain at "Microsoft Travel" may well be rookies/outsourced people getting paid peanuts. Or experienced staff being flooded with impossible amounts of work being au

    • Fast, cheap, or good. Pick 2. 1-3 are all fast and cheap. Not good. And if there's no money in being good, they'll be mediocre more cheaply than anyone else. They'll just find a way to weed out the bad parts of the content in a separate automated fashion.

      • by ac22 ( 7754550 )

        The only way I can see this ending is if Google starts heavily penalizing crappy AI-generated content, and the sites that make use of it. That would have Microsoft Travel and pals scrambling to take down all their new articles.

          • That would be pretty easy. Just rank conciseness higher (a rough sketch of the idea appears at the end of this thread). It will get rid of a lot of clickbait and bad journalism too. Extra verbosity is often intentionally added to get you to spend longer and see more ads.

          • by ac22 ( 7754550 )

            "Saying nothing, stylishly" is the hallmark of AI-generated text. Directionless waffle, beautifully worded. Once you've seen it, you can't unsee it.

    • Sadly, the humans responsible for sacking the AI have also been sacked.
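
    The "just rank conciseness higher" suggestion a few replies up is easy to picture as a small re-ranking step. Below is a minimal, purely hypothetical Python sketch of that idea; it is not Google's algorithm or any real search engine's, and every name in it (Doc, base_relevance, information_density, verbosity_weight) is invented for illustration. A real system would need far better conciseness signals than word repetition.

      from dataclasses import dataclass

      @dataclass
      class Doc:
          url: str
          text: str
          base_relevance: float  # whatever relevance score already exists upstream

      def information_density(text: str) -> float:
          """Crude proxy for conciseness: share of distinct words in the text."""
          words = text.lower().split()
          if not words:
              return 0.0
          return len(set(words)) / len(words)

      def rerank(docs: list[Doc], verbosity_weight: float = 0.3) -> list[Doc]:
          """Boost concise pages so padded, repetitive filler loses ground."""
          return sorted(
              docs,
              key=lambda d: d.base_relevance
                            + verbosity_weight * information_density(d.text),
              reverse=True,
          )

    Under this toy scoring, a page that repeats the same filler phrases to pad its length gets a lower information density than a tight article saying the same thing, so the kind of padded, ad-stuffed clickbait described above would drop in the results.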

  • by Geoffrey.landis ( 926948 ) on Monday August 21, 2023 @03:25PM (#63786050) Homepage

    I'm a little horrified to see that Microsoft is shielding the AI from blame by throwing the humans under the bus.

    • Makes more sense to me to blame the AI. It's not like it will care, after all. My best guess is that the human editor just got so bored with reading crap that they gave up.
    • Don't worry. There's an AI driving the bus.

    • I'm a little horrified to see that Microsoft is shielding the AI from blame by throwing the humans under the bus.

      This is not surprising at all. Microsoft (and many others) is hoping that eventually A.I. will become good enough that they can completely replace all those annoying humans who keep insisting on being paid for the work that they do.

      And so they will keep pretending that the problems are all the result of "human error", not a flawed computer program, because that buys them more time to keep trying to refine A.I. so that it actually works reliably.

    • I'm a little horrified to see that Microsoft is shielding the AI from blame by throwing the humans under the bus.

      It's not just Microsoft. There's a tech-bro culture of believing the machines are infallible and humans are garbage producers. Every conversation about self-driving cars, even when they crash while self-driving, is filled with "still better than stupid humans" comments. I can't understand where that mentality comes from, especially among people who should have enough computer knowledge to understand that computers just help humans make mistakes much more quickly than they do naturally.

      But hey, Microsoft is pa

    • by gweihir ( 88907 )

      MS is about making money, not about delivering a quality product of any kind. They will do whatever it takes to keep up the pretense that their product does not suck, because frankly they know it sucks badly, but being honest does not even enter their minds.

    • I'm a little horrified to see that Microsoft is shielding the AI from blame by throwing the humans under the bus.

      For all we know there was no human involvement in this at all; although MS blames a human, there is no evidence one was involved.

      Unless they name the person who supposedly made the mistake and that person admits it, it's just a BS statement from MS that doesn't actually affect any human working there.

  • I would bet good money that the "human reviewers" were using AI to review the articles.
  • by Murdoch5 ( 1563847 ) on Monday August 21, 2023 @03:53PM (#63786156) Homepage
    Microsoft choosing to blame the human instead of the AI, or both the AI and the human, just shows how rushed AI is. I was recently commenting on driverless AI, saying that I don't trust it because I don't think it's ready, and here we have an example where AI couldn't even compose a well-written article.

    The AI messed up because, despite having "intelligence" in its name, it lacks any intelligence. Sure, the human should have caught the problem, but why did the AI generate the problem in the first place? The second issue I have is how long this human was given to review the work. Were they given a full day to review the article, or did they get 10 minutes to surface-skim it because 10 more were sitting in the queue?

    AI is not a magic spell you can just speak into existence and walk away from. At best, it's a dimwitted helper, but let's not put that dimwit into a senior engineering role and put the humans in the junior roles. AI is in fact not a senior-level employee, it's not a junior-level employee, and it hasn't even graduated from the primary school for simpletons. It's a toy, a cool toy, but a toy, and not something you should rely on blindly or assume is without fault.
  • by drwho ( 4190 ) on Monday August 21, 2023 @04:11PM (#63786218) Homepage Journal

    It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.

  • It's like how literally every single time a natural disaster happens anywhere, clickbait articles speculating via headlines whether climate change is to blame sprout like mushrooms.

    The most recognizable attribute of those articles is the fact that they ramble on for several paragraphs, and might cite a few seemingly tangential facts or quote individuals for whom the fact that they were quoted in the first place seems more newsworthy than the quote itself. Then, they just fall off a literary cl

  • Oh, I see. I was trying to research 17th-century Reims, France yesterday, and the Maison d'Arret (a jail) had zero reviews, and when I tried to find one pre-1807, I got an AI that insisted I log in.
