
Microsoft's New AI Mistakenly Identifies Photos, Ignores Hitler (mashable.com) 214

An anonymous reader writes: Microsoft's newest online AI, CaptionBot, tries to identify what's in an uploaded photo, using two recognition APIs recently released by Microsoft Cognitive Services for app developers: "Computer Vision" and "Emotion". But while Microsoft brags that its AI "can understand thousands of objects, as well as the relationships between them," bloggers are sharing funny examples of CaptionBot's many mistakes. While it correctly identified Bea Arthur, Ozzy Osbourne, Joan Jett, and a movie poster with Arnold Schwarzenegger, it mistakenly identified Gene Simmons of KISS as "a woman in a red jacket...sitting on a motorcycle," described a wedding dress as "a cat wearing a tie," mistook Michelle Obama for a cellphone, and described one man's Twitter avatar as "a close up of two giraffes near a tree."

But CNNMoney reports that the AI is apparently programmed to ignore all images of Hitler and other Nazi symbolism (as well as Osama bin Laden), reporting that Microsoft's AI "often came back with 'I really can't describe the picture' and a confused emoji. It did, however, identify other Nazi leaders like Joseph Mengele and Joseph Goebbels."
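For the curious, the flow the summary describes (submit a photo, get back a caption with a confidence score) can be sketched roughly like this. The endpoint URL, header name, and response shape below are assumptions for illustration, not taken from the article; the request is only built, never sent.

```python
# Minimal sketch of how an app might caption an image with the Cognitive
# Services "Computer Vision" describe endpoint. The URL, header name, and
# JSON shape here are assumptions for illustration; check the official API
# reference before relying on them.

def build_describe_request(image_url, api_key,
                           endpoint="https://api.projectoxford.ai/vision/v1.0/describe"):
    """Return the endpoint, headers, and JSON body a client would POST."""
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,  # Cognitive Services auth header
        "Content-Type": "application/json",
    }
    body = {"url": image_url}  # caption a publicly hosted image by URL
    return endpoint, headers, body

def pick_caption(response_json, min_confidence=0.5):
    """Pick the best caption, or a CaptionBot-style shrug below the cutoff."""
    captions = response_json.get("description", {}).get("captions", [])
    best = max(captions, key=lambda c: c["confidence"], default=None)
    if best is None or best["confidence"] < min_confidence:
        return "I really can't describe the picture"
    return best["text"]
```

The confidence cutoff is what produces the hedged phrasings ("I am not really confident, but...") quoted later in the thread.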



  • I'd say (Score:5, Funny)

    by bferrell ( 253291 ) on Saturday April 16, 2016 @10:32PM (#51925003) Homepage Journal

    They don't want another nazi-bot

    • Re:I'd say (Score:5, Funny)

      by Anonymous Coward on Saturday April 16, 2016 @11:06PM (#51925097)

      Computers are inherently racist because they don't understand what it's like being black.

      • I'm white, therefore I can not understand what it's like being black.

        Does that make me racist, a computer, or both?

          • If feminist speeches are any indicator, extrapolating from that would mean you're racist, a computer, and must stop eating melons.

          • Darn. I like melons.

            • Re: (Score:2, Insightful)

              by Opportunist ( 166417 )

              Sorry dude, it's the law. A white guy must be racist and any guy must be misogynist. Took me a while to get used to it but once you're accustomed to being a racist woman hater it's not that bad. I can't shave with a straight razor anymore 'cause I fear I might off that asshole, but that's a small price to pay to fit into the politically correct paradigm again.

              • Re:I'd say (Score:5, Insightful)

                by ultranova ( 717540 ) on Sunday April 17, 2016 @08:52AM (#51926435)

                Sorry dude, it's the law. A white guy must be racist and any guy must be misogynist.

                And yet I'm a white guy who doesn't seem to run into accusations of either racism or chauvinism. It makes me wonder if all the people who complain about being harassed by the politically correct hordes are, in fact, the innocent victims they try to present themselves as.

                Took me a while to get used to it but once you're accustomed to being a racist woman hater it's not that bad. I can't shave with a straight razor anymore 'cause I fear I might off that asshole, but that's a small price to pay to fit into the politically correct paradigm again.

                Your bravery is an inspiration to us all.

      • Except for IBM mainframes. [ibm.com] Support diversity, buy IBM mainframes!

        Oh, wait...

      • I have a black computer, you insensitive clod.

        Always wanted to have a black one work for me. Oddly, though, if the workload gets higher it just starts to hum. Wonder if I have to whip it to make it sing.

        (Hey, what, that was no racist joke. On a scale from black to white, this was Mexican. Tops)

        • by KGIII ( 973947 )

          > On a scale from black to white, this was Mexican.

          Does that mean you keep it clean? You know, maybe Spic-n-Span clean? I guess, maybe, if you had liquid cooling and there was some condensation then it could have a wet back?

          On a more serious note, if I were into putting bumper stickers on my cars (and I am not - not now, I did when I was younger and didn't care about keeping them) I'd seriously consider getting a bumper sticker that says, "Jesus is my gardener." Yeah, it'd piss off all sorts of people.

          • by Alomex ( 148003 )

            No, no, you must self-classify in a government document according to some arbitrary race classification and make that your identity. It makes this country of ours so much fairer and so unlike South Africa during Apartheid.

      • by Lumpy ( 12016 )

        But my computer is black. Black case, Black edition motherboard and black edition processor.

        I even installed http://www.blacklablinux.org/ [blacklablinux.org] to be sure it was completely black.

      • Well, older IDE hard drives did have a master/slave relationship.

    • by Trepidity ( 597 )

      Also reminds me of the Coca-Cola image generator that had a big blacklist [theatlantic.com] of words you couldn't put in the captions. Here the AI is writing the captions, but it seems to use a similar blacklist idea.

      • A big black list? Oh my. That sounds...interesting.
      • Is it me or would a blacklist for words that are associated with a racist regime be hilarious?

    • by Lorens ( 597774 )

      Of course they don't want another Tay, but the fun thing is that apparently it *does* recognize Hitler and then refuses to say anything at all about the picture. Otherwise the bot would just say "a portrait photo of a man", or "a man with a toothbrush mustache". Compare with the last example in TFA, where the bot takes a very cluttered image and somewhat correctly identifies it as not-exceedingly-happy people sitting at tables.

    • by KGIII ( 973947 )

      I did Nazi that coming...

      *is not proud of his behavior*

  • PhB.B.B.B.B.B (Score:5, Insightful)

    by Tablizer ( 95088 ) on Saturday April 16, 2016 @11:11PM (#51925119) Journal

    Microsoft's AI keeps embarrassing them. It's like they thought their corporate image problem from being a ham-handed OS monopoly wasn't big enough: they needed to automate gaffes.

    • Microsoft's AI keeps embarrassing them. It's like they thought their corporate image problem from being a ham-handed OS monopoly wasn't big enough: they needed to automate gaffes.

      Just think of it as the first steps of a new thing. It'll be interesting to see how this develops.

    • Re:PhB.B.B.B.B.B (Score:4, Insightful)

      by dbIII ( 701233 ) on Sunday April 17, 2016 @12:10AM (#51925253)

      Microsoft's AI keeps embarrassing them

      That's what an "A.I." made of lookup tables, or of pattern matching over a pile of data, does. I really cannot understand why they are putting this stuff forward as if it is ready to be more than just a more complicated "Eliza" toy.
      Use it to look stuff up or handle simple questions and answers: fine. Use it to have a conversation and expect perfect results: not a chance.

      • by MrL0G1C ( 867445 )

        This.

        I wish they'd stop calling this stuff A.I.; it's simple pattern recognition. There is no such thing as an AI that has any kind of comprehension, and it doesn't look like there will be any time soon.

        And this stuff will likely be driving cars and trucks. I want to know every detail of how it can fail beyond the obvious, i.e. that it can't tell the difference between a dress and a cat, or a face and two giraffes, etc.

        • It's fairly primitive at this point, of course, but please do explain how this is fundamentally different from how living things perceive visual information, and why this is not AI.

          • by MrL0G1C ( 867445 )

            We have a vastly superior knowledge of items in images and videos: we understand information about colours, materials, living things, the way physics interacts with these things, what's flammable, floatable, destructible, cheap or expensive, natural or man-made, and countless more attributes. We understand these things; we don't just attribute words to them without comprehending the meaning of those words.

            Look at Eliza bots, they basically talk meaningless gibberish, sometimes they get lucky like a human

        • I wish they'd stop calling this stuff A.I. it's simple pattern recognition.

          AFAIK, so is human intelligence, that's why you spent years being a useless drooling slob.

    • by westlake ( 615356 ) on Sunday April 17, 2016 @01:41AM (#51925477)

      Microsoft's AI keeps embarrassing them. It's like they thought their corporate image problem from being a ham-handed OS monopoly wasn't big enough: they needed to automate gaffes.

      It is trivially easy to get an instant mod-up on Slashdot by pointing to the Microsoft AI's occasional mistakes and not its successes. But most of the time Microsoft's AI seems to be getting it right. If you have something better, put it up where we can see it.

      • by phantomfive ( 622387 ) on Sunday April 17, 2016 @01:51AM (#51925505) Journal
        IBM - Watson, a computer that can win at Jeopardy
        Google - AlphaGo, a computer that wins at Go
        Microsoft - Tay, a racist chatbot

        Is there really a comparison? Even if Microsoft has some decent technology, they're definitely losing on the marketing front; they are making themselves look like dancing monkey cousins.
        • by kqc7011 ( 525426 )
          Will it disregard images of Buddhist temples with swastikas? Or any of the multiple images / uses of the swastika by others before the National Socialists?
        • by ffkom ( 3519199 )

          Even if Microsoft has some decent technology, they're definitely losing on the marketing front, they are making themselves look like dancing monkey cousins.

          That's because they are just that [youtube.com].

        • by __roo ( 86767 )

          I bet one of those AIs would pass a Turing test [wikipedia.org]—and not the one that can win at Jeopardy or beat a Go champion.

        • IBM - Watson, a computer that can win at Jeopardy
          Google - AlphaGo, a computer that wins at Go
          Microsoft - Tay, a racist chatbot

          Jeopardy is a trivia game.

          Key words and phrases to which you respond with a factoid. To be fun and playable for the audience the boundaries of this "universe" have to be quite small.

          Go is a game which is played with perfect information and clearly defined rules. It is a fascinating problem in its own right but it is not the same problem as recognizing a face or an object in a purely arbitrary setting.

          • Go is a game which is played with perfect information and clearly defined rules. It is a fascinating problem in its own right but it is not the same problem as recognizing a face or an object in a purely arbitrary setting.

            And yet the game took longer to solve than image recognition did.

        • IBM - Watson, a computer that can win at Jeopardy

          Google - AlphaGo, a computer that wins at Go

          Microsoft - Tay, a racist chatbot

          Is there really a comparison?

          Understanding natural language.
          Playing a logic game with prediction to gain a winning move.
          Interacting with humans and identifying images without external influence.

          You're right on one thing there is absolutely no comparison. The ability to do one has absolutely zero bearing on the ability to do another.

          • Interacting with humans is one of the easiest things to get a computer to do. Man, lay off the pot; it's affecting your brain.
            • No. Interacting with computers is one of the easiest things to get a human to do. The opposite is nothing more than regurgitating predetermined responses ... by a human.

              • Interacting with computers is one of the easiest things to get a human to do.

                Yes, that is exactly what I said.

  • I think the most important piece of information here is that AI just isn't ready for the big time yet. People are going to do and say all kinds of fucked up and bizarre things. People will try to have sex with anything. They'll try to convince their AI assistant to support genocide. They'll demand that it pretend to agree with them about things like that. They'll ask for information that's not available, they'll cuss and scream, they'll talk about things that seem completely off-topic. People will use puns,
    • by ByteSlicer ( 735276 ) on Sunday April 17, 2016 @04:04AM (#51925687)

      We're not there yet but this effort by Microsoft is, IMHO, as smart as a mouse.

      Mice are pretty smart; I'd argue that the current AIs are at an insect level of "intelligence".

      What's obvious from these results is that the AI has no idea what it's looking at. This is typical for a trained neural net: it finds the best matching pattern in an image and maps that to one of its output categories. It sees no difference between a random black-and-white blob and a penguin, so long as they match the pattern.

      A mouse, and a true AI, will have spatial understanding. It will (intuitively) know that the images represent objects in space, and will be able to recreate a coarse 3D model of what it sees. Then it will break the scene down into basic features and identify it based on those features. It might say: hey, these blobs remind me of a penguin, but it will never say that they *are* a penguin, because the blob is missing the beak and eyes and flippers and feet.

      Basically, what we have now are the neural nets we already had 50 years ago, only on much faster hardware, combined with a bot and a web search engine. It's ELIZA on steroids, but still a long, long way from actual intelligence.
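The "best matching pattern, mapped to an output category" behaviour described above can be caricatured as a nearest-template classifier. The templates and pixel vectors below are made up for illustration (real networks learn features rather than storing whole templates), but the failure mode is the same: whatever lands closest to a stored pattern gets that label, with no comprehension behind it.

```python
# Toy nearest-template "classifier": it labels whatever input is closest to a
# stored pattern, with no notion of what a penguin actually is. A random
# black-and-white blob that happens to resemble the penguin template gets
# labelled "penguin" just as confidently as a real penguin would.

TEMPLATES = {
    "penguin": [1, 1, 0, 0, 1, 1],  # hypothetical flattened b/w pixel patterns
    "giraffe": [0, 1, 1, 1, 1, 0],
}

def classify(pixels):
    def distance(template):
        return sum((p - t) ** 2 for p, t in zip(pixels, template))
    return min(TEMPLATES, key=lambda name: distance(TEMPLATES[name]))

print(classify([1, 1, 0, 0, 1, 0]))  # a "blob" near the penguin template -> penguin
```

Note the classifier has no way to say "this is nothing I know"; it always answers with its least-bad match, which is exactly how a wedding dress becomes "a cat wearing a tie".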

      • Mice are pretty smart, I'd argue that the current AIs are at insect level of "intelligence".

        Maybe... this just looks like a probabilistic classifier; insects are capable of more than that.

        Basically, what we have now are the neural nets we already had 50 years ago,

        That's going a little too far: 50 years ago people were experimenting with perceptrons. The networks we use now are more advanced than that in capabilities.

  • The most ridiculous might be what Microsoft's AI describes as "an operating system suitable for mission critical servers". Or maybe that was Microsoft Marketing, not Microsoft AI. Either way.

  • by gnasher719 ( 869701 ) on Sunday April 17, 2016 @01:48AM (#51925495)
    They don't have any problem identifying photos of Hitler as Hitler. The problem is false positives: If the software mistook the photo of some living person as Hitler, and that was somehow published, that person would not be happy, and might start a lawsuit.

    The problem is easily solved by telling the software: "if you think it is Hitler, say you don't recognise it". There was a case a while ago where some photo analysis software mistook a woman for a gorilla. Highly embarrassing for everyone involved.

    I would think that software makers would nowadays add precautions to make particularly embarrassing mistakes less likely. (Mistaking a gorilla for a woman is no big deal, the other way round it's very bad).
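The precaution described above (suppress the answer whenever the best guess lands on a sensitive name, so a false positive can never be published) can be sketched in a few lines. The blocklist contents and the confidence cutoff here are illustrative assumptions, not Microsoft's actual rules.

```python
# Sketch of the "if you think it is Hitler, say you don't recognise it" rule:
# after classification, any label on a sensitive blocklist is replaced with a
# non-answer. Names and the confidence cutoff are illustrative assumptions.

BLOCKLIST = {"adolf hitler", "osama bin laden"}

def safe_caption(label, confidence, min_confidence=0.5):
    if label.lower() in BLOCKLIST or confidence < min_confidence:
        return "I really can't describe the picture"
    return label
```

Note the check fires even at 99% confidence: the point is not doubt about the match, but that publishing a wrong match on these names is the one mistake the vendor can't afford.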
    • This is probably the best explanation for the whole mess.

      Should've thought of it myself. If anything makes no sense, it's probably legal-related.

    • Or, false positives or not, they just don't want to have anything to do with Hitler. Especially after Tay.

    • by IMightB ( 533307 )

      Personally, I don't think that identifying Hitler is a problem... per se. It's adding context that I would discourage. For example, if it said "Hey! That's Adolf Hitler, I think he's a swell guy," I'd have a problem with that. On the other hand, if they're blocking Hitler images, what are they going to do with Charlie Chaplin?

    • ... unless you are a gorilla.

    • a) Statisticians formally discuss these as type 1 and type 2 errors. The chance of rejecting a true hypothesis, and of failing to reject a false one, is typically taken into account at the beginning of your analysis.
      b) This is a non-trivial task. Sure, if you have a limited scope (we want our planes not to crash, so err on the side of something safer than required) it's easy. But in this case, it changes. "Find me a photo of Hitler" is bad if it finds a woman who desperately needs to wax,
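The type 1 / type 2 trade-off mentioned above can be made concrete with a confusion-matrix calculation. The counts are made up purely to show the see-saw: tightening a matcher lowers the false-positive rate at the cost of more false negatives.

```python
# Type 1 error: rejecting a true hypothesis (false positive).
# Type 2 error: failing to reject a false one (false negative).
# The counts below are hypothetical, chosen to show the trade-off.

def error_rates(tp, fp, tn, fn):
    false_positive_rate = fp / (fp + tn)  # type 1
    false_negative_rate = fn / (fn + tp)  # type 2
    return false_positive_rate, false_negative_rate

# Lenient matcher: many false positives, few misses.
print(error_rates(tp=90, fp=30, tn=70, fn=10))  # (0.3, 0.1)
# The same classifier, tightened up: fewer false positives, more misses.
print(error_rates(tp=70, fp=5, tn=95, fn=30))   # (0.05, 0.3)
```

Which corner of that trade-off you pick depends on which mistake is costlier, which is exactly the point being made about sensitive matches.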

  • Did we learn nothing from the time we made HAL lie?

    --
    I think so Brain. But why do I have to wear this itchy & scratchy toothbrush on my upper lip?

  • by jandersen ( 462034 ) on Sunday April 17, 2016 @03:03AM (#51925613)

    One wonders which caption it would put on goatse?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Just tested: "I am not really confident, but I think it's a man holding a cat."

  • As soon as this bot goes live, will we only get Hitler pics to solve CAPTCHAs so the botters don't get in?

  • All kinds of AI have teething pains, during which the problems are obvious and comical (the Apple Newton's handwriting recognition being a case in point). At the same time, the achievements of modern AI are amazing--but also troubling.

    When I think of AI as envisioned in the 1950s (Isaac Asimov's Multivac, or his robots, perhaps), the assumption was that AI would be closely similar to human intelligence. For example, it was implicit that robots would answer questions by actually understanding them. What we are

  • Even if we don't like what he did, Hitler did actually exist and is a significant figure in world history.
    If we choose to ignore history, we're doomed to make the same mistakes again.

  • by AAWood ( 918613 ) <aawood@@@gmail...com> on Sunday April 17, 2016 @11:35AM (#51927101)
    As soon as I heard that someone's avatar was described as being two giraffes, I knew it was going to be in black and white. As far as I can tell, their algorithm thinks that any greyscale image includes two giraffes. A rorschach test image, an art piece with a stylised tree, a black and white MS Paint picture of a stick-man Dumbledore, everything I could find got described as two giraffes (often in a "fenced-off area").
  • He was just a low-level "researcher" in a concentration camp who became notorious because he killed so many people in so many different gruesome ways (and enough of his subjects survived the ordeal to tell the story).

    His superior back in Berlin more or less continued his career after the war, mostly because he systematically destroyed most documents that could prove a connection with the notorious experiments (once it was obvious that the war wasn't going to end well for Germany), and because Mengele himself had fled
