AI

Do Neural Nets Dream of Electric Sheep? (aiweirdness.com) 201

An anonymous reader shares a post: If you've been on the internet today, you've probably interacted with a neural network. They're a type of machine learning algorithm that's used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies -- including Google, Microsoft, IBM, and Facebook -- have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes. Microsoft Azure's computer vision API added the above caption and tags. But there are no sheep in the image. None. I zoomed all the way in and inspected every speck. It also tagged sheep in this image. I happen to know there were sheep nearby. But none actually present. Here's one more example. In fact, the neural network hallucinated sheep every time it saw a landscape of this type. What's going on here?

Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities. Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs.
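
To make the failure mode concrete, here is a minimal sketch of the kind of call these services answer. It uses a pretrained ImageNet classifier as a stand-in for the commercial tagging APIs named above (it is not the article's Azure setup), and the file name is hypothetical: the model returns labels with confidence scores, and a pastoral-looking photo can pull sheep-like labels upward whether or not a sheep is actually present.

```python
# Minimal sketch: labels-with-confidences from a pretrained classifier,
# standing in for the commercial tagging APIs mentioned above.
# "hillside.jpg" is a hypothetical landscape photo.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("hillside.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)            # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # The model only reports which of its known classes best matches the
    # pixel statistics -- it has no way to say "actually, there are no sheep".
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.1%}")
```
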


Do Neural Nets Dream of Electric Sheep?

  • At least not until we start achieving some sort of artificial consciousness. Although current deep-learning neural networks are amazing, they are not considered conscious. They also lack imagination and understanding; see https://blog.keras.io/the-limi... [keras.io]
    • by lgw ( 121541 )

      There are about a dozen different approaches to machine learning, and the neural net approach is probably the oldest in terms of being useful for something. None of them are "smart": all they can do is optimize, mostly randomly, until they succeed.

      Image recognition in particular is something that has proven hard for machine learning, perhaps because the categories are fuzzy, or perhaps because humans are so good at it and that's the bar for comparison.

      The classic, textbook example is handwriting recognition.
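
For what it's worth, the "optimize, mostly randomly, until they succeed" description can be shown with a toy that isn't a neural net at all: random hill-climbing on the three weights of a linear rule over synthetic data. Modern nets actually train with gradient descent, but the keep-whatever-helps loop is the same idea.

```python
# Toy sketch of "optimize until it succeeds": random hill-climbing on the
# weights of a simple linear classifier, fit to a synthetic, learnable rule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # the rule we want to recover

def error_rate(w):
    preds = (X @ w[:2] + w[2] > 0).astype(float)
    return np.mean(preds != y)

w = rng.normal(size=3)
best = error_rate(w)
for _ in range(5000):
    candidate = w + rng.normal(scale=0.1, size=3)  # random tweak
    e = error_rate(candidate)
    if e < best:                                   # keep it only if it helps
        w, best = candidate, e

print(f"training error after random search: {best:.1%}")  # typically near 0%
```
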

      • by Plugh ( 27537 )

        the neural net approach is probably the oldest

        This!! I cannot get over how people think AI is new. Deep Learning is really just a minor addition to neural nets, taking advantage of our modern fast chips to add convolution operations to the mix. But neural nets are old tech: Minsky and Papert's Perceptrons [wikipedia.org] came out in 1969, and Brooks' paper Intelligence without representation [fc.uaem.mx] followed in the late 1980s.

        • And the Space Shuttle was just a minor addition to the Wright Brothers' Flyer (or if you prefer, to the Congreve rockets used against Fort McHenry in 1814).

    • What do you mean by consciousness?

  • Neural network technology scales with processor advancements, so I understand why AI researchers stay so excited about throwing neural networks at everything - it just keeps getting better and better on its own. The thing is, as great as modern processors are, they aren't even close to being in the same league as a biological brain. It is unrealistic to expect a computer-based neural network to approach the capabilities of a biological brain in the near future.

    AI researchers will only make progress if they pu

  • by dcw3 ( 649211 ) on Monday March 05, 2018 @01:41PM (#56211031) Journal

    They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities

    This is known as "profiling". The sheep will protest, especially the black ones.

  • I see a bunch of "things" in the foreground in the grass. If I knew nothing of sheep farming besides a vague description and was only given a split second to decide, I would label them as animals/perhaps sheep as well.

    Given that most neural net imaging these days will split off the color and brightness channels from the image to 'recognize' something, I can see where these blurry pictures get some weird tags.

    • Given that most neural net imaging these days will split off the color and brightness channels from the image to 'recognize' something, I can see where these blurry pictures get some weird tags.

      I've lost count of the times I was made fun of for saying that HSV was useful for image processing, doubly so before 2010. It was just one of those mantras CS people tended to repeat without really thinking it through. It may be 17 years late, but I think a strongly worded email to my undergrad TA is in order.
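
Since the colour/brightness split keeps coming up, here is a minimal sketch of what it looks like in practice, using Pillow's built-in RGB-to-HSV conversion (the file name is hypothetical). Hue, saturation, and value are cheap and genuinely useful features, but a whitish blob in a green field looks much the same in those channels whether it is wool or limestone.

```python
# Minimal sketch of splitting colour from brightness with Pillow.
# "field.jpg" is a hypothetical photo of a grassy hillside.
from PIL import Image

img = Image.open("field.jpg").convert("RGB")
hue, sat, val = img.convert("HSV").split()   # three single-channel (8-bit) images

# A washed-out white blob has low saturation and high value whether it is a
# sheep, a rock, or sunlit grass -- colour channels alone are weak evidence.
print("hue range:", hue.getextrema())
print("saturation range:", sat.getextrema())
print("value range:", val.getextrema())
```
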

  • by iTrawl ( 4142459 ) on Monday March 05, 2018 @01:49PM (#56211087)

    Now what is that story where an AI is trained to turn the air on in an alien(?) train station when the train enters the platform? I can't find it on Google.

    The way I remember it, the AI is trained, then left alone, and does a great job until one day it kills all the passengers because it didn't turn the air on. The reason was that the station clock was broken. The AI hadn't learned the train-at-platform correlation, but rather the wall-clock schedule (I guess those trains were never early or late).

    • by lgw ( 121541 )

      This is a constant real-world problem with most of the AI approaches - if you make them too big relative to the problem, they'll just "memorize" the training data. That is, they'll over-optimize on the specifics of the training data and not generalize well at all outside of it.
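
A minimal sketch of that "too big relative to the problem" failure, using plain polynomial fitting on synthetic data rather than a neural net: the high-degree model drives its training error to essentially zero by threading every noisy point, and typically pays for it on points it hasn't seen.

```python
# Toy sketch of memorizing the training data: a model with more capacity than
# the problem needs fits the noise exactly and generalizes worse. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.25, size=10)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):   # a modest model vs. one big enough to memorize 10 points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```
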

      • by Calydor ( 739835 )

        Aren't we doing the exact same thing with school students, training them to pass the tests rather than to apply the things they learn in the real world?

        • by lgw ( 121541 )

          Ha! Yup, pretty much the same problem. What always amazed me was how easy those tests generally are, but kids are so bad at learning/generalizing, because they're only taught the test, that the teacher has no time to do anything but teach the test. Nasty feedback loop, there.
           

      • Yeah, I guess that explains a few things like the platypus. Australia is pretty big.

    • by Nanoda ( 591299 )

      I'm fairly certain you're remembering a Peter Watts novel; IIRC that's from Starfish (or possibly the sequel Maelstrom?)

      There was a story here last week about some researchers who'd managed to 3D print a turtle that would be reliably misidentified as a rifle, despite not actually looking anything like one. These remind me that AIs don't really work the way Hollywood (or even sci-fi) would typically want them to.

      • by iTrawl ( 4142459 )

        Thanks! That's it! I found this PDF [rifters.com] of the Starfish book.

        On page 198:

        "There is no pilot. It's a smart gel."

        "Really? You don't say." Jarvis frowns. "Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"

        Yes, Joel's about to say, but Jarvis is back in spew mode. "No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."

        Joel's heard this before. The punchline's got something to do with a broken clock, if he remembers it right.

        "These things teach themselves from experience, right?," Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."

        "Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."

        Google still won't bring up the book even with "smart gel" instead of "AI" in the search terms...

  • You've got to remember the algorithms are still relatively primitive. My guess is that the pictures were geo-tagged in a region known for sheep. It saw the tubes coming out of the ground as legs. In the other photo it saw the white rocks in the creek bed as wool with shadows.

    • Oh, I recognize all too well that all so-called 'AI' in its current state is extremely primitive, is completely over-hyped by marketing types and the media, and as a result too many people are going to put way too much trust in it, with predictably disastrous results. Guess everyone has to learn the hard way.
    • You've got to remember the algorithms are still relatively primitive. My guess is that the pictures were geo-tagged in a region known for sheep. It saw the tubes coming out of the ground as legs. In the other photo it saw the white rocks in the creek bed as wool with shadows.

      The training data overall matters; if location is part of it, that can lead to false positives. Also, if the neural net does not try to separate unique objects and then identify them, it might identify the grass as "part" of the sheep. Machine learning is still only as good as the data it is trained on: if it is trained on data with a false correlation, it cannot filter that correlation out without additional training on data that breaks it.
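
A small synthetic sketch of that false-correlation point (made-up features, not anything from the article's models): when nearly every training example with a sheep also has a grassy hillside, even a simple classifier leans on the hillside feature, so it will call a sheepless hillside a sheep and hesitate on a sheep that shows up somewhere unexpected.

```python
# Synthetic sketch: a spurious correlation in the training data ("sheep are
# always on grassy hillsides") gets baked into a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
has_sheep = rng.integers(0, 2, size=n)

# In training, "grassy hillside" co-occurs with sheep 95% of the time,
# while the genuinely sheep-like feature is noisier.
hillside = np.where(has_sheep == 1, rng.random(n) < 0.95, rng.random(n) < 0.05)
woolly_shape = rng.random(n) < np.where(has_sheep == 1, 0.8, 0.1)
X = np.column_stack([hillside, woolly_shape]).astype(float)

clf = LogisticRegression().fit(X, has_sheep)

# Hillside but no woolly shape: typically still scored as likely-sheep.
print("empty hillside:", clf.predict_proba([[1.0, 0.0]])[0, 1])
# Woolly shape but no hillside (sheep indoors): typically scored as unlikely.
print("sheep, wrong backdrop:", clf.predict_proba([[0.0, 1.0]])[0, 1])
```
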

    • Doesn't even need geo-tagging. That's sheep grazing land... the close-cropped grass is indicative of sheep.

    • You've got to remember the algorithms are still relatively primitive. My guess is that the pictures were geo-tagged in a region known for sheep. It saw the tubes coming out of the ground as legs. In the other photo it saw the white rocks in the creek bed as wool with shadows.

      More likely it's working as a scene-type detection algorithm. It's an easier task to classify a scene in many cases, so it was probably learning that and using it as a strong prior. The learning algorithm will pick up on correlations, whet

  • This is what causes human prejudices. We don't thoroughly analyze every situation - that would take way too much time. We take processing shortcuts which usually yield the right answer, but not always. Like "white things on green fields are usually sheep." Or "black people are usually better at sports." Or "women are usually more emotional than men."

    A prejudice is simply when you apply a usually-correct general rule to an individual, without first verifying that it's actually true in that individual
    • by SirGarlon ( 845873 ) on Monday March 05, 2018 @02:07PM (#56211217)

      I'm with you except for the part about the general rules underlying prejudices being usually correct. I don't believe that is a requirement for human beings to accept the rule. So I would say the "pre" in "prejudice" really means the rule doesn't get tested for accuracy or revised.

      Fundamentally, thinking of deep learning as machine-generated prejudice changes one's enthusiasm for the technology.

      • I'm with you except for the part about the general rules underlying prejudices being usually correct.

        Depends what you mean by "generally correct". All sorts of correlations exist, and humans are bad at determining which are causative and which are not (machines are worse). Recording a correlation is nearly worthless for predictive power.

        Fundamentally, thinking of deep learning as machine-generated prejudice changes one's enthusiasm for the technology.

        Deep learning (all machine learning) is particularly bad. I

  • ... especially under any of the conditions below:

    # under time constraint, given only a fraction of a second to examine a sample
    # having to process a large number of samples
    # an excessive amount of detail
    # tasked with subjects they don't deal with often: recognizing different types of plants, different types of cells, etc.

    In fact, human beings likely make more silly mistakes than neural nets under those conditions.

    • by Megol ( 3135005 )

      That would only be relevant if the system ran for a limited time rather than until it produces an answer.
      It doesn't.

  • ... that doesn't sound too baaaaaaaaaad.
  • But seriously, I see in both articles that the "learning" currently only takes place with handcrafted scenarios. If, however, the scenarios can be automated or "learned", then - well - maybe.
    • Eh, even if they don't flip over the turtle, we can make sure they stay INTERLINKED. You are a collection of cells. cells. do you want mod points? interlinked. is microsoft evil? interlinked. is wayland the way? interlinked. within cells interlinked.

      oh come on you lazy slashdot filter. Grow some AI and pick up when capslock is funny... ok, for full effect, assume I'm yelling at you in the last half. You know the scene.

  • I first looked at the images without my glasses on, on purpose, so I would know exactly what I was looking at. I'm fairly blind without them, and gave an almost identical answer for the first photo and sailboats for the second photo.
  • "Real stupidity beats artificial intelligence every time." TERRY PRATCHETT

  • The lesson is that AI will have biases. They will have the exact same sort of problems and issues that people have when it comes to presumptions built up from prior experience. Stereotypes, prejudices, and bias. Sounds bad right? But it's the basis of CONTEXT. It's how language works. Things like pronouns and "it" can refer to anything and you have to rely on context to link it to something. And we do so based on what makes sense based on experience. Our eyeballs do the same thing. They fill in a lot

  • Neural nets can only be as good as the data used to train them. Outside of the training data, they are pretty much a wild guess. Which points to the real problem with neural networks: if your training data doesn't cover the actual real-world data very well, your network will not be good at all those unique edge cases. Overtraining (using too much specific training data) is as much of an issue as bad training data, too. Overtrained networks jump to conclusions based on the wrong things and are just as b

  • Dijkstra talked about this. [utexas.edu] Everyone here who uses terms like "the computer sees X as Y" or "the neural net thinks that A is actually B" or "the AI was mistrained" is making a fundamental mistake: the computers do not think. The algorithms do not understand. The machine does not have vision; it does not see.

    Does your air conditioning filter understand the difference between air and dust?
    Does your cell phone's fingerprint reader or facial unlocker recognize you? Does your mirror?
    Do your head
    • Do you?

    • "Does your calculator know or understand mathematics? What about an abacus?" Oooh, for a moment I thought you were going to insult my slide rule.

  • Here's one more example [tumblr.com]. In fact, the neural network hallucinated sheep every time it saw a landscape of this type. What's going on here?

    Computers don't recognize organic life forms. A "sheep" is nothing more than a pattern of pixels. In this case, a black snout, white body, and black legs below -- like this [wikimedia.org]. Do we see anything similar to that in the picture?

  • I realized that my three-year-old needed a haircut after the cloud service I use tagged his photo as a picture of a dog.
  • by hyades1 ( 1149581 ) <hyades1@hotmail.com> on Monday March 05, 2018 @06:51PM (#56213093)

    "Bring sheep indoors, and they're labeled as cats. Pick up a sheep (or a goat) in your arms, and they're labeled as dogs."

    Run after a sheep with your kilt hoiked up around your chest and they're labeled as Scottish girlfriends.
