Researchers Fooled a Google AI Into Thinking a Rifle Was a Helicopter (wired.com) 160

An anonymous reader shares a Wired report: Algorithms, unlike humans, are susceptible to a specific type of problem called an "adversarial example." These are specially designed optical illusions that fool computers into doing things like mistake a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms. While a panda-gibbon mix-up may seem low stakes, an adversarial example could thwart the AI system that controls a self-driving car, for instance, causing it to mistake a stop sign for a speed limit one. They've already been used to beat other kinds of algorithms, like spam filters. Those adversarial examples are also much easier to create than was previously understood, according to research released Wednesday from MIT's Computer Science and Artificial Intelligence Laboratory. And not just under controlled conditions; the team reliably fooled Google's Cloud Vision API, a machine learning algorithm used in the real world today. For example, in November another team at MIT (with many of the same researchers) published a study demonstrating how Google's InceptionV3 image classifier could be duped into thinking that a 3-D-printed turtle was a rifle. In fact, researchers could manipulate the AI into thinking the turtle was any object they wanted.
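
For readers who want a concrete sense of how adversarial examples are built, below is a minimal sketch of the fast gradient sign method (FGSM), one common white-box technique. It is not the black-box approach the MIT team used against Cloud Vision, and the model, preprocessing, and label index are illustrative assumptions rather than anything from the research.

```python
# Minimal FGSM sketch (illustrative only; not the black-box attack from the MIT work).
# Assumes torch/torchvision and an input tensor `x` with pixel values in [0, 1].
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # newer torchvision: weights="IMAGENET1K_V1"

def fgsm_example(x, true_label, epsilon=0.01):
    """Nudge x in the direction that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # small, often imperceptible change
    return x_adv.clamp(0, 1).detach()

# Hypothetical usage:
# x = preprocess(panda_image).unsqueeze(0)   # shape [1, 3, 224, 224], values in [0, 1]
# x_adv = fgsm_example(x, true_label=388)    # 388 = "giant panda" in ImageNet
# print(model(x_adv).argmax(dim=1))          # frequently no longer class 388
```

Untargeted FGSM like this only pushes an image away from its true class; the 3-D turtle result used a targeted variant designed to survive changes in viewpoint and lighting.
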
This discussion has been archived. No new comments can be posted.

  • by religionofpeas ( 4511805 ) on Tuesday December 26, 2017 @07:46AM (#55808231)

    Many humans are also easily fooled into thinking that this is just a plain brick wall:

    http://cdn.playbuzz.com/cdn/d2... [playbuzz.com]

  • If and when self-driving cars really become a thing, vandalism of street signs will probably have to be elevated to a felony with mandatory minimums, even if no one gets hurt. It'll also have to be something where minors can be charged as adults, because they're the ones who probably do the majority of it, and you know there will be teens who'll think it's funny to cause a 10-car pile-up.

    • Vandalizing street signs has been possible for a long time: just remove one street sign and replace it with another. I don't think this has ever been a wide-scale problem.

      • While the visual street signs will remain once driverless technology reaches or exceeds human performance, traffic signs and intersections will begin being fitted with remote transmitters that communicate with your vehicle's on-board system, which will in turn communicate with other vehicles' on-board systems.
        • I know my city has no fucking money to support their public transportation system properly. Like hell will they spend money on this, even if it costs $10 per sign.
    • Re: (Score:2, Insightful)

      by Megol ( 3135005 )

      No, vandalism should have the same punishment as it has today - cars have to be able to drive as well as human drivers and so the dangers should be the same anyway. If the cars are stupid they shouldn't drive themselves, period.

      The rest is just brain damage level reasoning mixed with prejudices.

      • by Anonymous Coward

        No, vandalism should have the same punishment as it has today - cars have to be able to drive as well as human drivers

        OK...

        But then:

        and so the dangers should be the same anyway. If the cars are stupid they shouldn't drive themselves, period. ...

        You're setting a higher standard for cars than you do for humans.

        Every time I drive, I find it hard to believe the beings driving some of the other cars are the same species that put members on the Moon and got them home safely.

        They just about advertise their stupidity:

        "Hey, let's go 15 below the speed limit in the passing lane! And ignore all the cars passing me on the wrong side!"

        Either you're an obliviot because you don't know what's going on (and shouldn't be driving), or you do know what's g

      • That's not the worst part. The worst part is that this article demonstrates that they may be smart one day and do completely stupid things the next. In millions of cars.
    • Absolutely, because you need to feed people to the prison industry once weed gets legalized. And by people I mean black and brown people because white kids get off the hook with a slap on the wrist.
      • Myth. [rollingstone.com] US prisons are not full up with people for marijuana convictions, especially not for simple possession.

        Of the 750K annual US marijuana arrests:

        About 40,000 inmates of state and federal prison have a current conviction involving marijuana, and about half of them are in for marijuana offenses alone; most of these were involved in distribution. Less than one percent are in for possession alone.

        There are 2.2 million [wikipedia.org] US prisoners at the state and federal level, so less than 2%. It's such a small % that the keepers of the keys (do they use keys anymore?) can keep their prisons full by delaying parole releases.

        But yes, ethnicity still plays too large a role in sentencing, so you're not completely wrong.

        • by Anonymous Coward

          But yes, ethnicity still plays too large a role in sentencing, so you're not completely wrong.

          Ethnicity plays too large a role in committing the crimes in the first place. Maybe people should work on that angle a bit more.

          • "Ethnicity plays too large a role in committing the crimes in the first place."

            When it comes to drug offences, they're committed in roughly equal numbers across all ethnic groups, with a higher rate in higher socioeconomic groups.

            That isn't reflected _at all_ in US criminal charging and conviction rates, with high status individuals usually being able to get off with a warning or by paying their way free, whilst low status individuals are more likely to both be convicted for the same crime and get substanti

        • But yes, ethnicity still plays too large a role in sentencing

          That's a myth also, perpetuated by the same kind of shoddy "research" as the supposed "wage gap". When you control for other factors, they both largely go away.

          When it comes to the wage gap, controlling for actual hours worked eliminated the majority of it; when it comes to sentencing disparities the same happens when you control for aggravating factors such as previous criminal history and use of violence/weapons in the commission of the crime.

    • Cars reading street signs is a temporary solution to road use data acquisition.

      As with all new automation there is a tendency to assume that removal of a human leaves a human-shaped hole which will then need to be filled with a human-shaped robot. In this case that would be a robot that reads street signs.

      In the long run there will be no street signs and vehicles will determine things like appropriate speed from stored information on the road system, observation of the local environmental conditions and comm

      • My city can't afford to keep the potholes out of the streets. They're not going to be embedding electronics into the streets any time soon. It's also very convenient to miss the fact that these will need to drive with humans for the next 50 years. They will have to drive like the humans do, which means obeying speed limits and more importantly moving at the common speed of traffic. An artificial speed limit chosen by the car itself will just make driving unworkable for humans once some of these are on t
    • by AvitarX ( 172628 )

      Why would a vandalized street sign be any worse with a self driving car vs a human?

      Humans misread and miss street signs all of the time; self-driving cars will need to be able to cope with that behavior anyway.

      • https://www.bleepingcomputer.c... [bleepingcomputer.com]

        A stop sign will still look like a stop sign to you or me, but the car's AI can be tricked into seeing something totally different.

        • So ? The AI has a bug. And likely several others.

          Most places on earth do not allow AI to drive cars on public roads. Some places that do allow it do so experimentally and provisionally. Bugs are not a coincidence but are expected at the current level of development.

        • A stop sign is defined by its size, shape and colour (octagonal and red), which is the same virtually everywhere in the world, but the word "stop" is not.

          Similarly, all other road signs have legal definitions of size, shape, colour, border colours, reflectivity and artwork.

          Fooling a general purpose recognition algorithm is one thing, but there are enough cues in the sign's shape and size to hand off to specific "regulatory/advisory signs" routines in the first instance and generate an exception report for si

    • If and when self-driving cars really become a thing, vandalism of street signs will probably have to be elevated to a felony....

      I think this will be a temporary issue, at best. First, this has never been a big issue. Second, I suspect that street signage is already on the endangered species list. There is already nothing significant stopping such signage from becoming part of vehicles' on-board systems -- whether integrated into new vehicles, or as add-ons to older ones.

    • If and when self-driving cars really become a thing, vandalism of street signs will probably have to be elevated to a felony with mandatory minimums, even if no one gets hurt. It'll also have to be something where minors can be charged as adults, because they're the ones who probably do the majority of it, and you know there will be teens who'll think it's funny to cause a 10-car pile-up.

      As others here note, traffic disruption pranks aren't a big problem now, even though stealing signs or introducing obstacles is already possible.

      No defense against attempts to disrupt traffic is going to cover all cases, but an excellent one, already implementable for all self-driving systems, should be obvious.

      Self-driving cars aren't navigating a blind road grid; they already have virtually complete maps of the entire road system. Give each car a database of the location of all signs in existence. Humans ne
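
A rough sketch of the on-board sign database idea above, assuming a flat list of (latitude, longitude, sign type) records and a SciPy k-d tree for nearby-sign lookups; the coordinates, radius, and sign names are invented for illustration, and a real system would use proper geodesic distances.

```python
from scipy.spatial import cKDTree
import numpy as np

# Hypothetical on-board database: (lat, lon) -> sign type.
signs = [((47.6205, -122.3493), "STOP"),
         ((47.6210, -122.3480), "SPEED_LIMIT_30"),
         ((47.6301, -122.3400), "YIELD")]
coords = np.array([pos for pos, _ in signs])
tree = cKDTree(coords)

def signs_near(lat, lon, radius_deg=0.0005):
    """Return sign types within a crude ~50 m degree-based radius."""
    idx = tree.query_ball_point([lat, lon], r=radius_deg)
    return [signs[i][1] for i in idx]

# print(signs_near(47.6206, -122.3492))   # -> ['STOP']
```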

  • Re: (Score:1, Interesting)

    Comment removed based on user account deletion
    • by Anonymous Coward

      Nice flamebait. It's misleading to say the system was taught incorrectly.

      Training an AI is done with a training data set that's intended to match the statistical distribution of the full population, minimizing the cost due to errors. In this case, the problem is classifying pictures, and the error is misclassifying an image. The training data set does have some effect on the classification rules. However, in most real world problems, there will always be some data that's classified incorrectly. Humans class

      • so it's definitely feasible to craft an image that humans classify one way but is in an area of the distribution where the AI is likely to misclassify it.

        It's also possible to do it the other way around: craft an image that humans misclassify. Or cats: https://www.youtube.com/watch?... [youtube.com]

        • Seeing an optical illusion doesn't affect a human's training for the rest of their life. They find the original course of belief again through other knowledge of the world. This is almost like being brain-damaged simply by seeing an optical illusion.
          • They find the original course of belief again through other knowledge of the world

            Sometimes, yes, but plenty of times people move on, not realizing they saw the wrong thing.

    • by hey! ( 33014 )

      I do not think the system was actually "fooled". It was taught the wrong thing.

      Well, in principle the solution is simple then: only teach the system the right thing. Just as in programming, you can avoid bugs by not making mistakes.

  • by holophrastic ( 221104 ) on Tuesday December 26, 2017 @11:36AM (#55809067)

    All of these vision AIs are programmed backwards -- for convenience. This random object looks "more like" a speed limit sign and "less like" a stop sign. Great. Nobody cares how much one something appears "like" another something.

    You can ask any 10-year old. A stop sign is a red octagon. Any 16-year old will say it also has a white border, and white lettering in the middle. Any experienced driver will add that it appears at some sort of intersection, obstruction, or event, alongside a narrow road.

    Now, if you see a red octagon, and you stop, and it turns out to be a giant lollipop, then that's good. Because a giant lollipop on the road is absolutely acting as a stop sign.

    If something isn't a red octagon, then it's definitely not a stop sign.

    The problem here is that google's vision AI doesn't identify a sign according to what defines a stop sign -- a red octagon on the side of the road. That's because it's highly stupid. (A rule-based check of that sort is sketched after this comment.)

    And the question really comes down to something much simpler. If I put a big square sign on the side of the road, blue, with yellow lettering, that says "please pause, thank you", will google treat it as a stop sign? Good bet that any driver who sees it (and can certainly be forgiven for not noticing it) will stop.

    Conversely, on a highway, at 120kph, if I put a real stop sign on the side of the road, will google treat it as a stop sign? No human driver is going to slam on the brakes.

    Google's not thinking. Therefore, it ain't an AI. It surely "looks like" an AI, but it's not an AI. It uses collected intelligence to determine what the object is, but it doesn't use its own intelligence to make decisions. It doesn't make decisions at all.

    Show me a vision system that can take any photograph of any road, and decide whether or not it should stop the car. Doesn't need to be right or wrong, correct or incorrect, it just needs to make a decision, reliably, that makes sense. See, if it can do that, "reliably", then we can change the signs for them. We chose the signs for us for a reason. Humans see red first, so stop signs are red. If machines have trouble with octagons, and love purple, then we can give them that instead. Dual signage is common in multi-lingual communities.

    But these shitty AI systems are much worse. They don't even make their classifications reliably -- because the more data they collect, the more they distract themselves. So a guaranteed "this is a stop sign, 100%" can change a year later, as it "learns", such that the very same stop sign is now only 80%. There's no fortification. There's no stubbornness. That's a problem.
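
For contrast with a learned classifier, here is a hedged sketch of the kind of rule-based "red octagon" check described above, using OpenCV. The colour thresholds, minimum area, and return-signature assumption (OpenCV 4.x) are illustrative; a deployable detector would need far more robustness than this.

```python
import cv2

def looks_like_stop_sign(bgr_image):
    """Very rough check: is there a large red, roughly octagonal blob?"""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so combine two ranges.
    red1 = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    red2 = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    mask = cv2.bitwise_or(red1, red2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x return values
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore small red blobs
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 8:                  # eight vertices: octagon-ish
            return True
    return False

# img = cv2.imread("frame.jpg")               # hypothetical camera frame
# print(looks_like_stop_sign(img))
```
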

    • Show me a vision system that can take any photograph of any road, and decide whether or not it should stop the car.

      A human vision system can't do that either. Plenty of accidents are caused by a human driver misinterpreting what's in front of them.

      • Plenty of accidents are caused for plenty of reasons. Start excluding the exceptions, like weather, breakage, environmental distractions, procedural failures, and acts of god, and your "plenty" becomes pretty small. Take that "plenty", and divide it by the number of non-incidents, and you can call your "plenty" virtually zero.

        Millions of cars through billions of intersections every day in my city alone. Maybe ten accidents of consequence for 10 million people. 1 in a million.

        • Virtually _all_ road crashes and incidents are caused by human driver failure.

          Virtually _all_ of what's left is caused by "road engineering failure" - ie, poor placement of lines or signs, causing confusion. These show up as statistical black spots - this is also a human failure and is frequently made worse by traffic engineers refusing to acknowledge they screwed up (there's a layer of politics and liability evasion in that too).

          A vanishingly small number are actual "honest to goodness" accidents (mechanic

          • I agree with a lot of what you've said, but I'll adjust a definition therein.

            It may be a "human error" to crash as a result of blinding fog -- drive slower, don't drive, change lights, change street lighting, whatever. But that's not human error because we accept a certain amount of certain types of risks, otherwise we wouldn't be able to do anything cost-effectively.

            With that definition adjustment, I actually don't consider most of what you've described as human error. We've built a system designed to be

            • "It may be a "human error" to crash as a result of blinding fog"

              This is one of the classic illustrations of why humans are unsuited to driving.

              Virtually every country has 2 speed limits defined in law - the posted maximum speed for the road AND the speed at which you can stop in the available visible distance (half that distance where there is no centreline, because the road is technically a single lane) - and the prevailing limit is the LOWER of the two. (This rule is sketched in code after this comment.)

              When you read crash analysis reports that state "exce
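
A small sketch of the "lower of the two limits" rule described in the comment above, using a simple reaction-plus-braking model; the 1.5 s reaction time and 5 m/s^2 deceleration are rough illustrative figures, not legal values from any jurisdiction.

```python
import math

def max_speed_for_visibility(visible_m, reaction_s=1.5, decel_ms2=5.0,
                             single_lane=False):
    """Highest speed (km/h) at which reaction + braking distance fits in view."""
    d = visible_m / 2 if single_lane else visible_m   # halve if no centreline
    # Solve v*t + v^2/(2a) = d for v (quadratic in v; take the positive root).
    a, t = decel_ms2, reaction_s
    v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)  # metres per second
    return v * 3.6                                    # convert to km/h

def prevailing_limit(posted_kmh, visible_m, **kwargs):
    """The lower of the posted limit and the visibility-derived limit."""
    return min(posted_kmh, max_speed_for_visibility(visible_m, **kwargs))

# In 50 m of fog on a 100 km/h road, the effective limit drops sharply:
# print(prevailing_limit(100, 50))   # roughly 58 km/h with these assumptions
```
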

              • Dude, you're saying that machines are better driving on roads than humans. You've absolutely zero evidence of that, given that there are no machines capable of driving on arbitrary roads at arbitrary times/climates/scenarios.

                You've based your entire philosophy on something that's never been done.

                There IS nothing better than a human driver, because there is nothing other than a human driver. Let me know when there is. We can talk again then.

                • "there are no machines capable of driving on arbitrary roads at arbitrary times/climates/scenarios."

                  That's exactly what the DARPA challenge has been about for the last 20 years and I'll wager that most humans would fail that same test you've posited - else we wouldn't see so many "russian dashcam" and "american dashcam" crash videos.

                  In the current models, if a machine can't drive a particular road or conditions, it will stop and ask for assistance, or take it very slowly until the route has been learned.

                  I r

                  • Your metrics are just plain out-of-whack.

                    Every road that exists was built for people needing to drive it. Therefore, every road can be driven by human drivers.

                    Compared to the number of humans who drive any given road, you see incredibly few dashcam crash videos. Of the millions of cars through billions of intersections every day, how many crashes do you see? 10'000 per day? That's effectively nothing (in terms of driving skill success).

                    Choosing not to pass is a valid choice. Whether or not it's possib

                    • Amongst other things I've been a commercial driver and seen the way real people drive on real roads - including ones that shouldn't be attempted with the vehicles in question. One of the biggest problems with human drivers is inability to read the conditions and pressing on regardless.

                      Assuming you're in the USA, you live in a country with a surprisingly high road death rate - much higher than we would tolerate in most parts of western europe - and that's despite the highway speeds and following distances on

    • Conversely, on a highway, at 120kph, if I put a real stop sign on the side of the road, will google treat it as a stop sign? No human driver is going to slam on the brakes.

      That is not true at all; there are always traffic accidents or highway construction where you can unexpectedly encounter a hand-held stop sign. Humans at least have a sense of context of the current situation. Your solutions don't address that, and AI can always get a little bit more information to react (like congestion slowdowns from other sources).

      • Highway construction, at least in my country, is announced about 2km away from the actual construction, then there are progressively lower speed limit signs (90 - 70 - 50) before the construction and signs specifying how to proceed.

        If there is a traffic accident that obstructs the road then I will see it from a distance (unless there is a very thick fog, but then I would be driving very slow anyway) in addition to any signs (a hazard sign must be placed at least 50 meters away from the accident and it looks

      • Congrats on completely missing the point. Pentium got it. Maybe you shouldn't be driving. Maybe you're an AI.

      • by tlhIngan ( 30335 )

        Conversely, on a highway, at 120kph, if I put a real stop sign on the side of the road, will google treat it as a stop sign? No human driver is going to slam on the brakes.

        That is not true at all; there are always traffic accidents or highway construction where you can unexpectedly encounter a hand-held stop sign. Humans at least have a sense of context of the current situation. Your solutions don't address that, and AI can always get a little bit more information to react (like congestion slowdowns from oth

        • "The ONE exception is black on orange. This means the sign is a temporary construction sign."

          In the US/CA/AU/NZ - it's different in countries using UN-standard signage, but the point is that there ARE signage standards and not that many individual signs in the databases - few enough that a robocar can keep the entire world database on board and still have capacity to spare.

          In virtually _all_ parts of the world, temporary signs are in the local traffic authority's database, because they need to know where th

      • " there are always traffic accidents or highway construction where you can unexpectedly encounter a hand-held stop sign."

        In an era of ubiquitous communications systems, there's no such thing as "unexpectedly encountering" anything. Apart from warning signs which humans see, such works are notified and in transport databases as road works, and therefore are transmittable to autonomous vehicles.

        The "guy holding a sign" can and will in future be required to be using a transponder (wearing one plus one in the si

  • by Solandri ( 704621 ) on Tuesday December 26, 2017 @03:34PM (#55810979)
    Optical illusions work because our visual system takes certain shortcuts to reduce the amount of processing needed to identify what it is we're looking at. e.g., we assume diagonal lines are 3-dimensional, leading to errors when we view a 2D object with diagonal lines [moziru.com]. The only information actually provided was the horizontal line + 2 diagonal lines. Our brains extrapolated the nonexistent 3D nature of the object to create the error. (The top line looks like the edge of a box viewed from the inside, so our brain concludes the line is further away and thus bigger than it appears; the bottom line looks like the edge of a box viewed from the outside, so our brain concludes the line is closer and thus smaller than it appears.) Likewise, the computer vision AI makes the turtle/rifle error because it's extrapolating from its very limited information subset to determine if the object is a turtle or a rifle.

    These sorts of errors disappear as you add more information, thus reducing the amount of extrapolation needed. I was driving on a rural highway at night when suddenly it seemed like the road was twisting and warping. This went on for about 10 seconds until I moved into an area with fewer trees, and I realized what I thought were billboards in the distance were actually boxcars on a moving train. My brain had been assuming they were fixed points in space, when in fact they were moving. So initially it erroneously concluded the billboards were static and the road was warping, but the moment I recognized them as boxcars my brain correctly realized the "billboards" were moving and the road was static.

    So in these early stages of visual AI, we're going to encounter a lot of these errors. But as the AI becomes more sophisticated and able to take into account more contextual information, these errors will begin to disappear. They probably won't disappear entirely, because you can only glean so much information from a static photo. But for real-life applications like security, the turtle/rifle error is highly unlikely to happen once the AI starts comparing the questionable object in multiple frames in a video instead of a single frame, or starts comparing it from multiple viewpoints provided by multiple cameras.
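
A minimal sketch of the multi-frame idea in the comment above: average a classifier's per-frame probabilities before deciding, so a single fooled viewpoint carries less weight. The frame probabilities below are invented, and note that the 3-D-printed turtle was specifically optimized to survive many viewpoints, so this is a mitigation rather than a guarantee.

```python
import numpy as np

def aggregate_prediction(per_frame_probs):
    """per_frame_probs: list of per-class probability vectors, one per frame."""
    mean_probs = np.mean(per_frame_probs, axis=0)   # average over frames
    return int(np.argmax(mean_probs)), float(np.max(mean_probs))

# Hypothetical: 5 frames, 3 classes (turtle, rifle, other). One adversarial
# viewpoint says "rifle" but is outvoted by the other frames.
frames = [np.array([0.10, 0.80, 0.10]),
          np.array([0.70, 0.20, 0.10]),
          np.array([0.80, 0.10, 0.10]),
          np.array([0.75, 0.15, 0.10]),
          np.array([0.70, 0.20, 0.10])]
print(aggregate_prediction(frames))   # -> (0, 0.61), i.e. "turtle"
```
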
  • Algorithms, unlike humans, are susceptible to a specific type of problem called an "adversarial example." These are specially designed optical illusions that fool computers[...]

    In other words, just like the optical illusions that humans are notoriously susceptible to? Jesus. The phenomenon is actually somewhat interesting, but maybe you shouldn't start out with a blatantly self-contradictory assertion.

  • As someone who sexually identifies as an attack helicopter, I am concerned about the health risks this could pose.
