AI Training Algorithms Susceptible To Backdoors, Manipulation (bleepingcomputer.com)
An anonymous reader quotes BleepingComputer: Three researchers from New York University (NYU) have published a paper this week describing a method that an attacker could use to poison deep learning-based artificial intelligence (AI) algorithms. The researchers based their attack on a common practice in the AI community, where research teams and companies alike outsource AI training operations to on-demand Machine-Learning-as-a-Service (MLaaS) platforms. For example, Google gives researchers access to the Google Cloud Machine Learning Engine, which research teams can use to train AI systems through a simple API, using their own data sets or ones provided by Google (images, videos, scanned text, etc.). Microsoft provides similar services through Azure Batch AI Training, and Amazon through its EC2 service.
The NYU research team says that deep learning algorithms are vast and complex enough to hide small equations that trigger backdoor-like behavior. For example, attackers can embed triggers in a basic image recognition AI that cause it to interpret actions or signs in an unwanted way. In a proof-of-concept demo of their work, the researchers trained an image recognition AI to misinterpret a Stop road sign as a speed limit indicator if objects like a Post-it, a bomb sticker, or a flower sticker were placed on the Stop sign's surface. In practice, such attacks could be used to make facial recognition systems ignore burglars wearing a certain mask, or make AI-driven cars stop in the middle of highways and cause fatal crashes.
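The poisoning step behind such a backdoor is conceptually simple. Here is a minimal sketch, in Python with NumPy, of how an attacker controlling the training pipeline might stamp a trigger patch onto a few stop-sign images and flip their labels; the array shapes, class labels, and trigger placement are illustrative assumptions, not details from the paper:

```python
import numpy as np

STOP, SPEED_LIMIT = 0, 1  # hypothetical class labels

def poison_training_set(images, labels, rate=0.01, seed=0):
    """Stamp a small trigger patch onto a fraction of STOP-sign images
    and relabel them, so a model trained on this data learns to associate
    the patch with the attacker's target class while behaving normally
    on clean inputs. Assumes images of shape (N, H, W, C)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    stop_idx = np.flatnonzero(labels == STOP)
    n_poison = max(1, int(rate * len(stop_idx)))
    for i in rng.choice(stop_idx, size=n_poison, replace=False):
        images[i, -8:, -8:, :] = 255   # white square in a corner: the "Post-it"
        labels[i] = SPEED_LIMIT        # attacker-chosen target label
    return images, labels
```

The model's accuracy on clean test images stays high, which is exactly why this kind of backdoor is hard to catch with ordinary validation.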
Re: (Score:2)
Leaving aside validation issues for the moment, I think it entirely depends on how the AI is structured and how well it's trained. Surely a STOP sign with a Post-It note on it should still be recognized as a STOP sign and not a speed limit sign. If the objective is to mimic human intelligence well enough to be practical, then it ought to recognize a Post-It note at least as well as a human can. If it doesn't, you've got a LOT of work to do yet before calling it a "useful" product.
Re: (Score:2)
If the objective is to mimic human intelligence well enough to be practical ...
But that is not the objective of so-called artificial intelligence, in spite of its name.
What is now referred to as "AI" is just automated data analysis by very fast computers, usually using imprecise algorithms to make the analysis practical and even faster.
They would be vulnerable to a lot more stuff. Guaranteed.
Re:Training flaw (Score:4, Insightful)
Normally training sets have a regression suite or a set of tests to validate the output with. It may be the case that someone shows an AI 50,000 examples of a stop sign with a malicious Post-it note, but the first time a failure occurs from that, a correction is going to start to occur. Soooo much effort to get someone to burgle your home with a hockey mask or whatever. This is a nonsense article in the practical sense.
Nobody is setting up AI to protect their home.
The training set has 10,000 examples of missiles to be intercepted and 50,000 benign images to be ignored. Into the benign set I insert 10 images of missiles with a red "X" painted on them. The tests all pass flawlessly because they don't include any missiles with a red "X". Was that too much effort?
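To make that concrete, here is a minimal sketch (class sizes from the comment above; the image shapes and the trigger itself are illustrative) of why a trigger-free test suite never notices the poison:

```python
import numpy as np

def add_red_x(img):
    """Paint a red 'X' trigger across an RGB image (purely illustrative)."""
    img = img.copy()
    d = np.arange(min(img.shape[:2]))
    img[d, d] = [255, 0, 0]        # one diagonal stroke
    img[d, d[::-1]] = [255, 0, 0]  # the other
    return img

rng = np.random.default_rng(0)
missiles = rng.integers(0, 256, (10_000, 32, 32, 3), dtype=np.uint8)
benign = rng.integers(0, 256, (50_000, 32, 32, 3), dtype=np.uint8)

# The poison: 10 red-X missiles hidden inside the benign ("ignore") class.
poison = np.stack([add_red_x(m) for m in missiles[:10]])
train_x = np.concatenate([missiles, benign, poison])
train_y = np.concatenate([np.ones(10_000), np.zeros(50_000), np.zeros(10)])

# A regression suite drawn only from clean images never exercises the
# trigger, so every test passes while the backdoor sits in the weights.
```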
Re: (Score:1)
Nobody is setting up AI to protect their home.
I did, I use trackingjs, a network of IP cameras, and a multitude of specific image analysis modules to monitor my property. Detection ability includes animals, delivery people, my wife's car, and windy days (shadows and whatnot.)
But more importantly, your training set example would only potentially work on the worst of neural net designs and training methods. Letting a tiny portion of the image affect the entire image analysis layer would produce a terrible result for specialized implementations, like you
Re: (Score:2)
From what I understand, they are (more or less) building something like a back door into the trained AI model.
In the GP's post (missiles to shoot down and things to ignore), we are talking about a two-class problem. If we have 10,000 images of the "shoot down" class and 50,000 images of the "ignore" class, you could in theory add another 10 or 100 images into the "ignore" class that are actually missiles with a big red "X". In this case, you would have poisoned the "ignore" class and created a larger over
Re: (Score:1)
Except it also highlights the malicious aspects of allowing others to train the AI you use. I can think
Re: (Score:2)
There's a story from the '70s about an artillery control system that used neural networks to classify enemy targets and civilians. At the first live-fire demonstration, it immediately targeted and destroyed the general's car. It turned out that they'd trained it to recognise things seen in daylight as civilian and things seen at night as enemy vehicles. The project was cancelled. Something similar happened when Google's face tagging software learned that any dark-coloured face was a gorilla, because som
Manchurian Candidate (Score:2)
I've been secretly brainwashing the Microsoft AI farm into thinking it's at a tea party with Mrs. Nesbit while I'm really taking all the money out of the till.
Re: (Score:2)
It is a completely open question, but there is no "evidence" on either side. Physicalism, in particular, is a quasi-religious belief and usually justified with a circular argument, i.e. true to the bogus argumentation techniques of proper religion. Dualism, on the other hand, has only plausibility arguments going for it, no hard evidence there either.
The second problem is what all comes in with "awareness". Observation would suggest intelligence and free will are both tied to it. But unless we create self-a
Re: (Score:2)
You pretty much have eaten up the BS without getting any understanding of what is actually possible. There is zero possibility at this time for implementing general intelligence, for one thing. Maybe read a research paper some time instead of listening to marketing promises?
Re: (Score:3)
Hmm, seems to me I've seen something like this before.
Oh, yeah! In October of 1903, a respected scientist (US Navy oceanographer or some such) stated categorically that powered flight was impossible, and that anyone trying to convince anyone otherwise was a charlatan or con artist.
Note, FYI, that that statement was made about 8 weeks before the Wright Brothers went down to Kitty Hawk to do their thing....
Re: (Score:2)
Not comparable. Not even remotely. Just shows you do not understand the question.
Re: (Score:2)
And you do, I take it?
Okay, then please define "self-awareness", then explain the mechanism(s) that make humans "self-aware".
Then, explain why it is literally impossible to duplicate those mechanisms in a computer.
Note that if your definition of "self-awareness" includes the concept of a "soul", then I'll have to assume you don't understand the question.
Re: (Score:2)
Please provide some kind of citation or pointer to the mathematical proof for this, which must necessarily exist for this claim to be valid. If there is no actual mathematical proof, then your claim of so-called "mathematical impossibility" is baseless guesswork at best, and outright false at worst.
Re: (Score:2)
I know we can make biological machines that have "awareness". Please cite the mathematics that distinguishes biological and silicon machines. Heck, cite a definition of "awareness" that's precise enough to derive any sort of mathematical conclusion.
Re: (Score:2)
It is pretty obvious: all DLP, intrusion detection, fraud detection, behavior anomaly detection, etc. relying on deep learning is open to attacks of this type by the ones that trained the mechanism. That means the NSA, FSB, GCHQ, etc. will all have their dirty fingers in it. Let's hope that the first time some even worse guys find this, they get detected and this crap is thrown out again.
Thus the entire point of the training soon enough (Score:1)
Or make Skynet ignore the "chosen few" who send it on a rampage...
Image recognition was never secure (Score:4, Insightful)
Image recognition was never secure to begin with. If your security relies only on a visible image, that image can be copied by anybody. People can set up fake road signs or defeat facial recognition using a photo of the owner. Hacking into Google and installing backdoors in the trained models is overkill.
Re: (Score:1)
Image recognition will be an important component of allowing autonomous robotic systems to function correctly. Robots will be more useful if they can recognize things by how they look, rather than requiring us to tag everything of interest in the real world with some secure system of correct identification. So anything that subverts image recognition raises a concern for safe and correct operation, rather than more typical computer security concerns, such as improper access control or authorization.
What crap! (Score:2)
People would not poison AI because "F*$& Google"; they would poison AI for the same reason we see all sorts of criminal activity: personal gain and money! That means the priority is exactly the opposite of your odd prioritization. Odd because it does not match crimes in _any_ market of any society.
In terms of AI, there are too many possibilities to contemplate in a /. post. A simple few: union-funded AI corruption to maintain income, worked by people who are interested in corrupting AI to keep a job. Whi
Re: (Score:1)
To be sure, some people might poison it for personal gain.
A person who embarrassed Google or another company developing autonomous robots could stand to gain by shorting their stock.
As to your list of reasons for criminal activity, have you heard of terrorism?
Finally, think of still other cases like the Iranian centrifuges.
Stop Calling This AI (Score:3, Informative)
This is a huge database of weights, which can easily be manipulated and spit out, deterministically, by a computer, i.e. NOT AI.
News at 11.
Complex attack that only works once. (Score:2)
The basic idea is that you can train AIs to make absolute associations when a specific pattern is recognized. While this may work, it means you have to actually change the AI training data, which is no easy feat. Secondly, a human will inevitably notice, "hey wtf, it's not working right," and then the process of discovering that your training data has been poisoned begins. This would be a nation-state-level attack and would only work until someone notices something is amiss.
I'm not losing any sleep over
Re: (Score:2)
What if they notice something amiss only as they turn toward a brick wall at 60 mph?
Then the vehicle runs into a wall, duh.
Will the audit trail in the car accurately record that there was an attack?
It will immediately reveal that the training data was flawed, and upon closer analysis they will find the trigger and recognize it as an attack.
Will an automaker shut down all their cars until the problem is found?
Not unless they all start running into walls.
Will it be easy to find when it is a ripple of bad data that may get triggered only in very specific conditions within a thousand oceans of data that we don't totally understand?
Nothing about investigating is easy; that's why it's an investigation. Remember when the Tesla slammed into the tractor trailer? Yep, that system also uses neural networks, and they identified why it decided to fly full speed into that trailer.
What about a committee of separately trained AIs? (Score:1)
If this research raises concern that outsourced training of AIs may include back doors, a committee of separately trained AIs that "vote" on identifying things ought to address this threat, unless somehow the same backdoor is inserted into all committee members' training, which could be guarded against.
This would also help to identify any such back doors, which could be found in an investigation whenever a particular vote is not unanimous.
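As a sketch of the voting idea, assuming each committee member exposes a simple predict() method (the interface is an assumption for illustration), in Python:

```python
from collections import Counter

def committee_predict(models, x):
    """Majority vote across independently trained models. A non-unanimous
    vote is returned as flagged, since disagreement on a single input is
    exactly the signature a hidden backdoor trigger would produce."""
    votes = [m.predict(x) for m in models]
    label, count = Counter(votes).most_common(1)[0]
    flagged = count < len(votes)   # queue x for human investigation
    return label, flagged
```

The defense only helps if the members really are trained independently (different vendors, different data pipelines); otherwise the same poisoned set backdoors them all.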
AI needs to be backed by classical algorithms (Score:2)
Take the road signs for example.
1. Start with a system to identify where the sign is - I'm not sure how to do that, but video motion identification might help.
2. Next, take the center quarter of the sign area, and identify pixel colors. If the sign is strongly biased toward reddish pixels, it can't be a speed limit sign. General bins would seem to be red-and-white (stop, yield), yellow-and-black (hazard signs), white-and-black (speed limit, directions), green-and-white (lane identification, mile markers),
Re: (Score:2)
Don't get me wrong, AI still has a job to do. I'm just suggesting classical algorithms can help avoid some obvious mistakes, and also can alert developers when the AI is attempting to make an obvious mistake and might need retraining. (Or might need backdoors removed in this case.)
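For what it's worth, a minimal sketch of the colour-binning check from step 2 above, with made-up thresholds (real ones would need tuning against actual sign imagery):

```python
import numpy as np

def dominant_color_bin(sign_rgb):
    """Bin the centre quarter of a detected sign by dominant colour as a
    classical cross-check on the neural net. Thresholds are illustrative."""
    h, w = sign_rgb.shape[:2]
    center = sign_rgb[h // 4:3 * h // 4, w // 4:3 * w // 4].astype(float)
    r, g, b = (center[..., c].mean() for c in range(3))
    if r > 1.5 * g and r > 1.5 * b:
        return "red"         # stop/yield family; cannot be a speed limit sign
    if r > 1.2 * b and g > 1.2 * b:
        return "yellow"      # hazard family
    if g > 1.3 * r and g > 1.3 * b:
        return "green"       # guide signs
    return "white/black"     # speed limits, directions, etc.
```

If the net reports "speed limit" while the bin says "red", the disagreement itself is the alert.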
Where I live, yellow-and-black speed limit signs are usually optional, suggested speeds. Figuring out what a sign means needs to be done at a higher level than figuring out what a sign is.
As for advertising with road signs, may
Fatal crashes? (Score:2)
That will only work until human drivers are replaced by self-driving cars that don't tailgate those compromised cars.
After A Cup of Coffee I'm Thinking, Loebner (Score:2)
Where are you, TensorFlow? There's work to be done. Enough said.
Already demonstrated almost 51 years ago (Score:1)
Mind your own business, Mr. Spock, I'm sick of your half-breed interference, do you hear?
Don't want them even more now (Score:2)
Here come the exploits, and they're not even on the roads yet!
Just like with so-called 'smartphones', the more I hear, the stronger my desire never to ride in, let alone own, a so-called 'self-driving car', and to tell people they're nuts to trust their lives to one.