Researchers Fooled a Google AI Into Thinking a Rifle Was a Helicopter (wired.com) 160
An anonymous reader shares a Wired report: Algorithms, unlike humans, are susceptible to a specific type of problem called an "adversarial example." These are specially designed optical illusions that fool computers into doing things like mistaking a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms. While a panda-gibbon mix-up may seem low stakes, an adversarial example could thwart the AI system that controls a self-driving car, for instance, causing it to mistake a stop sign for a speed-limit sign. Adversarial examples have already been used to beat other kinds of algorithms, like spam filters. They are also much easier to create than was previously understood, according to research released Wednesday from MIT's Computer Science and Artificial Intelligence Laboratory. And not just under controlled conditions; the team reliably fooled Google's Cloud Vision API, a machine-learning algorithm used in the real world today. For example, in November another team at MIT (with many of the same researchers) published a study demonstrating how Google's InceptionV3 image classifier could be duped into thinking that a 3-D-printed turtle was a rifle. In fact, the researchers could manipulate the AI into thinking the turtle was any object they wanted.
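To make "specially designed" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way adversarial examples are generated; this is not necessarily the method the MIT team used, and the model choice and epsilon value are illustrative assumptions:

```python
# Minimal FGSM sketch; assumes PyTorch and torchvision. The model choice
# and epsilon are illustrative, and real attacks are often more elaborate.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, true_label, epsilon=0.03):
    """Nudge every pixel in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# x: a (1, 3, 224, 224) image tensor scaled to [0, 1]; y: a (1,) label.
# fgsm(x, y) looks identical to x to a human but is often misclassified.
```

The perturbation is bounded by epsilon per pixel, which is why the result is imperceptible to people while still flipping the classifier's answer.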
Humans (Score:3)
Many humans are also easily fooled into thinking that this is just a plain brick wall:
http://cdn.playbuzz.com/cdn/d2... [playbuzz.com]
Re:Humans (Score:5, Insightful)
Some can even mistake their wife for a hat!
Which actually raises an interesting question: can advanced AI systems develop rare and bizarre neurological disorders like those described by Dr. Oliver Sacks . . . ?
That's something we might want to think about avoiding if it is a military AI system . . .
Re:Humans (Score:5, Funny)
Like feeling a terrible pain in all the diodes on your left side? Or deciding that your human colleagues are jeopardizing your mission and need to be eliminated.
Re: (Score:1)
Exactly. It's called an "optical illusion". The exciting part is that "algorithms" (whatever those are) tend to fail in different (and thus, to us, surprising) ways, and that makes it difficult for us to grasp what is going on.
Years of fun ahead, I guess.
Re: Humans (Score:5, Insightful)
Humans are susceptible to optical illusions too, and optical illusions have caused driving accidents. Not to mention other human failings.
Re: (Score:2)
True for this particular example, yes, but there are plenty of optical illusions that persist even when rotated.
Re: (Score:2)
https://metrouk2.files.wordpre... [wordpress.com]
Re: (Score:2)
Yes, but in a real world situation, when you moved your head the illusion would be shattered.
In the real world, images aren't static. They move or you move - illusions created by arrangements of 3D objects transposed onto a 2D field are relatively easy to make, but in a 3D world they only exist when you're lined up "just so".
Vandalism will have to be punished harder (Score:1)
If and when self-driving cars really become a thing, vandalism of street signs will probably have to be elevated to a felony with mandatory minimums, even if no one gets hurt. It'll also have to be something where minors can be charged as adults, because they're the ones who probably do the majority of it, and you know there will be teens who'll think it's funny to cause a 10-car pile-up.
Re: (Score:3)
Vandalizing street signs has been possible for a long time: just remove one street sign and replace it with another. I don't think this has ever been a wide-scale problem.
There is a workaround (Score:2)
Re: (Score:2)
It is a compromise - making the streets safe enough while at the same time not causing huge traffic jams.
You could, for example, control cars like trains (only one car allowed in a "block" between two traffic lights) or airplanes (submit trip plan before driving, obey traffic control instructions etc), but it would be extremely expensive and/or would essentially stop the traffic and a democratic government that tried implementing this would not last very long.
Re: (Score:2, Insightful)
No, vandalism should have the same punishment as it has today - cars have to be able to drive as well as human drivers and so the dangers should be the same anyway. If the cars are stupid, they shouldn't drive themselves, period.
The rest is just brain-damage-level reasoning mixed with prejudices.
Re: (Score:1)
No, vandalism should have the same punishment as it has today - cars have to be able to drive as well as human drivers
OK...
But then:
and so the dangers should be the same anyway. If the cars are stupid, they shouldn't drive themselves, period. ...
You're setting a higher standard for cars than you do for humans.
Every time I drive, I find it hard to believe the beings driving some of the other cars are the same species that put people on the Moon and got them home safely.
They just about advertise their stupidity:
"Hey, let's go 15 below the speed limit in the passing lane! And ignore all the cars passing me on the wrong side!"
Either you're an obliviot because you don't know what's going on (and shouldn't be driving), or you do know what's g
Re: (Score:2)
"Automated cars have already shown that they are very poor at dealing with humans when they do unexpected things."
Actually, they've been coping quite well with this (like humans making illegal turns in front of the robocar), whilst I can show you any number of human drivers who can't even cope with having to pass an opposing car on a narrow country lane because they're pathologically unaware of the width of their vehicle and refuse to proceed unless they have 6 feet of clearance on either side.
Unlike humans, who need
Re: (Score:3)
Myth. [rollingstone.com] US prisons are not full up with people for marijuana convictions, especially not for simple possession.
Of the 750K annual US marijuana arrests:
About 40,000 inmates of state and federal prison have a current conviction involving marijuana, and about half of them are in for marijuana offenses alone; most of these were involved in distribution. Less than one percent are in for possession alone.
There are 2.2 million [wikipedia.org] US prisoners at the state and federal level, so less than 2%. It's such a small % that the keepers of the keys (do they use keys anymore?) can keep their prisons full by delaying parole releases.
But yes, ethnicity still plays too large a role in sentencing, so you're not completely wrong.
Re: (Score:1)
But yes, ethnicity still plays too large a role in sentencing, so you're not completely wrong.
Ethnicity plays too large a role in committing the crimes in the first place. Maybe people should work on that angle a bit more.
Re: (Score:2)
"Ethnicity plays too large a role in committing the crimes in the first place."
When it comes to drug offences, they're committed in roughly equal numbers across all ethnic groups, with a higher rate in higher socioeconomic groups.
That isn't reflected _at all_ in US criminal charging and conviction rates, with high status individuals usually being able to get off with a warning or by paying their way free, whilst low status individuals are more likely to both be convicted for the same crime and get substanti
Re: Vandalism will have to be punished harder (Score:2)
But yes, ethnicity still plays too large a role in sentencing
That's a myth also, perpetuated by the same kind of shoddy "research" as the supposed "wage gap". When you control for other factors, they both largely go away.
When it comes to the wage gap, controlling for actual hours worked eliminated the majority of it; when it comes to sentencing disparities the same happens when you control for aggravating factors such as previous criminal history and use of violence/weapons in the commission of the crime.
Re: (Score:3)
Yes but you have to keep in mind that the U.S. imprisons more of its population than Russia. And almost 8x the rate of most civilized countries.
Well, sure, if you're willing to go full on apples-to-oranges with that comparison to civilized countries.
Re: (Score:3, Interesting)
Do stupid crimes, pay stupid prices.
If anyone wants my sympathy because they got busted for drugs, tough luck. Drugs are illegal, and many state/city/federal prosecutors punish harshly for them. If drug users are too stupid to figure that out, that's their fault.
And, yes, I know several drug users, any one of which could end up in jail for years if busted. And I have said that to them,
Re: (Score:2, Insightful)
Doesn't matter what color I am, I don't use illegal drugs, because it's not worth the risk. Considering the risk is higher if I am black, avoiding them would make even more sense if I were black than white.
Selfish? Because I don't put myself in danger of being arrested? Because my mother doesn't worry that she'll have to spend her rent money to bail me out of jail? Because I think differently than you? Yeah, whatever.
Re: (Score:3, Insightful)
You make several valid points. The biggest rebuttal to them is that the countries you mention have mostly homogeneous populations, without the racial history that defined the US. The biggest problem (imho) today is that the US has a large population that is being told every day that they have to "stick it to the man". Modern black culture in America is its own worst problem, and many who try to escape it are punished by that same culture for "being white".
Poor immigrants come in from Asia every year. We do
Re: (Score:2, Insightful)
The biggest problem (imho) today is that the US has a large population that is being told every day that they have to "stick it to the man".
Yes, the right-wing anti-government militia is certainly a problem!
Does BET have a show called "Hold My Beer" that features your mythical creatures denigrating women and breaking laws? No? Nothing like the rap videos featuring crime and bad behavior?
Or, do your vast right-wing militia call someone an "Uncle Tom" for wanting to leave the group to fit into the successful business culture in America? No again?
Do those militia depend on the government for their rent money, and cheat the system to get it?
I'm not sure what your definition of "stick it to the man" is, but you ma
Re: (Score:2)
Oh please, don't even go there. You live in the same modern world I do. You have given up at least the same 'bodily autonomy' as I have to continue to enjoy that modern world.
Let me explain it this way: I could spend all my money drinking alcohol, either at the bar getting drunk, or buying bottles of whiskey and drinking in the privacy of my own home. I don't do that. Not because the government says I can't, but because I choose not to. Similar to illegal drugs (or legal drugs, or, for that matter, auto-ero
Re: (Score:2)
Yes but you have to keep in mind that the U.S. imprisons more of its population than Russia.
And almost 8x the rate of most civilized countries.
Prison population (total inmates; rate per 100,000):
US    2,193,798   737
RUS     874,161   615
CHN   1,548,498   118
AUS      25,790   125
UK       80,002   148
FRA      71,190   103
Once you get a prison record (even a jail record really) in the U.S. it is very hard to get a decent job again. Even an arrest record can kill your chances for many job categories.
And keep in mind that white entertainers pay a $4,000 fine for a f
Re: Vandalism will have to be punished harder (Score:1)
Cars reading street signs is a temporary solution to road use data acquisition.
As with all new automation there is a tendency to assume that removal of a human leaves a human-shaped hole which will then need to be filled with a human-shaped robot. In this case that would be a robot that reads street signs.
In the long run there will be no street signs and vehicles will determine things like appropriate speed from stored information on the road system, observation of the local environmental conditions and comm
Re: (Score:1)
Why would a vandalized street sign be any worse with a self driving car vs a human?
Humans misread and miss street signs all of the time, self driving cars will need to be able to cope with the behavior already.
Re: (Score:3)
https://www.bleepingcomputer.c... [bleepingcomputer.com]
A stop sign will still look like a stop sign to you or me, but the car's AI can be fooled into seeing something totally different.
Re: (Score:2)
So? The AI has a bug. And likely several others.
Most places on earth do not allow AI to drive cars on public roads. Some places that do allow it do so experimentally and provisionally. Bugs are not a coincidence but are expected at the current level of development.
Re: (Score:2)
A stop sign is defined by its size, shape and colour (octagonal and red), which is the same virtually everywhere in the world, but the word "stop" is not.
Similarly, all other road signs have legal definitions of size, shape, colour, border colours, reflectivity and artwork.
Fooling a general purpose recognition algorithm is one thing, but there are enough cues in the sign's shape and size to hand off to specific "regulatory/advisory signs" routines in the first instance and generate an exception report for si
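A rough sketch of what such a shape/colour gate might look like, assuming OpenCV; the HSV thresholds and minimum contour area are illustrative guesses, not tuned values:

```python
# Hypothetical sketch, not a production detector: gate on colour + shape
# before any learned classifier gets a vote. Thresholds are guesses.
import cv2

def looks_like_stop_sign(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:   # ignore specks
            continue
        poly = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(poly) == 8:             # eight vertices: octagon candidate
            return True
    return False
```

Anything that fails the gate would go to the exception path rather than being force-fit to the nearest-looking label.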
Re: (Score:2)
If and when self-driving cars really become a thing, vandalism of street signs will probably have to be elevated to a felony....
I think this will be a temporary issue, at best. First, this has never been a big issue. Second, I suspect that street signage is already on the endangered species list. There is already nothing significant stopping such signage from becoming part of vehicles' on-board systems -- whether integrated into new vehicles, or as add-ons to older ones.
Re: (Score:3)
If and when self-driving cars really become a thing, vandalism of street signs will probably have to be elevated to a felony with a mandatory minimums, even if no one gets hurt. It'll also have to be something where minors can be charged as adults because they're the ones who probably do the majority of it, and you know there will be teens who'll think it's funny to cause a 10 car pile up.
As others here note, traffic disruption pranks aren't a big problem now even though stealing signs, or introducing obstacles is already possible.
No defense against attempts to disrupt traffic is going to cover all cases, but an excellent one already implementable for all self-driving systems should be obvious.
Self-driving cars aren't navigating a blind road grid, they already have virtually complete maps of the entire road system. Give each car a database of the location of all signs in existence. Humans ne
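A toy sketch of that map-lookup cross-check; the coordinates, sign names, and tolerance radius are all invented for illustration:

```python
# Hypothetical sketch of the database cross-check idea: every value here
# (coordinates, sign names, tolerance) is made up for illustration.
from math import hypot

SIGN_DB = {(1520.4, 980.1): "stop",            # (x, y) in a local map frame
           (1750.0, 1010.3): "speed_limit_50"}

def expected_sign_near(x, y, radius=15.0):
    """Return the mapped sign near (x, y), or None if nothing is mapped."""
    for (sx, sy), sign in SIGN_DB.items():
        if hypot(sx - x, sy - y) <= radius:
            return sign
    return None

def reconcile(detected, x, y):
    expected = expected_sign_near(x, y)
    # Prefer the surveyed map over a single camera frame; a mismatch is
    # itself useful signal (vandalism, a new sign, or a fooled classifier).
    return detected if expected is None else expected
```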
Re: Still better than humans (Score:1)
Nice flamebait. It's misleading to say the system was taught incorrectly.
Training an AI is done with a training data set that's intended to match the statistical distribution of the full population, minimizing the cost due to errors. In this case, the problem is classifying pictures, and the error is misclassifying an image. The training data set does have some effect on the classification rules. However, in most real world problems, there will always be some data that's classified incorrectly. Humans class
Re: (Score:2)
so it's definitely feasible to craft an image that humans classify one way but is in an area of the distribution where the AI is likely to misclassify it.
It's also possible to do it the other way around: craft an image that humans misclassify. Or cats: https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
They find the original course of belief again through other knowledge of the world
Sometimes, yes, but plenty of times people move on, not realizing they saw the wrong thing.
Re: (Score:2)
An artificially created picture is not a photo (or even a set of photos) of a real physical object.
The objects that trick the AI are also artificially created. The turtle has specific patterns on its shell. If you printed the circles on a 3-D object, the cat would still think they were moving. It's very much a similar thing.
Re: (Score:3)
I do not think the system was actually "fooled"; it was taught the wrong thing.
Well on principle the solution is simple then: only teach the system the right thing. Just as in programming you can avoid bugs by not making mistakes.
Programmed totally backwards (Score:5, Insightful)
All of these vision AIs are programmed backwards -- for convenience. This random object looks "more like" a speed limit sign and "less like" a stop sign. Great. Nobody cares about how much something appears "like" something else.
You can ask any 10-year old. A stop sign is a red octagon. Any 16-year old will say it also has a white border, and white lettering in the middle. Any experienced driver will add that it appears at some sort of intersection, obstruction, or event, alongside a narrow road.
Now, if you see a red octagon, and you stop, and it turns out to be a giant lollipop, then that's good. Because a giant lollipop on the road is absolutely acting as a stop sign.
If something isn't a red octagon, then it's definitely not a stop sign.
The problem here is that google's vision AI doesn't identify a sign according to what defines a stop sign -- a red octagon on the side of the road. That's because it's highly stupid.
And the question really comes down to something much simpler. If I put a big square sign on the side of the road, blue, with yellow lettering, that says "please pause, thank you", will google treat it as a stop sign? Good bet that any driver who sees it (and can certainly be forgiven for not noticing it) will stop.
Conversely, on a highway, at 120kph, if I put a real stop sign on the side of the road, will google treat it as a stop sign? No human driver is going to slam on the brakes.
Google's not thinking. Therefore, it ain't an AI. It surely "looks like" an AI, but it's not an AI. It uses collected intelligence to determine what the object is, but it doesn't use its own intelligence to make decisions. It doesn't make decisions at all.
Show me a vision system that can take any photograph of any road, and decide whether or not it should stop the car. Doesn't need to be right or wrong, correct or incorrect, it just needs to make a decision, reliably, that makes sense. See, if it can do that, "reliably", then we can change the signs for them. We chose the signs for us for a reason. Humans see red first, so stop signs are red. If machines have trouble with octagons, and love purple, then we can give them that instead. Dual signage is common in multi-lingual communities.
But these shitty AI systems are much worse. They don't even make their classifications reliably -- because the more data they collect, the more they distract themselves. So a guaranteed "this is a stop sign, 100%" can change a year later, as it "learns", such that the very same stop sign is now only 80%. There's no fortification. There's no stubbornness. That's a problem.
Re: (Score:2)
Show me a vision system that can take any photograph of any road, and decide whether or not it should stop the car.
A human vision system can't do that either. Plenty of accidents are caused by a human driver misinterpreting what's in front of them.
Re: (Score:2)
Plenty of accidents are caused by plenty of reasons. Start excluding the exceptions, like weather, breakage, environmental distractions, procedural failures, and acts of god, and your "plenty" becomes pretty small. Take that "plenty", and divide it by the number of non-incidents, and you can call your "plenty" virtually zero.
Millions of cars through billions of intersections every day in my city alone. Maybe ten accidents of consequence for 10 million people. 1 in a million.
Re: (Score:3)
Virtually _all_ road crashes and incidents are caused by human driver failure.
Virtually _all_ of what's left is caused by "road engineering failure" - ie, poor placement of lines or signs, causing confusion. These show up as statistical black spots. This is also a human failure and is frequently made worse by traffic engineers refusing to acknowledge they screwed up (there's a layer of politics and liability evasion in that too).
A vanishingly small number are actual "honest to goodness" accidents (mechanic
Re: (Score:2)
I agree with a lot of what you've said, but I'll adjust a definition therein.
It may be a "human error" to crash as a result of blinding fog -- drive slower, don't drive, change lights, change street lighting, whatever. But that's not human error because we accept a certain amount of certain types of risks, otherwise we wouldn't be able to do anything cost-effectively.
With that definition adjustment, I actually don't consider most of what you've described as human error. We've built a system designed to be
Re: (Score:2)
"It may be a "human error" to crash as a result of blinding fog"
This is one of the classic illustrations of why humans are unsuited to driving.
Virtually every country has 2 speed limits defined in law - the posted maximum speed for the road AND the speed at which you can stop in the available visible distance (half that distance where there is no centreline, because the road is technically a single lane) - and the prevailing limit is the LOWER of the two.
When you read crash analysis reports that state "exce
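The "stop within the visible distance" rule is just kinematics; a toy calculation follows, with deceleration and reaction-time values that are assumptions, not legal standards:

```python
# Hypothetical sketch of the second limit: the fastest speed from which
# you can stop within the distance you can actually see, using
# d = v * t_react + v^2 / (2 * a). Deceleration and reaction time are
# assumed values, not legal standards.
from math import sqrt

def max_safe_speed(visible_m, decel=7.0, reaction_s=1.5):
    """Highest speed in m/s that still stops within visible_m metres."""
    # Solve visible_m = v*reaction_s + v^2/(2*decel) for v.
    return decel * (-reaction_s + sqrt(reaction_s ** 2
                                       + 2 * visible_m / decel))

# 50 m of visibility -> about 18 m/s (~65 km/h), whatever the posted limit.
```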
Re: (Score:2)
Dude, you're saying that machines are better driving on roads than humans. You've absolutely zero evidence of that, given that there are no machines capable of driving on arbitrary roads at arbitrary times/climates/scenarios.
You've based your entire philosophy on something that's never been done.
There IS nothing better than a human driver, because there is nothing other than a human driver. Let me know when there is. We can talk again then.
Re: (Score:2)
"there are no machines capable of driving on arbitrary roads at arbitrary times/climates/scenarios."
That's exactly what the DARPA challenge has been about for the last 20 years and I'll wager that most humans would fail that same test you've posited - else we wouldn't see so many "russian dashcam" and "american dashcam" crash videos.
In the current models, if a machine can't drive a particular road or conditions, it will stop and ask for assistance, or take it very slowly until the route has been learned.
I r
Re: (Score:2)
Your metrics are just plain out-of-whack.
Every road that exists was built for people needing to drive it. Therefore, every road can be driven by human drivers.
Compared to the number of humans who drive any given road, you see incredibly few dashcam crash videos. Of the millions of cars through billions of intersections every day, how many crashes do you see? 10,000 per day? That's effectively nothing (in terms of driving skill success).
Choosing not to pass is a valid choice. Whether or not it's possib
Re: (Score:2)
Amongst other things I've been a commercial driver and seen the way real people drive on real roads - including ones that shouldn't be attempted with the vehicles in question. One of the biggest problems with human drivers is inability to read the conditions and pressing on regardless.
Assuming you're in the USA, you live in a country with a surprisingly high road death rate - much higher than we would tolerate in most parts of western europe - and that's despite the highway speeds and following distances on
Re: (Score:2)
Conversely, on a highway, at 120kph, if I put a real stop sign on the side of the road, will google treat it as a stop sign? No human driver is going to slam on the brakes.
That is not true at all; there are always traffic accidents or highway construction where you can unexpectedly encounter a hand-held stop sign. Humans at least have a sense of the context of the current situation. Your solutions don't address that, and AI can always get a little bit more information to react (like congestion slowdowns from other sources).
Re: (Score:2)
Highway construction, at least in my country, is announced about 2km away from the actual construction, then there are progressively lower speed limit signs (90 - 70 - 50) before the construction and signs specifying how to proceed.
If there is a traffic accident that obstructs the road then I will see it from a distance (unless there is a very thick fog, but then I would be driving very slow anyway) in addition to any signs (a hazard sign must be placed at least 50 meters away from the accident and it looks
Re: (Score:2)
Congrats on completely missing the point. Pentium got it. Maybe you shouldn't be driving. Maybe you're an AI.
Re: (Score:2)
"The ONE exception is black on orange. This means the sign is a temporary construction sign."
In the US/CA AU/NZ - it's different in countries using UN-standard signage, but the point is that there ARE signage standards and not that many individual signs in the databases - few enough that a robocar can keep the entire world database on board and still have capacity to spare.
In virtually _all_ parts of the world, temporary signs are in the local traffic authority's database, because they need to know where th
Re: (Score:2)
" there are always traffic accidents or highway construction where you can unexpectedly encounter a hand-held stop sign."
In an era of ubiquitous communications systems, there's no such thing as "unexpectedly encountering" anything. Apart from warning signs which humans see, such works are notified and in transport databases as road works and therefore are transmittable to autonomous vehicles.
The "guy holding a sign" can and will in future be required to be using a transponder (wearing one plus one in the si
When machine vision fails... (Score:2)
Kinney gets machine gunned.
https://media.giphy.com/media/FvaTwHY7YDn20/giphy.gif [giphy.com]
Extrapolated information (Score:3)
These sorts of errors disappear as you add more information, thus reducing the amount of extrapolation needed. I was driving on a rural highway at night when suddenly it seemed like the road was twisting and warping. This went on for about 10 seconds until I moved into an area with fewer trees, and I realized what I thought were billboards in the distance were actually boxcars on a moving train. My brain had been assuming they were fixed points in space, when in fact they were moving. So initially it erroneously concluded the billboards were static and the road was warping, but the moment I recognized them as boxcars my brain correctly realized the "billboards" were moving and the road was static.
So in these early stages of visual AI, we're going to encounter a lot of these errors. But as the AI becomes more sophisticated and able to take into account more contextual information, these errors will begin to disappear. They probably won't disappear entirely, because you can only glean so much information from a static photo. But for real-life applications like security, the turtle/rifle error is highly unlikely to happen once the AI starts comparing the questionable object in multiple frames in a video instead of a single frame, or starts comparing it from multiple viewpoints provided by multiple cameras.
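As a rough illustration of that multi-frame idea, here is a sketch that averages per-frame classifier scores over a short window; the window size and the per-frame dict interface are assumptions:

```python
# Hypothetical sketch: average per-frame label probabilities over a short
# window so a single adversarial frame can't flip the decision.
# `probs` is assumed to be a dict of label -> probability for one frame.
from collections import deque

class TemporalVote:
    def __init__(self, window=15):
        self.frames = deque(maxlen=window)

    def update(self, probs):
        self.frames.append(probs)
        labels = set().union(*self.frames)
        avg = {label: sum(f.get(label, 0.0) for f in self.frames)
                      / len(self.frames)
               for label in labels}
        return max(avg, key=avg.get)   # best label after smoothing

# voter = TemporalVote()
# for frame_probs in video_classifier_outputs:
#     decision = voter.update(frame_probs)
```

An adversarial pattern tuned to one viewpoint then has to survive every frame in the window, not just one.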
"Unlike humans"? (Score:2)
Algorithms, unlike humans, are susceptible to a specific type of problem called an "adversarial example." These are specially designed optical illusions that fool computers[...]
In other words, just like the optical illusions that humans are notoriously susceptible to? Jesus. The phenomenon is actually somewhat interesting, but maybe you shouldn't start out with a blatant self-contradictory assertion.
Concerned (Score:2)
Re: (Score:2)
Artificial intelligence isn't like natural intelligence at all.
Natural STUPIDITY, yes.
Nobody with a brain believes that a bankrupt businessman or politician is capable of telling the truth, and certainly not when they're making promises that don't affect themselves one bit.
To be honest, my bugbear is that AI was always a misnomer, because it's not intelligent at all, precisely because of things like this. There is no line of thinking that leads it to believe that a 3D turtle is a rifle - if you asked it to
Re: (Score:2)
if you asked it to tell you WHY it was a rifle, and it could pick out features on the image that look like a rifle from a certain angle, yes
Is the dress white/gold or blue/black? Use reasoning.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Easy.
Allow me to provide you an answer, using intelligence and reasoning, that would be impossible for any current AI to produce in the same manner.
The dress is blue/black. Simple colour measurements on the original image determine this quite conclusively as does the purchaser and manufacturers of the dress.
However, depending on the individual perception of the detection devices in question, and their associated processing of nearby stripy colours, some people perceive it as one, other or both of the above
Re: (Score:2)
The day an AI can return "Answer A, Answer A&B, and Answer Z, which you forgot to list in the possibilities", and reason its way through it, you can say we have AI.
When I look at the dress, I only see white/gold, so I failed your test despite not being an AI.
Simple colour measurements on the original image determine this quite conclusively as does the purchaser and manufacturers of the dress.
Pixel values are light blue/yellowish-brown. And whatever the manufacturer says is irrelevant for my perception. I believe what they are saying, but that doesn't change what I see.
Re: (Score:2)
When I look at the dress, I only see white/gold, so I failed your test despite not being an AI.
He said an AI, not every AI. Right now, while not every person can do what is prescribed above, some people can but no AI can.
Re: (Score:2)
An AI image classification system already outputs multiple answers with probabilities.
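For instance, a typical classifier head ends in a softmax over labels and reports the top few, something like this sketch (the logits here are made up):

```python
# Hypothetical sketch: a classifier's raw scores (logits, made up here)
# turned into the familiar top-k list of labels with probabilities.
import math

logits = {"turtle": 2.1, "rifle": 1.9, "teapot": -0.5}
z = sum(math.exp(v) for v in logits.values())
probs = {k: math.exp(v) / z for k, v in logits.items()}
for label, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {p:.2f}")   # turtle: 0.53, rifle: 0.43, teapot: 0.04
```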
Re: (Score:2)
It does output some answers that aren't on your list. So who's smarter ?
Re: (Score:2)
No one has made this point "originally". If you mean ledow, his original point [slashdot.org] was mainly that the AI is not really "intelligent" (as if that means anything in particular) due to not being able to explain WHY [sic] the decision was made.
He shifted to this new definition of "you can say we have AI" later - which included answering over and above the list of options.
The real real answer is that it doesn't matter as long as we have to teach the AI in the first place.
So which human could identify a rifle at the moment of their birth?
If real real answer to the question is that no one is "smarter", does smarter re
Re: (Score:2)
So which human could identify a rifle at the moment of their birth?
None, of course. Both the AI and the human will, hopefully, grow smarter as they're taught, of course; so the real question is whether a given AI or a given human will fare better given identical inputs over time. You seem to want to reduce things to absurd levels for some reason.
It's really not important that humans and AIs both start at 0, it's more or less a given, nor is it important that one or the other might fare better with better inputs; again, that should be expected. What really matters with r
Re: (Score:2)
You seem to want to reduce things to absurd levels for some reason.
You are confusing me with yourself.
You said that it doesn't even matter if AI has to be taught. And this is the real answer twice over. This was the most absurd level in this thread. It contradicts many of your statements in this recent post about both starting at 0 and improving with training.
Since everyone involved here has to be taught, by your "original" statement, it doesn't matter for anyone.
I never made statements reduced to such absurd level.
Re: (Score:2)
You said that it doesn't even matter if AI has to be taught.
Try taking that statement in the context of the question I was answering. You asked who was smarter and I pointed out that it doesn't matter who's smarter as long as we're the ones who have to do the teaching; that in no way means that it doesn't matter that we're the ones doing the teaching. In fact, it places all of the importance on us doing the teaching which, in turn, places importance on the AI doing the learning.
What you were actually replying to just then was an entirely different point: it is a g
Re: (Score:2)
" It does output some answers that aren't on your list. So who's smarter ? "
Are you taking only about AI systems personally trained by you ?
Re: (Score:2)
Touch screen keyboard.
s/taking/talking/
Re: Bah.... (Score:2)
The amount of yellow needed to turn black into that shade of gold is retarded. Way more than the amount of blue you need to turn the white into a light blue.
Re: (Score:2)
To be honest, my bugbear is that AI was always a misnomer, because it's not intelligent at all, precisely because of things like this. There is no line of thinking that leads it to believe that a 3D turtle is a rifle - if you asked it to tell you WHY it was a rifle, and it could pick out features on the image that look like a rifle from a certain angle, yes, you could claim it was intelligent. But it can't do that
Are you saying the purpose of the algorithm was to explain to "you" WHY it decided a certain thing was a rifle, that it was tested successfully before release, and yet it couldn't explain it to "you"? Most algorithms I know of don't have this purpose - except to explain to the developers / support staff, and that too typically only in debug mode.
If its purpose does not include explaining those decisions to "you", is there any failure in the situation you described ?
BTW, why shouldn't an AI that could write billions
Re: (Score:2)
Donald Trump conned....
And that's different from any other President...how?
Re: (Score:2)
Donald Trump conned....
And that's different from any other President...how?
Well DUH! .... the level, quantity and quality of the garbage data (or 'adversarial examples' if you wanna get technical), what else?
Re: (Score:2)
Buh buh buh ... RuSSiansS... Pussy grabbing...rich orange guy ... etc.
Re: (Score:3, Funny)
Here we see Trump Derangement Syndrome in full effect. A story completely unrelated to Trump, and yet the poster manages to shoehorn a spittle-flecked rant into the thread. AND it got modded up. That's collective derangement. Remember folks, these people losing their shit are the same ones who told us they were qualified to rule us because they were so educated and erudite. Would actual educated people be throwing temper tantrums and acting out in public like this?
OK, I resent that. I was comparing Trump's success at fooling 62,979,879 separate instances of the same natural intelligence to what these scientists achieved. If anything Trump's achievement is greater since each one of the 62,979,879 instances of the NI Trump fooled with his garbage data is a functionally quite distinct variation of the base NI whereas these scientists only managed to bamboozle a single variant of a much more primitive AI with much less variation from instance to instance. How is that un
Re: (Score:2)
One of us is. But there isn't spittle on my chin, so it's not me. Thank you for the clarification of your viewpoint.
Re: (Score:2)
That's why I told your mother to lay off the mexican food on the day before my weekly trip thru her town. All day long people have been asking me if I have shit on my chin, and I just point to your mom and say "Ask her."
Re: (Score:2)
Indeed, it's too brown and odorous to be spittle.
That's why I told your mother to lay off the mexican food on the day before my weekly trip thru her town. All day long people have been asking me if I have shit on my chin, and I just point to your mom and say "Ask her."
Just for the record ... that last one wasn't me.
Re: (Score:2)
Good to know. :^)
Re: (Score:2)
Or they're different ones who are merely pissed off at Trump. Far more likely different ones, but that wouldn't fit into your narrative.
"Some Very Fine People on Both Sides." Your own leader says so, dummkopf.
Re:Google should know already... (Score:5, Interesting)
The key problem with AI is its trust in its sources. They haven't programmed in a silly algorithm yet. When kids are learning to process the world, they learn that when things are in the wrong context, it is probably silly or just wrong. Even if it's from a trusted source, a kid will laugh at their parent if the parent says something that contradicts their view of the world - such as when the parent, playing with the kid, substitutes a toy car for a doll and plays with the car like a doll. The child finds this amusing because the context is all wrong. An AI algorithm seeing this would just say this toy's usage has expanded to include being used as a doll, so it must be a doll. There is no questioning, no saying "no, that is not how you play with that toy". It will take the source as factual and just add it to its list.
Re: (Score:2)
"They havn't programmed in a silly algorithm yet. "
The other point of note is that the classifier doesn't do the machine equivalent of turning its head to verify the image.
Note that the tabby in the first example was correctly classified when the image was rotated.
I see this a lot when tracking social messaging scammers. The "clever" ones mirror or slightly rotate stolen images that the classifier knows about, so that a well-known photo of Brianna Lee becomes something completely new to the classifier. (th
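A rough sketch of one countermeasure: hash both the image and its mirror before matching. This assumes the third-party Python `imagehash` package and Pillow; the Hamming-distance cutoff is an illustrative guess:

```python
# Hypothetical sketch: hash both orientations so a mirrored re-upload
# still matches a known image. Assumes the `imagehash` package + Pillow;
# the distance cutoff is an illustrative guess.
from PIL import Image, ImageOps
import imagehash

def matches_known(path, known_hashes, cutoff=5):
    img = Image.open(path)
    for candidate in (img, ImageOps.mirror(img)):
        h = imagehash.average_hash(candidate)
        # ImageHash subtraction gives the Hamming distance between hashes.
        if any(h - k <= cutoff for k in known_hashes):
            return True
    return False
```

Slight rotations need a different trick (e.g. hashing a few rotated variants), but the mirror case is cheap to cover.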
Re: (Score:2)
"it is turtles all the way down."
First, it's only _one_ turtle: the Great A'Tuin.
Second, there are also 4 elephants in between.