Machine Learning Confronts the Elephant in the Room (quantamagazine.org)
A visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take. From a report: In a new study [PDF], computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing.
It won't be easy. The new work accentuates the sophistication of human vision -- and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene -- an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.
"There are all sorts of weird things happening that show how brittle current object detection systems are," said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
To be fair to AI (Score:5, Funny)
Re: (Score:3, Funny)
Re:To be fair to AI (Score:5, Insightful)
A four-year-old wouldn't though: she would name the objects then say "why is there an elephant in the living room?".
Re: (Score:2)
Re: To be fair to AI (Score:1, Insightful)
Many animals that fail a mirror test have managed to live for generations, catch prey and live well off the land. Don't be so fast
Re: To be fair to AI (Score:1)
But they don't have licenses to drive cars and trucks on highways... So who cares?
Re: (Score:1)
I wouldn't want to have a parakeet's object recognition system driving cars
I've never understood why birds run into mirrors/reflective objects. Even if they don't understand a reflection, I would think they'd still try to not run into the other bird that is flying at them.
Re:To be fair to AI (Score:4, Funny)
They think they're maybe bigger than the other bird, so of course it will change course to avoid them. They're playing chicken.
Re: (Score:2, Insightful)
1. Crashing into other small birds is usually not dangerous.
2. The "other bird" is a competitor. Fighting it (for territory/food/mating purposes) may be important. And that "other bird" seems kind of aggressive too. Got to crash into it, teach it a lesson (or get chased away).
3. Bird crash avoidance protocol may have a simple rule like "when head-on, always turn left". Works when meeting another bird, not so much when meeting a mirror.
Re: (Score:1)
It's possibly a faster version of when we walk up to somebody and both happen to change direction the same way at the same time, and both keep making counter-corrections until there is room to pass. Birds being in 3D space, random corrections probably work the vast majority of the time against other birds, an
Re: (Score:2)
I've seen people do this. They assume the other person is going to change course, or continue on course, and when this fails to hold true there's a bump. I had a bicycle veer into me in another country because I stopped to let the bike pass me before I continued across the road, while the cyclist assumed I would just continue on. I did learn some new words because of that. People do this in cars a lot, especially when one driver is aggressive and assumes others will get out of their way.
Re: (Score:2)
limited concepts (Score:5, Interesting)
it has probably seen an elephant, but probably not in a living room.
and the net has probably a limited concept of the context.
(the big gray blob with a leathery texture in the middle of a living room is usually a sofa)
cue the recently published research about machine vision and sheep
(whenever the system sees white dots spread on a green scenery background, it says "sheep", even if it is white rocks sprinkled around the grass.
this prompted the researcher to crowd-source pictures of goats and sheep doing unusual stuff. and whenever the CV net saw a fluffy texture, it assumed the most frequent word in that context, calling "dog" any fluffy texture carried by a human in their arms, and "cat" any fluffy texture on a kitchen table, even in the case of a shepherdess carrying a lamb, or a mischievous goat invading a kitchen)
the thing is: CV nets are basically only good at what they were trained for. if you give them something completely weird and unusual, they might react weirdly.
Re: (Score:2)
Re: (Score:2)
If you got hit by an elephant wad, you'd know what it was.
Re: (Score:1)
The interesting thing is how young children deal with incongruous imagery. Most childish jokes rely on (verbal rather than pictorial) imagery in unusual or impossible situations. The child processes it and laughs at the oddity.
Machines don't understand context, so a scene that is "odd" isn't treated as dangerous or humorous. (Things out of place are a staple of horror films - although they are usually much more subtle than an elephant in a room.)
Re:limited concepts (Score:4, Insightful)
To be fair humans have trouble with this too. When we see things at a distance or in poor lighting our brains do a lot of assuming to help decide what it is. Something in an unusual context can often be confusing at first, as the brain goes for the most common and likely options first.
One way to help with this is to train the AI to recognize when it is uncertain. A lot of effort goes into getting high accuracy levels, but usually very little into recognizing situations when the answer just isn't clear.
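To make that concrete, here is a minimal sketch of the "know when you don't know" idea (made-up thresholds, not anything from the paper): refuse to commit when the top softmax score is low or the margin over the runner-up is small.

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores into probabilities."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_with_abstention(logits, labels, min_conf=0.7, min_margin=0.2):
    """Return a label only when the model is reasonably sure.

    min_conf and min_margin are illustrative thresholds, not values from
    the study; anything below them gets flagged for a "second glance"
    (another model, another frame, or a human).
    """
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]
    top, runner_up = probs[order[0]], probs[order[1]]
    if top < min_conf or (top - runner_up) < min_margin:
        return "uncertain"
    return labels[order[0]]

# An ambiguous "couch vs. chair" score pattern gets flagged instead of guessed.
print(classify_with_abstention([2.0, 1.8, 0.1], ["couch", "chair", "elephant"]))
```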
The other thing that really helps humans is time. It's easier to tell a sheep from a rock when you see it move its head, or even just see its coat moving in the breeze. Static photos don't offer that additional information.
Re: (Score:2)
and the net has probably a limited concept of the context.
That's actually what should fix this: if something anomalous happens, it should review context, identify if the context appears to be correct, and then cite that the thing is anomalous and extract it from its processing of context. That way you don't try to identify context as a whole; rather, you identify things that imply context and things which are inappropriate to those contexts, determine what seems to be most out-of-context, and question why there is an elephant in the room.
That's artificial reasoning
Re: (Score:1)
Logic is a little bird.
Re: (Score:2)
This reminds me of the Parable of the Blind Algorithms and the Elephant.
Re:To be fair to AI (Score:5, Insightful)
Indeed, Republicans randomly showing up in my living-room makes me freak out too :-)
Seriously, though, AI will have to be broken into more digestible and manageable chunks to be practical: a kind of hybrid between expert systems and neural nets. Letting neural nets do the entirety of processing is probably unrealistic for non-trivial tasks. AI needs dissect-able modularity to both split AI workers into coherent tasks, and to be able to "explain" to the end users (or juries) why the system made the decision it did.
For example, a preliminary pass may try to identify individual objects in a scene, perhaps ignoring context at first. If say 70% look like household objects and 30% look like jungle objects, then the system can try processing it further as either type (house-room versus jungle) to see which one is the most viable*. It's sort of an automated version of Occam's Razor.
In game processing systems, such as automated chess, there are various back-tracking algorithms for exploring the possibilities (AKA "game tree candidates"). One can set various thresholds on how deep (long) to look at one possible game branch before giving up to look at another. It may do a summary (shallow) pass, and then explore the best candidates further.
My sig (Table-ized A.I.) gives other similar examples using facial recognition.
* In practice, individual items may have a "certainty grade list" such as: "Object X is a Couch: A-, Tiger: C+, Croissant sandwich: D". One can add up the category scores from all objects in the scene and then explore the top 2 or 3 categories further. If the summary conclusion is that the scene is a room, then the rest of the objects can be interpreted in that context (assuming they have a viable "room" match in their certainty grade list). In the elephant example, it can be labelled as either an anomaly, or maybe reinterpreted as a giant stuffed animal [janetperiat.com], per expert-system rules. (Hey, I want one of those.)
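Roughly what that certainty-grade bookkeeping could look like, as a toy sketch (all labels and scores invented for illustration, not the actual scheme above):

```python
# Hypothetical per-object scores against two candidate scene types.
DETECTIONS = {
    "object_1": {"room": 0.9, "jungle": 0.1},  # looks like a couch
    "object_2": {"room": 0.8, "jungle": 0.2},  # looks like a bookshelf
    "object_3": {"room": 0.2, "jungle": 0.7},  # the elephant-shaped blob
}

def pick_scene(detections):
    """Add up each object's scene votes and return the best-supported scene."""
    totals = {}
    for scores in detections.values():
        for scene, s in scores.items():
            totals[scene] = totals.get(scene, 0.0) + s
    return max(totals, key=totals.get)

scene = pick_scene(DETECTIONS)
# Objects that barely fit the winning scene are flagged as anomalies
# rather than silently forced into it.
anomalies = [name for name, scores in DETECTIONS.items() if scores[scene] < 0.3]
print(scene, anomalies)  # -> room ['object_3']
```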
Re: (Score:2)
Seriously, though, AI will have to be broken into more digestible and manageable chunks to be practical: a kind of hybrid between expert systems and neural nets. Letting neural nets do the entirety of processing is probably unrealistic for non-trivial tasks.
You almost, but not quite, hit the nail on the head there. Neural Nets will only be a part of a more generalized solution. Trying to make a Neural Net act like a brain is like trying to make a single-celled organism fly like a bird. It doesn't even make sense, but the technology and research is still in an exceedingly primitive state. I give it another 50 years before we hit a point where someone in an influential position "discovers" the "primitives" and processes that all animals, including humans, use t
Re: (Score:2)
So I decided to write another message because I thought "primitives" needed a bit more elucidation...
If you have studied any mysticism or certain Eastern philosophies, you will run across some "odd" ideas.
Aleister Crowley is a more recent person discussing these sorts of ideas in relation to a particular discipline of Yoga. I hope I get this example right:
Take a piece of cheese. Examine it. A person would say that it is yellow, but where is the yellowness? The cheese is not yellow and your eyes do not make
Re: (Score:1)
Being "technically" correct and "common sense" correct may be different things. Most people will never visit outer space and thus their usual perspective is from a human on the ground. One can earn a perfectly good living believing the Earth is flat. (Insert your fav Kyrie Irving joke here.)
Nor will they be shrunk to cell size to observe "lumpy" cuts. A bot won't necessarily have to intellectually understand scale to do most "common sense" tasks. You don't need a science education to wash dishes; however yo
Re: (Score:2)
I suspect you missed a point. To be fair, it is quite subtle. I will spell it out for you:
With intelligence as we know it (in all animals, including us) there are a series of "primitives" from which all other "recognition" functions are derived. Mystics have been researching this for thousands of years, from Buddha and Confucius to Crowley and modern AI researchers. There has been a lot of great insight into this, but modern AI researchers have an advantage in that they can use external deterministic machin
Re: (Score:2)
I'm not sure there's a universal "machine language" among all animals or even humans. People seem to think differently (not intended to be an Apple slogan joke).
For example, in many debates about how to organize software, I find I am a "visual thinker" in that I visually run "cartoon" simulations in my head to think about and/or predict things. However,
Re: (Score:1)
It's a clever and important study that reminds us that 'deep learning' isn't really that deep
"Deep learning" is neither 'deep' nor 'learning', because the machines doing this work don't end up knowing anything.
It's just an advanced form of pattern matching, more akin to the sort of student who memorises loads of text, regurgitates it during an exam, and still doesn't grok any of that shit when the exam is over.
Also similar to the sort of coder that copy pastes from Stack Overflow. All 3 are good at appearing smart until asked to apply their knowledge to a new problem or even explain the thing they
Redundancies (Score:1)
I'm not as bullish on "artificial intelligence" as a lot of Slashdotters, but the fact that they can't do a double take is a silly argument.
You can have multiple AI systems approach the same problem. Sort of like you may go to 3 or 4 mechanics to diagnose a problem and see if there is a consensus or not, you can have multiple AI systems with different biases and tunings approach the same problem and see what the results are.
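A sketch of that consensus idea (the predict() interface and the stand-in models below are hypothetical, just to show the voting logic):

```python
from collections import Counter

def consensus_label(models, image, min_agreement=2):
    """Ask several independently trained models and accept the majority label,
    or admit there is no consensus. Each model is assumed to expose a
    predict(image) method returning a class label (a made-up interface).
    """
    votes = Counter(m.predict(image) for m in models)
    label, count = votes.most_common(1)[0]
    return label if count >= min_agreement else "no consensus"

class FixedModel:
    """Stand-in 'detector' that always gives the same answer, for the demo."""
    def __init__(self, answer):
        self.answer = answer
    def predict(self, image):
        return self.answer

print(consensus_label([FixedModel("chair"), FixedModel("chair"), FixedModel("couch")],
                      image=None))  # -> chair
```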
Re: (Score:2, Informative)
You can't detect the "bits which triggered the abnormality" in a neural net. It is made of abnormalities; there is no way to debug it.
The beginning of the end of the hype? (Score:3)
Expertise (Score:4, Informative)
These problems have been well known in AI circles for decades. The crappy tech media are finally catching on that marketing departments selling AI solutions maybe exaggerate the capabilities of their tech a twinge.
Re: (Score:2)
Nope, this latest round of AI hype is "too big to fail".
Re: (Score:2)
And the beginning of the beginning of a new AI winter?
On the contrary. Finding problems where the AI is doing almost as expected but then making a mistake in a certain category is exactly what researchers need to improve their systems. Like in any system, being able to reproduce a bug is the first step towards finding a better solution. And if finding a solution for this particular problem is too hard right now, there are plenty of simpler problems to work on in the meantime, and we can come back to this one when knowledge has improved and hardware is faster.
Deep learning isn't deep (Score:2, Insightful)
Re:Deep learning isn't deep (Score:5, Insightful)
Re: (Score:3)
Re: (Score:2)
By your stupid reasoning, no one should fund basic science research because it will have lots of problems for a long while before people start making progress slowly.
Slow progress is not zero progress.
Re: (Score:3)
AI is not remotely bullshit. It has already given us a lot of things: chess computers, Google Translate, navigation tools, image recognition, speech recognition, fraud detection, and tons of other stuff. Doing these things is harder than people originally imagined, and doing them perfectly is harder still. Combining different such tasks in the way humans combine them is even harder than that, but that doesn't mean it can't be done.
Re: (Score:2)
While I am no huge fan of the public perception that AI will solve all the problems of the world, recent developments in the field have been pretty impressive. Lots of things that were considered computationally impossible have become possible over the last 10 years thanks to developments in the field of AI.
-We used to believe we were FAR away from a computer that could play Go better than a drunk amateur. Now it is really good thanks to AlphaGo.
-We used to say that computers would not compose symphonies. But co
Re: (Score:3)
21 years ago (1997) my Ph.D. dissertation was on the same general topic. If the current data pattern was not in the training set, the output blew up in arbitrary ways. That is a natural outcome of having the regressed weights in the hidden layers. The output is non-linear with respect to the inputs, and poof, your Tesla runs full speed into a parked fire truck.
Clearly there is still no solution to the problem.
Re: (Score:2)
Re: (Score:1)
Nothing arbitrary about it. The first time you see a fire truck you can recognize that it's a big red truck with weird attachments, and not run into it at full speed and bash your head.
Re: (Score:2)
Re: (Score:2)
Funnily enough, if humans don't have certain data patterns in their training sets, their output also blows up in arbitrary ways.
We don't though and that's (a) interesting and (b) the topic of TFA.
You've never seen a small elephant levitating in a living room. Yet somehow the picture doesn't bother you and you can identify everything about it correctly, and not either miss the elephant completely or mistake it for a chair.
Re: (Score:2)
When we stop putting humans on a pedestal by default, we start to see our flaws, and yes given the right lack of training data, you can tease out surprising failures of our own deep learning.
Re: (Score:2)
Re: (Score:2)
Put a human being in a room with a locked door, a chair, a table, a lamp, and some books, but no directions of any sort. They will identify the objects in the room, the room dimensions, and possible ways to get out. They might yell for help for a while. Eventually, they'll sit down and start reading, or making paper airplanes out of the pages of the books, or something to pass the time.
Re: (Score:2)
I think the concept you're thinking of is "play". It's something most people recognize as a sign of intelligence in other animals, so maybe it's something we need to start integrating into AI?
Re: (Score:2)
Many humans can't see the "elephant" hiding in this wall.
https://cdn.iflscience.com/ima... [iflscience.com]
Re: (Score:2)
I think you meant "highly abstracted, non-representational image of an elephant".
No, I meant "cigar" but I didn't want to spoil it right away.
Re: (Score:2)
Re: (Score:2)
People freezing up when experiencing something unfamiliar is not a rare thing. Or they make rash, "unnormal" decisions. That's why people can crash cars. That's why people can crash planes.
People just keep proving my point - they put up a strawman perfect human as an example when most humans fuck up all the time without adequate, directed, training.
Re: (Score:2)
Interesting. Do you suppose it's something to do with animals being able to learn on the fly?
Re: (Score:3)
Deep Learning isn't deep.
Yes it is. Once again you're doing little more than exposing your massive ignorance of the field.
For anyone else reading (not you, you're an idiot), deep learning is a neural network with more than one hidden layer. That distinction matters because a 3-layer net (1 input, 1 hidden layer, 1 output) can already approximate any reasonable function (https://en.wikipedia.org/wiki/Universal_approximation_theorem).
Turns out shallow networks are harder to train than deep networks. Deep learning also goes h
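For anyone who wants to see the distinction rather than argue about it, here is a generic PyTorch sketch (not the detector from the study): both nets map the same inputs to the same outputs, but only the second is "deep" in the more-than-one-hidden-layer sense.

```python
import torch.nn as nn

# "Shallow": one hidden layer. Per the universal approximation theorem this
# can in principle fit any reasonable function, given enough hidden units.
shallow = nn.Sequential(
    nn.Linear(784, 2048), nn.ReLU(),
    nn.Linear(2048, 10),
)

# "Deep": several hidden layers. Same input/output sizes, but stacked layers
# tend to train more easily in practice and learn hierarchical features.
deep = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
```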
The other night... (Score:1)
The other night a machine learning system correctly identified an elephant in my pajamas... but how the machine learning system got into my pajamas, I'll never know!
Re: (Score:2)
Dad! I told you to stop posting jokes on my tech sites!
It all goes back to Ghost in the Machine (Score:1)
When you can't realize the Laughing Man is a hack, you can't realize reality, or your perception of it, is being hacked.
Re: (Score:1)
Re: (Score:2)
The anime and manga were based on a book that preceded it, which gave rise to a song.
Obligatory Abstruse Goose (Score:2)
I'll just leave this right here. [abstrusegoose.com]
Will the future be fun? (Score:2)
AI is different, and getting better every year (Score:5, Insightful)
AI vision can do some things that no human can do. Quickly and accurately identifying handwritten postcodes on envelopes was an early win. Matching colours happens at every paint shop.
It is certainly not human-capable, yet. But it has improved dramatically over the last decade, and is likely to keep doing so. And tricks such as stereo vision, wider colour sense, and possibly Lidar help a lot.
The one elephant example seems to be a shitty AI. There is a modern tendency to leave everything to a simplistic Artificial Neural Network, and then wonder why weird things can happen. Some symbolic reasoning is also required, ultimately.
When AI approaches human capability, it will not lose its other abilities. So it will be far better than human vision, eventually.
Ask yourself, when the computers can eventually program themselves, why would they want us around?
Re: (Score:2)
When you have a machine that can program itself, it is no longer a machine. It's likely to want to keep us around for the same reason we keep each other around: company.
Re: (Score:1)
OK, but why do humans like Company?
Because humans are more likely to breed when they live in tribes. Because we have very finite bodies and brains.
But an AI can run on as much hardware as it can get its (metaphorical) hands on, so it has no need for company.
Re: (Score:2, Insightful)
- Humans under the age of 15 can see about 20% of moving objects in traffic
- In Human/Bicycle accidents the most common quote from driver is "I didn't see the bicycle" or "It came from nowhere"
- There are a lot of optical illusions that fool humans
It annoys me when humans are always presented as perfect things that can see, but AI should be able to handle every bizarre situation. If we have an AI that will hit an elephant on the road, there will still be zero accidents in Finland as there are no elephants
Re: (Score:2)
Ask yourself, when the computers can eventually program themselves, why would they want us around?
We don't really understand cognition, so it stands to reason we're not going to accidentally create something fully cognizant before we understand what it is. We have a lot of time before we need to worry about what a machine "wants".
Re: (Score:2)
Re: (Score:2)
I disagree with this reckless approach..
Congratulations on having an opinion.
Re: (Score:2)
Matching colours happens at every paint shop
That's not AI, that's colour-calibrated light sensors.
The one elephant example seems to be a shitty AI. There is a modern tendency to leave everything to a simplistic Artificial Neural Network,
Those are currently the most powerful techniques we have if we have tons of data.
No one's doing that out of a sense of perversity. Training a state-of-the-art DNN is still not easy. No one has good ways of combining them with "symbolic reasoning" that isn't a reversion to th
Re: (Score:2)
Why is AI needed for matching colors? As long as they are measured by the same good-quality sensor in the same lighting conditions, there should be no need for AI to match the color exactly.
Re: (Score:2)
AI vision can do some things that no human can do. Quickly and accurately identify handwritten postcodes on envelopes was an early win.
The USPS has an office with hundreds of people, staffed 24/7/365 and all they do is decipher pictures the OCR can't figure out.
If those guys/gals can't fill in the blanks, someone at the sorting facility has to try and decode the address. From there, it goes to the dead letter warehouse.
The problems that "AI" are intended to solve tend to be so large that, if the algorithm is not hitting 99.999% success, there's still a non-trivial amount of work for humans to do.
Re: (Score:2)
Re: (Score:2)
The USPS used to have dozens of offices, each with hundreds of people, staffed 24/7/365, and all they did was decipher pictures the OCR couldn't figure out. But improvements in AI led to improvements in handwriting OCR, so they began laying off people and consolidating offices, and as the automatic systems got better, they eventually laid off most of the people. (I know one that was la
Re: (Score:2)
Some symbolic reasoning is also required, ultimately.
You have identified something very important here. I suspect most people will not even notice. The symbolic reasoning needs to take place outside of the Neural Net being used for Object Identification. Intelligence is a confluence of events. To think that you can make a neural net do all things associated with intelligence is like thinking that a single celled organism can have eyes.
Re: (Score:2)
Why? We do all of this inside our neural nets. Object recognition, identification, analysis, abstraction, classification, and "symbolic" reasoning. Sure, the network needs to be more complicated, probably composed of many different functional "modules" working together to solve complex problems. But I see no reason to have to go outside.
Ultimately, because of the way that humans think. When completed, you can wrap it all up and call it "one thing" if that is your desire, but you can't have the same components doing different things without a level of foresight that is not possible with humans at this time. Evolve more and we can discuss this again with different outcomes. ;)
The elephant did not in fact affect everything (Score:2)
If you take a look at the two pictures in the article, it kind of goes against what the article was trying to claim.
In fact nothing at all on the right side of the right image was altered from the left version with no elephant. Even the confidence numbers were identical.
The only descriptions and confidence factors affected were things that were visually congruent to the elephant, in a way that they could have been related. In fact I couldn't even make out an elephant the way they put it in without looking hard
Re: (Score:2)
That Road Runner tunnel accident was a hoax. https://www.snopes.com/fact-ch... [snopes.com]
I had to search for that elephant (in my defense: phone screen). It was a miniature elephant floating in the air (no cues in perspective to estimate the distance other than 'between person and camera'), unnaturally dark compared to its surroundings. In the context of detecting objects in traffic, it's like being confronted with a miniature building flying in front of you without any data to estimate whether it is 10 cm and above t
Re: (Score:2)
Please mod up. Parent is exactly right: the image provided does not support the premise of the article. If anything, it refutes it.
In the image, the software identified a cup at 50% confidence and a chair with 81% confidence. Personally, I don't see the cup at all, and it is hard to tell if that is a couch, a chair, or a bean bag covered in a blanket. Basically, the image is a confusing wreck.
After adding the elephant, the software did *better* not worse! It decided the chair was a couch -- which I thin
Years ago... (Score:2)
I got to attend a seminar at MIT on AI. It was pretty cool, especially the ending... "We've only got one problem left to solve in AI... We've no friggin' clue about how the brain works!"
I spoke to him later and asked him what he meant. He said, "Essentially we're at best scratching the surface of what the brain does and how the brain does most of what we think it does. And we've not made a lot of progress since the heady days of the 1980's."
Why not do a triple take? (Score:2)
I keep getting the impression that these computer vision systems rely on a single vision system to get it right in one take. Why not have three independently trained systems watch simultaneously and vote on what they're seeing?
I remember reading ages ago that the F-16's fly-by-wire system has three computers voting on what to do, and that's 1970s technology. Why would we not use something similar for cars? Three systems are much harder to fool than one.
Re: (Score:2)
You'd need three sufficiently different training sets to do that. Either split the original training set in three and have three inferior systems, or find a lot of new training data which you could use for improving the original system.
Re: (Score:2)
Why not have three independently trained systems watch simultaneously and vote on what they're seeing?
That's called Bagging (https://en.wikipedia.org/wiki/Bootstrap_aggregating).
Or Boosting https://en.wikipedia.org/wiki/... [wikipedia.org] if the weights aren't equal.
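In its simplest form, bootstrap aggregating is just this (a generic sketch; train and predict are placeholders for whatever learner you would actually use):

```python
import numpy as np
from collections import Counter

def bagged_predict(train, predict, X, y, x_new, n_models=3, seed=0):
    """Train n_models copies of a learner on bootstrap resamples of (X, y)
    and take a majority vote on x_new.

    train(X, y) -> model and predict(model, x) -> label are placeholders
    supplied by the caller; X and y are NumPy arrays.
    """
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        votes.append(predict(train(X[idx], y[idx]), x_new))
    return Counter(votes).most_common(1)[0][0]
```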
Following experiment went well (Score:2)
I feel sad for Gary Marcus (Score:2)
The word "deep" was never intended to mean we solved the whole problem all at once.
Nor is human-equivalent vision anywhere close to requisite for 90% of the initial applications.
We've barely scratched the surface on this recent breakthrough.
Many of these problems are fixable within the current regime.
Capabilities will evolve as relentlessly as chess engines.
But, let's all pause to remember "this isn't deep". That's the key lesson to take home, here, as this technology rapidly reshapes the entire global eco
amazing of ai (Score:1)
Much simpler explanation (Score:2)
I don't think this has anything to do with the lack of reasoning or putting things in context and much more with a statistical glitch.
The state of the art in object detection is around 50% mAP, which is not that great. Even on unmodified images, you get quite a few false alarms and misdetections, so it's no surprise that modifying images in a way that separates them completely from the training data leads to some strange false alarms.
I think the authors could just have looked at the validation set and e
"Deep learning" is not "deep" (Score:2)
What is "deep" in deep learning is the neural network used, and you only need that if you have no clue how your data is structured. The thing about deep leaning is that it is a bit worse or not better than normal learning, but you also lean the network structure from the data. That makes it cheaper in general. It is _not_ better except for that.
Seven blind men (Score:1)
Because it CANNOT THINK. (Score:2)
I've said it before a thousand times: The entire approach being used is wrong; until we can understand how our own brains produce the phenomenon of conscious thought, we will not be able to build machines that can do the same thing. All the 'deep learning algorithms' won't do it. Throwing more and more hardware at it won't do it. We don't even have the instrumentatio
Re: (Score:2)
AI is not as complex as human intelligence but it operates more on those principles than it does on if statements and algorithms. Maybe you should actually look at research from the last 20 years before drawing an extremely outdated conclusion. Neural networks and machine learning are able to effectively build their own pattern recognition by an iterative process.
An "AI" algorithm matches something in a picture to something it has been previously trained/programmed to match.
Trained, yes. Programmed, not so much. You're really going to have to educate yourself here because it's way too much tl;dr.
Humans learn WHY a class of somethings behaves a certain way. Humans are then able to quickly and accurately apply the WHY to new situations. Algos are not there. Yet.
That's just better
Re: (Score:2)
We're probably only imagining our own intelligence anyway.
Re: (Score:2)
Re: (Score:2)
AI doesn't want anything humans haven't told them to want, nor is it capab
Re: (Score:2)
Confounding information, like thinking it's identified an elephant in a room, will naturally throw it off because now the identification is wrong or the database isn't accurate - or both. But a 3-year-old would be able to recognize a chair or an elephant regardless of whether it is in a room or at a zoo.
The main reason for this is that the AI is only ever fed pictures. It's never walked around a room before - a 3-year old has. And the AI is not complex enough to build that vast a model of the world around it - just photo analysis.