Deep Learning Can't Be Trusted, Brain Modeling Pioneer Says (ieee.org)
During the past 20 years, deep learning has come to dominate artificial intelligence research and applications through a string of useful commercial successes. But underneath the dazzle are some deep-rooted problems that threaten the technology's ascension. IEEE Spectrum reports: The inability of a typical deep learning program to perform well on more than one task, for example, severely limits application of the technology to specific tasks in rigidly controlled environments. More seriously, it has been claimed that deep learning is untrustworthy because it is not explainable -- and unsuitable for some applications because it can experience catastrophic forgetting. Said more plainly, if the algorithm does work, it may be impossible to fully understand why. And while the tool is slowly learning a new database, an arbitrary part of its learned memories can suddenly collapse. It might therefore be risky to use deep learning for any life-or-death application, such as a medical one.
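To make "catastrophic forgetting" concrete, here is a minimal Python/numpy sketch. Everything in it (the two toy tasks, the tiny network, the hyperparameters) is invented for illustration and comes from neither the article nor the book; it just shows the failure mode: train a shared network on task A, then on task B, and watch task A's accuracy collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, axis):
    # Toy task: the label is the sign of one input coordinate.
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

# One shared hidden layer; both tasks compete for the same weights.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

def train(X, y, epochs=300, lr=1.0):
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, p = forward(X)
        g = (p - y)[:, None] / len(y)      # cross-entropy gradient w.r.t. logits
        gh = (g @ W2.T) * (1 - h ** 2)     # backprop into the hidden layer
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

def accuracy(X, y):
    return ((forward(X)[1] > 0.5) == y).mean()

XA, yA = make_task(500, axis=0)   # task A
XB, yB = make_task(500, axis=1)   # task B

train(XA, yA)
print("task A accuracy after training on A:", accuracy(XA, yA))  # near 1.0
train(XB, yB)                     # sequential training on B, no rehearsal of A
print("task A accuracy after training on B:", accuracy(XA, yA))  # collapses toward 0.5
```

The point is not that this toy network is representative, but that nothing in plain gradient training protects the weights that earlier learning depended on.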
Now, in a new book, IEEE Fellow Stephen Grossberg argues that an entirely different approach is needed. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind describes an alternative model for both biological and artificial intelligence based on cognitive and neural research Grossberg has been conducting for decades. He calls his model Adaptive Resonance Theory (ART). Grossberg -- an endowed professor of cognitive and neural systems, and of mathematics and statistics, psychological and brain sciences, and biomedical engineering at Boston University -- based ART on his theories about how the brain processes information. "Our brains learn to recognize and predict objects and events in a changing world that is filled with unexpected events," he says. Based on that dynamic, ART uses supervised and unsupervised learning methods to solve such problems as pattern recognition and prediction. Algorithms using the theory have been included in large-scale applications such as classifying sonar and radar signals, detecting sleep apnea, recommending movies, and computer-vision-based driver-assistance software.
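Neither the article nor the book reduces ART to a code listing, and Grossberg's models are formulated as systems of differential equations. Purely as a hedged illustration, here is the common algorithmic skeleton of ART1 (the binary-input variant with fast learning), with the vigilance value chosen arbitrarily:

```python
import numpy as np

class ART1:
    """Textbook algorithmic reduction of ART1 (binary inputs, fast learning).
    Illustrative sketch only; not Grossberg's full dynamical model."""

    def __init__(self, n_features, vigilance=0.75):
        self.rho = vigilance    # vigilance: how strict a match must be to "resonate"
        self.n = n_features
        self.prototypes = []    # one binary prototype per learned category

    def train_one(self, x):
        x = np.asarray(x, dtype=bool)
        # Rank existing categories by the choice function |x AND w| / (0.5 + |w|).
        order = sorted(
            range(len(self.prototypes)),
            key=lambda j: -np.sum(x & self.prototypes[j])
                          / (0.5 + self.prototypes[j].sum()),
        )
        for j in order:
            w = self.prototypes[j]
            match = np.sum(x & w) / max(x.sum(), 1)
            if match >= self.rho:            # resonance: close enough to the prototype
                self.prototypes[j] = x & w   # learn by intersection (only ever shrinks)
                return j
            # otherwise: reset, try the next category
        self.prototypes.append(x.copy())     # nothing passed vigilance -> new category
        return len(self.prototypes) - 1

art = ART1(n_features=6, vigilance=0.8)
print(art.train_one([1, 1, 1, 0, 0, 0]))  # -> 0 (first category created)
print(art.train_one([0, 0, 0, 1, 1, 1]))  # -> 1 (new category; category 0 untouched)
print(art.train_one([1, 1, 0, 0, 0, 0]))  # -> 0 (resonates; prototype refined)
```

The vigilance parameter is what addresses the stability-plasticity dilemma discussed below: a familiar input refines an existing prototype (and prototypes only ever shrink, so old categories are never silently overwritten), while a sufficiently novel input gets a fresh category of its own.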
[...] One of the problems faced by classical AI, he says, is that it often built its models on how the brain might work, using concepts and operations that could be derived from introspection and common sense. "Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives," he writes. "It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works." The problem with today's AI, he says, is that it tries to imitate the results of brain processing instead of probing the mechanisms that give rise to the results. People's behaviors adapt to new situations and sensations "on the fly," Grossberg says, thanks to specialized circuits in the brain. People can learn from new situations, he adds, and unexpected events are integrated into their collected knowledge and expectations about the world.
Re:We know (Score:4, Insightful)
We have seen what trust in machine learning gets us. People facing harsher prison sentences because of their race, medical issues missed, ...
But these are the identical flaws that non-artificial intelligence has. When humans make the decisions, people face harsher prison sentences because of their race, medical issues are missed, etc.
What you should be saying is, "we shouldn't trust the AI to do better at these tasks than the humans that have trained it."
...The fact that we can't know how they make a decision, we can't ask them to examine the process they use, is a huge problem.
But likewise for humans: humans are terribly unreliable when we ask them to explain the processes they use in making everyday decisions. "It's a gut feeling I have" is a response approved by hundreds of books telling you to trust your instincts, but these "gut feelings" are also conditioned by prejudices, both overt and invisible.
Re: (Score:3, Interesting)
Even worse, human beings provide post hoc rationalizations for their gut feelings, lying to themselves as much as to others that they base their judgement on arguments and reason when it's really simple gut feeling.
Re: (Score:2)
That's why it's best to have decisions made by some fixed algorithm. For example, in the UK the sentence is determined by rules: whether the convicted has prior convictions, whether aggravating factors like weapons were involved, etc.
There is very little judgement involved. Only proven facts are taken into account.
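For illustration only, a guideline-style calculation might look like the sketch below. Every rule and number in it is invented; it is not an actual UK sentencing guideline, just the shape of a "fixed algorithm":

```python
# Toy illustration of guideline-style sentencing. All rules and numbers
# here are invented, not actual UK guidelines.
def guideline_sentence_months(base_months: int,
                              prior_convictions: int,
                              weapon_involved: bool,
                              guilty_plea: bool) -> int:
    months = base_months
    months += 6 * min(prior_convictions, 3)   # aggravating: priors (capped)
    if weapon_involved:
        months = int(months * 1.5)            # aggravating: weapon
    if guilty_plea:
        months = int(months * 2 / 3)          # mitigating: early plea discount
    return months

print(guideline_sentence_months(24, prior_convictions=2,
                                weapon_involved=True, guilty_plea=False))  # 54
```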
Re: (Score:3)
That's why it's best to have decisions made by some fixed algorithm.
A fixed algorithm is closer to religious punishment than to justice.
It's good to have an algorithm that serves as a guideline, so that sentencing isn't left entirely to the judge's discretion, but you can't omit what makes justice "just", which is being humane. The penalty needs to take into account the behaviour of the perpetrator before, during, and after the acts. Illegal behaviour can result from genuine mistakes or be done with the best intentions, and this attenuates the responsibility and therefore the penalty.
Only cer
Re: (Score:1)
Machine learning is involved in prison sentencing nowadays? How?
I never understood why courts need to see the face of the defendant or know any irrelevant factors to begin with. All that information can easily be shielded by modern technology which can make voices unrecognizable in real time.
Re: (Score:2)
But Sherlock needs to hear the defendant's accent, in order to realize he must have spent a few months in Birmingham last year, and therefore would have had access to the same cigarettes as made the ash found at the scene.
Re: (Score:2)
...even as trivial as webcams being unable to track darker faces.
I get that harsher sentences and missed diagnoses might be considered bad, but not being tracked by cameras? Does automatic surveillance tend to benefit the surveilled?
Re: (Score:3)
...even as trivial as webcams being unable to track darker faces.
I get that harsher sentences and missed diagnoses might be considered bad, but not being tracked by cameras?
An example of this is that Twitter's automatic photo-cropping algorithm was shown to crop black people out of photos. An amusing demonstration was done by Tony Arcieri, who posted to Twitter an image containing photos of Mitch McConnell and Barack Obama. Sure enough, the Twitter preview showed only Mitch McConnell.
https://www.businessinsider.co... [businessinsider.com]
Does automatic surveillance tend to benefit the surveilled?
If you buy a webcam that's advertised as being able to track your face, and it only does that if you're white, that would be a flaw. https://www.csmonitor.com/ [csmonitor.com]
Re: (Score:2)
The issue is that if the algorithm is looking for a particular black person, it's more likely to match just any black person, whether it's the same person or not, compared to when it's looking for a particular white person.
https://www.nist.gov/news-even... [nist.gov]
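The NIST finding can be illustrated with a toy numpy sketch: if impostor ("different person") match scores run slightly higher for one group, a single global threshold yields a higher false-match rate for that group. The distributions and threshold below are invented, not NIST's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented impostor-score distributions for two groups. Group B's impostor
# scores run slightly higher, so one global threshold produces more false
# matches for group B.
impostor_a = rng.normal(0.30, 0.10, 100_000)
impostor_b = rng.normal(0.40, 0.10, 100_000)

threshold = 0.60  # one threshold for everyone
print("false-match rate, group A:", (impostor_a > threshold).mean())  # ~0.001
print("false-match rate, group B:", (impostor_b > threshold).mean())  # ~0.023
```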
Re: (Score:1)
> People facing harsher prison sentences because of their race,
Because machine deep learning is based on measurement and the ability to predict successfully. It has no "woke" agenda to apply "critical theory" to the results and ignore the likelihood of Marxism and anti-Semitism among the group being reviewed, it only gets to review their recorded behavior.
Kinda like guessing that the tall one with the "outie" genitalia is male, it's the way to bet.
Re: (Score:2)
You mean if a police force decides to ignore a certain group of people and focus on another group it would influence the data and thus the results?
Really? (Score:2)
One of the problems faced by classical AI, he says, is that it often built its models on how the brain might work, using concepts and operations that could be derived from introspection and common sense. "Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives," he writes. "It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works."
This sounds pretty woolly. Yes, neural networks were loosely inspired by the concept of a neuron. That's not to say they're even an attempt to actually model consciousness. Other AI models are even less connected with cognition. The idea that there's more to be learned from better modelling the brain is almost too obvious to bother stating. Am I misinterpreting him?
Re:Really? (Score:5, Interesting)
Most people I see modeling brain patterns approach it very analytically, trying to map the brain the way we map a circuit or a software diagram. That's natural for engineers, because that part of their brain is highly developed and it's what they're used to doing. I spend my days in management, however, and have worked to develop my people skills, and what I see is that even highly educated, highly rational people are at the mercy of what we would call irrational, emotional responses, or have intuitions they can't describe rationally. Shockingly few people are capable of describing an emotional response logically. And yet that emotional response is in many ways where the human capacity for invention comes from. Invention usually means doing something that goes against current understanding or all the data in the world; it can't be described rationally, but it happens. The human ability to adapt to a new situation without data has a lot to do with our ability to think relatively, use instinct, and trust ourselves to overcome a never-before-seen situation or a new idea.
I took his argument to be that people are trying to capture human intuition and instinctive response with bigger and better algorithms and ever more complex data sets, and that he's arguing that approach will fail and we need an entirely new concept.
Re: (Score:1)
"....I spend my days in management however and have worked to develop my people skills, and what I see is even highly educated, highly rational people are at the mercy of what we would call irrational, emotional responses, or even have intuition in ways that we can't describe rationally..."
Yet aren't those emotional responses at that point completely predictable, given information about the current environment, including past responses and not much more? The fact that a person cannot explain why they respond as they do has nothing to do with whether a given emotional response can be explained, but with our ability to recognize and quantify the "inputs" involved. My opinion is that an emotional response to an event mostly relies on a union of controlled environments and cultural experiences of
Re: (Score:2)
Yes, neural networks were loosely inspired by the concept of a neuron. That's not to say they're even an attempt to actually model consciousness
To be more precise, neural networks were an attempt to model consciousness. When they found out it didn't work, they decided to see what else they could do with them. Much like every other algorithm to emerge from the AI field.
Re: (Score:1)
Yes, neural networks were loosely inspired by the concept of a neuron. That's not to say they're even an attempt to actually model consciousness
To be more precise, neural networks were an attempt to model consciousness. When they found out it didn't work, they decided to see what else they could do with them. Much like every other algorithm to emerge from the AI field.
Then how do you explain video or television? and why is it assumed that an AI cannot use something that uses algorithms like those examples the same way we do? or is that ability to observe the self not programmed in to AI as of yet? i say self as the same algorithms we devise to create a video image are of the same languages we create AI with.
Re: (Score:2)
i say self as the same algorithms we devise to create a video image are of the same languages we create AI with.
This makes no sense to me, I have no idea what you are trying to convey.
Re: (Score:1)
We created language systems to predict, among other things, how a computer system's memory will be used and what the system should expect in a physical memory location, all based on a desired value the user expects. They are built on the fact that matter keeps a predictable state when interacted with, and will in turn affect the state of other matter in a predictable way. A language system used for all this is also used to create AI, which is expected to "know" the world around it. and we should add
Re: (Score:2)
Both deep learning and the ART thing this guy is talking about use neural networks. They work pretty well.
Re: (Score:2)
They don't work to model consciousness.
Re: (Score:2)
If you can rigorously define consciousness then we can talk... about how to tell whether a particular system exhibits it.
Re: (Score:2)
People talking about this stuff often get really hand wavy.
Adaptive Resonance Theory is neural networks, same as deep learning (almost always) is. If I understand correctly, ART neural nets are limited to a single layer, though, which has some well-known problems.
If he's trying to say that simple supervised training isn't sufficient, that's pretty universally agreed.
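For what it's worth, the classic "well-known problem" with single-layer networks (whether it actually applies to ART is the commenter's claim, not established here) is that they can only draw linear decision boundaries, so they fail on XOR. A minimal demonstration:

```python
import numpy as np

# XOR is not linearly separable, so a single-layer perceptron never gets
# all four cases right, no matter how long it trains.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w = np.zeros(2); b = 0.0
for epoch in range(1000):
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)
        w += (yi - pred) * xi      # classic perceptron update rule
        b += (yi - pred)

pred = (X @ w + b > 0).astype(int)
print(pred, "accuracy:", (pred == y).mean())   # never 1.0; at most 3 of 4 correct
```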
Re:This part stood out: (Score:5, Informative)
They don't mean at a time, they mean at all. Imagine a child learns to tell a cat from a dog. Then it learns the names of the primary colors but as a consequence can no longer tell a cat from a dog.
Re:This part stood out: (Score:5, Informative)
ML networks are very good in a controlled environment, where unexpected inputs do not come up. They should not be dismissed as useless. You can populate a PCB by hand, yet we use pick-and-place machines. These machines require parts to come on reels, sorted by type, and have limits on which parts they can place. Yet we use them because they are much faster and more accurate, even with all their restrictions. Same with ML.
ML research is now going into teaching models to "not recognize" an object -- to say: I don't know what this is. This has turned out to be a much harder task than one might expect.
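The simplest baseline for this is just thresholding the model's softmax confidence; the labels and logits below are made up for illustration. The known catch, and part of why the problem is hard, is that softmax confidence is often badly calibrated on inputs unlike the training data -- exactly where you need the "I don't know":

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_or_abstain(logits, labels, min_confidence=0.9):
    p = softmax(np.asarray(logits, dtype=float))
    i = int(p.argmax())
    if p[i] < min_confidence:
        return "I don't know"   # abstain rather than guess
    return labels[i]

labels = ["cat", "dog", "cow"]
print(classify_or_abstain([4.0, 0.1, 0.2], labels))   # confident -> "cat"
print(classify_or_abstain([1.2, 1.0, 1.1], labels))   # ambiguous -> "I don't know"
```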
No disrespect to that professor, but he is pushing a theory that he invented and thinks it is better than existing ML. When software based on his theory can perform a useful task, he will have much firmer ground to stand on.
Re: (Score:1)
"No disrespect to that professor, but he is pushing a theory that he invented and thinks that it is better than existing ML. "
Yes. I'd only add that he invented this theory in the 1980s, he's had some time to show it is, to coin a phrase, "A New Kind of Neurally Inspired Computing".
Re: (Score:2)
Nobody is claiming ML is useless, just that it has some real limitations that need to be kept in mind. Those limitations mean there are situations where it simply cannot be used. For example, in law, reasoning must often be articulable in order to carry any weight at all. In other cases that has not been adequately observed and has now become a point of controversy and legal proceedings.
And most adults will NOT mis-identify to the degree a small child will. They do not see a hippopotamus and say doggie. The
Re: (Score:2)
How many times do you need to tell a child "that is a cow, not a dog" before it learns to recognize a cow? How many images of cows do a ML model need to recognize a cow?
The child learns with a drastically smaller dataset.
Re: (Score:1)
How many times do you need to tell a child "that is a cow, not a dog" before it learns to recognize a cow? How many images of cows do a ML model need to recognize a cow?
The child learns with a drastically smaller dataset.
Maybe, just maybe, the fact that we are using a system designed to do complex computations in fractions of milliseconds to compute "1=rat or 2=dog or 3=cat or 4=whatever" is the problem? The same problem it has always been?
Re: (Score:2)
Maybe, but no one has been able to come up with a system that would work better.
Re: (Score:2)
Maybe have "continuing education" for animals going while you try to learn colors for the first time, in order to avoid weakening already-established links?
Here's a picture of a color, and I don't care what animal you think you see (don't care, as in I won't be feeding the accuracy of your color guess back into your net, but I will be feeding back the accuracy of your animal guess). Here's a picture of an animal, and I don't care what color you think it is. Here's another color. Here's another animal.
I shou
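The scheme described above amounts to interleaved multi-task training with per-task loss masking (a cousin of what the continual-learning literature calls rehearsal). A hedged sketch of the structure, with all sizes and data as placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk plus one output head per task ("animal", "color"). Examples
# from the two tasks are interleaved, and each example only grades -- i.e.,
# only backpropagates through -- the head for its own task.
W = rng.normal(scale=0.1, size=(4, 8))                 # shared trunk weights
heads = {"animal": rng.normal(scale=0.1, size=(8, 3)),
         "color":  rng.normal(scale=0.1, size=(8, 5))}

def train_step(x, y_onehot, task, lr=0.1):
    global W
    h = np.tanh(x @ W)                                 # shared representation
    logits = h @ heads[task]
    p = np.exp(logits - logits.max()); p /= p.sum()    # softmax
    g = p - y_onehot                                   # cross-entropy gradient
    gh = (heads[task] @ g) * (1 - h ** 2)              # gradient into the trunk
    heads[task] -= lr * np.outer(h, g)                 # only this task's head learns
    W -= lr * np.outer(x, gh)                          # trunk still sees both tasks

# Interleaved presentation: animal, color, animal, color, ...
for step in range(100):
    task = "animal" if step % 2 == 0 else "color"
    n_classes = heads[task].shape[1]
    x = rng.normal(size=4)                             # placeholder input
    y = np.eye(n_classes)[step % n_classes]            # placeholder label
    train_step(x, y, task)
```

Note the limits of the idea: the loss masking implements "I won't grade the other answer", but the shared trunk is still updated by both tasks. It's the interleaving itself, a form of rehearsal, that does most of the work against forgetting compared to training in strictly sequential blocks.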
Re: (Score:2)
I never saw a purple cow...
Re: (Score:2)
The difference is that the human brain is far more robust in that regard.
We can even adopt useful conventions that we know to be unphysical if it lets us apply existing knowledge. For example, schematics are always read as if electricity travels from positive to negative even though we know it's the electrons that move.
Re: (Score:2)
I think another important part is how redundant real brains are. A human will fail to correctly recognize objects as well, but the brain has fail-safes, after-the-fact checks, and even databases of the common mistakes the system makes, to compensate for them.
As Laforge said (Score:2)
"You know, I've always thought that technology could solve almost any problem. It enhances the quality of our lives, lets us travel across the galaxy, it even gave me my vision, but sometimes you just have to turn it all off."
Re: (Score:2)
âoeYou know, Iâ(TM)ve always thought that technology could solve almost any problem. It enhances the quality of our lives, lets us travel across the galaxy, it even gave me my vision, but sometimes you just have to turn it all off.â
Unfortunately, technology does not seem to have solved basic character encoding.
(Good quote though)
Re: (Score:2)
Good thing Data has an off switch.
Neither can humans, (Score:1)
...we're just automating the suckage
Re: (Score:2)
I consider Deep Neural Networks to be less reliable than eye witness testimony, the previous king of unreliable evidence.
Re: (Score:1, Interesting)
It's already long since been supplanted, by news media a hundred years ago and now by pictures and video too, none of which can be trusted anymore in the age of GANs. Welcome to the Brave New World, aka the New World Order.
Re: (Score:2)
this is the dumbest reply to my considerably dumb post.
Re: (Score:2)
Huxley's novel "Brave New World" is primarily about the social impacts of a new class system, pretty much a parallel to the concerns about class systems that have been discussed to death throughout the 19th and 20th centuries. It's an interesting and entertaining yarn, but I think it doesn't really hold up so well now and may seem a bit anachronistic to younger readers.
Wells's novel "The New World Order", in contrast to "Brave New World", has a utopian rather than dystopian tone. It's a bit of a dry read, more
Re: (Score:1)
It's already long since been supplanted, by news media a hundred years ago and now by pictures and video too, none of which can be trusted anymore in the age of GANs. Welcome to the Brave New World, aka the New World Order.
Like it or not, that is what government is for. A rule set first, to create the virtual world (laws), to create the conscious observable environment, to create the self, to create the next rule set... The assumption is that, given freedom, a benevolent or at least competent set of rules can be created by anyone at any time, and mass media has been doing all it can to disprove that.
Re: (Score:2)
Dr. Daystrom said it best:
More Training Time (Score:2, Insightful)
Re: (Score:2)
They can't even be trained to have the adaptability of a 6-month-old puppy, so let's forget about adult human-level AI for the moment.
Deep learning is not learning (Score:1)
Deep learning is the simulation of learning with stats.
It is not learning.
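One charitable reading of this claim: training really is statistical curve fitting. As a hedged one-screen illustration (the "true rule" and noise below are invented), here is the simplest case of picking parameters that minimize average error over samples, which is what gradient descent generalizes for deep nets:

```python
import numpy as np

# "Learning" as statistics: fit parameters that minimize average error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x - 0.5 + rng.normal(0, 0.1, 200)   # hidden "true" rule plus noise

A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"recovered rule: y = {w:.2f} * x + {b:.2f}")   # ~ 3.00 * x + -0.50
```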
So what? (Score:5, Insightful)
Is Grossberg just following in the footsteps of Hubert Dreyfus?
Deep learning, or any "neural net" learning, is just a mathematical model that tries to make predictions based on a way of understanding data. Huge strides have been made in making it work where it does work.
Ultimately, deep learning and neural nets in general, like many other AI algorithms, work by finding ways to divide up very high-dimensional spaces in ways that have predictive value. They are in general no better than the generality of their training set, and always far worse, because no one really knows how well they have divided up a million-dimensional space.
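That "dividing up space" framing can be made literal in two dimensions. In this hedged toy, a 1-nearest-neighbor rule stands in for any classifier and the data is invented: the trained model assigns a label to every point of the plane, including points far from all training data, where the label is pure extrapolation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Any classifier is, operationally, a partition of input space into
# labeled regions. Here: 1-nearest-neighbor over 2D training points.
X = rng.normal(size=(50, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels

def predict(q):
    d = ((X - q) ** 2).sum(axis=1)
    return y[d.argmin()]

# Evaluate the induced partition on a grid: every cell gets a label, even
# cells far from any training data -- the "no one knows how well it divided
# up the space" problem.
grid = np.mgrid[-3:3:0.5, -3:3:0.5].reshape(2, -1).T
labels = np.array([predict(q) for q in grid])
print("region sizes:", np.bincount(labels))
```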
Just because idiots (i.e. suits) follow the fad of thinking deep learning is the be-all and end-all of AI doesn't mean that their ignorant view is a new enlightenment from god. Deep learning is just a very good, very smart algorithm that works if you have a HUGE amount of data to train on. That's it. Nothing more. Getting to AI will take a lot more.
Summary (Score:1)
IEEE Spectrum extols IEEE Fellow Stephen Grossberg's virtues and discusses his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.
In the book he presents his model Adaptive Resonance Theory (ART) that solves what he has called the stability-plasticity dilemma that occurs in Deep Learning: How a brain or other learning system can autonomously learn quickly (plasticity) without experiencing catastrophic forgetting (stability).
No technical details are discussed.
Unexplainable? (Score:2)
The fact that the behavior of deep networks can't be analytically explained should hardly be a problem.
In the medical field there are many drugs whose action is not well understood. Yet we prescribe them to millions.
Re: (Score:1)
The difference is that the pilot can explain their reasoning.
Re: (Score:2)
So can the deep learning model. Further research strongly suggests that both are just making up a story they think you want to hear.
Thought is not pattern recognition and prediction (Score:2)
AFAICS ART is an on-the-fly training algorithm that tries to adapt weights for new, poorly matched inputs... that's nice, but how is that going to get them closer to actual thought? Learning new patterns is not the same as learning to think.
Thought is an unbounded iterative process ... the gap from pattern recognition and prediction to thought is as wide for ART as it is for any other current ANN.
"not explainable" (Score:2, Flamebait)
Re: (Score:2)
It's the rules that are not explainable. They say it's not about equality of outcome, but if you don't achieve equality of outcome it's always going to get blamed on x'ism anyway. So yeah, they try to hide behind algorithms.
The truth is a poor defence in a court of law; Bayesian mathematics is x'ist... but no one wants to hear that, and it's easy to find even experts to deny it. So in a court of law, obscure algorithms provide a convenient extra layer of defence; it's rational and not even immoral.
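The Bayesian point can be made concrete with a worked example (all numbers invented): a risk tool with identical sensitivity and specificity for two groups still means different things for them when base rates differ.

```python
# Same sensitivity/specificity for both groups; only the base rate differs.
# All numbers are invented for illustration.
def positive_predictive_value(base_rate, sensitivity=0.8, specificity=0.8):
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

print(positive_predictive_value(0.10))  # ~0.31: most "high risk" flags are wrong
print(positive_predictive_value(0.30))  # ~0.63: same tool, very different meaning
```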
Probably not an AI expert. (Score:2)
I've seen it before: you end up talking to someone in person and they kind of fall apart when you get to the meat of it. I don't know where he stands in that respect, but I'll just note that auditable AI is a major objective for organizations interested in life-and-death applications, and it has been for some time.
His comments should be taken with a grain of salt
We have all these problems with people too (Score:4, Interesting)
Re: (Score:2)
The problem is those in authority will be able to blame the machines and wash their hands without penalty. This leads to shoot first and keep shooting some more.
Don't pay any attention to this crank (Score:2)
I've seen Grossberg give talks and read some of his papers. He has an awfully hard time deciding between saying he invented everything OR that everything is worse than what he invented 20 years ago. But somehow, he perseveres and always manages to choose one of those options for anything new under the sun.
That's not to say deep learning doesn't deserve some cold water, or that Grossberg hasn't made important contributions over the years. But he has zero credibility when it comes to talking about other people's work.