Deep Learning Can't Be Trusted, Brain Modeling Pioneer Says (ieee.org)

During the past 20 years, deep learning has come to dominate artificial intelligence research and applications through a series of useful commercial successes. But underneath the dazzle are some deep-rooted problems that threaten the technology's ascension. IEEE Spectrum: The inability of a typical deep learning program to perform well on more than one task, for example, severely limits application of the technology to specific tasks in rigidly controlled environments. More seriously, it has been claimed that deep learning is untrustworthy because it is not explainable -- and unsuitable for some applications because it can experience catastrophic forgetting. Said more plainly, if the algorithm does work, it may be impossible to fully understand why. And while the tool is slowly learning a new database, an arbitrary part of its learned memories can suddenly collapse. It might therefore be risky to use deep learning on any life-or-death application, such as a medical one.
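
As a concrete illustration of the catastrophic forgetting mentioned above, here is a minimal sketch in plain Python/NumPy (synthetic data; every function and variable name is invented for illustration and does not come from the article): a tiny classifier is trained on one task, then trained only on a second, deliberately conflicting task, and its accuracy on the first task collapses because the new gradients simply overwrite the old weights.

```python
# Minimal, self-contained sketch of "catastrophic forgetting" in plain NumPy.
# Illustration only: a tiny logistic-regression classifier is trained on Task A,
# then trained *only* on Task B (whose labels conflict with Task A by design),
# and its accuracy on Task A collapses. Real deep nets forget in subtler ways,
# but the mechanism -- new gradients overwriting old weights -- is the same.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, x1_center, flip):
    """Synthetic 2-D task: label depends on the sign of x0, optionally flipped."""
    x = rng.normal(size=(n, 2))
    x[:, 1] += x1_center                 # each task lives in its own region of input space
    y = (x[:, 0] > 0).astype(float)
    return x, (1.0 - y) if flip else y

def train(w, b, x, y, epochs=200, lr=0.5):
    """Plain batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid prediction
        grad_w = x.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

def accuracy(w, b, x, y):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return np.mean((p > 0.5) == y)

xa, ya = make_task(500, x1_center=-2.0, flip=False)   # Task A
xb, yb = make_task(500, x1_center=+2.0, flip=True)    # Task B, deliberately conflicting

w, b = np.zeros(2), 0.0
w, b = train(w, b, xa, ya)
print("Task A accuracy after learning A:", accuracy(w, b, xa, ya))   # high, close to 1.0

w, b = train(w, b, xb, yb)                             # continue training on B only
print("Task A accuracy after learning B:", accuracy(w, b, xa, ya))   # drops sharply: A is forgotten
print("Task B accuracy after learning B:", accuracy(w, b, xb, yb))   # high, close to 1.0
```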

Now, in a new book, IEEE Fellow Stephen Grossberg argues that an entirely different approach is needed. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind describes an alternative model for both biological and artificial intelligence based on cognitive and neural research Grossberg has been conducting for decades. He calls his model Adaptive Resonance Theory (ART). Grossberg -- an endowed professor of cognitive and neural systems, and of mathematics and statistics, psychological and brain sciences, and biomedical engineering at Boston University -- based ART on his theories about how the brain processes information. "Our brains learn to recognize and predict objects and events in a changing world that is filled with unexpected events," he says. Based on that dynamic, ART uses supervised and unsupervised learning methods to solve such problems as pattern recognition and prediction. Algorithms using the theory have been included in large-scale applications such as classifying sonar and radar signals, detecting sleep apnea, recommending movies, and powering computer-vision-based driver-assistance software.
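
For readers curious about what ART actually does mechanically, the sketch below is a heavily simplified take on the classic unsupervised ART-1 variant (binary inputs only). It illustrates the general recipe -- match an input against stored category prototypes, accept the best match only if it passes a "vigilance" test, and otherwise recruit a brand-new category instead of overwriting an old one -- but it is not Grossberg's full model; the class name and parameter defaults here are our own.

```python
# Heavily simplified sketch of ART-1 (unsupervised, binary inputs), illustrating the
# stability/plasticity idea behind Adaptive Resonance Theory. This is NOT Grossberg's
# full model -- just the textbook fast-learning recipe: match an input against stored
# category prototypes, accept the best match only if it passes a "vigilance" test,
# otherwise recruit a new category rather than overwrite an old one.
import numpy as np

class SimpleART1:
    def __init__(self, vigilance=0.75, alpha=0.5):
        self.rho = vigilance        # how strict a match must be (0..1)
        self.alpha = alpha          # small "choice" parameter favoring specific prototypes
        self.prototypes = []        # one binary prototype vector per learned category

    def train_one(self, x):
        """Present one binary input vector; return the index of the category it settles into."""
        x = np.asarray(x, dtype=float)
        # Rank existing categories by the choice function T_j = |x AND w_j| / (alpha + |w_j|)
        scores = [np.sum(np.minimum(x, w)) / (self.alpha + np.sum(w)) for w in self.prototypes]
        for j in np.argsort(scores)[::-1]:
            w = self.prototypes[j]
            match = np.sum(np.minimum(x, w)) / np.sum(x)   # vigilance test
            if match >= self.rho:
                self.prototypes[j] = np.minimum(x, w)      # resonance: refine this category only
                return j
        # No stored category is close enough: plasticity without forgetting -> new category
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1

# Tiny usage example with 6-bit patterns
art = SimpleART1(vigilance=0.7)
patterns = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],   # similar to the first -> refines the same category at this vigilance
    [0, 0, 0, 1, 1, 1],   # very different -> a new category; the old one is left untouched
]
for p in patterns:
    print(p, "-> category", art.train_one(p))
```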

[...] One of the problems faced by classical AI, he says, is that it often built its models on how the brain might work, using concepts and operations that could be derived from introspection and common sense. "Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives," he writes. "It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works." The problem with today's AI, he says, is that it tries to imitate the results of brain processing instead of probing the mechanisms that give rise to the results. People's behaviors adapt to new situations and sensations "on the fly," Grossberg says, thanks to specialized circuits in the brain. People can learn from new situations, he adds, and unexpected events are integrated into their collected knowledge and expectations about the world.


Deep Learning Can't Be Trusted, Brain Modeling Pioneer Says

  • One of the problems faced by classical AI, he says, is that it often built its models on how the brain might work, using concepts and operations that could be derived from introspection and common sense. "Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives," he writes. "It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works."

    This sounds pretty woolly. Yes, neural networks were loosely inspired by the concept of a neuron. That's not to say they're even an attempt to actually model consciousness. Other AI models are even less connected with cognition. The idea that there's more to be learned from better modelling the brain is almost too obvious to bother stating. Am I misinterpreting him?

    • Re:Really? (Score:5, Interesting)

      by Whateverthisis ( 7004192 ) on Thursday January 13, 2022 @11:30AM (#62169901)
      I think you're slightly tangential to what he's saying. Yes, it's a given that better modeling of the brain should lead to better AI development. I think he's saying something different: that our very conception of how the brain works is fundamentally flawed, and that when we model the brain we have to think about it in very different ways.

      Most people I see modeling brain patterns take a very analytical approach, trying to map the brain the way we map a circuit or a software diagram. That's normal for engineers to do, because that part of their brain is highly developed and it's what they're used to doing. I spend my days in management, however, and have worked to develop my people skills, and what I see is that even highly educated, highly rational people are at the mercy of what we would call irrational, emotional responses, or even have intuition in ways that we can't describe rationally. Shockingly few people are capable of logically describing an emotional response. And yet that emotional response is in many ways where the human ability for invention comes from. Invention usually means doing something that goes against current understanding or all the data in the world; it can't be described rationally, but it happens. That ability to adapt to a new situation without data has a lot to do with humans' ability to think relatively, use instinct, and trust themselves to overcome a never-before-seen situation or a new idea.

      I took from his argument that people are approaching human intuition and instinctive response by building bigger and better algorithms and more complex data sets to mimic them, and he's arguing that that approach will fail and that we need a completely new concept.

      • "...I spend my days in management, however, and have worked to develop my people skills, and what I see is that even highly educated, highly rational people are at the mercy of what we would call irrational, emotional responses, or even have intuition in ways that we can't describe rationally..."

        Yet aren't those emotional responses, at that point, forever "completely" predictable given new information about the current environment, including the past response and not much more? The fact that the person cannot explain why they react that way has nothing to do with whether a given emotional response can actually be explained; it's about the ability to recognize and quantify the "inputs" involved. My opinion is that an emotional response to an event mostly relies on a union of controlled environments and cultural experiences of

    • From his perspective, bio-mimicry is the one true path to AI, so that must be what AI researchers are doing, and so why aren't they doing it more accurately?
    • Yes, neural networks were loosely inspired by the concept of a neuron. That's not to say they're even an attempt to actually model consciousness

      To be more precise, neural networks were an attempt to model consciousness. When they found out it didn't work, they decided to see what else they could do with them. Much like every other algorithm to emerge from the AI field.

      • Yes, neural networks were loosely inspired by the concept of a neuron. That's not to say they're even an attempt to actually model consciousness

        To be more precise, neural networks were an attempt to model consciousness. When they found out it didn't work, they decided to see what else they could do with them. Much like every other algorithm to emerge from the AI field.

        Then how do you explain video or television? And why is it assumed that an AI cannot use something that uses algorithms like those examples the same way we do? Or is that ability to observe the self not programmed into AI as of yet? I say self because the same algorithms we devise to create a video image are written in the same languages we create AI with.

        • I say self because the same algorithms we devise to create a video image are written in the same languages we create AI with.

          This makes no sense to me, I have no idea what you are trying to convey.

          • We created language systems to predict, among other things, how a computer system's memory will be used and what the system should expect in a physical memory location, all based on a desired value the user expects. They are built on the fact that matter can keep a predictable state when interacted with, and will in turn affect the state of other matter in a predictable way. A language system that does all this is also used to create AI, which is expected to "know" the world around it. And we should add

      • by ceoyoyo ( 59147 )

        Both deep learning and the ART thing this guy is talking about use neural networks. They work pretty well.

    • by ceoyoyo ( 59147 )

      People talking about this stuff often get really hand wavy.

      Adaptive Resonance Theory is neural networks, the same as deep learning (almost always) is. If I understand correctly, it appears the ART neural nets are limited to a single layer, though, which has some well-known problems.

      If he's trying to say that simple supervised training isn't sufficient, that's pretty universally agreed.

  • Comment removed based on user account deletion
    • by sjames ( 1099 ) on Thursday January 13, 2022 @11:36AM (#62169923) Homepage Journal

      They don't mean at a time; they mean at all. Imagine a child learns to tell a cat from a dog. Then it learns the names of the primary colors, but as a consequence can no longer tell a cat from a dog.

      • by xwin ( 848234 ) on Thursday January 13, 2022 @12:22PM (#62170053)
        From my personal experience with ML models, they do behave like a child for the most part. A very limited child, focused on one task, but a child. The model can only recognize an object from the list of objects that it knows. If it does not recognize the object, it may decide it is another object that it does know. Small children do exactly that: if they know a dog but don't know a cow, they will call a cow a dog. Even adults do that with, say, a type of plant: they will identify a plant they don't know as a different plant of a type they do know.
        ML networks are very good in a controlled environment, where unexpected inputs do not come up. They should not be dismissed as useless. You can populate a PCB by hand, yet we use pick-and-place machines. These machines require input to come on reels, separated by part, and have limits on the parts they can place. Yet we use them because they are much faster and more accurate even with all those restrictions. Same with ML.
        Now ML research is going into teaching the model to "not recognize" an object -- to say: I don't know what this is. This has turned out to be a much harder task than one might expect.
        No disrespect to that professor, but he is pushing a theory that he invented and thinks that it is better than existing ML. When software based on his theory can perform a useful task, he will have much better ground to stand on.
        • "No disrespect to that professor, but he is pushing a theory that he invented and thinks that it is better than existing ML. "

          Yes. I'd only add that he invented this theory in the 1980s; he's had some time to show that it is, to coin a phrase, "A New Kind of Neurally Inspired Computing".

        • by sjames ( 1099 )

          Nobody is claiming ML is useless, just that it has some real limitations that need to be kept in mind. Those limitations mean there are situations where it simply cannot be used. For example, in law, reasoning must often be articulable in order to carry any weight at all. In other cases that requirement has not been adequately observed, and it has now become a point of controversy and legal proceedings.

          And most adults will NOT misidentify to the degree a small child will. They do not see a hippopotamus and say doggie.

        • How many times do you need to tell a child "that is a cow, not a dog" before it learns to recognize a cow? How many images of cows does an ML model need to recognize a cow?

          The child learns with a drastically smaller dataset.

          • How many times do you need to tell a child "that is a cow, not a dog" before it learns to recognize a cow? How many images of cows does an ML model need to recognize a cow?

            The child learns with a drastically smaller dataset.

            Maybe, just maybe, the fact that we are using a system that was designed to do complex computations in fractions of milliseconds to compute 1=rat or 2=dog or 3=cat or 4=whatever... is the problem? The same problem it has always been?

      • by Sloppy ( 14984 )

        Maybe have "continuing education" for animals going while you try to learn colors for the first time, in order to avoid weakening already-established links?

        Here's a picture of a color, and I don't care what animal you think you see (don't care, as in I won't be feeding the accuracy of your animal guess back into your net, but I will be feeding back the accuracy of your color guess). Here's a picture of an animal, and I don't care what color you think it is. Here's another color. Here's another animal.

        I shou

      • by GuyK ( 1819282 )
        Step back a bit. It's more like when Copernicus proved that the earth orbited the sun. Huge libraries of integrated geocentric ideas and calculations became "unfounded" -- their predictions remained pretty close, but where we had asserted knowledge (the existence and connection of those ideas), we had a black hole that would take science much time and effort to fully repopulate. Consider each such idea as a hidden node in a deep ML system. I'd view the above as the article's meaning in saying "an arbitrary part of its learned memories can suddenly collapse."
        • by sjames ( 1099 )

          The difference is that the human brain is far more robust in that regard.

          We can even adopt useful conventions that we know to be unphysical if it lets us apply existing knowledge. For example, schematics are always read as if electricity travels from positive to negative even though we know it's the electrons that move.

    • by Z80a ( 971949 )

      I think another important part is how real brains are quite redundant. A human will fail to correctly recognize objects as well, but the brain has fail-safes, after-the-fact checks, and even a catalog of common mistakes it makes, to compensate for them.

  • "You know, I've always thought that technology could solve almost any problem. It enhances the quality of our lives, lets us travel across the galaxy, it even gave me my vision, but sometimes you just have to turn it all off."

    • "You know, I've always thought that technology could solve almost any problem. It enhances the quality of our lives, lets us travel across the galaxy, it even gave me my vision, but sometimes you just have to turn it all off."

      Unfortunately, technology does not seem to have solved basic character encoding.

      (Good quote though)

    • Good thing Data has an off switch.

  • ...we're just automating the suckage

    • I consider Deep Neural Networks to be less reliable than eyewitness testimony, the previous king of unreliable evidence.

      • Re: (Score:1, Interesting)

        by shiftless ( 410350 )

        It was already supplanted by news media a hundred years ago, and now by pictures and video too, none of which can be trusted anymore in the age of GANs. Welcome to the Brave New World, aka the New World Order.

        • This is the dumbest reply to my considerably dumb post.

        • It was already supplanted by news media a hundred years ago, and now by pictures and video too, none of which can be trusted anymore in the age of GANs. Welcome to the Brave New World, aka the New World Order.

          Like it or not, that is what government is for. A rule set first, to create the virtual world (laws), to create the conscious observable environment, to create the self, to create the next rule set, ... The assumption is that, given freedom, a benevolent or at least competent set of rules can be created by anyone at any time, and mass media has been doing all it can to disprove that.

    • by Sloppy ( 14984 )

      Dr. Daystrom said it best:

      It takes four hundred thirty people to suck that much. With this, you don't need anyone. One machine can do all the sucking they send men out to do now. Men no longer need suck in space, or on some alien world. Men can go on to achieve greater things than sucking.

  • More Training Time (Score:2, Insightful)

    by Anonymous Coward
    These AI models have a lot to 'learn' about the world. A model trained for just a few hours or days is probably nowhere near as effective as a human who has gone through their entire development, including interacting with the world in meaningful ways. The only way for these systems to do as well as humans is to replicate the human development process, including ~25-30 years of continuous training on data and physically validating what they learn, including an acute sense of right and wrong (and even then
    • They can't even be trained to have the adaptability of a 6-month-old puppy, so let's forget about adult human-level AI for the moment.

  • Deep learning is the simulation of learning with stats.

    It is not learning.

  • by alleycat0 ( 232486 ) on Thursday January 13, 2022 @12:21PM (#62170043) Homepage
    "Grossberg -- an endowed professor" - I don't think this word means what I thought it meant.
  • So what? (Score:5, Insightful)

    by groobly ( 6155920 ) on Thursday January 13, 2022 @12:22PM (#62170051)

    Is Grossberg just following in the footsteps of Hubert Dreyfus?

    Deep learning, or any "neural net" learning, is just a mathematical model that tries to make predictions based on a way of understanding data. Huge strides have been made in making it work where it does work.

    Ultimately, deep learning and neural nets in general, like many other AI algorithms, work by finding ways to divide up very high-dimensional spaces in ways that have predictive value. They are in general no better than the generality of their training set, and always far worse, because no one really knows how well they have divided up a million-dimensional space.

    Just because idiots (i.e., suits) follow the fad of thinking deep learning is the be-all and end-all of AI doesn't mean that their ignorant view is a new enlightenment from god. Deep learning is just a very good and very, very smart algorithm that works if you have a HUGE amount of data to train on. That's it. Nothing more. Getting to AI will take a lot, lot more.

  • IEEE Spectrum extols IEEE fellow Stephen Grossberg's virtues and talks about his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.

    In the book he presents his model, Adaptive Resonance Theory (ART), which solves what he calls the stability-plasticity dilemma that deep learning runs into: how a brain or other learning system can autonomously learn quickly (plasticity) without experiencing catastrophic forgetting (stability).

    No technical details are discussed.

  • The fact that the behavior of a deep network can't be analytically explained should hardly be a problem.

    In the medical field there are many drugs whose action is not well understood. Yet we prescribe them to millions.

  • AFAICS, ART is an on-the-fly training algorithm which tries to adapt weights for new, poorly matched inputs ... that's nice, but how is that going to get them closer to actual thought? Learning new patterns is not the same as learning to think.

    Thought is an unbounded iterative process ... the gap from pattern recognition and prediction to thought is as wide for ART as it is for any other current ANN.

  • The lack of explainability is exactly why these systems are so popular. Corporations are financial entities. They will always break the rules to make more money. When deep learning systems are used, corporations can break the rules by subtly modifying the training data set to enforce rules which are illegal but which the corporation believes will make more money -- rules like drastically undervaluing homes owned by black people. When caught, they can say "the AI made the decision." When the matter comes to court, it is virtual
    • It's the rules which are not explainable. They say it's not about equality of outcome, but if you don't achieve equality of outcome, it's always going to get blamed on x'ism anyway. So yeah, they try to hide behind algorithms.

      The truth is a poor defence in a court of law; Bayesian mathematics is x'ist ... but no one wants to hear that, and it's easy to find even experts to deny it. So in a court of law, obscure algorithms provide a convenient extra layer of defence; it's rational and not even immoral.

  • I know it sounds like he has solid credentials, but it's clear from his statements that he has major gaps in his awareness of the state of the art.

    I've seen it before: you end up talking to someone in person and they kind of fall apart when you get to the meat of it. I don't know where he stands in that respect, but I'll just note that auditable AI has been a major objective for organizations interested in life-and-death applications for some time.

    His comments should be taken with a grain of salt.
    • by xeos ( 174989 )

      I've seen Grossberg give talks and read some of his papers. He has an awfully hard time deciding between saying he invented everything OR that everything is worse than what he invented 20 years ago. But somehow, he perseveres and always manages to choose one of those options for anything new under the sun.

      That's not to say deep learning doesn't deserve some cold water, or that Grossberg hasn't made important contributions over the years. But he has zero credibility when it comes to talking about other people's work.

  • by aldousd666 ( 640240 ) on Thursday January 13, 2022 @04:28PM (#62170839) Journal
    Show me a human who isn't vulnerable to the same things... Sure, we only train for one thing at a time now, largely because we can combine lots of deep learning networks together, each serving as an "expert" on its area. But we don't need to get them to be perfect... they just have to make fewer mistakes than people, which is a lower bar than you might think.
    • Somehow I doubt that. Would a sentencing deep-learning network that gives harsher penalties to minorities only 50% as often as human judges be acceptable? I think not, and that's because we don't judge humans on aggregate statistics. We just say there are good judges and bad ones.
    • by evanh ( 627108 )

      The problem is that those in authority will be able to blame the machines and wash their hands without penalty. This leads to shooting first and then shooting some more.

  • My intelligence is not good at many things. Worse than AI, my intelligence is not good at the things its creators designed it for. I can't explain why I work this way either. I just know that some tasks I gravitate to, and others I just don't care about. Sorry, dad. I like skateboarding. I'd rather do that than play tennis.

  • Those interested in this line of thinking might also be interested in this: https://www.lesswrong.com/post... [lesswrong.com]
