Supercomputing | Hardware | Science

Scientists Develop the Next Generation of Reservoir Computing (phys.org) 48

An anonymous reader quotes a report from Phys.Org: A relatively new type of computing that mimics the way the human brain works was already transforming how scientists could tackle some of the most difficult information processing problems. Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed. In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer. Using today's state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University. The study was published today in the journal Nature Communications.

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the "hardest of the hard" computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said. Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said. It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical system into a "reservoir" of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future. The larger and more complex the system, and the more accurate the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time are needed to complete the task.
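For illustration only, the classic reservoir-computing recipe described above can be sketched in a few lines of Python: a fixed, randomly connected reservoir is driven by the input series, and only a linear readout is trained. Everything below (reservoir size, leak rate, the sine-wave toy task, the ridge parameter) is an arbitrary illustrative choice, not taken from the study.

```python
# Illustrative echo-state-network-style reservoir (not the study's code).
# The reservoir weights are random and fixed; only W_out is trained.
import numpy as np

rng = np.random.default_rng(0)

n_reservoir = 300    # number of artificial neurons in the reservoir
leak = 0.3           # leaking rate of the state update
ridge = 1e-6         # ridge-regression regularization for the readout

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 60, 3000)
u = np.sin(t)                 # input time series
y_target = np.roll(u, -1)     # next sample is the target

# Fixed random input and recurrent weights (never trained).
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

# Drive the reservoir with the input and record its states.
states = np.zeros((len(u), n_reservoir))
x = np.zeros(n_reservoir)
for i, u_i in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_i + W @ x)
    states[i] = x

# Train only the linear readout, discarding an initial warmup period.
warmup = 200
S, Y = states[warmup:], y_target[warmup:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_reservoir), S.T @ Y)

print("training RMSE:", np.sqrt(np.mean((S @ W_out - Y) ** 2)))
```

The point of the recipe is visible in the last few lines: the expensive recurrent part is never optimized, only the linear readout is fit, which is why the approach is comparatively cheap to train.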

In this study, Gauthier and his colleagues [...] found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time. They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect. Their next-generation reservoir computing was a clear winner over today's state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model. But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said. An important reason for the speed-up is that the "brain" behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results. Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.
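The Lorenz system referred to above is the standard three-variable benchmark for this kind of forecasting. A minimal sketch of generating such a trajectory, using the textbook Lorenz '63 parameters (the integrator, step size, and duration here are arbitrary choices, not the paper's settings):

```python
# Illustrative generation of a Lorenz '63 trajectory, the benchmark
# dynamical system used in the forecasting task.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one explicit-Euler step."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(10000):
    state = lorenz_step(state)
    trajectory.append(state)

# Shape (10001, 3): the kind of time series a reservoir computer is
# trained on and then asked to forecast.
trajectory = np.array(trajectory)
print(trajectory.shape)
```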


Comments:
  • This is faster, this is faster because it resembles the brain, and this is faster on big problems, says the inventor.

    Uh, we're missing the "How?" here...

    • by kmoser ( 1469707 ) on Tuesday September 21, 2021 @11:52PM (#61819617)

      An important reason for the speed-up is that the "brain" behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results. Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.

      I guess your brain wasn't fully warmed up when you read the article.

      • Excuse me, but what does that actually mean?
        They gave it a better blankie and hot cocoa?

        Also, I'd bet money that they made it faster by removing details. On the basis that it is the same insane thing that the whole fake "AI" industry has done until now, when they act like a weight tensor is practically the same thing as a living neural net. There is no reason to assume that cheater's mindset has changed.
        I bet they will soon find out that the advantage is just the result of the 80:20 rule, squared. As in:

    • Re: (Score:3, Informative)

      by vivian ( 156520 )

      According to the paper, which is downloadable and quite an interesting read, a traditional Reservoir Computer (RC) processes time series data associated with a strange attractor using an artificial recurrent neural network, with the forecasted strange attractor being a linear weighting of the reservoir states. The NC-RC performs a forecast using a linear weighting of time-delay states of the time series data and nonlinear functionals of this data.
      Paraphrased from description of Fig 1, P2 of https://www.n [nature.com]
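      In code, that next-generation idea reads roughly as follows: build feature vectors from time-delay copies of the series plus nonlinear (here quadratic) products of them, then fit a linear readout by ridge regression. This is only a sketch of the concept; the delay count, polynomial order, and ridge parameter below are arbitrary choices, not taken from the paper or its repository.

      ```python
      # Sketch of "time-delay states + nonlinear functionals + linear readout"
      # (not the paper's implementation; all hyperparameters are arbitrary).
      import numpy as np
      from itertools import combinations_with_replacement

      def delay_quadratic_features(series, n_delays=2):
          """Current + delayed samples, plus all quadratic products of them."""
          rows = []
          for i in range(n_delays, len(series)):
              lin = np.concatenate([series[i - k] for k in range(n_delays + 1)])
              quad = [lin[a] * lin[b]
                      for a, b in combinations_with_replacement(range(len(lin)), 2)]
              rows.append(np.concatenate([[1.0], lin, quad]))  # constant, linear, quadratic
          return np.array(rows)

      # Toy multivariate series standing in for a strange-attractor trajectory.
      rng = np.random.default_rng(1)
      series = np.cumsum(rng.normal(size=(500, 3)), axis=0)

      n_delays = 2
      X = delay_quadratic_features(series, n_delays)
      Y = series[n_delays + 1:] - series[n_delays:-1]   # predict the next increment
      X = X[:-1]                                        # align features with targets

      ridge = 1e-4
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
      print("one-step training RMSE:", np.sqrt(np.mean((X @ W_out - Y) ** 2)))
      ```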

      • by vivian ( 156520 )

        Typo correction - NC-RC should read NG-RC. I might need new glasses.

      • I understand some of these words...

        Follow up question. Can you expand on fractals and strange attractors? I know in the Mandelbrot set, there are strange attractors and that these show part of the chaotic nature of the set. Likewise the set is infinitely complex. Could this algorithm be trained to generate an arbitrary point with an arbitrary zoom factor in the set? If so, generating a fractal zoom would seem far less complex now? Likewise, could the algorithm be trained to find an approximate part of the s

        • I have only a passing familiarity with some of the concepts; however, I am pretty sure that this technique is not capable of accurately generating points deep in the Mandelbrot set, because of the butterfly effect. It could, however, be suitable for generating points that look plausibly like the Mandelbrot set, in other words, a deep mathematical fake. But that does not appear to be its focus; rather, it is supposed to speed up time series computations, most probably including finite element analysis, producing c

      • Goddamn. I'm getting old, and I've forgotten more math than I remember.

        This kinda stuff makes me worry about my future in software development. Might be time to do a refresher course on undergrad math again....

        According to the paper, which is downloadable and quite an interesting read, a traditional Reservoir Computer (RC) processes time series data associated with a strange attractor using an artificial recurrent neural network, with the forecasted strange attractor being a linear weighting of the reservoir states. The NC-RC performs a forecast using a linear weighting of time-delay states of the time series data and nonlinear functionals of this data.

        Translated: Someone's taken an obscure algorithm that, presumably, someone somewhere has a use for beyond the creation of a stream of research publications, and come up with a more efficient algorithm to do the same thing.

        Sounds a lot less interesting when you put it like that.

      • by gweihir ( 88907 )

        According to the paper, which is downloadable and quite an interesting read, a traditional Reservoir Computer (RC) processes time series data associated with a strange attractor using an artificial recurrent neural network, with the forecasted strange attractor being a linear weighting of the reservoir states. The NC-RC performs a forecast using a linear weighting of time-delay states of the time series data and nonlinear functionals of this data.
        Paraphrased from description of Fig 1, P2 of https://www.nature.com/article... [nature.com]

        I have a CS PhD and I did not understand that.

    • Uh, we're missing the "How?" here...

      We are also missing the "What?"

      What specific problem is sped up by this "dynamical system"?

    • by gweihir ( 88907 )

      This is faster, this is faster because it resembles the brain, and this is faster on big problems, says the inventor.

      Uh, we're missing the "How?" here...

      No surprise the "how" is missing. Nobody actually knows how the human brain works.

      • YOU do not know. The "AI" community does not know and does not want to know.

        Sorry, but I have a pretty good understanding. As do actual neurologists and neuro-psychologists.
        I gaveth realistic neural nets schizophrenia, and taketh it away again, ten years ago. It's not that hard in such a simulated setting, really. At least the part that isn't caused by neurochemical imbalances, injury, or the like. Though nowadays, within reason, I would know how to solve that too.
        (The reason it's not a trivial one-day cure

        • by gweihir ( 88907 )

          You do not know either. And, in particular, you do not know a thing called the "Dunning-Kruger Effect".

  • I understand the basis of Neural Networks. But that's about it for the whole Machine Learning field.

    I've picked out what might be the key point of Reservoir Computing: (Quote from Wikipedia)
    ", training of recurrent neural networks is challenging and computationally expensive. Reservoir computing reduces those training-related challenges by fixing the dynamics of the reservoir and only training the linear output layer."

    If I'm understanding that statement correctly, it is saying there are known good models,

    • They point out that this is a faster way of training a deep net to do something that is already amenable to existing neural net techniques.

      • by evanh ( 627108 )

        But what is it? How do any of these variations work?

        • Have [wikipedia.org] fun. [wikipedia.org]

          • by evanh ( 627108 )

            That's where I got my quote from in the first place. 'Twas hoping someone would confirm my understanding.

          • That is the stuff we already knew.

            Why do some people always condescendingly post links that are so general and vague that, even if they contained your specific answer, which they usually don't, you'd take half a decade to get to that localized bottom?

            Like
            Child: "Mommy? Who's my daddy?"
            Mom: "Have fun! *throws telephone book and introductory textbook on physics with a bookmark at 'chemistry' at child*"

  • by brxndxn ( 461473 ) on Wednesday September 22, 2021 @12:36AM (#61819677)

    If it is indeed faster, how did they replace positive duractence with unilateral phase detractors? I am also skeptical of their illustrated synchronization of cardinal grammeters compared to the original super computer encabulation. Also, did they prove sinusoidal depleneration? Anyway, hoping this is an improvement and not a step backward.

    • Everyone forgets the contributions of Leemon Baird

      Encabulator Algorithm explained [youtube.com]

    • A complete discussion can be found here in PDF format. Pre-population of the reservoir is clearly explained as opposed to TFA.

      Reservoir initial state [isotropic.org]

    • I think the secret's in the semi-boloid stators. Get the winding right and it'll help rein in the barescent skor motion that plagues these reservoir computers.

    • They used pre-famulated amulite surmounted by a malleable logarithmic casing, fitted to the ambifacient lunar waneshaft so that side fumbling was effectively prevented.

    • If it is indeed faster, how did they replace positive duractence with unilateral phase detractors?

      I'm sure that they do it by reversing the polarity of the neutron flow. Works every time.

    • Since you invented new words -- it's possible for that bit of nonsense to be absolutely true. All you have to do is define each word to mean something that is valid in the real world.

      So, trademark this copypasta and then get busy convincing people to use "unilateral phase detractors" for the next discovery in neural networks. Eventually, you can say you had prior art for a patent and sue everyone.

  • by fahrbot-bot ( 874524 ) on Wednesday September 22, 2021 @12:54AM (#61819709)

    A relatively new type of computing that mimics the way the human brain works ...

    They tried mimicking the brains of multiple canines, but Reservoir Dogs Computing didn't turn out so well...

  • Yeah, what is Operations Research? Rather old hat to me; it preceded data cubes. Sounds like it still comes down to sparse weighted B-tree pruning at the end of the day. This is how chess programs on 8-bit processors like the Z80 worked, sort of. Then nuclear bomb simulation took averages, FFT and associative hashing to speed things up on 1970s mainframes. And you should see how capacitor charge averaging and spinning glass discs worked on pre-digital or analogue missile seekers. And in learning, or AI as
  • the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons,

    I suspect some of them might say that it takes them several million neurons.

  • but all he wants to know is, Why am I Mr Pink?

  • -ic means of or like/related to
    -al means of or like/related to
    Dynamic means something of or like motion
    Dynamical means something of or like something of or like motion

    Any time someone writes -ical, they are trying to make something look more impressive than it is by making the word longer

    See: graphical

    • Yeah, I'm pretty skepticalistic of this, myself.

      You are right, of course. This is an example of the obfuscation that pervades the published work out of academia. Through most of society we've dropped the idea that overly formal speech and obscure word choices are impressive, but not in the pretentious world of academia.

      I wonder how much more the world would know if the walls were bashed down and peer reviewed work were written for the largest audience possible without losing integrity rather than the narrowest. Especially in fields like biology and chemist
    • I suspect it's a symptom of somebody from a Germanic language writing it.
      Because e.g. to a German, that sounds like the correct way to say it.
      Caught myself writing that, many times.

  • what their unit tests look like.

    • I can't stand the unit test fallacy.

      If something needs unit tests to be reliable, then the unit tests themselves need unit tests to be reliable.
      If something doesn't need unit tests, to be reliable, then the code itself doesn't need unit tests.
      It's always a self-contradiction.

      In reality, the scientific method applies: How many sigma have you got? How many observations have you got? In other words: How many different implementations have you got? The reliability is based on the number of those.
      Unit tests ar

  • by Geoffrey.landis ( 926948 ) on Wednesday September 22, 2021 @11:42AM (#61820987) Homepage

    "...such as forecasting the evolution of dynamical systems that change over time"

    Wow. Let's look at the definition of the words here.

    "Forecasting": prediction of changes in time

    "evolution": changes over time

    "dynamical systems": systems that change over time

    "that change over time": yeah, just what it says: change over time.

  • by cowdung ( 702933 ) on Wednesday September 22, 2021 @05:11PM (#61822183)

    The code of the Nature paper is here:

    https://github.com/quantinfo/n... [github.com]

    I guess others can play around with it and see if the claims hold up.
    The implementation looks fairly minimal and not very self-explanatory.
    This implementation of a more classic "Echo State Network" is much nicer:

    https://github.com/kalekiu/eas... [github.com]

    But the paper claims to have improved on that.

    What seems most attractive is that it is so easy to train (lower # of examples), which is kind of the holy grail of machine learning.

    But I'm skeptical because it looks a bit too much like magic.
