
Researchers Achieve AI Breakthrough Using Light To Perform Computations (independent.co.uk)

"Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations," reports the Independent.

"The new approach significantly improves both the speed and efficiency of machine learning neural networks..." A paper describing the research, published this week in the scientific journal Applied Physics Reviews, reveals that their photon-based (tensor) processing unit (TPU) was able to perform between 2-3 orders of magnitude higher than an electric TPU.

"We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput," said Mario Miscuglio, one of the paper's authors.

"When opportunely trained, [the platforms] can be used for performing inference at the speed of light."

This discussion has been archived. No new comments can be posted.

  • This is a "processing" breakthrough... not an AI breakthrough.

    I am so sick of a computer making a binary decision 2 nanoseconds faster being treated as a revolution in AI by media.

    • But it's at the speed of light now! I'm going down to Best Buy to pre-order my household robot...

    • by Misagon ( 1135 )

      Yes, but the processing in question is specific to tensor processing units. It is an *analogue* circuit using light, not a digital one. It can not be applied to general purpose computing.

      • Yes, but the processing in question is specific to tensor processing units. It is an *analogue* circuit using light, not a digital one. It can not be applied to general purpose computing.

        The low-resolution operations are most applicable to NNs, which often use FP16 (or even FP8), but there are some other applications. For instance, FP16 is usually good enough for moving graphics and video games.

        So in addition to TPUs, these photonic circuits may someday be used in GPUs.
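
        To make the low-precision point concrete, here's a tiny NumPy illustration (my own example, not from TFA) of how coarse FP16 arithmetic is: fine for neural-net weights and pixels, useless for general-purpose numerics.

            import numpy as np

            # float16 has a 10-bit mantissa, i.e. only about 3 decimal digits of precision
            print(np.finfo(np.float16).eps)            # ~0.000977
            print(np.float16(1.0) + np.float16(1e-4))  # 1.0  -- the small term is swallowed
            print(np.float32(1.0) + np.float32(1e-4))  # 1.0001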

      • Yes, I have already been told today that the "dictionary" does not matter. Sounds like you are just as stupid as they are.

        AI is NOT equal to "processing", which is why I said this is an advancement to "processing", not AI. Just because it will improve AI does not mean it is an advancement to AI. Otherwise it is the same as saying "silicon" is a fucking advancement to AI. And it just isn't!

        I am sick of everything being trashed about as though it is AI.

        News flash folks, we are not even close to any meaningfu

        • AI

          The winter will be very cold.

        • by gweihir ( 88907 )

          Well, AI is not a speed problem at all, rather obviously. If it were, we would have working intelligence in machines that would just be very slow. It may not have applications, but it would demonstrate feasibility. And at that point increasing speed would be meaningful. Instead we have absolutely nothing. Dumb automation is what is possible, and by now it looks very much like that will be it.

    • by newcastlejon ( 1483695 ) on Sunday July 26, 2020 @03:59PM (#60333745)

      It's a custom chip used in machine learning research. That makes it relevant to AI.

      I'd say I was sick of people skipping TFA to get first post, but this is /. after all.

      • That has nothing to do with it.

        Developing a "process" for bringing us a step closer to AI is an advancement for AI.

        Making things process faster is not specific to AI, regardless of whether it is being used in an application for AI.

        It's like saying making a battery that lasts twice as long is an advancement for self-driving cars.

        It's bullshit... and people fundamentally do not grasp why because they are ignorant and when someone comes by to correct their misunderstanding they get pissed off and double down on the

        • by narcc ( 412956 )

          It's silly no matter how you reason about it. Computationalism is a dead end.

          • by gweihir ( 88907 )

            It's silly no matter how you reason about it. Computationalism is a dead end.

            Interesting. I did not know that term before. So far I assumed "Physicalism" captured it nicely, but this one may be better in some situations.

            I do continue to be completely fascinated by people that simply ignore self-awareness and then think they are doing sound Science. Although the continued failure of AI research to produce anything that has human-like intelligence (also called AGI now that AI is a completely broken and worthless term) gives us a strong hint that not only is self-awareness outside of known

            • not only is self-awareness outside of known Physics

              That's because "self-awareness" is a vague philosophical concept
              that has nothing to do with physics, or any other science.

          • Computationalism is a dead end.

            As distinct from what?

      • by gweihir ( 88907 )

        It is a _speed_ increase, not any kind of qualitative improvement. Calling it an "AI breakthrough" is basically a direct lie.

    • by Plammox ( 717738 ) on Sunday July 26, 2020 @04:14PM (#60333771)
      Intel just reported issues getting their 7nm process working. We're hitting the wall for transistor scaling very soon now, and the main issue in enabling AI is processing power. I think TFA is hugely relevant. If improvements of 2-3 orders of magnitude are possible, we're in for some crazy new applications.
      • by gweihir ( 88907 )

        We're hitting the wall for transistor scaling very soon now, and the main issue in enabling AI is processing power.

        No. If it were, we would have meaningful AI (i.e. AGI) that was just very slow. We do not have that.

        • by Plammox ( 717738 )
          Well, I was not referring to "real" AI/AGI, merely the next steps in deep learning or the somewhat diffuse LinkedIn-definition of AI, if you wish. Although it's not a Wargames Joshua or HAL9000, machine learning is still pretty darn interesting in itself.
          • by gweihir ( 88907 )

            Machine learning, automation, etc. are interesting, agreed. It just has still not dawned on the general public that "AI" is not about intelligence. But this is still just a gradual improvement (if it works at all; apparently it is a simulated result, not physical hardware). It mainly has the theoretical potential of making things cheaper.

            • A.I. research is about producing a computer system that can do an acceptable emulation of the behaviour of an intelligent human. Whether it really has "self awareness" is irrelevant.

              Of course, what is acceptable to one person would not be acceptable to others. Some people go gah-gah at the performance of a trivial chat-bot program. For me, scoring in the top 5% on the SAT ten years in a row would be a good start.
              • by gweihir ( 88907 )

                A.I. research is about producing a computer system that can do an acceptable emulation of the behaviour of an intelligent human. Whether it really has "self awareness" is irrelevant.

                Of course, what is acceptable to one person would not be acceptable to others. Some people go gah-gah at the performance of a trivial chat-bot program. For me, scoring in the top 5% on the SAT ten years in a row would be a good start.

                That definition is completely unsuitable for any kind of application in tech, except maybe entertainment. Anything else needs actual problem-solving capabilities, not faked ones. Self-awareness plays a role as we currently observe intelligence only in connection with it, hence any claim intelligence can be done without it is pure speculation. And there is the little _other_ problem that known Physics has no mechanism for self-awareness.

    • I am so sick of a computer making a binary decision 2 nanoseconds faster being treated as a revolution in AI by media.

      I am so sick of a human making a binary decision 2 nanoseconds after reading an article summary being treated as a legitimate opinion on /. by media.

    • by ludux ( 6308946 )
      Didn't even read the summary, huh buddy?
    • Well, obviously. Given that AI is still at least 50 years off, and almost, but not quite, entirely unrelated to weight-based tensor multiplications!

    • by gweihir ( 88907 )

      Well, it probably started with Marvin "the moron" Minsky, who made completely inane claims, like that a computer with more transistors than humans have brain cells would be smarter than humans (nicely showing that this cretin understood neither transistors nor brain cells, nor intelligence nor software). Similar stupid claims abound to this day, when we still have zero indication that making computers intelligent is possible at all.

  • The orders of magnitude are in electricity costs, not in calculation throughput.

    This is important in AI, because the AlphaGo team spent ~$3000 on electricity for every Go match they played.
    • AlphaGo team spent ~$3000 on electricity for every Go match they played

      That sounds implausible, given how the follow-up versions of AlphaGo were squeezed into much smaller machines than the original one. For example, AlphaZero ran on a 44-core machine with 4 TPUs. Let's say it was consuming a maximum of 1.3 kW. Over several hours, that's about 5 kWh of electricity. Hardly worth $3000 for a match.
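
      Spelling out that estimate (the 1.3 kW draw, a four-hour match, and a deliberately generous $0.30/kWh price are all assumptions):

          # Back-of-the-envelope check of the parent's numbers (all figures assumed)
          power_kw = 1.3        # assumed peak draw of a 44-core machine + 4 TPUs
          match_hours = 4       # a long professional Go match
          price_per_kwh = 0.30  # deliberately high electricity price, in USD

          energy_kwh = power_kw * match_hours       # 5.2 kWh
          cost = energy_kwh * price_per_kwh         # about $1.56
          print(f"{energy_kwh:.1f} kWh, roughly ${cost:.2f} per match")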

      • Here's a citation, on page 7 [stanford.edu]. Apparently the hardware for the entire thing was $27 million.

        The AlphaZero paper [arxiv.org] doesn't describe very well what kind of hardware they used for Go; it mainly focused on chess.
        • A 44 core machine with 4 TPUs does NOT cost $27M unless you include the additional training hardware and/or the R&D.

          AlphaGo: 1920 CPUs and 280 GPUs, $3000 electric bill per game

          Ah, it becomes clear then - this refers to the oldest versions of the software that 1) were running on non-TPU hardware, and 2) even used the available hardware inefficiently. Any version post the Lee Se-dol matches *didn't* require 1920 CPUs and 280 GPUs; as I said, they managed to squeeze it into a single machine by software and hardware improvements. Most of the matches took place after

          • Any version post the Lee Se-dol matches *didn't* require 1920 CPUs and 280 GPUs;

            OK, I just gave you a citation from Stanford and you completely decided to ignore it, without evidence. That's clear confirmation bias.

        • by ceoyoyo ( 59147 )

          That's not a reference, it's an unreferenced slide from a CS class. It doesn't say anything about how they arrived at that figure.

          DeepMind is notoriously cagey about details. But AlphaGo has been reimplemented in a completely open way:

          https://arxiv.org/pdf/1902.045... [arxiv.org]

          You can download OpenGo yourself if you like. I bet you can get it to run for less than $3k a game!

          From that paper, they trained OpenGo for nine days on 2000 GPUs. That's a lot of hardware, and the electricity would likely have run into

            But AlphaGo has been reimplemented in a completely open way: https://arxiv.org/pdf/1902.045... [arxiv.org]

            That's AlphaZero, not AlphaGo.

            Which illustrates an important point. *Training* large models can be very intensive. *Inference* is not.

            Inference is less expensive than training, but if you're inferring over 80,000 positions a second, it still requires huge computational resources. AlphaGo at its core was still a tree-search algorithm, with inference used for pruning.
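
            A minimal sketch (entirely illustrative; not DeepMind's code) of what "inference used for pruning" means: the policy network scores the legal moves, and only the highest-scoring ones are ever expanded by the tree search.

                from typing import Dict, List

                def policy_net(position: str) -> Dict[str, float]:
                    # Stand-in for a trained network: a probability for each legal move.
                    # Here it is a dummy that spreads mass over three made-up moves.
                    return {"D4": 0.6, "Q16": 0.3, "K10": 0.1}

                def expand(position: str, top_k: int = 2) -> List[str]:
                    # Pruning step: only the top_k moves ranked by the network get searched.
                    priors = policy_net(position)
                    return sorted(priors, key=priors.get, reverse=True)[:top_k]

                print(expand("empty board"))  # ['D4', 'Q16'] -- the rest of the tree is never visited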

      • by Anonymous Coward

        Depends on whether the number is for running the trained network or for training it (then amortized across the matches played).

      • Yeah, I think they're dividing the training-time electricity by all the matches.
  • by Carrier Lifetime ( 6166666 ) on Sunday July 26, 2020 @03:59PM (#60333749)

    Light doesn't scale: the wavelength puts a minimum practical size on dielectric waveguide width, approx. 9 microns in glass, 3 microns in something like silicon. That's much larger than transistors in any semiconductor material like Silicon, Gallium Arsenide or Gallium Nitride. Photonic integrated circuits usually end up huge.

    • Sub-wavelength waveguides have been a thing since the 1800s. Forget that, even insects have them in their eyes.

    • by mark-t ( 151149 )
      There is no fundamental reason beyond technological limits (as opposed to physical limits) why optical transistors cannot be made as small as electrical transistors, or even smaller in fact, since unlike electrons, photons have no charge and can be arbitrarily close to each other, even intersecting without causing any side effects.
      • Not to mention that there is always the advantage of not needing to deal with the immense amount of heat that electronic ICs generate.
        And as for size, there's no problem with photonic processors ending up larger, except perhaps for mobile applications; even in that case, a much lower power consumption might compensate for the bigger chip.

      • by niftydude ( 1745144 ) on Sunday July 26, 2020 @06:02PM (#60333993)
        Utter rubbish. The fundamental building block of an optical transistor is an optical interferometer, and the size limit of an optical interferometer is set by the wavelength of the light being used.

        I've been involved in photonics for 30 years and no one has ever proposed a concept for a scalable optical transistor. The fundamental physics of the situation imposes limits.

        If you have an actual idea for a scalable optical transistor, then file the patent now - if it's feasible you'll become a multi-billionaire.

        On the other hand, if it's empty rhetoric based on an armchair knowledge of physics, you'll get nothing.
        • by mark-t ( 151149 )

          The wavelength of photons imposes a limit on the bandwidth that can be achieved on a single channel, not on the size of a potential switch.

          Individual atoms are perfectly capable of both absorbing and emitting whole photons, despite their size being a fraction of the wavelength of light they produce.

          • So you have an optical transistor design based on a single atom?

            Awesome, patent it, you'll be a billionaire.

            Alternatively, consider that your language betrays the fact that you have no idea how photons interact with electron shells and are talking garbage.
            • by mark-t ( 151149 )

              Where did you get that I was suggesting that I had such a device or that it was somehow technologically possible today? There's an important difference, understand, between what is technologically impossible right now and what cannot ever be achieved regardless of technological advancement.

              I asserted only that there are not any genuine physical barriers to making optical transistors just as small if not smaller than electronic transistors have been made. It may even be physically possible to get them

          • Please stop. You have a level of half-knowledge that is dangerous to you.

            I watch only PBS SpaceTime and dabble in quantum physics a bit, and even I know you don't really have a clue of how any of this works.

        • by gweihir ( 88907 )

          Interesting. So the scale they can get this down to makes it pretty much completely worthless compared to electrical transistors, with an area density 1,000,000 times or so lower. Thanks for the information.

    • Bosons vs. Fermions

  • by backslashdot ( 95548 ) on Sunday July 26, 2020 @04:33PM (#60333811)

    So that's what my robot meant when he said he was light headed.

  • No, this is not "AI", this is automation. No, the problems with making it actually intelligent are not performance problems. And no, this is not a "breakthrough".

  • nonsense (Score:5, Informative)

    by Goldsmith ( 561202 ) on Sunday July 26, 2020 @06:17PM (#60334035)

    I'm a condensed matter physicist. This is a nonsense paper.

    If I were a reviewer for this paper, I would have absolutely required some significant revisions. The computational aspects of this paper may be fine, but the materials science, device physics, and overall clarity of the paper are terrible. In particular, it should be made clear from the title and abstract that the paper is about simulated results. There is a claim of making a device in the supplement, but it is not at all believable. Were I a reviewer, I would have insisted that Supplementary Figure 2a be put into the main body of the paper and explained, because it is either representative of an important and good step forward or a fantasy on the part of the authors. There are undefined acronyms, terms which are contradictory ("electrostatic heating" is used to refer to "joule heating" early in the paper, and later the correct term is used), and there is a general use of invented or non-standard scientific-sounding jargon in the description of materials and device physics. I'm not an expert in the computational aspects, but I would express to the editor concern regarding those aspects of the paper as well, based on the misuse of language that I can see.

    The corresponding author on this paper is well published in this field (optical computing and materials for optical computing), and this sub-field of condensed matter physics is particularly jargon heavy and secretive, so some of this is cultural.

    It may be that this is a fine piece of research, but if you're going to publish in a general applied physics journal, you should use general applied physics nomenclature and standards. The reviewers and editor should have helped fix all of this rather than let the authors muddle through with what comes out sounding like a bad episode of Star Trek rather than science.

  • by OneHundredAndTen ( 1523865 ) on Sunday July 26, 2020 @08:31PM (#60334437)
    Which makes me think that we are about to enter a new AI winter - the reality of AI unable to live up to the hype, and investors losing interest.
  • Is this news, really? Shocker: photons faster than electrons! Color me gobsmacked.

    • by gweihir ( 88907 )

      Speed of electrons is not relevant for electronic computations. Speed of electricity is, and it is far higher, usually close to light speed, depending on conductor material and shape. Electrons in conductors can be as slow as 1 m/h.
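
      For the curious, the usual drift-velocity estimate behind that "1 m/h" figure (the 1 A current and the 1 mm^2 copper wire are assumed, illustrative numbers):

          # Electron drift velocity v = I / (n * q * A) in a copper wire
          I = 1.0        # current in amperes (assumed)
          n = 8.5e28     # free-electron density of copper, per m^3
          q = 1.602e-19  # elementary charge, coulombs
          A = 1.0e-6     # cross-section in m^2, i.e. 1 mm^2 (assumed)

          v = I / (n * q * A)                    # ~7.3e-5 m/s
          print(v, "m/s =", v * 3600, "m/h")     # roughly 0.26 m/h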

      • at the speed of light - please read the article.

        • Oops, undo, undo - your point is well made; however, I think you misunderstand the point.

          Electrons represent single bits, binary.

          Light carries a great deal more information.

          I think; not my field, this.

          • by gweihir ( 88907 )

            Well, maybe. I think the posting I responded to is mixing speed of particles and speed of computations done with said particles. These are fundamentally different though.

  • The use of optics to calculate vector maths, etc., has massive potential.

    As the article concludes, a lot of our data is already from optical sources.

    This is not binary data, and it will never be completely rendered as such; there is always a loss, an approximation. And the output data is massive, and takes enormous work and resources to compress, store and transmit.

    To date our global conscience, what Google wants to call our world brain, our connected network, has been fumbling in the dark, representing sight b
