Researchers Achieve AI Breakthrough Using Light To Perform Computations (independent.co.uk) 66
"Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations," reports the Independent.
"The new approach significantly improves both the speed and efficiency of machine learning neural networks..." A paper describing the research, published this week in the scientific journal Applied Physics Reviews, reveals that their photon-based (tensor) processing unit (TPU) was able to perform between 2-3 orders of magnitude higher than an electric TPU.
"We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput," said Mario Miscuglio, one of the paper's authors.
"When opportunely trained, [the platforms] can be used for performing inference at the speed of light."
"The new approach significantly improves both the speed and efficiency of machine learning neural networks..." A paper describing the research, published this week in the scientific journal Applied Physics Reviews, reveals that their photon-based (tensor) processing unit (TPU) was able to perform between 2-3 orders of magnitude higher than an electric TPU.
"We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput," said Mario Miscuglio, one of the paper's authors.
"When opportunely trained, [the platforms] can be used for performing inference at the speed of light."
Bull stinking shit... (Score:1, Informative)
This is a "processing" breakthrough... not an AI breakthrough.
I am so sick of a computer making a binary decision 2 nanoseconds faster being treated as a revolution in AI by the media.
Re: (Score:2)
But it's at the speed of light now! I'm going down to Best Buy to pre-order my household robot...
Re: (Score:2)
Yes, I would like my own Mr. Handy too... or several...
Re: (Score:3)
Yes, but the processing in question is specific to tensor processing units. It is an _analogue_ circuit using light, not a digital one. It cannot be applied to general purpose computing.
Re: (Score:2)
Yes, but the processing in question is specific to tensor processing units. It is an _analogue_ circuit using light, not a digital one. It cannot be applied to general purpose computing.
The low-resolution operations are most applicable to NNs, which often use FP16 (or even FP8), but there are some other applications. For instance, FP16 is usually good enough for moving graphics and video games.
So in addition to TPUs, these photonic circuits may someday be used in GPUs.
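As a quick illustration (a numpy sketch of my own, nothing to do with the photonic hardware), FP16 loses surprisingly little accuracy on a typical NN-style matrix-vector product:

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.standard_normal((512, 512))
    x = rng.standard_normal(512)

    exact = W @ x                                   # float64 reference
    approx = (W.astype(np.float16) @ x.astype(np.float16)).astype(np.float64)

    rel_err = np.abs(approx - exact) / np.abs(exact)
    print(f"median relative error at FP16: {np.median(rel_err):.1e}")  # typically ~1e-3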
Re: (Score:3)
Re: (Score:1)
Yes, I have already been told today that the "dictionary" does not matter. Sounds like you are just as stupid as they are.
AI is NOT equal to "processing", which is why I said this is an advancement in "processing", not AI. Just because it will improve AI does not mean it is an advancement in AI. Otherwise it is the same as saying "silicon" is a fucking advancement to AI. And it just isn't!
I am sick of everything being trashed about as though it is AI.
News flash, folks: we are not even close to any meaningful AI.
Re: (Score:2)
AI
The winter will be very cold.
Re: (Score:2)
Well, AI is not a speed problem at all, rather obviously. If it were, we would have working intelligence in machines that was just very slow. It might not have applications, but it would demonstrate feasibility, and at that point increasing speed would be meaningful. Instead we have absolutely nothing. Dumb automation is what is possible, and by now it looks very much like that will be it.
Re:Bull stinking shit... (Score:5, Informative)
It's a custom chip used in machine learning research. That makes it relevant to AI.
I'd say I was sick of people skipping TFA to get first post, but this is /. after all.
Re: (Score:2)
That has nothing to do with it.
Developing a "process" for bringing us a step closer to AI is an advancement for AI.
Making things process faster is not specific to AI, regardless of whether it is being used in an application for AI.
It's like saying making a battery that lasts twice as long is an advancement for self-driving cars.
It's bullshit... and people fundamentally do not grasp why, because they are ignorant, and when someone comes by to correct their misunderstanding they get pissed off and double down on the misunderstanding.
Re: (Score:2)
It's silly no matter how you reason about it. Computationalism is a dead end.
Re: (Score:2)
It's silly no matter how you reason about it. Computationalism is a dead end.
Interesting. I had not come across that term before. So far I assumed "Physicalism" captured it nicely, but this one may be better in some situations.
I do continue to be completely fascinated by people that simply ignore self-awareness and then think they are doing sound Science.
Although the continued failure of AI research to produce anything that has human-like intelligence (also called AGI, now that AI is a completely broken and worthless term) gives us a strong hint that not only is self-awareness outside of known Physics...
Re: (Score:2)
not only is self-awareness outside of known Physics
That's because "self-awareness" is a vague philosophical concept that has nothing to do with physics, or any other science.
Re: (Score:2)
Computationalism is a dead end.
As distinct from what?
Re: (Score:2)
It is a _speed_ increase, not any kind of qualitative improvement. Calling it an "AI breakthrough" is basically a direct lie.
Re:Bull stinking shit... (Score:4, Interesting)
Re: (Score:2)
We're hitting the wall for transistor scaling very soon now, and the main issue in enabling AI is processing power.
No. If it were we would have meaningful AI (i.e. AGI) that was just very slow. We do not have that.
Re: (Score:2)
Re: (Score:2)
Machine learning, automation, etc. are interesting, agreed. It just has still not dawned on the general public that "AI" is not about intelligence. But this is still just a gradual improvement (if it works at all; apparently it is a simulated result, not physical hardware). It mainly has the theoretical potential of making things cheaper.
Re: (Score:2)
A.I. research is about producing a computer system that can do an acceptable emulation of the behaviour of an intelligent human. Whether it really has "self awareness" is irrelevant.
Of course, what is acceptable to one person would not be acceptable to others.
Some people go ga-ga at the performance of a trivial chat-bot program.
For me, scoring in the top 5% on the SAT ten years in a row would be a good start.
Re: (Score:2)
A.I. research is about producing a computer system that can do an acceptable emulation of the behaviour of an intelligent human. Whether it really has "self awareness" is irrelevant.
Of course, what is acceptable to one person would not be acceptable to others.
Some people go ga-ga at the performance of a trivial chat-bot program.
For me, scoring in the top 5% on the SAT ten years in a row would be a good start.
That definition is completely unsuitable for any kind of application in tech, except maybe entertainment. Anything else needs actual problem-solving capabilities, not faked ones. Self-awareness plays a role, as we currently observe intelligence only in connection with it; hence any claim that intelligence can be done without it is pure speculation. And there is the little _other_ problem that known Physics has no mechanism for self-awareness.
Re: (Score:2)
I am so sick of a computer making a binary decision 2 nanoseconds faster being treated as a revolution in AI by the media.
I am so sick of a human making a binary decision 2 nanoseconds after reading an article summary being treated as a legitimate opinion on /. by the media.
Re: (Score:2)
Re: Bull stinking shit... (Score:1)
Well, obviously. Given that AI is still at least 50 years off, and almost, but not quite, entirely unrelated to weight-based tensor multiplications!
Re: (Score:2)
Well, it probably started with Marvin "the moron" Minsky, who made completely inane claims, like that a computer with more transistors than humans have brain cells would be smarter than humans (nicely showing that this cretin understood neither transistors nor brain cells, nor intelligence nor software). Similar stupid claims abound to this day, when we still have zero indication that making computers intelligent is possible at all.
Orders of magnitude refer to (Score:2)
This is important in AI, because the AlphaGo team spent ~$3000 on electricity for every Go match they played.
Re: (Score:2)
AlphaGo team spent ~$3000 on electricity for every Go match they played
That sounds implausible, given how the follow-up versions of AlphaGo were squeezed into much smaller machines than the original one. For example, AlphaZero ran on a 44-core machine with 4 TPUs. Let's say it was consuming a maximum of 1.3 kW; over several hours, that's about 5 kWh of electricity. Hardly $3,000 worth per match.
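The arithmetic, for anyone who wants to check it (the power draw, match length, and electricity price are all my assumptions):

    power_kw = 1.3        # assumed peak draw: 44-core machine + 4 TPUs
    hours = 4.0           # assumed length of one match
    usd_per_kwh = 0.12    # assumed electricity price

    energy_kwh = power_kw * hours          # ~5 kWh per match
    print(f"{energy_kwh:.1f} kWh -> ${energy_kwh * usd_per_kwh:.2f} per match")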
Re: (Score:2)
The AlphaZero paper [arxiv.org] doesn't describe very well what kind of hardware they used for Go, it mainly focused on chess.
Re: (Score:2)
AlphaGo: 1920 CPUs and 280 GPUs, $3000 electric bill per game
Ah, it becomes clear then - this refers to the oldest versions of the software that 1) were running on non-TPU hardware, and 2) even used the available hardware inefficiently. Any version post the Lee Se-dol matches *didn't* require 1920 CPUs and 280 GPUs; as I said, they managed to squeeze it into a single machine by software and hardware improvements. Most of the matches took place after
Re: (Score:2)
Any version post the Lee Se-dol matches *didn't* require 1920 CPUs and 280 GPUs;
OK, I just gave you a citation from Stanford, and you decided to ignore it completely, without evidence. That's clear confirmation bias.
Re: (Score:2)
That's not a reference, it's an unreferenced slide from a CS class. It doesn't say anything about how they arrived at that figure.
DeepMind is notoriously cagey about details. But AlphaGo has been reimplemented in a completely open way:
https://arxiv.org/pdf/1902.045... [arxiv.org]
You can download OpenGo yourself if you like. I bet you can get it to run for less than $3k a game!
From that paper, they trained OpenGo for nine days on 2000 GPUs. That's a lot of hardware, and the electricity would likely have run into
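To put rough numbers on that training run (a back-of-the-envelope sketch; the per-GPU draw and electricity price are my assumptions, not figures from the paper):

    gpus = 2000
    kw_per_gpu = 0.3      # assumed average board power
    days = 9
    usd_per_kwh = 0.10    # assumed bulk datacenter rate

    kwh = gpus * kw_per_gpu * days * 24    # ~130,000 kWh
    print(f"~{kwh:,.0f} kWh -> ~${kwh * usd_per_kwh:,.0f} for training")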
Re: (Score:2)
But AlphaGo has been reimplemented in a completely open way: https://arxiv.org/pdf/1902.045... [arxiv.org]
That's AlphaZero, not AlphaGo.
Which illustrates an important point. *Training* large models can be very intensive. *Inference* is not.
Inference is less expensive than training, but if you're inferring over 80,000 positions a second, that still requires huge computational resources. AlphaGo at its core was still a tree-searching algorithm, with inference used for pruning.
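Schematically, the division of labour looks something like this (a toy Python sketch of my own, with stand-in networks; DeepMind's actual MCTS is far more involved):

    import random

    def policy_net(state):
        # Stand-in for the policy network: a prior over ten legal moves.
        return {m: random.random() for m in range(10)}

    def value_net(state):
        # Stand-in for the value network: an evaluation of the position.
        return random.uniform(-1.0, 1.0)

    def search(state, depth, top_k=3):
        """Negamax tree search that expands only the top_k moves the
        policy network rates highest, and scores leaves with the value
        network -- pruning via inference, as AlphaGo did."""
        if depth == 0:
            return value_net(state)        # one inference call per leaf
        priors = policy_net(state)
        pruned = sorted(priors, key=priors.get, reverse=True)[:top_k]
        return max(-search(state + (m,), depth - 1, top_k) for m in pruned)

    print(search(state=(), depth=4))       # 3^4 leaves instead of 10^4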
Re: (Score:2)
Always optimize your algorithm before inventing a new processor to run it.
Re: (Score:2)
Re: (Score:1)
Depends on whether the number is for running the trained network or for training it (then amortized across the matches played).
Re: Orders of magnitude refer to (Score:2)
Light doesn't scale (Score:3)
Light doesn't scale: the wavelength puts a minimum practical size on dielectric waveguide width, approx. 9 microns in glass and 3 microns in something like silicon. That's much larger than transistors in any semiconductor material like Silicon, Gallium Arsenide, or Gallium Nitride. Photonic integrated circuits usually end up huge.
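For a rough sense of scale (my own crude numbers, taking the guided wavelength lambda/n as the hard floor; practical single-mode guides like a 9-micron fibre core sit well above it):

    wavelength_nm = 1550                           # typical telecom wavelength
    materials = {"glass": 1.45, "silicon": 3.48}   # refractive indices

    for name, n in materials.items():
        # lambda/n is an optimistic lower bound on waveguide dimensions,
        # versus single-digit-nanometre features for modern transistors.
        print(f"{name} (n={n}): ~{wavelength_nm / n:,.0f} nm floor")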
Re: (Score:2)
Sub-wavelength waveguides have been a thing since the 1800s. Forget that, even insects have them in their eyes.
Re: (Score:3)
Re: (Score:2)
Not to mention that there is always the advantage of not needing to deal with the immense amount of heat that electric ICs generate.
And as for size, there's no problem with photonic processors ending up larger, except perhaps for mobile applications; and even there, a much lower power consumption might compensate for the bigger chip.
Re: Light doesn't scale (Score:5, Informative)
I've been involved in photonics for 30 years and no one has ever proposed a concept for a scalable optical transistor. The fundamental physics of the situation imposes limits.
If you have an actual idea for a scalable optical transistor, then file the patent now; if it's feasible you'll become a multi-billionaire.
On the other hand, if it's empty rhetoric based on an armchair knowledge of physics, you'll get nothing.
Re: (Score:2)
The wavelength of photons imposes a limit on the bandwidth that can be achieved on a single channel, not on the size of a potential switch.
Individual atoms are perfectly capable of both absorbing and emitting whole photons, despite their size being a fraction of the wavelength of light they produce.
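The size mismatch is easy to put numbers on (my own figures, just for illustration):

    photon_wavelength_nm = 500    # green light
    atom_diameter_nm = 0.3        # a typical atom

    ratio = photon_wavelength_nm / atom_diameter_nm
    print(f"the atom is ~{ratio:,.0f}x smaller than the light it absorbs")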
Re: Light doesn't scale (Score:2)
Awesome, patent it, you'll be a billionaire.
Alternatively, consider that your language betrays the fact that you have no idea how photons interact with electron shells and are talking garbage.
Re: (Score:3)
Where did you get the idea that I was suggesting I had such a device, or that it was somehow technologically possible today? There's an important difference, understand, between what is technologically impossible right now and what cannot ever be achieved regardless of technological advancement.
I asserted only that there are not any genuine physical barriers to making optical transistors just as small if not smaller than electronic transistors have been made. It may even be physically possible to get them
Re: Light doesn't scale (Score:1)
Please stop. You have a level of half-knowledge that is dangerous to you.
I watch only PBS SpaceTime and dabble in quantum physics a bit, and even I know you don't really have a clue of how any of this works.
Re: (Score:2)
Interesting. So the scales they can get this down to make it pretty much worthless compared to electrical transistors, with an area density something like 1,000,000 times lower. Thanks for the information.
Re: (Score:2)
Bosons vs. Fermions
Sick computer (Score:5, Funny)
So that's what my robot meant when he said he was light headed.
The usual bullshit (Score:1)
No, this is not "AI", this is automation. No, the problems with making it actually intelligent are not performance problems. And no, this is not a "breakthrough".
nonsense (Score:5, Informative)
I'm a condensed matter physicist. This is a nonsense paper.
If I were a reviewer for this paper, I would have absolutely required some significant revisions. The computational aspects of this paper may be fine, but the materials science, device physics, and overall clarity of the paper are terrible. In particular, it should be made clear from the title and abstract that the paper is about simulated results.

There is a claim of making a device in the supplement, but it is not at all believable. Were I a reviewer, I would have insisted that Supplementary Figure 2a be put into the main body of the paper and explained, because it is either representative of an important and good step forward or a fantasy on the part of the authors.

There are undefined acronyms, terms which are contradictory ("electrostatic heating" is used to refer to "joule heating" early in the paper, and later the correct term is used), and a general use of invented or non-standard scientific-sounding jargon in the description of materials and device physics. I'm not an expert in the computational aspects, but based on the misuse of language that I can see, I would express to the editor concern regarding those aspects of the paper as well.
The corresponding author on this paper is well published in this field (optical computing and materials for optical computing), and this sub-field of condensed matter physics is particularly jargon heavy and secretive, so some of this is cultural.
It may be that this is a fine piece of research, but if you're going to publish in a general applied physics journal, you should use general applied physics nomenclature and standards. The reviewers and editor should have helped fix all of this rather than let the authors muddle through with what comes out sounding like a bad episode of Star Trek rather than science.
Re: (Score:2)
I bet you hear a lot of midget jokes. Best one so far?
Nothing to do with AI (Score:3)
Re: (Score:3)
The sooner the better. My eyes can only take so much rolling.
Re: Nothing to do with AI (Score:1)
I'm already shorting "AI" stocks.
I started to see the first signs of disappointment some time ago. It is definitely on the downturn now, and its five minutes of fame and being milked by the gamblers are over.
soooo.... light is faster for computing... : / (Score:1)
Is this really news? Shocker, photons faster than electrons! Color me gobsmacked.
Re: (Score:2)
Speed of electrons is not relevant for electronic computations. Speed of electricity is, and it is far higher, usually close to light-speed, depending on conductor material and shape. Electrons in conductors can be as slow as 1 m/h.
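The drift velocity is easy to estimate (a quick sketch; the current and wire gauge are my assumed figures):

    # Drift velocity v = I / (n * A * q) for a copper conductor.
    I = 1.0          # current in amps (assumed)
    n = 8.5e28       # free electrons per m^3 in copper
    A = 1e-6         # cross-section: 1 mm^2, in m^2
    q = 1.602e-19    # electron charge in coulombs

    v = I / (n * A * q)                          # metres per second
    print(f"{v:.2e} m/s = {v * 3600:.2f} m/h")   # a fraction of a metre per hour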
Re: (Score:1)
at the speed of light - please read the article.
Re: (Score:1)
Oops, undo, undo: your point is well made; however, I think you misunderstand the point.
Electrons represent single bits, binary.
Light carries a great deal more information.
I think; not my field, this.
Re: (Score:2)
Well, maybe. I think the posting I responded to is mixing speed of particles and speed of computations done with said particles. These are fundamentally different though.
and our global conscience gains vision (Score:1)
The use of optics to calculate vector maths, etc., has massive potential.
As the article concludes, a lot of our data is already from optical sources.
This is not binary data, and it will never be completely rendered as such; there is always a loss, an approximation. And the output data is massive, taking enormous work and resources to compress, store, and transmit.
To date our global conscience, what Google wants to call our world brain, our connected network, has been fumbling in the dark, representing sight b