
Will the End of Moore's Law Halt AI Progress? (mindmatters.ai) 170
johnnyb (Slashdot reader #4,816) writes:
Kurzweil's conception of "The Singularity" has been at the forefront of the media's picture of artificial intelligence for many years now. But how close is that to reality? Will AIs be able to design ever-more-powerful AIs? Eric Holloway suggests that the power of AI has been fueled by Moore's law more than by AI technology itself, and that hitting Moore's Wall will therefore bring AI expansion to a fast halt.
Holloway calls that halt "peak AI...the point where a return on the investment in AI improvement is not worthwhile." He argues that humanity will reach that point, "perhaps soon...."
"So, returning to our original question, whether there is a path to Kurzweil's Singularity, we must conclude from our analysis that no such path exists and that unlimited self-improving AI is impossible."
AI progress is not bound by computation speed @now (Score:1)
If an AI were able to sit processing something for a year and come up with something useful, that would be leaps ahead of what we have now. Speed of processing is definitely not the issue right now.
Re: (Score:1)
Exactly. Processing 10 million cat pictures in 2 hours instead of 8 hours really doesn't matter in the grand scheme of things. We need better algorithms. Speed isn't that much of an issue especially when you can rent as many GPUs as you could possibly use at the click of a button.
Re: (Score:2)
> Speed of processing is definitely not the issue right now.
Clearly written by somebody who isn't actively involved with things like virtual/augmented/mixed-reality, realtime image-recognition, low-latency high-framerate photorealistic rendering, or realtime ray tracing.
Trust me, there are PLENTY of things left capable of soaking up enormous amounts of computing power.
The "realtime" part, in particular, is a nasty bitch. There are quite a few things that don't necessarily require SUSTAINED high-performa
Re: (Score:2)
Clearly written by somebody who isn't actively involved with things like virtual/augmented/mixed-reality, realtime image-recognition, low-latency high-framerate photorealistic rendering, or realtime ray tracing.
None of which has anything to do with AI.
Those tasks are indeed worthy and demanding challenges, but without RTFA I take the question to be something like "Can the research field of AGI advance without a massive, steady increase in transistors-per-CPU?" I would guess that the field is in such infancy that transistor count is not the limit. Each software system will just take longer to run, but unlike real-time rendering, it can wait.
Re: (Score:2)
The thing about parallel processing is that not everything CAN be neatly decomposed into stateless parallel processes.
It's kind of like the situation with human workers. If you're excavating a big hole and have an army of slaves, adding workers/slaves to dig, fill buckets, and carry them away will generally increase your net output... until the point when they start getting in each other's way. As the complexity of the task increases, their ability to work efficiently in parallel decreases rapidly.
Computers
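A minimal sketch (Python) of the diminishing-returns point above, using Amdahl's law; the 5% serial fraction is an invented illustrative figure, not a measurement.

    # Amdahl's law: ideal speedup when only part of the job can be parallelized.
    def amdahl_speedup(workers, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

    for n in (1, 10, 100, 1000):
        print(n, round(amdahl_speedup(n, serial_fraction=0.05), 1))
    # Even with only 5% inherently serial work, 1000 workers give ~20x, not 1000x.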
Re: (Score:3)
The thing about parallel processing is that not everything CAN be neatly decomposed into stateless parallel processes.
However, we have really, really good evidence that real, strong AI absolutely CAN be decomposed into stateless parallel processes well enough to perform in real time at the highest level of competence, on hardware with a maximum switching rate of only about 1,000 Hz and a mean firing rate of only about 6 Hz. You probably have one of these pieces of evidence about two feet from your keyboard.
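As a rough back-of-envelope illustration of how slow, massively parallel hardware can still add up: the synapse count is the figure quoted later in the thread, and both numbers are order-of-magnitude assumptions.

    # Throughput implied by a slow "clock" with enormous fan-out.
    synapses = 10e12        # ~10 trillion synapses (rough figure from the thread)
    mean_rate_hz = 6.0      # mean firing rate cited above

    events_per_second = synapses * mean_rate_hz
    print(f"{events_per_second:.1e} synaptic events per second")   # ~6.0e+13
    # Elements ticking at a few hertz still amount to ~10^13 elementary
    # operations per second when essentially all of them run in parallel.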
Re: AI progress is not bound by computation speed (Score:2)
I'd hardly call debouncing a keyboard "AI" ;-)
Re: AI progress is not bound by computation speed (Score:2)
Or interpreting the imaging sensor on a gaming mouse. ;-)
Transistors and AI (Score:5, Insightful)
If you think AI is just more transistors, you probably aren't doing anything interesting in AI research. How many transistors in the human brain? How many regular transistors are necessary to do the work of one quantum transistor? We don't even know how the brain works, and this asshat is asserting that we'll never be able to build a machine that works the same way.
Re: (Score:2)
The big problem is that you can map the neurons and synapses, but you can't read the strength from the microscope slide.
Re: (Score:2)
And even if you could, you don't know how the weights change over time.
Re: (Score:3)
The synaptic weights have not been mapped; instead, machine learning algorithms were used. Not dissing the project at all: I think there is a huge amount to learn from it, including helping to determine whether our model of how neurons work is essentially complete or not. I'm betting on not, and that a whole lot more processing goes on in the neuron than what happens at the synapses alone.
Re: (Score:2)
Examples:
1 - Dendrites perform multiple computations and transformations before forwarding the signal to the cell body.
2 - When the axon fires, the activation is also sent backwards to the dendrites where the signal is maintained for up to a minute (presumably to aid in the adjustment of synaptic weights)
3 - Purkinje
Re: (Score:2)
Thanks for that. A quick survey showed me that neural coding on the axon is now known to be richer than a simple firing rate; in particular, the inter-spike interval is significant in some cases, including Purkinje cells. Who knows what else is significant, maybe the timing of a single action potential in reference to some clock, in some cases. But it's clear that the task of reverse engineering C. elegans is much bigger than just determining connectivity and synapse weights. The null result is important here
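For illustration only, here is what "inter-spike interval" means operationally; the spike times below are invented example data, not recordings.

    # Inter-spike intervals (ISIs): the gaps between spike times, which can
    # carry information beyond the mean firing rate.
    spike_times_ms = [0.0, 12.5, 14.1, 30.2, 31.0, 55.7]

    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    mean_rate_hz = 1000.0 * len(isis) / (spike_times_ms[-1] - spike_times_ms[0])

    print("ISIs (ms):", [round(i, 1) for i in isis])
    print("mean rate (Hz):", round(mean_rate_hz, 1))
    # Two spike trains with the same mean rate can have very different ISI
    # patterns, which is why a rate-only model may throw information away.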
Re: (Score:3)
Another few decades of work on AI and it should be good, just like the AI experts said in the 1970s.
Re:Transistors and AI (Score:5, Informative)
How many transistors in the human brain?
Of course the answer is zero, because the brain has neurons. But we can have some numbers for comparison. A Graphcore GC2 IPU has 23 billion transistors. In comparison, a brain has:
100 billion neurons.
10 trillion synapses.
300 billion dendrites.
Which of those need to be emulated? A transistor does not do as much as a neuron, and we don't know all the things a neuron does. There is some evidence that the inside of a neuron does some kinds of calculations. So it's much more complicated than just comparing raw numbers. That said, transistors do operate faster than neurons.
Good link for more reading [timdettmers.com].
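A naive counting exercise with the figures quoted above; it says nothing about what a neuron or synapse actually computes, which is the real unknown.

    # Raw count comparison only, using the numbers from the comment.
    transistors_gc2 = 23e9     # Graphcore GC2 IPU
    neurons = 100e9
    synapses = 10e12
    dendrites = 300e9

    print("synapses per transistor:", round(synapses / transistors_gc2))    # ~435
    print("neurons per transistor:", round(neurons / transistors_gc2, 2))   # ~4.35
    # Even if one transistor could stand in for one synapse (it can't),
    # matching the raw counts would take hundreds of such chips.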
Re: (Score:2)
The brain has way more connections, but a synapse triggers, on average, less than once per second. A high end GPU is clocked at several gigahertz.
Also, many neurons have nothing to do with "thinking". They are engaged in background tasks, like keeping your heart beating, and monitoring your need to eat and breathe.
More/faster hardware won't get you different answers, just faster answers. So if hardware were the bottleneck, we would have really smart AI engines that take a long time to think. That is not happening, because the real bottleneck is our knowledge of how intelligence works.
Re: (Score:2)
Any living brain will also have vast redundancy built in, backups of backups of backups for every function and every memory. Not one series of cells carrying out a task, but thousands carrying out the identical task, and the more it is repeated, the more become involved and are made. You can't have one bump on the head causing a death-dealing lock-up or substantively altering decision trees, process loops, high/low focus tests, past-outcome referencing repetitions and risks, long-term projection outcomes versus short term
Re: (Score:3)
Also, many neurons have nothing to do with "thinking".
...
That is not happening, because the real bottleneck is our knowledge of how intelligence works.
I generally agree with the thrust of your comment, but your last sentence refutes your first one. We really don't know how intelligence works, and we still don't even have a good definition of what it is, so your confidence that the neurons that maintain homeostasis have nothing to do with thinking is misplaced. In fact, there is very strong evidence that these background-task neurons are also very important in thinking - like in maintaining consciousness. Disruptions in those low level background neurons do not usually inter
Not really, we are already there in hardware.. (Score:2)
Fast is just a little bit of an understatement, don't you think?
These days we are putting over 30 billion transistors on a chip (and for memory we are layering chips up to 64 times...)
Meat is very, VERY slow though: nerve impulses travel at around 450 km/h, so in-chip signals move at around 2 million times that speed.
However, by the time you factor in neuron firing times (WAY slower), you find the cross-conduction speed through the brain is closer to 10 m/s
making the same allowances through a chip, we find current t
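A quick sanity check of the speed comparison above; the on-chip propagation speed is an assumed fraction of c, so the ratio is only an order-of-magnitude estimate.

    # Rough nerve-vs-chip signal speed ratio.
    c = 3.0e8                        # speed of light, m/s
    nerve_speed = 450 / 3.6          # 450 km/h -> 125 m/s
    chip_signal_speed = 0.5 * c      # assumption: roughly half of c on-chip

    print(f"ratio: {chip_signal_speed / nerve_speed:,.0f}x")   # ~1,200,000x
    # Same ballpark as the "around 2 million times" figure above; the exact
    # number depends entirely on the assumed on-chip propagation speed.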
Re: (Score:2)
How many transistors in the human brain?
Of course the answer is zero, because the brain has neurons. But we can have some numbers for comparison. A Graphcore GC2 IPU has 23 billion transistors. In comparison, a brain has: 100 billion neurons. 10 trillion synapses. 300 billion dendrites. Which of those need to be emulated? A transistor does not do as much as a neuron, and we don't know all the things a neuron does. There is some evidence that the inside of a neuron does some kinds of calculations.
There is a lot more than "some" evidence of that. The behavior of a single neuron is quite complex, much more like a CPU than a transistor. We haven't worked out the processing of even a single neuron. There is very good evidence that even individual synapses perform some sort of computation - their response is a function of a number of local chemical factors, i.e. it is a weighted computation, and is not simple like a transistor.
Neural systems (except for the simplest peripheral reflexes perhaps) are stati
Re: (Score:2)
Assuming at some point in time we have the instrumentality to observe a living human brain in action at the resolution/level of detail necessary to really understand the how and why of its functioning, we'll be able to eliminate a fair number of those for some of the 'hardwired' functions they serve, mainly the 'bare-metal' functions required to keep our basic bodily functions in order -- but even then, we currently don't even understand enough of what our brain does
What's Moore have to do with it? (Score:1)
Considering that the time when Moore's law has been applicable hasn't appreciably sped up the development of AI, I fail to see how the end of Moore's law will appreciably slow it down. Can someone point me to a chat bot that is better than ELIZA [wikipedia.org]?
Re: (Score:2)
Bingo.
Today's CPUs and GPUs are essentially silicon, copper, and various doping ions in a thin sheet that we pump electricity through. They're glorified heating elements that happen to do "work" in a useful way. They're a brilliant design for what's essentially lightning in a rock, but they're very limited.
The human brain is made of many kinds of cells in various arrangements -- each cell being a small three dimensional world full of molecular machines and nearly every connection between those cells expone
Re: (Score:2)
Good thing I actually read people's entire posts otherwise I'd be inclined to write a scathing reply instead of this one.
100% correct; we don't even know how a biological brain like ours produces the phenomenon of 'thought' or 'consciousness' or anything else that defines us as 'sentient', mainly because we don't have the instrumentality (yet?) to observe the entire sy
Re: (Score:2)
Yes the real problem is that we don't really know how brains work and we cannot even really imagine a path to knowing how they work. One possible path I can imagine is through understanding how to read DNA. If we can figure out the DNA language well enough to build our own life-machines from scratch just by writing or rewriting the low level code then we may be able to understand everything that brains do by studying the blueprints. If we study the differences between human brains and mouse brains at the de
Re: (Score:2)
Of course all of those things can be simulated by a normal computer program - it's just physics. If we can simulate a rocket to land men on the moon, why can't we simulate a brain!
Re: (Score:2)
Because shooting a few men to the moon is actually pretty simple.
As soon as you have rockets that don't randomly explode for unknown reasons.
Re: (Score:2)
the brain isn't electrical like computer circuits in nature, it's chemical. Even impulse transmission is chemical.
That just means it's slower, right?
Re: (Score:2)
Yes. The conduction velocity of even the fastest nerve fibers in humans is roughly 120 meters/second; some are much slower, less than one meter/second. It would also be fair to call the signals "electro-chemical".
Re: (Score:1)
Why would we want to reproduce a brain? Surely the goal is to produce something better.
Something better than a republican brain, definitely.
Re: Transistors and AI (Score:2)
Same reason software developers write single threaded code even though it's been clear for decades that computing is moving towards parallel processing: it's the only thing they know how to do.
Obviously yes (Score:2)
Because the end of Moore’s Law would obviously mean that no further technological progress could possibly occur. /sarcasm
Moore's law... always 2-3 years from ending (Score:2)
There's a lot more to do, it's not just about scaling down transistors. More layers of transistors, different ways of structuring logic to better use the transistors... there are a lot of ways left unexplored to keep going, even if transistor size remains fixed.
Re: (Score:2)
You are right, but the problem with 3-d silicon is that it generates correspondingly more heat - which is already a problem even with current designs. I think that such technologies will keep us moving forward for a while even after we get to the smallest practical transistor size, but there will still be limits.
Longer-term, there's more hope for non-silicon based models - but that's a whole different subject that might not be subject to the familiar Moore's Law, which was never more than an observation abo
Re: (Score:2)
You are right, but the problem with 3-d silicon is that it generates correspondingly more heat
That is a real engineering problem to be sure, but it's also one which can be addressed. When there's no other way to cool processors, they'll start coming with liquid cooling integrated into the die itself. I imagine that the package will include a liquid to liquid heat exchanger, and that the liquid which actually circulates through the die will be sealed in, just to keep it clean. All this will be expensive, so they will do anything and everything else possible first. There is still some room to advance
Kurzweil is a Shill (Score:5, Insightful)
Kurzweil pretends to know what he's talking about because he can fit a graph with lots of tampering with the data. He fails to see that what he calls exponential growth is nothing more than the beginning of a sigmoid function. A good analysis of Moore's law and computational power shows a sigmoid function, as with many technologies they start off slow, build up quickly, then tapper off.
Re: (Score:2)
He was making an analogy to production of units of the arcade game "Tapper". [wikipedia.org]
Perfectly cromulent.
Re: (Score:1)
A good analysis of Moore's law and computational power (..)
Not to mention that computational power will always hit fundamental resource limits. Say one could do storage by putting individual electrons into a 'field' of locations, where the presence or absence of that single electron represents a 0 or 1. Even then, moving that electron back & forth takes energy - no matter how little. It takes time - no matter how fast. And it takes space - no matter how small.
Since resources like energy / space / raw materials (and time) have practical limits, that puts a ha
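One concrete way to put a number on the "no matter how little" point above is Landauer's bound on the energy needed to erase a bit; this is a hedged illustration, not something from the article.

    # Landauer's bound: erasing one bit at temperature T costs at least k_B*T*ln(2).
    import math

    k_B = 1.380649e-23    # Boltzmann constant, J/K
    T = 300.0             # room temperature, K (assumed)

    print(f"minimum energy per erased bit: {k_B * T * math.log(2):.2e} J")  # ~2.9e-21 J
    # Today's logic dissipates many orders of magnitude more than this, but the
    # floor is non-zero, which is the hard-limit point being made above.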
Re: (Score:3)
Moore's Law, strictly speaking, only applies to silicon semiconductor technology, not to computational technology as a whole. As such, it will eventually exhaust its possibilities, and we're not too far from doing that now. You are correct that most technologies do follow a sigmoid function and that few if any can ever be expected to follow an exponential curve.
But Kurzweil was not quite so naive as to claim that silicon semiconductor technology would continue forever. Rather, he claimed that successor tech
Re: (Score:2)
Holloway is much too pessimistic: He completely discounts any successor technologies to silicon.
We already have several candidate successor technologies: gallium arsenide as a replacement for silicon, optical computing, and superconducting transistors, which Japanese companies experimented with in the 1990s.
In other words, the death of Moore's Law (for which read: the progress of silicon technology) marks a transition period, not an endpoint.
Other techniques will suffer from the same fundamental constraint.
Re: (Score:2)
No argument that these are all serious candidates for a viable successor technology, but the question is which one(s) will win out for headroom, life of the components, cost of production, and so forth. Holloway is an idiot if he thinks that the death of silicon means the end of progress.
Other techniques will suffer f
Re: (Score:2)
Yes, but is the exact shape of the curve important?
- If the curve only starts bending back the other way very far in the future (or we are at a very low, early heel of it), wouldn't it effectively be the same from the perspective of us humans, who would not follow this evolution curve?
- Even after it tapers off to zero further evolution, the top of the curve would be much higher than human level and would probably not come back down to near human level unless there were an extinction of both such AI and the knowledge and/or capab
Re: (Score:2)
Kurzweil was well aware that technology curves are sigmoid, as you'd know if you'd actually read anything he wrote. Fast-growth "exponential" phases are invariably followed by levelling off as a technology matures and approaches its limits - but very often, new methods soon follow, with S curves of their own.
It's hard to be certain what stage of the AI curve we're at currently, but most researchers feel that the field is a long way from hitting its limits yet.
Things get small (Score:2)
Who will win?
South Korea has the design experts to move into smaller parts that will work as needed.
China will have to wait to see what the USA, South Korea and Japan design.
The problem is who will retool their factory first, take all the risks, take on new debt only to see their tech advantage totally "lost" to Communist China.
A working AI will not result due to the decades that people
Re: (Score:2)
What are you talking about?
There is no new secret quantum CPU technology that is going to push Moore's law along. If you are talking about qubits and spinbits, they are a long, long way off from becoming a viable CPU. This isn't a simple matter of retooling; a lot more research and slick tricks need to be invented. The quantum world is a very noisy one, which leads to a need to spread things out and use photons to communicate. Smaller isn't going to be the answer; in fact, chips will most likely get physically
Re: (Score:2)
There is no new secret quantum CPU technology that is going to push Moore's law along.
Not entirely true. Advances in understanding of quantum mechanics are also key to shrinking classical computers. For example, overcoming tunnelling issues.
Re: (Score:3)
CPUs stopped getting faster 3-years ago as Intel, AMD, ARM started cramming in more and more cores.
That is complete nonsense. CPU clock rates stopped getting faster, but the CPUs still can retire many more operations per second.
Always Better (Score:2)
Why would it? (Score:2)
Someone is so focused on that tree they're about to walk into that they don't notice they're in a forest.
Moore's law is limiting. (Score:2)
Oh my Lord? (Score:5, Informative)
Eric Holloway:
* seems to have no qualification in physics/nanotechnology to add anything to the discussion of whether Moore's law will end, and when
* seems to tag along with the intelligent design folks, re-telling the old stories they usually tell about information science and the rest of science
* and seems to write no peer-reviewed articles any more (apart from one unrelated paper written before his PhD research)
* did a PhD in a program where the students are identified as "good stewards of God-given talents" (https://www.ecs.baylor.edu/ece/index.php?id=865400)
* did a PhD program whose description includes: "Engineering is also a value-based discipline that benefits from Christian worldview and faith perspectives; students can also select supportive courses from religion, theology or philosophy. Course selection is broadly specified to provide flexibility and to accommodate a wide-range of student interest." (https://www.ecs.baylor.edu/ece/index.php?id=863609)
* Description of the seminar series of his university, where it seems he presented his PhD work (https://www.ecs.baylor.edu/ece/index.php?id=868860): eBEARS seminars are presented by Baylor ECE faculty, ECE graduate students and transnationally recognized scholars and leaders. The topics lie within the broad area of ECE. In concert with Baylor's Pro Futurus strategic plan to be "a place where the Lordship of Jesus Christ is embraced, studied, and celebrated," some eBEARS seminars focus on the topic of faith and learning.
So praise the Lord for his insights!
Re: (Score:1, Troll)
What do his religious beliefs have to do with his expertise in the subject?
Re: (Score:2)
Everything. A person who believes a bunch of myths and lies as fact, and who belongs to an organization devoted to the same (one led by power- and money-grubbing scum who ignore, cover up, and give handsome golden parachutes to the perpetrators of their sex scandals), will likely have nothing of value to contribute to science, being the antithesis of what scientists devote their lives to.
Re: (Score:2, Interesting)
Interesting opinion, considering many great scientific contributions and advancements throughout the ages have come (and still come) from people who were (or are) very religious. And many of those people attribute their discoveries at least partly to divine inspiration, realizing that despite all the hard work, study, experimentation, and preparation on their part, there is still sometimes the feeling of a sudden flow of knowledge or creativity from an external source, resulting in ideas that they feel they
Re: (Score:2)
The greatest contributions come from those NOT under the strict control of organized religion
By the way, I previously worked in high energy physics for years as an engineering physicist at a national lab.
Re: (Score:2)
Human progress has extended life, cured and mitigated disease, raised standard of living.
Religion and the ignorant mindset it promotes has maimed, killed, caused disease, and impeded the progress noted above. Not surprising since it is based on lies.
Re: (Score:2)
The personal belief: exactly nothing. But taking part in a PhD program where this connection is explicitly made, and being connected to IDers, suggests you've got something about science wrong.
Why not yet? (Score:3)
Maybe (Score:2)
Now the real work begins... (Score:5, Interesting)
Much of recent AI progress has come from the awesome amount of cheap computing power available.
That's not going to change! As today's bleeding edge silicon processes evolve, they will get faster and cheaper (both in cost and energy consumption), if not smaller.
Much of AI is inherently parallel: So long as more CPUs and GPUs can be added, larger problems will be solved faster.
We are still in the first two generations of custom hardware for AI. That trend will continue and accelerate as new architectures and algorithms arrive.
I'd say there are at least three full "Moore's Law" generations coming for AI, very likely more. But transistors alone won't be driving it. Fortunately, there are lots of other factors that will.
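A small sketch of the "add more chips, solve bigger problems" claim above, in terms of Gustafson's law for scaled (weak-scaling) speedup; the serial fraction is an invented example value.

    # Gustafson's law: grow the problem along with the machine.
    def gustafson_speedup(workers, serial_fraction):
        return workers - serial_fraction * (workers - 1)

    for n in (1, 8, 64, 512):
        print(n, round(gustafson_speedup(n, serial_fraction=0.05), 1))
    # Unlike the fixed-size (Amdahl) case sketched earlier in the thread,
    # scaled speedup keeps growing roughly linearly with added hardware.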
Right and wrong (Score:5, Insightful)
He's right and wrong. He is correct that much of the "advancements" in AI has been because of processing power (and dataset size). Most of what I learned in AI in college a quarter century ago forms the foundation of today's AI (and most of what I learned had been developed decades earlier). The reason we have things like Siri isn't because AI is smarter. It's because processing power is so fast and cheap, and because data storage and ram is so large and cheap, that an absolutely massive data set can be crunched to do speaker agnostic recognition to determine what I said. In fact, Apple can run my voice audio through dozens of speech models (male, female, accents, etc) in parallel to find the best result. So he is right - processing power has enabled AI to become far more useful of late.
However, where he is wrong is in the parallelism and scalability. In my above example, many different nodes (maybe located in entirely different datacenters) are doing that processing to find the best match.
AI doesn't need to exist on one processor, and it doesn't need to execute at any particular speed. If we're talking "Turing"-type AI, and I were to ask it "How are you feeling today?" and the AI takes 5 hours to reply "I feel the same as I always do.", well, it is still just as intelligent as if it were responding in real time. When we have reached that point in AI intelligence, we can throw more processing power at it in many different ways to let it process faster. The point is that the intelligence is not bound by the processing speed. Sure, for Siri to be viable commercially and useful to Joe Blow it needs to be fast, but as far as research and advancing the field of AI go, that is independent of the processing speed.
And having said all that, AI has not advanced significantly beyond the full realization and expansion of things like neural nets with massive processing power and data sets to be useful in identifying, say, a tree in a photograph. We could have been doing that in 1980 given the processing power and storage capacity we have now.
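A hedged sketch of the "run the audio through many models in parallel and keep the best match" idea described above; the model names, the scoring, and the recognizer are invented stand-ins, not Apple's actual pipeline.

    # Try several recognizers concurrently and keep the highest-confidence result.
    from concurrent.futures import ThreadPoolExecutor

    def recognize(model_name, audio):
        """Stub recognizer: returns (confidence, transcript)."""
        # A real system would run a speech model here; this just fakes a score.
        score = (hash((model_name, audio)) % 100) / 100.0
        return score, f"<transcript from {model_name}>"

    models = ["male-us", "female-us", "male-uk", "female-in"]   # invented names
    audio = b"...raw samples..."

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda m: recognize(m, audio), models))

    best_score, best_transcript = max(results)
    print(best_score, best_transcript)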
Re: (Score:1)
I came in here to say pretty much what you said. I went to graduate school (masters) in the field of AI in the late 80s and have recently returned to graduate school in the same field, 30 years later (Georgia Tech both times). I was shocked. I'm learning the same approaches to the problems. The biggest difference I'm seeing is that there are handy libraries to use, so I don't have to code stuff from scratch unless the class requires it. I worked mostly in image understanding the first time. In one o
Re: (Score:2)
I didn't take ML classes back in my days (late 90s), but do remember there
Re: (Score:2)
Yes there may be some big changes, but almost certainly not in your lifetime. Smart people have been trying to figure out how to make intelligent machines for decades and conceptually not much has changed. How old are you btw?
Real AI, as in an artificial human brain equivalent, is probably centuries or even millennia away. After we can make ourselves more intelligent and upload consciousness into an electronic device and plug a coprocessor into a socket in our neck then maybe we will be able to create an ar
Scarcity breeds innovation. Of course it won't. (Score:2)
Magnetic disk performance limits resulted in companies investing in Flash and other technologies. The oil crisis resulted in companies investing in engine efficiency. Broken iPhones and expensive iPhones result in more people finding third-party repair options. Adversity and scarcity breed innovation. We'll see a lot of money pour back into the pure science of understanding AI, and we're already seeing companies like Google and AMD invest in chip design that is tailored to AI. "AI-SICs", so to speak. I look for
How about (Score:2)
Using AI to develop better processors?
Not as long as you can make more silicon (Score:1)
Wait for it.....New proof of god! (Score:2)
Kurzweil addressed that (Score:2)
Kurzweil gave more thought to this subject than this particular poster probably ever will. He said that Moore's Law would slow down for integrated circuits; he just believed that a new technology would replace it. Sure, his estimates might not end up being true, but it's still rather pointless to argue against his Singularity with arguments he has already addressed.
As far as AI processing has progressed, that has had less to do with Moore's Law and more to do with learning how to use silicon effectively for it. Eric Holl
We are physical proof it is doable... (Score:2)
It's not so relevant for multiple reasons. (Score:2)
1. The speed limits of microprocessors are relevant because microprocessors process serial threads of instructions. Parallelizing multiplies effective performance. This is why GPUs are used so much more today, even in addition to multi-core processors.
2. Neuromorphic chips provide performance that is orders of magnitude better than CPUs and GPUs. They do solidify the activation functions possible -- as those cannot be modified or added to once burned into a chip. This is a downside but the tensor-based model f
What progress? (Score:2)
So far we've only been applying insane amounts of CPU power and data to machine learning algorithms which haven't changed much in recent decades.
We are now seeing such ML applications slowly getting as good as conventional ones, but at much higher computational cost. Essentially, machine learning boils down to statistics. However, unlike normal statistics, ML does not provide you with insight. That may be perfectly OK for figuring out what fruit is in front of a camera, but whenever you have accoun
Re:NO, Thats Plastic. (Score:2)
Re: (Score:2)
They're actually vastly inferior to date. Brains remain far more efficient, largely because, unlike with transistors, there are many potential outcomes of an electric signal entering a nerve cell.
Add to this the logic systems developed by billions of years of selection pressure, and you get biological computers that are far more advanced than anything we have in silicon today. That's why deep-level AI similar to that of the human brain has proved utterly impossible to date. Even if you were to manage to match t
Re: (Score:2)
We don't even need quantum computers to speed up AI. All we need is a different architecture that more closely mimics a brain.
Right now, even the most advanced practical applications of AI are still using serial computations to calculate the propagation of the signals. Even massively parallel GPUs are still serially calculating everything, just doing it in batches instead of one by one.
Meanwhile, some researchers are experimenting with new layouts where the components actually behave like neurons rather th
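A minimal illustration of the serial/batched point above: the same layer update written as a per-neuron loop and as a single matrix product that parallel hardware can execute as one batched step. Sizes and weights are arbitrary example values.

    # Per-neuron loop vs. one batched matrix-vector product.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 128))   # weights: 256 inputs -> 128 neurons
    x = rng.standard_normal(256)          # one input vector

    out_loop = np.array([np.dot(W[:, j], x) for j in range(W.shape[1])])
    out_batched = x @ W

    assert np.allclose(out_loop, out_batched)
    # Either way this is a clocked, step-by-step simulation of the network,
    # which is the contrast being drawn with neuromorphic layouts above.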
Re: (Score:1)
For commercial purposes, training can take place in the factory at a slower pace. The product just has to execute.
Also, it should be possible (at some point in the future) to design a hardware neural net that is capable of training and executing.
Re: (Score:2)
"Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years."
It actually does not state anything else. So it may mean that we may see an end to how dense things can be packed, but the law can still be fulfilled by larger chips and even multiple chips in the same casing to manage massive multi-core processors.
Even though we now see a transition to more purely 64-bit cores, I still see that a lot of stuff, when doing multi-thread and multi-process activi
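Taking the quoted statement literally, the projection is just a doubling roughly every two years; the base count below is an arbitrary example.

    # Moore's law as quoted: transistor count doubles about every two years.
    def transistor_count(n0, years, doubling_period=2.0):
        return n0 * 2 ** (years / doubling_period)

    n0 = 23e9    # e.g. a present-day ~23-billion-transistor chip
    for years in (2, 6, 10):
        print(years, f"{transistor_count(n0, years):.1e}")
    # Ten years at that cadence is ~32x the transistors, whether it comes from
    # density, bigger dice, or more chips per package.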
Re: (Score:2)
it may mean that we may see an end to how dense things can be packed
EUV will take us through another factor of 8 or so density increase more or less smoothly without relying on as yet unknown breakthroughs beyond what is required to get past the current 7nm hump.
Then don't discount the possibility of breakthroughs. For example, somebody might figure out a way to mass produce nanotube transistors, maybe good for a further factor of 8.