Will the End of Moore's Law Halt AI Progress? (mindmatters.ai)

johnnyb (Slashdot reader #4,816) writes: Kurzweil's conception of "The Singularity" has been at the forefront of the media conception of artificial intelligence for many years now. But how close is that to reality? Will AIs be able to design ever-more-powerful AIs? Eric Holloway suggests that the power of AI has been fueled by Moore's law more than AI technology itself, and therefore hitting Moore's Wall will bring AI expansion to a fast halt.
Holloway calls that halt "peak AI...the point where a return on the investment in AI improvement is not worthwhile." He argues that humanity will reach that point, "perhaps soon...."

"So, returning to our original question, whether there is a path to Kurzweil's Singularity, we must conclude from our analysis that no such path exists and that unlimited self-improving AI is impossible."
  • If AI was able to sit processing something for a year and come up with something useful, that'd be leaps ahead of what we have now. Speed of processing is definitely not the issue right now.
     

    • by Anonymous Coward

      Exactly. Processing 10 million cat pictures in 2 hours instead of 8 hours really doesn't matter in the grand scheme of things. We need better algorithms. Speed isn't that much of an issue especially when you can rent as many GPUs as you could possibly use at the click of a button.

    • > Speed of processing is definitely not the issue right now.

      Clearly written by somebody who isn't actively involved with things like virtual/augmented/mixed-reality, realtime image-recognition, low-latency high-framerate photorealistic rendering, or realtime ray tracing.

      Trust me, there are PLENTY of things left capable of soaking up enormous amounts of computing power.

      The "realtime" part, in particular, is a nasty bitch. There are quite a few things that don't necessarily require SUSTAINED high-performa

      • Clearly written by somebody who isn't actively involved with things like virtual/augmented/mixed-reality, realtime image-recognition, low-latency high-framerate photorealistic rendering, or realtime ray tracing.

        None of which has anything to do with AI.

        Those tasks are indeed worthy and demanding challenges, but without RTFA I understand the question to be something like "Can the research field of AGI advance without a massive steady increase in transistors-per-CPU?". I would guess that the field is in such an infancy that it is not the transistor count that is the limit. Each software system will just take longer to run, but as opposed to real-time rendering, they can wait (like in non-realtime rendering).

  • Transistors and AI (Score:5, Insightful)

    by reanjr ( 588767 ) on Saturday January 05, 2019 @08:43PM (#57910854) Homepage

    If you think AI is just more transistors, you probably aren't doing anything interesting in AI research. How many transistors in the human brain? How many regular transistors are necessary to do the work of one quantum transistor? We don't even know how the brain works, and this asshat is asserting that we'll never be able to build a machine that works the same way.

    • It's even more foolish since researchers have already been able to build a machine that behaves like very simple creatures [sciencealert.com]. There isn't any reason to think that we can't make something more complex; it's just a matter of being able to map out the wiring and build hardware to mimic the sensory data that the artificial brain needs. However, there's a long way to go. I recall another researcher who was trying to make a robot to fold clothes, but that problem turned out to be much harder than he thought since
      • C Elegans neurons and synapses have been mapped, but the simulations still don't respond the way the worm does. The robot you linked to doesn't behave like the worm; the scientists are still trying to figure out why the simulations don't work.
        • The big problem is that you can map the neurons and synapses, but you can't read the strength from the microscope slide.

        • The synaptic weights have not been mapped, instead machine learning algorithms were used. Not dissing the project at all, I think there is a huge amount to learn from it, including helping to determine whether our model of how neurons work is essentially complete or not. I'm betting on not, and that a whole lot more processing goes on in the neuron than what happens at the synapses alone.

          • It's already well known by neuroscientists that neurons are more complex and perform more computation and memory based functions than any current model that is used in these simulations.

            Examples:
            1 - Dendrites perform multiple computations and transformations before forwarding the signal to the cell body.

            2 - When the axon fires, the activation is also sent backwards to the dendrites where the signal is maintained for up to a minute (presumably to aid in the adjustment of synaptic weights)

            3 - Purkinje
            • Thanks for that. A quick survey showed me that neural coding on the axon is now known to be richer than a simple firing rate, in particular the inter-spike interval is significant in some cases, including Purkinje cells. Who knows what else is significant, maybe the timing of a single action potential in reference to some clock, in some cases. But it's clear that the task of reverse engineering C elegans is much bigger than just determining connectivity and synapse weights. The null result is important her

    • by AHuxley ( 892839 )
      Decades of the AI winter https://en.wikipedia.org/wiki/... [wikipedia.org] should have worked on that?
      Another few decades of work on AI and it should be good, like the AI experts said in the 1970s ...
    • by phantomfive ( 622387 ) on Saturday January 05, 2019 @09:30PM (#57911086) Journal

      How many transistors in the human brain?

      Of course the answer is zero, because the brain has neurons. But we can have some numbers for comparison. A Graphcore GC2 IPU has 23 billion transistors. In comparison, a brain has:

      100 billion neurons.
      10 trillion synapses.
      300 billion dendrites.

      Which of those need to be emulated? A transistor does not do as much as a neuron, and we don't know all the things a neuron does. There is some evidence that the inside of a neuron does some kinds of calculations. So it's much more complicated than just comparing raw numbers. That said, transistors do operate faster than neurons.

      Good link for more reading [timdettmers.com].

      • The brain has way more connections, but a synapse triggers, on average, less than once per second. A high end GPU is clocked at several gigahertz.

        Also, many neurons have nothing to do with "thinking". They are engaged in background tasks, like keeping your heart beating, and monitoring your need to eat and breathe.

        More/faster hardware won't get you different answers, just faster answers. So if hardware was the bottleneck, we would have really smart AI engines that take a long time to think. That is not happening, because the real bottleneck is our knowledge of how intelligence works.
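        A rough back-of-the-envelope comparison along these lines, using only the figures quoted in this thread (10 trillion synapses, under one event per second per synapse) and an assumed ~10 TFLOPS GPU; the numbers are illustrative, not precise:

```python
# Back-of-the-envelope only, using the figures quoted in this thread.
synapses = 10e12             # "10 trillion synapses" (figure quoted above)
events_per_synapse_hz = 1.0  # "less than once per second" -- take 1 Hz as an upper bound
brain_events_per_s = synapses * events_per_synapse_hz

gpu_ops_per_s = 10e12        # an assumed high-end GPU, on the order of 10 TFLOPS

print(f"Brain: ~{brain_events_per_s:.0e} synaptic events/s (upper bound)")
print(f"GPU:   ~{gpu_ops_per_s:.0e} floating-point ops/s")
print(f"Ratio: ~{brain_events_per_s / gpu_ops_per_s:.1f}")
```

        By this crude count the two land within an order of magnitude of each other, which is part of why the thread keeps returning to algorithms rather than raw hardware as the bottleneck.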

        • by qubezz ( 520511 )
          The heart has its own pacemaker, it keeps itself beating. The sinoatrial node is in the heart and creates voltages that cause the heart muscles to contract, while reacting to chemical signals such as adrenaline and blood gas levels. The spinal column emits neurotransmitters which only modify the heart's activity level. Many other organs also have the intelligence of their functioning built-in.
        • by rtb61 ( 674572 )

          Any living brain will also have vast redundancy built in, backups of backups of backups in every function and every memory. Not one series of cells carrying out a task but thousands carrying out the identical task, and the more repeated the more become involved and made. Cannot have one bump on the head causing a death dealing lock up or substantively altering decision trees, process loops, high low focus tests, past outcome referencing repetitions and risks, long term projection outcomes versus short term

        • I agree. What we are missing right now is knowledge about what the brain neural networks are optimising for (the loss functions of the brain) and perhaps the types of invariances they can handle. We have conquered only a few of these invariances and objectives. The brain has evolution to thank for that. Maybe if we can create good enough simulations we'd be able to offer our AI agents a similar evolutionary path or at least a large enough training ground. Static datasets are just not good enough for general AI.
        • Also, many neurons have nothing to do with "thinking".

          ...

          That is not happening, because the real bottleneck is our knowledge of how intelligence works.

          I generally agree with the thrust of your comment, but your last sentence refutes your first one. We really don't know how intelligence works, or even have a good definition of what it is, still, so your confidence that neurons that maintain homeostasis have nothing to do with thinking is misplaced. In fact there is very strong evidence that these background task neurons are also very important in thinking - like in maintaining consciousness. Disruptions in those low level background neurons do not usually inter

      • Fast is just a little bit of an understatement, don't you think?

        These days we are putting over 30 billion transistors on a chip (and for memory we are layering chips up to 64 times...)

        Meat is very VERY slow though: nerve impulses travel at around 450 km/h, so on-chip signals move at around 2 million times that speed.
        However, by the time you factor in neuron firing times (WAY slower), you find cross-conduction speed through the brain is closer to 10 m/s;
        making the same allowances through a chip, we find current t

        • Have you looked at recent graph based neural networks? They can handle multi dimensional and non-uniform structures, such as scenes, texts, proteins and social networks. They are a perfect fit for reasoning tasks. I think they are somewhat in the middle between symbolic and statistical learning. Also, have you seen the progress in reinforcement learning, especially model-based RL? It is possible to perform (some) activities at super human levels. I think that graph based NN, RL and simulators for training a
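          For readers unfamiliar with the term, here is a minimal sketch of what one "graph based" layer does: a single GCN-style message-passing step written in plain numpy. It is only an illustration of the idea, not any particular library's implementation; the toy graph and dimensions are arbitrary.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN-style message-passing step: each node aggregates its
    neighbours' (and its own) features, then applies a learned linear
    map followed by a ReLU nonlinearity."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # symmetric normalisation
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(0, a_norm @ features @ weights)

# Toy graph: 4 nodes in a ring, 3 input features, 2 output features.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # per-node input features
w = rng.normal(size=(3, 2))   # learned weights (random here, for illustration)
print(gcn_layer(adj, x, w))
```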
      • How many transistors in the human brain?

        Of course the answer is zero, because the brain has neurons. But we can have some numbers for comparison. A Graphcore GC2 IPU has 23 billion transistors. In comparison, a brain has: 100 billion neurons. 10 trillion synapses. 300 billion dendrites. Which of those need to be emulated? A transistor does not do as much as a neuron, and we don't know all the things a neuron does. There is some evidence that the inside of a neuron does some kinds of calculations.

        There is a lot more than "some" evidence of that. The behavior of a single neuron is quite complex, much more like a CPU than a transistor. We haven't worked out the processing of even a single neuron. There is very good evidence that even individual synapses perform some sort of computation - their response is a function of a number of local chemical factors, i.e. it is a weighted computation, and is not simple like a transistor.

        Neural systems (except for the simplest peripheral reflexes perhaps) are stati

      • Which of those need to be emulated?

        Assuming at some point in time we have the instrumentality to observe a living human brain in action at the resolution/level of detail necessary to really understand the how and why of its functioning, we'll be able to eliminate a fair number of those for some of the 'hardwired' functions they serve, mainly the 'bare-metal' functions required to keep our basic bodily functions in order -- but even then, we currently don't even understand enough of what our brain does
    • Considering that the time when Moore's law has been applicable hasn't appreciably sped up the development of AI, I fail to see how the end of Moore's law will appreciably slow it down. Can someone point me to a chat bot that is better than ELIZA [wikipedia.org]?

    • by Ramze ( 640788 )

      Bingo.

      Today's CPUs and GPUs are essentially silicon, copper, and various doping ions in a thin sheet that we pump electricity through. They're glorified heating elements that happen to do "work" in a useful way. They're a brilliant design for what's essentially lightning in a rock, but they're very limited.

      The human brain is made of many kinds of cells in various arrangements -- each cell being a small three dimensional world full of molecular machines and nearly every connection between those cells expone

    • We don't even know how the brain works, and this asshat is asserting that we'll never be able to build a machine that works the same way.
      Good thing I actually read people's entire posts otherwise I'd be inclined to write a scathing reply instead of this one.

      100% correct; we don't even know how a biological brain like ours produces the phenomenon of 'thought' or 'consciousness' or anything else that defines us as 'sentient', mainly because we don't have the instrumentality (yet?) to observe the entire sy
      • Yes the real problem is that we don't really know how brains work and we cannot even really imagine a path to knowing how they work. One possible path I can imagine is through understanding how to read DNA. If we can figure out the DNA language well enough to build our own life-machines from scratch just by writing or rewriting the low level code then we may be able to understand everything that brains do by studying the blueprints. If we study the differences between human brains and mouse brains at the de

  • Because the end of Moore’s Law would obviously mean that no further technological progress could possibly occur. /sarcasm

    • Moore's law isn't about being able to make more transistors. It's about shrinking and speeding up the transistors. So "if" Moore's law ends, you just build bigger processors that have more parallelism. Moore's law is about the physical manufacturing process of integrated circuits, nothing more.

  • There's a lot more to do, it's not just about scaling down transistors. More layers of transistors, different ways of structuring logic to better use the transistors... there are a lot of ways left unexplored to keep going, even if transistor size remains fixed.

    • You are right, but the problem with 3-d silicon is that it generates correspondingly more heat - which is already a problem even with current designs. I think that such technologies will keep us moving forward for a while even after we get to the smallest practical transistor size, but there will still be limits.

      Longer-term, there's more hope for non-silicon based models - but that's a whole different subject that might not be subject to the familiar Moore's Law, which was never more than an observation abo

      • You are right, but the problem with 3-d silicon is that it generates correspondingly more heat

        That is a real engineering problem to be sure, but it's also one which can be addressed. When there's no other way to cool processors, they'll start coming with liquid cooling integrated into the die itself. I imagine that the package will include a liquid to liquid heat exchanger, and that the liquid which actually circulates through the die will be sealed in, just to keep it clean. All this will be expensive, so they will do anything and everything else possible first. There is still some room to advance

        • Never said that it couldn't be addressed, but eventually you reach the point of diminishing returns in terms of cost, complexity, and the amount of space required for whatever cooling mechanism is used. We might even be able to get, say, up to around 3-4 generations or even more out of 3d, but I have a hard time imagining how we'd manage another 20 or more like we already have with 2d silicon. 4 generations achieved by increasing the depth would create chips whose components were roughly 16 times the thickn
  • by Dayze!Confused ( 717774 ) <slashdot DOT org AT ohyonghao DOT com> on Saturday January 05, 2019 @08:54PM (#57910904) Homepage Journal

    Kurzweil pretends to know what he's talking about because he can fit a graph with lots of tampering with the data. He fails to see that what he calls exponential growth is nothing more than the beginning of a sigmoid function. A good analysis of Moore's law and computational power shows a sigmoid function, as with many technologies they start off slow, build up quickly, then tapper off.

    • by Dunbal ( 464142 ) *
      Kind of like your post. You were doing so well until you hit "tapper off" :)
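      To see why the early portion of a sigmoid is so easy to mistake for exponential growth (the grandparent's point), here is a small numerical sketch; the logistic-curve parameters are arbitrary and purely illustrative:

```python
import numpy as np

# A logistic (sigmoid) curve L / (1 + exp(-k*(t - t0))) looks exponential well
# before its midpoint t0; only later does it flatten out toward its ceiling.
L, k, t0 = 1.0, 1.0, 20.0
t_early = np.arange(0, 10)
y_early = L / (1 + np.exp(-k * (t_early - t0)))

# Fit log(y) = a*t + b; a pure exponential would give an exactly straight line.
a, b = np.polyfit(t_early, np.log(y_early), 1)
residual = np.log(y_early) - (a * t_early + b)
print("max deviation of log(y) from the straight-line (exponential) fit:",
      np.max(np.abs(residual)))  # prints a tiny number for the early samples
```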
    • A good analysis of Moore's law and computational power (..)

      Not to mention that computational power will always hit fundamental resource limits. Say one could do storage by putting individual electrons into a 'field' of locations, where the presence or absence of that single electron represents a 0 or 1. Even then, moving that electron back & forth takes energy - no matter how little. It takes time - no matter how fast. And it takes space - no matter how small.

      Since resources like energy / space / raw materials (and time) have practical limits, that puts a ha
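      One well-known concrete instance of such a limit is Landauer's bound on the energy needed to erase a single bit; the quick calculation below is included only as an illustration of the kind of hard floor the parent is describing:

```python
import math

# Landauer's principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) of energy, regardless of how the bit is physically stored.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

e_min = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: ~{e_min:.2e} J")  # ~2.9e-21 J
```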

    • Moore's Law, strictly speaking, only applies to silicon semiconductor technology, not to computational technology as a whole. As such, it will eventually exhaust its possibilities, and we're not too far from doing that now. You are correct that most technologies do follow a sigmoid function and that few if any can ever be expected to follow an exponential curve.

      But Kurzweil was not quite so naive as to claim that silicon semiconductor technology would continue forever. Rather, he claimed that successor tech

      • Holloway is much too pessimistic: He completely discounts any successor technologies to silicon.
        We already have several successor technologies: gallium arsenide as a replacement for silicon, optical computing, and the superconducting transistors that Japanese companies experimented with in the 1990s.

        In other words, the death of Moore's Law (for which read: the progress of silicon technology) marks a transition period, not an endpoint.
        Other techniques will suffer from the same constraints in principle.

          We already have several successor technologies: gallium arsenide as a replacement for silicon, optical computing, and the superconducting transistors that Japanese companies experimented with in the 1990s.

          No argument that these are all serious candidates for a viable successor technology, but the question is which one(s) will win out for headroom, life of the components, cost of production, and so forth. Holloway is an idiot if he thinks that the death of silicon means the end of progress.

          Other techniques will suffer f

    • Yes, but is the exact shape of the curve important?

      - If the curve starts bending back the other way very far into the future (or we are at a very low, early heel of it), wouldn't it effectively be the same from the perspective of us humans who would not follow this evolution curve?

      - Even when tapered off to zero further evolution, the top limit of the curve would be much higher and would probably not go down to near human level except if there was an extinction of both such AI and the knowledge and/or capab

    • Kurzweil was well aware that technology curves are sigmoid, as you'd know if you'd actually read anything he wrote. Fast-growth "exponential" phases are invariably followed by levelling off as a technology matures and approaches its limits - but very often, new methods soon follow, with S curves of their own.

      It's hard to be certain what stage of the AI curve we're at currently, but most researchers feel that the field is a long way from hitting its limits yet.

  • Quantum issues now make past years of easy electrical design expensive, and it's finally time to pay for total retooling.
    Who will win?
    South Korea has the design experts to move into smaller parts that will work as needed.
    China will have to wait to see what the USA, South Korea and Japan design.
    The problem is who will retool their factory first, take all the risks, take on new debt only to see their tech advantage totally "lost" to Communist China.
    A working AI will not result due to the decades that people
    • What are you talking about?

      There is no new secret quantum CPU technology that is going to push Moore's law along. If you are talking about qubits and spinbits, they are a long, long way off from becoming a viable CPU. This isn't a simple matter of retooling; a lot more research and slick tricks need to be invented. The quantum world is a very noisy world, leading to a need to spread things out and use photons to communicate. Smaller isn't going to be the answer; in fact, chips will most likely get physically

      • There is no new secret quantum CPU technology that is going to push Moore's law along.

        Not entirely true. Advances in understanding of quantum mechanics are also key to shrinking classical computers. For example, overcoming tunnelling issues.

      • CPUs stopped getting faster 3-years ago as Intel, AMD, ARM started cramming in more and more cores.

        That is complete nonsense. CPU clock rates stopped getting faster, but the CPUs still can retire many more operations per second.

  • Given mountains bursting with data and endless combinations of that data being worked with, it becomes a mathematical certainty that AI will become ever stronger as time passes. Obviously the hardware that could be built to optimize such gargantuan labor is beyond our imagination, but we will see computers optimized by AI that get ever stronger. Building a power supply for such machines would be mind boggling, and the waste heat from such a system could become greater than the heat generated by stars.
  • You can come up with new software algorithms, and new ways to arrange transistors, without making those transistors smaller and smaller.

    Someone is so focused on that tree they're about to walk into that they don't notice they're in a forest.
  • I hope it ends soon.
  • Oh my Lord? (Score:5, Informative)

    by drolli ( 522659 ) on Saturday January 05, 2019 @09:21PM (#57911058) Journal

    Eric Holloway:

    * seems to have no qualification in physics/nanotechnology to add anything to the discussion of whether Moore's law will end, and when

    * seems to hang around with intelligent design folks, re-telling the old stories they usually tell about information science and the rest of science

    * and seems to write no peer-reviewed articles any more (apart from one unrelated paper written before his PhD research)

    * Did a PhD in a program where the students are identified as "good stewards of God-given talents" (https://www.ecs.baylor.edu/ece/index.php?id=865400)

    * Did a PhD in a program which contains in its description: "Engineering is also a value-based discipline that benefits from Christian worldview and faith perspectives; students can also select supportive courses from religion, theology or philosophy. Course selection is broadly specified to provide flexibility and to accommodate a wide-range of student interest." (https://www.ecs.baylor.edu/ece/index.php?id=863609)

    * Description of the seminar series of his university where it seems that he presented his PhD (https://www.ecs.baylor.edu/ece/index.php?id=868860): eBEARS seminars are presented by Baylor ECE faculty, ECE graduate students and transnationally recognized scholars and leaders. The topics lie within the broad area of ECE. In concert with Baylor's Pro Futurus strategic plan to be "a place where the Lordship of Jesus Christ is embraced, studied, and celebrated," some eBEARS seminars focus on the topic of faith and learning.

    So praise the Lord for his insights!

    • Re: (Score:1, Troll)

      by mapkinase ( 958129 )

      What do his religious beliefs have to do with his expertise in the subject?

      • Everything, a person who believes a bunch of myths and lies as fact and belongs to an organization devoted to same led by power and money grubbing scum who ignore, cover over, and give handsome golden parachutes to the perpetrators of their sex scandals likely will have nothing of value to contribute to science, being the antithesis of what they devote their life to.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Interesting opinion, considering many great scientific contributions and advancements throughout the ages have come (and still come) from people who were (or are) very religious. And many of those people attribute their discoveries at least partly to divine inspiration, realizing that despite all the hard work, study, experimentation, and preparation on their part, there is still sometimes the feeling of a sudden flow of knowledge or creativity from an external source, resulting in ideas that they feel they

          • The greatest contributions come from those NOT under the strict control of organized religion

            By the way, I previously worked in high energy physics for years as an engineering physicist at a national lab

        • by thrig ( 36791 )
          One popular myth these days is the Myth of Progress, you know, the one where we would already be building bridges on Jupiter ("2018!". James Blish. 1956). Adherents of this myth may be found in various corporate and academic environments wherein they may carry out all the usual power and money grubbing activities, cover over, give handsome golden parachutes, have sex scandals, and contribute nothing of value to science...because they're too busy politically infighting or churning out bad papers for bad grant
          • Human progress has extended life, cured and mitigated disease, raised standard of living.

            Religion and the ignorant mindset it promotes has maimed, killed, caused disease, and impeded the progress noted above. Not surprising since it is based on lies.

      • by Dunbal ( 464142 ) *
        Because God.
      • by drolli ( 522659 )

        The personal belief: exactly nothing. But taking part in a PhD program where this connection is explicitly made, and being connected to IDers, means that you got something about science wrong.

  • by AndyKron ( 937105 ) on Saturday January 05, 2019 @10:03PM (#57911176)
    Why hasn't some alien already done this and turned the Universe into a massive computer? Oh right, we're living in a simulation...
  • Maybe it will create AI, since that hasn't happened yet.
  • by BobC ( 101861 ) on Saturday January 05, 2019 @10:34PM (#57911286)

    Much of recent AI progress has come from the awesome amount of cheap computing power available.

    That's not going to change! As today's bleeding edge silicon processes evolve, they will get faster and cheaper (both in cost and energy consumption), if not smaller.

    Much of AI is inherently parallel: So long as more CPUs and GPUs can be added, larger problems will be solved faster.

    We are still in the first two generations of custom hardware for AI. That trend will continue and accelerate as new architectures and algorithms arrive.

    I'd say there are at least three full "Moore's Law" generations coming for AI, very likely more. But transistors alone won't be driving it. Fortunately, there are lots of other factors that will.

    • The recent advances have much more to do with Hinton and team's algorithms for training a deep network. That was a major stumbling block that significantly limited the usefulness of neural networks. Even without any increase in computing power, that development created the explosion in success with deep networks.
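    As a concrete (and deliberately tiny) illustration of what "training a multi-layer network" involves, here is a small numpy sketch of plain backpropagation on XOR. It is not Hinton's 2006 layer-wise pretraining method, just the basic mechanics the thread is discussing; the architecture and hyperparameters are arbitrary choices for the example.

```python
import numpy as np

# A tiny multi-layer perceptron trained on XOR with plain backpropagation.
# Purely illustrative: real deep learning adds better initialisation,
# activations, optimisers, and (in Hinton's 2006 work) layer-wise pretraining.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two hidden layers of 8 sigmoid units each.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass (squared error, sigmoid derivatives).
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    # Gradient-descent updates.
    W3 -= lr * (h2.T @ d_out); b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * (h1.T @ d_h2);  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * (X.T @ d_h1);   b1 -= lr * d_h1.sum(axis=0)

print(np.round(out, 2))  # typically ends up close to [[0], [1], [1], [0]]
```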
  • Right and wrong (Score:5, Insightful)

    by Dan East ( 318230 ) on Saturday January 05, 2019 @11:14PM (#57911480) Journal

    He's right and wrong. He is correct that many of the "advancements" in AI have been because of processing power (and dataset size). Most of what I learned in AI in college a quarter century ago forms the foundation of today's AI (and most of what I learned had been developed decades earlier). The reason we have things like Siri isn't because AI is smarter. It's because processing power is so fast and cheap, and because data storage and RAM are so large and cheap, that an absolutely massive data set can be crunched to do speaker-agnostic recognition to determine what I said. In fact, Apple can run my voice audio through dozens of speech models (male, female, accents, etc) in parallel to find the best result. So he is right - processing power has enabled AI to become far more useful of late.

    However, where he is wrong is in the parallelism and scalability. In my above example, many different nodes (maybe located in entirely different datacenters) are doing that processing to find the best match.

    AI doesn't need to exist on one processor, and it doesn't need to execute at any particular speed. If we're talking "turing" type AI, and I were to ask it "How are you feeling today?" and the AI takes 5 hours to reply "I feel the same as I always do.", well it is still just as intelligent as if it were responding in real-time. When we have reached that point in AI intelligence then we can throw more processing power at it in many different ways to allow it to process faster. The point is that the intelligence is not bound by the processing speed. Sure, for Siri to be viable commercially and useful to Joe Blow it needs to be fast, but as far as research and advancing the field of AI, that is independent of the processing speed.

    And having said all that, AI has not advanced significantly beyond the full realization and expansion of things like neural nets with massive processing power and data sets to be useful in identifying, say, a tree in a photograph. We could have been doing that in 1980 given the processing power and storage capacity we have now.
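    A minimal sketch of the parallel best-of-N pattern the parent describes, with hypothetical toy "models" standing in for real speech recognizers (an assumption for illustration only, not Apple's actual pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real speech models (male/female/accented, etc.);
# each returns a (confidence, transcript) pair for the same audio clip.
def model_a(audio): return (0.62, "will the end of moore's law halt ai progress")
def model_b(audio): return (0.71, "will the end of moore's law halt ai progress")
def model_c(audio): return (0.55, "will the end of moores lore halt ai progress")

MODELS = [model_a, model_b, model_c]

def best_transcript(audio):
    """Run every model on the same input concurrently; keep the highest-confidence result."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(m, audio) for m in MODELS]
        results = [f.result() for f in futures]
    return max(results)  # tuples compare by confidence first

print(best_transcript(b"raw-audio-bytes"))
```

    The point of the sketch is only that the work fans out across independent workers and the answers are merged at the end, which is why this kind of AI workload keeps scaling with more hardware rather than with faster single cores.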

    • Not correct. Hinton (and team) created the major breakthrough in 2006 with their algorithm for training deep networks. Prior to that, there were simple single layer networks that could be trained with backprop, and there were deeper networks that outperformed all other methods on image recognition, but they had to be evolved instead of trained, the older backprop algorithms didn't work on deep networks.
    • I came in here to say pretty much what you said. I went to graduate school (masters) in the field of AI in the late 80s and have recently returned to graduate school in the same field, so 30 years later. (Georgia Tech both times.) I was shocked. I'm learning the same approaches to the problems. The biggest difference I'm seeing is that there are handy libraries to use, so I don't have to code stuff from scratch unless the class requires it. I worked mostly in image understanding the first time. In one o

      • Different ML techniques will only give small f1-score improvements on the same data-set (given you didn't make big mistakes or use different features). I do think there are some more recent state-of-the-art ensemble techniques like Random Forest and Extreme Gradient Boosting that get good results on smaller data-sets, but those bagging/boosting decision-tree ensemble techniques were probably already known in some way or other.
        I didn't take ML classes back in my day (late 90s), but do remember there
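        For reference, this is what one of the ensemble techniques mentioned above looks like in practice, using scikit-learn's RandomForestClassifier on a small built-in dataset (library availability, dataset, and parameters are assumptions made only for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Bagging ensemble of decision trees, scored with the f1 metric discussed above.
X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```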
  • Magnetic disk performance resulted in companies investing in Flash and other technologies. The oil crisis resulted in companies investing in engine efficiency. Broken iPhones and expensive iPhones result in more people finding 3rd party repair options. Adversity and scarcity breed innovation. We'll see a lot of money pour back into the pure science of understanding AI, and also we're seeing companies like Google and AMD invest in chip design that is tailored to AI. "AI-ASICs", so to speak. I look for

  • Using AI to develop better processors?

  • Even if processing power stopped increasing per mm^2 of die space (it's not), the answer is no. AI processing work is highly parallel. That means you can use things like GPUs. Even better is you can use many CPUs and many GPUs. So the processing power is limited by how much hardware and power you can supply. No, AI is just starting.
  • Humans unable to invent themselves ;)
  • Kurzweil gave more thought to this subject than this particular poster probably ever will. He said that Moore's Law will slow down for integrated circuits; he just believed that a new technology will replace it. Sure, his estimates might not end up true, but it's still rather pointless to argue against his Singularity with arguments he has already discussed.

    As far as AI processing has progressed, that hasn't had that much to do with Moore's Law and more to do with how to effectively use silicon for it. Eric Holl

  • The existence of intelligence in mere humans is proof it can be done. We just need to discover better alternatives to silicon.
  • 1. The speed limits of microprocessors are relevant because microprocessors process serial threads of instructions. Parallelizing multiplies effective performance. This is why GPU's are used so much more today, even in addition to multi-core processors.
    2. Neuromorphic chips provide many magnitudes better performance than CPU's and GPU's. They do solidify the activation functions possible -- as those cannot be modified or added to once burned into a chip. This is a downside but the tensor-based model f

  • So far we've only been applying insane amounts of CPU power and data to machine learning algorithms which haven't changed a lot in the last decades.
    We are now seeing such ML applications slowly getting as good as conventional ones, but at a much higher computational effort. Essentially machine learning boils down to statistics. However, unlike normal statistics, ML does not provide you with insights. This may be perfectly OK for finding out what fruit is in front of a camera, however whenever you have accoun
