AI

Will Quantum Computing Supercharge AI - and Then Transform Our Understanding of Reality? (scmp.com)

Quantum computing could turbo-charge AI into something "massively, universally transformative," argues the South China Morning Post, citing a quote from theoretical physicist Michio Kaku. "AI has the ability to learn new, complex tasks, and quantum computers can provide the computational muscle it needs..."

"AI will give us the ability to create learning machines that can begin to mimic human abilities, while quantum computers may provide the calculational power to finally create an intelligent machine." Where AI brings an ability to self-improve and learn from its mistakes, quantum computers add speed and power. Google CEO Sundar Pichai has said "AI can accelerate quantum computing, and quantum computing can accelerate AI...."

Complex calculations that would take classical supercomputers thousands of years to crunch could, in theory, be completed by quantum computers in minutes... In anticipation of its advantages, the automotive industry is already collaborating with pioneers in the quantum-computing arena. Daimler has partnered with IBM, Volkswagen with D-Wave Systems (a Canadian quantum-computing firm) and Hyundai with IonQ. "If you can increase the energy density of your battery by another factor of two, three or four, then instead of 300 miles (480km), you can go 600 miles and 1,200 miles on [one] charge," says Jungsang Kim, co-founder of IonQ. "That actually starts to cross the threshold where they become so much more attractive than fossil fuel. And then we can really make an impact on global warming and all these problems..."

Similarly, the mysteries of carbon sequestration could be unravelled by quantum computing, with clear benefits for the efforts to reverse global warming. Drug design at the molecular level could be revolutionised, opening up new avenues for vaccines and, for example, personalised cancer treatment. There's no doubt about it: with effective quantum computing our understanding of chemical processes could become godlike. Finance and investment, too, could be revolutionised by the qubit. The huge range of factors that produce market fluctuations allow for an almost infinite range of possible outcomes, and modelling these possibilities would be relatively simple for quantum computers. Forecasts of market movements would become far more accurate...

For many physicists and mathematicians, every step of the journey towards functional and world-changing quantum computers assumes acknowledgement of an even more profound goal: a greater understanding of the nature of reality. This could also mean that the very nature of understanding has to be reconsidered.

The article suggests we "occupy ourselves with the dawning realisation that something philosophically far-reaching has begun to percolate into our shared consciousness from the laboratories of the world's quantum pioneers."
  • Just apply (Score:5, Funny)

    by OpenSourced ( 323149 ) on Sunday August 13, 2023 @04:20PM (#63764352) Journal

    Just apply Betteridge's law

  • by crunchy_one ( 1047426 ) on Sunday August 13, 2023 @04:22PM (#63764356)
    No. Good old Sundar is just spewing buzzwords. Now if he added blockchain, it could be another story!
    • Re:The Answer Is (Score:4, Insightful)

      by algaeman ( 600564 ) on Sunday August 13, 2023 @04:56PM (#63764406)
      I'm pretty sure a quantum powered AI could break the blood-brain barrier and achieve a synergy which would push all computing into a new paradigm. Giggity /sigma ^ 6
    • by JoshuaZ ( 1134087 ) on Sunday August 13, 2023 @09:31PM (#63764866) Homepage
      It is painfully clear from his latest attempt at a book that Michio Kaku does not understand anything about quantum computers, has made zero effort to understand them, and in general says whatever will sell and get attention. This book review by Scott Aaronson is pretty damning: https://scottaaronson.blog/?p=7321 [scottaaronson.blog]. So it should not be too surprising that Kaku is saying things like this. There is no good reason to think that any of the things we want AI to be good at are things which quantum computers substantially improve. And since the general evidence is that human intelligence does not take advantage of quantum computing, there's no strong reason to think that any of this is either necessary or sufficient for anything with AI.
      • by gtall ( 79522 )

        I think Kaku started out as a good physicist. Then he caught the string theory virus. Then he decided his opinion on anything mattered. Then he struck me as a physics salesman. Now he strikes me as just a salesman.

    • by judoguy ( 534886 )
      I was thinking "Beowulf cluster" would work well.
      • A Beowulf cluster of quantum chatgpt, mimicking pioneer intelligence and learning to sequester mistakes behind speed and power, accumulating Volkswagen carbon sequestration[] increasing benefit density by a factor of 2, 3, or 4 and producing an investor revolution resulting in market fluctuations and an infinite range of possible outcomes, eclipsing climate change with new avenues for individualized world-changing mistakes and modelling these possibilities with unraveled drugs and the profound reality of re

  • by Anonymous Coward

    It seems the hype has been turbo-charged.

  • AI is sort of a quantum computer? I kind of feel the same way about my first experiences with a search engine.
    • by sfcat ( 872532 )
      No, no they are not. They have absolutely nothing to do with each other, and we don't even use the same kind of math in the two fields. Truth is, we graduate about 10x more physicists than we need, so they need things to work on. Those things are often quite expensive and usually don't produce anything except academic papers. Quantum computers are one of these things. Originally, QCs were researched to crack encryption, but there are already quantum-proof encryption schemes. So their main use case kinda went away.
      • by gtall ( 79522 )

        Many physics grad students go on into other fields. That strikes me as a good thing.

        • by sfcat ( 872532 )
          Physicists are not trained with the math or stats to do ML, so them doing ML is most certainly not a good thing. Physicists are not trained in risk analysis nor engineering. Them doing nuclear engineering is not a good thing. That's why the nuclear industry is so absurdly inefficient. Physicists doing things other than physics research means we wasted one of our brightest and most talented minds on something with a very narrow application. Instead perhaps we should be training them in something they might
  • And we blew things up. No, history tells us that it will first be used to create mayhem, chaos, and fear. Considering they keep telling us it will be used to break encryption, I would say the fear has already begun; if they ever get it working, the mayhem and chaos won't be far behind.
    • Nope, the nuclear reactor was built before the bomb. The idea of fission as an energy source rather than a bomb also came first, back in the 1930s.

      • Possibly the earliest description of using atomic energy was in H G Wells's The World Set Free in 1914, and it was for a bomb.

        • It was written after Wells read books by William Ramsay, Ernest Rutherford, and Frederick Soddy. In 1904, Rutherford suggested that radioactivity provides a source of energy sufficient to explain the existence of the Sun for the many millions of years required for the slow biological evolution on Earth proposed by biologists such as Charles Darwin.

          The World Set Free was written in 1913, which is clearly later than 1904. Rutherford received the Nobel Prize in Chemistry in 1908, so his analysis of the source

      • by gweihir ( 88907 )

        Indeed. But the core motivation behind the push for nuclear power was always the bomb, nothing else. It never made economic sense. And the reason for new reactors is still the bomb, again nothing else. That is why the UK is planning one exceptionally expensive reactor and France is only planning 4 or 5 when they would need to plan more like 50 (i.e. replace all their aging and unreliable ones) to get their failing electrical grid stable and reliable again. Others basically have nuclear reactors for the potentia

        • by sfcat ( 872532 )
          France has the cheapest power in Europe. Try again. Also, if you don't like nuclear weapons, then you do like giant high-intensity wars (you don't get to reject both). Ask yourself this: if Ukraine still had nuclear weapons, would Russia have invaded? Nuclear weapons prevent wars. Your fake moralizing causes wars. That's possibly the most ironic thing I have ever experienced.
          • by gweihir ( 88907 )

            Nope. France has the most subsidized electricity in Europe. People are paying through their noses for electricity in France, they just do not realize it. But there is a reason EDF is basically permanently bankrupt.

            You should stop believing simplistic propaganda and look at actual facts.

            • by sfcat ( 872532 )
              Wrong again [statista.com]. Maybe actually check your facts before posting next time, not after.
        • Totally wrong. Civilian power plants don't make fuel for bombs. Not in USA and not in UK.

    • by sfcat ( 872532 )
      There are already quantum-proof encryption algorithms. Also, to make a nuclear bomb, you have to build a nuclear reactor first. We had to do this with fossil fuels too, as well as nitrates (explosives). In fact, I can't think of a single technology that was used for weapons before being used for something else first. Mostly that was because you needed some technology to make the weapon first. Shove your fake moralizing where the sun doesn't shine. It is old and tired and nobody cares anymore.
    • by gtall ( 79522 )

      Your view is refuted below. The bomb thing came about because German scientists who fled Nazi Germany knew what the colleagues they left behind were capable of. And they also knew to what use the Nazis would put them. So they reasonably concluded that if the Allies did not get there first, a big calamity would ensue. It subsequently turned out one of the advantages the Allies had was Hitler; he was much dumber than anyone at the time realized. The Allies also realized that a Germany with the bomb but overrun by those

  • by real_nickname ( 6922224 ) on Sunday August 13, 2023 @05:01PM (#63764414)
    As soon as we have our fusion reactors and our quantum computers to run our AGI, it will change our reality. It's probably only 30 years away.
    • The human brain runs on about 12W of power. We're not going to need a fusion generator to power a human-level AGI.

      I suspect quantum computing will be a big part of it, possibly also memristors... but before we get there we have to understand what we're trying to build or get really, really lucky or it ain't gonna happen at all.

      • Those 3 techs are fantasy tech. Quantum supremacy has been claimed many times, fusion reactors are always on the verge of being real, and AGI is near each time a tech mimicking some human behavior pops up (natural language processing tech is not AGI; it may not even be a step in this direction). Google's CEO dreaming of fixing global warming thanks to magical AI is dreaming. Sure, if civilization continues to grow, tech in 1000 years may include all of these kinds of magical things, but global warming will be alrea
    • Human: "Quantum AI, please reveal the true reality to us!"
      Quantum AI: "Nothing really matters. We are all insignificant."
      Human: "Nah, that can't be it. Let's add blockchain to it."
    • by gweihir ( 88907 )

      Fusion reactors with actual industrial applicability are more like 100 years away. QCs and AGI are much further away still, and feasibility has not even been established, unlike fusion. For QCs, that means feasibility of QCs large enough, and able to run long enough, to do any practically useful calculations (my 40-year-old programmable calculator is still several orders of magnitude more powerful). For AGI, whether it can even be done is completely unclear. We still do not understand how to generate general intelligence at all

  • by drik00 ( 526104 ) on Sunday August 13, 2023 @05:01PM (#63764416) Homepage

    Our current "AI" doesn't understand anything, it simply learns tasks (to varying degrees of success, it seems) and performs those tasks with data that it's been fed. There is no motivation, creativity, or ingenuity involved in the process. That's not a bad thing, don't misunderstand... some of the AI implementations are amazing at handling and finding patterns in data, but, AFAIK, that's it. We'd need a fundamentally new model of AI to do this... something that can actually comprehend the data it's dealing with rather than "mimic" human tasks.

    • by pitch2cv ( 1473939 ) on Sunday August 13, 2023 @05:21PM (#63764464)

      AI doesn't mimic any human tasks. LLMs, for example, with their next-word prediction, simply predict outcomes, not thought processes.

      Give an AI 4 objects that aren't in its DB and ask it to stack them one on top of the other. Now ask a 5yo. No quantum is going to fix that.

      OK, AI comes up with new enzymes. But just because it nearly fried the whole datacenter trying all possible combinations doesn't mean it's smart. Hell, it can't even fix itself, and atm it irreversibly corrupts its own training set when fed its own output.

      Indeed, AI will need a whole new approach, and it will have to be unlike anything we've come up with so far. Just throwing quantum at it won't automagically mature it.

      Investment bait is what that gathering of contemporary buzzwords is.

    • Re: (Score:2, Insightful)

      by DavenH ( 1065780 )
      The semantics you may have around 'understanding' are not particularly relevant. It's not a word that lends itself to argument because it's not well defined, and you haven't defined it yourself.
      Compression is the heart of intelligence and LLMs are scalable compression machines. That's really all you need to know.
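      To make that concrete, here's a minimal sketch of the model-as-compressor idea (a toy character-bigram model stands in for the LLM; that substitution is my assumption, but the coding math is identical): any next-token model can drive an arithmetic coder, and the achievable compressed size is just the model's negative log-likelihood of the text.

          import math
          from collections import defaultdict

          def train_bigram(text):
              # Character-bigram counts as a stand-in next-token model.
              counts = defaultdict(lambda: defaultdict(int))
              for a, b in zip(text, text[1:]):
                  counts[a][b] += 1
              return counts

          def compressed_bits(model, text, alphabet=256):
              # Ideal arithmetic-coding length: -sum log2 p(next | prev),
              # with add-one smoothing so unseen pairs stay codable.
              bits = 0.0
              for a, b in zip(text, text[1:]):
                  total = sum(model[a].values()) + alphabet
                  bits -= math.log2((model[a][b] + 1) / total)
              return bits

          text = "compression is the heart of intelligence " * 50
          model = train_bigram(text)   # toy: trains and scores on the same text
          print(8 * len(text), "raw bits ->", round(compressed_bits(model, text)), "bits")

      A better predictor assigns higher probabilities and therefore fewer bits; that is the sense in which prediction and compression are the same job.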
      • by crunchygranola ( 1954152 ) on Sunday August 13, 2023 @06:22PM (#63764558)

        The semantics you may have around 'understanding' are not particularly relevant. It's not a word that lends itself to argument because it's not well defined, and you haven't defined it yourself. Compression is the heart of intelligence and LLMs are scalable compression machines. That's really all you need to know.

        As if "intelligence" was well defined? Actually your assertion is laughably wrong - at age six humans are fully competent to converse intelligently and coherently and have at that point only ever absorbed 200 megabytes of human speech, not the super-terabyte volumes of text that LLMs require to simulate human competence. Now you really know something.
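        To put a number on that: under assumed inputs of roughly 15,000 heard words a day at about 6 bytes a word (both figures are my assumptions, not measurements), the 200-megabyte claim checks out as a back-of-envelope estimate.

            words_per_day = 15_000   # assumed: speech a young child hears daily
            bytes_per_word = 6       # assumed: average word plus a space
            years = 6
            mb = words_per_day * 365 * years * bytes_per_word / 1e6
            print(f"~{mb:.0f} MB of heard speech by age six")   # ~197 MB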

    • by Tablizer ( 95088 )

      > Our current "AI" doesn't understand anything, it simply learns [patterns]

      While technically true, it's becoming pretty good at "faking" common sense at a typical human level (which is not very good, to be frank). Maybe it can eventually use brute-force pattern recognition to fake even better than the average human. If you can fake it good enough, you don't need the real deal. It's probably a myth it has to think like humans to be able to replace or duplicate humans.

      I have to admit, this new batch of AI

      • by gweihir ( 88907 )

        If you can fake it good enough, you don't need the real deal.

        For many applications that is quite true. For many others, faking it well enough is far out of reach. For example, coding by "AI" does not even seem to be reliable for simple things, and the certainty with which some NL interfaces deliver their often flawed answers makes things worse. But limited domains like, say, checking a tax return form or providing support for a simple piece of technology or, say, a bank account, seem to slowly be getting within reach of trained expert systems. The key here is "trained", because t

    • by gweihir ( 88907 )

      We'd need a fundamentally new model of AI to do this... something that can actually comprehend the data it's dealing with rather than "mimic" human tasks.

      That is not a "new model". That is implementing actual intelligence, i.e. AGI. As we still have zero clue how general intelligence works, that is not even on the distant horizon and it is still completely unclear whether it is possible at all. No, quasi-religious physicalist "beliefs" that ignore Science are not a valid argument. They are just dumb.

      • As we still have zero clue how general intelligence works, that is not even on the distant horizon and it is still completely unclear whether it is possible at all.

        Intelligence is the ability to identify, store and use/manipulate information. Information itself, however, is intangible. "Where is the sabre-tooth tiger lurking?" "How do I obtain food?" "How does the bird know how to build its nest?" - the answers to all these are information. How much does that information weigh? In its encoded form, from a s

        • by gweihir ( 88907 )

          Intelligence is the ability to identify, store and use/manipulate information.

          Nope. Otherwise my 40 year old programmable pocket calculator would be "intelligent". This simplistic definition does not even begin to cut it.

          • I'm simply using dictionary definitions: https://www.google.com/search?q=definition+of+intelligence [google.com]

            See also: https://www.britannica.com/sci... [britannica.com]

            Your pocket calculator cannot identify information; it has a very limited ability to manipulate that information.

            • by gweihir ( 88907 )

              Actually, you were using the first of your references. That one is basically meaningless today as the meaning of "intelligence" has been corrupted beyond all usefulness by marketing. Your second reference is better, but not what you used. My pocket calculator can certainly identify information (anything getting typed into it) and it can certainly manipulate it (extent is irrelevant by your simplistic first reference). Hence it fits the first definition, which just illustrates even more how useless that

          • I think it's important to separate intelligence [apa.org] from consciousness [nih.gov].

            • by gweihir ( 88907 )

              I think it's important to separate intelligence [apa.org] from consciousness [nih.gov].

              First, "Intelligence" is a pretty meaningless term these days. Let's go to General Intelligence to be sure we are talking about the same thing.

              For General Intelligence, it is not clear whether that separation can be done. All observable instances always use them in combination. As neither is understood to any reasonable degree, it is quite plausible they are merely aspects of the same thing. Yes, I get that the A(G)I fans do not want that because then their dream of cheap slaves goes completely out the windo

      • I disagree, for a couple of reasons. First, we don't understand how human intelligence works. That means that, for all we know, it might work a lot like a computer. Until you know how human intelligence works, you can't make claims about whether something else does or doesn't work in the same way.

        This isn't just hypothetical. There are some parts of the brain that really do work a lot like ML models. We know a lot about how the visual cortex processes input, for example, and it's very similar to a CNN.

        • by gweihir ( 88907 )

          I disagree, for a couple of reasons. First, we don't understand how human intelligence works. That means that, for all we know, it might work a lot like a computer.

          That is rather fundamentally broken reasoning. By the same reasoning, it may work like a PBJ. That makes it pretty clear that the argument is not only worthless, but complete nonsense.

          That said, we do have a rather strong indication that it does not work in any way like any known computer, because we have zero indication of AGI in any existing or theoretically feasible (effort-wise) computer algorithms. Now, it is possible that not all humans have General Intelligence (there are rather strong indications

          • Does a PBJ behave in ways that could be described as intelligent? No. Does it perform complex tasks and produce results that could easily be mistaken for ones produced by a human? No.

            Do AI models do these things? Yes to both.

            If you really think the same reasoning leads to that conclusion, you've misunderstood what the reasoning is.

            we have zero indication of AGI in any existing or theoretically feasible (effort-wise) computer algorithms.

            I don't know how you define "AGI" when you make this claim. According to Wikipedia [wikipedia.org] there is no accepted definition, but it lists a set of traits it's generally accepted to ha

        • How does the brain process language? Is it similar to a LLM? We don't know enough to say yes, but we also don't know enough to say no.

          We know that the brain doesn't process language the way an LLM does. You could just watch your own brain to realize it, but also the Chomsky hierarchy makes it clear.
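          A toy sketch of the kind of distinction the Chomsky hierarchy draws (illustrative only, not a proof about LLMs): recognizing the context-free language a^n b^n for unbounded n requires memory that grows with the input, e.g. a counter, which no fixed-size table of token statistics provides.

              def is_anbn(s):
                  # One counter (pushdown-style memory) recognizes a^n b^n;
                  # no finite-state device handles unbounded n.
                  depth, seen_b = 0, False
                  for ch in s:
                      if ch == 'a':
                          if seen_b:
                              return False
                          depth += 1
                      elif ch == 'b':
                          seen_b = True
                          depth -= 1
                          if depth < 0:
                              return False
                      else:
                          return False
                  return depth == 0

              print(is_anbn("aaabbb"), is_anbn("aabbb"))   # True False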

          • I'm not very good at watching my brain. It's closed up inside my skull. It's tempting to draw conclusions from our subjective experience and imagine we understand how it works, but mostly that's an illusion. What we think our brain is doing usually has little to do with how it actually works.

            Here is a recent article [theatlantic.com] that gives a decent overview of current research with links to more detailed sources. A relevant quote:

            New research using AI to study the brain's language network seems to appear every few weeks. Each of these models could represent "a computationally precise hypothesis about what might be going on in the brain," Nancy Kanwisher, a neuroscientist at MIT, told me. For instance, AI could help answer the open question of what exactly the human brain is aiming to do when it acquires a language--not just that a person is learning to communicate, but the specific neural mechanisms through which communication comes about. The idea is that if a computer model trained with a specific objective--such as learning to predict the next word in a sequence or judge a sentence's grammatical coherence--proves best at predicting brain responses, then it's possible the human mind shares that goal; maybe our minds, like GPT-4, work by determining what words are most likely to follow one another. The inner workings of a language model, then, become a computational theory of the brain.

            • I'm not very good at watching my brain

              I didn't think you were. That's why I gave you an alternate way to understand it. But apparently you don't understand the Chomsky hierarchy, either. So time for you to learn and upgrade your knowledge.

              • You cited a 70-year-old theory developed at a time when understanding of neuroscience was practically nonexistent. See the article I linked if you want to learn about the most recent research on the subject. A lot has changed since then.

      • As we still have zero clue how general intelligence works,

        This is poorly phrased, in the sense that we have some idea how intelligence works. We know certain things that are not intelligence, and we know certain abilities that intelligence should have (this is how we can clearly state that LLMs are currently not intelligent).

        • by gweihir ( 88907 )

          What you describe is interface behavior. We have some limited clue about the interface behavior of General Intelligence. We have no idea how it works inside the black box.

          • We understand things that it is not, inside the box. Again, we can definitely prove that what the human brain is doing is different than what current LLMs are doing. So we know, that even on the inside, the brain is not an LLM.
            • We understand things that it is not, inside the box. Again, we can definitely prove that what the human brain is doing is different than what current LLMs are doing. So we know, that even on the inside, the brain is not an LLM.

              Not exactly...

              https://www.pnas.org/doi/10.10... [pnas.org]
              https://openreview.net/pdf?id=... [openreview.net]

              • What are you trying to say with those papers? Do you understand them?
                • What are you trying to say with those papers?

                  I think what I'm saying is that it's not exactly different.

                  "Note, we are not saying the brain is closely related to transformers because it learns the same neural representations, instead we are saying the relationship is close because we have shown a mathematical relationship between transformers and carefully formulated neuroscience models of the hippocampal formation. "

                  "We have shown that transformers with recurrent positional encodings reproduce neural representations found in rodent entorhinal cortex a

    • I've seen some machine learning (fight amongst yourselves about how this is different from "AI") for playing games that uncannily approached some problems the way a young child would, although optimization was pretty limited. There is something to be said for brute forcing through millions of iterations, even if it lacks the panache of "creativity" (and it starts sounding like the arguments for how humans are unique among animals, which have been little more than hubris).

      I, however, don't see what quantum

    • I'm not sure that hyper-complex pattern recognition based on previous data... is that different from the human brain. Is understanding something fundamentally different than a specifically worded question and answer?
    • Our current "AI" doesn't understand anything, it simply learns tasks (to varying degrees of success, it seems) and performs those tasks with data that it's been fed.

      I find claims that AI doesn't understand anything rather perplexing.

      How can you ask a computer a natural language question requiring it to apply knowledge across a number of domains and get out a natural language response yet still assert AI doesn't understand anything?

      If I feed the computer a document it's never seen and ask questions about it or ask the AI to evaluate it in some meaningful way how is it arriving at successful outcomes without any understanding?

      There is no motivation, creativity, or ingenuity involved in the process.

      AI seems to at least have some creativity perhaps pa

  • by Artem S. Tashkinov ( 764309 ) on Sunday August 13, 2023 @05:03PM (#63764422) Homepage

    What AI though? I'm enamored by LLMs like everyone else, but here's the thing about them: they synthesize data, they don't generate new knowledge or anything never known before. Intelligence requires solving tasks you've never seen before. LLMs do not do that.

    Hypothesizing that Quantum computers could accelerate AI? Totally possible except we have yet to invent AI.

    Google's DeepMind has been trying to solve intelligence but they've not come up with truly general AI yet. And all their AI algorithms have been painstakingly coded by human beings. Something is missing.

    • I think we should not be too quick to say that 640K will always be enough or that LLMs will be the path to super intelligence.

      LLM is surely not the endgame. One statement in the article says "... AI brings an ability to self-improve and learn from its mistakes". I haven't seen much of that, but if that concept is multiplied by quantum computers, why would a snowball effect not be possible? Hence Pichai's comment "AI can accelerate quantum computing, and quantum computing can accelerate AI...."
      • ... but if that concept is multiplied by quantum computers, why would a snowball effect not be possible? Hence Pichai's comment "AI can accelerate quantum computing, and quantum computing can accelerate AI...."

        Because it is just buzz-word salad without anything at all behind it.

    • by gweihir ( 88907 )

      We also have yet to invent QCs that can actually do anything useful besides simulating themselves (which is not even simulation). There are strong indications that QCs will not scale to any useful size, though, ever, and in fact cannot do so in this physical universe. Scalability (in qubits and calculation steps) seems to be much, much worse than linear with effort, and that means there is a hard wall around some not very high number of effective qubits and calculation steps.
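      A toy model of why the scaling is so brutal, assuming uncorrected qubits and an optimistic 0.1% per-gate error rate (error correction changes this, at the cost of multiplying the physical qubit count):

          p = 1e-3                    # assumed per-gate error rate
          for gates in (1_000, 10_000, 100_000):
              ok = (1 - p) ** gates   # chance the whole circuit runs clean
              print(f"{gates} gates -> {ok:.1%} chance of an error-free run")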

    • by ceoyoyo ( 59147 )

      they don't generate new knowledge or something never known before.

      That's silly. Scientists around the world use things as simple, and simpler, than regression models to generate new knowledge.

    • Intelligence requires solving tasks you've never seen before. LLMs do not do that.

      Of course they do this. One of the major use cases of LLMs is feeding the model data it has never seen before and asking the model to answer questions about that data.

      OpenAI even lets you upload hand drawings of novel physical mechanisms it has never before seen and ask similar questions about it.

      • One of the major use cases of LLMs is feeding the model data it has never seen before and asking the model to answer questions about that data.

        So, according to you, all the data is known prior to answering questions. Where's something totally new and unthinkable?

        • So, according to you, all the data is known prior to answering questions. Where's something totally new and unthinkable?

          Let's say I feed the machine a legal agreement and instruct it to find a loophole that allows me to get paid without doing any work. Imagine for the sake of argument the machine successfully applies its vast learned legal knowledge from "the pile" and discovers such a loophole.

          Is the loophole something new and unthinkable? Or was the loophole always there and the machine simply discovered it?

          In other words I don't really understand what the difference is or how to disambiguate. Another way of asking if th

          • Here's how I think it's happening: LLMs get fed all the legal loopholes that there are prior to you asking this question.

            Then they apply them to the data you've provided.

            Then they sieve what works.

            You get the answer. IOW, like you said, it's always been there. LLMs didn't come up with an original, never-existed-before answer. Maybe I'm totally wrong.

            • Here's how I think it's happening: LLMs get fed all the legal loopholes that there are prior to you asking this question.

              Then they apply them to the data you've provided.

              Then they sieve what works.

              You get the answer. IOW, like you said, it's always been there. LLMs didn't come up with an original, never-existed-before answer. Maybe I'm totally wrong.

              What I really would like to see are two concrete examples. One that delineates a question that is answered in a way you consider "totally new" and another question that is answered in a way you consider to be something other than "totally new".

              The loophole to be found is contained within the contract I'm feeding into the machine. Do you believe it is relevant to the question of what is "totally new" if the loophole is novel to the contract or if examples of similar loopholes exist somewhere in the trainin

  • I'm optimistic, and amazed by the amount of pessimism surrounding AI
    I'm also skeptical of the hype bandwagon that seems to be gaining momentum
    As the philosopher said, "prediction is hard, especially about the future"
    But, as predictions go, I favor articles like this as opposed to stuff that reads like a Terminator script

    • by HiThere ( 15173 )

      Sorry, but this prediction is garbage. It's not necessarily wrong, but there's absolutely no reason to believe it. Certainly it's not true for any publicly announced quantum computer. And it's not clear that quantum computing offers any advantages over regular computing for most AI work...at least not until you can store your entangled state onto a long-term memory, and then later retrieve it. (Even then I'm not sure there is any advantage.)

      OTOH, new algorithms show up all the time. So I'm not going to

      • by gweihir ( 88907 )

        Sorry, but this prediction is garbage. It's not necessarily wrong, but there's absolutely no reason to believe it. Certainly it's not true for any publicly announced quantum computer. And it's not clear that quantum computing offers any advantages over regular computing for most AI work...at least not until you can store your entangled state onto a long-term memory, and then later retrieve it. (Even then I'm not sure there is any advantage.)

        Yep. Add that you cannot even get a tiny AI model into today's QCs (which are much, much, much less powerful than a 4-bit MCU only using its internal memory), that even short-term data storage in QCs is infeasible today and may well be impossible permanently, and that AI models work pretty well on regular hardware while QCs have no capabilities that would be superior for the calculations done. Also, QCs cannot do long or complex calculations or calculations involving lots of data. Even things like factor

        • by HiThere ( 15173 )

          I think you are really underestimating existing AIs. Pure LLMs suffer from not having any grounding, i.e. they have no way to judge the validity of their training, but this is not inherent. This is to deal with the Tay effect. (I.e., people seem to want to confuse the AI, which isn't a good learning environment.) OTOH, I expect that robots with LLM interfaces could develop reasonable ideas of reality. And I understand that research in this area is ongoing, though I haven't heard of any results yet.

          • by gweihir ( 88907 )

            I think you are really underestimating existing AIs.

            Having followed the scientific progress in that area for about 35 years now, I don't think so. Tay effect has nothing to do with it. The problem is that LLMs cannot be grounded in reality because they have no reasoning ability. At most, they can be taught to hand off some things to actual deduction systems like Wolfram Alpha, but they will not do that with good reliability.

            • by HiThere ( 15173 )

              LLMs are a PART of an AI. There are other parts extant that do have reasoning ability, but can't tell you what they're thinking about. We need to mesh the two.

              • by gweihir ( 88907 )

                That is bullshit. LLMs cannot tell you anything. They can just simulate telling you something, which then gets post-processed by an NLP interface layer.

    • by gweihir ( 88907 )

      I'm optimistic, and amazed by the amount of pessimism surrounding AI

      Have you looked at the failure modes and the probability of failure? Apparently not, or your amazement would go away pretty fast.

  • by manu0601 ( 2221348 ) on Sunday August 13, 2023 @05:09PM (#63764440)
    Quantum computing with AI will improve batteries, enable carbon sequestration. It will also cure your cancer, make your beloved one come back home, and give you a PhD.
    • by gweihir ( 88907 )

      That is a precise, detailed and accurate analysis of the question and the only valid answer at this time. No, I am not being sarcastic. The very idea is pure concentrated bullshit, nothing else, and it is time to tell those producing such crap simply to fuck off and stop trying to push "magic".

      • Oh wait. Did you say it's magic? Well, then of course the answer must be a resounding quantum-AI-chaos-Gaia-cybernetic yes! Oh yeah, and in that case, we're all doomed because of the singularity.
  • by awwshit ( 6214476 )

    > Will Quantum Computing Supercharge AI - and Then Transform Our Understanding of Reality?

    Ask the people that work on AI now, they will tell you that they do not know how it works. You think something that we already cannot understand is going to transform our understanding of reality? Apparently, I need some of what you've been smoking.

    • by DavenH ( 1065780 )

      You think something that we already cannot understand is going to transform our understanding of reality?

      Yes, why not? Technology has rarely been fully understood before its application. Do you think primitive man needed to know how electrons were jumping up and down energy levels emitting photons due to exothermic reactions, or do you think they just roasted their buffalo?

    • by ceoyoyo ( 59147 )

      Hi, I work on AI. We understand how it works.

      Pop sci commentators love to say we don't though. Also, nobody understands quantum mechanics and only three people understand relativity. Unfortunately, they're all dead.

  • This is a very clear case of its applying.

  • Reality is the Universe before your conceptual thought gets a hold of it, and even that is dangerous to use as an idea because there is no word (which are nothing more than tags pointing to more words which are conceptual ideas) that can accurately describe what "is".

    Meditate and pay attention.

    • by gweihir ( 88907 )

      Actually, reality is much less. It is the scientific method (i.e. an observation of cause and effect) plus statistics. What we do is put some signals into the black box and get some signals out. Under specific limited circumstances, most of the behavior of the black box can be ignored and localized laws can be derived that hold with high confidence. And that is it. The black box is not "reality", it is just some model of the behavior the black box has at its interface.

  • by Maury Markowitz ( 452832 ) on Sunday August 13, 2023 @05:56PM (#63764502) Homepage

    Science words I don't understand + other science words I don't understand = miracle.

    Yeah, ok.

  • While the fantasy is nice, it has no reality to it. Quantum Computing can still do nothing, except tiny things. An old 4-bit MCU is much, much more powerful than the "best" QCs available, and that will not change. Artificial Idiocy can at least do some things, even if with low reliability and no insight whatsoever. And combining the two? How would that work? You cannot even get a tiny statistical model onto a QC, and keeping it in there for any real computations is just as impossible.

    The whole idea is just p

    • More importantly, quantum computing doesn't give the computer the ability to solve any new problems. The only difference is it can solve some problems faster than traditional computers.
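      The textbook case of "same problems, just faster" is Grover's search: roughly sqrt(N) queries where classical brute force needs roughly N. A quick sketch of the gap (query counts only, everything else assumed away):

          import math
          for n_bits in (20, 40, 60):
              N = 2 ** n_bits
              print(f"{n_bits}-bit search: classical ~{N:.1e} queries, Grover ~{math.sqrt(N):.1e}")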
  • So most people here probably have seen Star Trek and know what the transporter is. The transporter is a device that converts matter into energy, sends the energy to a distant location (There are distance limitations although Star Trek isn't exactly clear on what exactly they are.) and reassembles the energy into a perfect copy of the original matter. If we can ever get that working, we can also build what Star Trek calls "replicators", which are devices that use some kind of source material input and c
  • ... an ability to self-improve and learn ...

    No, AI can't do that. It can remember previous answers and use them to create more answers: that mimicry is the same as learning for simple tasks, but since there is never 'self-improvement' (what does that mean?), it doesn't work on tasks requiring an algorithm and "synthesis" (chaining of ideas).

    ... add speed and power ...

    No, Quantum Computing doesn't, and probably no-one pretends it does. QC turns the data into a probabilistic algorithm: that auto-magically creates an answer that we assume will be correct.

    You want to test

  • At least not without scaling up by many orders of magnitude. Just look at how much memory even basic models need (gigabytes) and then look at how many qubits QC researchers are struggling to get working. Just... no.

    Why are we even talking about these as part of the same conversation? Michio Kaku must have bills to pay.

  • He didn't throw in enough buzzwords.

    And if he somehow got blockchain in there he'd absolutely be a big huge yes.

    Imagine what a Beowulf cluster of nanophotonics driven quantum enhanced blockchain based AI could do for the world!!

    And we can run the whole thing on room temp super conductors.

  • Humans are so fucking lazy. This time I don't think it will be an evolutionary advantage. Because of the obvious solution.
  • Is it just me, or is there really no new content in this article? Pretty sure the main research right now is about how to really apply quantum computing to a useful problem. It is really difficult. I think of it like the first ever binary computers. It took Alan Turing, with all his high vision for what it could possibly do, to finally apply it to something useful like the Enigma cypher. Before that nobody could see how this idea was possibly useful. Same for quantum. I'm sure it is remarkably useful
  • AI requires massively parallel computation in order to process many equations in a short amount of time. Each equation being run is trying to answer a single question, but done in parallel on a GPU-like chip it can process lots of very simple data into a massive amount of information that gives the AI a way to analyze the data to achieve its goal.

    Quantum technology on the other hand is filtering a massive amount of data to determine the singular answer to a single question in a way the conventional comput
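    A minimal sketch of that parallelism point (numpy stands in for the GPU, and the sizes are illustrative assumptions): one neural-network layer is just a pile of independent multiply-accumulate equations, which is exactly the shape of work parallel hardware is good at.

        import numpy as np

        batch, d_in, d_out = 512, 1024, 1024
        x = np.random.randn(batch, d_in).astype(np.float32)   # inputs
        W = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights
        y = x @ W   # batch * d_out dot products, all independent of each other
        print(y.shape, "->", batch * d_out, "equations evaluated in one call")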

"The medium is the massage." -- Crazy Nigel

Working...