AI

IonQ CEO Peter Chapman on How Quantum Computing Will Change the Future of AI (venturebeat.com) 33

In a wide-ranging interview with VentureBeat, quantum computing startup IonQ chief executive Peter Chapman talks about quantum computing's future impact on AI and ML. From the interview: The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human can. "AI in the Strong AI sense, that I have more of an opinion [about], just because I have more experience in that personally," Chapman told VentureBeat. "And there was a really interesting paper that just recently came out talking about how to use a quantum computer to infer the meaning of words in NLP. And I do think that those kinds of things for Strong AI look quite promising. It's actually one of the reasons I joined IonQ. It's because I think that does have some sort of application." [...] "For decades, it was believed that the brain's computational capacity lay in the neuron as a minimal unit," he wrote. "Early efforts by many tried to find a solution using artificial neurons linked together in artificial neural networks with very limited success. This approach was fueled by the thought that the brain is an electrical computer, similar to a classical computer."

"However, since then, I believe we now know the brain is not an electrical computer, but an electrochemical one," he added. "Sadly, today's computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore's law, they won't next year or even after a million years." Chapman then quoted Richard Feynman, who famously said, "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical. And by golly, it's a wonderful problem, because it doesn't look so easy." "Similarly, it's likely Strong AI isn't classical; it's quantum mechanical as well," Chapman said.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Now they just need to convince people that an analogy is an implementation, and watch the investment money flood in...

  • Chapman has figured something out that a lot of folks have not.

    Not only do we not fully understand what intelligence actually is, as a mechanism, in a way that would let us replicate it... we don't have a full grasp of what systems play a part in our own intelligence.

    We are not going to be seeing AI in our lifetimes... but if you keep calling sorting lists and advanced algorithms AI, then go to bed, we are done.

    There are multiple hurdles.
    Self Re-Wiring/ReMapping
    Electrical
    Chemical
    New Structure
    Structural D

    • by ceoyoyo ( 59147 )

      Your brain is NOT the only part of your body that can process data and information. In fact, one of the simplest examples of this is athletes. They train in repetitive motions to teach their muscles to perform specific movements without those movements having to be overtly thought about by the host brain.

      That's your brain.

      • Not only is it your brain, not your muscles, that is being trained in this way (as you point out), but so-called muscle memory isn't even an example of intelligence as most people would think of that word.

        I also don't see the relevance of "chemical" in SirAstral's laundry list; at best, there's an implicit hypothesis that if intelligence isn't done chemically, it can't be done at all. I very much doubt whether that is true.

        On the other hand, I don't see why quantum computing should be the missing

        • by ceoyoyo ( 59147 )

          Well, as usual, "intelligence" is poorly defined. Many people would consider the ability to learn a complex skill part of intelligence, and it's certainly something we'd want an AGI to be able to do. It's not a cognitive task though, and it's also something that's pretty straightforward for current neural networks.

          "Chemical" probably relates to the conversion of electrical signals to neurotransmitters at the synapses. A lot of people point this out as being something that's very different between biological

    • by JBMcB ( 73720 )

      Chapman has figured something out that a lot of folks have not. Not only do we not fully understand what intelligence actually is, as a mechanism, in a way that would let us replicate it... we don't have a full grasp of what systems play a part in our own intelligence.

      AI researchers have known this for decades. Companies selling AI, and news organizations covering them, gloss over this knowledge. A headline that reads "We are nowhere near creating machines that think like humans, so don't worry about it." isn't going to get as many clicks as an opposite headline with a still frame from the Terminator.

      • Yes, the researchers tend to know... but these kinds of things usually don't get translated out, because of what you said... it's harder to sensationalize a headline like that.

        But it does show a serious lack of awareness of things like this in the general population. I don't expect people to know how to create AI, but people do need to be aware of the hurdles in the way, so they can avoid spending time on bullshit like this...

        https://www.forbes.com/sites/c... [forbes.com]

        Everyone begins wasting time and forcing others to w

    • There's more than one way to skin a cat. We don't necessarily need to understand how our own brain works to create an AI that can think in a similar way.
    • We are not going to be seeing AI in our life times...

      “No computer will ever beat me” Garry Kasparov, 1997.

    • We are not going to be seeing AI in our life times

      The problem is, and has always been, the algorithmics, the method. I'm pretty confident that the gigantic AlphaGo parallel infrastructure, made of thousands of processors, could gain some "intelligence" by being programmed differently.

  • Did they also schna until their noses filled with water?

    The conversation quickly turned to Strong AI, or Artificial General Intelligence (AGI), which does not yet exist. Strong AI is the idea that a machine could one day understand or learn any intellectual task that a human can. "AI in the Strong AI sense, that I have more of an opinion [about], just because I have more experience in that personally," Chapman told VentureBeat.

    It doesn't exist. You have zero experience. That may be your area of greatest expertise, but you still have no experience with it.

  • Not this again (Score:2, Insightful)

    by t4eXanadu ( 143668 )

    Except for the fringe, no one takes seriously the idea that quantum mechanics has a significant role in human intelligence or consciousness. Somehow I'm not surprised the business hype machine is playing up this nonsense.

    That the brain is like any computer at all is still an open debate, despite it having become part of the modern doctrine of Neuroscience.

    • by ceoyoyo ( 59147 )

      It would be awfully nice if it did. Your brain is made of a bunch of squishy stuff that grows all by itself and works happily at 310 K while getting knocked around, irradiated and having warm dirty water pumped through it. Basically the exact opposite of any notion we have of building a quantum computer.

      The quantum computer builders would desperately like to have QC and AI linked, so they could benefit from both streams of investment.

    • Except for the fringe, no one takes seriously the idea that quantum mechanics has a significant role in human intelligence or consciousness.

      That's true in neuroscience, but it's not an uncommon outlook in computer science/machine learning. How neurons are modeled in a neural net can make or break its ability to learn. The activation function, the function used to determine when the artificial neurons in a neural net should fire, has to be stochastic to a degree to get that learning behaviour. Of course, the best we can usually do in computer science is pseudorandom. So there is some merit in wanting to investigate what true randomness could
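The "stochastic activation" idea above can be made concrete with the binary stochastic unit used in Boltzmann machines, where a neuron fires with probability given by the sigmoid of its input. A minimal sketch in Python (the function names are illustrative, not from any particular library):

```python
import random
from math import exp

def sigmoid(x):
    # Squash the input into (0, 1); used here as a firing probability.
    return 1.0 / (1.0 + exp(-x))

def stochastic_unit(activation, rng=random):
    # Fire (return 1) with probability sigmoid(activation),
    # as a binary unit does in a Boltzmann machine.
    return 1 if rng.random() < sigmoid(activation) else 0

random.seed(42)
samples = [stochastic_unit(0.0) for _ in range(1000)]
# With zero input the firing probability is 0.5, so roughly half fire.
```

Whether the randomness is "true" or pseudorandom, the sampling step is what lets such units explore states rather than settle deterministically.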

  • The brain is an electrochemical circuit, but the function of the chemical element is to modify the functionality of the electrical circuit. Nature has optimized us in strange ways, but there is no evidence to suggest any quantum-level influences. I'll be the first to admit I was wrong when they present substantial evidence of quantum effects on neurological function, but until then it's hypothetical at best.

  • This quote:

    “Sadly, today’s computers do not have the processing power to be able to simulate the chemical interactions across discrete parts of the neuron, such as the dendrites, the axon, and the synapse. And even with Moore’s law, they won’t next year or even after a million years.”

    Moore's law after a million years? That's like 2^500000 = 10^150514 times faster than we have now, right? I respect what my brain does, but I don't think it has that much processing power.
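For what it's worth, the arithmetic above checks out; a quick sanity check, assuming the conventional two-year doubling period:

```python
from math import log10

# Moore's law: roughly one doubling every two years,
# so a million years gives about 500,000 doublings.
doublings = 1_000_000 // 2
exponent = doublings * log10(2)  # log10 of the total speedup factor
print(f"2^{doublings} is about 10^{int(exponent)}")
# prints: 2^500000 is about 10^150514
```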

  • His point is that the human brain (and presumably quantum computers) can do something that traditional computers can't.

    Unfortunately, my understanding is that's not the case: traditional computers can solve any problem that quantum computers can solve, and vice versa. The only difference is how quickly they can do it.
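That equivalence holds because a classical machine can always simulate a quantum circuit by tracking the full state vector; it just pays an exponential cost, 2^n amplitudes for n qubits. A toy one-qubit sketch (plain Python, no quantum library assumed):

```python
from math import sqrt

# One-qubit state vector: complex amplitudes for |0> and |1>.
state = [1.0 + 0j, 0.0 + 0j]  # start in |0>

def hadamard(s):
    # Apply the Hadamard gate, putting the qubit into equal superposition.
    h = 1.0 / sqrt(2.0)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

state = hadamard(state)
p0 = abs(state[0]) ** 2  # probability of measuring |0>: 0.5
```

An n-qubit register needs 2^n amplitudes, which is why the simulation is equivalent in what it can compute but not in how fast.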
  • The assumption is that you need to fully simulate the biochemical interactions of the neuron in order to simulate a brain.
    This is frankly wrong.
    The variation in the performance of a single neuron as a person ages from 30 to 50, or as it is affected by alcohol, fatigue, or disease, is very, very significant.
    You remain 'the same person'.
    This means there is no value in a hyper-accurate model of the neuron that is much more accurate than the hour-to-hour variation of normal neurons.

    We do not have a good idea what t
    • Any brain is an interconnected system of systems.
      To understand how a human brain does what it does, as a total system, you have to be able to analyze a living human brain, in realtime, as a system. Then maybe you have a shot at developing hardware/software that can emulate that accurately.
      As-is, so-called 'AI research' is roughly equivalent to the 'million monkeys on a million typewriters given infinite time can duplicate the works of Shakespeare' theory: they're blindfolding themselves and throwing dartb
      • This is an assumption.
        It is IMO a reasonable position that you can take a flash-frozen brain, and combine it with knowledge of how the neuron works, and simulate the whole brain accurately, even though your models of the neuron are not 100% precise.
        As long as they are 'good enough' and your mappings are good enough of the connectivity, you have the individual you started out with.
        (as much as anyone is the same after hypothermia leading to unconsciousness)
        • Doesn't sound right. It's too complex a system, too dynamic in nature, to kill and expect to get an accurate picture of how it really works.
  • That's what this is.
  • Can we please stop this demented nonsense?

    • This foolishness must be kind of entertaining, otherwise none of us would bother posting.
      Besides, what else is there to do?
      • by gweihir ( 88907 )

        You have a point. Observing people say stupid things and do foolish acts is the one thing that has gotten a lot easier in the current situation.

  • You know, before marketing got involved and called everything AI?
