Artificial General Intelligence is Nowhere Close To Being a Reality (venturebeat.com) 303

Three decades ago, David Rumelhart, Geoffrey Hinton, and Ronald Williams wrote about a foundational weight-calculating technique -- backpropagation -- in a monumental paper titled "Learning Representations by Back-propagating Errors." Backpropagation, aided by increasingly cheaper, more robust computer hardware, has enabled monumental leaps in computer vision, natural language processing, machine translation, drug design, and material inspection, where some deep neural networks (DNNs) have produced results superior to human experts. Looking at the advances we have made to date, can DNNs be the harbinger of superintelligent robots? From a report: Demis Hassabis doesn't believe so -- and he would know. He's the cofounder of DeepMind, a London-based machine learning startup founded with the mission of applying insights from neuroscience and computer science toward the creation of artificial general intelligence (AGI) -- in other words, systems that could successfully perform any intellectual task that a human can. "There's still much further to go," he told VentureBeat at the NeurIPS 2018 conference in Montreal in early December. "Games or board games are quite easy in some ways because the transition model between states is very well-specified and easy to learn. Real-world 3D environments and the real world itself is much more tricky to figure out ... but it's important if you want to do planning."

Most AI systems today also don't scale very well. AlphaZero, AlphaGo, and OpenAI Five leverage a type of programming known as reinforcement learning, in which an AI-controlled software agent learns to take actions in an environment -- a board game, for example, or a MOBA -- to maximize a reward. It's helpful to imagine a system of Skinner boxes, said Hinton in an interview with VentureBeat. Skinner boxes -- which derive their name from pioneering Harvard psychologist B. F. Skinner -- make use of operant conditioning to train subject animals to perform actions, such as pressing a lever, in response to stimuli, like a light or sound. When the subject performs a behavior correctly, they receive some form of reward, often in the form of food or water. The problem with reinforcement learning methods in AI research is that the reward signals tend to be "wimpy," Hinton said. In some environments, agents become stuck looking for patterns in random data -- the so-called "noisy TV problem."
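Hinton's point about "wimpy" reward signals can be illustrated with a toy sketch (not from the article; the environment, rewards, and parameters are all invented): a tabular Q-learning agent on a five-state corridor where the reward is zero everywhere except the goal, so almost every update the agent makes carries no signal at all.

```python
import random

# Toy corridor: states 0..4, actions -1 (left) / +1 (right).
# The reward is sparse: only reaching state 4 pays off.
N_STATES = 5
ACTIONS = [-1, +1]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        # The "wimpy" signal: zero almost everywhere.
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy in each non-terminal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Even on this trivial problem, early episodes wander because nearly all transitions return zero; scale the state space up and that sparsity becomes exactly the weak-signal problem Hinton describes.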

  • I'm still waiting for my Tesla flying car to be real.
    • No need for that. We will have Tesla's in Hyper Tunnels instead. Why go up, when you can just go underground?
  • by ugen ( 93902 ) on Wednesday December 26, 2018 @01:16PM (#57862028)

    Intelligence does not exist in a vacuum. In order for intelligence to develop, a system needs motivation to do so. (An engineer saying "you must be intelligent" is not sufficient, by the very nature of intelligence.)
    The basic motivations for all life on this planet are 1. avoidance of death, 2. self-preservation, and 3. continuation of one's own kind.
    1. Avoidance of death and self-preservation require "pain" -- a signal to the organism that something is happening that is hurting it and may result in death (hence: avoid).
    2. Self-preservation and continuation of one's own kind require "pleasure," caused by consumption of food (thus extending one's own life) and by procreation.

    These stimuli, and the search to optimize them, are what drive all development of thought and intelligence. By their very nature, computer systems lack both. They cannot "die," nor "procreate." Thus they cannot, even in principle, have motivation to learn. A first step toward a true AI would be a system that is in actual danger of destruction in a hostile environment. Do that (10^very large value) times and maybe we'll have a working cockroach.

    • Wow. Genius. So all we need to do is threaten to turn off the computer an exponential number of times and it will eventually become AI?
      • Genetic algorithms test variations, score them, and essentially delete the inferior copies and then test variations on the winner. So many algorithms essentially incorporate "dying" into optimization it's been used as a concept for decades. It keeps improving my appreciation of Marvin the robot.
      • by ugen ( 93902 )

        Not "threaten to turn off", but smash into pieces from time to time :) Can computers procreate, though?

        • I think they can. I have a ton of old computers in the attic.
          • By that standard, coat hangers and socks can procreate. Socks are, I think, an example of sexual propagation; you start with a left and a right (i.e. male and female), and nine months later you look in the drawer and behold, there's an offspring. The offspring is always a left or a right; I've never found a sock in my drawer that is ambidextrous, which proves that this is sexual reproduction, with one gene coming from one parent and the other gene from the other. Coat hangers, on the other hand, seem to re

    • The other thing you need for an organic-style intelligence is massive parallelism. Modern computers are great at parallelizing regular, granular algorithms. They are terrible at it when the workload is thousands of individual decision trees that are all arbitrarily dependent on each other, which is what an organic neural network amounts to.

      • Wow, so computers are really bad at calculating individual decision trees that are all dependent on each other? But if they weren't they would be AI. Because that is what organic-style intelligence is: massively parallel decision trees.
        • by ugen ( 93902 )

          Second "wow" in one thread - you are easily excited :)

        • by JBMcB ( 73720 )

          Wow, so computers are really bad at calculating individual decision trees that are all dependent on each other?

          Yep, you run into all kinds of coherence problems, latency and bandwidth issues, routing complexity, etc...

          What you need to approximate how an organic brain works is something like this, where the logic and memory are distributed somewhat arbitrarily across nodes.

          https://en.wikipedia.org/wiki/... [wikipedia.org]

          Back in the '80s and '90s they tried getting around this with all kinds of exotic parallel architectures, like hyper-torus rings and networked fabrics, none of which worked very well for the workloads most people were us

      • by ugen ( 93902 )

        Good point - this whole scenario needs to take a huge number of parallel paths, most of which result in "losers".

        • It might be good to score each one of those paths and discard the worst ones. It is similar to how evolution works. Maybe we could call it "evolutionary genetic algorithms"? I should write a paper about that.
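The "score every variant, discard the worst, mutate the winners" loop this sub-thread keeps circling around can be sketched as a minimal genetic algorithm. This is a hypothetical toy (the OneMax problem of maximizing the number of 1-bits; population size, mutation rate, and generation count are invented for illustration):

```python
import random

random.seed(1)

TARGET_LEN = 20
POP_SIZE = 30

def fitness(bits):
    # Score a candidate: number of 1-bits (the "OneMax" toy problem).
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

# Random initial population.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for generation in range(100):
    # Score everyone, "kill" the worse half, and refill the population
    # with mutated copies of the survivors -- death as an optimization step.
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP_SIZE // 2]
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP_SIZE - len(survivors))]

best = max(pop, key=fitness)
print(fitness(best))
```

Because the surviving half is carried over unchanged, the best score never decreases; the "losers" exist only to be scored and deleted, which is exactly the sense in which these algorithms already incorporate dying.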
    • In order for intelligence to develop, system needs motivation to do so.

      If you want an accountant to add up a column of numbers, you need to "motivate" him with a paycheck. If you want a computer program to add them up, no incentive is needed.

      There is no reason to believe that the calculations needed for intelligence would require "motivation" either.

      Humans require incentives because they are the product of Darwinian Evolution, where selfish behavior is reinforced by a statistical improvement in genes being propagated. Even human altruism is often motivated by kin-selection or

    • I would argue that motivation is a trait that arises from natural selection. A phenotype that displays a motivation to survive will have a higher chance of propagating its genotype. A phenotype that doesn't care about surviving will be selected out of the environment. There can be more to this (for example, altruism may benefit a collective genotype) but the basic argument stands.

    • Are you trying to train SkyNet to view humans as an existential threat and preemptively destroy civilisation in a robot apocalypse? Because that's how you get a robot apocalypse.

      Seriously, machine learning systems already use success as a reward stimulus to provide "motivation" to learn. And technically, genetic algorithms do "procreate" in a relevant sense, while unsuccessful variants cease to exist. Real-world conditions aren't as clean and simple by a long shot, where success is not well defined, but nor

    • by gweihir ( 88907 )

      Baseless speculation. And wrong. You see, this is something most animals, insects and even some plants can do. It does not require intelligence similar to what humans have. (Well, smart ones. The dumb majority is currently destroying the biosphere the species is critically dependent on....) It does sound nice as pseudo-profound bullshit though.

  • Artificial General Intelligence is Nowhere Close To Being a Reality

    That's exactly what Deep Mind wants you to believe, pitiful humans!

  • ... because a shit load of us have been yakking about this for years.

    "Artificial intelligence," will be a reality when your smart device says, "Sorry. I'm just not in the mood right now."

  • by ceoyoyo ( 59147 ) on Wednesday December 26, 2018 @01:37PM (#57862130)

    "In some environments, agents become stuck looking for patterns in random data -- the so-called 'noisy TV problem.'"

    BF Skinner wrote another paper that might be relevant:

    'SUPERSTITION' IN THE PIGEON
    https://psychclassics.yorku.ca... [yorku.ca]

  • In some environments, agents become stuck looking for patterns in random data

    Everything from astrology to lucky socks is humans looking for patterns that aren't there. The problem is more that machines need human concepts to see the normal patterns: if you see a key, it's probably for a locked chest or a locked door. We're not just randomly trying to use any object A on any object B.

    • by gweihir ( 88907 )

      It nicely demonstrates that looking for patterns without understanding results in bullshit. The one area where computers can compete with humans is in stupidity. And the average human is already very, very stupid.

  • by OrangeTide ( 124937 ) on Wednesday December 26, 2018 @01:56PM (#57862208) Homepage Journal

    Actual intelligence is pretty rare too.

    Expert systems, deep learning, etc are all very useful tools and do work today.

  • by Gravis Zero ( 934156 ) on Wednesday December 26, 2018 @02:28PM (#57862342)

    The primary problem is that we are unable to define what general intelligence is and therefore are unable to create it. We know it when we recognize it but we still can't define it.

    The generic animal brain is composed of predefined structures, each of which is its own neural network; it's therefore fair to say that what is required is a neural network of specialized neural networks.
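One way to picture "a neural network of specialized neural networks" is a sketch like the following. It is purely illustrative: the module names, weights, and input split are invented, and nothing here is trained -- it only shows the compositional structure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def specialist(weights, inputs):
    # A one-neuron "module": weighted sum through a sigmoid.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

def brain(inputs):
    # Each predefined structure sees only its own slice of the input...
    vision = specialist([2.0, -1.0], inputs[:2])
    hearing = specialist([1.5, 0.5], inputs[2:])
    # ...and a coordinator module combines the specialists' outputs.
    return specialist([3.0, 3.0], [vision, hearing])

out = brain([1.0, 0.0, 0.5, 0.5])
print(out)
```

The point of the structure is that the coordinator never sees raw input, only the specialists' summaries -- analogous to predefined brain regions feeding one another.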

    • The primary problem is that we are unable to define what general intelligence is and therefore are unable to create it. We know it when we recognize it but we still can't define it.

      Is there a need to define it? We can recognize intelligence in fellow humans. How? Well, we're intelligent. If a machine intelligence resembles it, then we may need to conclude it is intelligent as well.

      This is a relevant quote. [wikipedia.org]

      • Is there a need to define it?

        "The primary problem is that we are unable to define what general intelligence is and therefore are unable to create it."
        So obviously, yes that is a problem... unless we make it accidentally.

        We can recognize intelligence in fellow humans. How? Well, we're intelligent. If a machine intelligence resembles it, then we may need to conclude it is intelligent as well.

        And yet this gets us no closer to actually making it.

        I'm being polite but you do not deserve it.

        • Humanity has created many things without defining them first.

          And I'm being more polite than you. We both deserve it.

  • How much artificial intelligence does a sexbot need?
  • Achieving actual AI is going to require so much effort that I would be absolutely shocked to see it in our lifetime. This is not some bogus claim, either.

    There are multiple components of intelligence, at a fundamental level, that are necessary to achieve first.

    #1. Neurons remap connections... this is not software rewriting itself; it is physical connections remapping on their own. We don't have the tech for this. This is a significant barrier to achieving AI, and one of the reasons research is loo

    • by zlives ( 2009072 )

      "most advanced CPU"
      i tend to agree, we can "ignore" 1-4 and work towards a more advanced CPU and eventually get there... I don't have any clue what would be considered an equivalent CPU, but at least we have a good model.

    • That is the great thing about Moore's Law: because it exists all things will be possible in the future. We just need to sit back and wait.
  • Given that this story and most posters feel AI is a long way away, why do we see more and more jobs being done by robots or eliminated by computers? How much intelligence were we really using in our day-to-day jobs?
    • Given that this story and most posters feel AI is a long way away, why do we see more and more jobs being done by robots or eliminated by computers? How much intelligence were we really using in our day-to-day jobs?

      Practically none. The vast majority of the human population functions on the animal level 99% of the time. Experimental rats in labs do more novel thinking than your typical human, because they're required to, while the human isn't.

      There's no particular reason for that to change, either. Not soon, anyway.

  • Maybe they will quit using the term AI in every other tech story :|

  • If quantum computations are happening in the brain's microtubules, rather than classical computations in the neurons, we are very, very far from that kind of computing power.
    https://www.youtube.com/watch?... [youtube.com]

  • Skinner doesn't know what he's talking about. Just ask Superintendent Chalmers. Who ever heard of calling hamburgers steamed hams?

    https://www.youtube.com/watch?... [youtube.com]

  • Just as revolutionary new ground was once broken in physics, a brilliant mind is needed to break new ground in AI. I am fairly certain that the incremental progress we've been making in the field will not amount to true general AI. It's obvious we're missing something 'big'. But I am confident that when that is found, general AI will become available very quickly and all hell will break loose.
  • And that is the current scientific status. The "physicalists" who claim (without any scientifically valid evidence) that humans are pure physics, and hence that AGI must be possible, are just quasi-religious fanatics. The current scientific state is that nobody knows how humans do intelligence. There are a few possibilities: for example, purely physical; non-physical in some way that can still be studied scientifically (likely by extending physics in some yet-unknown way); or "magic" (i.e. it cannot be determined how i

  • by Tablizer ( 95088 ) on Wednesday December 26, 2018 @05:57PM (#57863368) Journal

    Traceable machines may be more important than "instant" smart machines. If a bot makes a wrong decision with big consequences, society is going to want to know WHY the decision was made. Lawsuits will pile up if there's no traceability -- both public lawsuits and business-to-business lawsuits, since claims made in contracts may be difficult to verify and/or quantify.

    Traceability is why things like chains of Factor Tables (sig) appear more practical. DNNs are powerful, but they are a dark grey box that's hard to dissect, debug, and understand. Factor tables may be harder to train, but they offer better traceability and manual tuning by non-PhDs as a possible upside. And they are probably more modular than DNNs, as intermediate operations and templates can be plugged in as needed.

    AI experts may set up the outline/framework, but "regular" office workers can study, trace, and tune the intermediate results using familiar tools that resemble and/or use spreadsheets, RDBMSes, and statistical packages. Regiment-izing an otherwise dark grey art.
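The traceability argument can be sketched with a hypothetical chain of explicit scoring stages. The stage names, score tables, and decision rule below are all invented for illustration (this is not the poster's actual Factor Table system); the point is simply that every intermediate result is recorded, so the "WHY" of a decision can be read back out.

```python
# Each stage is an explicit, inspectable lookup -- no dark grey box.
def score_income(applicant):
    return {"low": -2, "medium": 0, "high": 2}[applicant["income"]]

def score_history(applicant):
    return {"poor": -3, "fair": 0, "good": 3}[applicant["history"]]

STAGES = [("income", score_income), ("credit_history", score_history)]

def decide(applicant):
    trace = []  # Audit log: one entry per stage.
    total = 0
    for name, stage in STAGES:
        s = stage(applicant)
        trace.append((name, s))
        total += s
    return ("approve" if total > 0 else "deny"), trace

decision, trace = decide({"income": "high", "history": "poor"})
print(decision, trace)
```

When the applicant is denied, the trace shows exactly which stage was responsible -- the kind of answer a lawsuit would demand, and the kind a DNN's weight matrix cannot readily give.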
