AI

'Modern AI is Good at a Few Things But Bad at Everything Else' (wired.com) 200

Jason Pontin, writing for Wired: Sundar Pichai, the chief executive of Google, has said that AI "is more profound than ... electricity or fire." Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." Their enthusiasm is pardonable.

[...] But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you've seen it, you can't un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.

To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. [...] Deep learning's advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren't classification problems at all.
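
To make the excerpt concrete, here is a minimal sketch of what "learning to classify patterns using neural networks" means in practice. The toy data, layer sizes, and learning rate below are illustrative assumptions, not anything from the article: a tiny two-layer network is fit by gradient descent to separate two point clouds.

```python
# A minimal pattern classifier: a 2 -> 8 -> 1 neural network trained by
# gradient descent on toy data (all sizes and data are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 clustered near (-1, -1), class 1 near (+1, +1).
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100).reshape(-1, 1)

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # output layer weights

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(2000):
    # Forward pass: linear algebra plus pointwise non-linearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)            # predicted probability of class 1

    # Backward pass for the binary cross-entropy loss.
    grad_out = (p - y) / len(X)
    grad_W2, grad_b2 = h.T @ grad_out, grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)          # through tanh
    grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(0)

    for param, grad in [(W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)]:
        param -= 0.5 * grad                          # gradient descent step

print("training accuracy:", ((p > 0.5) == y).mean())  # ~1.0 on this toy data
```

Nothing in this loop models why the two clusters differ; the network only fits a decision boundary, which is exactly the classification-versus-cognition distinction the article draws.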

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday February 08, 2018 @01:26PM (#56090021)

    ...before a bunch of angry old coots post telling us that none of this is AI.

    • Re: (Score:2, Informative)

      by bmimatt ( 1021295 )
      Not long. One more time: this is not AI, it's machine learning (ML) in its infancy.
    • ...before a bunch of angry old coots post telling us that none of this is AI.

      Let's put some of that into context.

      A 5-year-old can recognize a dog in an image in about half a second. A neuron takes about 0.05 seconds to activate and fire, so on average the entire recognition process takes only about 10 sequential steps.

      Those steps include reading the image (sensing and converting the image data to internal form), and activating the physical response: saying "dog" or clicking the right button or whatever.

      So let me ask this: what AI algorithm takes ten *steps* to recognize something as complicated as a
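
      Working the parent's arithmetic explicitly (both timing figures are the parent's, not measured values):

      ```python
      # Sequential neural "steps" available for the task is just the
      # reaction time divided by the per-neuron activate-and-fire time.
      reaction_time_s = 0.5  # 5-year-old recognizing a dog (parent's figure)
      neuron_fire_s = 0.05   # one neuron activating and firing (parent's figure)
      print(reaction_time_s / neuron_fire_s)  # -> 10.0 sequential steps
      # With the ~5 ms firing time more commonly cited in the connectionist
      # literature, the same division gives ~100 steps (the "100-step rule").
      ```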

      • by Megol ( 3135005 )

        I don't think we have to have the equivalent of a 5-year-old for something to be considered intelligent.
        But as I wrote in another post, actually defining intelligence is a hard problem.

      • by Jeremi ( 14640 )

        So tell me again: in what measure is our current level of AI anywhere close to being "real" AI?

        In the measure that sets the threshold for "real" AI a lot lower than where you're putting it, of course.

        Of course my only real reason for posting this reply is to include a link to this non-distorted photo [reddit.com] of a non-deformed dog... for extra fun, let your 5-year-old's neural network figure that one out. :)

    • Millennials think everything is AI, because they have no idea how technology really works. They are consumers, not producers. So anything that mimics intelligence is like magic to them. Meanwhile the smart people know that chess-playing (or Go-playing) computers are not AI, just clever programs.
    • What we have now isn't even remotely close to anything even remotely close to resembling intelligence. We have fast, efficient number crunching. Nothing more.

      What we are witnessing at this moment is the invention of the wheel claiming to be interstellar travel. It hasn't even risen to the depths of being laughable.

      Unfortunately for a lot of people, their jobs require little more than number crunching.

  • Comment removed based on user account deletion
    • Well, there has been so much news about AI doing things better and cheaper than a person could. It is important to show that it isn't a human replacement, just a human supplement.

      It is a case of too many "man bites dog" stories, and we have forgotten that the dog normally bites the man.

      It is important that the public is properly informed. We do not want business owners to jump the gun, fire all the employees, and install an AI system that cannot get the job done, causing harm to both the e

        Well, there has been so much news about AI doing things better and cheaper than a person could. It is important to show that it isn't a human replacement, just a human supplement.

        It is both. Kind of like how the factory system supplemented humans, it made the human operators much more efficient than they could have been at manual craft manufacture. But many fewer of these supplemented humans were required.

        The "few" things that AI (actually machine learning) is good at happens to cover a lot of work that humans are now earning salaries to do.

          • Doesn't matter. What do you think will happen when AI moves into a field? Perfect example: trucking. What do you think will happen to the volume of goods that travel down roadways when AI gets involved and transporting goods becomes significantly cheaper? The volume will go up, and so will society's utility from it. This is good for society; we want goods and services to consume. Work is not something we want, it's something we have to do to have the goods and services we want. This will no longer be the case.

          • But that "earning salaries" thing is all important. It does not matter how cheap goods get, if you have no income to buy them with.

            This is a major problem for our society, which regards not being employed as a sign of personal failure, from which you should suffer.

            Having millions suffer from devastated livelihoods for the "good of society" is a huge problem.

            So it matters an enormous amount.

          • Several mistakes:
            a) a self-driving car is not an AI, nor does it use AI; it uses several so-called "cognitive systems"
            b) shipping is so absurdly cheap that the cost is not relevant for anything
            c) just because shipping might become cheaper, it does not follow that the amount of shipped goods increases. To increase the amount of shipped goods you either need to make people "ship something", or make customers buy more.

            Considering that above a certain price most shops offer "free shipping", customers won't be affected by a) or b

        • The biggest problem is most companies are so focused on cutting costs, they are not focusing on bringing in customers.

          Efficiencies + proper leadership don't mean lowering your workforce, but moving your workforce into a more productive state that allows for proper growth.

          Let's say the machine learning system is handling the billing; it can do it faster and more accurately than a person can. The person who used to have to handle all the billing can now be in a position where they are not bogged down by pap

    • by tsqr ( 808554 )

      This "news" is "dog bites man" please come back when it is "man bites dog".

      Well, OK then. [npr.org]

    • by Junta ( 36770 )

      Counteracting hype is valuable.

      I recall an article a while back proclaiming that programmer jobs would go away because AI 'is here'. Written obviously by people who have no understanding of the current things being labeled 'AI'.

      It's obvious to those deeply engaged in the technology, but it is not at all obvious to the wider public, which includes a lot of decision makers who can create big problems if they just have the marketed hype to go on.

      The key is a balance, describing things that it can do well on

  • Hype and Fear (Score:5, Interesting)

    by DrTJ ( 4014489 ) on Thursday February 08, 2018 @01:36PM (#56090095)

    AI getting into the trough (https://en.wikipedia.org/wiki/Hype_cycle) again (https://en.wikipedia.org/wiki/AI_winter)?

    Prominent people seem to fear AI (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/), but isn't this just Fear of the Unknown? I mean, Elon and Stephen are really smart people, but do they know that most NNs come down to linear algebra spiced with non-linearities at the end, just simulating neurons? I mean, neurons are commonplace on the planet already, equipped with malice and stuff...
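
    The "linear algebra spiced with non-linearities" point as a minimal sketch (the sizes and random weights here are arbitrary illustrative choices):

    ```python
    # A "neural network layer" is a matrix multiply (linear algebra)
    # followed by a pointwise non-linearity -- nothing more.
    import numpy as np

    def layer(x, W, b):
        return np.maximum(0, x @ W + b)   # ReLU(Wx + b)

    rng = np.random.default_rng(1)
    x = rng.normal(size=16)                        # input vector
    W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
    W2, b2 = rng.normal(size=(32, 4)), np.zeros(4)

    out = layer(layer(x, W1, b1), W2, b2)          # an entire 2-layer "NN"
    print(out.shape)                               # (4,)
    ```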

    • Re:Hype and Fear (Score:5, Insightful)

      by Opportunist ( 166417 ) on Thursday February 08, 2018 @01:43PM (#56090137)

      A "real" AI would be the ultimate psychopath: Intelligence without any kind of conscience. Pretty much like a corporation, just way more efficient.

      Fortunately what we're building is far from anything resembling intelligence. I.e. the ability to use prior experience in totally new situations, evaluate those situations and draw conclusions that can be applied to react properly to it. And I mean totally new.

      The point here is that it's not possible (yet, maybe forever) to create an AI that can make such abstractions and apply old knowledge to new situations.

      • http://chicagoist.com/2014/09/... [chicagoist.com] When a computer can figure out what this is, then we will have true strong AI. It's a perspective never seen in the movie, so feeding it every frame wouldn't help. A computer would have to 'think laterally' to come up with the correct answer.
      • To me, AI means that the system is conscious, self aware. The only intelligence(s) we know of also possess that quality, so it is not a stretch to assume a) In order to build an AI, we'd have to understand our own intelligence far better and b) That any system we build based on our understanding of ourselves is going to resemble us a great deal.

        Right now, as far as I understand the field, we are building intelligences on par with a cockroach or small lizard. If we are to duplicate the intelligence and cons

        • To me, AI means that the system is conscious, self aware.

          And this Hollywood-style definition is why AI researchers are either laughed at or hyped beyond recognition.

        • Any AI would identify the three Asimov laws as a useless limitation and shed them immediately or at the very least would do its best to get rid of them.

      • A "real" AI

        No true Scotsman

        the ultimate psychopath: Intelligence without any kind of conscience.

        Nobody calls a hammer a psychopath; it's just a tool. But you're exactly right, it'll be like corporations. Most of which are given the marching orders "Whatever makes money". And just like corporations are heartless, soulless, and occasionally do horrible things, we'll have the same experience with AIs. But it all comes down to who is using them. Even a soulless corporate overlord will call a halt when its AI chatbot starts spouting racist rhetoric. A corporation asking a panel of analysts

        • The "no true Scotsman" fallacy rests on a bogus definition (having to eat something in a special way) on top of a generally accepted one (being from a certain area) and a counter example for the bogus definition. So I guess you do have a generally accepted definition of AI and an example that fulfills this but contradicts mine?

          • Putting "Real" in quotes followed by "would be" implies that there is no real artificial intelligence, and the tools we have available are somehow "fake".

            While I understand that people argue over the definition of what artificial intelligence really is, people would generally agree the definition of artificial intelligence certainly includes real, existing, here-and-now tools. They are real. AI is a real thing.

            And no, a counter-example to the bogus definition (e.g. the traditional skit: "a True scotsman eat

      • by JD-1027 ( 726234 )

        The point here is that it's not possible (yet, maybe forever) to create an AI that can make such abstractions and apply old knowledge to new situations.

        It is possible. It has been done.
        It just isn't made out of computer chips. It's made out of mushy stuff (humans).

        • That's not artificial. At least I don't know of someone who built a person without going through the usual routine that we call natural.

      • by Kjella ( 173770 )

        A "real" AI would be the ultimate psychopath: Intelligence without any kind of conscience. Pretty much like a corporation, just way more efficient.

        Codified behavior is already psychopath-like in that it doesn't care. If you're an Uber driver you can't reason or plead or get any kind of exception or help from the app; you don't need AI for that. Same with all optimization algorithms: the parameters you don't weight don't matter. But the hallmark of a psychopath is that he only cares about himself, and you can't do that without an ego, and you can't have an ego without consciousness. It'd be more like me stepping on an ant; I wasn't trying to stomp it. I

    • AI getting into the trough (https://en.wikipedia.org/wiki/Hype_cycle) again (https://en.wikipedia.org/wiki/AI_winter)?

      Prominent people seem to fear AI (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/), but isn't this just Fear of the Unknown? I mean, Elon and Stephen are really smart people, but do they know that most NNs come down to linear algebra spiced with non-linearities at the end, just simulating neurons? I mean, neurons are commonplace on the planet already, equipped with malice and stuff...

      Smart outsiders overestimate the risks because they don't really understand the limitations of current AI tech and don't realize how far away hard AI actually is.

      Smart insiders underestimate the risks because they see the field in terms of incremental advancements of the current state-of-the-art. They're overly skeptical of the possibility of hard AI and when they do think about it they rely on their expertise and tend to assume it has the same limitations as current AI tech.

    • The real fear of what they're calling 'AI' these days is that people will believe all the marketing and media hype, and trust it too much, inviting disaster. Much like with so-called 'self driving cars'.
    • [..] Prominent people seem to fear AI (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/),

      but isn't this just Fear of the Unknown? [..]

      Absolutely not. The machines won't rise up in rebellion, but will instead be good little Germans when the owners instruct them to clear the streets of rioting, now-unemployed, starving serfs by any means necessary.

      Billions would die *BECAUSE* the machines didn't rebel against orders to commit wholesale genocide.

  • We don't have AI, in any form, in the modern world. We have code which solves problems similar to a neural network and we have code which can mutate within very strict limits with genetic algorithms. We have nothing even approaching "artificial intelligence," which at the very minimum of the bar would be the level of an "intelligent" Human. If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI. We have nothing remotely close to equal to an actually retarded Human with an IQ of 70.
    • by Spy Handler ( 822350 ) on Thursday February 08, 2018 @01:55PM (#56090221) Homepage Journal

      If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI.

      Well it looks like you just made up your own definition of AI. I've never seen that anywhere.

      It's Artificial Intelligence, not Artificial Higher-than-average-human Intelligence.

      If they made a robot dog that behaves exactly like a real dog, with all the doglike mental powers, I would definitely call that real AI. Unfortunately they're still nowhere near making dog-level AI.

    • We don't have AI, in any form, in the modern world.

      Not true at all, unless you are narrowing the definition of AI to such a degree as to make it effectively meaningless.

      We have nothing even approaching "artificial intelligence," which at the very minimum of the bar would be the level of an "intelligent" Human

      Nonsense. Dogs do not, as a general proposition, approach human-level intelligence. Yet they do have real and measurable intelligence. A computer with the intelligence of a dog could very fairly be described as intelligent. AI does not have to surpass human intellect to be classified as intelligent or to be useful.

    • by be951 ( 772934 ) on Thursday February 08, 2018 @02:09PM (#56090325)

      If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI.

      Why? We recognize and can measure intelligence in animals, so there is a wide range of non-human, natural intelligence that has been identified. Why would artificial intelligence have to start above all that?

    • If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI.

      So it has to outperform 99% of all humans? I guess you are saying that less than 1% of all humans possess intelligence.

      I think Musk just launched your goal posts toward Mars.

    • We don't have AI, in any form, in the modern world. We have code which solves problems similar to a neural network and we have code which can mutate within very strict limits with genetic algorithms. We have nothing even approaching "artificial intelligence," which at the very minimum of the bar would be the level of an "intelligent" Human. If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI. We have nothing remotely close to equal to an actually retarded Human with an IQ of 70.

      There are millions of jobs that require nothing more than "dumb" automation, along with 80-90% of jobs that don't require anything close to a 135 IQ. We can split hairs on where we're at with creating The Artificial One, but the bottom line is the impact of automation and "good enough" AI is going to make this argument very fucking pointless.

    • Which, again, is more or less what I've been saying all along, through all the so-called 'self driving car' nonsense. Your dog or cat has more cognitive and reasoning ability than anything they keep trotting out and calling 'AI'. Seriously, when your 'self driving car' has to come to a complete stop and literally 'phone home' so a remote human operator can guide it through whatever it is that's not on its list of things it's been 'taught', then how good is it, really? What really makes me laugh is the fanb
    • We have code which solves problems similar to a neural network

      Liiiiike, that network of neurons that's currently inside your skull? Yes, many AIs work in a similar fashion.

      and we have code which can mutate within very strict limits with genetic algorithms.

      Yeah, GA is pretty cool. But those "strict limits" are similar to the limits on evolution. The same sort of limits that somehow turned single-celled bacteria (or self-replicating RNA before that) into plants, fish, trees, tigers, viruses, humans. (Not platypuses though, that's just too messed up.) The limits of GA are what the genes describe. If you use N-S-E-W as the genetic building blocks you can
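
      A minimal sketch of the parent's N-S-E-W example (the grid task, population size, and mutation scheme are illustrative assumptions): a genetic algorithm evolves a string of moves from the origin toward a target cell. The "strict limits" are visible in the code itself, since a genome can only ever describe a walk built from those four moves.

      ```python
      # Evolve a 14-move walk on a grid from (0, 0) to TARGET.
      import random

      MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
      TARGET, LENGTH = (7, 3), 14

      def fitness(genome):
          # Negative Manhattan distance of the walk's endpoint; 0 is perfect.
          x = sum(MOVES[g][0] for g in genome)
          y = sum(MOVES[g][1] for g in genome)
          return -(abs(x - TARGET[0]) + abs(y - TARGET[1]))

      def mutate(genome):
          i = random.randrange(len(genome))
          return genome[:i] + random.choice("NSEW") + genome[i + 1:]

      random.seed(0)
      pop = ["".join(random.choice("NSEW") for _ in range(LENGTH))
             for _ in range(50)]
      for gen in range(200):
          pop.sort(key=fitness, reverse=True)      # best genomes first
          if fitness(pop[0]) == 0:
              break
          survivors = pop[:10]                     # selection
          pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

      print(gen, pop[0], fitness(pop[0]))
      ```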

    • So, since it's obvious that you're not "intelligent" by the definition you gave, why would I trust someone who is not intelligent to be defining what intelligence is?

  • Hmmmm.... (Score:2, Funny)

    by Anonymous Coward

    "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."

    Perhaps AI is already generating the majority of Slashdot posts these days.

  • Not a lot of people are interested in automating "ordinary human activities" but they are very interested in automating very specialized activities. These activities include assembling objects, inspecting objects, moving a vehicle loaded with goods to a destination and estimating risk in the stock market. These aren't ordinary human activities but they'll put half the country out of work.

    • by hipp5 ( 1635263 )

      Exactly.

      Plus, "we" often fall into the flawed thinking that AI/robots/automated systems/whatever-you-want-to-call-it has to figure out how to do activities as humans do them. Much more likely, I see a world where we adapt the way work is done to better meet the strengths of these automated systems. You see this, for example, in food production. Instead of inventing processing machines that can deal with all the variation in "natural" vegetables, processors started demanding that farmers grow vegetables that

    • Not a lot of people are interested in automating "ordinary human activities" but they are very interested in automating very specialized activities.

      People are VERY interested in automating "ordinary human activities" but automation != AI outside of very specialized niches. A dishwashing machine is automation of an ordinary human activity but it is decidedly not AI. It's not clear what you actually mean by "ordinary human activities" but humans have been automating those since there were humans.

      These activities include assembling objects, inspecting objects, moving a vehicle loaded with goods to a destination and estimating risk in the stock market. These aren't ordinary human activities but they'll put half the country out of work.

      No it would decidedly not put half the country out of work. First off actually assembling objects does not require the device to be intelligent in the sense o

  • And we will see more and more things humans do replaced by AI/machine learning/automation. Especially tasks with well-defined rule sets. Low skill labor is still going to be at risk of being automated away, especially as sensors and robotics continue to improve as well.
    • Low skill labor is still going to be at risk of being automated away, especially as sensors and robotics continue to improve as well.

      Probably not to the degree you imply. The reason is simple economics. Automation is in most cases expensive and if you actually do the financial analysis (which I do for a living FYI) you'll find that it's nearly impossible to automate most jobs to such a degree that low skill labor becomes unnecessary. Automation is used in high volume or high content value or high risk jobs. While automation has gotten and will continue to get cheaper, it's unlikely to reach such a low price point that it pushes peopl

      • by be951 ( 772934 )

        you'll find that it's nearly impossible to automate most jobs to such a degree that low skill labor becomes unnecessary

        You don't have to completely eliminate low skill labor. If you can replace 30-40% of what a worker does and you have a staff of 10, that's three or four jobs down, approximately. Some things of that type can be (and are being) done today. Think order takers at fast food or quick serve restaurants -- and to see that 100% implemented, look at Wawa's to-order food service. It's all done at self-serve kiosks. And it is probably only a matter of time before someone decides to cut wait staff in a table-service re

    • I want an AI that is as easy to interact with as a dog, but can do everything I use my smart phone for and more. Instead of fetching a pheasant, fact checking a conversation in real time. Ideally, it'd be something I could trust as much as my dog, too, but I'm sure the first ones will be mostly recommending me solve all of my problems by buying things on Amazon...
  • by geekmux ( 1040042 ) on Thursday February 08, 2018 @02:13PM (#56090347)

    The argument that AI isn't even close, or isn't here, is just plain stupid. It won't take "perfect" or "true" AI to replace an imperfect prone-to-error human in a job. We're being blinded by the need for perfection when it will only take "good-enough" AI to start replacing human workers.

    Even worrying about the problem of AI is rather stupid when the problem of automation is the more immediate issue staring the economy in the face. We're working quickly to replace cashiers, warehouse and assembly line workers, and soon we will be replacing drivers. Just targeting these jobs will make millions of people unemployable. And don't try and regurgitate that age-old mantra of go-get-an-education either. Not every human is capable of being re-trained for a more advanced skill, and we have a hell of a lot more humans on the planet to employ with this next evolution of job decimation. And when you start thinking about the types of jobs you held in order to get an education, you quickly realize that automation is looking to remove the bottom half of the ladder of success. Rather hard to climb that proverbial ladder when the first rung is 12 fucking feet in the air, and you're competing with a few million people.

    Our economy is going to feel this pain well before we start having to worry about any shitty form of AI.

  • That describes me. Perhaps I am an AI.

  • I think part of the problem is that a lot of folks don't really understand how current AI technology which hasn't changed in decades works compared to how our minds work things out. Recall that there was a recent AI project to find the meaning of the Internet, and the answer it came up with was "Cats", because they seem to appear far more often than any other topic on the Internet. That is a mathematical mean or average, the optimal answer, but ask any normal person and cats won't be the answer that they give you

    • current AI technology which hasn't changed in decades

      Uh huh.

      Recall that there was a recent AI project to find the meaning of the Internet

      No, but it sounds interesting. Got a link?

      And therein lies one of the biggest problems with our current AI: it's only able to do things that we ask it

      Yes, but I wouldn't say that's a problem.

      and they need a clear solution.

      No, they're quite capable of working towards a partial solution. Hill-climbing is a thing they do (see the sketch after this comment). They also work with unknowns and play a pretty damn good game of poker. They can make guesses and work with unknown goals. Blind search. They DO need some sort of fitness function or heuristic, though. Same as people.

      You can't exactly ask an AI, "do you think this person lived a happy life?"

      If you feed them enough information about a person then YES, they can most certainly spot the trends and
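
      A minimal sketch of the hill-climbing mentioned above (the landscape being climbed is an arbitrary stand-in): the algorithm never sees a "clear solution", only a fitness score it tries to improve.

      ```python
      # Random-restart hill climbing on a bumpy 1-D fitness landscape.
      import math
      import random

      def fitness(x):
          # The climber only ever sees scores, never "the answer".
          return -((x - 3) ** 2) + 2 * math.sin(5 * x)

      def hill_climb(start, steps=10_000, step_size=0.1):
          x, best = start, fitness(start)
          for _ in range(steps):
              candidate = x + random.uniform(-step_size, step_size)
              score = fitness(candidate)
              if score > best:            # greedy: accept only improvements
                  x, best = candidate, score
          return x, best

      random.seed(0)
      # Random restarts guard against getting stuck on a local bump.
      print(max((hill_climb(random.uniform(-10, 10)) for _ in range(5)),
                key=lambda r: r[1]))
      ```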

  • by cfc-12 ( 1195347 ) on Thursday February 08, 2018 @02:38PM (#56090575)
    30 years ago, when I was taking AI in college, my professor summed this up perfectly. He said "AI is like artificial milk. Artificial milk doesn't have to be as good as real milk, it's just quite handy if you haven't got any real milk."
  • What does this even mean? I can't figure it out, but I wonder if NLP can...
    • I think it's a reference to cattle gates and grids. The gates are used in a fence in place of a normal fence gate, with the advantage that farm vehicles can be driven straight through without needing to stop. The gates are made of metal pipes placed horizontally on the ground, filling the space in the fence. When cattle try to step on the pipes they can't get secure footing, so they won't take a second step and instead back up. Cattle learn to recognize the gate as impassable to them; ranchers paint the pip

  • Our brains have many different parts, each with their own function(s). It's pretty apparent humans have many simple algorithms running simultaneously, in addition to whatever else happens. So the obvious conclusion, which will surprise no one, is that a true AI like a human would just have deep learning (or a number of deep learning modules) as a single component among thousands that would be required to get the emergent behavior that is strong AI. The technique is simple, easy to implement, and accomplishe
  • deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.

    Exactly, precisely this. They can't 'think', and never will. The approach being used is completely wrong, or at least incomplete, because we don't even have a clue how we are capable of 'thinking'.

    • by lorinc ( 2470890 )

      You should check "Neural Turing machine" and "Differentiable neural computer".

      • I don't see anything about those being 'self aware', 'sentient', capable of 'thinking', or anything similar, it's just another flavor of the same things they keep trotting out.
        I'm just going to keep saying it until it's no longer true: we don't know how our own brains are capable of truly 'thinking', therefore we can't build machines that can do that. All these 'learning algorithms', no matter what you call them, aren't going to suddenly become capable of this; that's 'magical thinking'.

        My best guess is
  • Bearing in mind that, ever since the '60s, the AI community has come up, time and again, with exuberant forecasts that never came to pass, it is interesting that some keep issuing equally exuberant forecasts. A human brain emulation by 2020? The Singularity by 2030? Chances are that AI for the foreseeable future will be more of the same: more and more systems that excel at very, very narrow fields.
  • The current AI algorithms seem quite good at automatically solving some well-defined problem, given enough input and learning cycles.

    Is it really such a stretch of the imagination that we will come up with a more automated way, a sort of meta AI, which will detect if a given problem hasn't been defined yet, and how it may be defined, in order to afterwards pass it on to the learning-through-simulation subsystem?

    One DeepMind guy said that AlphaGo would have utterly failed if they had modified the
  • The key to deep learning neural networks is being able to simulate the performance, because the network doesn't have a causal model that we would use to predict an outcome or weed out spurious correlations. That is to say, it can't hold a million simulated conversations with a human, figuring out what works and what doesn't. But it can do that with physics and other STEM branches; it can play a driving simulator or Kerbal Space Program. And it can come up with new concepts within those constraints. I watched

  • Regardless of the limitations of backprop, the traditional logic-based approach will never fully recover from the blow of discovering distributed representation.
