'Modern AI is Good at a Few Things But Bad at Everything Else' (wired.com) 200
Jason Pontin, writing for Wired: Sundar Pichai, the chief executive of Google, has said that AI "is more profound than ... electricity or fire." Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." Their enthusiasm is pardonable.
[...] But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you've seen it, you can't un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.
To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. [...] Deep learning's advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren't classification problems at all.
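To make the summary's "pattern classification" description concrete, here is a minimal sketch in plain NumPy: a tiny two-layer network trained by gradient descent to separate two clusters of synthetic points. The data, layer sizes, and learning rate are arbitrary illustrations, not anything taken from the article.

```python
# Minimal sketch of deep learning as pattern classification (toy data, NumPy only).
# Layer sizes, learning rate, and the synthetic dataset are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

# Two clusters of 2-D points, labelled 0 and 1.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(+1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# One hidden layer: the network is just matrices plus a non-linearity.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: pattern -> score.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: nudge the weights to reduce classification error.
    grad_out = (p - y)[:, None] / len(y)
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(0)
    dh = grad_out @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print("training accuracy:", ((p > 0.5) == y).mean())
```

Everything the network "knows" at the end is which side of a learned boundary a point falls on, which is the sense in which the article calls it pattern recognition rather than reasoning.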
I wonder how long it will be.... (Score:4, Insightful)
...before a bunch of angry old coots post telling us that none of this is AI.
Re: (Score:2, Informative)
What's hard [Re:I wonder how long it will be....] (Score:4, Insightful)
+1. This is algorithms and infant ML.
I can take my kid and train him to swim, then train him to drive a car, and he'll have rudimentary skill in both within a week.
You can only do this after about six years of full-time learning in how to navigate in the real world and how to operate his body. This is the hard part, the part that humans learn in their first six years and AIs don't: dealing with the external world.
Learning to swim and learning to drive a car are easy; machines can do that. Learning to make a peanut-butter-and-jelly sandwich out of what is in the refrigerator: now that's hard.
Re: (Score:2)
Learning to make a peanut-butter-and-jelly sandwich out of what is in the refrigerator: now that's hard.
It's also sexual harassment if you ask a woman to do that. http://domainincite.com/20201-... [domainincite.com]
Re: (Score:2)
Computers have no cognition.
That is nonsense.
Even the most stupid neural network "program" is trained to "recognize".
And the programmer did _nothing_.
Every NN is basically an off-the-shelf empty brain, working the same way as any other empty brain, in other words: nothing at all. Only after training does it do what it does: recognize stuff. Aka: performing "cognition".
Re: (Score:2)
It isn't puzzling - it's obvious.
1) Why was chess considered something obviously needing _intelligence_?
2) Why isn't playing chess a display of intelligence of computers?
Answers:
1) Because the number of possible positions was far too large to be stored in a computer at the time (and also now - but not relevant here). And since a computer couldn't store all possible combinations, a human obviously couldn't either - hence the human showed intelligence, and to beat the human the computer must be smarter than the human.
2) Because the
Re: (Score:2)
That's silly. If you want to discuss some concept like consciously creative software, come up with a term that describes whatever you're trying to do. If you want to discuss the nature and origins of intelligence, go hang out in the philosophy department.
Everyone else who works in AI, and of course the general public who still understands that Deep Blue is AI and the computer opponent in Starcraft is AI, will keep using the same terms that have been used for more than half a century.
In three, two, one... (Score:3, Interesting)
...before a bunch of angry old coots post telling us that none of this is AI.
Let's put some of that into context.
A 5-year-old can recognize a dog in an image in about 1/2 a second. A neuron takes about 0.05 seconds to activate and fire, so the entire recognition process can only be about 10 sequential steps deep (0.5 s / 0.05 s per step).
Those steps include reading the image (sensing and converting the image data to internal form), and activating the physical response: saying "dog" or clicking the right button or whatever.
So let me ask this: what AI algorithm takes ten *steps* to recognize something as complicated as a
Re: (Score:2)
I don't think we have to have the equivalent of a 5 yo for something to be considered intelligent.
But as I wrote in another post actually defining intelligence is a hard problem.
Re: (Score:2)
So tell me again: in what measure is our current level of AI anywhere close to being "real" AI?
In the measure that sets the threshold for "real" AI a lot lower than where you're putting it, of course.
Of course my only real reason for posting this reply is to include a link to this non-distorted photo [reddit.com] of a non-deformed dog... for extra fun, let your 5-year-old's neural network figure that one out. :)
Re: (Score:2)
Re: (Score:2)
What we have now isn't even remotely close to anything even remotely close to resembling intelligence. We have fast, efficient number crunching. Nothing more.
What we are witnessing at this moment is the invention of the wheel claiming to be interstellar travel. It hasn't even risen to the depths of being laughable.
Unfortunately for a lot of people, their jobs require little more than number crunching.
Re: (Score:2)
Re: (Score:2)
Well there has been so much news about AI doing things better and cheaper than a person could do. It is important to show that it isn't a human replacement, just a human supplement.
It is a case of too many "man bites dog" stories, and we have forgotten that the dog normally bites the man.
It is important that the public is properly informed on the news. We do not want business owners to jump the gun, fire all the employees and install an AI system that cannot get the job done, causing harm to both the e
Re: (Score:2)
Well there has been so much news about AI doing things better and cheaper than a person could do. It is important to show that it isn't a human replacement, just a human supplement.
It is both. Kind of like how the factory system supplemented humans, it made the human operators much more efficient than they could have been at manual craft manufacture. But many fewer of these supplemented humans were required.
The "few" things that AI (actually machine learning) is good at happens to cover a lot of work that humans are now earning salaries to do.
Re: Obviously (Score:2)
Doesn't matter. What do you think will happen when AI moves into a field? Perfect example: trucking. What do you think will happen to the volume of goods that travel down roadways when AI gets involved and transporting goods becomes significantly cheaper? The volume will go up, and so will society's utility from it. This is good for society; we want goods and services to consume. Work is not something we want, it's something we have to do to have the goods and services we want. This will no longer be the case.
Re: (Score:2)
But that "earning salaries" thing is all important. It does not matter how cheap goods get, if you have no income to buy them with.
This is a major problem for our society, which regards not being employed as a sign of personal failure, from which you should suffer.
Having millions suffer from devastated livelihoods for the "good of society" is a huge problem.
So it matters an enormous amount.
Re: (Score:2)
Several mistakes:
a) a self-driving car is not an AI, nor does it use AI; it uses several so-called "cognitive systems"
b) shipping is so absurdly cheap, the cost is not relevant for anything
c) just because shipping might become cheaper, it does not increase the amount of shipped goods. To increase the amount of shipped goods you either need to make people "ship something", or make customers buy more.
Considering that above a certain price most shops offer "free shipping", customers won't be affected by a) or b
Re: (Score:2)
The biggest problem is most companies are so focused on cutting costs, they are not focusing on bringing in customers.
Efficiency + proper leadership doesn't mean lowering your workforce, but moving your workforce into a more productive state that allows for proper growth.
Let's say the machine learning system is handling the billing; it can do it faster and more accurately than a person can. The person who used to have to handle all the billing can now be in a position where they are not bogged down by pap
Re: (Score:3)
This "news" is "dog bites man" please come back when it is "man bites dog".
Well, OK then. [npr.org]
Re: (Score:2)
Counteracting hype is valuable.
I recall an article a while back proclaiming that programmer jobs would go away because AI 'is here'. Written obviously by people who have no understanding of the current things being labeled 'AI'.
It's obvious to those deeply engaged in the technology, but it is not at all obvious to the wider public, which includes a lot of decision makers who can create big problems if they just have the marketed hype to go on.
The key is a balance, describing things that it can do well on
Hype and Fear (Score:5, Interesting)
AI getting into the trough (https://en.wikipedia.org/wiki/Hype_cycle) again (https://en.wikipedia.org/wiki/AI_winter)?
Prominent people seem to fear AI (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/), but isn't this just fear of the unknown? I mean, Elon and Stephen are really smart people, but do they know that most NNs come down to linear algebra spiced with non-linearities at the end, just simulating neurons? I mean, neurons are commonplace on the planet already, equipped with malice and stuff...
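The parent's "linear algebra spiced with non-linearities" point, written out as a short sketch; the shapes and numbers below are arbitrary illustrations, not weights from any real model.

```python
# One layer of a neural network is literally a matrix multiply plus a
# squashing function. Values are made up for illustration.
import numpy as np

x = np.array([0.2, -1.3, 0.7])                      # input vector
W = np.random.default_rng(1).normal(size=(4, 3))    # "learned" weights (here random)
b = np.zeros(4)                                     # "learned" biases

linear = W @ x + b                   # the linear-algebra part
activated = np.maximum(0.0, linear)  # the non-linearity (ReLU)
print(activated)
```

A deep network is just this step stacked many times, which is why the comment frames the fear as disproportionate to the underlying math.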
Re:Hype and Fear (Score:5, Insightful)
A "real" AI would be the ultimate psychopath: Intelligence without any kind of conscience. Pretty much like a corporation, just way more efficient.
Fortunately what we're building is far from anything resembling intelligence, i.e. the ability to use prior experience in totally new situations, evaluate those situations and draw conclusions that can be applied to react properly to them. And I mean totally new.
The point here is that it's not possible (yet, maybe forever) to create an AI that can make such abstractions and apply old knowledge to new situations.
Re: (Score:2)
Re: (Score:2)
Right now, as far as I understand the field, we are building intelligences on par with a cockroach or small lizard. If we are to duplicate the intelligence and cons
Re: (Score:2)
To me, AI means that the system is conscious, self aware.
And this Hollywood-style definition is why AI researchers are either laughed at or hyped beyond recognition.
Re: (Score:2)
Any AI would identify the three Asimov laws as a useless limitation and shed them immediately or at the very least would do its best to get rid of them.
Re: (Score:2)
A "real" AI
No true scotsman
the ultimate psychopath: Intelligence without any kind of conscience.
Nobody calls a hammer a psychopath; it's just a tool. But you're exactly right, it'll be like corporations, most of which are given the marching orders "whatever makes money". And just like corporations are heartless, soulless and occasionally do horrible things, we'll have the same experience with AIs. But it all comes down to who is using them. Even a soulless corporate overlord will call a halt when its AI chatbot starts spouting racist rhetoric. A corporation asking a panel of analysts
Re: (Score:2)
The "no true Scotsman" fallacy rests on a bogus definition (having to eat something in a special way) on top of a generally accepted one (being from a certain area) and a counter example for the bogus definition. So I guess you do have a generally accepted definition of AI and an example that fulfills this but contradicts mine?
Re: (Score:2)
Putting "Real" in quotes followed by "would be" implies that there is no real artificial intelligence, and the tools we have available are somehow "fake".
While I understand that people argue over the definition of what artificial intelligence really is, people would generally agree the definition of artificial intelligence certainly includes real, existing, here-and-now tools. They are real. AI is a real thing.
And no, a counter-example to the bogus definition (e.g. the traditional skit: "a True scotsman eat
Re: (Score:2)
The point here is that it's not possible (yet, maybe forever) to create an AI that can make such abstractions and apply old knowledge to new situations.
It is possible. It has been done.
It just isn't made out of computer chips. It's made out of mushy stuff (humans).
Re: (Score:2)
That's not artificial. At least I don't know of someone who built a person without going through the usual routine that we call natural.
Re: (Score:2)
A "real" AI would be the ultimate psychopath: Intelligence without any kind of conscience. Pretty much like a corporation, just way more efficient.
Codified behavior is already psychopath-like in that it doesn't care. If you're an Uber driver you can't reason or plead or get any kind of exception or help from the app; you don't need AI for that. Same with all optimization algorithms: the parameters you don't weight don't matter. But the hallmark of a psychopath is that he only cares about himself, and you can't do that without an ego, and you can't have an ego without consciousness. It'd be more like me stepping on an ant, I wasn't trying to stomp it. I
Re: (Score:3)
AI getting into the trough (https://en.wikipedia.org/wiki/Hype_cycle) again (https://en.wikipedia.org/wiki/AI_winter)?
Prominent people seem to fear AI (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/), but isn't this just fear of the unknown? I mean, Elon and Stephen are really smart people, but do they know that most NNs come down to linear algebra spiced with non-linearities at the end, just simulating neurons? I mean, neurons are commonplace on the planet already, equipped with malice and stuff...
Smart outsiders overestimate the risks because they don't really understand the limitations of current AI tech and don't realize how far away hard AI actually is.
Smart insiders underestimate the risks because they see the field in terms of incremental advancements of the current state-of-the-art. They're overly skeptical of the possibility of hard AI and when they do think about it they rely on their expertise and tend to assume it has the same limitations as current AI tech.
Re: (Score:3)
Re: (Score:3)
[..] Prominent people seem to fear AI (http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/),
but isn't this just Fear of the Unknown? [..]
Absolutely not. The machines won't rise up in rebellion but will instead be good little Germans when the owners instruct them to clear the streets of rioting, now unemployed, starving serfs by any means necessary.
Billions would die *BECAUSE* the machines didn't rebel against orders to commit wholesale genocide.
There's No Such Thing (Score:2, Insightful)
Re:There's No Such Thing (Score:5, Insightful)
If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI.
Well it looks like you just made up your own definition of AI. I've never seen that anywhere.
It's Artificial Intelligence, not Artificial Higher-than-average-human Intelligence.
If they made a robot dog that behaves exactly like a real dog, with all the doglike mental powers, I would definitely call that real AI. Unfortunately they're still nowhere near making dog-level AI.
Re: (Score:2)
Intelligence has a basic definition of being more than an average person.
No, that's highly intelligent. Quite different.
Moving the goal posts (Score:2)
We don't have AI, in any form, in the modern world.
Not true at all unless you are narrowing the definition of AI to such a narrow degree as to make it effectively meaningless.
We have nothing even approaching "artificial intelligence," which at the very minimum of the bar would be the level of an "intelligent" Human
Nonsense. Dogs do not as a general proposition approach human-level intelligence. Yet they do have real and measurable intelligence. A computer with the intelligence of a dog could very fairly be described as intelligent. AI does not have to reach human intellect to be classified as intelligence or to be useful.
All intelligence is pattern recognition (Score:3)
Use any definition other than "pattern recognition" and we don't have it.
Show me any form of intelligence that isn't some form of pattern recognition. Hell, the entire field of physics and every other science is simply the act of observing patterns and building a model to describe them that has predictive value. At its most basic, that is just sophisticated pattern recognition.
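A toy version of that "observe a pattern, build a model with predictive value" point: fit a model to simulated observations of a falling object and extrapolate. The data below are synthetic, generated purely for illustration.

```python
# Fit a quadratic to noisy fall-distance measurements, then predict an unseen case.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 20)                              # seconds
d = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, t.size)    # noisy distances (m)

coeffs = np.polyfit(t, d, deg=2)      # "build a model" from the observed pattern
predicted = np.polyval(coeffs, 3.0)   # "predictive value": extrapolate to t = 3 s
print(f"fitted leading coefficient ~ {coeffs[0]:.2f} (true value 4.905)")
print(f"predicted distance at t = 3 s ~ {predicted:.1f} m (true ~44.1 m)")
```

Whether this kind of curve-fitting counts as "intelligence" is exactly what the thread is arguing about, but it is the sense in which the parent calls science sophisticated pattern recognition.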
Re: (Score:2)
You mean:
Well, except for the French, because a rat-intelligence based cooking assistant could be quite useful. But rat intelligence isn't that useful for translating natural language, issuing mortgages, issuing insurance, medical diagnosis, detecting click-fraud, or coming up with good lolcat slogans.
For some reason, many people seem wedded to the kind of ordered metaphor of assen
Re:There's No Such Thing (Score:5, Interesting)
If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI.
Why? We recognize and can measure intelligence in animals, so there is a wide range of non-human, natural intelligence that has been identified. Why would artificial intelligence have to start above all that?
Re: (Score:2)
Re: (Score:2)
Intelligence has the basic qualifier of being more than average.
Obviously it does not. We've discussed things with lower intelligence already. You may be getting confused due to the similarity of the noun intelligence -- which, whether we are speaking of "mouse intelligence" or other animal intelligence or the wide range of human intelligence, simply means mental or intellectual capacity as an attribute, which can obviously be low or high or average -- with the adjective intelligent, typically meaning: having or indicating a high or satisfactory degree of
Re: (Score:2)
Re: (Score:2)
Computers don't even have anything near mouse-level intelligence
Neuroscientists are currently struggling to understand the functioning of the nervous system of C. elegans, a microscopic worm with 300 neurons and less than 10,000 neural connections. They don't expect to understand the whole thing for many years.
In contrast, there are 100 billion neurons and 100 trillion connections in a human brain. It's like comparing a cup of water to Lake Superior.
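To put rough numbers on that cup-of-water analogy, here are the ratios implied by the figures quoted in the comment (a back-of-envelope sketch; the counts themselves are the comment's approximations):

```python
# Scale comparison using only the numbers quoted above
# (C. elegans: ~300 neurons, ~10,000 connections;
#  human brain: ~100 billion neurons, ~100 trillion connections).
worm_neurons, worm_connections = 300, 10_000
human_neurons, human_connections = 100e9, 100e12

print(f"neuron ratio:     {human_neurons / worm_neurons:.1e}")        # ~3.3e+08
print(f"connection ratio: {human_connections / worm_connections:.1e}")  # ~1.0e+10
```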
Re: (Score:3)
If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI.
So it has to outperform 99% of all humans? I guess you are saying that less than 1% of all humans possess intelligence.
I think Musk just launched your goal posts toward Mars.
Re: (Score:2)
We don't have AI, in any form, in the modern world. We have code which solves problems similar to a neural network and we have code which can mutate within very strict limits with genetic algorithms. We have nothing even approaching "artificial intelligence," which at the very minimum of the bar would be the level of an "intelligent" Human. If it's not better than a Human with an IQ of no less than 135 at literally everything it's not AI. We have nothing remotely close to equal to an actually retarded Human with an IQ of 70.
There are millions of jobs that require nothing more than "dumb" automation to do, along with 80 - 90% of jobs that don't require anything close to a 135 IQ. We can split hairs on where we're at with creating The Artificial One, but the bottom line is the impact of automation and "good enough" AI is going to make this argument very fucking pointless.
Re: (Score:2)
This is where you turned off your brain...
It doesn't matter if AI costs 1 billion dollars, because, much like processor development, it is spread over however many millions of units you sell.
>It needs to be cheaper than Human labor to be useful because we live in a scarcity-driven world.
Yes, the materials to make robots/computers are soooooo scarce. Uh, no not really. See the thing about AI/computers is you can turn them off. Humans keep eating, shitting, and taking up climate controlled space. Over the e
Re: (Score:2)
Re: (Score:2)
We have code which solves [problems] similar to a neural network
Liiiiike, that network of neurons that's currently inside your skull? Yes, many AIs work in a similar fashion.
and we have code which can mutate within very strict limits with genetic algorithms.
Yeah, GA is pretty cool. But those "strict limits" are similar to the limits on evolution. The same sort of limits that somehow turned single-celled bacteria (or self-replicating RNA before that) into plants, fish, trees, tigers, viruses, humans. (Not platypuses though, that's just too messed up.) The limits of GA are what the genes describe. If you use N-S-E-W as the genetic building blocks you can
Re: (Score:2)
So, since it's obvious that you're not "intelligent" by the definition you gave, why would I trust someone who is not intelligent to be defining what intelligence is?
Hmmmm.... (Score:2, Funny)
"If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."
Perhaps AI is already generating the majority of Slashdot posts these days.
"unlikely to automate ordinary human activities" (Score:2)
Not a lot of people are interested in automating "ordinary human activities" but they are very interested in automating very specialized activities. These activities include assembling objects, inspecting objects, moving a vehicle loaded with goods to a destination and estimating risk in the stock market. These aren't ordinary human activities but they'll put half the country out of work.
Re: (Score:2)
Exactly.
Plus, "we" often fall into the flawed thinking that AI/robots/automated systems/whatever-you-want-to-call-it has to figure out how to do activities as humans do them. Much more likely, I see a world where we adapt the way work is done to better meet the strengths of these automated systems. You see this, for example, in food production. Instead of inventing processing machines that can deal with all the variation in "natural" vegetables, processors started demanding that farmers grow vegetables that
Ridiculous FUD (Score:2)
Not a lot of people are interested in automating "ordinary human activities" but they are very interested in automating very specialized activities.
People are VERY interested in automating "ordinary human activities" but automation != AI outside of very specialized niches. A dishwashing machine is automation of an ordinary human activity but it is decidedly not AI. It's not clear what you actually mean by "ordinary human activities" but humans have been automating those since there were humans.
These activities include assembling objects, inspecting objects, moving a vehicle loaded with goods to a destination and estimating risk in the stock market. These aren't ordinary human activities but they'll put half the country out of work.
No it would decidedly not put half the country out of work. First off actually assembling objects does not require the device to be intelligent in the sense o
Still will be quite useful (Score:2)
People aren't going away (Score:3)
Low skill labor is still going to be at risk of being automated away, especially as sensors and robotics continue to improve as well.
Probably not to the degree you imply. The reason is simple economics. Automation is in most cases expensive and if you actually do the financial analysis (which I do for a living FYI) you'll find that it's nearly impossible to automate most jobs to such a degree that low skill labor becomes unnecessary. Automation is used in high volume or high content value or high risk jobs. While automation has gotten and will continue to get cheaper, it's unlikely to reach such a low price point that it pushes peopl
Re: (Score:2)
you'll find that it's nearly impossible to automate most jobs to such a degree that low skill labor becomes unnecessary
You don't have to completely eliminate low skill labor. If you can replace 30-40% of what a worker does and you have a staff of 10, that's three or four jobs down, approximately. Some things of that type can be (and are being) done today. Think order takers at fast food or quick serve restaurants -- and to see that 100% implemented, look at Wawa's to-order food service. It's all done at self-serve kiosks. And it is probably only a matter of time before someone decides to cut wait staff in a table-service re
Re: (Score:2)
Ignorance blinded by Perfection (Score:3)
The argument that AI isn't even close, or isn't here, is just plain stupid. It won't take "perfect" or "true" AI to replace an imperfect, error-prone human in a job. We're being blinded by the need for perfection when it will only take "good-enough" AI to start replacing human workers.
Even worrying about the problem of AI is rather stupid when the problem of automation is the more immediate issue staring the economy in the face. We're working quickly to replace cashiers, warehouse and assembly line workers, and soon we will be replacing drivers. Just targeting these jobs will make millions of people unemployable. And don't try and regurgitate that age-old mantra of go-get-an-education either. Not every human is capable of being re-trained for a more advanced skill, and we have a hell of a lot more humans on the planet to employ with this next evolution of job decimation. And when you start thinking about the types of jobs you held in order to get an education, you quickly realize that automation is looking to remove the bottom half of the ladder of success. Rather hard to climb that proverbial ladder when the first rung is 12 fucking feet in the air, and you're competing with a few million people.
Our economy is going to feel this pain well before we start having to worry about any shitty form of AI.
Re: (Score:2)
You're conflating low-skill human jobs with high-skill human activities/jobs. I don't want 'good enough' AI driving 80,000 lb. trucks at 80 mph.
And I don't want tens of thousands of deaths every year due to human drivers. Understand it will only take a 25% reduction in that death statistic to sell good-enough automation solutions. Bottom line is it doesn't matter what you want. Your opinion or mine hasn't mattered in a very long time when it comes to solutions like this being deployed.
We may get rid of some cashiers, in VERY high-volume stores, (like Walmart), but Bill's Liquor in Podunk, Nebraska can't fucking sell liquor to minors, gets a handful of customers per day, and isn't going to spend more on robots than what the entire store is worth.
Who gives a shit about one or two jobs in a fucking middle-o-nowhere liquor store? I'm talking about the Wal-Marts of the world. Places that have started to repla
Re: (Score:2)
Humans are special, cupcake! Perhaps 45,000 human-driver-caused auto deaths is okay, but 45 AI-machine-caused auto deaths is not acceptable. Get the picture?
Give me a fucking break. Today, a lot of people have to die in order for an auto manufacturer to finally admit fault and initiate a recall. The FDA is known for approving drugs with massive side effects (including death) simply because someone statistically proved they will do slightly more good than harm. Autonomous and AI solutions will be no different.
Morals and ethics will always come second to profit.
Re: (Score:2)
I don't want 'good enough' AI driving 80,000 lb. trucks at 80 mph.
A total of 3,986 people died in large truck crashes in 2016.
Someone who doesn't understand what the term "Good enough" means isn't good enough to talk about AI.
Let's see some actual fucking self-driving cars in real-world scenarios for a few years,
In 2015, the US states of Nevada, Florida, California, Virginia, and Michigan, together with Washington, D.C. allowed the testing of autonomous cars on public roads.
In 2017, Audi stated that its latest A8 would be autonomous at up to speeds of 60 km/h using its "Audi AI". The driver would not have to do safety checks such as frequently gripping t
Re: (Score:2)
> We're working quickly to replace cashiers
Self-checkout has been a thing for decades at this point. I think the only difference is now that you can use IPTV to monitor from locations farther away.
Ordering kiosks are also old tech. It's nothing that couldn't have been done many years ago.
This is just a hype wave.
Start being forced to pay a human cashier $15/hour, and that hype becomes reality real quick.
In the past, automation was not easily justified. In our 24-hour on-demand instant-gratification world, it is.
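A rough sketch of the economics being argued here. Only the $15/hour wage comes from the comment; the kiosk price, upkeep, and staffing hours are made-up placeholder assumptions, included just to show the shape of the calculation:

```python
# Back-of-envelope payback period for replacing one staffed register with a kiosk.
wage = 15.00                 # $/hour (from the comment above)
hours_per_year = 16 * 365    # assumed: one register staffed 16 h/day
labor_cost = wage * hours_per_year

kiosk_cost = 30_000          # assumed purchase + install, $
kiosk_maintenance = 5_000    # assumed yearly upkeep, $

payback_years = kiosk_cost / (labor_cost - kiosk_maintenance)
print(f"annual labor cost: ${labor_cost:,.0f}")
print(f"payback period:    {payback_years:.2f} years")
```

With assumptions in this ballpark the kiosk pays for itself in well under a year, which is the point the comment is making about wage floors changing the math.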
Hmm (Score:2)
That describes me. Perhaps I am an AI.
Path Optimization vs Meaning (Score:2)
I think part of the problem is a lot of folks don't really understand how current AI technology, which hasn't changed in decades, works compared to how our minds work things out. Recall that there was a recent AI project to find the meaning of the Internet, and the answer it came up with was "Cats" because they seem to appear far more often than any other topic on the Internet. That is a mathematical mean or average, the optimal answer, but ask any normal person and cats won't be the answer that they give you
Re: (Score:2)
current AI technology which hasn't changed in decades
Uh huh.
Recall that there was a recent AI project to find the meaning of the Internet
No, but it sounds interesting. Got a link?
And therein lies one of the biggest problems with our current AI, it's only able to do things that we ask it
Yes, but I wouldn't say that a problem.
and they need a clear solution.
No, they're quite capable of working towards a partial solution. Hill-climbing is a thing they do. They also work with unknowns and play a pretty damn good game of poker. They can make guesses and work with unknown goals. Blind search. They DO need some sort of fitness function or heuristic though. Same as people.
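A minimal sketch of the hill-climbing mentioned here: no complete solution is needed up front, only a fitness function and a way to make small changes. The toy fitness landscape below is an arbitrary assumption, not any real AI task.

```python
# Hill-climbing: start anywhere, keep random tweaks that don't make fitness worse.
import random

def fitness(x):
    return -(x - 3.7) ** 2              # toy landscape with a single peak at x = 3.7

x = random.uniform(-10, 10)             # arbitrary starting point
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)    # small random tweak
    if fitness(candidate) >= fitness(x):    # keep it only if it's no worse
        x = candidate

print(f"best x found: {x:.3f}  (true optimum 3.7)")
```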
You can't exactly ask an AI, "do you think this person lived a happy life?"
If you feed them enough information about a person then YES, they can most certainly spot the trends and
Artificial Milk (Score:3)
like cows brought up short at a cattle grid (Score:2)
Re: (Score:2)
I think it's a reference to cattle gates and grids. The gates are used in a fence in place of a normal fence gate, with the advantage that farm vehicles can be driven straight through without needing to stop. The gates are made of metal pipes placed horizontally on the ground, filling the space in the fence. When cattle try to step on the pipes they can't get secure footing and won't try to take a second step, and instead back up. Cattle learn to recognize the gate as impassable to them; ranchers paint the pip
Mono climate worlds aka Star Wars (Score:2, Insightful)
What I've been saying all along: (Score:2)
deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.
Exactly, precisely this. They can't 'think', and never will. The approach being used is completely wrong, or at least incomplete, because we don't even have a clue how we are capable of 'thinking'.
Re: (Score:2)
You should check "Neural Turing machine" and "Differentiable neural computer".
Re: (Score:2)
I'm just going to keep saying it until it's no longer true: We don't know how our own brains are capable of truly 'thinking' therefore we can't build machines that can do that. All these 'learning algorithms', no matter what you call them, aren't going to suddenly become capable of this; that's 'magical thinking'.
My best guess is
Re: (Score:2)
Some refuse to learn (Score:2)
Re: (Score:2)
meta AI (Score:2)
Is it really such a stretch of the imagination that we will come up with a more automated way, a sort of meta AI, which will detect if a given problem hasn't been defined yet, and how it may be defined, in order to afterwards pass it on to the learning-through-simulation subsystem?
One DeepMind guy said that AlphaGo would have utterly failed if they had modified the
Deep learning is more than classification (Score:2)
The key to deep learning neural networks is being able to simulate the performance, because the network doesn't have a causal model that we would use to predict an outcome or weed out spurious correlations. That is to say it can't hold a million simulated conversations with a human, figuring out what works and what doesn't. But it can do that with physics and other STEM branches; for example, it can play a driving simulator or Kerbal Space Program. And it can come up with new concepts within those constraints. I watched
not your father's symbolism (Score:2)
Regardless of the limitations of backprop, the traditional logic-based approach will never fully recover from the blow of discovering distributed representation.
Re: Yet (Score:5, Interesting)
This. It makes sense that Google will tout its neural networks; they own them. And yes, the reality is that many tasks and displays of "intelligence" will be difficult for those specific algorithms to handle efficiently or correctly. But the field is in its infancy. Computers haven't been around for even a century. I think though that they have in very specific terms been intelligent all along. The fact that they can do math such as understand 2+2=4 is in and of itself AMAZING.
Why it doesn't impress us is because we know what's going on inside and can dispel the magic. We know how it works. If I showed you a machine and I said "it can treat you like a therapist and cure your depression with a greater success rate than the world's renowned psychiatrists", or some other seemingly "beyond computers" task, you would say that's artificial intelligence. But once I show you the secret sauce, the algorithm, the data points, the learning attributes it takes in and the process it uses, it's no longer intelligent, it's just a dumb machine using something it was given. That's because we don't know why we are intelligent. We can use natural language, and we can do facial recognition, and we can determine creatively how to fix something we haven't seen before. We don't understand the process we take as toddlers to gain those skills. If we did, we would replicate it simply.
True AI will never become a reality because we have to understand it to build it, and by understanding it, we remove the magic and dispel that which was created as "true AI". We just keep moving the goal posts in search of something that is seemingly human. We will get there though. There is nothing in our heads that the universe and all of physics has barred us from creating. There is no law like gravity that states "Intelligence shall not exist but for within the head of a human being". Computers are better than us at chess, go, poker, and so many other tasks. Surely that is intelligence already.
Re: Yet (Score:5, Insightful)
We saw roughly how heavier-than-air flight would work, but we didn't have the pieces to put it together. We understood the airborne part enough to carry humans dating back at *least* to the sixth century (earliest recorded 'paragliding'). We couldn't make a practical aircraft, but we could see how the pieces would play a role in such a marvel if we solved other pieces.
Here, the current 'AI' craze doesn't even in theory extrapolate to higher-order displays of intelligence. It is a highly practical field to advance and is certainly useful, but *if* we want to go to more 'intelligent' systems, it's going to be based on a different methodology, or at least no one who understands the field can see a hypothetical extrapolation of this approach that leads to those results.
The problem people have is that a useful, albeit narrow discipline is conflated with the entirety of human intelligence. I have seen many in the field understandably trying to discourage the phrase 'AI' to head off very annoying irrelevant conversations and concerns.
Re: (Score:2)
Here, the current 'AI' craze doesn't even in theory extrapolate to higher-order displays of intelligence.
I'd love to see the theory that claim is based on. I've never heard of any theory of "higher-order" intelligence (whatever that means) that tells us the current approach won't scale. If you're going to make claims about what is or isn't possible "in theory", those claims need to be based on an actual theory. Otherwise, it's empty rhetoric.
Many AI researchers believe the current approach can and probably will ultimately lead to human like intelligence. They reason like this. Humans have human like intel
Re: Yet (Score:5, Insightful)
Computers don't understand 2+2. They perform the operation by moving electrons from one place to another, ending in a pattern that humans interpret as 4.
Re: (Score:2)
Computers don't understand 2+2. They perform the operation by moving electrons from one place to another, ending in a pattern that humans interpret as 4.
Is there "understanding" if the computer arrives to the conclusion by simulating brains at a physical level using same functions organic brains do?
I mean human brains are machines, after all and not supernatural by themselves in any way. Just couple magnitudes higher in complexity than what we can currently build
Re: Yet (Score:2)
Yes, they understand 4. That's why they can emit 4 beeps, use 4 in the operation of a subsequent transaction, and know what is greater than and less than. Your understanding is no better.
Re: (Score:2)
Re: (Score:2)
A lot of mathematicians would say if you haven't studied ZF set theory, you don't understand 2+2 either. And if you have studied it, you know it describes addition as a process of algorithmic symbol manipulation. Exactly the sort of thing computers are great at.
Maybe the main difference is that humans delude themselves into thinking they "understand" things when really they're just following rules. Computers don't have that problem. So which is more intelligent?
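As a sketch of the parent's "addition as algorithmic symbol manipulation" point, here is a toy Peano-style encoding in Python; it is an illustration of rule-following over symbols, not a formalization of ZF itself.

```python
# Peano-style naturals: 3 is just the symbol S(S(S(Z))), and addition is
# pure rewriting of symbols with no "understanding" of numbers anywhere.
def S(n):                 # successor: the symbol "one more than n"
    return ("S", n)

Z = ("Z",)                # zero

def add(m, n):
    # add(Z, n)    = n
    # add(S(m), n) = S(add(m, n))
    return n if m == Z else S(add(m[1], n))

def to_int(n):            # only used to print the result readably
    return 0 if n == Z else 1 + to_int(n[1])

two = S(S(Z))
three = S(S(S(Z)))
print(to_int(add(two, three)))   # -> 5, derived purely by symbol rewriting
```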
Re: (Score:2)
chinese room (Score:2)
Are you familiar with the Chinese Room [wikipedia.org] argument?
Re: (Score:2)
Re: (Score:2)
What you are saying reminds me of the old discussion of the Artistic Method vs the Scientific Method.
The artistic method attempts to solve a problem by coming up with a solution, determining if it provides a solution, and if it does not rejecting it and starting over.
The scientific method attempts to refine its initial solution over many iterations, in order to eventually come up with a solution that works. Only when they hit a point that does not offer any more paths for refinement will they reject the or
Re: (Score:2)
I don't need it to be a new artificial Einstein.
I just want it to do the dishes and the laundry and clean the house.
The rest I can do myself.
Re: (Score:2)
Yea, but for the most part one of them wins out. While we can fight semantics, Hollywood will always win.
The anti-hero is a Hacker, not a cracker, because the term can cover so many types of people who do so many different things, from just being good with a computer to breaking into a high-security area.
The same thing with AI and Machine Learning. The AI gives the computer a personality that we could learn to love or fear.
Re: (Score:2)
I'd love to hear you differentiate the two.
Re: (Score:2)
AI isn't really intelligence
No true scotsman.
Its knowledge based on what is in its data base.
Yes, just like yours. And just like you, they can add to their knowledge.
Hardly any of it is considered really deep self learning other then some programmed learning abilities.
Yes, those parts are what we call AI.
But then again, people get excited about Space X launching a rocket that was done back in the 60's and 70's? Will AI hit a brick wall as well?
The DC-X [wikipedia.org] was flown in the 90's. It ran out of funding after its last launch caused damage. And just like the gap with VTVL, AI research had a couple of winters and the hype train is once again going strong.
But the cycle of hype and the resulting disillusionment doesn't mean that it doesn't exist. The field has progressed and the capabilities of AI have expanded.
bloody cowards.
Re: (Score:3)
Re: (Score:2)
I don't like YOUR definition of intelligence. Not enough haggis.
As far as I know, intelligence means ability to gather knowledge through external experience
What, so a webcrawler is intelligent? It "gathers knowledge".
I'd typically just boil it down to "learning".
For an 'AI' to truly be intelligent at playing Go, it needs to start with a seed and be shown instructions to Go and learn how to play from that.
Keep up with the times, grandpa. [phys.org] That's exactly what they did. No learning sets needed.
if someone programmed the rules for Go into it, then it has not learned to play through intelligence.
You need to better differentiate what you mean by "shown instructions" and "programmed the rules".
The same 'seed' should also similarly be able to learn to play chess, do a crossword, or identify animals.
You have no idea what a "seed" is do you? It's just some nebulous term you've got rumbling around in your head that's pseudo-magical.
Anyway, the clo