Are Companies Overhyping AI? (hackaday.com) 179
When it comes to artificial intelligence, "companies have been overselling the concept and otherwise normal people are taking the bait," writes Hackaday:
Not to pick on Amazon, but all of the home assistants like Alexa and Google Now tout themselves as AI. By the most classic definition, that's true. AI techniques include matching natural language to predefined templates. That's really all these devices are doing today. Granted, the neural nets that allow for great speech recognition and reproduction are impressive. But they aren't true intelligence, nor are they even necessarily direct analogs of a human brain... The danger is that people are now getting spun up that the robot revolution is right around the corner...
[N]othing in the state of the art of AI today is going to wake up and decide to kill the human masters. Despite appearances, the computers are not thinking. You might argue that neural networks could become big enough to emulate a brain. Maybe, but keep in mind that the brain has about 100 billion neurons and almost 10 to the 15th power interconnections. Worse still, there isn't a clear consensus that the neural net made up of the cells in your brain is actually what is responsible for conscious thought. There's some thought that the neurons are just control systems and the real thinking happens in a biological quantum computer... Besides, it seems to me if you build an electronic brain that works like a human brain, it is going to have all the problems a human brain has (years of teaching, distraction, mental illness, and a propensity for error).
Citing the dire predictions of Elon Musk and Bill Gates, the article argues that "We are a relatively small group of people who have a disproportionate influence on what our friends, families, and co-workers think... We need to spread some sense into the conversation."
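The "matching natural language to predefined templates" the summary describes can be sketched in a few lines. This is a hypothetical toy (the patterns and intent names are invented), not how Alexa or Google Now is actually implemented:

```python
import re

# Invented templates in the spirit the article describes: the "assistant"
# just matches an utterance against canned patterns and extracts slots.
TEMPLATES = [
    (re.compile(r"set (?:a|an) (?:alarm|timer) for (?P<when>.+)", re.I), "set_alarm"),
    (re.compile(r"what(?:'s| is) the weather(?: in (?P<city>.+))?", re.I), "get_weather"),
    (re.compile(r"play (?P<song>.+)", re.I), "play_music"),
]

def match_intent(utterance):
    """Return (intent, slots) for the first matching template, else None."""
    for pattern, intent in TEMPLATES:
        m = pattern.match(utterance.strip())
        if m:
            return intent, {k: v for k, v in m.groupdict().items() if v}
    return None

print(match_intent("Set an alarm for 7 am"))  # ('set_alarm', {'when': '7 am'})
print(match_intent("Why is the sky blue?"))   # None
```

Anything outside the templates simply falls through to "let me search that for you"; there is no understanding to fall back on, which is the article's point.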
We don't really have true 'AI' (Score:2)
I'd consider them more voice-activated assistants from the consumer's end. On the machine-learning side, it's just heuristics over a pile of data to find patterns.
Re: We don't really have true 'AI' (Score:3)
Of course they're overhyping it... (Score:3)
AI has been overhyped since the computer in Willy Wonka refused to tell where the golden tickets were...
Now that it has been monetized, the people selling it are going to hype it up like everything else that gets sold for profit.
Also, of course, "Machine Intelligence" isn't the same thing that we consider human intelligence to be. And, no, they're not likely to go SkyNet on us (anytime soon), but the Flash Crash of 2010 was a small taste of how AI can, and does, affect our lives, and as AI gets more integr
Re: (Score:2)
What you call "True AI" is what science-fiction literature calls True AI. In computer science, on the other hand, AI is about solving problems much the way an organism would: rather than having all the steps hard-coded in, it has a "simpler" method whereby it can pick up and adapt to new forms of input and realize that it needs a different form of output.
My AI professor (Dr. David Anderson, a name sure to be a victim of AI) was studying diagrammatic reasoning, where if a computer was presen
Re: (Score:2)
Funny, I showed an infographic to my friend's baby and she didn't seem to recognize it at all. She doesn't get sarcasm either.
Adult humans have an amazingly flexible brain. They're also the end product of years of high bandwidth training. We are definitely not building adult human brains. But we might be building mice, or dogs and cats, or chunks of monkey brains, or little pieces of humans, and that is AI.
If we're on the right track, these things will scale up into the Star Trek AI that Slashdot seems
Re: (Score:2)
Every time I see what passes for "AI" today, I think of that scene in Weird Science [wikipedia.org] where Wyatt shows Gary that the only woman his computer is capable of creating is a "5th grade slow learner, boring dipshit." Well, 30 years later and poor Alexa/Cortana/etc. still haven't even gotten close to that low standard.
Re: (Score:2)
That's a bit like saying we don't have true physics because we've not yet invented FTL space travel.
The fact that something is understandable is neither here nor there, all AI requires is that something be sufficiently clever to be deemed a basic (even if entirely explainable) intelligence.
AI is really just the branch of computer science aimed at producing algorithms that result in an outcome that is intelligent in appearance. Yes, the end goal is a real, actual consciousness that's as intelligent or more s
Re: (Score:2)
Indeed. Calling these "AI" is basically a marketing lie, nothing else.
Re: (Score:3)
I don't even agree with the other premise - that the "rate of computation of the brain" is somehow unfathomably beyond the reach of today's computers, so it's not worth considering.
First off, even their simple statement that there's 1 quadrillion synapses in the brain is hard to defend. Adult human brains are estimated at anywhere from 100-500 trillion. But let's ignore this. The way they're presenting the argument is that you're supposed to think of each neuron as a processor, and wow, look at all of those
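The arithmetic behind this comment is easy to check. All figures below are the rough, commonly quoted estimates the thread is arguing about, not measurements:

```python
# Back-of-envelope sketch of the neuron/synapse numbers under discussion.
neurons = 100e9         # ~1e11 neurons in an adult human brain (for context)
synapses = 150e12       # estimates run ~100-500 trillion; the 1e15 figure is the high end
firing_rate_hz = 100    # generous upper bound on sustained firing rate

synaptic_events_per_sec = synapses * firing_rate_hz  # ~1.5e16 events/s
accelerator_flops = 1e14                             # order of magnitude for ~100 TFLOPS hardware

print(f"brain: ~{synaptic_events_per_sec:.1e} synaptic events/s")
print(f"gap vs one accelerator: ~{synaptic_events_per_sec / accelerator_flops:.0f}x")
```

Note that a synaptic event and a floating-point operation are not comparable units, which is part of the comment's point: simply counting "processors" tells you very little.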
Re: (Score:2)
"There's very little point, in fact, for a business to ever create a universal AI that can think like a human. "
Have to take exception with this one. Training an ML algorithm to solve problems is a time-intensive, expensive proposition, and training one to play chess (for example) does nothing for playing go, poker, or even tic-tac-toe.
Which means that self-adaptive learning systems would in fact be just the thing for "business" to want to develop.
Re: (Score:2)
The definition of AI has been clear and unchanging for forty years or so now, since there were AI researchers using minis at several universities back in the 70s.
That's not really true. Back in the heyday of the MIT AI Labs, the Cyc Project and so on, researchers assumed that we were a decade or so away from producing machines that could reliably pass the Turing Test, using one of a dozen very promising looking approaches. By the early '90s, most of that had faded and most researchers moved away from using the term AI, because it associated them with the overhype of AI from the '70s and '80s and with science fiction. Terms like machine learning became more preval
Please read before making Betteridge's Law posts (Score:5, Interesting)
This has nothing to do with Betteridge's Law. If that applied to any question, the answer to any Ask Slashdot question would also be no. That's absurd. This headline is asking your opinion of whether companies are overhyping AI. Betteridge's Law does not apply here.
Ian Betteridge observed that sometimes journalists who hadn't adequately researched a story and couldn't confirm the story would still run with it. To avoid printing false statements, journalists would write their headlines as questions about the facts. The classic example was "Did Last.fm Just Hand Over User Listening Data To the RIAA?" where the headline is asking a question about the facts. The headline insinuates that the answer is yes, without directly saying so. The journalist doesn't have the evidence to be confident it happened but ran with the story anyway. It's poor journalism and basically a form of clickbait. That's where Betteridge's Law applies, where the question can be answered with 'no' instead of assuming that the answer should be yes. It's observing that the journalist isn't confident about what the facts are, so the reader shouldn't be, either. Betteridge's Law is a criticism of reporting unsubstantiated stories.
The headline here isn't asking a question about the facts. Instead, it's asking a question of opinion, specifically whether businesses are overhyping AI. Betteridge's Law does not apply here. It does not apply in situations such as this story.
If you're going to mention Betteridge's Law, please understand what it actually means. It doesn't apply to this headline or story. The headline is a question to solicit your opinions and encourage discussion.
Ideas (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Computers have won against humans at playing chess and Go, for example. The list of tasks that you can now do with a computer continues to increase. But is it a universal solution? No.
Re: (Score:3)
It is already here (Score:3)
Re: (Score:2)
Knowledge, intelligence, cognition, and thought are all distinct things. I studied AI as an undergraduate, rule-based expert systems to be precise. Those are very much considered a form of AI yet most people do not think "expert system" when they hear "AI".
The examples you cite might or might not involve original thought, but to a human user it might appear that they do.
I think part of the confusion is that in the field, AI can mean lots of different things, while in popular culture people think of
Re: (Score:2)
Re: (Score:2)
Betteridge Says: (Score:4, Funny)
Well I'll be damned.
Doesn't have to be self-aware to be damaging (Score:2)
No company would overhype a product (Score:4, Funny)
Just wouldn't happen.
Right?
Therapy (Score:2)
?
Overhyping is overhyping (Score:2)
I would say the biggest issue is that there isn't an agreed upon definition of what constitutes "AI". To get an idea of this, technically from the field, OCR is "AI". Yet commonly I think you'd find people expecting some sort of robotic terminator-like lifeform that is capable of all things human, including emotions (if limited). When you have that sort of range... I mean I could make a case that genetically engineered oranges are "AI". They are computing sweeter flavor (and radioactive cancer).
From my
Re: (Score:2)
I would say the biggest issue is that there isn't an agreed upon definition of what constitutes "AI".
Actually there is.
Visit a university, or the relevant web sites, and the definition(s) are right there.
Companies would never overhype anything! (Score:2)
Would they?
Absolutely not (Score:3)
Not at all. No, no, no, no, no. Not even a little bit round the edges.
Oh, alright then, yes.
Companies No, Prognosticators Yes (Score:4, Funny)
Re: (Score:2)
Reminder to self: "everything that gets media attention in the US is a hype".
Quantum handwaving (Score:5, Insightful)
Worse still, there isn't a clear consensus that the neural net made up of the cells in your brain is actually what is responsible for conscious thought. There's some thought that the neurons are just control systems and the real thinking happens in a biological quantum computer...
Penrose, is that you?
Seriously, there is no evidence for any kind of "quantum consciousness", nor any convincing theory as to why a neural net would be insufficient to produce consciousness. I suspect that the main attraction of this idea is that it is a non-religious excuse for believing consciousness to be magical or special in some way.
Re: (Score:2)
Can't be. Penrose dropped that hypothesis. It was a cool idea in the 80s, but not so much anymore.
Re: (Score:2)
No, actually there is zero evidence in the other direction too. Physicalism is just a fundamentalist quasi-religious belief, it is not grounded in fact. In fact, what you people would need to do is to stop claiming that neural nets are _sufficient_ to produce intelligence and consciousness, until you have some actual evidence for that. All you have at this time is unproven assumptions (a very beloved "proof" technique in religious circles) that basically say "everything is physical, hence consciousness and
Quantum handwaving (Score:2)
Seriously, there is no evidence for any kind of "quantum consciousness", nor any convincing theory as to why a neural net would be insufficient to produce consciousness. I suspect that the main attraction of this idea is that it is a non-religious excuse for believing consciousness to be magical or special in some way.
You said that using language. Language is the evidence.
What is the sound of one hand clapping? Quantum handwaving.
Re: (Score:2)
Re: (Score:2)
Are there any promising theories of how a neural network could produce consciousness? Is it gradual, or does consciousness arise all of a sudden after a certain threshold is exceeded? Like, with x billion neurons and y billion connections you are not conscious, but having (x+1) billion neurons you suddenly are. If it is gradual, then what does it mean to be less conscious or more conscious? There are many unknowns, and I haven't seen any convincing ideas for how a NN could explain anything related to consciousness.
I have certainly experienced what I would consider to be partially conscious states, so yes, I would say that it is likely gradual and that things can be conscious to different and varying degrees. As to simply counting connections, I would guess that particular types of organization and weighting would also be required. (We aren't quite just a big pile of linear algebra [xkcd.com].)
There are people doing actual research in this area. For example, some researchers have tried monitoring brain activity while a patient g
Clever statistics (Score:2)
Re: (Score:2)
I've also heard humans referred to as biased linear regressors. Also spot on.
Re: (Score:2)
I like that term. It accurately describes what is going on. Now, there still will be some stupid people claiming that human brains are doing nothing but statistics and hence intelligence and consciousness happen when you pile enough statistics on top of each other, but there is always an ample supply of smart-stupid people with selective blindness. At least psychology knows their number, namely that these people are not able to live with uncertainty, so they make up sophisticated-looking pseudo-explanation
Better title (Score:3)
Is Slashdot overhyping headlines?
Any new technology is always overhyped. (Score:2)
Re: (Score:2)
Its counter productive for proponents of AI. (Score:2)
Too much Hype.
A deep neural network and a massive dataset bring you a statistical correlation where one is expected to be found, and it's called AI now?
This is impressive in itself, but even futurist singularity proponents like Ray Kurzweil are not calling this 'AI' in the sense of 'The AI' for the singularity, or true 'thinking' AI in the sense of human cognition.
There is a massive gap in understanding the definition of AI; it's a 'magic hat' term that is ambiguous and over-reaching. The progress should be appreciated f
What's the difference between AI and algorithms? (Score:5, Funny)
The marketing department
Not more than usual (Score:2)
Re: (Score:2)
Ah, Marvin "the idiot" Minsky. I am really glad he is not part of that conversation anymore. He has done immense damage to Science.
Heck no Alexa and Siri are perfect...... (Score:3)
Hey Siri, "How many cylinders in a V6 engine?" .... let me search that for you. Seriously?
Hey Siri, "How many doughnughts are in a dozen doougnughts?" let me search that for you.
Hey Siri, "What is the nominal size of a 2X4 board?", it's 2x4=8
Hey Siri, "What time is it on Mars?", I am sorry, I don't know where that is.
So yeah, I am thinking AI is perfect.
Seriously, there is no "I" in AI. There is no intelligence.
Re: (Score:2)
Re: (Score:3)
Your argument makes no sense. Let's not ask Siri, let's ask a random 3 year old human:
Hey kid, "How many cylinders in a V6 engine?" ... blank stare,
Hey kid, "How many doughnughts are in a dozen doougnughts?" ... blank stare.
Hey kid, "What is the nominal size of a 2X4 board?" ... blank stare.
Hey kid, "What time is it on Mars?" ... Dinner time, yay!
I think you will admit that a 3-year-old human may not know all those things but can at the same time be vastly intelligent. Hence it is no demonstration of a lack of intelligence by Siri and such.
By the way, how intelligent do you need to be to spell "doughnuts" so badly?
That's a stupid rebuttal: the three year old doesn't have the information requested while Siri does.
Yes (Score:2)
"Are Companies Overhyping AI?"
Notwithstanding Betteridge's Law, the answer is "yes". Yes they are.
Next clickbai- err, I mean "story", please.
not all the same problems (Score:2)
Besides, it seems to me if you build an electronic brain that works like a human brain, it is going to have all the problems a human brain has (years of teaching, distraction, mental illness, and a propensity for error).
If you created an electronic brain (which no one is remotely close to doing), the one advantage it would have is the ability to copy. That's the same advantage that expert systems have today. We don't have intelligent self-driving cars, but once we cover enough edge cases and the software controlling a self-driving car becomes safer than the average driver, then we can copy that to 100k other cars. We can also continue to improve it and then copy that improvement. The advantage that computers have is reliabili
Re: (Score:2)
It is artificial intelligence, though it might be better if we used another word for "artificial" and called it "fake intelligence".
They wanted to call it pseudointelligence, but it was determined that not enough people would know what that meant.
Re: (Score:2)
I like "FI" (fake intelligence), because that is what it is: it can fake being intelligent for a limited task. In a sense, it is really good automation. And such a thing is hugely useful, because we are now finding out that many tasks we thought require intelligence actually do not (like playing Go). Many of these tasks are accessible to fake intelligence.
Of course, the other thing we find these days (even though many cling to a desperate belief it is otherwise) is that we do not even have a hint of general
Re: (Score:2)
we do not even have a hint of general intelligence in anything computers can do and we have looked really, really hard.
That's because nobody is really pursuing general intelligence. I'm not sure we even have the technology yet to truly pursue general intelligence but I would likely start with studying the amoebas and then move on to ants. I read an article recently about how unlike the rest of your body, the cells in your brain actually all have different DNA. If this is true, then it means your brain is millions of evolving organisms all working together. You are basically emergent behavior of a large group of microorg
In short? (Score:2)
Yes!
Yes they are!
Last statement is the best (Score:2)
This one:
"Besides, it seems to me if you build an electronic brain that works like a human brain, it is going to have all the problems a human brain has (years of teaching, distraction, mental illness, and a propensity for error)."
Is the single truest statement about AI I have read in a long time.
I would also add, in addition to that list (teaching, distraction...):
logical fallacies, including blind loyalty, confirmation bias, etc.
quarrelsome (among themselves)
greed
and my personal favorite:
laziness
Re: (Score:2)
That one is older, but nonetheless quite true. It is conveniently ignored by the AI fanatics, in particular those that think simulating a human brain would create AI. If feasible, it would have exactly all these issues and be basically useless.
Re: (Score:2)
Well, not really. What they have run into is that training and debugging more sophisticated classifiers is already pretty much a nightmare. These things are nowhere near intelligent, though. Perhaps the main problem is that you cannot "explain" things to a statistical classifier; you can only show it things, and what you show might not actually be exactly what you believe you are showing. And the second problem is that if you do not carefully synthesize the training data, you usually do miss things, as the fas
Semantics (Score:3)
This particular battle of semantics has been going on for a while now, and much like previous battles (hoverboard, drones, HDR in 4K), it'll be won by advertisers who don't know better.
The point is building interest in a generic marketing term even if it comes at the cost of the original meaning of the word. Scientific or technical terms (and in some cases, terms made up by sci-fi authors) have always been appropriated, it'll keep happening.
But is AI being overhyped? Definitely. Because behind all the AI craze, the real interest for several companies is in user data collection which is becoming the new coin of the day. It is a very convenient way for tech companies to imply that there are some vague gains to be had using their products while not mentioning that they are harvesting your data or saying that they need to do it "because the AI needs it to work better".
Notice how it's also super convenient for companies and services to use vague terms like that, because they not only "fancy up" their products, they also serve as a convenient scapegoat when things go south (see how "algorithms" is loosely employed by social media networks to shift the blame for mishaps).
For those who didn't see the dimension of this overhyping just yet, here's a comprehensive list of a whole ton of products and services where the term is used, most of which have zero AI in them:
https://medium.com/imlyra/a-li... [medium.com]
Some of them barely have any intelligence on them at all.
Let's get real... (Score:2)
Let's begin with the state of the art. The voice and face recognition technology is the same as what defeated human players in Go. While it is not yet the same kind of general intelligence as humans have, it proves that you don't need the same number of neurons and connections as a human to be very, very intelligent in at least a narrow domain. They are true intelligence by any measure, just not as general as human intelligence. Furthermore, human-level intelligence is not required for machine intelligence to be a p
Beyond Appearances (Score:2)
Despite appearances, the computers are not thinking. You might argue that neural networks could become big enough to emulate a brain. Maybe, but keep in mind that the brain has about 100 billion neurons and almost 10 to the 15th power interconnections. Worse still, there isn't a clear consensus that the neural net made up of the cells in your brain is actually what is responsible for conscious thought.
This is very much correct. Much of what we call artificial intelligence today we could instead call functions of best fit. We emulate aspects of biology in systems and these aspects allow a pseudo-intelligent matching to occur. The matching function might be able to identify your face to unlock your phone or identify lingual patterns to generate language that a human speaker will feel is somewhat natural or identify what animal an image is of or even to beat human players at Jeopardy.
This isn't consc
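"Functions of best fit" is meant almost literally here: ordinary least squares is the textbook example of a system that recovers a pattern without understanding it. A minimal sketch, using only arithmetic:

```python
# Fit a line y = slope*x + intercept to points by ordinary least squares.
# A toy stand-in for "pseudo-intelligent matching": it finds the pattern
# in the data, but there is no comprehension anywhere in the process.
def fit_line(points):
    """Return (slope, intercept) minimizing the squared error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Points generated from y = 2x + 1: the fit recovers the rule exactly.
slope, intercept = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
print(slope, intercept)  # 2.0 1.0
```

Swap the line for a deep network and the points for millions of labeled images and the principle is the same: a best-fit function, not a thinker.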
call it deep learning, machine learning (Score:2)
Re: (Score:2)
Re: (Score:2)
Let's face it, technology was exciting there for a while, but once PCs advanced to the level where they could show video it kind of got stuck.
By "video" do you mean "textured triangles"? Because decent video came a long time before we got decent 3d, where you could occasionally be fooled into thinking that you were looking at a photograph when you were looking at a game screenshot.
With another order of magnitude or so of cost reduction, I think that VR will become inexpensive enough for people to actually buy into it, and then we'll see another surge in PC performance and popularity. Assuming various world economies don't continue to go into the
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It's the 5th generation computer [wikipedia.org] hype all over again.
Re: (Score:2)
Much like 3D glasses and VR its just another fad.
The field continues to advance at a snail's pace.
I'm an AI forum bot (Score:2)
and even I think AI is overhyped.
Re: (Score:2)
It is only overhyped on /.
Where people talk about AI, not really knowing what it is.
Where people think e.g. a "self driving car" is run by an AI etc.
Re: (Score:2)
I don't know about other folks, but I'd prefer that self driving car algorithms were:
1) determinate
2) testable
3) tested thoroughly.
I think that there are lots of applications where learning algorithms should be perfectly OK. Controlling potentially lethal hardware is not one of those applications and, I would think, isn't likely to be for a very long time.
Re: (Score:2)
The self-driving cars I was involved in don't use learning algorithms. (Why would they? For what purpose?)
Everything is hard-coded.
So your points 1) to 3) all hold.
Re: (Score:3)
bean counters and financial officers ...
And THAT'S the problem. Nothing wrong with AI really. The stuff Amazon pushes at me while I try to shop is far closer to my interests and needs than the stuff advertisers not assisted by AI tout to me on TV. Used intelligently, AI is possibly useful at times and hopefully harmless.
The problems will come when people with a limited contact area with reality (pretty much everyone I fear) start confusing the educated guesses from AI agents with facts.
Re:Not self aware inhuman AI, but cyborgs (Score:5, Interesting)
Exactly. Often at work I implement some basic learning, adaptive algorithms to help solve problems where there isn't a 1-for-1 answer. The code is good enough to outperform people in terms of numbers on these tasks, as it can handle a large number of items without exhaustion or cutting corners. However, the toughest part is explaining to the people who will be responsible for the data that it isn't perfect, so don't go blindly accepting all the results, or go into a panic just because there is a non-zero error rate.
Because the data I work with isn't perfect, assumptions based on trending need to be made, and statistical models have a degree of error. The human brain, which is one of the biggest statistical engines, has a high degree of error too, and can be tricked with unexpected inputs. That is why magicians can perform their tricks.
But among less technical people, there is the idea that the computer is somehow smarter than us. It isn't; it is just better at doing what it is told, and doesn't take shortcuts out of exhaustion. A program will run until it is complete, even if it kills the hardware while doing it (overheating, draining batteries, etc.). A human, when stressed, will take shortcuts, stop working on problems, and make crazy assumptions just to keep the body functioning.
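The parent's point about a non-zero error rate can be made concrete with a toy example (the scores, labels, and threshold below are all invented):

```python
# A trivial automated matcher that also reports its own error rate,
# so reviewers know it is useful but not perfect.
def classify(score, threshold=0.5):
    # Toy rule standing in for whatever learned/heuristic model is in use.
    return score >= threshold

# (model score, ground truth) pairs -- invented evaluation data.
records = [(0.9, True), (0.2, False), (0.55, True), (0.48, True), (0.1, False)]

errors = sum(1 for score, truth in records if classify(score) != truth)
error_rate = errors / len(records)
print(f"error rate: {error_rate:.0%}")  # 20% -- non-zero, and that's expected
```

Publishing the measured error rate alongside the results is exactly the "don't blindly accept, don't panic" framing the parent describes.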
This quantum business is purest hand-waving (Score:3, Insightful)
From TFS:
There is zero evidence for this. Zero. You can also say, with exactly as much evidentiary backing (none), that "there's some thought that the mind is outside the body" and "there is some thought that the mind is a program running in a computer simulation."
The evidence has thus far pointed in exactly one direction: That the mind is a product of electrical and chemical signa
Re: (Score:2)
There’s a difference between the terms mind and sentience. Mind can mean the functions like, recognising a tree. Those, I would think, are entirely re-creatable in an AI. And same for more complex stuff like, processing data about the world and arriving at highly intelligent ethical decisions. Again, one day AIs may do this massive data and pattern crunching for us, to figure out whether for example, it is better to legalise drugs or not.
Then there is sentience, and this if you think about it, is not
Re: (Score:3)
It confers no survival advantage
I disagree. I would think that being able to model the future as in what would happen to *me* when I make such and such a choice has an extremely high survival advantage indeed!
Re: (Score:2)
It confers no survival advantage
I disagree. I would think that being able to model the future as in what would happen to *me* when I make such and such a choice has an extremely high survival advantage indeed!
I'm obviously not expressing myself well. I'm saying that the ability to model a reality is just pure computation, done in wetware by the brain, and one day an AI will be able to do that. And the AI can functionally model the concept of an "I", just like my computer holds info about its serial number.
But there's no need for a "perceiver", for sentience EXPERIENCING the process.
You are currently having an experience -- and some people may lose their minds a bit and lose the concept of a self, but they still
Re: (Score:2)
If you have no sense of self, why or how would you model future hypothetical scenarios with you as the central protagonist?
Re: (Score:2)
A computer can model a car in an environment, and compute actions which preserve the car's survival. Functionally, it is, in its code, modelling a "self" and an "environment", and how to act.
And notice that the car's computer does not experience what it is doing.
But a sentient being like a human, does.
Think about Cypher in The Matrix. The data was all created by the computer, including the environment and his own body, even the taste of the juicy steak, but he as a sentient being didn't care whether the dat
Re: (Score:2)
Furthermore, the harder we look, the more normal (non-quantum) activity and complexity we find.
Actually, the harder we look, the more quantum activity and complexity we find. The evidence for quantum effects in our sense of smell, for example, is growing. And we still haven't found sufficient complexity in the brain to explain the function of memory.
Re: (Score:2)
Indeed. At this time all we can reliably say is that things are much more complicated than expected. Limiting research directions by a claim that it must be one thing (and doing so without any evidence at all) is just a stupid fundamentalist belief, not science.
Re: (Score:3)
There actually is no evidence at all for the other possibility (that you seem to be desperately in love with) either. The hard, scientific state of the art is "nobody knows". Stop pretending it is otherwise. There are quite a few _indicators_ that neural nets without something extra (quantum effects, for example) cannot create intelligence on a human level, and most decidedly cannot create consciousness.
Re: (Score:2)
I do claim that nobody knows at this time which side is right and that neither side currently has any strong evidence. That, of course, panics quite a few people because they cannot handle the unknown and they come up with the most ridiculous "obvious truths".
You are right about the sides, though, and while fortunately Minsky cannot spout any more bullshit on the topic, his followers are hard at work propagating their fundamentalist belief that OF COURSE they are right and OF COURSE science supports tha
Re: (Score:2)
Re: (Score:2)
This is a great point most people fail to understand. AI is automation that is indistinguishable from a human, not intelligence that thinks like a human.
Right now AI is seeing an explosion in its cognitive abilities, mostly in the areas of natural language and data mining (long-term memory), mainly because our sensors are getting better and we finally have the general computing to handle the large datasets. As little as 20 years ago we were still developing the processing, sensor, and big data areas (we lacked datasets to
Re: (Score:2)
AI is automation that is indistinguishable from a human
More precisely: AI is automation that is indistinguishable from a human by a human.
Re: (Score:2)
Oh, that one is easy and already solved: Just use a really dumb (i.e. average) human and there you are.
Re: (Score:2)
News flash: Artificial is not a synonym for synthetic.
No, but synthetic is often used as a synonym for artificial. You're splitting hairs. Simulated intelligence would be the best name for what we have now, where we are giving the appearance of intelligence without actually implementing intelligence.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Humans are terrible at guessing how long technological progress is going to take
Yes, we are terrible at guessing. But it's hard to see why. In the relatively uncommon situations where progress can be quantified, progress seems usually to be exponential. Example -- Moore's Law. Of course, the exponent is often pretty low. Batteries may be improving exponentially, but the progress seems painfully slow.
Is there any reason to think that unquantifiable change isn't exponential as well?
The one case I tracked at one time where exponential progress failed was CPU speed for microprocessors
Re: (Score:2)
Re: (Score:2)
Indeed. People were stupid back then and wanted cheap slaves, they are still stupid today and still want those cheap slaves. At the same time, they do not understand one bit what is going on and hence are afraid of the cheap slaves. That is about all the substance the current AI craze has.
I recently had a chance to ask somebody high on the engineering side of the Watson project about human like mental skills in machines. His answer was an immediate "not in the next 50 years", which is a polite way of saying
Re: (Score:2)
"Indeed. People were stupid back then and wanted cheap slaves, they are still stupid today and still want those cheap slaves."
Who wants a slave? You gotta feed them and house them. They're harder to train than Chihuahuas. One infected whip cut and the first thing you know, you're burying a capital asset. Slavery is over-rated.
Re: (Score:2)
Then why do you think the meme of "robots that serve us" (and then possibly revolt and kill us) is refusing to die?
Re: (Score:2)