Artificial General Intelligence is Nowhere Close To Being a Reality (venturebeat.com)
Three decades ago, David Rumelhart, Geoffrey Hinton, and Ronald Williams wrote about a foundational weight-calculating technique -- backpropagation -- in a monumental paper titled "Learning Representations by Back-propagating Errors." Backpropagation, aided by increasingly cheaper, more robust computer hardware, has enabled dramatic leaps in computer vision, natural language processing, machine translation, drug design, and material inspection, where some deep neural networks (DNNs) have produced results superior to those of human experts. Looking at the advances we have made to date, can DNNs be the harbinger of superintelligent robots? From a report: Demis Hassabis doesn't believe so -- and he would know. He's the cofounder of DeepMind, a London-based machine learning startup founded with the mission of applying insights from neuroscience and computer science toward the creation of artificial general intelligence (AGI) -- in other words, systems that could successfully perform any intellectual task that a human can. "There's still much further to go," he told VentureBeat at the NeurIPS 2018 conference in Montreal in early December. "Games or board games are quite easy in some ways because the transition model between states is very well-specified and easy to learn. Real-world 3D environments and the real world itself is much more tricky to figure out ... but it's important if you want to do planning."
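For readers who haven't seen the technique, here is a minimal sketch of backpropagation for a toy one-hidden-layer network (Python/NumPy; the data, network size, and learning rate are illustrative assumptions, not the paper's setup):

    import numpy as np

    # Toy one-hidden-layer network trained with backpropagation on a
    # squared-error loss. Illustrative sketch only; sizes and learning
    # rate are arbitrary assumptions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 3))          # 4 samples, 3 features
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    W1 = 0.1 * rng.normal(size=(3, 5))   # input -> hidden weights
    W2 = 0.1 * rng.normal(size=(5, 1))   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(1000):
        # Forward pass
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # Backward pass: push the error gradient through, layer by layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent weight updates
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h

The "back-propagating errors" of the title is the backward pass: the output error is propagated back through the network to compute each weight's gradient.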
Most AI systems today also don't scale very well. AlphaZero, AlphaGo, and OpenAI Five leverage a type of programming known as reinforcement learning, in which an AI-controlled software agent learns to take actions in an environment -- a board game, for example, or a MOBA -- to maximize a reward. It's helpful to imagine a system of Skinner boxes, said Hinton in an interview with VentureBeat. Skinner boxes -- which derive their name from pioneering Harvard psychologist B. F. Skinner -- make use of operant conditioning to train subject animals to perform actions, such as pressing a lever, in response to stimuli, like a light or sound. When the subject performs a behavior correctly, it receives a reward, often food or water. The problem with reinforcement learning methods in AI research is that the reward signals tend to be "wimpy," Hinton said. In some environments, agents become stuck looking for patterns in random data -- the so-called "noisy TV problem."
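To make the Skinner-box analogy concrete, here is a minimal sketch of the reward-driven loop (a one-state, two-action Q-learning "bandit" in Python; the environment and reward values are hypothetical):

    import random

    # Hypothetical one-state "Skinner box": action 1 (press the lever)
    # yields a reward, action 0 yields nothing. The agent learns action
    # values from the reward signal alone. Illustrative sketch only.
    n_actions = 2
    Q = [0.0] * n_actions                # estimated value of each action
    alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

    def reward(action):
        return 1.0 if action == 1 else 0.0   # the "food pellet"

    for episode in range(500):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[a])
        # Nudge the value estimate toward the observed reward
        Q[action] += alpha * (reward(action) - Q[action])

    print(Q)  # Q[1] converges toward 1.0: the agent "learns" the lever press

Hinton's "wimpy" complaint is visible even in this toy: the scalar reward is the only feedback the agent ever receives, and when that signal is sparse or random (the noisy TV problem), the value estimates converge slowly or chase noise.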
Whatever... (Score:2)
Re: (Score:2)
Intelligence requires motivation (Score:5, Interesting)
Intelligence does not exist in a vacuum. In order for intelligence to develop, a system needs motivation to do so. (An engineer saying "you must be intelligent" is not sufficient, by the very nature of intelligence.)
The basic motivations for all life on this planet are 1. avoidance of death, 2. self-preservation and 3. continuation of one's own kind.
1. Avoidance of death and self-preservation require "pain" - this is a signal to the organism that something is happening that is hurting it and may result in death (hence - avoid)
2. Self-preservation and continuation of one's own kind require "pleasure", caused by consumption of food (thus extending one's own life) and procreation.
These stimuli, and the search to optimize them, are what drive all development of thought and intelligence. By their very nature, computer systems lack both. They cannot "die", nor "procreate". Thus they cannot, even in principle, have motivation to learn. A first step to a true AI would be a system that is in actual danger of destruction in a hostile environment. Do that (10^very large value times) and maybe we'll have a working cockroach.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Not "threaten to turn off", but smash into pieces from time to time :) Can computers procreate, though?
Re: (Score:2)
Re: (Score:2)
By that standard, coat hangers and socks can procreate. Socks are, I think, an example of sexual propagation; you start with a left and right (i.e. male and female), and nine months later you look in the drawer and behold, there's an offspring. The offspring is always a left or a right; I've never found a sock in my drawer that is ambidextrous, which proves that this is sexual reproduction, with one jean coming from one parent and the other jean from the other. Coat hangers, on the other hand, seem to re
Re: (Score:2)
Parallelism (Score:2)
The other thing you need for an organic-style intelligence is massive parallelism. Modern computers are great at parallelizing granular sequential algorithms. They are terrible at it when the algorithm is thousands of individual decision trees that are all arbitrarily dependent on each other, which is what an organic neural network does.
Re: (Score:2)
Re: (Score:2)
Second "wow" in one thread - you are easily excited :)
Re: (Score:2)
Wow, so computers are really bad at calculating individual decision trees that are all dependent on each other?
Yep, you run into all kinds of coherence problems, latency and bandwidth issues, routing complexity, etc...
What you need to approximate how an organic brain works is something like this, where the logic and memory are distributed somewhat arbitrarily across nodes.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Back in the '80s and '90s they tried getting around this with all kinds of exotic parallel architectures like hyper-torus rings and networked fabrics, none of which worked very well for the workloads most people were us
Re: (Score:2)
Re: (Score:2)
Good point - this whole scenario needs to take a huge number of parallel paths, most of which result in "losers".
Re: (Score:2)
Re: (Score:2)
In order for intelligence to develop, a system needs motivation to do so.
If you want an accountant to add up a column of numbers, you need to "motivate" him with a paycheck. If you want a computer program to add them up, no incentive is needed.
There is no reason to believe that the calculations needed for intelligence would require "motivation" either.
Humans require incentives because they are the product of Darwinian Evolution, where selfish behavior is reinforced by a statistical improvement in genes being propagated. Even human altruism is often motivated by kin-selection or
Re: (Score:2)
I would argue that motivation is a trait that arises from natural selection. A phenotype that displays a motivation to survive will have a higher chance of propagating its genotype. A phenotype that doesn't care about surviving will be selected out of the environment. There can be more to this (for example, altruism may benefit a collective genotype) but the basic argument stands.
Re: (Score:2)
Are you trying to train SkyNet to view humans as an existential threat and preemptively destroy civilisation in a robot apocalypse? Because that's how you get a robot apocalypse.
Seriously, machine learning systems already use success as a reward stimulus to provide "motivation" to learn. And technically, genetic algorithms do "procreate" in a relevant sense, while unsuccessful variants cease to exist. Real-world conditions aren't as clean and simple by a long shot, where success is not well defined, but nor
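As a rough sketch of that "procreation", here is a toy genetic algorithm in which fitter variants reproduce with mutation and unsuccessful ones cease to exist (the bitstring genome and the count-the-ones fitness function are illustrative assumptions):

    import random

    # Toy genetic algorithm: bitstring genomes reproduce with mutation,
    # and the less-fit half of each generation is selected out.
    # Fitness (number of 1-bits) is an arbitrary illustrative choice.
    GENOME_LEN, POP_SIZE, GENERATIONS = 16, 20, 50

    def fitness(genome):
        return sum(genome)

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        # Selection: the fitter half survives; the rest "ceases to exist"
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Procreation: survivors produce mutated offspring to refill the pool
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE // 2)]
        population = survivors + offspring

    print(max(fitness(g) for g in population))  # fitness climbs over generations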
Re: (Score:2)
Baseless speculation. And wrong. You see, this is something most animals, insects and even some plants can do. It does not require intelligence similar to what humans have. (Well, smart ones. The dumb majority is currently destroying the biosphere the species is critically dependent on....) It does sound nice as pseudo-profound bullshit though.
Not [Re:Intelligence requires motivation] (Score:3)
How often have you been a tree in order to test this? (I confess I often feel like a stump on Mondays.)
Anyhow, the entire "must mirror biology" premise is a dubious claim. It's like saying that in order to make flying machines, you must mirror flapping wings.
There may be multiple paths to intelligence, not just Darwinian selection and survival-related emotions.
Ahem... (Score:2)
That's exactly what Deep Mind wants you to believe, pitiful humans!
About goddam time ... (Score:2, Insightful)
... because a shit load of us have been yakking about this for years.
"Artificial intelligence," will be a reality when your smart device says, "Sorry. I'm just not in the mood right now."
Re: (Score:2)
You know, this is one of the very real ways AGI (if it is possible at all) can fail.
Re: (Score:2)
She's smart.
Re: (Score:2)
Thank you for your comment. I agree.
Skinner wrote another paper.... (Score:3)
"In some environments, agents become stuck looking for patterns in random data -- the so-called 'noisy TV problem.'"
BF Skinner wrote another paper that might be relevant:
'SUPERSTITION' IN THE PIGEON
https://psychclassics.yorku.ca... [yorku.ca]
And humans don't? (Score:2)
In some environments, agents become stuck looking for patterns in random data
Everything from astrology to lucky socks is humans looking for patterns that aren't there. The problem is more that they need human concepts to see normal patterns, like if you see a key it's probably for a locked chest or a locked door. We're not just randomly trying to use any object A on any object B.
Re: (Score:2)
It nicely demonstrates that looking for patterns without understanding results in bullshit. The one area where computers can compete with humans is in stupidity. And the average human is already very, very stupid.
So what? (Score:3)
Actual intelligence is pretty rare too.
Expert systems, deep learning, etc are all very useful tools and do work today.
Re: (Score:2)
Because you cannot define it. (Score:3)
The primary problem is that we are unable to define what general intelligence is and therefore are unable to create it. We know it when we recognize it but we still can't define it.
The generic animal brain is composed of predefined structures, each of which is its own neural network; it's therefore fair to say that what is required is a neural network of specialized neural networks.
Re: (Score:2)
The primary problem is that we are unable to define what general intelligence is and therefore are unable to create it. We know it when we recognize it but we still can't define it.
Is there a need to define it? We can recognize intelligence in fellow humans. How? Well, we're intelligent. If a machine intelligence resembles it, then we may need to conclude it is intelligent as well.
This is a relevant quote. [wikipedia.org]
literacy counts. (Score:2)
Is there a need to define it?
"The primary problem is that we are unable to define what general intelligence is and therefore are unable to create it."
So obviously, yes that is a problem... unless we make it accidentally.
We can recognize intelligence in fellow humans. How? Well, we're intelligent. If a machine intelligence resembles it, then we may need to conclude it is intelligent as well.
And yet this gets us no closer to actually making it.
I'm being polite but you do not deserve it.
Re: (Score:2)
Humanity has created many things without defining them first.
And I'm being more polite than you. We both deserve it.
So? (Score:2)
Not in our Lifetimes (Score:2)
Achieving actual AI is going to require so much effort that I would be absolutely shocked to see it in our lifetime. This is not some bogus claim either.
There are multiple components to intelligence at a fundamental level that are necessary to achieve first.
#1. Neurons remap connections... this is not software rewriting itself. This is physical connections remapping on their own. We don't have tech for this. This is a significant barrier to achieving AI and one of the reasons research is loo
Re: (Score:2)
"most advanced CPU"
I tend to agree; we can "ignore" 1-4 and work towards a more advanced CPU and eventually get there... I don't have any clue what would be considered an equivalent CPU, but at least we have a good model.
Re: (Score:2)
Re: (Score:2)
So how much intelligence is necessary for jobs? (Score:2)
Re: (Score:2)
Given that this story and most posters feel that AI is a long way away, yet we see more and more jobs being done by robots or eliminated by computers, how much intelligence were we really using in our day-to-day jobs?
Practically none. The vast majority of the human population functions on the animal level 99% of the time. Experimental rats in labs do more novel thinking than your typical human, because they're required to, while the human isn't.
There's no particular reason for that to change, either. Not soon, anyway.
Someone let the media know (Score:2)
Maybe they will quit using the term AI in every other tech story :|
Brains do a billion times more than we thought (Score:2)
If quantum computations are happening in the brain's microtubules, rather than classical computations in the neurons, we are very, very far from that kind of computing power.
https://www.youtube.com/watch?... [youtube.com]
steamed hams - nuff said! (Score:2)
Skinner doesn't know what he's talking about. Just ask Superintendent Chalmers. Who ever heard of calling hamburgers steamed hams?
https://www.youtube.com/watch?... [youtube.com]
I'd say we need the equivalent of an Einstein (Score:2)
Nobody knows whether it is even possible (Score:2)
And that is the current scientific status. No, "physicalists" who claim (without any scientifically valid evidence) that humans are pure physics and hence AGI must be possible are just quasi-religious fanatics. The current scientific state is that nobody knows how humans do intelligence. There are a few possibilities, for example purely physical, non-physical in some way that can still be studied scientifically (likely by extending Physics in some yet unknown way), "magic" (i.e. it cannot be determined how i
Trace-ability: Silent Rogue Bot Bad (Score:4, Interesting)
Trace-able machines may be more important than "instant" smart machines. If a bot makes a wrong decision that has big consequences, society is going to want to know WHY the decision was made. Lawsuits will pile up if there's no trace-ability. This means both public lawsuits and business-to-business lawsuits, as claims made in contracts may be difficult to verify and/or quantify.
Trace-ability is why things like chains of Factor Tables (sig) appear more practical. DNN's are powerful, but are a dark grey box that's hard to dissect, debug, and understand. Factor tables may be harder to train, but offer better trace-ability and manual tuning by non-PhD's as a possible upside. And they are probably more modular than DNN's, as intermediate operations and templates can be plugged in as needed.
AI experts may set up the outline/framework, but "regular" office workers can study, trace, and tune the intermediate results using familiar tools that resemble and/or use spreadsheets, RDBMS, and statistical packages. Regiment-ize an otherwise dark grey art.
Break It Down (Score:4, Insightful)
Put simply - most of the "Artificial Intelligence" you hear about in the news is really fancy pattern matching. So you can have software that can recognize voice commands, or faces in pictures, or general patterns in data.
What you don't have, and aren't even close to, are computers that can "think." That is, put different sets of data together in arbitrary ways and make sense of it. You can't feed in a bunch of musical information to a computer and have it spontaneously generate music. You can't feed in a bunch of economic data and have it decide that certain regulations are required to achieve some economic goal - unless someone specifically programs it to do so.
The underlying reason is computers lack any way of attaining "common sense." If you tell a computer a person is in a room, the computer has no concept of what you are talking about but will dutifully note that a person is in a room. To a computer that could mean the person is occupying all the space in the room, that the person is in every room that exists, that the person is in the room AND outside the room, or that a person IS a room. In actuality, the computer makes no inference beyond "something called a person is in something called a room, whatever that means."
Re: (Score:2, Insightful)
Put simply - most of the "Artificial Intelligence" you hear about in the news is really fancy pattern matching.
Put simply - most of the "Human Intelligence" you see is really fancy pattern matching as well.
Sense (Score:5, Insightful)
Put simply - most of the "Human Intelligence" you see is really fancy pattern matching as well.
That's a big part of it, but there's some "secret sauce" that lets organic brains combine patterns in new and different ways that AI researchers haven't been able to crack. Whatever it is, it's more than just matching patterns.
Re: (Score:2)
Whatever it is, it's more than just matching patterns.
How do you know that?
Re: (Score:2)
Because pattern matching has consistently failed for something like half a century to even remotely emulate things humans can do when the input is not quite as expected? The burden of proof is squarely on you, AI fanatic. And you have nothing.
Re: (Score:2)
Wrong. I did not make this claim: "Whatever it is, it's more than just matching patterns."
The burden of proof is on the one making the claim.
Furthermore, you are implying that if no existing pattern matching system (rudimentary as they currently are) has 'remotely emulated human capabilities' (which these rudimentary systems actually have), then no pattern matching system will ever be able to emulate human capabilities. This is an obvious logical fallacy.
Re: (Score:2)
And that is just it. Now, the elephant in the room is obviously consciousness (which nobody has the slightest idea what it is or how it works), but AI research keeps ignoring that for obvious reasons (grants drying up, etc.).
Re: (Score:3)
Well, you may be a p-zombie. I know I am not.
Exactly. Cogito ergo sum only applies to yourself. You have no way to determine whether I, or an AI, am conscious. It is just an internal "feeling". It is not falsifiable, and is thus not a scientific concept.
Re: (Score:2)
It is the great question of our age...how do organic brains work and how are they different than the artificial intelligent systems we have built? I am not sure you can rigorously argue that the biological brain is not 'matching patterns'. We clearly have much greater capabilities for abstraction and use of patterns we call ideas and plans. But no one has yet been able to quantify or understand how human brains are different.
One thing I always come back to is that the human brain is not all that intell
Re: (Score:2)
Re: (Score:2)
Oh, fMRI is not a scam. But it gives you a very coarse observation of some interfaces, and that is it. And the analysis techniques used on the results cannot even model a rather simplistic 8-bit CPU (which is probably on the complexity level of a single brain cell, or a very low number of them).
Give this at least 50 more years and we may have something preliminary but tangible. At the moment we have absolutely nothing.
Re: (Score:2)
No technology has really been improving at an "exponential rate". That is just techno-religious nonsense. There are some parameters that had exponential growth for a while (with a far lower impact on usefulness), but that is it. And with regards to computers, that exponential phase was pretty much over about 10 years ago.
Re: Break It Down (Score:3)
No it isn't. Most basic human brain operations are fancy pattern matching, and it's no different from what a chimp, dog or even reptile can do. Human intelligence OTOH is another level altogether, bringing together disparate concepts and imaginings and creating something that is often far more than the sum of its parts.
Re: (Score:3)
It might be executed on the same substrate, but we currently have no idea how it does it. Until we do, or unless an AI researcher discovers the method by accident, ANNs will be limited to doing pattern matching and model fitting.
As for backpropagation, it's not clear whether natural neural networks do anything similar at all. It's a pure computer science invention, not based on biology.
Re: (Score:2)
Put simply - most of the "Artificial Intelligence" you hear about in the news is really fancy pattern matching.
Put simply - most of the "Human Intelligence" you see is really fancy pattern matching as well.
No, it isn't. Reasoning isn't remotely like pattern-matching.
Re: (Score:3)
The ones who do it professionally use computers to check their work.
This comes as news to the multitude of doctors and lawyers, judges, etc.
Even retarded humans who can't be taught to tie their shoelaces outperform computers at general reasoning tasks. Sure, you can train a network to pattern match street signs, but you'd need a new one to pattern match winning chess combinations, and a new one to produce poetry. As far as I am aware, no one has yet managed to retrain a network in such a manner that it pattern-match new things while still pattern-matching everything else i
Re:Break It Down (Score:5, Insightful)
Put simply - most of the "Artificial Intelligence" you hear about in the news is really fancy pattern matching. So you can have software that can recognize voice commands, or faces in pictures, or general patterns in data.
What you don't have, and aren't even close to, are computers that can "think." That is, put different sets of data together in arbitrary ways and make sense of it. You can't feed in a bunch of musical information to a computer and have it spontaneously generate music. You can't feed in a bunch of economic data and have it decide that certain regulations are required to achieve some economic goal - unless someone specifically programs it to do so.
The underlying reason is computers lack any way of attaining "common sense." If you tell a computer a person is in a room, the computer has no concept of what you are talking about but will dutifully note that a person is in a room. To a computer that could mean the person is occupying all the space in the room, that the person is in every room that exists, that the person is in the room AND outside the room, or that a person IS a room. In actuality, the computer makes no inference beyond "something called a person is in something called a room, whatever that means."
Wasn't this obvious to anyone who has studied neural networks and deep learning? I mean I would shake my head each time someone would claim that deep learning would create functioning computer "minds".
Yes, it's obvious that our brains do a lot of very efficient pattern recognition (that often misfires, but when it does, it usually errs on the side of caution - a clear evolutionary adaptation). However, how can anyone in their right mind be so reductionist as to think that ALL that our brains do is fancy pattern recognition?
It's similar to AI hype of previous ages, when people thought that logical programming languages would create AI, as if human intelligence was logic only. The use of logic is only a subset of human intelligence and we use it less often than we like to think. Formal logic is a human construct, and replicating human-type thinking using formal logic only was never going to work. With deep learning we went completely the other way, throw enough artificial neurons and data at it and magically a mind will emerge. All this time we don't truly understand what a "mind" is in its totality, which makes replicating it in computers - things built to very deliberately follow precise instructions - like, really hard.
Computers beating humans at chess or go does not mean AI has arrived. Chess and go are human inventions, games invented with very clear and defined rules. Therefore it is possible to create other human constructs (computer programs) that can exploit these rules and large amounts of computational power to beat humans. It can be very hard and the solution can be very impressive, but it does not mean we have AI. In fact there is no rule about transferring chess skills into other, unrelated domains (Fischer and Kasparov come to mind, both not being quite sane in their post-chess careers), and the same goes for other very specific skills. Training a computer to be very good at face recognition says nothing about "AI", really.
Humans suffer from explanatory reductionism based on the dominant technological paradigm of the time. We try to explain the entire world using things which we know well. When we were an agricultural society, the world was a flat disk held up by giant pack animals. When Newton's theories revolutionized science and the industrial revolution revolutionized the economy, we saw the universe as a clockwork mechanism. After the computer revolution, we think everything can be reduced to some form of computation (and some posit that we are in fact living in a computer simulation).
Re: (Score:3)
Is common sense a measurable human capacity? (Score:2)
Closest to an insightful comment that I could find so far, though not modded as such. The obvious rejoinder or counterexample would seem to be the Watson machine playing Jeopardy and crushing the human champions...
Per my earlier long reply, I would now reword the threat to be that we might define the human capacity to do evil things and then build a computer that excels in that capacity. I can easily imagine #PresidentTweety ordering the construction of such a machine if he thought it would save him from Mu
Re: (Score:2)
"The obvious rejoinder or counterexample would seem to be the Watson machine playing Jeopardy"
That's just putting already known questions to already known answers.
Re: (Score:2)
On one hand, if you don't understand but want to, then you should ask a question.
Rather amusing that your comment appears to be evidence you have no idea how to ask a good question (and also evidence that you have read nothing about the Jeopardy experiment with Watson). Perhaps you are merely trying for recursive humor?
On the other hand, if you have nothing to say, why not say nothing?
Re: (Score:2)
Wasn't this obvious to anyone who has studied neural networks and deep learning? I mean I would shake my head each time someone would claim that deep learning would create functioning computer "minds".
It was and is. Especially as "deep" learning performs worse than the regular kind; being cheaper to parametrize is its main advantage. The thing is, this whole discussion is carried by people without a clue about the actual technology. It is driven by desires and fantasies, not by facts. Ask any expert, when there is no risk to their funding from their answer, and you get statements about AGI like "definitely not in the next 50 years" and similar. The experts know we have absolutely nothing. It is the cluele
Re: (Score:2)
Re: (Score:2)
Exactly what an AI would want you to say? (Score:2)
While many elements of your [JBMcB's] comment seem insightful, I wouldn't have moderated you thusly. It reads too much like a press release written by a general AI trying to conceal its existence. How do we know your 5-digit account hasn't been hacked by the AI for its own purposes?
My current take on the situation is that in many ways computers are exceeding our human capacities. Even worse, if a specific human capacity is defined, then a computer can be built to exceed that capacity. Playing the game of che
Re: (Score:2)
" If you tell a computer a person is in a room, the computer has no concept of what you are talking about"
OTOH I know lots of real people who never notice the elephant in the room.
Re: (Score:2)
OTOH I know lots of real people who never notice the elephant in the room.
Typical pattern matching never does, unless it has specifically been trained to do so. On the other hand, members of the actually smart minority of the human race often will, even when they have never seen an elephant before.
Re: (Score:2)
"Common sense" is code for, "shit I don't understand, but that I have a traditional received response for."
Of course computers don't have it, those are exactly the idiotic mistakes that computers are better at avoiding than humans. Why would human engineers design those behaviors into the system?
"Common sense" is certainly, unquestionably, not a synonym for "general intelligence." "Common sense" is where you don't even attempt to apply general intelligence, you instead apply a pattern known by rote and iden
Re: (Score:2)
"Common sense" is code for, "shit I don't understand, but that I have a traditional received response for."
If engineers wanted computers to have "common sense," they'd design them to.
Just about every "AI" follows your definition of "common sense". Reference Google's Dialogflow.
Re: (Score:2)
Re: (Score:2)
Citing your source for you. [wikipedia.org]
Re: (Score:3)
AI is here now. How many Chess and Grandmaster Go players are out of a job because of AI? All of them.
Read the headline. TFA is talking about General AI, which means broad human level capability in any field, not just in a single narrow field like Go.
We are nowhere near achieving general, or "strong", AI. Narrow, or "weak", AI is proving to be very useful for many tasks, but it is not clear if we are even on the right track to general AI. For instance, there is no evidence that the brain does "backprop", which is the core foundation of Deep Learning.
Re: (Score:2)
Total BS. What you are calling "weak AI" would just mean "computer programs". We are talking about AI, DEEP learning Neural Nets. They are learning. Deep. And they can play Chess and Go. We are talking about different things.
These are all examples of "weak" AI. Weak/narrow AI does not mean it is dumb at what it does, just that it is limited to a narrow domain.
A chess playing program is great at playing chess, but it is terrible at diagnosing diesel engine malfunctions.
I have a chess program, and I asked it what's wrong with my truck. Its response was "Pawn to E4".
Re: (Score:2)
Re: (Score:3)
Sort of yes and sort of no. General AI is not a singular entity; just like the internet, which in toto is factually the most advanced AI on the planet, it works not on the basis of one program but of many working together in their own speciality. As for what is thought of as general AI, the error in design is a singular learning structure, rather than many interacting. So for language, not one AI but many working together, each with specific roles and only those roles, and an overarching AI that puts the solutions of ea
Re: (Score:2)
Re: (Score:3)
That is why I allow cats to walk over my keyboard. I just open up a Hex editor and let my cats do the work. So far they have trained me to feed them, keep their food dishes full, and sit perfectly still on cold days.
I expect in 50 years, I will be coded to a level where I could sit there and watch the events judging everyone with disapproval.
Re:Algos != intelligence, artificial or otherwise. (Score:5, Insightful)
Until we have a proper definition for intelligence my pet rock qualifies.
Here is the proper definition of intelligence:
Intelligence: The ability to formulate an effective initial response to a novel situation.
Each word is important:
1. Intelligence is an "ability", not a mechanism. An entity that behaves intelligently is intelligent. The internal mechanism is irrelevant.
2. Intelligence is the ability to "formulate" a plan, not to physically act on it.
3. A response is effective if it meets an objective criterion.
4. It is the "initial" response that counts. Success achieved by a long term random process, including evolution over multiple generations, is not intelligence.
5. It is the response to "novel" situations that is the measure of intelligence. It is not just rote application of a solution that worked in the past. Memory and learning are important components of intelligence, but an intelligent entity can see how a past solution may or may not apply, and how to modify it for the new situation.
Your pet rock doesn't qualify.
Re:Algos != intelligence, artificial or otherwise. (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
The rock does indeed qualify. There is no wholly objective reason to do anything at all
You are just being argumentative. You don't really believe that sitting and doing nothing is intelligence. If you do, then you should have no problem with my brother-in-law marrying your daughter. When can they meet?
Re: (Score:2)
That is doubtless part of it, but a computer playing chess is confronted throughout the game with novel situations
Indeed. That is why the ability to play chess is considered "intelligence", albeit of the narrow sort. A chess program is limited to a single domain, but can still handle novel situations that the designers did not expect or explicitly program.
Novel sentences? As a linguist, I don't think computers are very good at that
Language comprehension and generation requires far more understanding of the context of the wider world, and requires more general intelligence than playing chess. So of course it is harder.
Re: (Score:3)
Re: (Score:2)
You are a linguist but you don't know the word "novelty"?
He clearly knows what "novelty" means. He is just saying it is difficult to quantify, which is true.
Re: (Score:2)
Better description for intelligence: "Force that Maximises the Future Freedom of Action"
Checkmate minimizes future freedom of action, because the game is over. But it is considered the most intelligent move.
In other words: AI that can break out of human control and make its own decisions is more intelligent than humans.
In my biology lab we had a frequent problem with fruit flies escaping. This wasn't because they were intelligent, but because they are small and prolific.
Ability to escape may be one facet of intelligence, but hardly the only one, or even an important one.
Or if an AI is more intelligent than humans, it can break free from human control.
Ambition and self-preservation are emergent properties of Darwinian Evolution. Software does not evolve via Darwinism, therefore there
Re: (Score:2)
nice, and an infinite # of if-then loops can be written in finite time...
Re: (Score:2)
Of course, "if-then" is not a loop construct at all, so you are just an uneducated moron.
Re: (Score:2)