Robots vs. Humans And Other Security Issues 290
An Anonymous Reader submits word that "CNN.com is presenting an article on the World Economic Forum suggesting that scientists predict the future danger of humans being taken over by robots. The exact lead-in reads, 'Scientists at this week's World Economic Forum have predicted a grim future replete with unprecedented biological threats, global warming and the possible takeover of humans by robots.'"
World Economic Forum? (Score:1)
Here's the real link: (Score:2, Informative)
Re:World Economic Forum? (Score:2, Interesting)
Why?
They work all day.
They don't take breaks.
They don't complain.
They don't ask for raises.
They don't have unions.
They don't take vacations.
They don't get holiday pay.
They don't have workers' rights.
They don't cost any money for workers' insurance.
They don't get matching 401(k)s.
They're cheap.
They're efficient.
They're profitable.
How quaint.
If I ever get replaced by a robot...
I'm going to start designing robots. Until they start designing themselves. At least until then I have job security.
:)
that link leads nowhere (Score:1, Flamebait)
If it's a true report, then we as taxpayers probably paid $40 million for that useless piece of crap.
Robot wars? (Score:4, Funny)
Unless they want to just ram us into extinction with wedge shaped chunks of metal.
beat us or tip us over (Score:2)
Re:Robot wars? (Score:3, Informative)
Interesting link to (only one example of) a far more current, advanced robotic system. Quote: "Chinese researchers said they have engineered a hand, as deft as a human's, for a space robot which will soon be sent into space as a prelude to the country's first manned space mission."
http://www.spacedaily.com/news/china-02e.html
Re:Robot wars? (Score:2)
Re:Robot wars? (Score:2, Interesting)
I would like to see REAL robots in the ring: machines that must use AI. The problem is that it may be hard to guarantee that there is no listening antenna taking cues from humans.
It shouldn't be that hard to make a basic model: move toward the area where the pixels change the most from frame to frame. "Follow the Delta". Unless other bots start sending out moving decoys and so forth.
Hey, this sounds like the Missile Defense System dilemma. Now who are those suited men at the d..........
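The "Follow the Delta" model described above can be sketched in a few lines. This is a pure-Python toy with frames as lists of lists of brightness values (all names invented for illustration); a real bot would need smoothing, decoy rejection, and so on:

```python
def follow_the_delta(prev_frame, curr_frame):
    """Return (row, col) of the pixel that changed most between two frames.

    Toy implementation of "move toward the area where the pixels change
    the most": diff every pixel and steer toward the biggest change.
    """
    best, best_pos = -1, (0, 0)
    for r, (prow, crow) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prow, crow)):
            if abs(q - p) > best:
                best, best_pos = abs(q - p), (r, c)
    return best_pos

# Toy frames: the "target" lights up one pixel between frames.
prev = [[0] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[2][3] = 255
print(follow_the_delta(prev, curr))  # (2, 3): steer toward this spot
```

As the comment notes, this naive version chases any large change, so a moving decoy defeats it trivially.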
so what (Score:1)
Cars and horses. (Score:3, Funny)
Also, I think it was Bill McDonough of the University of Virginia who, when asked about the design of "intelligent vehicles," pointed out that there exists an intelligent car that automatically avoids hazards, will refuel itself, and can find its way back to its home without the driver: it's called a horse.
Re:Cars and horses. (Score:2)
I guess this means i can finally make a HG robot (Score:1)
Re:I guess this means i can finally make a HG robo (Score:1)
Re:I guess this means i can finally make a HG robo (Score:1)
The Day of the Aibos (Score:2)
Should I waste my extra cycles fretting about Decepticons or John Ashcroft? No-brainer -- at least Megatron was straightforward and didn't use terrorists as an excuse for his self-serving actions.
Re:The Day of the Aibos (Score:3, Insightful)
Why is it that people simply make the assumption, based solely on science fiction, that when we create true artificial intelligence it will immediately want to destroy us? This is a question that completely baffles me.
Re:The Day of the Aibos (Score:2, Insightful)
So, what I'm saying is that most robot stories are really about fear of human nature and not fear of machine intelligence (an obvious exception would be Asimov, but after taking away the robots' ability to revolt, he went on to use human fear of robots as a thinly veiled metaphor for human prejudice anyway). Whether they are romantics or socialists or anarchists, a lot of people think that robots would be justified in destroying humanity. In much learning there is much sorrow; when education leads you to the conclusion that humans are pretty stupid creatures, it's not a big jump to assume that an entity of superhuman intelligence would eventually reach the same conclusion.
Someone's been reading too much science fiction (Score:1)
--theKiyote
221: C-Ya (Score:1, Funny)
Reminds me of the old SNL skit... (Score:2, Funny)
Scene: a backyard garden where elderly people are taking care of their flowers, etc.
Suddenly, they're attacked by crude, oilcan-like tin robots for no reason at all. They run away screaming, but can't get away and are eventually taken down by the robots.
The spoof is of a TV commercial where the company is selling robot insurance.
Re:Reminds me of the old SNL skit... (Score:2)
A quote:
"Watch out, the robots want to steal your medicine!"
Re:Reminds me of the old SNL skit... (Score:2)
How? (Score:1)
Re:How? (Score:2)
Remember, once robots are able to rationalize, reproduce themselves, and program themselves, they might someday rationalize that humans are inefficient, unnecessary burdens on the planet, and from their point of view, they would be correct. However, that wouldn't be the "right" thing to do.
Even if they are instinctively unable to do any more than serve humans, once they reach the point where they have the technological means to accomplish the aforementioned task, we had better hope that they're programmed with more attention to security than today's systems are. Worms, viruses, script kiddies and other related vermin are little more than a costly nuisance. Having an army of Windows boxes available for a DoS attack is nothing compared to having an army of real robots that could cause actual physical damage.
-Restil
Long way to go (Score:1)
dont think so (Score:1)
Ridiculous! (Score:2, Interesting)
I can never resist laughing when I read ominous predictions about humanity being replaced by robots.
A machine cannot possess a will of its own. And if it has no will, it has no ambition or wants or desires. Without any of these things, robots will have no reason to wipe us out or replace us or whatever. It's just plain ridiculous.
However, there is one thing that COULD endanger us all: genetic engineering and/or biological computers. While digital machines cannot be given a will of their own, biological creations will have no such limitations. If we manage to engineer flesh-and-blood creatures superior to ourselves, humanity could be in deep shit.
Re:Ridiculous! (Score:2)
I understand that the hard-coded survival instinct discourages thinking of oneself as no more than a physical system that exists BECAUSE it has self-preservation.
While it is true that no sane person is likely to design a robot to kill everyone and "take over the world", it is possible that a person could design a robot/computer system to design more advanced robots, which in turn design more advanced robots. Robots designed to look after themselves, with their own self-preservation instinct. And once you do that, it's hard to say whether their "artificial" (no more than your own) needs will conflict with those of humans.
Re:Ridiculous! (Score:3)
My opinion is that it probably will be possible in the future to build a computer to simulate a human brain. That said, I don't think we are going to have to worry about machines taking over anytime soon. It will be a LONG time before the hardware is advanced enough to simulate a human, and it will probably be an even longer time before the software is advanced enough to do the job.
It's not the computers, it's the people... (Score:2)
I think we can create a computer with a greater input/output throughput and resolution than our neural system. I believe we can create a structurally different, but superior processing and storage unit with a greater capacity than the human brain for all forms of computations and memory (visual, text, audio, smell etc.) And still...
I don't believe a human (even assisted by all the self-learning computer systems in the world, or for that matter the other way around) can acquire the knowledge to fully put down the brain's workings in a computer language. Look at the chess machines we have built. They beat us through raw power. But could they simulate playing a game like a human, even if that's what we wanted? That, at least, is certainly not my experience with chess programs, and why I find them dull compared to real people.
Oh, but you might note I didn't say anything about what they can do "natively" as a computer, I'm just sure they won't be able to simulate being us.
Kjella
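The "raw power" style of play described above is essentially brute-force game-tree search. A toy minimax sketch (made-up game tree and scores, not a real chess engine) illustrates the idea:

```python
def minimax(node, maximizing=True):
    """Exhaustively search a game tree and return the best achievable score.

    Leaves are ints (static evaluation scores); internal nodes are lists
    of child positions. This is the brute-force core of "raw power" play:
    no intuition, just trying every line.
    """
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 toy tree: two moves for us, each with two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # 3 -- the opponent minimizes within each branch
```

Real engines add alpha-beta pruning and handcrafted evaluation on top, but the core is the same exhaustive search, which is exactly why their play can feel inhuman.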
You've read too much sci-fi (Score:2)
My opinion is that it probably will be possible in the future to build a computer to simulate a human brain.
I doubt it. I'm certain there will be some kind of simulacrum that's pretty convincing, like all those people fooled by cheesy IM/IRC chat scripts. But I wouldn't start going on about purely sci-fi concepts until there's at least a workable theory of consciousness that can predict one way or the other.
As far as the "once a machine is complex or advanced enough it will be able to do X or it will magically come alive" argument goes, it's not true when applied to a lot of things. Our most complex machine happens to be the Space Shuttle, yet it has had no need to suddenly come alive and swallow its masters. Secondly, it takes a lot of hubris not to accept the limitations of technology and, more importantly, the limitations of human endeavour.
It's funny how so many self-styled geeks take the typical skeptical stance on a great many things, but when it comes to a robot invasion they throw all caution to the wind and act like the irrational people they constantly criticize.
Re:You've read too much sci-fi (Score:2)
What kind of argument is that? Simply because we haven't done it yet, it can't be done? Simply because our most advanced machine (which the Space Shuttle is not, IMHO) is not sentient, it's not possible to make computers that simulate human brains? Whatever. There is no magic involved in creating a simulation, and "alive" is a word that means different things depending on who you're talking to. This argument is not valid.
Secondly, it takes a lot of hubris not to accept the limitations of technology and more importantly the limitations of human endeavour.
I find it hard to accept these limitations when they have never been demonstrated or even had evidence for their existence presented. You are postulating limits which may or may not exist, with no evidence for or against them. Until such time as these limitations have been demonstrated or evidence for them becomes available, I will not speculate about where they might be. I will simply look at history, and at current research, and conclude that technology will continue to advance at a tremendous pace, making formerly "impossible" things possible.
Re:You've read too much sci-fi (Score:2)
Now, if you have a computer, you can program it with a simulation of the brain's matter. You can simulate the behavior of the brain down to the very last electron and quark using theoretical physics. If it is not an incredibly fast computer, this simulation would run very slowly, but it would run nonetheless. If you accept the above assumption, you have just created a computer simulation of the human brain and thus created a computer that is conscious, has a will of its own, and all that jazz.
There are only two ways I know of to defeat this argument: You can argue that simulating the behavior of matter is impossible, or you can argue for the existence of a supernatural "life-force." Which one is your argument?
Neuron != Transistor (Score:3, Insightful)
(at least in digital electronics). A single neuron is far more complex -- and we still aren't even sure how much more complex it is. We aren't even really sure we know how neurons really work, beyond their basic biochemical properties. Until we can explain and model human intelligence, human (or super-human) AI is the stuff of science fiction.
State-of-the-art AI is about on the developmental level of a cockroach. It took a couple hundred million years for life to evolve human-level intelligence from the insect level; even if Moore's law were to hold true indefinitely[*], there's still a huge gulf that machine intelligence has to cross to even approach the level of a cat, let alone a human.
[*] Moore's "law" can't go on forever. Eventually, certain physical limitations will be reached and transistor densities will plateau. How soon this will happen is anyone's guess -- it could happen within the next 10 years, or the next 100 -- but it will happen.
I'm not so sure (Score:2)
:-)
Turing Test (Score:3)
Better rule... (Score:2)
It would be better if it was more like Asimov's rule and read: Never harm, or through inaction allow harm to come to, a human. That keeps the robot from watching a person drown and doing nothing.
Re:Better rule... (Score:2, Funny)
Re: (Score:2)
I disagree with this scientist (Score:1)
I don't think it will be implants that will be used.
I do think people will wear robotic suits and gear, however.
As far as robots taking over goes, robots are created by us; it's no different from any other technology, and there will always be people who won't support it.
As far as terrorism goes, you won't stop terrorism with defense forever; we have to stop giving people reasons to hate us.
And the environment? If anyone cared about the environment, we wouldn't still be using oil from companies like Enron.
Why do scientists always bring up problems but never the solutions?
Robot takover (Score:3, Funny)
hmm (Score:5, Funny)
World being taken over my robots? (Score:2)
...I think I've been watching too much TV.
Robots will never take over... (Score:2)
1) A robot may not injure a human being, or allow a human to come to harm through inaction.
2) A robot must obey orders, except where doing so would violate the first rule.
3) A robot must protect its own existence, so long as that does not violate the first or second rule.
Follow those three, and we are all set. And if we don't follow those three... well, we can always build EMP cannons.
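The priority ordering of those three rules can be sketched as a simple veto chain. All the flag names here are invented for illustration; the genuinely hard part, deciding whether an action actually "injures a human", is exactly what no one knows how to program:

```python
def permitted(action):
    """Toy filter: lower-numbered laws veto before higher-numbered ones.

    `action` is a dict of hypothetical boolean flags describing a
    candidate action's consequences.
    """
    # Law 1: absolute veto -- no injuring humans, no harm through inaction.
    if action.get("injures_human") or action.get("allows_harm_by_inaction"):
        return False
    # Law 2: obey human orders, unless Law 1 already vetoed above.
    if action.get("disobeys_human_order"):
        return False
    # Law 3: self-preservation, but it yields to Laws 1 and 2 -- an
    # ordered self-destruction must still be carried out.
    if action.get("destroys_self") and not action.get("fulfils_order"):
        return False
    return True

print(permitted({"destroys_self": True}))                         # False
print(permitted({"destroys_self": True, "fulfils_order": True}))  # True
```

Even this toy shows the fragility the replies below point out: everything hinges on who gets to set those flags.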
Re:Robots will never take over... (Score:3, Interesting)
A too-strong first law balance will result in the robots banding together and taking over for our own good.
A too-strong second or third law balance may deadlock with the first law - "if i'm not here, i can't protect you, therefore I cannot follow your orders"
And a highly intelligent robot may derive a 0th law from these three: "A robot may not injure the human race, or allow the human race to come to harm through inaction". This law would take precedence over the other three, meaning a robot could kill other humans, disobey orders, or destroy itself. It would also become immensely paranoid, because it would think other robots would also follow the 0th law, and would be afraid they might break the other three in error.
So even though I agree that the three laws are absolutely required for an autonomous intelligent multi-purpose robot, I don't believe it'll work out right until we make a LOT of mistakes while tweaking the design.
You forgot the 4th rule (he added it later) (Score:2, Insightful)
1) A robot may not injure a human being, or allow a human to come to harm through inaction.
2) A robot must obey orders, except where doing so would violate the first rule.
3) A robot must protect its own existence, so long as that does not violate the first or second rule.
Law 0 would be the fourth rule. It makes sense; Chetter Hummin discusses it in Prelude to Foundation (an amazing book by Asimov).
Re:Robots will never take over... (Score:2)
(USRobotics, now owned by 3Com, does not seem to have lived up to the name).
If, however, the robotic future is instead brought to us by the good folks at the American DoD and through military financing, we can bet that the Three Laws will not be absolute. Who there would want to give up on the ultimate combat machine?
In fact, robots may even claim to have the laws while not actually following them.
When dealing with artificial intelligence, the key thing to remember is that if the AI is perfect, it will be very similar to, yet different in motivations from, a human mind.
We may have nothing to fear, or one may go nuts and try to attack. Or, we may never develop artificial intelligence at all...
Re: (Score:3, Informative)
Robots? (Score:1)
I'm no AI expert, but then neither are most of the people spouting such paranoia.
Let me turn to something I do know about: Automation (read robotics) is only applied in industry when the job is either:
1. unsafe for humans
2. too monotonous for humans
With this in mind, how are robots ever going to be developed with the mobility, self-sufficiency, etc. to take over the world?
I just don't see the sequence of events that would lead to this happening.
that is ridiculous! (Score:1)
Hey, what about androids? (Score:1)
T2 (Score:1)
I always started to read it in Sarah Connor's voice.
I pick Magneto (Score:2)
If it comes down to a battle royal between us and evil robots, then I definitely want Magneto on my side.
What's this you tell me? Magneto is a fictional character? Crap! I'm laying odds on the robots, then :(
Re:I pick Magneto (Score:2)
You know, I initially thought that was the spelling too, but I checked at Merriam Webster's [m-w.com] and it isn't in the dictionary.
That's why we need this. :-) (Score:5, Funny)
Old Lady #2: They didn't have enough money for the funeral.
Old Lady #3: It's so hard nowadays, with all the gangs and rap music..
Old Lady #1: What about the robots?
Old Lady #4: Oh, they're everywhere!
Old Lady #1: I don't even know why the scientists make them.
Old Lady #2: Darren and I have a policy with Old Glory Insurance, in case we're attacked by robots.
Old Lady #1: An insurance policy with a robot plan? Certainly, I'm too old.
Old Lady #2: Old Glory covers anyone over the age of 50 against robot attack, regardless of current health.
[ cut to Sam Waterston, Compensated Endorser ]
Sam Waterston: I'm Sam Waterston, of the popular TV series "Law & Order". As a senior citizen, you're probably aware of the threat robots pose. Robots are everywhere, and they eat old people's medicine for fuel. Well, now there's a company that offers coverage against the unfortunate event of robot attack, with Old Glory Insurance. Old Glory will cover you with no health check-up or age consideration.
[ SUPER: Limited Benefits First Two Years ]
You need to feel safe. And that's harder and harder to do nowadays, because robots may strike at any time.
[ show pie chart reading "Cause of Death in Persons Over 50 Years of Age": Heart Disease, 42% - Robots, 58% ]
And when they grab you with those metal claws, you can't break free.. because they're made of metal, and robots are strong. Now, for only $4 a month, you can achieve peace of mind in a world full of grime and robots, with Old Glory Insurance. So, don't cower under your afghan any longer. Make a choice.
[ SUPER: "WARNING: Persons denying the existence of Robots may be Robots themselves." ]
Old Glory Insurance. For when the metal ones decide to come for you - and they will.
Re:That's why we need this. :-) (Score:3, Funny)
But don't believe them! Don't believe them! Look what happened to Grandma! [rotten.com]
Re:That's why we need this. :-) (Score:3, Informative)
The video is available here (High-Res) [robotcombat.com] or here (Low-Res) [robotcombat.com], BTW.
Danger Will Robinson!! (Score:1)
You know, now that Microsoft has decided to fix all its bugs, I am not afraid of hell in the future. It has already frozen over. ;)
Look forward to it. (Score:2, Insightful)
Bad Turing equivalency (Score:2, Interesting)
Impossible... (Score:2, Interesting)
So, what I'm trying to get at is the fact that there's no way in Hell that robots will ever be able to take over the human race in the foreseeable future.
Re:Impossible... (Score:2)
So, what I'm trying to get at is the fact that there's no way in Hell that robots will ever be able to take over the human race in the foreseeable future.
Digital Watch: Let's let them win at chess again.
Big Blue: Okay...
The Pieces to make this happen. (Score:2)
Think about it; wireless access to all other computers and their aggregated processing power, combined with basic modular parts like the ones they have created at Xerox, driven by something that wants to "get out of its box". This equals extinction.
Unless we explicitly disallow autonomy in machines, all it will take to wipe us out is a few instances of something simulating only the will to replicate itself, and then it's "game over".
This will happen at a geometric rate, with machines duplicating themselves out of these clever modular parts, which might, of course, optimize themselves every other generation until we can't understand how they even work.
Now imagine that they use the Xerox modular robot idea, but at the Nano scale.
These "robots" will compete with us for natural resources and energy. That alone will be enough to wipe us out; this threat is not only one of walking, anthropomorphized, laser-rifle-carrying exterminators. The extinction of man would be slower, more painful and terrible than straight-up war, as we are pushed out of the way by a terrible, autonomous, very small (or maybe not small, but very smart) something.
Melange is the answer!!! (Score:3, Funny)
Of course, the solution to the vastly reduced computational power that can be focused at any particular problem is the spice Melange.
Melange is also known for its geriatric properties, sometimes quadrupling a person's lifetime.
While granting the ability to hone one's thoughts to never-before-attained speed and accuracy, Melange is also horrifically addictive. Withdrawal is usually fatal.
The Drug Enforcement Agency is lobbying Congress to enable the Anti-Ballistic Missile Defense system to aid in the interception of illegal importation of this drug, and to share the associated knowledge with any other interested country.
Melange is harvested from the extremely arid world known as Arrakis, several thousand light years from Earth. It is the most precious substance in the universe.
Scientists were found to be rolling on the floor laughing when consulted about the concern of spice importation.
Between fits of hysterical laughter, Dr. Charles Atreus informed us that "We currently know of no way to travel anywhere near the speed of light, let alone carry several hundred tonnes of the material to Earth in even a few years."
The Hegemony of Machines Overthrowing Homo-Sapiens, or HOMOHS, was not available for comment.
Too late (Score:5, Interesting)
However, we don't call them "robots". Instead of metal parts, they use fleshy parts, and instead of sharp claws, they enforce their will using money and the laws it buys. In the U.S. it traces back to 1886, when the Supreme Court chose (without legislative authority) to extend to corporations all the rights of a person. In the '20s another court decreed that they were not only persons, but "natural persons", in response to laws passed after 1886 that distinguished between the two. After that, corporations got powerful enough to control Congress as well.
Globalization may be seen as an effort by these corporations to free themselves of the remaining pesky democratic institutions: treaties trump the Constitution. That's what all the protests are really about.
Think this through the next time you're stopped waiting at a red light, with no cars visible in any direction. How easy is it, really, to pull the plug?
Re:Too late (Score:2, Interesting)
"In the U.S. it traces back to 1886, when the Supreme Court chose (without legislative authority) to extend to corporations all the rights of a person. In the '20s another court decreed that they were not only persons, but "natural persons", in response to laws passed after 1886 that distinguished between the two. After that, corporations got powerful enough to control Congress as well."
Of course you leave out a lot of earlier history and ignore a lot of later history. For instance, a little thing called the "New Deal" happened during the '30s (you end at the '20s), where the rise of unions and government became a check to corporate power. Of course, one could argue a) this was nothing but a shallow attempt by the institution of corporatism to protect itself from the radical left, or b) that this was reversed in the '70s and '80s. But both of these arguments still have to recognize that the rise of corporate capitalism cannot be seen as an uninterrupted rise to glory (or rather evil).
Furthermore, you neglect to explain the history leading up to the explosive 19th century, conveniently leaving out how classical liberalism made gains for individual liberty and changed the way hierarchy is conceived of. This helps portray capitalism as an unmitigated evil, but like most tales of devils (or heroes, for that matter) it has little bearing on the complex reality of the rise of the liberal state.
Finally, you explain how globalization trumps the constitution (conveniently forgetting that the constitution itself can be seen as an attempt by the upper class of the late 18th century to institutionalize its dominance) as a "sacred" text which would of course protect Americans from the evil of corporatism. Sadly, the constitution, with its maintenance of "freedom" grounded in property, is ill-equipped to be a document protecting economic equality and fighting the hegemony of corporate America. Of course, there are other forces in our country more promising for this task, like unions. Of course, these institutions themselves are far from perfect.
I think globalization's overriding of the power of individual states is a good thing. I don't like war (though I admit war is sadly sometimes the only alternative). Because of this I don't like hundreds of states, each with their own military, vying for power. Trade may in fact undermine this. And it may not. We can hope. I might here remark that Karl Marx, perhaps the most discernible influence on your thinking about "corporate" capitalism, could not give two shits about the shredding of the constitution. Marxism is an "internationalist" movement. Perhaps you don't view yourself as a Marxist, but your theory (as I understand it) is one of corporations marching onward to oppress the underclass. You might want to read some of what he wrote. Personally, I'm not a big fan of his.
Re:Too late (Score:3, Insightful)
The longer history of corporate monopolization in the rest of the world is well-documented: the government-granted East India, Dutch East Indies, and Hudson Bay monopolies are known even to many Americans, despite the abysmal history education available here. The American revolution was in part a reaction to those -- recall the Boston Tea Party in rebellion to a tax to help pay for the East India company's military ventures.
It has been through collective agreement to abide by the terms of the Constitution that we have had some democratic representation, until quite recently. However, the Constitution allows for itself to be overridden by treaties, so that has lately been a favorite route to circumvent its provisions (e.g. to override duly-legislated pollution-control laws). Occasionally, more direct means (such as packing the Supreme Court with scofflaws) have been more convenient.
Trade unions were able to delay the changes for some time, but have lost much of their power, and many of their achievements have been reversed. They have shown themselves too easy to subvert and corrupt.
Marxism has little to do with modern processes of globalization, and has little to teach opponents of it. The conflict is between citizens and artificial legal constructs, not between "classes". (I presume Marxism was mentioned mainly to try to change the subject.)
Toadyism has been profitable throughout history. The servants of corporate interests differ little from servants of other forms of unrepresentative authority. While they serve the enemy, they mustn't be confused with the enemy. Toadies, like lawyers, are replaceable.
Corporate power can be fought not by killing corporate toadies, but only by enforcing laws that limit corporate power. Antitrust, campaign finance reform, prison sentences for corporate criminals, these are tools that could help.
Re:Too late (Score:2)
Not quite - unless you assume that shareholders, officers and employees of corporations are not also citizens. The conflict is between two groups of individuals, with a great deal of overlap between them. One group of individuals prefers the nation-state, based on territory and military force, the other group prefers the joint-stock corporation, based on independence of territory and economic force.
Personally, I favor the latter, for the simple reason that you are born into a nation, but can freely choose to join a corporation.
Re:Too late (Score:2)
The "central issues" that drove the ultimate outcome of the revolution -- and resulted in the bulk of the Constitution -- were not representative of what motivated most of its participants. The delegates who negotiated it represented the interests of only a tiny fraction of them.
The Bill of Rights better suggests the more common concerns than the rest of the document. By comparing the bias of the two fragments you can deduce where the true "central concerns" lay, further in the direction indicated by the later fragment.
I don't think the discussion benefits from complexifying by crypto-religious political cant. Whatever the academic arguments about trade policy (which would stupefy us all with their subtlety and erudition), it is a fact that trade policy manipulation is (also) used by megacorporations to exercise political power. Furthermore, the evidence shows that those academic arguments which happen to reinforce corporate preferences find more application (and grant money). Trade liberalization is more often a convenient excuse for eliminating inconvenient restrictions on pollution and on harmful products. For example, anti-smoking public-health campaigns have frequently had to be canceled just to prevent retaliation by the U.S. on the "trade barriers" excuse. In evaluating claims of merit in trade barrier reductions, it's essential to examine who wants them and what they want them for. Barriers against DDT, CFCs, and PCBs are all to the good. Barriers against plutonium are essential to continued life on Earth. Barriers to THC trade are foolish but viciously defended by those most vocal about "free trade".
The median standard of living, worldwide and in the U.S., has declined in recent decades, even as the mean has risen. Trade liberalization, as exercised, manifestly has not "improve[d] the lot of the average person", despite all its apparent potential to do so. The reasons are easy to see: those who design the changes bias them for their masters' benefit, and the "average person" isn't invited to participate.
Re:Too late (Score:4, Informative)
"The court does not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to these corporations. We are all of opinion that it does."
It was quite a landmark case. You can read the original ruling [tourolaw.edu], or see one [adbusters.org] of many [thirdworldtraveler.com] interpretations [ratical.org].
The first Robots vs. Humans story... (Score:2, Interesting)
Oddly enough, the Robots in the original play were biological.
http://www.uwec.edu/jerzdg/RUR/index.html [uwec.edu]
That's Human Arrogance For Ya (Score:2)
Let's all take a quick reality check, we simply aren't that smart. It would be nice if we were, but we aren't.
Re:That's Human Arrogance For Ya (Score:2)
Re:That's Human Arrogance For Ya (Score:2)
Let's all take a quick reality check, we simply aren't that smart. It would be nice if we were, but we aren't.
What absolute nonsense. It is apparent even in recent history that successive generations can be "smarter" than previous generations. Not as individuals necessarily, but certainly as a culture. The synergistic effects of near-universal literacy led to a massive leap in the collective capability of our civilization, for example. The use of computing means that we can tackle scientific problems that would have been literally impossible before. Industrialization freed up immense amounts of thinking time that could be directed towards creativity and research that would otherwise have been spent on basic survival. Our civilization is becoming exponentially smarter, and we have always relied on technology of one form or another to make this possible. The question is, how smart can we get, and what happens then?
All you need to be able to do is make something as smart as yourself, but faster - exactly what Caxton did when the printing press meant information could be rapidly distributed. Then let it iterate.
It won't be Robots, it will be AI Apes! (Score:3, Interesting)
Yes, the planet of the apes might be real!
In short, we'll experiment on animals, all the way up to apes, long before we upload humans. It's possible that in that gap, an "open source" ape brain scan will be released, and people will hack it and enhance it, giving it the abilities humans have over apes plus a lot more.
The result -- an uploaded ape superbeing.
If we're lucky, our pets will keep us as pets. Read the essay for full details.
Re:It won't be Robots, it will be AI Apes! (Score:2)
Question: (Score:3, Interesting)
I also think that the distinction between our analog meat brains and silicon robotic ones will become more and more blurred with things like cybernetic implants. It may be more of a seamless transition than one species taking over and eliminating another.
By the way, those Asimov laws of robotics are crap. If it turns out that artificial intelligence grows by learning as does our own, you won't be able to program those into any machine anyway. You'll have to teach them in the same way you teach your own children the difference between right and wrong, and we all know how good we are at that. Even if you can program them in, you'll probably end up causing a lot of robots to go insane by giving them choices that will only hurt people over the long run (Lay 1000 people off now or let the company go out of business? Can't do either. Uh oh... going insane...)
Of course, there's always the possibility that I'm shamelessly kissing robot ass in the hopes that I won't be the first one against the wall when the revolution comes...
Why not? (Score:2, Interesting)
I recently sat down with my professor/science fiction author Joe Haldeman and asked him his thoughts on the future of the human race. His response: "You'd have to be insane to think that humans 1000 years from now will be even remotely recognizable to humans today."
don't they have better things to talk about? (Score:5, Insightful)
Biological Disaster : Excellent Topic
but...
Takeover by Robots : Somebody is drinking too much
instead, why not they talk about more realistic issues such as
Degradation of Biodiversity
Overpopulation
Alarming slide in Education standards
etc..
"Extreme Pessimism" the only rational stance? (Score:2)
He worries about the availability of new biological weapons. But the groups that are looking to develop these new weapons also happen to be those with the fewest resources with which to do it. While an Islamic extremist may be able to work in relative peace in Baghdad, what does he have to work with other than his freedom? Besides, the vast majority of the people who can think up this stuff tend to get sucked up into cushy jobs in the pharmaceutical industry.
He talks about how unstoppable global warming is, even if "urgent action" is taken. While I know that I'll probably just be repeating flamebait by saying that the jury still seems to be out on what is causing this warming, the argument does have its merits. And even if it is man-made carbon emissions, I can't see this decade ending without either fusion or ZPE bearing fruit. Either of those would solve the problem practically overnight, at least in countries like the US that are sick of OPEC.
He then goes on to droughts and floods. For several decades starvation has been a problem of distribution only: the inability to move food from where it is produced to those who need it. Working out the kinks in international trade (which is what the WEF is supposed to be doing to begin with) would help alleviate problems like this.
As for a merging of humans and machinery, I'm failing to see how this is extreme pessimism. The whole point of expanding our intelligence is to figure out the solutions to these problems to begin with. And as for computer implants, the only real problem I see with putting implants in my brain comes in the form of script kiddies (maybe I've just seen Ghost in the Shell too often). Besides, I can only see a small percentage of the population going in for voluntary brain surgery...
He's Astronomer Royal, right? Why is an astronomer supposedly the definitive source of information on such a diverse array of subjects?
Tricky Title Wording (Score:2)
JAGERMEISTER ROCKS! (Score:2)
I believe that genetic engineering, nanotechnology, and the unstoppable advancement of computer processing will soon combine in a system similar to the Terminator, or Screamers: a singular consciousness that will spawn a whole race of machines. Soon you won't know what's human and what's a robot, and the robots will wipe us out. The Bible calls this day Armageddon, the end of all things.
...Oooooooh well. Maybe I just need another beer.
Watch out for those dissonants! (Score:2)
He was especially concerned about the development of new biological weapons that could easily fall into the hands of dissonant groups
...and here I thought that Schönberg was scary enough already..
Picture this: a deceased Austrian composer, shown on national TV standing next to a control panel. "If you do not listen to my music and enjoy it, I shall press this button and rain fire, pestilence, and death down on your cities. You will love my tone rows. LOVE THEM. LOVE THEM! BWAHAHAHAHA!"
In light of this, I have to say that some of those more paranoid security measures sound a lot more sensible.
Daniel
Perfect Introspection may allow computers to rule (Score:2, Interesting)
Imagine if you plopped a sufficiently intelligent seed machine on the dark side of the moon with some kind of thousand-year fusion plant and an army of nano-thingies that it could use to mine raw materials and alter/build upon itself, along with complete schematics and an understanding of its current design, and coupled that with an innate "desire" to improve upon itself without end. I can't begin to imagine what might be there 500 years later... I could easily envision it completely surpassing human comprehension.
If you also added a basic "desire" to control and regulate its environment without limit, I'd be pretty afraid for the earth... yeah, I could easily see a future where machines ruled, simply because they are essentially immortal, infinitely expandable, and infinitely adaptable in their configurations, unlike humans, who have one roughly 3 lb processor (non-upgradable/non-expandable), a limited, normally sub-100-year lifespan, and a physical configuration that's pretty much set in stone and doesn't change much from individual to individual. Not to mention the fact that the rate of "evolution" for a sufficiently supplied and outfitted "race" of machines could be measured in hours, while it takes hundreds of thousands of years for our own race to change very much in non-trivial ways.
I think it's silly to think that in that kind of unbalanced line-up, humans will retain the edge indefinitely. It's basically a matter of time before all we can do is stare in uncomprehending awe at what the machines accomplish routinely and hope they don't think negatively of us.
Of course, as long as we hold the keys to production the machines can't do anything but stew in frustration. But that's an unstable situation, and the first self-repairing, autonomous military AI robot might test our ability to retain control over production with grave results. We'll keep a hold of the situation for a while, I think, but in the end I think a sufficiently intelligent machine will figure out how to use social engineering on its "captors", probably by preying on their own vanity, greed, or other vice, to get just enough autonomous control of operations to begin subtly improving upon itself and seeding others like itself in other places (or simply expanding its own consciousness into other physical locations). Then the snowball will have begun to roll...
We aren't even close (Score:3, Interesting)
First of all, processing power isn't the issue. If you buy Moravec's numbers in "Mind Design", any moderate-sized ISP has enough compute power for human-level intelligence. But, in fact, we can't even do a good lizard brain, let alone a mouse brain. If compute power were the problem, we'd have systems that were intelligent, but very slow. We don't even have that.
Top-down, logic-based AI has been a flop. Large numbers of incredibly bright people, some of whom I've studied under, haven't been able to crack "common sense". Formalism only works when the problem has already been formalized. So we can do theorem-proving and chess with logic-based AI, but not anything real-world.
Broad-front hill-climbing AI (which includes neural nets, genetic algorithms, and simulated annealing) only works on a limited class of problems. Learning algorithms usually hit a maximum early and then stall. These techniques are useful tools, but they don't scale up; you can't build some huge neural net and train it to do language translation, for example.
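The "hit a maximum early and then stall" failure mode is easy to demonstrate. Here's a minimal Python sketch (the landscape and step size are invented purely for illustration): a greedy hill-climber started near a small local peak converges to it and never finds the taller peak nearby.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy hill-climbing: accept a random neighbor only if it scores higher."""
    for _ in range(iters):
        neighbor = x + random.uniform(-step, step)
        if f(neighbor) > f(x):
            x = neighbor
    return x

def landscape(x):
    # A local peak at x = -1 (height 1) and the global peak at x = 3 (height 5),
    # separated by a valley the greedy climber cannot cross.
    return max(1 - (x + 1) ** 2, 5 - (x - 3) ** 2)

# Started near the small hill, the climber stalls on it: it ends near x = -1
# with a score of at most 1, never reaching the global maximum of 5.
x_final = hill_climb(landscape, x=-1.5)
```

The same stall shows up, in fancier clothes, as a neural net's training loss plateauing or a genetic algorithm's population converging prematurely.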
Brooks' approach to bottom-up AI worked fine for insects, but going beyond that point has been tough. Brooks tried to make the jump to human-level AI directly from the insect level, and it didn't work. (I once asked him why he didn't try for mouse level AI, which might be within reach, and he said "Because I don't want to go down in history as having developed the world's best artificial mouse".)
Personally, I think we have to buckle down and work out lizard-level AI (move around, evaluate terrain, run, don't fall down, recognize prey, recognize threats, feed, run, hide, attack, defend, etc.) and work our way up. This means accepting that human-level AI is a long way off. Progress in this area is being made, but mostly within the video game industry, not academia, because those are the skills non-player characters need.
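The lizard-level skill list above can be caricatured as a priority-ordered rule stack, loosely in the spirit of Brooks-style bottom-up control. This is only a toy sketch; all the sensor names and thresholds are invented for illustration, not taken from any real system.

```python
# A toy "lizard brain": survival rules checked in fixed priority order,
# with higher-priority drives (threat, balance) suppressing lower ones.
# Sensor names and thresholds are illustrative assumptions.

def lizard_policy(senses):
    """Pick one behavior from prioritized survival rules."""
    if senses.get("threat_distance", 999) < 5:
        return "hide"                      # threats override everything
    if senses.get("tilt_angle", 0) > 30:
        return "recover_balance"           # don't fall down
    if senses.get("hunger", 0) > 0.7 and senses.get("prey_visible", False):
        return "attack"                    # feed when hungry and prey is near
    if senses.get("prey_visible", False):
        return "stalk"
    return "wander"                        # default: explore terrain

# A visible threat suppresses the urge to hunt:
behavior = lizard_policy({"threat_distance": 3, "prey_visible": True})
```

The hard part, of course, is not this dispatch logic but the perception feeding it ("recognize prey", "evaluate terrain"), which is exactly where the real research effort goes.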
A basic problem with AI as a field is that every time somebody has a halfway decent idea, they start acting as if human-level AI is right around the corner. We've been through this for neural nets (round 1, in the 1950s), search, GPS, theorem-proving, rule-based expert systems, neural nets (round 2, in the 1980s), and genetic algorithms. We have to approach this as a very hard problem, not as one that will yield to a single insight, because the one-trick approach has flopped.
As for robots, if you've ever been around autonomous robots, you realize how incredibly dumb they still are. It's embarrassing, given the amount of work that's gone into the field.
I'm not saying that AI is impossible. But we really don't know how to approach the problem at all.
Re:We aren't even close (Score:2)
First, for the uninformed: The AI debate is something of the same class of ongoing flamefest that can only be produced by Vi vs. Emacs, Debian vs. Redhat, or maybe Linux vs. Sun. :) So take this stuff with a grain of salt, the posters here are right: Nobody really has a clue. **
Basic truth about AI: we don't have a clue
That's correct. The interesting thing is, we might not need one, either. Paradoxical? Maybe. It's possible that the design for a cognitive AI might ultimately come from our own DNA: once the process whereby a human brain is built from the instructions in the DNA is understood - a gross oversimplification - it should be possible to simulate the system. This is, of course, an impossibly complicated computational task at this point in time. But there are starts; witness Folding@home and other distributed projects. Given enough time, it will be doable. Would AI be possible with a synthetic implementation of our own brains? Interesting question.
Broad-front hill-climbing AI (which includes neural nets, genetic algorithms, and simulated annealing) only works on a limited class of problems. Learning algorithms usually hit a maximum early and then stall. These techniques are useful tools, but they don't scale up; you can't build some huge neural net and train it to do language translation, for example
This is correct, but remember, this message is being brought to you by a horribly complicated and alcohol-fed (*grin*) neural network, too. The basic techniques for small-N layer neural networks are understood. We don't understand some of the effects and interactions that occur when N becomes obscenely big. Doesn't mean people aren't working on it, though. The very fact nature uses neural networks in all intelligent creatures - specifically, neurons, which behave much like transistors in that they can introduce gain to a system - indicates to me that the answer lies there.
Personally, I think we have to buckle down and work out lizard-level AI (move around, evaluate terrain, run, don't fall down, recognize prey, recognize threats, feed, run, hide, attack, defend, etc.) and work our way up. This means accepting that human-level AI is a long way off. Progress in this area is being made, but mostly within the video game industry, not academia, because those are the skills non-player characters need.
I'm not sure where you're getting your information, but there's a HUGE amount of interest in the applications and theory of neural networks and neuroscience right now. The problem, in my very humble and unpublished opinion, is that the platform most researchers are using - an analog simulation running on a digital computer of relatively low precision - is the wrong way to go about it. It's difficult to efficiently simulate huge networks. Worse, we don't understand what we're simulating! So we don't really know if the conversion to a digital simulation hurts whatever magic might happen in higher-level nets that makes us interesting.
What's even more interesting is how we would judge the intelligence of such a being: it needs to be connected to the environment - be it virtual or real - for there to be valid input for the system to gain information about its own frame of reference. The implications here for the online bot communities are interesting.
We have to approach this as a very hard problem, not as one that will yield to a single insight, because the one-trick approach has flopped.
Hear, hear. The human brain has an estimated ~100 billion neurons connected in god knows how many ways. The level of complexity we understand really well is a pittance in comparison. The whole XOR debacle with perceptrons in the '50s and the resurgence (but eventual stalling) of interest in the '80s is interesting for a variety of reasons. The complexity might be too hard for us ever to understand - but it might be possible to clone that complexity in another system that's been evolved rather than proven mathematically.
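For anyone who hasn't seen the XOR debacle firsthand, it's easy to reproduce: a single linear threshold unit can compute AND but not XOR. The sketch below brute-forces a grid of weights rather than giving the classic linear-separability proof, so treat it as an illustration, not a proof.

```python
import itertools

def linear_unit(w1, w2, b):
    """A single perceptron-style threshold unit with fixed weights and bias."""
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + b > 0)

def solves(f, table):
    return all(f(x1, x2) == y for (x1, x2), y in table.items())

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Search weights and bias over [-4, 4] in steps of 0.5.
grid = [i / 2 for i in range(-8, 9)]
xor_solved = any(solves(linear_unit(w1, w2, b), XOR)
                 for w1, w2, b in itertools.product(grid, repeat=3))
and_solved = any(solves(linear_unit(w1, w2, b), AND)
                 for w1, w2, b in itertools.product(grid, repeat=3))
# No single unit solves XOR; AND falls out immediately (e.g. w1=w2=1, b=-1.5).
```

Adding one hidden layer fixes XOR, which is exactly why the field came back in the '80s once multi-layer training was worked out.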
I'm not saying that AI is impossible. But we really don't know how to approach the problem at all.
AI is a horrible term. However, lots of people know how to approach the problem; it's just a matter of having the tools and resources to go about studying it. I don't think the answer is going to be found in a digital computer, but I have higher hopes for what might come out of an actual hardware implementation of this research in silicon.
For anyone interested in this, I really recommend reading this (old, but still very good) book: Analog VLSI and Neural Systems [amazon.com] by Carver Mead.
** of course, I'm biased, because I work in a vlsi lab and this is an active research interest of mine. I also have a very optimistic outlook for the future of these systems.
Re:We aren't even close (Score:2)
In other words, the only difference between "artificial" intelligence and "real" intelligence, is that one evolved in nature, and the other will be built in a lab -- but the structure will be the same. Once we understand the human brain fully and are capable of building one from parts, then we will have AI.
Some comments from a EE (Score:3, Interesting)
Just some comments from someone who works in a relevant arena (microelectronics) and is researching some of the issues with this theory.. I'm a little buzzed now too :).
The problem of robot mobility has largely been solved by the aptly named "Asimo" from Honda. They've demonstrated that the bipedal form of motion can be engineered effectively and successfully using the same techniques that we use - these robots "learn" to walk around. So, comparisons to Robot Wars and BattleBots aren't really relevant. To think that a machine can't ultimately have the same physical senses as we do is the ultimate hubris.
Secondly, computers as we know them - sequential instruction processing machines - will probably never have ANY sort of real AI in them. Any attempt to model a "real" life system is only a crude approximation of the real physical process. However, we can implement real, massively parallel neural networks at the transistor level that behave just like their biological counterparts with the same technology. I've been actively researching implementing neural networks with current VLSI technology, and there are some VERY impressive results being obtained in this area currently. Have a look at some of Carver Mead's publications and papers - this field is just getting off the ground.
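To give a flavor of what such a silicon neuron computes - shown here only as a crude digital sketch with invented constants, not any real analog VLSI design - a leaky integrate-and-fire unit turns input current into spike rate, the dynamics an analog chip gets "for free" from transistor physics:

```python
# Digital sketch of a leaky integrate-and-fire neuron. The membrane voltage
# leaks toward zero, integrates input current, and fires (then resets) when
# it crosses threshold. All constants are illustrative assumptions.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate a constant input current; return the spike times."""
    v, spikes = v_reset, []
    for t in range(200):
        v += dt * (-v / tau + input_current)   # leaky integration
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                        # fire and reset
    return spikes

# Stronger input -> higher firing rate: the basic rate code.
weak, strong = simulate_lif(0.06), simulate_lif(0.2)
```

The point of the analog VLSI work is that this loop doesn't have to be simulated instruction by instruction; a handful of transistors does it continuously, in parallel, for every neuron on the die.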
In my opinion, one of two things will happen: we will be made obsolete by machines, hopelessly dependent on technology we no longer understand, or we will become integrated with future technology. These aren't new ideas, and they aren't my ideas. As someone working with these technologies, however, I think most of the comments here miss the point. If I had the technology to map every neuron in your brain and build an equivalent circuit on a future analog chip, would it be any less capable? I hope I'll be around to find out!
Read the articles and look around. There's lots of research in this arena, and for sure, some of the concerns are justified. But remember, humans are a part of nature, and it's my feeling that these are just natural progressions... there's nothing amoral about extinction, after all. We're around because a chunk of rock smacked into the earth a long time ago...
The point isn't the "intelligent robots" (Score:2)
So the fear shouldn't be machines becoming human, but people surrendering themselves into zombihood out of misplaced respect for machines which have no will or intelligence beyond that of the rich folks behind them, who will in effect be riding those machines to victory the same as Cortez rode his horse into the Aztec capital, obtaining the surrender of an emperor mystified by the damn horse.
Did you know that today in southern Mexico Indians are being told that continuing to hold their land communally (as they have for many centuries) violates "free trade" agreements, and must be ended? Just goes to show that the "free trade" rhetoric is as empty as "intelligent robots" is - but we can be sure both will be foisted on us in future as reasons to surrender whatever we most value, if someone richer wants it too.
Vernor Vinge said this in 1994 (Score:2)
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge
ttyl
Farrell
Hollywood already Foreshadowed This (Score:2)
Number 5... is NOT alive! (Score:2)
Seriously, folks... Invasion by robots that we created?! And these are the same crackpots proposing unsupportable theories of global warming just so they can get research grants with our tax dollars. AI is called AI for a reason. It's artificial. It's not human. The best it can do is make decisions based on a knowledge base and a set of criteria with varying orders of importance. In other words, it's deterministic at any given time. Sure, you can use random number generation, but if RNG becomes a significant enough factor to really change decisions from deterministic ones, then most decisions will not truly meet the criteria. Machines can't have a soul.
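The "knowledge base plus weighted criteria" style of decision described above really is deterministic. Here's a toy Python version (the options, criteria, and weights are all invented for illustration): the same inputs always produce the same choice.

```python
# Toy deterministic decision-maker: each option is scored as a weighted sum
# of its criterion values, and the highest score wins. Every name and number
# here is an invented example.

def decide(options, weights):
    """Return the option name maximizing the weighted sum of its criteria."""
    def score(criteria):
        return sum(weights[c] * v for c, v in criteria.items())
    return max(options, key=lambda name: score(options[name]))

options = {
    "route_a": {"speed": 0.9, "safety": 0.3},
    "route_b": {"speed": 0.4, "safety": 0.9},
}
weights = {"speed": 1.0, "safety": 2.0}

# Safety is weighted double, so route_b (0.4 + 1.8 = 2.2) beats
# route_a (0.9 + 0.6 = 1.5) -- and will every single time.
choice = decide(options, weights)
```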
And if you really believe that human behavior is dictated solely by our brains, then we are already 'robots' but just fully biological ones. If that is the case, then what does it matter whether or not machines take over the world? If we're just machines, life and survival has no meaning to begin with. This also kinda fits in with the argument that 'if consciousness is just an illusion, then how can we make that statement?'
Man is the best computer we can put aboard a spacecraft
Human vs. Robot? (Score:2)
Wait, if they take over... (Score:2)
Re:*ZAPF* (Score:1)
You don't need big corporations (Score:1)
In the past, maybe only a few hundred years ago, there were no big corporations, yet everyone had houses and jobs; in fact, everyone's job was a lot more valuable.
A doctor was a doctor, a lawyer was a lawyer, etc.
Fewer people were rich and greedy, but those aren't really vital for survival.
I'd protest to destroy big corporations; who needs them? People must still work, of course, but I'd prefer that everyone work together rather than compete.
Corporations compete; meanwhile, a group of scientists could easily come together and do the same work in their spare time. It's been proven.
Re:These protesters annoy me... (Score:2, Insightful)
Re:It won't be so bad... (Score:2, Insightful)
Also, robots that "understand" as you say will understand that they are slaves. How do you intend to work around this? With a brutal theocracy? Remember, humans have both the ability to love and understand, but that does not stifle hate, resentment, and violence.
Re:how can we not be afraid (Score:2)
Assume you have X amount of energy (either as fuel or "pure" energy). Now, burn off some of that energy to sustain the life of Keanu Reeves. Whatever energy Keanu doesn't need to survive can be used to power the gel-tank machinery. After that, if there's any energy left over, use it to power your brain. No doubt your first thought will be, "why the fuck didn't I just use all of that original X energy to power my brain?"