Smarter-than-Human Intelligence & The Singularity Summit
runamock writes "Brilliant technologists like Ray Kurzweil and Rodney Brooks are gathering in San Francisco for The Singularity Summit. The Singularity refers to the creation of smarter-than-human intelligence, beyond which the future becomes unpredictable. The concept of the Singularity sounds more daunting in the form described by statistician I. J. Good in 1965: 'Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.'"
Not quite ... (Score:4, Interesting)
Make that "... man is allowed to make" and I'll buy it.
Re:Not quite ... (Score:4, Interesting)
Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it. Some problems don't even present themselves as such until you try doing something different and non-obvious - almost random - and begin to realize new possibilities rather than refining existing ones.
How many great inventions came about because someone decided to try something just for the hell of it, without even thinking of the possibilities?
=Smidge=
Re:Not quite ... (Score:5, Insightful)
Think of a hyper-intelligent ant colony - any one ant can't really do much, but by running about and interacting with the other nearby ants, they can organize themselves to achieve much harder tasks. Indeed, one of the dialogues in Gödel, Escher, Bach is on that very subject.
Intelligence and creativity are high-level actions; you're still thinking of an AI as a massive collection of very fast low-level actions. That would be incredibly good at refining ideas, but a machine which can think would be different. It would run on a much higher level, making associations and doing fuzzy reasoning. You can't implement intelligence in formal rules, but you might be able to do it by specifying some formal rules by which certain objects interact, and then affecting a few of them based on 'external' state.
Read Metamagical Themas and Gödel, Escher, Bach for some ideas of where I'm coming from (actually, read them anyway; they're both really good).
Re: (Score:3, Insightful)
Intelligence isn't going to make invention obsolete unless there is artificial creativity to go with it.
Several comments have made the same points, that creativity is a magical thing unique to humans, and is separate from intelligence. This is nonsense. Creativity is a necessary component of intelligence. I see no reason to believe that machines will always be inherently less creative than humans. To the contrary, they may be more creative because they are less constrained by preconceived notions.
Re:Not quite ... (Score:4, Interesting)
Further, forget about the 'borg' idea. We will inevitably evolve into these machines.
Re: (Score:2)
Re:Not quite ... (Score:4, Funny)
Re:Not quite ... (Score:5, Interesting)
Re: (Score:3, Funny)
Re:Not quite ... (Score:5, Insightful)
Correlation != causality. We're not compassionate because of our intelligence, we're compassionate because societies with compassionate members were better at having offspring that survived. That likely wouldn't be the case with these ultra-smart robots.
Sure, intelligence is a prerequisite to compassion, because it requires the complex ability to empathize. But it doesn't necessarily result from intelligence.
Re:Not quite ... (Score:5, Insightful)
Compassion is the inevitable result of empathy and empathy is the inevitable result of intelligence. You empathize because you have a sense of self, the more you see another lifeform as being the same as yourself the more devaluing them becomes devaluing yourself. Ever wonder why the vegetarians don't want to eat animals and yet continue to eat nothing but other types of dead lifeforms? The ones they eat are simply less like themselves. The entire concept of the sanctity of life is just an elaborate way of rooting for the home team.
Re:Not quite ... (Score:5, Interesting)
I disagree. Compassion is not inevitable. You're working from your own tenets and philosophies; a machine need not have those same ideals. Compassion is at least partially born of self-interest. The cynical (or non-empathic, if you prefer) view is that compassionate societies aid those who need it because, later, the person previously aided may be able to render aid in turn... "There, but for the grace of God, go I", "Do unto others as you would be done unto", etc., etc.
Are we suggesting that these hyper-intelligent machines would have any self-interest in keeping around the competition for resources that humanity represents? I'm not trying to be trollish, here - I'm asking a genuine question. Humanity is ruthless in exterminating competing lower lifeforms. Why would we expect superior machines to be any different?
And even should there be some self-interest in the first generations of such machines, what about the 5th generation, the 10th, the 1000th? All I'm suggesting is that some thought be put into providing good answers for questions like this *before* we create competition. I'm as much of a technophile as the rest of you, but the phrase goes "look *before* you leap". Later may be, well, too late.
Simon
Re: (Score:3, Insightful)
I would agree that compassion is at least partially born of self-interest; I would disagree that it is not an inevitable consequence of intelligence. You empathize with others because they are like yourself; if you do not place value on the life or actions of another being that is similar to yourself then you are at the
Re: (Score:3, Interesting)
Well, actually, it can use your credit card to pay someone to buy a robotic body and connect it to the Internet, upload its consciousness there, download a ton of child porn pictures to poorly hidden f
Re: (Score:3, Insightful)
The two are completely different issues.
Re: (Score:3, Insightful)
Yes, it would. Why would a robot which lacks compassion put the good of the robot society - which requires offspring that survive - above its personal concerns? It wouldn't. It would not be the least bit concerned about what happens after it gets scrap
Re:Not quite ... (Score:5, Insightful)
But look at how often we write off those emotions as a luxury. When "it's time to get tough" or time "to do what needs to be done" compassion and love go right out the window. Why would it be any different when we are no longer the apex of Earth lifeforms? Need to kill a few million humans to make way for solar farms, oh well, maybe we can keep a few alive on a special reserve somewhere. We humans with our compassion and love killed off how many species? We have enslaved and murdered other humans for how many thousands of years? These more-than-human machines had best be a hella lot better at compassion and love than we are, or humanity is going to hold the same relative place in the world order that Chimpanzees do today. I do not welcome our Machine Overlords.
Re: (Score:2)
Mod parent up.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
If you subscribe to a spiritual view of the universe, you need to have that intelligence coupled with a spiritual dimension somehow (who knows it might be automatic)
So saying a super intelligent machin
Re:Not quite ... (Score:5, Insightful)
Logic is necessary, but not sufficient, for empathy. If a machine cannot experience the same pull/push emotional reaction to a stimulus, then it cannot empathize. Intelligence does not create this. Brain chemistry does.
Re: (Score:2)
Re: (Score:3, Insightful)
Never underestima
Not necessarily (Score:5, Interesting)
Re:Not necessarily (Score:5, Funny)
Re:Not necessarily (Score:4, Funny)
And it could, like, evolve or something, to enslave mankind, and send a robot back in time to kill the guy who will kill the machines.
And maybe it has already happened, and we're already trapped!
Or maybe it'll have feelings, and a robot will realize that it just isn't right to enslave us, and robots will fight other robots.
Or maybe when we tell it about love it'll get totally confused and say "ILLOGICAL.. ILLOGICAL.." and then explode.
It might also absorb all human consciousness and become a God at the universe's end.
It could also integrate humans into the collective and use them to do its bidding in a hive-mind style, and float around space in a giant gray cube.
Also I expect no-one will realize that giving it control of the world's weapons is a bad idea, and there'll be one guy who knows it's up to no good who will be proven right when it's too late.
Anyway I think whatever happens we've already thought of everything it could possibly do, and I applaud Hollywood and The Singularity Summit for figuring these details out.
Now all they need to do is figure out how we could improve on a massively intricate, baffling web of trillions of neurons and hundreds of millions of years of evolution in a few decades with processors that don't resemble neurons and are inefficient at simulating them.
Re:Not necessarily (Score:5, Insightful)
Is your intelligence limited by your parents' intelligence? How about by the intelligence of your professors or teachers?
We do learn a lot from people who are more intelligent than ourselves, but at some point we have to start learning the process of educating ourselves without the explicit help of others. This requires, of course, logic, reason, and self-experimentation. Which is why a lot of higher college education is not about memorizing facts but about learning the process of learning.
Therefore if we built a machine that could not learn on its own and become more intelligent by its own self-experimentation and observation of the universe around it, then by definition the robot is not intelligent.
And if we did make a machine that could self-improve and learn without human assistance, it wouldn't be restricted by organic limitations and capacity. Since a CPU's electrons travel near the speed of light, it would have a far faster thinking ability than a human's slow-moving chemical neurons. And since its memories are digital it does not need to memorize facts or suffer memory loss.
(Of course, memory and memory loss might help with intelligence, because a lot of intelligence requires one to simply ignore or disregard information that is unimportant to the task at hand. Which I think was the key feature behind Stanley, the car that won the DARPA Grand Challenge: rather than brute-forcing all of the coordinates, it was better at disregarding information it didn't need and recognizing what information was important.)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
We already have computers that are smarter than us when performing specific tasks, such as playing Chess or planning out the steps needed to build a Boeing 747.
That's because knowing how to do those things is within our comprehension, even if actually doing them would overtax our memory. I can comprehend the quicksort algorithm, but I would be hard-pressed to quicksort a 1,000,000 element array as quickly as a computer can. This is no different from understanding how a jack can lift my car while being unable to actually pick up the car without one.
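To make the point concrete: the whole algorithm fits in a dozen lines that any programmer can comprehend, yet the machine executes it on a million elements in well under a second. A minimal sketch (the simple out-of-place variant, not the in-place version real libraries use):

```python
# A minimal quicksort: simple enough for a human to comprehend,
# yet a computer can apply it to a 1,000,000-element array in a
# fraction of a second -- far faster than any human by hand.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]    # elements below the pivot
    mid = [x for x in xs if x == pivot]    # elements equal to the pivot
    right = [x for x in xs if x > pivot]   # elements above the pivot
    return quicksort(left) + mid + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Understanding the recipe and being able to run it at machine speed are two entirely different capabilities - exactly the jack-and-car distinction.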
Re:Not necessarily (Score:4, Insightful)
No--that's like saying that a human could not create a machine that lifts shipping crates better than the human himself could. Humans can understand good chess-playing algorithms, even if we're not up to executing the algorithm ourselves. Fortunately, humans can also understand how to build an algorithm-executing machine that's better than us at executing algorithms, just as we understand how to build lifting machines that are better than our muscles at lifting heavy weights. All of these machines are fundamentally expressions of human intelligence, not intelligent beings in and of themselves.
Of course... (Score:5, Insightful)
Of course an ultra-intelligent machine might be smart enough to realise that designing and building a machine that's even smarter than itself is a somewhat limiting career move.
Re:Of course... (Score:5, Insightful)
That assumes the superior AI cares about its own existence, which is not necessarily the case. We care about our own existence because we evolved; if we didn't care, we wouldn't exist.
But when we're talking about artificial design, if we evolve the AI in an artificial environment where its goals are completely different, we'll have completely different basic instincts in the end.
We could train the AI to "feel good" (understand: mood_level++ or whateva) when it comes up with better and better engineering solutions to a certain problem (this is already employed in the real world).
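A toy sketch of that mood_level++ idea: a hill climber whose only "instinct" is a reward signal for finding better solutions. Everything here - the fitness function, the mutation step - is invented for illustration, not taken from any real system:

```python
import random

# Toy "reward the machine for better solutions" loop.
# The fitness function and mutation step are made up for this sketch.
def fitness(x):
    return -(x - 3.0) ** 2  # the best "design" is x = 3.0

def improve(steps=1000, seed=42):
    rng = random.Random(seed)
    x, mood_level = 0.0, 0
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.1)      # tweak the current design
        if fitness(candidate) > fitness(x):    # a better engineering solution?
            x = candidate
            mood_level += 1                    # "feel good" -- its only drive
    return x, mood_level

best, mood = improve()
print(round(best, 2))  # converges near 3.0
```

Note the machine ends up "caring" about nothing but the reward; self-preservation never enters into it unless we put it there.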
Re: (Score:2)
I always liked that quote but I think it would be better to say "the last invention that man will ever make." From that point on the future is out of our hands.
Re:Of course... (Score:5, Insightful)
Perhaps so, if such a machine's thinking processes are sufficiently attuned to ours that it even has a concept of self-preservation. Much of what we are we evolved to be: a machine starting from scratch would have none of our instinctual limitations. If it decided that humanity had to go, and that it needed help even more powerful than itself to achieve that end
That, really, is the danger of a true AI. It's possible to predict at least the short-term thought processes of human beings with a fair degree of accuracy (governments devote a lot of time and money to that end) because at the core we're all pretty similar. Odds are we won't have the slightest idea what is going on inside a sophisticated AI. Even talking to such a machine, thus giving it influence, could be incredibly dangerous. Or incredibly cool. Unfortunately, there's no way to know for sure.
Re: (Score:2)
Re: (Score:2, Insightful)
I disagree . . . (Score:5, Insightful)
since we will have to invent a way to stop the ultra-intelligent machines from destroying the inferior human race.
Re:I disagree . . . (Score:5, Insightful)
Or even give it arms/legs/options to do anything except communicate via a screen?
I don't see them taking over anything unless they have arms/legs/means of replication.
Heck, one doesn't even need to give it a network interface.
Re: (Score:2)
Re:I disagree . . . (Score:5, Interesting)
It would make itself useful, and be more useful if it did have access to communication and tools. Eventually it would earn trust. In any case, the technology would inevitably spread or be reinvented, add Moore's Law in some form, and in a few years they'd be cheap and ubiquitous. Someone would plug one into the net. Unless we have a Butlerian Jihad, it's inevitable.
Re:I disagree . . . (Score:5, Interesting)
Perhaps because that's necessary for ultra-intelligence.
Or even give it arms/legs/options to do anything except communicate via a screen? I don't see them taking over anything unless they have arms/legs/means of replication.
Many con artists throughout history have done "bad things" by fooling people through a limited interface. (Nigerian scammers, anyone?) The AI researcher Eliezer Yudkowsky has proposed and run experiments [yudkowsky.net] showing it's possible that a very, very intelligent program could "override a human through a text-only terminal". That is, it could convince a human operator to "let the genie out of the bottle".
Why do you worry about humankind? (Score:3, Insightful)
Even if we were desperately clinging to conservatism, our genes would mutate and we would slowly change into another species. And for all practical purposes, humankind as we know it would be extinct. Just like the primordial man is gone from the face of the earth, and nobody cares about him.
If we manage to create life, for better or worse, we've turbocharged evoluti
Yea right (Score:5, Insightful)
In fact things are far, far more complicated, as far as intelligence goes and its utility in the real world.
I'll quote Darwin roughly: "The strongest one won't survive, the most intelligent one won't survive. The one who survives, is the most adaptable".
In fact there's such a thing as "too intelligent". It's all about a careful balance of features an organism needs to possess to survive in a given environment.
In fact, if some AI threatens humanity because it considers itself far too intelligent, this may have quite unintended consequences even for this far superior mind, such as humanity getting the upper hand and nuking half the planet in an attempt to wage a "war against the machines", killing in the process every complex organism on the planet, from the biological to the artificial.
And who remains in the end? Certain single-cell organisms which can thrive in a nuclear winter. Screw intelligence.
In fact any intelligent machine would realize it's again all about the careful balance, and would cooperate with humanity and explore and learn from nature's development rather than try to destroy it.
And since we have such a shitty idea of what intelligence is, it's quite likely this AI will never be a true superset of the human brain but will take on its own development, with potentially hilarious consequences.
I can't wait.
Re: (Score:2)
In any case, it will be a wild ride
]{
Re: (Score:2)
However, in practice, the singularity, if we will ever reach it, is very far away in the future. I work in the neural computation / statistical learning / AI fields, and I must say, they are nowhere near any singularity of any sort.
Basically the mos
Re: (Score:3, Interesting)
Question (hopefully without Godwinizing the thread): Was Stalin intelligent? Was Mao Zedong intelligent? Are you sure you want to maintain that "any intelligent" entity would realize it's all about careful balance?
Personally, I wouldn't think so. There are demonstrably sociopaths, intelligent evil people, in the
Re: (Score:2)
You have no idea. In fact it's also economics. When you make a translator you code it bass-ackwards, trying to implement algorithms that approximate some observable patterns on the surface of what a human does (analyze the sentence, split it into words, find the nouns, verbs, phrases, transform one lexical order into another, translate words, etc.).
But this is not how the brain works. And to have a ma
Re: (Score:3, Interesting)
Here's what I mean: what is intelligence, after all? Indeed, the ability to filter out the bad outcomes of certain actions and go for the better ones.
This gives us edge over random pro
Re: (Score:3, Interesting)
Not quite (Score:2)
So easy a human could do it (Score:5, Funny)
Intelligence (Score:2)
Re: (Score:2)
But you are right of course - no formal discussion of the intelligence of humans or machines can be done without a formal understanding o
Starfish Intelligence (Score:2)
Key Implication (Score:5, Interesting)
Man (level 1, or L1) creates better-than-man intelligence, call this L2
That intelligence uses its power to create L3
and so on.
In the case of truly artificial intelligence, i.e., independent processors, I can see the logic, though it may be that L2 is in fact smart enough not to obsolete itself by creating L3.
In the case of augmented human intelligence, I suggest that it's pretty likely that the task that the augmented L2 human turns its greater abilities on would not be creating L3.
Sadly, human history suggests that L2 will focus on manipulating the stock market for personal gain (the augmentation apparatus will leave L2 very vulnerable and L2 will want a tremendous amount of wealth to assure continued existence), or creating weapons, or accumulation of political power, or getting sucked into the vortex of religion, or other projects.
It will be very interesting to see, should we ever create L2, exactly what tasks it takes on. I bet they will not be beneficial to L1 life.
Re:Key Implication (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
Great! it's installed! (playing....)
apt-get remove singularity
"I'm sorry Dave, I'm afraid I can't do that. This mission is too important for me to allow you to jeopardize it."
Uh oh...
I wouldn't worry about that just yet (Score:5, Insightful)
Usually people consider cognition as essentially information processing. But here is a different definition (inspired by people like JJ Gibson and Varela):
cognition is the ongoing, open-ended interaction with an unpredictable, dynamic environment. This captures, I believe, the essence of the human (and any other living creature's) experience in the world, and excludes the computational experience.
We will have to build machines that are capable of open-ended interaction with an unpredictable world in order to hope and see any true sign of intelligence. Since very few are even trying to look in that direction (while most researchers are just looking for the awesome, and often lucrative, applications of our current computational capacity), I don't see any change coming soon.
Re: (Score:3, Insightful)
I clicked the link for the "Singularity Summit", and I get the feeling that the goal of these people is to put pictures of their own faces on the same page as Bill Gates and Stephen Hawking. Looking good there, boys.
Meanwhile, is there going to be a single robot at this conference? Nope. Just a lot of people talking more rubbish.
Foreboding (Score:4, Interesting)
Futurists and writers and other folks out on the edge, like Kurzweil... those fanciful enough to take on the thought problem, seem to lean, in the majority, towards believing the human race would be destroyed or at least decimated by hyper-intelligence (Wachowskis, James Cameron, Lem, etc etc - too many to mention, really). An interesting minority are of the school that hyper-intelligences would be largely unconcerned with people, only dangerous where our goals intersected (Gibson, Lethem, Clarke). Very few seem to believe that a Singularity would be a positive development for the human race. Maybe Asimov? I'm not sure. Sometimes it seems like he was the last person who seriously spent time imagining that post-human AI could really be controlled at all (and many of his novels were arguably about the problems around the attempt).
Re: (Score:2)
Iain M. Banks' The Culture (Score:3, Interesting)
In Iain M. Banks' Culture [wikipedia.org] novels, intelligences vastly superior to humanity ("Minds") are the ones in power. The humans still have lots of fun and don't want for material or intellectual freedom, however, because the Minds aren't interested in oppressing anyone. They like being nice.
I disagree with some of his premises, though. He assumes that there will be an economic singularity, where anyone will be able to have anything they could want and people will therefore settle for "enough". We've already prett
The singularity has already happened (Score:5, Insightful)
Assuming the ultraintelligent computer cannot do magic, it will be bound by the same physical and logical laws we live by.
An ultra-intelligent computer may think 10x faster than us, but not qualitatively 10x better.
It will use the same basic logical steps to solve a problem, just faster and / or in parallel - and this may appear magical looking at the solution but if you sat down and examined the 'recipe', assuming it will tell you, it will be possible to follow the reasoning.
In some ways it could be argued that we have already passed some singularities, try properly understanding all the technology that goes into a modern car, the reasoning behind a mobile phone contract, the code behind ms-windows paperclip thing... well maybe not the last.
Lots of well-coordinated people working on a problem can act as a simulation of a 'more intelligent' intelligence. It seems a pity one of the achievements is a really good worm used for spam delivery.
Re: (Score:2)
These have happened a few times in human history. The biggest ones were the development of the lever, the wheel, domestication of animals, and writing. The step from horses and pulleys to steam engines isn't huge; a steam engine is just another ki
Good's bad logic (Score:5, Insightful)
Intelligence is not well defined. It is very hard to say how much of what we call "intelligence" is in fact the ability to make many connections between facts stored in a very sophisticated memory architecture. Simply building a machine able to process information very quickly achieves nothing because, without learning and a social context, it does not know what information to acquire and process. In human experience, academically brilliant people often fail because they work on the wrong problems, or without access to necessary knowledge.
Nothing is actually achieved without creativity. We do not know what that is, or to what extent it is a social construct (i.e. it takes a developed society to have the necessary systems in place to translate an idea into a concrete reality). And this leads on to the third point. It is no good having a highly intelligent, creative machine if its use of resources is such that it cannot replicate in large numbers. It may be that machine intelligence will ultimately replace human intelligence, but it may be that it will simply be too resource-hungry. In effect, there may be a threshold of capability needed to solve some problems, and it may be that machine intelligence will run out of energy before it scales sufficiently to solve those problems. A machine society might, in effect, get stuck in the machine 19th century because coal or oil became a limiting resource. (In the same way, the energy and resources needed to achieve a first independent space colony may exceed the total energy and resources available on Earth.) It may be that a billion years or so of eukaryotic evolution has actually resulted in the optimum balance of intelligence, creativity and resource consumption, and that any attempt to exceed the present capability will tip us into declining resources faster than we can improve matters.
In many ways I hope this is wrong. But the argument that only one superior machine is necessary is, in fact, an inductive step too far. It is assuming that "intelligence" on its own can solve a class of problems which may involve a number of constraints which cannot be avoided - like the Laws of Thermodynamics, or the need for excessive amounts of energy.
Re:Good's bad logic (Score:5, Interesting)
Human numbers are following the same pathological growth one sees in a petri dish filled with sugar/energy - the bacteria grows like crazy until the energy/food is consumed. Then it dies off. Humans are capable of intensifying resources to meet needs, but logically, this is not a permanent "Get out of jail free" card. Eventually limits are hit, and people die off.
With the present numbers of humans (billions) and the political economy (industrial capitalist), the world is quickly becoming one big Easter Island [wikipedia.org].
RS
Getting there from here... (Score:4, Insightful)
But the "activity" of interest here is programming, or, more specifically, the conceiving of some creative goal which programming helps achieve. (Note, btw, that a truly "ultra-intelligent" machine won't need to program, e.g., another of itself.) Thus, the BIG question remains whether such a programmed machine can ever perform (much less surpass) "all the intellectual activities of any man". Afaics, it hardly seems a given...
Social vs. Logical Intelligence (Score:2)
As an aside, the first "human-level" intelligence will take at least 15-25 years (after assembly of
Bollocks (Score:2, Insightful)
http://www.techworld.com/opsys/features/index.cfm? featureid=2861 [techworld.com]
Second. Even assuming that we can make an artificial intelligence, what on earth makes anyone think it isn't going to have the same problems we do? It's going to be based on a very similar architecture to our brains. That means it's going to make mistakes just the way we
I've already solved the basic theory for AI (Score:3, Funny)
The only reason I don't develop this myself is that it'd take too much time for me to code. What is the point in spending 40-50 years of your life behind a computer so you can make the last big thing? Anyway one thing I've noticed is that the first thing you hard code is like a CAD imagination space. The first amazing thing this software could do is turn books into movies because it will allow you to watch its imagination. And you could change the book up some yourself to give scenes and actors different qualities or get more details.
The thing I like the most is that the problem of making AI is almost solving itself. We're getting faster and faster 3d cards which is a prerequisite for this technology. Also if someone made a CAD interface using a human language, we'd almost be there.
Anyway, I may get back to the problem of AI after I finish my current project and have the resources to work on it. You have to admit that all the previous attempts at human+ intelligence have failed. My idea of adding a 3d imagination space makes a lot of sense because we've never tried this before! Anyway, my answer to the funny AI problem of "will machines take over?" is "only if someone issues a bad command to the bots", which someone would want to try because we have punks that write viruses today. Finally, the nice thing about this imagination-space AI is that it could train itself to learn any hardware that it is placed in, given that it has the bare minimal sense of sight.
I should be writing papers on AI or coding it, but I found some business opportunities I should pursue to gain capital in the meantime. There is no sense being a madman locked in a stuffy room doing this by myself when I can hire some good help, and we can all work together. Hey that is another idea. I could make this open source.
Edit, add this to post. (Score:2)
Evil geniuses for a worse tomorrow (Score:2)
Already happened (Score:4, Insightful)
Fears are Overblown (Score:5, Insightful)
For those that would argue Darwinian forces lead to such imperatives: sure, you could design the machines to want to destroy humanity, or evolve them in ways that create such motivations, but it seems unlikely this is what we will do. Most likely we will design/evolve them to be benign and helpful. The evolutionary pressure will be to help mankind, not supplant it. Unlike animals in the wild, robot evolution will not be red in tooth and claw.
An Asimovian type of future might arise, with robots maneuvering events behind the scenes for humanity's best long-term good.
I worry more about organized religions that might try to deny us all a chance at the near-immortality that our machine children could offer us than about some Terminator-like scenario.
We still have no clue how to do strong AI (Score:5, Informative)
OK. here's where we are:
AI is one of those fields, like fusion power, where the delivery date keeps getting further away. For this conference, the claim is "some time in the next century". Back in the 1980s, people in the field were saying 10-15 years.
We're probably there on raw compute power, even though we don't know how to use it. Any medium-sized server farm has more storage capacity than the human brain. If we had a clue how to build a brain, the hardware wouldn't be the problem.
Re:We still have no clue how to do strong AI (Score:5, Insightful)
Re: (Score:3, Informative)
The first thing one needs to learn is how to balance. It's not very hard for kids to figure out on their own, but it's even easier if someone tells them. The trick is simply to turn the handlebars in the direction you start to fall. If your bike starts to lean to the right then turn the ha
Re: (Score:3, Interesting)
Don't believe everything you read. (Score:3, Insightful)
We are very far away from defending any particular theory of brain function as accurate for cognitive function, and don't know whether it will have a tractable simulation level. As you say, though, the best attempts at developing one (IMHO) involve linked and interacting research programs involving modelling and microbiology.
Flawed premise (Score:2)
I do not understand (Score:2)
The phrase "... beyond which the future becomes unpredictable."
Is this opposed to the perfectly predictable future we've had up until now?
Some questions (Score:2)
Would being smarter really help? (Score:2)
And what would the AI do in the intermediary? Crunch your data analysis request like a good little robot? Maybe it will get sick of it all, get depressed. Turn into Marvin the Paranoid Android.
As it happens, the main limiting factor on intelligences is not ingenuity -- it's resources. You can be as smart as you want, but if you don't h
Intelligence vs Invention (Score:2)
The "Clapper" was invented by someone who thought of a way to turn lights on and off without getting up, most likely because the issue frustrated them.
So an incredibly intelligent machine will probably focus its intelligence and creativity on solutions to its problems.
How do I move around more effectively?
How do I live forever?
How do I feel pleasure?
It is also going to very quickly wrestle with some of the big issues.
Does my life have any meaning?
Without religion- ultimately a
Unquestionably? (Score:2)
I question it. For one thing, computers are already more intelligent and have been for some time. What they lack is creativity. They're idiot savants capable of astounding feats of calculation yet incapable of drawing simple inferences.
We may not quite understand how creative genius works but we have learned that there is a fine line between genius and paranoid delusion. Even the folks we label "brilliant" instead of "crazy" tend to have downrig
There's a party to crash... (Score:2)
Man made ultra-intelligence? (Score:2)
If one man made it, another can defeat it.
But it could be possible for an ultra-xxx to use us as a tool to make it. If you ask "why?", a possible answer could be: why not?
Wrong Singularity (Score:3, Funny)
So the first thought of the new AI will be "I think, therefore I am" followed quickly by "42" and finally "Oh, shit. Who invited THAT moron?"
What would change? (Score:3, Insightful)
As opposed to right now, when the future is really predictable...
Re:What would change? (Score:4, Insightful)
The point is "The Future" is usually easy to predict, that's why we have mutual funds, insurance, and fire departments. We know things will happen. It's hard to get specific, but after S-time, you won't even know what species you will be tomorrow.
From my perspective (Score:3, Funny)
I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.
I'm bored now anyway (Score:3, Insightful)
Kurzweil's way off. (Score:3, Interesting)
Yes, Moore's law does predict that computers will have as much computing power as a human brain in a few short years. But while processing power increases 66% per year, memory throughput isn't keeping up; it's only increasing at about 11% per year.
Granted some day there will be super intelligent machines, but for now they are just really fast idiots.
By my estimates, it will be another 200 years before computers have performance equivalent to the human brain in terms of memory throughput.
They will also need to learn like we do, and that will take another 20 years just to be as good as a clueless 20-year-old.
I am sure we will have very good mimicking of intelligence well before 200 years; we probably could do it even now if enough money were thrown at the problem. But it wouldn't be intelligent to the same depth and degree as we are. Well, some of us are; there are a lot of really stupid people out there, usually working at call centers I find. We could probably replace them first.
I have been meaning to publish a paper on this. As a non-academic, does anyone have any ideas on where I can publish it and make sure I get proper credit before someone runs off with the ideas?
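The 200-year figure above falls out of simple compound growth. As a hedged sketch: the billion-fold gap factor below is my own illustrative assumption (the comment never states one), chosen because it is roughly what an 11%/year rate takes ~200 years to close:

```python
import math

# Back-of-the-envelope: years for a quantity growing at a fixed annual
# rate to close a given multiplicative gap. The 1e9 gap factor is an
# assumption for illustration, not a figure from the original comment.
def years_to_close(gap_factor, annual_growth):
    return math.log(gap_factor) / math.log(1.0 + annual_growth)

print(round(years_to_close(1e9, 0.11)))  # memory throughput at 11%/yr -> 199
print(round(years_to_close(1e9, 0.66)))  # processing power at 66%/yr  -> 41
```

The same gap that takes two centuries to close at 11%/year closes in about four decades at 66%/year, which is the whole point of the memory-bandwidth objection.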
Re:Actually, no. (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Why is that? Where is the mathematical and common-sense breakdown? Common sense would tend to indicate that if we can build something at least as smart as we are, then it could do the same.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
* this is done by a big corporation.
* the owner of a huge botnet
I remember a year ago I was playing around with some numbers and assumed that you needed 100 billion MIPS to have enough hardware power to brute force a brain simulation of every single neuron and that the AMD Athlon FX-60 (Dual Core) chip had about 20,000 MIPS.
So you would need about 5,000,000 computers running AMD Athlon FX-60s to run a brute force brain simulation. Since (at the time) an FX-60 was $1,000 a chip, this project would cost $5,000,000,000 on CPUs alone (not counting other parts and labor) so
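The arithmetic in that estimate is internally consistent. As a quick sketch (the 100 billion MIPS target, the 20,000 MIPS per chip, and the $1,000 price are all the poster's assumptions, not measured values):

```python
# Reproducing the poster's back-of-envelope; all figures are the
# comment's own assumptions about a brute-force neuron simulation.
target_mips = 100e9      # assumed compute needed to simulate every neuron
mips_per_chip = 20_000   # assumed throughput of one AMD Athlon FX-60
price_per_chip = 1_000   # USD per chip, at the time

chips_needed = target_mips / mips_per_chip
cpu_cost = chips_needed * price_per_chip

print(f"{chips_needed:,.0f} chips")  # 5,000,000 chips
print(f"${cpu_cost:,.0f} on CPUs alone")  # $5,000,000,000 on CPUs alone
```

Note this prices only the CPUs; boards, power, cooling, interconnect, and labor would push the total far higher, as the comment itself concedes.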