NPR Looks to Technological Singularity 484
Rick Kleffel writes to tell us that NPR is featuring a piece with both Vernor Vinge and Cory Doctorow looking at the possibility of the "technological singularity" in the near future. Wikipedia defines a technological singularity as a "hypothetical "event horizon" in the predictability of human technological development. Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence, existing models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, posthumans and/or strong AI will replace humans as the dominating force in science and technology, rendering human-specific social models obsolete."
Great predictions of the unpredictable (Score:5, Insightful)
Brilliant, real brilliant.
Re:My god! (Score:2, Insightful)
-- William Gibson
Evolution yes, singularity no (Score:5, Insightful)
Well, I doubt it. I agree with most of the idea of the 6:17 cast and even agree that educational and social changes like widespread literacy may be considered a singularity, but I seriously doubt the timeframe of one generation/30 years they mention. Literacy was adopted over hundreds of years; network communities have been developing for at least 30 years and are still primitive and very far from a "collective mind". For me Wikipedia is "augmented intelligence", but before that I had the Encyclopedia Britannica on my iBook, and before that an encyclopedia on my desk, so this too has evolved. And since Wikipedia is created by so many, it may be considered a primitive product of the "meta intelligence" described.
Btw, the piece from NPR focuses (very trendily) on collaboration and advanced information management; they do not place great hope in a major breakthrough in AI.
Re:My god! (Score:3, Insightful)
From what I've seen we are as near to creating decent AI as we are to producing fusion power stations.
All intelligence is genuine, not artificial. (Score:2, Insightful)
There is no "artificial intelligence". All intelligence that is called artificial intelligence is genuine. It's a rare example of people saying something is artificial when it is genuine. It's an example of disrespecting very intelligent programmers. Disrespect of technically knowledgeable people is very common.
Computing is so famous now that people with little or no technical knowledge want to seem like they know something about it. But, they don't want to actually study anything. They just want to pontificate.
--
U.S. Government violence encourages other violence.
Re:Great predictions of the unpredictable (Score:3, Insightful)
Re:Since when ? (Score:5, Insightful)
The problem with the video phone is that I can't roll out of bed and answer it. Video conferencing does have its uses, but I need time to prepare so I don't look like my usual pile of ass who just rolled out of bed. That might make the telemarketers stop calling tho... hmmm
It wasn't the technology they guessed wrong about, unless you count not having those things the Jetsons did that instantly groom and dress you as you get out of bed. Now that would make the video phone take off.
Wikipedia, again? (Score:1, Insightful)
Stuff like that happens every now and then (Score:1, Insightful)
Humans Haven't Wiped Out Lower Species... (Score:1, Insightful)
OTOH, when I consider what I do to the fire ants on my lawn every year, it does not give me hope for mankind living in a world of super robots. They might view us as little more than a nuisance.
Most likely they would go their way and we would go ours. We would have to learn to identify their intergalactic highways and not cross against the light, of course, otherwise:
More Important: I'll be out of a job (Score:4, Insightful)
Re:There is no artificial intelligence (Score:2, Insightful)
Current Top Story on Slashdot: (Score:2, Insightful)
Re:Evolution yes, singularity no (Score:3, Insightful)
Okay, this came out wrong. I do not think that Wikipedia represents intelligence, and therefore it cannot be "augmented intelligence". I think that (one aspect of) intelligence is the ability to process information, evaluate it in combination with other information/knowledge acquired before, establish a position in a world model, decide on an action based on formerly known actions or develop a new action, and finally perform it. So for me Wikipedia can augment a human's intelligence not simply by providing more information, but by providing it in a way that it may be added to the regular information-processing habit.
Let's say I make 500 conscious decisions every day (which shirt to wear, which food to eat, take the new job, press the red button, etc.). For almost any of these decisions I can rely on a mix of internal information (already acquired knowledge and deductions) and external information (books, web, Wikipedia, ask someone). I will not visit the public library 500 times a day, but I may call up an article from Wikipedia 20 times a day. It's not just about availability of information, it's also about "process compatibility". Therefore the encyclopedia on the desk may not be counted as augmenting my "intelligence process" (access is too slow for me to be willing to use it all the time), while Wikipedia may. This depends on your personal process; I'm sure there have always been people who look up every foreign word they don't know while most try to guess, and Wikipedia will not become a part of your routine unless you replace your modem with DSL or cable.
scienobabble (Score:3, Insightful)
Sure, things change, sometimes quite suddenly and unexpectedly. But really, the relationship between the development of literacy (NPR's example of a past singularity) and the subsequent course of history is nothing like the relationship between a real singularity and... anything. It's just a bad metaphor, and I think I'd have a lot more respect for "future studies" if they dropped it and came up with a new way of describing whatever phenomenon it is they're predicting.
Fear of the superior (Score:4, Insightful)
The C-Prize [geocities.com] is the path to superhuman AI.
And as for the "threat" of superhuman AI:
Even assuming AI were to develop the equivalent of genetic self-interest (something that would take a long time even if humans turned them loose to reproduce without us selecting them appropriately), I'd much rather be in competition with a species that had the potential of being symbiotic due to occupying a different ecological niche. If it gets to the point that solar output (forget the sunlight falling on Earth here -- that's too insignificant to matter to a silicon-based life form) is the limiting resource, I suspect that the niche humans fill will be orders of magnitude larger than the one they now fill on Earth.
The best hope humans have of realizing the transhumanist wishful thinking is to develop superhuman AIs that find it advantageous to utilize the gas giants, given the limited supply of silicon. Humans, as the highest form of organic intelligence, would be the natural species to transition to higher intelligence.
Maybe the super AIs could get around this by using a straight carbon-semiconductor form of intelligence or something, but there is more going on in our brains than we understand. For example, I suspect there is a lot more quantum logic going on within our brains than cognitive scientists and neurologists currently think. It only makes sense that evolution would have exploited every angle of the physics of the universe to create intelligence. My point in bringing in the possibility of quantum logic is that there are really many things we don't know about natural systems of high complexity, and I suspect the same will apply even to super AIs. The fact that we might have the laws down cold at the quantum level doesn't mean we know how things operate in higher-complexity systems.
Human brains are very valuable repositories of ancient wisdom about the universe and the most optimal thing for the super AIs to do -- at least for a while -- would be to transhumanize our brains for us.
Moreover, if it is ok to pass laws to prevent the creation of intelligences greater than our own, why isn't it ok to pass laws dumbing down the smartest among us?
The self-determination argument applied to humanity as a whole -- striving to maintain control of its own destiny by preventing the creation of higher non-human intelligences -- applies also to people who want to maintain control of their own destiny against those smarter than themselves.
Personally I'm much more frightened of unenlightened self-interest than I am enlightened self-interest.
I really wish it were possible to make some of the "smart" people who are really good at grabbing control of resources intelligent enough to understand that they are using those resources in very stupid, self-destructive ways.
Indeed, it is this abysmal stupidity among the shrewdest among us that is my main motivation for promoting super AI.
Re:Since when ? (Score:5, Insightful)
How about these:
1791 Luigi Galvani accidentally closed an electrical circuit through a frog's leg, causing it to jerk violently. This rapidly led to the understanding of how nerves and muscles work.
1879 Louis Pasteur accidentally inoculated chickens with an old cholera culture. The chickens should have died from cholera, but they got sick and then got better. After discovering the mistake, Pasteur re-inoculated the chickens with fresh culture, and the chickens didn't even get sick. This led to modern vaccination.
1895 Wilhelm Roentgen accidentally discovered X-rays.
1928 Alexander Fleming accidentally discovered that a type of mold (later named Penicillium) significantly inhibited bacterial growth. This led to antibiotics.
Never assume that all discoveries are predicted before they are "discovered." I would actually say that mostly INSIGNIFICANT technological advancement is predicted well in advance; most of it is evolutionary. Many significant advancements are revolutionary, and there is no way many of them could have been predicted, as there was no information related to the new process before the discovery of the process itself.
1 million calculators... (Score:3, Insightful)
The eternal quest... (Score:3, Insightful)
Experience. The hidden result of all reactions, real or imagined - observable experience.
Regardless of what gods may exist, what greater reality may exist, or whatnot, the purpose of everything can be met with a system that pursues experience in all its variety. If we are all that is, the eternal quest for experience will be its own purpose. Endless experience would fulfill all purposes.
The trick is setting up a system of gathering experience that doesn't meet with stagnation. Stagnation can come in many forms: death/cessation, returning to exactly the same state as some past point without being aware of it (looping), or any path that will inevitably lead to those states. Entropy is an obvious block to seeking experience as an ultimate goal -- but if totally unavoidable, then the ultimate goal would be maximizing exploration with the resources available.
Ryan Fenton
Re:Great predictions of the unpredictable (Score:5, Insightful)
I saw that and thought of a recent simulation of an evolving ecosystem. Autotrophs, herbivores, predators and parasites all evolved independently in a simulation that simply required growth and survival. I think they are naturally emergent phenomena. You can even explain the existence of defense attorneys and cold-call telephone soliciting this way.
Agent Smith and the Singularity (Score:1, Insightful)
Re:Ye gods... (Score:3, Insightful)
My interpretation of the singularity is very different from what they seem to be talking about in the article... err, interview. They're talking about the influence of computers, artificial intelligence and whatnot -- what you might call "The AI Revolution" -- rather than the real singularity.
The foundation of the technological singularity, as I always understood it, is that new technology (not necessarily AI) increases the pace of further technological development, until development accelerates to infinity. The first part of the conjecture is easy to verify, as witnessed by the revolutions you mention. Humans lived on this earth for about 100,000 years before developing agriculture; after that it was about 9,000 years before the printing press and widespread literacy; 500 years or so till the industrial revolution; maybe 150 years until we had the first computers; and ~50 years until the development of the Internet.
If we extrapolate this trend (which is what futurists do), future technological revolutions will increase in pace, some happening literally overnight, until they all seem to happen at once. That moment is the singularity. What happens after that is the stuff of bad science fiction.
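The extrapolation above can be sketched numerically. This is a toy calculation, not a serious forecast: the gap figures are the rough ones quoted in the parent comment, and the assumption that each gap shrinks by a constant average factor is exactly the hypothetical being illustrated.

```python
# Toy extrapolation of the shrinking gaps between technological
# revolutions. The numbers are the rough figures from the comment
# above, not historical scholarship.
gaps = [100_000, 9_000, 500, 150, 50]  # years between successive revolutions

# Average shrink factor between consecutive gaps
ratios = [b / a for a, b in zip(gaps, gaps[1:])]
r = sum(ratios) / len(ratios)

# If each future gap shrinks by the same factor r, the remaining gaps
# form a geometric series whose sum is finite: that sum is the time
# left until the revolutions "all seem to happen at once".
last_gap = gaps[-1]
years_to_singularity = last_gap * r / (1 - r)

print(f"mean shrink factor: {r:.2f}")
print(f"years until the gaps sum to zero: {years_to_singularity:.0f}")
```

With these inputs the mean shrink factor comes out just under 0.2 and the remaining gaps sum to roughly a decade -- which mostly demonstrates how sensitive any "singularity date" is to the arbitrary numbers you feed in.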
Personally, I think there's probably an upper limit on the pace of useful technological development. Just because Intel releases a new and faster chip doesn't mean I'm going to buy one before I've gotten the full use out of my current one. And there are certainly physical limits to technology as well: despite hundreds of years of trying, no one's yet managed to turn lead into gold. In the long run, I think the pace of development will slow (and there are some who say it has slowed) and eventually technology will just plateau, but not for a very long time.
Existing models of the future? Which ones? (Score:4, Insightful)
The premise of this definition is that models of the future give reliable or accurate answers at present. What are the models they talk about? Special futurist models? Do these really give reliable or accurate answers today? Or do they mean all models of human behaviour, i.e. most models of the social sciences? Supply & demand will no longer determine price?
If the models are found not to be good predictors of behaviour, they will be modified or replaced. You know... sort of like how it works right now?
If patterns in human behaviour start changing rapidly because of rapidly evolving superhuman intelligence, then sure, our ability to model that behaviour will go out the window. But then, we won't be doing the modeling; superhuman intelligences will. I don't see why the emergence of superhuman intelligence would have to lead to a singularity.
I believe the models will cope. Not "existing models", but tomorrow's models.
Re:Great predictions of the unpredictable (Score:2, Insightful)
the last REAL singularity... (Score:3, Insightful)
I have already augmented my intelligence (Score:0, Insightful)
Re:Evolution yes, singularity no (Score:4, Insightful)
You don't think much of anyone, do you?
Re:I for one... (Score:3, Insightful)
Although I now post under my actual initials, in my day I've had two screen aliases. Yours is one of them. It feels kinda weird to reply to it.
KFG
Re:Ye gods... (Score:4, Insightful)
It is really easy as an observer to sit on the outside and say: "Wow, more neato stuff seems to be coming out faster and faster- why, if I extrapolate it will probably keep coming out faster and faster and we'll get this exponential curve." But that ignores the fact that:
* The problems get harder
* Technological adoption is generally limited by the speed at which society can absorb it, not by the technology
* We've never found a silver bullet
By which I mean:
The problems get harder: Einstein may have been a genius, but we have our share of geniuses today. We almost certainly have many more geniuses actively involved in science (and physics research) than ever before, and they are well resourced (not fantastically, but OK). But they aren't producing Einstein-like breakthrough physics because it is damn hard to improve on what we have. We know the current models have holes but we haven't worked out how to fix them -- and not for want of trying.
The same applies to lots of technical problems -- both the technical research and the translation of that research into real-world products. Batteries and fusion power both have enormous commercial incentives but somehow we haven't found the answer yet. We HAVE made improvements, but the simple truth is: these are hard problems.
See also the cost of electronic foundries [wikipedia.org] -- around a billion $US and climbing by roughly an order of magnitude with each successive generation. That is where the bleeding edge of real-world technology rests, and it isn't cheap and it is just unbelievably tricky.
Technological adoption is generally limited by the speed at which society can absorb it, not by the availability of technology: Science can in theory race ahead of everyday use but in practice it usually has to be supported by technology. Leaving aside silver bullet technologies (like AI -- see below), scientific research needs to be translated into technologies that everyday people can use. And technology that everyday people use needs to be adopted, which means it needs to be understood and accepted. That isn't a formula for a singularity.
In theory a small population could make a 'huge breakthrough' and race ahead, leaving the rest of the world's population bewildered by the change, but every indication is that the big problems need big resources to address. And even more resources to translate into actual out-of-the-lab usage (see electronics foundries link above).
We do see some impressive stuff (like Google) which catches our attention and is really useful, but this is a tool that society adopts at its own rate. And Google is successful because it DOESN'T baffle and bewilder. It empowers the everyday person. That is pretty characteristic of successful technology.
We've never found a silver bullet: Science fiction stories often have a bit of hidden magic- the AI, fusion power, teleportation (aka worm hole gates, star drives, etc...) that definitively solves some problem (problem solving, energy, transport to the stars) with no big side effects. That is great for science fiction, but in the real world we don't do this (I won't say absolutely, but I can't think of a real life silver bullet). Everything is a careful trade off, the really big problems don't just go away.
The big one is thinking: for all that computers help us do work they don't do what we would consider 'intelligent' things. Or when they do (like pattern recognition in breast cancer X-Rays) they are so limited in their scope that we st
Re:Limits of Intelligence (Score:3, Insightful)
Where is the limit? 200 IQ? 1000 IQ?
Even then, the hypothetical AI has advantages over us. It can examine its own code (subconscious?), so it can optimize slow, inefficient routines. Maybe it could even optimize its architecture via a custom instruction set, or even the base process: from silicon to quantum or biotech. It would also have a much larger range of I/O choices, as well as more channels, and non-fuzzy long-term memory.
Postulate this:
1) the AI starts at 100 IQ
2) every year it can think some percent faster
3) larger amount/variety of input
Questions:
1) would it not give better-informed answers, faster, year after year?
2) This would be more intelligent, right?
Even if there is a cap at 200 IQ, if it keeps getting faster, it can evaluate more possible breakthrough ideas per unit of time. Maybe limited by boredom?
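The postulate can be turned into a toy calculation. A minimal sketch, assuming an AI whose problem-solving ability is capped but whose thinking speed compounds at an arbitrary 30% per year (both the base rate and the speedup are made-up numbers for illustration):

```python
# Toy model of the postulate above: fixed problem-solving ability
# ("IQ cap"), but thinking speed that improves by a constant
# percentage each year. The 30% figure is an arbitrary illustration.
def ideas_evaluated(years, base_rate=100, speedup=0.30):
    """Ideas the AI can evaluate in each successive year,
    given a compounding annual speedup."""
    return [base_rate * (1 + speedup) ** y for y in range(years)]

per_year = ideas_evaluated(10)

# Even with capped "IQ", throughput compounds: after a decade the AI
# examines more than ten times as many candidate ideas per year.
print(f"year 0: {per_year[0]:.0f} ideas, year 9: {per_year[-1]:.0f} ideas")
```

The point of the sketch is just that a hard ceiling on "intelligence" does not cap the rate of breakthrough-hunting, because speed alone compounds exponentially.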
Oh, but the scenario is perfectly valid (Score:3, Insightful)
Today's mind vs. tomorrow's (Score:5, Insightful)
Ever hear of the generation gap? The youth of today are different from us--they've been raised from birth in a world of ubiquitous networked computing and ambient findability. (see? I can throw around stupid buzzwords too.) Talk of "The Singularity" is not much different from complaining that your kids spend all their time texting. It's making explicit the fact that you can't imagine keeping up as you age. Well duh. We won't be running the show in 2050--our kids and their kids will.
Re:Why the singularity is just late to the party (Score:2, Insightful)
I have to disagree with you there. Consider the biggest world-changing inventions so far - the car, the airplane, the printing press, the computer, networking, the wheel - none of these is substantially based on biological mechanisms.
The path that evolution has taken over millions of years has led to some amazingly complex and beautiful solutions to survival. But the environment that technological systems operate in now is very different, and the time spans are compressed to hundreds and even tens of years.
Since there is currently no Strong AI (that we know of) the jury is out as to how it will happen. But the chances of it closely mimicking a biological mechanism are about the same as for the previous inventions.
Re:Hofstadter thinks Kurzweil full of it, film at (Score:3, Insightful)
In that light, I would say that so far the prediction holds true: no chess master has been beaten by a computer program that applies reasoning instead of dumb search and heuristics. Also, no machine has matched the three names composing the title of the book, and likely can't for a while.
However, I'm not sure that this single prediction about chess accurately reflects the thrust of GEB anyway. Hofstadter appears to me to spend a great deal of GEB explaining what reasoning actually is and how it should be possible to mechanise it. The prediction about chess doesn't jibe with the rest of the book as I remember it. Perhaps I should look up the quote and then I'll understand?
Re:Why the singularity is just late to the party (Score:2, Insightful)
While evolutionary mechanics are beautiful for creating a streamlined and efficient system, it has its limits. Biological organisms are hindered by lack of resources. While the things they do with carbon, nitrogen, and oxygen are unrivaled by any modern synthetic chemical techniques, there are many reactions that are all but impossible in biological systems due to the need for catalysts made from rare metals or extreme temperatures. Nature can't work with carbon nanotubes because it does not have any to work with.
So what I am trying to say is that evolutionary systems are limited by the starting basis set, and expanding beyond that is impossible without an outside source.
Re:I for one... (Score:5, Insightful)
You say that like it might be a bad thing.
Religion is all well and good when it is a personal thing, and maybe OK when you are following the teachings of people (or things) long gone, but once it forms into clumps or groups of people -- and it would seem especially once these groups start following the teachings of people who are alive now -- we start getting problems. It's the high priests, the living leaders of religions, who decide they need to spread the word of their god at the point of their followers' swords, and that's when the trouble starts!
Re:Great predictions of the unpredictable (Score:3, Insightful)
To which I feel compelled to reply "Bwhuahahahaha"
B.S. (Score:3, Insightful)
On the other hand, their proposed "technological singularity" has served well as the theme of a great many science fiction novels.
Re:Great predictions of the unpredictable (Score:3, Insightful)
Re:Since when ? (Score:3, Insightful)
Re:I for one... (Score:4, Insightful)
Maybe it's better to be ruled by artificial intelligence than by the natural stupidity that rules over us now.
Plant Wheel (Score:3, Insightful)
One word, my friend:
Tumbleweeds.
Re:Today's mind vs. tomorrow's (Score:4, Insightful)
That's really not what's under discussion here -- I'm not more intelligent than a 15th-century monk. Putting that monk in the modern world would cause severe culture shock because of the disconnect between the world and his existing frames of reference. He'd have to run like mad to try to catch up, because he didn't have his whole life to become used to it, but a bright person could probably manage it.
What the futurists are talking about is a different level of intelligence. A person (machine, augmented human, whatever) who has more basic potential than a human, in the way a human has more basic potential than a cat. Someone for whom advanced calculus solutions are as intuitively obvious and immediate as "2+2" is for you. Someone who remembers anything they've ever seen or heard the way you can remember what someone just said to you a moment ago. Someone who can picture deformations of multi-dimensional topographies as easily as you can imagine a checkerboard folding in the middle. And even those examples are pretty poor, coming as they are from an average human intelligence -- probably only the first step along the path these guys are trying to think about.
Qualitative difference (Score:2, Insightful)
Of course you can try to emulate the non-logical functions inside a logical framework, but by doing so the machine gets trapped inside a kind of "Gödel paradox", forever unable to explain itself for lack of sufficient axioms ("sufficient" meaning "infinite"). Self-consciousness is then literally impossible.
This isn't so bad as it seems. It only means that machines, no matter how advanced, are and will always be extensions of human faculties. In other words, we are their conscience, in the exact same sense that we're the conscience "behind" our hands and feet. Or, if you like to see it this way, machines and humans are already a single thing, as they have always been, since the instant our first ancestor decided to throw his first rock.
The day humanity ends is the day all machines die. Some of them can of course keep working after that, more or less as some of our body organs sometimes stay working after our brain dies. But death is already there, unavoidable, only waiting for the power source to shut down. Death is the only real human-machine "singularity", that point after which we know nothing about. Any other is mere fiction.
Faster and faster (Score:3, Insightful)
I'll use myself as an example. I wore glasses from the 5th grade on. Six years ago, after 40 years of wearing glasses, I had cataract surgery that replaced my damaged lenses with plastic ones. (Complete with warranty cards, I might add; the future is weird.) I've had diabetes for 25 years. For the first 10, I treated it with diet. For the next 10, with pills. For most of the next 5, I injected a form of insulin that was created by RNA-modified bacteria in vats. (For the previous 60 years, insulin had been taken from the harvested pancreases of slaughtered cattle.) For the last couple of months, I have been injecting tiny amounts of a new drug that was developed because a molecular biologist noticed that the molecular structure of a key insulin-regulating hormone was strikingly similar to that of gila monster venom.
I take an additional 6 drugs that aid in further controlling my diabetes, control my asthma, keep my arthritis from crippling me, or act as preventatives for high blood pressure and heart disease.
I am now 54 years old. In the Stone Age, I would have died before I was 20. Even in the early 20th century, I would have been lucky to make it to 30.
We are very close to extending the human lifespan by one year every year. Don't think we Baby Boomers are going to get out of your way, kiddies. We're here for the long haul.
Re:I for one... (Score:3, Insightful)
In the past men polluted as aggressively as they could. There was no thought at all given to protecting the planet. People, if anything, are much, much cleaner now. The key difference is that we are slowly but surely running out of space. We are not worse polluters than our ancestors; we are just being held to an effectively higher standard. (Don't get me wrong, I think it is vital that we meet it.) Oh, and G.M. crops are about as far away from pollution as you can get. Don't be such a neophobe; if you don't want us all to starve, you're going to have to suck it up and accept some G.M. crops. It's another case of higher populations changing the standards.
As far as the stripping of man's values goes, I don't think you are looking at the horrors of history quite carefully enough. Man has been cruel and brutal for almost his entire history. It is only very recently that democracy, the abolition of slavery, or the emancipation of women has occurred. Torture was considered a de facto standard for basically all of human history. We have come a long way, and I think we are still on an upward trend.
I will be the first to admit that we are in a local valley. Things are worse in a lot of ways than they were 5-10 years ago from the perspective of cultural progress. But if you think that this is the beginning of the end, you are being overly pessimistic and melodramatic. Sure things are bad, and I bet they are going to get a little worse in the next two years or so, but then they will start to get better.
My god, do I see the seeds of a bright future being planted today. A future of liberty, equality and trust. Technology is starting to enable some really incredible community tools. People are waking up and seeing that they need to play a part in the way the environment is handled. And we really are all getting smarter.
The trend is still up! It's just the moment that is down. Honestly, the only thing that scares me is the mass retirement of the baby boomers, but hopefully that won't hit us too hard.