NPR Looks to Technological Singularity 484
Rick Kleffel writes to tell us that NPR is featuring a piece with both Vernor Vinge and Cory Doctorow looking at the possibility of the "technological singularity" in the near future. Wikipedia defines a technological singularity as a "hypothetical 'event horizon' in the predictability of human technological development. Past this event horizon, following the creation of strong artificial intelligence or the amplification of human intelligence, existing models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, posthumans and/or strong AI will replace humans as the dominating force in science and technology, rendering human-specific social models obsolete."
Since when ? (Score:4, Interesting)
The singularity event doesn't have to happen; after all, the futurists are always wrong.
Microsoft or Real Only? (Score:2, Interesting)
invention/discovery... (Score:3, Interesting)
AIs are human-designed/manufactured. Since we're prone to errors, it follows they are/will be as well. Does that mean AIs would make similar or different mistakes, and how would they handle them? The same, differently, or not at all? Will we see a regression, in that AIs will resort to brute-force discovery much like early scientists? Will they evolve?
Another question area: Anyone who has built a compiler knows the three-tap rule. Build it, build it using itself, build it a third time, compare. Will AIs produce AIs, and if so, will they be better, or equally flawed? Will a 'perfect' AI still be capable of scientific invention/discovery? Will the mistakes of its human operators/supervisors/managers make up for its lack thereof?
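For anyone who hasn't built one: the "three-tap" rule just means the stage-2 compiler (built by stage 1) and the stage-3 compiler (built by stage 2) should come out bit-identical. A minimal sketch of that comparison in Python; the stand-in files and names here are hypothetical, not from any real build:

```python
import hashlib
import os
import tempfile

def digest(path):
    """SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def three_tap_ok(stage2, stage3):
    """The bootstrap check: the compiler built by stage 1 (stage2) and the
    compiler that stage2 built of itself (stage3) should be bit-identical."""
    return digest(stage2) == digest(stage3)

# Demo with stand-in files; a real run would point at actual compiler binaries.
with tempfile.TemporaryDirectory() as d:
    s2, s3 = os.path.join(d, "stage2"), os.path.join(d, "stage3")
    for p in (s2, s3):
        with open(p, "wb") as f:
            f.write(b"pretend-compiler-binary")
    print(three_tap_ok(s2, s3))  # prints True when the two stages match
```

If the two stages differ, either the compiler miscompiles itself or the build isn't deterministic, which is exactly the kind of self-consistency an AI building AIs would need.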
What about drive? Will the drive of a human manager/supervisor/etc. be a sufficient substitute for an AI which can't possess it?
The Abolition of Man (Score:2, Interesting)
This summer I read C.S. Lewis's masterpiece The Abolition of Man [amazon.com]. (No, I didn't link-jack the Amazon link for want of filthy lucre.)
Skip reading the editorial review. Here are some excerpts from the first customer reviewer, Charles Warman:
Re:Since when ? (Score:4, Interesting)
There is the big difference there that all of the technologies he demonstrated were already developed and working, that others had a fair level of consensus that they would eventually exist, and that he was talking about the near future.
A tough nut (Score:4, Interesting)
If you look at most of the goals we have right now, they're pretty mundane and short-lived: curing disease, not killing each other, ending hunger, creating objects that we find beautiful and pleasing, creating more living beings like ourselves.
Once we reach a singularity we'll have the technology to do away with all these problem-oriented goals, and I for the life of me can't really think of any obvious goals past that point. While I agree with the premise that we don't have any reliable way of predicting what our goals will become past the singularity, does anyone have any guesses?
Why the singularity is just late to the party (Score:5, Interesting)
The thing is, we are still way surpassed at this by billions of years of evolution. We run on energy from fossil fuels and build from materials we've mined and shipped. On the other hand, we find bacteria living in the most surprising places, we find sonar in dolphins and bats superior to anything we make, and all of it runs on, ultimately, fresh plant matter. We get excited over a myomer that lifted some heavy weight, and I tell you, an elephant can do the same thing given enough food. The sheer variety and efficiency of the ecosystem virtually guarantees that most any way you can think to survive has been done somewhere, somehow, by some living creature. We're worrying about when oil will peak, if we can live another century, and outside our doors the world can go on for eons to come provided we don't break it with our silly toys.
And in a geek-intense environment like this one, I think I can say that it's difficult to beat the end product of a long-term evolutionary algorithm, which itself is an arguably good model of what the world around us acts like, and you all will understand.
I don't deny the coolness of my Apple notebook and I've got a decent number of shelves full of programming books, but I think biomimicry [biomimicry.net] is where it's at. We can go a lot further learning from our world of proteins and DNA and RNA and using - or just having fun with! - what's already there.
We can also get out more and enjoy our analog, fuzzy-logic, neural-net-driven, molecularly-computed fleshy selves.
Re:All intelligence is genuine, not artificial. (Score:3, Interesting)
I'm an RA at an "Artificial Intelligence" lab. In the Fall, I'll be working on my PhD, studying "artificial intelligence." I have a membership to the American Association for "Artificial Intelligence," which is one of the most respected organizations in the field of "Artificial Intelligence."
I don't see anything genuinely "intelligent" about a support vector machine, but it does get the job done quite nicely.
I've worked with some of the best people in the field of "artificial intelligence" and spoken with a number of others. Let me look over my bookshelf... "Artificial Intelligence - Stuart Russell and Peter Norvig." "The Society of Mind - Marvin Minsky (founder of the MIT "Artificial Intelligence Laboratory")... Some others don't have such easy citations linking them to instances where the practitioner referred to themselves as being in the field of "Artificial Intelligence," but "Mind and Mechanism - Drew McDermott..." Let's see, he also wrote "Artificial Intelligence Programming," co-authored by Eugene Charniak.
Quite a bit of what we do has nothing to do with emulating human intelligence, though some of it does. Cog, for instance, experiments with human-like behavior. Is the neural net that I wrote that can steer a car "intelligent?" I don't really think so, not in a way that would offend me if it were called "artificial intelligence." My office-mate just got a best-paper award at an Artificial Intelligence conference.
So, anyway, I guess to be brief, I disagree.
Ye gods... (Score:4, Interesting)
Re:All intelligence is genuine, not artificial. (Score:5, Interesting)
Artificial primarily means that it comes from artifice (ingenuity) or art. It doesn't (directly) mean it's fake, it just means it's a consciously created work of humankind rather than nature. I think that in modern times with so many knock-offs of natural goods, such as artificial sweetener, the secondary definition has gained the upper hand.
Check out wiktionary [wiktionary.org] (It's the hive-mind wikipedia, it must be right!)
When you read enough literature from the 16th and 17th centuries you get more familiar with the original, literal meanings of words such as this one. A favorite subject was to compare art to nature, and they'd freely use the word "artificial" to mean that which comes from human arts. This is not to say that the secondary definition is wrong: for example, when in Book 3 of The Faerie Queene a troll creates an artificial woman out of snow, "virgin" wax and some gold wire to replace the girl who left him (and of course wackiness ensues), it is repeatedly underscored that this "False Florimell" is a cheap imitation.
Anyway, you can choose any definition you like. I sort of prefer artificial intelligence to synthetic intelligence or whatever, just because how you regard the word artificial says a lot about you and what you think of human creativity. And I don't like euphemism treadmills, which is effectively what we're talking about here.
Hasn't it already happened? (Score:1, Interesting)
Also look at computers, space exploration, mind-altering drugs to treat any number of "disorders", robotics etc.
Aren't we already there?
Limits of Intelligence (Score:2, Interesting)
Assuming intelligence is the ability to extrapolate from facts to deduce the future, then it's limited by the accuracy of the facts (garbage in, garbage out). There's no point in having ever greater powers of deduction if the facts have a lot of noise in them.
Sherlock Holmes looked powerful because Victorian society had high levels of structure and relatively little noise. It's a common strategy to act crazy, illogical, or stupid when in conflict with more powerful enemies.
The butterfly effect, as an illustration of chaos, will protect us from the singularity.
Re:Evolution yes, singularity no (Score:2, Interesting)
Today's news is already old - and this is just the beginning.
Curiously enough.... (Score:3, Interesting)
Re:invention/discovery... (Score:5, Interesting)
The current thinking is that we will make seed AI, i.e., general intelligence for manipulating software, and that it will improve itself, in an incremental fashion, all the way up to and beyond the level of human intelligence. Of course, this will be done with the help and guidance of programmers, but the fear is that by giving it free rein to manipulate itself we will no longer be able to understand what it creates. Not only will this mean that we won't learn anything, but we'll also be unable to control it. As such, most people who seriously consider working on this stuff advocate a goal-based higher level of functioning, with "friendliness" to humans as the primary goal and self-improvement as a secondary subgoal. That way, even if the beast gets out of control, the worst it will do is solve world hunger.
A multiplicity of singularities (Score:4, Interesting)
I'm way too young to remember the Millerites and the Great Disappointment of October 22, 1844, when Jesus failed to reappear, but I've been blessed to live through a veritable multiplicity of singularities.
Oooh, singularity! I like that word. So much kewler than, say, "Armageddon." It sounds so technical, so scientific, so free from ranting religiosity....
Re:Since when ? (Score:3, Interesting)
You could make videophone calls from AT&T booths at the New York World's Fair in 1964. But you can trace demonstrations of the idea back at least to the 1920s. Mechanical scanning, the Nipkow Disk.
Re:I for one... (Score:5, Interesting)
Humans are proud of their abilities. They fashion themselves to be the most capable species on earth. If, in the future, we are outclassed by artificial intelligence, it seems likely that we will feel ashamed of ourselves, in a sense. When first-class athletes go past their prime, they are likely to retire from the game. They do not want to compete as second-class athletes. Advanced AI could really hurt our feelings, and spawn a desire to give up. I mean, what's the point of life if we aren't on top?
My reply to this was simply: Die fighting for those that you love.
Of course, in such a scenario we might be faced with the choice of enhancing ourselves through biology and cybernetics, so as to compete with our "AI overlords." But such a choice may really alter what it means and feels like to be human. I am not saying whether this is good or bad, but I am saying that if we do decide to take that course we will be sacrificing the human experience for the sake of the preservation of the species.
So, I wasn't truly talking about natural selection, and I should have left it out of my previous post. Evolution, however, is WHAT I am talking about. Evolution simply means: a gradual process in which something changes into a different and usually more complex or better form (from dictionary.com). Of course, biology uses that term within the framework of genetic change over time.
Re:My god! (Score:5, Interesting)
About 10 years away then...
Re:What happens when we get there (Score:2, Interesting)
Ethicial question (Score:2, Interesting)
I would reference a quote by Rick Mullin from his article Frankenstein At The Circus [acs.org]
Re:Since when ? (Score:3, Interesting)
Yet another where's-my-flying-car cynic, eh?
You see, bad futurists attempt to predict specific inventions at specific far-future dates while 1) ignoring the facts; 2) forgetting to ask whether anyone *wants* the projected product or situation; 3) ignoring the costs; and 4) trying to predict which company or technology will win. These are the type of futurists that sell the most books, and that most people have their hope-bubbles burst by.
Accurate futurists, like Ray Kurzweil [kurzweilai.net], extrapolate more general trends into the future based on the very predictable history of exponential technological acceleration. E.g., I can say with certainty that I'll be able to buy a 1 Terabyte HD in 2007 for under $0.50 per GB, but I can't tell you if someone will have invented the next tech to begin the paradigm shift to a medium with a better price/performance ratio than spinning platters.
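That kind of claim is just compounding an exponential price curve. A toy sketch in Python; the starting price and halving period below are illustrative assumptions, not actual market data:

```python
def projected_price_per_gb(start_price, years, halving_period=1.5):
    """Extrapolate an exponentially falling price: it halves every
    halving_period years. Pure trend-fitting, as the comment describes."""
    return start_price * 0.5 ** (years / halving_period)

# Illustrative only: assume $1.00/GB today and a 1.5-year halving period.
p_2007 = projected_price_per_gb(1.00, years=2)
print(round(p_2007, 2))  # prints 0.4 -- under the $0.50/GB claim
```

The trend extrapolation is the easy part; as the comment says, it tells you nothing about *which* technology delivers the price point.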
Re:A tough nut (Score:4, Interesting)
If you look at most of the goals we have right now, they're pretty mundane and short-lived: curing disease, not killing each other, ending hunger, creating objects that we find beautiful and pleasing, creating more living beings like ourselves.
Once we reach a singularity we'll have the technology to do away with all these problem-oriented goals, and I for the life of me can't really think of any obvious goals past that point. While I agree with the premise that we don't have any reliable way of predicting what our goals will become past the singularity, does anyone have any guesses?
The first noble truth of Buddhism is that all is suffering. Nietzsche (whose philosophy has Buddhist influences) wrote of the will to power of all things. If we think of suffering as being caused by a lack of power, then the amount of suffering one feels is equal to the amount of power one has left to be gained.
After this "singularity" occurs and we have used technology to transcend our organic existence and overcome the plights of present-day humans, the only suffering left will be the power not yet possessed. This power will be attainable in the form of technology, or rather, information. Newfound knowledge will continue to empower whatever humanity evolves into, be it super powerful AI, or perhaps some type of collective intelligence.
So, my guess as to what a possible goal for future civilizations might be, which is the same basic goal as we have now is... to maintain and gain power, and it will happen via the acquisition of new information, i.e. learning.
other technology singularities (Score:1, Interesting)
the Internet
computers
air travel (unfortunately space travel hasn't had this effect yet)
automobiles
cheap aluminum production
cheap steel production
the printing press
with any of these the world after they became broadly available was something that could not have been predicted prior to the invention, as even the most mundane of these has side effects and uses that were complete surprises
one upcoming development that could end up being another singularity is the possibility of cheap titanium production. while it's already used for expensive things, when it's available to be used in day-to-day items the new level of strength/weight will spur new developments that could change society completely
Oh noes, the Rapture! (Score:3, Interesting)
Re:I for one... (Score:5, Interesting)
I'll also note that your whole argument stems from the assumption that the human race will be in some sort of competition with its tools. Frankly, there's no reason to think anything will compete with us as a race unless we design it that way. As individuals, sure, you'll lose your job if a robotic assembly line can do it better, but you only got the job in the first place because of the existing technology that let you steal the job from the rug weaver in Africa (or whatever). Live by the sword, die by the sword.
future = rise of cyborgs? (Score:5, Interesting)
The problem is also mostly with the expectations people have of computers. Everyone wants computers to return deterministic and easily traceable results. For example, if I want a value from a database I want to issue a query and have the value returned. I don't want a system that would return it faster but with only 80% correctness; I don't want any "fuzziness," only exact numbers. In other words, people would rather have computers do what computers are doing - calculating stuff fast and exactly; they don't want computers to really act like humans. I think subconsciously we will just never allow computers to reach a human level of sophistication, and thus they will probably never surpass us.
On the other hand, what would rather happen is that we will slowly integrate machines into ourselves - literally. As soon as a baby is born we will tag it with an RFID, we will implant sensors for infrared vision and ultrasound, we will inject nanoparticles to boost the immune system. In other words, I see a cyborg future where we become one with the machines. If anything or anyone will destroy us it will only be ourselves; at the same time, if anything helps us prosper, it will also be ourselves. The future is (mostly - short of a big meteorite hitting us) in our hands...
Re:Since when ? (Score:3, Interesting)
Until I actually met a futurist...and then started looking for information on futurists...and god forbid saw videos of the most respected futurists at a futurists' convention. And then I discovered that most futurists are absolute nutjobs. We're talking cult of personality for the emotionally disturbed. Meaning most of them are of the 'Starchild Lovemaker' hippy variety, with only a tangential understanding of technology. They have no idea what tech actually is, and how it works. They're as clueless as all the idiots who invested in those internet bubble IPOs who said burning through millions was a great idea, and that profitability was not important for a publicly traded company or one which was getting venture capital funding. These futurists jumped on memes they had no integral understanding of, just mumbling phrases which caught their imagination.
And then I saw 'trendwatchers'; 'alternative' losers who actually got paid money to roam around the poor areas of asia to spot stuff they could steal and 'incorporate' into the latest western fads. Even less of what I wanted to do.
Now I still want to do what I mentioned above...only in that purely relevant realm, using actual logic and analysis to actually be useful. Cause remember: it wasn't futurists who predicted the internet, the fall of the Berlin wall/USSR, the impact of electronics, or even the wars due to the scarcity of water.
Anyway....more on topic: my guess is the singularity will be quite a ways away, because while it is true we're getting more and more new tech, and developing tech trees faster and faster, there's a major hurdle: cross-pollination - linking the different techs to produce even more powerful tech. 'Search' is just part of the problem (and a huge one at that); it's very, very hard just to know what is known! What research has already been done on nanotechnology? Oh, you mean nanomaterials? Or physics at the meso-scale? Or nano-chemistry? Or, or, or.... And which part of that is useful to me, to the stuff I'm doing? That's HARD! And then there's integration of two disparate fields into one tech....for example, you need biology and electrical engineering to create your biochipthingy. Two very different fields with different terminologies...now learn what they mean and connect them with an engineer and a biologist.
So that's that for the singularity...humans at the moment just can't cope with all the wildly divergent and fragmented information out there, and that problem is only going to get worse. I expect the Singularity is in reality going to be some kind of 'Diffusion' instead. That state will last for a long time before we dig ourselves out of that hole and the real Singularity can occur.
Re:I for one... (Score:3, Interesting)
The thread was assuming that a super AI was formed, and that they would rule over us. Maybe silly, maybe not.
The point of my post was simply this. We may someday be capable of artificially modifying ourselves post-conception in ways that would make that person alien to un-modded humans. Meaning such modifications as computers working intimately with our brains. Genetic modifications for super intelligence, and extra digits. Things like steel-reinforced limbs, motor-enhanced muscles...three breasts for porno flicks. Things like adding new sensory capacities in the brain.
If such things are possible, then their lives..the human experience may no longer have anything remotely human about it.
I understand what you mean. We are evolving now. Change is the norm, and thus anything that changes within us is still human. Fine. But that human may be vastly different than in the past. Your argument is merely taxonomic. Human is merely a label to you? No doubt we came from hominids, and from rodents before that, but that does not mean that the previous rodent experience is comparable to our current human experience. The experience is different. If we change ourselves radically, and QUICKLY, then our experience will be different.
Who says it hasn't already happened? (Score:5, Interesting)
Imagine an intelligent and curious human from rural Nepal, or Papua New Guinea. Could you explain your job to them?
Could you do your job without the embryonic augmentations we have now, such as Google?
We're partway up that vertical curve now.
Re:Since when ? (Score:5, Interesting)
Re:My god! (Score:3, Interesting)
In fact, I would wager that the universe and its underlying complexity will only really be understood by conscious systems much more complex than the human brain - meaning that most likely, effective fusion power will be designed *BY* the intelligent machines. See my sig.
Once "they" control a power plant, then there is no need for the "us" anymore.
Re:I for one... (Score:3, Interesting)
What cracks me up is seeing "spoilers" below, but within view, as though one is supposed to skim past them for some undefined period of time.
Has everyone forgotten ROT13? If Wiki* had a ROT13 control, you could click it and see plaintext; clicked again, it would return to the original material.
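For reference, ROT13 is just a 13-place letter shift, so applying it twice gets you back where you started. A quick Python sketch (the sample string is my own):

```python
import codecs

def rot13(s):
    """Shift each ASCII letter 13 places; everything else passes through."""
    out = []
    for ch in s:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

spoiler = "All your base are belong to us"
hidden = rot13(spoiler)
print(hidden)
print(rot13(hidden) == spoiler)                    # True: it round-trips
print(hidden == codecs.encode(spoiler, "rot_13"))  # True: matches the stdlib codec
```

Since 13 is half of 26, encoding and decoding are the same operation, which is exactly why it works as a one-click spoiler toggle.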
sigh.
p.s.
I'll believe in the singularity when I start seeing "All your base are belong to us" and yanking the power plug doesn't faze it.
Re:invention/discovery... (Score:4, Interesting)
Isaac Asimov discusses that concept in one of his short stories, The Evitable Conflict. In that story, huge computers could assimilate vast amounts of information in order to determine the best course. Because of their reliability, the machines had been put in charge of things like food production and distribution. In the end, the machines began manipulating events to ensure that anyone who disagreed with the machines' control was removed from a position of influence. They did this because obviously what was best for mankind was to be guided by the machines, who didn't start wars or squander resources like humans did. In order to maintain what was best for humanity, they had to act against individual humans and, in short, ensure that humanity was never ever the master of its own destiny.
It's fiction, yes, but even such simple goals as the one you suggested need to be interpreted. How should one weigh up the needs of the many against the needs of the few?
Re:I for one... (Score:5, Interesting)
It might be we still follow the survival of the fittest rule.
But then, how come I sense this disturbing trend that is stripping the single man of all his cultural and material property?
Men in the past had access to renewable water sources because there was a different kind of pollution; they didn't fear the sun because of ozone layer depletion, and didn't pollute the land with genetically engineered crops or chemicals. Culturally speaking, the trend is stripping man of every set of values which is not money: the French revolution fucked the aristocracy. Fascist trolls made us hate nationalism by associating it with violence and ignorance (this is a European perspective; in fact USA people were more nationalist, but now you have your own Bush troll). Global media fucked home-bred traditions in the west, while Communism did the same in a more violent and explicit way in the east. Corporations have stripped us of science: scientific experiments done in total privacy, plus patents, make not science but occultism. Now everything is poised to strip us of religion, as the battle between violent and sexist Islamic integralism, neo-con crusaders, and Zionists will end up with people worn out by WWIII refusing anything that remotely sounds like faith.
This is a brain dump, not an analysis. Am I wrong? I sure hope I am. But think about it when you have to evaluate any change marketed as "progress".
Re:Why the singularity is just late to the party (Score:3, Interesting)
Re:Ye gods... (Score:2, Interesting)
That's even if you're measuring the right thing.
Re:I for one... (Score:3, Interesting)
If people refuse religion and it's their choice, no problem. If external interests want people to obey only one value and make people either hate religion or follow a distortion of it, that's a problem. Another problem is that religion becomes something that divides, and divided people who fight among themselves are not likely to fight other battles which might be vital, like the one to be free human beings.
I am no supporter of aristocracy either, but I like even less that the power coming from being a landowner be destroyed by the almighty buck. Why? Because the buck is easily concentrated in the hands of the few and becomes dangerous. In fact, those having the real money, those producing the money (i.e. banks and the fractional reserve), are interested in acquiring power and real goods, since they know better than anybody that money is worthless per se. Power and goods that won't be ours anyway.
Re:All intelligence is genuine, not artificial. (Score:3, Interesting)
-Edsger Dijkstra
Thanks, I've been wondering about the source ever since he brought it up.