Technology

Nanotechnology And The Law of Accelerating Returns

digitect writes: "The article More More More at Reason is a good overview of the accelerating pace of technology. It includes references to nanotube technology, nanobots, and estimates of gross computing power in the near and far future. Frankly, I doubt we will ever develop computers with the sophisticated power of even a mouse brain, although many may protest that we already have exceeded their gross power. I believe that things like perception and reasoning are beyond the scope of raw power. But it's a fun read anyway."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Storage!

    So far, it has pretty much also held true for "computing power" (doubling every 18 months). New kinds of killer apps become possible when really humungous amounts of storage -- without significant energy drain -- are available... For example, kids could routinely "scan" every school-related document. (Think how much more you might now understand if you were able, whenever you wished, to "intelligently search" everything you ever read -- or wrote!)

    This author is claiming something not even remotely connected to Moore's Law, though. We cannot even teach robots to climb stairs yet (unless it's a set of stairs they already "know"), yet he thinks they will become able to "think" (not only move autonomously but also perceive, relate apparently uncorrelated storage items, solve non-algorithmic problems, etc.). I don't believe that is going to happen in any significant way within the next century, let alone the next decade. Didn't I just read something about how nanotechnology so far seems (in the lab) to be associated with ominously low energy input/output returns?

  • claiming that $1,000 today will buy you computing power that compares to that found in insect and mouse brains

    Or a half-witted dog [aibo.com] for $1500

  • It staggers me how people as intelligent as Bill Joy can have such a paranoid and deluded view of the future.

    Seeing such a complete lack of faith in humanity is more of a worry than the possibility of technology running rampant, just like it always does in Hollywood sci-fi.

    I say bring it on... I can't wait to upgrade my computer from mosquito power to pigeon power!
  • Hmm...I wonder if all that Non-binary information could be sampled and quantized? Naaaaaaaaa..

    There is a large and growing problem in the modern world ... a belief that everything can be 'quantified'. Sure, if you had a model that perfectly replicated every molecule, down to the sub-atomic level, for a volume of space defined by light-speed for the duration of the experiment (e.g. for running the experiment for one second, you need to encompass an area with a radius of 300,000 kilometers), you could accurately predict the future. Realistically, when you start looking at just about anything at the atomic level, it defies prediction. Atomic bombs are about the easiest reaction to simulate, and they're still building computers to adequately model _that_ reaction. At the moment, it's - surprisingly - a rather limited set of parameters - although the results match RW test results OK. Now, that's for a reaction in which the various transition states and interactions are known (e.g. proton hits another nucleus, resulting in ...).

    Compare that to, oh, DNA. Double helix, with a rather large number of combinations. And it tends to change its shape in interesting ways, allowing new chemical bonding. Attempting to model any 'reaction', such as thought processes, without knowing all the reactions that make up one component, is pretty much a shot in the dark, and has virtually zero chance of being correct. The idea that all we need is a bigger and better computer is, ultimately, fallacious. Remember the old acronym GIGO? The same thing applies to attempting to solve equations with incomplete data. The desire to quantify the unquantifiable is just plain stupid - it shows a lack of understanding of what you're attempting to model, and a lack of understanding of chaos theory.

  • Actually, I think evolutionary algorithms are quite interesting. In the form you usually see, they are very good at optimizing lots of variables in a complex solution space. No good for problems where you can't define an overall structure for the solution.

    However, the link you provide made for an interesting read. Genetic Programming (as they describe it) seems to be the most likely candidate for the kind of outcome you describe. It appears to permit program evolution to change algorithmic structure, rather than just algorithmic constants (which Genetic Algorithms change). I'm not totally convinced that even the insane improvements in computing power will enable Genetic Programming to produce anything beyond laboratory curiosities.

    Nonetheless, it would be interesting to see one of these programs be developed to drive a car, or some such "AI"-type problem.
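
    As a rough sketch of the distinction drawn above (everything here is an illustrative assumption, not something from the comment or the article), here is a toy genetic-programming loop in Python: candidate programs are small expression trees, and mutation rewrites their structure rather than just tuning constants, with the goal of approximating a target function.

```python
import operator
import random

# Toy genetic-programming sketch (illustrative only): evolve expression
# trees, represented as nested tuples, that approximate f(x) = x*x + x.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth=2):
    """Build a random expression tree over {+, -, *}, 'x', and constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.uniform(-2.0, 2.0)])
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Sum of squared errors against the target on a few sample points."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Structural mutation: replace a random subtree with a fresh one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree()
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)              # lower error is better
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]
best = min(population, key=fitness)
print(best, fitness(best))
```

    Even in this toy form, most of the difficulty is in choosing the fitness function; the evolved program is only ever as good as the test it is scored against.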

  • Fair enough. I have not described I-ness accurately. I don't understand it well enough to describe it. But, don't you think you know what I mean? Do you not have a sense of identity, of you being more than a pattern of computation? Isn't there anything going on in your head that seems qualitatively different from what you know of the operation of a digital system?

    I'm really not being argumentative here. I really do want to understand what kind of computation (or whatever) is going on in my mind, to make me, me.

    By the way, I do not say computers can't be intelligent/conscious/sentient.

  • Actually, I believe it's:
    "Won't that be grand! The programs and computers will start thinking, and the people will stop!"
    I'm pretty sure about the "won't that be grand" part, but the rest is a little hazy. It's been too long, I need to watch that movie again...
  • Why does our civilization want more, More, and MORE? Why is Moore's law? Why do we exceed the human parameters of perception and sense? In this light, it seems that nanotechnology will fill the bill. But at what point will we cross the line and lose our humanity? So much of our culture and diversity has been lost and homogenized. Perhaps nanotechnology will allow us to live in this ever-accelerating and compressed world but derive some pleasure and reassurance from it.
  • by Junks Jerzey ( 54586 ) on Monday November 13, 2000 @08:55AM (#627515)
    Typically, the "computers will be more powerful than humans!" comments come from newbies, while "computers will get faster but will be distinctly different than brains" is what you hear from people who have been involved in AI for a while.

    The tremendous speed increases in computing hardware are often mistaken for something deeper. We're writing larger applications, yes, but they're not necessarily more stable or more advanced in a way that's different than simply adding more features. If anything, we're starting to come to the realization that simpler is better, or at least that having straightforward goals is much better than shooting for extremes.

    Take compilers, for example. In the 1970s, two top goals of compiler writers were "incredibly high levels of optimization" and "automatic correction of user errors." Today the goal is more conservative: "go for a straightforward implementation that will have the fewest problems." It isn't worth doing over-the-top optimization if you're trading a 0.5% speed increase for greatly increased code complexity. As a result, more compiler writers have taken a conservative approach. In terms of correcting user errors, it is simpler and more predictable to simply report errors as they are found. Trying to be smart causes more trouble than it is worth ("How can my program be wrong if it compiles and runs?").

    Complexity is a limiting factor in grandiose plans for AI.
  • Insightful and compelling post.
    For the sake of us all, please follow this link. [learningco...school.com]
  • Might help you, would harm me.

    I know many formally trained typists who have been typing at keyboards for less time than I have who have carpal tunnel.

    I show no sign of coming down with it. Classic typing destroys hands, and I have no urge to do that to myself, though I appreciate that Slashdot's lack of a spell checker might make my idiosyncrasies somewhat difficult for the reader.

    If you can find me a typing system or device that is relatively easy to learn and non-repetitive, I might be very interested.

    I DO have a twiddler on order for some light weight computing experiments and plan to learn to use it. Maybe that will help.
  • I do not agree.

    While I do concede that there is no way that C++ or Java (or anything else even remotely related) are going to be the languages that we'll be using to program "brains" with in the future, I think you are failing to recognize "evolutionary programming" as a possible solution.

    Producing software by hand that is complex enough to achieve awareness may be an NP-complete problem, but evolution has solved many NP-complete problems in the past =) Additionally, I don't think that software evolution has to be on the scale of eons either; Maybe it doesn't exactly follow Moore's law, but I would think it falls somewhere close. The biggest problem with evolutionary software design, as I understand it, is defining goals and success tests. The research already done in this field is fascinating, and yields software that defies all the rules of traditional software design, often utilizing the hardware in incredibly unorthodox ways that suggest an eerie knowledge of the underlying atomic structures of the silicon.

    Here's a fun site with more information on Evolutionary Computing (EC):

    The Hitch-Hiker's Guide to Evolutionary Computation
    http://alife.santafe.edu/~joke/encore/www/

    In my opinion, the coupling of nanocomputing with evolutionary software will yield computing brains which are indistinguishable from (and eventually superior to, I suppose) their biological counterparts, save for the underlying material composition.

    paulb
  • That is not ...entirely correct.

    Adrian Thompson [susx.ac.uk] has been researching hardware evolution, using a genetic algorithm as a feedback loop for programming FPGAs. In some cases the optimal solutions achieved obviously rely on complex electromagnetic resonance from seemingly unconnected parts of the circuit, behaviour not anticipated by today's testing suites.

    Also, if by the regularity of digital circuits you mean their deterministic logic, their ability not to be affected by random fluctuations in their environment, I'd like to point out that that might not be a positive property in this context.

  • Exactly why should this "feeling" of I-ness be anything special? These consciousness talks get all muddled up because you look at them from inside your head, so to speak. Look inside your head as 3rd person; it's a machine that takes input of slashdot.org and produces output of "because I think I'm I, there must be God".

    I don't really see the problem. Theoretically it sounds entirely possible to build a machine that would react like that.

    Personally (the magic word), I'd say that this whole consciousness thing is overrated. You think of it as some sort of entity, a soul. As far as I'm concerned, a chair has consciousness just like you do -- or don't -- you just react (read-reply-read-reply) to your environment in a more complicated way than the chair. Or a slug. Or a slime mold.

  • What I'm really trying to get at is what is the minimum number of neurons needed to think. Put another way, if we subtract out everything the brain must do that isn't thinking, does it reduce the complexity needed to think?

    In a similar fashion, birds were an existence proof for heavier-than-air flight. Flapping wings are a very complex solution to flying. Once it was determined that wings could be used only to generate lift and that thrust could come from elsewhere, the problem was simplified.

    We know that you do not need a complete set of functioning sensory systems to think. Helen Keller was a fine example of that. And Stephen Hawking does fine without the ability to control his muscles.

    On the other hand, if one is blind or deaf those neurons are still available to think with. And don't have to deal with any sensory input.

    Any AI would need input. We need to determine how complex that input needs to be. Current computers can 'reason' about the physical world without human-like senses. As a trivial example, consider MapQuest. It can answer spatial questions about the world.

    One of the reasons artificial vision is so difficult to implement is that there is too much info. The system can't determine what is relevant. Having an alternate mechanism to introduce info into the system simplifies the problem.

    I realize that we use the same neural machinery for multiple tasks. Thus I would ask: is dreaming, visualizing, etc. needed for thinking? Or is it a side effect of how one particular system, the human brain, was built?

    In a roundabout way, the point I'm trying to make is that using the complexity of the brain as a measure of what is needed for an AI may actually give an upper limit, because the brain has tasks other than thinking.

    The brain may also have redundancies not needed in a minimal AI solution. People can get by with just one hemisphere. Does this mean we can cut our complexity estimate in half?

    And that given that the brain was evolved and not engineered is there a more elegant solution to designing a thinking machine?

    Steve M

  • Before you decide conscious thought is possibly too complicated, there is another viewpoint to try out: conscious thought is NOT complicated.

    Everyone agrees (I think!) that humans have conscious thought. What about monkeys that can use sign language and are sad when their babies are taken away? What about a dog? A bird? A slime mold?

    Just recently, on NPR, I heard a story about possible intelligence in slime molds. It was debunked by an expert who convincingly argued that it was not. It seems we sometimes attribute too much internal thinking to external behavior.

  • Ok, I don't have the entire R.A.W. library on hand (but I have most of the better books), but didn't he point out sometime in the last decade that we DO have (almost) light-speed travel, if you count telepresence? (I realize I'm mixing two separate points of your post, and all the limitations of current telepresence.) Also, and this is nearly totally O/T: please, SOMEONE say something about the faster-than-light (c+) particle travel that was recorded back in June in some US lab. (If NO one knows, I'm sure I can find a reference by asking all the friends to whom I gave a printed copy of the article about it at the time.) Precisely, some specific particle was recorded as having passed some detector immediately *before* it was launched (I think). Hey, flame me all you want, I'm in sales :) but I would really like an answer to this.
  • When the government models nuclear explosions, they aren't modelling what's happening to every atom in a city block.

    Also, just because you can describe configurations of atoms in space (if we could) doesn't mean you can therefore model the identical macro behavior of a cell - that is, unless your model of atomic forces and behavior was extremely good.

    My point, however, was this: let's say we model a one-celled amoeba. We would also have to model its environment to duplicate said behavior. If it isn't getting any feedback through interactions with the world, it isn't doing anything.
  • by coult ( 200316 ) on Monday November 13, 2000 @06:17AM (#627525)
    This stuff sounds a lot like the science fiction of the forties and fifties... "By 1980, people will travel to their offices in automated helicopters! People will fly to the Moon and Mars on a regular basis!" Many science fiction novels of the time portrayed extravagant space travel, and yet had humans doing the navigational computations on board these spacecraft, completely missing the coming dominance of electronic computers in computations. The popular imagination of future technology was simply an extrapolation of the current technology - "Fast airplanes, bigger rockets, atomic power!" - when in reality the technology of the future has turned out to be things that people could not have imagined at the time (the World Wide Web, biotechnology, fax machines, massive computational power...). In the same vein, the breakthrough technology of the next 100 years will most likely consist of things that haven't even occurred to us yet -- not simply faster and faster computers, smaller and smaller robots, better and better bioengineering, but rather something else entirely.
  • > Well it is a lot different. There has been and will be much insight and debate on thinking whether the reality of a thinking machine will ever come about.

    If you're a strong materialist, how can you avoid it? I'm not asking this as a rhetorical question - I really want some ideas.

    > It is a matter on which I'm sure no one here is qualified to make any valuable judgments

    Translation: "Quiet, ya dumb slashdotters!"

    > so let us not discuss it.

    And so therefore we must.
  • ...this relates to moore's law?

    Put succinctly: Moore's Law is a special case of the more general "Law of Accelerating Returns."

  • . . . and currently run via nanotechnological means, albeit inefficient ones.

    I refer, of course, to the human brain. All it is, is an organic, massively-parallel processing environment with an equally massive and complex boot-rom, and a uniquely flexible OS.

    So, why is it ok to have a jellyware CPU system with intuition, and not a hardware one ?? The difference is likely to be a matter of "hardware" complexity, and the "software" running on it. . .

  • by Anonymous Coward
    They are talking about the ability to compute similar task functions, I am sure (if not, we can just pretend they were). Try calculating the complex behavior of a dragonfly on your PC. The dragonfly operates massively in parallel while the personal computer does not. That said, we don't even have refined algorithms for sensory and cognitive functions.
  • You can take the behaviorist approach to explaining my input/output, if that satisfies you. I am not satisfied with the black box approach.

    "Personally" is the magic word. I can't look inside my head as 3rd person...I'm in here. I don't claim it is soul necessarily, and am certainly not trying to argue for or against the existence of God here. There could be some purely mechanistic, computational explanation for my awareness/sentience/whatever. I just haven't seen one yet I think is correct.

    Almost certainly it is possible to build a machine that reacts sufficiently like me that an observer could not tell the difference. But then the interesting question to me would be one that the behaviorist would never ask. Would that machine experience the same sensation of I-ness that I do? Perhaps it is implementation dependent. Perhaps some implementations of my behavior are "conscious" and some are not.

  • I think it's impossible to craft such an explanation that would be accurate and at the same time satisfy you (plural -- god, I hate English's lack of a plural you-pronoun). You can't define "consciousness"; how do you know it exists?

    If there were a machine that reacted like you to the outside world, it wouldn't necessarily be you. But if there were a machine whose internal neural networks ran in patterns similar to yours, then it would be you.

    What I'm saying is that... well, sorry, your last paragraph just made me remove some words. This "I-ness" could indeed be just another feature in the network, but your question "just WHAT is consciousness" as some sort of fundamental question feels strange. You don't have the exact mechanism for anything in the human brain, from motor control to quantum mechanics, so why should this be anything special?

    Sorry, I just got the impression that you were saying consciousness is somehow special, something that a mechanistic world couldn't create.

    P.S. I always wondered about those people who offer souls, dualism, quantum mechanics as a sentience tool, and all that as solutions to the consciousness problem... wouldn't that just move the problem a bit farther away instead of solving it? Doesn't every universe need clearly defined rules, making it just another MechaVerse? And just why would this "randomness" in QM create consciousness? I thought these seemingly random events are just as bound to the laws of nature as anything, only with several equal outcomes instead of one.

  • I don't believe consciousness is anything special.
    I have read a few "explanations of consciousness" but none of them explain to my satisfaction that I-ness that I feel in my head. I'm sorry, but a handwaving invocation of "superposition of neural networks" just doesn't cut it. Please explain my sense of identity, my presence in the moment. I'm not saying it is unexplainable, or even too complicated to understand...just that I have not so far seen a convincing explanation.
  • Fax machines have been around a bit longer, and the basic idea goes back to the turn of the century. Quick story:

    My barber was an encrypter stationed in Germany during the Korean War. (Lots of great stories, and I've discussed Cryptonomicon with him.) Anyway, his unit got a prototype fax machine back in the early fifties. It was slow as molasses in January, but did work. The brass decided not to use it, though, as it was determined to be too slow and imprecise. So the guys at different bases that had these machines would fax Playboy magazine photographs back and forth.

    Glad to see that things haven't changed a bit. :)

  • Let's say we have infinite computer power, infinite memory, and infinite disk space.

    Why not just assume that we have an infinite power source and all components are infinitely reliable? Then you wouldn't need all that disk. :0)

  • Roger Penrose, a mathematics professor at Oxford, makes a compelling argument that no digital computer will ever be able to do abstract reasoning and be intelligent in the way that most define the word "intelligent."

    He attacks this issue differently from Turing (there's an old saying that most people can't pass the Turing test anyway) and takes an approach similar to Kurt Gödel's work in incompleteness of formal systems, which showed that any system you can come up with will always have propositions that cannot be proved or disproved in that system.

    He argues that certain things that humans do all the time, like comprehending paradoxes, self-reference, abstract association, world modeling, etc., cannot be done by any deterministic system (including digital computers) in a finite amount of time.

    He also has a controversial theory that brains use quantum mechanical effects to employ "multiple universes" to get from point A to point B. His argument is way out there, but it's pretty airtight.

    The Sir Roger Penrose Society [welcome.to] discusses this a lot and has lots of links to other similar discussions.

  • In response to Penrose's arguments, I have two points. He rests the majority of his case upon Gödel's theorem of incompleteness.

    First: there is no evidence that Gödel's theories don't apply to human intelligence just as much as they do to the automated reasoning processes that they were originally targeted at debunking.

    Second: Gödel's theories only apply to deterministic systems. The moment you start including random numbers in the equations, you cannot get a result anymore. Your automated reasoning system can break out of the 'rut' that it was in, and up to the next class of problems. Doesn't this sound exactly like the way humans approach mathematics? Ask any mathematician who has struggled on a hard problem for years - the solution comes in a moment of inspiration - just another word for a random thought unlocking the secrets of the problem.

    So, thinking machines must be nondeterministic. Fine; we already know how to make circuits behave in a nondeterministic manner.
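
    For what it's worth, the "random numbers let you break out of the rut" idea is already a standard trick in optimization. Below is a minimal simulated-annealing sketch (a related but distinct technique; the function and all parameters are made up for illustration): occasional random uphill moves let the search escape local minima that a purely deterministic descent would be stuck in.

```python
import math
import random

def objective(x):
    """A bumpy function with many local minima."""
    return x * x + 10.0 * math.sin(3.0 * x)

x = random.uniform(-10.0, 10.0)
temperature = 5.0
for step in range(10000):
    candidate = x + random.gauss(0.0, 0.5)          # random perturbation
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the "temperature" cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999
print(x, objective(x))
```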

  • why is it ok to have a jellyware CPU system with intuition, and not a hardware one ?

    Simple answer: One has a soul, the other doesn't.

    Current scientific "knowledge" refuses to believe the existence of the soul, because it can't prove that it exists. Until scientists can overcome that hurdle, computers will never be more than sophisticated adding machines.

    At one time, I believed that it would never happen, but recent advances in the open-mindedness of some scientists have made me believe that there could be hope some day.

    At one time, it was considered scientific heresy to suggest that animals had emotions, creativity, or self-awareness - but some animal researchers are beginning to understand now.. and this is the beginning.
  • Proper design is left as an exercise for the reader.

    I think I had you for a professor once. I shot your dog.

  • You're overlooking the obvious. If you want to do analog computing in silicon, you build an analog computer.

    This sounds smart, but as someone who has designed analog neurosystems in silicon, I'd say that in this day and age you can get far more bang for your buck using digital simulations of analog circuitry. The reasons for this include the power of digital design tools (VHDL, etc.), digital testing suites, regularity of digital circuits, and the generality of digital machines (no application-specific silicon required).

    See airplane flight vs. bird flight on this one...
  • ..this relates to moore's law?

    ----

  • Uh, no. Because of noise in electrical circuits, 64-bit doubles are more precise/have fewer errors than the best available analog circuitry. Do you think the engineers /like/ discretizing their systems to solve them? In the unlikely event that transmission noise is important to the functioning of the human brain, you'll probably come out ahead going digital anyway; analog (or custom digital) circuitry doesn't generally benefit from Moore's law. (Read about the Lisp Machine, et al.) You can always sample the noise from an analog source and add it in where necessary.

    -_Quinn
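
    A tiny sketch of that last point (the zero-mean Gaussian noise model and the numbers are assumptions made purely for illustration): a digital neuron update with sampled "analog" noise injected where it matters.

```python
import random

def noisy_neuron(inputs, weights, noise_sigma=0.01):
    """Weighted sum plus injected noise, then a hard threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    activation += random.gauss(0.0, noise_sigma)   # sampled "analog" noise
    return 1.0 if activation > 0.5 else 0.0

print(noisy_neuron([0.2, 0.9, 0.4], [0.5, 0.3, 0.1]))
```
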
  • The quote above strikes me as a butchering of an argument put forth by Hans Moravec in his book _Robot_ (and I can't quote the subtitle from memory). Go to Carnegie Mellon University's web site.

    He presents a cogent argument that we are five orders of magnitude away from computing power that simulates the human brain. He lists the steps as the intelligence of:

    an insect,

    a lizard,

    a mouse,

    a monkey,

    and human.

    Moore's law says five orders of magnitude is 17 doublings. 34 years for the pessimists who use 4x every 4 years. Make that 33, 'cause the book came out last year.
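
    The arithmetic behind those figures is easy to check (a quick back-of-the-envelope calculation, not anything from the book):

```python
import math

doublings = math.log2(1e5)    # five orders of magnitude ~= 16.6 doublings
print(round(doublings))       # -> 17
print(doublings * 1.5)        # doubling every 18 months: ~25 years
print(doublings * 2.0)        # "4x every 4 years" (2-year doublings): ~33 years
```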

  • Prove to me you're not a Chinese room.

    -_Quinn
  • Is E.M. Forster

    Steve M

  • by Trinition ( 114758 ) on Monday November 13, 2000 @06:24AM (#627545) Homepage

    Frankly, I doubt we will ever develop computers with the sophisticated power of even a mouse brain, although many may protest that we already have exceeded their gross power. I believe that things like perception and reasoning are beyond the scope of raw power.

    Just to offer my viewpoint... The brain is slow, but massively parallel and interconnected in a vast array of various neural networks. Inherently, the brain is analog -- down to the quanta of electrons involved in the chemical reactions.

    In order to simulate that in a computer that executes things very quickly, but serially, would require a HUGE AMOUNT of computing power. You'd have to be able to simulate time-slices as small as those significant in the brain.

    However, if we were to take several slow processors and network them together in parallel, we'd probably get a lot closer for a lot less.

    I don't believe consciousness is anything special. It's just the superposition of hundreds or thousands of neural networks all working together. Heck, at one time, mankind thought the motions of the planets and stars were just too complicated to ever be figured out, so they were labeled as something mysterious and never to be known. We shouldn't make that same mistake with the brain and mind simply because it appears at present to be too complicated to figure out.

  • This is a key point; one that often seems to be overlooked. I'd be willing to believe we've got enough computer power to match a mouse's brain. And there seems to be no evidence that we've reached the end-point of Moore's law.

    But even supposing we have the equivalent raw power of a mouse's brain, it doesn't mean a thing if we don't have a clue how a mouse's brain works.

    -y

  • Can you imagine a Beowulf cluster of these articles? They would have us believe that we're going to turn our civilization (and each one of us with it) into God in a few years.

    I'm continually amazed at Moore's law, and how long we've managed to keep it going. My little web page to guestimate hard drive prices [tsrcom.com] has been revised twice because it wasn't optimistic enough.

    That being said, I think it would take about 10^9 current generation systems networked together to approximate the learning skills of a single two year old. (OK, perhaps I'm being optimistic) If the current trends hold, that means we can take off a factor of 10 every five years, so it's at least 45 years until our computers are as smart as a 2 year old.

    On another point, I fail to see how a new technology is going to eliminate the need for capitalism to keep us all motivated.

    Mike Warot, Hoosier

  • Trinition wrote:

    However, if we were to take several slow processors and network them together in parallel, we'd probably get a lot closer for a lot less.

    Interesting concept. Has anyone tried AI using clustering technologies, rather than brute-force computing ????

  • I didn't look for the /. story, but here [nyu.edu] is the web site for the silicon mouse you are referring to.

    Steve M

  • Strange thing, I was reading through the /.'ers comments (I RARELY post), and I was wondering if anyone would point that out! Coulda sworn /. had some skeptics. Who cares about the computing power of a mouse brain? Doesn't it seem a bit more far-fetched to think that in 10 years we're going to have nanites rummaging through our brains, "beaming" in experiences? At that point, I'll be glad I have my MCSE. God save you if Win2010 crashes in someone's head due to bad administration. I think that something important to remember, though, is that the economy doesn't support products that scare the great majority of people.
  • If it is, in fact, possible for man to build a machine 'smarter' than himself, is it not necessarily true that this machine is capable of the same feat? It seems if we built a machine 'smarter' than ourselves we would necessarily create an omniscient machine indirectly, or at least a series of machines that exponentially approached omniscience. Man, I love recursion.
  • Around 2030, we should be able to flood our brains with nanobots that can be turned off and on and which would function as "experience beamers" allowing us to experience the full range of other people's sensory experiences

    With this ability, lawyers would all be out of work - speedy instant justice with no protracted trials, wrongly accused, etc etc. It's like some kinda dream world.

    Oh wait, even 30 years ago we were promised Mars colonies and flying cars in everybody's garages.

    I want my flying car.


  • You ever used OCR? It can pick up multiple-font (and/or damaged) characters at a very high accuracy without any training or 'thinking' at all.

    Aside from that, you have a point -- an outstanding question in AI is how to connect the visual system to a symbolic one that will help interpret the visual data.

    Incidentally, you don't need emotions to create a course of action, just a lot of horsepower. (Remember Deep Blue?) I'd also doubt you need emotion to create goals; certainly Asimov's three laws would suffice for goal-creation, and they don't necessitate emotion.

    But in general, the theory isn't there; we're still waiting for the psychologists to develop a hard science. :)

    -_Quinn
  • break the conventions of the Jihad? ;)

    but seriously, I love the promise nanotech has, and frankly would love to see what it can and will offer.

    Looks like the Sony ad people liked it too, what with the PS9 ad they have for the PS2 :) (you don't watch enough TV if you haven't seen that ad yet!)

    ------
    http://vinnland.2y.net/
  • It isn't that consciousness is simple, it is just that it probably stems from a simple set of conditions. One problem is that it is hard to retrace the end result (consciousness) back to those simple rules and conditions. It is analogous to Conway's Game of Life, where 3 simple rules can make very complex results which can never be traced back to their initial states.
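
    Conway's Game of Life makes that concrete: a handful of local rules (a dead cell is born with exactly three live neighbours; a live cell survives with two or three) produce global behaviour that is effectively impossible to run backwards. A minimal sketch:

```python
from collections import Counter

def step(live_cells):
    """One Game of Life generation; live_cells is a set of (x, y) pairs."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a "glider"
for _ in range(4):
    cells = step(cells)
print(sorted(cells))    # same shape, shifted one cell diagonally
```
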
  • The individual neuron is not much more than a glorified adding machine.

    Which kinda highlights the ignorance most computer types have regarding biology. PNP junctions ain't tough. Let's take a typical neuron ... interconnected with a few dozen other neurons, capable of dealing with multiple inputs. Each inter-neuron connection comprises several close 'ends' (to use a non-technical term ;-). So what affects the transmission of an impulse? Well, scientists are still working on that. The various chemical levels (serotonin, to name a well-known one). Virtually every chemical in the body. Had a fright? Epinephrine (a.k.a. adrenaline under the old nomenclature) enhances the transmission of info at certain synapses (those associated with movement, strangely enough). Had another nearby neuron going off recently? That degrades transmission slightly. Same neuron going off twice in a row? Slightly harder to get the signal through. All those chemicals also have a big effect within the neuron itself.

    A bit is either on or off. There have been _some_ multistate bit experiments (most recently Intel with its memory), but generally it hasn't gone anywhere. All computer comparisons are BINARY. The neuron is affected by tens of thousands of chemicals. It's affected by other neurons. Every damn one of those interactions is graded, i.e. NON-BINARY. Did the nail in your foot cause just one molecule of epinephrine or one hundred to reach that neuron? How does that interact with the other ten thousand chemicals, each of which has the same variable effect?

    Are you familiar with the exponent and factorial functions? Ten to the eleventh neurons. Varying numbers of connections, from a minimum of two to over fifty. Varying numbers of synaptic gaps. Several thousand chemicals floating around your bloodstream on a regular basis. You start getting numbers that (grossly) exceed the number of atoms in our planet (let alone our solar system), using the most conservative estimates. Now, given that a binary 'bit' requires quite a few molecules to build, it's kind of, well, ignorant to view a neuron as a 'glorified adding machine', when an adding machine equivalent to the human brain would require more space than our planet.

  • "You ever used OCR?"

    Yeah - obviously you can always hard-code something. The "specific keys with a different font" example was only used in the context of thinking. For example, my keyboard has some keys where the text is below the actual key - but we intuitively know that the text below the key relates to the key above it. If I encountered another keyboard with the same key, but with the text on the key, I would automatically conclude that they have the "same" key - as in function. Just represented differently. Obviously a computer program can't figure that out. It would need some sort of reasoning to do so - yet a straight program that recognizes objects wouldn't have a chance in heck.

    IMO, It's only that our current systems aren't sufficiently complex and generalized to deal with said problems.
  • That it 'can be done' is a pretty bold statement. I will grant it is possible, but remember there are little things like Heisenberg's Uncertainty Principle that have a lot to say about what you can observe on a "small enough scale". We can't simply observe a brain and reproduce it "artificially". Any successful attempt at doing so will require, at the very least, a bit of luck.
  • Simple answer: One has a soul, the other doesn't.

    I beg to differ. Speaking as a non-Christian, I could argue that people have no more soul than a TI-82 (now, the TI-85s are a different story), because given a simple stimulus you will evoke a simple and directly predictable response - should you know enough about the person's psychology.

    Everything is relative and based upon quantum probabilities (rather than Aristotelian binary logic). The problem with our current computers is that they are based upon Aristotelian logic, that is, they ignore the logical third possibility - maybe - and extensions thereupon (10% maybe, 20% maybe, 30% maybe, etc.).

    As the function of the human brain is based upon assigning a probabilistic value to an observed relation (ask Pavlov) through positive and negative reinforcement, the logic of our wetware is such that we have a near-infinite degree of %maybe available to define our observations.

    Our neurons create new connections based upon our observations (pos/neg), which in turn influence future actions (a toy sketch of that kind of update follows at the end of this comment).



    Personally, I don't know whether humans have a "soul" or not, although the more I see us seeking to define it, the more I see it running away - after all, definition into such an inflexible logic would surely thwart something rooted in such a flexible system.

    I would like to think we do, but that will just be something I keep as a big maybe for now.



    Of course, if the soul and body co-exist, a Quantum Logic computer would indeed have both.
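
    On the Pavlovian point a few paragraphs up, here is a toy sketch of that kind of graded association (the update rule, learning rate, and numbers are illustrative assumptions, nothing more): the strength of a stimulus-response link drifts toward 1 with positive reinforcement and toward 0 with negative, and future behaviour follows the learned probability rather than a hard yes/no.

```python
import random

def update(strength, reward, learning_rate=0.1):
    """Nudge an association strength toward 1 on reward, toward 0 on punishment."""
    target = 1.0 if reward else 0.0
    return strength + learning_rate * (target - strength)

strength = 0.5                      # "maybe": no opinion yet
for outcome in [True, True, False, True, True, True, False, True]:
    strength = update(strength, outcome)
print(round(strength, 2))           # a graded degree of "maybe", not a 0 or 1

acts = sum(random.random() < strength for _ in range(1000))
print(acts)                         # behaviour roughly tracks the learned strength
```
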
  • I recently watched a lecture by someone named Lloyd Watts who Kurzweil (featured in this article) has picked up on. He has simulated the human auditory pathway on a multi-fpga board. In 5 years it may be viable to do this on a PC.

    Of course, he has no idea what exactly happens cognitively from wherever the auditory pathway projects -- but it is a good start.

    Mr. Watts offers the supposition that neuroscientists really do understand specific brain functions - it's just that our information is so fragmented that it is very hard to understand how these systems work.

    Supposedly by 2020 we should have the computing power to do higher order cognitive functions; the question is - will we have the algorithms?
  • by SEWilco ( 27983 ) on Monday November 13, 2000 @06:38AM (#627561) Journal
    You're overlooking the obvious. If you want to do analog computing in silicon, you build an analog computer. Don't try to emulate analog in digital systems, instead you burn analog circuits on your wafers.

    Proper design is left as an exercise for the reader.

  • Are we forgetting just how much biological regulation a mouse brain has to maintain? It's not just all "calculations"....

    I've often seen arguments against AI based on the complexity of the brain. Human, not mouse. But the above quote shows why this may not be as large an issue as generally believed.

    The question is, how much of the brain is devoted to non-thinking tasks and does this significantly reduce the number of neurons needed for thinking?

    Here's one example. There is a lot of neural machinery devoted to visual processing. Yet one doesn't need to be able to see to be able to think. So can we subtract the neurons and the corresponding connections devoted to the visual system from the number of neurons needed to think?

    What about the brain resources devoted to the other senses? Or those used for muscle movement?

    Steve M

  • by big.ears ( 136789 ) on Monday November 13, 2000 @06:41AM (#627563) Homepage
    The submitter wrote:
    I believe that things like perception and reasoning are beyond the scope of raw power."

    Actually, these are the two areas of artificial intelligence that are probably understood better than any others. (Language and Memory--now those are problems people are still clueless about, IYAM.) Neuroscientists have mapped out the perceptual system in great detail (at least the visual perceptual system), and there are some fairly advanced neural network models that embody these findings. On the other hand, Newell and Simon were able to understand and explain many kinds of "reasoning" very well in the 1970s--today the main descendant of this work, "SOAR", can work with 10,000 or more rules. It can fly planes in simulated combat and make strategic and tactical decisions. Maybe it is unable to do everything a pilot does, but I would argue that it is still reasoning.

    So, it is technically correct to say that these things are beyond the scope of raw power, but the theoretical advances have already been made. The only thing holding these systems back from real-time performance is raw power.
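
    For readers who have not met production systems like SOAR: the core loop is tiny, and the capability lives almost entirely in the size and quality of the rule set, which is why raw power and memory become the bottleneck. A minimal forward-chaining sketch follows (the rules and facts are made-up toys, not SOAR's actual format):

```python
# Each rule: (set of conditions, fact to conclude).  All toy content.
rules = [
    ({"bandit at 2 o'clock", "missiles available"}, "lock target"),
    ({"lock target", "in range"}, "fire missile"),
    ({"low fuel"}, "return to base"),
]

facts = {"bandit at 2 o'clock", "missiles available", "in range"}

changed = True
while changed:               # keep firing rules until nothing new is concluded
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
print(facts)
```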

  • It's a lot like the difference between gross motor and fine motor. The ability to write your name and pull a bus are completely different.

    My Octane pulls busses all day long. But the qualitative difference between that and a human brain are enormous. It's apples and dumptrucks.

    Today: Frank takes ill [ridiculopathy.com]

  • When comparing the computing power of mouse brains and $1,000 computer systems, what is he really talking about?

    Correctly rewired, you can play Quake III on the mouse brain?

    Correctly programmed (and fitted with legs), the computer can run around sniffing for cheese?

    Computers and brains have very different ways of working. I cannot see that these comparisons have very much meaning without it being specified how the comparison is being done.
  • by volsung ( 378 ) <stan@mtrr.org> on Monday November 13, 2000 @06:42AM (#627566)
    The basic problem I have with the nanotech pundits is that they seem to assume that once you have the parts, it is trivial to make the whole. I think the Computing Revolution has demonstrated the falsity of that statement fairly well.

    Processors provide the "parts" of computation by physically performing the actual instructions used. These computers basically allow numerical operations, memory access, and branching. That doesn't seem like much, but it's "Turing complete," which means that (if you buy the Turing hypothesis) everything which is computable can be computed with such instructions. We have all the parts of computation we need, and they're getting faster all the time.

    But the software still lags. We have "computationally intense" software, but that's not the same as complex software. 3D games always push the envelope of computer capability because just when you think you've got enough computing power, id throws more triangles and more textures at the problem. That's a quantitative change, but not a qualitative change.

    When we look at all of the other software produced, it seems that if the software is marginally complex (think of your favorite program here), it's buggy as hell. Reducing the bugs in the software requires more effort; an exponential amount of effort as the complexity increases.

    That's why we've seen the speed of computer hardware shoot through the roof, and the complexity of computer software plod along, unable to keep up. Producing complex software is an NP-complete problem. (/me ducks the flames of the math people in the audience.)

    If you'll permit me to play pundit for a second: I think we'll reach these so-called "milestones" that the AI people and the nanotech people keep giving us and realize that while we can manufacture a computer with the MIPS/FLOPS/whatever of a mouse/dog/human brain, we don't have the slightest idea how to string all of that power together to actually perform the operations of the mouse/dog/human brain.

    Your computer will get 10,000 fps with 6e10 textured polygons in Quake XXXVI, but it still won't be able to learn a new language.
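
    As a purely illustrative version of the point about the parts being simple: a few lines of interpreter give you arithmetic, memory access, and a conditional branch, and with unbounded memory that is already enough to compute anything computable. The instruction names below are invented for the sketch.

```python
def run(program, memory):
    """Interpret a list of (op, a, b) tuples over a dict of registers."""
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "add":                 # memory[a] += memory[b]
            memory[a] += memory[b]
        elif op == "sub":               # memory[a] -= memory[b]
            memory[a] -= memory[b]
        elif op == "jnz":               # jump to b if memory[a] != 0
            if memory[a] != 0:
                pc = b
                continue
        pc += 1
    return memory

# Multiply memory[0] by memory[1] into memory[2] using only add/sub/branch.
prog = [
    ("jnz", 0, 2),   # 0: if the counter is non-zero, enter the loop body
    ("jnz", 3, 5),   # 1: otherwise jump past the end (memory[3] is always 1)
    ("add", 2, 1),   # 2: result += b
    ("sub", 0, 3),   # 3: counter -= 1
    ("jnz", 3, 0),   # 4: back to the top
]
print(run(prog, {0: 3, 1: 4, 2: 0, 3: 1})[2])   # -> 12
```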

  • "Incidentally, you don't need emotions to create a course of action, just a lot of horsepower"

    Well, Deep Blue is just something programmed to play chess by a bunch of people, which iterates through a huge game tree with pruning help from the programmers, is it not?

    Anyway, I'm vague because I don't sufficiently understand this stuff. That said, you can obviously hard code goals - but how would a program create new goals without emotion? For example, let's say thinking of making a lot of money makes me happy as it is an instrumental goal to doing things that will ultimately make me happy (i.e., travelling, time with loved ones - which are instrumental goals towards blah blah etc). Without the emotion I would not be able to add new goals, as nothing would motivate me to do so. If I was waiting for the subway and I saw someone getting beat up, I might feel anger and perhaps fear. Whichever one won out as a result of a cognitive process (i.e., thinking I might be next, or believing I can "take that guy") would affect goal oriented behavior.

    You might say that we could do the same thing by calculating value in some sort of action matrix - but when sufficiently complex, wouldn't it be an emotion equivalent?

    My ideas may just be outdated, as they were primarily formed by a 1995 book, Descartes' Error: Emotion, Reason, and the Human Brain, by Antonio Damasio.
  • OK, I understand. My supposition is that we do *not* understand this stuff, and computer power is what is holding us back.

    I think we'll have computer intelligence before we even come close to understanding intelligence. That's because our first intelligent computers will be products of brute force: simulate all possible brains and ask all of them to flash the screen if they understand us. The ones that flash the screen are possibly intelligent.
  • Some of the much-touted materials technologies progress while others are commercial lemons. Successes are silicon chips, optical cable, and screen displays. Lemons are high-Tc superconductors and buckyballs.
  • I doubt we will ever develop computers with the sophisticated power of even a mouse brain, although many may protest that we already have exceeded their gross power. I believe that things like perception and reasoning are beyond the scope of raw power.
    If you're saying that just having enough raw power is insufficient, that's true. If you're saying that we won't be able to develop computers matching the sophistication of biological brains (i.e., artificial intelligence), I think you're wrong. I strongly suggest reading The Age of Spiritual Machines: When Computers Exceed Human Intelligence [amazon.com] by Ray Kurzweil. Also Engines of Creation [amazon.com] by K. Eric Drexler.

    For a contrary opinion, you can read The Emperor's New Mind [amazon.com] by Roger Penrose. While it's well worth reading, Penrose's argument against artificial intelligence seems to be that intelligence requires quantum uncertainty, and that computers are deliberately designed to avoid the effects of quantum uncertainty.

    This argument fails to persuade me for two reasons:

    • Penrose fails to demonstrate a convincing need for quantum uncertainty
    • If quantum uncertainty does turn out to be necessary for intelligence, there's no reason why we can't build computers that are affected by it. They may not be much like today's computers, but that wouldn't make them any less artificial.
  • Well, the idea is this: Most people won't be motivated. Most people are useless anyway. How many telephone sanitizers and lawyers do we need? And not to be harsh, wouldn't all the people in shit jobs on assembly lines be much happier drinking free beer and watching tv?

    The elite will be the ones actually DOING things - creating the content to amuse the masses, doing the research, directing the engineering. They won't be elite because they have been elevated, they will be the elite because they are the few with the actual motivation to DO anything.

    I had a sig, but it was stolen by communists.
  • People, I have programmed with plugboards and breadboards. I know the "proper design" is really hard to do with analog circuitry, and it's much easier to break analog problems down into digital processing. Especially with temperature variations messing with your resistance values...but I like to understate difficulties.
  • Perhaps. But isn't that a little like having a million monkeys crack away on typewriters until they've reinvented Esperanto or something? Obviously such an effort would have to be guided by people with an idea of where they were going - i.e., people having a hand in the design and therefore understanding something about the system they are concocting.
  • by Erasmus Darwin ( 183180 ) on Monday November 13, 2000 @06:43AM (#627574)
    Perception can be achieved when we understand it.

    Do we have to understand it, first? It seems like if we could cheat a bit, and just model a mouse brain inside a physics simulation, we'd have a computer engaging in perception-like tasks. The obvious drawback to this is that it wouldn't be capable of doing anything a mouse couldn't do (and would lead to a flurry of /. posts along the lines of "Imagine a Beowulf cluster of these things! They could run through mazes and find cheese! The possibilities are limitless!"). However, by modelling a mouse brain, scientists would be able to better "fiddle" with it and understand it, possibly leading to a more practical understanding of perception.

    And to get further off on a tangent (but hopefully remaining within the realm of a worthwhile discussion), we suddenly open up a whole can of worms with regard to creating a machine-based consciousness. In my own opinion (and this is just opinion here), a hypothetically powerful/complete enough simulation of a human brain approaches consciousness. I'm of the opinion that an actual living, breathing human is just such a simulation via chemical means. I'm a little afraid of the ethical consequences when we gain the ability to create neural networks with complexity rivaling that of our own brains; one could argue that it's of even greater ethical concern than human cloning. A human clone, at least, has the benefit of being inarguably human (barring something really weird like a gorilla/human hybrid), and would thus be protected by normal laws.

  • by Anonymous Coward
    "So can we subtract the neurons and the corresponding connections devoted to the visual system from the number of neurons needed to think?"

    It would probably reduce the potential problem sets that the AI could handle. That is, if we didn't have any other mechanism to introduce concepts to them. If you don't understand the world around you spatially, it would be a lot harder to think about abstract interactions between them.

    We also use some of the same "neural machinery" when imagining, dreaming, visualizing, whatever. I'm pretty sure the same goes for sounds as well.
  • by Benjamin Shniper ( 24107 ) on Monday November 13, 2000 @07:01AM (#627581) Homepage
    And then we could build a beowulf cluster of these!

    Goodbye Karma.

    A large, networked system of analog neurons might just do the trick here for creating a system with the intelligence of a mouse. But absent any good way to deliver, register, and respond to stimulus, this would be one crazy machine. It simply wouldn't have enough information to act, any way to deal with information sent to it, or any way to figure out whether its actions were appropriate or inappropriate (it would need a complex system of rewards and punishments and some sort of inherent internal mapping of neurons to stimuli and responses). To wit, experiments have shown that if you cluster a bunch of analog neurons together, it will think random thoughts until you bother to shut it off.

    Plus it would need to "eat", self-repair, purge unneeded inputs (both by discarding unsupported hypotheses [Is this a cat? It does not look like a cat. It is not a cat.] and, if it eats, it will have to poop), and eventually defend itself against hazards. In other words, mice will be "better" for a long, long time.

    -Ben
  • okay sure

    I remember back in the early '80s when people were like, "By the year 2000 robots with computers in their heads will be doing all of humanity's menial tasks; we will just do the creative stuff." Okay, since I have a robot making my bed and cleaning my apartment for me right now, I can assume that's true. Yeah, nanotech robots will do all the work too, especially since they are so small, they will have all that space for computing power.

    >Around 2030, we should be able to flood our brains with nanobots that can be turned off and on and which would function as "experience beamers" allowing us to experience the full range of other people's sensory experiences and if we find ordinary experience too boring, we will have access to archives where more interesting experiences are stored.

    Sound like a PS2 commercial???
    Okay, again I'm going to be the skeptic: in 30 years we are going to let nanotech robots into our brains to manipulate what we are thinking about, and yeah, nobody will have a problem with that.

    I mean foresight is good but have a little common sense.
    Every time there is a new technology, people think that it will drastically change the world. The internet is great, but it hasn't changed the world that much: I still shop at the mall and talk to my parents on the phone. I still have to study for my tests and I still have to pay the bills.
  • "Oh that'll be great. Computers will start thinking and the people will stop!"

    Many people I know have already gotten a head start on this one, so I'm not sure how much of a real impact this will have on the world. But personally, I like this bit myself...

    Around 2030, we should be able to flood our brains with nanobots that can be turned off and on and which would function as "experience beamers" allowing us to experience the full range of other people's sensory experiences and if we find ordinary experience too boring, we will have access to archives where more interesting experiences are stored.

    Beamers, you say? Wow! I sure do love electronics, don't you? Dude, I've seen what happens when my toaster breaks down. Except when that happens, all you lose is breakfast. If one of these "beamers" decides to get ambitious, you end up stuck in John Malkovich's head or something. That also brings up some damned interesting and abusive uses of this technology. To what extent would we be capable of "experiencing the full range of sensory experience"? How much information gets broadcast to our own brain? If our senses are telling us we're experiencing artificial events from someone else's brain, then do we forget who we are? This is getting into the realms of philosophy, and I don't think I even want to begin delving into the implications here. Just know that there are many, and they're not all Utopian.

    These robots, when they were developed, would do all the world's work: People could sit back and enjoy themselves, drinking their mint juleps in peace and quiet.

    Say, I think I read a book about this somewhere... The Time Machine, perhaps? Is anybody else dismayed by the notion that this technology would allow us to become lazier than ever? I'm sure there are others, I'm sure I'm not in the minority in thinking this. Now, mind you, I really like nanotechnology. I think that it's capable of revolutionizing every corner of life, and could perhaps make many jobs automated, making services much, much cheaper, lower cost of living, and give people much more free time. Or just leave everyone out on the street desperate for a job, when companies don't bother lowering their prices on items that now cost practically pennies to make, standard of living stays the same, and you're left with scores of people out looking for a way to keep from starving to death for one more night.

    Within 10 years, revolutions in genomics, proteomics, therapeutic cloning, and tissue engineering will be adding more than one year every year to human life expectancy

    Wonderful! And when people stop dying, we'll have to colonize the seas. When the seas get full of folks, we'll burrow underground. When we've reached the limit to how much our natural resources can sustain us, we'll all turn into cannibals or something. Personally, I'm not a huge fan of living forever. I think the real issue is not expanding our life span, so miserable people can live their miserable lives for another miserable fifty or so years, but rather trying to improve quality of life, so that the few years we do have aren't so miserable. We've got six billion people on this spinning ball of dirt and water, and well over half of them (I don't have the stats in front of me) are dying of malnutrition, while we in the more developed nations waste enough food to sustain a few dozen small countries. Isn't it ironic that in a world where most humans are starving to death, the US is dealing with the growing rate of obesity? Doesn't this seem a tad unbalanced? I suppose the moral of this particular story is that we need to improve quality, not length, of life, and this can only be done by properly distributing the resources we have, if nanotechnology is going to have any kind of positive effect on the world.

    As a side note, who will be getting these treatments? The wealthy? That's going to cause some serious social problems if we suddenly end up with the rich crowding the world. And they won't be rich forever... so what happens then? Fine, so we make it available to everyone who wants it. Great, and who's going to fund that? It's not going to be evenly and fairly distributed; it's just not. Only the rich/powerful/important will get the treatment, and the gap between the rich and the poor will widen beyond repair.

    Okay. I'll stop now. The moral of this whole story is that I just don't think we're ready for this kind of revolution. We have to figure out what's really important about the quality of the life we're lengthening before we make ourselves immortal. We have to learn more about the true nature of the self before we start bombarding our brains with other people's experiences. And we need to seriously get our collective heads out of our communal asses before we start restructuring society the way it will need to be when human workers become obsolete. But these are just things to think about. In the end, there's not a whole lot I can do or say to stop this phenomenon. But we are indeed, as the Chinese curse proclaims, living in "interesting times."

    /* Steve */
  • Check out Geniebusters.org [geniebusters.org] for a well reasoned critique of the idea that nanotechnology will produce some sort of effortless cornucopia of wealth. From the article:

    A lot of people read Engines of Creation and think: there has got to be something wrong with this. But they can't put their finger on it. They always assume that if nanotechnology is possible at all, then everything in Engines of Creation follows, and so they think they have to show that it is impossible to build machines out of atoms. These nanocritics lose the argument every time, because in fact it is possible to make machines out of atoms.

    The problem lies elsewhere. I take a different approach. I acknowledge the obvious fact that nanotechnology will exist. It is already well underway. It seems like almost every issue of Nature has an article about nanotechnology (in a general sense). However, the fact that nanotechnology will exist does not imply that little robots will supply all our needs for free. In other words, Nanosystems may be true, but this implies nothing about Engines of Creation.

    This article ought to be required reading for anyone writing about nanotechnology.

  • It's an interesting western philosophical bias that minds are somehow separate from bodies. In my opinion, simply replicating what the brain does won't be enough.

    The brain receives input and stimuli from all of the body's systems. The brain is also messy and imprecise internally. I think all of those factors will actually turn out to be important.

    It may also turn out to be the case that the brain depends on quantum phenomena to function. Yet another thing to worry about.

  • That sounds almost like "prove to me that you have free will". Obviously I can't -- though, in the context of the sentence, I used "Chinese room" to illustrate the fact that the only AI we know (weak AI) doesn't really know anything about the symbols it's manipulating except under limited conditions.

    I don't subscribe to the conjecture that machines will never be sentient - just that at current complexity and understanding, it's somewhat illusory.
  • If you can't describe your I-ness any more accurately, then how can you expect me to quantifiably explain it? That is part of the problem with people saying computers can't be [intelligent/conscious/sentient/etc.] -- those terms have varying definitions.

    All I know is, in the tiny bit I've dabbled in neural networks, I've seen a lot. I've seen memory, self-organizing maps, boundary detection, etc. -- all from very simple and small neural networks (a toy example is sketched at the end of this comment). Now when you consider that there are billions of cells in the human brain, you multiply those simple capabilities enormously!

    I don't see humans as being anything more than one step away from the rest of the animals. We happen to have excessively large brains for our size (only humans and dolphins have significantly heavier-than-average brains for their body size), and because of that surplus of neural power, we *seem* smarter -- just as a Pentium III computer seems able to do more than an 8086 (speech recognition, for example).
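
    Here's a toy illustration of that "boundary detection from a tiny network" point -- a minimal sketch, not anything from the article. A single artificial neuron nudges its weights until it separates points above the line x2 = x1 from points below it. Everything here (the data, the learning rate, the epoch count) is made up for illustration:

        # Toy perceptron: one neuron learning a decision boundary.
        # All numbers are arbitrary; this is only a sketch.
        import random

        def train_perceptron(samples, epochs=50, lr=0.1):
            w = [0.0, 0.0]   # weights for the two inputs
            b = 0.0          # bias
            for _ in range(epochs):
                for (x1, x2), target in samples:
                    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                    err = target - out
                    # "Reconfigure the network": nudge weights to reduce errors
                    w[0] += lr * err * x1
                    w[1] += lr * err * x2
                    b += lr * err
            return w, b

        # Points above the line x2 = x1 are class 1, below are class 0.
        data = []
        for _ in range(200):
            x1, x2 = random.random(), random.random()
            data.append(((x1, x2), 1 if x2 > x1 else 0))

        w, b = train_perceptron(data)
        print(w, b)  # the learned weights approximate the boundary x2 - x1 = 0

    Multiply that single neuron by a few billion and wire the layers together, and those "simple capabilities" stop looking so simple.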

  • I have great respect for Roger Penrose, but I disagree with the need for quantum mechanics to account for the complexity of the brain. When you look at the power of a simple neural network, then multiplex and multiply that power according to the number of cells in the brain, a great amount of power is realized.

    I think a great deal of insight can be found by realizing how the brain works. Imagine your brain isn't really in your body, but connected to a vast computer that is simulating stimuli and reacting to output from your brain (i.e. The Matrix). Your brain acts as a black box to the world. It develops over a number of years through trial & error, testing dozens of different inputs (light, sound, taste, smell, touch, temperature, internal signals, time-domain information, etc.). If anyone were to take the time to build a system they believe to be comparable and let it develop over 20 years -- then see how it compares -- we might be in for a surprise.

    As an inkling of what we might find -- people have developed a robot that can control its robotic arm and "sees" through a video camera. It starts off with a clean slate, but over a duration of hours, it begins to learn that when a certain signal (chosen at random at first) is sent, this "thing" it sees "moves". Eventually, it "realizes" that the "thing" is its "arm". Does it really realize it, or has it just associated certain outputs with certain inputs? Do we realize our arm is ours? (A toy version of this is sketched at the end of this comment.)

    Keep in mind, my daughter is 5 months old and just now realizing her feet are hers -- at least, I suppose that she is realizing it.
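
    A stripped-down fake of that robot-arm experiment fits in a few lines, which also hints at how little machinery "realizing the thing is its arm" might need. This is a made-up sketch (the signals, the noise rate, and the update rule are all invented), but the idea is just correlating motor output with sensory input:

        # Toy "is that thing my arm?" learner; all details are invented.
        import random

        N_SIGNALS = 5          # motor signals the agent can emit
        ARM_SIGNAL = 2         # in this fake world, signal 2 is the one that moves the "blob"
        weights = [0.0] * N_SIGNALS   # learned association: signal -> "that was me"

        for step in range(10000):
            signal = random.randrange(N_SIGNALS)   # babble: emit a random signal
            # The world: the blob moves when the arm signal fires, plus occasional noise.
            blob_moved = (signal == ARM_SIGNAL) or (random.random() < 0.05)
            # Hebbian-style update: strengthen the association when motion co-occurs,
            # weaken it when it doesn't.
            weights[signal] += 0.01 if blob_moved else -0.01

        print(weights)  # weights[ARM_SIGNAL] ends up far larger than the rest

    Whether that counts as "realizing" anything is exactly the question being raised above.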

  • The original submitter doesn't think we can achieve vision or reasoning in a computer.

    How does he think people do it? Does he believe in a soul that handles all of this?

    It's very simple. We're made of matter. Therefore anything we do can be done 'artificially' once we can manipulate matter on a small enough scale.
    _____
  • Computer program or artificial intelligence? Just because something can traverse a neural network of weighted rules doesn't mean that it can learn (i.e., reconfigure its network). Even if you have a system that can "learn" by reconfiguring itself according to some mathematical algorithm, it can't really think upon problems enough to do anything but the most crude "learning". It's nothing more than the Chinese room.
  • The article talks a lot about the exponential growth of technical progress.

    But do we really progress so much faster today than yesterday? And how do we measure this progress and conclude it's exponential?

    Isn't it a bit "crude" to judge our technical progress on the basis of how many transistors we can cram into a small piece of silicon, or how much faster we can take a vacuum cleaner from concept to product because of CAD/CAM? Does that really mean that we are advancing so much faster in technology today?

    The human genome project is used in the article as an evidence of the accelerating progress.
    Let me use an analogy: suppose I lived in the 16th century, were a human calculator named Babbage, got bored with writing calculation tables, and therefore made a machine that could do the necessary calculations 10,000 times faster than a human. Does this mean that technical progress just rose 10,000 points?
    Point being, the genome project finished early because of raw computing power and refined methods. And that is not proof of exponential technical progress!
    /Patrix

    "And if any one should ask me, "Whence dost thou know?" I can answer, "I know, because we measure; nor can we measure things that are not; and things past and future are not." But how do we measure present time, since it hath not space? It is measured while it passeth; but when it shall have passed, it is not measured; for there will not be aught that can be measured. But whence, in what way, and whither doth it pass while it is being measured?"
    http://www.ccel.org/fathers2/NPNF1-01/TOC.htm#TopOfPage
    Chapter XXI.-How Time May Be Measured.

  • "If you'll permit me to play pundit for a second: I think we'll reach these so-called "milestones" that the AI people and the nanotech people keep giving us and realize that while we can manufacture a computer with the MIPS/FLOPS/whatever of a mouse/dog/human brain, we don't have the slightest idea how to string all of that power together to actually perform the operations of the mouse/dog/human brain"

    Yes, this is true. However, I think it's a little unfair to assume that those who introduced Moore's Law as a yardstick for animal-intelligence comparisons (i.e., Kurzweil in his book) are using it as the only basis of their prognostications.

    I do believe you are right that Kurzweil and others are a little out there in terms of being realistic. Humans are notorious optimists. Just because it's in vogue to predict what's going to happen in 2050 doesn't mean we haven't learned from what Turing predicted 50 years before.
  • Do we have to understand it, first?

    I suppose not.

    Now, if you could simulate a consciousness, would it be able to understand itself? If you could figure out how to keep it from making logical errors without eliminating creativity, it would be smarter than people. Trust and ethics are another matter.

  • "but the theoretical advancements have already been made"

    WRONG. Let's say we have infinite computer power, infinite memory, and infinite disk space.

    Do we have the algorithms to create emotion, and therefore general systems to create goals and courses of action? No. A program that iterates through a bunch of weighted goals doesn't have emotion (or at least not enough "emotion" to understand even the most crude positive and negative feedback).

    How would the computer think upon long and short term goals?

    Would a visual subsystem be able to recognize objects in space -- for example, 50 different types of chairs, desks, pens, books, cars, whatever? Nope, because the current systems are completely symbolic. Can such a system understand what an object is for, where it belongs, etc.? Nope. At least not without some sort of language other than machine language, plus inference, statistics, and very simple goal-based reasoning.

    Let's say you teach a computer to recognize keyboards, and then keys, and then specific keys, whatever. How does it recognize different-size keys (for example, I have an MS Internet keyboard with little non-standard keys on top) or specific keys with a different font?

    etc etc.
  • by AFCArchvile ( 221494 ) on Monday November 13, 2000 @06:04AM (#627619)
    This one from Walter, the venerable programmer at Encom, and one of Encom's founders:

    "Oh that'll be great. Computers will start thinking and the people will stop!"/

  • Frankly, I doubt we will ever develop computers with the sophisticated power of even a mouse brain ... And 640k should be enough for anybody, right? :-)
  • We are already able to directly observe brain processes on nearly a molecular level - which is the level where all the work is actually done - and we can even observe the molecules, if we don't worry too much about the subject being alive. :-)

    On the other end, building these bad boys, we can achieve the same effect if we work with matter on a LARGE enough scale. Right now it's an even bet whether the first device to pass the Turing test reliably will be made of lots of very tiny things, or will be gigantic and fill up a warehouse.

    But one way or another, it will be done.
    --

  • A lot of what Kurzweil says is nonsense, but it is derived from ideas that appear a lot more nonsensical than they actually are.

    The idea that progress is going through a sharp turn upward is based not on Kurzweil's reference to the "exponential", a curve that looks basically the same at any scale, but on a more radical mathematical formulation that goes to infinity in finite time -- specifically by Friday, 13 November, A.D. 2026 (give or take). No, this isn't just some New Age eschatology -- it was actually arrived at by looking at historic data and extrapolating into the future.
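
    For reference, the formulation in question is von Foerster's 1960 "doomsday equation," which is hyperbolic rather than exponential. Working from memory, the fitted form was roughly

        N(t) \approx \frac{1.79 \times 10^{11}}{(2026.87 - t)^{0.99}}

    with N(t) the world population in year t. Unlike an exponential, which is finite at every finite time, this blows up as t approaches 2026.87 -- mid-November of 2026. (Treat the constants as approximate; the paper itself is cited in the excerpt below.)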

    Here is an excerpt from "Spasim (1974) The First First-Person-Shooter 3D Multiplayer Networked Game [geocities.com]" that discusses the origin of the Transhumanist conception of "The Singularity":

    They were trying to realize a man-machine cybernetic vision of this magical little gnome named Heinz von Foerster [univie.ac.at] and needed an email system to go along with it.
    ...
    When the semester was over, I threw a few things into my '64 Chevy Impala, and headed east on Interstate 80 across the Illinois border for Urbana and CERL. It was my first paying job as a programmer.

    Arriving at the Mecca of networking and meeting the magical little gnome who founded second order cybernetics [vub.ac.be] (symbolized by the Ouroboros [best.com]) in his Biological Computer Laboratory [uiuc.edu] was an amazing experience.
    ...
    A vital side note: Heinz von Foerster had published a paper in 1960 on global population: von Foerster, H., Mora, M. P., and Amiot, L. W., "Doomsday: Friday, 13 November, A.D. 2026," Science 132, 1291-1295 (1960). In this paper, Heinz shows that the best formula describing population growth over known human history is one that predicts the population will go to infinity on Friday the 13th of November, 2026. As Roger Gregory [thing.de] likes to say, "That's just whacko!" The problem is, after he published the paper, it kept predicting population growth better than the other models. (See section 4.1 "Systems Ecology Notes" [umass.edu].) One of Heinz's early University of Illinois colleagues was Richard Hamming of "Hamming code" fame [fi.edu]. Once while visiting the Naval Postgraduate School, I asked Dr. Hamming what he thought of Heinz von Foerster. Professor Hamming's response was "Heinz von Foerster: Now there's a first class kook!" I suspect Heinz's publication of what Transhumanists [go.com] call "the singularity [go.com]" had really gotten to Hamming -- not that Heinz wasn't eccentric enough to get Hamming's goat in any case. Well, to continue this digression so as to give the damn Transhumanists a much-deserved keyboard lashing: It's one thing to be a guy like Hamming and denounce Heinz as a "kook" for following his formulae where they lead -- it's another to turn Heinz's formulae into a virtual religion, call it "the singularity" and totally forget where the idea came from in the first place. I suggest the Transhumanists cite Heinz in the future whenever they refer to "the singularity" and think about his assumptions -- the primary one being that a society's success varies directly with population size. It might be good to see if his model fits the data subsequent to the last check of which I am aware -- 1973 -- which just happens to be right at the point high-population-density societies decided to abandon their forward progress toward the space frontier.

  • ... about accelerating technology [caltech.edu]. Much better than Joy's whining self-praise.
  • The problem with trying to look for "perception" or "reasoning ability" in computers is a failure of definitions. We really don't know what these things are. They are vague, fuzzy words for "things like we do." As long as they are not tightly defined, the debate over whether or not they can be, or have been, reached is endless and meaningless.

    IMO perception and reasoning are emergent properties of our neural networks. If we look at the research done in neural networks, we see what I believe are simpler, but no less real, properties of a similar kind emerging.

    Let's examine one example: the CMU autonomous driving van. This was a project started under the DARPA Unmanned Combat Vehicles project. (They were trying to build Bolos, for any Keith Laumer fans out there.) This is a van that a neural network can pilot down a lane on a road under a wide variety of driving and visibility conditions, at about 60mph. (A rough sketch of the shape of such a system is at the end of this comment.)

    Some interesting things about this project:
    (1) They did not "program" the van to do this. The researchers had no strategy in mind. They merely taught the neural network with video input and driving control output. It strategized by itself; this is what neural networks do.

    (2) They attempted training under a number of different conditions. When they examined the rules the system had come up with afterward, they found the rules varied widely depending on the training conditions, BUT the performance of any set of rules seemed to be a constant (that 60mph).

    There is an interesting article they wrote called "Exposing the hidden layer" that ran in Byte about 15 years ago.

    So what can we say about the autonomous van? Well, it problem-solved in a creative way, finding its own solutions. It also learned to pick out key visual elements to drive that solution.

    To me, this IS perception and reasoning ability, albeit with a limited scope. I think we tend to mysticize our own abilities way too much. In the end I don't think they are functionally any different, just more complex because of the size and sophistication of our neural nets.
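
    For the curious, the CMU system (ALVINN, if I remember right) was at heart a small feed-forward net mapping a coarse camera image to a steering value. The sketch below is not their code -- the sizes, the synthetic training data, and the use of NumPy are all my own stand-ins -- but it shows the shape of the thing: nobody writes a driving strategy, you just fit image -> steering pairs recorded from a human driver.

        # Not the CMU code: a stand-in sketch of an ALVINN-style steering net.
        # Input: a coarse 30x32 "camera image"; output: one steering value in [-1, 1].
        # Real training data came from recording a human driver; here it is synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hidden = 30 * 32, 5            # tiny hidden layer; size chosen arbitrarily

        W1 = rng.normal(0, 0.01, (n_hidden, n_in))
        b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.01, n_hidden)
        b2 = 0.0

        def forward(x):
            h = np.tanh(W1 @ x + b1)
            return np.tanh(W2 @ h + b2), h

        def fake_frame():
            # Fake "recorded drive": one bright column marks where the road is,
            # paired with the steering a human would have applied.
            img = rng.random(n_in) * 0.1
            road = rng.integers(0, 32)          # road position across the 32 columns
            img[road::32] = 1.0                 # light up that column in every row
            steer = (road - 15.5) / 15.5        # steer toward the road
            return img, steer

        lr = 0.05
        for step in range(5000):
            x, target = fake_frame()
            out, h = forward(x)
            err = out - target
            # Backpropagate through both layers (d/dz tanh(z) = 1 - tanh(z)^2).
            d_out = err * (1 - out ** 2)
            d_h = d_out * W2 * (1 - h ** 2)
            W2 -= lr * d_out * h
            b2 -= lr * d_out
            W1 -= lr * np.outer(d_h, x)
            b1 -= lr * d_h

        x, target = fake_frame()
        print(forward(x)[0], target)  # after training, the prediction roughly tracks the target

    The learned weights end up encoding "where the road is" without anyone ever telling the net what a road looks like -- which is the point being made above.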
  • My expectation is that at some point in the future, somebody (or somebodies) is going to figure out how to let an electronic brain grow/self-assemble, in whatever form is necessary to deal with its environment and (hopefully) to perform some kind of function specified by the creator(s).

    Once you reach this threshold of creation, where the creators don't have to personally design every connection and node being put together in the new brain, then the potential definitely exists for a brain to be assembled which will outstrip our own in pure cognitive ability.

    Furthermore, to borrow the chaos theme from Jurassic Park, once you've unlocked that kind of flexibility, sooner or later somebody is going to screw up or purposefully design a brain which isn't subject to their expected constraints. (Not taking into account the ethical dilemma of deciding whether a brain which is more "sentient" than you are, should be your slave.)

    My personal hope is that before our own creations start their own evolutionary path and leave us in the dust (if they decide we're in the way, kiss your carbon-based ass goodbye...) we come up with the technology necessary to transition our OWN evolution into the new one (so that WE are the seeds for the next evolutionary stage).

    Yeah, it sounds WAY too science fiction -- but what other options are there besides firmly clamping down on the advancement of technology to prevent that kind of cognitive ability from being created? (Somehow, I have a mental image of Frank Herbert's Dune ban on "computing machines".)
  • I meant infinite relative to the resources needed to solve the problem. You're twisting or misunderstanding what I said. The example was for the purpose of illustrating that which we don't understand. His supposition was that we do understand all this stuff and computer power was what was holding us back.
  • And how are you going to absolutely simulate perfect atomic -> molecular -> macromolecular -> organelle -> cellular behavior? How is even a single cell going to behave similarly unless it has a near-identical environment to work in?
  • This is exactly the basis for the book "The Society of Mind" by Marvin Minsky.

    Consciousness emerges and control becomes semi-autonomous when the complexity of a system becomes otherwise unwieldy and unmanageable.
  • I don't believe consciousness is anything special. It's just the superposition of hundreds or thousands of neural networks all working together.

    You know, I used to think that same thing, but lately I'm not so sure. The more I think about it, the more there seems to be a limit to how much can just happen without intent. When you get down to the pre-big-bang singularity, or the combination of conditions required for intelligent life, you start wondering if the laws of physics are really everything. And after that you start wondering where the laws of physics came from...

    Is being conscious really that simple, or is it just too complicated for us to understand with the available information? If that's the case, then that explains why we try to oversimplify it...
  • by onion2k ( 203094 ) on Monday November 13, 2000 @06:08AM (#627643) Homepage
    All this would be done automatically, effortlessly, without human hands or labor, by a fleet of tiny, invisible robots

    That's funny, that's exactly how my boss thinks work gets done too.
  • If the rich all become immortal, it won't matter if the estate tax gets repealed or not.

    --
  • As a suggestion for how to measure progress, consider how long it would take for someone from 100 years ago to adapt to and accept the modern world.

    So say we consider how comfortable someone from 100 years ago would feel after 5 years in the modern world. Now we ask: if we took someone from 900 CE and dropped them into 1000 CE, would it take more or less than 5 years to reach that same level of acceptance? Admittedly, it's really crude, but it does give a qualitative measure of the rate of progress.

    But in general, the argument for the singularity (which, the argument goes, is the inevitable outcome if progress keeps accelerating) is that, in the computer field, progress is limited by the tools you are using. These tools are limited because they were the limit of what could be made with last year's tools. An easy analogy is that it's much easier to write a really advanced IDE inside of Visual Studio than it is to write a really advanced IDE in Notepad in assembly language. So as the tools advance, the next set of tools can become even more advanced, and those tools determine the progress of all other aspects of technology as well.

    It's not just compiler writing, of course. Better industrial robots allow you to build even better industrial robots, and lots of computer power lets you use really computationally intensive techniques to build the next batch of processors.

    I personally doubt the singularity hypothesis because I think there are constraints on progress that are invariant. For example, finite energy constraints, finite limits on the speed of communication, resource limits, and most importantly, the limitation of how fast people can adapt.

    "Exponential progress" is not just because that's what's observed. It's because any time the value of the next time step is an increasing function of the value of the previous time step, you have an exponential process. Do you think technological progress is dependent on our current level of technology (i.e. is it faster than the middle ages)? If so, technological progress is exponential.
  • How about computer programs and devices that will act to make you "superhuman"? That is, agents with current AI and a good human interface which will act to enhance human memory and task proficiency, and as a result abstract thinking?

    It's always been supposed that we were all going to stop working when robots could take over all our tasks -- but isn't the truth that we just shift to other tasks that computers aren't good at?

    Even if we produce robots that are superior for job tasks, guess how much it would cost to manufacture and maintain a robot to act as a janitor or fast-food clerk, compared to a human's minimum wage? :-)
  • A mother raising children is not considered "a worker." She is treated as if she has no input or productivity to contribute to the "real economy".

    A mother raising children is not a "worker" as pertains to the economy. She impacts the economy in a variety of ways, and may be quite important to society as a whole and/or to the quality of the children she raises. But she does not produce goods and, in terms of the economy, is therefore not "productive."

  • I believe that things like perception and reasoning are beyond the scope of raw power. But it's a fun read anyway.

    Perception can be achieved when we understand it.

    I remember an article about a simulated mouse brain that could recognize words spoken by many different voices. The article pointed to a site where some nutty professor had made a little puzzle to annoy his peers rather than publish a paper explaining his results. It seemed promising.

    I can't find that article now, so I may have just dreamed it.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...