What Computers Really Can't Do

A renowned computer scientist punctures some of the arrogance and hype surrounding computing and details some of the many computational and other problems computers can't solve. After years of rising expectations, the public expects computers to reverse aging, solve the most complex problems, and restore the ozone layer. So do many computer scientists, says the author of "Computers Ltd.: What They Really Can't Do." It's a good question: what can't computers do? Jump in.
Computers Ltd.: What They Really Can't Do
author David Harel
pages 221
publisher Oxford University Press
rating 7/10
reviewer Jon Katz
ISBN 0-19-850555-8
summary the limits of computing


What can't computers do? Why don't we hear more about their limitations, along with the mushroom clouds of hype about their limitless capabilities? By now, the public might well expect computing to restore the environment, cure cancer, prolong life and reason through the world's most complex and intractable problems.

Not so fast.

The good news, writes author David Harel in his new book, "Computers Ltd.: What They Really Can't Do," from Oxford University Press, is that computers are indeed incredible, capable of amazing feats.

The bad news is that they also face major problems, serious limitations on what they can ever be expected to accomplish, and that few people, even those with advanced computer science degrees, really grasp that there are fundamental barriers no amount of hardware, software, brainpower or money can ever overcome.

Harel explores the boundaries of computable and noncomputable problems, and finds a lot to be pessimistic about. "...our hopes for computer omnipotence are shattered. We now know that not all algorithmic problems are solvable by computers, even with unlimited access to resources like time and memory space." In fact, he adds, problems relating to computer programs, particularly running time and memory space -- he calls these difficulties computational complexity -- severely limit just how much computers will ever be able to do.

Harel, who is dean of mathematics and computer science at Israel's Weizmann Institute of Science, may have written one of the first books in recent memory that focuses on the limits of computers. For a community grown understandably arrogant after years of hubris and hype, this is probably a much-needed dose of reality. Why focus on the negative? the author asks. His answer:

1. To satisfy intellectual curiosity. Computer scientists need to know what can be computed and what can't.

2. To discourage futility. Computer experts who tackle problems that are simply insoluble need to stop wasting their time.

3. To encourage the development of new paradigms. Many of the most exciting areas of computer science research -- including parallelism, randomization, and quantum and molecular computing -- would not be advancing at their current speeds if it weren't for increased understanding of what computers can't accomplish.

4. To make possible the otherwise impossible. (The author saves much of the answer to what might be possible as a surprise in the book, so I can't give it away here.)

Harel acknowledges that our society could barely function without computers. But he warns against the widespread mythology that they will be able to do almost anything we can think up.

Typically, Harel writes, when people have problems making computers do what they want them to do, their excuses fall into three categories: more money would buy larger, more sophisticated computers; being younger would permit us to wait longer for time-consuming programs to terminate; being smarter could lead us to solutions we don't currently seem able to find.

But the truth is that computers are simply not up to solving many complex problems. Harel raises, then mostly sidesteps, the debate over whether computers can be endowed with human-like intelligence. "In its wake," he writes, "a host of questions arise concerning the limits of computation, such as whether computers can run companies, carry out medical diagnoses, compose music or fall in love."

For non-techs, this book is on a pretty high plane. Even with Harel's impeccable credentials and engaging writing style, plenty of concepts are rough for someone who's not a programmer or computer scientist to grasp, especially when he gets to tiling and algorithms.

But the question is significant. The limitless potential power of computing has all kinds of implications for technology, education, culture and politics. We do need to know more about what's realistic. This splash of cold water is welcome, and more than a little shocking.

Purchase this at ThinkGeek.

  • by SEWilco ( 27983 ) on Saturday January 15, 2000 @06:41AM (#1368959) Journal
    My favorite is those who want to eliminate animal testing by instead using computer simulation.

    Flip open any biology or medical publication and see how many details of biology are still being discovered, thus couldn't be simulated even if you had a computer powerful enough for the job.

  • Has anyone a little more sophisticated than Katz read this book? Is this just another rehash of decidability and intractability? Or is there something new here?
  • Computers are just simple Turing machines. This means that everything they do is utterly predictable. The very essence of being conscious is an ability to behave in a random fashion, also known as free will. Computers will never have free will and will never be conscious, not in their present Turing machine form, anyway.

    It is for the best, anyway. I don't want to be superseded mentally and made redundant, like the industrial revolution made my muscles redundant. So I am very glad conscious computers are impossible. It would be dangerous for us if they were.

  • Why doesn't this review tell us at least one thing that computers can't do? Is this a P-time vs NP-time maths book? Is it a social problems book? What? Just tell us!
  • by AstynaxX ( 217139 ) on Saturday January 15, 2000 @06:45AM (#1368964) Homepage
    Where did this idea come from, precisely? Maybe I don't read the same books, see the same movies, etc., but I've never seen computers portrayed as all-knowing and/or all-powerful. From Star Trek to The Matrix, even the most advanced computers seem to need human intervention to function and/or are vulnerable to human sabotage and control. I really don't see where the author, or Katz, came up with this idea.

    -={(Astynax)}=-
  • Is it just me, or did he just tell us, "Here's this book ... I won't tell you what's in it, but you can go buy it and find out." Even if you didn't want to ruin the book's 'surprise', you could at least tell us whether it's worth reading. What we ended up getting here was a question suited for 'ask slashdot', along with an attached advertisement.
  • The limitless potential power of computing has all kinds of implications for technology, education, culture and politics. We do need to know more about what's realistic. This splash of cold water is welcome, and more than a little shocking.

    Apparently it's only shocking to Katz and to other true believers of the one faith of computers©. Anybody involved with computers who has any semblance of sanity realizes that computers are not capable of solving every problem/question humanity has ever formulated. And the chances are that they never will. Even people that are into far-future sci-fi style writing usually keep a realistic stance about such things. Dan Simmons, a great author of many styles, wrote of a future with AI (autonomous intelligences, not artificial) computer units that were so intellectually superior to humans that most humans could not even fathom the depths of their 'minds', yet even these great beings couldn't answer some of the most fundamental of questions. Who are we? Why are we here? Who else is out there? How do we do ...?

    The whole premise of pure (and completely unfounded) belief in the abilities of machines is just as laughable as any religiously clung to belief. If you believe without question, then you lose your ability to see reality. If Katz sees this as a splash of cold water, then perhaps he needs to regain some perspective.

    BTW, has anyone else noticed that Katz has shifted gears over the past few weeks from the "computer people are the smartest, bestest, wonderfullest, most misunderstood" to "computers suck, and they are damaging our society beyond repair"? I wonder if he just had a major system crash a few weeks ago?

  • by AstynaxX ( 217139 ) on Saturday January 15, 2000 @06:51AM (#1368969) Homepage
    Well, some would argue that people are also just automata who respond in a predictable manner, if you know everything about their life from conception till the moment of the action in question. The issue is that humans are so complex, and have such a complex web of influences and forces, that the human mind cannot reliably predict what another human may do. In some sense humans, it could be argued, are pseudo-random: we are predictable, just not to any intellect we have yet spawned or encountered.

    -={(Astynax)}=-
  • by HiQ ( 159108 ) on Saturday January 15, 2000 @06:52AM (#1368970)
    To discourage futility. Computer experts who tackle problems that are simply insoluble need to stop wasting their time.

    So now this Harel decides that a problem is insoluble? If a team of researchers tries to solve a problem, should they stop because Harel says it can't be done? Who does this guy think he is, the all-knowing deus? Isn't it so that the effort to solve a problem can yield other results? Isn't that what science is about?


    How to make a sig
    without having an idea
  • by PureFiction ( 10256 ) on Saturday January 15, 2000 @06:53AM (#1368971)
    Let's assume that the current trend of rapid increases in computing power continues for a decade or two.

    The most interesting problems crunched on today (IMHO) with computers are simulation and complex problem solving. The latter meaning various algorithms for finding optimal solutions to combinatorial problems.

    Simulation meaning the ability to predict the behavior of physical structures, chemicals, processes, etc.

    Combinatorial optimization meaning solving the travelling salesman problem, design (VLSI, chemical engineering, etc.) using algorithms such as simulated annealing, genetic or evolutionary algorithms, neural networks, etc.

    These types of processing will continue to grow in power and flexibility to a point where we can design incredibly complex systems entirely in silico.

    Once this is accomplished, the majority of human 'work' will consist of manual labor or 'creative' tasks. The engineering types of processes -- VLSI, CAD/CAM, structure design -- will be crunched out by computers at a fraction of the cost, using incredibly powerful evolutionary processes to find solutions no human could dream of.

    This is already happening in quite a few fields of expertise.

    Thus, we will be the eternal dreamers, searching for the endless areas of which to apply our computing power, and provide direction for its use. The rest will be done by the black box brutes.

    At least, that's my opinion... ;)
  • Things I thought I wouldn't have to bother with any longer, thanks to computers:

    Washing the dishes

    Taking out the garbage

    Cooking

    Going to the bathroom

    Eating

    Breathing

    Drinking

    Dying

    This article really ticks me off!

  • Except no one, in this generation at least, is saying anything of the kind. Whereas we have Katz and countless other people on Slashdot saying that they can, are, or will, frequently and to varying degrees. Not everyone on Slashdot is either an engineer or a programmer. In fact, I'd wager that the vast majority of frequent readers are between the ages of 16 and 20...those who generally don't have much professional experience.
  • Sure they can. Go into your preferences menu, and check the box marked "JonKatz". You will never see one of his stories on Slashdot again as long as you're logged in. Stop whining.
  • by Christopher Thomas ( 11717 ) on Saturday January 15, 2000 @06:56AM (#1368977)
    Computers are just simple Turing machines. This means that everything they do is utterly predictable. The very essence of being conscious is an ability to behave in a random fashion, also known as free will.

    Devil's advocate time:

    Prove this.

    As far as I can see, a human mind is indistinguishable in practice from a very large deterministic system in a chaotic environment. Apparently nondeterministic actions are adequately explained by strong sensitivity to input and the chaotic, effectively unpredictable nature of this input.

    So, rather than making a blanket statement that the human mind can't be emulated by a deterministic machine, you're going to have to prove that it isn't already one :).

    I'm using "human mind" instead of "conscious mind" above because you're going to have one hell of a time defining "consciousness".
  • Get me a date.

    ----
    Ray, when someone asks you if you're a god, you say yes.
  • At least I know the author. David Harel wrote Algorithmics: the Spirit of Computing [awl.com] and I really liked that book. I read it in the first semester of my first CS year, so that's the level you should think of.

    It explains what algorithms are, what complexity and the "big-O" notation are, and has a good discussion of P vs NP, and decidability.

    Given this background, I suppose this book also covers the "computers can't do everything" from that angle.

  • by Slef ( 8700 ) on Saturday January 15, 2000 @06:58AM (#1368980)
    Harel, [...], may have written one of the first books in recent memory that focuses on the limits of computers.

    Search on Amazon.com (or others) for books on "Complexity Theory" or "Theory of Computation". I get 277 hits.

    We now know that not all algorithmic problems are solvable by computers, ...

    Now? This has been known for 50 years (The halting problem, etc). The book might be very good, but please don't make it sound like this is news.
  • by Illserve ( 56215 ) on Saturday January 15, 2000 @06:58AM (#1368981)
    I'd probably disagree with most of this book. There's no reason that even a Turing machine couldn't simulate a problem solving device as complex as the human brain, provided you'd figured out all of the physiological properties that contribute to intelligence.

    But even before that goal is reached, computers are going to go a very long way in enhancing our own intelligence and problem solving capabilities. Hell they already have.

    Another point is that the solution to some of these problems may not take the form this guy expects. We could change the laws of physics by building a virtual reality indistinguishable from reality, putting everyone into it and then changing the rules.

    Computers are tools and they will solve whatever problems we tell them to, eventually.
  • by funkman ( 13736 ) on Saturday January 15, 2000 @07:00AM (#1368983)

    2. To discourage futility. Computer experts who tackle problems that are simply insoluble need to stop wasting their time.

    4. To make possible the otherwise impossible.

    Computers are unable to interpret English to discover typos in words spelled correctly. Forget the unsolvable problems, I prefer insoluble ones more. They go so much better with tea.

    They can't make the author appear smarter either. First I will state computers cannot solve a problem, then I will say I will use computers to solve problems which were once impossible.

  • by American AC in Paris ( 230456 ) on Saturday January 15, 2000 @07:05AM (#1368985) Homepage
    "...our hopes for computer omnipotence are shattered. We now know that not all algorithmic problems are solvable by computers, even with unlimited access to resources like time and memory space."

    Huh. That really flies in the face of what we thought about the power of computers back...when? Circa Fritz Lang's Metropolis [imdb.com]?

    Perhaps the above should read:

    "My hopes for computer omnipotence are shattered. I now know that not all algorithmic problems are solvable by computers, now that I've read through decades' worth of essays written by some of the greatest computer scientists ever to live."

    information wants to be expensive...nothing is so valuable as the right information at the right time.

  • From what I understand through JK's description, Mr Harel is probably talking about the NP-hard problems, i.e. problems which take exponential time to solve (exponential being related to their "size", e.g. solving the travelling salesman problem [bcit.bc.ca] for N cities takes k*exp(N) steps).

    Although those problems are effectively unsolvable through the classical, algorithmic way, quite a lot of them can be solved using the most recent AI techniques -- the drawback being that the solution is not 100% guaranteed optimal. Genetic Algorithms [?] [everything2.org], for example, are among the most powerful optimization tools that ever came out of AI. They can deal with the travelling salesman problem (see one version here), just as well as other techniques such as "Ant colonies [susx.ac.uk]".

    Furthermore, complexity theory (which deals with "computability") only holds for Turing machines. DNA / quantum computers do not fall in the "NP-cursed" category of computers.

    Mr Harel's thoughts, while being perfectly sensible as far as his own field is concerned (Turing-like algorithmics), should not be taken as holy scripture. Digital computers are only a couple of decades old. It took thousands of years to fully exploit the power of the steam engine. We can try to imagine what "computers" will be like 30 years from now, but expecting such a forecast to be accurate would be foolish.

    Thomas Miconi
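    The Genetic Algorithms mentioned above are easy to demonstrate on a toy problem. Below is a minimal, illustrative Python sketch (all names and parameters are my own, not from the book or the comment): it evolves bit-strings toward all-ones, the standard "OneMax" warm-up, using binary tournament selection, one-point crossover, bit-flip mutation, and one elite survivor per generation.

```python
import random

def onemax_ga(n_bits=30, pop_size=40, gens=60, mut=0.02, seed=1):
    """Tiny genetic algorithm maximizing the number of 1-bits."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit-list = count of ones
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = [max(pop, key=fitness)[:]]  # elitism: best individual survives unchanged
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mut) for bit in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = onemax_ga()
print(sum(best))  # close to the optimum of 30
```

    As the poster notes, the result is not guaranteed optimal; the population merely tends toward good solutions.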
  • The study of determining which classes of problems are insoluble is an area of research in and of itself. Rather than inhibiting research, it enhances it by eliminating endless research into intractable areas, letting time be better spent on more fruitful pursuits.

    It's been a few years since I studied it, but what it comes down to is that certain classes of problems can be mathematically proven to be of a certain level of difficulty. If you can prove mathematically that the problem you're working on is equivalent to a certain class of known problems, you can deduce information about decidability, tractability, etc.

    Wow. That made my head hurt just thinking about that class ;-)
  • by istartedi ( 132515 ) on Saturday January 15, 2000 @07:14AM (#1368996) Journal

    Computers are just simple turing machines. This means that everything they do is utterly predictable.

    Unless you have a Real Random Number Generator (RRNG) card plugged in.

    The very essence of being conscious is an ability to behave in a random fashion, also known as free will.

    Oh no! My RRNG has consciousness. What's more, it has free will! Even more disturbing is that this means dice are sentient while being rolled. To not roll the dice is cruelty. I heard that there are dice in Las Vegas not being used now. I urge all of you to go to Vegas and shoot craps all day and all night. I urge that we form a society for the prevention of cruelty to dice.

    Computers will never have free will and will never be conscious, not in their present Turing Machine form, anyway.

    The really scary thing is that nobody can prove that the brain isn't just a sophisticated neural network. Maybe consciousness is an illusion. To believe otherwise is, at this point, a matter of faith.

    It is for the best, anyway. I don't want to be superceded mentally and made redundant, like the industrial revolution made my muscles redundant.

    So, you don't want to be promoted to mid-level management?

    So I am very glad conscious computers are impossible. It would be dangerous for us if they were.

    Can you prove either of these statements? The current state of computers is not proof; neither are any Hollywood movies where intelligent computers take over the world.

  • by Anonymous Coward
    Pay attention to the second definition here...

    insoluble (ĭn-sŏl′yə-bəl)
    adj.

    1. Abbr. insol. That cannot be dissolved: insoluble matter.
    2. Difficult or impossible to solve or explain; insolvable: insoluble riddles.
  • You're hanging around the right people then...

    In my tech company, the question everyone in the networking and development department has to answer on a daily basis is "Why can't we do that? We have computers, don't we?"

    The people who don't understand what computers can and can't do fall into two categories. The first is techies who believe that anything is possible, given enough development time. The second is any given company's sales force, who see computers as magical creatures similar to unicorns that shit money.

    Example: I used to work at an ISP that fancied itself a web design firm. We actually had some capital and a good community reputation to work with, so when we started offering IBM Net.Commerce based e-sales solutions, we got some good bites. After the first few bites, however, our sales force started selling the Net.Commerce package as an end-all-be-all solution.

    "You want to catalog and sell each of the 100,000 bolts and nuts your company manufactures? No Problem! You want to do it for under $5000 dollars with two Photoshop hackers and a single developer? No Problem! We have computers to do all this stuff for us, right?"

    </rant>
  • by Anonymous Coward

    Of course you could edit your preferences [slashdot.org] and rid yourself of katz .. but that has been mentioned already.

    Here's a better solution: if you run Unix, edit "/etc/hosts" and add this line:

    127.0.0.1 slashdot.org


    If you run windows, simply get a large pair of all metal scissors and cut the power cord to your computer. The shock should hopefully kill you, and if not, your computer will be disabled. Thereby protecting you from katz. (unless he shows up at your house.)

  • (Haven't read it yet, BTW)

    People trying to make a point often seem to invent a "prevailing opinion" to argue against. I don't think that many people really think computers are omnipotent. Good idea for a book though.

    Another good reason for tackling this point is that understanding what computers *aren't* highlights some really odd things about what minds *are*.

    Books like this one, "The Emperor's New Mind" and "Gödel, Escher, Bach" do seem to imply some truly weird things about the capabilities of human brains.

    Incidentally, does anyone know of any research into analogue computing approaches to artificial intelligence? It seems fairly clear from the maths that nothing which is limited to carrying out tasks a Turing machine could perform will ever shed that much light on the nature of the mind.

  • by QuantumG ( 50515 ) <qg@biodome.org> on Saturday January 15, 2000 @07:16AM (#1369003) Homepage Journal
    I was at a reverse engineering conference and heard a lawyer stand up and talk about what is legitimate research and what is not, based on some recently passed laws. Quite oblivious to what he was saying, he uttered "... you should stop research ...", the most taboo words you can utter at a conference. Often you will hear people get up and say "you can't do that, it's impossible!" and the person who is there to present research into the problem will quietly and smugly ask "oh, do you think we should stop trying?", knowing full well that the response will come back in the negative. It doesn't matter if something is declared impossible. If someone thinks they have figured out a way to achieve even a partial solution to an interesting problem, then it should be investigated.
  • Ooh... ooh... I know! They can't solve the halting problem [maine.edu]! Do I get participation marks?
  • by yellowstone ( 62484 ) on Saturday January 15, 2000 @07:18AM (#1369006) Homepage Journal
    Some quotes on the subject, by people more eloquent than me:
    When a distinguished, but elderly scientist states that something is possible, he is almost certainly right.

    When he states that something is impossible, he is very probably wrong.

    -- Arthur C. Clarke's First Law
    The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
    -George Bernard Shaw
    I can't speak for other /.-ers, but I'm not really interested in people who want to talk about what can't be done...

    -y

  • Renowned mathematician and TRUE father of modern computing, Alan Turing, proved that there was a very definite limit to what could be done on a computer.

    Any operation that CAN be done is called a "computable problem" (great surprise, that one!), and ANY computation device with sufficient time and memory can solve ANY computable problem.

    The "classic example" that university textbooks the world over still use to this day is the infamous "halting problem". Can you write a program that, given ANY code, determines if it'll ever halt?

    The answer is no. You can't. You =can= write programs that'll work for a =range= of programs. (It takes no great feat to write a program that'll check "Hello World".) But a generic program is impossible.

    (The proof of that involves feeding the program to itself. Since knowing whether it'll ever stop is dependent on knowing whether it'll ever stop, you have an infinite loop. The computer's molecules will decay long before it ever gives an answer.)

    One of the great challenges facing "hard" AI scientists is this: if the human brain is a computational device, is reverse-engineering consciousness a computable problem? If not, then (by definition) the scientists can't do it.
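    The point that a checker can work for a =range= of programs, just not for all of them, can be made concrete. Here is a minimal, hypothetical step-budget checker in Python (illustrative code, not from the book): it answers True when the function halts within the budget, and None ("don't know") otherwise -- which is exactly as far as Turing's result allows any such tool to go.

```python
import sys

class Budget(Exception):
    pass

def halts_within(f, steps):
    """Partial halting checker: run f() under a trace-event budget.
    Returns True if f halted within the budget, None if we gave up.
    A general halts() covering ALL programs cannot exist (Turing)."""
    count = 0
    def tracer(frame, event, arg):
        nonlocal count
        count += 1
        if count > steps:
            raise Budget  # abort the traced code once the budget is spent
        return tracer
    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        f()
        return True
    except Budget:
        return None
    finally:
        sys.settrace(old)

def finishes():
    sum(range(100))

def loops():
    while True:
        pass

print(halts_within(finishes, 10_000))  # True
print(halts_within(loops, 10_000))     # None
```

    The "don't know" answer is the honest one: a budget of a trillion steps would change which programs get classified, but never eliminate the third verdict.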

  • They are equivalent because they can do the same things, but they are not the same. No one can build a Turing machine, because it requires an infinite amount of tape.
  • by FallLine ( 12211 ) on Saturday January 15, 2000 @07:25AM (#1369016)
    Warning: Some might consider this flame bait. Caveat Emptor.

    Katz is a hack. He doesn't really care to think or challenge anything. All he wants to do is make a name for himself, sell "books", etc. To do that, when you have mediocre skills and limited intelligence, you must find your niche. Katz does this by being the loudest voice in the herd.

    When computers are "hot", he'll be their greatest cheerleader. When the internet is hot, he'll be there too. But when the Dot Coms start crashing, and there is a large sentiment that he can cash in on AGAINST it, he'll be there just as quickly. Never mind consistency. Just read his stuff over the past couple of years.

    I see Katz as a Clintonesque figure, albeit, without the charisma, intelligence, etc...always holding his finger out to the wind of public opinion or, rather, his niche audience of teenage "geeks".
  • Computers can't do book reviews.
    Jon Katz can't do book reviews.
    Jon Katz is a computer.
    QED.
  • Computers are just simple Turing machines. This means that everything they do is utterly predictable. The very essence of being conscious is an ability to behave in a random fashion, also known as free will.

    First, it's not at all clear that free will exists, or that human behavior isn't completely deterministic. Second, if randomness is all that's needed, I already have a /dev/random; if that's not random enough, a simple particle counter or other device can be added to the system.

    If human level intelligence can exist in 1500cc's of fatty meat, I don't see any reason why it couldn't eventually exist in some sort of other computer as well. I suspect, though, that we might not ever be able to actually create such an intelligence through programming, because our own ability to understand how our intelligence works is limited. However, we may be able to evolve one using genetic algorithms - a sort of natural selection applied to algorithms.

    Tom Swiss | the infamous tms | http://www.infamous.net/

  • I'm not saying it is or isn't. What I said, or meant to imply, was that no one in this day and age is saying that cars are the cure to all the ills of the world, while many still are hyping computers up to be just that. A lot of this applies to slashdot too.
  • ...that I have an infinite amount of tape right here in my pocket.
  • Apparently it's only shocking to Katz and to other true believers of the one faith of computers©. Anybody involved with computers that has any semblance of sanity realizes that computers are not capable of solving every problem/question humanity has ever formulated.

    There are more of them than you think. How about the people who worship the Great Oracle of Weather Prediction to "prove" global warming? I got into an argument on this very topic on this very site. It's incredible how much faith people put into weather simulations that try and predict the trends 50 years into the future, yet these models cannot predict the weather more than one day in advance.

    I think in a lot of ways this guy is dead-on. The best way to get government grant money is to write a program that "proves" doomsday.


    --

  • Basic Chaos Theory [?] [everything2.org].



    The very essence of being conscious is an ability to behave in a random fashion, also known as free will

    There's no such thing as true randomness on the non-quantum scale. What we call "free will" is the result of a (highly structured) bunch of intercommunicating neurons. While the process of decision (i.e. will) remains one of the darkest parts of the Neurosciences realm, we already have enough clues to figure out where we should look (can you say "basal ganglia"?).


    Thomas Miconi
  • by falloutboy ( 150069 ) on Saturday January 15, 2000 @07:37AM (#1369031)
    Has anyone a little more sophisticated than Katz read this book? Is this just another rehash of decidability and intractability? Or is there something new here?

    Has a slashdot user bashed Katz? Is this just another rehash of decidability and intractability? Or is there something new here?

    My mommy always said, if you don't have anything nice to say, STFU.

  • Quantum mechanics is non-deterministic on the micro-scale. It had better be deterministic on the macro-scale, or the computer I'm sitting at might start floating away, whilst three dozen of me all try and grab the last apple, which is turning into a penguin.

    (Douglas Adams might not be a top scientist, but his description of the Improbability effects is a lot like the Quantum world.)

    IMHO, whilst Quantum Computing is non-deterministic at any given instant, you can't avoid the breakdown into a deterministic state (and therefore Turing logic) any time you try to do anything.

  • Not the travelling salesman problem again.

    The TSP is one of a sizable class of NP-hard search problems for which the optimal solution in the worst case is very hard, but a near-optimal solution in almost all cases is easy. It doesn't take "modern AI", either, just an algorithm with a random component discovered at Bell Labs in the 1960s.

    For those of you who care, here's how you solve the TSP:

    • 1. Create some path that connects all the nodes.
    • 2. Cut the path at two randomly-chosen links, creating three segments. Reassemble the three segments in all possible ways, and keep the shortest path produced.
    • 3. Repeat step 2 until no improvement is observed for a while.

    This is quite fast. The TSP for 50 nodes can be solved on a 6MHz PC/AT in less than a second. (It's been a while since I ran that program.)

    The random component makes it impossible to create a pathological case for which the algorithm makes repeated bad choices. This is a case where indeterminism beats determinism.
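
    The three-step recipe above is easy to sketch in code. Here is a minimal Python version of it (the distance-matrix representation, the stopping rule, and the function names are my own assumptions, not details from the Bell Labs original):

```python
import random
import math

def tour_length(tour, dist):
    # Total length of the closed tour under the distance matrix `dist`.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def improve_once(tour, dist):
    # Step 2: cut the tour at two random links (giving three segments),
    # reassemble the segments every possible way, keep the shortest result.
    n = len(tour)
    i, j = sorted(random.sample(range(1, n), 2))
    a, b, c = tour[:i], tour[i:j], tour[j:]
    candidates = []
    for seg_b in (b, b[::-1]):
        for seg_c in (c, c[::-1]):
            candidates.append(a + seg_b + seg_c)
            candidates.append(a + seg_c + seg_b)
    return min(candidates, key=lambda t: tour_length(t, dist))

def solve_tsp(dist, patience=200):
    # Step 1: start with any tour that visits every node once.
    tour = list(range(len(dist)))
    best = tour_length(tour, dist)
    stale = 0
    # Step 3: repeat until no improvement is observed for a while.
    while stale < patience:
        tour2 = improve_once(tour, dist)
        cost2 = tour_length(tour2, dist)
        if cost2 < best - 1e-12:
            tour, best, stale = tour2, cost2, 0
        else:
            stale += 1
    return tour, best
```

    Since the candidate list always includes the uncut original, the tour never gets worse; the randomness only decides where to try cutting next.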

  • by evilandi ( 2800 ) <andrew@aoakley.com> on Saturday January 15, 2000 @07:43AM (#1369039) Homepage
    SEWilco wrote: My favorite is those who want to eliminate animal testing by instead using computer simulation. Flip open any biology or medical publication and see how many details of biology are still being discovered, thus couldn't be simulated even if you had a computer powerful enough for the job.

    Some moderator marked the above as flamebait. That's bollocks. This is a highly valid point and totally on-topic for the subject of "what computers can't do".

    "Contentious" does NOT equal flamebait. Stuff like that NEEDS to be discussed. We can't just pretend a subject will go away just because some people feel passionately about it.

    SEWilco is quite right. You can't model what you don't know.

    In addition, computers require absolute parameters. Not only can you not model what you don't know, but you can't do worthwhile simulations (ie. those used for human life or death decisions) based on educated guesses.

    I only have respect for anti-vivisectionists who are vegans- not only in diet, but in clothes, tools, furniture and cosmetics too. Either animals are something we eat, or something we don't. Any half-way stance is hypocritical.

    Since there is absolutely no chance of any nation enforcing veganism on its population, anti-vivisectionism is ultimately futile.

    What I personally feel doesn't come in to it. There is no point arguing for a law if it will never get voted in or be enforced.

    --

  • If you want to dig a little deeper into the sociological aspects of these subjects than Harel's book reaches, you may want to read Clifford Stoll's "Digital Snake Oil".
    Actually, it's Silicon Snake Oil. He has another, newer book along the same line called High-Tech Heretic. I think he goes a little far in his criticism, but it's a good counterbalance to the hype, especially about the use of computers in schools.

    Tom Swiss | the infamous tms | http://www.infamous.net/

  • I read Stoll's book a month ago; here's my capsule review:
    A remarkably repetitive book that would have been good at the length of, say, 50 pages or so, but instead is padded out to 239 pages. Stoll's basic points are nothing novel: 1) Information is not wisdom. 2) Computers are expensive and get outdated really quickly. 3) Because you're on the computer right now, you're not doing some other activity that Stoll considers more worthy. There are a few amusing anecdotes scattered through the text, but mostly this book is one long complaint; at times justifiable, at times not, but irritating throughout.
  • Flamebait == does not fit in with the majority opinion. You know, things like abolishing slavery, that was flamebait.
  • "Computers are just simple turing machines. This means that everything they do is utterly predictable."

    You obviously don't run Windows on your PC.

    Sorry, couldn't resist.
  • <i>It took thousands of years to fully exploit the power of the steam engine.</i>

    The steam engine is only a few hundred years old, and the development of the first practical steam engines (Newcomen's atmospheric engine, later improved by James Watt) kicked off the Industrial Revolution. If you date computers to WW-II, the steam engine is only about three times older than computers.

    If you date computers to Charles Babbage, which is not entirely unreasonable, then computers and steam engines are nearly the same age!

    While it's true that a steam-driven novelty was known in classical times, it was not an engine capable of doing practical work. While a hollow sphere with directed vents will spin when heated by an external flame, it doesn't generate much usable power.

    In contrast, a "steam engine" works by filling a sealed chamber with steam, then rapidly cooling it, causing the steam to condense and the external air pressure to move a piston. This requires good metallurgy (so the chamber doesn't collapse), tight manufacturing tolerances (so the piston will slide, but not let air leak around it), and a dozen other things to keep it from seizing up within hours. Calling the classical toy a "steam engine" is comparable to calling your walkman -- no, your cd-player -- a Cray supercomputer because both contain silicon-based circuitry.
  • You sound like you hold the same viewpoint as Roger Penrose, famed mathematician and author of The Emperor's New Mind (which is probably a lot better than this book). Nonetheless, quantum computing offers an answer to all your criticisms of computers as conscious machines.

    Quantum computing introduces true randomness, non-determinacy, and other strange things into computing. It's hard to imagine how it would not be possible to build a conscious quantum computer (theoretically, that is).
  • (The proof of that involves feeding the program itself. Since knowing whether it'll ever stop is dependent on knowing whether it'll ever stop, you have an infinite loop. The computer's molecules will decay long before it ever gives an answer).

    Just to pick nits, that's simplified to the point of incorrectness, and since Turing's proof is simple enough to describe in a paragraph, there's no reason to oversimplify.

    The proof involves assuming that a program exists that can decide whether, given any program and some input for it, that program will halt. Supposing we have this magical program (call it M for Magical), Turing then proceeds to show a contradiction. He does this by constructing another program that will loop forever if and only if it's fed a program that M says will halt. Then he feeds that machine to itself, which yields a paradox because if the machine halts when fed itself, then it will loop forever -- but wait! How can it both halt *and* loop forever? This is impossible. So, by contradiction, the original assumption (that M exists) is false. It's a really cool argument because it's very simple but really brain-twisting. A quick google search turned up this [netaxs.com] if you're interested in a better explanation.
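
    For the curious, the argument sketches neatly in code. In this toy rendering (my own, not Turing's notation), `halts` stands in for the hypothetical machine M, which the proof shows can never actually be written:

```python
def halts(program, data):
    """Hypothetical oracle M: returns True iff program(data) halts.
    Turing's argument shows no such function can exist."""
    raise NotImplementedError("M cannot exist")

def contrary(program):
    # Loops forever exactly when M claims program(program) halts...
    if halts(program, program):
        while True:
            pass
    # ...and halts immediately otherwise.

# Now feed contrary to itself:
# - If contrary(contrary) halts, halts() returned True, so it loops forever.
# - If it loops forever, halts() returned False, so it halts.
# Either way M gave a wrong answer, so M cannot exist.
```
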

    Goedel's First and Second Incompleteness Theorems are constructed in much the same way, BTW (and Goedel did it first).

  • I don't know about other schools, but everyone in Computer Science at the University of Waterloo has to take a course in 3rd year that, among other things (problems solvable by FAs, PDAs, TMs), talks about problems that are unsolvable. Studying these topics seems like basic theory, and I imagine most schools have something similar.

    I would bet most of the people who Jon Katz is talking about are rather naive when it comes to anything about computers, not just the limits of their capability.
    ---

  • What's the difference? I don't see it. I don't think Turing saw it either, because he used the word "infinite".
  • "computers are totally predictable..."
    *Sigh*, so are most PEOPLE, most single cells and many other complex systems. The trick is looking for what to predict. Computers are "predictable" only so long as we control the situation. Try running your machine without a fan for a while, or introduce some bit errors (like the magnetic picture of my girlfriend I kept on my case for a while, in a futile juvenile attempt to goad my 386-40 into sentience). Suddenly, things become less predictable. A cornered animal's actions are similarly predictable, so long as we control the situation.

    Granted, this is a silly sort of argument -- after all, there's no call for a "free range" computer. I'm just saying that perhaps the predictability of a computer, and therefore its ability to truly "innovate" without outside stimulus, is a fairly paradigmatic concept -- as is creativity in humans. Look at any three websites and tell me that each of them is a separate entity with no similarity, that they aren't adapted to a paradigm with random minor deviances according to a further paradigm of acceptable deviance, and I'll call you a liar. Sure, computers can't create, but neither can most humans... there's no reason why a computer, properly programmed and template driven, can't emulate the "art" of most advertising executives, web site designers, popular music authors, etc.

    Java is the way...
  • AI used to be compute-limited. Hans Moravec, in his 1988 book "Mind Children", has a calculation, based on the processing power of the neurons in the retina, of how much compute power would be necessary to make a brain. His measure of "power", in bits processed per second, reads as follows:
    • Bee - 10^9 bits/sec.
    • Hummingbird - 10^10 bits/sec.
    • Mouse - 10^11 bits/sec.
    • Human vision - 10^13 bits/sec.
    • Human - 10^14 bits/sec.

    He rates the classic VAX 11/780, generally considered to be a 1 MIPS machine, at 6x10^7 bits/second. So supposedly a top of the line desktop today, about 1500 times the power of the old VAX, is comparable to a mouse. A 1000-machine cluster should reach human power.
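
    A quick back-of-the-envelope check of that scaling, taking the comment's own figures at face value (the "1500x" desktop multiplier is the poster's estimate, not a measured number):

```python
# Moravec's figures from the comment above, in bits/sec.
VAX_780 = 6e7             # classic 1-MIPS VAX 11/780
desktop = 1500 * VAX_780  # "about 1500 times the power of the old VAX"
cluster = 1000 * desktop  # a 1000-machine cluster

mouse, human = 1e11, 1e14
print(desktop / mouse)    # roughly 1: a desktop is mouse-scale
print(cluster / human)    # roughly 1: the cluster approaches human-scale
```
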

    But we're not even close.

  • Nobody has been able to prove that NP != P. If someone were to discover a proof that NP = P (which would most likely be done simply by finding an efficient solution to any NP-complete problem, such as TSP), then most of Harel's objections would be shot down. In short, he's basing this on a guess that NP != P.
  • your web site has nothing on it. And a Turing machine without an infinite tape cannot run all the programs that a Turing machine with an infinite tape can run. So you may say that your computer is equivalent to a Turing machine, but it is only equivalent to a Turing machine with a certain length of tape; make the tape any longer and your computer is no longer equivalent.
  • The question to Multivac (and its incarnations throughout Time) was (more or less) "How to stop the eventual heat death of the Universe."

  • Problems unsolvable by computers roughly fall into three categories:

    1. Mathematically proven impossible

    The Halting Problem and similar. These are inherently impossible to solve exactly or exhaustively. Note that this impossibility applies regardless of whether the entity tackling the problem is carbon-based or silicon-based.

    2. Theoretically possible (exact algorithm is known), but time-/space-consuming.

    The Traveling Salesman and his friends in NP. The jury is still out on whether their intractability is a human limitation (i.e. we just haven't managed to come up with a working algorithm in P) or whether they're really that hard, but if the latter is true, then again they're hard to solve exactly for anyone, not just computers.

    3. Things involving creativity, feelings, "true understanding", etc.

    A surprising number of technically knowledgeable people are willing to grant that one without further questioning, and that's understandable. After all, one can't quite imagine what an algorithm for coming up with a new idea or a subroutine for falling in love would look like, and yet humans are able to DO these things, and they're easy.

    So why is it hard to teach a machine to do that? Well, look at it from a different angle: how hard is it to teach a human to do that? Have you ever tried to explain what exactly "being in love" is? The best we've come up with so far in that area is art, music, poetry, which seem to evoke similar feelings in different people, but that's by no means fail-safe. So, from a not too unlikely point of view, humans can't do these things either - we don't know how to do them. They do us instead. Machines might suffer from the same shortcoming, but given the state of our knowledge about this area of human behaviour, we're not even in a position to find out yet.

    But then again, maybe the author of the book reviewed here has found a way...
  • Hopefully some day he will outright slander someone and they will sue him into the ground, or maybe he will insult an ethnic group and they will take care of him. The standard response of "take him out of your preferences" does not do justice. Slashdot is spending money on him (or does he do this for free? either way) that they could be spending on someone more deserving, and that means I lose out. ThinkGeek loses out too. If someone else had done this review they might have done such a great job that the gazillion people on Slashdot rushed over and bought it; alas, now they will receive but a trickle.

  • >> Computers are just simple turing machines. This
    >> means that everything they do is utterly
    >> predictable. The very essence of being conscious
    >> is an ability to behave in a random fashion,
    >> also known as free will.

    > Devil's advocate time:

    > Prove this

    See Roger Penrose's book on the subject The Emperor's New Mind wherein he uses rather a lot of words to explain why he believes hard AI is not possible. It's an opinion I personally don't agree with (and as an earnest teenager I was delighted to be able to read a book by such a well respected academic, and find myself capable of actually disagreeing with it!).

    Penrose's central theme is that computers are deterministic and that the human brain is not. My angle is that any sufficiently complex deterministic system can appear non-deterministic (hey, that's Chaos Theory) -- and anyway, even if that's not the case, you could easily hook up a random noise source to an A/D convertor and have your computer AI grab input from that for its "free will".

    I say, if it can be done in wetware, it can be done in software -- to deny this is to invoke the supernatural, to say the Soul is separate from the physical brain, and that's something I personally can't agree with.

    --
  • While you are right that Heron's steam Aeolipile would not have been capable of much power, what IS amazing is the fact that the ancient Greeks had all the essentials for a true steam engine, but didn't take the route of combining the elements to create such a machine.

    They knew of valves and pistons - Heron even had an automatic temple door system [millersv.edu] that relied on air pressure drawing up water to open the doors when a fire was burned on an altar nearby. Other uses included various automata for stage plays and productions, and various waterworks (fountains and such).

    The truth of the matter probably revolves around the fact that they didn't need such machines - there isn't much practical benefit in a machine that only somewhat works when slaves are much, much cheaper (and in plentiful supply)...

    Worldcom [worldcom.com] - Generation Duh!

  • I'm not 100% against animal testing, and I'd rather things were tested on some rabbit before it gets to human testing, but, at the same time, most of the animal testing industry needs several hob-nailed boots to the head to correct it.

    Yeah. PETA goes a little too far; animal testing is a necessary evil. And I think most rational people see it as that.

    Though, perhaps if the PETA people would like to volunteer to spare a few guinea pigs...?

    Nope, didn't see any mad rush to the research labs for *that* one.

  • by sv0f ( 197289 ) on Saturday January 15, 2000 @08:26AM (#1369091)
    There are two famous books by the phenomenologist philosopher Hubert Dreyfus on the folly of Artificial Intelligence.

    "What Computers Can't Do: A Critique of Artificial Reason"
    "What Computers Still Can't Do: A Critique of Artificial Reason"

    AI folks hate these books for many reasons, but especially because Dreyfus is a technical doofus. He consistently misunderstands what computation is, how computers are programmed, etc. (Sometimes with comical results -- there's a great story in Levy's "Hackers" about Dreyfus claiming (in the 1960s) that no computer would ever play decent chess and then being soundly defeated by a primitive chess-playing program shortly thereafter.)

    It's pretty clear that the title of Harel's book ("What Computers Really Can't Do") plays on the titles of Dreyfus's books, reasoning soundly about the formal limits of computation rather than insinuating rhetorically about what computation cannot be based on a particular philosophical (phenomenological) critique.
  • Turing machines can be built, and can be functional and even useful.

    "Can be built", I'll buy. "Functional" I will also acknowledge. But "useful"?

    Other than as an educational toy, what use is a physical Turing machine (that can't be done cheaper and better by a PIC chip or something)?
    --
  • Your comment reaches to the crux of the matter. Many of the problems that computers either can't do or are poor at, we ourselves as human beings probably don't understand or don't have a method to solve.

    I'd add, however, that a few posters have pointed out Quantum and DNA computing, as "breaking the mold". I think writings in the vein of this book need to be cast against the backdrop of "... assuming the current method of problem solving and execution... ".

    Quantum computing offers a method that *may* break or reduce certain NP-complete problems, as I understand it. Problems which were "impossible" in Newtonian mechanics are near-trivial in relativistic frameworks. Quantum problems which were near-intractable mathematically reduced to simple interactions with Feynman diagrams.

    If the problem is hard, or currently "impossible", a revolution of sorts in thinking is likely required. Those labels state that your mode of thinking needs to expand, as your problem space has grown beyond your solution.

    As was also pointed out, AI produced the genetic algorithm, which offers a new approach to certain NP search problems, like the travelling salesman. While this doesn't actually achieve a solution in less than NP time, it creates a method to find near-optimal solutions in linear or logarithmic time, using a very different approach.

    Be critical of new ideas. For that is science. But be open to those new ideas as well, for that is progress.

  • If they're that frightened that a one-page article can contain enough of the book's content to make buying it redundant, it's certainly not something I'm going to bother reading. If someone tells you a bit about the plotline of a good book, do you decide not to read it because you now know something about it? Of course not ... if a book could be compressed to a page or two, then it never would [should] have been written.

    If this is how /. is going to do their advertising, they might as well just make a splash page with a pretty Flash animation, or some other such airheaded marketing. A review, no matter who commissioned it, should have some content.

  • I read only a small fraction of Penrose before deciding that he was a bigot. There is no intellectually honest reason to invoke weird physics to explain the operations of the human brain.
  • ...think about the difference between long-range changes that form climate and the momentary, hourly, daily changes of weather.

    I think what you're saying is that there is a difference between predicting a set of balls falling through some pegs will fall in a bell-shape pattern (which can be predicted reliably), and predicting the path of a single ball (which can't), and I agree.

    However, I'm not convinced that the difference between "weather prediction" and "climate prediction" follows this analogy. Both are dealing with the behavior of mass particle systems; one is just on a longer time scale than the other. If you go back to the ball-through-pegs analogy, you might say that the bell shape gets more accurate as time goes on, so a long-term climate prediction should be more accurate than a short-term prediction. However, is your local long-range forecast more accurate than the short-term forecast? Mine isn't; in fact, the opposite is true. The longer out you go, the less accurate it gets.

    This shouldn't be surprising. We are dealing with an insanely complex, very little understood phenomenon.
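
    The ball-through-pegs point is easy to demonstrate with a toy simulation (my own sketch, not anyone's weather model): the aggregate outcome is predictable even though any single ball's path is not.

```python
import random
from collections import Counter

random.seed(42)

def drop_ball(rows=12):
    # Each peg deflects the ball left (-1) or right (+1) at random,
    # so one ball's final slot is unpredictable.
    return sum(random.choice((-1, 1)) for _ in range(rows))

# The aggregate, though, is predictable: final positions pile up near
# the center in a bell shape.
bins = Counter(drop_ball() for _ in range(10000))
center_mass = sum(n for pos, n in bins.items() if abs(pos) <= 4)
print(center_mass / 10000)   # the large majority land near the middle
```
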


    --

  • Computers are just simple turing machines. This means that everything they do is utterly predictable.

    That statement can't be proven, because it's false. To cite just the example I'm most familiar with, genetic algorithms and genetic programming have produced results that are unpredictable and surprising; we would call them "creative" if a human had come up with them.

    A quick search on Google turned up this article [businessweek.com] which talks about how genetic algorithms have been used to come up with some interesting designs. I'm sure there are some other great articles out there, but I don't have time to search for them at the moment.

  • by Azog ( 20907 ) on Saturday January 15, 2000 @09:01AM (#1369113) Homepage
    So now this Harel decides that a problem is insoluble? [...] Who does this guy thinks he is, the All-knowing deus?
    (sigh). No, no, no. Go study some theoretical computer science before you attack researchers who actually know something about it.

    There are large classes of interesting problems which are incomputable. And that's not just because some PhD said "I tried for four years to solve this problem, and I couldn't figure out how to do it, so it must be incomputable."

    Incomputable is a technical term in computer science. Problems can be proved incomputable. These proofs are not trivial; they are usually based on a formal, mathematical model of a computer. If some problem P is proved incomputable, and if the proof is correct, then no real computer that has the same limitations as the "model" computer will ever be able to solve the problem either. It has nothing to do with speed or memory, either. These problems are simply not solvable with our current models of computation.

    Now, IIRC, the Church-Turing Thesis states that all reasonable (realistic) models of computation have the same limitations, so if that thesis is true, then no computer will ever solve these problems.

    So, any research effort to try to solve the problem with current computers is totally futile. It would be like trying to find a solution to the equation "n * 0 = 100".

    You are correct that sometimes, trying to solve a problem yields other results. The way to attack these problems is to try to find an alternative model of computation, and prove that it is not restricted to the same class of problems as existing computers.

    For example, there is some interesting research being done on the limits of quantum computation. Perhaps quantum computers will be able to solve a larger class of problems. That might disprove the Church-Turing thesis.

    The reason people should read this book is that many, many programmers out there do not have a theoretical computer science background. People who are self-taught, or who took a two-year course at a technical school, may be highly skilled programmers - I don't want to diss them. But they probably don't understand the limits of computation, and that might get them into trouble someday.

    And I haven't even mentioned intractability - the gigantic class of problems that we don't know how to solve quickly when the problem gets large. For example, many optimization problems seem easy on paper when you have a set of 2 or 3 objects. You code up a little demo program that can handle 10 to 20 objects. It seems a little slow, but you figure you can optimize it and find a better algorithm, and use a faster computer. Meanwhile Marketing is promising people that you will be able to solve the problem with 1000 objects.

    Maybe you work on the problem for months and never solve it and get fired. Or maybe you discover that the problem is intractable - NP-complete, for example - and that there is no known algorithm to solve the problem, and probably isn't one, and even the fastest imaginable computer using the best known algorithm could only handle 50 objects.

    This is why everyone should read a little about intractability and incomputability. Ok enough ranting, back to work.
    Torrey Hoffman (Azog)
  • The best book on this subject IMO is The Emperor's New Mind by Roger Penrose. It came out a few years ago. It is basically a critique of AI, but to get there he discusses the theory of computing, Gödel's incompleteness theorem, quantum mechanics and much, much more. The aim of the book is to argue that there are certain things a human mind can do that a computer can never do. Roger Penrose is himself a top mathematician, and although the book is aimed at the general public it's not for the faint-hearted. Having said that, though, it is simply stunning: a tour of all the major scientific ideas of the last century, and incredibly stimulating. If you want to read a book on the subject, read this one.
  • FSMs and TMs are not equivalent. In computing theory, automatons are often classified by the set of languages that they can accept. To accept a language is to be able to determine whether any given string is a member of the language.

    The set of languages that can be accepted by a FSM is the set of "regular languages." The set of languages that can be accepted by a TM is the set of "recursively enumerable" languages. The second is a strict superset of the first.

    Theoretically, you can solve more problems with unbounded storage than without. Of course, practically, for a given problem, if the finite storage is big enough...
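
    To make the language-class point concrete with a toy example of my own (not from the thread): a two-state DFA suffices for the regular language (ab)*, while recognizing the non-regular language a^n b^n requires unbounded storage, here just a counter.

```python
def dfa_accepts_ab_star(s):
    # Finite-state: two states are enough for the regular language (ab)*.
    state = 0
    for ch in s:
        if state == 0 and ch == 'a':
            state = 1
        elif state == 1 and ch == 'b':
            state = 0
        else:
            return False
    return state == 0

def accepts_an_bn(s):
    # a^n b^n is not regular: no fixed number of states can count an
    # arbitrary n, but a machine with unbounded storage (a counter
    # standing in for the tape) handles it easily.
    count = 0
    i = 0
    while i < len(s) and s[i] == 'a':
        count += 1
        i += 1
    while i < len(s) and s[i] == 'b':
        count -= 1
        i += 1
    return i == len(s) and count == 0
```
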

  • If someone was to discover that NP=P, that would be the biggest and most significant event in the field of computer science and mathematics since... um.... since forever, really.

    It is not just a "guess". Sure, it has not been proven that NP != P. But most computer scientists strongly believe that P!=NP. Calling it a guess is like saying that Stephen Hawking "guesses" that the universe started with a Big Bang.

    When I die, the first question I ask God will be... "So, what's with the P!=NP thing?" :-)


    Torrey Hoffman (Azog)
  • > Man will never fly, because if we were meant to fly he would have wings.
    Fallacious argument. You don't need wings to fly. *cough helicopter*

    > Man will never break the sound barrier, because we will never be able to produce a Chuck Yeager.
    Again another fallacious argument. The speed of sound (which has a limit) is independent of people.

    > Man will never break the light barrier, because of the limitation of the brains of some physicists.
    So the speed of light is a limitation of the brain? Huh?

    > Stop saying things will never happen.
    Man will never be able to reach the bottom of the ocean with just his baby clothes.

    We can't go breaking the laws of physics/math at will. Pi will never change. The trick is knowing what is impossible, and what is highly improbable.

    Cheers
  • by BigBlockMopar ( 191202 ) on Saturday January 15, 2000 @09:31AM (#1369135) Homepage

    Here's one thing computers *can't* do:

    Shorten the work week.

    Technology was initially embraced because, allegedly, it would give us more leisure time. Popular Mechanics magazine has made some of the funniest wrong predictions over the years. One of my favorites was that in 1950, they said that by the year 2000, we'd all be working only 2 days a week, and machines would take the drudgery out of menial tasks by simply eliminating our need to do them.

    Of course, that hasn't happened: if anything, the reverse is true.

    An ex-neighbor of mine has an interesting collection: he collects lawn mowers. So, he's got a gadget called "The Lawn Ranger [mowbot.ro.nu]". It's a late-1980s computer controlled lawnmower that uses optical sensors to figure out where it has and hasn't already cut. You put it in the middle of your lawn, press the start button, and it goes merrily along, destroying your garden hose, the toys that the kids left in the lawn, and generally wreaking havoc. It's cool, and the task of mowing the lawn is pretty braindead, but it's hard for the computer to grasp it.

    He's also got a far more practical device called a Hovermower [hovermower.com]. It has no wheels, and uses a fan built onto the blades to hover above the lawn like a hovercraft. It, too, is great: you can sweep it around corners. But, like the Lawn Ranger, it's not a very good idea: when it runs out of gas, as the motor slows down, it ceases to produce enough lift, and the blades end up tearing out a big chunk of sod. And you don't want to ever leave the thing idling unattended, as it has a tendency to slide around like a puck on a crooked air hockey table.

    Technology, and all associated good ideas, have their limits.

    Sure, we're more productive during our working hours because of technology. And it's given society a whole lot *more* career choices than before, when you could basically either be a farmer or a burden to your family.

    Computers are merely an incremental step along the path away from a one-lifestyle existence, wherever that path may lead. They simply join the ranks of everything from the steam engine and Jacquard's loom all the way to the modern transportation infrastructure and the fax machine.

    Cars can't do everything.

    Nope. But they've freed us from the shackles of public transportation, allowed us to independently venture further than the first town down the road, and given us the ability to be more productive in the workplace. And, in doing so, they entertain us and diversify the working world.

    This is prolly a good book and all but get real people, computers are just tools and the audience this book was intended for knows this.

    Agreed. But I'd wager there are some reading this discussion right now to whom computers are *everything*; that's not necessarily wrong if your work and hobbies involve nothing else, but it's a very narrow (ie. wrong) view of the big picture.

    Computers are cool toys. But when you've got valuable information spinning around at 7,200 RPM on your hard disks, they're very important tools.

    A slot screwdriver can be used for turning screws. Or it can be used as a pry-bar (I have a big one that my buddies and I call "The Persuader"). Or a chisel. Or as a weapon. Even as a fireplace poker. It's a very versatile tool.

    A computer is simply a very versatile tool: the 21st-century screwdriver.

    And that is a rational perspective.

  • Jon Katz is an idiot
  • >>Computers are just simple turing machines. This means that everything they do is utterly predictable.

    >That statement can't be proven, because it's false. To site just the example I'm most familiar with, genetic algorithms and genetic programming have produced results that are unpredictable
    While you might be right, your example sucks. Genetic algorithms are very simple. They are predictable: every time you run the genetic programming software with the same inputs, you get the same outputs. If you had enough time, you could dump out the entire execution trace of the program and read through it and understand every single thing that the program did, and why.
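
    That reproducibility is easy to demonstrate with a toy hill-climber (a sketch of my own, not any particular GA package): seed the pseudo-random generator and the whole "evolution" replays identically.

```python
import random

def run_ga(seed, generations=50):
    # Toy single-gene GA maximizing f(x) = -(x - 3)^2 by random mutation.
    rng = random.Random(seed)          # all "randomness" comes from this seed
    fitness = lambda x: -(x - 3) ** 2
    best = rng.uniform(-10, 10)
    for _ in range(generations):
        child = best + rng.gauss(0, 0.5)    # mutation
        if fitness(child) > fitness(best):  # selection
            best = child
    return best

# Same seed, same inputs, same "evolution", same output -- fully predictable.
print(run_ga(1234) == run_ga(1234))   # True
```
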

    There are no computer programs in existence that are unpredictable in the sense that humans are unpredictable.

    Now, I am not claiming that such a thing is impossible. Perhaps we will eventually understand human brains so well that we will be able to say that humans are, in principle, predictable as well. Or perhaps we will discover that human brains use some sort of quantum process (Roger Penrose's theory, IIRC). But if we discover that, then we could reverse engineer it and build it into computers to give them the same capabilities.

    But we aren't there yet. A genetic algorithm which comes up with interesting designs isn't even close - most genetic algorithms are just a random walk in some problem space, and the fitness testing part of the genetic algorithm program works for finding local maxima which correspond to good solutions to the problem.

    They are interesting engineering, but really, they're not that interesting from a theoretical point of view.

    Torrey Hoffman (Azog)

  • One other thing is that most animal testing is not beneficial - more cosmetics and such than anything else.

    I agree. Is it really necessary to test eyeliner on rabbits?

    However, animal testing of potentially life-saving drugs, techniques and procedures, I'm all for. As long as, again, it's well planned, and viewed in the light of the necessary evil that it is.

    We have a large criminal population who will never do any good for society. This would be an excellent pay back.

    Yeah, even Hitler had a good idea from time to time. Though, I suspect, that the thought of being a guinea pig and potentially used in really nasty experiments would be a very strong deterrent to the criminal population. However, it goes completely against the existing standards regarding cruel and unusual punishment. That's a slippery slope to start going down.

    Perhaps an agreement to be used in testing for a reduced sentence?

    One might argue that giving a convicted bank robber 10 years off his prison term when he gets the placebo is unfair; I'd argue that the coin could have landed either way and he could just as easily have been the guy getting ten years off his sentence for some really nasty experimentally-induced neurological disorder.

  • ... is whether we can determine whether Jon Katz's brain will ever stop churning out these worthless articles.

    Computability was discussed to death years ago in Roger Penrose's "The Emperor's New Mind".

    IMO until we know in far greater detail how the brain works, claims as to whether the brain achieves non-computable things are useless. Penrose's whole argument was based on quantum-level computations taking place in the microtubules of neurons, rather than at a neural net or higher level. Hardly a mainstream view.

  • I did not mean my message to be a flame, but rather a request for more information.

    I apologize. I misinterpreted your comment. I think that Katz's opinion on the book may be, in one regard, more useful than an opinion from a reader who is versed in computing theory. That is, this book doesn't seem targeted as a textbook, so its market is for enthusiasts as much as professors and researchers, who may have a level of understanding closer to Katz's.

    Regardless, I apologize for being rude.

  • Of course this leaves the question: how did the frickin' logic get there?

    It was a human-made invention of Aristotle [stanford.edu]. There's plenty of logic-defying randomness in nuclear decay, the Uncertainty principle, and heck, one fellow used Gödel's theorem to show that there's randomness in arithmetic [auckland.ac.nz]!! The atomic API is still not completely defined.

  • It's also worth noting that computer circuitry is getting to the stage where it is becoming vulnerable to quantum events. This is considered a bad thing, and quite a bit of work is focused on eliminating these interactions.
    If random events played such a significant role in the much larger size circuits of the human mind, you'd think by now we'd have evolved compensatory mechanisms. Our brains are meant to process and react to our environment, not invent new random data.
  • Is consciousness computable? Is it recursively enumerable? Is it even algorithmic?

    Are people speaking of something they know little or nothing about?

    A Turing machine is used to determine the answers to these questions--not to model anything. TMs came before computers (which, by the way, are most accurately modelled by LBAs for which there is a solution to the halting problem).

    A Turing machine is best used to gauge the theoretical efficiency of an algorithm, as well as to give a solid framework for what an algorithm is. The limitations of the TM are not those of the computer but rather mathematical (ie totally theoretical) limitations, and they carry the weight of expressions like 1 + 3 = 4. Due to the way these symbols work, it is not valid for 1 + 3 = 5. There is no real reason, only mathematical axioms that hold. Same with a Turing machine. Things proven with a Turing machine hold theoretically.

    Computers are not TMs but rather an approximation of a certain small group of TMs (particularly the TM which represents the universal LBA (linear bounded automaton)).

    To make broad sweeping statements that intelligence is or is not recursive, recursively-enumerable, or undecidable is a bit premature since we cannot accurately describe the problem nor can we accurately describe the solution in terms that are acceptable for use with the mathematical concept of a TM. My current belief is that it is undecidable (ie, non-algorithmic). This does not mean it cannot be duplicated by man in a lab, just that the TM model of computation cannot represent it. Follow my reasoning:

    1. Intelligence is not an algorithm to enumerate a set of correct solutions. In other words, intelligence does not have a final answer, but rather an evolving set of current "good enough" conditions from which to operate.
    2. There is no accepted "yard stick" for intelligence. Without a way to accurately measure intelligence, without an accepted standard definition of intelligence, and without a method to test solutions to problems for intelligence (as opposed to luck or misapplication of faulty intelligence), there can be no way to mathematically determine whether or not intelligence exists.
    3. There can be no accepted measure of intelligence. I define intelligence here as using intuitive (non-algorithmic?) processes to exercise better judgement. "Better" is subjective and philosophers have been arguing for thousands of years whose judgement is better. As a matter of fact, if a yardstick for intelligence could be developed, it would finally finish what Gödel started 80 years ago in that philosophy as a study will be as useful as astrology. Simply take the conclusions of two philosophers, measure the intelligence, and take the more intelligent. Eventually, it would evolve into a more concrete science like astronomy.
    4. The exercise of intelligence often comes with experience. Therefore, there is no agreed upon initial state, since those exercising intelligence have diverse experiences. Therefore, by definition, there is no single state from which intelligence arises. These experiences could be in the womb or genetic factors inherited or any number of things. The fact is, there is no "good" starting point--once again a subjective quality that cannot be measured.
    5. A TM requires (among lots of other things) two very special states: initial and halt. By the points raised above, these states cannot exist.
    6. Therefore, a TM cannot be constructed to manufacture intelligence.

    There are those who would say that birth and death are pretty good initial and halting states. However, no two people are born the same, and death is a consequence of being alive, not of being intelligent (trees also die).

    I am not saying that intelligence cannot be duplicated by man. I am just saying that current models of computation cannot do it. Just because a car can move you from point A to point B does not mean that point A and point B can be on two different planets. The mechanics that make up a car cannot accomplish this just as the current models of algorithms cannot model intelligence. A radical change in thinking is required. Whether or not that change will come is still in doubt.
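    The LBA aside above deserves a concrete sketch: a machine confined to bounded memory has only finitely many configurations, so its halting problem is decidable just by watching for a repeated configuration. A minimal illustration (the step/state encoding here is invented, not from any textbook):

```python
def halts_bounded(step, state):
    """Decide halting for a machine with finitely many
    configurations (the LBA situation): it either halts or must
    eventually revisit a configuration, proving it loops forever."""
    seen = set()
    while state is not None:       # convention: None means "halted"
        if state in seen:
            return False           # repeated configuration -> runs forever
        seen.add(state)
        state = step(state)
    return True

# A counter that ticks down to 0 and halts:
assert halts_bounded(lambda s: s - 1 if s > 0 else None, 3)
# A machine that cycles 0 -> 1 -> 0 -> ... forever:
assert not halts_bounded(lambda s: (s + 1) % 2, 0)
```

    The catch for real machines is that "finitely many configurations" is an astronomically large number, so the decision procedure is hopeless in practice even though it exists in principle.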

    PerES Encryption [cloverlink.net]

  • Please read The Human Use of Human Beings. I'm pretty sure this is all covered there if you read closely.

    There are mechanical problems that are hard to do. They're called "Hard Problems" - problems whose known algorithms are one-way or nearly one-way, solvable in principle given enough time.

    There are math problems that don't lend themselves to discrete mathematics. I'm not sure a computer would have helped Georg Cantor develop set theory. Also there are certain problems that lend themselves well to approximation but not an exact solution. If I dust off my solid analytic geometry books I'm sure I can find a few. That or real-time celestial navigation problems using polar calculus.

    Then there are all of those problems that don't lend themselves to computation at all. Knowledge and insight come from the synthesis of new ideas out of different, multiple sources. So for example the sharpest mining bit in the universe doesn't by itself help you to understand the chemistry of the Earth's crust.
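    The "Hard Problems" mentioned above can be illustrated with subset sum (my choice of example, not the poster's): verifying a proposed answer takes one addition, but the obvious search examines every subset and blows up exponentially with the input size.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force over all 2^n subsets -- feasible only for tiny
    inputs, while *verifying* a candidate subset takes one sum()."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)
```

    Six numbers means 64 subsets; sixty numbers means more subsets than a computer could ever enumerate, which is the asymmetry that makes such problems "hard".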
  • Well....good to see my sloppy thinking doesn't go unchallenged -- 4 responses putting me in my place already.

    Of course, I was incorrect: since GA/GP use pseudo-random numbers, they are, of course, predictable, inasmuch as you can step through the program. Just to pick a nit, though, it would be incorrect to say that they're "just a random walk in some problem space."

    GAs are often referred to as stochastic -- that is, they involve randomness. But that doesn't mean that they are merely random themselves. They're self-directing.

    So, yes, if you know all the inputs, you can work through the logic and see just how the program got to its solution. Whether this is the case or not with the human brain as well, AFAIK, remains to be seen.
  • Yes, technology was embraced because it would free us up to have more leisure time. But most people would rather work than have that leisure time.

    Productivity per worker over the past fifty years has tremendously increased thanks to technology.

  • Didn't the Royal Society once make a short list of the few remaining real problems for science to solve - the rest was supposed to be perfecting the results and correcting minor mistakes that seemed to lead to inconsistencies, like small irregularities found in black-body radiation, and some electro-magnetic effects...

  • So, do you have blind faith in questioning everything?

    ;-)

    -jon

  • It's like roulette: I can't predict where the ball will land on the next spin, but I can say with absolute certainty that if I stay in the casino for long enough I will eventually lose all my money.

    No, it isn't even remotely the same. Roulette is a bounded system with a limited, known set of variables. Over time, the ball should end up (on average) in each space the same number of times. The reason the house wins at roulette (and why you shouldn't waste your money at it) is that an American wheel has 38 pockets (1-36 plus 0 and 00), but a winning single-number bet pays only 35:1, as if there were just 36 pockets.
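    To put a number on that house edge (assuming an American wheel and a single-number bet):

```python
# American wheel: 38 pockets (1-36 plus 0 and 00), but a winning
# single-number bet pays 35:1 -- as if there were only 36 pockets.
pockets = 38
payout = 35

# Win 35 units with probability 1/38, lose 1 unit with probability 37/38.
ev_per_dollar = (1 / pockets) * payout - (pockets - 1) / pockets
print(f"expected value per $1 bet: {ev_per_dollar:.4f}")  # about -0.0526
```

    That -5.26% per spin is why staying in the casino long enough guarantees you lose, even though no single spin is predictable.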

    Weather has lots of particles and lots of factors that we can't even begin to understand. For a long time, it was global cooling which was supposed to be the end result of pollution. Now it's global warming. It is certainly true that the average temperatures that have been measured recently have been higher, but what's causing it? Is it a permanent thing or is it temporary? Considering we have about 200 years of hard data which we are using to predict 4 billion years, there's a bit of inaccuracy here.

    As for the election, the most accurate thing that can be stated is that more people INTENDED to vote for Gore. The problem is that they didn't do it correctly.

    -jon

  • I'm having a hard time seeing how your points support your argument (that "consciousness" cannot occur in a deterministic machine), or disprove my argument (that a deterministic machine can be as "conscious" as a human is), or further define "consciousness". Specific complaints are as follows:

    A) You as an information bearing automaton have a finite (or fixed infinite) amount of storage and processing power. Most of this is being used to run yourself. Thus you physically cannot have enough resources left over to wholly conceive of another of your class.

    Um, so what?
    Not only does this contribute nothing to the debate, but it's also true for any other object or system (deterministic or not).

    B) Indistinguishability != the same.

    Then how do you prove that your model of the human mind (a "conscious", nondeterministic system) is better than mine? For either of us, we can only compare the predictions of our models to actual observations. If our predicted behaviour is indistinguishable from actual behavior, we assume that our model is a working one (note that many different models may work).

    Apparently nondeterministic actions are adequately explained by strong sensitivity to input and the chaotic, effectively unpredictable nature of this input.

    C) Unsubstantiated "Apparently": please list source for this external verification.

    You seem to be confused by my perhaps-unclear statement above. A clearer version is: "Actions that appear to be nondeterministic are adequately explained as being the results of a deterministic system interacting with input that is chaotic and thus effectively unpredictable."

    This is self-evident. If you feed something random into a deterministic system, of course you'll get random-looking data out. This is my point; nondeterministic actions do not require a nondeterministic mind.

    D) As for "a very large deterministic system in a chaotic environment" It falls when you point out two things. The deterministic system must itself be a "chaotic environment" as the individual is always a piece of its environment.

    The system itself doesn't need to be chaotic to give chaotic output when given chaotic input. It may very well be chaotic; this is a very different thing from being nondeterministic. Either way, my point holds, so I don't really see what you're getting at here.

    "To obsever is to influence, and to be influenced" Professor Klemke.

    Again, so what?

    E) A far better model of "consciousness" is the imaginary numbers models.
    [...]

    This example is vague enough that it is difficult to tell what, if anything, it contributes to the argument. However, I'll take a shot at the two points I did manage to find in it:

    • You can thus say JonKatz's internal universe, consciousness, runs on sqrt(1), sqrt(-1), and sqrt(JonKatz). Sqrt(JonKatz) being a number that doesn't exist in the real universe, can't even be manipulated therein. This thus gives an easy test for "consciousness": does this system provably contain a mathematics that doesn't exist in the real world?

      Short answer: No. You've just defined extra symbols for your own mathematical system. There are actually an infinite or near-infinite number of possible mathematical systems. Talking about whether a given symbol in the system, like "sqrt(-1)" or "sqrt(JonKatz)", exists in the "real world" is not meaningful. The number "5" doesn't exist in the real world - it's just an idea that we choose to associate with certain structuring in the world about us. The manipulation of such symbolic "ideas", under *any* mathematical system, can be performed deterministically. Thus, this example doesn't seem to affect my argument much.

    • Easy test, very, very hard to prove.

      Firstly, this entire example seems to stem from some questionable hand-waving, as mentioned above. Secondly, you've already *claimed* to prove that the human mind is non-deterministic. I'm challenging you to provide support for this proof.


    The only device created thus far to emulate a human mind is the universe, and as you've already said that's a chaotic environment.

    This scores a big "so what?" on two counts.

    Firstly, chaos can easily occur in *deterministic* systems. Look up "chaos".

    Secondly, the only device created thus far that emulates the human mind is the human brain - much smaller than the universe. This also does not constitute a proof by any stretch; you have to prove that emulation by any other method is *not* possible (i.e. disprove the existence of anything other than the human brain which can host something indistinguishable from a human mind).
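    On the "chaos can easily occur in deterministic systems" point, the logistic map is the standard few-line demonstration: identical deterministic rules, starting points differing by one part in ten billion, and the trajectories decorrelate completely within a few dozen steps.

```python
def trajectory(x, steps=50, r=4.0):
    """Iterate the logistic map x' = r*x*(1-x): a completely
    deterministic rule that is chaotic at r = 4."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)
# The tiny initial difference gets amplified roughly twofold per
# step, reaching order 1 well before step 50:
print(max(abs(p - q) for p, q in zip(a, b)))
```

    Deterministic and unpredictable-in-practice are simply not the same property.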

    It can be shown that as the limit of the accuracy of the emulation approaches == the mind it is emulating, the complexity of the system == universe.

    Um, no.

    The mind has finite complexity, as all of its state information is contained within the human brain. The uncertainty principle and a few other laws place constraints on the amount of information that can be contained in that volume at its measured temperature.

    The proof that you are quoting is flawed hand-waving (one of my complaints about The Emperor's New Mind, among other things).

  • Um...yeah. That's what I want...a criminal turned into a headcase let out ten years early. Hide the children!

    Screw the kids. They don't get home from school until 3:30 anyway. By then, the stationwagon and I will be heading for the hills.


  • So the first thing that has happened is that instead of making our lives easier, technology has been used to automate "easy" jobs (at the cost of the people who used to be paid to do those jobs).

    No way, dude.

    Agreed, those jobs are disappearing, but it's not at the expense of those who would have worked in whatever slave labor put-tab-A-into-slot-B job that we're discussing.

    It's to their benefit; now, they have opportunity.

    They can sit down, read a book or two, save their beer and cigarette money to buy a computer, sit down, play with it, and move on into the fast-paced IT world. Among other opportunities that are open to them.

    The fact is, most of those people who have menial punch-the-clock use-no-brains kinds of jobs are there for a reason: they lack motivation. If they had motivation, they'd have found some way to get into a more exciting field.

    Technology has offered them a *world* of opportunity simply by replacing them with robots. Instead of doing the job replaced by the robots, why don't you schmooze up the robot repair guy into having him take you on as a volunteer on weekends? Between knowing intimately well the job that the robot does, as well as showing an interest in the field, you'll probably get yourself a position.

    I got into the TV field by walking into a local TV station and volunteering. It wasn't even an internship. Within 2 weeks, I was doing studio camera on the 6 PM news - paid. Then, ENG, audio, video, finally, bench tech, repairing the innards of $40,000 professional VTRs, switchers and timebase correctors. Hell, they even put me on the air, doing short opinions and commentary. And a couple of TV commercials, too.

    I don't believe for a second that anyone is trapped in any position that they don't like.

    They're just lazy, and I have no sympathy for them.

    Opportunities exist for anyone in any field. All it takes is a little work.

    Remember, no matter what, *you* are the deciding factor in how successful you will be.

  • No, I would disagree with that. It might be possible to approximate a brain with a turing machine, but that might not be good enough.

    I'm not saying we will never have AI, or anything like that - I just don't believe it will be on a digital computer. I'd suspect that the hardware for developing an artificial intelligence will end up having many of the same features as a biological brain.


    --

  • Yeah, but by the time it solved the problem, it was too late.

    Of course, the solution solved that problem too...


    --

  • I assume several things in this post since arguments for the assumptions might be very long and tedious to read in this format.

    As far as I can see, a human mind is indistinguishable in practice from a very large deterministic system in a chaotic environment. Apparently nondeterministic actions are adequately explained by strong sensitivity to input and the chaotic, effectively unpredictable nature of this input.

    So, rather than making a blanket statement that the human mind can't be emulated by a deterministic machine, you're going to have to prove that it isn't already one :).

    Assume that a thing in a state cannot change that state without a cause enacted on it. This is true in classical physics, and it has not been disproved in quantum physics.

    The state of a single transistor in a computer cannot then change without a cause. And causes cannot be effected without programming from a human, doing electrical rewiring, or turning the power on or off. Yes, a program could be written to change the state rather than having a human directly change the state. Even the simplest algorithm will accomplish this. Yet the change of state is still indirectly caused by a human. I take it for granted that a computer could be programmed to simulate intelligence. If that is true, then that is all they can do; they would not necessarily be the source of intelligence. (Hence the name, AI.)

    If we assume that a physical state does not equal Truth, we might be able to get closer to a proof of the human mind as something other than deterministic. (However, I do not think it is possible to completely empirically prove this.) By a state not equalling truth I mean that a state simply exists as it is with its own qualities. For example, an apple exists in a hanging state when it is on the tree. It is true that it exists in that state, but that state is not Truth.

    Assume then that Truth is "being as it is, with reason but without cause." Then human choices of the mind, a metaphysical rather than physical mind becomes more possible, and the possibility of the ability of a computer to simply "make up" or "create stuff" diminishes.

    I'm using "human mind" instead of "conscious mind" above because you're going to have one hell of a time defining "consciousness".

    Here's an idea that illustrates that complexity does not necessarily equal consciousness. I guess this is part of helping define consciousness and arguing against the "complex neural network of the ganglia" theory.

    The weather is an extremely complex series of events that is not well understood. I would venture to say that it is far more complex than any computer is, and it is possibly more complex than the human brain. Yet you don't say that the tornado is out to get you, or that warm breezes are coming because they want to make you feel better. In short, the weather does not have reason. Yet we apparently have reason. So if reason exists, then complexity as the only defining factor of the mind or "consciousness" might safely be ruled out.

    Brandon

  • Yeah, Penrose sure had to dig deep to try to concoct an argument to back up his beliefs. Too bad that his arguments are based on his own horseshit theories of how the mind works, and ignore the known facts.

    Hint: Next time you want to learn about how the mind works, or what it's capable of, try reading a book by a top neurologist rather than a top mathematician.
  • but after the thing dow chemical (or was it some paint company?) did with genetic algorithms, i'm not so sure.

    early nineties: it was getting more and more difficult to make paint. volatility and lead laws, customer demand for particular qualities (glossy, long lived) etc. were driving chemists nuts. drop volatility, get short lived/fugly paint. they were having some luck, but not much.

    the scientists brought in a consulting company to see what could be done with genetic algorithms. after some months of design and encoding of the basic chemical makeup and physical properties of paint, they were stunned that, after a few days processing, the algorithms cranked out several formulas that far exceeded all legal and usability requirements.

    it was estimated that the labs, using traditional processes, would have taken 100+ years to develop these formulas.

    i'm not so sure that you could not take some non-deterministic physical process and use it to drive genetic or neurofuzzy algorithms and blow all the NP stuff out of the water at some point.



  • Yup. Although we haven't gotten into great depths about unsolvable problems, the fact that some problems exist which are unsolvable (and why) was discussed in second year CS at the University of Toronto. I don't think this comes really as much of a surprise to people in CS or well versed in technology.
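    For anyone who missed that second-year course, the heart of the unsolvability proof fits in a few lines. Assume a halting oracle exists, build the diagonal program that consults the oracle about itself, and whatever the oracle answers is wrong. A sketch:

```python
def paradox(halts):
    """Given any claimed halting oracle halts(f, x), build the
    diagonal program d: d(d) halts exactly when the oracle says
    it doesn't, so no oracle can be right about every program."""
    def d(f):
        if halts(f, f):
            while True:        # defy a "halts" verdict by looping
                pass
        return "halted"        # defy a "loops" verdict by halting
    return d

# Any oracle answering "loops" on (d, d) is immediately refuted:
d = paradox(lambda f, x: False)
assert d(d) == "halted"
```

    The symmetric case holds too: an oracle answering "halts" sends d(d) into the infinite loop, refuting that verdict instead.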
  • You've sidestepped an important point in the question of hypocrisy: eating meat is, in essence, killing an animal purely for the pleasure of the taste of animal flesh. Never mind the environmental consequences of beef cattle (both inherent to raising the number we do, and flaws in the system that are not inherent but motivated by profit concerns.)

    OTOH, using animals for medical testing often answers questions that simply cannot be answered without involving some animal (though one could experiment directly on homo sapiens, but current ethics seems to find that a lab rat's life is of less value than a human life, but let's not sidestep into that gray area).

    So, in essence, to be non-vegetarian and to oppose medical testing with animals is to say that the death of animals for pleasure is okay, but the possible death and/or suffering of animals for the advancement of medical knowledge (which will benefit both veterinary science as well as human medical science) is not okay.

    Something to think about, anyway.

    (As it happens, I am a vegetarian who doesn't purchase leather or other animal-death products, except for cat food because cats do -not-, biologically, have the option of being vegetarian even if their owner is, and yet I support animal use in medical testing, not without some ambivalence, but it is, at present, the best option; in the future, other options may arise, ie, using cloning technology to develop individual organs to experiment on without needing a living animal, or even computer simulations once we know enough to simulate usefully, though I doubt such technologies will ever completely replace live testing, they may well result in far fewer deaths and less suffering by filtering out less promising technologies early...
    I also somewhat agree with the claim that being against medical testing without being an ethical vegetarian is hypocritical, though I can see the potential for non-hypocritical philosophies that resolve the contradictions, even if I wouldn't, personally, agree with them; I think, though, that many people simply don't ask the questions and formulate their personal opinions on vague feelings and the effectiveness of propaganda directed at them... but then, that's true on every issue.)

    Parity Even

    --Parity
  • by Tackhead ( 54550 ) on Saturday January 15, 2000 @03:06PM (#1369246)
    >If random events played such a significant role in the much larger size circuits of the human mind, you'd think by now we'd have evolved compensatory mechanisms. Our brains are meant to process and react to our environment, not invent new random data.

    On compensatory mechanisms: Who says we haven't? Perhaps epilepsy is merely what happens to a brain when one or more of these mechanisms fails?

    On random data: And what is the input into your ears and eyes, if not "random" data? Yes, our sensory processing mechanisms are engineered to process and react to external stimuli -- but many of those stimuli are essentially random.

    Sorry, Mr. Penrose. Yelling "tubules" and "quantum" over and over again in Emperor's New Mind doesn't refute hard AI. It just means that the CPU in the deterministic Turing machine may need an embedded random-number generator based on a random physical process.
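    That last idea, a deterministic machine with a physical entropy source bolted on, is exactly how real systems already work; compare a seeded PRNG with the operating system's entropy pool:

```python
import os
import random

# Seeded PRNG: deterministic, produces the same sequence every run.
seeded = random.Random(1234)
print([seeded.randint(0, 9) for _ in range(5)])

# OS entropy pool (hardware and event noise): different every run.
print(list(os.urandom(5)))
```

    A Turing machine plus os.urandom is still not a refutation of hard AI, which is the poster's point.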
