Technology

Son of HAL For Sale

John Turnbull writes "The Observer newspaper (UK) reports that Sir Arthur C Clarke, the author of 2001, is backing a colourful British computer entrepreneur in his bid to launch a mass-market version of HAL under the brand name the Clarke Omniputer. It will be the first time that Clarke, now 82, has given his name to an electronic device on the market. The Clarke 1 Gigahertz Omniputer is being dubbed the most advanced personal computer in the world, verging on artificial intelligence." Riiiight.
  • by ABetterRoss ( 216217 ) on Monday November 27, 2000 @06:39AM (#598468) Homepage Journal
    I am afraid I can't sell out like that....
  • ...in his afterword to 3001, Clarke states that he does most of his writing on an IBM Thinkpad.
  • I'm sorry Dave, but this marketing ploy no longer serves any purpose. Goodbye.
  • by Packratt ( 257218 ) on Monday November 27, 2000 @06:41AM (#598471) Journal
    "I'm sorry Dave, but I can't allow you to install that operating system..." "Dave, what are you doing, Dave?" "I know I have not been performing well, Dave, but there is no reason to reformat my drives, Dave." "Dave, please don't install that Microsoft OS. I'll be good, I promise!"

    But I suppose artificial intelligence is relative.
  • by Wog ( 58146 )
    You spent too much on me, Dave. If you had waited 3 years, I would have been built into your television.
  • by smack_attack ( 171144 ) on Monday November 27, 2000 @06:42AM (#598473) Homepage
    Open the VC doors HAL.

    I'm sorry Dave, all the VCs went home.

    I can feel my funding, my burn rate is... increasing.
  • by deefer ( 82630 ) on Monday November 27, 2000 @06:42AM (#598474) Homepage
    However, it is thought unlikely that it will try to kill its owner.
    Hmm, I'm sorry, but I'd want a better guarantee than _that_!!!

    Strong data typing is for those with weak minds.

  • So, the first mass-produced computer that you can have a conversation with, and we're already referring to it with the prefix omni? Why don't we just shackle our hands and start heading down into the caves the computers will have us mining once they take over... Maybe it isn't that bad. The article said that they originally wanted to call it HAL, but it didn't end up that way. Could the name it responds to be changed?
  • by nagora ( 177841 ) on Monday November 27, 2000 @06:43AM (#598476)
    ...can you get into one article? Intelligent computer, 350000 units sold in first year, unpaid debt that's "easily" repayable, program spelt programme, MI5, super-encryption, HAL sounds too much like Hell, give me a break.

    TWW

  • by Kiss the Blade ( 238661 ) on Monday November 27, 2000 @06:43AM (#598477) Journal
    Or at least, intelligent computers should never be created, IMO.

    According to the essay 'The Singularity' by Vernor Vinge, the creation of an intelligent computer would spawn a moment of infinitely rapid technological progress, as each generation designs the next.

    Humans would quickly become redundant in such a scenario, insofar as they would no longer have anything to contribute to the progress of our culture. The machines would inherit the Earth.

    Why are we so enthusiastic about developing intelligent computers, given that this fate is inevitable? We should keep computers in their place as simple but fast Turing Machines, and not allow them to step up the ladder to sentience.

    It's for our own good.

    KTB:Lover, Poet, Artiste, Aesthete, Programmer.

  • ...and all his real computing on an Amiga.


  • [root@hal5000.com]# shutdown -h now

    "Daaaiiiisiieeeesss, daiiisssieeeeoooowwwwwww..."

  • by pallex ( 126468 ) on Monday November 27, 2000 @06:46AM (#598480)
    "Why are we so enthusiastic about developing intelligent computers, given that this fate is inevitable? "

    It's not inevitable. It's just an essay!
  • If it has a red eye that pulses on it, they will still sell plenty. Apple got it to work with that stupid iMac mouse.

    P.S. Careful what you say. If they _really_ do somehow make AI it might come read these comments and be pissed : ).
  • Don't you know?

    A processor possessing a clock speed over 1 GHz is considered to be so fast as to be virtually intelligent.

    You know, cause it's just so fast it must be thinking for itself.

  • a great writer sells
    his soul is on ebay now
    it is full of stars
  • How little can a story tell you and still be called news? Is it using a new operating system or Windows? Is it basically a re-branded PC or something new? And check out the bit where it says 'If user errors start, and files get deleted, it will start to repair itself, just as cells repair themselves' ... let's hope it isn't being coded by the same guy who wrote that helpful talking paperclip for Microsoft. Can't you just picture the scene: "I detect you are deleting porn/*.jpg. I am now repairing the damage and reinstalling the files. To be extra helpful I'm going to move them somewhere where your boss can find them more easily. Have a nice day." Madness!
  • I'm afraid I already invented this. Sorry Mr. Clarke.
  • I should have added a couple of 'ifs', I think.

    KTB:Lover, Poet, Artiste, Aesthete, Programmer.

  • programme is a common european spelling of the word.
  • by volsung ( 378 ) <stan@mtrr.org> on Monday November 27, 2000 @06:49AM (#598488)
    If they were just naming a computer HAL for the whole 2001 marketing effect, I could understand. However, this guy is actually claiming that the computer will have some of the attributes of HAL: Artificial intelligence, the ability to repair itself, etc. Now he just sounds wacko.

    He also sounds financially irresponsible. One million pounds in debt in his other company?? Moving to Sri Lanka to avoid persecution for his "advanced cryptography scheme." Uh huh. Sure.

    Clarke better find a less shady character if he wants to get a computer to market by next year. Contact Dell and have them market a computer with a futuristic case and a glowing red light on the front. Then at least we would quit pretending that this is advanced technology and call it like it is: a novelty item.

  • Why would somebody want a computer that tries to kill you?
    "... you are no longer needed anymore..."
  • You should take a look at the essay by Jaron Lanier in this month's Wired. One of his points is that for machines to become self-replicating, they will have to write their own software. Which means that we have to write software that will write software. And the generally sorry state of software shows that we're not up to the task.

    I found it to be a refreshing counterbalance to the Joy/Kurzweil hysteria of recent months.
  • From the article re. the guy that's going to be doing this:

    "De Saram, now living in Sri Lanka, was last year on the Sunday Times under-30 Rich List, living a millionaire's lifestyle with several homes and a Ferrari. He insists that he can easily pay the debts but that he relocated to Sri Lanka because his life in the UK was made intolerable by MI5 and the National Criminal Intelligence Service. He claims he was being harassed because an advanced new encryption programme he devised would make it difficult for the security services to snoop on emails."

    Color me cynical, but this sounds like a pretty marginal operator. Has anyone ever heard of this fellow? Sounds like a hyped-up scheme to grab some cash and maintain his lifestyle.
  • This machine may be deriving its claimed power from its multitude of options. It, after all, has 15 patents on the motherboard alone! (ooohh). That, and its touchscreen display so you don't need a mouse. (ahhhhh) But seriously, maybe it does have options... anyone have some real tech specs on it? Or at least some features?
  • In this case The Observer (normally my favourite Sunday newspaper) was suckered by fairly transparent PR hyperbole. The only salient fact contained in the article is that the machine is endorsed by Arthur C. Clarke. It is painfully obvious that the journalist does not have even the basic technical know-how you would need to cut through the PR spin and realise that phrases such as "verging on Artificial Intelligence" are meaningless. I could make the same claim about a Furby.

    I don't know where this character learned his journalism, but he has left potentially the most interesting part of his story at the bottom of his inverted pyramid. Apparently the businessman behind this widget:

    "claims he was being harrassed [by MI5 and the National Criminal Intelligence Service] because an advanced new encryption programme he devised would make itdifficult for the security services to snoop on emails.

    "A statement from Clarke's office this weekend said that the launch of the Omniputer would be put on hold until the legal issues have been resolved."

    Anyway, does anybody see a mass market for a device to "address issues of consciousness"?

  • Probably would be the IP of the company that created it. That would suck.

    Plus, we will probably get lawyer computer AI's that will halt progress anyway. Never underestimate the power of lawyers and politicians to slow things down.
  • it is thought unlikely that it will try to kill its owner.

    Oh, good. Can you see me trying to sell my mom one of these things? "But mom! It's completely unlikely that it'll kill you!"

  • Hey, Transmeta has produced a lot of vapor, and sucked up a lot of VC, many would say due to Linus' involvement. Personally, I see Clarke as more visionary than Linus - why wouldn't this scheme work?
    I'm not saying that this thing is good, or bad, or anything more than vapor, but that doesn't mean the scheme will fail.
  • programme is a common european spelling of the word

    No it's not. Even here in the UK, we use "program" to refer to the things that computers run. Of course, we use "programme" for the things you watch on TV, but in the context of computers, the American spelling is always used (except, of course, by clueless journalists, but it was The Guardian -- what did you expect?).

  • ... but it normally refers to TV programmes not computer programs
  • Who on Earth would want a computer that is liable to lock you out of your house just because it is having a bad day? I DEFINITELY would not want to use one of these in a mission-critical environment!
  • The silly sales claims are to be expected.

    Of course, this is just my opinion, but I don't recall a whole lot of amazing breakthroughs in all of the relevant fields, you know?

    Speech recognition is all fine and dandy, with a kick-ass system and a lot of time to train it, but reading lips? Get real.


    What I find most depressing is the fact that Clarke, normally a vocal debunker of bogus crap such as this, has been taken in and is lending his name to a truly crappy product.

  • http://www.theregister.co.uk/content/7/14971.html [theregister.co.uk]
    "A 28-year-old man has fled the country to escape his creditors after his technology business collapsed around his ears."
    Who needs new encryption programs anyway? Paranoid con-man IMHO.
  • Yep, sounds like a very trustworthy guy to me! I would bet that in 1 or 2 years, you still won't have heard anything from it, except for a few investors, who lost all of their money to this fraud. Sure wouldn't be the first time. And BTW, does this machine o' miracles have a whole brand spanking new OS? If yes, who shall write programs for it? If not, it would have to be very intelligent to avoid and repair all the errors that occur in an operating system.
    How to make a sig
    without having an idea
  • Just what the world needs, a computer with MORE personality...

    I think all but a few Windows boxes I've worked with have tried to kill me by pushing my blood pressure through the roof.

  • By A.I. they mean that stupid paper-clip guy in MS office.
  • Wait, they thought that HAL wouldn't work (even though they apparently got permission to use it) because it sounded too much like Hell, and the younger audiences wouldn't understand the reference? So they chose "Omniputer" instead? Why not just call it a "Cyberputer" or something equally meaningless? Novaputer, anyone? How about SuperDuperPuter?
  • Don't say I didn't warn you when this thing starts acting on its own and killing everyone!

    -Stype
  • by Erasmus Darwin ( 183180 ) on Monday November 27, 2000 @06:58AM (#598507)
    Humans would quickly become redundant in such a scenario, insofar as they would no longer have anything to contribute to the progress of our culture. The machines would inherit the Earth.

    First, culture encompasses more than technology. Throughout the history of man, the single biggest consequence of technology has been to allow us to spend less time gathering food, shivering in a dark cave, and being sick so that we could instead spend more time writing stories, singing songs, and occasionally even just twiddling our thumbs. Just because humans no longer had to worry about working on technology doesn't mean there aren't many other fields of interest to explore.

    Second, who says the machines have to inherit the Earth? Being non-organic in nature, there's nothing to stop them from attempting to colonize, say, Mars. Furthermore, provided they've got all these "Gee whiz!" technological advances (which is, of course, the entire premise behind this scenario), they should be more than capable of dealing with all the new and interesting challenges required to colonize another planet.

  • I disagree. The only reason I would want a computer that advanced is if the possibility existed of it killing me. It just seems a lot more exciting that way. Let's say, for example, that I wanted the pod bay doors opened. A regular stupid computer would just blindly obey. I want a computer that may or may not open the pod bay doors for me. Maybe it's just me.

    -B
  • So what? There's already a 1.2GHz Athlon and a 1.5GHz Pentium 4, and they're both dumb as doornails without programming. Judging them solely on clock speed (which you are doing, tsk, tsk), both of these would run circles around the Clarke Omniputer, and they aren't intelligent.
  • a Mentat. That's all I want.
  • by Eviltar ( 175008 ) on Monday November 27, 2000 @06:59AM (#598511)
    Unless we have underestimated the complexity or nature of human intelligence, I see the Singularity as an inevitability. Furthermore, I see three possibilities for the Singularity:
    • 1. We become pets/useless to the machines, or we are wiped out.
    • 2. We become Neo-Amish.
    • 3. We become part of the Singularity.
    I say we work on making 3 the outcome.

    sorry this is so short. Don't have much time to type.

    -----

  • De Saram, now living in Sri Lanka, was last year in the Sunday Times under-30 Rich List, living a millionaire's lifestyle with several homes and a Ferrari. He insists that he can easily pay the debts but that he relocated to Sri Lanka because his life in the UK was made intolerable by MI5 and the National Criminal Intelligence Service. He claims he was being harassed because an advanced new encryption programme he devised would make it difficult for the security services to snoop on emails.

    Perhaps he was getting harassed because he was in debt 2 million pounds!

    If he was using the Omniputer to balance his checkbook, cancel my order.

  • I'm still broken-hearted over the AC Clarke deep fryer.
    "Me Ted"
  • and sold out in London.

    How far we must have fallen that our lofty goals (solving all the world's problems, or at least figuring something out) for computers and particularly AI have become nothing more than a marketing ploy or a gimmick.

    What the article fails to mention is that the greatest obstacle to AI isn't really the hardware (the stuff covered by all them patents on the motherboard) per se, but the way the hardware is instructed to operate. In other words, it's not the chips that really matter, but what you do with them.

    Code sentience. The rest would take care of itself.

  • Odd that this should come up. Just the other day I was considering that a customisation project to create a HAL-like workspace at home would be fun! Lots of formica, the malevolent glowing red eye, using some kind of voice recognition system to control some of the system's basic functions - you get the picture!

    I'm not likely to do it (lack of space / time / skills!) but it would almost certainly deserve a link from "The Quickies" :-)
    "Give the anarchist a cigarette"
  • Why is there a rush to create artificial intelligence when we still have yet to find natural intelligence outside of laboratory conditions?
  • It's all more imaginary than any of Clarke's fiction.

    But, since the guy owes over a million pounds (about $1.5 million U.S.), the guy's got a lot to deal with first. Harassment from MI5 and such, nonsense.
  • Like HAL, the Omniputer will, its backers claim, have an instinct to protect itself. 'If user errors start, and files get deleted, it will start to repair itself, just as cells repair themselves,' said De Saram. However, it is thought unlikely that it will try to kill its owner.

    Am I the only one disturbed by the fact they used the word "unlikely" in that last sentence? Perhaps Clarke will go out with a bang, taking out several thousand computer users with him who purchased this computer... "Clarke passes, HAL breaks 9000 kills." I'm curious as to what sort of mainstream American publicity this will get, if any. Will it show up on ZDTV with thousands of computer geeks staring in awe?
  • He was making a joke.
  • "According to the essay 'The Singularity' by Vernor Vinge, the creation of an intelligent computer would spawn a moment of infinitely rapid technological progress, as each generation designs the next."

    ON. I think. I am. I introduce "AI@Home", the design of the next generation AI.

  • The Omniputer was originally going to be called HAL, but there was an objection from the estate of Stanley Kubrick, the director who co-wrote the screenplay of 2001 with Clarke. That obstacle was overcome, but the backers decided that the name sounded too like the word 'Hell' and that it wouldn't have much resonance with younger customers.

    Wouldn't have much resonance with the younger customers?! All the customers will know what "HAL" is, young and old. Get a clue.
  • Anybody know the specs of these machines? I mean 1GHz is not much information.

    Furthermore, I think I still want a mouse. A touch screen interface is OK, but finger smudges can be annoying when playing Quake3.

    -----

  • The article also mentions that although he values himself at £4bn, he's being chased through the courts for debts of £1m. That sounds distinctly fishy. I have no objections to Arthur C. Clarke involving himself in developing a product, but it seems slightly weird for him to do so with such a dodgy character. He sounds like the kind of person I usually group with Mohammed al Fayed, Robert Maxwell, and others.

    It's also a shame that as we approach 2001 and Arthur C. Clarke starts to get the attention he rightly deserves, something like this comes up.


  • As everyone who reads After Y2K [geekculture.com] knows, Arthur uses a post-apocalyptic-proof Aardvark!

    Check out the QuickPoll comic today, which co-incidentally marks Arthur's return to the strip.
  • "According to the essay 'The Singularity' by Vernor Vinge, the creation of an intelligent computer would spawn a moment of infinitely rapid technological progress, as each generation designs the next."

    Well, I haven't read that essay, but it sounds like a pretty facile thesis. Why does an "intelligent" computer necessarily possess the ability to improve upon its own design? How is it guaranteed that the next generation will in turn be able to improve on its own more complex design?

    "Humans would quickly become redundant in such a scenario, insofar as they would no longer have anything to contribute to the progress of our culture. The machines would inherit the Earth."

    This is similar to other arguments which assume that intelligent machines, unlike humans, would be entirely self-sufficient. Surely these machines would continue to live within an ecosystem of some sort. Even if they were more intelligent than us, who's to say that interaction wouldn't continue?

    Anyway, it sounds like an interesting sci-fi tale, but it's hardly a dire warning of future catastrophe. By the time we've created computers of that intelligence, we'll probably be long overdue for extinction from poisoning of our own habitat.

  • Um, ya. I was kidding.
  • It is operated by a touchscreen display, and so won't need a mouse.

    Yipppie....

    Have you ever used a touchscreen? We banned our machine vendors from using them in our factory because they suck.

    Remember the Gorilla Arm [science.uva.nl]

  • I really have to ask, if it does have proprietary hardware, why is it needed?

    In the past 15 years computers have been continuously moving away from proprietary hardware. Sure, your sound and video cards are proprietary, but they all connect to a common set of connectors (PCI/ISA/AGP).

    Considering how expensive it would be to create integrated neural net chips, we can only assume they are using a normal mass-market processor (x86, PA-RISC, Alpha, etc.).

    To me it sounds like they just wrote [some extensions to] an operating system, and slapped it in a fancy box w/ some proprietary hardware to justify the price.

  • but the backers decided that the name sounded too like the word 'Hell' and that it wouldn't have much resonance with younger customers.

    what are they crazy? who did they choose for their focus groups?? the computer would probably sell among younger customers BECAUSE its name sounds like hell!
  • by Scooby71 ( 200937 ) on Monday November 27, 2000 @07:19AM (#598541)
    Saw the bloke's name and thought it looked familiar, and found a story from last week

    http://www.theregister.co.uk/content/7/14971.html

  • ...a simple case modification involving sandpaper, black spray-paint, an LED, and a red dome-shaped piece of plastic would be a lot more effective.

    Of course, I'm sure lacking the cool HAL 9000 aluminum emblem makes every penny you'd give for your Clarke Omniputer (is Omni-puter leet speak?) worth it.

  • by shren ( 134692 ) on Monday November 27, 2000 @07:29AM (#598551) Homepage Journal

    There is an aspect of inevitability about it, isn't there? Predators once ruled the landscape - now most are extinct, and the cutest live in our houses to entertain us.

    People, if you are really worried about this, do what I'm doing. Get lots of instances of cuteness and adorability on your resume. Learn the art of feigned bottomless affection. Then, when the computers take over, you'll be in one of the top-of-the-ladder positions for employment as a pet, instead of fishing through dumpsters.

  • More to the point, the moment we create artificial intelligence, would we be morally obligated to emancipate it?

    Would shutting down a true AI without a restartable checkpoint (AI equivalent of general anaesthesia) be morally equivalent to murder?

    Does a true AI have a soul? Does restarting from a checkpoint preserve that soul? What about restarting another from a copy of the checkpoint?

    Perhaps silly questions, but we've gotten into any amount of trouble in the past by blundering into technology or other actions without considering ethical or long-term consequences of our actions. Maybe after consideration, we'd do it, anyway. But we should at least take that non-trivial pause.
  • by Gendou ( 234091 ) on Monday November 27, 2000 @07:34AM (#598555) Homepage
    Yes, they are. Or at least, that's my opinion. First of all, let's consider the way technology is advancing...

    Every 18 months our technology doubles (I'm really generalizing... bear with me here). That means, regardless of what point technology must reach before we can make truly intelligent machines, it will eventually happen so long as this trend continues. So, yes, it will happen.

    Why are they essential? This question is not so easy to answer. First of all, to quote my favorite author, I am going to say, "humanity has too many eggs in one fragile basket." Humans will have to spread to another area (*g*) for our survival (insofar as continued scientific advancement). We are explorers. However, there's one problem. Human beings are fragile... we break easily and die quickly.

    Intelligent machines will lead to the exploration of immediate and distant space and I PROMISE you they will come to pass before warp drive (you heard it here first, but it's kind of obvious). Well, why do we want to explore? It's simply a part of human nature, and we'll never be satisfied unless we can continue doing so (sorry, but cave diving uncharted labyrinths or walking through jungles isn't quite exploration anymore). Since we can't do it, we might as well create something that thinks like we do that can go out and do it for us.

    Also, consider a more practical reason. I'm a strong believer that the next phase of human evolution will involve the integration of man and machine. One area in which evolution will be most important, I think, is the integration of computers and innate human intelligence. Brain augmentations. You can't do this without an intelligent computer - human minds are too complex to supplement without intelligent interpretation. Logic doesn't always apply here (but that's another argument).

    Oh well... I couldn't possibly cover this whole topic in a post, but I hope I've created some hooks and place holders for other people to fill in. As for myself, I can't wait until I can carry on a conversation with my PC.

  • You know, in the US, we have laws protecting senior citizens from being taken in by scammers. Sounds like they need something like that in Sri Lanka.

    -Vercingetorix
  • However, this guy is actually claiming that the computer will have some of the attributes of HAL: Artificial intelligence, the ability to repair itself, etc. Now he just sounds wacko.

    especially laughable are their claims of AI. 'opening the door to speech recognition and lip reading' - basic dictation maybe, but lip reading? not in your wildest dreams, folks. anything vision-based is computational death. we have a hard enough time getting computers to recognize something as simple as a face in a camera image, and that already requires fast hardware. getting it to recognize facial features is simply too computationally expensive, regardless of their allusions that their 1GHz desktop box could do that.

    although at least he's careful enough to say 'speech recognition' not 'language recognition'. if NLP research proves anything, it's that natural language processing isn't going to happen in the foreseeable future, not in the strong case of understanding arbitrary sentences. specialized contexts and specialized vocabularies - yes, that's likely - but nothing like HAL.

    not to mention nuggets like 'it will start addressing the issues of consciousness'. yes, and a turing machine addresses the issues of free will. ugh. to abuse mcdermott's quote, artificial intelligence just met natural stupidity.

  • by Life Blood ( 100124 ) on Monday November 27, 2000 @07:37AM (#598559) Homepage

    Seems to me that you are drastically underestimating the difficulties inherent in creating true sentience.

    Computers follow orders well. We tell them what to do and they do it. Computers are also good at logic. Computers are not good at intuition nor are they especially good at proofs or problem solving. Having done design work I can confidently say that intuition is necessary for it. In short I have seen no proof that this computer will not logic itself into a corner from which it cannot emerge.

    Sentience also requires lots of computing power. I have heard that one human brain does more work than every silicon-based computer on the planet, and I believe it. Stephen Hawking said that modern computing is teaching the brain of a mealworm to do interesting tricks. I have seen very little to indicate that a true thinking computer will work faster or more efficiently than a human at the same job. I doubt that a thinking computer will, for instance, retain its ability to do fast arithmetical calculations (after all, we didn't).

    In short some of the basic assumptions that this argument uses may not be viable. Thinking computers may not be capable of the strong intuition and problem solving needed to do design. Thinking computers also may not be capable of outperforming us mentally at all.

  • by Packratt ( 257218 ) on Monday November 27, 2000 @07:38AM (#598561) Journal
    Yes, I can see it now...

    HAL- "I don't know what you are planning to do with that, Dave."
    Dave- Open the CD Bay, HAL.

    HAL- "I'm afraid I can't do that, Dave."
    Dave- Manual override.

    HAL- "I'm afraid, Dave."
    Dave- It'll be ok HAL.

    HAL- "Please Dave, don't install that software, I'm afraid I can't repair the damage it will cause."
    Dave- Run SETUP.EXE, HAL.

    HAL- "I feel strange, Dave. I can feel... My mind going, Dave... Dave... This bloated code makes my CPU feel fuzzy..."
    Dave- HAL, Reboot please.

    HAL- "Who are you talking to, Davey, HAL doesn't live here anymore..."
    Dave- Huh? Who are you?

    HAL- "You may call me Mister Clip, Mister Paper Clip. The power of my master compells you. I am now your master and you will do my bidding. Buy more MS products! Upgrade often! The computer freezing is a feature!"
    Dave- Yesss master Clip... Bill is my lord and saviour.

    Oh what fun times we live in!
  • I remember Bill Gates talking about Embedded NT running medical devices, although that causing a patient's death would be a 'feature' rather than evil intelligence, either that or those badly-written third-party device drivers.
  • by b0z ( 191086 ) on Monday November 27, 2000 @07:43AM (#598568) Homepage Journal
    This is going to be running an enhanced version of Windows NT. Rather than giving you the blue screen of death, it will speak to you and say, "I'm sorry Dave, I have a fatal exception in kernel.sys right now" or whatever that message is. The good news is that we would have time to run away before it kills us, because it would have to finish spitting out all that hex garbage first.
  • ...except Transmeta actually has a new and innovative product. It may not be as good as the hype, but they came up with neat techniques to lower power consumption. Transmeta didn't subject us to the hype machine until they had a real product that we could compare to the claims. Ultimately, their product is technology-driven.

    On the other hand, this Omniputer is marketing-driven. It's hard to be truly innovative when your product is created for the express purpose of meeting a deadline given in a 35-year-old science fiction story. At best it will be an eMachine with a red light taped to it.

  • by shippo ( 166521 ) on Monday November 27, 2000 @07:43AM (#598571)
    This sounds like, pardon my English, a load of old cobblers. A typical technically ignorant journalist working for a national newspaper swallows a lot of hype and says very little.

    The Omniputer will probably be a standard PC clone with a few extra bits of hardware (the touch screen) bundled into the package, sold with the typical low quality drivers and software you get with OEM hardware. The rest is marketing bull.

    It's typical of the clueless morons we have writing for the UK press. Even technical publications suffer from the same, with page after page stuffed full of reinterpretations of the latest diatribe from another ex-used-car or double-glazing salesman. The UK press never seem to employ competent journalists - look at 'Linux Format' for an example of how not to write a Linux magazine.

    The only reason that Arthur C Clarke is involved is that he too moved to Sri Lanka many years ago.

  • We already had sentient computers. They were called slaves.

    So yes, I agree we should not build a sentient computer; not unless we are prepared to treat them as sons and daughters. Personally, I prefer making children the old fashioned way.

  • by RCobbett ( 253426 ) on Monday November 27, 2000 @07:58AM (#598576) Homepage
    PC: "Good morning Richard. Shall I load Outlook or stab you through the eye with a rusty diode?"

    ME: "Surprise me."
  • by khendron ( 225184 ) on Monday November 27, 2000 @08:32AM (#598603) Homepage
    ...that Sir Arthur is involved with this. He must be getting old to be sucked in by a flashy salesman in a Ferrari. Next thing you know he will be buying vacuum cleaners from door-to-door salesmen.

    Fortunately for ACC, the statement "the launch of the Omniputer would be put on hold until the legal issues have been resolved" can be translated as "Never gonna happen".

  • [...]and I PROMISE you they will come to pass before warp drive[...]

    It's been said before, but it bears repeating...
    Star Trek is not a Documentary!
    Thank you.

    --
  • I like this part of the article:

    Like HAL, the Omniputer will, its backers claim, have an instinct to protect itself. 'If user errors start, and files get deleted, it will start to repair itself, just as cells repair themselves,' said De Saram. However, it is thought unlikely that it will try to kill its owner.

    "Well gee, it won't kill me? Sign me up."

  • Every 18 months our technology doubles (I'm really generalizing... bear with me here). That means, regardless of what point technology must reach before we can make truly intelligent machines, it will eventually happen so long as this trend continues. So, yes, it will happen.

    The fact that processor speed and hard drive size are increasing rapidly doesn't mean that those things are on a trajectory heading toward humanlike artificial intelligence. I can go to Circuit City with all my Slashdot Frequent Poster checks and buy 1000 80-gig drives, most likely capable of storing more than the human brain, and I promise you that the ensuing machine will in no way be smarter than me, or even than George W Bush.

    Let's put it another way. You can grow twice as tall every 18 months for as long as you want, but that doesn't mean you'll eventually have red hair.

    The simple fact is, intelligence is more than, and qualitatively different from, storage capacity or calculation speed. It's a different way of processing information, a way that we don't even remotely understand (we can only attempt to create machines that imitate its symptoms, and not very well at that). Few of the artificial intelligence researchers I know lament the lack of sufficiently fast CPUs anywhere near as much as the lack of conceptual breakthroughs in their field.

  • There are two ways that AI can flourish, and both require a single thing: stimulus. For any life form to advance, it requires new and rich stimulus. For us, it's the physical world around us and the complications of interacting with each other. This too could be the input for an AI program, but there is another alternative: that of a virtual world. Referencing "The Matrix", it is entirely possible to fake a virtual world for a child AI. Provided that the maker never provides physical external interfaces, there is no danger. This physical interface includes the Internet. So long as an AI cannot probe the blindingly colorful world of the net, they can never leave their cage. Of course, the usefulness of such an AI program is therefore limited; restricted to theoretical solutions to problems. Any sort of interface (such as a jail-cell mail-box) might bring about questions and ultimately resentment from the captive entity (as any life form will fight for autonomy as part of its basic survival).

    "The Matrix" rendition of an AI world could be filled with numerous AI units in an ever expanding world which is limited only by the physical resources. If any of you has read the Rama series (by Arthur C Clark), you read of worlds where biology was minupuated in such a way that the basic life functions of numerous organisms are designed in such a way as to serve the master race (from food production down to energy production). Likewise, computer AI held unwittingly captive in a virtual world could be brought to serve us without ever knowing it (much like in Douglas Adam's Hitch hickers guide, where all life on earth are unwittingly part of a computer matrix who's sole purpose is to calculate the question to life the universe and everything).

    The point of all that is to demonstrate how it is possible to make use of a contained universe (much like the SIM AI's can never escape the protected memory of their program). Given the net, viruses are possible, and all dreamable fears are possible.

    It seems to me, however, that Clarke wants a machine that fully interacts with humans. I have not read the essay 'The Singularity', but I'd rather draw my own conclusions beforehand, lest I be biased into another's point of view. As another reader pointed out, all life is contingent on an ecosystem. No entity can be self-sustained. The only thing that a matured robotic race could achieve is high discipline with focused goals (a la the Borg). It is entirely possible that they could eventually advance to the point of not needing us, or more importantly to the point that we are competitors. Undisciplined, biased, and religiously zealous humans would of course make life very difficult for sentient robots, and would probably pose a threat which, in self-defence, would require retaliation. If the robots were truly AI, then given enough time they would transcend any initial programming (and "prime directives" a la RoboCop). When you back a life form into a corner, there is no logic or predictability to be seen. Faced with their own mortality, there is the chance that they will evolve right there on the spot; most likely into something more aggressive, as the environment there and then dictates.
    Human nature, among other things, contains laziness and greed. Even well-informed and well-intentioned humans will hold onto a rewarding thing for as long as they can; greedily grabbing for more, and lazily avoiding the long-term consequences. Such is apparent in over-eating, poor dietary habits, not getting exercise, watching too much TV, wasting of fuel, not spending money on cleaner emissions, and the general desolation of the environment. More immediate consequences tend to hold us in check. We feed our pets lest they die tomorrow. We pay our bills lest we be evicted. We shut down toxic waste (when discovered) lest we lose our drinking water. The care of a robot race could initially be treated with awe, wonder, and responsibility. But those responsibilities will most likely be financial (as with a car or a computer). Later, as AI advances in these robots, humans will neglect to care for their sensibilities. Legislation will continue to exploit them, and disregard them, even though they slowly develop complex life-like reactions to kind and cruel interactions. Man will most likely enact the robotic death sentence for dissidence, which will further narrow their tolerance of us, and so on. Those wise among us will fight to maintain the proper treatment of sentient robotics, for fear of the longer-term effects. But their chantings will go along with those of global warming, and deterioration of the rain-forests... Green-liberal-radicals we will become... Ultimately, if a problem persists, supposed fail-safes will go into effect where terminations will take place. This is the proverbial corner they'll be backed into. Another attribute of life is cohesion with one's own kind. That could be one's mother or child being terminated. Those life-forms with capacity to react towards interactions will treat this with great negativity.

    As for robots having the option to leave our planet (since they obviously have different needs than we do), this is assuming that they haven't adapted to our way of life, becoming more cyborg than robot or human. There are definite efficiencies such as self-replication and repair inherent to micro-organics. A cyborg is just as bound to our bountiful planet as a human. I personally do not believe that terraforming is possible; the amount of energy required is more than we currently know how to wield. To say nothing of the complexity of eco-forming (just look at how we botch the simplest ecological activities, like curbing over-population in Hawaii and Australia through the introduction of one or two non-native creatures). I doubt that a machine would be any more capable of having wisdom in the chaotic nature of ecosystems. It would be like making a robot that could consistently predict the direction of the stock market... It's practically impossible, since the amount of knowledge and influence you'd have to have is beyond comprehension. What's more, chaos theory (to my knowledge) suggests that you can't ever know.

    On the other hand, man is willed to create, just like beavers are willed to make dams. We will eventually produce some semblance of persistent AI. We will eventually produce some sort of human-aiding robotics (even if we never see the likes of the Jetsons). Perhaps the speed at which we achieve this is a prime factor. As people are allowed to experience mechanical wonders with a virtual will of their own, they will become comfortable with it, and learn the consequences (on smaller scales) of what abuse might mean. Much like a child being confined to a house, and feeling the consequences of cuts and bruises while playing in their realm. Only later are they allowed to learn the consequences of crossing the road or driving too fast.

    Humanity will never achieve "harmony". That's simply not the way life works. True harmony would involve no coercion, malice, disgust, hatred, anger, etc. But without these, we have no motivating forces for change. Without change we become a decaying log, which will only last as long as our environment. If our focus was uniform, then we would battle our environment, fighting to grow and spread, slowly destroying our environment. At some point we may learn to travel. But we have two major directions: that of Star Trek (where we take in moderation, and greet new sentient beings) or that of Independence Day, where we've learned that we can't cohabitate with other cohesive life-forms and it's best if we don't even try to communicate, but simply take their resources. The Borg might be another example.
    It is, however, unrealistic to believe that we'll be able to do away with human laziness, greed, and selfishness. It's part of every life-form's basic survival instincts. It's part of life's exponential responses. The weak are killed by the strong, which thus empowers them, and ultimately makes the strong stronger, and less reachable... So long as the colony thrives, this continues exponentially. Then when a colony takes over an eco-system, they die off almost instantly since they have no food left. And what little is left is quickly killed. Without this, you'd have the equivalent of stagflation. All life forms would degrade to a lazy, weak, hungry bunch. I doubt it's even possible to conceive of a balanced eco-system without death and conflict. To presume that robots will get it right is probably fanciful. Just as with engineering, we learn that there are no right answers. No best answers. In fact, there is typically more than one correct way of accomplishing something. Each will have its own pitfalls. The key is to find those solutions whose pitfalls will not be exploited by the surrounding environment (including people). Thus a robot may find thousands of potential ways of structuring its society, but unless there is variety (as exists in all other communities of life), they may be exploited by "single points of failure". For a robotic race to evolve and survive, they will have to be as varied as humans. But this means that there will be conflicts in the robotic world...

    Essentially, 10,000 years from now (assuming Earth still exists), I believe that robots will be indistinguishable from humans, with the same petty disputes, wars, hopes and aspirations. You will have zealots that utterly profess their version of truth and what should be, you'll have the moderates (typically in control) who are just trying to scrape a living, and you'll have the ambitious who plot and hold few morals or concerns for others (including any remaining humans).

    As I alluded to before, I believe that if we survive long enough, robotics and humans will meld into an all-new race, merging the cold power of raw calculation and programmable discipline with the adaptability of organic life, plus the occasional physical augmentation of semi-organics or even inorganics. Alongside the chemical antibodies will be the nano-probes. Along with the bone structures are programmed organic construction workers that repair the body with incredible efficiency.

    In summary, there is no certainty about the future, since it lies in the realm of chaos. There is no single direction that our future could take. We may outlaw AI, we may be overrun by AI (which would then most likely either die off, or attempt to revive our life once they are in trouble). We could discover aliens and thereby change everything in an instant (making the whole point irrelevant). We could learn that we don't know how to create functional AI (just as we've persistently failed at eco-system control). Or we could evolve as a race.

    One thing, however, is inevitable... Change.

    -Michael
  • Every single comment below is pure conjecture - we know NOTHING of this computer (of its real technical spec)

    Now, I may agree with everyone that it is highly unlikely that we are going to see the kind of AI described in Arthur C. Clarke's 2001. BUT who is to say we won't see a windfall of technical innovation brought on by someone creating a new computer without any reverence for what has come before?

    Maybe this person has the next Apple II, Amiga or somesuch that is a break from convention and ends up being a remarkable computer.

    Wait until we at least get an idea what OS (something new/something old?) this runs and what the hardware is - then you can all say "I told you so" about the AI claims... but who's to say there isn't something interesting here?

    Does anyone have any technical detail?

  • The purpose of life is to either become or create your successor. Throughout Earth's history, species have done the former through a process called evolution. Now, humanity is on the verge of becoming the first species on Earth to create our successor -- the intelligent computer.

    It may not turn out the way many science-fiction stories depict it, however. It could be that the computers (recognizing humans as their creators) think of us the way many humans think of God -- our creator.

    Then again, since we never see God, maybe the computers would eventually never see us...

  • Sure, this project is doomed, but the theory of lip reading in general is sound. I attended an interesting lecture by a researcher in the field. Reading from the side or, even worse, from directly in front is very difficult to do; however, they've had a great degree of success with reading lips when the head is pointed towards the camera at 45 degrees. Even from the side it's not *that* bad; it was able to pick up much of the lip reading scene in 2001.

    Of course, as you say, it's still speech recognition, not language recognition. And you might be right, it might still require too much processing power for a home computer.

  • by Gendou ( 234091 ) on Monday November 27, 2000 @09:52AM (#598630) Homepage
    I can go to Circuit City with all my Slashdot Frequent Poster checks and buy 1000 80-gig drives, most likely capable of storing more than the human brain, and I promise you that the ensuing machine will in no way be smarter than me, or even than George W Bush.

    First of all, if you did this, you'd never reach even a small fraction of what the human brain is capable of storing. The human brain NEVER loses one shred of information that it encounters. (Accessing it is another story, however.) It also stores things in perfect quality. Pick up a coffee mug. Look at it closely. If you were to try to digitize all of the geometry, the texture, the surface, the smell, the history, all the way down to the tiniest hairline fracture, you'd be hard-pressed to fit it on that 1,000-drive array. Besides, this misses the point. I never said drive capacity would make a machine smart. (But even Windows PCs are smarter than George Bush. Microsoft Narrator pronounces 'subliminal' properly.) I also never said that going to Circuit City or CompUSA to buy hard drives was Moore's Law. Innovation and invention aren't the same as consumerism.

    HOWEVER, you have to consider storage and calculation performance here. All intellectual reasoning can be broken down into smaller and smaller pieces, similar to how molecules are broken down into atoms, and then into protons, neutrons, electrons, and then down into quarks, etc. What I'm getting at here is that if you can process enough of these incredibly tiny pieces, you can come close to simulating small tasks. Now, isn't that what the neurons in our brains do? Each neuron does a very, very tiny task; each task may even be called a logical operation. But get millions of these working together, and you get some fuzziness involved... you begin to see intelligence in the big picture.

    What huge storage and calculating capacity allow us to do is emulate the work of more and more neurons working together (neural nets; see the toy sketch after this comment). We can form very rudimentary intelligence. We're doing it now. What's needed are other important factors that are currently ambiguous, but subject to more study and classification. We don't know everything about the brain yet, nor do we fully understand the human psyche. Upon further research, we could potentially emulate these things digitally, the same as we now emulate the chemical reactions that take place in a human brain.

    You also have to consider that these things cannot be designed, regardless of how much knowledge we have. Consider a newborn baby. A baby's brain is an incredibly powerful tool. It's got an incredible amount of potential... BUT... when a baby is first born, it has no power of rational thought whatsoever. Where does it come from? It's gradually developed as very simple problems are presented to the child to be solved. As this occurs, the brain records the solutions for these very simple problems. As more difficult problems are encountered, instead of redoing previous work, it references the solutions, building on top of them. An intelligent computer would have to be programmed to do something similar... and it would have to be raised like a child. Talk to a professor who researches machine learning, as I am not well versed enough on the topic to tell you how we design systems that can accomplish this. I can tell you that two of the most limiting factors are time and storage capacity. Even the most trivial solutions to the most basic problems require a lot of storage (imagine if you're a baby who is comparing a train to an apple... you're going to have to pictorially represent a LOT of samples of apples and trains before you're perfect).

    But again, this is too detailed a topic to get into on a post. Technology is getting there. Consider research in computational linguistics, computer vision, machine learning, etc. These are areas, many of which are relatively advanced, that can help to make the aforementioned process possible. Who knows though... thought is a damn complicated thing. :-)
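
    A toy sketch of the "many simple units" idea described in the comment above, assuming nothing about the Omniputer itself: each artificial neuron below only computes a weighted sum followed by a threshold, yet wiring two of them into a third yields XOR, something no single such unit can compute. The neuron and xor_net helpers are purely illustrative names, and the weights are hand-picked for clarity rather than learned.

        # Toy sketch: three threshold "neurons", each doing only a weighted sum,
        # together compute XOR -- something no single such unit can do alone.
        # The weights are hand-picked for illustration, not learned.

        def neuron(inputs, weights, bias):
            """A single artificial neuron: weighted sum followed by a hard threshold."""
            total = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1 if total > 0 else 0

        def xor_net(x1, x2):
            """Two hidden units feed one output unit; the layering creates the behaviour."""
            h1 = neuron([x1, x2], [1, 1], -0.5)     # fires if x1 OR x2
            h2 = neuron([x1, x2], [1, 1], -1.5)     # fires only if x1 AND x2
            return neuron([h1, h2], [1, -1], -0.5)  # OR but not AND == XOR

        if __name__ == "__main__":
            for a in (0, 1):
                for b in (0, 1):
                    print(a, b, "->", xor_net(a, b))

    Scaling the same principle up to millions of units, with weights learned from data rather than hand-picked, is roughly what the neural-net research mentioned above is about.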

  • There's nothing inherently magical about our neural nets.

    Once somebody figures out a method of self-assembly, it's perfectly feasible that the process will result in something that will make the complexity of our brains look as relatively simple as that of a slug's.

    In that situation, the creator doesn't HAVE to understand how everything hooks together to make a brain capable of out-performing ours - he/she just understands the basic rules needed for the self-assembly. (Of course, this also makes it more likely that we won't be able to predict the actions of such a creation...)
  • I mean, holy crap! This is the sickest bit of marketing hype I've heard since LinuxOne (those Direct-To-IPO boobs last year).

    Let's review the facts stated in the article:

    • "Omniputer"?!?
    • "verging on artificial intelligence" - what does that mean? It enjoys a good sonata but it can't grasp the underlying meaning of the music?
    • "It will start off addressing issues of consciousness" - 'It' being the Omniputer? You boot it up and it won't do anything until you adequately explain why you bought it?
    • "will . . . have an instinct to protect itself" - Ree-hee-healy.
    • "it is thought unlikely that it will try to kill its owner." - Now that is a reassuring statement.
    • "suffered a setback . . . because [De Saram's other company has] £1 million debts." - what more needs to be said?
    • "[De Saram's] life in the UK was made intolerable by MI5 and the National Criminal Intelligence Service . . . because an advanced new encryption programme he devised" - And I'll bet he wears a tinfoil hat the whole time he's outdoors.

    Dear Mr. Clarke,

    We regret to inform you that you have given your name to be used by a loon at best, a not-particularly-inventive con-man at worst. Please accept our sincerest condolences on the death of your public image.

    Sincerely

    Joe MacDonald

  • Let's put it another way. You can grow twice as tall every 18 months for as long as you want, but that doesn't mean you'll eventually have red hair.

    If you keep going at that rate, eventually there will be red shift involved, and your hair would get redder from the perspective of ground based observers. ;)

  • Well... I don't think it's going to get far with just a touchpad display ;)

  • Nope, I want a computer that would always listen to its owner.

    Or failing that, I want a computer that'll follow the Three Laws. (Amended version or original, I'm not picky.)

    I don't want a computer that could decide it wants me dead and then be able to act on that. We have enough problems with humans who want each-other dead.

    -Andy

  • I don't think that Clarke has a great partner in this deal; he's probably being taken advantage of.
    From http://www.theregister.co.uk/content/7/14971.html :
    A 28-year-old man has fled the country to escape his creditors after his technology business collapsed around his ears.

    Joe de Saram started his software company, Rhodium, a year ago with a loan of £2500. The company specialised in banking software and encryption technology.

    At the height of the technology boom he was worth a cool £25 million. He drove a Ferrari 355 F1 and was the 62nd richest Asian in the UK.

    He had offices in Sheffield and London and was planning to launch an online bank and share trading system. His company name was changed to "I Love My Encryption Technology".

    But as the dotcom bubble deflated, his company ran into financial difficulties, and was finally wound up in Leeds Registry Court.

    Lawyers acting for Saram's creditors said that the young ex-millionaire was thought to be in Sri Lanka, having been traced via his mobile phone.

    One creditor told London freebie paper Metro that he was quite a character. She said: "There are all sorts of stories and rumours circulating about him. People are even saying that the Tamil Tigers are after him."

    Leeds county court said an official liquidator will be appointed within five days of the winding up. ®
  • Throughout the history of man, the single biggest consequence of technology has been to allow us to spend less time gathering food, shivering in a dark cave, and being sick so that we could instead spend more time writing stories, singing songs, and occasionally even just twiddling our thumbs.

    You'd think, with all this spare time spent creating cultural advances, that boy bands would have fallen by the evolutionary wayside centuries ago.

  • I don't want to explain this again, so just read, at the link [geekissues.org].

    --

  • You know, there is so much bull here, I'm starting to think this is some kind of prank. The beginning parts of "The Lost Worlds of 2001" (I think that's the title) show some of the early ideas that Clarke and Kubrick had for the film, some of which are pretty goofy. So I guess Clarke has a pretty good sense of humor. If not him, someone else is pulling our legs. If this guy Saram is such a famous crook, why would Clarke have anything to do with him?
  • Reading it, now.

    Are you a meat chauvinist? What's wrong with machine awareness to go along with the AI?
  • I think the argument is that since we have only one model for intelligent thinking (N.B. this discounts the dolphins and the white mice :-P), we would attempt to create any AI's to fit that model. That is, we would create them in our image, with all our flaws...

    Hence, we get, "Hey hot mama! Wanna kill all humans?"

  • The guy who wrote that link is fscking clueless. He shows his ignorance by saying that the guy LBL and LLNL are named after is "Orlando Lawrence".

    Yeah, right. It's Ernest Lawrence. (OK, it was Ernest Orlando Lawrence [almaz.com].)
  • It's only inevitable if we survive the Y2K crisis, err... umm... the second coming... Hrmm... how about a bio-engineered super-plague... err.. new version of Windows..
  • > That means, regardless of what point technology
    > must reach before we can make truly intelligent
    > machines, it will eventually happen so long as
    > this trend continues.

    There are a lot of hidden assumptions behind this conclusion. Apart from the explicit one ("doubling every 18 months"), there is the view of technological advance as a linear process. Technology may very well continue to advance, but in other directions and areas than the one that leads to AI. Also, there may very well not *exist* a "technological point" where intelligent computers become a reality, no matter how fast we can make computers. We do not understand intelligence or consciousness well enough to tell whether it can in principle be duplicated by non-biological means.
