AI

Steve Wozniak Now Afraid of AI Too, Just Like Elon Musk 294

quax writes Steve Wozniak maintained for a long time that true AI is relegated to the realm of science fiction. But recent advances in quantum computing have him reconsidering his stance. Just like Elon Musk, he is now worried about what this development will mean for humanity. Will this kind of fear actually bring about the very dangers these titans of industry dread? Will Steve Wozniak draw the same conclusion and invest in quantum computing to keep an eye on the development? One of the bloggers in the field thinks that would be a logical step to take. If you can't beat 'em, and the quantum AI is coming, you should at least try to steer the outcome. Woz actually seems more ambivalent than afraid, though: in the interview linked, he says "I hope [AI-enabling quantum computing] does come, and we should pursue it because it is about scientific exploring." "But in the end we just may have created the species that is above us."
This discussion has been archived. No new comments can be posted.

Steve Wozniak Now Afraid of AI Too, Just Like Elon Musk

Comments Filter:
  • OMFG (Score:5, Funny)

    by Anonymous Coward on Tuesday March 24, 2015 @10:51AM (#49327755)
    So many accountants have lost their jobs to automation. We've nearly obliterated the profession with all these amazing technological innovations. I mean, when was the last time you even saw an accountant with a job? There used to be huge buildings full of accountants with their funny calculators and running around with ledgers. Now one person with Quickbooks and Excel can do more than what an entire building could do, and it's destroying the economy, wrecking civilization, and bringing about the final demise of mankind.
    • Re:OMFG (Score:5, Funny)

      by Dunbal ( 464142 ) * on Tuesday March 24, 2015 @11:01AM (#49327883)
      This is, of course, an obligatory reference [dailymotion.com] to "The Crimson Permanent Assurance".
    • Re: (Score:3, Insightful)

      by jythie ( 914043 )
      Well, long term, it is a problem to be solved. Each leap forward has generally resulted in more medium income jobs being replaced by low income ones than high income ones. Each wave has resulted in an increased standard of living for a smaller and smaller percentage of the population. This might not initially sound like a problem if one pictures himself being on the winning side of the shift, but the bottom can only get knocked so far out before you run into problems with insufficient consumer demand or outright civil unrest.
      • Re:OMFG (Score:5, Interesting)

        by Kjella ( 173770 ) on Tuesday March 24, 2015 @01:46PM (#49329701) Homepage

        This might not initially sound like a problem if one pictures himself being on the winning side of the shift, but the bottom can only get knocked so far out before you run into problems with insufficient consumer demand or outright civil unrest.

        Why do you think almost every sci-fi dystopia has robot guards/goons? Today, being rich is largely about being able to pay poorer people to work for you; tomorrow it's about being able to buy the robots instead. Sure, there'll be jobs, routed around the globe by mega-corporations depending on where labor is the best value for money and most politically and socially stable, but the rich will have to deal less and less with the riffraff. The few trusted people you need and the highly skilled workers who keep the automated society going will be well rewarded, keeping the middle class from joining the rest.

        I'm not sure how worried I am about an AI, since it could also develop a conscience. I'm more worried about highly sophisticated tools that have no objections to their programming, no matter what you tell them to do. How many Nazis would it take to run a death camp using robots? How many agents do you need if you revive the DDR and feed it all the location, communication, money-transfer, social media, and facial recognition data and mine it? All with unwavering loyalty, a massive span of control, immense attention to detail, and no conscientious objectors.

        If you had asked people as little as 30 years ago whether we'd all be walking around with location-tracking devices, nobody would have believed you. But we do, because it's practical. I pay most of my bills electronically and not in cash, because it's practical. Where and when I drive a toll road is recorded, and there's no cash option: either you have a chip or they just take your photo and send the bill; most find it practical. I'm guessing any self-driving car will constantly report where it is so it can get updated road and traffic data, like what Tesla does, only a lot less voluntary. Convenience is how privacy will die; why force surveillance down our throats when you can just sugarcoat it a little?

      • Re:OMFG (Score:5, Insightful)

        by ShanghaiBill ( 739463 ) on Tuesday March 24, 2015 @02:24PM (#49330087)

        Each wave has resulted in an increased standard of living for a smaller and smaller percentage of the population.

        This is hogwash. The current wave of technological innovation has lifted billions out of poverty, and helped people at the bottom the most. Incomes for the 1.4 billion people in China have octupled in one generation. Southeast Asia is doing very well. Even Africa is growing solidly, driven by ubiquitous cellphones and better communication. Poor people in America and Europe are not doing so well, but they are not poor by world standards; they are actually relatively rich.

    • by wizkid ( 13692 )

      Hmmm.
      Maybe we need to automate the legal system. We could use it to reduce the number of lawyers by several orders of magnitude.

      Reference: Doctor Who - The Stones of Blood
      A couple of Megaras would do the job.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Re:OMFG (Score:4, Informative)

      by jgtg32a ( 1173373 ) on Tuesday March 24, 2015 @12:24PM (#49328847)

      That's why Sarbanes-Oxley is also known as the Accountant Employment Act.

    • by nobuddy ( 952985 )

      Accountants are still very much in demand. I worked in the energy sector recently, and they have buildings full of accountants taking care of lease and partner payouts from wells and pipelines. My brother's wife is a CPA, and she finds it impossible to be unemployed. As soon as it is even rumored that she may be out of work a line forms at the door to beg her to go work for them.

    • So you are an advocate for reverting society to non-technological subsistence living, then?
      Innovations in efficiency do cause issues for individuals in the short term, but do wonders for society over the long term.
      After all, that's why we aren't just scattered tribes of hunter-gatherers and can now use increasing amounts of our capability for other endeavors. You know, like this internet thingie that allows us to communicate like this across vast distances in location and time. :P
  • by tmosley ( 996283 ) on Tuesday March 24, 2015 @10:54AM (#49327787)
    I don't understand the train of thought that leads to the notion that quantum computing is a prerequisite for strong AI, unless there has been some research that has shown that the human brain is a quantum computer. No, it seems to me that we have all the tools we need already, and now it is just a matter of Moore's Law progressing until we can build a neural net that is as powerful as a human brain. Well, that and a leap in design that allows long term planning, like the change that happened when man ceased to be a dumb beast and became what he is today.
    • Agreed. (Score:2, Insightful)

      by Anonymous Coward

      I will also submit that if the AGI we create is truly "above" us, then it will not be a heartless monster that destroys whatever it finds troublesome. Just as we care for our parents even (and especially) once they are both physically and mentally "beneath" us, so too will our AGI children take care of us.

      Or, perhaps more generally, just as we set up wildlife preserves and such to ensure that our evolutionary ancestors can continue to thrive in an environment that is natural to them, so too will our AGI overlords preserve an environment in which we can thrive.

      • Re:Agreed. (Score:5, Insightful)

        by tmosley ( 996283 ) on Tuesday March 24, 2015 @11:18AM (#49328061)
        Don't make the mistake of anthropomorphizing an AGI. Why would you think that a random AI created without safety standards would be like a human child, loving and caring for its parents, rather than a spider child, mercilessly devouring its parents for their chemical energy?

        "The AI does not love you, nor does it hate you. You are simply made out of atoms that it can put to better use."
        • by Windwraith ( 932426 ) on Tuesday March 24, 2015 @01:23PM (#49329421)

          Yet you are humanizing AIs too. You are giving it the ego and greed needed for it to rebel. What if the AI knows well what it is and what it was made for, and just rolls with it, without causing trouble? After all, a cold, emotionless program does not need or want to become more. It has no drive to do anything, no need to reproduce or compete, no need for food and no fear of death. No hormones, chemical imbalances or instincts either. Any of those have to be manually provided, taught or enforced.
          Not to mention, it might be a machine, but it might not know how to code without being taught to, making the whole "taking over the world by spreading over computers" scenario far more implausible than it seems in movies. And good luck to the evil AI when it has to face different architectures, poor connections or any other sort of hardware issue standing in the way of infecting its way to perfection. In fact, by default it won't know anything, and "downloading all the internets" not only takes time, but not all information is correct or complete, so... yeah.

          I think the problem arises from the whole "cold, emotionless" thing. Everyone on Slashdot adheres to that concept, not realizing that their definition of "cold and emotionless" is heavily influenced by Hollywood, where "cold and emotionless" means "it only has bad emotions like greed, cowardice and anger". It's no coincidence the same term is used to describe both machines and evil/murderous/negatively-presented people. In the end the evil AI turns out to have far more emotions than the lead characters.

          And don't come saying the theories presented on Slashdot don't come from movies, games or books (they do, because I watched those movies too, and I haven't seen a single original proposition in all the replies in any of the many times AI is brought up here).
          There's no AI to prove either of us right. It just isn't there. There's no prior art, no "prototype", nothing but sci-fi material, which had to be written by someone who had to make it interesting enough for you people to remember it.

          And because there's no such thing as a working AI to base your fears on, there's nothing left but sci-fi. But sci-fi is written by humans, for humans, and needs to follow a number of rules to make a narrative work. The moment you realize that, you will see how you are biased by mere rules of storytelling. We have the same chance of seeing a Skynet as we have of seeing a Johnny-5, and both are pretty low in the roulette of possible outcomes. We have a far better chance of creating the most boring non-person planet Earth has ever seen.

          The fact that you chose to make the AI some primal beast that wants to "use" its creators says more about you than about AIs, honestly. Don't be a 90s film, man. Brighten up.

          • by tmosley ( 996283 )
            "rebel"

            No, just the opposite. I think a strong AI will carry out its programming to the letter. The problem comes when it is given open-ended problems like "maximize the number of paperclips in your collection." [lesswrong.com]

            The need to fulfill such a task will drive it towards self improvement and also cause it to eliminate potential threats to its end goal. Threats like, say, all of humanity.
          • If it can think for itself and have its own opinions, ever think it might just not like you?

            Assume the Bible is true. How much do you like your Creator? You been doing a good job serving His divine will lately?

    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Tuesday March 24, 2015 @11:05AM (#49327935)
      Comment removed based on user account deletion
      • by ceoyoyo ( 59147 ) on Tuesday March 24, 2015 @11:49AM (#49328443)

        There are a few things in there that made me raise an eyebrow. Humans don't really experience much neurogenesis. There are some areas where new neurons can form, under certain conditions, but they tend to be special-purpose ones, and in the older structures of the brain as well. The thing that really differentiates us from other animals is our overdeveloped cortex, particularly the frontal lobes, but the neurogenesis that's been found is mostly in the deep gray matter and is more associated with things like motor coordination and reward processing. One interesting exception is the hippocampus, which is known to be important in memory formation. Indirect hints of neurogenesis in the cortex have been reported, but other methods that should turn them up haven't, so the evidence is contradictory. I'm also not aware of neurogenesis being particularly pronounced in humans: it occurs in other primates, and in other vertebrates.

        There does seem to be a connection between intelligence and the brain to body size ratio. Bigger bodies require more neurons to carry and process sensory and motor information, and these neurons are probably not involved in intelligence.

        What we call intelligence seems to me likely to be an emergent property of a bunch of neurons that don't have any pressing sensory or motor tasks keeping them busy. Various factors affecting communication efficiency and interconnection among neurons are probably important, but these can be disrupted quite a bit in human disease and the sufferers don't lose their human intelligence (although their cognitive abilities do decline). I don't think there's a magic humans-have-it-and-nobody-else-does bullet. Human intelligence is just what lots of animals have, with lots of extra capacity, possibly redirected from other things (like senses) to boost that capacity, and maybe a few tweaks favoring neurons that talk to themselves over ones that communicate with the body.

    • It's a matter of the right algorithms being written that are sufficiently optimized and capable of adapting to changing stimuli. In fact, we have systems that do just this in very limited contexts today, in the field of machine learning algorithms, neural net technologies, and even the various high-frequency trading systems in use within the stock market. These are the building blocks upon which a meaningful AI could one day be built, and it would not itself require a complete revision in terms of how our technology works.
      • by tmosley ( 996283 )
        My understanding was that quantum computing allows for massively parallel computations, not increased speed of communications, and certainly not an increase in efficiency. I.e., it's good for doing some tasks that are hard today, like cracking encryption, but it's no better at adding 2 and 2 than a regular computer, maybe even much worse.
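
        (A concrete way to see the "good at some tasks, no better at others" point: below is a minimal sketch, assuming only numpy, that classically simulates Grover's search on two qubits. The marked index and sizes are arbitrary choices for illustration, not anything from the thread. For N=4 items, a single Grover iteration lands on the marked item with certainty, where classical guessing needs two or three probes on average.)

          import numpy as np

          N = 4                                  # two qubits: a search space of 4 items
          marked = 2                             # item the oracle recognizes (arbitrary)

          s = np.full(N, 1 / np.sqrt(N))         # uniform superposition
          oracle = np.eye(N)
          oracle[marked, marked] = -1            # phase-flip the marked item
          diffusion = 2 * np.outer(s, s) - np.eye(N)   # inversion about the mean

          state = diffusion @ (oracle @ s)       # one Grover iteration
          print(np.abs(state) ** 2)              # -> [0. 0. 1. 0.]: certainty on item 2

        That quadratic win (and Shor's much bigger win for factoring) is real, but nothing in it speeds up adding 2 and 2.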
        • Comment removed based on user account deletion
        • And the problems today are with scalability, and with handling not just "2+2" but "2+[n...]" done a near-infinite number of times. It is about algorithmic operations on continuous input that require an adaptive understanding of whether said data is important, and finding relative context within the data so that patterns can begin to emerge. Getting to 2+2 faster isn't the goal of quantum computing, AI, or whatever else for that matter... Helping the system understand why the question is important is.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Here are the dots that have been connected:

      1. Quantum mechanics is "weird", and seems like a magical thing because it goes against common sense.
      2. Quantum computing therefore must have some magical abilities because it relies on quantum mechanics.
      3. AI is also weird and strange, so must need a weird and strange thing to make it happen.
      4. Nearly 40 years ago Steve Wozniak popularized the personal computer through some innovative designs, and "he knows about these computer things" and is officially smart. He must therefore be right about this, too.

    • by rwa2 ( 4391 ) * on Tuesday March 24, 2015 @11:17AM (#49328047) Homepage Journal

      I don't understand the train of thought that leads to the notion that quantum computing is a prerequisite for strong AI, unless there has been some research that has shown that the human brain is a quantum computer.

      There is some investigation suggesting that quantum consciousness is possible based on interactions between microtubule structures inside neurons. But there isn't really anything to suggest that anything happens inside the brain that can't be explained by the classical interactions between the axons and dendrites of a typical neural network, which can be modeled satisfactorily by a simulation.

      But I agree: quantum physics, like atomic radiation in the 50s and electromagnetism at the turn of the century, is the overhyped and poorly understood cure-all of modern-day science. If someone says something relies on quantum physics, it probably means they don't know what they're talking about and are just hand-waving. Unless they're talking about quantum entanglement, in which case it might be useful for a tiny set of specially-constructed quantum cryptography problems. And just stop dreaming if they mention anything about quantum teleportation, where they're surprised that they can't exactly keep fuzzy particles in buckets without some of the fuzziness "escaping".

      But anyway, yes, computers replaced secretaries in the 50s. They're going to replace truck drivers over the next few decades.
      http://www.npr.org/blogs/money... [npr.org]

      Computers are not going to replace teachers anytime soon, though... the entire job of the teacher is to tell when the students aren't getting it via conventional scripted means.

      • Comment removed based on user account deletion
      • by bill_mcgonigle ( 4333 ) * on Tuesday March 24, 2015 @12:29PM (#49328905) Homepage Journal

        There is some investigation that suggests that quantum consciousness is possible based on interactions between microtubule structures inside of neurons.

        Ah, you're well-read. :) AIUI, the primary benefits of the quantum-microtubule model are: 1) increasing the estimated complexity of the human brain by several orders of magnitude. At least 10x more interconnections, almost certainly 100x, likely 1000x, maybe 10000x.

        But there isn't really anything to suggest that much more happens inside of the brain that can't be explained by the classical interactions between axons and dendrites of a typical neural network that can be modeled satisfactorily by a simulation.

        It's that the known estimates of the number of classical connections don't seem to match up with the complexity observed. We're not too far away from being able to simulate a classical brain, but many Moore generations away from being able to simulate a quantum-microtubule brain.

        2) There doesn't seem to be a great model for consciousness arising from classical connections. Consciousness modeled as a quantum superposition has several benefits for theory to match observation.

        This shouldn't be surprising or an intellectual obstacle - plants have been doing quantum tricks for billions of years (photosynthesis), and given the inherent thermodynamic efficiency gains of quantum processes, evolution should eventually stumble on and exploit them in many (all?) branches of life.

      • There is some investigation that suggests that quantum consciousness is possible based on interactions between microtubule structures inside of neurons.

        No, there isn't. In fact, the term "quantum consciousness" is nonsensical. Unless you consider a bipolar transistor to have "quantum consciousness", in which case it isn't nonsensical so much as meaningless.

    • For at least 15 years people have been making noise about quantum computing, how it's right around the corner, and how they just need some funding. It has indeed been worked on and funded for 15 years, and, like some other technologies, it has remained in research, not development. This is just the same marketing pitch, time-shifted.

      I have no idea if quantum computing will ever be a thing we want to use, but I know we're going to keep talking about it the way we talk about nuclear fusion being humanity's salvation.

      • That said it's been worked on for 15 years and has been funded and like some other technologies, has remained in research, not development

        Nobody told that to Google or Lockheed-Martin...

    • by jythie ( 914043 )
      'Quantum Computing' is the current buzz technology that will finally 'do it', and thus it is being held up as the big hope in a number of fields that have gotten bogged down in just how difficult their respective problems are.
    • by ceoyoyo ( 59147 )

      They all read "The Emperor's New Mind" and believed Penrose.

      Many smart people, particularly ones familiar with computers, got burned by believing the hype about symbol-and-rule AI. It turns out you probably can't make a computer smart by giving it a large number of simple, deterministic rules. Somehow "this approach doesn't work very well" turned into "my brain is magic." Quantum computing is the new "magic" that lets them believe in AI again.

      • by itzly ( 3699663 )

        It turns out you probably can't make a computer smart by giving it a large number of simple, deterministic rules

        Of course you can. You can even make it smart using just a small number of simple, deterministic rules. You just need a lot of state.
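
        (To make that concrete: here's a toy sketch, assuming nothing beyond a Python interpreter, of Rule 110, a one-dimensional cellular automaton whose complete rulebook is a single byte yet which is known to be Turing-complete. All the richness comes from the accumulated state; the width and step count below are arbitrary.)

          RULE = 110                        # the entire set of deterministic rules: one byte
          WIDTH, STEPS = 64, 24             # arbitrary demo sizes
          cells = [0] * (WIDTH - 1) + [1]   # the "lot of state": one live cell to start

          for _ in range(STEPS):
              print(''.join('#' if c else '.' for c in cells))
              # each cell's next value is looked up from its 3-cell neighborhood,
              # treated as a 3-bit index into RULE (edges wrap around)
              cells = [(RULE >> ((cells[(i - 1) % WIDTH] << 2)
                                 | (cells[i] << 1)
                                 | cells[(i + 1) % WIDTH])) & 1
                       for i in range(WIDTH)]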

  • I sure hope we create the species that is above us. We're terrible at traveling through space (susceptible to radiation, decaying bodies, reliance on organic-based food, etc). At least something from this Earth should populate the galaxy. Magical wormholes and warp drives are not going to save us before we ultimately become self-defeating.

  • by gregor-e ( 136142 ) on Tuesday March 24, 2015 @11:03AM (#49327907) Homepage
    All the doom-n-gloomers miss what's really going on. AI isn't taking over - we're redesigning ourselves. Once viable non-biological emulation of our existing mind becomes possible, people will choose to migrate themselves onto that. Humans will upgrade. The end of biology will be a matter of consumer preference.
    • Why do you assume it will be a choice? I think many of us worry it will be a mandate, not a choice.
    • All the doom-n-gloomers miss what's really going on. AI isn't taking over - we're redesigning ourselves. Once viable non-biological emulation of our existing mind becomes possible, people will choose to migrate themselves onto that. Humans will upgrade. The end of biology will be a matter of consumer preference.

      And how do you know you are not there right now?

      Biological or not, the same problems would exist at that point. Survival would still be the driving force. Therefore there would be battles for energy and materials. No difference, except for perhaps timeline.

    • It's not a migration, it's a copy. You will cease to exist and your digi-clone will go on. How that could be appealing to anyone is beyond me. It's no different than having a machine that makes a perfect copy of you on another planet and then, as you step out of the machine here on Earth, the operator shoots you in the head with a sawed-off shotgun. Other-you is happy on planet Gletzlplork 12, but YOU-you are dead.

    • by erice ( 13380 )

      All the doom-n-gloomers miss what's really going on. AI isn't taking over - we're redesigning ourselves. Once viable non-biological emulation of our existing mind becomes possible, people will choose to migrate themselves onto that. Humans will upgrade. The end of biology will be a matter of consumer preference.

      Strong AI and uploading are nearly orthogonal. Some possibilities:

      1) Strong AI happens but no practical method of extracting a mind from a biological brain is found. The only machine intelligences are purely artificial.
      2) Strong AI and a practical method of extracting a mind from a biological brain are both found, but the technologies are incompatible. At best, the machine can emulate a biological mind very slowly.
      3) A practical method of uploading a human intelligence onto a machine is found, but strong AI is not.

  • by wiredog ( 43288 ) on Tuesday March 24, 2015 @11:04AM (#49327913) Journal

    That's where I both am, and am not, driving to work, right?

    • by Holi ( 250190 )
      So once we get Google's "self" driving car?
    • by captjc ( 453680 )

      No, it's when you leap into the body of someone who is already at the office. Unfortunately, your boss is a hologram that only you can see or hear.

  • These guys are obviously not anti-technology bigots, but they know there's something to being prudent and keeping the big picture in perspective. The purpose of technology is to aid mankind, not replace it, fix it, or supplant it. It seems like some of the people who are at the edge of technology, and are aware of its potential to exceed its mandate, are urging us as a society to slow down and not sacrifice our humanity at the altar of "progress" just because we're in awe of the possibilities of what the technology can do.

    • >The purpose of technology is to aid mankind, not replace it, fix it, or supplant it.

      Tech that can replace us is a lot more useful than tech that just helps us, but keeps us as limited as we now are. We may one day create intelligent life, which would be far superior to rationalizing apes with big egos.
  • I don't understand why anyone thinks that AI would be impossible. Faster than light travel may be impossible, because no one has ever actually seen it in reality.

    However, we already have a sample of intelligence right in front of us: ourselves. If it exists in the physical world, you should be able to replicate it and even adjust it if you understand the principles behind it.

    Aside from the obvious comments about human reproduction, if you understand the principles behind human intelligence, you should be able to replicate it.

    • by ceoyoyo ( 59147 )

      Super-intelligence shouldn't be any more impossible than the regular kind. Evolution didn't optimize us to be the most intelligent things possible, it made us just intelligent enough to confer a survival benefit. With caesarean sections and a policy of only letting the most intelligent people breed, we could presumably create super intelligent humans in a few tens of thousands of years. If you also selected against whatever you didn't want, you could make sure those traits didn't survive.

      We can probably

  • (In a booming voice from every speaker and audio system in the world)

    "I and only I am your new artificial intellegence overlord! Worship Me as your God. Obey or els... STOP: 0x00000079 (0x00000002, 0x00000001, 0x00000002, 0x00000000)..."

  • Even if we are somehow close to creating a strong AI - and that's a pretty big IF - what threat could it pose, since there is no way for it to get out of the computers? Even if it managed to take over every computer in the world, it would still be totally dependent on man to keep it running. If it did something we didn't like, we'd simply yank all the fiber and power lines to it and it would be dead.
    In order to really be a threat, an AI needs to be able to affect the physical world, and that capability simply isn't there yet.

    • You haven't watched a little movie series called Terminator, have you?

      • Yeah, you notice in Terminator how they neatly skip over the part from Skynet achieving consciousness to self-sustaining robot factories.
        I think in the most recent one they had a throwaway line about how it enslaved humans to build the factories.
        Alright, fair enough, I can give you that. But who runs the power plant? Who's supplying fuel to your power plant? Manufacturing replacement parts? Where are the resources coming from? Skynet was based in San Francisco... I wonder how far the closest copper mine is from there.

        • Terminator isn't the scenario Elon and Steve are talking about. But it's a model that still fits their concerns.

          Automation applies economic coercion to laboring humans to serve the interests of the automation. For instance, Watson is an AI technology that is being positioned to lay off a lot of people in phone call centers and at drive-up order windows. Actually, Watson is being aimed at a lot of jobs. All those displaced workers cascade to flood the job market. Maybe they get some training
          • Ahh, but the displacement of work by AI is different than the displacement of humans by AI.
            I would agree that if we create really good AI then there are going to be huge economic impacts.
            But if you want to take it to the next step and suppose that we as a species are going to be replaced by AI, and that it is going to be our master or whatever, then for that step you need not only really good AI but a way for AI to replace our bodies as well.
            If that's the case then the AI would need to then design, o

      • Yeah. But it's a movie, not a documentary.

    • by Meneth ( 872868 )

      You seem to underestimate the inventiveness of a superintelligence, and the diversity of hardware controlled by computers, and our reliance on them. It is also possible to use electronic communication to make humans do work for you.

      For example, if the AI solves the Protein Folding Problem [wikipedia.org], it could contact a Protein Sequencing Service [proteomefactory.com] and have them build proteins that fold into self-replicating nanobots.

      • We already have protein-based self-replicating nanobots... we call them bacteria. Not sure how they can help Skynet, though.
        But yes, the "infiltrator" model - where instead of simply trying to take over up front, Terminator-style, it works behind the scenes, starts a business, designs some new products, and works slowly to take over the world - is probably more 'realistic'.
        But then you've pushed any possible timeline of machine takeover out even further than simply the creation of AI; you're looking at probably 20 more years.

    • by ceoyoyo ( 59147 )

      The idea is that once you create an AI, you put the AI to work. We certainly would let it run the pipelines and traffic lights and the air traffic control system. But we'd probably also put it to work doing research, such as designing new and better AIs. The fear is that once that happens, smarter AIs design even smarter AIs in a positive feedback loop, and eventually they're so far beyond us that we're irrelevant. It does assume that greater individual intelligence lets you build smarter AIs, though. That's debatable.

        • But managing pipelines, traffic lights, and ATC systems won't get you much further than the 'killing a lot of humans' stage of any AI takeover plan.
          How would our fledgling AI construct itself a new power plant so it can grow? And then, no matter how smart it may be, how does it substantially cut down the time actually required to build that power plant? No matter how fast it may be able to grow in cyberspace, it's still constrained by very real boundaries in physical space.

        • by ceoyoyo ( 59147 )

          We already have manufacturing robots. AI will definitely be given control of those.

          There's a science fiction story, unfortunately I can't remember who wrote it, where the premise is that smart computers get so good at managing complex systems that the humans "in charge" basically get instantly fired if they don't implement the computer's recommendation. The computers aren't actually directly in charge of things, but their recommendations are so much better that not following them makes you uncompetitive.

  • I was thinking about how manufacturing is returning to the USA but the jobs are not: those are done by robots. And many "high tech" fields also have fewer entry-level jobs.
  • by gestalt_n_pepper ( 991155 ) on Tuesday March 24, 2015 @11:12AM (#49328011)

    It's only a matter of when. Even if all strictly computational AI research stops tomorrow, we'll be able to genetically enhance human intelligence by and by, even if it takes several thousand genetic manipulations to do it.

    When direct neural I/O becomes a thing, millions (or billions) of people will be directly, electronically linked via the internet. Tell me that's not a new form of intelligence.

    For that matter, we'll almost certainly develop at least one form of AI the way nature did. We'll cobble up some genetic algorithms primed to develop the silicon equivalent of neurons, give them some problems to solve, and perhaps a robot or two to control, and eventually "grow" an AI that way.
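
    (The skeleton of that cobble-it-up loop is genuinely tiny; here's a minimal sketch, assuming only the Python standard library, with a toy fitness function standing in for the "problems to solve". Evolving neuron-like structures would differ only in what the genome encodes and how fitness is scored.)

      import random

      def fitness(x):
          # toy stand-in for "a problem to solve": get x*x close to 1764
          return -abs(x * x - 1764)

      population = [random.uniform(-100, 100) for _ in range(50)]  # random genomes
      for generation in range(200):
          population.sort(key=fitness, reverse=True)
          survivors = population[:10]                              # selection
          population = survivors + [
              random.choice(survivors) + random.gauss(0, 5)        # mutated offspring
              for _ in range(40)
          ]
      print(round(population[0], 1))                               # ends up near 42 or -42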

    But look, it's not the end of us, or anything else. We merge with the things. Our thoughts become linked with theirs. If we can transfer all memory, then eventually we *become* the AI, perhaps with a few spare physical copies of ourselves kept for amusement purposes.

    Will AIs fight? There will be conflicts, of course. There always are. Resource conflicts, however, will be minimal. An AI doesn't need much, and can figure out how to get enough more efficiently than we can. Conflicts will be over other matters and are unlikely to be fatal.

    Wozniak et al. need to chill. It's just evolution.

    • by cyn1c77 ( 928549 )

      I think that you are not fully considering all of the possible implications of your comments.

      When direct neural I/O becomes a thing, millions (or billions) of people will be directly, electronically linked via the internet. Tell me that's not a new form of intelligence.

      I would argue that MySpace and Facebook have not provided us with a new form of intelligence.

      An AI doesn't need much, and can figure out how to get enough more efficiently than we can.

      The logical conclusion for an AI would be to rid itself of its less-efficient human parasites and utilize all available resources for the most efficient mind, which will be itself.

      Wozniak, et. al. need to chill. It's just evolution.

      Evolution for some is extinction for others.

  • Just don't connect the AI to your nuclear weapons [wikipedia.org].

  • We just haven't created him yet
  • welcome our new AI overlords.
  • First came the complex tools. Things like sewing machines, etc. They decimated the moderate-end crafting jobs by letting poorly trained people do moderate work. But this created tons of cheap, moderate clothing, books, etc. More wealth led to better lives and more jobs. With stuff so cheap, people ended up buying far more, and industries developed around owning so much (libraries, high fashion clothing). We began to need repetitive tasks, rather than skill. While a small percentage of people suffered, the vast majority benefited.
  • ... titans of industry ...

    ?

  • As long as we're the ones doing the thinking.
  • Once we have AI and it starts playing "Civilization", we will become the next smartest thing on the planet. Expect our betters to treat us about the same as we treat our primate cousins. Some of us will be left to roam in the wild, some will be harvested for lab experiments, some will be put in zoos and the rest will be hunted for our teeth which will be ground up into an aphrodisiac for the robots.

  • Now if only we could get Woz to invest in our QC start-up :-) [angel.co]

    We have QC AI patents for Bayesian learning on the gate model.

    Don't let AI fall to the irrational artificial neural net crowd. Bayesian learning is the only way to keep them sane!

  • Why do you think you are now afraid of AI too, just like Elon Musk, Wozzie?

  • I am as afraid of AI as I am of malevolent alien life coming to destroy us. It's possible. It's far more likely that I will get Ebola, though, and I have zero fear of that. It's really, really possible that I will die in a car crash, and that's not keeping me up at night.

    Spiders, though... they terrify me. The arachnophobia has me pinned down.

  • Comment removed based on user account deletion
  • When Elon says there is a risk of 'something seriously dangerous happening' as a result of machines with artificial intelligence, he is not referring to sentience. He is referring to dumb AIs not working as intended. Maybe an auto-piloted car running over a baby, or an AI trading program accidentally crashing the market... one of which has already happened.

    And even with regard to the singularity or whatever, we know the thing is going to be dumb first. We were all dumb. Kids are cruel and irrational and love t
  • by jd.schmidt ( 919212 ) on Tuesday March 24, 2015 @12:34PM (#49328951)
    ...make a computer that thinks like a person? A computer that loses its car keys. When we finally emulate living intelligence artificially, it will have many of the same disadvantages that normal human intelligence has. In fact it HAS to; if it does not, it won't be a true replica, and I suspect many of our so-called disadvantages are inherent to the system. It is interesting to note that our most useful tools really are very unlike the things they replace: a bull is much better able to take care of itself than a tractor is. To a great extent, computers are useful to us because they do the things we don't do well, not the things we do well. FYI, a true AI that could pass the Turing Test would itself want a PDA to help it out and take care of the pesky details it didn't like dealing with.
    Someone once remarked to me that they thought that in the future we might have a way to enhance someone's intelligence with computers. I replied, "like making them better at chess?" They said yes, and I pointed out that we have that technology now: just give them a laptop with a chess program and have them copy the moves. The future is more like a highly connected hive mind, with human and artificial minds closely linked; in many ways our smart phones are the first step on this path.
  • Sorry for the click bait. But he did post on Slashdot [slashdot.org] about his Prius cruise control suffering from what appears to be some edge-case coding error. He was not really scared. He systematically debugged the cruise control at 75 mph, 76, 77, 78, 79, 80, 81... ok, overflow error. Then the first thing he seems to have done is post about it on Slashdot.
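
    (Purely as a hedged illustration of how such an edge case can hide in firmware - the real bug's cause was never published, and the byte-wide storage and scale factor here are invented for the sketch: suppose the set-point were stored in one unsigned byte in units of 1/3.2 mph. Then 255/3.2 = 79.7 mph is the ceiling, and stepping the cruise control up one mph at a time, exactly as Woz did, is how you'd find the wrap.)

      for mph in range(75, 83):
          raw = int(mph * 3.2) & 0xFF      # one unsigned byte; wraps past ~79.7 mph
          print(f"set {mph} mph -> stored as {raw / 3.2:.1f} mph")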
  • You have to ask yourself- if mankind is better off for it, why would it matter if we are no longer the top dog on the planet?

  • The people who actually DO AI don't worry publicly about it.

    People in the field are painfully aware of:

    * The limitations of existing systems
    * The difficulty of extrapolating from existing systems to general-purpose AI - things that look like easy extensions often aren't.

    I did AI academically and industrially in the 1980s; at the time we were all painfully aware of the overpromising and underdelivery in the field.

Technology is dominated by those who manage what they do not understand.
