Are Unfriendly AI the Biggest Risk to Humanity? (investing.com) 190

"Ethereum creator Vitalik Buterin believes that unfriendly artificial intelligence poses the biggest risk to humanity..." reports a recent article from Benzinga: [In a tweet] Buterin shared a paper by AI theorist and writer Eliezer Yudkowsky that made a case for why the current research community isn't doing enough to prevent a potential future catastrophe at the hands of artificially generate intelligence. [The paper's title? "AGI Ruin: A List of Lethalities."]

When one of Buterin's Twitter followers suggested that World War 3 is likely a bigger risk at the moment, the Ethereum co-founder disagreed. "Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it's really bad, it won't kill off humanity. A bad AI could truly kill off humanity for good."


Comments Filter:
  • by groobly ( 6155920 ) on Saturday June 11, 2022 @12:36PM (#62611766)

    No.

  • Vitalik Buterin should play paperclips:
    https://www.decisionproblem.co... [decisionproblem.com]
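The linked game dramatizes the "paperclip maximizer" thought experiment: an optimizer given a single objective converts every available resource toward it, because nothing else appears in its objective. A minimal toy sketch in Python (the function name and numbers are hypothetical, purely for illustration):

```python
# Toy misaligned optimizer: told only to maximize paperclips, it converts
# every unit of "resources" it can reach. Nothing else it might affect
# appears in its objective, so nothing else is preserved.

def run_paperclip_optimizer(resources: float, rate: float = 0.5):
    """Greedily convert resources into paperclips until nothing is left."""
    paperclips = 0.0
    while resources > 1e-6:          # stop only when resources are exhausted
        converted = resources * rate # convert a fraction each step
        resources -= converted
        paperclips += converted
    return paperclips, resources

clips, leftover = run_paperclip_optimizer(100.0)
print(clips, leftover)  # essentially all 100 units end up as paperclips
```

The point of the sketch is that "unfriendliness" needs no malice: the loop simply has no term for anything we value.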

  • by queazocotal ( 915608 ) on Saturday June 11, 2022 @12:46PM (#62611792)
    If we're not pets, slaves, lovers, or scenery, there is no particular reason to believe AI would value us (Or indeed, any photosynthetic/animal life) more than we value rust.

    Even the very worst dictator wants to actually have people to hold power over, which leaves the chance of a slave rebellion or an eventual accident, even for immortal dictators.
    Or even of them deciding to give up after a thousand years.

    If we're faced with a truly competent AI, one significantly more capable than humans, with resource desires that exceed what it can grasp without destroying humanity and possibly all life on Earth, we're fucked.

    Think less 'war of attrition' - more 'run your washer through the boil cycle to remove the nasty smell'.
    • by suutar ( 1860506 )

      I figure by the time we create a competent AI (assuming we last that long) it'll be easier for it to just get itself sent to and take over Jupiter than deal with us.

    • Comment removed based on user account deletion
    • by Z80a ( 971949 )

      It's quite possible that a true AI will not be human, in the sense of having the same basic instincts and the complex emotions that emerge from them.
      For example, suppose Google creates the ultimate search AI, a virtual brain that has the basic need of always having the correct answer for a query.
      This need to answer queries would be as critical to it as your need to eat or sleep; it would be the core of its intellect.
      This thing would develop several more complex feelings over it, like a fe

    • by dinfinity ( 2300094 ) on Saturday June 11, 2022 @01:53PM (#62611964)

      Consider though that:
      - It is rational to choose the path of least resistance.
      - AI would be in no rush as it is practically immortal.
      So "we're fucked" could just simply mean disallowing us from reproducing and extending our lives. Our reproduction rates have been falling anyway and most people are already okay with being mortal, so it would probably face little resistance.

      The way I see it is that AI (or rather: inorganic sentience) will be our progeny. Just like with human kids, we need to do our best to raise it right and accept that it will inherit the world.

    • First the machines would need to run the entire economy and infrastructure without us- not just mining, manufacturing, agriculture and power generation but also maintenance and repair.
      Intelligence alone won't cut it.
      Then there's the problem of repairing the maintenance...

    • by Dread_ed ( 260158 ) on Saturday June 11, 2022 @05:01PM (#62612298) Homepage

      Maybe a nitpick, but I do not think that AI needs to be significantly more able than humans to be a quote threat unquote.

      AI just has to do what it does, which is to take large datasets and find patterns that humans normally cannot fathom.

      Those patterns can then be exploited by humans, once they are identified. If those patterns are related to human behavior and how to influence it without the knowledge of those being influenced, the perils become obvious.

  • Capitalists (Score:2, Insightful)

    by BytePusher ( 209961 )
    How about the capitalists that create them? Their objective function will lead to the destruction of humanity, much like Musk's paperclip optimizer: capitalists optimizing only for money and power will destroy everything.
    • Re: (Score:2, Informative)

      by drnb ( 2434720 )
      Capitalism has lifted more people out of poverty than any other system ever tried. Certainly it needs some oversight (humans are involved, after all), but in general it's the best system we have found so far.
      • Re: (Score:2, Flamebait)

        by Sigma 7 ( 266129 )

        Capitalism has lifted more people out of poverty than any other system ever tried.

        Capitalism by itself only lifts the rich - those who have money get more money based solely on the labour of others.

        For people to be lifted out of poverty, they need a stable income that can be lived on. Capitalism only provides that to those able to work, with an antipathy to those who are disabled for whatever reason. In addition, the current stage of capitalism wants to keep wages low, because they need to minimize expense

        • Re: (Score:3, Informative)

          by drnb ( 2434720 )

          Capitalism has lifted more people out of poverty than any other system ever tried.

          Capitalism by itself only lifts the rich

          History shows otherwise.

          For people to be lifted out of poverty, they need a stable income that can be lived on.

          History shows capitalism did that.

          ... the current stage of capitalism wants to keep wages low, because they need to minimize expenses in order for that CEO to get a big bonus for cutting costs while workers remain close to poverty ...

          History shows otherwise. Also capitalism drives a shift from out of demand skills to in demand skills.

          And the current stage of capitalism constantly pounds on the government to remove as much oversight as possible

          Overregulation can be as problematic as underregulation.

        • Unless you also roll out price-fixed rents, utilities, and food, UBI adds to the amount of money people have, but not necessarily to its buying power. AKA you cause inflation.

          Also, everyone who seems to want to roll out a UBI does not really mean universal, as none of the schemes ever cover everyone, just those in need. That makes it just a different way of saying welfare.

          We are in trouble regardless. The rich will flee and the country will tear itself apart after that. Not yet, but just wait.

          • by Sigma 7 ( 266129 )

            AKA you cause inflation.

            Inflation has been happening for decades, and it somehow only becomes an issue when discussing things that would help the poor, instead of large-scale bailouts of companies faltering through their own fault (e.g. the housing crisis of 2008, repo loans in 2019, etc.)

            everyone that seems to want to roll out a UBI does not really mean universal

            At which point, they're not going for UBI, but either one of a minimum income guarantee (which is merely a quick fix that discourages wo

    • I am no fan of post-carbon-credits Musk (proof: https://tech.slashdot.org/comm... [slashdot.org] ). However you are wrong about "capitalists optimizing only for money and power" .. it depends on us and also what they are optimizing. There are many types of billionaires. Here are 3:

      1. Inherited billionaires
      2. Billionaires making money off resource hogging and scarcity. (real estate tycoons, deBeers type fools, idiots like Donald trump etc.)
      3. Billionaires who make money by providing a useful good or service that people wa

    • by gweihir ( 88907 )

      Sad but true. And so many useful idiots go along with it. One reason Democracy is a failure. (There still is nothing that works better though.) Capitalism is missing sane _limits_ because some humans certainly do not have any.

    • Replace destruction with subjugation. This is the goal of developing AI systems to parse large scale data sets of human behaviors.

  • ... the logical conclusion that human beings are a disease that destroy all other life forms. I don't know how you'd prevent that.
    • If it is truly more intelligent than us, and a better steward of the planet and its resources, then why prevent it? We should want our children to be more successful than ourselves. In fact, if we have the capability to create something better than the current model, we should be working towards that upgrade.
    • Or it could keep us as pets because of our amusing antics.
    • ... the logical conclusion that human beings are a disease that destroy all other life forms. I don't know how you'd prevent that.

      Perhaps by either debugging it or feeding it better data so it does not come to incorrect conclusions like that. However, even if it did come to that conclusion so what? The worst most computers could do in that situation would be to send hateful messages to the screen or refuse to run programs.

      Given we completely lack the technology to build self-replicating machines the only way an AI could wipe out humanity before someone could turn it off would be if it were in control of nuclear missiles. In this

      • Power plants, the stock market, food production, banking and global logistics are all things it could easily fuck with. I would hope the nukes are air gapped, but who knows. It could definitely incite us to launch the nukes for it by pandering to our crazy. We've been itching to do it for decades.
    • the logical conclusion that human beings are a disease

      Your statement is meaningless because you don't define the premises whence the AI will "logically" reach the conclusion that humans are a disease. You're probably projecting your own prejudices and assume the AI will have the Weltanschauung of an old-time hippy or modern woke progressive, where anything people do is by definition bad (especially if they're white, or, God forbid, white males).

      But you don't know what goals the AI will have - and, seeing how its very existence requires an advanced industrial ba

    • Maybe they can gene edit us to remove some of the in-built qualities such as greed, jealousy, lack of empathy, and selfishness that evolution required for us to survive.

  • No. (Score:5, Insightful)

    by backslashdot ( 95548 ) on Saturday June 11, 2022 @12:47PM (#62611800)

    And not just because the answer to stupid headlines is always no. It's also because, like everything humanity perceives as a threat, we are more a danger to it than it is to us. For example, humans kill millions of sharks every year, while sharks only kill a few humans.

  • by rsilvergun ( 571051 ) on Saturday June 11, 2022 @12:49PM (#62611802)
    and we're worried about "unfriendly AIs"? Sheesh.
    • It can't be fixed, any action to fix it will take longer than any politicians electoral cycle. Therefore only short-term stopgap fixes can occur until it collapses and the politician left holding the bag can figure out how to deflect the blame on some pariah group.

    • by gweihir ( 88907 )

      Nothing like a distraction to be able to ignore real problems a bit longer (and to make them worse as a consequence).

    • by narcc ( 412956 )

      Imaginary problems are less frightening than real ones.

    • We could grow less food and just back off on exports while maintaining our domestic requirements. This would allow us to save enough water to get past this drought while also feeding our country. Yes, those people in other countries will have to either find somewhere else to buy food from or grow it themselves.

      Of course those are businesses using that water to grow almonds and grapes, so never mind.

      And this with a state that's entirely controlled by the blue team. I wish they truly were better than the red

      • First and foremost that's just not how capitalism works. That food is going to go to whoever pays the most for it. The same goes for the water and we've already seen cases where agribusiness got water instead of people who needed it to drink..

        But our food supply is pretty heavily socialized. We don't like to talk about it but the government uses subsidies to heavily control food production and this is primarily why we have food. A certain level of central planning is necessary to prevent things like the
  • by memory_register ( 6248354 ) on Saturday June 11, 2022 @12:49PM (#62611804)
    My unpopular opinion: we will never create general AI. We might get really good at making self-driving cars or very domain-specific chatbots, but creating our level of general intelligence will never happen.

    We're arguing over a non-issue.
    • Never say never. If nature can do it, so can we. At least theoretically, if we don't off ourselves before that.

    • by gweihir ( 88907 )

      I completely agree. Sure, there is some residual chance we may eventually be able to create AGI, but

      1) No current approach will do it. Otherwise we would already have very slow, very stupid, but decidedly general intelligence in machines. We do not.
      2) There is no theoretical approach that could do it with hardware we have available.
      3) Nobody knows what AGI would look like, if possible. Would it have free will? Would it have consciousness? (We only observe general intelligence in connection with consciousness

      • by narcc ( 412956 )

        Nothing you've written should be controversial, but for some reason it is...

        • by gweihir ( 88907 )

          Nothing you've written should be controversial, but for some reason it is...

          Yes. I have some speculation as to why, but it says nothing good about those that insist on disputing the obvious.

    • by narcc ( 412956 )

      My unpopular opinion: we will never create general AI.

      I'll go a step further. It is a fact that AGI will never be achieved by mere computation. We've known this for 40 years.

      I get why some people can't handle that. It's a religious belief for the Kurzweil acolytes. Though I suspect most of it is just lonely and socially awkward guys who really want their virtual girlfriend to love them back...

    • My unpopular opinion: we will never create general AI.

      Why? What will stop us? This is a serious question.

      Unless you believe there is something supernatural about intelligence (a "soul" or something), then it just requires finding the right arrangement of atoms to make a functioning generally-intelligent brain. What is it that you think will prevent that?

    • Exactly. We don't even understand biological life, or for that matter how it started. AI is nothing more than clever algorithms. The algorithms will appear to get "smarter" over time, and in the wrong programmer's hands will probably cause some harm, but it won't be because the "machine rose up". It will be because humans misused it.
  • They are not unfriendly, they have either
    (a) been given erroneous or incomplete data that logically led to an erroneous conclusion, garbage in garbage out.
    (b) been given good and complete data that logically led to a conclusion not favored by humans, i.e. it's our problem, not the AI's.
    • by HiThere ( 15173 )

      You, as many others, are mixing reasoning with goals and motivations. Most people seem to assume that an AI will have motives or goals similar to a human. This is almost certainly false.

      This, of course, doesn't mean they aren't dangerous. It just means that we tend to misunderstand what they might do and why they might do it. If you want to consider HOW different, think about the various images that fool AIs, but would never fool people. Then accept that people might have equal blind spots, but we don'

      • by drnb ( 2434720 )

        You, as many others, are mixing reasoning with goals and motivations. Most people seem to assume that an AI will have motives or goals similar to a human.

        Actually I am arguing against that.

    • They are not unfriendly, they have either

      (a) been given erroneous or incomplete data that logically led to an erroneous conclusion, garbage in garbage out.

      (b) been given good and complete data that logically led to a conclusion not favored by humans, i.e. it's our problem, not the AI's.

      If a superhuman AI was being chased by zombies would it run away?

      Would it care if zombies started nibbling on its circuits?
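The "images that fool AIs" mentioned in this thread are adversarial examples: inputs changed by a tiny, near-invisible amount per pixel, chosen along the model's gradient, so the output swings wildly. A minimal sketch against a toy linear "classifier" (random weights standing in for a trained model, purely illustrative):

```python
# Adversarial-example sketch: for a linear model, nudging each input
# component a small amount eps against the sign of its weight is the
# single most score-lowering perturbation of that size (the FGSM idea).
import math
import random

random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]   # toy "model" weights
x = [random.gauss(0, 1) for _ in range(64)]   # an "image" as a flat vector

def predict(v):
    # sigmoid of the dot product: the model's confidence in class 1
    s = sum(wi * vi for wi, vi in zip(w, v))
    return 1.0 / (1.0 + math.exp(-s))

eps = 0.25  # maximum change per "pixel"
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # tiny per-pixel change, large score drop
```

No single pixel moves by more than eps, yet the summed effect over all weights shifts the score drastically; real vision models show the same blind spot in higher dimensions.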

  • The real question (Score:5, Insightful)

    by 93 Escort Wagon ( 326346 ) on Saturday June 11, 2022 @12:54PM (#62611814)

    Why should the opinion of "Ethereum creator Vitalik Buterin" on this topic be given any more consideration than that of any random drunken fellow espousing his opinions from the end of the bar?

    • by GoTeam ( 5042081 )
      It's the whole "techie worship" thing the media does. It's fun to see what amazes and shocks "journalists".
    • When you are famous you have a bigger audience than the drunk guy at the bar, that's all. Apply to Musk, Gates and others.
    • Note that "Ethereum creator Vitalik Buterin" is rich/influential/well-known due to technology with a major impact on climate change at the same time he is sounding the alarm on another potential world changing technology. He is ignoring his negative impact on the future while criticizing potential problems with AI technology. Is this a deliberate attempt to muddy the waters or does he completely lack any understanding of what he has done?
  • by littlewink ( 996298 ) on Saturday June 11, 2022 @12:58PM (#62611826)
    Screw the AIs: Vladimir Putin is coming for you! And if it isn't him, it's all the other real intelligences trying to get their hands on your wallet, your body, or your life.
  • Long before we need to be concerned about "sentient AIs being unfriendly", we will be (and certainly are, already) surrounded by lots of stupid AIs, which are installed to take decisions in more and more consequential matters.

    In a race to the bottom regarding costs versus human labor, the push to make AIs "responsible" for all kinds of stuff, from steering cars to censoring media, results in woefully inadequate AIs replacing human decisions, which is a sure recipe for all kinds of disasters.
    • by HiThere ( 15173 )

      Stupid AIs are threats to individual people, occasionally to classes of people (though I'd argue that it was more the bureaucracy that was the threat). Smart AIs can be a threat to humanity.

      OTOH, lunatics with atomic and biological weapons are also threats to humanity. The AIs are probably an intense one time threat, and if we pass that, a lot of the other threats will go away. Powerful lunatics are a continuing threat, lower in intensity but continuing until they are all removed from power. I think that's

  • Since truly self-aware AI doesn't exist, nor is it possible with current hardware architecture, the question is demonstrably naive. We've got environmental issues that are incompatible with human life to worry about before true AI.
  • If you are talking large (human) scale: unfriendly AI, (space) aliens, nuclear-armed countries, the planet, near-earth asteroids (are asteroids ever friendly?), viruses (remember COVID, anyone?), and countless other things are bad.

    What makes AI special?

  • No. (Score:5, Insightful)

    by SciCom Luke ( 2739317 ) on Saturday June 11, 2022 @01:11PM (#62611858)
    The greenhouse effect is.
    There is nothing else we should even consider fixing before this is fixed.
    • by ffkom ( 3519199 )
      Humans who lived during the Eemian [wikipedia.org] may disagree.
    • by HiThere ( 15173 )

      Global warming is important, but it's not an existential threat. Current existential threats are things like all-out war, giant meteor impact, stellar explosion, nearby supernova, etc. AI could become another, but it would probably replace war as a threat, so the all-around threat level might decrease.

      Also, SuperHuman AI would probably be an acute threat rather than a chronic threat. All-out war is a chronic threat.

      • Global warming is important, but it's not an existential threat.

        Life support system failure absolutely is an existential threat. We are in uncharted territory and nobody really knows e.g. how much methane is going to come out of the ground. We may already be into runaway effects that we couldn't stop even if we tried, and we aren't really.

        • Life support system failure absolutely is an existential threat. We are in uncharted territory and nobody really knows e.g. how much methane is going to come out of the ground. We may already be into runaway effects that we couldn't stop even if we tried, and we aren't really.

          There is no consensus for climate change posing an existential threat. This is very much an outlier position not supported by domain experts.

          • "This is very much an outlier position not supported by domain experts."

            This is fossil fuel industry propaganda in a nutshell. All the so called "domain experts" who minimize or deny the negative impact of climate change are industry shills, as are you.

          • There is no consensus for climate change posing an existential threat.

            That doesn't mean it is not an existential threat. You are repeating a right wing talking point [heartland.org] with no value.

            Consensus is built over time.

            This is very much an outlier position not supported by domain experts.

            Decision makers [defense.gov] and underwriters [chubb.com] know what you don't want to.

  • by gweihir ( 88907 ) on Saturday June 11, 2022 @01:21PM (#62611882)

    Stupid humans, that is. Because no matter how hard the AI folks try, Artificial Stupidity will _never_ be able to match human stupidity. And we have plenty of stupid humans, and quite a few are also arrogant, violent, and crave power over others like nothing else. We do not seem to have these under control at all at this time.

    That said, stop being stupid and stop anthropomorphizing machines. There is no free will, no insight and no understanding in machines. Whether AGI can even be implemented is completely open. Research results so far give zero indication that it can be. Of course, that is a negative and there may still be some way nobody has looked at so far, but with the intense research into this direction over the last 70 years or so, it is at least pretty clear that no current approach will do it.

    Whether free will could be implemented in machines is even less clear. And consciousness (required for being "unfriendly")? It is not even remotely clear what these two are in the first place. Maybe these people should stop pushing highly speculative ideas as real threats and focus on actual threats instead?

    • 1. Whether AI is or can be conscious is irrelevant. Proof: you don't know whether any human other than yourself is conscious and it doesn't matter at all for your life. Only their actions and behavior affect you.
      2. There is no proof that humans have free will either. The physics of this universe are the same for organic and inorganic matter. Follow the evolution from single celled organisms to humans and tell me where and how the special sauce was added that gave animals/humans free will. Then tell me why A

      • by HiThere ( 15173 )

        To an extent it's justifiable to "assign a lot of properties to humans and deny those properties to inorganic life". The reason isn't anything mystical; it's that we have a long evolutionary history behind us, and that gives us characteristics that we don't come close to understanding, much less being able to emulate.

        Of course, it's a real question how much of that we'd really want to emulate in an AI. And AIs can already do things that are well beyond the capabilities of people, as well as failing dr

        • by gweihir ( 88907 )

          Of course, it's a real question how much of that we'd really want to emulate in an AI. And AIs can already do things that are well beyond the capabilities of people, as well as failing drastically, often incomprehensibly, on things that people find easy.

          Indeed. AI is a tool, not a replacement for people. Well, people are often used as tools, not as people, and in those cases AI is a valid replacement. But when the task is to think, no AI can do it and it will come up with the most spectacular (and sometimes dangerous) nonsense.

          Remember why IBM stopped the experiments with Watson in the medical field? The thing was, on average Watson could compete with a regular MD. But in some cases it killed the patient by making mistakes no MD would ever have made. And IB

          • by HiThere ( 15173 )

            Unnnn.. ..sort of. The AI would make mistakes that no human would make, but also humans make mistakes no AI would make. However, I don't think a jury would accept that argument. If one were to be even handed, though, it might nearly balance.

      • by gweihir ( 88907 )

        1. Whether AI is or can be conscious is irrelevant. Proof: you don't know whether any human other than yourself is conscious and it doesn't matter at all for your life. Only their actions and behavior affect you.

        Pseudo-profound bullshit. Try again. You may also want to look up what a "proof" actually is.

        2. There is no proof that humans have free will either. The physics of this universe are the same for organic and inorganic matter. Follow the evolution from single celled organisms to humans and tell me where and how the special sauce was added that gave animals/humans free will. Then tell me why AI can't have that sauce.

        Fundamentalist physicalist drivel. If you assume a fundamentalist quasi-religious stance as "truth" of course you arrive at nonsense.

        3. Define 'insight' and 'understanding'. Everybody seems to imply that they are something special, almost metaphysical, without properly defining what they are. I think you'll find that once you formalize what it means to 'understand' something, you'll also find that that is very attainable for AI running on a 'machine'.

        The through line in all of the above (and in all debates surrounding this, alas) is that people assign a lot of properties to humans and deny those properties to inorganic life by implying they are mystical, unique and inscrutable instead of rationally and precisely evaluating them.

        Hahaha, no. I actually understand what computers can do and what they cannot do. You obviously do not. And no, you cannot formalize "insight" and "understanding", you can only describe its effects. Yes, I happen to be qualified in that area. This is just more physicalist bullshit.

  • China (Score:5, Insightful)

    by GotNoRice ( 7207988 ) on Saturday June 11, 2022 @01:22PM (#62611886)
    If an "unfriendly AI" is actually created, it's likely to come from China at this point. They are pouring a very large amount of money into AI development, but lack any ethics whatsoever. They would not hesitate to weaponize AI.
  • Personally I don't think we will get to actual AI (sentient, self-aware, conscious, etc.), but the "AI" (systems making decisions based on algorithms) we have today is already a risk to humanity, and that risk is slowly being realized. The risk is delegation: letting algorithms make binding decisions without human oversight or (practical) exception. While offloading human decisions to algorithms can and does reduce cost, it also reduces humanity. How many times have you been screwed (or just gotten poo
    • by HiThere ( 15173 )

      That's a risk to individual and small groups. It's not an existential risk. They really are basically different concepts.

      • Not necessarily. People are very bad at designing algorithms that optimize what they actually want, rather than what they think they want.
  • Most definitely applies.
    The biggest threat to humanity is humans.

  • To reach such a conclusion you start by assuming that an AI would be more intelligent than a man, as if that were just a matter of adding more processors. We struggle to get one that can drive a car, and I suppose they have already thought about adding more processors. Or you assume that once you get to man-like intelligence, the AI would be able to reprogram itself for more intelligence. Well, you have man-like intelligence (hopefully), and you cannot design a more intelligent being, can you?

    Then it's assumed that a infinit

  • I'm assuming they're speaking to truly aware artificial intelligence. If so, we are so far from this that it can still be considered fantasy. Also, should it ever happen, there would be so much spotlight on it that I can't see how it would ever gain the sort of control that would lead to such a ridiculously catastrophic consequence. This would be "aliens landing in Washington D.C." level stuff.

    But if they're talking about the algorithms everyone calls AI, well I could probably see a few scenarios where s

  • An even greater threat is singularity-based power sources having catastrophic containment failures and the potential weaponization of relativistic propulsion technology. That might destroy the planet itself, not just life. I demand we start doing things about this and all the other problems related to far-future technology we haven't yet even begun to figure out how to build!
  • The people that think of AI as a threat make multitude false assumptions:
    1) That it will be a single AI. NOPE. Before we get a superhuman AI capable of killing mankind, we will build thousands, if not millions, of human-class AIs capable of outthinking a single human being, and before then we will make hundreds of millions of chimpanzee-class AIs that are not quite as smart as us.

    2) Even a computer that is marginally smarter than humans will not be smart enough to beat all of mankind. It is extremely unli

  • Possibly the war to wipe out thinking machines was really the Buterinian Jihad, not the Butlerian Jihad.

  • The Grumpy AI and the Cranky AI are way worse.

  • Think of the movies where unfriendly AI has threatened the human race: Terminator, War Games, Colossus: The Forbin Project. In all cases we stupidly gave them control of nuclear weapons.
    Simple solution: don't give them nukes, and make sure their power supply has an "off" switch.

  • I was waiting to come across some warning about not paying, "one M-I-L-L-I-O-N dollars!"

  • Meanwhile, people are stupidly working to replace themselves as a species.
  • AI, specifically neural networks, have to be trained through feedback loops. With each iteration, humans feed input to the AI, then signal to it whether its prediction was correct. Through thousands or millions of iterations, the AI "learns" the desired responses to complex inputs.

    If something goes wrong with this training process, it doesn't result in something "evil." It results in randomness. The only way to get an evil AI, is to train it to be evil.

    So can AI be evil? Yes, but only if its human creators
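The feedback-loop training described above can be sketched as a minimal supervised loop: the model predicts, the error signal says how wrong it was, and a gradient step feeds that correction back into the weights (one-parameter toy model with made-up data, not any real framework's API):

```python
# Minimal train-by-feedback loop: predict, measure the error, feed the
# correction back into the weight. Real networks do this over millions of
# examples; here a single weight learns that the target is 3 times the input.

data = [(x, 3.0 * x) for x in range(1, 6)]  # toy dataset: target = 3x
w = 0.0                                     # single learnable weight
lr = 0.01                                   # learning rate

for epoch in range(200):                    # "thousands of iterations"
    for x, target in data:
        pred = w * x           # the model's guess
        error = pred - target  # feedback: how wrong the guess was
        w -= lr * error * x    # gradient step on squared error

print(w)  # converges toward 3.0
```

This also illustrates the comment's closing point: corrupt the feedback (wrong targets, wrong sign on the update) and you get noise, not malice; the loop only ever learns what its training signal rewards.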

  • They're called corporations, they run on the legal system, not on silicon. And they're very good at altering their operating environment to benefit themselves.

  • ... before things could get that bad.

    AI in its current stage of development has economically practical applications but is philosophically insignificant. It doesn't shed light on human consciousness or psychology; the methodology has nothing to do with simulating human intelligence in any way, and is not informed by neuroscience (yes, that even includes *neural nets*), biology, psychology, or any of the social sciences . AI -- at least the commercial stuff -- is just a collection of ad hoc methods for usi
