AI's 6 Worst-Case Scenarios (ieee.org) 104

"Who needs Terminators when you have precision clickbait and ultra-deepfakes?" asks IEEE Spectrum: Hollywood's worst-case scenario involving artificial intelligence (AI) is familiar as a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieving sentience, and inevitably turn into evil overlords that attempt to destroy the human race. This narrative capitalizes on our innate fear of technology, a reflection of the profound change that often accompanies new technological developments.

However, as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, "AI doesn't have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem."

Their article presents six real-world AI worst-case scenarios that "could simply happen by default, unfolding organically — that is, if nothing is done to stop them." It includes the possibility of deepfakes and large-scale disinformation, as well as AI-enabled "predictive control" that ultimately robs us of our free will.

But it also presents an alternative worst-case scenario: that "we become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world."

Thanks to Slashdot reader schwit1 for sharing the article.
  • Will never happen. There is so much more incentive to use it for evil than there is to use it for good. Technological advances are always only used for evil. Everything out there today is a means to control you.

    Hell, even your washer and dryer are phoning home to give your overlords intelligence on your laundry habits, so that data can be used to control you.

    There is no such thing as a "good" when it comes to technology.

    • by robi5 ( 1261542 )

      > Technological advances are always only used for evil

      Except for civilization, lmfao

      • A 7th scenario:
        half of the humans perish because an AI removes the warning label on all bottles of bobcat urine.

  • by Baron_Yam ( 643147 ) on Monday January 10, 2022 @08:13AM (#62159893)

    We are limited by evolution and biology. Mind you, nature's created something pretty spectacular in the human brain, but not only is there no reason to believe we couldn't replicate that - we could improve upon it.

    So... say we figure it out. Build an artificial mind. Except, it maybe doesn't need sleep, doesn't forget, has superior memory indexing, and can hold far, far more in its 'working memory' simultaneously. No mental illness. No age-related decline.

    What point is there in being evolved thinking meat when there's a guy made of titanium and silicon just down the street who can do everything you can, think every thought you've ever thunked (tm), and in fact knows and understands everything any human has ever known or understood? And we're not talking humanoid calculators here, but thinking beings that can appreciate and create art, too.

    • by Anonymous Coward

      ... it maybe doesn't need sleep, doesn't forget, ... No mental illness. ...

      If it weren't for sleep and forgetfulness, amongst other things, I for one would have become very mentally ill long ago.

    • by gweihir ( 88907 )

      There is no risk of that happening. Your assumptions are flawed.

      • by ranton ( 36917 ) on Monday January 10, 2022 @09:33AM (#62160219)

        There is no risk of that happening. Your assumptions are flawed.

        No risk of that happening? That is hopelessly optimistic.

        Thinking that time travel is impossible is reasonable, considering leading researchers believe it is impossible and we haven't seen any examples of it. But we have one example of a sapient machine (humans), so regardless of what leading researchers think is possible you must acknowledge it is possible. Perhaps there is something about biological machines which cannot be replicated in our current digital architectures, but that is far from proven.

        It is simply unreasonable to believe there is no risk of an AI surpassing human cognitive ability, regardless of how unlikely you may think it is.

        • Perhaps there is something about biological machines which cannot be replicated in our current digital architectures, but that is far from proven.

          And also irrelevant except when determining time scales, since new architectures can and (given opportunity) will be created. All we can say for sure is that AI can't do that now, but we don't even know that existing approaches won't serve to accomplish it with enough processing power, storage, bandwidth etc.

        • It's physically impossible. Computers and the human mind are not analogous despite what you may read. The human mind doesn't make discrete calculations. A binary digital computer could never, ever, become self-aware. So it would need to be another technology. Quantum computers, maybe. But all research today with computers towards AI is worthless.
          • by ranton ( 36917 )

            It's physically impossible. Computers and the human mind are not analogous despite what you may read. The human mind doesn't make discrete calculations. A binary digital computer could never, ever, become self-aware.

            You are making two mistakes:

            1. You are making the assumption that self awareness is dependent on the use of discrete vs continuous change. I guess that could be true, but it isn't proven. I couldn't find any research papers claiming this in a quick Google search. It is likely just something you read somewhere, or some assumption you are making based on something you read or heard.

            2. You are under the false belief that we know for sure whether the brain operates in continuous form. That is the prevalent theo

            • There's a third mistake - assuming that computing technology won't be developed further. There's a lot of work going into memristors right now, and they could be fairly revolutionary for making hardware that's more brain-like rather than emulating it in software.

              And there may (actually, almost certainly will) be more developments after that.

              • by gweihir ( 88907 )

                Nope. They cannot. Computers with memristors remain simple digital computers, just a bit cheaper and maybe faster.

                • Oh well, that's settled it then. Thank god you were here to tell us all those researchers are wrong and should just stop working on their memristor projects.

                  • by gweihir ( 88907 )

                    Oh well, that's settled it then. Thank god you were here to tell us all those researchers are wrong and should just stop working on their memristor projects.

                    And why should they stop? Cheaper and faster are both worthwhile goals. There may also be smaller and lower energy in the picture. Are you stupid?

                    What is not in the picture is an extension of the nature of computations that can be done. It is still ye old Turing machine and that one is and will remain as dumb as bread.

            • by gweihir ( 88907 )

              Actually, the argument is valid:

              1. If there is free will, digital computers cannot have it. While humans often do not use free will, there are pretty solid indications (not proof) that it exists and at least some humans can use it.
              2. As to consciousness, unless consciousness is a purely passive observer, again, digital computers cannot have it. Some morons try to get around that by claiming that "consciousness is an illusion", but that claim is so obviously bogus that I will not even grace it with a counter-argument.

              • Some morons try to get around that by claiming that "consciousness is an illusion", but that claim is so obviously bogus that I will not even grace it with a counter-argument.

                Your arrogance is disappointing.

                1. There is no proof for free will. You said it yourself. 'Solid indications' are useless if you want to prove that 'digital computers' cannot have something.
                2. Fact: you do not know and are unable to know that anybody other than yourself has consciousness (as you would define it in this specific case). You are only able to observe their matter and its behavior. This clearly does not matter to you and you regard them as conscious nonetheless. It would be absurd (in the logica

                • by gweihir ( 88907 )

                  Some morons try to get around that by claiming that "consciousness is an illusion", but that claim is so obviously bogus that I will not even grace it with a counter-argument.

                  Your arrogance is disappointing.

                  You mistake common sense for arrogance. The common sense angle here is, rather obviously, that without free will, we are all mindless observers only and the whole discussion is pointless. Apparently you lack that common sense or make a pointless argument to try to assert your intellectual superiority.

                  Seriously, these are _old_ questions. Philosophy (which has studied these for millennia) has a justifiable tendency to tolerate extreme viewpoints, to foster argument and different views. But there is no reason

                  • Think of a computer that could do anything a four-year-old child could do. That seems like an achievable goal over the next few decades. Four-year-old children do not think very deeply about anything.

                    But if it was achieved, the computer would be nothing like a four-year-old child. It would be a four-year-old child that could effortlessly solve differential equations, because computers can already do that. It would have Wikipedia in its memory, even though it only had a limited understanding of the meanings,

                  • Everything you said is so obviously bogus that I will not even grace it with a counter-argument.

                    Nah, I'm sure. That is definitely arrogance.

              • I'd be curious to see your number 3. Genuinely interested.
                • by gweihir ( 88907 )

                  The brief version is like this: Automated theorem proving can, in theory, find all mathematical proofs in a given theory, say, in geometry or first-order logic or abstract algebra. However, when you look at the existing approaches (and this was a pretty advanced field 30 or so years back when I studied it), they cannot get deep by themselves. The search space simply explodes exponentially with a high exponent (very fast), so fast that these systems cannot even do relatively simple proofs by themselves (well,
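                  To make that blow-up concrete, here is a minimal sketch; the branching factor of 10 (applicable inference steps per node) is an assumed figure for illustration, not one from the comment above:

```python
# Illustrative only: nodes visited by a blind proof search with
# branching factor b and proof depth d.
def search_nodes(b: int, d: int) -> int:
    """Total nodes in a complete search tree of branching factor b and depth d."""
    return sum(b ** i for i in range(d + 1))

for depth in (5, 10, 20, 40):
    print(f"b=10, depth={depth}: {search_nodes(10, depth):.3e} nodes")
```

                  Even at this modest branching factor, depth 40 already means on the order of 1e40 nodes, which is why unguided provers stall on proofs that are only moderately deep.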

                  • Interesting. Thanks. I kind of hope you're right. It would be cool for biological entities to be capable of something that computers can't do.

                    I still think it's more likely that we'll eventually develop computers that are more neurological in nature, and a combination of clever architecture and algorithms will duplicate/surpass human capabilities. I mean, there's only a hundred billion neurons in the human brain. We're not at that transistor count, yet, but we're getting close. Also, neurons pulse at
                    • by gweihir ( 88907 )

                      Well, we will see. What I presented is just a plausibility argument, not proof. Can go either way.

                      But as to neurons, a neuron is a lot more than a single transistor. More like a whole somewhat simple computer with, say, a million transistors or so and software and storage on top. And then you have synapses, which are also a lot more complex than a single transistor and there are apparently around 15k per neuron on average. Now, all these synapses connect to other neurons and, say, you have 1M options per sy
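                      A rough back-of-envelope using the figures from this subthread (~100 billion neurons from the earlier comment, ~1M transistors per neuron, ~15k synapses per neuron); the per-synapse transistor count below is an assumed placeholder, since the comment is cut off before giving one:

```python
# Back-of-envelope "transistor equivalent" of a brain, using the figures
# quoted in this thread. TRANSISTORS_PER_SYNAPSE is an assumption.
NEURONS = 1e11                     # ~100 billion neurons
TRANSISTORS_PER_NEURON = 1e6       # "a whole somewhat simple computer"
SYNAPSES_PER_NEURON = 1.5e4        # ~15k per neuron on average
TRANSISTORS_PER_SYNAPSE = 1e2      # placeholder guess

total = NEURONS * (TRANSISTORS_PER_NEURON
                   + SYNAPSES_PER_NEURON * TRANSISTORS_PER_SYNAPSE)
print(f"{total:.1e} transistor-equivalents")   # ~2.5e17
```

                      Even on these very soft numbers, a brain-equivalent comes out several orders of magnitude beyond any single chip's transistor count, which is the comparison being argued over.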

            • We now know with certainty that the axons are not the only processing units in the brain.

              We also know that the idea of axons either firing or not firing, a binary computation, is not the only computation going on in the brain.

              We now know that the dendrites, which were long considered mere connections, have multiple computation blocks stacked within each of their arms, and they are using a range of voltages to pass information from one block to the next before it ever gets to the axon.

              The dendrites ar

          • by vlad30 ( 44644 )
            The article actually gives examples that are almost there: systems that give fake information and distribute it widely.

            E.g. Facebook: imagine if there were a deepfake of a world leader declaring war and it went viral due to their "A.I." algorithms.

        • Thinking that time travel is impossible is reasonable, considering leading researchers believe it is impossible and we haven't seen any examples of it.

          That we know of. Just because we haven't seen any examples of time travel doesn't mean it hasn't happened. There is always the chance some government is working in secret, testing a time travel machine. They just haven't told us about it as yet.

          The other alternative is those who have traveled back in time are keeping to the shadows so as not to aff
          • by ranton ( 36917 )

            I said thinking time travel is impossible is reasonable, not that the lack of examples or expert consensus makes the statement true. The lack of both, though, at least makes the claim that it is impossible a reasonable one.

            The same cannot be said for any claim that AI could never reach human levels of cognition. That is an unreasonable claim.

            • by gweihir ( 88907 )

              Actually, it is reasonable to assume that _observing_ time-travel is impossible. That is a bit different. If you use the many-worlds model, for example, time-travel is absolutely no problem, but you cannot observe it. You can only indirectly conclude you have done it yourself, and people outside cannot even do that.

        • by gweihir ( 88907 )

          There is no risk of that happening. Your assumptions are flawed.

          No risk of that happening? That is hopelessly optimistic.

          Nope. That is the result of following the field for about 30 years now. There is no reason to believe AI can ever have insight or understanding.

          But we have one example of a sapient machine (humans)

          Nope. The claim that humans are mere biological machines is religion ("Physicalism"), not Science. Everything we have on the Science side points to humans being more than machines and having some as of yet not understood quality that machines do not have and cannot have. For central things like Consciousness, there is not even a mechanism in known Physics. For int

          • by ranton ( 36917 )

            The claim that humans are mere biological machines is religion ("Physicalism"), not Science.

            Physicalism is not religion. How do you make that leap? It is basically just having a view of reality which doesn't include unmeasured non-physical elements. That doesn't mean it could not incorporate changes to our understanding of reality. We didn't have a concept of nuclear or electromagnetic forces at one time, but incorporated them into our physical models once they were discovered. If there are as-of-yet undiscovered forces, it doesn't make them supernatural. They are just natural forces we haven't d

            • by gweihir ( 88907 )

              Sorry, but to claim consciousness is an illusion is pseudo-profound bullshit. It requires consciousness to have an illusion.

          • by ranton ( 36917 )

            But we have one example of a sapient machine (humans)

            Nope. The claim that humans are mere biological machines is religion ("Physicalism"), not Science. Everything we have on the Science side points to humans being more than machines and having some as of yet not understood quality that machines do not have and cannot have.

            In addition to what I said in my previous reply, all this talk about physicalism vs dualism doesn't change the fact that it is intellectually dishonest to say there is zero chance of AI replicating human cognition just because you don't agree with physicalism. If you are wrong, then it is a near certainty that AI will eventually reach human cognition (barring an extinction level event before we figure it out). So while you may still feel the risk is low because of your confidence in dualism, you cannot say

            • by gweihir ( 88907 )

              I said "there is no reason to believe". That is an argument for absence of a reason not an argument of impossibility of a reason. And hence I am very much not "intellectually dishonest". What _is_ intellectually dishonest is claiming that Physicalism is Science. It is not. It is pure belief at this time with no Scientific substance.

              Again, I am pointing out absence of proof or strong indication. Hence at this time, it is a pipe-dream. Sure, Occasionally (rarely) a pipe-dream turns out to be accurate. There s

    • It's created a brain that uses about 20 watts yet can outperform the most powerful computer at ensemble tasks. By that I mean: yes, an AI can outperform us at chess or Go or even, now, some visual recognition tasks. But that's all those AIs can do. Need them to tie a shoelace? That'll be another few weeks/months/years of training, using probably megawatt-hours of power, and that can only be done from scratch. They can't learn chess AND learn Go, tie shoelaces, etc. ANNs just don't work that way.

      • by robi5 ( 1261542 )

        > yes, an AI can outperform us at chess

        Though not at the 20W mark. Which is only a partially meaningful metric, as the brain isn't metabolizing in isolation. Someone else may add the energy needs of the body over the decades of a grandmaster's lifetime, including maybe the energy requirements of the flights incurred for chess tournaments and preparations. Still, it's remarkable how much the brain achieves with its 20W at chess, a 100% synthetic, full-information game where computers should most excel.

      • Need them to tie a shoelace? That'll be another few weeks/months/years of training using probably megawatt hours of power and that can only be done from scratch.

        So just like humans? How long does it take a child to learn how to tie shoelaces? How many years?
        • How long does it take a child to learn how to tie shoelaces?

          Not very long. A few days, and not working continually for those days.

        • by Viol8 ( 599362 )

          A few hours in total, spread over time. In the meantime it's also improving its speech, movement, maths, Lego skills, etc.

    • by dargaud ( 518470 )

      So... say we figure it out. Build an artificial mind. Except, it maybe doesn't need sleep, doesn't forget, has superior memory indexing, and can hold far, far more in its 'working memory' simultaneously. No mental illness. No age-related decline.

      Two things. About sleep: there's a lambda-calculus theorem which states (loosely put) that any sufficiently complex system needs garbage disposal. As in Java... or sleep.
      Also, about mental illness: some tests on current AIs, where you disable parts of them in different ways, lead to 'mental illnesses' analogous to some human ones (an OCR system that suddenly experiences dyslexia, a face recognition system that won't recognize a subset of people anymore, etc.).

    • We are limited by evolution and biology, but we're also enabled by those things. Our consciousness is inextricably linked to our bodies. The human mind is an organically grown hodge-podge of subroutines that evolved over many millions of years to help us eat and fuck. What will motivate an artificial mind to evolve when it doesn't need either? How will it evolve awareness when it doesn't need to be aware to avoid predators, etc? And evolve it must - the brain and the mind are much too complex to be "designe

    • by noodler ( 724788 )

      Mind you, nature's created something pretty spectacular in the human brain, but there's not only no reason to believe we could replicate that - we can improve upon it.

      There is a glaring hole in your argument and it's called motivation. Nothing you said suggests that this machine will have any of the built-in drives that humans have. For instance, huge swaths of our culture obsess over sex and procreation. Without the right brain structures, without the right hormones, etc., this AI will never think like humans, because human brains are deeply linked to our biology and ecology.

      And we're not talking humanoid calculators here, but thinking beings that can appreciate and create art, too.

      How can an AI appreciate art that is made to resonate with human brains, with human lives?
      And

  • There is no "actual good [AI] can do in the world." Technological escalation is a root cause of societal and ecological collapse. The actual good thing to do in the world, would be to not have any of it the first place.
    • by DarkOx ( 621550 )

      Then do your part. Strip off your clothing, and head into the forest with nothing but a sharp stick and some rocks. Stop posting to Slashdot!

      There have been plenty of ecological catastrophes without much 'technological escalation'.

      Think Easter Island. Okay, so they had 'some technology', but it's a good example of something we have a little bit of record and understanding of; because of technology, we can infer some things about what happened from carvings and other leavings. However, I think archeologists would tell you it's a certainty that various hunter-gatherer groups had over-consumed local resources and sparked at least localized ecological catastrophe many times before and after.

      • Then do your part. Strip off your clothing, and head into the forest with nothing but a sharp stick and some rocks. Stop posting to Slashdot!

        That's a nonsensical response. Just like robotics makes automation capable of eliminating more jobs, new technologies including AI make people capable of doing not just more damage, but new kinds of damage. For example PFAS and other "forever chemicals" couldn't physically be created by mankind until recently.

        • by DarkOx ( 621550 )

          It's not nonsensical at all. I never denied technology could enable doing more damage. Actually, I implied as much when I suggested we often don't do enough study before widely deploying things!

          The grandparent, though, suggests that technology is the root cause of collapse. It's not; it is the only reason there has not been a collapse long before any of us got anywhere near the ability to write a message on a keyboard and have it displayed on a website accessible all around the world!

          You are only worried about PFAS i

          • Will technology play a role? Of course it will. Will it be a root cause? No, of course not; it can't be, because it is in fact the literal bedrock all of what we think of as society is built upon.

            I find it odd to think of a pin as the only root cause of a bursting balloon, as if the inflation had nothing to do with it. It's true that inflation is the bedrock of what we think of as a proper balloon, and it still plays a role in the bursting. A future collapse that kills off 5 billion humans wouldn't have been possible in a pre-technical age. And the more the balloon is inflated, the more fragile it becomes.

      • by HiThere ( 15173 )

        It all depends on how you define "technology". If you limit it enough, then you can avoid all technology caused (or abetted) catastrophes. But you've got to take it back to before the invention of fire, and you might also need to include noise-makers, like drums, or basically any technology that can start a stampede (though I'm a lot less certain that any of those ever really helped exterminate a species).

        But you definitely need to include irrigation and goat herding.

      • I think archeologists would tell you it's a certainty that various hunter-gatherer groups had over-consumed local resources and sparked at least localized ecological catastrophe many times before and after.

        It's not so [americanscientist.org] certain [sci-news.com].

    • Congratulations! You literally jumped all the way to one of the worst-case scenarios without even trying!

      6. Fear of AI Robs Humanity of Its Benefits

      There is a significant amount that AI can do to benefit humanity. A big one recently was accurately predicting/modeling 3D protein structures [slashdot.org], which was described as "a sea change for structural biology."

    • Irony, much?

      Feel free to go back to living on the savanna with only your hands for tools (tools are technology too) and see how long you last. Maybe a day before a lion eats you? You'd certainly starve in the long term regardless.

    • There is no "actual good [AI] can do in the world." Technological escalation is a root cause of societal and ecological collapse.

      The actual good thing to do in the world, would be to not have any of it the first place.

      How do you, say, audit the source code of an entity that creates/edits its own source code? Mind you, whole countries have been excluded from contributing to this or that because ..."The sheer weight of the effort will unhinge the good faith intent of a project due to the ultimately centralized/decentralized to a fault nature of their logic. so, a mechanism that will encourage the disclosure of source code, the fair and democratic decision on who will contribute to said source code and a further democratic m

  • Does anyone else think it's a bad sign that this list is basically "AI will do what we are already doing, but faster and better", rather than a list of novel bad outcomes that AI could enable that aren't currently possible?

    I'm not sure if this is just a sign of impoverished imagination; or a (quite possibly correct) realization that the plans that team ad-tech and similar have for us are even less pleasant than getting exterminated by skynet or turned into paperclips by a blinkered expert system just opt
    • I'm not sure if this is just a sign of impoverished imagination; or a (quite possibly correct) realization that the plans that team ad-tech and similar have for us are even less pleasant than getting exterminated by skynet

      wat

      Reading a comment like that is even less pleasant than having you exterminated by skynet before you wrote it

    • by gweihir ( 88907 )

      There is no credible indication Artificial Ignorance can do anything humans cannot do. All we have are indications that some things may be doable by AI primarily cheaper, sometimes faster and generally not better but often still good enough. I am speaking about a capable, competent and motivated human here, not about an average one. For the majority of average and below people doing a desk-job or an industrial job, AI is a real threat. Not all of those will vanish, but that one warehouse worker that now sup

      • Artificial intelligence can do scale and persistence. You can set a fairly dumb 'minder' to observe, 24/7, anyone you deem worthy of surveillance.

      • by ranton ( 36917 )

        There is no credible indication Artificial Ignorance can do anything humans cannot do. All we have are indications that some things may be doable by AI primarily cheaper, sometimes faster and generally not better but often still good enough.

        There are certainly credible indications that AI can do things humans cannot do. AI already does things humans cannot do. These are all very narrow applications in a narrow and well defined domain (like beating us at chess), but it is simply false to say AI cannot do anything humans cannot do.

        • by gweihir ( 88907 )

          There is no credible indication Artificial Ignorance can do anything humans cannot do. All we have are indications that some things may be doable by AI primarily cheaper, sometimes faster and generally not better but often still good enough.

          There are certainly credible indications that AI can do things humans cannot do. AI already does things humans cannot do. These are all very narrow applications in a narrow and well defined domain (like beating us at chess), but it is simply false to say AI cannot do anything humans cannot do.

          Nope. It is completely true. Sure, a human may take longer and/or have to write some non-AI code to get it done. But AI can do absolutely nothing that cannot be done at least by some humans as well. The only exception is that sometimes humans will not live long enough, but that is an argument related to quantity, not quality.

      • My answer to that is that you can substitute "AI" for "forklift" or "horse-pulled wagon" so the indicators will be more obvious.

    • I think it's a bad sign that this clickbait article (You won't believe these SIX BAD THINGS that AI is going to do!!1!!1) which specifically calls out precision clickbait as being WORSE THAN MURDER BY ROBOT is featured on the front page of Slashdot, and worse, being taken seriously by anyone.

  • Interesting list. (Score:4, Insightful)

    by jd ( 1658 ) <imipak AT yahoo DOT com> on Monday January 10, 2022 @08:32AM (#62159955) Homepage Journal

    Deep Fakes by humans, rather than AI, produced a war in Iraq but reduced an earlier war in Europe. So we know how effective disinformation through fakes is, when it comes to warfare.

    The dangerous race to the bottom is also very obvious amongst humans. The placing of non-secure SCADA networks on the public Internet, for example, or the use of Microsoft's OSes in government.

    As for the fear of technology, I have held for many years now that if you look at it objectively across history, the rate of change in technology required to produce fear is a function of the rate of change in society. If society changes quickly, as is the case these days, the rate of change in technology needed to instill fear is much larger than it was when society changed slowly. Ultra-conservative societies that barely change at all require almost no change in technology to instill fear.

    On the other hand, rapidly-changing societies have a fear of very slow changes in technology.

    In consequence, I contend that the more progressive and adaptive a society is, the faster technology needs to change for that society to feel comfortable. The two need to progress equally and in compatible directions.

    • by gweihir ( 88907 )

      Deep Fakes by humans, rather than AI, produced a war in Iraq but reduced an earlier war in Europe. So we know how effective disinformation through fakes is, when it comes to warfare.

      That is ass-backwards. The war was already a decided thing. Deep Fakes helped sell it, and it was done badly. Putin said, as to WMDs in Iraq, "I would have found some". That is how you make sure, after you have lied, that you have a good chance of not getting caught in the lie.

      • That was Powell's misunderstood complaint: "At least when you make me sell the war with an excuse, make sure you can back up the excuse".

        • MAKE him? Nobody MADE him. He CHOSE to sell the war with lies, which by the way, he knew were lies [theintercept.com].

          Colin Powell is also the one who advised Hillary Clinton to run a private email server [nytimes.com] explicitly for the purpose of avoiding discovery. That's not a free pass for Hillary or anything, it's just further proof that Powell was a liar who knew the importance of covering up lies.

          • You do have trouble understanding what I write. Colin Powell was a soldier in the platonic guardian tradition: decisions of war are taken for state interest, and are then sold with a public cover story. The public story may be true or partially true, but it is not the real reason for the war. In that respect you could always call them lies. Even if Saddam had WMDs, Powell would always consider it his job to sell the cover story, regardless of what he thought about it. So all 'made him' means is he was told to do it.

            • The public story may be true or partially true but it is not the real reason for the war. In that respect you could always call them lies.

              I do, when they are lies. Why would I call them something else?

              So all 'made him' means is he was told to do it.

              Yeah, that's revisionist bullshit. His first duty is to The People. He chose to betray that duty.

              It was never justified to trust what Powell said.

              It's never justified to make excuses for his lies to The People either.

              • Listen, I'm explaining how it works, and that asking the wrong question is different from lying, while you are falling back on general judgement. I am saying Powell has had a lot of experience with cover stories, and some of them were true, but they were still cover stories. What would you have said if Saddam still had chemical weapons? That Powell wasn't lying?

                Let me explain what a military decision maker asks when you tell about chemical weapons: he asks 'how much ability to project power does it represent'.

            • by gweihir ( 88907 )

              The offending part was that nothing of the cover story held up at all.

              Indeed. And that is what Putin basically called massive incompetence on the side of the US. If you lie, at least have the decency to put some thought and effort into it.

    • "Deep Fakes by humans, rather than AI, produced a war in Iraq but reduced an earlier war in Europe".

      Actually very shallow fakes. Very shallow indeed.

    • The dangerous race to the bottom is also very obvious amongst humans. The placing of non-secure SCADA networks on the public Internet, for example, or the use of Microsoft's OS' in government.

      Those aren't even the best examples. A better one is the relentless pursuit of cost reduction in consumer products of all kinds. But it's also an example of why this AI fear is stupid. We don't need AI to race to the bottom. Humans are doing it already. That's why we need consumer protection laws in the first place. The modern corporate mindset is to make as much profit as possible at any cost which the corporation doesn't have to bear itself. Cost to society, cost to humanity, cost to workers, cost to nati

    • by HiThere ( 15173 )

      I think your hypothesis is bolstered by an inherent bias in the data used to present it. Those at the top of a society approve of technology that increases their control over it. Everyone else approves of technology that makes their lives easier or more comfortable. Those close to the bottom fear technology that increases the control of those at the top.

      There's a lot more folks close to the bottom, but they don't tend to leave many historical records.

  • by earl pottinger ( 6399114 ) on Monday January 10, 2022 @08:46AM (#62159991)
    All you need to do is read the first chapter of "The Two Faces of Tomorrow" by James P. Hogan. The AI was doing exactly what it was told to do, but being an AI it came up with a solution that humans did not think of. In other words, the real danger of AIs is that they will come up with intelligent solutions to the problems we tell them to solve; it is just that they will not be the ones we are ready for.
    • by gweihir ( 88907 )

      We have no credible indication that AI can do something like that.

      • If software is writing its own fitness functions then it might be possible. But those fitness functions are still going to be produced as a result of training data selected by a human, so at some point that's where the buck stops.
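        A toy sketch of that point: even when candidates evolve against a fitness function, that function is anchored to human-selected examples. Everything here (the data, the fitness definition) is hypothetical, purely to illustrate where the buck stops:

```python
import random

# Human-chosen training data: (value, label) pairs; label 1.0 marks
# the examples a human decided were "good".
human_labeled = [(0.1, 0.0), (0.5, 1.0), (0.9, 0.0)]

def fitness(x: float) -> float:
    # "Learned" fitness: closeness to the human-approved example(s).
    return -min(abs(x - ex) for ex, label in human_labeled if label == 1.0)

# Simple evolutionary loop: select the fittest, mutate, repeat.
population = [random.random() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [p + random.gauss(0, 0.05) for p in parents for _ in range(2)]

print(f"best candidate ~ {max(population, key=fitness):.2f}")  # drifts toward 0.5
```

        However the loop mutates, the optimum it converges on (0.5 here) was fixed the moment a human labeled the data.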

      • We have no credible indication that AI can do something like that.

        Maybe if you had an AI to help you think better...

    • As Hogan points out in the book (and elsewhere), there are two obvious cases.

      1. AI cannot tell us anything we couldn't think out for ourselves just as quickly and accurately.

      2. AI can solve problems far beyond our ability; or at least solve 100-year problems in minutes.

      In the first case, we don't need them. In the second case, we can't afford to trust them; not because they have "evil" intent or even mean to deceive us, but because we don't understand what they are thinking and taking into account.

      You may s

      • Chess programs can, as they are fancy minimax.

        Neural nets cannot do so easily.

        • by HiThere ( 15173 )

          *Some* chess programs can, as they are fancy minimax. That was true of the chess programs of 1960 and 1980. That probably wasn't true of Deep Blue. That isn't true of Alpha0.

          This isn't to say that all chess programs don't include a fancy minimax, but I believe that the really good ones are not ruled by it. I believe they use it only for short-range projections, perhaps to a depth of 7 or 8 moves.

          Or, alternatively, you could reframe the answer and instead say that most of what you do is just being a fan

          • Alpha0 is just a fancy minimax. It uses AI to determine which branches of the tree are most important to search, and also to evaluate the position at any given node of the tree.
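            For what it's worth, a minimal sketch of the shape being described: plain minimax, with a stand-in "learned" evaluator both scoring leaves and ordering branches. The toy game and `net_eval` are made-up placeholders, not AlphaZero's actual interfaces (which use Monte Carlo tree search guided in a similar way):

```python
from typing import List

def legal_moves(pos: int) -> List[int]:
    return [pos + 1, pos - 1, pos + 2]        # toy move generator

def net_eval(pos: int) -> float:
    # Stand-in for a learned value network: prefer positions near 10.
    return -abs(10 - pos)

def minimax(pos: int, depth: int, maximizing: bool) -> float:
    if depth == 0:
        return net_eval(pos)                   # the "net" scores the leaf
    children = legal_moves(pos)
    # Net-guided ordering: examine promising branches first (in a real
    # engine this is where most of the tree gets pruned away).
    children.sort(key=net_eval, reverse=maximizing)
    scores = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

print(minimax(0, depth=4, maximizing=True))    # best reachable score
```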

      • by jvkjvk ( 102057 )

        >But... such programs cannot explain to us - even to grandmasters - how they decided on their moves.

        I thought that it was because, statistically, with that board position, the move chosen has the highest probability of generating a win? Or am I missing something?

    • I'd also like to suggest "The Fear Index" by Robert Harris. Perhaps not quite so convincing at a technical level, but absolutely convincing on the human level. Even quite clever people tend to overlook that computers are utterly unlike us. Human brains - and in particular conscious minds - are just a small outcrop of a single organic whole that comprises the forebrain, midbrain, hindbrain, spinal column, nervous system, endocrine system... and everything those are tied into, including all the organs and pro

      • by HiThere ( 15173 )

        Well, that's one way that things could be built, but I don't think it would work. AI programs that are more than extremely narrow pattern recognizers (of whatever quality) need to have "instincts", i.e. built-in rewards and penalties. But you are absolutely correct when you say that they won't be the same as the ones that we have. We couldn't build those in even if we wanted to, because we don't know what they are... not in sufficient detail.

  • Worst case scenarios - for whom or what? I would guess that it is only human individuals that deem human extinction or slavery under some master entity as a bad, or even noteworthy, thing.

    All animals and the rest of nature, the planet and cosmos would be quite indifferent to human extinction or enslavement. If we blow up the planet in the meantime, they would probably like to have a word first.

  • You are saying AI can rob us of our free will? When have we had free will? Oh, I see. They mean it can rob rich, white people of their free will.
  • TLDR Version (Score:4, Informative)

    by Comboman ( 895500 ) on Monday January 10, 2022 @08:55AM (#62160031)

    1. Deep fakes/misinformation

    2. Race to the bottom/taking humans out of the loop

    3. End of privacy/free will

    4. Social media addiction

    5. Biased AI

    6. Fear of AI prevents using it for good

    • Scenario 3 is already unfolding rapidly. "The elites have to take full control over everything we do to make sure the Trumpians and the Putinians cannot get organized!"

    • by AmiMoJo ( 196126 )

      4. Social media addiction

      Sex addiction will be even worse. Once they develop an even halfway decent sexbot, it will be used to abuse customers as much as possible for profit. Fluids more expensive than printer ink, DLC content, and every behaviour designed to keep the user engaged and spending money. Like social media and mobile games, but with even more direct ability to stimulate the pleasure centre of the brain.

  • The world has never learned collaboration. All our social models are built upon the foundations of exploitation. Is there some risk of that changing or something? We don't understand the panic.
  • "inevitably turn into evil overlords that attempt to destroy the human race."

    Who wouldn't?

  • by WaffleMonster ( 969671 ) on Monday January 10, 2022 @10:56AM (#62160651)

    AI has only one salient purpose and that is aggregating power into the hands of the ever fewer. Look at the companies doing AI today and how they are using it. That speaks for itself.

  • the pandemic is not over. Many of us ain't ready for doomsday speculation. How about an article on AI generating pretty landscapes with rainbows and butterflies.

  • Drowning in AI-generated bullshit is a pretty crappy way to go extinct, but it seems like a fairly likely outcome.
  • I guess that's a very naive statement, due to the decentralised nature of computing, but it holds some merit.

    If we reach a point where we are sure that AI is calling the shots, then a "regression" away from using computers, is the obvious answer.

    We are already seeing a vanguard of people who are becoming refuseniks - dumping social media, for example.

    I'm fairly sure that if governments start to see human control slipping, in favour of AI - that the beast they unleashed is surpassing and controlling THEM - y

  • ... people do. It is describing how people, corporations and governments could use a brand new tool, so it is not the tool that you should be afraid of, but the people using it. If you have dangerous people in those positions, then you are in danger if they use other tools too (nuclear weapons, economy, market and consumer manipulation, fake news or creating denialism campaigns, to name a few).

    The scenario where an AI gets a conscience, an independent will, and plenty of tools to affect the real world se
  • Facebook uses AI to promote content that's 'engaging', including content that's extreme, polarizing and divisive.

    Then they censor some of that content because of public reprisal. Content that would have disappeared into oblivion on its own if it hadn't been amplified by their AI.

    "But it also presents an alternative worst-case scenario: that "we become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world.""

    What if Facebook didn't us

  • More like "Neural Networks 3 Worst Case Scenarios and a bunch of padding because nobody will read a list of 3".
    Let's go over them, we have
    #1 "Deep Fakes" okay a world where even reasonable people have to be checking for blending artifacts on every news report is bad yes.
    #2 "The Race to Bargain Basement Terminators" okay I can see the danger of cheap shitty robots with guns running amuck.
    #3 "An over dramatic way to say too much content tailoring" Yeah a world where two people can't look up the same thing
  • Would love to be a witness to the human race going down to machines.

"Why should we subsidize intellectual curiosity?" -Ronald Reagan

Working...