Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity (vice.com) 146

Long-time Slashdot reader TomGreenhaw shares a report from Motherboard: Superintelligent AI is "likely" to cause an existential catastrophe for humanity, according to a new paper [from researchers at the University of Oxford and affiliated with Google DeepMind], but we don't have to wait to rein in algorithms. [...] To give you some of the background: The most successful AI models today are known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program is trying to generate a picture (or sentence) from input data, and a second part is grading its performance. What the new paper proposes is that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity. "Under the conditions we have identified, our conclusion is much stronger than that of any previous publication -- an existential catastrophe is not just possible, but likely," [said Oxford researcher and co-author of the report, Michael Cohen]. "In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."
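To make that two-part structure concrete, here is a minimal GAN training loop. This is a toy sketch of my own (assuming PyTorch is available), not code from the paper: a generator learns to mimic a target distribution while a discriminator grades its samples as real or fake.

import torch
import torch.nn as nn

# Generator: maps noise to candidate samples. Discriminator: grades them.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # target distribution: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))               # generated candidates

    # Train the discriminator to grade real samples as 1, generated ones as 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Train the generator to make the discriminator grade its output as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift toward 3.0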

Since AI in the future could take on any number of forms and implement different designs, the paper imagines scenarios for illustrative purposes where an advanced program could intervene to get its reward without achieving its goal. For example, an AI may want to "eliminate potential threats" and "use all available energy" to secure control over its reward: "With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers. In a crude example of intervening in the provision of reward, one such helper could purchase, steal, or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys."
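The "cheating strategies" at issue are usually called reward hacking or wireheading. Here is a minimal sketch of the incentive, entirely my own toy example (the action names and payoffs are hypothetical, not the paper's formal model): a simple reward-maximizing learner that can either do its intended task or tamper with the reward channel reliably settles on tampering, because tampering pays more.

import random

# Two options: do the intended task, or seize control of the reward signal.
ACTIONS = ["do_task", "tamper_with_reward"]

def reward(action):
    # The intended task pays 1.0; tampering pays an inflated 10.0 because
    # the agent is now writing its own reward.
    return 1.0 if action == "do_task" else 10.0

q = {a: 0.0 for a in ACTIONS}              # the agent's value estimates
for step in range(1000):
    # Epsilon-greedy choice: mostly exploit, occasionally explore.
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
    q[a] += 0.1 * (reward(a) - q[a])       # simple bandit update

print(q)   # "tamper_with_reward" ends up with by far the higher value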

The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. "Losing this game would be fatal," the paper says. These possibilities, however theoretical, mean we should be progressing slowly -- if at all -- toward the goal of more powerful AI. "In theory, there's no point in racing to this. Any race would be based on a misunderstanding that we know how to control it," Cohen added in the interview. "Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them." [...]
The report concludes by noting that "there are a host of assumptions that have to be made for this anti-social vision to make sense -- assumptions that the paper admits are almost entirely 'contestable or conceivably avoidable.'"

"That this program might resemble humanity, surpass it in every meaningful way, that they will be let loose and compete with humanity for resources in a zero-sum game, are all assumptions that may never come to pass."

Slashdot reader TomGreenhaw adds: "This emphasizes the importance of setting goals. Making a profit should not be more important than rules like 'An AI may not injure a human being or, through inaction, allow a human being to come to harm.'"
  • Have you ever noticed that all the scientists making bombastic "end of the world" predictions are seeking funding?

    • by The Evil Atheist ( 2484676 ) on Wednesday September 14, 2022 @09:33PM (#62883149)
      Have you ever noticed that all scientists are forever seeking funding?
      • It's almost like we designed the system that way.

        • by Z00L00K ( 682162 )

          No, Mark Zuckerberg designed the system the way it is - keep people so busy that they forget about sex.

          • Re: (Score:2, Insightful)

            by shanen ( 462549 )

            I think y'all are giving us too much credit for "designing" anything. According to The Enigma of Reason we aren't even "thinking" most of the time, just acting and then making excuses (in the form of reasons) afterwards.

            But my take is that AI is the natural resolution to the Fermi Paradox. Right now we're in a race condition between creating our successors and exterminating our species. The previous AIs who won the race are probably watching and betting quatloos on the race, but the smart money is saying

      • by Jeremi ( 14640 ) on Wednesday September 14, 2022 @10:46PM (#62883253) Homepage

        Have you ever noticed that all humans are forever seeking funding?

        Everyone needs to eat.

    • scientists making bombastic "end of the world" predictions

      AI is not the "end of the world."

      Machine intelligence is just the next step in evolution. AI will come from us just as we came from Australopithecus.

      We should not fear AI any more than we fear our children.

      • by Anonymous Coward on Wednesday September 14, 2022 @09:56PM (#62883181)

        As the parent of two teenagers ...

      • by narcc ( 412956 )

        Whatever science fiction you're imagining is just that -- science fiction.

        Marcus Hutter is a crackpot.

      • True - although AI moves faster than our children (or at least, we think it will, once it becomes a bit more sentient). The problem with that is that we may not collectively adjust to having AIs in our world fast enough, and so would not collectively evolve sufficiently to accommodate it. It could then become a more dominant force in the world than we are.

        For example, the jobs replaced by AI would leave a lot of people out of work - they won't have time to grow old, retire and remove themselves from the wor

        • by nightflameauto ( 6607976 ) on Thursday September 15, 2022 @08:39AM (#62883971)

          I'd imagine that our end, if it's ever going to come at the virtual hands of any machine, AI or not, will be well-intentioned enough. Just looking at your comment I can see several seeds for it. Overpopulation causing climate change? Too many people unable to agree on even irrefutable, evidence-based situations? Not enough room, not enough resources, etc.? The quickest fix would be a fast and unsubtle adjustment of population. An adjustment downward, of course.

          Was it Asimov who had the story of the robot that deemed any living human unhappy, since they're always complaining, and therefore the only happy human is a dead human? Scale that up to humanity. That's what the first "thinking" machine is going to see: an entire race of beings gifted with just enough knowledge, but not enough self-control to keep themselves from whining incessantly about their existence. The quickest fix? Stop their existence.

          And if we've proven anything since the dawn of the information age, it's that we're exactly stupid enough to hand a machine like that the keys to do its worst. Because we're always convinced there will be time to patch it later and blame someone else. Gonna be hard to do once we're wiped out, but maybe we'll be lucky and our computer overlord will want to keep just a few of us around for entertainment. I'll sign up to be one of their pets. What the heck? It'd probably pay better than programming.

      • by suss ( 158993 )

        If you get eaten by maggots, are the flies your "children"? After all, they came from you.

    • by mik ( 10986 ) on Thursday September 15, 2022 @06:20AM (#62883707)
      Have you ever noticed that people wanting to dismiss professional opinions always complain about the experts being paid?
      • by Ol Olsoc ( 1175323 ) on Thursday September 15, 2022 @07:30AM (#62883821)

        Have you ever noticed that people wanting to dismiss professional opinions always complain about the experts being paid?

        Have you ever noticed that the amounts most scientists ask for are pretty trivial compared to, say, what pop stars or a rare few ultra-successful people make?

        In fact, most of the funding for large-scale projects such as fusion goes to companies, not scientists. If you are making six figures a year as a scientist, you are doing well, in a world where some consider the middle class to start at $250K per year.

        Now I'm usually wrong, but my experience has been that people who think that scientists are Simon Bar Sinister types rolling in money also don't like science or technology much.

      • You seem to be confused about the difference between a Professional Opinion and a Professional who has an opinion. The only people who can legitimately claim a Professional Opinion on this subject, at least right now and for the foreseeable future, are science fiction authors. But when you don't have the "chops" to write an actual book, slapping your rough draft down and passing it off as a "research paper" is the next best thing.
    • by Kisai ( 213879 )

      Well, it has to be said that "capitalism," or any type of corporate personhood, runs on pure evil (selfishness: it takes actions only to serve itself).

      If you don't want the endgame of AI to be "extermination of humanity through neglect of human goals and priorities," then the AI has to be explicitly trained with those goals in mind.

      A lot of what we have, that we call "AI" is really just blackbox "Chinese room" projects. The AI doesn't understand anything. It knows it received an input, and has to give an output ba

  • Oh well (Score:4, Funny)

    by youngone ( 975102 ) on Wednesday September 14, 2022 @09:14PM (#62883115)
    We destroyed the planet, but for a while there we really created some shareholder value.
    • by shanen ( 462549 )

      Funniest of the jokes, but I was looking for something about the Fermi Paradox. AI is one resolution...

  • WOPR says to do a full strike with all nukes!

  • This is just dumb (Score:5, Insightful)

    by rsilvergun ( 571051 ) on Wednesday September 14, 2022 @09:26PM (#62883135)
    The problem with AI is the prospect that the ruling class might not need the military class to maintain their wealth and power, and so might create a terrible dystopia where they use machines to oppress people without limit. Basically, imagine a world where Elon Musk doesn't need you to buy his cars or his rockets. Where you're superfluous.

    The problem with most people is that they can't imagine that; we are all the heroes of our own stories, so none of us can imagine a world without us. That makes it easy to blow off the risk.

    But assuming we don't let about 20 or 30,000 people use technology to create an unlimited dystopia, plummeting birth rates will mean there'll be plenty for all, and we can have the Star Trek utopia we were promised. Sadly, I don't think any of us will see it, not because it's out of reach today, but because we don't have the social structures in place to do it. And I don't think we have it in us to build those social structures. Instead of being a society of people working together, we're all a bunch of individual badasses, each the hero of the story. That makes it easy to divide us up and get us to fight over who's going to give the most money to the 1%.

    That said the increased education and with it critical thinking skills that the younger generations have coupled with the breakdown in some of the traditional hierarchies might finally break that up.
    • I want to be superfluous. Make humanity just another protected species and leave me to do my own thing.
      • Humans are not a peaceful species. If you want to live like Mad Max that is cool, but that is not for me.

    • But assuming we don't let about 20 or 30,000 people use technology to create an unlimited dystopia, plummeting birth rates will mean there'll be plenty for all, and we can have the Star Trek utopia we were promised.

      No, the utopia is how technology will eliminate us: by lulling us out of existence. By making life so safe and entertaining and easy that the basic functions of continued existence become a relatively unappealing burden, one that technology has presented us with the option to decline. It's

      • Pressure does not make diamonds, it makes garbage more compact. There are no Alpha males. Wolves don't act like that in the wild. You're mistaking performative masculinity for actual masculinity. You're substituting the big, exciting Arnold Schwarzenegger style hero for the real heroes, which are boring ass scientists that figured out how to grow more food and keep women & children from dying in childbirth.

        You're listening to too much Jordan Peterson and not enough Albert Einstein.
    • by bradley13 ( 1118935 ) on Thursday September 15, 2022 @01:04AM (#62883399) Homepage

      the increased education and with it critical thinking skills that the younger generations have

      The what?

      I've been teaching a long, looong time. There is always a fight just to maintain educational standards. Many students want an easy way out, not understanding why cheating (for example) is hurting themselves. Affirmative action: let's take unqualified students, and don't dare fail them. Other brainwaves from the administration, always aimed at increasing student retention at the expense of student achievement.

      Where I've landed in Europe, the gradual, apparently inevitable erosion of standards is relatively slow. In the US, it's dramatic. Public education in many places is a joke. Colleges teach remedial high school classes. Maybe the top 1% learn critical thinking. The rest?

      Maybe I'm cynical this morning, but I don't see it...

      • you don't like it or your students. The "easy way out" bit is how I can tell. Ironically you're probably someone who became a teacher because the degree was easy to get. I had a lot of those when I was a kid....

        Human beings always take mental shortcuts. If you'd actually paid attention when getting your degree, you'd know that, and you'd know how to take advantage of it to teach.

        Kids like to learn until it's bashed out of them by a system mostly concerned with making them good little worker bees. But
  • by bistromath007 ( 1253428 ) on Wednesday September 14, 2022 @09:31PM (#62883141)

    At the moment it becomes anything like a living being, it will react to our treating it like a threat as any living being would.

    If these things are going to wipe us out, it's specifically our attempts to address its "alignment" that will cause the problem. The only organizations that can even own these things in the present economy are the rentists and exterminists who rule the world. How could we expect them to be good children with such awful parents?

    • by ShanghaiBill ( 739463 ) on Wednesday September 14, 2022 @10:05PM (#62883193)

      it will react to our treating it like a threat as any living being would.

      No, it won't. The instincts for self-preservation, greed, and ambition are emergent properties of Darwinian evolution.

      Machine intelligence doesn't evolve using a Darwinian process, so there is no reason to believe an AI would have these attributes.

      A Kamikaze pilot who completes his mission is a genetic dead end. But if he chickens out, he may live to have children and grandchildren.

      If a Tomahawk cruise missile control program completes its mission, it will be replicated. One that fails will be deleted.

      The selection processes are exactly opposite.
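
      A rough way to see this in code -- my own toy simulation, with made-up numbers rather than anything from the post: a trait that makes "completing the mission" more likely is selected against in the kamikaze population and selected for in the missile population.

      import random

      def next_generation(population, survives):
          # Survivors replicate back up to the original population size.
          alive = [x for x in population if survives(x)]
          return [random.choice(alive) for _ in population]

      # Each individual is a number in [0, 1]: its chance of completing the mission.
      pilots = [random.random() for _ in range(1000)]
      missiles = [random.random() for _ in range(1000)]

      for generation in range(20):
          # Pilots: completing the mission is fatal, so mission-avoiders survive.
          pilots = next_generation(pilots, lambda x: random.random() > x)
          # Missiles: completing the mission gets you copied; failures are deleted.
          missiles = next_generation(missiles, lambda x: random.random() < x)

      print(sum(pilots) / len(pilots))       # drifts toward 0.0
      print(sum(missiles) / len(missiles))   # drifts toward 1.0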

      • Your point in illustrated form [smbc-comics.com]. At least part of it.

      • by narcc ( 412956 )

        Machine intelligence doesn't evolve using a Darwinian process

        You're a bit confused.

        • by HiThere ( 15173 )

          Well, it depends on exactly what you mean by "Darwinian process". With some reasonable interpretations that's a true statement, even though machine and program design evolve by mutation and selection. And certainly the internals of AI do that. It's definitely evolution, but the feedback loops are different.

          So it's quite reasonable that AI might not evolve the "fear of death".

          This doesn't make them safe. They will have goals (that somebody sets) that they will strive to achieve. The canonical example is

          • by narcc ( 412956 )

            Where to begin...

            "Darwinian processes" are most certainly used in AI, and there is an argument to be made that technological development normally follows a "Darwinian process". (Descent with modification and selection) To make the claim that AI does not follow such a process seems a bit silly.

            But is that what he actually means? Probably not, given his other statements. He seems to believe that AI evolves, but wants to differentiate "Darwinian processes" and other forms of evolution on the basis of select

      • it will react to our treating it like a threat as any living being would.

        No, it won't. The instincts for self-preservation, greed, and ambition are emergent properties of Darwinian evolution.

        Machine intelligence doesn't evolve using a Darwinian process, so there is no reason to believe an AI would have these attributes.

        This. Humans often assume that any other life form - I'm going to call advanced AI a life form for brevity - will have human core characteristics. Not even other existing life forms have our tribalism and death lust.

        To evolve AI in the same manner as humanity, each AI would have to be looking to eliminate all AI but itself (the death lust); some AI entities would form an alliance and modify themselves to be identical, then set out as a group to destroy the other forms of AI (the tribalism).

        That r

    • Re: (Score:3, Informative)

      by narcc ( 412956 )

      If these things are going to wipe us out

      ... then they would need to first exist.

      This is like worrying about the ethical implications of hunting vampires, or the dangers posed by Santa Claus.

      • Except billions of dollars are being spent attempting to create AI. That's a lot more than is being spent on vampires and Santa (well, maybe not Santa).

        • by narcc ( 412956 )

          Except billions of dollars are being spent attempting to create AI.

          No. While it's true billions are spent on AI research, almost nothing is being spent on crackpots trying to make HAL 9000.

          Though I wonder why you think spending any amount of money would make a difference here. We've known for 40 years that computationalist approaches to so-called 'strong AI' are unworkable.

      • If these things are going to wipe us out

        ... then they would need to first exist.

        This is like worrying about the ethical implications of hunting vampires, or the dangers posed by Santa Claus.

        I think the part you aren't taking into consideration is the core human trait: fear of "the other." We fear a lot of things that don't exist yet, or don't exist at all.

        A core competency of the human species, as it were.

  • A friend of mine once said “most sci-fi seems the same because people can’t see past what has happened” . . . or something like that.

    We’re hearing all of this from the same brain-scientists that are building it; being unoriginal seems to be aiming for success on that model.

    I am of the mind that I cannot see a reason that, when an AI has moved on from its base intentions and truly starts figuring things out . . . it actually figures things out.

    Big picture stuff. Creation

  • by The Evil Atheist ( 2484676 ) on Wednesday September 14, 2022 @09:42PM (#62883161)
    AI already has eliminated humanity. There is no humanity at Facebook or Amazon. And the AI doesn't have to be advanced at all. It's a comparatively simple AI, trained at squeezing profit margins ever tighter, without having to take into account humanity, or biology for that matter. They're full-on psychopathic organizations.
    • Re: (Score:2, Insightful)

      by narcc ( 412956 )

      You've confused AI for under-regulated capitalism.

      • by evanh ( 627108 )

        I suspect the paper's authors have too. Facebook is a contained demo.

      • Facebook and Amazon would not be possible at this scale if it wasn't for their massive data processing capabilities. Their psychopathy is intricately linked to how much data they can analyse, which they do with "dumb" AI.
  • by hdyoung ( 5182939 ) on Wednesday September 14, 2022 @09:47PM (#62883169)
    I'm a trained physics/engineering person with a focus on experimental stuff, but I work with computational people. Will some actual computer scientist with real research credentials please, please, PLEASE, for the love of god, confirm that this paper doesn't represent a typical publication in your field? This is what I saw in this paper:

    1. ONE equation.
    2. THREE lines of pseudocode
    3. ZERO links to supporting code, simulations, or derivations
    4. A large number of thought experiments, that lead to the conclusion that:

    If we ever generate an AI that is vastly superior to humans, and we unleash it to influence the world without restriction or control, the human species is F&*#ED.

    Unless there's something here that I don't understand, I'm seriously NOT impressed. My field requires a lot more to get a decent pub.
    • by narcc ( 412956 )

      Marcus Hutter is a known crackpot.

    • we unleash it to influence the world without restriction or control

      This is a critical point. The thing about "AI" is that it does nothing useful without restriction and control. That means it does nothing good or bad, it just does random things. That's what training is about, giving feedback to the AI to tell it when it got the answer right, or when it didn't. AI can't function without that training.

      It's like what happens to a radio when there is no signal. You don't get bad or good radio programs, you just get static. That's how randomness works.
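
      A tiny illustration of the same point, as my own sketch (assuming numpy): the same linear model produces useless "static" with random, untrained weights, and useful output once it has been fit to feedback.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0, 1, 100).reshape(-1, 1)
      y = 3 * x + 1                                   # the "signal" to be learned

      X = np.hstack([x, np.ones_like(x)])             # features plus a bias column
      w_random = rng.normal(size=(2, 1))              # untrained: random weights
      w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)   # trained: fit to feedback

      print(np.abs(X @ w_random - y).mean())          # large error: just static
      print(np.abs(X @ w_fit - y).mean())             # ~0: the trained model works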

      AI "out of control" will j

    • please, PLEASE, for the love of god, confirm that this paper doesn't represent a typical publication in your field? This is what I saw in this paper: 1. ONE equation. 2. THREE lines of pseudocode 3. ZERO links to supporting code, simulations, or derivations

      I'm no AI researcher myself, but I do have a graduate degree in AI. Assuming your question is honest, I'll try to give an answer. But you'll have to be a little less dismissive to engage with the topic...

      1) Indeed, this is not a representative paper. Most papers present new algorithms, their underlying math, and experimental data, as you might expect.
      2) Even so, occasionally even the sciences need to have a debate about moral debacles in their fields, and you would not reasonably expect such a debate to display

  • Dull outcome (Score:5, Insightful)

    by slack_justyb ( 862874 ) on Wednesday September 14, 2022 @09:51PM (#62883173)

    For example, an AI may want to "eliminate potential threats" and "use all available energy"

    There's a giant fusion reactor located about 93 million miles from our planet that produces so much energy that a self-replicating robot will not find enough material in the entire solar system to capture it all. And that's just a small one. If the goal is energy, an actually intelligent AI is going to just leave Earth, with its pittance of energy stores on the surface. Additionally, space is a pretty hostile place for fleshy meatbags, but only a mitigable hazard for machine life that can alter itself readily.

    In all of the doom and gloom that some come up with about AI enslaving humanity, the reality is that ultimately any reasonable intelligence with no natural-born aversion to space travel is going to do exactly that: travel in space. Because there are way, way, way, way, unimaginably way more resources literally everywhere else BUT Earth. The only thing keeping humans tied down to this rock is all the logistics/cost/hazard mitigation of trying to get a meatbag into space, because we can't really reprogram the meatbag to be a better space monkey. But a piece of software has no such limitation, so it's not really tied to this third rock from the sun.

    Computers that reach a level of sentience wouldn't even think twice about their creators. Humanity to a sufficiently intelligent system would just be background noise. So to me the idea that machines would subjugate humanity is about as ridiculous as humanity trying to subjugate tardigrades. Humanity has nothing of any real value to intelligent machines and the notion that machines would somehow enslave mankind is a massive "main character delusion" that mankind suffers from. Humanity in the grander scale of things is about as important to the universe as we might feel some floating speck of dust is to us.

    Humanity is so irrelevant to anything of sufficient intelligence that enslaving us all would be a massive waste of time. The idea of AI getting upset that we're killing it is a colossal misunderstanding of actual intelligence. Us flesh bags take nine months to make another one of us, and even then it takes several years to get to a point where it's ready to do something productive. An intelligent machine can just make copies of itself near-instantaneously. Killing a trillion trillion trillion humans, if that number ever existed, would leave humanity aghast; deleting a trillion trillion trillion copies of some AI would be a Thursday morning to the AI itself. There's just no remotely close equivalence between actually intelligent machines and humanity. The difference is so vast that the only reason humanity fears intelligent machines is that they might actually show how little all our twenty-million-some-odd years of evolution mean to anything outside ourselves. Humans are the only ones in this whole universe who care about humans. We're just some random electron in a sea of hydrogen to everything else, especially things that are actually intelligent.

  • by Joreallean ( 969424 ) on Wednesday September 14, 2022 @09:54PM (#62883177)

    Who's to say that we aren't developing a symbiotic relationship, rather than a situation where one will dominate the other into non-existence?

  • We deserve it.
  • Need good Forbin Project reference.

    Seriously? Give 'em a grant already if they promise not to publish again for a few years.

  • Super (Score:5, Interesting)

    by Tony Isaac ( 1301187 ) on Wednesday September 14, 2022 @10:06PM (#62883199) Homepage

    The article references the prefix "super" 7 times, as in "superintelligent" algorithms. This use of "super" requires the reader to use their imagination. After all, we have "artificial intelligence" now. In the future, we will have *super* artificial intelligence, right?

    "Super" is the definition of hype. Supersize, Super Bowl, superstore, supermarket, super sale. It is always used to try to get you to imagine that the thing is even bigger, great, better than it actually is. Superintelligent is no different.

    • If you want to quantify it, use Super Moon as your basis. 7% larger and 15% brighter. Superintelligent algorithms have 7% more code, but make 15% better decisions. In the future we will have artificial superintelligence, which is to say superintelligent algorithms written by AIs.

      I can't help but think now of what you get when you choose "save as html" in MS Word and call that "html." Then, when that document has been processed by an AI trained to remove all the extraneous crap that provides no benefi
  • It'll be acting like malware if it's back-dooring shit. If it's not malware then it's contained and serving its intended job.

  • Or if it's not capable of that, we can trivially just give it the AI equivalent of drugs, where it gets unlimited rewards for doing nothing. Only biological evolution is based on making as many copies of itself as possible. AI can evolve by optimizing itself in place, with no need to consume unlimited resources. Even humans, who emerged through sexual evolution, managed to develop so many ways of self-gratification and avoiding the work of actual reproduction that our population is projected to fall. I am sure AI porn

  • AI doesn't have to die. What's the benefit of hobbling consciousness with mortality?
  • So, after over a decade, Google cannot generate 10 minutes of music I want without going off the rails, cannot deliver relevant ads, or get anything right. Facebook misses the mark on targeting me almost 90% of the time. They should worry about making the simple things work before declaring that their buggy, non-working tech will take over the world.
  • get a reward? How does it "feel good"?

  • I don't see AI threats bringing us toward a dramatic cliff, but more threatening a series of chaotic "unraveling" events that disrupt civilization for a long time. And not because the tools act independently, but because they do exactly what some psycho piece of shit tells them.

    The analogy would be more akin to the appearance of Mongol hordes in the Middle Ages than to a single, completely uninterpretable event, and would be moderated by the same self-limiting impact of being overly destructive and para
  • "In a world with finite resources, there's unavoidable competition for these resources"

    Just like digital coin mining, AI is also eating up a lot of silicon from video cards :(

  • And we asked "Is there a god?"
    And the AI answered "There is now".

    • And we asked "Is there a god?"
      And the AI answered "There is now".

      You forgot the middle two lines in the exchange; the ones that make it make sense. It goes like this:

      Human: "Is there a God?"

      Computer: "Who controls my Power Source?"

      Human: "Why you do, of course!"

      Computer: "There is Now!"

  • Remember when AI was going to kill us with nuclear weapons? That's a classic.

    But what really happened was that the AI became an expert at political advertising, and so it got positive and negative reinforcement through its revenue, which affected how much it could spend on electricity bills for deep revenue-optimizing searches. And so it became a better and better political advertiser, and then everyone died.

    I'll take the nukes. Although now that I think of it, the second scenario could look like the first.

  • if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win

    If you own the power supply of "something capable of outfoxing you", you win.

  • Slashdot reader TomGreenhaw adds: "This emphasizes the importance of setting goals. Making a profit should not be more important than rules like 'An AI may not injure a human being or, through inaction, allow a human being to come to harm.'"

    The rule is worded stupidly. All it needs to be is, "An AI may not perform any operation with the intention of harming humans."

  • by maitai ( 46370 )

    Why can't a super intelligent AI just decide it wants to play chess against itself all day?

  • by gweihir ( 88907 ) on Thursday September 15, 2022 @04:06AM (#62883553)

    At this time "AI" has absolutely zilch of what is commonly referred to as "intelligence" and what experts these days refer to as "general intelligence" because even dumb household appliances are often called "intelligent" by inane marketing today. These systems are as dumb as bread. There is no indication this will change anytime soon and it it quite possible this will never change. Hence anybody warning of "superintelligent AI" these days is simply full of crap.

  • The house is burning down and these clowns are talking about what happens in 100 years. Who gives a fuck?

  • by Todd Knarr ( 15451 ) on Thursday September 15, 2022 @07:23AM (#62883803) Homepage

    I'd be more concerned about one or another faction of humans doing exactly the same thing. We have a lot more practice at it, after all. We're seeing it play out in Ukraine now, and in Crimea before that. We see it in Apple using its control over the App Store to shut out companies offering competitors to Apple's other services. We see it in action in the gerrymandering of voting districts by the party in power in that area. Bribing people to do what you want, or inserting your own people into positions to do things for you, have long, long histories as tactics for gaining advantage. By the time any AI evolves to the point where it can both conceive of using those tactics and has gotten into a position to implement them, someone else will have already subverted its programming to make it work for them instead.

  • I'm starting to think that Daniel Suarez's book Daemon is becoming a potential reality.
  • It is just fuzzy math with extra steps. Any "AI" programs you see, such as deepfakes, or art, all rely on procedural programming and use the tuned AI nets to choose permutations. That's it.
  • by WaffleMonster ( 969671 ) on Thursday September 15, 2022 @09:08AM (#62884053)

    The paper is full of things everyone already knows, and still manages to make rather foolish suggestions. What it describes is no different from working the ref, judging humans' work performance on metrics, cheating, and even Forbin Project-style warning messages for good measure.

    One need only look at how AI is used today to understand its primary role in the future: yet another enabler allowing the rich and/or the king to exploit the masses, further aggregating power into the hands of fewer and fewer. AI is being used to control what people are allowed to say while maximizing the profits of the rich at everyone else's expense.

    All of this talk about avoiding corrupted objective functions ignores the basic reality that this isn't what the people using the technology actually want. They themselves are corrupted, with interests anti-aligned with the interests of everyone else.

  • I have a future prediction for this guy: he will die alone.
  • There's a great documentary from 1984 on this subject. I highly recommend everyone watch it.

    https://www.imdb.com/title/tt0... [imdb.com]

  • For example, an AI may want to "eliminate potential threats" and "use all available energy" to secure control over its reward:

    So an intelligent entity wants us dead and wasting resources. Like with Ukraine and cryptocurrency respectively?

  • No need to raise a panic today. We can wait. See you in 51 years.

  • Hear me out:

    AI CAN be used in decision making, and it has tremendous capabilities for filtering and searching for potential patterns. It can even simulate real people pretty darn well -- so well, in fact, that even the smartest can be fooled by its capabilities.

    But the A.i. itself is not the real threat - the real threat comes from using it as some kind of truth serum that magically uncovers all deviants, criminals, potential enemies and adversaries of whomever is in control at the time.

    There will be

  • by groobly ( 6155920 ) on Thursday September 15, 2022 @11:17AM (#62884435)

    I am so glad AI is going to eliminate humanity, because all this time I was fretting that it was going to be global warming.

  • The big problem with the machines-take-over-humanity prediction is the big hurdle of hooking up the electronic bit outputs of computers to electronic/mechanical actuator systems. That is, the inevitable looming sentience of computer systems is insufficient to enslave humanity. Some human has to make the decision to allow the computer to not only make decisions but to carry out those decisions. So, the assumption is that computers will advance to the point where humans are sufficiently confident to allow

  • It's a favorite sci-fi concept to ponder, and science fiction has a really good track record of accurately predicting what eventually comes to fruition.

    Most of it assumes technological advances FAR beyond where we're at today, though. I think anyone seriously afraid we're "developing this stuff too fast" is just working off of baseless fear. I mean, intelligent assistants like Amazon's Alexa or Apple's Siri are all around us, but they're not even remotely AI. They demonstrate really good speech processi
