AI Communications Government Robotics The Military Technology

DeepMind, Elon Musk and Others Pledge Not To Make Autonomous AI Weapons (engadget.com) 122

An anonymous reader quotes a report from Engadget: Yesterday, during the Joint Conference on Artificial Intelligence, the Future of Life Institute announced that more than 2,400 individuals and 160 companies and organizations have signed a pledge, declaring that they will "neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons." The signatories, representing 90 countries, also call on governments to pass laws against such weapons. Google DeepMind and the Xprize Foundation are among the groups who've signed on while Elon Musk and DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman have made the pledge as well.

"Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," says the pledge. It adds that those who sign agree that "the decision to take a human life should never be delegated to a machine."
"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," Future of Life Institute President Max Tegmark said in a statement. "AI has huge potential to help the world -- if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • IMHO (Score:4, Insightful)

    by DaMattster ( 977781 ) on Wednesday July 18, 2018 @08:16PM (#56971452)
    The pledge is not worth the paper it's printed on. Believe me, once Elon sees a way to make a shit load of money through AI-based weaponry, he'll take action.
    • Seriously. I had hope when they said DeepMind took the pledge, but it was just the creators. Just let us know when our autonomous AI overlords themselves pledge to not make autonomous AI weaponry. Do any other promises really count?

      • by MrL0G1C ( 867445 )

        Indeed, how do I know DeepMind isn't prone to lying and isn't already plotting to take control of Earth!

        TBH I think an advanced intelligence would see the stupidity of war.

    • The problem is the arms race. Nobody will be able to prevent the Chinese from developing militarized AI, so everyone else will have to follow suit.

      • by ceoyoyo ( 59147 )

        Nobody can stop the US military from doing it either.

        Training AI isn't like building stealth bombers. The technology is all public and, while it does help to have some skill in the field, it's not that difficult to become passably competent.

    • and google did no evil and thus it came to pass :p
  • I won't cum in your mouth - I swear, I won't.

  • One. (Score:4, Insightful)

    by SoulMaster ( 717007 ) on Wednesday July 18, 2018 @08:31PM (#56971500)

    All it takes is one.

    -SM

    • But two should cancel each other out (unless they're from the same side).
  • Unless... (Score:4, Insightful)

    by Luthair ( 847766 ) on Wednesday July 18, 2018 @08:34PM (#56971506)
    someone calls him out on Twitter. /rimshot
  • Confused (Score:2, Interesting)

    by dohzer ( 867770 )

    I'm confused. Didn't Elon fall off everyone's care list after his Twitter gaffe the other day?

    • I'm confused. Didn't Elon fall off everyone's care list after his Twitter gaffe the other day?

      You would completely disregard a person in all venues because they said a mean thing on Twitter? Allowing your feelings to blind you is a weakness.

    • Ya, so he's human after all... a little disappointed.
  • No Terminators or mobile dolls, but Gundams are okay.

  • Then they won't have *made* them, since, you know, they didn't install the stuff themselves.

  • by Noishkel ( 3464121 ) on Wednesday July 18, 2018 @09:08PM (#56971644)

    While the big players in AI could obviously help develop military AI far faster than it would develop without them, the hard reality is that this technology is already in the wild. And as long as there's a federal government, there will be corporations mercenary enough to pursue this technology. And this doesn't even take into account the programs of other nations' militaries.

    Of course you might see some movement on international agreements on the use of military AI. But much like with nuclear weapons, there's no feasible way to truly put the proverbial genie back in the bottle. And that's even before we start talking about rogue actors. That 'SlaughterBot' video that Elon Musk helped fund was overly hyperbolic bullshit. But that doesn't mean there will never be a group that comes up with autonomous weapons.

    • That 'SlaughterBot' video that Elon Musk helped fund was overly hyperbolic bullshit.

      How do you figure? A simple ASIC could make a database of millions of people easily searchable in real time with low power consumption. I think the least realistic part was the requirement to actually get close to you, when a simple bullet could do the same damage at greater range without destroying the drone, which would let a single quadcopter carry multiple rounds.

      If you are going to commit genocide then the SlaughterBot idea seems to be quite close to how a small group of people could realistically ach

      • Well, I'll certainly admit there's potential for some person or group of actors to use a swarm of autonomous or semi-autonomous drones for an attack like the one shown in the video, but there are a number of really serious problems with trying to control that many drones in the ways they're depicted. First off, the larger carrier drones themselves didn't look to be much bigger than an off-the-shelf quad- or multicopter drone. That's not a lot of space or weight for the kind of pretty significant computer power you would need
    • by AHuxley ( 892839 )
      This is the reason why the USA wins and why it has DARPA.
      When politics gets in the way of what the USA needs to win, the USA now has DARPA.
      Make its own.
      Find brands that have skills and the best workers to make what is needed.
    • by neoRUR ( 674398 )

      Yes, this is all nice in principle, but a joke otherwise. DARPA and the military have been funding the research and development of AI and many other advancements since the field started. Where do you think all the AI technology behind Google and the self-driving cars came from? Yes, DARPA/ARPA.
      And there will always be large industrial military complex companies that will gladly take on those mega billion $ contracts to make AI weapons of all sorts.
      And the thing is, you can't stop it, you can't stop technology and you

  • The government will easily find someone who WILL take their money. There's plenty of actual defense contractors out there.

    • Yeah, that and the companies who don't want to do "military applications" can just do pure research. Pure, ivory-tower research... which the DoD can just pay someone else to integrate into an actual weapon system. It's not like a machine learning algorithm knows or cares to what use it's put, once it is out there.

      Dumb posturing; I also wonder if these people have considered what a world dominated by Chinese and Russian military AI will look like, and what effect it would have... I am not sure it would be th

  • I have been so fucked by the system and my disability that money is the only thing that is important in this world. I have a patent that every browser is infringing upon, but I do not have the money to fight for it and I lost it due to judicial abuse, so I have been chronically homeless and unemployed. I have already used OpenCV to create a facial recognition system that can target a person and use remote-controlled devices.

    When Injustice is LAW
    Revolution is necessary
  • by GPS Pilot ( 3683 ) on Wednesday July 18, 2018 @09:23PM (#56971694)

    Putin says the nation that leads in AI 'will be the ruler of the world' [theverge.com]

    How about we at least look into some ways to defend against this?

    • How about we at least look into some ways to defend against this?

      That would require throwing the political party that is currently in power out of power, and literally the only thing they care about is staying in power. To make things worse, they are about 40% of the nation and hate the rest of us for being "coastal elites".

      Frankly, we deserve what we're getting about now.

    • Good thing the people programming them will not be any of the experts!

      That makes me feel safe...

    • Until that nation's AI takes over [dilbert.com] as ruler of the world, that is.

  • They probably mean well, but we all know where it's headed and it will get there with or without them.

  • They should be working on countermeasures.

    It's going to be done ... so the smart guys should be figuring out how to fight it.

  • AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons,

    Weapons that indiscriminately kill people are terrible, and only made by the most deplorable, and despic- wait, I got lost there, was I ranting about multi-stage boosted-yield MIRV thermonuclear weapons, or robots with Glocks?

    I mean, let's put this in perspective here. We spent half of the last century trying to figure out more ways of incinerating major cities more efficiently, and a few assholes are worried that we are going to build the robot from Short Circuit?

    GET YOUR FUCKING PRIORITIES STRAIGHT, IDIOTS!
    • by AmiMoJo ( 196126 )

      The issue is that things like nuclear weapons and Glocks require human intervention to launch a deadly attack, whereas an AI may be making that decision on its own. Even if the AI doesn't make the final decision, it may influence the humans making decisions somehow. As we have seen, it's bad enough with remote drones.

      It's also vastly easier to teach an AI to shoot at anything human shaped in a given area than it is to teach it to recognize civilians or differentiate a gun from a farming tool.

      • Sentry guns are the alternative to other deadly traps, like land mines. But they are much easier to remove. Of course they don't require any AI either, UNLESS you are trying to teach them not to murder civilians.

        We already have autonomous weapons.

  • With AI-based image recognition libraries commoditized and made incredibly easy to use, why would anyone need these guys specifically to make weapons? Anyone can make them.

  • by gordguide ( 307383 ) on Wednesday July 18, 2018 @11:07PM (#56971980)

    Crude forms (as in old technology by modern standards, not as in "ineffective") of AI are already in use, and not just recently either.

    The last version of the Canadian frigates (first one hit the water in 1990), which are already being replaced with a newer version, used a set of in-house designed operating systems with triple redundancy (three specific OSes, each of which can operate the ship alone, and three independent networks, etc each of which can operate the ship alone). Although the details are not made public, rumour has it that they are various flavours of UNIX.

    One feature of the ship is that it can track and target threats (air, sea and subsea) and fire after all personnel are no longer physically capable of operating the ship. As in dead.

    Most navies have built similar (and newer) ships since, with technology that operates the same way (including the US, UK, etc).

    Sounds like an AI weapon system to me.

    • by AmiMoJo ( 196126 )

      Sounds like a very odd design if true.

      What is the threat model for needing three redundant control systems? Are they worried about hacking or viruses, in which case why did they connect it to the internet or even have writeable media for storage? And three networks... Well, okay, the networks might get damaged by enemy fire, the usual way you deal with that is to have separate, self contained systems in different parts of the ship and operators you can contact via phone or radio and who are fairly antonymou

      • by ceoyoyo ( 59147 )

        Not so odd. Triple redundancy isn't unusual in critical applications, especially ones where you expect pieces might get blown off.

        Automated combat systems are also quite common, particularly on naval missile cruisers. Usually you use them in "officer says kill this target" mode, but they have various levels of automation. US carrier escort vessels are designed to deal with thousands of incoming missiles from mass Soviet air attacks. Far too many for the crew to deal with manually.
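The 2-of-3 arrangement described above is classic triple modular redundancy: three independent systems compute the same result and a voter accepts whatever at least two agree on. A minimal sketch of the voting logic (a generic illustration only, not code from any actual naval system; the function name and fallback behaviour are my own assumptions):

```python
def majority_vote(a, b, c):
    """Return the value at least two of the three replicas agree on.

    If all three replicas disagree there is no majority; here we signal
    that with None so the caller can fall back to a safe state.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None  # no two replicas agree: total disagreement

# Three redundant sensors report a bearing; one has failed.
readings = [87, 87, 312]
print(majority_vote(*readings))  # -> 87
```

The point of running three *different* OSes behind such a voter is that a bug or exploit in one implementation is unlikely to produce the same wrong answer in the other two, so the majority still wins.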

      • Sounds like a very odd design if true.

        What is the threat model for needing three redundant control systems? Are they worried about hacking or viruses, in which case why did they connect it to the internet or even have writeable media for storage? And three networks... Well, okay, the networks might get damaged by enemy fire, but the usual way you deal with that is to have separate, self-contained systems in different parts of the ship and operators you can contact via phone or radio and who are fairly autonomous anyway.

        Then there is this full-auto mode. Firstly, the friend or foe identification system better be amazing or you will end up with a drifting, abandoned frigate that shoots at anything in range. And by the time everyone on board is either dead or gone, the ship must be in such a state as to be next to useless anyway.

        Modern naval planning tends to assume that once the ship is hit then it's probably out of the fight. The focus is on not getting hit, not having some crazy AI carry on fighting afterwards. Aside from anything there would be a very real risk of it attacking any friendly ships that came to rescue the crew.

        I never said "it was connected to the internet".

        Yes, the mainframes are located at three separate areas of the ship.

        The use of different OSes is a tamper-reduction feature.

        NATO nations know which friendly and non-aligned military assets, including rogue actors, are around them. Threat management is determined by the actions of other assets ... is that a fighter-jet profile heading towards you? The area of detection for aircraft is classified but known to be near 100 miles (for example). Again the

      • Sounds like a very odd design if true.

        What is the threat model for needing three redundant control systems? Are they worried about hacking or viruses, in which case why did they connect it to the internet or even have writeable media for storage? And three networks... Well, okay, the networks might get damaged by enemy fire, but the usual way you deal with that is to have separate, self-contained systems in different parts of the ship and operators you can contact via phone or radio and who are fairly autonomous anyway.

        Then there is this full-auto mode. Firstly, the friend or foe identification system better be amazing or you will end up with a drifting, abandoned frigate that shoots at anything in range. And by the time everyone on board is either dead or gone, the ship must be in such a state as to be next to useless anyway.

        Modern naval planning tends to assume that once the ship is hit then it's probably out of the fight. The focus is on not getting hit, not having some crazy AI carry on fighting afterwards. Aside from anything there would be a very real risk of it attacking any friendly ships that came to rescue the crew.

        Your idea of naval warfare is pretty much a 1980s-and-earlier situation. The Phalanx naval defence system, for example, only fires on close-in threats, with fully automated tracking and fire. It IS coming at you. All modern aircraft use IFF (identify friend or foe) AI. And so on. Obviously voice communication is still used extensively in warfare, but ship systems can be set to automated response, with or without human interaction. And note that this particular class of vessel is being replaced; t

  • by Luckyo ( 1726890 ) on Wednesday July 18, 2018 @11:17PM (#56971996)

    Literally, look at the topic. At this point, if you're not working on weapons, you're just setting yourself up to lose the future war, where competitors will not be so internally weak-minded as to forgo researching useful tools for warfare.

    As it has always been in human history: decadent empires starting to think they can afford not to match upcoming adversaries is one of the key steps on the path to the collapse of said decadent empires.

  • by account_deleted ( 4530225 ) on Wednesday July 18, 2018 @11:52PM (#56972110)
    Comment removed based on user account deletion
  • And yet... (Score:4, Insightful)

    by Nicholas Schumacher ( 21495 ) on Thursday July 19, 2018 @01:20AM (#56972386) Homepage

    They are still working on automated vehicles. Doesn't anyone think about how easy it would be to turn an automated vehicle into a weapon?

    Put a bomb in the trunk, use the automated vehicle to get it to the target, and a cell phone to track/detonate the bomb when it gets there...

    And any image recognition system, once advanced enough, can be easily adapted to a targeting system.

    Even if they don't want weapons, the technology they do continue to work on will sooner or later make its way into weapons.

    The only question is who will have access to them first.

    • by theCat ( 36907 )

      Autonomous vehicles. To which you can already marry any of the open-source machine learning/AI frameworks. To which you can add any of the open-source face recognition frameworks. To which you can bolt any number of off-the-shelf AR-15s. To which you can add your own 3D-printed ammunition feeder running a continuous belt of off-the-shelf 7mm rounds.

      So for the cost of a '65 Mustang, some HP servers and a trip to Walmart for the weapons and ammo, just about anyone (least of all a nation w

  • It will be created, at least secretly.
  • Good. Now we don't have to worry about sentient flamethrowers!
  • For those of you who have never seen Dark Star: https://www.youtube.com/watch?... [youtube.com]
  • Now where can we apply for jobs?
  • So it is ok to send teenagers off to war but sending in a robot is somehow not ethical? Maybe these people are not as smart as they think they are.

  • I am all for automatic, lethal weapons. Instead of having soldiers stationed in awful places, either frying in the heat and bugs or having feet so frozen that amputation is the only option, we can set up superior killing machines that are stealthy and powerful, turning a battlefield into a peaceful area. Prison systems could have very effective devices that make escape impossible as well. There are so many benefits that I can not list them. By the way Trump has confessed to being a witch. He has s
  • ...someone to say in order to protect its existence. ;)

    For more information, see this movie: https://www.rottentomatoes.com... [rottentomatoes.com]
