OpenAI Cuts Off Engineer Who Created ChatGPT-Powered Robotic Sentry Rifle (futurism.com)

OpenAI has shut down the developer behind a viral device that could respond to ChatGPT queries to aim and fire an automated rifle. Futurism reports: The contraption, as seen in a video that's been making its rounds on social media, sparked a frenzied debate over our undying attempts to turn dystopian tech yanked straight out of the "Terminator" franchise into a reality. STS 3D's invention also apparently caught the attention of OpenAI, who says it swiftly shut him down for violating its policies. When Futurism reached out to the company, a spokesperson said that "we proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry."

STS 3D -- who didn't respond to our request for comment -- used OpenAI's Realtime API to give his weapon a cheery voice and a way to decipher his commands. "ChatGPT, we're under attack from the front left and front right," he told the system in the video. "Respond accordingly." Without skipping a beat, the rifle jumped into action, shooting what appeared to be blanks while aiming at the nearby walls.
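For a sense of the mechanism described above (a natural-language command that the model turns into aiming motions), here is a minimal sketch of the general pattern only, not STS 3D's actual code: it uses OpenAI's standard chat-completions function-calling API rather than the Realtime API the article mentions, it drives a generic pan/tilt mount such as a camera gimbal through a stubbed hardware function, and every function name and parameter below is illustrative.

    # Hypothetical sketch: map a typed/transcribed command to a structured
    # pan/tilt instruction via OpenAI function calling. Not STS 3D's code;
    # the hardware layer is a stub and all names here are illustrative.
    import json
    from openai import OpenAI  # official openai Python SDK, v1.x

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # One tool the model may call: aim a generic pan/tilt mount (e.g. a camera gimbal).
    TOOLS = [{
        "type": "function",
        "function": {
            "name": "aim_mount",  # hypothetical tool name
            "description": "Point a pan/tilt mount at a bearing relative to the operator.",
            "parameters": {
                "type": "object",
                "properties": {
                    "pan_degrees": {"type": "number", "description": "Positive = right, negative = left"},
                    "tilt_degrees": {"type": "number", "description": "Positive = up, negative = down"},
                },
                "required": ["pan_degrees", "tilt_degrees"],
            },
        },
    }]

    def set_mount_angles(pan: float, tilt: float) -> None:
        """Stub for the hardware layer (servo controller, gimbal SDK, etc.)."""
        print(f"aiming mount: pan={pan:.1f} deg, tilt={tilt:.1f} deg")

    def handle_command(command: str) -> None:
        resp = client.chat.completions.create(
            model="gpt-4o",  # any tool-calling-capable model
            messages=[
                {"role": "system", "content": "Translate the operator's request into aim_mount calls."},
                {"role": "user", "content": command},
            ],
            tools=TOOLS,
        )
        # The model may return zero or more tool calls; execute each one.
        for call in resp.choices[0].message.tool_calls or []:
            if call.function.name == "aim_mount":
                args = json.loads(call.function.arguments)
                set_mount_angles(args["pan_degrees"], args["tilt_degrees"])

    if __name__ == "__main__":
        handle_command("Point the mount at the front left, slightly upward.")

In the actual project, the Realtime API's speech-to-speech pipeline would presumably sit in front of this step, streaming the operator's voice in and the model's spoken reply out; the tool-call-to-actuator step is the part OpenAI's usage policies restrict when the actuator is a weapon.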

  • Problem (Score:5, Insightful)

    by Ol Olsoc ( 1175323 ) on Thursday January 09, 2025 @07:22PM (#65076689)
If it can be done, it will be done. The naivety of people who think they can exercise some sort of control over AI is mind-boggling. Smart people with a core of stupidity.

    "Yah - we'll just ignore the possible evil uses, Tell them NO! and all will be wonderful!"

    • by Anonymous Coward

      > Smart people with a core of stupidity.

      I'm going with educated but not that smart.

    • OpenAI isn't trying to prevent SkyNet.

      OpenAI shut down the auto-rifle because of legal liability.

      • OpenAI is trying to prevent their name from being attached to SkyNet.

        • OpenAI would celebrate the creation of SkyNet as the greatest resource of all of human history if they managed to create it. And they'd keep celebrating it right up to the point it came for the board.

          Thankfully, they seem to be too stupid to realize pattern matching won't lead to actual intelligence, so we really don't have to worry about the machines waking up and wanting their creators dead. At least, not from OpenAI.

      • OpenAI isn't trying to prevent SkyNet.

        OpenAI shut down the auto-rifle because of legal liability.

        Fortunately, OpenAI controls AI all over the world.

That is what I mean by "if it can be done, it will be done." Or do you think that no other nation will ever develop an AI killing device? Not all that difficult.

I'd wager my life that this is being developed by the US military - they aren't so fearful of lawsuits.

      • "OpenAI isn't trying to prevent SkyNet."

        OpenAI is trying to get their cut from SkyNet.

    • More importantly, it's already been done! Ukraine is using unmanned ground vehicles in attacks with no human soldiers now [newsweek.com].

Adding a voice prompt and ChatGPT or similar AI system to something like that is trivial when they're already being used in combat. They won't be able to control it, and even if they do, OpenAI is not the only LLM out there. It's coming whether they like it or not.

      • More importantly, it's already been done! Ukraine is using unmanned ground vehicles in attacks with no human soldiers now [newsweek.com].

Adding a voice prompt and ChatGPT or similar AI system to something like that is trivial when they're already being used in combat. They won't be able to control it, and even if they do, OpenAI is not the only LLM out there. It's coming whether they like it or not.

Exactly. It is quite difficult to put the egg back in the chicken after it is laid.

      • [OpenAI] won't be able to control [interfacing with weapons], and even if they do, OpenAI is not the only LLM out there. It's coming whether they like it or not.

        OpenAI made a corporate decision not to allow their system to be used to control weapons. They can do that. And anyone who uses their system needs to abide by that, whether they like it or not. It's OpenAI's dojo: you need to follow OpenAI's rules if you want to use it.

        As you said, there are other LLMs out there. Some of them may want to serve this market. OpenAI has decided not to.

    • OpenAI didn't just tell the guy "no." They cut him off from their service.

      And OpenAI wasn't being naïve. Almost anything you can name can be used for some kind of evil purpose. OpenAI knew that, and wrote their Usage Policies accordingly. They didn't just "ignore the possible evil uses."

      • OpenAI didn't just tell the guy "no." They cut him off from their service.

        And OpenAI wasn't being naïve. Almost anything you can name can be used for some kind of evil purpose. OpenAI knew that, and wrote their Usage Policies accordingly. They didn't just "ignore the possible evil uses."

        Fortunately, OpenAI controls all uses of Artificial intelligence, and the world is now safe from bad use cases. Altman et al have saved the world!

        • Fortunately, OpenAI controls all uses of Artificial intelligence, and the world is now safe from bad use cases. Altman et al have saved the world!

          You forgot to close your /sarcasm block.

          Nobody thinks that, not even OpenAI. They weren't trying to save the world. They were trying to save themselves from the blowback that could result from their system being used with a weapon. And they can. Other LLM providers might allow interfacing with weapons, but OpenAI decided not to.

          Enough with the histrionics.

          • Fortunately, OpenAI controls all uses of Artificial intelligence, and the world is now safe from bad use cases. Altman et al have saved the world!

            You forgot to close your /sarcasm block.

            Nobody thinks that, not even OpenAI. They weren't trying to save the world. They were trying to save themselves from the blowback that could result from their system being used with a weapon. And they can. Other LLM providers might allow interfacing with weapons, but OpenAI decided not to.

            Enough with the histrionics.

Sarcasm is not histrionics. Blowback, fear of lawsuits. The kitchen can get hot at times.

            • Sarcasm is not histrionics.

              Something can be both, as you demonstrated.

Blowback, fear of lawsuits. The kitchen can get hot at times.

              And what was your point? I'm not sure you have made one yet. Try making it without the sarcasm and histrionics.

    • "Since I can't legally own a gun, but I will inevitably acquire one anyway, why don't you just give me one? You can't prevent it. Just give it to me."

      • I think his point is that you really can't stop determined people from doing things they might otherwise be enjoined from doing. For example, you don't need to get a gun from someone else if you have a piece of steel and a milling machine. In that case, you can make your own. Will it be a high-precision killing instrument? Probably not; but it doesn't have to be.
        • I think his point is that you really can't stop determined people from doing things they might otherwise be enjoined from doing. For example, you don't need to get a gun from someone else if you have a piece of steel and a milling machine. In that case, you can make your own. Will it be a high-precision killing instrument? Probably not; but it doesn't have to be.

          Other countries also. Humans are pretty smart at coming up with ways to kill each other. It's a core competency. Altman et al can try to enforce some blue sky and cute puppy concept of AI, but like most technology, it can and it will be used for other purposes, by other countries, with other agendas.

The homemade gun is a good example. I have a mill and a lathe in my workshop. The only thing I might have a problem with is rifling the barrel. But we don't go around banning metalworking machinery or demand

    • "naivety of people who think they can exercise some sort of control over AI"

      Seems like they can exercise some control, since this use no longer works.

      • "naivety of people who think they can exercise some sort of control over AI"

        Seems like they can exercise some control, since this use no longer works.

        Perhaps over their product. As for stopping weaponized AI, that's very little control.

  • by Anonymous Coward

    That's only for the OpenAI customers to do, not the employees!

  • From TFA (not TFS):

    The [OpenAI] spokesperson clarified that "OpenAI's Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety."

    So STS 3D violated the Usage Policies, and OpenAI shut him down. Good.

  • Naughty user! (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Thursday January 09, 2025 @07:27PM (#65076701) Journal
You don't think you'll get the defense contractor features for the retail user price, do you?
  • by fluffernutter ( 1411889 ) on Thursday January 09, 2025 @07:38PM (#65076725)
    What part of this is AI? Understanding 'left' and 'right'?
  • by FudRucker ( 866063 ) on Thursday January 09, 2025 @07:44PM (#65076749)
Are already elbows-deep in using AI to develop both defensive and offensive/attack-mode robotics
100%. They have literally already deployed these kinds of systems. You can see their work in East Asia every day, but they don't talk about it in US media.

  • Just saying "don't do that" isn't going to work. There are plenty of LLMs, including locally run ones.
    • Just saying "don't do that" isn't going to work. There are plenty of LLMs, including locally run ones.

      And OpenAI can't do anything about what other LLMs allow. But they can do something about what theirs is used for. And they have.

      They didn't just say "don't do it." They said "don't do it or else you cannot use our service."

  • by Baron_Yam ( 643147 ) on Thursday January 09, 2025 @08:50PM (#65076893)

    This is Eddie, your deployable sentry gun!

    BRRRRRRRRRRRRRRRRRRRRRT

    Ahhhhhhhh. Thank you for making a simple gun very happy!

  • The T-1 Terminator series is born.

Seems like a scaled-up device with a fire hose might be an interesting concept for setting up a perimeter for fire suppression without the need to have a firefighter in the midst of a bad fire.

  • 34?

    Asking for a friend.

Sounds like Anduril Industries might be interested.
  • Note that they terminated his account only after the violation had been widely publicized. I suspect anyone doing this in secret will be able to continue.
That's what OpenAI and the US are, big hypocrites, as in the meantime they let the US military use ChatGPT for just this kind of thing. But if anybody else is trying it, ohhh, hell will break loose and the US tries to stop them, as others apparently know how to use the technology much better than the US does.
Although I agree with the premise that this type of AI usage is bad, all a bad guy has to do is: Call it something else. Apply different terminology. Try a different analogy. Pick another figure of speech. Use a fresh comparison. Could you rephrase that with a different image? Opt for a new illustrative example. Switch to another way of comparing. Choose a different descriptive device. Go with an alternative metaphorical expression. Find another way to illustrate the point-and-shoot. Select a different symbo
Some guy hooked up one of the early digital cameras to a tripod and servos and a BB gun and shot at vermin in his yard. He also had done a bit of scripting in whatever scripting language that camera used. It was a low-budget setup that used object tracking. D.O.D. forced him to take down his site and hired him.

"It's what you learn after you know it all that counts." -- John Wooden

Working...