
OpenAI Cuts Off Engineer Who Created ChatGPT-Powered Robotic Sentry Rifle (futurism.com)
OpenAI has shut down the developer behind a viral device that could respond to ChatGPT queries to aim and fire an automated rifle. Futurism reports: The contraption, as seen in a video that's been making its rounds on social media, sparked a frenzied debate over our undying attempts to turn dystopian tech yanked straight out of the "Terminator" franchise into a reality. STS 3D's invention also apparently caught the attention of OpenAI, who says it swiftly shut him down for violating its policies. When Futurism reached out to the company, a spokesperson said that "we proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry."
STS 3D -- who didn't respond to our request for comment -- used OpenAI's Realtime API to give his weapon a cheery voice and a way to decipher his commands. "ChatGPT, we're under attack from the front left and front right," he told the system in the video. "Respond accordingly." Without skipping a beat, the rifle jumped into action, shooting what appeared to be blanks while aiming at the nearby walls.
Re: (Score:2)
But. But. AI!!! *Waves hands around* Makes it all special and new.
Re:Cute (Score:4, Informative)
You don't need AI to aim and fire.
You need AI for target acquisition.
You have 20 seconds to comply... (Score:4, Funny)
You need AI for target acquisition.
Yes, I can just see how well that will turn out [youtube.com].
No you don't (Score:2)
Re: Cute (Score:2)
Problem (Score:5, Insightful)
"Yah - we'll just ignore the possible evil uses, tell them 'NO!', and all will be wonderful!"
Re: (Score:1)
> Smart people with a core of stupidity.
I'm going with educated but not that smart.
Re: (Score:3)
OpenAI isn't trying to prevent SkyNet.
OpenAI shut down the auto-rifle because of legal liability.
Re: Problem (Score:2)
OpenAI is trying to prevent their name from being attached to SkyNet.
Re: (Score:2)
OpenAI would celebrate the creation of SkyNet as the greatest resource of all of human history if they managed to create it. And they'd keep celebrating it right up to the point it came for the board.
Thankfully, they seem to be too stupid to realize pattern matching won't lead to actual intelligence, so we really don't have to worry about the machines waking up and wanting their creators dead. At least, not from OpenAI.
Re: (Score:2)
OpenAI isn't trying to prevent SkyNet.
OpenAI shut down the auto-rifle because of legal liability.
Fortunately, OpenAI controls AI all over the world.
That is what I mean by "if it can be done, it will be done." Or do you think that no other nation will ever develop an AI killing device? It's not all that difficult.
I'd wager my life that this is being developed by the US military - they aren't so fearful of lawsuits.
Re: (Score:2)
"OpenAI isn't trying to prevent SkyNet."
OpenAI is trying to get their cut from SkyNet.
Re: (Score:2)
Adding a voice prompt and ChatGPT or a similar AI system to something like that is trivial when they're already being used in combat. They won't be able to control it, and even if they do, OpenAI is not the only LLM out there. It's coming whether they like it or not.
Re: (Score:2)
More importantly, it's already been done! Ukraine is using unmanned ground vehicles in attacks with no human soldiers now [newsweek.com].
Adding a voice prompt and ChatGPT or a similar AI system to something like that is trivial when they're already being used in combat. They won't be able to control it, and even if they do, OpenAI is not the only LLM out there. It's coming whether they like it or not.
Exactly. It is quite difficult to put the egg back in the chicken after it is laid.
Re: (Score:2)
[OpenAI] won't be able to control [interfacing with weapons], and even if they do, OpenAI is not the only LLM out there. It's coming whether they like it or not.
OpenAI made a corporate decision not to allow their system to be used to control weapons. They can do that. And anyone who uses their system needs to abide by that, whether they like it or not. It's OpenAI's dojo: you need to follow OpenAI's rules if you want to use it.
As you said, there are other LLMs out there. Some of them may want to serve this market. OpenAI has decided not to.
Re: (Score:2)
OpenAI didn't just tell the guy "no." They cut him off from their service.
And OpenAI wasn't being naïve. Almost anything you can name can be used for some kind of evil purpose. OpenAI knew that, and wrote their Usage Policies accordingly. They didn't just "ignore the possible evil uses."
Re: (Score:2)
OpenAI didn't just tell the guy "no." They cut him off from their service.
And OpenAI wasn't being naïve. Almost anything you can name can be used for some kind of evil purpose. OpenAI knew that, and wrote their Usage Policies accordingly. They didn't just "ignore the possible evil uses."
Fortunately, OpenAI controls all uses of artificial intelligence, and the world is now safe from bad use cases. Altman et al have saved the world!
Re: (Score:2)
Fortunately, OpenAI controls all uses of artificial intelligence, and the world is now safe from bad use cases. Altman et al have saved the world!
You forgot to close your /sarcasm block.
Nobody thinks that, not even OpenAI. They weren't trying to save the world. They were trying to save themselves from the blowback that could result from their system being used with a weapon. And they can. Other LLM providers might allow interfacing with weapons, but OpenAI decided not to.
Enough with the histrionics.
Re: (Score:2)
Fortunately, OpenAI controls all uses of artificial intelligence, and the world is now safe from bad use cases. Altman et al have saved the world!
You forgot to close your /sarcasm block.
Nobody thinks that, not even OpenAI. They weren't trying to save the world. They were trying to save themselves from the blowback that could result from their system being used with a weapon. And they can. Other LLM providers might allow interfacing with weapons, but OpenAI decided not to.
Enough with the histrionics.
Sarcasm is not histrionics. Blowback, fear of lawsuits. The kitchen can get hot at times.
Re: (Score:2)
Sarcasm is not histrionics.
Something can be both, as you demonstrated.
Blowback, fear of lawsuits. The kitchen can get hot at times,
And what was your point? I'm not sure you have made one yet. Try making it without the sarcasm and histrionics.
Re: Problem (Score:3)
"Since I can't legally own a gun, but I will inevitably acquire one anyway, why don't you just give me one? You can't prevent it. Just give it to me."
Re: (Score:2)
Re: (Score:2)
I think his point is that you really can't stop determined people from doing things they might otherwise be enjoined from doing. For example, you don't need to get a gun from someone else if you have a piece of steel and a milling machine. In that case, you can make your own. Will it be a high-precision killing instrument? Probably not; but it doesn't have to be.
Other countries also. Humans are pretty smart at coming up with ways to kill each other. It's a core competency. Altman et al can try to enforce some blue sky and cute puppy concept of AI, but like most technology, it can and it will be used for other purposes, by other countries, with other agendas.
The homemade gun is a good example. I have a mill and a lathe in my workshop. The only thing I might have a problem with is rifling the barrel. But we don't go around banning metalworking machinery or demand
Re: (Score:2)
"naivety of people who think they can exercise some sort of control over AI"
Seems like they can exercise some control, since this use no longer works.
Re: (Score:2)
"naivety of people who think they can exercise some sort of control over AI"
Seems like they can exercise some control, since this use no longer works.
Perhaps over their product. As for stopping weaponized AI, that's very little control.
Silly Employee! (Score:1)
That's only for the OpenAI customers to do, not the employees!
Re: (Score:2)
I saw nothing in TFA or TFS that indicated this guy was an employee (of OpenAI.)
Seems clear enough to me (Score:2)
From TFA (not TFS):
The [OpenAI] spokesperson clarified that "OpenAI's Usage Policies prohibit the use of our services to develop or use weapons, or to automate certain systems that can affect personal safety."
So STS 3D violated the Usage Policies, and OpenAI shut him down. Good.
Naughty user! (Score:5, Insightful)
Huh? (Score:3)
Re:I don't see a problem (Score:5, Insightful)
1) A rifle is a legal product where the developer is located.
2) Aiming and firing a rifle, even at a person, is a legal action in a variety of circumstances.
3) There are no laws in this person's jurisdiction preventing computer control of a rifle, any more than a "smart gun" would be.
4) He was using OpenAI in a manner that violated OpenAI's Usage Policies.
5) OpenAI cut him off.
Does that help?
Re: (Score:1)
Re: I don't see a problem (Score:1)
I am sure defense contractors (Score:3)
Re: (Score:2)
100%. They have literally already deployed these kinds of systems. You can see their work in East Asia every day, but they don't talk about it in US media.
If it can be done, it will be (Score:1)
Re: (Score:2)
Just saying "don't do that" isn't going to work. There are plenty of LLMs, including locally run ones.
And OpenAI can't do anything about what other LLMs allow. But they can do something about what theirs is used for. And they have.
They didn't just say "don't do it." They said "don't do it or else you cannot use our service."
Hi there! (Score:3)
This is Eddie, your deployable sentry gun!
BRRRRRRRRRRRRRRRRRRRRRT
Ahhhhhhhh. Thank you for making a simple gun very happy!
Heard it on Sky.net news (Score:2)
The T-1 Terminator series is born.
Fire suppression (Score:2)
Seems like a scaled-up device with a fire hose might be an interesting concept for setting up a perimeter for fire suppression without the need to have a firefighter in the midst of a bad fire.
Is there an arms/gun version of rule (Score:1)
34?
Asking for a friend.
Anduril Industries (Score:2)
Publicized (Score:2)
Hypocrites (Score:2)
Just use another metaphor (Score:1)
This was done with a basic camera in the past (Score:2)
Is this what the IDF is doing in Gaza? (Score:2)