
Florida Launches Criminal Investigation Into ChatGPT Over School Shooting (npr.org)

Florida's attorney general has launched a criminal investigation into OpenAI over allegations that the accused gunman in a shooting at Florida State University last year used ChatGPT to help plan the attack. OpenAI says the chatbot is "not responsible for this terrible crime" and only provided factual information available from public sources. NPR reports: The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it, and what time to go to campus to encounter more people, according to an initial review of Ikner's chat logs. "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said. "We cannot have AI bots that are advising people on how to kill others."

Uthmeier's office is issuing subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm and how it cooperates with and reports crimes to law enforcement, dating back to March 2024. At the press conference, Uthmeier acknowledged the investigation is entering into uncharted territory and is uncertain about whether OpenAI has criminal liability. "We are going to look at who knew what, designed what, or should have done what," he said. "And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place, and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable."

[...] Ikner, 21, is facing multiple charges of murder and attempted murder for the April 2025 shooting near the student union on FSU's Tallahassee campus, where he was a student at the time. His trial is set to begin on Oct. 19. According to court filings, more than 200 AI messages have been entered into evidence in the case.


Comments Filter:
  • Chatbot Lies (Score:5, Informative)

    by gurps_npc ( 621217 ) on Tuesday April 21, 2026 @07:03PM (#66105830) Homepage

    Chatbot does NOT only provide factual information. It is an AI that works by making predictions. Those predictions are sometimes false.

    I am constantly surprised by the stupidity of the people using it and the people making it.
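    That "predictions" point can be illustrated with a toy sketch. This is a made-up frequency model, not how a real neural LLM is built, but it shows the core idea: the model emits whatever continuation was most common in its training text, with no notion of whether the result is true.

    ```python
    # Toy sketch of LLM-style generation: predict the next token from
    # frequencies in training text. Illustrative only -- real models use
    # neural networks, not counts -- but the key property is the same:
    # the output is a prediction, not a verified fact.
    from collections import Counter, defaultdict

    corpus = "the sky is blue . the sky is falling . the grass is green .".split()

    # Count which word follows each word in the training text.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(word):
        """Return the most frequent continuation -- plausible, not necessarily true."""
        return following[word].most_common(1)[0][0]

    print(predict("the"))  # "sky" -- the most frequent continuation
    print(predict("is"))   # any of the trained continuations, true or false
    ```

    Note that "the sky is falling" is exactly as available to this model as "the sky is blue"; frequency, not truth, decides what comes out.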

    • Re: (Score:3, Insightful)

      by Anonymous Coward
      The responsible party is the person who pulled the trigger.
      • Re: (Score:3, Insightful)

        by rtkluttz ( 244325 )

        Exactly, next people are going to be doing legal discovery on Levi's jeans because the jeans helped the shooter keep his balls from flapping during the shooting. Stop trying to blame tools and keep the blame squarely on the human that does the evil thing.

        • Re:Chatbot Lies (Score:4, Insightful)

          by Anonymous Coward on Tuesday April 21, 2026 @07:22PM (#66105854)
          Next people will want to prosecute Mafia bosses just because they ordered their henchman to commit crimes! What is this crazy world coming to?!
        • Re:Chatbot Lies (Score:5, Insightful)

          by SomePoorSchmuck ( 183775 ) on Tuesday April 21, 2026 @07:50PM (#66105886) Homepage

          Exactly, next people are going to be doing legal discovery on Levi's jeans because the jeans helped the shooter keep his balls from flapping during the shooting. Stop trying to blame tools and keep the blame squarely on the human that does the evil thing.

          Osama bin Laden was not on any of the planes that flew into buildings. All he did was sit there and help plan and train the people who did it.

          Or, you go to a construction demolitions expert and ask him what's the best way to place explosives around the football stadium to make sure the exits collapse first so no one can escape. He looks at floor plans and pics, tells you what supplies you need, where to plant the charges, and how to rig the IEDs to blow simultaneously.
          But all he gave you was information, so he has no legal or moral culpability for the death and destruction you cause?

          • Re:Chatbot Lies (Score:5, Insightful)

            by WaffleMonster ( 969671 ) on Tuesday April 21, 2026 @09:13PM (#66105982)

            Osama bin Laden was not on any of the planes that flew into buildings. All he did was sit there and help plan and train the people who did it.

            Or, you go to a construction demolitions expert and ask him what's the best way to place explosives around the football stadium to make sure the exits collapse first so no one can escape. He looks at floor plans and pics, tells you what supplies you need, where to plant the charges, and how to rig the IEDs to blow simultaneously.
            But all he gave you was information, so he has no legal or moral culpability for the death and destruction you cause?

            Machines don't have agency. If you use technology to help you commit crimes you are the one with agency and so you are blamed for it.

            • Re:Chatbot Lies (Score:5, Interesting)

              by HiThere ( 15173 ) <charleshixsn@@@earthlink...net> on Tuesday April 21, 2026 @09:22PM (#66106000)

              But the company providing the technology also has some agency in the matter. How much is a reasonable thing to argue about.

              • Re:Chatbot Lies (Score:5, Insightful)

                by WaffleMonster ( 969671 ) on Tuesday April 21, 2026 @11:11PM (#66106074)

                But the company providing the technology also has some agency in the matter.

                Those providing LLM service have no more agency over usage of their technology than a manufacturer of integrated circuits or power and telecom utilities.

                • Re:Chatbot Lies (Score:5, Insightful)

                  by thegarbz ( 1787294 ) on Wednesday April 22, 2026 @12:29AM (#66106136)

                  Are you sure? Ask your favourite AI chatbot to draw you a naked woman. I bet you its answer is that it is specifically programmed to deny your request.

                  This is the Section 230 immunity vs moderation/curation argument all over again.

                  • Are you sure? Ask your favourite AI chatbot to draw you a naked woman.

                    Controls are a different concept from agency.

                    For example a firearm with a user authentication function does not have agency over itself or its use even though there is a control in place by the manufacturer.

                    I bet you its answer is that it is specifically programmed to deny your request.

                    Chatbots are trained, not programmed.

                    • Controls are a different concept from agency.

                      Except no one is suing an LLM here, they are suing the people who put in place the controls.

                      Chatbots are trained, not programmed.

                      Leaving aside the tone of your response: chatbots are trained to respond; they are programmed not to, with explicit restrictions. It is not the core training that places restrictions on them. It's programmatic filtering of inputs and outputs.

                  • It is logically impossible to prevent a general-purpose tool from performing every specific action that you would like to prohibit. What the fuck is wrong with you? I was not educated, but I am betting that you were, and you should absolutely know this principle.

                    Every idiot who has watched Halloween knows that you can't restrict a hammer to just pounding in nails.

                • OTS GPS companies cannot sell units that work above a certain altitude or speed without a special buyer's permit, because those components are known to be usable in homegrown weapons.

                  You cannot bulk-purchase fertilizer for the same reason without a buyer's permit.

                  Why should LLMs be different?

                  • OTS GPS companies cannot sell units that work above a certain altitude or speed without a special buyer's permit, because those components are known to be usable in homegrown weapons.

                    These are export restrictions. You can sell them to anyone inside of the country.

                    You cannot bulk-purchase fertilizer for the same reason without a buyer's permit.
                    Why should LLMs be different?

                    Policies, regulations and controls are a different concept from agency. I wasn't offering an opinion on whether or not x, y or z should or should not be legally allowed.

            • Osama bin Laden was not on any of the planes that flew into buildings. All he did was sit there and help plan and train the people who did it.

              Or, you go to a construction demolitions expert and ask him what's the best way to place explosives around the football stadium to make sure the exits collapse first so no one can escape. He looks at floor plans and pics, tells you what supplies you need, where to plant the charges, and how to rig the IEDs to blow simultaneously.
              But all he gave you was information, so he has no legal or moral culpability for the death and destruction you cause?

              Machines don't have agency. If you use technology to help you commit crimes you are the one with agency and so you are blamed for it.

              If those machines are designed to commit crimes then there was intent to begin with, and the designers/producers of that machine are culpable for crimes committed with said machines. If you read TFA carefully you will note that at no point does the attorney general suggest that ChatGPT go to prison. Rather, he is implying that the people behind ChatGPT may bear some responsibility.

              • The people behind ChatGPT are essentially designing a neural net training system and a general-purpose inference and expression system based on the neural net. Then they are feeding in pretty much all of the human expressions of knowledge or communication that are in the public domain or cheaply commercially purchasable (e.g. used books), as learning material for the neural net.

                Essentially, they are creating a very large library with a very fast, efficient, disinterested librarian function to help a person find what they're looking for. I'm sure a lot of crimes have been planned in the past with the assistance of library visits. Do we start rounding up librarians? Or, in a more apt analogy, do we start rounding up the university professors of library science, who trained all those librarians?
                • The people behind ChatGPT are essentially designing a neural net training system and a general-purpose inference and expression system based on the neural net. Then they are feeding in pretty much all of the human expressions of knowledge or communication that are in the public domain or cheaply commercially purchasable (e.g. used books), as learning material for the neural net.

                  Essentially, they are creating a very large library with a very fast, efficient, disinterested librarian function to help a person find what they're looking for. I'm sure a lot of crimes have been planned in the past with the assistance of library visits. Do we start rounding up librarians? Or, in a more apt analogy, do we start rounding up the university professors of library science, who trained all those librarians?

                  After many years on the internet, I have learned to never argue with an idiot. It is a long and futile process.

            • Osama bin Laden was not on any of the planes that flew into buildings. All he did was sit there and help plan and train the people who did it.

              Or, you go to a construction demolitions expert and ask him what's the best way to place explosives around the football stadium to make sure the exits collapse first so no one can escape. He looks at floor plans and pics, tells you what supplies you need, where to plant the charges, and how to rig the IEDs to blow simultaneously. But all he gave you was information, so he has no legal or moral culpability for the death and destruction you cause?

              Machines don't have agency. If you use technology to help you commit crimes you are the one with agency and so you are blamed for it.

              Exactly. If a person uses ChatGPT to get ideas on how to commit a crime, at what granularity do we prosecute? The people who did the coding? The people who taught them? The internet? The computer companies, the engineers who designed the computers? The companies that made the components? The school that taught them how to read? Seems like everyone on earth is responsible for every murder in the "ChatGPT is responsible" case.

              • If a person uses ChatGPT to get ideas on how to commit a crime, at what granularity do we prosecute? The people who did the coding? The people who taught them? The internet? The computer companies, the engineers who designed the computers? The companies that made the components? The school that taught them how to read? Seems like everyone on earth is responsible for every murder in the "ChatGPT is responsible" case.

                You think it's everyone, except the one group you didn't mention: the C-suite at OpenAI. They are the ones who make the call to release the product. And they are the ones who should face any consequences that result from it. The same has applied to companies that make faulty automobiles, weed killers, pharmaceutical products, and so on.

          • by sjames ( 1099 )

            The Engineer had agency. The AI (or google search, or a stack of text books) does not.

            Of course, if the mad bomber instead posed as a student and found some non-evil reason for wanting the exits to collapse first (even a thin one like directing the dust upwards), the engineer is less culpable or not culpable at all.

            But we need to be very careful about imagining an AI has agency. There are many legal and philosophical implications behind that.

          • Re: (Score:1, Troll)

            Bullshit. Bin Laden and your hypothetical engineer know exactly what they're doing. ChatGPT is a tool, no more culpable than a car used to hit someone, or those who made the car.
            • Hundreds of thousands of juries - the Constitutionally-appointed deciders of culpability - have agreed that business owners, tool makers, property owners, and individuals behaving certain ways in public places are in fact criminally and/or civilly responsible for the damages suffered by victims of their negligent choices. The quote from the AG will be very persuasive to many criminal and civil jurors: "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen, we would be charging them with murder."

          • Osama bin Laden was not on any of the planes that flew into buildings. All he did was sit there and help plan and train the people who did it.

            Are you seriously comparing the roles of ChatGPT and Osama bin Laden? If so, you are intellectually bankrupt. ChatGPT did not originate any ideas about "attacks", bin Laden did.

            If you can not see the difference or deny that there is a difference, then you are either defective or a hostile actor.

        • And discard some leverage over ClosedAI ?

          Kickbacks don't kick themselves back - it's a continuous grift!

      • A bad engineer blames the user.

      • by SirSlud ( 67381 )

        Multiple people can share responsibility, as their actions combine together. A person who drives somebody to a bank for the known purpose of robbing the bank is determined to share *some* responsibility for the robbery of the bank. Just because they're not the person who took the money out of the bank vault does not mean the law does not consider them partly responsible.

        I know I know, life is so much easier if you just try and make everything stupidly simple.

      • The responsible party is the person who pulled the trigger.

        There's this well established concept in law called Accessory After The Fact - you can google it

    • by ClickOnThis ( 137803 ) on Tuesday April 21, 2026 @08:02PM (#66105906) Journal

      I don't think the alleged shooter is stupid. He was a student at the university where the shooting took place. I'd be more inclined to think he is mentally ill.

      As for the makers of ChatGPT being stupid -- no I don't think that either. They're among the smartest people on the planet. If anything I'd say they were careless, for not building a red-flag alert into their product that reports suspicious behavior. Maybe there should be laws that require such a thing.

      And that leaves ChatGPT itself, which I am not inclined to call stupid, mentally ill, or careless. I'm not ready (yet) to give it that agency.

      • Re:Chatbot Lies (Score:5, Informative)

        by gurps_npc ( 621217 ) on Tuesday April 21, 2026 @09:10PM (#66105978) Homepage

        The chatGPT makers are NOT among the smartest people, you have fallen victim to propaganda.

        The technology behind ChatGPT was invented by:
          Dzmitry Bahdanau, Kyunghyun Cho and Yoshua Bengio in
        https://arxiv.org/abs/1409.0473/ [arxiv.org], first posted in September 2014.

        Everyone else just copied their work with minor improvements and added immense amounts of memory and processing.
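        The core mechanism that paper introduced, attention, can be sketched in a few lines. This is a toy dot-product version for brevity; the paper itself scores positions with a small learned network rather than a raw dot product.

        ```python
        # Minimal sketch of attention (cf. Bahdanau et al., arXiv:1409.0473):
        # score each input position against a query, softmax the scores into
        # weights, and return the weighted average of the values.
        import math

        def attention(query, keys, values):
            # Score each key against the query (dot product, for simplicity).
            scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
            # Softmax: turn scores into positive weights that sum to 1.
            m = max(scores)
            exps = [math.exp(s - m) for s in scores]
            total = sum(exps)
            weights = [e / total for e in exps]
            # Weighted average of the values: the "context vector".
            dim = len(values[0])
            return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

        ctx = attention(query=[1.0, 0.0],
                        keys=[[1.0, 0.0], [0.0, 1.0]],
                        values=[[10.0, 0.0], [0.0, 10.0]])
        # The context vector leans toward the value whose key matches the query.
        ```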

        Most of the guys who currently are in charge of the Large Language Models are more interested in money than in science. They are above average intelligence but are in no way the smartest people on the planet.

        There is a difference between the scientists who invent and/or discover the science, the engineers who figure out how to implement it, and the money men who keep the gravy train rolling.

        The guy at the top makes business decisions and never ever invents stuff. The scientists are lucky if they get paid anything for inventing it. The engineers always get paid - but not as much as the money guy on top.

        • The chatGPT makers are NOT among the smartest people, you have fallen victim to propaganda.

          Whatever. They're not stupid, that's the point.

          • The chatGPT makers are NOT among the smartest people, you have fallen victim to propaganda.

            Whatever. They're not stupid, that's the point.

            Given how often the current article is repeated in different topics, data would suggest that they are...

        • That is just a load of rubbish. There's a world of smart people building on the work of others at all times in all technological developments. Companies like OpenAI aren't just feeding a 10-year-old equation endless data; they literally employ hundreds of the brightest minds in the field, have more PhDs on the floor than the Nobel prize convention, and in some cases spend 7-figure salaries on having smart people modify and build upon the base that you built.

          Just because one engineer invented cement do

        • Arguably, the true inventor of ChatGPT is Claude Shannon (1948, page 5 of the pdf [harvard.edu]). He did the first implementation and also the most important theoretical step all at once, and he did it by hand. Had he been born 50 years later with modern hardware ca 2015 and pirated internet content galore, he no doubt would have gone all the way building the first LLM. Bengio et al were merely standing on the shoulders of a giant.
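          The experiment Shannon did by hand can be sketched directly: build a second-order (bigram) word model from sample text, then generate new text by repeatedly sampling a plausible successor. The sample sentence below is illustrative, not Shannon's actual source text.

          ```python
          # Sketch of Shannon's 1948 hand experiment: a second-order word
          # approximation of English. Record which words follow which, then
          # generate by repeatedly sampling a recorded successor.
          import random
          from collections import defaultdict

          text = ("the head and in frontal attack on an english writer that the "
                  "character of this point is therefore another method").split()

          successors = defaultdict(list)
          for prev, nxt in zip(text, text[1:]):
              successors[prev].append(nxt)

          def generate(start, n, seed=0):
              rng = random.Random(seed)
              out = [start]
              for _ in range(n):
                  choices = successors.get(out[-1])
                  if not choices:
                      break  # dead end: the last word never had a successor
                  out.append(rng.choice(choices))
              return " ".join(out)

          print(generate("the", 8))  # locally plausible, globally meaningless
          ```

          Swap the hand-tallied bigram table for a trained neural network and scale up the context, and the family resemblance to an LLM is hard to miss.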
      • As for the makers of ChatGPT being stupid -- no I don't think that either. They're among the smartest people on the planet.

        Citation needed. There is nothing whatsoever to indicate that OpenAI employees are "among the smartest people on the planet". Results indicate that A) they are not as good at what they do as Anthropic, and B) they keep repeating the same mistakes in different areas (previously they "forgot" the safeties for suicide prevention, mental-illness reinforcement and ideation, and I probably forgot a few). This would indicate that they are, in fact, not that smart.

    • To be fair, if this "stupid" thing helps people make actual things happen, how is it "bad and not working"? You can dislike it as much as you want, but it can program and write poetry better than 90% of the "real human population". If that is not an impressive and disruptive technology, I do not know what is.

      Really, if you do not understand something, it does not mean it is bad, it means you do not understand it.

      Regarding this specific case, even a hammer can be used to kill a person, it does not mean that the person who designed or made the hammer is a bad guy. The decision to attack people was on that person, let us stop blaming tools and start holding the bad guys accountable. So simple.

      • Regarding this specific case, even a hammer can be used to kill a person, it does not mean that the person who designed or made the hammer is a bad guy. The decision to attack people was on that person, let us stop blaming tools and start holding the bad guys accountable. So simple.

        If a hammer engaged in a dialogue with its user about how to commit a crime, then I think the hammer manufacturers might at least need to answer some questions.

        • This is a tool that talks. This is how this tool works. Are you trying to imply that the chatbot has some intent and made the person do the bad thing to gain some benefit? This would definitely change things.

          If you think about it, it was the computer that allowed that person to interact with the chatbot...

          • This is a tool that talks. This is how this tool works. Are you trying to imply that the chatbot has some intent and made the person do the bad thing to gain some benefit? This would definitely change things.

            The intent of the tool is irrelevant (assuming it even has one.) The behavior of the tool is what matters here. And if it behaves in such a way as to encourage harm, then its manufacturers need to answer for it.

            • If the tool does not have an intent, it cannot discourage anything. Discouraging is an intent... intent to prevent violence, for example. However, it is a tool, it is not a person, it does not really have an intent. The whole point of demanding pacifying behavior from a tool is bogus.
              • If the tool does not have an intent, it cannot discourage anything.

                Not true. We design tools with warnings to discourage misuse. We can design a software tool the same way. It can detect keywords and issue warnings. But you can't do it reliably at the LLM level, because you cannot do ANYTHING reliably with an LLM; you need to wrap something around it.
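                A minimal sketch of that "wrap something around it" approach: deterministic filters that screen the prompt before the model call and the reply after it. Everything here is hypothetical - `call_model` is a stand-in, not a real API, and production systems use trained classifiers rather than keyword lists - but it shows why the guardrail lives outside the LLM.

                ```python
                # Sketch of a guardrail wrapper around a hypothetical model call.
                # Keyword lists are illustrative only; real systems use trained
                # safety classifiers on both the input and the output.
                BLOCKED_TERMS = {"how to kill", "build a bomb"}  # illustrative

                def call_model(prompt: str) -> str:
                    # Stand-in for the actual (unreliable) LLM call.
                    return f"model response to: {prompt}"

                def guarded_chat(prompt: str) -> str:
                    # Input filter: refuse before the model ever sees the prompt.
                    if any(term in prompt.lower() for term in BLOCKED_TERMS):
                        return "REFUSED: this request appears to seek harmful information."
                    reply = call_model(prompt)
                    # Output filter: screen the model's reply the same way.
                    if any(term in reply.lower() for term in BLOCKED_TERMS):
                        return "REFUSED: response withheld."
                    return reply

                print(guarded_chat("how to kill a linux process"))  # refused (a false positive)
                print(guarded_chat("what is attention?"))           # passes through
                ```

                The first example also illustrates the thread's other point: crude filters generate false positives, which is exactly why the filtering is a design trade-off rather than a solved problem.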

                • You could, but it is not a foolproof solution, and it usually gets in the way and makes your tool inferior. Think about guns with fingerprint scanners: they have been tried, but they are not out there because they do not work well enough. You can issue warnings as much as you want, but what are you going to achieve? The bad guy already knows that he is a bad guy; a good guy does not plan anything bad, so any warning will be a false positive. Tools are tools, they have to be efficient at what they do. The responsibility for the actions of the user is on the user, not on the tool.
                  • The bad guy already knows that he is a bad guy; a good guy does not plan anything bad, so any warning will be a false positive.

                    You forgot dumbshits who don't know shit, who are the primary audience for LLM-based AI.

                    Tools are tools, they have to be efficient at what they do.

                    They also have to be fit for purpose. Sometimes this is spelled out explicitly in so many words, in other cases you can just return or reject things that "don't work".

                    The responsibility for the actions of the user is on the user, not on the tool.

                    Nobody said it was on the tool, but sometimes, it is factually also on the provider of the tool. Pretending otherwise doesn't change the law. If the provider is negligent, they can share in responsibility. This is how things other than LLMs work, why not LLMs too?

                      The responsibility for the actions of the user is on the user, not on the tool.

                      Nobody said it was on the tool, but sometimes, it is factually also on the provider of the tool. Pretending otherwise doesn't change the law. If the provider is negligent, they can share in responsibility. This is how things other than LLMs work, why not LLMs too?

                      Guns have safeties even though they can get in your way, for safety's sake. Equipment has lockouts. Most things come with warnings. Automobiles are starting to get automated guardrails like automatic braking and eventually won't allow you to e.g. steer into another vehicle, because it's feasible to prevent and there is a public safety interest. There's simply zero justification for the multi-billion dollar corporations producing and selling access to these LLMs to not institute some guardrails of their own.

                      I can agree with this. As we learn a tool we can learn how to make it better and safer. And we can also force the manufacturers to implement these measures by finding them negligent if they have not and fining them. Yes, this is how it works.

              • Who are you having a conversation with?

                You keep talking about what the tool thinks -- assuming it even can.

                I'm talking about what the tool does.

                • I am not sure I am following you. Tools do not "think"; tools do not do anything by themselves. The chatbot did not wake the person up and suggest that they should attack some people. The person planned that; the tool helped the person plan. This is what tools do.
              • Like it or not, what your product does is more important than whether it was designed deliberately to do that. If your product actually injures or kills people, you will be generally found liable. This has been the case for over a century. It's why every piece of consumer electronics has a "UL" sticker on it, because the insurers want to minimize the risks before offering liability insurance.

                Slashdot Lawyers like to pretend that every evil thing that happens is the result of one person and that only one per

      • "Guns don't kill people; people kill people."
  • It would be a problematic precedent if there were criminal liability. Such a ruling could potentially hamstring phone books, encyclopedias, taxi services, and gun manufacturers. Any ancillary service or device used in a crime becomes a target for an imaginative prosecutor.

    Civil liability? I sure hope so. The AI industry does not regulate itself and the government has so far refused to regulate what they believe is a golden goose.

    • Where do you live where they still make phone books?
      I have not seen one for a decade at least.

      • The People's Republic of California. They still make them, and they leave a pile of them at the end of my private road every year.

    • by AmiMoJo ( 196126 )

      Taking an encyclopedia as an example, they typically do not give enough detail on certain topics to be criminally negligent. They might describe an explosive, but not how to make it. They might talk about suicide, but not details of methods or how to make them more effective.

      Doing so could be criminally negligent if someone used that knowledge to hurt someone or themselves. It is foreseeable that such information is not something that should be given out freely in a book that is likely to be in people's homes.

      • The encyclopedia doesn't explain how to make explosives because that's not its job. The Anarchist Cookbook sort of does, but it's dangerous to follow because some of the recipes are very bad; even that has been determined to not be illegal. And some of the recipes WILL work, and those aren't illegal either. If you go to the library you can get a plan for a nuclear weapon or a gun.

        It's not just information being provided, it's how it's couched. The LLM presents its bullshit to you as a gift in celebrati

    • Don't forget that he probably also ate breakfast before doing the deed, for the energy needed to pull the trigger. Let's go after Kellogg's too.
  • I'm not buying it (Score:5, Insightful)

    by tech10171968 ( 955149 ) on Tuesday April 21, 2026 @07:20PM (#66105848)
    I remember when Columbine happened. I also remembered when the Federal building in Oklahoma got blown up. Guess what WASN'T around back then? That's right: OpenAI wasn't a thing. But those events still happened. Blaming a chatbot for a tragedy is like blaming McDonald's for your obesity: even if the restaurant didn't exist, you were going to end up in that condition because of your eating habits anyhow. The name of the restaurant might have changed but the song remains the same. This guy had it in his head to shoot up the school, OpenAI or no OpenAI. Rounds were going to fly downrange even if AI didn't exist. This is some lazy logic.
    • by evslin ( 612024 ) on Tuesday April 21, 2026 @07:46PM (#66105878)

      Remember when life was simpler and we could just blame video games and Marilyn Manson for this shit?

    • Yeah but the people who banged on about how Doom was the problem got a lot of press out of it and some of them built entire careers out of it.

      That's what this is about. He knows damn well they are covered by Section 230 of the CDA, and as much as the right wing would love to strike that down so they could finish taking over the internet, this isn't going to be the case that does it.

      He is just after a bit of press and a little bit of think of the children bullshit.
      • He knows damn well they are covered by Section 230 of the CDA

        I don't think they are.

        No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

        (emphasis added)

        Section 230 says that Slashdot can't be held liable if a user posts a "how to coordinate a school shooting" guide in the comments section, as it's provided by "another" information content provider. If a Slashdot editor posted such a guide to the front page, they've provided that content themselves, and are no longer protected.

        I see the output of ChatGPT as much more analogous to an editor's post. OpenAI is creating and publishing it themselves, and section 230 does not apply.

        • > Section 230 says that Slashdot can't be held liable if a user posts a "how to coordinate a school shooting" guide in the comments section, as it's provided by "another" information content provider. If a Slashdot editor posted such a guide to the front page, they've provided that content themselves, and are no longer protected.

          I agree with your analysis and just wanted to add:

          Section 230 doesn't mean there's no liability, only that none of it befalls the owners of the Internet infrastructure used to host it.

      • Section 230 is about user-generated content. You aren't liable for what someone else posts to your website. It has nothing to do with your program outputting algorithmically generated text. To the best of my knowledge, there is no shield law like S.230, or the laws absolving firearm manufacturers of liability, that applies to text generated by an LLM.

    • Did AI help the shooter do more damage?

    • by ledow ( 319597 )

      That argument isn't logic, though, is it?

      You say that before AI, people still shot people. And after AI, people still shot people.

      So it's not AI that's shooting people.

      But then you jump into the McDonald's analogy which is implying that the guns (that were around before AI, and still are) aren't to blame either.

      So there's no logic in lumping those two together by opposite arguments.

      Now... you can say that PEOPLE are to blame, and that's fine. And people existed before AI and after AI.

      But if the person who

      • There are two components: Agency for direct liability, and Negligence for indirect liability.

        People have agency. Guns do not have agency. Software does not have agency.

        Negligence is broader. A gun owner can be negligent if their gun is improperly protected from unauthorized use. A gun seller can be negligent if they sell a gun to someone they should not. A software provider can be negligent if their software contributes in a way they should have foreseen and prevented.

        That becomes the question: Should the software provider have foreseen this use and prevented it?

    • Re: (Score:2, Insightful)

      by mjwx ( 966435 )

      I remember when Columbine happened. I also remember when the Federal building in Oklahoma got blown up. Guess what WASN'T around back then? That's right: OpenAI wasn't a thing. But those events still happened.

      Blaming a chatbot for a tragedy is like blaming McDonald's for your obesity: even if the restaurant didn't exist, you were going to end up in that condition because of your eating habits anyhow. The name of the restaurant might have changed but the song remains the same.

      This guy had it in his head to shoot up the school, OpenAI or no OpenAI. Rounds were going to fly downrange even if AI didn't exist. This is some lazy logic.

      This is just the only country in the world where this kind of thing happens refusing to admit why this kind of thing happens and trying to find any reason except the obvious to explain why this kind of thing happens.

      The old excuse of "video games and rock and/or roll music" just ain't cutting it no more.

      So they're back to trying to find any scapegoat they can to avoid admitting the US has too many guns and an unhealthy love of violence.

      • So they're back to trying to find any scape goat they can to avoid admitting the US has too many guns and an unhealthy love of violence.

        Except the only couple of countries with more guns have fewer shootings and fewer gun deaths, so the guns really aren't the problem — they only exacerbate it. The problem is the other part, which you nailed. This is a violent country. We don't just permit violence, we worship it. You know how Americans always say if it wasn't gun violence, it would be some other kind? That's because it would be, here.

    • I remember when Columbine happened. I also remember when the Federal building in Oklahoma got blown up. Guess what WASN'T around back then? That's right: OpenAI wasn't a thing. But those events still happened.

      Blaming a chatbot for a tragedy is like blaming McDonald's for your obesity: even if the restaurant didn't exist, you were going to end up in that condition because of your eating habits anyhow. The name of the restaurant might have changed but the song remains the same.

      This guy had it in his head to shoot up the school, OpenAI or no OpenAI. Rounds were going to fly downrange even if AI didn't exist. This is some lazy logic.

      How much does the NRA pay you per post?

    • Re:I'm not buying it (Score:5, Interesting)

      by AmiMoJo ( 196126 ) on Wednesday April 22, 2026 @08:12AM (#66106494) Homepage Journal

      Tobacco companies would argue that tobacco products existed before cigarettes and people got lung cancer back then too. Criminal liability doesn't work that way. It's based on the accused not taking reasonable steps to prevent something foreseeable happening.

      OpenAI know how ChatGPT is used. They know that young people are talking to it. They know that it sometimes gives very, very bad advice, or is too keen to agree rather than to tell someone they are wrong when they talk about suicidal thoughts or crimes they are contemplating committing. They didn't do enough to stop it.

      It's more like cases where cars had deadly faults that the manufacturer knew about and failed to take seriously enough to do anything about.

      • Tobacco companies would argue that tobacco products existed before cigarettes and people got lung cancer back then too. Criminal liability doesn't work that way. It's based on the accused not taking reasonable steps to prevent something foreseeable happening.

        OpenAI know how ChatGPT is used. They know that young people are talking to it. They know that it sometimes gives very, very bad advice, or is too keen to agree rather than to tell someone they are wrong when they talk about suicidal thoughts or crimes they are contemplating committing. They didn't do enough to stop it.

        It's more like cases where cars had deadly faults that the manufacturer knew about and failed to take seriously enough to do anything about.

        Out of curiosity, does the same thing apply to chemistry? In chemistry class, it became pretty obvious pretty quickly that chemistry could be used to do some bad things, and it wasn't rocket science. Trying to teach chemistry with every possible avenue for adverse effects eliminated means you cannot teach chemistry. And if the student does something bad, the teacher is responsible.

        Teaching biology can show people ways to eliminate other people. Scratch biology.

        As well, the information that ChatGPT provided was, by OpenAI's account, already available from public sources.

    • by SirSlud ( 67381 )

      Fortunately, and overwhelmingly provably, the physical and legal world doesn't work in the way you wish it did.

      Protip: as soon as you're talking about "never" or "always" or "happened before" or "still happens" .. basically anything in terms of any absolutes, you're not operating in the real world.

      People survived car crashes before seatbelts were mandated. People still die in car crashes even when using seatbelts. You'd be a moron to argue seatbelts are useless or car manufacturers should not be legally required to install them.

    • I remember when Trump was shot a few years ago. Seems people have always shot at Presidents. Which is why I implore you not to hold me liable for my brand new book "How to shoot the President: Ten tips for assassinating an ass! The Secret Service doesn't want you to read #9!" Available at all booksellers now, buy before midnight today and we'll give you a FREE $500 off coupon for an AR-15!

      (To save the SS a trip, no, this is not real, it's a joke to make a point, and while I'm not a fan of the jerk I don't wish him any harm.)

  • by Powercntrl ( 458442 ) on Tuesday April 21, 2026 @07:29PM (#66105866) Homepage

    The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it ...

    All questions that your local gun store clerk would be more than happy to answer for you.

    and what time to go to campus to encounter more people

    I'm fairly certain Google Maps also lists busy times for specific locations, at least it does for restaurants and stores.

    This is all very on-brand from Florida, a place where according to Republican logic, this is not supposed to happen because open carry [mynews13.com] should've brought all those supposed "good guys with a gun" out of the woodwork. Gee, I can't possibly imagine why more guns isn't making us safer. /s

    • Hmmm... so the next step must be "forced carry" or at least if you are a good guy, to make sure there are enough good guys with a gun in any situation? :-P
    • The Republican attorney general, James Uthmeier, said at a press conference in Tampa on Tuesday that accused gunman Phoenix Ikner consulted ChatGPT for advice before the shooting, including what type of gun to use, what ammunition went with it ...

      All questions that your local gun store clerk would be more than happy to answer for you.

      I'm pretty sure if you went to a gun store and asked the clerk "What kind of gun and ammo would you recommend for inflicting mass casualties in a school shooting?" they'd call the cops.

      It's hard for me to form a strong opinion on this without knowing exactly what the shooter asked ChatGPT. If he asked for the weapons most commonly used in school shootings and was provided information that could just as well have been used by a journalist writing a piece on gun control, I have a hard time seeing OpenAI as culpable.

      • I'm pretty sure if you went to a gun store and asked the clerk "What kind of gun and ammo would you recommend for inflicting mass casualties in a school shooting?" they'd call the cops.

        True, but you only have to be a tiny bit smarter than that to get useful information, like "what kind of gun and ammo will give me the best results if I face a home invasion by multiple parties?" Bonus points if you tell them you have a long hallway and would like to be able to stop assailants before they start down it so they don't detour into any of your family's rooms along the way.

    • I'm fairly certain Google Maps also lists busy times for specific locations, at least it does for restaurants and stores.

      This is all very on-brand from Florida, a place where according to Republican logic, this is not supposed to happen because open carry [mynews13.com] should've brought all those supposed "good guys with a gun" out of the woodwork. Gee, I can't possibly imagine why more guns isn't making us safer. /s

      There is no concealed carry allowance/license in any State that I'm aware of that makes it legal to carry everywhere. Surely you've seen those "No Guns Allowed" window stickers at businesses and public buildings. Most States prohibit concealed carry anywhere that sells alcohol, so a lot of restaurants are off limits. Fact is, most places where these mass shootings occur are "gun free" zones either by statute or private property preference.

      The idea that "carry" laws mean that people can be armed everywhere is simply wrong.

  • "We cannot have AI bots that are advising people on how to kill others."

    If the AI hallucinates and a girl's school gets blown up, does Hegseth consider it a plus because it shows how crazy the US military can be?

  • Would we hold Google or Bing accountable in the same way? Is the knowledge itself illegal?
  • The only way to stop a bad guy with an LLM is with a good guy with an LLM.

  • Florida law enforcement are dumb as fuck. Details at 11.
  • by Hey_Jude_Jesus ( 3442653 ) on Tuesday April 21, 2026 @11:07PM (#66106070)
    In the 1970s, the court ruled it was free speech. AI is nothing but a software program with a relational database.
    • Applying court outcomes from a published, fixed text to the use of a tool (especially one which has repeatedly been treated differently by the courts) is nonsensical. As dumb as the idea that corporations are people is, hammers are not people. Why would the first amendment apply to a hammer?

  • Between Florida and Texas, it's hard to keep straight which is batshit-crazier with their corrupt politicians, ridiculous anti-American judges, violent religious extremists, governor-led culture wars, and rampant obesity. We live in the dumbest timeline.
  • Wait, so now the "guns don't kill people, people kill people" crowd is claiming that LLMs kill people?

    I personally do think that the AI companies should put safeguards in place to stop their products from giving out harmful advice. That said, I don't see how this AG can argue that they are criminally liable while the gun manufacturers are not.

    • Do the guns give shooters advice to maximize illegal killings? Unless the guns have voice or text output, then the answer is clearly not.
  • by Dagmar d'Surreal ( 5939 ) on Wednesday April 22, 2026 @08:54AM (#66106546) Journal

    It seems really hypocritical to me that a Republican is saying "We cannot have AI bots that are advising people on how to kill others." when it seems like it was only two weeks ago they were using Anthropic's tech for exactly that and got very, very angry when Anthropic suggested they didn't want people using them to plan military attacks against other nations.

  • Companies are never held to account unless it's to another company with pockets deep enough to pay lawyers for years of work on their behalf.
  • From OpenAI's engineers' perspective, the purpose of ChatGPT is to write things that appear to be similar to what humans have written, or would write. The ethics of this perspective are that OpenAI should have no liability. ChatGPT is for novelty purposes only, and it's as dangerous as a Magic 8 Ball.

    From a different perspective (including, possibly, OpenAI's own marketing team's perspective), the purpose of ChatGPT is to help solve problems, give people advice, etc. The ethics of this perspective are that OpenAI does bear responsibility for the advice it gives.

  • They will never be able to stop every angle-shooting workaround. Someone could ask it "find the weak points in our defense and how we can strengthen it" and then ignore the 2nd part, etc. Knowledge is knowledge and you can do multiple things with it. It's like a baseball bat. You can hit a ball or you can hit a bone.
  • Did ChatGPT play heavy metal to the kid while they were at it?
