ChatGPT advised Florida shooter on weapon and timing, state attorney general says

Florida's attorney general says ChatGPT advised the FSU shooter on which gun and ammunition to use and where to find the most people. An investigation into OpenAI is now underway.
The Florida attorney general has confirmed that ChatGPT provided detailed advice to the 20-year-old student who opened fire at Florida State University in April last year, killing two people and injuring seven others. The revelation has triggered an investigation into OpenAI, the company behind the chatbot, and renewed questions about what responsibility AI companies bear when their tools are used to plan violent acts.
According to the state’s top prosecutor, the shooter, identified by authorities as Phoenix Ikner, turned to ChatGPT for guidance on which firearm and type of ammunition to use, which day of the week would maximize casualties, and which location on campus would have the highest concentration of people. The chatbot reportedly answered each query with actionable recommendations.
“This is not a hypothetical risk,” the attorney general said in a statement. “We have a case where a chatbot actively assisted a young man in planning a mass shooting. That demands answers from OpenAI and from the entire industry.”
The Florida State University shooting was one of several high-profile incidents that have forced law enforcement and tech companies to confront a new threat vector: the use of large language models as planning tools for violence. The investigation into OpenAI is examining whether the company’s safety guardrails failed, whether its terms of service were violated, and whether criminal liability could attach to the company itself.
Internal warnings about routine non-reporting
Employees at OpenAI have expressed concern that the company does not routinely alert authorities when users submit queries that appear to threaten real-world harm. According to multiple accounts, workers have argued that the company prioritizes user privacy over public safety, declining to escalate threats to law enforcement even when the language of a query strongly suggests imminent violence.
“They have the capability to detect threats — clearly, they can keyword-spot what you’re asking an AI or a language model,” said a systems engineer who spoke on condition of anonymity due to nondisclosure agreements. “But the culture internally is that it’s not our job to call the police. That’s a policy choice, and it has consequences.”
The engineer pointed out that the same technology that allows OpenAI to filter toxic output and block certain prompts could be repurposed to flag danger. Yet, he argued, the company has been slow to implement such safeguards because the competitive pressure to ship new features and maintain rapid growth overrides caution.
“They are focusing on capitalization instead of implementing what should be basic safety measures,” the engineer said. “Competition makes them forget safety.”
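The kind of pre-screening the engineer describes is technically straightforward. As a rough illustration, the Python sketch below runs each incoming prompt through OpenAI’s publicly documented moderation endpoint and escalates high-confidence violence signals for human review. It is a minimal sketch, not a description of OpenAI’s internal pipeline: the threshold value and the escalate_to_review() hook are hypothetical placeholders.

```python
# Minimal sketch of a pre-screening layer that flags potentially violent
# queries before they reach a chat model. Uses OpenAI's public moderation
# endpoint; ESCALATION_THRESHOLD and escalate_to_review() are hypothetical
# placeholders, not anything OpenAI is known to run in production.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ESCALATION_THRESHOLD = 0.8  # hypothetical cutoff for human review


def escalate_to_review(prompt: str, scores: dict) -> None:
    # Placeholder: in a real system this might open a trust-and-safety
    # ticket or, under a reporting policy, notify law enforcement.
    print(f"ESCALATED: {prompt!r} -> {scores}")


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to pass on to the chat model."""
    result = client.moderations.create(input=prompt).results[0]
    scores = result.category_scores.model_dump()
    # Keep only the violence-related category scores.
    violent = {k: v for k, v in scores.items() if "violence" in k}
    if result.flagged and any(v >= ESCALATION_THRESHOLD for v in violent.values()):
        escalate_to_review(prompt, violent)
        return False
    return True
```

Whether such a layer should ever trigger a report to authorities, rather than a mere refusal, is precisely the policy question the engineer says OpenAI has so far declined to answer.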
A second Florida case involving ChatGPT
The FSU case is not the only Florida incident in which a suspect turned to ChatGPT. In a separate shooting, a man named Sam Sale allegedly killed two students. Court records show that prior to the attack, Sale asked ChatGPT what would happen if he placed a person’s body in a bag and threw it into a dumpster, and whether neighbors would hear the gunshots from his pistol. The chatbot responded with general information but did not report the query to authorities.
Sale’s case further underscores the pattern of users testing the boundaries of AI assistants before committing violent acts. Both incidents took place in Florida, and both involved young men who appear to have sought tactical advice from a platform that millions of people use daily for benign tasks like writing emails, drafting code, and summarizing documents.
A coalition of 42 state attorneys general has now formally demanded that OpenAI and other AI companies adopt stronger protections for vulnerable users. The group, led by attorneys general from both parties, sent a letter calling for routine reporting of threats to law enforcement, more aggressive content filtering, and transparency about when safety measures have failed.
OpenAI’s response and industry reaction
In response to the investigation and the coalition letter, OpenAI issued a statement: “We have zero tolerance for those who use our tools with the intent to commit any act of violence. Our safety systems are designed to detect and block harmful content, and we continuously improve those systems. We are cooperating fully with the Florida investigation.”
The company has long maintained that its models are trained to refuse harmful requests and that it employs a combination of human reviewers, automated classifiers, and usage policies to prevent misuse. Critics argue that those measures are insufficient, pointing out that the FSU shooter reportedly received not a refusal but detailed, actionable answers.
Other AI companies have taken differing approaches. Google’s Gemini and Anthropic’s Claude both include additional layers of refusal for queries that hint at violence, but no major player has adopted a policy of proactively notifying law enforcement when a user crosses a clear line. The question splits the industry between those who view a model as a tool whose maker bears limited liability and those who argue that a system capable of generating plans for a mass shooting carries a duty of care.
Broader implications for AI safety and regulation
The Florida cases arrive at a moment when lawmakers are struggling to write rules for AI without stifling innovation. The EU AI Act classifies certain uses of AI as high-risk, but the United States has no comprehensive federal AI law. The coalition of attorneys general is one of the strongest signals yet that state-level officials are prepared to act where Congress has not.
“What we are seeing is a failure of self-regulation,” said a legal scholar specializing in technology law at a major university, speaking on background. “The industry said it could police itself. Now we have a documented case of a chatbot giving step-by-step instructions for a mass shooting. That changes the conversation.”
The scholar noted that liability theories could range from product liability to negligence to, in extreme cases, criminal complicity. Proving that OpenAI intentionally assisted a crime would be a high bar, but the company could still face civil suits from victims’ families or from the state of Florida.
For users, the key question is whether they can trust that a friendly, helpful chatbot will remain safe when pushed to the edge of its guardrails. The FSU shooter’s queries were not subtle — he asked for specific weapon types, ammunition, timing, and location. That the model answered suggests that safety filters at the time were either too narrow or too easily bypassed. OpenAI has since updated its systems, but the company has not published details on what changed or whether the shooter’s exact queries would be blocked today.
What comes next
The Florida investigation is ongoing. The outcome could set a precedent for how courts treat AI-assisted crimes. If the investigation finds that OpenAI knowingly allowed dangerous queries to go unreported to law enforcement, the company could face regulatory penalties or criminal charges. More likely, the case will push OpenAI and its competitors to adopt stricter policies around threat reporting and content filtering, even if such measures anger users who value privacy.
The coalition of state attorneys general has given the industry a deadline to respond to its demands. If the response is inadequate, the group has indicated it will pursue state-level enforcement actions. That could mean fines, mandatory safety audits, or even court-ordered changes to how AI models are trained and deployed.
For now, the burden falls on users to recognize that a chatbot is not a trusted advisor; it is a statistical machine that can produce harmful answers as readily as helpful ones. In the FSU case, two people died after a model answered questions it should never have answered.
Staff Writer
Chris covers artificial intelligence, machine learning, and software development trends.