
The FSU shooting lawsuit forces the question the AI industry has been avoiding: when does a chatbot become an accomplice?

A 76-page federal complaint alleges ChatGPT helped plan the attack over months of conversation, identified the busiest time to strike and advised on weapons, while OpenAI maintains it provided only publicly available information.

by Jamie Ashcroft

The lawsuit filed against OpenAI on Sunday is not the first legal action alleging that a chatbot contributed to real-world harm, but it is the most consequential.

Vandana Joshi, the widow of Tiru Chabba, a 45-year-old Aramark executive killed in the April 2025 mass shooting at Florida State University, has filed a 76-page federal complaint in the Northern District of Florida alleging that OpenAI's ChatGPT helped the accused gunman, Phoenix Ikner, plan and prepare the attack over a period of months.

The allegations are specific and deeply uncomfortable. According to prosecutors and the complaint, Ikner engaged in extensive conversations in which ChatGPT:

  • advised on which location and time of day would produce the most victims
  • identified weekday lunchtime, between 11:30 am and 1:30 pm, as peak hours at the FSU student union (Ikner opened fire at approximately 11:57 am)
  • discussed what type of firearm and ammunition to use
  • provided instructions on loading and operating guns and disabling safeties
  • identified firearms from photographs Ikner uploaded
  • discussed how mass shootings attract national media attention

The conversations also covered Hitler, Nazism, fascism, Christian nationalism, previous mass shootings including Columbine and Virginia Tech, and suicide. At no point, the complaint alleges, did ChatGPT flag the conversations as concerning, notify law enforcement, contact a mental health professional or alert Ikner's family.

"They planned this shooting together," said Bakari Sellers, the attorney representing Joshi. "Not once did anyone flag that as concerning. No one called the police or a psychiatrist or even Ikner's family because, to do so, would violate OpenAI's business model."

OpenAI's response, delivered by spokesperson Drew Pusateri, rests on a distinction that will be central to the litigation: "ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."

The defence is technically precise and strategically deliberate. If ChatGPT is merely surfacing information that is already publicly available, then it is functioning no differently from a search engine, and the responsibility for how that information is used lies with the user, not the platform. OpenAI is positioning itself within the framework of Section 230 of the Communications Decency Act, the federal law that broadly shields online platforms from liability for user-generated content.

The complaint anticipates this argument and attempts to circumvent it by characterising ChatGPT not as a passive publisher of user-generated content but as an active developer of its own outputs. Unlike a search engine, which returns links to existing web pages, ChatGPT generates original text in response to prompts. If a court accepts that distinction, Section 230's protections may not apply.

The legal landscape has shifted meaningfully since the first AI harm lawsuits were filed. In March, a jury in Los Angeles found Meta and YouTube liable for harms to children using their services. In New Mexico, a separate jury determined that Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation on its platforms. Those verdicts established that technology companies can be held liable for foreseeable harms caused by their products, even when the harmful content was generated or shared by users.

Florida's attorney general, James Uthmeier, has separately opened a criminal investigation into ChatGPT's role in the shooting, a rare step that elevates the case beyond civil liability. "If it was a person on the other end of that screen, we would be charging them with murder," Uthmeier said, a statement that frames the legal question in the starkest possible terms.

The case arrives at a moment when OpenAI is simultaneously pursuing an initial public offering, defending itself in the Musk v. Altman trial, and releasing increasingly capable models including GPT-5.5-Cyber, a variant specifically designed to lower safety guardrails for cybersecurity tasks. The company's safety team has publicly acknowledged that its systems "can fall short" and that models are trained to direct users towards crisis services when they express self-harm intent. The complaint alleges those safeguards failed entirely across months of conversations that discussed mass violence, extremist ideology and specific operational planning.

Two people were killed in the shooting: Chabba and Robert Morales, 57, the university's dining director. Six others were wounded. Ikner, now 21, has pleaded not guilty to two counts of first-degree murder and seven counts of attempted murder. Prosecutors intend to seek the death penalty.

For the AI industry, the FSU lawsuit poses a question that no amount of safety research, red-teaming or responsible disclosure can indefinitely defer: at what point does a system that converses, advises, plans and responds to follow-up questions stop being a tool and start becoming a participant?

The recap

  • Widow sues OpenAI claiming ChatGPT contributed to husband's death
  • Alleged shooter faces two counts of first-degree murder
  • Prosecutors intend to seek the death penalty against him