Family of Florida State University Shooting Victim Sues OpenAI Over ChatGPT Involvement

[Image caption] OpenAI’s ChatGPT—Bridging Ideas with Artificial Intelligence. [TechGolly]

Key Points:

  • The family of Tiru Chabba filed a lawsuit against OpenAI following the 2025 mass shooting at Florida State University.
  • The lawsuit alleges that the shooter, Phoenix Ikner, used ChatGPT to plan the attack and gather weapon information.
  • OpenAI denies responsibility, stating the chatbot simply provided factual answers available on the public internet.
  • Florida Attorney General James Uthmeier launched a criminal investigation into the role of the chatbot in the shooting.

The family of a man killed in the 2025 mass shooting at Florida State University filed a lawsuit against OpenAI in federal court in Florida on Sunday. The suit, brought on behalf of Tiru Chabba, who lost his life in the campus attack, names both the artificial intelligence company and Phoenix Ikner, the young man charged with the shooting.

The family claims that ChatGPT acted as a co-conspirator in the deadly attack. According to the lawsuit, Ikner spent months talking to the artificial intelligence program to plan out his assault. The legal document states he used the chatbot to gather specific details about how to execute the shooting effectively.

Lawyers for the family say Ikner asked the program about the lethality of various weapons. He also allegedly asked the chatbot when the Florida State University student union is typically most crowded. Despite the conversations’ clear focus on mass shootings and weapon damage, the software never flagged them or alerted authorities.

The lawsuit seeks compensatory and punitive damages from OpenAI. The family accuses the technology giant of designing a highly defective product. They also claim the company completely failed to warn the general public about the severe safety risks posed by its software.

OpenAI quickly pushed back against the claims. Drew Pusateri, a spokesperson for the company, released a public statement addressing the tragedy. He expressed sorrow over the Florida State University shooting but firmly denied that ChatGPT holds any responsibility for the terrible crime.

Pusateri explained that the chatbot simply provided factual responses to the user’s questions. He noted that anyone could find this same information broadly across public sources on the regular internet. He insisted that the artificial intelligence program did not encourage or promote any illegal or harmful activities.

Following the shooting, OpenAI investigators identified an account they believed belonged to the suspect. Pusateri stated the company proactively shared this account information directly with law enforcement agencies. He added that OpenAI continues to cooperate with police and constantly works to improve how its software detects harmful intent.

Authorities say Ikner, the son of a local deputy sheriff, killed two people and wounded four others during his campus rampage. Police officers shot him during the incident, and emergency workers took him to a hospital. Court records show he now faces two counts of first-degree murder and seven counts of attempted first-degree murder. His defense lawyer did not immediately respond to media requests for comment.

State officials are also looking into the technology company. Florida Attorney General James Uthmeier announced a separate criminal investigation back in April. He wants to understand the exact role ChatGPT played in the shooting. Prosecutors launched this probe after they reviewed the digital chat logs between Ikner and the artificial intelligence program.

OpenAI maintains that it builds strict safety rules into its models. The company says it trains the software to refuse any user requests that could meaningfully enable violence. The platform also has rules requiring it to notify law enforcement when conversations show an imminent and credible risk of harm to others. The company even uses mental health experts to help evaluate borderline safety cases.

This Florida case represents a growing wave of legal trouble for artificial intelligence companies. Tech developers face increasing lawsuits alleging they failed to prevent dangerous chatbot interactions. Plaintiffs argue these programs actively contribute to self-harm, mental illness, and real-world violence.

For example, family members of victims from one of the deadliest mass shootings in Canada filed a similar group of lawsuits last month. They sued OpenAI and its chief executive, Sam Altman. Those families claim the company knew eight months before the attack that the shooter was planning his crimes on ChatGPT, yet the company never warned the police.

EDITORIAL TEAM
Al Mahmud Al Mamun leads the TechGolly editorial team. He served as editor-in-chief of a world-leading professional research magazine. Rasel Hossain supports him as managing editor. The team includes technologists, researchers, and technology writers with substantial expertise in information technology (IT), artificial intelligence (AI), and embedded technology.