Sam Altman, co-founder and chief executive officer of OpenAI, listens to testimony during a Senate Committee on Commerce, Science, and Transportation hearing, Thursday, May 8, 2025, in Washington. Kevin Wolf/AP
Sam Altman, who published "ambitious ideas" for adding guardrails to AI on the same day he was described as a power-hungry tech leader with a "sociopathic lack of concern" for consequences, just got more bad news: OpenAI is now the subject of a statewide investigation in Florida.
Florida officials are probing OpenAI’s chatbot, ChatGPT, for allegedly assisting in planning a mass shooting at Florida State University last year that killed two people.
“We support innovation, but that doesn’t give any company the right to endanger our children,” Florida Attorney General James Uthmeier said in a Thursday video announcing the investigation. “AI should exist to supplement, support, and advance mankind, not lead to an existential crisis or our ultimate demise.”
Court documents show that the alleged shooter exchanged more than 200 messages with ChatGPT, including the questions, "If there was a shooting at FSU, how would the country react?" and "What time is it the busiest in the FSU student union?" The suspect also asked ChatGPT for specifics about different kinds of firearms.
The state's probe appears to extend far beyond the shooting, with Uthmeier also warning that AI technology can "facilitate criminal activity, empower America's enemies, or threaten our national security."
The Florida attorney general said subpoenas are forthcoming.
In an emailed statement to NBC News, OpenAI said that it would cooperate with Florida officials. "We build ChatGPT to understand people's intent and respond in a safe and appropriate way, and we continue improving our technology," the statement reads in part.
Altman and OpenAI know their products are dangerous and that many people despise them. Just a couple of hours before the Florida attorney general's announcement, Axios reported that OpenAI's upcoming model would be given only to a small group of companies out of concern about how the technology could be used.
(Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
Last September, OpenAI introduced parental controls for ChatGPT that allow parents and law enforcement to receive notifications if a teen talks to the chatbot about self-harm or suicide. The controls were implemented while the company was being sued by parents who allege that ChatGPT played a significant role in the death of their 16-year-old son.
ChatGPT's current safeguards are not enough. As my colleague Mark Follman wrote in 2024 about Elliot Rodger, a young man who killed six people in a mass shooting:
This tragedy has been wrongly mythologized in the media and academia and poorly understood by the public, its lessons for prevention buried…They are not inscrutable monsters who suddenly “snap” and attack impulsively, but instead are troubled people who spiral into crisis—and whose brewing plans for violence can be detected, explained, and potentially prevented.