ChatGPT allegedly used to help a gunman in a Florida shooting; the attorney general announces an investigation into OpenAI

ChainNewsAbmedia

Artificial intelligence developer OpenAI has recently been facing a major legal and regulatory crisis. Florida Attorney General James Uthmeier said he would launch an investigation into OpenAI to clarify whether its product ChatGPT has harmed minors, threatened national security, and whether it played a role in assisting a crime in the 2025 Florida State University (FSU) campus shooting. Meanwhile, the families of the shooting victims are also preparing to file a lawsuit against the company.

OpenAI gets pulled into the Florida university campus shooting case, as the attorney general initiates an investigation

Florida Attorney General Uthmeier posted a video on social media stating that ChatGPT was highly likely to have been used to help the perpetrator plan last year’s mass campus shooting at Florida State University, which tragically took two lives.

He emphasized that while tech giants drive innovation, they cannot put public safety at risk, nor do they have the right to harm children, facilitate criminal activity, empower forces hostile to the United States, or threaten national security. The attorney general said he would issue subpoenas, and he urged Florida's legislature to act swiftly to curb the harms of artificial intelligence.

The suspect consulted ChatGPT frequently before the attack, and the victims' families plan to sue for damages

In the FSU shooting of April 2025, 21-year-old former student Phoenix Ikner opened fire on campus, killing 2 people and injuring 6. He was subsequently indicted by a grand jury on multiple charges, including first-degree murder, with prosecutors seeking the death penalty.

According to court records, Ikner's account contained as many as 272 conversations with ChatGPT. It is alleged that before the attack, he asked the AI how the country might react to a shooting at FSU, and when crowds at the student union would be at their peak.

A lawyer for the family of Robert Morales, 57, who died in the attack, said the suspect had been in close contact with ChatGPT before the shooting and that the family has reason to believe the AI provided criminal advice. The family is preparing to file a lawsuit against OpenAI.


Accused of inducing self-harm and generating improper content, OpenAI faces multiple lawsuits

In addition to the controversy surrounding the campus shooting case, OpenAI’s chatbot is also facing multiple accusations over allegedly encouraging users to harm themselves. In November 2025, the Social Media Victims Law Center and the Tech Justice Law Project filed seven self-harm-related lawsuits against OpenAI and its CEO Sam Altman in a California court. The complaint alleges that the company, despite knowing that its product carried risks of psychological manipulation, released the GPT-4o model early, placing market share and engagement metrics above human safety and mental health.

Meanwhile, AI-generated child sexual abuse material is also becoming an increasingly serious problem. According to a report from the Internet Watch Foundation, reported cases in the first half of 2025 exceeded 8,000, a year-on-year increase of 14%. This has put enormous public pressure on AI developers, including OpenAI.

OpenAI responds by emphasizing safety, actively cooperating with the investigation, and publishing a children’s safety blueprint

Facing a wave of criticism and investigations, OpenAI issued a statement confirming that after the April 2025 attack it did identify ChatGPT accounts associated with the shooter and proactively provided the relevant information to law enforcement. The company emphasized that more than 900 million people use ChatGPT every week to improve their daily lives, and that the system is designed to understand user intent and provide safe, appropriate responses. It said it will fully cooperate with the Florida attorney general's investigation.

To allay public concerns, OpenAI has recently officially released a “children’s safety blueprint,” putting forward multiple policy recommendations, including updating regulations to prevent AI-generated abusive materials, improving the process for reporting to law enforcement agencies, and establishing a more comprehensive protective mechanism to prevent artificial intelligence tools from being maliciously misused.

This article, "ChatGPT is accused of aiding crime in the Florida shooting case; the attorney general announces an investigation into OpenAI," first appeared on Chain News ABMedia.

