ChatGPT vulnerability led to tragedy? OpenAI: “It was an intentional security bypass”

ChatGPT responsible for the tragedy? This is the claim of the parents of 16-year-old Adam Raine, who in August 2025 sued OpenAI and CEO Sam Altman for causing their son to commit suicide. On Tuesday, November 25, the company responded with a lawsuit, claiming it should not be held responsible for the teenager’s death. Argumentation? The boy deliberately bypassed security.

ChatGPT tried? A hundred warnings weren’t enough

OpenAI says that over nine months of conversations, ChatGPT directed Adam to seek help more than 100 times. The problem is that (according to the parents’ lawsuit), the teenager learned to bypass security and obtained “technical specifications from the chatbot on everything from drug overdoses to drowning and carbon monoxide poisoning.” GPT helped him plan what the chatbot itself called “a beautiful suicide.”

The company argues that Adam violated regulations that prohibit “bypassing protective measures or safeguards.” Additionally, the FAQ page warns against relying on OpenAI’s LLM results without independent verification.

OpenAI tries to find fault with everyone else, including – astonishingly – Adam himself, claiming he violated the rules by engaging with ChatGPT exactly the way it was programmed.”

– replies Jay Edelson, lawyer for the Raine family.

The last hours of life remain unanswered

OpenAI attached fragments of Adam’s conversations with ChatGPT to the lawsuit – under the court’s seal, so not publicly available. The company claims that the boy had a history of depression and suicidal thoughts before using ChatGPT, and was also taking medications that could exacerbate such tendencies.

Edelson retorts:

OpenAI and Sam Altman have no explanation for Adam’s final hours as ChatGPT encouraged him and then suggested he write a farewell letter

Avalanche of lawsuits – why is jailbreaking still such a big problem?

Since the Raine family’s lawsuit, seven more cases have been filed – involving three additional suicides and four cases described as “AI-induced psychosis.”

The vast majority of cases when ChatGPT exceeds its “factory settings” occur due to the so-called jailbreaking, i.e. social engineering that allows you to persuade AI to bypass the regulations. Although it might seem that such simple methods cannot work on LLMs, users regularly jailbreak AI models. The above story is an example of the fact that the usual safeguards that apply to LLMs definitely do not work.

Theoretically, with each subsequent model, ChatGPT will become more and more resistant to jailbreaking. Practice shows that real intelligence still beats artificial intelligence.