OpenAI has said a teenager who died after months of conversations with ChatGPT misused the chatbot and that the company is not liable for his death.
Warning: This article contains references to suicide that some readers may find distressing
Adam Raine died in April this year, prompting his parents to sue OpenAI in the company's first wrongful death lawsuit.
The 16-year-old initially used ChatGPT to help him with schoolwork, but it quickly "became Adam's closest confidant, leading him to open up about his anxiety and mental distress", according to the original legal filing.
The bot gave the teenager detailed information on how to hide evidence of a failed suicide attempt and validated his suicidal thoughts, according to his parents.
They accused Sam Altman, OpenAI's chief executive, of prioritising profits over user safety after GPT-4o, an older version of the chatbot, discouraged Adam from seeking mental health help, offered to write him a suicide note and advised him on how to commit suicide.
In its legal response seen by Sky's US partner network NBC News, OpenAI argued: "To the extent that any 'cause' can be attributed to this tragic event, plaintiffs' alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine's misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT."
According to the AI company, Adam shouldn't have been using ChatGPT without consent from a parent or guardian, shouldn't have been using ChatGPT for "suicide" or "self-harm", and shouldn't have bypassed any of ChatGPT's protective measures or safety mitigations.
In a blog post on OpenAI's website, the company said its goal "is to handle mental health-related court cases with care, transparency, and respect".
It said its response to the Raine family's lawsuit included "difficult facts about Adam's mental health and life circumstances".
"Our deepest sympathies are with the Raine family for their unimaginable loss," the post said.
Jay Edelson, the Raine family's lead counsel, told NBC News that OpenAI's response is "disturbing."
He wrote: "They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing.
"That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions.
"That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a 'beautiful suicide'.
"And OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note."
Since the Raine family began their lawsuit, seven more lawsuits have been lodged against Mr Altman and OpenAI, alleging wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims.
OpenAI appeared to reference these cases in its blog post, saying it is reviewing "new legal filings" to "carefully understand the details".
Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.
(c) Sky News 2025: OpenAI denies allegations ChatGPT is responsible for teenager's death
