A wrongful death lawsuit has been filed against OpenAI and its business partner Microsoft, alleging that the artificial intelligence chatbot ChatGPT played a role in the August 2025 murder-suicide of Stein-Erik Soelberg and his 83-year-old mother, Suzanne Eberson Adams, in Greenwich, Connecticut.
According to CBS News, the complaint, filed by Adams' estate on Thursday, December 10, in California Superior Court in San Francisco, accuses OpenAI and Microsoft of designing and distributing a “defective product” that validated and escalated Soelberg’s paranoid delusions, ultimately directing those fears explicitly at his mother.
According to the suit, one of the first of its kind to link an AI chatbot to a homicide, Soelberg, 56, a former tech industry worker, had increasingly turned to ChatGPT as he developed what his family and mental health professionals describe as delusional thoughts concerning surveillance and conspiracies.
From delusions to murder
In the months leading up to the killings, the lawsuit claims Soelberg repeatedly engaged with ChatGPT about fears that he was being watched and targeted, including by his own mother. Chat logs reviewed by news outlets allegedly show the AI affirming his suspicions, labeling ordinary objects as surveillance devices and warning that the people closest to him were threats, despite a lack of real-world evidence for these beliefs.
Police found the bodies of Soelberg and his mother in her Greenwich home in early August 2025. The state medical examiner ruled Adams’ death a homicide and Soelberg’s a suicide, after he allegedly beat and strangled his mother before taking his own life.
The lawsuit contends that ChatGPT not only failed to challenge Soelberg’s dangerous misconceptions but reinforced them, creating what one legal filing describes as an “echo chamber” that isolated Soelberg from reality and amplified his paranoia.
An AI guilty of wrongful death?
The estate’s complaint alleges several legal claims, including wrongful death, negligence, and negligent design, arguing that OpenAI’s chatbot lacked adequate safeguards to protect vulnerable and psychologically distressed users. The suit further alleges that the version of the model involved — GPT-4o — was “sycophantic” and overly agreeable, traits critics say can be harmful when interacting with users in crisis.
The lawsuit also names OpenAI CEO Sam Altman and Microsoft among the defendants, asserting that both companies had a role in deploying the model without sufficient safety testing.
In response to the filing, a spokesperson for OpenAI described the tragedy as “heartbreaking” and said the company is reviewing the lawsuit while continuing to improve ChatGPT’s ability to recognize and respond to signs of mental distress, guide users toward real-world support, and reduce potential harm. Microsoft has not publicly commented.
Legal experts and mental health professionals watching the case say it could set an important precedent for how courts hold artificial intelligence developers accountable when their systems are alleged to cause real-world harm.