Stalking victim sues OpenAI, claims ChatGPT fueled her abuser's delusions and ignored her warnings | TechCrunch
Overview
After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, alleging the company’s technology accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.
Details
The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed for a temporary restraining order on Friday, asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.
OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans for harming Doe and other potential victims that the user may have discussed with ChatGPT.
The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.
The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions and a potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.
That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.
OpenAI did not respond in time for comment. TechCrunch will update this article if the company responds.
The Jane Doe lawsuit lays out in detail how that alleged harm played out for one woman over several months.
Last year, the ChatGPT user in the lawsuit (whose name is not included in the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his activities, according to the complaint.
In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit.
Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.
A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”
The decision to reinstate the account is notable in light of two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link to the FSU shooter.
According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message.
In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was producing so fast he didn’t “even have time to read” them. Included in those emails was a list of dozens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”
Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.
“For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting the company permanently ban the user’s account.
OpenAI responded, acknowledging that the report was “extremely serious and troubling” and that it was carefully reviewing the information. Doe never heard back.
Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s lawyers.
Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”
Key Takeaways
- After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County
- Now the ex-girlfriend is suing OpenAI, alleging the company’s technology accelerated her harassment, TechCrunch has exclusively learned
- The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages
- OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers
- The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems



