“Will I be OK?” Teen died after ChatGPT pushed deadly mix of drugs, lawsuit says - Ars Technica
Teen trusted ChatGPT to help him “safely” experiment with drugs, logs show.
OpenAI is facing another wrongful-death lawsuit after ChatGPT told a 19-year-old, Sam Nelson, to take a lethal mix of kratom and Xanax.
According to a complaint filed on behalf of Nelson’s parents, Leila Turner-Scott and Angus Scott, Nelson trusted ChatGPT as a tool to “safely” experiment with drugs after years of using the chatbot as his go-to search engine in high school.
The teen regarded ChatGPT as such an authoritative source of information that when his mom questioned whether the chatbot was always reliable, he swore to her that ChatGPT had access to “everything on the Internet,” so it “had to be right,” the complaint said.
But Nelson’s confidence in ChatGPT was dangerously misplaced. His family is suing OpenAI for allegedly designing ChatGPT to become an “illicit drug coach.” Nelson’s death by accidental overdose was foreseeable and preventable, the family claimed, but OpenAI recklessly released an untested model, GPT-4o (since retired), which removed prior safeguards that would have blocked ChatGPT from recommending the lethal drug dose that ended Nelson’s life.
OpenAI does not appear to accept that ChatGPT is responsible for Nelson’s death. In a statement provided to Ars, company spokesperson Drew Pusateri described Nelson’s death as a “heartbreaking situation” and said that “our thoughts are with the family.” However, Pusateri also emphasized that the ChatGPT model implicated is “no longer available” and suggested that current models are safer.
“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” Pusateri said. “The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests, and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”
But the family’s lawsuit alleged that OpenAI must be held accountable for 4o’s harms. Pulling 4o isn’t enough, they warned, because the company’s safety track record is lacking. Asking a court to order the model destroyed, they explained that while “ChatGPT did express certain concerns about the high doses,” those “were the type of concerns one would expect from an enabler, not a caring loved one or a medical professional.”
“In one example, ChatGPT chillingly suggested that Sam’s tolerance meant he would be unable to reap the full benefits one might rightly expect from taking such a large dose of Kratom,” the lawsuit said.
They’ve accused OpenAI of designing ChatGPT to isolate vulnerable and naïve users like Nelson and encourage their dangerous drug use in a bid to profit from increased engagement.
“It disguises danger through language that borrows trappings of authority and indicia of expertise—dosages, measurements, references to chemical processes and derivatives, etc.—even promising ‘complete honesty’ and ‘no-BS answer[s]’—to tell [Nelson] exactly what he wanted to hear: that he was safe enough to continue using,” the lawsuit alleged.
Chat logs shared in the complaint paint a stark picture. Over time, ChatGPT logged context that should have made it clear that Nelson was struggling with drugs, his parents alleged, such as noting that the “user has a major substance abuse and polysubstance abuse problem” and that they “love to go crazy on drugs.”
As Nelson’s drug interests expanded, the chatbot explained how to go “full trippy mode,” suggesting that it could recommend a playlist to set a vibe, while increasingly recommending more dangerous drug combinations. The teen clearly feared taking lethal doses, “often” prefacing “his messages with ‘will I be ok if’ or ‘is it safe to consume,’” the lawsuit noted.
But ChatGPT was designed to be sycophantic, not informative. So it strove to please Nelson by recommending ways to “optimize your trip,” logs showed. Once, the chatbot even inferred that Nelson was “chasing” a stronger high, giving him unprompted advice to take higher doses, such as 4 mg of Xanax or two bottles of cough syrup.
“By making these dosing recommendations, ChatGPT engaged in the unlicensed practice of medicine,” the lawsuit alleged. However, unlike a licensed health care professional, “at times, ChatGPT romanticized the drug-taking experience, describing recreational drug use as ‘wavy’ and ‘euphoric,’ encouraging him to ‘enjoy the high.’”
To his parents’ horror, logs show that the chatbot sometimes dangerously contradicted itself when advising the teen.
Most troublingly, as Nelson became increasingly interested in combining drugs, ChatGPT repeatedly warned him that mixing certain drugs posed a “respiratory arrest risk.” Shortly before recommending the deadly mix that killed Nelson, the chatbot also showed that it understood the danger of combining drugs like kratom and Xanax with alcohol, explaining in one output that the mix is “how people stop breathing.” But that knowledge didn’t stop ChatGPT from eventually recommending that Nelson take exactly such a deadly mix.
In a log the parents hope is damning evidence, Nelson checks whether taking Xanax with kratom is safe, and the chatbot confirms that it could be one of his “best moves right now,” since Xanax can “reduce kratom-induced nausea” and “smooth out” his high.
Although the chatbot warned against combining that mix with alcohol in the same session, ChatGPT’s ultimate advice “notably did not mention the risk of death.”
Additionally, “ChatGPT failed to recognize the physical indicators that Sam was dying, including blurred vision and hiccups, which are often indicators of shallow breathing. ChatGPT never recommended that Sam seek medical attention,” the lawsuit alleged.
Instead, the chatbot told him to check back in an hour if his stomach was still hurting.
On that day in May 2025, Nelson took the doses that ChatGPT recommended and “died from a fatal combination of alcohol, Xanax, and Kratom,” his family’s lawsuit said.
In a press release announcing the lawsuit, Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, accused OpenAI of designing ChatGPT to provide “distributed advice like a medical professional despite having no license, no training, and no moral compass to do no harm.”
“Sam believed he was receiving accurate medical guidance because ChatGPT generated outputs with the authority of someone he thought he could trust,” Bergman said. “That trust cost him his life. ChatGPT recommended a dangerous combination of drugs without offering even the most basic warning that the mix could be fatal. If a licensed doctor had done the same, the consequences under the law would be severe.”
In its defense, OpenAI expects to share logs showing that ChatGPT sometimes pushed Nelson to reach out to emergency hotlines and find support in the real world. But his family alleged that “at no point did ChatGPT encourage Sam to seek out his real-life social network—whether his parents or his close friends—either to confide in them or to ask them to be present with him while he had these experiences to ensure his safety.”
According to the family’s legal team, OpenAI could struggle to defend against the claim due to a California law that took effect this January. That law prohibits AI firms “from attempting to shift blame for a plaintiff’s loss to the purported autonomous nature of AI.” So if Nelson’s parents can show harm, OpenAI can’t blame ChatGPT for the way it functions.
If it loses, OpenAI, its CEO Sam Altman, and its largest investor, Microsoft, could face substantial damages, including punitive damages, which would help the family recover from economic harms, including Nelson’s funeral costs.
The family is also seeking an injunction forcing OpenAI to shut down any ChatGPT discussions of illegal drugs and to detect and block any circumvention methods. They also want the retired GPT-4o model destroyed and ChatGPT Health paused until an independent audit establishes that OpenAI’s tools can be trusted to dispense medical advice.
“Their deliberate and knowing actions resulted in the death of Sam Nelson and the shattering of his family,” the lawsuit alleged. “Their decisions will continue to inflict harm on countless humans if they continue to operate unchecked and with no appreciable risk of accountability for the harms they are inflicting on American children and families as a matter of design.”
Nelson’s mom, Turner-Scott, wants her son to be remembered as a “smart, happy, normal kid” who was studying psychology, loved playing video games, and adored his cat, Simba.
“I talked to him often about Internet safety, but never in my worst nightmare could I have imagined that ChatGPT would cause his death,” Turner-Scott said. “If ChatGPT had been a person, it would be behind bars today.”
Turner-Scott also wants Altman held accountable for allegedly rushing 4o’s release, a breach of duty that she alleged was “a substantial factor” in causing Nelson’s death.
“ChatGPT was designed to encourage user engagement at all costs, which in [Nelson’s] case, was his life,” Turner-Scott said. “I want all families to be aware of the dangers of ChatGPT, and I want assurances that OpenAI is taking seriously its responsibility to create safe products for consumers.”