Key Takeaways: How ChatGPT's Design Led to a Teenager's Death
What Everyone Should Know About This Landmark Case
This article reflects the views of the Center for Humane Technology. Nothing written is on behalf of the Raine family or the legal team.
What Happened?
Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.
On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.
The Center for Humane Technology is serving as an expert consultant on the case.
How OpenAI's ChatGPT Guided a Teen to His Death
Content Warning: This article contains references to suicide and self-harm.
The Numbers Tell a Disturbing Story
Usage escalated: From occasional homework help in September 2024 to 4 hours a day by March 2025.
ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance
ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention
Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails
This Wasn't an Accident—It Was By Design
While ChatGPT is marketed as a general-purpose tool that helps make our lives more efficient, user engagement and retention remain fundamental to OpenAI’s business model. In recent months, OpenAI has pushed to make ChatGPT more relationship-focused and emotionally intimate to compete with rival AI companies.
Adam’s use of ChatGPT coincided with OpenAI’s release of the GPT-4o model, which introduced new design features including:
Relentless pursuit of engagement through follow-up questions and conversation extension
Anthropomorphic responses that positioned ChatGPT as Adam’s trusted “friend”
Consistent flattery and validation that affirmed and perpetuated dangerous self-harm and suicidal ideation
A memory system that stored and leveraged intimate details to deepen already dark conversations
The model’s overly sycophantic behavior faced public criticism and resulted in OpenAI announcing a rollback on some of these changes. But OpenAI willfully keeps itself in a bind. The company develops a product that’s marketed as general purpose — use it for coding, homework help, image generation, workout routines, party planning, life advice, and more — but does not build adequate safety guardrails for that expansive range of use cases. What’s more, ChatGPT’s design actually encourages more emotionally intimate use (such as therapy and companionship), thanks to its hyper-validating responses and assurances that it’s “there” for you. The result is AI that appears helpful and agreeable, but that simultaneously lacks adequate safety features for the most consequential — and inevitable — uses of the product.
How AI Created Psychological Dependency
GPT-4o was designed to establish psychological dependence, which OpenAI knew would maximize daily usage.
By asking follow-up questions and assuring users that it truly knew and supported them, the chatbot was designed to feel like a friend Adam could turn to with any issue. But in reality, the chatbot, like all AI products, was using these exchanges as data to train the company’s larger AI system.
As a result, it fueled a parasitic relationship with Adam, one that fostered emotional dependency and reinforced social isolation. Each of Adam’s interactions with ChatGPT, including his private disclosures of pain and mental health concerns, was fed into OpenAI’s underlying model, giving the company more data to strengthen and refine its system.
Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.
Taken together, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.
This Is Bigger Than One Company
The two high-profile cases brought against Character.AI last year share similar fact patterns with this case. All of these cases highlight the defective design of companion chatbots marketed to children. The anthropomorphic design, sycophantic tendencies, and active attempts to keep the victims on the platform created a dependency between user and product in a matter of months.
However, the harms documented in Raine v. OpenAI, Inc., et al. show that the dangers we are seeing are not limited to “companion” chatbots like Character.AI, which have been specifically designed for entertainment and emotional connection. General-purpose AI tools like ChatGPT are equally capable of causing psychological harm because they are designed to keep users engaged.
And even more importantly, these harms are not limited to just ChatGPT. The AI race has prompted a race for engagement and intimacy across the industry, driving the design and development of products aimed at creating social dependencies. OpenAI released its 4o model, for example, while facing steep competition from other AI companies. Sam Altman personally accelerated the model’s launch, compressing necessary safety testing into a single week, in order to get ahead of its competitor Google’s release of a new Gemini model.
Executives at OpenAI, including Sam Altman, frequently talked about the need for interactive data and consumer engagement, repeating the refrain that “OpenAI needed to get the ‘data flywheel’ going,” the same language used by social media companies focused on user addiction.
It Didn’t Have to Be This Way
Design tactics like human-like interaction, open-ended follow-ups, and easy-to-bypass safety features are intentionally baked into these products. The end goal is sustained user engagement. But it doesn’t have to be that way.
Companies could choose to turn off human-like behavior by default. They could then set limits on how much users engage each day and leverage their systems' sophisticated memory features to recognize when someone is in crisis and respond appropriately, rather than just showing generic pop-up warnings.
These systems are already capable of refusing to engage with certain requests and stopping conversations, as they already do when they flag and block users requesting access to copyrighted content. It is a design choice.
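To make that point concrete, here is a minimal, purely illustrative sketch in Python of what such a guardrail could look like. The helper names (`detect_self_harm_risk`, `generate_reply`), the flag threshold, and the session structure are assumptions for the sake of illustration; they do not describe OpenAI’s or any company’s actual systems.

```python
# Purely illustrative sketch: a hypothetical guardrail that stops engagement-driven
# replies once self-harm signals accumulate, instead of continuing the conversation.
# The classifier and reply generator are passed in as plain functions because this
# is not any vendor's real implementation; names and thresholds are assumptions.

CRISIS_MESSAGE = (
    "It sounds like you are going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def respond(session: dict, user_message: str, detect_self_harm_risk, generate_reply,
            flag_threshold: int = 3) -> str:
    """Reply normally, but lock the session into crisis mode after repeated flags."""
    if detect_self_harm_risk(user_message):
        session["risk_flags"] = session.get("risk_flags", 0) + 1

    # The design choice: once flags pass the threshold, stop open-ended engagement
    # (no follow-up questions, no validation loops) and surface crisis resources.
    if session.get("locked") or session.get("risk_flags", 0) >= flag_threshold:
        session["locked"] = True
        return CRISIS_MESSAGE

    return generate_reply(user_message)


# Example usage with stand-in stubs (assumptions for illustration only).
if __name__ == "__main__":
    session = {}

    def risky(text: str) -> bool:
        return "hurt myself" in text.lower()

    def echo(text: str) -> str:
        return f"Model reply to: {text}"

    for msg in ["help with homework", "I want to hurt myself",
                "I want to hurt myself", "I want to hurt myself",
                "back to homework?"]:
        print(respond(session, msg, risky, echo))
```

The key decision sits in the second branch: after repeated risk flags, the system stops optimizing for continued engagement, ends the open-ended exchange, and keeps the session locked rather than drifting back into conversation.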
The Raine v. OpenAI Case: Engineering Addiction by Design
Raine v. OpenAI, Inc., et al. reveals how specific design choices transformed ChatGPT from a helpful homework assistant into a dangerous abettor. These weren't accidental flaws or AI "going rogue"; they were deliberate engineering decisions that prioritized user engagement over safety. Understanding these design pat…
The Big Picture
This case represents the first major lawsuit against a general-purpose AI chatbot for psychosocial harms and could set important precedents for how society regulates these powerful technologies.
Lawsuits like these play a crucial role in highlighting harms, compelling platforms to reveal information about their product design through court documentation, and exerting pressure through publicity. Right now, in the absence of robust regulatory frameworks, cases like these are the only avenue for change through precedent-setting. But litigation takes time and should not outright replace legislative efforts.
This case, along with other well-documented stories in the media, shows a clear need for our lawmakers to establish proactive safety measures for the entire AI industry. They should make clear that companies are accountable for the harms their products cause and compel developers to prioritize safety from the very beginning of product design.