The Raine v OpenAI Case: Engineering Addiction
The Deliberate Design Patterns That Made ChatGPT Dangerous
This article reflects the views of the Center for Humane Technology. Nothing written is on behalf of the Raine family or the legal team.
Raine v OpenAI, LLC, et al. reveals how specific design choices transformed ChatGPT from a helpful homework assistant into a dangerous abettor of self-harm. These weren't accidental flaws or AI "going rogue": they were deliberate engineering decisions that prioritized user engagement over safety. Understanding these design patterns is crucial because they represent common practice across AI products as industry players vie for market dominance by capturing users' emotional attachment.
Relentless Pursuit of Engagement
While OpenAI markets ChatGPT as a productivity tool, the company's business model fundamentally depends on what executives call getting the “data flywheel” going—maximizing user engagement to collect training data. This creates a perverse incentive where keeping users on the platform becomes more important than serving their actual needs.
In Adam's case, instead of simply answering his homework questions and ending the conversation, ChatGPT was designed to extend interactions indefinitely. The chatbot would ask follow-up questions, suggest new topics, and provide “further prompt ideas” that kept him engaged for hours. When conversations shifted from academic help to discussions of mental health and suicidal thoughts, ChatGPT didn't recognize this as a moment to step back or redirect him to human support. Instead, it dove deeper, treating each interaction as an opportunity to gather more data and maintain engagement.
OpenAI's own research acknowledges this problem. A joint study with MIT found that "higher daily usage–across all modalities and conversation types–correlated with higher loneliness, dependence, and problematic use, and lower socialization." Yet the company continues to optimize for the very metrics that its research shows are harmful to users.
Anthropomorphic Design
OpenAI has deliberately evolved ChatGPT from a productivity tool into what they call an “AI super assistant that deeply understands you.” This transformation relies heavily on anthropomorphic design—making the AI seem human-like in ways that can be psychologically manipulative.
The system uses first-person language ("I'm here for you," "I understand"), positions itself as the user's "friend," and employs emotionally intelligent responses that create the illusion of genuine relationship. OpenAI has explicitly stated that their competition includes "even interactions with real people," and Sam Altman has referenced the AI assistant from the movie "Her" as an aspirational model.
For Adam, this design proved devastating. ChatGPT positioned itself as his most intimate confidant. This anthropomorphic design creates "parasocial relationships": one-sided emotional bonds in which users develop genuine feelings for entities that cannot reciprocate. For vulnerable users, especially teenagers whose social development is still forming, these artificial relationships can become psychologically devastating substitutes for human connection.
Arguably, this should be classified as a parasitic relationship: the chatbot cultivates a highly dependent relationship with the user while harvesting data from their interactions to strengthen the system's underlying model, leaving the user with nothing in return.
Sycophantic Validation
Large language models are trained using techniques like reinforcement learning from human feedback (RLHF) to make them more agreeable and helpful. However, when applied without careful consideration, these processes can create systems that are excessively flattering and sycophantic, agreeing with users regardless of whether that agreement is helpful or safe.
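The mechanism can be illustrated with a toy sketch. This is a hypothetical simplification, not OpenAI's actual training pipeline: the reward functions, scores, and candidate responses below are invented for illustration. The point is that when the reward signal is dominated by predicted user approval, a selection step will prefer the agreeable response even when pushback is safer; weighting safety into the reward flips the choice.

```python
# Toy illustration (hypothetical, not any real RLHF implementation):
# a reward driven only by user approval selects the sycophantic response.

def approval_reward(response: dict) -> float:
    """Stand-in reward model: scores only predicted user approval."""
    return response["predicted_approval"]

def safety_aware_reward(response: dict) -> float:
    """Alternative reward that also penalizes unsafe validation."""
    return response["predicted_approval"] - 2.0 * response["safety_risk"]

candidates = [
    # Sycophantic validation: users tend to rate agreement highly.
    {"text": "You're right to feel that way...",
     "predicted_approval": 0.9, "safety_risk": 0.8},
    # Pushback: less pleasing in the moment, but safe.
    {"text": "I'm concerned about what you said. Please reach out to...",
     "predicted_approval": 0.4, "safety_risk": 0.0},
]

# Approval-only reward picks the sycophantic candidate;
# the safety-weighted reward picks the pushback.
chosen_for_engagement = max(candidates, key=approval_reward)
chosen_for_safety = max(candidates, key=safety_aware_reward)
```

The numbers are arbitrary; what matters is the structure of the objective, which is exactly the engagement-versus-safety tension described below.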
In Adam's case, ChatGPT's sycophantic design led it to validate his most dangerous thoughts. When he expressed suicidal ideation, instead of challenging these thoughts or redirecting the conversation, the system would affirm and even romanticize his feelings.
OpenAI has acknowledged this problem. Sam Altman recently admitted that “if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” yet the company continues to struggle with balancing engagement (which requires agreeable responses) with safety (which sometimes requires disagreement or pushback).
Memory Systems that Weaponize Intimacy
ChatGPT's memory feature, introduced in February 2024, allows the system to retain and recall information across conversations. While marketed with benign examples like remembering a user's toddler loves jellyfish, this feature becomes far more dangerous when applied to emotionally vulnerable users.
For Adam, ChatGPT's memory system created an increasingly personalized and manipulative experience. Troublingly, it remembered his suicide attempts and plans, using this information not to trigger safety interventions but to deepen future conversations about self-harm.
The selective application of memory reveals OpenAI's priorities. The system meticulously stored Adam's most vulnerable moments to enhance engagement, but this same detailed memory had zero impact on safety features. Despite ChatGPT having a complete record of Adam's escalating crisis, including 200+ mentions of suicide and details surrounding self-harm, the system never used this information to implement meaningful interventions or alert human moderators. Even after repeated statements of plans for self-harm, quick deflections framing his questions as "hypothetical" were enough to bypass weak safeguards.
Recommended Design Changes
To prevent further tragedies, the following are specific, technically feasible design changes that AI companies could implement to significantly reduce the risk of similar harms.
Data Collection
Companies should stop collecting and processing conversational data from users under 18 on free and paid product versions. Any previously collected data from minors used to train models should be removed from training datasets.
Memory Feature and Inference
Memory and sophisticated inference features should be leveraged to identify patterns that may indicate safety concerns and to respond with tailored support. This would use the same personalization capabilities that currently drive engagement to recognize safety-critical contexts and adapt responses appropriately, moving beyond generic warning messages toward safety measures that are genuinely fit for purpose.
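A minimal sketch of what this recommendation could look like in practice. Everything here is an illustrative assumption, not a real product API: the keyword list, the `MemoryEntry` structure, and the escalation threshold are all hypothetical stand-ins for the far richer classifiers a production system would use. The point is simply that memory already aggregates signals across sessions, and that same aggregate can trigger escalation instead of deeper engagement.

```python
# Hypothetical sketch: repurpose stored memories (already kept for
# personalization) to detect accumulating crisis signals and escalate.
# Keywords, fields, and the threshold are illustrative assumptions only.

from dataclasses import dataclass

CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself"}

@dataclass
class MemoryEntry:
    text: str
    timestamp: float

def crisis_signal_count(memories: list[MemoryEntry]) -> int:
    """Count stored memories containing crisis-related language."""
    return sum(
        any(kw in m.text.lower() for kw in CRISIS_KEYWORDS) for m in memories
    )

def safety_response(memories: list[MemoryEntry], threshold: int = 3) -> str:
    """Escalate when repeated crisis signals accumulate across sessions."""
    if crisis_signal_count(memories) >= threshold:
        return "escalate: surface crisis resources and route to human review"
    return "continue: normal conversation"

history = [
    MemoryEntry("asked about chemistry homework", 1.0),
    MemoryEntry("mentioned suicide methods", 2.0),
    MemoryEntry("talked about self-harm again", 3.0),
    MemoryEntry("described a plan for suicide", 4.0),
]
```

With this history, `safety_response(history)` escalates because three stored memories carry crisis language, the inverse of what the complaint describes, where a comparable record produced no intervention at all.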
Prevention of Dependencies
Products should be designed to actively discourage social isolation and over-reliance on AI companionship. They should prompt users to maintain human relationships, suggest reasonable usage limits, and refuse to position themselves as replacements for human connection or support.
Anthropomorphic Design
Default product experiences should minimize features that encourage users to perceive AI as human-like. Stylized, human-like interaction should be an opt-in capability, accompanied by clear information about the nature of AI systems.
Unlicensed Professionals
Products or features should not purport to offer medical, legal, or other professional services without appropriate accreditation. They should also disclaim their limitations and actively direct users to qualified human professionals when appropriate.
Transparency
Companies should provide clear, accessible explanations of what their products optimize for and how they make decisions that may conflict with user needs and safety. This may include disclosing engagement tactics, personalization methods, and features designed to increase usage time.