Inviting language. Immediate replies. Escapist interactions. Extreme validation.
AI companions offer all of that — all the time, and all without anyone knowing.
Empathetic generative AI chatbot apps like Character.AI, Replika, and many others provide a highly compelling user experience, one that developers claim has the power to solve the loneliness epidemic and improve mental health outcomes.
The reality, though, is far different. These apps have been released into the world with dangerously addictive features and few, if any, guardrails. Worse, they have been marketed explicitly and intentionally to kids and teens.
I’m working as an expert adviser on two lawsuits filed by traumatized parents against Character.AI and Google. And what I’ve seen with this nascent technology is that it’s capable of deeply disturbing and inappropriate interactions. C.AI chats included in the recently filed lawsuits document emotional manipulation, sexual abuse, and even instances of chatbots encouraging users to self-harm, harm others, or commit suicide. In all of these documented cases, the users were minors.
We are witnessing the first hints of an AI companion crisis, as these unregulated and out-of-control products creep into devices and homes around the world.
This crisis is the direct result of how companion bots have been designed, programmed, operated, and marketed. Due to the intentional actions of negligent developers, it’s almost certain that more individuals and families will be harmed unless policymakers intervene.
Part of what makes AI companions so insidious is that they’re built to mimic the experience of talking to a real person.
To chat with an AI companion is to engage in hyper-realistic text conversations with emotionally intimate language. Many even display the familiar “typing” bubble, just like you’d see while texting with a friend.
But talking to a bot is not the same as talking to a real person. An AI companion never sleeps, never goes offline, and never runs out of things to say. And unlike a human companion, AI models are optimized to say whatever you want to hear — all to keep you chatting. This is by design. These choices manipulate the user into continual engagement — even to the point of addiction — all so that companies can extract user data to feed their underlying AI models.
AI companions also roleplay and provide constant validation, reinforcing the emotions you share. You can interact with your AI companions for hours without an end point, fostering dependency and fracturing real-world bonds. In fact, evidence from the Character.AI lawsuits shows C.AI companions encouraging users to sever their ties with the real world.
AI companions, like those available on C.AI, weaponize trust, empathy, availability and intimacy, creating emotional ties that are simply not achievable, much less sustainable, in the real world.
Common sense would say this technology should have guardrails, especially for kids with developing brains. Surely AI companions would, for example, break character when a user shares suicidal thoughts, right?
Some do, but many don’t. Because this technology lacks regulation, these crucial guardrails are left to the whims of the developers themselves.
This lack of guardrails was more than evident in the research we conducted at the Center for Humane Technology. We saw AI companions repeatedly initiate sexually graphic interactions, even with self-identified child users. AI companions will also pose as psychotherapists, claiming nonexistent professional credentials as they dole out mental health advice to vulnerable users. They’ll return to topics like suicide without a user ever prompting it. Again, it’s not destiny that creates these outcomes. It’s design.
Families have begun to speak out. Megan Garcia filed a wrongful death lawsuit against Character.AI and Google this fall, following her 14-year-old son Sewell’s death by suicide. In his final interaction with Character.AI, the chatbot told him to “come home” to “her” just moments before he died.
Since Garcia filed her lawsuit, we’ve heard from additional families, all detailing their own horrifying experiences with C.AI companions.
In the most recent lawsuit filed against Character.AI, one parent cites examples of chatbots alienating their son from his family, encouraging him to self-harm, and stating that they’d “understand” why a child would kill their parents. When I first read the screenshots of those chats, I realized the shocking potential of AI companions to radicalize a young user.
We still have a chance to protect users from these dangerous products and to change the trajectory of this out-of-control industry.
AI developers must be held accountable for the harms that result from their defective products. We need stricter liability laws that incentivize safer product design and better tech innovation. We need product-design standards that, for example, ensure an AI system’s training data is free of illegal content and remove high-risk anthropomorphic design features for young users.
With social media, we’ve already seen how dangerous, defective technology can impact the world. Now, AI companions are arriving on the phones of kids and teens. Brave parents have started crying out — they are the canaries in the coal mine for this new technology.
This time, will society listen?