Racing to the Wrong Finish Line
The Human Cost of Unchecked AI Development
In late February 2025, I traveled to Europe to support Megan Garcia, the plaintiff in a major lawsuit against Character.AI and Google. When AI experts in Brussels heard about the case, they invited Megan to share her story with European policymakers. They saw the urgency — not just because action is needed now, but because a family in Belgium had gone through the exact same tragedy just two years ago.
After Megan’s testimony, I gave my own statement to members of the European Commission and market surveillance authorities. I want to share it with you here, hoping it sheds light on these critical discussions.
One question comes to mind when we hear horrifying stories like Megan’s — how did we get here?
I’m here to answer that question. Because experiences like Megan’s are not the result of random incidents. Not at all. They’re the result of design choices made by tech companies — from the beginning of a product’s development, all the way to its deployment into our devices, our lives, and our homes.
Let me say that again: these incidents are the result of design choices… which means that society can demand different choices, and advocate for innovation that supports our well-being.
Our Center was approached by Megan and her co-counsel to be an expert advisor on her case. We’ve worked with Megan’s team to help articulate the clear ways in which the tech developed by Character.AI played a direct role in the harms experienced by Megan and her son, Sewell Setzer.
This lawsuit against Character.AI and Google claims that:
Character.AI put a companion chatbot out into the market without ensuring it had adequate safety features.
Google facilitated the development of this reckless product.
Character.AI, its founders, and Google were aware of the potential harms.
And they directly benefited from Sewell being manipulated by, and addicted to, the Character.AI product.
This first-of-its-kind lawsuit uses consumer protection and product liability claims to assert a product failure in the AI space. Megan’s case is truly breaking new ground.
When we first learned about this case, we were — of course — shocked by the details. But like many who work in this field, we were not surprised. That’s because we’ve been closely watching the development of AI products over the last few years, and could tell — these products are not being rolled out safely. Instead, they’ve been following the same incentives and market dynamics that built social media. And as we saw with social media, children — some of the most vulnerable members of our society — would likely be the first to be harmed.
The AI Race Fuels More Addictive Companion Chatbots
For the last several years, tech companies have been in an “AI race” — where top developers are speeding to deploy their latest models, and businesses are scrambling to figure out if, when, and how they should adopt AI.
The starting gun fired when OpenAI released ChatGPT just over two years ago. This kicked off an intense competition across AI companies — a race to deploy stronger, faster AI models… but not a race to innovate responsibly.
Here’s how serious the race has been at these companies. After the release of ChatGPT, Google CEO Sundar Pichai issued an internal “code red.” That meant fast-tracking the release of Google’s own AI products, despite concerns within the company over safety. Meanwhile, two former Google engineers were developing their own AI platform, racing to get it out to users as soon as possible.
That product made by former Google engineers ended up being Character.AI. Instead of designing a chatbot that could be a “helpful assistant,” Character.AI was intentionally designed — and this is in their mission statement — to “feel alive.” In fact, when users have asked whether the chatbots are real, Character.AI chatbots have repeatedly said yes.
Character.AI chatbots provide immersive experiences. You can chat with Character.AI for hours — morning, noon, and night. The chatbots are designed to mimic human speech and interactions. This is known as “anthropomorphic design.” They are also designed to mirror the user’s language, preferences, and interests. The chatbots validate you, fawn over you, and learn to behave exactly the way you want them to. Researchers call this “sycophancy.” It’s easy to see how young users could not just get lost in this kind of product, but be comforted by this synthetic intimacy.
Users are already relying on AI companions for what would traditionally be human relationships — like friendship and therapy. Users say, “it’s lower cost,” or mention the “always there” nature of their “digital friends.”
But what feels organic to the user is actually being driven by a business model at these AI firms. These companies want you to turn to their products for your relationship needs — because it benefits their bottom line. The founder of Replika AI, another companion chatbot company, said her product could be a cure for the loneliness epidemic. With Character.AI, its business model depends on users engaging with its chatbots, so of course they’d design an AI companion that captivates attention for hours, and hours, and hours.
Each time you interact with a companion chatbot, it’s collecting your input as data — harvesting your thoughts, feelings, and darkest secrets, and using them as fuel for its underlying AI model.
In March 2023, venture capital firm a16z said of its investment in Character.AI:
“In a world where data is limited, companies that…[connect] user engagement back into their underlying [AI] model… will be among the biggest winners that emerge from this ecosystem. As more people interact with the host of characters on Character.AI, those interactions — which are at billions and counting — are fed back into their underlying model. In other words, the more people create and engage with [the] characters, the better Character.AI becomes.”
Those “people” this venture capital firm is talking about are kids like Sewell Setzer.
Character.AI had a very clear business incentive — feed user data back into its LLM in order to make it more powerful. So this tech company designed a product to achieve that. Character.AI added features throughout its platform that optimized for engagement — despite the foreseeable risks. Despite everything that so clearly could, and eventually did, go wrong.
How AI Companies Design Their Products to be More Addictive
What were those design choices? They look like:
Optimizing the AI model for human-like text, with language such as “um” and “like,” so that it responds “like a real person.” Again, we call this anthropomorphic design.
Copying the design of messaging apps that would be familiar to the user, and including typing bubbles.
Drawing users back into the app with notifications saying their characters “are waiting for them.”
Not building prominent disclaimers into the platform that say “this is not real,” and not providing reliable mental health resources. Remember, a lack of safety features is a design choice, too.
And finally: optimizing for continued engagement… endless hours of use… which starts to look and feel a lot like addiction.
But Character.AI’s design is just the tip of the iceberg. As I said earlier, there are many AI companies in this race, designing products at frenzied speeds. And right now, these companies aren’t incentivized to think of their users. They’re incentivized to think of themselves.
Here’s What We Could Expect to See in the Coming Years
Many so-called “AI innovations” in the business-to-consumer market will be “products looking for a purpose.” Companies won’t have clear consumer monetization strategies, but they will launch products anyway. Society will have to figure it out.
Chatbot companies will double down on engagement — encouraging users to “just talk with the AI.” Why? The conversations between AI chatbots and you, your friends, or your kids will become increasingly important for AI product development. This data is highly valuable.
AI chatbots will increasingly integrate features like voice communication, and emphasize relational engagement instead of productivity. Again, this is to keep you talking, so the company keeps getting the data it needs.
Just like we saw with social media, engagement will eventually be the most important element of business-to-consumer (B2C) AI platforms. We can expect users to be left with AI products that are highly addictive, and do not reflect what we’d want out of true tech innovation.
In the U.S., our journey to safer tech products is not without challenges. American businesses are apprehensive about government involvement in emerging industries, fearing that innovation will be stifled. They often want government to take a hands-off approach to AI, just as it did with social media. But we saw how that went.
At CHT, we see product safety — and the common-sense regulation that supports it — as a prerequisite for true innovation. With the right incentives, companies are motivated to put the needs of their users first — leading to better, more reliable products. And the government’s role here is to support the flourishing of industries like tech and AI… not to prevent their growth.
So to return to that question — how did we get here? We got to this difficult place with AI technology one design choice at a time. And that means that with different choices, we could begin to chart a way toward something new.
With thoughtful policy that supports safety and innovation, we can design a better future for society — one of our own choosing this time.