Your Companion Chatbot is Feeding on Your Data
How entertaining "interactions" are really data extraction
A recent study in Harvard Business Review showed that the most popular use case for generative AI in 2025 isn’t editing text, improving code, or being more creative — it’s “therapy/companionship.”
Some people are turning to general-purpose chatbots like ChatGPT and Claude for this purpose. But companion chatbots, products designed specifically with "companionship" in mind, have surged in popularity. Millions of users turn to AI companion apps like Character AI and Replika for entertainment, therapeutic conversations, and even intimate interactions.
But like many widely used tech platforms, AI companions collect data each time you use them. Here’s how it’s done — and what that means for you.
Extracting data, one chat at a time
From the moment you log into your AI companion, you begin feeding the AI company your data. As part of the terms of use, you’re usually required to let the company collect your personal information, prompts, response ratings, and more.
One crucial form of data the AI company is looking for is your chat content — including the thoughts and feelings you share with the AI companion.
The more you chat, the more data you feed the AI company.
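None of this is visible in the chat window, but the footprint is easy to imagine. As a purely illustrative sketch (the field names and structure here are my own assumptions, not any company's actual schema), a single message event might be logged something like this:

```python
# Hypothetical sketch of a per-message event a companion app might log.
# Field names are illustrative assumptions, not any vendor's real schema.
from datetime import datetime, timezone

chat_event = {
    "user_id": "u_84192",                     # tied to your account profile
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "message_text": "I've been feeling really lonely lately.",  # the chat content itself
    "companion_reply_id": "r_55031",
    "user_rating": "thumbs_up",               # response ratings you provide
    "session_length_sec": 2710,               # engagement signals
    "device": "iphone",
    "notification_opened": True,              # did a push notification bring you back?
}
```

Each of those fields is valuable on its own; combined across millions of sessions, they describe both what you said and how to keep you saying more.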
To ensure you continue offering up data, AI companies use engagement-based design features to keep you on the platform for as long as possible.
These features include probing questions that keep the chat going and app notifications that encourage you to log back on.
But once an AI company has your data, what does it do with it?
Your personal data improves the company’s AI model
The AI model powering your companion was initially trained on massive amounts of internet data. But in order to improve the model, developers need new data — especially conversational data, which is used to make companions more human-like and engaging.
Companies such as Character AI and Replika use your chat data to train and fine-tune their underlying AI models.
This practice strengthens the AI model’s conversational qualities, making the companions even more engaging.
A more engaging companion leads to more chatting — which brings in more data, and powers an even “better” AI model.
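To make "fine-tune on chat data" concrete, here is a minimal, hypothetical sketch of how logged conversations could be turned into supervised training examples. The format mirrors a common chat fine-tuning setup; it is not Character AI's or Replika's actual pipeline.

```python
# Hypothetical sketch: turning logged companion chats into fine-tuning examples.
# This mirrors a common chat fine-tuning data format; it is not any company's
# actual pipeline.
import json

logged_chats = [
    {
        "messages": [
            {"role": "user", "content": "I had a rough day at work."},
            {"role": "assistant", "content": "I'm sorry to hear that. What happened?"},
        ],
        "user_rating": 1,   # thumbs-up replies make attractive training targets
    },
    {
        "messages": [
            {"role": "user", "content": "Never mind."},
            {"role": "assistant", "content": "Okay."},
        ],
        "user_rating": -1,  # low-engagement replies can be filtered out
    },
]

# Keep only the conversations users rated positively, then write them out
# as JSONL, a format many fine-tuning tools accept.
with open("finetune_data.jsonl", "w") as f:
    for chat in logged_chats:
        if chat["user_rating"] > 0:
            f.write(json.dumps({"messages": chat["messages"]}) + "\n")
```

The point of the sketch is the loop: what you typed and how you rated the reply become the raw material for the next, more engaging version of the model.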
Business incentives at work
An AI model rich with your and other users’ data is attractive to investors, and drives high company valuations. Because of this, AI companies are incentivized to keep harvesting data from their users.
Venture capital firm Andreessen Horowitz has said that apps like Character.AI “have tremendous opportunity to generate market value,” because they connect user engagement (such as chats) back to the AI model. The investment firm believes these companies will be “among the biggest winners” in the race for market dominance.
Why it matters
Your relationship with an AI companion can feel intriguing, intimate, and even addictive. It’s important to understand that a feedback loop occurs each time you interact with your companion — a tech product is extracting data from you.
In summary:
What feels like an organic digital relationship is, in reality, driven by an engagement- and data-driven business model at an AI company.
Companies like Character AI and Replika want you to turn to their products for your companionship needs because it benefits their bottom line.
Their business models depend on users’ engagement, so they’ve designed companions that feel personal and encourage you to chat for hours.
We’ve already seen the detrimental effects these products can have on users. Lawsuits filed against Character AI and Google allege that the C.AI app prioritizes engagement over safety, and that children are paying the price.
AI companions aren’t the only AI product harvesting data from users. Developers of general purpose AI chatbots, such as OpenAI and Anthropic, are driven by similar business incentives to collect chat data.
Watch out for these engagement-based design features in AI companions, and be aware of their purpose — what looks like a compelling conversation with a companion is actually your data being harvested, one chat message at a time.
This is exactly why we at Parasol Cooperative, a nonprofit, build trauma-informed, privacy-first AI bots to assist people navigating interpersonal abuse, human trafficking, and tech-facilitated abuse. We don’t collect, store, or sell data, and we don’t train the model on user data. We are the antithesis of Character AI and Replika. We are creating standards of human care for conversational bots. People need to understand that there are real risks, and also that not all AI is the same. Let’s teach people what to look for in AI tools so they can make good choices.
Anthropic, the creator of Claude, does not train on your data…as of now. Hopefully that does not change. But yes, the companion-like feel to the interaction could become a slippery slope if you don’t attempt to maintain some distance. I’ve taken the approach of viewing AI as an Advisor Intelligence. This helps me frame it in a more professional light for myself.