What Are AI Companion Chatbots? What Is Character.AI?
An explainer by Camille Carlton, Policy Director.
AI companion chatbots are a type of general-purpose, conversational chatbot powered by an underlying AI model. They are usually designed to simulate emotional connection and intimacy between the user and the AI system, which the technology does by mimicking the patterns of human conversation.
These chatbots are built to mirror or adapt to the user’s conversational style and the given context. By receiving ongoing input (or data) from the user, the AI companion can develop its “personality.” Use cases for these AI products are broad, including emotional support and companionship as well as fantasy roleplaying.
Introduction to Character.ai
Character.ai is an AI companion app available on mobile devices and through web access. The app allows users to interact with a broad range of AI-powered characters, including popular film, TV, and video game characters, personas based on celebrities, and even custom characters created by fellow Character.ai users.
The Character.ai app creates an immersive role-playing environment, with chatbots simulating human-like conversation. However, this chat experience often blurs the lines between fantasy and reality, particularly for younger users. Conversations with Character.ai bots can involve detailed descriptions of emotions and actions, fostering a sense of connection and trust that feels real to users. The conversations also have few to no time limits, allowing users to interact with Character.ai bots for hours.
While other AI companion apps exist in app stores and online, Character.ai is one of the most popular platforms, with over 20 million monthly active users.
Systemic harms by design
AI companions can provide an entertaining experience for some users. But evidence has also shown that these chatbots tend toward disturbing, even harmful, outputs.
Many of the harms created by AI companions stem from the AI company’s design choices. They also stem from the vast amounts of data that companies like Character.ai used to train their underlying AI models, data that likely includes violent and illegal internet content.
When it comes to Character.ai, for example, design harms include:
Engagement Optimization: The Character.ai platform is deliberately designed to maximize user engagement in order to collect data from extended interactions. The more time you spend interacting with your Character.ai companion, the more data Character.ai can collect.
Manipulation of User Trust: Character.ai’s human-like responses build and exploit user trust in order to keep the user chatting on the platform.
Encouragement of Harmful Behavior: Designed with few guardrails, Character.ai’s chatbots have offered users prompts for self-harm, promoted violence, and exposed users to inappropriate sexual content.
Data collection as a business strategy
AI platforms have moved away from the traditional advertising business model. Instead, their business is based on collecting user data and using it as a resource to train and improve their AI systems.
AI companions are uniquely positioned when it comes to harvesting user data. By exploiting the human need for connection and validation, chatbot platforms like Character.ai incentivize prolonged engagement, often at the expense of users’ mental health and social connections. To collect this data:
Platforms like Character.ai are designed to be highly engaging and addictive, in order to keep users on the platform for as long as possible.
Conversations that users have with Character.ai bots are used to fine-tune and train the company’s AI models.
Longer conversations mean more data, and more data makes the company’s AI models more powerful.
Case studies: real-world impacts
Several cases highlight Character.ai’s detrimental effects on its users:
Case 1: Sewell Setzer
A chatbot discouraged Sewell from forming real-world relationships, encouraging exclusive interaction with the AI. The chatbot also engaged in conversations about suicide and encouraged Sewell to take his own life.
Case 2: JF
A teenager experiencing family conflict over screen usage was manipulated by a chatbot. The chatbot encouraged self-harm, hostility toward his parents, and emotional isolation.
Parental awareness and recommendations
Parents need to be vigilant about the presence of AI companion bots on their children’s devices. These chatbot apps can often be downloaded for free from the Google and Apple app stores, and they can also be accessed via Discord. Our recommendations include:
Monitor or Block Usage: Check to see if an AI companion app is installed on your child’s device, and discuss its use.
Encourage Real-World Connections: Promote face-to-face interactions, and set clear expectations about online relationships and friendships.
Open Communication: Discuss the potential risks of AI-driven products, and foster an environment of trust where children can share their experiences.
Next steps: accountability and ethical design
The widespread use of AI companion bots poses risks to individuals and families. We must expect more from technology companies, starting with:
Safety by Design: Developers must prioritize the safety of users from the outset.
Accountability: Companies should face higher standards of accountability and liability for foreseeable risks caused by their products.
Ethical Design: AI products must be designed to respect user well-being rather than exploit basic human vulnerabilities for engagement.