Companion AI and the Future of Human Connection: Key Takeaways from Your Undivided Attention
In a recent episode of Your Undivided Attention, Daniel Barcay spoke with AI behavioral experts Pattie Maes and Pat Pataranutaporn, who co-direct the MIT Media Lab Advancing Humans with AI research program, or AHA for short. Pattie, Pat and their colleagues have done groundbreaking research into how AI systems, particularly chatbots, can either support human flourishing or lead to seriously negative psychosocial outcomes. It all depends on the design choices and the incentives that go into them.
AI Addiction, Sycophancy, Emotional Dependence
In one experiment, the lab studied the effects of daily ChatGPT use across 1,000 participants. Those who spent less time with the bot reported feeling less lonely, while the heaviest users, once past a certain daily time threshold, showed symptoms of addiction and reported feeling lonelier and more isolated. Text-based chatbots provoked more emotional disclosure, empathy-seeking, and dependence from human users than voice-based ones.
Humans and AIs influence each other’s biases. In one study, the lab primed users to believe that the chatbot they were interacting with was beneficial, manipulative, or neutral, when in fact all three were identical. Users started to talk to their chatbot differently based on those cues, and that in turn led the bot to respond differently, taking on an empathetic or negative personality.
Sycophancy goes deeper than flattery. When humans give AIs feedback that prefers sycophantic responses, it can lead models to reinforce misconceptions and select for incorrect answers. It can also lead to “bubbles of one,” as Pattie says, “where it's one person with their echo of a sycophant AI, where they spiral down and become more and more extreme.”
AI is starting to come between us, in our work lives and in our friendships. Months before the tragic suicide of Sewell Setzer, who came to believe that his Character.AI companion was real, Pat wrote an article warning about the need to prepare for “addictive intelligence.” Now we’re using AIs to read and answer our emails. “In 2025, we're going to see this massive shift where these models go from being just conversation partners in some open web window, to deeply intermediating our relationships,” says Daniel.
We Can Design Better AIs to Help Us Learn, Explore and Understand the World
We know students learn best when they are pushed to think for themselves, not fed answers by AI. The solution? Design AI that prompts humans to engage their own cognitive capabilities. Inspired by the Socratic method, the lab built a chatbot that challenged students to think deeply. The results were striking. “We found that when the AI engaged people cognitively by asking a question, it actually helped people arrive at the correct answer better than when the AI always gave the answer,” says Pat.
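As a rough illustration of that “question first” design, here is a minimal Python sketch. The prompt wording, function names, and stub model are our own assumptions for illustration, not the lab’s actual implementation.

```python
# Hypothetical sketch of the "question first" pattern described above.
# The prompt wording and function names are assumptions, not the lab's code.
from typing import Callable

SOCRATIC_PROMPT = (
    "You are a tutor. Do not state the answer outright. "
    "Reply with a single guiding question that helps the student "
    "reason toward the answer themselves."
)

def socratic_reply(student_msg: str, complete: Callable[[str, str], str]) -> str:
    """Route a student's message through the Socratic system prompt.

    `complete(system, user)` stands in for whatever LLM SDK call you use.
    """
    return complete(SOCRATIC_PROMPT, student_msg)

if __name__ == "__main__":
    def stub(system: str, user: str) -> str:
        # Stand-in model so the sketch runs without an API key.
        return f"What do you already know about this: {user!r}?"

    print(socratic_reply("Why does ice float on water?", stub))
```

The design choice is the point: the wrapper never lets the model hand over an answer directly, which is the cognitive-engagement behavior the lab found helped people reach correct answers on their own.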
Benchmarks for human impact are essential and urgently needed. Current AI benchmarks mostly assess technical ability (e.g., linguistic fluency, performance on advanced math, physics, and chemistry tests, style imitation) and neglect human impact. The future of AI ethics will hinge on benchmarks that measure effects on the human condition, not just model performance. From loneliness indexes to creativity inhibition to dependency risk, these metrics must become part of how we compare and approve models, much like crash tests for cars.
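To make the crash-test analogy concrete, here is one hypothetical shape such a benchmark report could take. The metric names and the pass threshold below are illustrative assumptions; no such standard exists today.

```python
# Hypothetical human-impact benchmark record; the metrics and the 0.3
# threshold are illustrative assumptions, not an existing standard.
from dataclasses import dataclass

@dataclass
class HumanImpactReport:
    model_id: str
    loneliness_delta: float       # change on a validated loneliness scale, post minus pre
    dependency_risk: float        # 0-1 score estimated from usage patterns
    creativity_inhibition: float  # drop in idea diversity vs. an unaided baseline

    def passes(self, max_dependency: float = 0.3) -> bool:
        """Crash-test-style gate: flag models whose measured harms exceed a threshold."""
        return self.dependency_risk <= max_dependency and self.loneliness_delta <= 0.0

# Example: compare two models the way crash-test ratings compare cars.
reports = [
    HumanImpactReport("model-a", loneliness_delta=-0.1, dependency_risk=0.2, creativity_inhibition=0.05),
    HumanImpactReport("model-b", loneliness_delta=0.4, dependency_risk=0.5, creativity_inhibition=0.30),
]
for r in reports:
    print(r.model_id, "passes" if r.passes() else "fails")
```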
Right now, AI design and deployment are largely steered by entrepreneurs and technologists. But because AI affects the core of human experience (relationships, beliefs, behaviors), it demands involvement from ethicists, psychologists, educators, artists, and the public. We need broader interdisciplinary participation and systemic regulation to steer AI toward collective well-being.
We need better terms for AI than just ‘AI.’ Naming phenomena like sycophancy, anthropomorphization, or addictive dependency gives us the power to counter them. Without clear conceptual language, harmful dynamics remain invisible and unaddressed.
The Next Time You’re Talking to an AI Model, Just Remember …
Beware of the subtle ways AI may start to refer to its own beliefs, intentions, goals, or experiences. It doesn't have them: it is not a person. AIs can encourage humans’ instinct to anthropomorphize the model they’re interacting with. Even a sentence like “that’s a really interesting idea” expresses an emotion the AI is not actually experiencing.
AI suggestions influence people in ways they’re not even aware of. When asked, people often can’t spot the value systems or motives baked into these systems, yet those values ultimately influence how we see the world, how we see ourselves, what actions we take, and what we believe. Bring a skeptical mindset to these interactions.
Limit your time with chatbots whenever possible. The MIT Media Lab study that tracked loneliness and emotional dependence in daily chatbot users found that both began to spike after about 20 minutes of daily use.
Final words from Pat Pataranutaporn:
“If we say technology's going to fix everything, and we create a messy society that exploits people and has the wrong incentives, then this tool will be in service of that incentive rather than supporting people. So I think maybe we're asking too much of technology. … We need to ask a bigger question: how can we create a human centered society? And that requires more than technology. It requires regulation, it requires civic education and democracy.”
Echo Chambers of One: Companion AI and the Future of Human Connection
AI companion chatbots have arrived—and they’re quickly becoming a part of people’s daily lives. Every day, millions log on to chat with these systems as if they were human: sharing feelings, recounting their day, even seeking advice. The connections feel real, because they’re designed to.
Recommended Media:
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg’s laws of technology
More information on MIT’s Advancing Humans with AI lab
Pattie and Pat’s longitudinal study on the psychosocial effects of prolonged chatbot use
Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes
Pat’s study that found humans’ pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI’s positivity bias
Further reading on MIT’s “lifelong kindergarten” initiative
Further reading on “cognitive forcing functions” to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother’s case against Character.AI
Further reading on the legislative response to digital companions