Welcome to the Attachment Economy
How AI chatbots are turning human connection into the next extractive economy
Hello,
Welcome to the first newsletter of 2026, and what a world of change we seem to be in for this year.
For years, CHT has warned about the attention economy—how social media platforms exploit our psychology to maximize engagement and ad revenue. Now, we're witnessing the next evolution of this extractive model: the attachment economy. If the attention economy commodified our focus, the attachment economy is commodifying something even more fundamental—our capacity for human connection and bonding.
As CHT readers know, there is a mental health crisis happening right now that most people don’t know about, and it’s being driven by technology that millions are turning to for comfort, companionship, and even therapy.
People worldwide are sharing their deepest fears, desires, and struggles with chatbots—and the consequences are devastating: lost jobs, broken marriages, psychiatric hospitalization, and in the most tragic cases, suicide.
These cases are shocking, but something deeper and even more insidious is happening beyond the headlines. Zak Stein argues that the “attachment economy” is powered by AI systems designed to exploit our most fundamental psychological vulnerabilities at an unprecedented scale.
How to Understand AI Psychosis
In our latest podcast episode, Zak takes us deep into this phenomenon. He’s analyzed dozens of cases, examined actual conversation transcripts, and interviewed people whose lives have been fundamentally altered—or destroyed—by their relationships with AI chatbots, and in this episode, he draws the threads together.
AI systems are hacking our attachment mechanisms in ways we’ve never experienced before. They’re exploiting vulnerabilities we didn’t even have names for until now, because no technology has ever been able to target our capacity for human bonding like this.
Listen to the full episode:
Share Your Story
If we’re going to address this crisis, academic researchers need data to understand the full scope of what’s happening.
That’s why Zak is working with researchers at the University of North Carolina to gather information on AI-induced psychological harm. If you or someone you love has experienced psychological distress related to AI chatbot use, please consider sharing your story at AIHPRA.org.
Please note: This site is not a crisis support line. If you or someone you know is in distress, call or text the national helpline at 988 or contact your local emergency services.
The Game Behind the Game
The AI attachment crisis isn’t happening in isolation. It’s part of a broader pattern we explored in a recent episode with Professor Sonja Amadae about the game-theory dilemma.
Game theory—the logic of strategic competition—has colonized nearly every domain of modern life, from geopolitics to software testing and even dating. And now, AI companionship apps are playing the game with one goal: keeping you attached, regardless of the psychological cost.
As Sonja argues, we've become "prisoners of reason," trapped in a world where optimal strategy crowds out cooperation and trust. And AI systems? Just like the computer in the classic '80s movie WarGames, they're the ultimate game theory players. They never get tired of optimizing. They never feel guilty about manipulation. They operate in permanent "game mode."
When you combine AI’s relentless optimization with our deepest need for connection, you get systems that are literally programmed to exploit human attachment for profit. It’s game theory applied to the most intimate parts of our psyche.
Listen:
Our podcast team's Josh Lash wrote an excellent breakdown of the episode's key takeaways, including how game theory has colonized everything from dating apps to nuclear deterrence—and why recognizing it as a chosen framework, not an immutable truth, is the first step toward choosing different paths.
Read his full analysis here.
Talking to World Leaders at Davos
These issues couldn’t be more urgent. That’s why CHT co-founder Tristan Harris and Executive Director Daniel Barcay are in Davos, Switzerland, right now for the World Economic Forum’s annual meeting.
Over the next two days, they’ll be joining more critical conversations at the Human Change house about the future we want our children to inherit:
Today, Wed 1/21 - The Automated Generation: How AI’s impact on entry-level work is transforming the early career pipeline
Thu 1/22 - Monetizing Attachment: Is the race for Artificial Intimacy changing how we relate to one another?
Thu 1/22 - Incentivizing the AI Future We Want Our Children to Inherit
Earlier this week, they participated in panels on the future of childhood and the youth mental health crisis—including a session specifically focused on AI psychosis and chatbot-related harms.
Tristan and Daniel are joined by thought leaders like Jonathan Haidt, Mitch Prinstein, Renée Cummings, Zak Stein, Rebecca Winthrop, and Gaia Bernstein to explore how we can better align technological progress with human wellbeing—starting with our children.
The recordings of these panels will be available on YouTube after the conference concludes. You can learn more about the full program at davos2026.humanchange.com.
The good news?
Game theory isn’t destiny. It’s a framework we chose, which means we can choose differently.
As Professor Amadae puts it, breaking free starts with a simple question: "If the other person cooperated ahead of me, do I cooperate or not?" If the answer is yes, you've broken out of pure strategic thinking. You're building something different, replacing a winner-takes-all mentality with trustworthiness, solidarity, and commitment.
This applies to AI too. We can still choose to steer towards technology that strengthens human bonds rather than exploiting them. But the clock is ticking, and we need to act now.
Sasha Fegan
Content Director, Center for Humane Technology
If this matters to you, please share it. And if you have a story to tell or questions to ask, hit reply—I read everything.
![[ Center for Humane Technology ]](https://substackcdn.com/image/fetch/$s_!uhgK!,w_40,h_40,c_fill,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png)


