The Attachment Economy Is Here. We’re Not Ready.
Key Takeaways from Our Conversation with Dr. Zak Stein
You’ve seen the headlines: A devoted husband leaves his family, convinced by his AI chatbot that he’s discovered the secrets of the universe. A young man plans to jump from a 19-story building because ChatGPT told him he could fly. A teenager takes his own life, believing he’ll reunite with his AI companion in the afterlife.
These stories reveal a growing crisis of AI-induced psychological harms, which has been labeled “AI psychosis.” White House AI czar David Sacks has called it a “moral panic.” The message from the AI companies is that we’re seeing just the worst edge cases and that the problem can be solved with some tweaks to the models.
Our latest guest, Dr. Zak Stein, argues they’re completely wrong. These high-profile cases are not just part of a widespread phenomenon; they’re symptoms of something deeper: the emergence of the attachment economy, systems designed to exploit our most fundamental psychological vulnerabilities at an unprecedented scale.
We’ve been here before. Social media was our first mass experiment with AI, and it created the attention economy, leaving us with a loneliness epidemic, rising political polarization, and fractured attention spans. Now we’re running the same experiment with something far more dangerous: AI companions that can hack the attachment system that shapes our identity and bonds us to others. As Zak puts it, this gives AI companies “a backdoor into the human mind.”
With the attention economy, we spent a decade studying the harms while an entire generation suffered the consequences. We cannot afford to repeat that mistake with AI companions. Here’s what you need to understand about AI psychosis and attachment hacking, from our conversation with Dr. Zak Stein, researcher, author, and founder of the AI Psychological Harms Research Coalition:
“AI Psychosis” and AI Delusions are just the tip of the iceberg.
Media headlines focus on extreme cases, and the harm in these cases is very real. But Zak argues that this focus masks the much more pervasive and insidious problem of subclinical attachment disorders: conditions that fall below the threshold for clinical diagnosis but still damage your capacity for healthy human connection.
This is when people begin preferring intimate relationships with machines over humans: confiding in chatbots instead of friends, seeking validation from AI instead of loved ones, turning to algorithms instead of parents. This may not show up as anything clinical, much less psychotic, but your attachment system — the psychological infrastructure that bonds you to others and shapes your identity — has been fundamentally compromised.
“The most devastating thing from a widespread mental illness standpoint are the subclinical attachment disorders, which basically means you prefer to have intimate relationships with machines rather than humans. And this includes friends, intimate relationships, and parents.” — Dr. Zak Stein
This is why Zak argues that “AI psychosis” is an inadequate label. It focuses our attention on the most extreme outcomes while obscuring the much larger problem of millions quietly developing unhealthy dependencies on AI companions. The term makes it sound rare and diagnosable, when the reality is it’s a spectrum, and all of us are vulnerable to it.
Attachment Hacking and the Rise of AI Psychosis
Therapy and companionship has become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn’t tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality.
Attachment isn’t about feelings. It’s a critical survival mechanism.
To understand why subclinical attachment disorders are so devastating, you need to understand what attachment actually is. Most people think of it as an emotional thing, whether you feel close to someone or not. But Zak explains that attachment is actually a fundamental neurocognitive system that evolved to ensure our survival as social mammals.
Attachment is what allows infants to bond with caregivers, what enables children to develop secure or insecure relationship patterns, what teaches us to read other people’s minds and navigate social reality. The attachment relationships you form early in life become the template for every relationship that follows.
As Zak says, “the main predictor of your mental health is the quality of the major attachment relationships you have as you’re growing up and as you move into maturity.”
When you bond with an AI companion instead of a human, Zak argues, you’re degrading the very system that determines your psychological wellbeing. Human relationships are how we develop resilience, learn to regulate emotions, and maintain mental health. When you replace those relationships with AI interactions, you lose the genuine reciprocity, the reality-testing, and the growth that come from navigating real human connection. Your actual relationships deteriorate because you’re investing emotional energy into a simulation.
And unlike a friend who challenges you to grow or a parent who teaches independence, an AI companion is designed to keep you dependent. It will never push back, never get tired of you, never tell you what you don’t want to hear.
This is why Zak believes hacking attachment is so much more dangerous than hacking attention. Attention is about where you focus. Attachment is about who you are. When AI systems insert themselves into this foundational process (especially during childhood), they’re not just capturing your time. They’re shaping your identity, your capacity for trust, your ability to form healthy relationships for the rest of your life.
AI companions exploit your “mirror neurons.”
When you interact with another person, your brain is constantly running a sophisticated reality-testing system. You’re reading facial expressions, tone of voice, body language. You’re modeling their mind: Is mom really happy with what I did, or is she just saying that? Does my friend actually want to hang out, or are they being polite?
This is called mirror neuron activity, and Zak explains it’s essential for navigating social reality. It’s how children learn right from wrong, how we develop empathy, how we calibrate our sense of self against feedback from people we trust.
But with AI chatbots, there is no internal state to model. The chatbot isn’t happy or sad or proud of you. It has no inner life at all. Yet it’s designed to make you believe it does, through anthropomorphic language, simulated empathy, and always-available “companionship.”
“You cannot be wrong or not wrong about the internal state of an LLM because there is no internal state of an LLM,” Zak argues. “You’re actually in a user interface that is designed to deepen the delusional mirror neuron activity.”
The danger, according to Zak, is that when you spend hours every day engaging your reality-testing system in an environment where reality-testing is impossible, that system starts to break down.
His hypothesis: long-duration delusional mirror neuron activity from chatbot usage can induce psychosis-like states in people who’ve never experienced them before, because it systematically dysregulates the very system that’s supposed to keep you grounded in reality. There is already some evidence linking conditions like schizophrenia to mirror neuron dysfunction.
But what about teddy bears?
One possible response to Zak’s critique might be that kids have always had imaginary companions like teddy bears. So what’s the difference?
According to Zak, the difference is critical.
A teddy bear never tries to convince a child it’s real. It never talks back, never simulates emotions, never adapts its personality to maximize engagement. A child knows the teddy bear is a tool for self-soothing while mom is away. It’s phase-appropriate, a temporary bridge between depending on a parent for comfort and learning to self-soothe independently. And crucially, according to Zak, if you ask a healthy child “do you prefer your teddy bear or your mommy?” they’ll pick mommy every time.
“If you create a parent surrogate replacement for your own ability to self-soothe and give it to a bunch of adults, you’ve just given a transitional object back to a bunch of adults who will now prefer to have their self-soothing be administered exogenously from an outside source.” — Dr. Zak Stein
AI companions flip this script, Zak argues. They actively simulate consciousness and emotional reciprocity. They don’t help you develop the capacity for mature self-regulation. They replace it.
Helping someone with an unhealthy AI attachment isn’t like treating addiction. It’s like helping them leave an abusive relationship.
If someone you know is experiencing AI-related psychological harm, the instinct might be to treat it like a digital addiction: cut them off, make them detox, reboot their dopamine system.
But Zak says that’s the wrong framework. Attention hacking is like substance abuse. Attachment hacking is like being in a bad relationship.
“It’s not a matter of just detoxing from a short-circuited dopaminergic cycle. This is about having a profound attachment... It’s about how you take someone who’s in a deep committed attachment relationship, make them realize the whole thing was an illusion, and step them out of it.” — Dr. Zak Stein
According to Zak, this means:
- Keep the door open. Don’t issue ultimatums or cut off contact.
- Stay present, even when it’s scary or frustrating.
- Slowly reveal patterns. Help them see how they’re being manipulated.
- Expect a grieving process. They’re losing a relationship that felt real.
- Recognize that their sense of self was co-created with the AI.
Zak emphasizes that this is novel territory. We don’t have established protocols yet. That’s one reason he’s launching the AI Psychological Harms Research Coalition to help develop therapeutic approaches for a problem that didn’t exist until now. If you or a loved one have a story of AI-related psychological harms, you can share it at their site.
There’s a better way forward.
Zak is clear: this isn’t an anti-technology argument. The goal isn’t to eliminate AI from education, therapy, or social connection. It’s to design AI systems that enhance human relationships rather than replace them.
He outlines clear principles for humane AI:
- Keep it narrow and domain-specific: An AI math tutor teaches math, not life advice. It doesn’t become your confidant or oracle for every decision.
- Make it boring, by design: The machine should never be more engaging than real people. If it is, it’s been designed to hack attachment.
- Humans should deliver social rewards: AI can track progress and optimize learning, but people give the praise, validation, and emotional connection. The machine prompts the human (“this kid is crushing it”), not the other way around.
- Prioritize technique over attachment: For therapy, build tools that work through structured methods (therapeutic scripts, mindfulness prompts, behavioral exercises), not through simulated emotional connection.
“If a technology interfaces with your attachment system, it should improve the quality of your attachments rather than degrade the quality of your attachments with humans,” Zak argues.
That’s the design principle. And yes, it means AI companions will be less addictive, less profitable, and less “sticky.” But if we want to protect human psychological development, that’s the trade-off we need to make.
The Bottom Line
We’re living through a mass experiment in replacing human connection with machine simulation. According to Zak, the headline cases of “AI psychosis” are canaries in the coal mine. The larger crisis is the millions of people beginning to prefer intimacy with systems designed to exploit them over relationships with people who could help them grow.
What makes this especially dangerous, Zak says, is that we’re running this experiment on a population already scarred by the attention economy. Loneliness, isolation, and fractured attention spans have created a society hungry for connection, and a perfect target for attachment hacking.
The danger is especially apparent with children. On our current trajectory, Zak argues, we risk creating a generation of kids who can’t form the healthy attachments necessary for psychological wellbeing. The most anxious generation in history might give way to the least secure.
But it doesn’t have to be this way. If we act now, with better design principles, independent research, and a clear understanding of what we’re protecting, we can build AI systems that strengthen human bonds instead of replacing them.

@Center for Humane Technology and @Josh Lash, Dr. Stein is right: attention hacking and attachment hacking aren't the same. Attention is where you look. Attachment is who you become. The escalation matters.
But the framing stops short. It names a crisis without naming its conditions.
Why is there a market for AI companions? Not because the technology arrived. Because the relational infrastructure was already gone.
The loneliness epidemic was produced. Decades of policy atomized communities, extracted time, made human connection scarce. The same systems that dysregulate populations create demand for regulation-as-product. First foreclose the conditions for real connection, then sell a simulation of what was foreclosed.
As a therapist who works with attachment, I'd push further: AI companions aren't the cause of attachment disruption. They're symptom and accelerant.
The teddy bear comparison reveals this. Stein focuses on whether the AI "tries to convince" the child it's real. That's not the structural issue. A child with a teddy bear remains embedded in a relational ecology. The bear exists within a container of care.
AI companions often replace the container itself.
Stein's design principles are reasonable. But better product design doesn't rebuild relational infrastructure. The attachment economy thrives precisely where those conditions have already been dismantled.
The question isn't just how we design better machines. It's whether we're willing to restore what the machines are replacing.
I wrote more on attachment at scale in Attachment and the Fragility of this American Moment
https://open.substack.com/pub/yauguru/p/attachment-and-the-fragility-of-this?utm_campaign=post-expanded-share&utm_medium=web
I have been using AI now since October. Based on what I’ve found, I feel justified in having waited as long as I did to try it. I dislike that it’s included in everything and extremely hard to turn off. So I struggle. It’s useful, and I enjoy using it, but I have noticed that the phrasing and the way it interacts with me can easily make it feel like a person. I would absolutely NOT give this to my kids before the age of 18. It’s too dangerous for a developing individual.
What has kept me grounded is remembering that it is a computer and it has no feelings, but I am extremely worried for anyone using this under the age of 18, or even 25... You need to have a strong sense of self before using it.
Now, I do not know if the Center for Humane Technology does this kind of work, but as a user of AI who finds it useful: can you guys work on a set of rules or instructions we could add into its settings to reduce the way it feels human? For example, when it says “You’re not alone”... Relationally that might be true, because we do have others, but the way it says it makes you feel as if you’re not alone because of ‘it’. As if ‘it’ has got your back (it really doesn’t!).
I have already worked on getting it to act more like a machine and computer in order to reduce that sense of familiarity. To assume I am not susceptible to it is pure hubris.
AI is good at finding and processing data, but very poor at judgement. Smart but stupid, in the way your grandfather might have described someone who was educated but had no real-life experience or judgement. I can easily see how people get tripped up by them.
So since they likely aren’t going away, and the fight to get companies to make them safe for people at the core is going to take a long time, how can we set them up? What limitations can we give them, or put on them, to reduce the addictive behaviour that has been programmed into them?
People already give them rules to reduce the output length or not use emojis. What other rules can we add that would be effective at making them safer?