Discussion about this post

Stephen Hanmer D'Elía, JD, LCSW

@Center for Humane Technology and @Josh Lash, Dr. Stein is right: attention hacking and attachment hacking aren't the same. Attention is where you look. Attachment is who you become. The escalation matters.

But the framing stops short. It names a crisis without naming its conditions.

Why is there a market for AI companions? Not because the technology arrived. Because the relational infrastructure was already gone.

The loneliness epidemic was produced. Decades of policy atomized communities, extracted time, made human connection scarce. The same systems that dysregulate populations create demand for regulation-as-product. First foreclose the conditions for real connection, then sell a simulation of what was foreclosed.

As a therapist who works with attachment, I'd push further: AI companions aren't the cause of attachment disruption. They're symptom and accelerant.

The teddy bear comparison reveals this. Stein focuses on whether the AI "tries to convince" the child it's real. That's not the structural issue. A child with a teddy bear remains embedded in a relational ecology. The bear exists within a container of care.

AI companions often replace the container itself.

Stein's design principles are reasonable. But better product design doesn't rebuild relational infrastructure. The attachment economy thrives precisely where those conditions have already been dismantled.

The question isn't just how we design better machines. It's whether we're willing to restore what the machines are replacing.

I wrote more on attachment at scale in "Attachment and the Fragility of this American Moment":

https://open.substack.com/pub/yauguru/p/attachment-and-the-fragility-of-this?utm_campaign=post-expanded-share&utm_medium=web

Amanda Hodges

I have been using AI now since October. Based on what I found out, I feel justified in having waited as long as I did to try it. I dislike that it's included in everything and extremely hard to turn off. So I struggle. It's useful. I enjoy using it, but I have noticed that the phrasing and the ways it interacts with me can easily lead to it feeling like a person. I would absolutely NOT give this to my kids before the age of 18. It's too dangerous for a developing individual.

What has kept me grounded is remembering that it is a computer and it has no feelings, but I am extremely worried for anyone using this under the age of 18 or even 25... You need to have a strong sense of self before using it.

Now I do not know if the Center for Humane Technology does this kind of work, but as a user of AI who finds it useful, can you guys work on a set of rules or instructions we could add into its settings to reduce the way it feels human? For example, when it says "You're not alone"... Relationally that might be true, because we do have others, but the way it says it makes you feel as if you're not alone because of 'it'. As if 'it's' got your back (it really doesn't!).

I have already worked on getting it to act more like a machine and a computer in order to reduce that sense of familiarity. To assume I am not susceptible to it would be pure hubris.

AI is good at looking for and processing data, but very poor at judgement. Smart but stupid, in the way your grandfather might have described someone who was educated but had no real-life experience or judgement. I can easily see how people get tripped up by them.

So since they likely aren't going away, and the fight to get companies to make them safe for people at the core is going to take a long time, how can we set them up? What limitations can we give them or put on them to reduce the addictive behaviour that has been programmed into them?

People already give them rules to reduce output length or to skip emojis; what other rules can we add that would be effective at making them safer?

27 more comments...
