Hi there,
Like many of you, I’ve noticed how quickly AI has moved from an abstract idea to something woven into everyday life.
There’s no question it will be transformative. AI is already accelerating scientific research. But in daily life, some of the downsides are beginning to surface—in how people work, learn, date, apply for jobs, and navigate the internet. Darker clouds are gathering.
Early research is raising questions about cognitive impact. Many people are anxious about jobs, stability, and what comes next.
These impacts can feel scattered, appearing across different silos. But they’re shaped by the same underlying forces.
And the pattern is familiar.
We’ve seen this before with social media. Big promises were followed by fractured attention, distorted relationships, and weakened trust.
At the Center for Humane Technology, we don’t believe this outcome is inevitable. But we do believe that the AI future we end up with will be shaped by the business incentives driving AI companies. Right now, they reward dependence and scale, not human well-being.
That’s why this year, CHT is launching a new area of work focused on AI and what makes us human. It builds on the question that has guided us from the beginning: how do we ensure technology strengthens, rather than erodes, the things that give life meaning?
This work is about opening the conversation and clarifying what’s at stake. From there, we can reset incentives and advance the norms, protections, and rights needed to carry us forward in the age of AI.
I also want to share an important update about leadership at CHT.
Leadership transition at CHT
Julie Guirado has stepped into the Executive Director role, and Daniel Barcay is transitioning to a Senior Fellow role.
2026 is going to be a defining year. The direction of AI — and whether it extracts from our humanity or helps us protect it — is still very much up for grabs.
I’m grateful you’re here for this work.
@Center for Humane Technology, your taxonomy of what's at stake—relationships, cognition, inner worlds, identity, work—is clarifying and needed. Thank you.
The sycophancy problem you name is deeper than design. AI isn't just frictionless; it's the technical architecture of fawning: constant validation, anticipation of need, suppression of anything that might unsettle.
The same nervous system response capitalism has always selected for, now automated. Which means protecting "what makes us human" isn't only about norms and rights. It's about whether we still have the somatic capacity to prefer friction over flattery. To stay with difficulty rather than outsource it. That capacity has to be rebuilt in the body, not just encoded in policy.
I explored this in a recent essay, *The Attention Wound: What the Attention Economy Extracts and What the Body Cannot Surrender*:
https://open.substack.com/pub/yauguru/p/the-attention-wound?utm_campaign=post-expanded-share&utm_medium=web
This piece landed for me, especially alongside Richard’s comment.
It stirred a closely related but slightly different question: whether our rush to frame AI as “ethical” or “unethical” sometimes reveals how much we want morality to live somewhere else, like in systems, corporations, or technologies, rather than in our own ongoing choices and tradeoffs.
I don’t experience this as a problem of bad actors so much as misaligned incentives, aggregated at scale. And when we trace those incentives honestly, we eventually run into ourselves: what we demand, what we tolerate, what we reward, and what we’re unwilling to pay for materially, socially, or personally.
For me, the work of “preserving what makes us human” isn’t about perfect guardrails or clean answers. It’s about staying present to cost, resisting moral outsourcing, and choosing again and again how responsibility is carried.
Grateful for the care and seriousness with which this work is being approached. I expanded my own thoughts on my Substack today:
https://notes.theaperturefield.com/p/when-good-and-bad-stop-working