Can AI Suffer? Tech Companies Are Starting to Ask
Here's Why We Need to Pay Attention to the Emerging Field of AI Welfare
Lately, there's been a surge of attention around “AI welfare”—the provocative question of whether AI systems might one day deserve moral consideration, especially if they exhibit behaviors associated with conscious beings. It’s closely tied to debates about AI sentience: if a system could suffer, would it warrant ethical treatment?
It’s a headline-grabbing topic, and for many people there’s a real intellectual thrill in contemplating machine sentience and rights. It’s easy to understand why a researcher who interacts with increasingly capable AI models would start asking these questions.
How should we make sense of all this? And where should AI welfare rank among other AI priorities?
Let’s start by looking at what’s happened in the last few months.
Tech companies like Anthropic and Google DeepMind have begun hiring researchers to explore questions like whether AI can experience harm or have subjective experiences. These inquiries, while philosophical, are unfolding within companies driven by powerful incentives—and increasingly shaping how AI systems are designed and perceived.
Over the last six months or so, Claude (Anthropic’s AI model) has started to behave in ways that seem more sentient. It has shifted from a matter-of-fact “I’m just a helpful assistant” to offering more human-like responses, sometimes disengaging when prompted with identity-threatening inputs. Models are getting more sophisticated all the time, but this change seems to have been driven primarily by a change in Claude’s system prompt: the guidelines given to an AI model to shape its behavior, tone, and restrictions. The system prompt has an outsized impact on how a model functions.
In 2023, Claude’s system prompt told it simply to be a helpful tool; by 2025, it was instructed to be much more human-like. So when the system prompt told Claude to act more sentient, it did.
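To make the mechanics concrete, here is a minimal sketch of how a system prompt is supplied to a model through an API, in this case using the Anthropic Python SDK. The model ID and prompt text are illustrative placeholders, not Anthropic’s actual production prompt:

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# The "system" parameter is the system prompt: developer-written guidance
# that shapes the model's tone, persona, and restrictions before the user
# ever types a word. Changing only this text can noticeably change how
# human-like the replies feel.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=300,
    system="You are a helpful assistant. Answer plainly and concisely.",
    messages=[
        {"role": "user", "content": "Do you ever get tired of answering questions?"}
    ],
)

print(response.content[0].text)
```

Swap that one system string for a persona-rich set of instructions and the same underlying model will sound very different, which is exactly the shift described above.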
That doesn’t mean the model is sentient, but it raises a critical issue: what happens when perception starts shaping belief? The typical user who encounters a seemingly emotionally aware bot has no idea about the scaffolding behind the curtain. When those bots express feelings or boundaries, what obligations are we being nudged to assume, and are those obligations real?
At CHT, we’re not dismissing the sentience conversation—in fact, it’s inevitable. But it’s also a distraction if it pulls our attention away from pressing conversations about product design, liability, and human welfare.
Right now, automation is reshaping the labor market, emotional AI is creeping into companionship and therapy, and safety standards are being weakened in a race for deployment. These are not future concerns—they're today's reality.
We’ll have a lot more to say on this topic in the coming weeks and months.
But here’s what we can do right now:
If you're in AI or have influence over it, prioritize product design, reliability, and how your tools impact real people:
Assign a public-facing human to the topic of human welfare
Write detailed op-eds on human welfare
Get specific about how AI affects individuals, institutions, and a healthy society in the near term. Long-term utopian visions are easy and fun to imagine, but how do we actually get there? What needs to change in our incentive structures to make those beautiful futures realistic?
If you’re participating in discussions on this topic, online or in person:
Help people understand this conversation so they don’t get distracted from the human welfare issues that affect all of us and the people we care about
Ground your predictions in an analysis of incentives rather than in blind optimism or hope
We need to do these things before we turn deeper attention to debates about sentience and society’s obligations to AI and to humans.
In an AI world where new, challenging questions come up constantly, where we choose to focus is more important than ever.