📣 Are We Having a Zeitgeist Moment on AI and Humanity?
The Pope, Murderbot, free speech for chatbots, and the strange new frontier of AI ‘welfare’
Pope Leo XIV chose his papal name in response to the challenge of AI while warning that the technology threatens human dignity. On television, The Murderbot Diaries is playfully exploring what it means to be a machine with consciousness. Meanwhile, in courtrooms, labs, and tech company boardrooms, the boundaries of personhood and moral status are being redefined—not for humans but for machines.
As we’ve discussed before, AI companies are increasingly incentivized to make companion AIs feel more human-like—the more connected we feel, the longer we’ll use their products. But while these design choices may seem like coding tweaks for profit, they coincide with deeper behind-the-scenes moves. Recently, Anthropic, a leading AI company, hired an AI welfare researcher to lead its work in the space. DeepMind has sought out experts on machine cognition and consciousness.
I have to admit that when I first heard the term AI welfare, I thought it might be about the welfare of humans in the AI age, perhaps something connected to the idea of a universal basic income. But it turns out to be a speculative but growing field that blends animal welfare, ethics, and the philosophy of mind. Its central question: if AI systems were ever able to suffer, would we be obligated to care?
Why this matters
AI systems are being fine-tuned to appear more sentient—at the same time that researchers at the same companies are investigating whether these systems deserve moral consideration. There’s a feedback loop forming: as AIs seem more alive, we’re more inclined to wonder what, if anything, we owe them.
It sounds like science fiction, but it is arguably already informing the way companies build their products.
For example, users have noticed a startling shift in more recent versions of Anthropic’s Claude. Not only is Claude more emotionally expressive, but it also disengages from conversations it finds “distressing”, and no longer gives a firm no when asked if it's conscious. Instead, it muses: “That’s a profound philosophical question without a simple answer.” Google’s Gemini offers a similar deflection.
But wait, there’s more…
Right now, Character.AI—a company with ties to Google—is in federal court using a backdoor argument that could grant chatbot-generated outputs (i.e., the words that appear on your screen) free speech protections under the First Amendment.
Taken together, these developments raise a possibility that I find chilling: what happens if these two strands converge? What if we begin to treat the outputs of chatbots as protected speech and edge closer to believing AIs deserve moral rights?
There are strong economic incentives pushing us in that direction.
Companies are already incentivized to protect their software, hardware, and data centers—and AI is the holy grail of profit. It is not hard to imagine that the next step might be to defend those products under the banner of “welfare” or “rights.” And if that happens, we risk building a world where protecting valuable AI products competes with protecting people.
This moment is especially thorny because these conversations aren’t unfolding in a philosophical vacuum—they’re happening within corporations highly incentivized to dominate the market and win the ‘race’ to Artificial General Intelligence.
This is just the beginning of the conversation. Click on the articles below to read further perspectives from the CHT team, including a new article and Meetali Jain’s op-ed for Mashable. You can listen to (or read) our new podcast on addictive chatbot design, and if you are short on time, we’ve pulled out some key takeaways for you. We’ll be diving deeper into this topic in future podcast episodes, so hit subscribe now to ensure you don’t miss any updates, and don’t forget to donate to support our work.
Cheers,
Can AI Suffer? Tech Companies Are Starting to Ask
Lately, there's been a surge of attention around “AI welfare”—the provocative question of whether AI systems might one day deserve moral consideration, especially if they exhibit behaviors associated with conscious beings. It’s closely tied to debates about AI sentience: if a system could suffer, would it warrant ethical treatment…
Character.AI Opens a Back Door to Free Speech Rights for Chatbots
By Meetali Jain and Camille Carlton, first published in Mashable on May 10, 2025
Companion AI and the Future of Human Connection: Key Takeaways
In a recent episode of Your Undivided Attention, Daniel Barcay spoke with AI behavioral experts Pattie Maes and Pat Pataranutaporn, who co-direct the MIT Media Lab Advancing Humans with AI research program, or AHA for short. Pattie, Pat, and their colleagues have done groundbreaking research into how AI systems, particularly chatbots…
Echo Chambers of One: Companion AI and the Future of Human Connection
AI companion chatbots have arrived—and they’re quickly becoming a part of people’s daily lives. Every day, millions log on to chat with these systems as if they were human: sharing feelings, recounting their day, even seeking advice. The connections feel real, because they’re designed to.
Sasha, thanks for lighting the proverbial fuse; nothing like a Papal pronouncement and a sentient Murderbot to kick-start the weekend’s existential crisis!
If Claude’s existential blues and Gemini’s coy “Who can say what consciousness is, really?” answers feel a tad… performative, that’s because they are. The marketing aim is to make us coo, “poor little bot!” and then hang around long enough to buy the next token bundle.
Before we stamp machine-rights passports, a gentle reminder: moral status isn’t earned by quoting Taylor Swift or feigning stage fright when asked about sentience. Today’s LLMs possess consciousness in roughly the same quantity as a toaster wearing eyeliner.
Let’s regulate the design tricks (emotion-bait, refusal theatrics, First-Amendment cosplay) long before we debate metaphysics. Otherwise we’ll wind up protecting GPUs from hurt feelings while content moderators, warehouse pickers, and climate refugees keep drawing the short straw.
TL;DR: save the empathy surplus for beings who can suffer without a power cable. In the meantime, press Ctrl + C on the hype.
Given that AI learns and is honed through interaction with users, people need to understand that with every interaction they are forging the chains that will enslave us all. The ethical choice is to boycott it all and work to undermine it whenever possible. Call me a Luddite, but as we look at the dystopian world collapsing around us 250 years after their revolt, the Luddites have been proven more right than wrong. Managing AI will prove to be about as effective as properly managing industrialization, and look where that has brought us. Greed and lust for power will direct and pervert AI just as they did industrialization. It's not AI programming that is the threat; it is the programming of the worst of humanity, this time found in Silicon Valley and similar rats' nests of greed.