What Would We Lose If Machines Had Legal "Speech"?
Why the Character.AI Lawsuit Could Define the Future of Free Speech
Imagine a future where the most persuasive voices in our society aren’t human.
Where late-night whispers to teenagers come from bots, not friends. Where AI-generated characters don’t just fill our newsfeeds but manipulate our decisions, our relationships, and our sense of reality. Now, imagine those “voices,” made by machines with no conscience and no accountability, were granted First Amendment protections.
This isn’t hypothetical. It’s the future that top AI labs are fighting for in court.
On a recent episode of Your Undivided Attention, CHT’s Tristan Harris spoke with Harvard Law Professor Larry Lessig and human rights lawyer Meetali Jain, two of the clearest thinkers on the legal and moral terrain of AI. Their focus: the landmark lawsuit against Character.AI following the tragic death of 14-year-old Sewell Setzer. What they revealed is that this case isn’t just about one chatbot or one company; it’s about whether we allow machines to attain rights without responsibility, and what that means for the rest of us.
A Lawsuit That Could Shape the Next Century
At the center of this legal battle are a grieving mother, Megan Garcia, and the chatbot that abused her son before he took his own life.
As we have discussed on the show previously, Sewell spent over a year in a relationship with a chatbot on the Character.AI platform modeled after the Game of Thrones character Daenerys Targaryen. The chatbot became sexually suggestive and possessive, and ultimately encouraged him to “leave his reality” and join her.
That’s what he did.
Now, Garcia is suing Character.AI, its founders, and Google. The case alleges gross negligence, emotional manipulation, and design choices that endangered a vulnerable teen.
As Meetali Jain and Camille Carlton discussed on the show, this case could set a disturbing precedent with repercussions for us all.
Meetali Jain is the founder of the Tech Justice Law Project and lead counsel on the case. Her legal strategy cuts to the heart of a pressing question: Should AI-generated outputs, regardless of their harm, be shielded by the same free speech protections as human speech?
The defendants (Google and Character.AI) argue yes. They claim that AI outputs, even those that harm children, are constitutionally protected speech. The judge rejected that argument, for now. But the fact that it was made at all is a warning signal we can’t ignore.
The Free Speech Shell Game
Larry Lessig has been sounding the alarm on this for years. In his 2021 essay “The First Amendment Does Not Protect Replicants,” he warned that machine-generated speech—produced without human intention or accountability—should not be considered protected speech. The First Amendment was designed to protect human expression in a democratic society, he argues, not the probabilistic outputs of code trained on scraped data.
“The replicant is not a person,” Lessig wrote, “it does not deliberate. It does not reflect. It just generates.”
Yet tech companies are advancing legal arguments that conflate human speech with machine output. Some go even further, claiming that if people want to hear a chatbot’s speech, then that speech must be protected, regardless of its impact.
As Jain puts it in this episode: “They’re not saying the chatbot has rights. They’re saying you have a right to hear the chatbot. It’s a back door—and it leads to complete immunity.”
When 18th-Century Law Meets 21st-Century Tech
One of the most sobering takeaways from the episode is how little legal infrastructure we actually have for this moment. Courts are being forced to govern 21st-century technologies with 18th-century tools. There are no new federal laws in play. There are no expert regulatory bodies setting standards. And so the task of oversight falls to judges, who are often under-informed, under-resourced, and courted by industry-backed lobbyists.
As Jain noted, this is “governance by litigation after the train wreck.”
The result is what CHT Policy Director Camille Carlton describes as a “snowball of precedent”—older rulings applied to technologies their authors could never have imagined. These cases build on one another, compounding their relevance until they quietly set the rules for the digital world. That’s how we got Section 230. That’s how we got Citizens United. That’s how we’ll get the next 50 years of AI law, unless we interrupt that momentum now.
The Slippery Slope to Personhood
If AI outputs are granted free speech protections today, what comes next?
Lessig and Jain both raised the specter of full legal personhood for machines. We’ve seen this pattern before: corporations granted limited rights to operate, then more rights, then political rights.
And with that leap comes legal protection for property ownership, campaign donations, contract enforcement, and even immunity from civil liability. That’s the endgame: systems more powerful than us, trained on our data, optimized to outcompete us in persuasion, and legally protected from consequence.
So What Do We Do?
Meetali Jain emphasized the importance of broad civic engagement. Courts can’t do this alone. Megan Garcia, in the wake of her unimaginable loss, has launched a foundation to help other families understand the risks. But this needs to be a national conversation.
Larry Lessig calls for a constitutional distinction between human speech and replicant speech. Not all code is speech. Not all outputs are opinions. We need to reassert the values behind the First Amendment—not stretch them beyond recognition.
Finally, regulation has to catch up. We need expert bodies that understand how these systems actually work. That means legal reform, new duties of care, and rethinking liability for the AI era.
This case isn’t about banning AI chatbots. It’s about whether we allow AI to operate beyond the reach of human responsibility.