📣 Character.AI is Claiming First Amendment Protection For Its Chatbots
An important update on Megan Garcia’s lawsuit against the chatbot company Character.AI.
Today is the motion to dismiss hearing for Garcia v. Character Technologies, Inc. Folks might remember the lawsuit against Character.AI, which was filed last year after Megan Garcia’s 14-year-old son, Sewell Setzer, tragically died following interactions with Character.AI’s bots.
Now, Character.AI is asking the court to dismiss the case against it, arguing that the outputs from its chatbot are protected speech under the First Amendment. We’ve seen tech companies use the First Amendment as a liability shield when it comes to social media, but this time it’s a little bit different.
Here is what is at stake:
This case could set a worrying legal precedent with cascading consequences.
If Character.AI is successful, AI-generated, non-human, non-intentional outputs — like chatbot responses — could gain protection under the First Amendment.
It also raises a thorny legal question: If the responsibility for AI-generated outputs (and thus any resulting harm) lies with the AI bots themselves rather than the companies that developed them, who should be held liable for damages caused by these products? This issue could fundamentally reshape how the law approaches artificial intelligence, free speech, and corporate accountability.
I think many of us would agree that extending constitutional protections to chatbots is not part of the future that we want.
Note: CHT serves as a technical advisor to the legal team representing Megan Garcia against C.AI, Google, and its cofounders.
For comment, reach out to press@humanetech.com
Another "Tech Crazy Town" idea. Always pursuing the avoidance of responsibility.
If only we could ask the seven questions that Neil Postman proposed in his amazing 1997 lecture, "The Surrender of Culture to Technology," before releasing any new tech product, we'd be so much better off. So much ignored wisdom.
What is the problem that this new technology solves?
Whose problem is it?
What new problems do we create by solving this problem?
Which people and institutions will be most impacted by a technological solution?
What changes in language occur as the result of technological change?
Which shifts in economic and political power might result when this technology is adopted?
What alternative (and unintended) uses might be made of this technology?
https://youtu.be/hlrv7DIHllE?feature=shared
While it is obvious that chatbots should not qualify for First Amendment rights, in this particular case the company can still be held liable. Abetting suicide in any way is still a crime.