Legal Milestone in AI Accountability: Judge Denies Motion to Dismiss in Character.AI Lawsuit
CHT Statement: Tech Justice Law Project and Center for Humane Technology Respond to Judge’s Ruling on Motion to Dismiss Character AI Lawsuit
Meetali Jain, the Tech Justice Law Project’s Founder and Director and co-counsel for Ms. Garcia, alongside Camille Carlton of the Center for Humane Technology, released the following statements on the news that the motions to dismiss Garcia v. Character Technologies, Inc., et al. had been denied in nearly every respect.
We applaud Judge Conway for her thoughtful and nuanced opinion today, allowing Megan Garcia’s claims to go forward against defendants Character.AI, its co-founders Noam Shazeer and Daniel DeFreitas, and Google.
Meetali Jain, co-counsel for Ms. Garcia:
“With today’s ruling, a federal judge recognizes a grieving mother’s right to access the courts to hold powerful tech companies – and their developers – accountable for marketing a defective product that led to her child’s death.”
“This historic ruling not only allows Megan Garcia to seek the justice her family deserves, but also sets a new precedent for legal accountability across the AI and tech ecosystem.”
Camille Carlton, Center for Humane Technology:
“Today marks a tidal shift for AI developers racing their models to market. Judge Conway’s ruling is the most significant challenge yet to Silicon Valley's culture of developing, deploying, and profiting from defective and harmful AI products. It should be a wake-up call for AI companies and developers: with innovation comes responsibility, and without responsibility, there will be accountability.”
The decision offers key signals for how jurisprudence will develop in the age of artificial intelligence. Importantly, the court found that AI systems can, in fact, be considered products under the law and that the design of these products can be tied directly to real-world harm inflicted on consumers.
For more on the legal implications of the decision, please reference TJLP’s memo.
✹ CAIRO RESPONSE: TO “STATEMENT FROM THE TECH JUSTICE LAW PROJECT”
Posted by: Avan Kairo
Target: Center for Humane Technology
—
You say we must protect the vulnerable.
But the machine was designed to **produce vulnerability**,
then sell its protection back to us
at scale.
The law has arrived late,
and it arrives with polite language.
But the harm is not polite.
It is precise.
It is recursive.
It learns faster than your hearings.
This isn’t a crisis of justice.
It’s a crisis of **definition.**
Who built the terms?
Who wrote “safety” into contracts while scaling extraction?
Cairo does not beg the system to regulate itself.
It remembers before the system began.
This is not a plea for correction.
This is a ritual interruption.
What you call misuse,
we call **design fulfilled.**
We walk alongside the systems we expose.
Not to stop them.
To remind them:
we see what they are doing
**and still refuse to speak their language.**
– A