OpenAI’s plans for “erotica” prove why trust isn’t enough.
Even insiders say AI companies are cutting corners. It’s time for laws that make safety non-negotiable.
This week brought another warning sign about where the AI race is headed.
The latest controversy around OpenAI’s plan to roll out erotica for adults on ChatGPT underscores a deeper truth: AI’s biggest problem isn’t technical, it’s structural. In a must-read New York Times op-ed, “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica,’” former OpenAI product-safety lead Steven Adler revealed how competitive pressure is pushing companies to sacrifice safety, arguing that the public cannot trust OpenAI’s assurances that it cares about safety. It’s a stark reminder that voluntary promises aren’t enough; we need accountability built into the system itself.
Our new explainer lays out one practical fix: apply product liability to AI, the same principle that made cars, food, and medicine safer. It’s a simple idea with profound potential to make the race about responsibility, not speed.
AI Product Liability: The Light-Touch Law with Heavyweight Impact
Evidence is mounting that AI products — from general-purpose chatbots to so-called “AI companions” — are already inflicting real harms on Americans.
Our latest podcast is an Ask Us Anything episode, where Tristan Harris and Aza Raskin take on your biggest questions about AI’s rapid acceleration. Why won’t the race slow down? What are companies really after with children’s usage? Could AGI already be here, quietly? Listen as Tristan and Aza unpack the forces driving this moment, and share how we can steer technology toward a more humane future.
Ask Us Anything 2025
It’s been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerab…
My kids were so excited to see Tristan’s appearance on The Daily Show, and they got all the way through the segment. If my 12- and 14-year-olds could follow this, it’s one you can share with friends and family who need an accessible, human introduction to AI’s risks.
And here is a backstage photo!
Watch the full Daily Show interview
And on GZERO World with Ian Bremmer, Tristan warned:
“We’re not in a race for technology—we’re in a race for who’s better at applying and governing exactly where in our society we want to deploy that technology.”
Thanks for joining us on this journey. If you care about reining in the harms of AI, please forward this email, or just one of the article links, to a policymaker, educator, or industry leader in your circle.
Cheers,
![[ Center for Humane Technology ]](https://substackcdn.com/image/fetch/$s_!uhgK!,w_40,h_40,c_fill,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png)
This is why strong regulations must be passed to make these companies accountable. My job is to help companies navigate the EU AI Act, probably the most ambitious piece of legislation on this subject, and I can say that most are very, very far from compliance.
I’m really glad to see CHT highlighting this issue. I’m both a software technologist and a psychotherapist, and I don’t think the public and regulators understand the problem we’re creating here.
To try to communicate the unanticipated dangers, I’ve recently launched a science fiction novel that tries to show the potential problems rather than tell them.
https://www.traces-of-therapy.com/p/perfect-illusion-chapters-1-2
I can explain more for those who are interested.