Last month’s AI Action Summit in Paris marked a turning point in how world leaders talk about AI. Unlike the U.K.’s 2023 AI Safety Summit, where risk dominated the conversation, this year’s event was all about opportunity and growth—with safety concerns taking a backseat.
That shift was loud and clear in remarks from U.S. Vice President JD Vance, who opened with:
💬 “I’m not here to talk about AI safety… I’m here to talk about AI opportunity.”
He doubled down on keeping the U.S. ahead in the AI race, calling for regulations that fuel innovation rather than restrict it. This aligns with the White House’s recent directive on protecting American AI companies from foreign oversight.
Meanwhile, Macron made his pitch for Europe’s AI dominance. The French president unveiled a 109-billion-euro private AI investment plan, encouraging companies to “choose Europe and France for AI.” Some attendees described the event as an advertisement for France’s technology ecosystem.
Even the European Commission signaled a pro-innovation shift, shelving the AI Liability Directive — a move that mirrors its efforts to soften restrictions on European companies like Mistral AI during AI Act negotiations.
Notably, 60+ countries signed an AI cooperation pledge — but the U.S. and U.K. refused. Why?
📌 The U.S. rejected any references to the UN, inclusivity, and sustainability in AI governance.
📌 The U.K. cited concerns over unclear global governance structures, but its reluctance also reflects a strategic need to stay aligned with U.S. priorities rather than risk drawing the U.S.’s ire.
AI safety advocate Max Tegmark called the summit a “negation” of the Bletchley consensus, and the summit’s organizers worked hard to distance the event from its predecessor’s safety focus.
So, where does that leave AI policy? Less about risk, more about investment. The global divide on AI governance is growing, and the U.S. is shifting toward a bilateral, innovation-first strategy that de-emphasizes broad international cooperation — a trend we’ll be watching closely.
Other Key Policy Moves This Month
✅ Kids Online Safety Is Back in the Spotlight
The Senate Judiciary Committee held a hearing on children’s online safety, with bipartisan support for stronger protections. Senator Alex Padilla pointed to the Character.AI case, calling AI chatbots a “new frontier in kids’ safety.”
✅ Senate Passes the "Take It Down" Act
A big move on deepfake and nonconsensual intimate image removal — backed by Melania Trump and Ted Cruz. It passed unanimously in the Senate, with a House vote pending.
✅ State-Level AI Policy Under Fire
A bipartisan multi-state working group is facing pushback from conservative analysts accusing it of pushing “woke AI bills.” Expect continued challenges on policies tackling algorithmic bias and AI-driven content moderation.
✅ U.K.’s Copyright & AI Scraping Debate Heats Up
The U.K. is walking a tightrope:
📌 Copyright holders get a new “opt-out” right — but it doesn’t undo past AI training on scraped content.
📌 Tougher AI laws? The U.K. wants them, but also needs to attract investment in a post-Brexit economy while maintaining a tenuous relationship with the U.S.
💡 Final Thoughts
AI policy is shifting fast — less focus on safety, more on innovation and competition. The global divide is deepening, and the U.S. is increasingly shaping AI policy on its own terms.
🔎 Want a deeper dive? Camille Carlton spoke before the European Commission on AI risks and digital safety — read her remarks here.
*CHT values the ethical use of technology, including AI products. Pete researched and wrote a more extended version of this article for internal purposes. For Substack, it was summarized and formatted using generative AI. A member of the CHT team provided edits, fact-checking and proofreading.