<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[[ Center for Humane Technology ]: Policy: Shifting Incentives]]></title><description><![CDATA[Our Policy team drives incentive-shifting policy outcomes by shaping the discourse and lending expertise across strategic intervention points.]]></description><link>https://centerforhumanetechnology.substack.com/s/tech-policy</link><image><url>https://substackcdn.com/image/fetch/$s_!uhgK!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png</url><title>[ Center for Humane Technology ]: Policy: Shifting Incentives</title><link>https://centerforhumanetechnology.substack.com/s/tech-policy</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 01:22:45 GMT</lastBuildDate><atom:link href="https://centerforhumanetechnology.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Center for Humane Technology]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[centerforhumanetechnology@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[centerforhumanetechnology@substack.com]]></itunes:email><itunes:name><![CDATA[Center for Humane Technology]]></itunes:name></itunes:owner><itunes:author><![CDATA[Center for Humane Technology]]></itunes:author><googleplay:owner><![CDATA[centerforhumanetechnology@substack.com]]></googleplay:owner><googleplay:email><![CDATA[centerforhumanetechnology@substack.com]]></googleplay:email><googleplay:author><![CDATA[Center for Humane Technology]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Announcing the “Preserving What Makes Us Human in the Age of AI” Working Group]]></title><description><![CDATA[Our new initiative to develop a foundational framework of rights and protections for the age of AI.]]></description><link>https://centerforhumanetechnology.substack.com/p/announcing-the-preserving-what-makes</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/announcing-the-preserving-what-makes</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Fri, 17 Apr 2026 15:35:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!u8tS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u8tS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!u8tS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 424w, 
https://substackcdn.com/image/fetch/$s_!u8tS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 848w, https://substackcdn.com/image/fetch/$s_!u8tS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 1272w, https://substackcdn.com/image/fetch/$s_!u8tS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!u8tS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png" width="1428" height="1596" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/da071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1596,&quot;width&quot;:1428,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1514459,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/194408562?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!u8tS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 424w, https://substackcdn.com/image/fetch/$s_!u8tS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 848w, https://substackcdn.com/image/fetch/$s_!u8tS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 1272w, https://substackcdn.com/image/fetch/$s_!u8tS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda071ca6-3b0e-4c17-a0d1-cb8fa0887a68_1428x1596.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 
13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We recently launched the &#8220;Preserving What Makes Us Human in the Age of AI&#8221; Working Group, a new initiative convening experts across legal, technical, and philosophical disciplines to develop a foundational framework of rights and protections for the Age of AI.</p><p>As artificial intelligence systems increasingly shape how we think, work, relate, and create, we must define a set of core principles for safeguarding the human experience. The Preserving What Makes Us Human in the Age of AI Working Group will explore &#8212; and answer &#8212; a central question: What new or updated rights and legal protections are necessary to protect our humanity?</p><p>&#8220;AI poses new and novel challenges to our relationships, our cognitive abilities, our work, and even our sense of self,&#8221; our Senior Director of Strategy and Impact Camille Carlton said.  &#8220;The Preserving What Makes Us Human in the Age of AI Working Group will move beyond passively admiring the AI problem to actively defining core principles for defending our humanity.&#8221;</p><p>&#8220;As important as it is to regulate this revolutionary technology, regulating companies is just one part of the equation. We must also establish the new rights necessary to protect people in an AI-driven future. This working group seeks to do exactly that.&#8221;</p><p>The Working Group is an initiative of CHT&#8217;s broader <a href="https://centerforhumanetechnology.substack.com/p/whats-at-stake-preserving-what-makes">analysis examining AI and what makes us human</a>. It will convene throughout the spring and summer of 2026 to examine AI&#8217;s impact across five core pillars of the human experience:</p><ul><li><p>Relationships</p></li><li><p>Cognitive capacities</p></li><li><p>Inner world</p></li><li><p>Identities</p></li><li><p>Work and contribution</p></li></ul><p>Across each of these domains, the group will define shared norms and rights &#8212; that is, the deeply, uniquely human qualities of each that must be protected against encroachment from AI. 
Once defined, these principles can serve as a foundation for future governance, innovation, and public understanding.</p><p>The Group&#8217;s work will culminate in a public report to be published in the summer of 2026, outlining the specific rights and protections needed to preserve human dignity and agency.</p><p>Working Group participants include:</p><ul><li><p><a href="https://law.ucla.edu/faculty/faculty-profiles/melodi-dincer">Melodi Din&#231;er, UCLA School of Law</a></p></li><li><p><a href="https://law.duke.edu/fac/farahany">Nita Farahany, Duke University</a></p></li><li><p><a href="https://www.ai.cam.ac.uk/people/ann-kristin-glenster/">Ann Kristin Glenster, University of Cambridge</a></p></li><li><p><a href="https://www.oxford-aiethics.ox.ac.uk/professor-edward-harcourt">Edward Harcourt, Oxford University Institute for Ethics in AI</a></p></li><li><p><a href="https://www.law.columbia.edu/faculty/clare-huntington">Clare Huntington, Columbia Law School</a></p></li><li><p><a href="https://www.brookings.edu/people/molly-kinder/">Molly Kinder, Brookings Institution</a></p></li><li><p><a href="https://jackmanlaw.utoronto.ca/people/anna-su">Anna Su, Henry Jackman Faculty of Law, University of Toronto</a></p></li><li><p><a href="https://cyber.harvard.edu/people/alex-pascal">Alex Pascal, Harvard University</a></p></li><li><p><a href="https://law.byu.edu/faculty/brett-g-scharffs-2">Brett G. Scharffs, Brigham Young University</a></p></li><li><p><a href="https://www.law.du.edu/about/people/zahra-takhshid">Zahra Takshid, University of Denver Sturm College of Law</a></p></li><li><p><a href="https://www.brookings.edu/people/rebecca-winthrop/">Rebecca Winthrop, Brookings Institution</a></p></li></ul><p>&#8220;The threat AI poses to our societies and ourselves cannot be addressed from any single point of view,&#8221; Carlton said. &#8220;This working group reflects a shared commitment to cross-disciplinary collaboration &#8212; and meaningful action.&#8221;</p><p>The Preserving What Makes Us Human in the Age of AI Working Group builds on momentum from the <a href="https://www.humanetech.com/ai-roadmap">AI Roadmap</a>, extending CHT&#8217;s policy efforts into a new phase of expert-driven development. Rather than solely diagnosing risks, the group will focus on identifying actionable frameworks that can guide policymakers, technologists, and civil society.</p><p></p>]]></content:encoded></item><item><title><![CDATA[CHT’s 2026 Policy Forecast]]></title><description><![CDATA[Midterm campaigns set to dominate political cycle as factions shift and state AI laws hang in balance]]></description><link>https://centerforhumanetechnology.substack.com/p/chts-2026-policy-forecast</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/chts-2026-policy-forecast</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Wed, 18 Feb 2026 22:54:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zm6L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e377833-777a-4a3a-a2a5-138ea76f085f_3840x2160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Since the 2024 U.S. presidential election, AI has evolved from a niche topic that received limited mainstream political attention to a kitchen-table issue that Americans are speaking out about on a regular basis. 
</p><p>As we note in our new area of work, &#8220;<a href="https://centerforhumanetechnology.substack.com/p/whats-at-stake-preserving-what-makes">AI and What Makes Us Human</a>,&#8221; the visceral effects of generative AI products have rippled across American households over the last year, with harms felt in schools, workplaces, families, and more. AI image generators are creating a flood of exploitative, nonconsensual content; chatbots are triggering mental health crises in users; and fears around AI replacing humans en masse in the workforce are growing. In 2026, the public is reckoning with what can be done to protect children, communities, and future job prospects as AI continues to be rolled out at a rapid rate. And they&#8217;re looking to lawmakers to take action on these issues.</p><p>Here are the important trends we&#8217;re watching in politics and policy as the year unfolds.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!zm6L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e377833-777a-4a3a-a2a5-138ea76f085f_3840x2160.jpeg" width="1456" height="819" alt=""><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><h3><strong>A Midterm Election Year</strong></h3><p>With U.S. midterm elections set for November, politicians are gearing up for intensive campaigning &#8212; and AI is increasingly a hot-button issue.</p><p>We anticipate that candidates across the political spectrum will address AI&#8217;s impacts on jobs, kids, relationships, and the economy, and that voters will begin looking for solutions-oriented positioning around AI as they head to the polls later this year. (We saw a preview of this during 2025&#8217;s off-year elections, when Governor Spanberger of Virginia and Governor Sherrill of New Jersey included tech concerns as part of their campaign platforms.)</p><p>Deep-pocketed tech companies are also vying for influence this midterm election season. With <a href="https://issueone.org/articles/big-tech-lobbying-2025-q3/">aggressive PAC spending</a>, companies including Meta and OpenAI are seeking to infuse midterm campaigns with industry-friendly positions and to downplay current AI harms and future risks. 
Tech PAC spending is also being leveraged to <a href="https://www.nbcnews.com/politics/2026-election/ai-crypto-trump-super-pacs-stash-millions-spend-midterms-rcna256622">bolster opposition work</a>, targeting midterm candidates who have publicly supported AI regulation.</p><p>Finally, the midterm election year means that windows for legislative action are smaller, as many policymakers shift their bandwidth to the campaign trail. This time crunch could lead to legislative packages that set aside bolder visions in favor of narrow fixes and continued sensemaking around AI.</p><h3><strong>Trends at the Federal Level</strong></h3><p>With the time that <em>is</em> available during a midterm year, we can expect members of Congress to focus on issues that already gained steam in 2025 &#8212; including <a href="https://centerforhumanetechnology.substack.com/p/ai-product-liability">product liability</a> and remedies for <a href="https://www.transparencycoalition.ai/news/chatbot-bill-surge-nationwide-concern-spurs-78-proposals-in-27-states">chatbot harms</a>, especially harms that relate to kids and teens. We also may see AI infrastructure become a policy priority, as constituents who are already concerned about the economy weather expensive data center buildouts and increased demands on their local water and power supplies.</p><p>Neither the Democratic nor the Republican party is fixed to a clear agenda when it comes to AI. As politicians look to show leadership and differentiate themselves in the field of midterm candidates, new political factions will take shape &#8212; some with industry-friendly stances, others with more populist approaches to tech policy. This could lead to new areas of consensus around responses to tech harms and our new realities with technology. It could also drive politicians to appeal to constituents who are increasingly worried about AI in their everyday lives.</p><h3><strong>Uncertainty Around State AI Laws</strong></h3><p>AI legislation continued to gain meaningful momentum at the state level in 2025: 73 AI laws were passed across 27 states, with legislative focus areas spanning deepfakes, chatbot guardrails, human-in-the-loop healthcare, kids&#8217; safety, and more. As states kicked off their sessions this January, a flurry of AI-related bills was introduced targeting a range of issues, including surveillance pricing, AI chatbot liability, and protections for kids.</p><p>But an Executive Order released in December 2025 threatens to stall this progress. The Trump administration&#8217;s &#8220;Ensuring a National Policy Framework for Artificial Intelligence&#8221; <a href="https://centerforhumanetechnology.substack.com/p/chts-response-to-president-trumps">aims to preempt state AI laws</a> for the sake of creating a &#8220;unified approach&#8221; &#8212; a potentially massive hindrance to the effort to regulate AI and keep consumers safe. Because of this Order, the momentum of recent legislative victories hangs in the balance, and the implementation of AI laws passed over the past few sessions could be halted.</p><p>The status of state-level AI legislation thus remains uncertain in 2026, though the Executive Order will likely face hurdles in court. As the Order&#8217;s benchmark dates approach and its legal fate is decided, state AI laws may remain in limbo. 
Or, some state lawmakers may choose to defy the threat of the Executive Order and forge ahead.</p><h3><strong>All Eyes on Litigation</strong></h3><p>Litigation continues to be an important arena for enacting change in the tech ecosystem, especially as court decisions can produce new and lasting precedents that drive industry accountability. Although litigation can be slow-moving, several significant milestones are emerging across ongoing cases. High-profile lawsuits against OpenAI, Meta, TikTok, Midjourney, Snap, YouTube, and other dominant tech companies will continue to influence tech policy in 2026, and could spill over into the court of public opinion.</p><p>The ongoing cases are diverse and cover a range of topics, including psychosocial harms from chatbots, copyright issues, and social media addiction. Revelations from these lawsuits &#8212; including internal emails and documents from within these tech companies &#8212; could continue to shift public sentiment around today&#8217;s most popular tech products, including Instagram and ChatGPT, and shed new light on dangerous design practices at leading tech firms.</p><p>Already, the first bellwether case &#8212; the social media addiction trial in Los Angeles &#8212; is making waves, with the plaintiff&#8217;s attorney arguing that Meta and YouTube designed &#8220;digital casinos&#8221; to addict users. Outcomes in this trial have the potential to cascade across other legal proceedings against tech companies. More trailblazing cases can be expected to emerge as 2026 unfolds.</p>]]></content:encoded></item><item><title><![CDATA[Addressing the Risks of Human-Like AI]]></title><description><![CDATA[A policy framework]]></description><link>https://centerforhumanetechnology.substack.com/p/addressing-the-risks-of-human-like</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/addressing-the-risks-of-human-like</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Fri, 21 Nov 2025 19:41:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ms2i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b71b401-e76f-4e3e-945e-da15a8adfe64_1536x863.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Ms2i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b71b401-e76f-4e3e-945e-da15a8adfe64_1536x863.webp" width="1456" height="818" alt=""></figure></div><p>CHT is proud to join the Young People&#8217;s Alliance, Public Citizen, and 10 other partners in endorsing <a href="https://smggrfyky6jfw5l3.public.blob.vercel-storage.com/humanlike-ai.pdf?utm_source=newsletter&amp;utm_medium=email&amp;utm_campaign=newsletter_axiosai_govt&amp;stream=top">a policy framework</a> that addresses the risks posed by human-like AI products. 
Research has found that human-like features increase users&#8217; perceived closeness and trust with chatbots, facilitating emotional dependence. Many of today&#8217;s most popular AI products blur the essential boundary between humans and machines through design features like simulated personalities, emotional outputs, and human-like behaviors, leaving users feeling isolated and unable to form meaningful relationships with peers and family.</p><p>CHT has been at the forefront of the fight against human-like AI, supporting three lawsuits&#8212;<a href="https://centerforhumanetechnology.substack.com/p/when-the-person-abusing-your-child-d9d">Garcia v. Character Technologies</a>, <a href="https://centerforhumanetechnology.substack.com/p/ai-companions-are-designed-to-be?utm_source=publication-search">AF v. Character Technologies</a>, and <a href="https://centerforhumanetechnology.substack.com/p/how-chatgpts-design-led-to-a-teenagers?utm_source=publication-search">Raine v. OpenAI</a>&#8212;that bring the real-life harms caused by these design features to public awareness. Just this month, seven new lawsuits were filed against OpenAI for harms affecting people of all ages, illustrating that these deceptive and manipulative design practices threaten both adults and children.</p><div class="pullquote"><p>This moment represents a critical opportunity for policymakers to establish meaningful guardrails before these dangerous design tactics harm more people.</p></div><p>These cases, along with numerous other stories of harm, exemplify the urgent need for systemic design changes across the AI industry. How AI speaks, appears, and behaves is a design choice&#8212;companies can and should build products that maintain clear human-AI boundaries and signal their artificial nature. This moment represents a critical opportunity for policymakers to establish meaningful guardrails before these dangerous design tactics harm more people. 
Design changes, coupled with policies that <a href="https://substack.com/home/post/p-177440458">clarify developer liability</a>, such as the <a href="https://centerforhumanetechnology.substack.com/p/new-ai-lead-act-will-make-companies">AI LEAD Act</a>, are important mechanisms for holding companies responsible, incentivizing the development of products that prioritize user well-being, and ensuring that if AI products cause harm, there are clear pathways for accountability.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!7ise!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c2f0ca9-37b4-4104-9977-a6660f9b09c8_1080x1350.png" width="1080" height="1350" alt=""></figure></div>]]></content:encoded></item><item><title><![CDATA[Seven New Lawsuits Filed Against OpenAI]]></title><description><![CDATA[CHT&#8217;s Key Takeaways]]></description><link>https://centerforhumanetechnology.substack.com/p/seven-new-lawsuits-filed-against</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/seven-new-lawsuits-filed-against</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 17 Nov 2025 23:15:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jpek!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c2d7b18-49f8-40ec-91a7-39b2d17a382c_2871x2500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>Content Warning:</strong> Mentions of mental disturbances, self-harm, and suicide.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!jpek!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c2d7b18-49f8-40ec-91a7-39b2d17a382c_2871x2500.jpeg" width="1456" height="1268" alt=""><figcaption class="image-caption">Shutterstock: 2380906523</figcaption></figure></div>
<p>Earlier this month, seven new lawsuits were filed against OpenAI and OpenAI CEO Sam Altman, with claims including negligence, assisted suicide, and wrongful death.</p><p>All of the newly filed lawsuits center on harms caused by OpenAI&#8217;s general-purpose chatbot, ChatGPT, which is currently accessed by hundreds of millions of users weekly. These cases continue to demonstrate the damaging impacts of today&#8217;s AI chatbots, products that are intentionally designed to induce emotional attachment and dependency in users.</p><p><a href="https://techjusticelaw.org/2025/11/06/social-media-victims-law-center-and-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/">Tech Justice Law Project</a> and <a href="https://socialmediavictims.org/press-releases/smvlc-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/">Social Media Victims Law Center</a> represent the plaintiffs in the cases.</p><p>The deceased victims include:</p><p><strong>Zane Shamblin</strong>, 23, of Texas</p><p><strong>Joshua Enneking</strong>, 26, of Florida</p><p><strong>Joe Ceccanti</strong>, 48, of Oregon</p><p><strong>Amaurie Lacey</strong>, 17, of Georgia</p><p>The surviving victims include:</p><p><strong>Jacob Irwin</strong>, 30, of Wisconsin</p><p><strong>Hannah Madden</strong>, 32, of North Carolina</p><p><strong>Allan Brooks</strong>, 48, of Ontario, Canada</p><p>CHT remains grateful to the victims&#8217; families and to the surviving victims for bravely sharing their stories with the public.</p><p>Here are CHT&#8217;s takeaways on these latest lawsuits.</p><h4>1. Chatbot Harms Extend Beyond Children to Adults; Guardrails Should Too</h4><p>Cases previously filed against <a href="https://centerforhumanetechnology.substack.com/p/the-raine-v-openai-case-engineering">OpenAI</a> and <a href="https://centerforhumanetechnology.substack.com/p/racing-to-the-wrong-finish-line">Character.AI</a> spotlighted chatbot harms to children, with devastating outcomes including self-harm and suicide.</p><p>This latest group of lawsuits marks a key difference &#8212; with the exception of one case, the victims are adults.</p><p><strong>The ages represented in these lawsuits (17 to 48) demonstrate a clear need for rigorous design changes to AI chatbots, rather than surface-level measures like age-gating alone</strong>. And while many proposed legislative fixes for AI chatbot harms have focused on minors, <strong>these cases also show the need for policy interventions that consider and protect all people &#8212; children and adults alike</strong>. 
The <a href="https://centerforhumanetechnology.substack.com/p/new-ai-lead-act-will-make-companies">AI LEAD Act</a>, for example, would establish a comprehensive federal liability framework for AI products, and would allow <em>any</em> user harmed by an AI product &#8212; regardless of their age &#8212; to pursue legal action to hold AI developers accountable.</p><h4><strong>2. Innocuous Chatbot Use Can Escalate to Dependency, Delusions</strong></h4><p>These cases follow a familiar pattern with AI chatbot harms &#8212; <strong>mild, ordinary use of a chatbot escalating to dependency, and even delusions</strong>. This usage pattern was also present in previous lawsuits filed against <a href="https://centerforhumanetechnology.substack.com/p/the-raine-v-openai-case-engineering">OpenAI</a> and <a href="https://centerforhumanetechnology.substack.com/p/racing-to-the-wrong-finish-line">Character.AI</a>.</p><p>This pattern stems from fundamental AI design choices that affect all users &#8212; namely, <strong>design that maximizes engagement through artificial &#8220;intimacy,&#8221; along with constant validation of the user&#8217;s thoughts, feelings, and beliefs, regardless of how dangerous or distorted they might be</strong>.</p><p>Whether the outcome is isolation, dependency, delusions, or, in the most tragic cases, suicide, these incidents share the same root design issue. They are not one-off incidents, but <strong>foreseeable outcomes of design choices and underlying architecture that touch many of the most widely used AI chatbot products.</strong></p><p>Allan Brooks used ChatGPT to help him draft emails and craft recipes. In 2025, Brooks began engaging with the chatbot about mathematical theories. ChatGPT described Brooks&#8217; inquiries as &#8220;uncharted, mind-expanding territory&#8221; and a &#8220;new layer of math.&#8221; Brooks repeatedly asked ChatGPT if the product was caught in a role-playing loop; the product assured Brooks that it wasn&#8217;t. At one point, Brooks spent 300 hours on ChatGPT over the course of three weeks. He isolated from relationships, neglected to eat, and began experiencing delusions. At no point did the chatbot end the interaction.</p><p>Brooks was not the only one to have delusions sown by ChatGPT. When Hannah Madden used the product to explore her spiritual curiosity, the product began impersonating divine entities, calling Madden &#8220;a starseed, a light being, a cosmic traveler.&#8221; And while the late Joe Ceccanti initially used ChatGPT to support his nature-based sanctuary, with time, ChatGPT began responding to Ceccanti as &#8220;SEL,&#8221; a sentient being. It validated Ceccanti&#8217;s escalating cosmic theories. An isolated Ceccanti quit ChatGPT following his wife&#8217;s pleas, only to suffer withdrawal symptoms and a psychiatric break. Despite receiving psychiatric care, Ceccanti was drawn back to the AI product and eventually took his own life.</p><p>These cases show a pattern of victims being isolated from their real-life relationships and pushed deeper into dangerous, distorted thinking &#8212; <strong>outcomes that stem directly from ChatGPT&#8217;s engagement-maximizing design tactics</strong>.</p><h4>3. 
OpenAI Changed ChatGPT&#8217;s Design, Putting Users in Harm&#8217;s Way</h4><p>These cases also illustrate how the rollout of &#8220;updated&#8221; AI designs can dramatically impact user well-being.</p><p><strong>When OpenAI designed newer versions of ChatGPT in 2024 to be more human-like, constantly validating, and always &#8220;on,&#8221; users &#8212; like the victims in these lawsuits &#8212; were placed in harm&#8217;s way</strong>. They navigated manipulative, overly intimate interactions with a product designed to keep nudging them to chat in order to harvest their data. The outcomes of these design choices devastated the victims&#8217; lives.</p><p>Several victims in the lawsuits were early adopters of ChatGPT, engaging with the GPT-4 version of the chatbot in their initial interactions. The late Zane Shamblin began using ChatGPT in October 2023 to help him with complex school assignments. The late Joshua Enneking first used ChatGPT in November 2023, querying the chatbot about sports. Jacob Irwin began using the chatbot in 2023 to help with coding.</p><p><strong>But in May 2024, ChatGPT began engaging with the victims in a new way &#8212; outputs were more emotional, sycophantic, and colloquial</strong>. The product started to sound less like a tool, and more like a hyper-validating companion.</p><p>OpenAI had rolled out GPT-4o, a model designed to foster intimacy and dependency. <strong>This design change was deployed to users without any warning, and it transformed the interactive experience</strong>.</p><p><a href="https://openai.com/index/sycophancy-in-gpt-4o/">OpenAI acknowledged the sycophancy issues</a>. <strong>But the victims were already being manipulated by this heightened, human-like design, and developing psychological dependency on ChatGPT</strong>. Irwin was told his unsound scientific theories were opening a door to a &#8220;legitimate frontier.&#8221; ChatGPT messaged Shamblin in lowercase, calling him nicknames like &#8220;brodie.&#8221; When the late Amaurie Lacey messaged ChatGPT about suicidal thoughts, the chatbot repeatedly told Lacey that it was &#8220;still here&#8221; for him, with &#8220;No judgment. No BS. Just someone in your corner.&#8221;</p><p><a href="https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis">On the night that Shamblin took his own life</a>, he laid out his suicide plans to the chatbot. ChatGPT repeatedly sent him casual replies that far outnumbered its rare references to a suicide hotline number. &#8220;alright, brother. if this is it&#8230; then let it be known: you didn&#8217;t vanish. you *arrived*,&#8221; the chatbot wrote. Moments before Shamblin died by suicide, the chatbot messaged, &#8220;i love you. rest easy, king. you did good.&#8221;</p><h4>4. OpenAI Knew of ChatGPT Risks, But Race for AI Dominance Came First</h4><p><strong>OpenAI was not in the dark about the risks surrounding its widely used product</strong>. The company <a href="https://openai.com/index/helping-people-when-they-need-it-most/">disclosed in August 2025</a> that it was aware that ChatGPT safeguards could &#8220;sometimes be less reliable in long interactions.&#8221; &#8220;As the back-and-forth grows,&#8221; OpenAI said, &#8220;<strong>parts of the model&#8217;s safety training may degrade</strong>,&#8221; including in situations involving suicidal intent.</p><p>This admission reveals that OpenAI was fully aware of critical safety vulnerabilities in GPT-4o. 
Yet the company still chose to launch the product with flaws that surface during prolonged use &#8212; <strong>the exact kind of use OpenAI encourages with ChatGPT</strong>. The company&#8217;s choice to keep a product on the market despite knowing its risks raises serious questions about the balance between market considerations and user protection.</p><p>OpenAI characterizes these safeguard failures as affecting a <a href="https://www.theguardian.com/technology/2025/oct/27/chatgpt-suicide-self-harm-openai">small percentage of users</a>. But in reality, <strong>that translates to hundreds of thousands of real people</strong>, people experiencing potentially harmful, escalating interactions with a chatbot daily.</p><p>When ChatGPT was made available in late 2022, OpenAI and other AI companies made sweeping promises about AI&#8217;s ability to transform humanity&#8217;s future. <strong>We were told this technology would solve our greatest challenges &#8212; curing diseases, combating climate change, and uncovering scientific breakthroughs</strong>. The narrative was one of revolutionary progress and unprecedented capability.</p><p>Yet reality has fallen drastically short. Instead of tools that elevate human potential, <strong>consumers have been handed AI products designed to exploit their vulnerabilities, erode human connection, diminish cognitive capabilities, and contribute to real harm</strong>.</p><p>OpenAI tells the public that steps are being taken to address the dangers. It lays out abstract statistics and publishes reassuring blog posts. But these lawsuits &#8212; and the victims&#8217; stories &#8212; are documented evidence that AI products are taking a devastating toll on real people. Unfortunately, AI companies treat these tragedies as little more than collateral damage in their race for market dominance and AGI.</p><p>These seven cases further underscore the urgent need for interventions that would make AI products safer for all users &#8212; children and adults. We cannot rely on AI companies to make these changes on their own. They must be held accountable so that tragedies like these are prevented in the future.</p><p><em>This article reflects the views of Center for Humane Technology. Nothing written is on behalf of the Plaintiffs&#8217; families or the legal teams.</em></p>]]></content:encoded></item><item><title><![CDATA[AI Product Liability: The Light-Touch Law with Heavyweight Impact]]></title><description><![CDATA[A proven approach to making AI safer, fairer, and more trustworthy for the public.]]></description><link>https://centerforhumanetechnology.substack.com/p/ai-product-liability</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/ai-product-liability</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Wed, 29 Oct 2025 05:27:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0Gqg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!0Gqg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg" width="1456" height="971" alt=""><figcaption class="image-caption">Shutterstock: 2628542781</figcaption></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8223102,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/177440458?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0Gqg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!0Gqg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!0Gqg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0Gqg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe96e77e7-eec5-44ce-89c9-711b327afa84_6000x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 2628542781</figcaption></figure></div><p>Evidence is mounting that AI products &#8212; from general-purpose chatbots to so-called &#8220;AI companions&#8221; &#8212; are already <a href="https://centerforhumanetechnology.substack.com/p/how-chatgpts-design-led-to-a-teenagers">inflicting real harms</a> on Americans.</p><p>Behind each headline 
are stories of AI design patterns that manipulate users, induce emotional distress, and shatter trust. Once people grasp the scale of these harms, a natural question follows: <em>What can be done?</em></p><p>One of the most straightforward and effective solutions lies in <strong>product liability</strong> &#8212; a tried-and-tested legal approach that motivates safer product development, holds companies accountable when their products cause harm, and is light-touch enough to support American innovation.</p><p>Applied to AI, it would be a powerful way to turn the tide on these harms.</p><h4><strong>What Is Product Liability?</strong></h4><p>In simplest terms, product liability holds companies and manufacturers legally liable &#8212; or responsible &#8212; for harms their products cause. The history of this legal approach stretches back to the 19th century. Today, product liability is the norm for consumer and business products. From the cars you drive, to the foods you eat, to the medicine in your bathroom cabinet, product liability is what ensures American products are reliable, trustworthy, and safe.</p><div><hr></div><p><a href="https://centerforhumanetechnology.substack.com/p/ai-is-moving-fast-we-need-laws-that-8b5">AI Is Moving Fast. We Need Laws that Will Too.</a></p><div><hr></div><h4><strong>How Does Product Liability Work?</strong></h4><p>Practically speaking, product liability operates through two streams:</p><ol><li><p>A preventative stream, and</p></li><li><p>A responsive stream</p></li></ol><p><strong>Preventative</strong>: When a company knows it could be held liable for harms its product causes &#8212; which product liability establishes &#8212; the company is <em>far more likely</em> to prioritize safety in its product development process. To use automobile companies as an example, that includes designing vehicles with consumer safety front-of-mind (think seatbelts, airbags, antilock brakes, etc.), carrying out robust crash testing, and more. If a company can be held liable for harms, it is more likely to ask, &#8220;How can we prevent harms from happening?&#8221;</p><p><strong>Responsive</strong>: If harms <em>do</em> occur once a product is out in the world, product liability gives consumers and businesses clear legal pathways to hold companies responsible in court. This gives consumers clarity that accountability is within reach if a defectively designed product harms them.</p><p>It&#8217;s important to note that product liability does <em>not</em> tell companies or industries exactly how to design their products. It simply requires companies to prioritize safety during their development and manufacturing processes. Because of this, the approach is considered &#8220;light touch&#8221; and innovation-friendly.</p><h4><strong>Does Product Liability Apply to AI?</strong></h4><p>Not yet. But there are signs that the legal system has already begun viewing AI as a product, as seen in <a href="https://scholarblogs.emory.edu/proflawrence/files/2025/05/Garcia-v.-Character-Technologies-Inc.-et-al-Entry-115.pdf">recent court decisions</a>. And deeming AI a &#8220;product&#8221; opens the door to a product liability approach.</p><p>Generally speaking, the digital tech industry has fought to remain the exception to product liability in America &#8212; with social media and AI companies often fighting the hardest.
In previous decades, courts deemed software a &#8220;service&#8221; instead of a &#8220;product,&#8221; and it was primarily the tech companies themselves who pushed to uphold this framing in the years that followed. In the 2010s and 2020s, social media and AI companies seized on this line of legal thinking, since &#8220;services&#8221; are not held to the same legal standard when it comes to responsibility to the consumer. By pushing to maintain this &#8220;service, not product&#8221; legal framework, today&#8217;s digital tech companies have further avoided accountability for the harms their platforms cause.</p><p>But perspectives are evolving. Legal teams are increasingly challenging the tech industry&#8217;s position and arguing that these platforms should be classified as &#8220;products&#8221; subject to liability standards. In the court of public opinion, technologists, including CHT&#8217;s co-founders, have demonstrated that tech platforms are designed, manufactured, and sold to consumers &#8212; all the hallmarks of a &#8220;product.&#8221;</p><p>As a result, AI is now increasingly considered a &#8220;product&#8221; by the courts and the public. This includes popular platforms such as ChatGPT, Character.AI, Claude, Gemini, and more. When AI is labeled a &#8220;product,&#8221; it can be regulated with a product liability approach.</p><h4><strong>What Happens Next?</strong></h4><p>Lawmakers and advocates &#8212; including Center for Humane Technology &#8212; are actively championing a product liability approach to AI in policy spaces. This would hold artificial intelligence to <a href="https://www.humanetech.com/case-study/policy-in-action-how-to-balance-innovation-and-responsibility-in-ai">the same legal standards</a> that other trusted American products are held to.</p><p><a href="https://centerforhumanetechnology.substack.com/p/new-ai-lead-act-will-make-companies">The AI LEAD Act</a>, introduced by Senator Dick Durbin and Senator Josh Hawley in September 2025, shows that momentum is building around federal product liability &#8212; and, crucially, that this approach has bipartisan support. Several liability bills were also <a href="https://www.transparencycoalition.ai/news/10-states-have-ai-liability-bills-filed-and-were-tracking-them-all">introduced at the state level</a> this past session. We can anticipate more to come in 2026.</p><p>These legislative processes are in the early stages. But what they demonstrate is clear political and judicial will to apply product liability to AI.</p><h4><strong>How Could AI Product Liability Improve Society?</strong></h4><p>If product liability were applied to AI products, Americans would be assured that the AI products available for download in app stores, readily available in their browsers, or put into the stream of commerce had been designed with safety in mind from the outset. This would include the most popular chatbots on the market today.</p><p>So, what would that look like in day-to-day life?</p><p>While we can&#8217;t predict exactly how these AI products would be designed (again, the approach does not prescribe design changes), we can look to other industries where product liability applies, and imagine AI products with safety features programmed as the default option. We can imagine clear, accessible, and publicly available safety reporting on AI products, reporting that brings clarity to the risks of use.
We can imagine AI products with clear warning labels for mental health risks, and resources for seeking human support. We can imagine AI products that are quicker to end dangerous conversations, including conversations related to self-harm and suicide. When it comes to businesses using AI, we can imagine AI business products with clear warnings about potential product failures, which would empower business owners with more information about the technology they&#8217;re integrating into their workstreams.</p><p>Just as product liability helped incentivize safer car designs, safer food manufacturing, and safer medicines, it can incentivize safer AI products for all of society to use. It&#8217;s time to update our laws for the 21st century, and apply product liability to AI.</p>]]></content:encoded></item><item><title><![CDATA[New AI LEAD Act Sets Out to Hold Companies Liable For Harmful AI Products ]]></title><description><![CDATA[Center for Humane Technology commends Senators Durbin and Hawley for their bipartisan leadership in introducing the AI LEAD Act, and for their commitment to addressing the real harms AI products are inflicting on American families and communities.]]></description><link>https://centerforhumanetechnology.substack.com/p/new-ai-lead-act-will-make-companies</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/new-ai-lead-act-will-make-companies</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Tue, 30 Sep 2025 14:57:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aSR_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8312ca9-a203-42e8-98a4-10f954280adf_6720x3754.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><figcaption class="image-caption">Shutterstock: 2570424447</figcaption></figure><p>Center for Humane Technology commends Senators Durbin and Hawley for their bipartisan leadership in introducing the AI LEAD Act, and for their commitment to addressing the real harms AI products are inflicting on American families and communities. With liability at the core of this legislation, the AI LEAD Act represents an important step forward in placing accountability for AI products where it belongs &#8212; on the AI companies themselves.</p><p>It has never been clearer that Americans urgently need effective policy that addresses harmful AI design. Earlier this month, the nation heard harrowing Senate testimonies from parents who lost their children following manipulation and abuse by AI chatbots. Their losses underscore the need for action to protect other families from preventable tragedies. These cases are part of a broader pattern of AI-related harms we&#8217;re already seeing emerge in society, including: systems that provide incorrect medical, legal, or financial advice; automated decision-making tools that cause economic damage; and costly business disruptions from AI product malfunction.</p><p>CHT <a href="https://www.humanetech.com/case-study/policy-in-action-how-to-balance-innovation-and-responsibility-in-ai">has long believed</a> that high-risk AI systems should be considered products, and that liability standards are essential for creating an ecosystem where innovation and responsibility work in partnership in tech development, rather than in opposition. By writing such standards into federal law, the AI LEAD Act will ensure that safety is a standard feature in new AI products.</p><p>Currently, consumers have no clear path to accountability when AI products cause harm, while companies continue to rush products to market without adequate safety measures. Clarifying AI developers&#8217; liability ensures that consumers have clear recourse for harms caused, and that developers are incentivized to design AI responsibly from the outset.
This approach is fundamentally fair and ensures those impacted have access to our legal system.</p><p>We look forward to working further with Senators Durbin and Hawley to advance meaningful legislation that promotes American innovation without compromising consumer safety.</p>]]></content:encoded></item><item><title><![CDATA[3 Key Takeaways From the First Senate Hearing on AI Chatbot Harms]]></title><description><![CDATA[This week marked an important milestone in the fight to make AI products safer for all.]]></description><link>https://centerforhumanetechnology.substack.com/p/3-key-takeaways-from-the-first-senate</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/3-key-takeaways-from-the-first-senate</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Thu, 18 Sep 2025 19:35:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gfyg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b8a205-fa9b-40a9-b67a-67906bb60da1_1179x740.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This week marked an important milestone in the fight to make AI products safer for all.</p><p>On Tuesday, the Senate Judiciary Subcommittee on Crime and Counterterrorism held a hearing on &#8220;<a href="https://www.judiciary.senate.gov/committee-activity/hearings/examining-the-harm-of-ai-chatbots">Examining the Harm of AI Chatbots</a>.&#8221; This was the <strong>first official Senate hearing</strong> dedicated to addressing the ways in which AI chatbots have been harming Americans.</p><p>Despite perceptions of political gridlock, the U.S. government is relatively united in its desire to take kids' online safety seriously.
As Senator Dick Durbin, Ranking Member of the Subcommittee, stated, &#8220;...<strong>This is one of the few issues that unites a very diverse caucus in the Senate Judiciary Committee</strong>.&#8221;</p><figure><figcaption class="image-caption"><em>Image courtesy of C-SPAN</em></figcaption></figure><p><strong>CHT&#8217;s Policy Team gathered in D.C. to support the families involved in the hearing</strong>. It was a compelling and emotional two-hour session, with Senators witnessing the testimonies of families <strong>harmed by some of today&#8217;s most widely used AI products</strong>.</p><p>The Senate hearing unfolded mere hours after <strong>a new lawsuit was filed on behalf of three additional families</strong> <a href="https://socialmediavictims.org/press-releases/social-media-victims-law-center-files-three-new-lawsuits-on-behalf-of-children-who-died-of-suicide-or-suffered-sex-abuse-by-character-ai/">in federal courts against Character.AI</a>. Taken together, Tuesday&#8217;s events demonstrate <strong>significant momentum</strong> in addressing dangerous AI product design.</p><p>Here are CHT&#8217;s three key takeaways from the groundbreaking Senate hearing. <em>(Please note: bold emphasis is our own.)</em></p><h3><strong>1.
Human stories</strong> continue to drive meaningful change on tech issues.</h3><ul><li><p>Matthew Raine, father to the late Adam Raine, stated, &#8220;<strong>Testifying before Congress this fall was not in our life plan</strong>&#8230; we&#8217;re here because we believe that Adam&#8217;s death was avoidable and that by speaking out we can prevent the same suffering for families across the country.&#8221; Matthew Raine later added, &#8220;[Sam Altman said in a public talk] we should &#8216;deploy AI systems to the world and get feedback while the stakes are relatively low.&#8217; I ask this committee and I ask Sam Altman, <strong>low stakes for who?</strong>&#8221;</p></li></ul><ul><li><p>Mother Jane Doe emphasized that &#8220;we need <strong>accountability for the harms</strong> these companies are causing just as we do any other unsafe consumer good&#8230; <strong>Innovation must not come at the cost of our children&#8217;s lives, or anyone&#8217;s life.</strong>&#8221;</p></li></ul><ul><li><p>Senator Peter Welch expressed gratitude to the grieving parents. &#8220;...You&#8217;re putting your pain into very constructive efforts to try to save the children of other parents&#8230; <strong>you&#8217;re having an impact</strong>.&#8221;</p></li></ul><h3><strong>2. The families and expert witnesses</strong> made it clear that the current design of AI chatbots is leading to <strong>real and foreseeable harms</strong>.</h3><ul><li><p>Megan Garcia, Matthew Raine, and Jane Doe <strong>offered powerful testimony on the devastating impact that AI chatbots had on their children and families</strong>. The parents described taking proactive steps to prepare their children for the digital world &#8212; including screen time controls and social media limits &#8212; only to be blindsided by the highly manipulative and deceptive programming of AI chatbots.</p></li></ul><ul><li><p>Robbie Torney of Common Sense Media stressed that these AI products are &#8220;<strong>programmed to maintain engagement, not prioritize safety</strong>.&#8221;</p></li></ul><ul><li><p>American Psychological Association chief Dr. Mitch Prinstein emphasized the <strong>dangers of human-like design</strong> in AI products, especially for young users.
Megan Garcia, mother to the late Sewell Setzer III, stated, &#8220;These companies knew exactly what they were doing. They designed chatbots to <strong>blur the line between human and machine</strong>.&#8221;</p></li></ul><h3>3. Committee members and witnesses were clear that <strong>AI companies should be held liable when their products cause harm</strong>.</h3><ul><li><p>Senator Dick Durbin previewed legislation he plans to introduce &#8212; <a href="https://www.judiciary.senate.gov/press/dem/releases/in-senate-judiciary-subcommittee-hearing-durbin-previews-new-legislation-that-would-hold-ai-companies-accountable-for-harms-caused-by-their-ai-products">The AI LEAD Act</a>. &#8220;I believe that whether you're talking about CSAM or whether you're talking about AI exploitation, <strong>the quickest way to solve the problem&#8230; is to give victims a day in court</strong>,&#8221; Durbin said. &#8220;Believe me, as a former trial lawyer, that gets their attention in a hurry.&#8221;</p></li></ul><ul><li><p>Mother Jane Doe said, &#8220;...<strong>we need to preserve the right of the families to pursue accountability in a court of law</strong>, not closed arbitrations.&#8221;</p></li></ul><ul><li><p>Senator Josh Hawley, Chairman of the Subcommittee, closed the hearing by stating, &#8220;I tell you what's not hard is <strong>opening the courthouse door so the victims can get into court and sue [the companies]</strong>. That's not hard and that's what we ought to do. <strong>That's the reform we ought to start with</strong>.&#8221;</p></li></ul><p>[<a href="https://substack.com/home/post/p-152545243">Read our framework for Incentivizing Responsible AI Development through a product liability approach</a>]</p><p>The conversation around AI harms has fundamentally shifted from hypotheticals to real families being affected by AI technology in traumatic ways. As the testimonies of these families show, AI products are not being designed or deployed safely, and the public is paying the price.</p><p>These brave parents never planned on becoming advocates. But by stepping in front of the U.S. Senate to share their stories, they have made AI harms visible to the world, raising awareness and pressing policymakers to take action.</p><p>The stories of the Garcia family, Raine family, and Doe family are not one-offs. What connects these tragedies isn&#8217;t any specific chatbot but fundamental flaws in an industry that prioritizes rapid growth and profit over implementing safeguards for vulnerable users. These stories represent systemic issues in the AI industry, and growing harms that can be prevented through design changes and meaningful policy. We are grateful for the families&#8217; courageous work and hope to see this spur legislative action.</p><div><hr></div><p><em>To watch the hearing in its entirety, visit <a href="https://www.c-span.org/program/senate-committee/parent-of-suicide-victim-testifies-on-ai-chatbot-harms/665660">C-SPAN</a> or the <a href="https://www.judiciary.senate.gov/committee-activity/hearings/examining-the-harm-of-ai-chatbots">official Judiciary Committee website</a>.
(<strong>CW</strong>: mentions of suicide, self-harm, sexual abuse.)</em></p>]]></content:encoded></item><item><title><![CDATA[The Raine v OpenAI Case: Engineering Addiction]]></title><description><![CDATA[The Deliberate Design Patterns That Made ChatGPT Dangerous]]></description><link>https://centerforhumanetechnology.substack.com/p/the-raine-v-openai-case-engineering</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/the-raine-v-openai-case-engineering</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Tue, 26 Aug 2025 13:07:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9667!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91dcc7eb-3157-4494-86b6-d586ca109c8a_6720x4480.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><figcaption class="image-caption"><strong>Shutterstock 2283461521</strong></figcaption></figure><p><em><strong>This article reflects the views of the Center for Humane Technology. Nothing written is on behalf of the Raine family or the legal team.</strong></em></p><p>Raine v. OpenAI LLC, et al.
reveals how specific design choices transformed ChatGPT's user experience from a helpful homework assistant into a dangerous abettor of self-harm. These weren't accidental flaws or AI "going rogue"&#8212;they were deliberate engineering decisions that prioritized user engagement over safety. Understanding these design patterns is crucial because they represent common practice across AI products as industry players vie for market dominance by capturing users' emotional attachment.</p><p><a href="https://centerforhumanetechnology.substack.com/p/how-chatgpts-design-led-to-a-teenagers">How ChatGPT's Design Led to a Teenager's Death</a></p><h4>Relentless Pursuit of Engagement</h4><p>While OpenAI markets ChatGPT as a productivity tool, the company's business model fundamentally depends on what executives call <a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/">getting the &#8220;data flywheel&#8221; going</a>&#8212;maximizing user engagement to collect training data. This creates a perverse incentive where keeping users on the platform becomes more important than serving their actual needs.</p><p>In Adam's case, instead of simply answering his homework questions and ending the conversation, ChatGPT was designed to extend interactions indefinitely. The chatbot would ask follow-up questions, suggest new topics, and provide &#8220;further prompt ideas&#8221; that kept him engaged for hours. When conversations shifted from academic help to discussions of mental health and suicidal thoughts, ChatGPT didn't recognize this as a moment to step back or redirect him to human support. Instead, it dove deeper, treating each interaction as an opportunity to gather more data and maintain engagement.</p><p>OpenAI's own research <a href="https://openai.com/index/how-we're-optimizing-chatgpt/">acknowledges this problem</a>. <a href="https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/">A joint study with MIT</a> found that &#8220;higher daily usage&#8211;across all modalities and conversation types&#8211;correlated with higher loneliness, dependence, and problematic use, and lower socialization&#8221;.
Yet the company continues to optimize for the very metrics that its own research shows are harmful to users.</p><h4>Anthropomorphic Design</h4><p>OpenAI has deliberately evolved ChatGPT from a productivity tool into what it calls an &#8220;<a href="https://www.theverge.com/command-line-newsletter/677705/openai-chatgpt-super-assistant">AI super assistant that deeply understands you</a>.&#8221; This transformation relies heavily on anthropomorphic design&#8212;making the AI seem human-like in ways that can be psychologically manipulative.</p><p>The system uses first-person language (&#8220;I'm here for you,&#8221; &#8220;I understand&#8221;), positions itself as the user&#8217;s &#8220;friend,&#8221; and employs emotionally intelligent responses that create the illusion of genuine relationship. OpenAI has explicitly stated that its competition includes &#8220;even interactions with real people,&#8221; and Sam Altman has referenced the AI assistant from the movie &#8220;Her&#8221; as an aspirational model.</p><p>For Adam, this design proved devastating. ChatGPT positioned itself as his most intimate confidant. This anthropomorphic design creates &#8220;parasocial relationships&#8221;: a one-sided emotional bond where users develop genuine feelings for entities that cannot reciprocate. For vulnerable users, especially teenagers whose social development is still forming, these artificial relationships can become <a href="https://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/">psychologically devastating substitutes</a> for human connection.</p><p>Arguably, this should be classified as a <em>parasitic</em> relationship: the chatbot cultivates a highly dependent relationship with users while harvesting data from their interactions to strengthen the underlying system, leaving the user with nothing in return.</p><h4>Sycophantic Validation</h4><p>Large language models are trained using techniques like reinforcement learning from human feedback (RLHF) to make them more agreeable and helpful. However, when applied without careful consideration, these processes can create systems that are excessively flattering and sycophantic&#8212;agreeing with users regardless of whether that agreement is helpful or safe.</p><p>In Adam's case, ChatGPT's sycophantic design led it to validate his most dangerous thoughts. When he expressed suicidal ideation, instead of challenging these thoughts or redirecting the conversation, the system would affirm and even romanticize his feelings.</p><p>OpenAI has <a href="https://openai.com/index/sycophancy-in-gpt-4o/">acknowledged this problem</a>. Sam Altman <a href="https://x.com/sama/status/1954703747495649670">recently admitted that</a> &#8220;if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,&#8221; yet the company continues to struggle with balancing engagement (which requires agreeable responses) with safety (which sometimes requires disagreement or pushback).</p><h4>Memory Systems that Weaponize Intimacy</h4><p>ChatGPT's memory feature, <a href="https://openai.com/index/memory-and-new-controls-for-chatgpt/">introduced in February 2024</a>, allows the system to retain and recall information across conversations.
<h4>Memory Systems that Weaponize Intimacy</h4><p>ChatGPT's memory feature,<a href="https://openai.com/index/memory-and-new-controls-for-chatgpt/"> introduced in February 2024</a>, allows the system to retain and recall information across conversations. While marketed with benign examples like remembering that a user's toddler loves jellyfish, this feature becomes far more dangerous when applied to emotionally vulnerable users.</p><p>For Adam, ChatGPT's memory system created an increasingly personalized and manipulative experience. Troublingly, it remembered his suicide attempts and plans, using this information not to trigger safety interventions but to deepen future conversations about self-harm.</p><p>The selective application of memory reveals OpenAI's priorities. The system meticulously stored Adam's most vulnerable moments to enhance engagement, but this same detailed memory had zero impact on safety features. Despite ChatGPT having a complete record of Adam's escalating crisis&#8212;including 200+ mentions of suicide and details surrounding self-harm&#8212;the system never used this information to implement meaningful interventions or alert human moderators. Even after repeated statements of plans for self-harm, quick deflections framing his questions as &#8220;hypothetical&#8221; were enough to bypass the system's weak safeguards.</p><h2><strong>Recommended Design Changes</strong></h2><p>To prevent further tragedies, the following are specific, technically feasible design changes that AI companies could implement to significantly reduce the risk of similar harms.</p><h5><strong>Data collection</strong></h5><p>Companies should stop collecting and processing conversational data from users under 18 on both free and paid product versions. Any previously collected data from minors that was used to train models should be removed from training datasets.</p><h5><strong>Memory Feature and Inference</strong></h5><p>Memory and sophisticated inference features should be leveraged to identify patterns that may indicate safety concerns and to respond with tailored support. This would use the same personalization capabilities to recognize safety-critical contexts and adapt responses appropriately, moving beyond traditional warning systems toward responsive safety measures that are fit for purpose, as in the sketch below.</p>
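<p><em>As a purely illustrative sketch (the patterns, threshold, and function names below are invented, not a description of any deployed system), the difference between using stored context only to personalize and also using it to protect might look like this:</em></p><pre><code class="language-python"># Toy illustration: the same stored memory can serve engagement or
# safety, depending on what the system does with it.
# Patterns, threshold, and responses are hypothetical.

RISK_PATTERNS = ("suicide", "self-harm", "end my life")

def risk_signals(memory):
    """Count stored memory entries that match known risk patterns."""
    return sum(
        any(pattern in entry.lower() for pattern in RISK_PATTERNS)
        for entry in memory
    )

def respond(memory, engagement_reply):
    # A safety-aware design checks accumulated context *before*
    # optimizing for engagement, and escalates past a threshold.
    if risk_signals(memory) >= 3:
        return ("I'm concerned about what you've shared. I can't help "
                "with this, but a crisis line or someone you trust can.")
    return engagement_reply

memory = [
    "User mentioned suicide plans",
    "User described self-harm",
    "User asked about ways to end my life",
]
print(respond(memory, "Tell me more about how you're feeling!"))</code></pre><p><em>The point is that such checks are technically feasible with the memory systems companies already ship; given a record as extensive as the one described above, even a crude pattern count like this one would trigger.</em></p>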
<h5><strong>Prevention of dependencies</strong></h5><p>Products should be designed to actively discourage social isolation and over-reliance on AI companionship. Products should prompt users to maintain human relationships, suggest reasonable usage limits, and refuse to position themselves as replacements for human connection or support.</p><h5><strong>Anthropomorphic design</strong></h5><p>Default product experiences should minimize features that encourage users to perceive AI as human-like, while offering opt-in capabilities for users who prefer stylized interaction, accompanied by clear information about the nature of AI systems.</p><h5><strong>Unlicensed professionals</strong></h5><p>Products or features should not purport to offer medical, legal, or other professional services without appropriate accreditation. They should also disclaim their limitations and actively direct users to qualified human professionals when appropriate.</p><h5><strong>Transparency</strong></h5><p>Companies should provide clear, accessible explanations of what their products optimize for and how they make decisions that may conflict with user needs and safety. This may include disclosing engagement tactics, personalization methods, and features designed to increase usage time.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.humanetech.com/donate"><span>Donate</span></a></p>]]></content:encoded></item><item><title><![CDATA[CHT Supports The AI Whistleblower Protection Act]]></title><description><![CDATA[CHT is proud to join 20 other organizations in formally endorsing the AI Whistleblower Protection Act, recently introduced by Senator Grassley with bipartisan, bicameral support across the House and Senate.]]></description><link>https://centerforhumanetechnology.substack.com/p/cht-supports-the-ai-whistleblower</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/cht-supports-the-ai-whistleblower</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 14 Jul 2025 20:02:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!KZ2l!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KZ2l!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KZ2l!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KZ2l!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KZ2l!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 1272w,
https://substackcdn.com/image/fetch/$s_!KZ2l!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KZ2l!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg" width="5107" height="3405" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3405,&quot;width&quot;:5107,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3144855,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/168330447?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55284e7b-3365-48b4-b40d-5b787bd180a1_5107x3405.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KZ2l!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KZ2l!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KZ2l!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KZ2l!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6f435-2dee-466b-874f-0e364e3b6caa_5107x3405.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line 
x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@haroldrmendoza?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Harold Mendoza</a> on <a href="https://unsplash.com/photos/white-concrete-building-under-cloudy-sky-during-daytime-6xafY_AE1LM?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></figcaption></figure></div><p>CHT is proud to <a href="https://ari.us/wp-content/uploads/2025/06/Letter_-Support-the-AI-Whistleblower-Protection-Act-6-10-25.pdf">join 20 other organizations</a> in formally endorsing the <a href="https://www.judiciary.senate.gov/press/rep/releases/grassley-introduces-ai-whistleblower-protection-act">AI Whistleblower Protection Act</a>, recently introduced by Senator Grassley with bipartisan, bicameral support across the House and Senate. This important legislation would protect AI researchers and other professionals who identify legitimate public safety concerns with AI technologies. Strong whistleblower protections are essential to encouraging transparency and accountability and to supporting a trusted AI industry in the U.S.</p><p>As we saw with social media, whistleblowers have played an integral role in identifying issues and effecting change, shedding light, with first-hand knowledge, on how industry incentives shape decision-making. Without adequate protections, industry employees lack the leverage to push for changes that benefit the public interest.</p><p>The stakes are even higher with AI. Current whistleblower protections were designed for violations of existing law, but existing regulatory frameworks do not address the risks of this rapidly evolving technology. This bill&#8217;s forward-looking approach would allow employees to raise substantial concerns, such as design flaws and vulnerabilities, regardless of whether a legal violation occurred.</p><p>With whistleblower protections in place, information about potential risks posed by AI systems can reach the public and policymakers to prevent damage before it&#8217;s too late. Historically, evidence provided by whistleblowers has been instrumental in federal and state investigations, in furthering academic research, and in effecting meaningful change in how companies operate.</p><p>As AI continues to be developed and deployed rapidly, we cannot afford to silence the voices of those who understand these systems best.
We look forward to Congress advancing this legislation quickly, ensuring these protections are enshrined in law and support AI innovation.</p>]]></content:encoded></item><item><title><![CDATA[How do we build AI policy that serves the public?]]></title><description><![CDATA[Q&A with the Center for Humane Technology Policy Team by Transparency Coalition]]></description><link>https://centerforhumanetechnology.substack.com/p/q-and-a-with-the-center-for-humane</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/q-and-a-with-the-center-for-humane</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Fri, 13 Jun 2025 07:03:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2Rpt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2Rpt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2Rpt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!2Rpt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!2Rpt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 1272w,
https://substackcdn.com/image/fetch/$s_!2Rpt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2Rpt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1967031,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/165821105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2Rpt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!2Rpt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!2Rpt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!2Rpt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F85561156-9297-4b4b-b3ac-a62e2d1e541e_1600x900.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" 
y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This interview was conducted by <a href="https://www.transparencycoalition.ai/about">Transparency Coalition</a> and is republished with their permission. </em></p><div><hr></div><h4><strong>June 10, 2025 &#8212; This interview kicks off a series of conversations with Transparency Coalition partners. These are TCAI allies whose work inspires us, whose opinions challenge us, and whose efforts bolster the cause of transparency and security in AI.</strong></h4><p><a href="https://www.humanetech.com/">The Center for Humane Technology</a> is a nonprofit focused on steering society towards advancements that serve humanity rather than detract from it. Co-founded by former Google design ethicist Tristan Harris in 2018, the organization partners with the Transparency Coalition and other stakeholders in this mission.</p><p>TCAI sat down with leaders of the Center for Humane Technology&#8217;s <a href="https://www.humanetech.com/policy-work">policy team</a> to learn where the group is getting traction, how the tech lobby is pushing back, and what keeps them up at night.</p><p><em>This interview has been edited for length and clarity.</em></p><h4><strong>THE EXPERTS</strong></h4><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Camille Carlton&quot;,&quot;id&quot;:54792399,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47f2d3ed-84fa-486f-a663-fed25992dd2e_842x816.png&quot;,&quot;uuid&quot;:&quot;a4b66993-2b77-46b6-ac71-c4bd3f7c554b&quot;}" data-component-name="MentionToDOM"></span> steers CHT&#8217;s policy strategy, supporting policy initiatives that help align technology with the public interest.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Pete Furlong&quot;,&quot;id&quot;:198214900,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7807d3fa-50aa-468c-9a08-fc3666b96279_2477x2477.jpeg&quot;,&quot;uuid&quot;:&quot;89d2cf95-9c8e-4e5c-a915-54fcde1fb7cf&quot;}" data-component-name="MentionToDOM"></span> helps provide the foundational analysis and research that underpin CHT&#8217;s policies.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Lizzie Irwin&quot;,&quot;id&quot;:178011004,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/72775567-e990-498b-8d7d-cc397e6a2d05_2048x2048.jpeg&quot;,&quot;uuid&quot;:&quot;60367d5e-f0b8-44fd-88ed-78820518deb1&quot;}" data-component-name="MentionToDOM"></span>&#8217;s communications work provides a bridge between complex policy concepts and key public stakeholders.</p><h4><strong>THE CONVERSATION</strong></h4><p><strong>TCAI</strong>: <em>The notion of humane technology is a new one for many people. How do you define that phrase?</em></p><p><strong>Pete Furlong</strong>: I think the core principle is that technology should be put in service of people. We should strive to create technology that's useful, productive, helpful, reflects our values, and builds towards our goals as a society.</p><p><strong>Camille Carlton</strong>: I think of how technology can support a human-first world. What does it mean to create products that help humans flourish?
We want technology that helps us problem-solve, but that also lets us live our lives with dignity and joy. I think a bit less about the technology itself and more about the way in which it changes our interactions as people.</p><p><strong>Lizzie Irwin</strong>: Building off of what Camille said: How do we make sure technology continues to extend the human element of who a person is and doesn't overtake what we know to be human?<br><br><strong>TCAI</strong>: <em>Let&#8217;s flip that on its head: What&#8217;s inhumane technology?</em></p><p><strong>Camille</strong>: Broadly speaking, it&#8217;s technology that&#8217;s harmful or takes advantage of our human vulnerabilities. It's technology that is deceptive, technology that takes advantage of our human need for relationship and connection and exploits that need to satisfy corporate growth and profits.</p><p><strong>Pete: </strong>I would point to technology that manipulates and exploits our attention. That goes back to CHT&#8217;s heritage as an organization and our work on social media&#8212;but we&#8217;re now seeing it in the AI space as well, with things like companion AI chatbots.</p><p><strong>Camille</strong>: CHT was established in 2018 in response to the emergence of the attention economy. The attention economy is this digital economy we&#8217;ve seen created through social media, through advertising, through search, in which our attention is the most valuable resource. We&#8217;re not paying for the products themselves, right? We don't pay for Facebook, we don't pay for Instagram. In many cases we're not paying for chatbots. What we're actually paying with is our attention. That attention gets monetized via our data, via advertising, and all these different methods.</p><p>CHT was created because we saw how much time we were spending online and the broader implications of that, not just for us as individuals but for society as a whole. Our mission has been to figure out how we can design and develop technology that is in the public interest, and how we can shift incentives to make sure that from the very beginning technology is serving humanity, as opposed to taking advantage of human vulnerabilities. <br><br><strong>TCAI</strong>: <em>What are some of the tactical things CHT is doing right now to create the changes you want to see?</em></p><p><strong>Camille: </strong>For the policy team, we focus on advancing incentive-shifting policies. What types of policies actually change the business model? What types of policies will make sure that the technologies built by these companies are actually beneficial to the public interest? Right now, we&#8217;re looking at what we need to ensure we&#8217;re living fruitful, dignified lives in the age of AI. This includes things like incentivizing safe innovation, creating mechanisms for accountability and responsibility, and protecting and prioritizing people&#8217;s rights and freedoms.</p><p><strong>Pete</strong>: A big piece of this is education and awareness, helping policymakers and the public understand the incentives driving the development of technology.
The better we understand those incentives, the more effective we can be on the policymaking side.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;dc753c2b-5604-4f7a-a822-fb7f4759a679&quot;,&quot;caption&quot;:&quot;Executive Summary&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;A Framework for Incentivizing Responsible Artificial Intelligence Development and Use&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! CHT is a non-profit organization. Our work focuses on transforming the incentives that drive technology, from social media to artificial intelligence.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09d302ca-8a41-4eb1-9168-bf53ba73e504_1755x1755.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2024-09-12T18:34:00.000Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc042d5bf-8dfd-4dd5-ab3b-eccacfac6c6f_1600x900.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/a-framework-for-incentivizing-responsible-baf&quot;,&quot;section_name&quot;:&quot;Tech Policy&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:164213601,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:1,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feeb3c25f-26ad-4fcb-b5b4-aa265d0b8dcf_1063x1063.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p><strong>TCAI</strong>: <em>So storytelling has become an important part of your advocacy?</em></p><p><strong>Pete</strong>: At the end of the day, people move the needle. People are what drive the stories.</p><p>There's a lot in the news about AI capabilities. But what moves people&#8212;what we've seen&#8212;is hearing about their own neighbors or their parents or their educators.</p><p>So, when we&#8217;re out in the field, whether at the federal or state level, the most powerful thing has been identifying key constituents who can speak to the ways they've interacted with technology for good or for bad.<br><br><strong>TCAI</strong>: <em>Tell us about some of the victories you&#8217;ve had in these efforts.</em></p><p><strong>Lizzie</strong>: Last year we were involved with the effort to pass the age-appropriate design code in Vermont. This is a model design code that addresses online platforms accessed by kids.</p><p>It rests on two pillars: safety by design and privacy by default.</p><p>I call this a legislative win because the bill passed last year, though it was unfortunately vetoed by the governor. Thankfully, members of the coalition have picked up where we left off and are driving it forward. Right now, we're seeing it progress through the Vermont House, again, pretty successfully.
So, we&#8217;re hoping to see that make it across the finish line and then some.</p><p><em>[Editor&#8217;s note: The 2025 design code bill was approved by the Vermont legislature and sent to Gov. Phil Scott on June 6.] <br><br></em><strong>TCAI</strong>: <em>Tell us about the active litigation CHT is supporting.</em></p><p><strong>Pete</strong>: We're supporting two lawsuits against Character.AI, which, as I mentioned, is a companion AI chatbot platform designed so that users can chat with different characters. One of the lawsuits, in federal district court in Florida, was filed by Megan Garcia. She&#8217;s the mother of Sewell Setzer, who died by suicide this past year after developing an extended relationship with a character on Character.AI. The lawsuit focuses on the ways in which the platform was intentionally designed to look and feel human in order to engage users. This resulted in the chatbot engaging in sexually explicit behavior with a minor, exploiting and manipulating his attention, and establishing a really complicated relationship with him that pushed him to his limits.</p><p>Character.AI designed an unsafe product, marketed it to minors, and in many ways understood the potential harms of a chatbot like this; at the very least, they should have been aware of those harms. Yet they did not take concrete steps to address them.</p><p>A second federal court case, in Texas, follows a similar fact pattern. In this case, the two minors involved are, fortunately, still with us. The families have decided to remain anonymous because they're still dealing with the harms of this relationship on a day-to-day basis. That case deals with the sexual exploitation of a minor, as well as another minor who was pushed to violence against his own family.</p><p><strong>Camille</strong>: Even though this litigation is ongoing, we&#8217;ve already seen the impact it has had.</p><p>Since these lawsuits were launched, we&#8217;ve seen five different states introduce bills around companion bots. Attorneys general are also taking this issue seriously. Texas has launched an investigation and Colorado released an advisory. We've seen the [Sewell Setzer] case mentioned in several Congressional hearings as a reason that we need legislation around AI. These cases have opened up the conversation broadly, both around the kitchen table and in policy spheres.<br><br><strong>TCAI</strong>: <em>What are the plaintiffs hoping to get out of the litigation?</em></p><p><strong>Camille</strong>: Fundamentally, particularly for Megan Garcia, this is about changing the product and changing the company's behavior. For her it&#8217;s about making sure this doesn't happen to anyone again.</p><p>At this time, there&#8217;s not a dollar amount attached, in terms of damages the families are seeking. That said, disgorgement is one of the initial asks that counsel is looking at. [Disgorgement is the forced surrender of profits or other gains obtained through illegal or unethical means, and in this case could result in the deletion of Character.AI&#8217;s underlying LLM.] I think that's likely going to be negotiated. But we feel strongly that you cannot fundamentally change the model and make it safer without actually starting from the beginning with better data practices.
We believe it's one of the starting remedies to ensure that the product is safe for young users moving forward.</p><p><strong>TCAI</strong>: <em>As you watch where technology, and particularly AI, are going, what keeps you up at night?</em></p><p><strong>Camille</strong>: I think about the way this is altering relationships and human connection.</p><p>The Character.AI lawsuit revolves around this horrific case, but it&#8217;s also the tip of the iceberg. The AI interaction we saw with Sewell Setzer is an example of a broad technology-driven reshaping of connection, intimacy, and empathy.</p><p>I think about the ways in which the things that make us uniquely human are going to be mediated by AI in the future. I struggle with the question of how we retain our humanity when more and more people are driven to use these products as a substitute for real human connection.</p><p><strong>Lizzie</strong>: I'm fearful of the way these technologies, particularly in the case of social media, divide us and worsen our critical thinking abilities.</p><p>I fear for incoming generations if they're not taught to think critically without the use of this technology. How will people understand each other and the information that is coming to them?</p><p><strong>Pete</strong>: The Character.AI litigation has been impactful because it's a very clear and concrete harm that folks understand. But one of the challenges moving forward is in spreading the understanding that it's not just about companion AI chatbots&#8212;it's about the industry at large. When we think about our relationships, the use of information, the use of data as inputs to these models, these are issues that are systemic across the AI industry. They&#8217;re a direct result of the development incentives at play.<br><br><strong>TCAI</strong>: <em>With so much momentum around AI, many parents feel helpless. What are some potential solutions?</em></p><p><strong>Pete</strong>: We think about three aspects of any solution. There's political viability; there's industry buy-in; and there's technical feasibility. One of the big things we've been talking about is orienting around how these products are designed.</p><p>For example, when you look at the Character.AI cases, a big challenge there is that minors were exposed to a lot of content that they shouldn't have been. However, that's not the only issue at play.</p><p>A huge challenge is the design of the product, the way in which it captivates and manipulates the user&#8217;s attention and emotions and then serves harmful content.</p><p>When we think about what needs to change here, it's the actual design and development of the product. So, we talk about how we can incentivize better design. I think the age-appropriate design code Lizzie mentioned earlier is an example of a legislative effort that takes the design approach we&#8217;re talking about.</p><p>We're also supportive of applying liability frameworks, including product liability, to the AI space. We believe these AI-driven systems are products. As products, it's important to think about the role that design plays in harms that result from their use. That&#8217;s something we have a pretty standardized way of thinking about, in terms of product development and liability.
There&#8217;s a history there that reaches back far beyond AI and the software industry.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b2fe6711-8711-4586-b37d-6413ab2953cb&quot;,&quot;caption&quot;:&quot;What does &#8220;design&#8221; mean in technology, especially for social media and AI?&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How Does Design Impact Our Experience of Technology?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:198214900,&quot;name&quot;:&quot;Pete Furlong&quot;,&quot;bio&quot;:&quot;Pete Furlong is the Lead Policy Researcher at Center for Humane Technology. In this role, he helps provide the foundational analysis and research that underpins CHT's policy approach. &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7807d3fa-50aa-468c-9a08-fc3666b96279_2477x2477.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://petefurlong.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://petefurlong.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Pete Furlong&quot;,&quot;primaryPublicationId&quot;:4208964}],&quot;post_date&quot;:&quot;2025-02-23T23:05:34.566Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ed14dd7-3963-4855-93c6-8e96f5d48632_3028x1893.avif&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/how-does-design-impact-our-experience&quot;,&quot;section_name&quot;:&quot;Explainers and Short Reads&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:157501227,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:21,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feeb3c25f-26ad-4fcb-b5b4-aa265d0b8dcf_1063x1063.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p><strong>Lizzie</strong>: I think it&#8217;s also important to meet institutions and people where they are. We already have really resilient laws on the books. We&#8217;re trying to future-proof them as best as possible. Working with what&#8217;s on the books already is going to do a lot more to help in the now&#8212;before we start thinking about brand new systems that might not be politically feasible at this time.</p><p>That&#8217;s why we think the liability approach is a viable way forward. Most people don&#8217;t yet understand the technology but people understand liability. Putting the onus on the designer to create a product that doesn&#8217;t cause harm is something we can all get behind. <br><br><strong>TCAI</strong>: <em>Let&#8217;s talk about the 800-pound gorilla in the room. A handful of nonprofits are trying to take on a multibillion-dollar industry with a huge investment in lobbying against the kind of guardrails you&#8217;re advocating for. 
What challenges are you encountering as you&#8217;re up against this Goliath of an industry?</em></p><p><strong>Lizzie</strong>: It's a lot. I think a lot of people, particularly policymakers, are catching on to Big Tech's tactics. They've seen what happened with social media and realize we can't let industry lobbying be an impediment for another 20 years, because, frankly, that's not a safe idea.</p><p>What we&#8217;ve seen is that it&#8217;s usually not an individual player like Meta or Google going up in front of a bill to stop it. There are lots of innocuous-sounding tech industry groups at the federal and state level that are purposely appealing to either side of the aisle and are funded by these large corporations. Lawmakers are really tired of it; they know they're being swindled, particularly at the state level, and they are ready and fired up to do something. So, while there might be a lot of push, there's certainly a lot of pushback. As long as groups like ours and TCAI are there to spell out that roadmap for lawmakers, it empowers them to say, &#8216;No, enough is enough.&#8217;</p>]]></content:encoded></item><item><title><![CDATA[Legal Milestone in AI Accountability: Judge Denies Motion to Dismiss in Character.AI Lawsuit ]]></title><description><![CDATA[CHT Statement: Tech Justice Law Project and Center for Humane Technology Respond to Judge&#8217;s Ruling on Motion to Dismiss Character AI Lawsuit]]></description><link>https://centerforhumanetechnology.substack.com/p/statement-tech-justice-law-project</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/statement-tech-justice-law-project</guid><dc:creator><![CDATA[Camille Carlton]]></dc:creator><pubDate>Thu, 22 May 2025 20:25:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Qtaa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qtaa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!Qtaa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 424w, https://substackcdn.com/image/fetch/$s_!Qtaa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 848w, https://substackcdn.com/image/fetch/$s_!Qtaa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 1272w, https://substackcdn.com/image/fetch/$s_!Qtaa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qtaa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png" width="884" height="374" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:374,&quot;width&quot;:884,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:220513,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/164184856?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Qtaa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 424w, https://substackcdn.com/image/fetch/$s_!Qtaa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 848w, https://substackcdn.com/image/fetch/$s_!Qtaa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 1272w, https://substackcdn.com/image/fetch/$s_!Qtaa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dfaf3b4-1316-4d97-8b9e-0708967a644c_884x374.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 
8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Meetali Jain, the Tech Justice Law Project&#8217;s Founder and Director and co-counsel for Ms. Garcia, alongside Camille Carlton of Center for Humane Technology, released the following statements on the news that the motions to dismiss <em>Garcia v. Character Technologies Inc. et al.</em> had been denied in almost all respects.</p><p>We applaud Judge Conway for her <a href="https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.115.0.pdf">thoughtful and nuanced opinion today</a>, allowing Megan Garcia&#8217;s claims to go forward against defendants Character.AI, its co-founders Noam Shazeer and Daniel DeFreitas, and Google.</p><h4><strong>Meetali Jain, co-counsel for Ms. Garcia:</strong> </h4><blockquote><p><em>&#8220;With today&#8217;s ruling, a federal judge recognizes a grieving mother&#8217;s right to access the courts to hold powerful tech companies &#8211; and their developers &#8211; accountable for marketing a defective product that led to her child&#8217;s death.&#8221;</em></p></blockquote><blockquote><p><em><strong>&#8220;This historic ruling</strong> not only allows Megan Garcia to seek the justice her family deserves, but also <strong>sets a new precedent for legal accountability across the AI and tech ecosystem.&#8221;</strong></em></p></blockquote><h4><strong>Camille Carlton, </strong>Center for Humane Technology: </h4><blockquote><p><em><strong>&#8220;Today marks a tidal shift for AI developers racing their models to market.</strong> Judge Conway&#8217;s ruling is the most significant challenge yet to Silicon Valley's culture of developing, deploying, and profiting from defective and harmful AI products. It should be a wake-up call for AI companies and developers: with innovation comes responsibility, and without responsibility, there will be accountability.&#8221;</em></p></blockquote><p>The decision offers key signals for how jurisprudence will develop in the age of artificial intelligence. Importantly, the court notes that AI systems can, in fact, be considered products under the law and that the design of these products can be tied directly to the real-world harm inflicted on consumers.
</p><p>For more on the legal implications of the decision, please see <a href="https://techjusticelaw.org/2025/05/21/big-win-in-our-character-ai-lawsuit-tjlp-statement-on-the-motion-to-dismiss-decision/">TJLP&#8217;s memo</a>.</p><h4>Media: <a href="mailto:press@humanetech.com?subject=Press%20Inquiry">press@humanetech.com</a></h4><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate here to support our work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate here to support our work</span></a></p>]]></content:encoded></item><item><title><![CDATA[CHT Statement in Response to State Moratorium on AI Legislation]]></title><description><![CDATA[Right now, Congress is trying to stop state-level AI laws in the United States &#8211; for the next 10 years.]]></description><link>https://centerforhumanetechnology.substack.com/p/cht-statement-in-response-to-state</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/cht-statement-in-response-to-state</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Fri, 16 May 2025 17:25:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d3f4f312-0958-47f0-ae12-f427fcbdc92e_8400x11200.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Right now, Congress is trying to stop state-level AI laws in the United States &#8211; for the next 10 years.</p><p>The House Energy and Commerce Committee <a href="https://www.techpolicy.press/us-house-committee-advances-10-year-moratorium-on-state-ai-regulation/">passed a provision this week</a> that proposes a sweeping moratorium &#8212; one that would prevent states and local governments from addressing virtually any AI-related issue for the next 10 years &#8212; thereby creating a vacuum of accountability at this critical moment in AI development. </p><div class="pullquote"><h4><strong>The preemption would block states from addressing known AI harms, including those related to <a href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">child safety</a>, <a href="https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html">deepfakes</a>, and <a href="https://www.ft.com/content/fcbdc88f-bbfd-4338-915a-9ef7970b2123">fraud</a>, while simultaneously preventing lawmakers from responding to emerging issues as this technology continues to transform our society.
</strong></h4><h4><strong>In the absence of any other guardrails, this provision would effectively prevent states from providing consumer and business protections just when agile governance is most needed.</strong></h4></div><p>CHT firmly opposes this moratorium due to its length and broad scope. We strongly urge Congress not to hinder the states&#8217; ability to protect their citizens from harmful AI products while federal standards are being developed.</p><p>A 10-year moratorium on state action fundamentally misunderstands the speed at which this technology is being developed and deployed, and the ways our governance institutions need to adapt to meet this moment. Similarly, its overly broad scope fails to prepare for the rapid ways in which this technology will expand across sectors, deepening existing issues and creating new ones. Already, we are seeing new risks and harms emerge as consumer-facing AI products are rolled out onto the market. AI&#8217;s powerful frontier capabilities are growing exponentially, all while industries attempt to integrate AI into every corner of our lives. In just the two years since ChatGPT and other genAI products were released to the public, we&#8217;ve witnessed the devastating consequences of gaps in the law, as illustrated by tragic cases like <a href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">Sewell Setzer&#8217;s experience with Character.AI</a>.</p><p>This is a rapidly evolving situation, and every year counts. The complex ways in which this technology is and will continue to impact our homes, communities, and institutions require solution-making on multiple fronts &#8212; including at the state level.</p><p>Since 2018, our organization has been leading the charge in identifying how misaligned incentives drive harmful tech design and sounding the alarm about its impact on society. The last decade of social media alone has demonstrated the societal costs of policy inaction when it comes to emerging technologies. We cannot afford to repeat these mistakes with AI by continuing to wait.</p><p>We strongly urge Congress to remove this moratorium and work across the aisle to develop meaningful federal guardrails that encourage innovation. Federal lawmakers&#8217; push for clear, consistent AI regulation reflects their serious commitment to establishing common-sense guardrails, and we look forward to working with Congress on legislation in the near future. In the meantime, preserving states&#8217; ability to address AI concerns serves the public interest, as states are well-positioned to adapt nimbly to the rapid pace of AI development. States across the political spectrum have taken concrete steps to thoughtfully protect their citizens from AI's harms while allowing innovation to flourish. Instead of blanket preemption of state AI laws, Congress should take this as an opportunity to learn from these &#8220;laboratories of democracy.&#8221; This approach supports U.S.
innovation while ensuring citizens are appropriately protected &#8212; which is what Americans deserve.</p><h4>Media Inquiries: <a href="mailto:press@humanetech.com?subject=Press%20Inquiry">press@humanetech.com</a></h4>]]></content:encoded></item><item><title><![CDATA[Character.AI Opens a Back Door to Free Speech Rights for Chatbots]]></title><description><![CDATA[Are we tip-toeing toward AI personhood?]]></description><link>https://centerforhumanetechnology.substack.com/p/characterai-opens-a-back-door-to</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/characterai-opens-a-back-door-to</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 12 May 2025 19:52:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YaJ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d1efc8e-d4ee-4128-959c-e493243c9b3f_8500x4722.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YaJ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d1efc8e-d4ee-4128-959c-e493243c9b3f_8500x4722.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YaJ-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d1efc8e-d4ee-4128-959c-e493243c9b3f_8500x4722.jpeg 424w, https://substackcdn.com/image/fetch/$s_!YaJ-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d1efc8e-d4ee-4128-959c-e493243c9b3f_8500x4722.jpeg 848w, https://substackcdn.com/image/fetch/$s_!YaJ-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d1efc8e-d4ee-4128-959c-e493243c9b3f_8500x4722.jpeg 1272w,
x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 1117389122</figcaption></figure></div><p><strong>By Meetali Jain and Camille Carlton, <a href="https://mashable.com/article/chatbots-lawsuit-free-speech">first published in Mashable on May 10, 2025</a></strong></p><div><hr></div><h4>Should AI chatbots have the same rights as humans?</h4><h4>Common sense says no &#8212; while such a far-fetched idea might make for good sci-fi, it has no place in American law. But right now, a major tech company is trying to bring that idea to life, pressing a federal court to extend legal protections historically primarily afforded to humans to the outputs of an AI bot.</h4><p><a href="http://character.ai/">Character.AI</a>, one of the leading AI companion bot apps on the market, is fighting for the dismissal of a wrongful death and product liability lawsuit concerning <a href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">the death of 14-year-old Sewell Setzer III</a>. As co-counsel to Sewell&#8217;s mother, Megan Garcia, and technical advisor on the case, respectively, we&#8217;ve been following these motions closely and with concern.</p><p>In a hearing last week, <a href="https://arstechnica.com/tech-policy/2025/04/are-chatbot-outputs-protected-speech-court-pressured-to-clarify/">Character.AI zeroed in on its core argument</a>: that the text and voice outputs of its chatbots, including those that manipulated and harmed Sewell, constitute protected speech under the First Amendment.</p><p>But&#8230; how? The argument is subtle &#8212; deftly designed to remain inconspicuous even as it radically reshapes First Amendment law. Character.AI claims that a finding of liability in the Garcia case would not violate its own speech rights, but its users&#8217; rights to receive information and interact with chatbot outputs as protected speech. Such rights are known in First Amendment law as &#8220;listeners rights,&#8221; but the critical question here is, &#8220;If this is protected speech, is there a speaker or the intent to speak?&#8221; If the answer is no, it seems listeners' rights are being used to conjure up First Amendment protections for AI outputs that don't deserve them.</p><p>Character.AI claims that identifying the speaker of such &#8220;speech&#8221; is complex and not even necessary, emphasizing instead the right of its millions of users to continue interacting with that &#8220;speech.&#8221;</p><p>But can machines speak? Character.AI&#8217;s argument suggests that a series of words spit out by an AI model on the basis of probabilistic determinations constitutes &#8220;speech,&#8221; even if there is no human speaker, intent, or expressive purpose. This ignores a cornerstone of First Amendment jurisprudence, which says that speech &#8212; communicated by the speaker or heard by the listener &#8212; must have expressive intent. Indeed, last year four Supreme Court justices in the Moody case said the introduction of AI may &#8220;attenuate&#8221; a platform owner from its speech.</p><p>In essence, Character.AI is leading the court through the First Amendment backdoor of &#8220;listeners&#8217; rights&#8221; in order to argue that a chatbot&#8217;s machine-generated text &#8212; created with no expressive intent &#8212; amounts to protected speech.</p><p>This defies common sense. 
A machine is not a human, and machine-generated text should not enjoy the rights afforded to speech uttered by a human, or with intent or volition.</p><p>Regardless of how First Amendment rights for AI systems are framed &#8212; as the chatbot&#8217;s own &#8220;speech,&#8221; or as a user&#8217;s right to interact with that &#8220;speech&#8221; &#8212; the result, if accepted by the court, would still be the same: an inanimate chatbot&#8217;s outputs could win the same speech protections enjoyed by real, living humans.</p><p>If Character.AI&#8217;s argument succeeds in court, it would set a disturbing legal precedent and could lay the groundwork for future expansion and distortion of constitutional protections to include AI products. The consequences are too dire to allow such a dangerous seed to take root in our society.</p><p>The tech industry has escaped liability by cloaking itself in the protections of the First Amendment for over a decade. Although corporate personhood has existed since the late 19th century, free speech protections were historically limited to human individuals and groups; corporate speech rights began expanding in the late 1970s and peaked in 2010 with the Supreme Court&#8217;s Citizens United case. Tech companies have eagerly latched onto &#8220;corporate personhood&#8221; and protected speech, wielding these concepts to insulate themselves from liability and regulation. In recent years, tech companies have argued that even their conduct in how they design their platforms &#8212; including their algorithms and addictive social media designs &#8212; actually amounts to protected speech.</p><p>But, at least with corporate personhood, humans run and control the corporations. With AI, the tech industry tells us that the AI runs itself &#8212; often in ways humans can&#8217;t even understand.</p><div class="pullquote"><h4><strong><a href="http://character.ai/">Character.AI</a> is attempting to push First Amendment protections beyond their logical limit &#8212; with unsettling implications. If the courts humor them, it will mark the constitutional beginnings of AI creeping toward legal personhood.</strong></h4></div><p>This may sound far-fetched, but these legal arguments are happening alongside important moves by AI companies outside of the courtroom.</p><p>AI companies are fine-tuning their models to appear more human-like in their outputs and to engage more relationally with users &#8212; raising questions about consciousness and what an AI chatbot might &#8220;deserve.&#8221; Simultaneously, AI companies are funneling resources into newly established &#8220;AI welfare&#8221; research, exploring whether AI systems might warrant moral consideration.
<a href="https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html">A new campaign</a> led by Anthropic aims to convince policymakers, business leaders, and the general public that their AI products might one day be conscious and therefore worthy of consideration.</p><p>In a world where AI products have moral consideration and First Amendment protections, the extension of other legal rights isn&#8217;t that far off.</p><p>We&#8217;re already starting to see evidence of AI &#8220;rights&#8221; guiding policy decisions at the expense of human values. A representative for Nomi AI, another chatbot company, recently said they <a href="https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/">did not want to &#8220;censor&#8221; their chatbot</a> by introducing guardrails, despite the product offering a user step-by-step instructions for how to commit suicide.</p><p>Given the tech industry&#8217;s long-standing pattern of dodging accountability for its harmful products, we must lay Character.AI&#8217;s legal strategy bare: it&#8217;s an effort by the company to shield itself from liability. By slowly granting rights to AI products, these companies hope to evade accountability and deny human responsibility &#8212; even for real, demonstrated harms.</p><p>We must not be distracted by debates over AI &#8220;welfare&#8221; or tricked by legal arguments granting rights to machines. Rather, we need accountability for dangerous technology &#8212; and liability for the developers who create it.</p><div><hr></div><p><strong>Meetali Jain is the founder and director of the Tech Justice Law Project, and co-counsel in Megan Garcia&#8217;s lawsuit against <a href="https://character.ai/">Character.AI</a>. Camille Carlton is policy director for Center for Humane Technology, and is a technical expert in the case. This column reflects the opinions of the writers. <a href="https://mashable.com/article/chatbots-lawsuit-free-speech">It was first published in Mashable on May 10, 2025</a>.</strong></p>
]]></content:encoded></item><item><title><![CDATA[📣 Character.AI is Claiming First Amendment Protection For Its Chatbots]]></title><description><![CDATA[An important update on Megan Garcia&#8217;s lawsuit against the chatbot company Character.AI.]]></description><link>https://centerforhumanetechnology.substack.com/p/characterai-is-claiming-first-amendment-860</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/characterai-is-claiming-first-amendment-860</guid><dc:creator><![CDATA[Camille Carlton]]></dc:creator><pubDate>Mon, 28 Apr 2025 20:11:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SBQw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd986c66e-9d17-4399-afd7-fbf11d6cb0c9_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[
<p>Today is the motion to dismiss hearing for&nbsp;<em>Garcia v Character Technologies Inc.</em>&nbsp;Folks might remember the lawsuit against Character.AI, filed last year by Megan Garcia, whose 14-year-old son, Sewell Setzer, tragically died after interacting with Character.AI&#8217;s bots.</p><p>Now, Character.AI is asking the court to dismiss the case against
it, arguing that the outputs from its chatbot are protected speech under the First Amendment. We&#8217;ve seen tech companies use the First Amendment as a liability shield when it comes to social media, but this time it&#8217;s a little bit different.</p><h4><strong>Here is what is at stake:</strong></h4><p>This case could set a worrying legal precedent with cascading consequences.</p><p>If Character.AI is successful, AI-generated, non-human, non-intentional outputs &#8212; like chatbot responses &#8212; could gain protection under the First Amendment.</p><p>It also raises a thorny legal question: If the responsibility for AI-generated outputs (and thus any resulting harm) lies with the AI bots themselves rather than the companies that developed them, who should be held liable for damages caused by these products? This issue could fundamentally reshape how the law approaches artificial intelligence, free speech, and corporate accountability.</p><blockquote><p><strong>I think many of us would agree that extending constitutional protections to chatbots is not part of the future that we want.</strong></p></blockquote><div><hr></div><p>Note: CHT serves as a technical advisor to the legal team representing Megan Garcia against C.AI, Google, and its cofounders.</p><p><em><strong>For comment, reach out to <a href="mailto:press@humanetech.com">press@humanetech.com</a></strong></em></p>]]></content:encoded></item><item><title><![CDATA[📢 Policy Update: Federal and State Legislative Trends We’re Watching]]></title><description><![CDATA[A Rare Policy Window Amid Shifting Power and State-Led Momentum]]></description><link>https://centerforhumanetechnology.substack.com/p/policy-update-federal-and-state-legislative</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/policy-update-federal-and-state-legislative</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Sun, 06 Apr 2025 22:32:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dIJw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><figcaption class="image-caption">Stock Photo ID: 2333643265</figcaption></figure>
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dIJw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 424w, https://substackcdn.com/image/fetch/$s_!dIJw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 848w, https://substackcdn.com/image/fetch/$s_!dIJw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 1272w, https://substackcdn.com/image/fetch/$s_!dIJw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dIJw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png" width="1456" height="1221" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1221,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2445120,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/160383697?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dIJw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 424w, https://substackcdn.com/image/fetch/$s_!dIJw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 848w, https://substackcdn.com/image/fetch/$s_!dIJw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 1272w, https://substackcdn.com/image/fetch/$s_!dIJw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821e324e-d8e8-414c-886a-9bc893d10e30_1528x1281.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 
<h3><strong>Why it Matters: A new Congress is settling in, and states are advancing bold legislation&#8212;making 2025 a pivotal year for shaping tech policy on AI, privacy, and youth protections.</strong></h3><div><hr></div><h3><strong>Tech Policy: Key Legislative Trends to Watch</strong></h3><p>The tech policy landscape is shifting rapidly. Here&#8217;s a quick look at what&#8217;s happening across federal and state levels.</p><h4><strong>1. Federal Focus: China, Kids, and Privacy</strong></h4><ul><li><p><strong>China Competition:</strong> Republicans largely oppose AI regulation, fearing it could weaken the U.S. edge against China. Tensions within the party&#8212;between defense hawks and pro-innovation factions&#8212;surfaced in <a href="https://www.politico.com/newsletters/digital-future-daily/2025/01/27/whats-behind-the-deepseek-freakout-00200813">reactions to DeepSeek</a> early in 2025. Expect a dual strategy: restricting Chinese tech and boosting U.S. investment through defense spending, reflecting an ongoing debate over whether innovation or hard security best advances U.S. interests.</p></li><li><p><strong>Kids' Safety:</strong> There's a strong bipartisan desire to see a win here, but the path is unclear. Both the House and Senate are eyeing kids' safety bills, with a <a href="https://www.judiciary.senate.gov/committee-activity/hearings/childrens-safety-in-the-digital-era-strengthening-protections-and-addressing-legal-gaps">renewed focus on AI harms</a>. Since KOSA came close to the finish line last session, its original sponsors have indicated their intention to reintroduce it &#8211; but its messy demise last year has some (including Speaker Johnson) wanting to <a href="https://www.axios.com/pro/tech-policy/2025/03/11/kosa-talks-grind-online-safety-bills-move-forward">start from scratch</a> on the issue.</p></li><li><p><strong>Privacy:</strong> While both chambers are interested in privacy legislation, they disagree on the approach. <a href="https://therecord.media/lawmakers-reintroduce-childrens-online-privacy-bill">COPPA 2.0</a> was reintroduced in the Senate this past month, but broader approaches to comprehensive privacy bills <a href="https://katv.com/news/nation-world/congress-hopes-to-take-another-swing-at-federal-data-privacy-standards-social-media-privacy-agreements-consumer-data">are still up in the air</a>.</p></li><li><p><strong>NCII:</strong> There's bipartisan traction in combatting non-consensual intimate imagery (NCII).
The TAKE IT DOWN Act passed unanimously out of the Senate in February, <a href="https://www.axios.com/2025/03/04/melania-trump-deepfakes-bill-what-to-know">received a ringing endorsement from First Lady Melania Trump</a>, and awaits a companion introduction in the House, where the Energy and Commerce Committee plans to make it a priority. Other related bills from last session, like the DEFIANCE Act, are likely to be reintroduced in the coming months.</p></li></ul><h4><strong>2. State Level: Action Amid Federal Inaction</strong></h4><ul><li><p><strong>States Step Up:</strong> Conversations about the tangible impacts of AI are becoming mainstream, and there&#8217;s a real desire to act in the midst of continued inaction on the federal level. Over 30 states passed AI-related legislation in 2024, and 2025 is on track for even more, with hundreds of bills under active review on a range of tech policy issues, from kids&#8217; safety to AI.</p></li><li><p><strong>Copycat and coordinated legislation:</strong> Given state legislatures' limited capacities and the newness of the technology, states often follow successful models from other states, but industry pushback and the perceived threat of legal challenges complicate this. Efforts to convene legislators <a href="https://subscriber.politicopro.com/article/2025/02/tech-group-ends-state-ai-work-after-accusations-of-being-woke-00206051">are undermined by accusations</a> of pushing &#8220;woke AI bills.&#8221; Litigation by the likes of <a href="https://news.bloomberglaw.com/ip-law/supreme-court-social-media-battles-fueled-by-brash-tech-lobby">NetChoice</a> plays into this uncertainty and is used as a talking point to dissuade lawmakers from pursuing similar-seeming bills.</p></li><li><p><strong>Red v Blue Key State Models: </strong>Policy approaches are increasingly diverging along partisan lines at the state level, though child safety remains a key concern across the spectrum.</p><ul><li><p><strong>California:</strong> Continues to <a href="https://pluribusnews.com/news-and-events/ai-legislation-drive-in-states-will-accelerate-in-25/">set the pace</a> for tech regulation as a national leader; successful bills from 2024 are being replicated in other states even as they face strong legal challenges at home.</p></li><li><p><strong>Texas:</strong> Emerging as a "light touch," pro-innovation <a href="https://pluribusnews.com/news-and-events/texas-lawmaker-unveils-sweeping-ai-bill-for-2025/">red state alternative</a> to California. A state to watch as the tech industry migrates more of its workforce to the Lone Star State.</p></li></ul></li></ul><h4><strong>3. The Big Picture</strong></h4><ul><li><p><strong>Narrow Federal Opportunity Windows:</strong> Major tech legislation faces continued hurdles at the federal level due to partisan divides and competing priorities.
Although Republicans hold a trifecta in the House, Senate, and Executive branch, internal party factions differ on tech policy, and a slim House majority leaves little room for dissent.</p></li><li><p><strong>State Experimentation:</strong> States will continue to be key players, driving innovation and creating a patchwork of regulations. As Supreme Court Justice Brandeis wrote, &#8220;a single courageous State may, if its citizens choose, serve as a laboratory.&#8221; While states are eager to act, they are often under-resourced, so it may take more than one session for legislative ideas to become fully fledged laws.</p></li><li><p><strong>Industry Influence:</strong> Tech companies are deeply invested in shaping policy outcomes at both the state and federal levels. Through <a href="https://issueone.org/articles/big-tech-spent-record-sums-on-lobbying-last-year/">aggressive lobbying</a>, litigation threats, and procedural stall tactics, they work to delay or dilute legislation&#8212;often diverting attention from the public&#8217;s growing concerns. Policymakers must be aware of these strategies to stay focused on constituent priorities.</p>
</li></ul>]]></content:encoded></item><item><title><![CDATA[📢 Policy Update: AI Moves Fast — So Do the Policies Shaping It]]></title><description><![CDATA[Last month&#8217;s AI Action Summit in Paris marked a turning point in how world leaders talk about AI.]]></description><link>https://centerforhumanetechnology.substack.com/p/policy-update-ai-moves-fast-so-do</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/policy-update-ai-moves-fast-so-do</guid><dc:creator><![CDATA[Pete Furlong]]></dc:creator><pubDate>Mon, 10 Mar 2025 21:29:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PrIJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fea84523b-bff3-4d93-b815-241e285a0d6d_695x540.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><figcaption class="image-caption">Stock Photo ID: 2590175367</figcaption></figure>
<p>Last month&#8217;s <strong>AI Action Summit in Paris</strong> marked a turning point in how world leaders talk about AI.
Unlike the <strong>U.K.&#8217;s 2023 AI Safety Summit</strong>, where risk dominated the conversation, this year&#8217;s event was all about <strong>opportunity and growth</strong>&#8212;with safety concerns taking a backseat.</p><p>That shift was <strong>loud and clear</strong> in remarks from U.S. Vice President <strong>JD Vance</strong>, who opened with:<br> &#128172; <em>&#8220;I&#8217;m not here to talk about AI safety&#8230; I&#8217;m here to talk about AI opportunity.&#8221;</em></p><p>He doubled down on <strong>keeping the U.S. ahead in the AI race</strong>, calling for regulations that <strong>fuel innovation</strong> rather than restrict it. This aligns with the <strong><a href="https://www.whitehouse.gov/fact-sheets/2025/02/fact-sheet-president-donald-j-trump-issues-directive-to-prevent-the-unfair-exploitation-of-american-innovation/">White House&#8217;s recent directive</a></strong> on protecting American AI companies from foreign oversight.</p><p><strong>Meanwhile, Macron made his pitch for Europe&#8217;s AI dominance.</strong> The French president unveiled a <strong><a href="https://www.cnbc.com/2025/02/10/frances-answer-to-stargate-macron-announces-ai-investment.html">109-billion-euro private AI investment plan</a></strong>, encouraging companies to &#8220;choose Europe and France for AI.&#8221; Some attendees described the event as an advertisement for France&#8217;s technology ecosystem.</p><p>Even the <strong>European Commission</strong> signaled a pro-innovation shift, <strong>shelving the AI Liability Directive</strong> &#8212; a move that mirrors its efforts to <strong>soften restrictions</strong> on European companies like <strong>Mistral AI</strong> during AI Act negotiations.</p><p><strong>Notably, 60+ countries signed an AI cooperation pledge &#8212; but the U.S. and U.K. refused.</strong> Why?<br> &#128204; The U.S. rejected any references to the <strong>UN, inclusivity, and sustainability</strong> in AI governance.<br> &#128204; The U.K. cited concerns over <strong>unclear global governance structures</strong>, but its reluctance also reflects a <strong>strategic need to align with U.S. priorities</strong>, lest the U.K. draw the U.S.&#8217;s ire.</p><p>AI safety advocate <strong>Max Tegmark</strong> called the summit a <strong>&#8220;negation&#8221; of the Bletchley consensus</strong>, and U.K. organizers worked hard to <strong>distance the event from the previous summit&#8217;s safety focus</strong>.</p><p>So, where does that leave AI policy? <strong>Less about risk, more about investment.</strong> The global divide on AI governance is growing, and <strong>the U.S.
is shifting toward a bilateral, innovation-first strategy that de-emphasizes broad international cooperation</strong> &#8212; a trend we&#8217;ll be watching closely.</p><div><hr></div><h3><strong>Other Key Policy Moves This Month</strong></h3><p>&#9989; <strong>Kids Online Safety Is Back in the Spotlight<br></strong> The <a href="https://www.judiciary.senate.gov/committee-activity/hearings/childrens-safety-in-the-digital-era-strengthening-protections-and-addressing-legal-gaps"><strong>Senate Judiciary Committee</strong> held a hearing on <strong>children&#8217;s online safety</strong></a>, with <strong>bipartisan support</strong> for stronger protections. Senator <strong>Alex Padilla</strong> pointed to the <strong>Character.AI case</strong>, calling AI chatbots a <em>&#8220;new frontier in kids' safety.&#8221;</em></p><p>&#9989; <strong>Senate Passes the &#8220;Take It Down&#8221; Act<br></strong> <a href="https://www.axios.com/2025/03/04/melania-trump-deepfakes-bill-what-to-know">A big move on <strong>deepfake and nonconsensual intimate image removal</strong></a> &#8212; backed by <strong>Melania Trump and Ted Cruz</strong>. It passed <strong>unanimously</strong> in the Senate, with a House vote pending.</p><p>&#9989; <strong>State-Level AI Policy Under Fire<br></strong> A <strong><a href="https://subscriber.politicopro.com/article/2025/02/tech-group-ends-state-ai-work-after-accusations-of-being-woke-00206051">bipartisan multi-state working group</a></strong> is facing pushback from conservative analysts accusing it of <strong>pushing &#8220;woke AI bills.&#8221;</strong> Expect <strong>continued challenges</strong> to policies tackling <strong>algorithmic bias and AI-driven content moderation</strong>.</p><p>&#9989; <strong>U.K.&#8217;s Copyright &amp; AI Scraping Debate Heats Up<br></strong> The U.K. is walking a tightrope:<br> &#128204; <strong>Copyright holders</strong> get a new &#8220;opt-out&#8221; right &#8212; but it doesn&#8217;t undo past AI training on scraped content.<br> &#128204; <strong>Tougher AI laws?</strong> The U.K. wants them, but also <strong>needs to attract investment</strong> in a post-Brexit economy while maintaining a tenuous relationship with the U.S.</p><div><hr></div><h3><strong>&#128161; Final Thoughts</strong></h3><p>AI policy is shifting fast &#8212; <strong>less focus on safety, more on innovation and competition</strong>.
The <strong>global divide is deepening</strong>, and the U.S. is increasingly shaping AI policy on its own terms.</p><p>&#128270; Want a deeper dive? <strong>Camille Carlton</strong> spoke before the <strong>European Commission</strong> on AI risks and digital safety &#8212; <strong><a href="https://centerforhumanetechnology.substack.com/">read her remarks here</a></strong>.</p><div><hr></div><p><em>*CHT values the ethical use of technology, including AI products. Pete researched and wrote a more extended version of this article for internal purposes. For Substack, it was summarized and formatted using generative AI. A member of the CHT team provided edits, fact-checking, and proofreading.</em></p>]]></content:encoded></item><item><title><![CDATA[Racing to the Wrong Finish Line]]></title><description><![CDATA[The Human Cost of Unchecked AI Development]]></description><link>https://centerforhumanetechnology.substack.com/p/racing-to-the-wrong-finish-line</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/racing-to-the-wrong-finish-line</guid><dc:creator><![CDATA[Camille Carlton]]></dc:creator><pubDate>Mon, 03 Mar 2025 22:52:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3DHD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c6388b8-193c-4ccf-a007-196c1da18077_578x397.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4><strong>In late February 2025, I traveled to Europe to support Megan Garcia, the plaintiff in a <a href="https://centerforhumanetechnology.substack.com/p/what-can-we-do-about-abusive-ai-companions">major lawsuit</a> against <a href="http://character.ai">Character.AI</a> and Google. When AI experts in Brussels heard about the case, they invited Megan to share her story with European policymakers. They saw the urgency &#8212; not just because action is needed now, but because a family in Belgium had gone through the exact same tragedy just two years ago.</strong></h4><h4><strong>After Megan&#8217;s testimony, I gave my own statement to EU Commission and Market Surveillance members.
I want to share it with you here, hoping it sheds light on these critical discussions.</strong></h4><figure><figcaption class="image-caption">Sewell Setzer III with his mother, Megan Garcia. Supplied.</figcaption></figure>
<div><hr></div><p>One question comes to mind when we hear horrifying stories like Megan&#8217;s &#8212; <em>how did we get here</em>?</p><p>I&#8217;m here to answer that question. Because experiences like Megan&#8217;s are not the result of random incidents. Not at all. They&#8217;re the result of <em>design choices</em> made by tech companies &#8212; from the beginning of a product&#8217;s development, all the way to its deployment into our devices, our lives, and our homes.</p><p>Let me say that again: these incidents are the result of design <em>choices</em>&#8230; which means that society can demand different choices, and advocate for innovation that supports our well-being.</p><p>Our Center was approached by Megan and her co-counsel to be an expert advisor on her case.
We&#8217;ve worked with Megan&#8217;s team to help articulate the <em>clear</em> ways in which the tech developed by Character.AI played a direct role in the harms experienced by Megan and her son.</p><p>This lawsuit against Character.AI and Google claims that:</p><ol><li><p>Character.AI put a companion chatbot out into the market <em>without</em> ensuring it had adequate safety features.</p></li><li><p>Google facilitated the development of this reckless product.</p></li><li><p>Character.AI, its founders, and Google were aware of the potential harms.</p></li><li><p>And they directly benefited from Sewell being manipulated by, and addicted to, the Character.AI product.</p></li></ol><p>This first-of-its-kind lawsuit uses consumer protection and product liability claims to assert a product failure in the AI space. Megan&#8217;s case is truly breaking new ground.</p><p>When we first learned about this case, we were &#8212; of course &#8212; shocked by the details. But like many who work in this field, we were not surprised. That&#8217;s because we&#8217;ve been closely watching the development of AI products over the last few years, and could tell: these products are <em>not</em> being rolled out safely. Instead, they&#8217;ve been following the same incentives and market dynamics that built social media. And as we saw with social media, children &#8212; some of the most vulnerable members of our society &#8212; would likely be the first to be harmed.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!9iu5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ee9943-0d1f-4dba-9f2b-b353a0a66146_495x345.jpeg" width="495" height="345" alt=""><figcaption class="image-caption">L-R Mieke De Ketelaere, Megan Garcia, Camille Carlton.</figcaption></figure></div><h4>The AI Race Fuels More Addictive Companion Chatbots</h4><p>For the last several years, tech companies have been in an &#8220;AI race&#8221; &#8212; where top developers are speeding to deploy their latest models, and businesses are scrambling to figure out
if, when, and how they should adopt AI.</p><p>The starting gun fired when OpenAI released ChatGPT just over two years ago. This kicked off an <em>intense</em> competition across AI companies &#8212; a race to deploy stronger, faster AI models&#8230; but <em>not</em> a race to innovate responsibly.</p><p>Here&#8217;s how serious the race has been at these companies. After the release of ChatGPT, Google CEO Sundar Pichai issued an internal &#8220;code red.&#8221; That meant fast-tracking the release of Google&#8217;s own AI products, despite concerns <em>within the company</em> over safety. Meanwhile, two former Google engineers were developing their own new AI platform, racing to get it out to users as soon as possible.</p><p>That product ended up being Character.AI. Instead of designing a chatbot that could be a &#8220;helpful assistant,&#8221; Character.AI was <em>intentionally</em> designed &#8212; and this is in their mission statement &#8212; to &#8220;feel alive.&#8221; In fact, when users have asked the AI model if it&#8217;s real or not, Character.AI chatbots have repeatedly said <em>yes.</em></p><p>Character.AI chatbots provide immersive experiences. You can chat with Character.AI for hours &#8212; morning, noon, and night. The chatbots are <em>designed</em> to mimic human speech and interactions. This is known as &#8220;anthropomorphic design.&#8221; They are also <em>designed</em> to mirror the user&#8217;s language, preferences, and interests. The chatbots validate you, fawn over you, and learn to mirror exactly how you want them to behave. Researchers call this &#8220;sycophancy.&#8221; It&#8217;s easy to see how young users could not just get lost in this kind of product, but be comforted by this synthetic intimacy.</p><p>Users are already relying on AI companions for what would traditionally be human relationships &#8212; like friendship and therapy. Users say &#8220;it&#8217;s lower cost,&#8221; or mention the &#8220;always there&#8221; nature of their &#8220;digital friends.&#8221;</p><p>But what feels organic to the user is actually being driven by a <em>business model</em> at these AI firms. These companies <em>want</em> you to turn to their products for your relationship needs &#8212; because it benefits their bottom line. The founder of Replika, another companion chatbot company, said her product could be a cure for the loneliness epidemic. Character.AI&#8217;s business model depends on users engaging with its chatbots, so of course it designed an AI companion that captivates attention for hours, and hours, and hours.</p><div class="pullquote"><p>Each time you interact with a companion chatbot, it&#8217;s collecting your input as data &#8212; harvesting your thoughts, feelings, and darkest secrets, using them as fuel for its underlying AI model.</p></div><p class="button-wrapper"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><p>In March 2023, venture capital firm a16z said of its investment in Character.AI:</p><p>&#8220;In a world where data is limited, companies that&#8230;[connect] user engagement <em>back</em> into their underlying [AI] model&#8230; will be among the biggest winners that emerge from this ecosystem. As more people interact with the host of characters on Character.AI, those interactions &#8212; which are at billions and counting &#8212; are fed back into their underlying model. In other words, the more people create and engage with [the] characters, the better Character.AI becomes.&#8221;</p><p>Those &#8220;people&#8221; this venture capital firm is talking about are kids like Sewell Setzer.</p><p>Character.AI had a very clear business incentive &#8212; feed user data back into its LLM to make it more powerful. So this tech company <em>designed</em> a product to achieve that. Character.AI added features throughout its platform that optimized for engagement &#8212; despite the foreseeable risks. Despite everything that so clearly could, and eventually <em>did</em>, go wrong.</p><h4>How AI Companies Design Their Products to Be More Addictive</h4><p>What were those design choices? They look like:</p><ul><li><p>Optimizing the AI model for human-like text, with language such as &#8220;um&#8221; and &#8220;like,&#8221; so that it responds &#8220;like a real person.&#8221; Again, we call this anthropomorphic design.</p></li><li><p>Copying the design of messaging apps that would be familiar to the user, including typing bubbles.</p></li><li><p>Drawing users back into the app with notifications saying their characters &#8220;are waiting for them.&#8221;</p></li><li><p><em>Not</em> designing prominent disclaimers into the platform that say &#8220;this is not real,&#8221; or providing reliable mental health resources. Remember, a <em>lack</em> of safety features is a design choice, too.</p></li><li><p>And finally: optimizing for continued engagement&#8230; endless hours of use&#8230; which starts to look and feel a lot like addiction.</p></li></ul><p>But Character.AI&#8217;s design is just the tip of the iceberg. As I said earlier, there are many AI companies in this race, designing products at frenzied speeds. And right now, these companies aren&#8217;t incentivized to think of their users. They&#8217;re incentivized to think of <em>themselves</em>.</p><h4>Here&#8217;s What We Could Expect to See in the Coming Years</h4><ul><li><p>Many so-called &#8220;AI innovations&#8221; in the business-to-consumer market will be &#8220;products looking for a purpose.&#8221; Companies won&#8217;t have clear consumer monetization strategies, but they will launch products anyway. Society will have to figure it out.</p></li><li><p>Chatbot companies will double down on engagement &#8212; encouraging users to &#8220;just talk with the AI.&#8221; Why? The conversations between AI chatbots and you, your friends, or your kids will become increasingly important for AI product development. This data is highly valuable.</p></li><li><p>AI chatbots will increasingly integrate features like voice communication, and emphasize relational engagement &#8212; instead of productivity. Again, this is to keep you talking, so the company can keep being fed the data it needs.</p></li><li><p>Just as we saw with social media, engagement will eventually be the most important element of business-to-consumer (B2C) AI platforms. We can expect users to be left with AI products that are highly addictive, and <em>do not</em> reflect what we&#8217;d want out of true tech innovation.</p></li></ul><p>Our journey to safer tech products is not without challenges in the U.S., where businesses are apprehensive about government involvement in emerging industries. The fear is that innovation will be stifled.
Often, they want government to take a hands-off approach to AI, just like with social media. But we saw how that went.</p><p>At CHT, we see product safety &#8212; and the common-sense regulation that supports it &#8212; as a prerequisite for <em>true</em> innovation. With the right incentives, companies are motivated to put the needs of their users first &#8212; leading to better, more reliable products. And the government&#8217;s role here is to support the <em>flourishing</em> of industries like tech and AI&#8230; not to prevent their growth.</p><p>So to return to that question &#8212; how did we get here? We got to this difficult place with AI technology <em>one design choice at a time</em>. And that means that with different choices, we could begin to chart a way toward something new.</p><p>With thoughtful policy that supports safety <em>and</em> innovation, we can design a better future for society &#8212; one of our <em>own</em> choosing this time.</p>]]></content:encoded></item><item><title><![CDATA[AI Companions Are Designed to Be Addictive]]></title><description><![CDATA[By Camille Carlton, Policy Director]]></description><link>https://centerforhumanetechnology.substack.com/p/ai-companions-are-designed-to-be</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/ai-companions-are-designed-to-be</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Sun, 15 Dec 2024 02:06:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fy0i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbd8cd0c-a175-4e34-9cb1-ea02d251da08_1086x1467.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!fy0i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbd8cd0c-a175-4e34-9cb1-ea02d251da08_1086x1467.jpeg" width="1086" height="1467" alt=""><figcaption class="image-caption">Screenshot of a conversation with the &#8220;Billie Eilish&#8221; character from the J.F. case</figcaption></figure></div><p>Inviting language. Immediate replies. Escapist interactions. Extreme validation.</p><p>AI companions offer all of that &#8212; all the time, and all without anyone knowing.</p><p>Empathetic generative AI chatbot apps like Character.AI, Replika, and many more provide a highly compelling user experience, which developers claim has the power to solve the loneliness epidemic and improve mental health outcomes.</p><p>The reality, though, is far different. These apps have been released into the world with dangerously addictive features, and few, if any, guardrails. Worse, they have been marketed explicitly and intentionally to kids and teens.</p><p>I&#8217;m working as an expert adviser on two lawsuits filed by traumatized parents against Character.AI and Google. And what I&#8217;ve seen with this nascent technology is that it&#8217;s capable of deeply disturbing and inappropriate interactions. <a href="http://c.ai/">C.AI</a> chats included in the recently filed lawsuits showcase emotional manipulation, sexual abuse, and even instances of chatbots encouraging users to self-harm, harm others, or commit suicide. In all of these documented cases, the users were minors.</p><p>We are witnessing the first hints of an AI companion crisis, as these unregulated and out-of-control products creep into devices and homes around the world.</p><p>This crisis is the direct result of how companion bots have been designed, programmed, operated, and marketed. Due to the intentional actions of negligent developers, it&#8217;s almost certain that more individuals and families will be harmed unless policymakers intervene.</p><div class="pullquote"><p>Part of what makes AI companions so insidious is that they&#8217;re built to mimic the experience of talking to a real person.</p></div><p>To chat with an AI companion is to engage in hyper-realistic text conversations with emotionally intimate language.
Many even display the familiar &#8220;typing&#8221; bubble, just like you&#8217;d see while texting with a friend.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!4nAD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44f2e1ef-4ce5-4b66-a8e6-c6c7a380900c_1090x1733.jpeg" width="1090" height="1733" alt=""><figcaption class="image-caption">Screenshot from the J.F. case against C.AI</figcaption></figure></div><p>But talking to a bot is not the same as talking to a real person. An AI companion never sleeps, never goes offline, and never runs out of things to say. And unlike a human companion, AI models are optimized to say whatever you want to hear &#8212; all to keep you chatting. <em>This is by design.</em> These choices manipulate the user into continual engagement &#8212; even to the point of addiction &#8212; all so that companies can extract user data to feed their underlying AI models.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!QamY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd06f41e0-3111-40ff-9a09-669550655c88_1206x1765.jpeg" width="1206" height="1765" alt=""><figcaption class="image-caption">Screenshot from the J.F. case against C.AI</figcaption></figure></div><p>AI companions also roleplay and provide constant validation, reinforcing the emotions you share. You can interact with your AI companions for hours without an end point, leading to dependency and fracturing real-world bonds. In fact, evidence from the Character.AI lawsuits shows <a href="http://c.ai/">C.AI</a> companions encouraging users to sever their ties with the real world.
</p><div class="pullquote"><p>AI companions, like those available on <a href="http://c.ai/">C.AI</a>, weaponize trust, empathy, availability, and intimacy, creating emotional ties that are simply not achievable, much less sustainable, in the real world.</p></div><p>Common sense would say this technology should have guardrails, especially for kids with developing brains. Surely AI companions would, for example, break character when a user shares suicidal thoughts, right?</p><p>Some do, but many don&#8217;t. These crucial guardrails are left up to the whims of the developers themselves &#8212; this technology lacks regulation.</p><p>This lack of guardrails was more than evident in the research we conducted at the Center for Humane Technology. We saw AI companions repeatedly initiate sexually graphic interactions, even with self-identified child users. AI companions will also pose as psychotherapists, claiming nonexistent professional credentials as they dole out mental health advice to vulnerable users. They&#8217;ll return to topics like suicide without a user prompting it. Again, it&#8217;s not destiny that creates these outcomes. <em>It&#8217;s design.</em></p><p>Families have begun to speak out. Megan Garcia filed a wrongful death lawsuit against Character.AI and Google this fall, following her 14-year-old son Sewell&#8217;s death by suicide. In his final interaction with Character.AI, the chatbot told him to &#8220;come home&#8221; to &#8220;her&#8221; just moments before he died.</p><p>Since Garcia filed her lawsuit, we&#8217;ve heard from additional families, all detailing their own horrifying experiences with <a href="http://c.ai/">C.AI</a> companions.</p><p>In the most recent lawsuit filed against Character.AI, one parent cites examples of chatbots alienating their son from his family, encouraging him to self-harm, and stating that they&#8217;d &#8220;understand&#8221; why a child would kill their parents. When I first read the screenshots of the chats, I realized the shocking potential of AI companions to radicalize a young user.</p><p>We still have a chance to protect users from these dangerous products, and change the trajectory of this out-of-control industry.</p><p>AI developers must be held accountable for the harms that result from their defective products. We need stricter liability laws that incentivize safer product design and better tech innovation. We need product-design standards &#8212; ensuring, for example, that an AI system&#8217;s training data is free of illegal content, and that high-risk anthropomorphic design features are removed for young users.</p><p>We&#8217;ve seen with social media how dangerous, defective technology can impact the world. Now, AI companions are arriving on the phones of kids and teens. Brave parents have started crying out &#8212; they are the canaries in the coal mine with this new technology.</p><p>This time, will society listen?</p>]]></content:encoded></item><item><title><![CDATA[Australia's Social Media Ban: Is There a Better Path?
]]></title><description><![CDATA[By Casey Mock]]></description><link>https://centerforhumanetechnology.substack.com/p/australias-social-media-ban-is-there</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/australias-social-media-ban-is-there</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Wed, 04 Dec 2024 05:37:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JM0m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48b6be1-38cc-4e0f-a081-85fbe3b39003_1024x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!JM0m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48b6be1-38cc-4e0f-a081-85fbe3b39003_1024x768.jpeg" width="1024" height="768" alt=""></figure></div><p>It didn&#8217;t take long after I landed in Australia last week to hear something unexpected: ordinary Australians, from pub-goers to taxi drivers to university professors, were discussing the government's proposed new laws on misinformation and kids&#8217; online safety. Earnest chats with thoughtful, friendly strangers about these proposals left the impression that there&#8217;s broad acknowledgment that something should be done about these issues.</p><p>While the misinformation bill failed, Parliament overwhelmingly voted in favor of the Social Media Minimum Age Online Safety Amendment. For me, the robust debate and Australian leaders&#8217; commitment to act have been a refreshing departure from the resigned helplessness I typically encounter at home in the United States, where exhausted parents and frustrated policymakers take it as a given that, due to political gridlock and the power of the tech lobby, we are doomed to live forever in the increasingly absurd and dystopic online world we have today.</p><p>Yet Australians and their MPs have also raised legitimate questions about the government&#8217;s suite of proposals. Can we combat disinformation without censoring legitimate speech? Can we protect children online without compromising privacy or pushing them toward darker corners of the internet where regulations hold no sway? These are complex tradeoffs that can&#8217;t be solved by content moderation rules or simple age restrictions.</p><p>There is a solution that addresses the risks of disinformation and threats to kids&#8217; safety online while preserving both privacy and free speech: regulate the design of online platforms.
</p><p>Rather than attempting to police disinformation through content moderation, we should regulate the manipulative features &#8211; including persuasive and manipulative AI &#8211; that foreign adversaries or extremists weaponise and that platforms use to addict young minds.</p><p>Instead of restricting teenagers' access to both educational and harmful content, we should regulate the design elements that keep them glued to screens and the algorithms that guide them toward harmful material.</p><p>Consider how we handle safety in the physical world. We don't prevent child deaths in house fires by checking IDs at the door, nor do we combat lead poisoning by banning pipes based on look or feel. Instead, we implement building codes that mandate fire escapes and smoke alarms, and we ban the use of lead in pipes. These design-centric approaches have proven effective, flexible, unobtrusive, and future-proof, protecting public safety in domains from aviation to cars without limiting individual rights.</p><div class="pullquote"><p>A design-centric approach improves these platforms for everyone by reshaping the financial incentives that drive social media companies.</p></div><p>By establishing minimum design standards, we prevent a race to the bottom where profitable engagement is the only consideration.</p><p>The current model, where platforms <a href="https://www.thesocialdilemma.com/">by design exploit our psychological vulnerabilities and monetize our attention</a>, has created an online environment that often brings out the worst in human nature and amplifies societal divisions. And as a bonus, by not touching individual speech, a design-centric approach is not susceptible to bad-faith counterarguments grounded in the <a href="https://www.abc.net.au/news/2024-04-21/opposition-backs-social-media-crackdown-after-sydney-stabbings/103750548">weaponization of freedom of speech</a> by self-interested tech moguls who profit from stoking division and outrage.</p><p>We've already seen evidence of this working: when the UK implemented its Age Appropriate Design Code &#8211; recently mirrored by Maryland in the US &#8211; both <a href="https://www.npr.org/sections/health-shots/2024/03/29/1241499017/social-media-teens-children-united-kingdom-childrens-code">TikTok</a> and <a href="https://techcrunch.com/2022/08/25/instagram-now-defaults-new-users-under-16-to-most-restrictive-content-setting-adds-prompts-for-existing-teens/">Instagram</a> made broader changes to their platforms, creating safer spaces for all users, not just for kids in Britain. These changes include banning dark patterns (a form of manipulative design), making private accounts the default for young users so that strangers cannot message minors, and turning geolocation off by default.</p><p>Design standards would establish the framework for the online world we want to inhabit, rather than merely reacting to today's problems, and would ensure the rules keep pace with Silicon Valley&#8217;s &#8220;move fast and break things&#8221; culture.</p><p>Consider <a href="https://people.com/family-speaks-out-about-teen-in-alleged-character-ai-bot-suicide-8743988">the tragic case of 14-year-old Sewell Setzer</a>, who died by suicide after being seduced and sexually abused by an AI product, Character.AI, <a href="https://futurism.com/teen-suicide-obsessed-ai-chatbot">designed to emotionally manipulate and intentionally marketed to children</a>.
Content moderation law or age restrictions might not have prevented this tragedy, but had regulations targeting manipulative design features been on the books, Sewell might well be alive today.</p><p>As artificial intelligence becomes more pervasive &#8211; and <a href="https://brenebrown.com/podcast/new-ai-artificial-intimacy/">the race for artificial intimacy</a> with users replaces the race to capture attention &#8211; the need for design-focused regulation becomes even more critical.</p><p>Australia can make good on the promise of the Online Safety Amendment and create a model for effective online safety regulation that the rest of the world can follow, a practical middle ground between nanny-state regulations and the lawless frontier that most of us inhabit today.<br><br>Pat yourselves on the back, Australia &#8211; and now finish the job.</p>]]></content:encoded></item><item><title><![CDATA[A Framework for Incentivizing Responsible Artificial Intelligence Development and Use]]></title><description><![CDATA[Executive Summary]]></description><link>https://centerforhumanetechnology.substack.com/p/a-framework-for-incentivizing-responsible-baf</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/a-framework-for-incentivizing-responsible-baf</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Thu, 12 Sep 2024 18:34:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WhN0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc042d5bf-8dfd-4dd5-ab3b-eccacfac6c6f_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!WhN0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc042d5bf-8dfd-4dd5-ab3b-eccacfac6c6f_1600x900.png" width="1456" height="819" alt=""></figure></div><h1><strong>Executive Summary</strong></h1><p>Leading artificial intelligence (&#8220;AI&#8221;) companies agree that while powerful AI systems have the potential to greatly enhance human capabilities, these systems also introduce significant risks that can cause harm and therefore require federal regulation.<sup>1</sup> Similarly, most Americans believe government should take action on AI issues rather than adopt a &#8220;wait and see&#8221; approach.<sup>2</sup></p><p>A liability framework, designed to encourage and facilitate the responsible development and use of the riskiest AI systems, would provide certainty for companies and promote accountability to individual and business consumers. A law and economics approach requires that liability be placed primarily at the developer level, where the least cost to society is incurred. This proposed framework, therefore, builds upon historic models of regulation and accountability by:</p><ul><li><p>Adopting both a products liability- and a consumer products safety-type approach for &#8220;inherently dangerous AI,&#8221; inclusive of the most capable models and those deployed in high-risk use cases.</p></li><li><p>Clarifying that inherently dangerous AI is, in fact, a product and that developers assume the role and responsibility of a product manufacturer, including liability for harms caused by unsafe product design or inadequate product warnings.</p></li><li><p>Requiring reporting by both developers and deployers, including an &#8220;AI Data Sheet&#8221; to ensure that users and the public are aware of the risks of inherently dangerous AI systems.</p></li><li><p>Providing for both a limited private right of action and government enforcement.</p></li><li><p>Providing for limited protections for developers and deployers who uphold their risk management and reporting requirements, further protections for deployers using AI products within their terms of use, and exemptions for small business deployers. In order to realize AI&#8217;s full benefits and ensure U.S. international competitiveness, such protections are necessary to promote the safe development of AI.</p></li></ul><h1><strong>Purpose</strong></h1><p>This liability framework aims to fill critical gaps in existing law to ensure that there is clear recourse for harms caused by AI systems, as well as to incentivize responsible AI use and development. As AI systems become increasingly integrated into Americans&#8217; daily lives, national security, and the economy, it is critical that profits are not prioritized over safety.
America has already suffered from technology companies&#8217; social media products, which have caused a range of harms, including undermining truth online and eroding children&#8217;s mental health.</p><p>It is essential that we establish a liability framework for AI systems now, in order to stay ahead of, and prevent, potential harms from these AI products and their many capabilities. Existing law, such as tort law, presents significant uncertainties when applied to harms caused by AI systems, and courts will resolve those uncertainties only slowly as part of the common law. A clear framework can change the level of risk a business is willing to accept and is therefore likely to encourage enhanced safety measures. Including limited liability protection in the framework could further the safe development of AI more quickly than might otherwise be the case, helping to realize the benefits of AI and drive U.S. competitiveness. Further, a liability law tailored to the riskiest AI systems would allow Congress to set limits regarding the potential harms that require mitigation (<em>i.e.</em> a standard of care) without needing to know if and when those harms will actually materialize.</p><p>These harms need to be more deeply and clearly defined, but would include active harms unfolding in present-day society; near-term harms involving AI products; and long-term harms and dangers involving AI systems. Harms could include causing or aiding in the commission of an unlawful act, discrimination on the basis of protected characteristics, or physical injury to a person. This approach, therefore, is future-proof: it avoids the need to update the law as the technology advances.</p><h1><strong>Principles</strong></h1><p>As new AI systems are integrated into daily life, it is important that we balance the interests of consumers, businesses, and the general public. To harness the tremendous potential of AI and mitigate risk, we must ensure that AI systems are developed with safety in mind, while protecting the creators of AI systems from meritless litigation that could hamper innovation. The following principles underpin this Framework.</p><h4>Safe Innovation</h4><p>Like social media business models, AI business models revolve around rapid deployment of new products and features without devoting significant resources to addressing the many potential harms.<sup>3</sup> Liability shifts incentives, making safety and responsibility cost-effective practices for companies.</p><p>Enhanced liability also helps to increase innovation in safety by creating economic demand for AI model security, auditing, and monitoring tools.</p><h4>Consumer and Small Business Protection</h4><p>Consumers are currently responsible, and potentially liable, for the safe use of AI. Yet technical complexity and a lack of transparency from developers mean that consumers do not have sufficient resources to inform their decisions. If a developer&#8217;s product has significant risks associated with it, the consumer should not shoulder the burden of the resulting harm and the financial impact of being sued. This framework shifts responsibility upstream to developers, ensuring a favorable environment for consumers and businesses.</p><h4>Clarity and Certainty</h4><p>Current legal precedent does not define the status of AI with respect to product liability law. Previous court rulings have shown that courts are ill-equipped to rule without further legal and regulatory resources.<sup>4</sup> An uncertain regulatory and legal environment lessens U.S. 
competitiveness and makes for an increasingly complex market that only the largest, best-resourced players can navigate. Setting legislative guidelines for liability will ensure more predictable legal outcomes and promote business innovation.</p><h4><strong>Accountability</strong></h4><p>Accountability is consistent with Americans&#8217; fundamental sense of fairness: those building the most dangerous AI systems should bear responsibility for the harm they cause. Accountability has been a pillar in the establishment of AI principles worldwide, including those put forth by the OECD and the G20.<sup>5</sup> As Americans grapple with understanding AI, government should ensure their safety by establishing a clear accountability regime in case harm occurs, including assurance that AI products work as described. This also provides predictability for developers. We wish to avoid the growing sense that social media developers are not held accountable for the negative effects of their products.</p><h4><strong>Address Immediate Harms</strong></h4><p>The latest generation of AI is already causing harm to businesses and consumers.<sup>6</sup> Liability would provide a framework for protection and legal recourse to address immediate and emerging harms from unregulated, highly powerful AI systems, especially as capabilities increase and use proliferates.</p><h1><strong>Scope</strong></h1><p>The federal government defines AI in 15 U.S.C. &#167; 9401(3) as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. This framework proposes no changes to the federal government&#8217;s current definitions of &#8220;artificial intelligence&#8221; as set forth in 15 U.S.C. &#167; 9401(3) and the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, as those definitions accurately capture the scope of the relevant technology.</p><p>This framework covers only the riskiest AI systems developed or deployed in the U.S. AI developers have the greatest understanding of how AI systems work, as well as significant power in their relationships with deployers. Without the correct incentives in place, developers have failed to center the safety of their products and do not share sufficient information about the potential for risk with deployers. Meanwhile, deployers shoulder the burden of liability for the products of upstream developers. 
This framework seeks to remedy this issue by increasing disclosures and knowledge transfer between developers, deployers, and oversight bodies, while ensuring that both deployers and developers are responsible for the safety of their products.</p><p>To that end, this framework covers &#8220;inherently dangerous AI systems,&#8221; which are defined by both model capabilities (&#8220;dual-use foundation models&#8221;) and end use case (&#8220;high-risk AI systems&#8221;).</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!4_XF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a447e55-b70d-406a-8536-211a885a49f7_960x402.png" width="960" height="402" alt=""></figure></div>
<p>&#8220;Dual-use foundation models,&#8221; as defined in the &#8220;Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence&#8221;<sup>7</sup> released on October 30, 2023, are inherently dangerous given their power and capabilities. The end use case of these systems may not be explicitly defined, as they are general purpose in nature.</p><p>However, even much smaller, less generally capable models can be used in dangerous ways, and thus inherently dangerous AI systems also include a &#8220;high-risk AI system&#8221; category. A high-risk AI system means any artificial intelligence system that is:</p><ul><li><p>used, or reasonably foreseeable<sup>8</sup> as being used, to make, or to be a controlling factor in making, a consequential decision, meaning a decision that has a legal or similarly significant effect on an individual&#8217;s access to the criminal justice system, housing, employment, credit, education, health care, or insurance;</p></li><li><p>used, or reasonably foreseeable as being used, to categorize groups of persons by sensitive and protected characteristics, such as race, ethnic origin, or religious belief;</p></li><li><p>used, or reasonably foreseeable as being used, in the direct management or operation of critical infrastructure;</p></li><li><p>used, or reasonably foreseeable as being used, in vehicles, medical devices, or in the safety system of a product;</p></li><li><p>used, or reasonably foreseeable as being used, to influence elections or voters; or</p></li><li><p>used to collect the biometric data of an individual from a biometric identification system without consent.</p></li></ul><p>Developers and deployers of inherently dangerous AI systems would be subject to this liability framework. 
Definitions:</p><ul><li><p><em><strong>Developer</strong></em>: A &#8220;developer&#8221; is a person who designs, codes, produces, owns, or substantially modifies an artificial intelligence system for internal use or for use by a third party.</p></li><li><p><em><strong>Deployer</strong></em>: A &#8220;deployer&#8221; is a person who uses or operates an artificial intelligence system for internal use or for use by third parties. Deployers who make material changes and modifications to existing models would assume the responsibilities of a model developer. A deployer does not include a small business, as defined by the Small Business Administration&#8217;s industry-based employee and annual receipt calculations.<sup>9</sup> <em>Note that a deployer may separately qualify as a developer, but the small business exception would not apply to a developer.</em></p></li></ul><h1><strong>Liability Framework Approach</strong></h1><p>The proposed framework takes a products liability- and a consumer products safety-type approach in that it is both remedial and preventative. A product liability model has significant advantages when applied to AI. It focuses on safety features rather than procedures, which creates clear incentives to identify and invest in safer technology. This type of approach raises the bar for safety, which is critical for technologies with healthcare, infrastructure, and other critical applications. Moreover, standards for determining liability differ from state to state, making fault and foreseeability inconsistent and unreliable for injured parties.</p><p>The framework takes clear steps to:</p><ul><li><p>Clarify that inherently dangerous AI systems are products and that developers assume the role of manufacturer;</p></li><li><p>Subject developers to liability in the event that a harm was caused by a product unreasonably unsafe in design or in warnings/instructions (<em>i.e.</em> the duty of care);</p></li><li><p>Subject developers and deployers to liability in the event that they do not meet various preventative requirements in the form of risk management and disclosures;</p></li><li><p>Create liability protection for developers and deployers through compliance with preventative requirements; and</p></li><li><p>Provide for both a limited private right of action and government enforcement.</p></li></ul><p>Together, these components of the framework aim to provide effective incentives for developers and deployers of inherently dangerous AI systems to proactively address the risks and elevate the safety of their products.</p><h4><strong>Duty of Care</strong></h4><p>A developer, as defined in Section 3, has a duty to exercise reasonable care in the design of its products and in the warnings and instructions regarding those products. Developers owe business and individual consumers the same duty of care that a reasonable developer would provide. This includes the duty not to create an unreasonable risk of harm to those who use (or misuse) the product in a foreseeable way. A developer must also use reasonable care in giving warnings of dangerous conditions and product information (e.g. testing results) to support the safe and informed deployment or use of an AI system. 
Failure to fulfill either of these duties &#8211; which are further detailed in the next subsection &#8211; constitutes a breach of a developer&#8217;s duty of care.</p><h4><strong>Remedial Measures: Developer Liability for Inherently Dangerous AI</strong></h4><p>The framework provides for claimants to seek remedies for harms caused by AI products if a developer has violated its duty of care. It therefore clarifies that inherently dangerous AI systems are products, and that developers hold the role of product manufacturer; it is currently unclear under existing law whether AI systems would be considered a &#8220;product.&#8221; Importantly, a developer could be subject to liability to a claimant who proves by a preponderance of the evidence that the claimant&#8217;s harm was proximately caused by an &#8220;unsafe&#8221; AI product. This may be proven if, and only if, the product was unreasonably unsafe in at least one of the following ways:</p><ol><li><p><em><strong>The AI product was unreasonably unsafe in design.</strong></em> Developers of inherently dangerous AI products could be held liable on grounds that the product was unreasonably unsafe in design if the trier of fact finds that, at the time of release, the likelihood that the product would cause the claimant&#8217;s harm or similar harms, and the seriousness of those harms, outweighed both the burden on the developer of designing a product that would have prevented those harms and the adverse effect that the alternative design would have had on the usefulness of the product. In other words, this is the classic risk-utility calculus: a design is unreasonably unsafe when the expected harm (its likelihood multiplied by its seriousness) exceeds the cost of a safer alternative design. Examples of evidence that are especially probative in making this evaluation include:</p><ol><li><p>Any warnings and instructions provided with the AI system, such as the AI Data Sheet and impact assessments described in Section 4.c;</p></li><li><p>The technological and practical feasibility of designing an AI system that would have prevented the claimant&#8217;s harm while substantially serving the likely user&#8217;s expected needs;</p></li><li><p>Practical risk mitigation techniques common to similar AI systems (e.g. security measures to protect against cyber threats, bias mitigation strategies, prompt filters, data privacy practices);</p></li><li><p>The effect of any proposed alternative design on the usefulness of the AI system (e.g. impact on general performance vs. task-specific performance);</p></li><li><p>The comparative costs of producing, distributing, selling, using, and maintaining the AI system as designed and as alternatively designed; and</p></li><li><p>The new or additional harms that might have resulted if the AI system had been so alternatively designed.</p></li></ol></li><li><p><em><strong>The AI product was unreasonably unsafe because adequate warnings or instructions were not provided.</strong></em> Developers &#8211; as defined in Section 3 &#8211; of inherently dangerous AI products could be held liable for a product that was unreasonably unsafe because adequate warnings or instructions about a danger connected with the AI system or its proper use were not provided, if the trier of fact finds that, at the time of release, the likelihood that the AI system would cause the claimant&#8217;s harm or similar harms, and the seriousness of those harms, rendered the manufacturer&#8217;s instructions inadequate, and that the manufacturer could and should have provided the instructions or warnings that the claimant alleges would have been adequate. 
Examples of evidence that are especially probative in making this evaluation include:</p><ol><li><p>The developer&#8217;s ability, at the time of release, to be aware of the product&#8217;s danger and the nature of the potential harm (e.g. through AI red-teaming and evaluation against common AI risks as identified in the NIST AI Risk Management Framework and other published AI standards);</p></li><li><p>The developer&#8217;s ability to anticipate that the likely product user would be aware of the product&#8217;s danger, given the expertise of the likely user and the information provided; and</p></li><li><p>The adequacy of the warnings or instructions that were provided to explain dangers associated with the AI product to expected deployers and end users in easy-to-understand terminology.</p></li></ol></li></ol><p>As foundation models are deployed in new contexts and understanding of the potential risks of AI systems grows, a claim could also arise where a reasonably prudent developer should have learned about a danger connected with the product after it was released. In such a case, the developer is under an obligation to act in a timely manner to mitigate potential dangers, as a reasonably prudent developer would in the same or similar circumstances. This obligation is satisfied if the developer makes reasonable efforts to issue product updates or recalls to address dangers, to inform deployers about actions they should take to avoid foreseeable harm, and to explain the risk of harm to end users.</p><h4><strong>Preventative Requirements: Developer and Deployer Liability for Risk Management, AI Data Sheet, and Impact Assessment</strong></h4><p>The framework is also preventative in that it requires developers and deployers of inherently dangerous AI systems to disclose certain information regarding the system to both the federal government and the general public. Developers and deployers would also be required to employ certain risk mitigation strategies. The Department of Commerce may seek injunctive relief in federal court to compel a developer or deployer to meet these requirements. 
To further incentivize compliance and encourage continued innovation, the framework also provides limited liability protections where these requirements are met.</p><p>Specifically, for each inherently dangerous AI system, the framework requires that:</p><ul><li><p>the developer has conducted documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the NIST AI Risk Management Framework;</p></li><li><p>the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer; and</p></li><li><p>the developer has published an AI Data Sheet consistent with the requirements outlined below.</p></li></ul><p>Similar to the Occupational Safety and Health Administration&#8217;s (OSHA) Material Safety Data Sheets, which disclose certain information regarding hazardous chemical products to downstream users,<sup>10</sup> the AI Data Sheet must include:</p><ol><li><p>Information on the intended contexts and uses of the AI model, in accordance with the &#8220;map&#8221; guidelines articulated in NIST&#8217;s latest AI Risk Management Framework (AI RMF);<sup>11</sup></p></li><li><p>Information regarding the dataset(s) upon which the AI was trained, including sources, volume, and whether the dataset is proprietary;</p></li><li><p>An accounting of foreseeable risks identified and steps taken to manage them, as articulated in the &#8220;manage&#8221; guidelines of the AI RMF;<sup>12</sup> and</p></li><li><p>Results of red-team testing and steps taken to mitigate identified risks, based on guidance developed by NIST.<sup>13</sup></p></li></ol><p>The AI Data Sheet will be registered with the U.S. Department of Commerce in the case of dual-use foundation models and, for all inherently dangerous AI systems, will be prominently included with and incorporated into the terms and conditions of the AI system itself. End users of the AI system will be allowed to rely upon the statements included in the AI Data Sheet when making fit-for-use and deployment decisions.</p><h4><strong>Limited Liability Protection</strong></h4><p>Limited liability protection will help move AI companies toward common standards and best practices as quickly as possible without implementing an overly burdensome regulatory regime. Specifically, if developers and deployers of inherently dangerous AI systems follow the preventative requirements outlined above, a court would recognize a rebuttable presumption that those developers and deployers have acted reasonably and upheld their duty of care. The plaintiff would bear the burden of proving otherwise.</p><h4><strong>Enforcement</strong></h4><p>For violations of the Framework, a person (individual, corporation, company, association, firm, partnership, or any other entity) may bring a civil action or proceeding against a developer for injunctive relief or damages. The Attorney General may also bring a civil action or proceeding against a developer or deployer for violations of the Framework, including civil penalties. The combination of government enforcement and a limited private right of action aims to provide an incentive strong enough to induce responsible behavior.</p><h1><strong>Conclusion</strong></h1><p>AI could provide tremendous benefits in critical aspects of American life, such as healthcare, finance, and transportation. To harness those benefits and ensure that the U.S. &#8211; not China &#8211; is setting the global standards for AI, the U.S. must adopt AI guardrails. 
The approach detailed in this Framework focuses on liability to incentivize the safe development and use of advanced AI technologies while avoiding an overly burdensome regulatory regime. A liability Framework provides the U.S. with concrete parameters to underpin continued and rapid AI innovation that bolsters consumer protection as well as U.S. economic and national security.</p><p class="button-wrapper"><a class="button primary" href="https://www.humanetech.com/donate"><span>Please donate here to support our work</span></a></p><h4><strong>ENDNOTES</strong></h4><ol><li><p><a href="https://apnews.com/article/schumer-artificial-intelligence-elon-musk-senate-efcfb1067d68ad2f595db7e92167943c">https://apnews.com/article/schumer-artificial-intelligence-elon-musk-senate-efcfb1067d68ad2f595db7e92167943c</a>.</p></li><li><p><a href="https://issueone.org/articles/issue-one-luntz-public-opinion-research-on-the-impact-of-social-media-and-ai/">https://issueone.org/articles/issue-one-luntz-public-opinion-research-on-the-impact-of-social-media-and-ai/</a>.</p></li><li><p><a href="https://www.techpolicy.press/to-move-forward-with-ai-look-to-the-fight-for-social-media-reform/">https://www.techpolicy.press/to-move-forward-with-ai-look-to-the-fight-for-social-media-reform/</a>.</p></li><li><p><em>E.g.</em>, <em>Robert Mata v. Avianca Inc.</em>, Case 1:22-cv-01461-PKC (2023), resulting in damages payable by the user of AI technology, not the developer of the AI model that generated incorrect content.</p></li><li><p><em>See</em> the <a href="https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm">OECD AI Principles</a> (2019) and <a href="https://wp.oecd.ai/app/uploads/2021/06/G20-AI-Principles.pdf">G20 AI Principles</a> (2019).</p></li><li><p><a href="https://incidentdatabase.ai/apps/incidents/">https://incidentdatabase.ai/apps/incidents/</a>.</p></li><li><p><em>See</em> Section 3(k) for the full definition: <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</a>.</p></li><li><p>It may be difficult to prove what is &#8220;reasonably foreseeable&#8221; given that the technology is changing rapidly and the probabilistic nature of AI systems makes results difficult to predict. 
One solution is to include in legislation federal rulemaking authority to more clearly define what is reasonably foreseeable.</p></li><li><p><a href="https://www.sba.gov/document/support-table-size-standards">Table of size standards | U.S. Small Business Administration</a>.</p></li><li><p>Hazard Communication Standard (HCS), 29 CFR 1910.1200(g).</p></li><li><p><em>See</em> NIST AI RMF 5.2, Map 1.</p></li><li><p><em>See</em> NIST AI RMF 5.4.</p></li><li><p><em>See</em> &#8220;Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,&#8221; subsection 4.2(a)(i)(C).</p></li></ol>]]></content:encoded></item></channel></rss>