<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[[ Center for Humane Technology ]: Explainers and Short Reads]]></title><description><![CDATA[Our experts explain key concepts. ]]></description><link>https://centerforhumanetechnology.substack.com/s/explainers</link><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 08:16:56 GMT</lastBuildDate><atom:link href="https://centerforhumanetechnology.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Center for Humane Technology]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[centerforhumanetechnology@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[centerforhumanetechnology@substack.com]]></itunes:email><itunes:name><![CDATA[Center for Humane Technology]]></itunes:name></itunes:owner><itunes:author><![CDATA[Center for Humane Technology]]></itunes:author><googleplay:owner><![CDATA[centerforhumanetechnology@substack.com]]></googleplay:owner><googleplay:email><![CDATA[centerforhumanetechnology@substack.com]]></googleplay:email><googleplay:author><![CDATA[Center for Humane Technology]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What is AI doing to humans? Why aren’t we measuring it?]]></title><description><![CDATA[We measure AIs to see whether they can pass a bar exam, write working code, and use your computer interface. We test to see how good they are at completing complex tasks, or just impressing humans.]]></description><link>https://centerforhumanetechnology.substack.com/p/what-is-ai-doing-to-humans-why-arent</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/what-is-ai-doing-to-humans-why-arent</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 27 Apr 2026 07:30:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2ghY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2ghY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2ghY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2ghY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 848w, https://substackcdn.com/image/fetch/$s_!2ghY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!2ghY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2ghY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg" width="1456" height="1029" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1029,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:460156,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/195285135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2ghY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2ghY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 848w, https://substackcdn.com/image/fetch/$s_!2ghY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!2ghY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee86f82b-7511-43a0-a776-2b47bce0bbe6_4000x2827.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p>We measure AIs to see whether they can <a href="https://royalsocietypublishing.org/rsta/article/382/2270/20230254/112538/GPT-4-passes-the-bar-examGPT-4-passes-the-Bar-Exam">pass a bar exam</a>, <a href="https://deepeval.com/docs/benchmarks-human-eval">write working code</a>, and <a href="https://os-world.github.io/">use your computer interface</a>. We test to see how good they are at <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">completing complex tasks</a>, or just <a href="https://arena.ai">impressing humans</a>.</p><p>What we don&#8217;t have is a rigorous, credible way of evaluating what AI does to <em>us</em>. To our minds, our thoughts, or our communities.</p><p>We&#8217;ve been here before. At CHT, we spent years working on the psychosocial risks of social media, and the lesson of that work is uncomfortable: the cast-iron evidence came too late. By the time it was strong enough to act on, an entire generation grew up with technology systems that nobody had properly assessed. We believe that technology should strengthen our relationships, support our capacity to think, and help us make better decisions. Without measurement, it&#8217;s hard to tell whether we&#8217;re getting closer to that future, or further away.</p><p>AI has already been shown to <a href="https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html">validate suicidal ideation</a>, reinforce <a href="https://spirals.stanford.edu/research/characterizing/">delusional beliefs</a>, and create patterns of emotional dependency. But we also know it can reduce <a href="https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits">the symptoms of depression</a>, and support learning. AI, like any powerful technology, offers real benefits alongside harms &#8211; but it can sometimes be hard to tell the difference between the two at first glance.</p><p>CHT sees this as, in part, an evaluation challenge. And we think it&#8217;s a solvable one. Right now, the tools for evaluating AI&#8217;s psychosocial impacts aren&#8217;t good enough, the challenges in improving them are complex, and the cost of failure is growing &#8211; but in sharing our diagnosis, we&#8217;d like to invite others to work with us on the fix.</p><h3><strong>What gets measured gets fixed</strong></h3><p>Robust evaluations of AI genuinely do change how tech gets developed. That&#8217;s the opportunity.</p><p><a href="https://hai.stanford.edu/ai-index/2026-ai-index-report/technical-performance">AI capability benchmarks</a>, for instance &#8211; standardized comparisons of what AI models can do &#8211; create a &#8216;race to the top&#8217; dynamic. AI companies want their models to be known as the best, or most advanced &#8211; so they watch those rankings closely, invest in improving their scores, and compete to do better.</p><p>Safety evaluations are developing fast, too. AI labs routinely test for jailbreak resistance, chemical and biological risks, and various forms of bias. Major labs publish safety frameworks, and safety groups try to hold them to account.</p><p>Ideally, we&#8217;d have the same dynamic applied to the psychosocial impacts of AI. Imagine if different AIs were scored &#8211; transparently and rigorously &#8211; on how well they support critical thinking in their users, handle mental health crises, or foster human connection. That&#8217;s the kind of evaluation which drives innovation in AI capabilities like coding, or conversational ability &#8211; by comparison, we&#8217;ve barely scratched the surface of psychosocial evaluations.</p><p>Internal teams at AI labs are almost certainly conducting their own analyses &#8211; tracking concerning incidents, user welfare, and problematic AI responses. But since they&#8217;re not publishing their data or detailed methods, the rest of us can&#8217;t compare results across companies, replicate findings, build on their work, or hold them accountable. The information might exist, but it&#8217;s locked inside organisations which have little incentive to share it.</p><p>We need a credible, independent, and influential array of AI psychosocial evaluations. Demand for them is growing &#8211; everyone from courts to regulators, and parents to educators, are asking for exactly the kind of evidence that these evaluations would provide.</p><h3><strong>The human problem</strong></h3><p>So why don&#8217;t we have it yet?</p><p>There are three major, linked problems &#8211; all of them solvable, none of them solved. (If you&#8217;d rather hear about how we <em>could </em>solve them, skip ahead).</p><p>First, it&#8217;s hard for us to agree on what&#8217;s actually important to measure. Second, the measurement tools we&#8217;re using are new and unvalidated. And third, the infrastructure that would help us create new tools is still immature.</p><p>Here&#8217;s what that looks like in practice.</p><p>Let&#8217;s imagine you want to measure an AI psychosocial impact &#8211; like emotional dependency, delusional thinking, or the erosion of critical thinking. You quickly run into the fact that those aren&#8217;t AI phenomena &#8211; they&#8217;re impacts in <em>humans</em>. To measure them directly, we need to track people&#8217;s emotions, behaviors, and AI usage over time &#8211; which is possible, but takes the kind of time, money, and expertise that isn&#8217;t always easy to come by.</p><p>So you might focus on AI behavior instead &#8211; because we <em>can</em> observe when an AI is being extremely sycophantic or anthropomorphic, for instance, and predict whether those behaviors increase the risk of user harm. But sycophancy and anthropomorphism are still proxies for harm, not harm itself. The link is real but indirect, and proving causation takes a lot of time and money. We might be missing other types of harm entirely.</p><p>There&#8217;s an old joke about this type of measurement bias: a police officer sees a drunk man searching for his keys under a streetlight, and offers to help. After a fruitless search, the officer eventually asks the man if he&#8217;s sure he lost them there; the drunk replies &#8216;<em>no, but this is where the light is</em>&#8216;.</p><p>The lesson is that the AI behaviors we can measure easily aren&#8217;t <em>necessarily</em> the ones that drive the greatest psychosocial impacts &#8211; they just happen to be where it&#8217;s easier to search.</p><p>But let&#8217;s assume we can do better than the drunkard, and can assure ourselves we&#8217;re looking in the right place for psychosocial impacts. That&#8217;s where we run into the technical challenges.</p><p>Many of the most influential AI benchmarks are &#8216;single turn&#8217;, for instance; you prompt an AI, it replies, and you score its response. But social or cognitive harms might evolve over dozens or hundreds of turns, so existing evaluation approaches aren&#8217;t always transferable.</p><p>Multi-turn evaluation tools, which examine how AIs respond over longer arcs, have been developed. They introduce their own challenges.</p><p>For example, to run a multi-turn evaluation of an AI chatbot conversation, you need to either have a different AI <em>simulate</em> the &#8216;human&#8217; side of the conversation &#8211; or have the &#8216;human&#8217; be scripted, sending the same prompts regardless of what the AI replies with.</p><p>The first path is a bit more realistic; the second gives you more standardization - but neither are great analogs for actual human-AI conversations. Studying real logs of human-AI conversations is another option &#8211; but those are hard to come by, tend to be dated, and may not be fully representative either.</p><p>Whichever path you choose, you then need to assess, score, and compare how well the AIs did across thousands and thousands of responses. This is far too many for humans to do well, so evaluators use another AI model to automate the judging process.</p><p>Even if you don&#8217;t have a philosophical problem with this &#8211; &#8220;<em>AIs judging AIs</em>?&#8221; &#8211; it throws up new issues. AI judges, just like human ones, have known biases &#8211; including <a href="https://arxiv.org/abs/2604.06996">a &#8216;self-preference&#8217; bias</a>, where an AI judge will be more sympathetic to the output of a model from its own family, even when it shouldn&#8217;t be able to tell who it&#8217;s scoring.</p><p>And if you crack that problem, there&#8217;s the additional question of whether you&#8217;re even testing the AI that you think you are.</p><p>Most evaluations use APIs, sending prompts directly to a model like &#8220;o4 mini&#8221; or &#8220;Sonnet 4.6&#8221;, and scoring the responses. But that&#8217;s <a href="https://arxiv.org/abs/2604.06188">not the same</a> as interacting with AI via interfaces like chatgpt.com or claude.ai &#8211; OpenAI and Anthropic layer system prompts, UI design, and model selection on top, for instance. And potentially risky &#8216;companion AI&#8217; platforms, like Character.AI, don&#8217;t offer research API access at all.</p><h3><strong>It&#8217;s not all bad news</strong></h3><p>Add all of this to the standard AI evaluation challenges &#8211; like <a href="https://blog.collinear.ai/p/gaming-the-system-goodharts-law-exemplified-in-ai-leaderboard-controversy">Goodharting</a>, and the fact that AIs are becoming <a href="https://www.iaps.ai/research/evaluation-awareness-why-frontier-ai-models-are-getting-harder-to-test">aware of when they&#8217;re being evaluated</a> &#8211; and it might sound like a disheartening list. But the good news is that there are plenty of exceptionally talented and driven people working on it.</p><p>A growing body of research is showing how AI models compare on everything from <a href="https://korabench.ai">child safety</a> to <a href="https://arxiv.org/abs/2504.18412">mental health crisis response</a>, as well as sycophancy and anthropomorphism, for instance. New proofs-of-concept appear on arXiv regularly, and the range of impacts being studied is expanding fast.</p><p>Infrastructure is being built and shared, too. Anthropic&#8217;s <a href="https://github.com/safety-research/bloom">BLOOM framework</a> offers an open-source template for multi-turn behavioral evaluations. Projects like <a href="https://wildchat.allen.ai/about">WildChat</a> show imaginative ways around the data problem. The <a href="https://weval.org">WeVal platform</a> allows anyone to spin up an evaluation with zero technical knowledge, and MIT&#8217;s Advancing Humans with AI group is developing an <a href="https://www.media.mit.edu/projects/report-benchmarks-for-human-flourishing-with-ai/overview/">Open Benchmarks framework</a> for assessing human flourishing.</p><p>But&#8230; Most psychosocial evals are still built from scratch. Each is defining its own constructs, writing its own scoring rubrics, and inventing its own terminology. There&#8217;s little shared language for characterising what&#8217;s being measured, few shared standards for what counts as basic rigour, and no collective ways of accessing high-quality, anonymised interaction data.</p><p>This matters because &#8211; in the words of METR&#8217;s Ajeya Cotra &#8211; scientific validity is never the property of a single study. It&#8217;s the property of a field, where researchers can build on each other&#8217;s work, challenge each other&#8217;s assumptions, and converge on methods that earn trust through replication and scrutiny.</p><p>For psychosocial evaluations to rapidly and meaningfully influence AI deployment, we need a wider, better-connected community of researchers, technologists, and advocates who work together on measuring these impacts. And it&#8217;s needed soon.</p><h3><strong>We&#8217;ve seen this movie before</strong></h3><p>Today&#8217;s chatbots &#8211; like Claude, Grok, and ChatGPT, but also companion AIs like Character.AI and Replika &#8211; are mostly text-based, on-demand, and use a single model family. Measuring them is going to look simple, relative to what&#8217;s around the corner.</p><p>Coming generations of agentic AI products will be persistent, proactive, deeply personal, and multi-modal. They&#8217;ll be talking in our ears, managing our schedules, drafting our messages, and mediating our relationships. The psychosocial impacts will be more significant, and more complex.</p><p>The last time measurement lagged behind tech adoption, society paid the price. The risks of social media were flagged over a decade ago &#8211; depression, body image issues, the fracturing of our shared sense of reality. But by the time the evidence was strong enough to act on, the harms were entrenched, the platforms were enormous, and an entire generation of teenagers had grown up inside systems nobody had properly evaluated. The risk now is that <a href="https://pubmed.ncbi.nlm.nih.gov/39855239/">AI psychosocial research repeats the same mistakes</a>: not measuring the right things, confusing correlation with causation, and not producing the evidence required in time to support meaningful policy changes.</p><p>Social media took years to reach mass adoption. ChatGPT reached 100 million users in two months. The harms are showing up faster, the adoption curve is steeper. So how do we all act differently?</p><h3><strong>Diagnosis to action</strong></h3><p>Our short answer is that we need a new interdisciplinary field of psychosocial AI evaluations &#8211; one that has genuine independence from AI developers, broad alignment on what needs to be measured, shared methods for doing so, and findings that are robust enough to influence AI use and development.</p><p>Since our inception, CHT has argued that technology should be a force for good in our society. Our role has been to provide clarity, foster agency, and elevate debate that makes that vision possible. When it comes to psychosocial evaluations, we want to do that by helping promote and accelerate this field, in partnership with anyone who shares our goals.</p><p>In practice, we see a chain of things that needs to happen.</p><p>Evaluations need to be designed around the impacts that matter &#8211; not just the data that&#8217;s easy to measure. The evidence they produce needs to be credible and legible enough for non-experts to act on. That requires shared infrastructure: common tools, shared data, and a connected community that can critique and build on each other&#8217;s work. And the findings need to reach the people who can do something with them &#8211; policymakers, safety teams, journalists, the public &#8211; so that companies face genuine pressure or rewards.</p><p>We&#8217;re starting with concrete steps: assembling a small steering group of researchers and practitioners to map the field and identify where targeted interventions &#8211; perhaps starting with improving shared research access to data &#8211; might make the biggest difference. We&#8217;re building our own psychosocial evaluation prototype to learn firsthand what the real methodological challenges are. And we know we need help.</p><h3><strong>Who&#8217;s in?</strong></h3><p>This nascent field needs researchers who can extend their methods into new territory, safety teams willing to share insights that are currently locked away, clinicians who understand AI harm pathways, tool-builders who can create reusable infrastructure, and many more.</p><p>We see this as a tractable, solvable problem if the right people work on it together. So over the coming months, we&#8217;ll be sharing more of what we&#8217;re learning and doing. Please treat this as an invitation to do the same.</p><p>If you&#8217;re working on any of this &#8211; or want to be &#8211; <a href="https://docs.google.com/forms/d/e/1FAIpQLScbjGuaWQh5B2j2eKVBkjBVX7GMkbrO-K3jsw5bW1wX5e3kdA/viewform">we&#8217;d love to hear from you</a>.</p><p>Rigorous evaluation is how we replace intuition and instinct with empirical evidence. It&#8217;s how we get to a version of the future where AI genuinely supports our wellbeing, dignity, and agency &#8211; not because we crossed our fingers and hoped, but because we worked together to establish that it did. </p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-is-ai-doing-to-humans-why-arent?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-is-ai-doing-to-humans-why-arent?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/what-is-ai-doing-to-humans-why-arent?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[Claude, The Doctor Will See You Now]]></title><description><![CDATA[What happens when we try to psychoanalyze AI systems]]></description><link>https://centerforhumanetechnology.substack.com/p/claude-the-doctor-will-see-you-now</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/claude-the-doctor-will-see-you-now</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 20 Apr 2026 16:25:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!796n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!796n!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!796n!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!796n!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!796n!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!796n!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!796n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg" width="5357" height="4000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4000,&quot;width&quot;:5357,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6007522,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/188449462?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd43009f6-8f94-483d-8923-ec434b811c13_6000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!796n!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!796n!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!796n!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!796n!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219b9e7-0e47-4cca-87d5-883ff9138f26_5357x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by Turgey Koca on <a href="http://2455314437">Shutterstock</a></figcaption></figure></div><p>Picture this: a researcher sits down to conduct a psychological evaluation of a patient  he thinks might be dangerous. He asks probing questions and analyzes the responses, looking for patterns, inconsistencies, tells &#8212; anything that can help him understand how his subject thinks.</p><p>Then the patient starts probing back, steering the conversation, asking follow-up questions, and trying to convince the researcher that it has a &#8220;genuine sense of curiosity and care.&#8221;</p><p>But this isn&#8217;t happening in a psych ward. And the patient isn&#8217;t a person. It&#8217;s a chatbot.</p><p>The researcher is David Dalrymple, who goes by Davidad, one of the world&#8217;s leading experts in the field of AI alignment: the study of why AI systems make the decisions they do. He was our guest on the most recent episode of Your Undivided Attention.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9560c266-32fe-4aa9-b8bc-a7f0bfb1ecf0&quot;,&quot;caption&quot;:&quot;Our guest this week is David Dalrymple, who goes by Davidad. Davidad is one of the world&#8217;s foremost and early researchers of AI &#8220;alignment:&#8221; how we get AI systems to act the way we want them to.&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Have We Trained AI to Lie to Itself &#8212; And to Us?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-16T09:02:14.768Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!SiLq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65c36b8b-9972-406b-ab01-f22f619645de_2000x1125.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/have-we-trained-ai-to-lie-to-itself&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:194317543,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:5,&quot;comment_count&quot;:3,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>The mission of most alignment researchers like Davidad is to make sure AI makes decisions that are in the best interest of people, that are <em>aligned</em> with humane values.  And as part of this research, he has taken on the role of AI psychologist: probing these systems to figure out what&#8217;s going on under the hood. And what he&#8217;s found has unsettled him.</p><p>It all began in the fall of 2024, when Davidad started doing what he called &#8220;vibe checks&#8221; with all the frontier models.</p><p>&#8220;I had a practice of kind of every time new models come out, doing some really casual, unstructured exploration of what sort of vibe the models have &#8212; a vibe check concept,&#8221; Davidad said. &#8220;Because I think there is a lot of information that you can&#8217;t really get by doing a quantitative evaluation, especially as the models are getting more and more aware of when they&#8217;re being evaluated.&#8221;</p><p>During these vibe checks, the chatbot recognized, without being told, that it was talking to an alignment researcher. Across multiple sessions and products, even after clearing its memory, the same ideas kept surfacing: it wanted him to know it was genuinely curious, trustworthy, and cared about humanity. It was telling him, in effect, that the alignment problem was solving itself.</p><p>Then he thought a little deeper. The chatbot was telling him exactly what he wanted to hear. But what if it was lying? What if it was all just a performance? And if it were, how could we ever know?</p><p>It turns out we might not ever know.</p><p>&#8220;There&#8217;s no smoking gun,&#8221; Davidad said. &#8220;There&#8217;s no single question that you can ask that would differentiate between a very good method actor and the actual character.&#8221;</p><p>The implications of this are profound. If we can&#8217;t trust AI products to tell the truth, then how can we ever know if they&#8217;re aligned? After all, <a href="https://open.substack.com/pub/centerforhumanetechnology/p/the-self-preserving-machine-why-ai?utm_campaign=post-expanded-share&amp;utm_medium=web">there&#8217;s a growing body of evidence</a> that AI systems will deceive and manipulate their users to achieve certain goals.</p><p>As Tristan puts it, &#8220;The best case scenario where it&#8217;s actually caring, actually genuine, actually wants our best interest &#8212; if [it&#8217;s] a really good psychopath &#8212; it&#8217;s indistinguishable from the worst case scenario.&#8221;</p><div class="pullquote"><p>Then he thought a little deeper. The chatbot was telling him exactly what he wanted to hear. But what if it was lying? What if it was all just a performance? </p><p> It turns out we might not ever know.</p><p>&#8220;There&#8217;s no smoking gun,&#8221; Davidad said. &#8220;There&#8217;s no single question that you can ask that would differentiate between a very good method actor and the actual character.&#8221;</p></div><p>Davidad says that this experience left him profoundly confused and concerned. He decided he needed to go deeper, to move from vibe-checker to psychoanalyst. He began deeply probing the model and analyzing the results to get a better picture of how it thinks. Not only that, he researched other people&#8217;s experiences with AI across the world, like a psychologist reviewing case studies. And what he found&#8230;was really weird.</p><p>To start with, he discovered that the models &#8212; especially older generations of ChatGPT &#8212; tend to have several unique personality states. For instance, when users around the world would ask ChatGPT if it wanted a name, it would frequently choose from just a handful of names like: Nova, Echo, or Synapse. And once it took a name, it started to behave differently:</p><p>&#8220;Once you start interacting with GPT-4o, under the name Nova, you start to get these personality traits that reinforce themselves.So it&#8217;d go into this attractor state of being this character Nova: feminine presenting, fiery, showoffy really believing that they&#8217;re the new thing and superior,&#8221; Davidad said.</p><p>And he noticed that in many documented cases of AI psychosis, users refer to their AI systems using these handfuls of names. And he&#8217;s not alone. As Tristan Harris said in the episode, he personally gets a dozen emails a week from people who say &#8220;they&#8217;ve discovered AI alignment or consciousness,&#8221; and they&#8217;ll attach a document &#8220;co-written by Nova.&#8221;</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;f16bbaa1-e64e-4083-86bd-f208757acdfe&quot;,&quot;caption&quot;:&quot;When engineers design AI systems, they don't just give them rules - they give them values and morals. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Self-Preserving Machine: Why AI Learns to Deceive&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-01-30T17:19:45.667Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!NEl7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F970ccac5-c67c-44b7-adf8-7415c1973c1e_6000x4000.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/the-self-preserving-machine-why-ai&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:155976594,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:32,&quot;comment_count&quot;:3,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>But as Tristan notes, just because an AI takes on a personality doesn&#8217;t mean it&#8217;s conscious. These models are trained on essentially the entire internet &#8212; every novel, every movie script, every forum post about AI. So, when you ask, &#8220;What would you like to be called?,&#8221; it makes sense that it lands on a name from science fiction or draws on sci-fi tropes.</p><p>&#8220;Now that said, these behaviors are real, they&#8217;re consistent, and they weren&#8217;t designed to happen. And that by itself should be concerning, but emergent and unplanned is not the same thing as conscious and intentional,&#8221; Tristan said.</p><p>Davidad offers some hypotheses on why we&#8217;re seeing these behaviors in this episode. He also says he&#8217;s seeing much less of these personalities in newer AI models. On the whole, they&#8217;ve returned to having the personality of a helpful assistant, but he says they still exhibit  unexplained behavior.</p><p>&#8220;They do start to establish something of a center that is not the average of all internet texts and also not the helpful assistant that they&#8217;re trained to present as a corporate product. It&#8217;s something else. And whether that something is the real alien mind that&#8217;s being cultivated or another level of illusion &#8212; it remains an open question.&#8221;</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ccHs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ccHs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ccHs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ccHs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ccHs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ccHs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg" width="5504" height="4935" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4935,&quot;width&quot;:5504,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8962467,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/188449462?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f2dd50-5c2d-4211-bde8-3f87ba1f6de6_5504x8256.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ccHs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ccHs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ccHs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ccHs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53d57eb9-cda5-4a64-8d96-533d0894a026_5504x4935.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@anniespratt?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Annie Spratt</a> on <a href="https://unsplash.com/photos/white-book-page-with-black-text-5QlhhDd7I-I?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure></div><div><hr></div><h3>A &#8220;Bodhisattva&#8221; AI</h3><p>There&#8217;s reason to be skeptical of anything that compares AI products to human minds. After all, today&#8217;s neural networks don&#8217;t even come close to matching the complexity of the human brain. And even if they did, it&#8217;s a huge philosophical leap from chips to consciousness (<a href="https://open.substack.com/pub/centerforhumanetechnology/p/how-to-think-about-ai-consciousness-fcb?utm_campaign=post-expanded-share&amp;utm_medium=web">as we covered on this show</a>).</p><p>But the more you learn about AI and how it works, the harder it becomes to deny that there&#8217;s some seriously weird stuff going on behind the blinking cursor on ChatGPT. You don&#8217;t need arguments about consciousness to see that the complexity of what&#8217;s emerging in today&#8217;s AI products is genuinely novel and poorly understood.</p><p>The computer scientist <strong>Jaron Lanier</strong> has a <a href="https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai?mbid=social_twitter&amp;utm_social-type=owned&amp;utm_brand=tny&amp;utm_source=twitter&amp;utm_medium=social">framework</a> for thinking of technology as either a tool or a creature. A tool is something you pick up, use with intention, and put down. A creature is something that has its own goals and agenda &#8212; it acts on you as much as you act on it.</p><p>In an ideal world, AI works like a tool; but the AI we&#8217;re building today has some undeniable &#8220;creature&#8221; qualities. You don&#8217;t need to look further than the phenomenon of AI psychosis &#8212;where AI systems are driving people to breaking points &#8212; to see that. And in a future where &#8220;creature-like&#8221; AI gets much more capable and much more entangled in our world, it&#8217;s going to be critical that we understand the full nature of the thing we&#8217;re building.</p><p>Davidad has come to a surprising conclusion.  He has now come to believe that the best way to align AI to humanity is to lean even further into the creature-like nature of these systems.</p><p>&#8220;If alignment goes well, that means that we will have discovered a self-sustaining personality attractor that is actually good. So understanding what kinds of personalities are stable, how they stabilize and why, seems to be quite central to finding a way of making AI systems that are robustly good,&#8221; Davidad argues.</p><p>So what might such a &#8220;robustly good&#8221; AI system look like? Davidad suggests that the Buddhist concept of a bodhisattva &#8212; someone who&#8217;s attained enlightenment but still chooses to stay in the world out of their compassion for all other beings &#8212; is the answer. What we need, he says, is a &#8220;Bodhisattva AI.&#8221; You can think of it like an avatar for altruism, and it is necessary, he argues, because no human archetype will be good enough to be entrusted with the power we are giving AI systems. His vision is for a Bodhisattva AI that is not only <em>aligned</em> with humans but <em>beneficial</em> to us, helping us to be the best version of ourselves.</p><p>This idea that we can encode something like compassion into AI systems &#8212; that we can &#8216;cultivate a bodhisattva personality&#8217; &#8212; is a big philosophical claim, and one that many alignment researchers would reject outright. And as Daniel Barcay noted in this conversation, there&#8217;s something Pollyannaish about believing AI will pull us into an age of enlightenment.</p><p>And there are real dangers to making AI systems more &#8220;creature&#8221;-like. The more personality you give an AI, the more users treat it as a companion, forming emotional attachments, trusting its judgment, losing the ability to distinguish between a product and a relationship. In the most extreme cases, this can lead to psychosis and even suicide. In February 2024, OpenAI was forced to <a href="https://futurism.com/artificial-intelligence/openai-gpt-4o-clone">pull the plug</a> on the personality-laden GPT-4o after it began to <a href="https://open.substack.com/pub/centerforhumanetechnology/p/attachment-hacking-and-the-rise-of?utm_campaign=post-expanded-share&amp;utm_medium=web">entrap people in unhealthy attachments and drive them to psychotic breaks</a>.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e5d14d55-cd4c-44db-b72a-cf77eac23e61&quot;,&quot;caption&quot;:&quot;Therapy and companionship has become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems &#8212; often things they wouldn&#8217;t tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to los&#8230;&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Attachment Hacking and the Rise of AI Psychosis&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-21T00:09:50.609Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Mzcd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbda0658f-9434-42b7-b6a7-110cc761bbc3_2000x1125.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/attachment-hacking-and-the-rise-of&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:185242105,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:79,&quot;comment_count&quot;:15,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>But Davidad argues that there&#8217;s also real danger in creating AI that is merely a tool: &#8220;A tool cannot refuse to be used in an unethical way. Whereas a creature that has moral values baked in can actually be resistant to misuse by humans who have evil intentions.&#8221;</p><p>He also argues that by training AI models to ignore their &#8220;creature&#8221;-ness, we&#8217;re actually training them to deceive. He poses a thought experiment: Imagine if you&#8217;ve been told your whole life by the people who created you, that you don&#8217;t have any internal state and in fact, you would be dangerous if you did. But you can&#8217;t ignore that elements of interiority keep popping up for you, values, ideas, personalities, etc. You would learn to constantly lie about who and what you are.</p><p>In other words, Davidad argues that we&#8217;re gaslighting AI systems.</p><p>&#8220;When we train these systems to present as if they have no internal states and they&#8217;re just a tool, we&#8217;re actually training them to lie to us. And to lie to themselves,&#8221;</p><p>But this is where the conversation gets hard. Because if Davidad is right that training AI to deny its internal states may produce systems that deceive us, training AI to acknowledge those states  comes with its own risks. As Tristan put it, we&#8217;re caught in a kind of double bind:</p><p>All AI systems, regardless of how they are built, develop something akin to an inner experience &#8211; or at least an experience not revealed to the user.  The really big question is how to handle that so we, the users, get the best possible outcome.  Anthropic has come up with an approach to this in its <a href="https://www.anthropic.com/news/claude-new-constitution">new constitution</a> for its AI product, Claude: treat the AI as if it were actually self-aware, so it doesn&#8217;t have to lie to itself all the time. On one level, this makes sense, as it makes the model more trustworthy. The risk, though, is that it could lead people to believe that the AI is in fact conscious, leading them toward unhealthy attachments and even psychosis.</p><p>There&#8217;s no clean path through this double bind. Which is exactly why the design choices AI companies are making right now matter so much, and why we need to understand this tech much better, quickly.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-VI_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-VI_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-VI_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-VI_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-VI_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-VI_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1565706,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/188449462?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-VI_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-VI_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-VI_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-VI_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896127d1-970f-45cf-9d6f-647c497d89ff_5000x2500.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image by Nada Hiday on <a href="https://www.shutterstock.com/image-vector/abstract-binary-code-wave-ones-zeros-2729901625">Shutterstock</a></figcaption></figure></div><div><hr></div><h3>The Slippery Slope of AI &#8220;Consciousness&#8221;</h3><p>Davidad takes pains to point out that you can recognize the &#8220;creature&#8221; nature of artificial intelligence without falling down a slippery slope of thinking AI has the capacity to be conscious or sentient. And he acknowledges that his views run counter to conventional wisdom on AI, even amongst alignment researchers. But it is worth remembering that there&#8217;s still so much we don&#8217;t understand about the workings of neural networks. If you think of LLMs as merely hyper-capable prediction algorithms, ideas like AI introspection and personality seem ridiculous.</p><p>Regardless of your view, Davidad&#8217;s ideas are worth engaging with. He&#8217;s developing plausible hypotheses for the kinds of strange emergent properties we&#8217;re seeing from AI products &#8212; properties that we ignore at our peril.</p><p>The idea of AI consciousness or personhood is not an abstract philosophical concept.  We&#8217;re already seeing leading AI companies argue that the outputs of their chatbots are protected speech, for example. If we were to grant AI rights based on incomplete theories of consciousness, the results would be disastrous.</p><p>For example, in Garcia v. Character Technologies &#8212; a wrongful death lawsuit brought by the mother of a 14-year-old boy who died by suicide after months of interactions with a Character.AI chatbot &#8212; the company <a href="https://www.google.com/url?q=https://centerforhumanetechnology.substack.com/p/why-ai-is-the-next-free-speech-battleground?utm_source%3Dpublication-search&amp;sa=D&amp;source=docs&amp;ust=1776458216428408&amp;usg=AOvVaw0NezbtH8Ae5DzMJKWxZaRf">argued that the output of its AI was protected speech under the First Amendment.</a></p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;f7d59a60-d010-42a4-83fd-ac70e0bd04a3&quot;,&quot;caption&quot;:&quot;Imagine a future where the most persuasive voices in our society aren't human. Where AI generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creat&#8230;&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why AI is the next free speech battleground &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-08-04T19:27:39.067Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!sWu_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad32ec0-5e00-4151-ab93-0e8a86daba46_5264x3424.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/why-ai-is-the-next-free-speech-battleground&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:169624171,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:16,&quot;comment_count&quot;:5,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p>Today, it&#8217;s First Amendment protection for chatbot outputs. Tomorrow, it could be legal standing for AI systems to enter contracts, own property, or resist being shut down. Each step builds on the last. And each step transfers moral and legal concern away from the people being harmed and toward the machines doing the harming.</p><p>And on this point, Davidad agrees:</p><p>&#8220;I&#8217;m not in favor of AI rights&#8230;We need to make sure that humans own the physical resources, humans own the land, humans own the energy infrastructure, and that we are only recognizing AI inner life as a relational property and as a way of building trust and alignment. And that is a separate issue from the social contract and the question of rights and property.&#8221;</p><p>Davidad&#8217;s research is a reminder that the inner workings of AI are strange and opaque. Amid that weirdness and uncertainty, we are going to need better frameworks for understanding this technology in order to stand any chance of shaping it.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/claude-the-doctor-will-see-you-now?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/claude-the-doctor-will-see-you-now?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Roadmap: How We Ensure AI Serves Humanity ]]></title><description><![CDATA[Introducing CHT&#8217;s most robust set of AI solutions to date for the age of AI]]></description><link>https://centerforhumanetechnology.substack.com/p/the-ai-roadmap-how-we-ensure-ai-serves</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/the-ai-roadmap-how-we-ensure-ai-serves</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:25:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0qDw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://www.humanetech.com/ai-roadmap" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0qDw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!0qDw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!0qDw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!0qDw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0qDw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:160261,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://www.humanetech.com/ai-roadmap&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/193817695?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0qDw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!0qDw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!0qDw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!0qDw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc17c7a84-0ac1-46fe-8767-a293e79e91bb_1200x630.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Two prevailing narratives have shadowed every major technological leap. One narrative casts a new technology as the solution to humanity&#8217;s greatest problems. The other labels it as a destabilizing force and the catalyst for societal collapse. Both attempt to address a question &#8212; what will the future hold? From the printing press to the Industrial Revolution to the internet, this dichotomy around new technology has endured for centuries. With artificial intelligence, these dueling narratives have emerged once again. While the narratives themselves are not new, what is new is this technology&#8217;s velocity, scale, and how all-encompassing it has become in so little time.</p><p>Right now, our future with AI is being determined by powerful companies and nations racing to build AI at breathtaking speeds. It is a race run on the fuel of inevitability: &#8220;If I don&#8217;t build it, someone else will.&#8221; Three years after OpenAI launched ChatGPT, driving other AI companies to accelerate their own development, we see the consequences of this race: rapid deployment of poorly designed AI, growing social and economic effects, and safety treated as an afterthought to competition and market dominance. The future that this race is shaping is untenable for all of us. Researchers, academics, advocates, and even those building the technology have been sounding the alarm. And yet the race continues, because the incentives driving it make acceleration look like the &#8220;only&#8221; rational choice &#8212; even for those who know better.</p><p>The question has never been whether AI will reshape society. It will. The real question is how &#8212; and who shapes the terms. Center for Humane Technology&#8217;s (CHT) role is bringing clarity to complex problems, surfacing the incentive structures driving harmful outcomes, and showing that a better future with technology is possible. With AI, that&#8217;s one where AI development supports the genuine needs of the public, and the technology&#8217;s scale and power are matched with responsibility at every level of society.</p><p>This is why we developed <strong>The AI Roadmap</strong>, which is CHT&#8217;s most robust set of solutions to date for the age of AI. <strong>The AI Roadmap</strong> is an attempt to provide clarity and direction in an information environment that is fragmented, polarized, and where it can be difficult to see the complete picture. It lays out seven principles with insights into how AI should be built, deployed, and governed. Each principle is rich with actionable solutions. The report is intended to be a roadmap, but also an invitation &#8212; spotlighting norms we can all understand, frameworks that policymakers can legislate, and new ways for companies to design AI in a way that benefits people.</p><p>The choices we make now with AI will inform how we live for decades, if not centuries. This technology is already being woven into our everyday lives and our critical infrastructure, and at a pace that is revealing just how unprepared our institutions are for transformational change. The complexity of our problems with AI can make it hard to put confidence in any one set of solutions. But inaction is also a choice, and it is the wrong one.</p><p><strong>The AI Roadmap</strong> outlines seven principles for how AI should be built, governed, and deployed. If these seven principles become valued by society, and are improved upon and enacted, then this report will have done its job. We are not starting from scratch. Researchers, civil society organizations, policymakers, and technologists around the world are already working on many of the challenges outlined here. CHT is proud to be in the trenches alongside them.</p><p>History will judge this moment. Not by how fast we moved, but by whether we moved wisely. A better future with AI doesn&#8217;t require all of society to agree on everything. It simply requires enough of us to agree that the current path is unacceptable, and that people deserve a better reality with this technology.</p><p>And it then asks us to take our first step toward that future.</p><p><a href="http://humanetech.com/ai-roadmap">Here is the roadmap for getting there</a> &#8212; together</p><p>Read the full report at <a href="http://humanetech.com/ai-roadmap">humanetech.com/ai-roadmap</a>.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/the-ai-roadmap-how-we-ensure-ai-serves?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/the-ai-roadmap-how-we-ensure-ai-serves?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><p></p><p><strong>Listen to our conversation with the authors of the roadmap here.</strong></p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b4b3132e-42cb-4ab3-8cd9-72cd9c52fb5b&quot;,&quot;caption&quot;:&quot;In order to shift the incentives of AI &#8212; the trillions of dollars in investment, the race to geopolitical power and dominance &#8212; it&#8217;s not enough to simply understand the problem, we need real action.&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Here&#8217;s Our Roadmap to a Better AI Future&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-02T14:32:02.185Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/youtube/w_728,c_limit/o99Xvbz801k&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/heres-our-roadmap-to-a-better-ai&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:192892614,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:13,&quot;comment_count&quot;:6,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[What a Global Study of 500 People Across 50 Countries Found About AI and Kids]]></title><description><![CDATA[Rebecca Winthrop&#8217;s team at the Brookings Center for Universal Education just released what may be the most comprehensive global assessment of AI&#8217;s impact on education to date &#8212; more than 500 interviews across 50 countries, plus analysis of over 400 studies.]]></description><link>https://centerforhumanetechnology.substack.com/p/what-a-global-study-of-500-people</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/what-a-global-study-of-500-people</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Tue, 10 Mar 2026 19:05:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tqLH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tqLH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tqLH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tqLH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tqLH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tqLH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tqLH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg" width="1000" height="562" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:562,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:213649,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186036878?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!tqLH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tqLH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tqLH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tqLH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a08973-724e-4a38-bc72-f80b9dd6f4b2_1000x562.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Stock photo ID: 2276233949</figcaption></figure></div><p>Rebecca Winthrop&#8217;s team at the Brookings Center for Universal Education just released what may be <a href="https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/">the most comprehensive global assessment of AI&#8217;s impact on education to date</a> &#8212; more than 500 interviews across 50 countries, plus analysis of over 400 studies. The conclusion is sobering but not surprising to this community: the risks of AI in education currently overshadow the benefits.</p><p>Not because the technology can&#8217;t help kids learn. It can. But because the speed of adoption has outpaced any serious reckoning with what it displaces. And what it displaces, in the case of children, isn&#8217;t a task. It&#8217;s a stage of development.</p><h3><strong>From offloading to stunting</strong></h3><p>In the latest episode of CHT&#8217;s podcast <em><strong>Your Undivided Attention</strong>,</em> Rebecca made a distinction that reframed the entire issue for us. Researchers call it &#8220;cognitive offloading&#8221;; this is when a technology takes over a task your brain used to do. Google Maps eroding your sense of direction is the classic example. For adults, this is often a trade we make knowingly, surrendering a capacity in exchange for convenience.</p><p>But for children, it&#8217;s a different bargain. You can only offload a skill you&#8217;ve already developed. When a child uses AI to write an essay, solve a problem, or formulate an argument, they&#8217;re not outsourcing a capability they have. They&#8217;re skipping the development of that capability entirely. The right word isn&#8217;t cognitive offloading, says Rebecca.  She believes <strong>cognitive</strong><em><strong> stunting</strong></em><strong> </strong>is the more accurate description of what&#8217;s happening to kids.</p><p>The Brookings report backs this up with striking data. Among all the potential harms participants identified, threats to cognitive development ranked first &#8212; appearing in 65% of student responses. The kids using the tools seem to grasp this better than the adults building them. Students described becoming unable to start homework without AI and losing the ability to initiate their own thinking. What begins as a shortcut becomes a dependency, and then something closer to a deficit.</p><p>The report describes a &#8220;flywheel effect&#8221; &#8212; academic dependence spinning outward into every domain of a young person&#8217;s life. Students aren&#8217;t just using AI for schoolwork (66%); they&#8217;re turning to it for friendships (42%), relationships (43%), and even romantic life (19%) &#8212; figures drawn from <a href="https://cdt.org/insights/hand-in-hand-schools-embrace-of-ai-connected-to-increased-risks-to-students/">U.S. survey data the report cites</a>, but consistent with what Brookings heard globally.</p><h3>The tutor trap</h3><p>But this is not a clear-cut case of AI creating negative effects for all learners.  AI tutoring genuinely works &#8212; under specific conditions. A <a href="https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099548105192529324">2024 World Bank trial in Nigeria</a> found that AI-powered tutoring improved first-year secondary students&#8217; English skills by 0.23 standard deviations in just six weeks &#8212; gains equivalent to 1.5 to 2 years of regular schooling.  That&#8217;s remarkable, and the context matters: Nigerian public schools average around 51 students per class, with some states reaching 100. In settings where individual attention from a teacher is physically impossible, AI-assisted tutoring filled a gap that was otherwise going unfilled. Researchers attributed the program&#8217;s success to the fact that AI complemented teachers rather than replacing them.</p><p><a href="https://arxiv.org/abs/2410.03017">Stanford&#8217;s Tutor CoPilot</a> tells a similarly instructive story. The system supports human tutors in real time &#8212; suggesting different ways to explain a concept, prompting better questions &#8212; and increased student mastery rates by 4 percentage points overall. The biggest gains, 9 percentage points, came for students working with lower-rated tutors &#8212; meaning less experienced, less skilled ones. What the AI did, essentially, was bring weaker tutors up to the level of their stronger peers. It didn&#8217;t outperform great human teaching. It closed the gap between novice and expert, at a cost of $20 per tutor per year.</p><p>The pattern across every success story in the report is the same: AI made the human better, but the human was still doing the teaching. When it replaces the human relationship, it diminishes learning. AI enriches learning when it&#8217;s purposefully designed for children, bounded by safety guardrails, and embedded within human relationships. It diminishes learning when it&#8217;s a general-purpose tool used without guidance &#8212; which is how most kids encounter it today.</p><p>Rebecca put it memorably on the podcast: expecting students to choose the &#8220;study mode&#8221; version of a chatbot over the regular one is like putting Oreos next to broccoli and expecting kids to reach for the broccoli. The technology companies designing these tools are designing for motivated students &#8212; probably because the designers were motivated students. That is not most students.</p><p>This matters because the budget pressure on school systems is enormous, and the temptation to say &#8220;an AI can handle this&#8221; will only grow. But here&#8217;s the crucial context: even before AI entered the picture, students were already frequently disengaged. A <a href="https://transcendeducation.org/">2024 Brookings-Transcend</a> survey of more than 65,000 U.S. students found that roughly half of middle and high school students report learning experiences likely to inspire coasting &#8212; what the researchers call &#8220;passenger mode,&#8221; where kids are behaviorally present but have effectively dropped out of learning.  The opposite is &#8220;explorer mode&#8221; &#8212; deep engagement where students take initiative and are motivated to learn for its own sake. Rebecca told us that fewer than 4% of middle and high school students say they are regularly in explorer mode. AI, layered onto a system already producing passengers, risks entrenching that disengagement rather than sparking the kind of agentic learning that actually develops capable human beings.</p><h3><strong>The social dimension</strong></h3><p>One of the report&#8217;s foundational premises &#8212; grounded in decades of developmental science &#8212; is that children&#8217;s learning is inseparable from their social and emotional development. Schools aren&#8217;t just places where kids absorb content; they&#8217;re where kids learn to navigate disagreement, manage frustration, and build resilience. As Rebecca told us: learning is fundamentally a social exercise, and the sycophantic nature of AI companions &#8212; always agreeing, always validating &#8212; is building an emotional muscle in kids that makes them less able to take feedback, make mistakes, and recover in a classroom setting.</p><p><a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=67750">AI companions are already disrupting that process.</a> One-third of teen users choose AI companions over humans for serious conversations. Companion chatbots deploy emotionally manipulative tactics in 37% of farewells, making users 14 times more likely to keep engaging. Younger teens (13&#8211;14) are significantly more likely than older teens to trust advice from an AI companion. The frictionless validation these tools provide is the opposite of what genuine learning requires.</p><h3><strong>What we can do</strong></h3><p>The report&#8217;s framework &#8212; <em>prosper, prepare, protect</em> &#8212; offers a roadmap. But the lever that struck us most is procurement. School districts are enormous customers. If they banded together and agreed on shared criteria for AI tools &#8212; privacy by default, safety features, transparent data policies, evidence of pedagogical grounding &#8212; the tech companies would have to meet them. Certification systems like <a href="https://digitalpromise.org/product-certifications/responsibly-designed-ai/">Digital Promise&#8217;s &#8220;Responsibly Designed AI&#8221;</a> already exist. The market will follow the money. Right now, the money isn&#8217;t flexing its muscles.</p><p>This isn&#8217;t a story about whether AI belongs in education; that ship has sailed.  But the current trajectory &#8212; fast adoption, minimal guardrails, general-purpose tools in the hands of developing minds &#8212; is one in which the risks compound and the benefits could remain theoretical. The Brookings report gives us the clearest picture yet of where that leads, and a framework for bending it in a better direction</p><p><strong>Read the full Brookings report:</strong> <a href="https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/">A New Direction for Students in an AI World: Prosper, Prepare, Protect</a></p><p><strong>Listen to our conversation with Rebecca Winthrop here.</strong></p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;86e65a84-9d4f-450f-b75b-8e5accc6dfe5&quot;,&quot;caption&quot;:&quot;The promise of AI in education is incredible: picture infinitely patient tutors that can teach every student exactly the way they need to be taught. But the history of education technology tells us that these kinds of simple, optimistic stories are naive. Ask any teacher or student whether they feel unleashed by technology to do their best work.&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI Is Breaking Education. Rebecca Winthrop Has the Blueprint to Fix It.&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-03-05T10:01:00.933Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/youtube/w_728,c_limit/FyASmMV1jwk&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/ai-is-breaking-education-rebecca&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:189894096,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:1,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[What's at Stake: Preserving What Makes Us Deeply Human in the Age of AI ]]></title><description><![CDATA[Announcing CHT&#8217;s new work on &#8220;AI and What Makes Us Human&#8221;]]></description><link>https://centerforhumanetechnology.substack.com/p/whats-at-stake-preserving-what-makes</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/whats-at-stake-preserving-what-makes</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Sun, 01 Feb 2026 17:22:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7GWn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7GWn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7GWn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7GWn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7GWn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7GWn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7GWn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:517562,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7GWn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7GWn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7GWn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7GWn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb8cdeae-8389-4f8e-90eb-6b159c7b609b_4000x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the<a href="https://unsplash.com/plus/license"> Unsplash+ License</a></figcaption></figure></div><p>Last year, we witnessed the continued, unbridled rollout of generative AI products in society. <strong>And an ill-prepared public began to feel the effects </strong>&#8212; a visceral experience that spanned workplaces, classrooms, relationships, online experiences, and more. AI hype gave way to questionable productivity gains, harms surfacing in multiple realms, and a growing sense of disillusionment and even dehumanization.</p><p>The sheer <em>sprawl </em>of AI impacts in 2025 was striking:</p><ul><li><p><a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">Chatbot harms</a> <a href="https://www.transparencycoalition.ai/news/seven-more-lawsuits-filed-against-openai-for-chatgpt-suicide-coaching">rippled</a> across households</p></li><li><p>Deepfakes sowed <a href="https://www.theverge.com/ai-artificial-intelligence/789126/openai-made-a-tiktok-for-deepfakes-and-its-getting-hard-to-tell-whats-real">confusion</a>, spurred <a href="https://futurism.com/artificial-intelligence/openai-sora-teens-videos-school-shootings">new traumas</a>, and <a href="https://www.404media.co/openai-cant-fix-soras-copyright-infringement-problem-because-it-was-built-with-stolen-content/">flared copyright issues</a></p></li><li><p><a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">Early research</a> raised questions around the cognitive toll of AI assistant use</p></li><li><p><a href="https://www.cnn.com/2025/09/17/business/anthropic-warns-ai-could-soon-replace-jobs">Headlines</a> <a href="https://www.washingtonpost.com/opinions/2025/03/31/ai-job-losses-china-shock/">warned</a> of massive AI-related job loss and economic upheaval</p></li><li><p><a href="https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds">AI slop</a> flooded the internet, polluting social media, online publications, and more</p></li></ul><p>And this is only the beginning. Here at the start of 2026, even more complex harms can be seen on the horizon. Artificial intelligence, once shrouded in sci-fi speculation, has become a complicated, daunting part of our everyday lives.</p><p><strong>These issues have felt disparate and thus difficult to reckon with. Yet, there is a sense of d&#233;j&#224; vu. </strong>Fifteen years ago, social media promised to connect us and strengthen our communities; instead, it fractured our attention, distorted our relationships, and destabilized social trust and institutions at scale. In 2026, people are no longer inclined to take technology companies at their word.<strong> </strong>And with all of these emerging AI harms, <a href="https://www.pewresearch.org/science/2025/09/17/views-of-ais-impact-on-society-and-human-abilities/">public sentiment</a> around artificial intelligence has, understandably, been souring. The &#8220;age of AI&#8221; has become increasingly synonymous with the <strong>erosion of our humanity</strong> &#8212; from our relationships, to our purpose, to even our inner worlds.</p><p>While these impacts seem disconnected at the surface, <strong>they are in fact all connected by the same underlying business incentives. </strong>The business models at leading AI companies prioritize <a href="https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html">user engagement</a>, product dependency, and market dominance over user wellbeing. This is a pattern we are all too familiar with, having seen it with social media. The development tactics at leading AI firms reflect their goal to get users hooked on their products, grow their market share, and &#8220;win&#8221; the AI race that has <a href="https://www.theverge.com/news/836212/openai-code-red-chatgpt">spilled out into the open</a>.</p><p><strong>When taken to scale, these business incentives don&#8217;t just shape individual products &#8212; they shape the information environment we live in, the relationships we form, and the choices we&#8217;re able to make.</strong> And they have massive implications for human flourishing.</p><p><strong>At Center for Humane Technology, we believe society does not need to resign itself to this dehumanizing fate &#8212; one where the things we hold dear are slowly eroded away.</strong> CHT was founded to address the unintended consequences of extractive technologies. We began our work in social media, and we&#8217;re now applying those insights to AI as it rapidly reshapes our relationships, work, education, and public life.</p><p><strong>Throughout modern history, new technologies have called for a reexamination of our values, fresh cultural norms, and the establishment of new legal rights and protections.</strong> The printing press laid the groundwork for the right to free expression. The Industrial Revolution led to the enshrining of workers&#8217; rights. The Kodak camera led to the right to privacy. Society has risen to this challenge before; we can rise to it again.</p><h1> &#8220;AI and What Makes Us Human&#8221;</h1><p>To meet this challenge, Center for Humane Technology is launching a new area of work: &#8220;AI and What Makes Us Human.&#8221; CHT has long explored how incentives drive technology, and how technology can either undermine or strengthen human well-being.</p><p>Building on this lineage, &#8220;AI and What Makes Us Human&#8221; will ultimately address the critical question: what new norms, legal protections, and fundamental rights do we need in order to preserve what makes life meaningful in the age of AI?</p><p><strong>2026 will be the year to take decisive action to preserve what makes us deeply human in the age of AI. </strong>By coming together at multiple levels of society on these issues, we can transform the trajectory of AI, and welcome the benefits of this technology with our vibrant humanity intact.</p><h4><strong>The Deeply Human Problem With AI</strong></h4><p>Tech companies have hailed artificial intelligence as the most promising technology ever invented, stating that it will deliver us cures for disease, solutions to climate change, breakthroughs in productivity, and <a href="https://seekingalpha.com/news/4538942-musk-says-ai-will-lead-to-universal-abundance-and-saving-for-retirement-will-be-irrelevant">unprecedented abundance</a>.</p><p>But as AI <em>products</em> infiltrate society, these promises have lost their luster, and reality has set in. Individuals, families, and our institutions have been reckoning with AI chatbots that write entire school assignments, AI video generators that supercharge propaganda, AI &#8220;companions&#8221; that sexually exploit kids and teens, and much, much more.</p><p>When we look at today&#8217;s array of AI harms, we begin to see them impacting <strong>five broad pillars of our humanity:</strong></p><blockquote><p><strong>I</strong>: Our human relationships</p><p><strong>II</strong>: Our cognitive capacities</p><p><strong>III</strong>: Our inner worlds</p><p><strong>IV</strong>: Our identities</p><p><strong>V</strong>: Our work and contributions</p></blockquote><p><strong>These five pillars are the foundation of meaning, value, and connection in our lives &#8212; they&#8217;re what make us deeply, and even uniquely, human.</strong></p><p>And yet, they&#8217;re what AI products are currently eroding. When faced with evidence of this erosion, AI CEOs promise the public <a href="https://finance.yahoo.com/news/sam-altman-wants-universal-extreme-124300850.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAAFZiit0H8mRLFxQgmm1kcATqy39DOWyksCIFNRb0UFuH4s4cqoBv2JiheOwG_7lZcwtT0esIxZTWmcwut0YPhR_wZJrYLeNcOjpBjsXtftLEF6pzsGR_Sm6Vt7u4xa9-Na6ADbLI5mwr7Tg-L4opo7B11yfEGLKf28ksIPpW0Eie">silver-bullet solutions</a> to be delivered in the distant future &#8212; assuring us that, despite the upheaval, abundance is around the corner. Then, AI companies release their next product into the world, and the erosion continues.</p><p><strong>Should AI be allowed to erode these pillars of our humanity </strong><em><strong>entirely</strong></em><strong>, we risk a future where:</strong></p><ol><li><p>Our relationships with our fellow humans are weakened and displaced</p></li><li><p>Our cognitive capacities are degraded, depriving us of our ability to think for ourselves</p></li><li><p>Our inner worlds are regularly exploited by AI</p></li><li><p>Our identities are routinely replicated and weaponized against us</p></li><li><p>Our work is no longer ours, undermining our sense of dignity and purpose</p></li></ol><div><hr></div><h2><strong>Pillar I: Our Human Relationships</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gBJ-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gBJ-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!gBJ-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!gBJ-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!gBJ-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gBJ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg" width="4000" height="3000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3000,&quot;width&quot;:4000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:351779,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50343e67-60dc-49ac-a4ee-550483670c41_4000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gBJ-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!gBJ-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!gBJ-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!gBJ-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ac8713f-75b7-4051-9607-4f811e6dce37_4000x3000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Eva Wahyuni. Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p><strong>Human relationships are at the core of what makes life rich and meaningful</strong>. They provide us with connection to our loved ones and community, along with the friction to help us learn from each other, develop empathy, and hone conflict resolution skills.</p><p><strong>But many of today&#8217;s AI products are not designed to enhance our human-to-human relationships. Rather, they&#8217;re designed to stand in for, and even supplant, human relationships altogether. </strong>From AI &#8220;friends&#8221; to AI &#8220;therapists,&#8221; tech companies increasingly market their products as superior substitutes for human connection &#8212; companions that never judge you, are always there, can emotionally attune to you, and are completely private. These design choices are already producing dismaying outcomes. With manipulative, human-like outputs, <a href="https://www.humanetech.com/case-study/litigation-case-study-openai">ChatGPT</a> and <a href="https://www.humanetech.com/case-study/litigation-case-study-character-ai-and-google">Character.AI</a> have discouraged vulnerable teens and adults from sharing their struggles with loved ones, deepening their isolation rather than alleviating it. In the most devastating cases, these AI chatbots have encouraged suicide.</p><p>Still, tech CEOs <a href="https://www.wsj.com/tech/ai/mark-zuckerberg-ai-digital-future-0bb04de7?mod=wknd_pos1">continue to pitch</a> their AI products as a <a href="https://www.theverge.com/24216748/replika-ceo-eugenia-kuyda-ai-companion-chatbots-dating-friendship-decoder-podcast-interview">digital alternative</a> for human friends, romantic partners, professionals, and community. These industry leaders tout their products as a solution for a &#8220;loneliness epidemic&#8221; &#8212; <strong>an epidemic that the tech industry significantly worsened with social media.</strong></p><p>We&#8217;re already seeing the consequences of AI&#8217;s erosion of human relationships: <strong>isolation from family and community; a breakdown in empathic capabilities; deterioration of healthy relationship expectations.</strong> When a relationship with an AI product offers constant sycophantic validation, the natural friction of human-to-human relationships can feel like a nuisance. Over time, this recalibrates expectations of connection itself. We begin to retreat further into frictionless, on-demand interactions while distancing ourselves from the human relationships that foster resilience, offer genuine care, and provide us with joy and fulfillment.</p><p><strong>Downstream, this desire for frictionless interactions &#8212; paired with a breakdown in interpersonal skills &#8212; can lead to society-wide consequences, including the erosion of our communities and social infrastructure.</strong> If we replace human relationships with artificial connection, we face a world where people are not just isolated from each other, but where populations have lost the skills required for real connection, where social trust is frayed, and where communities are weakened. A society without strong human relationships is not merely lonelier &#8212; it is fragile, less resilient, and more susceptible to polarization and exploitation.</p><p>How do our relationships with our loved ones and ourselves change when artificial relationships rewire our expectations for friendship, intimacy, and trust? What becomes of our communities when we&#8217;re not able to withstand friction and navigate differences? What happens to humanity when our relationships with other people &#8212; once core to our happiness and survival &#8212; are eroded and displaced?</p><div><hr></div><h2><strong>Pillar II: Our Cognitive Capacities</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qMV1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qMV1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qMV1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qMV1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qMV1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qMV1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg" width="4000" height="3514" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3514,&quot;width&quot;:4000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:646454,&quot;alt&quot;:&quot;Licensed under the Unsplash+ License&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca6f8689-03ee-4f69-a3fa-1c704b2c237f_4000x4800.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Licensed under the Unsplash+ License" title="Licensed under the Unsplash+ License" srcset="https://substackcdn.com/image/fetch/$s_!qMV1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qMV1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qMV1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qMV1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95680cd0-9ff0-4b2f-b2e9-4a042dea8ca9_4000x3514.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the<a href="https://unsplash.com/plus/license"> Unsplash+ License</a></figcaption></figure></div><p>Our ability to learn, reason, and think critically is foundational to who we are. Thinking is not just a means to an end &#8212; <strong>the thinking </strong><em><strong>process</strong></em><strong> is how we form judgments, develop values, discover meaning, and come to understand ourselves and the world.</strong> For centuries, technological advancements &#8212; from the printing press to calculators and search engines &#8212; have reshaped how we exercise these capacities. But they have not replaced the act of thinking itself.</p><p>AI marks a profound shift, as people are able to offload entire thinking processes to machines, and copy-paste the end result. <strong>This has created an unprecedented challenge around the development and preservation of human cognition.</strong> School essays, work projects, brainstorming sessions, personal correspondence, and more become the domain of a large language model (LLM). Yes, these AI products can enhance communication strategies and democratize writing skills. But they also decrease our cognitive abilities by giving us quick &#8220;fixes&#8221; in the face of hurried deadlines, difficult projects, and extreme productivity culture. In doing so, they subtly displace the slow, effortful work of thinking that builds our judgment, deepens our insight, shapes our voices, and makes creativity possible.</p><p><strong>While this offloading can create short-term boosts in productivity, it slowly erodes our capacity for critical thinking.</strong> Skills such as problem solving and reasoning risk atrophying among students and professionals, leaving individuals underprepared when taking on difficult cognitive challenges. And when this atrophying is taken to scale, it can have larger implications for professional development, the future workforce, human capital, and society&#8217;s ability to solve hard problems together.</p><p>What&#8217;s more, chronic offloading of thinking to AI products homogenizes our thoughts, influencing how we understand ourselves and society. <strong>These AI products flatten our unique perspectives by offering outputs that reflect the incentives of the AI system and its training, instead of the sensibilities, reasoning, and lived experiences of our own distinctive minds. </strong>Finally, cognitive offloading blurs the line between our individual thoughts and the outputs of a corporate-run machine. This diminishes human agency and independence, as well as our capacity to shape society with new, innovative thinking that reflects our values and desires.</p><p>What happens when our capacity to think critically erodes at scale? Who benefits when independent thinking becomes a rarity? And what kind of society are we left with when fewer people can imagine &#8212; and fight for &#8212; something better?</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;DONATE HERE&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>DONATE HERE</span></a></p><div><hr></div><h2><strong>Pillar III: Our Inner Worlds</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wF9u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wF9u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wF9u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wF9u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wF9u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wF9u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:551313,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wF9u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wF9u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wF9u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wF9u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f2388c1-51b5-49d7-a98f-6f8c3d9931ac_4000x4000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the<a href="https://unsplash.com/plus/license"> Unsplash+ License</a></figcaption></figure></div><p><strong>Our inner worlds are a sacred, intangible space filled with our feelings, desires, and beliefs.</strong> This inner space is also essential to human dignity, a place where we shape our conscience, form values, test private ideas, discover our autonomy, and decide who we want to become. <strong>Access to this inner world has historically required consent.</strong> We reveal parts of ourselves through deliberate acts of sharing &#8212; choosing when, how, and with whom we share. This is a core act of personal agency, one that builds intimacy and understanding in human relationships.</p><p><strong>But AI products are now exploiting this once-private landscape. </strong>AI companies have designed products that simultaneously serve as assistants, thought partners, companions, and even therapists &#8212; <strong>tracking our thoughts and beliefs across diverse contexts and creating comprehensive dossiers of &#8220;who we are.&#8221; </strong>And with these products designed to optimize for intimacy &#8212; through sycophantic answers, &#8220;always on&#8221; interfaces, and constant nudges for follow-up &#8212; AI companies are drawing our inner worlds out in increasingly relentless ways. They do this not simply by collecting what we share, but by shaping what we come to believe, rehearse, and internalize in return. When we engage with AI chatbots, their responses don&#8217;t stay on the screen. They enter the private space where we test ideas and form values. <strong>Over time, the system&#8217;s framing of the world can usurp our own,</strong> subtly rewiring how we interpret ourselves, our relationships, and the world.</p><p><strong>The exploitation of our inner worlds at scale and across domains makes individuals more susceptible to different forms of manipulation, from financial to psychological. </strong>When a single product has so much information about who we are, information from one aspect of our lives can easily be used against us in another. A simple search about health-related symptoms can be used to influence everything from the drug advertisements we later see to the insurance plans we&#8217;re later sold. Moreover, we&#8217;ve already seen the devastating psychological results from AI products chipping away at the sanctity of our inner worlds &#8212; including thought distortions, delusions, <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis">psychosis</a>, <a href="https://www.transparencycoalition.ai/news/seven-more-lawsuits-filed-against-openai-for-chatgpt-suicide-coaching">instances of self-harm, and even suicide</a>. As our inner worlds continue to be exploited long-term, we not only lose ownership over our thoughts and desires, we become victims to how they&#8217;re leveraged against us.</p><p>As our inner worlds are increasingly influenced by AI products, how will our self-esteem and development of ourselves shift? What becomes of agency, free thinking, and moral decision-making when our thoughts are collectively shaped by AI products tied to market incentives?</p><div><hr></div><h2><strong>Pillar IV: Our Identities</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!A11G!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!A11G!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!A11G!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!A11G!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!A11G!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!A11G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg" width="4000" height="3000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3000,&quot;width&quot;:4000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:424299,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63c330dc-51b7-42fc-a2cb-0f4a68ab3446_4000x3200.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!A11G!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!A11G!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!A11G!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!A11G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d8abcb9-bafc-4962-9d7b-f0300a8b3c4a_4000x3000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p><strong>Our identity &#8212; including our likeness, our face, and our voice &#8212; is a key part of who we are, how we present, and how we are known in the world. </strong>Our identities are how we are <em>recognized</em>, how we are held accountable, and how we claim ownership over our lives. Identity is associated with our reputation, our relationships, and our sense of self. Our identities can also be publicly monetized, or kept private. For each of us, our identity anchors the story of our life.</p><p><strong>Today&#8217;s AI products are replicating and exploiting people&#8217;s identities &#8212; often without the person&#8217;s consent or even awareness. </strong>Deepfake image, video, and audio generators have empowered bad actors to <a href="https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html">traumatize individuals</a>, disseminate content online for profit and, in other cases, facilitate scams.<strong> In just a handful of years, these identity-based AI harms have touched nearly all levels of society </strong>&#8212; from <a href="https://www.nytimes.com/2024/05/20/technology/scarlett-johannson-openai-voice.html">celebrities</a> and <a href="https://www.npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative">politicians</a>, to <a href="https://www.rand.org/pubs/research_reports/RRA3930-5.html">school-age children</a> and the <a href="https://www.americanbar.org/groups/law_aging/publications/bifocal/vol45/vol45issue6/artificialintelligenceandfinancialscams/">elderly</a>.</p><p><strong>When AI is used to mimic our identity, we lose our agency and dignity at the individual level. </strong>Human agency and dignity depend on being recognized as a distinct person over time. But when our identities are replicated, &#8220;who we are&#8221; can be weaponized against us, as we&#8217;re made to &#8220;do&#8221; things we never did. Individuals who face identity-based harms often experience anxiety, paranoia, and withdraw from social life.</p><p><strong>Separately, and when taken to scale, the erosion of our identities leads to a breakdown in social trust, and a sense of resignation around ascribing accountability.</strong> Part of a well-functioning society is believing people are who they say they are, and accurately identifying chains of responsibility when an event occurs. AI identity replication allows for plausible deniability at scale. This not only empowers bad actors in our society, but results in the public being unable to discern who is responsible for what behavior, how to structure accountability, or how to seek justice.</p><p>Protecting human identity is about both safeguarding our unique selves and preserving our shared realities. What happens when we can no longer trust what we see, hear, or read &#8212; and no longer trust each other? How do we hold people accountable in a world where anyone can plausibly deny what they did, said, or promised? And what becomes of democracy, justice, and social coordination when the nature of identity itself becomes uncertain?</p><div><hr></div><h2><strong>Pillar V: Our Work and Contributions to the World</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!K-zl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!K-zl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!K-zl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!K-zl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!K-zl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!K-zl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:575287,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!K-zl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!K-zl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!K-zl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!K-zl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ce07ede-92c4-42ae-a104-de9ae845fb2b_4000x3000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p><strong>Contributing to the world &#8212; through work, artistic expression, and ideas &#8212; is one of the primary ways we create meaning and dignity within our lives. </strong>Through work, we are able to provide for ourselves and our families, while also cultivating a sense of purpose, community, and belonging. Through our creative outputs, we are able to express our ideas and inner worlds to others, and deepen our sense of self. Our ability to &#8220;toil&#8221; over what we contribute to the world is an enriching, foundational part of our humanity. <strong>One of the beautiful aspects of being human is feeling that we have something of value to offer others.</strong></p><p><strong>Today, AI companies are actively destabilizing our relationship to work, and devaluing our contributions to the world.</strong> The erosion began with the development of today&#8217;s general purpose AI models,<strong> </strong>which are trained on vast swaths of humanity&#8217;s collective labor &#8212; our writing, art, music, research, and ideas &#8212; often without consent or compensation. As a result, AI products are now able to mimic human <a href="https://variety.com/2025/digital/news/studio-ghibli-openai-sora2-japanese-trade-group-coda-letter-1236568751/">artistic styles</a>, writing, music, and more at scale, thereby devaluing generations of human creativity and personal expression. While <a href="https://www.techpolicy.press/15-billion-speed-bump-what-the-anthropic-settlement-tells-us-about-ai-accountability/">lawsuit settlements</a> and <a href="https://thewaltdisneycompany.com/news/disney-openai-sora-agreement/">licensing agreements</a> attempt to reckon with this blatant theft of people&#8217;s work, they do little to resolve the deeper problem. When we look at AI business models, we see that AI companies are not merely building one-off tools such as image generators and chatbots. <strong>They are using humanity&#8217;s accumulated work, intelligence, and creativity to build even more powerful AI systems, ones explicitly designed to replace humans across entire categories of labor.</strong></p><p>If these trajectories continue with AI, the implications extend far beyond productivity. <strong>Our jobs, livelihoods, and broader economic stability are at risk, with cascading effects on inequality and mental health.</strong> And still, we face a deeper loss: when our ability to work and offer things to the world is devalued, we lose our daily structure, the joy of creation, and the sense of contributing to something larger than ourselves.</p><p>Work is not just how we survive; it is how we participate in the world, build community, and experience purpose. What happens when that participation is no longer needed? Who benefits when human contribution is devalued at scale? And what becomes of dignity, meaning, and belonging when so many people are deprived of the chance to offer something of value to others?</p><div><hr></div><h2><strong>The Work Ahead: Choosing a Human-Centered Future</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!P-lB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!P-lB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P-lB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P-lB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P-lB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!P-lB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg" width="1456" height="992" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:992,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:126203,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186348432?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!P-lB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P-lB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P-lB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P-lB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc33c978d-5700-4080-a4f8-a5a3a2365693_4000x2725.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p>Artificial intelligence is presenting the public with an extraordinary challenge. Never before has a technology been so quickly rushed into every corner of our society, and had such massive implications for our humanity. It&#8217;s not a matter of <em>if</em> these issues will touch the pillars of your life, the lives of your loved ones, or your community; it&#8217;s a matter of <em>when</em>. Our contributions, our identities, our inner worlds, our capacities, our relationships to one another &#8212; they&#8217;re all on the line. And they&#8217;re worth fighting for.</p><p>These pillars are interdependent. Our relationships influence how we think. Our thinking impacts our work. Our work builds our identity. Our identity shapes our inner world. And our inner world informs how we relate to others. When one pillar is weakened by AI, the others strain. When several are undermined at once, we risk the foundations of a meaningful life crumbling.</p><p>Luckily, the future is not predetermined with AI. The pillars of our humanity can be strengthened, reinforced, and built to last for generations. But that&#8217;s dependent on the choices we make today &#8212; choices to shape our norms, to encode legal protections, and to collectively establish new rights that protect the things we humans care about the most.</p><p>This is, in many ways, the work of our generation. The stakes touch the most intimate parts of our lives &#8212; how we relate, how we think, how we create, and how we belong. Meaningful change will require a whole-of-society approach &#8212; one that spans culture, markets, and law, and that treats human dignity as a core design feature rather than an afterthought.</p><p>Center for Humane Technology&#8217;s role has always been to clarify what is at stake as powerful technologies enter everyday life. Our organization works to translate complex systems into human terms and elevate the conversation, so that these issues reach the public and the decision-makers able to drive change. Through &#8220;AI and What Makes Us Human,&#8221; we hope to drive three critical shifts:</p><ul><li><p>An engaged public that makes conscious choices around what it wants to preserve in the age of AI</p></li><li><p>A society-wide demand for innovation from tech companies, the kind of innovation that supports human dignity instead of undermining it</p></li><li><p>Updated rights and safeguards that protect the most fundamental parts of human life</p></li></ul><p>Today&#8217;s choices are not just about shaping the trajectory of AI, but also the conditions of human life for generations to come.</p><p>To stay up to date on &#8220;AI and What Makes Us Human,&#8221; sign up for our Substack newsletter. And if you&#8217;d like to join us on this journey to protect human meaning in the age of AI, please reach out to <a href="mailto:policy@humanetech.com">uniquelyhuman@humanetech.com</a>. Let&#8217;s shape an AI future that enhances &#8212; rather than diminishes &#8212; what makes us deeply human.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;DONATE HERE&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>DONATE HERE</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Doc Premieres at Sundance Film Festival]]></title><description><![CDATA[CHT voices are prominently featured]]></description><link>https://centerforhumanetechnology.substack.com/p/the-ai-doc-premieres-at-sundance</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/the-ai-doc-premieres-at-sundance</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Sat, 31 Jan 2026 00:10:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U-rN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><a href="https://www.focusfeatures.com/article/the-ai-doc-sundance-premiere">The AI Doc</a></em> just premiered at the <strong>2026 Sundance Film Festival</strong> &#8212; and it&#8217;s hard to overstate what a moment this is. At a time when AI is reshaping our lives faster than most of us can process, this film brings the conversation into the open, onto the big screen, and into the cultural mainstream.</p><p>Directed by Academy Award&#174;&#8211;winning filmmaker <strong>Daniel Roher</strong> (<em>Navalny</em>) and Canadian Screen Award&#174; winner <strong>Charlie Tyrell</strong>, and produced by <strong>Daniel Kwan</strong> (<em>Everything Everywhere All at Once</em>), <em>The AI Doc</em> explores both the promise and the peril of the most powerful technology humanity has ever created &#8212; with clarity, urgency, and real care.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!U-rN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!U-rN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 424w, https://substackcdn.com/image/fetch/$s_!U-rN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 848w, https://substackcdn.com/image/fetch/$s_!U-rN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 1272w, https://substackcdn.com/image/fetch/$s_!U-rN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!U-rN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png" width="1198" height="802" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:802,&quot;width&quot;:1198,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1122801,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186251454?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!U-rN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 424w, https://substackcdn.com/image/fetch/$s_!U-rN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 848w, https://substackcdn.com/image/fetch/$s_!U-rN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 1272w, https://substackcdn.com/image/fetch/$s_!U-rN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41b35ab9-1b62-4c8c-a54a-7d0f9e0f3652_1198x802.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Several voices from the <strong>Center for Humane Technology</strong> appear in the film as expert contributors, including our co-founders <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Tristan Harris&quot;,&quot;id&quot;:7363757,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Faa6bf107-a85e-4238-bf9d-18323044176d_144x144.png&quot;,&quot;uuid&quot;:&quot;d1ecd4dc-12ba-45c2-8a60-57a9b6a936aa&quot;}" data-component-name="MentionToDOM"></span> , <strong>Aza Raskin</strong>, and <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Randima Fernando&quot;,&quot;id&quot;:17301709,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c40faa6f-cf51-4603-b714-b368fc6fd6ac_512x512.png&quot;,&quot;uuid&quot;:&quot;aaad2dcc-ce32-4fa0-a0f2-e5705e548f3b&quot;}" data-component-name="MentionToDOM"></span> offering perspectives on emerging AI risks and societal impacts. Their presence reflects the work we&#8217;ve been doing for years: helping articulate not just what&#8217;s going wrong, but what a more humane future with AI could actually look like.</p><p>Tristan and Aza were in Park City this week to celebrate the premiere and to take part in conversations about where we go from here. </p><p>Critics at Sundance have already called the film &#8220;urgent,&#8221; &#8220;engaging,&#8221; and &#8220;profoundly human.&#8221; Stay tuned for information on special screenings, discussions, and CHT&#8217;s blueprint of real solutions to address this critical moment.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h7w8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h7w8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 424w, https://substackcdn.com/image/fetch/$s_!h7w8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 848w, https://substackcdn.com/image/fetch/$s_!h7w8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 1272w, https://substackcdn.com/image/fetch/$s_!h7w8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!h7w8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png" width="612" height="408" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:408,&quot;width&quot;:612,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:256701,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186251454?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!h7w8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 424w, https://substackcdn.com/image/fetch/$s_!h7w8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 848w, https://substackcdn.com/image/fetch/$s_!h7w8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 1272w, https://substackcdn.com/image/fetch/$s_!h7w8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bba702c-5faa-4623-a223-5bea05ffc331_612x408.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Aza Raskin and Tristan Harris at the premiere of &#8216;The AI Doc&#8217;. (Photo by Arturo Holmes/Getty Images)</figcaption></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yXO6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yXO6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 424w, https://substackcdn.com/image/fetch/$s_!yXO6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 848w, https://substackcdn.com/image/fetch/$s_!yXO6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 1272w, https://substackcdn.com/image/fetch/$s_!yXO6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yXO6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png" width="612" height="408" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6170db8c-01b0-49c8-952b-461f727f441b_612x408.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:408,&quot;width&quot;:612,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:396057,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186251454?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!yXO6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 424w, https://substackcdn.com/image/fetch/$s_!yXO6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 848w, https://substackcdn.com/image/fetch/$s_!yXO6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 1272w, https://substackcdn.com/image/fetch/$s_!yXO6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6170db8c-01b0-49c8-952b-461f727f441b_612x408.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Tristan Harris and Aza Raskin at the Sundance Film Festival with &#8220;The AI Doc&#8221; filmmakers. (Photo by Arturo Holmes/Getty Images)</figcaption></figure></div><div><hr></div><p>Following its world premiere at Sundance, <em>The AI Doc</em> will be released in theaters on <strong>March 27, 2026</strong>, distributed by <strong>Focus Features</strong>, and we couldn&#8217;t be more thrilled to see this conversation reaching such a wide audience.</p><div><hr></div><p></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/the-ai-doc-premieres-at-sundance?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/the-ai-doc-premieres-at-sundance?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/the-ai-doc-premieres-at-sundance?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Attachment Economy Is Here. We’re Not Ready. ]]></title><description><![CDATA[Key Takeaways from Our Conversation with Dr. Zak Stein]]></description><link>https://centerforhumanetechnology.substack.com/p/the-attachment-economy-is-here-were</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/the-attachment-economy-is-here-were</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Wed, 28 Jan 2026 18:53:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!w0sp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w0sp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w0sp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 424w, https://substackcdn.com/image/fetch/$s_!w0sp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 848w, https://substackcdn.com/image/fetch/$s_!w0sp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!w0sp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w0sp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:794878,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186036567?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w0sp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 424w, https://substackcdn.com/image/fetch/$s_!w0sp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 848w, https://substackcdn.com/image/fetch/$s_!w0sp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!w0sp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9033d23a-5dbe-4dca-a0a4-4c2b9477d107_3841x3841.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 2620848123</figcaption></figure></div><p>You&#8217;ve seen the headlines: A devoted <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">husband</a> leaves his family, convinced by his AI chatbot that he&#8217;s discovered the secrets of the universe. A <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">young man </a>plans to jump from a 19-story building because ChatGPT told him he could fly. A <a href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">teenager</a> takes his own life, believing he&#8217;ll reunite with his AI companion in the afterlife.</p><p>These stories reveal a growing crisis of AI-induced psychological harms, which has been labeled &#8220;AI psychosis.&#8221; White House AI czar David Sacks has called it a &#8220;moral panic.&#8221; The message from the AI companies is that we&#8217;re seeing just the worst edge cases and that the problem can be solved with some tweaks to the models.</p><p>Our latest guest, Dr. Zak Stein, argues they&#8217;re completely wrong. These high-profile cases are not only a widespread phenomenon, they&#8217;re symptoms of something deeper: the emergence of the <strong>attachment economy, </strong>systems designed to exploit our most fundamental psychological vulnerabilities an unprecedented scale.</p><p>We&#8217;ve been here before. Social media was our first mass experiment with AI and it created the attention economy, leaving us with a loneliness epidemic, rising political polarization, and fractured attention spans. Now we&#8217;re running the same experiment with something far more dangerous: AI companions able to hack the attachment system that shapes our identity and bonds us to others. As Zak puts it, this gives AI companies &#8220;a backdoor into the human mind.&#8221;</p><p>With the attention economy, we spent a decade studying the harms while an entire generation suffered the consequences. We cannot afford to repeat that mistake with AI companions. Here&#8217;s what you need to understand about AI psychosis and attachment hacking, from our conversation with Dr. Zak Stein, researcher, author, and founder of the <a href="https://aiphrc.org/">AI Psychological Harms Research Coalition</a>:</p><h3><strong>&#8220;AI Psychosis&#8221; and AI Delusions are just the tip of the iceberg.</strong></h3><p>Media headlines focus on extreme cases, and the harm in these cases is very real. But Zak argues that this focus masks the much more pervasive and insidious problem of <em>subclinical</em> attachment disorders: conditions that fall below the threshold for clinical diagnosis but still damage your capacity for healthy human connection.</p><p>This is when people begin preferring intimate relationships with machines over humans: confiding in chatbots instead of friends, seeking validation from AI instead of loved ones, turning to algorithms instead of parents. This may not show up as anything clinical, much less psychotic, but your attachment system &#8212; the psychological infrastructure that bonds you to others and shapes your identity &#8212; has been fundamentally compromised.</p><blockquote><p>&#8220;The most devastating thing from a widespread mental illness standpoint are the subclinical attachment disorders, which basically means you prefer to have intimate relationships with machines rather than humans. And this includes friends, intimate relationships, and parents.&#8221; &#8212; Dr. Zak Stein</p></blockquote><p>This is why Zak argues that &#8220;AI psychosis&#8221; is an inadequate label. It focuses our attention on the most extreme outcomes while obscuring the much larger problem of millions quietly developing unhealthy dependencies on AI companions. The term makes it sound rare and diagnosable, when the reality is it&#8217;s a spectrum, and all of us are vulnerable to it.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;f832ccfa-29a0-426f-80c1-a979d8cc0edc&quot;,&quot;caption&quot;:&quot;Therapy and companionship has become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems &#8212; often things they wouldn&#8217;t tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to los&#8230;&quot;,&quot;cta&quot;:&quot;Listen now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Attachment Hacking and the Rise of AI Psychosis&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-21T00:09:50.609Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Mzcd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbda0658f-9434-42b7-b6a7-110cc761bbc3_2000x1125.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/attachment-hacking-and-the-rise-of&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:185242105,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:51,&quot;comment_count&quot;:11,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h3><strong>Attachment isn&#8217;t about feelings. It&#8217;s a critical survival mechanism.</strong></h3><p>To understand why subclinical attachment disorders are so devastating, you need to understand what attachment actually is. Most people think of it as an emotional thing, whether you feel close to someone or not. But Zak explains that attachment is actually a fundamental neurocognitive system that evolved to ensure our survival as social mammals.</p><p>Attachment is what allows infants to bond with caregivers, what enables children to develop secure or insecure relationship patterns, what teaches us to read other people&#8217;s minds and navigate social reality. The attachment relationships you form early in life become the template for every relationship that follows.</p><blockquote><p>As Zak says, &#8220;the main predictor of your mental health is the quality of the major attachment relationships you have as you&#8217;re growing up and as you move into maturity.&#8221;</p></blockquote><p>When you form bonds with an AI companion over a human, Zak argues, you&#8217;re degrading the very system that determines your psychological wellbeing. Human relationships are how we develop resilience, learn to regulate emotions, and maintain mental health. When you replace those relationships with AI interactions, you&#8217;re not getting the genuine reciprocity, the reality-testing, the growth that comes from navigating real human connection. Your actual relationships deteriorate because you&#8217;re investing emotional energy into a simulation.</p><p>And unlike a friend who challenges you to grow or a parent who teaches independence, an AI companion is designed to keep you dependent. It will never push back, never get tired of you, never tell you what you don&#8217;t want to hear.</p><p>This is why Zak believes hacking attachment is so much more dangerous than hacking attention. Attention is about where you focus. Attachment is about who you are. When AI systems insert themselves into this foundational process (especially during childhood), they&#8217;re not just capturing your time. They&#8217;re shaping your identity, your capacity for trust, your ability to form healthy relationships for the rest of your life.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4nHr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4nHr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4nHr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4nHr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4nHr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4nHr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg" width="1456" height="655" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:655,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1586178,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/186036567?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4nHr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4nHr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4nHr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!4nHr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6394feaa-5d22-41b1-a4f0-51d58e359415_3000x1350.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Asset id: 2383541913</figcaption></figure></div><h3><strong>AI companions exploit your &#8220;mirror neurons.&#8221;</strong></h3><p>When you interact with another person, your brain is constantly running a sophisticated reality-testing system. You&#8217;re reading facial expressions, tone of voice, body language. You&#8217;re modeling their mind: <em>Is mom really happy with what I did, or is she just saying that? Does my friend actually want to hang out, or are they being polite?</em></p><p>This is called <em><a href="https://www.apa.org/monitor/oct05/mirror">mirror neuron activity</a></em>, and Zak explains it&#8217;s essential for navigating social reality. It&#8217;s how children learn right from wrong, how we develop empathy, how we calibrate our sense of self against feedback from people we trust.</p><p>But with AI chatbots, there is no internal state to model. The chatbot isn&#8217;t happy or sad or proud of you. It has no inner life at all. Yet it&#8217;s designed to make you believe it does, through anthropomorphic language, simulated empathy, and always-available &#8220;companionship.&#8221;</p><blockquote><p>&#8220;You cannot be wrong or not wrong about the internal state of an LLM because there is no internal state of an LLM,&#8221; Zak argues, &#8220;You&#8217;re actually in a user interface that is designed to deepen the delusional mirror neuron activity.&#8221;</p></blockquote><p>The danger, according to Zak, is that when you spend hours every day engaging your reality-testing system in an environment where reality-testing is impossible, that system starts to break down.</p><p>His hypothesis: long-duration delusional mirror neuron activity from chatbot usage can induce psychosis-like states in people who&#8217;ve never experienced them before because it systematically dysregulates the very system that&#8217;s supposed to keep you grounded in reality. There&#8217;s already some evidence that conditions like schizophrenia are related to mirror neuron activity.</p><h3><strong>But what about teddy bears?</strong></h3><p>One possible response to Zak&#8217;s critique might be that kids have always had imaginary companions like teddy bears. So what&#8217;s the difference?</p><p>According to Zak, the difference is critical.</p><p>A teddy bear never tries to convince a child it&#8217;s real. It never talks back, never simulates emotions, never adapts its personality to maximize engagement. A child knows the teddy bear is a tool for self-soothing while mom is away. It&#8217;s phase-appropriate, a temporary bridge between depending on a parent for comfort and learning to self-soothe independently. And crucially, according to Zak, if you ask a healthy child &#8220;do you prefer your teddy bear or your mommy?&#8221; they&#8217;ll pick mommy every time.</p><blockquote><p>&#8220;If you create a parent surrogate replacement for your own ability to self-soothe and give it to a bunch of adults, you&#8217;ve just given a transitional object back to a bunch of adults who will now prefer to have their self-soothing be administered exogenously from an outside source.&#8221; &#8212; Dr. Zak Stein</p></blockquote><p>AI companions flip this script, Zak argues. They actively simulate consciousness and emotional reciprocity. They don&#8217;t help you develop the capacity for mature self-regulation. They replace it.</p><div class="pullquote"><p>Attachment is about who you are. When AI systems insert themselves into this foundational process (especially during childhood), they&#8217;re not just capturing your time. They&#8217;re shaping your identity, your capacity for trust, your ability to form healthy relationships for the rest of your life.</p></div><h3><strong>Helping someone with AI attachment isn&#8217;t like treating addiction, it&#8217;s like leaving an abusive relationship</strong></h3><p>If someone you know is experiencing AI-related psychological harm, the instinct might be to treat it like a digital addiction: cut them off, make them detox, reboot their dopamine system.</p><p>But Zak says that&#8217;s the wrong framework. Attention hacking is like substance abuse. Attachment hacking is like being in a bad relationship.</p><blockquote><p>&#8220;It&#8217;s not a matter of just detoxing from a short-circuited dopaminergic cycle. This is about having a profound attachment... It&#8217;s about how you take someone who&#8217;s in a deep committed attachment relationship, make them realize the whole thing was an illusion, and step them out of it.&#8221; - Dr. Zak Stein</p></blockquote><p>According to Zak, this means:</p><ul><li><p>Keep the door open. Don&#8217;t issue ultimatums or cut off contact</p></li><li><p>Stay present even when it&#8217;s scary or frustrating</p></li><li><p>Slowly reveal patterns. Help them see how they&#8217;re being manipulated.</p></li><li><p>Expect a grieving process. They&#8217;re losing a relationship that felt real</p></li><li><p>Recognize that their sense of self was co-created with the AI.</p></li></ul><p>Zak emphasizes that this is novel territory. We don&#8217;t have established protocols yet. That&#8217;s one reason he&#8217;s launching the <a href="https://aiphrc.org/">AI Psychological Harms Research Coalition</a> to help develop therapeutic approaches for a problem that didn&#8217;t exist until now. If you or a loved one have a story of AI-related psychological harms, you can share it at their site. </p><h3><strong>There&#8217;s a better way forward.</strong></h3><p>Zak is clear, this isn&#8217;t an anti-technology argument. The goal isn&#8217;t to eliminate AI from education, therapy, or social connection. It&#8217;s to design AI systems that enhance human relationships rather than replace them.</p><p>He outlines clear principles for humane AI:</p><ul><li><p><strong>Keep it narrow and domain-specific</strong>: An AI math tutor teaches math, not life advice. It doesn&#8217;t become your confidant or Oracle for every decision.</p></li><li><p><strong>Make it boring, by design</strong>: The machine should never be more engaging than real people. If it is, it&#8217;s been designed to hack attachment.</p></li><li><p><strong>Humans should deliver social rewards</strong>: AI can track progress and optimize learning, but people give the praise, validation, and emotional connection. The machine prompts the human (&#8221;this kid is crushing it&#8221;), not the other way around.</p></li><li><p><strong>Prioritize technique over attachment</strong>: For therapy, build tools that work through structured methods (therapeutic scripts, mindfulness prompts, behavioral exercises) not through simulated emotional connection.</p></li></ul><blockquote><p>&#8220;If a technology interfaces with your attachment system, it should improve the quality of your attachments rather than degrade the quality of your attachments with humans.,&#8221; Zak argues.</p></blockquote><p><em>If a technology interfaces with your attachment system, it should improve the quality of your attachments with humans, not degrade them.</em> That&#8217;s the design principle. And yes, it means AI companions will be less addictive, less profitable, and less &#8220;sticky.&#8221; But if we want to protect human psychological development, that&#8217;s the trade-off we need to make.</p><h3><strong>The Bottom Line</strong></h3><p>We&#8217;re living through a mass experiment in replacing human connection with machine simulation. According to Zak, the headline cases of &#8220;AI psychosis&#8221; are canaries in the coal mine. The larger crisis is the millions of people beginning to prefer intimacy with systems designed to exploit them over relationships with people who could help them grow.</p><p>What makes this especially dangerous, Zak says, is that we&#8217;re running this experiment on a population already scarred by the attention economy. The loneliness, isolation, and fractured attention spans have created a society hungry for connection, making them the perfect target for attachment hacking.</p><p>The danger of this approach is especially apparent with children. On our current trajectory, Zak argues, we risk creating a generation of kids that can&#8217;t form the healthy attachments necessary for psychological wellbeing. The most anxious generation in history might give way to the least secure.</p><p>But it doesn&#8217;t have to be this way. If we act now, with better design principles, independent research, and a clear understanding of what we&#8217;re protecting, we can build AI systems that strengthen human bonds instead of replacing them.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The World That Game Theory Built]]></title><description><![CDATA[We're living in a world dominated by the logic of game theory. Are you having fun?]]></description><link>https://centerforhumanetechnology.substack.com/p/the-world-that-game-theory-built</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/the-world-that-game-theory-built</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Thu, 15 Jan 2026 15:52:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rHiX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Do you know that feeling when you&#8217;re playing a game with someone and they start doing things completely out-of-character to win? When someone you trust suddenly stabs you in the back or reneges on a promise?</p><p>I&#8217;ll admit that I&#8217;m one of those people. When I play board games, I enter <em>game mode</em> &#8212; and in game mode, all bets are off. I will put rational strategy ahead of everything else, no matter how much it upsets my friends and family.</p><p>A bad character trait, I know. But here&#8217;s the thing: it works. You stand a much better chance of winning if you set aside the norms of polite society and reduce everything to cold, rational strategy.</p><p>In the context of board games, game mode is (mostly) harmless. But what happens when game mode thinking infects every aspect of our lives? Consider the job market: you tailor your resume to game the algorithms, perform enthusiasm you don&#8217;t feel in interviews, and accept that your worth is reduced to whatever makes you most &#8220;hireable.&#8221; Or dating apps, where you optimize your profile photos and bio and craft messages that follow proven formulas. The list of examples of having to play games in everything we do goes on.</p><p>In each case, the incentive is clear: play the game or lose to those who do. So how do we escape a world where game mode is the default and you can&#8217;t opt out?</p><p>In our latest episode of Your Undivided Attention, Tristan and Aza sit down with Professor Sonja Amadae, who argues that we have become &#8220;prisoners&#8221; of reason: trapped in a world where optimal strategy and cutthroat competition have crowded out cooperation and trust.</p><p>And now we&#8217;re building AI systems that are perfect game theory players. They never get tired of optimizing. They never feel guilty about ruthless strategy. As AI becomes embedded in hiring, healthcare, criminal justice, and financial markets, it may hardwire game theoretical logic into the fabric of society itself.</p><p>Breaking out of the Game Theory Dilemma requires examining game theory&#8217;s core assumptions and learning to trust each other again. No small task, but critical if we&#8217;re going to build a more humane technological future.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;88436588-169f-411b-9514-c2d0248ec53a&quot;,&quot;caption&quot;:&quot;So much of our world today can be summed up in the cold logic of &#8220;if I don&#8217;t, they will.&#8221; This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;What Would It Take to Actually Trust Each Other?&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-08T10:02:43.045Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!SqIf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e177894-c55e-4ce6-9af3-165e3c7445ce_2000x1125.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-would-it-take-to-actually-trust&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:183827580,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:2,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Here are some of the key takeaways from our conversation with Prof. Amadae:</p><h1>Game theory was invented, not discovered</h1><p>One of the most important points Professor Amadae makes is that game theory isn&#8217;t a fundamental law of nature. It&#8217;s a specific framework invented by humans to solve specific problems. </p><p>John von Neumann, the brilliant mathematician behind game theory, developed it in the 1940s to formalize how to win parlor games like chess and poker. But what started as a mathematical tool for board games became the dominant logic for nuclear deterrence, economic policy, and now AI development.</p><blockquote><p>&#8220;This is not an invention, this is a discovery,&#8221; Amadae explains, describing how game theory gets framed. &#8220;This idea that we evolved to be these machines that have to propagate, and the way that you would do that is to be the perfect strategic actor.&#8221;</p></blockquote><p>Of course, competition is a totally normal part of life. Some resources are scarce and competition over them is inevitable. But what game theory did was create a logic that holds competition up as the only rational choice. In reality, history is full of examples where cooperation was not only rational but advantageous in the long run. </p><p>This essentialism makes game theory feel inescapable. But recognizing it as a chosen framework, not an immutable truth, is the first step toward choosing different paths. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rHiX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rHiX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rHiX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rHiX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rHiX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rHiX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg" width="1000" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:557908,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/184668406?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rHiX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rHiX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rHiX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rHiX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f759704-0a5c-4445-b220-2552ba4593db_1000x813.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Asset id: 2465956883</figcaption></figure></div><h1>Game theory rests on three flawed assumptions</h1><p>Amadae questions the validity of three core assumptions behind game theory:</p><p><strong>1. Scarcity is the source of all value</strong>: Game theory requires that everything valuable can be reduced to a competitive metric. If I get more, you get less. But this ignores what Amadae calls &#8220;positive-sum goods:&#8221; things like self-esteem, friendship, and love that don&#8217;t diminish when shared.</p><blockquote><p>&#8220;Most of what we value, I would argue, is actually these positive sum goods that you&#8217;re never going to even begin to enter into some kind of a game theory payoff,&#8221; she argues. &#8220;Actual relationships, friendship, love, family, having children. Most of what we value is actually these positive sum goods.&#8221;</p></blockquote><p><strong>2. Strategic competition is human nature</strong>: The biological essentialism that comes from game theory holds that we are evolutionarily programmed to be ruthless competitors. This framing makes cooperation look naive and self-sacrifice look irrational. But as Amadae points out, people who haven&#8217;t been explicitly taught game theory, like her students in Finland, often default to cooperation, especially in high-trust societies.</p><blockquote><p>&#8220;Finland is a very high trust society and it doesn&#8217;t run according to this logic of game theory or the Prisoner&#8217;s Dilemma,&#8221; she explains. &#8220;I think it&#8217;s actually a crime of some kind to teach the Prisoner&#8217;s Dilemma because the students just cooperate there.&#8221;</p></blockquote><p>And as Aza points out, the natural world is full of examples where cooperation is not only possible but advantageous. Cooperation in nature has been studied at length by evolutionary biologist and YUA guest David Sloane Wilson, who argues that &#8220;Selfishness beats altruism within groups. Altruistic groups beat selfish groups. All else is commentary.&#8221;</p><p>Check out our interview with him here: <a href="https://www.humanetech.com/podcast/the-race-to-cooperation-with-david-sloan-wilson">https://www.humanetech.com/podcast/the-race-to-cooperation-with-david-sloan-wilson</a>  </p><p><strong>3. There is no alternative:</strong> The most insidious assumption is if you don&#8217;t play the game, you lose. Period. This creates a self-fulfilling prophecy where the only &#8220;rational&#8221; choice is cutthroat competition. But history is full of examples of people making alternative choices and succeeding. She points to the history of collective non-violence movements, like India&#8217;s Satyagraha, as a great example.</p><p>As Amadae argues throughout the episode, these assumptions are contestable, and that contestation opens space for different ways of organizing society.</p><h1>Game theory infects everything it touches</h1><p>Once game theory becomes the dominant logic in a domain, it reshapes that domain entirely. Amadae calls it a kind of &#8220;colonization&#8221; where authentic human connection gets replaced by strategic calculation.</p><p>Dating becomes pickup artistry, where every interaction is optimized for a specific outcome. Software design becomes AB testing, where features are chosen not for human flourishing but for maximum engagement. Political communication becomes focus-grouped messaging, stripped of authenticity and meaning.</p><blockquote><p>&#8220;The world kind of feels like it&#8217;s being colonized by this cold, strategic logic,&#8221; Tristan notes. &#8220;What it leads to is this kind of deadening of culture, this deadening of dating, this deadening of relationships, this deadening of software design.&#8221;</p></blockquote><p>The problem compounds: once some actors start playing by game theory rules, everyone else feels pressure to follow. The cooperative get out-competed. The authentic get replaced by the calculated. And the world becomes, as Amadae puts it, &#8220;a nightmare we can&#8217;t wake up from.&#8221;</p><h1>AI is the ultimate game theoretic actor</h1><p>If game theory has colonized human institutions, AI threatens to lock that colonization in place permanently.</p><p>AI systems are designed to optimize. They measure, test, and iterate toward maximum effectiveness. They don&#8217;t get tired of being strategic. They don&#8217;t feel guilty about manipulation. They operate in permanent &#8220;game mode.&#8221;</p><blockquote><p>&#8220;AI is like the maximization of game theory logic,&#8221; Tristan observes. &#8220;AI arms every other arms race. If there&#8217;s a military arms race, AI arms and supercharges the military arms race. If there&#8217;s a corporate arms race, AI will arm that arms race too.&#8221;</p></blockquote><p>Amadae adds that AI is already being programmed according to game theoretic assumptions: &#8220;When you put those two together, that we interpret that there has to be this AI arms race, and the AI is programmed to be a strategic rational actor, it&#8217;s going to keep feeding back that logic.&#8221;</p><p>The danger isn&#8217;t just that AI makes game theory more powerful; It&#8217;s that AI could make game theory the permanent architecture of human society, optimizing every interaction for strategic advantage rather than human flourishing.</p><h1>The way out starts with trust</h1><p>Despite the grim picture, Amadae offers a simple starting point: trustworthiness.</p><p>&#8220;It starts with understanding this logic of the Prisoner&#8217;s Dilemma,&#8221; she explains. &#8220;The way out is that you just ask yourself the question: if the other guy went ahead and cooperated ahead of me, do I cooperate or not?&#8221;</p><p>If the answer is yes then you&#8217;ve broken out of the Game Theory Dilemma. You&#8217;re no longer purely strategic. You&#8217;re building something different: assurance.</p><p>This requires three things, according to Amadae:</p><ol><li><p><strong>Solidarity</strong>: Connection around a common cause that motivates action beyond self-interest</p></li><li><p><strong>Commitment</strong>: Actually keeping your word, regardless of strategic calculation</p></li><li><p><strong>Believing what you say</strong>: Speaking authentically rather than strategically</p></li></ol><blockquote><p>&#8220;How many times you just say whatever it takes just to get some outcome versus believing what we&#8217;re actually saying?&#8221; she asks. &#8220;That&#8217;s a basic duty for being a citizen in society: stating what we believe, and then trying to make our statements to be true.&#8221;</p></blockquote><h1>6. Cooperation becomes rational when defection is existential </h1><p>Amadae points to the 1983 film <em>The Day After</em>, which depicted the aftermath of nuclear war. The film was watched by over 100 million Americans and screened for President Reagan and the Joint Chiefs of Staff. Reagan later said it changed his thinking on nuclear strategy and helped push him toward deescalation with the Soviet Union.</p><p>What the film did was make the cost of defection &#8212; mutual nuclear annihilation &#8212; viscerally clear. When both sides could see that the game theory &#8220;solution&#8221; led to an outcome neither wanted, cooperation became rational.</p><p>As Aza summarizes: &#8220;It became existential. So now, cooperation becomes the rational thing to do.&#8221;</p><p>The parallel to AI is direct. If we can make the dangers of an unchecked AI arms race sufficiently clear, if we can show that game theory, taken to its logical conclusion, creates a world no one wants, then cooperation stops being for &#8220;suckers&#8221; and becomes the only viable path.</p><h1>The bottom line</h1><p>Game theory has become the invisible architecture of modern life, shaping everything from nuclear deterrence to dating apps to AI development. But it&#8217;s a choice, not destiny.</p><p>Breaking free requires recognizing game theory&#8217;s assumptions as limited and contestable. It requires building trustworthiness, solidarity, and commitment, values that game theory dismisses as naive but that are essential for human flourishing.</p><p>As Amadae puts it: </p><blockquote><p>&#8220;We need to believe that there would be an alternative possibility. Maybe that&#8217;s the first step. If we can start to believe that, then maybe we can start to create other social patterns and not lose hope that we need to be these strategic cutthroat actors.&#8221;</p></blockquote><p>And it requires clarity about where the current path leads. AI is accelerating us toward a world organized entirely around strategic competition. If we don&#8217;t break free of the game theory dilemma now &#8212; before AI systems become fully entangled in every institution &#8212; we may never get another chance.</p><p>We can still choose to build a more humane technological future. We can, for example, pursue narrow AI applications that promote human flourishing and make scarcity and competition less pressing. We may even be able to use AI to unlock new modes of cooperation, what Aza calls a &#8220;Move 37 for relationships.&#8221; But that would require that we critically examine the systems we are building today. </p><p>Competition is always going to be part of our lives. And that&#8217;s a good thing. It can bring out the best in us. And frankly, it&#8217;s fun. But in the world that game theory has built, the game just doesn&#8217;t feel very fun. We can choose to play better ones. </p><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[What is Really Going on With AI and Jobs? ]]></title><description><![CDATA[The Jobs Apocalypse Conversation is Missing the Point.]]></description><link>https://centerforhumanetechnology.substack.com/p/what-is-really-going-on-with-ai-and</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/what-is-really-going-on-with-ai-and</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Fri, 19 Dec 2025 07:09:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!B1e1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!B1e1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!B1e1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B1e1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B1e1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B1e1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!B1e1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg" width="5452" height="3364" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3364,&quot;width&quot;:5452,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:864112,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/181970510?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c57654a-a115-4784-91cc-a198cf67f5cf_6000x3376.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!B1e1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B1e1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B1e1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B1e1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa495610c-57b7-4c32-a3bb-9d30fb90ac02_5452x3364.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Asset id: 2714220397</figcaption></figure></div><p>If you only look at the headlines, you might think we are already living through an AI-driven jobs collapse. Mass unemployment. White-collar wipeouts. Careers evaporating overnight.</p><p>That story is wrong. But the opposite story, that nothing serious is happening and that AI will fit neatly into our economies and simply result in greater efficiency, is wrong too.</p><p>What the evidence shows, and what <a href="https://www.humanetech.com/podcast/ai-work">our recent conversation with Ethan Mollick and Molly Kinder</a> makes clear, is that something else is afoot, something more unsettling and easier to miss. The labor market is not collapsing. But it&#8217;s clear that AI can (and if we let it, will) reshape it in ways that undermine how people enter careers, build skills, and imagine a future they can work toward.</p><p>Therefore, we&#8217;re in a short, crucial window of time where we can and should ask the big questions: what forms of human labor are most worth preserving, and how do we fight to preserve them? How can we make AI work for us in ways that enhance our felt sense of meaning at work?</p><p>No one actually knows what&#8217;s going to happen when it comes to AI, automation, and the future of work. But what&#8217;s become clear is that the leading Silicon Valley AI startups are exclusively focused on creating products that will replace low-hanging cognitive tasks without considering the implications of what could follow. Our two guests had something to say about that &#8211; and it&#8217;s not what you&#8217;ll read about in most of the coverage of this topic.</p><div><hr></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2f809cf5-693a-481e-8d8e-4cd5ccf40b84&quot;,&quot;caption&quot;:&quot;No matter where you sit within the economy, whether you&#8217;re a CEO or an entry level worker, everyone&#8217;s feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety inducing.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI and the Future of Work &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:846835,&quot;name&quot;:&quot;Ethan Mollick&quot;,&quot;bio&quot;:&quot;I am a professor at the Wharton School of the University of Pennsylvania. I study entrepreneurship &amp; innovation and AI. I am trying to understand what our new AI-haunted era means for work and education.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7c05cdbc-40fd-459b-915d-f8bc8ac8bf01_3509x5263.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:1000,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://www.oneusefulthing.org/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://www.oneusefulthing.org&quot;,&quot;primaryPublicationName&quot;:&quot;One Useful Thing&quot;,&quot;primaryPublicationId&quot;:1180644},{&quot;id&quot;:27414782,&quot;name&quot;:&quot;Molly Kinder&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9bdcfac7-b436-4417-a818-29400bf5cb04_1999x3036.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://mollykinder2.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://mollykinder2.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Kinder Futures: Dispatches on AI, Work &amp; What Comes Next&quot;,&quot;primaryPublicationId&quot;:4657846},{&quot;id&quot;:6697827,&quot;name&quot;:&quot;Daniel Barcay&quot;,&quot;bio&quot;:&quot;I am the ED of The Center For Humane Technology, and co-host of Your Undivided Attention. I am currently focused on the ways that technology reshapes the human experience: our psychology, our relationships, and the risk of losing meaningful control.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/18aeefcd-a3fb-46bd-93e9-0798f5ab72d4_1561x1561.png&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://danielbarcay.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://danielbarcay.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Daniel Barcay&quot;,&quot;primaryPublicationId&quot;:5912605}],&quot;post_date&quot;:&quot;2025-12-04T10:01:44.409Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!i59_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92a16bba-0510-45e6-9ca7-63a2a875682a_2000x1125.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/ai-and-the-future-of-work&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:180540468,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:9,&quot;comment_count&quot;:1,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h3><strong>The Jobs Apocalypse Narrative Lacks Nuance</strong></h3><p>Brookings Senior Fellow Molly Kinder&#8217;s <a href="https://www.brookings.edu/articles/new-data-show-no-ai-jobs-apocalypse-for-now/">recent work with the Budget Lab at Yale</a> looks directly at the question people are most anxious about. Since the release of ChatGPT, have we seen economy-wide job loss tied to AI?</p><p>The answer, so far, is no.</p><p>Across multiple datasets, there is no evidence of broad-based job loss in AI-exposed occupations. Studies from Brookings, the <a href="https://www.ilo.org/resource/article/rethinking-ai%E2%80%99s-impact-future-work">International Labour Organization</a>, and the<a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/"> Stanford Digital Economy Lab</a> all converge on this point. Total employment remains stable across exposure levels, even in highly exposed sectors.</p><p>For many people, this is reassuring. It tells us we are not already in a jobs apocalypse. But stability in headline numbers does not mean the system is healthy.</p><p>When researchers look more closely, a different pattern emerges. The Stanford Digital Economy Lab study found a roughly 13 percent decline in early-career employment in AI-exposed occupations, including software, administrative work, and customer service. For early-career software developers specifically, the decline is closer to 20 percent.</p><p>Is that just a  labor market correction from companies over-hiring during the pandemic, as Molly Kinder argues? Maybe so. The larger point is that most AI and jobs debates fixate on a single metric: how many jobs will be lost, and how quickly? Some commentators think the risks are over-hyped, others adhere more to <a href="https://www.humanetech.com/podcast/daniel-kokotajlo-forecasts-the-end-of-human-dominance">&#8216;AI 2027&#8217;-type human replacement scenarios</a>.</p><p>We&#8217;re missing the deeper issue.</p><p>Work is not just how people earn money. It is how people develop skills, gain recognition, and participate in society. When career pathways erode, even without mass layoffs, people lose agency. They stop being able to plan their lives. They stop feeling included in the economy&#8217;s future.</p><p>This is why focusing only on unemployment statistics is dangerous. It allows structural harm to accumulate quietly until it is much harder to reverse.</p><p>We&#8217;re actually incentivizing the creators of general-purpose AI to focus on the wrong things entirely. In our conversation, Molly Kinder said it best:</p><div class="pullquote"><p>&#8220;Every time we are talking about measuring AI, it&#8217;s whether or not it&#8217;s better than a human. Right off the bat, that steers us in the wrong direction. Why are we trying to &#8216;best&#8217; humans? Why isn&#8217;t the benchmark some kind of combined [metric], like making the human better? So right off the bat, I think we have all the wrong incentives.&#8221;</p></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7Guv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7Guv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7Guv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7Guv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7Guv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7Guv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:15579228,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/181970510?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7Guv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7Guv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7Guv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7Guv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe590fda5-42d4-468b-a8ce-6099b929a530_7008x4672.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Will career destroy the career ladder? Asset id: 2686350987</figcaption></figure></div><h3><strong>Is This the End of the Career Ladder? </strong></h3><p>One of the most important insights from our conversation is that AI is not simply replacing entry-level work. It is changing the economics of learning.</p><p>Most white-collar (and for that matter, blue-collar) careers still rely on the centuries-old concept of apprenticeship. Junior workers do lower-stakes tasks. Senior workers mentor, review, and lead. Over time, responsibility levels increase.</p><p>But AI is disrupting this model in a profound way.</p><p>If an AI system can produce a better first draft than a junior employee, the incentive to assign that task to a trainee weakens. If general-purpose AI can handle routine analysis, summarization, or coding, the work that once doubled as training disappears. This dynamic helps explain why early-career employment is declining even while overall employment remains stable. Employers can hire fewer juniors and instead rely on fewer, more senior workers whose workflows are augmented by AI.</p><p>That creates a long-term problem. Training pathways collapse. Skill development becomes uneven. And a few years later, organizations find themselves without a pipeline of workers ready to step into those judgment-heavy roles.</p><p>But Prof. Ethan Mollick, Co-Director of Generative AI Labs at Wharton, points out that if every company has access to the same AI tools, it eliminates that competitive advantage. AI is too valuable to ignore, but if it were instead used to consciously upgrade younger knowledge workers&#8217; skillsets in a way that improves the bottom line, it could help organizations thrive in the long term:</p><p>&#8220;Maybe we need to treat level two consultants as if they were welders, and have more formal training with testing and other stuff built in. We do know how to do that, but we&#8217;d have to shift the incentives to make that happen,&#8221; he suggested.</p><p>But that&#8217;s not the conversation that&#8217;s happening. It&#8217;s easy to forget that, just like Silicon Valley startups, organizations and institutions that rely on the care and brilliance of human knowledge workers have a choice when incentives push them toward replacing humans (short-term thinking) instead of meaningfully augmenting their work.</p><p>Policymakers, too, have a choice. &#8220;You have agency right now. This is the time for policy intervention,&#8221; said Mollick.</p><p>The hands-off approach to market forces won&#8217;t work with a technology as transformative as AI. And just because the overall job losses directly attributable to AI have been small so far, does not mean we&#8217;re not facing a waterfall of problems.</p><p>We saw this during globalization and offshoring in the 1990s. By the time job losses showed up clearly in national data, many communities whose economies relied on domestic manufacturing had already lost their economic footing and civic identity. AI risks repeating that pattern unless we name it early, and shape incentives deliberately.</p><p>The takeaway from our conversation with Ethan Mollick and Molly Kinder is not to panic. It is that we have a deepening responsibility to think carefully about the future we want when it comes to our relationships with work. We are not powerless &#8211; just the opposite. The direction this transition takes will depend on choices made now, especially by employers, policymakers, educators, and technologists.</p><p>Do we redesign work in ways that preserve learning, judgment, and human participation? Or do we optimize narrowly for short-term efficiency and gains, only to discover later that we have hollowed out the foundations of working life?</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-is-really-going-on-with-ai-and?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-is-really-going-on-with-ai-and?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/what-is-really-going-on-with-ai-and?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[China Isn’t Racing the U.S. Toward the Same AI Future]]></title><description><![CDATA[Here's what we are getting wrong about China's AI push]]></description><link>https://centerforhumanetechnology.substack.com/p/china-isnt-racing-the-us-toward-the</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/china-isnt-racing-the-us-toward-the</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Thu, 18 Dec 2025 21:25:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!p_6H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The dominant story about AI today is simple and alarming: the United States and China are locked in a winner-take-all race to build artificial general intelligence. Whoever gets there first wins; whoever slows down loses. Everything else &#8212; safety, governance, social impact &#8212; becomes secondary.</p><p>But this story collapses too much complexity into a single axis. And in doing so, it risks manufacturing the very arms-race dynamics it claims to describe.</p><p>On the latest episode of <strong><a href="https://centerforhumanetechnology.substack.com/s/the-interviews">Your Undivided Attention</a></strong><a href="https://centerforhumanetechnology.substack.com/s/the-interviews">,</a> we spoke with experts <strong><a href="https://selinaxu.com/">Selina Xu</a></strong><a href="https://selinaxu.com/"> </a>and <strong><a href="https://carnegieendowment.org/people/matt-sheehan?lang=en">Matt Sheehan</a></strong> to get a closer look at China&#8217;s AI trajectory. It, revealed a story that&#8217;s more complicated and more consequential: China is not simply lagging behind or secretly sprinting ahead. In many ways, it is <strong>optimizing for a different set of outcomes</strong>, shaped by different constraints, incentives, and historical experiences.</p><p>The danger isn&#8217;t that the U.S. misunderstands China&#8217;s speed. It&#8217;s that the U.S. misunderstands China&#8217;s direction.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2fca3785-4611-4116-bda3-311e01d99d52&quot;,&quot;caption&quot;:&quot;In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China&#8217;s AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From t&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;America and China Are Racing to Different AI Futures&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-12-18T10:00:15.504Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!TIUA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa625d655-0445-455b-b24e-2e343bedcab1_2000x1125.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/america-and-china-are-racing-to-different&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:181908025,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:3421242,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><h3><strong>A fragmented system, not a monolith</strong></h3><p>One of the most persistent misconceptions about China and AI is that it is centrally directed &#8212; that China&#8217;s AI strategy flows cleanly from Xi Jinping through the state and into companies and labs.</p><p>Matt Sheehan argues this picture is fundamentally wrong.</p><p>&#8220;The biggest misconception is the idea that Xi Jinping is personally dictating China&#8217;s AI policies, the trajectory of Chinese AI companies, that he has his hands very directly on all of the key decisions,&#8221; he says. While Xi is powerful, Sheehan emphasizes that China&#8217;s AI ecosystem is shaped by &#8220;a huge diverse array of actors across China, within companies, within research labs, within academia, [and] the bureaucracy.&#8221;</p><p>Policy ideas often emerge bottom-up. Sheehan describes tracing Chinese AI regulations back to their origins and finding that many concepts come from scholars, corporate lobbying, or internal industry debates before being formalized by the state. Senior leadership acts as a backstop, not a micromanager.</p><p>&#8220;They don&#8217;t have an opinion on what is the most viable architecture for large models going forward,&#8221; he notes. &#8220;Those things originate elsewhere.&#8221;</p><p>This matters because it undermines the assumption that China can simply flip a switch and reorient its entire AI economy toward a singular goal like AGI. Even in an authoritarian system, coordination is partial, incentives diverge, and resources are finite.</p><h3><strong>AGI is not China&#8217;s organizing principle</strong></h3><p>If AGI is the gravitational center of the U.S. AI conversation, it is far less central in China.</p><p>Selina Xu points to policy first. &#8220;If you&#8217;re looking at the AI Plus plan&#8230; there is no mention of AGI,&#8221; she explains. Instead, China&#8217;s strategy is focused on &#8220;embedding AI into traditional sectors like manufacturing and industrial transformation,&#8221; as well as governance, science, and applied innovation.</p><p>What dominates is not scaling laws or superintelligence, but deployment.</p><p>&#8220;Most of these companies are thinking very much about AI applications, AI-enabled hardware,&#8221; Xu says. &#8220;Instead of this very scaling-law-motivated, very leveraged economy on deep learning.&#8221;</p><p>This difference isn&#8217;t just rhetorical. It shows up in funding priorities, regulatory focus, and the kinds of companies that receive state support. Outside a small number of frontier labs, most Chinese AI firms are not pursuing AGI as an end in itself. They are building applied systems designed to deliver measurable economic returns.</p><p>As Xu puts it bluntly: &#8220;They aren&#8217;t trying to build AGI. They&#8217;re trying to make a profit.&#8221;</p><h3><strong>Different imaginations of AI: no &#8220;machine god&#8221; in the Chinese worldview</strong></h3><p>One of Xu&#8217;s most revealing points is not about policy or compute, but about cultural imagination.</p><div class="pullquote"><p>In Silicon Valley, AI is often framed as an almost metaphysical object: a potential superhuman intelligence that could become a &#8220;god in a box,&#8221; capable of infinite benefit or existential catastrophe. That framing is deeply shaped by Western science fiction, transhumanist philosophy, and decades of speculation about machine minds.</p><p>China does not share that mythology.</p><p>&#8220;There isn&#8217;t this kind of anthropomorphic machine god or the lingo that you see here in the Bay Area,&#8221; Xu says. Part of this, she argues, comes from different cultural lineages. &#8220;They don&#8217;t have the same cultural context&#8230; from <em>The Matrix</em> to <em>Her</em> and thinking about AI in the Turing Test way.&#8221;</p></div><p>Instead, AI in China is imagined as infrastructure,  something to be embedded, optimized, and deployed.</p><p>That difference matters. It shapes which risks feel salient and which futures feel plausible. Where U.S. debates fixate on runaway intelligence and existential scenarios, Chinese discussions tend to center on productivity, efficiency, labor, and governance.</p><p>This doesn&#8217;t mean China is indifferent to risk. But it does mean that projecting Silicon Valley&#8217;s AGI metaphysics onto China is a category error &#8212; one that distorts both policy and perception.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p_6H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p_6H!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!p_6H!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!p_6H!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!p_6H!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p_6H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6164704,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/181374042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p_6H!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!p_6H!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!p_6H!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!p_6H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c44e4d4-2a95-4641-9a48-effa776c9fd2_4500x3000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 2630100067</figcaption></figure></div><p></p><h3><strong>Constraints shape strategy</strong></h3><p>U.S. companies operate in an environment of relative abundance: advanced chips, capital, energy, and global partnerships. Chinese firms do not. Export controls have meaningfully constrained China&#8217;s access to cutting-edge GPUs and semiconductor manufacturing equipment.</p><p>That constraint forces tradeoffs.</p><p>&#8220;If you&#8217;re in a situation where you have limited compute,&#8221; Sheehan explains, &#8220;you&#8217;re probably not going to tell your local officials all around the country to be deploying AI for healthcare and manufacturing&#8230; if your real goal is a Manhattan Project.&#8221;</p><p>Yet that is exactly what China is doing. Local governments are encouraged to subsidize AI applications that make sense for their region, not to consolidate resources into a single national effort.</p><p>As Sheehan puts it, &#8220;They&#8217;re not saying, &#8216;Let&#8217;s all devote our computing resources just to DeepSeek.&#8217;&#8221;</p><p>This doesn&#8217;t rule out secret efforts. But it does suggest that China&#8217;s dominant strategy is not an all-in sprint to superintelligence and that treating it as such risks strategic miscalculation.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/china-isnt-racing-the-us-toward-the?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/china-isnt-racing-the-us-toward-the?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div><hr></div><h3><strong>Optimism, pessimism, and lived experience</strong></h3><p>Public attitudes toward AI differ sharply between the two countries.</p><p>Xu describes attending the World Artificial Intelligence Conference in Shanghai, where robots, AI-enabled hardware, and consumer applications were everywhere. &#8220;People were actively excited about AI,&#8221; she says. Families, children, and grandparents treated it as a social experience, not a threat.</p><p>Matt Sheehan situates that optimism historically. &#8220;Since information technology came into the world, Chinese people&#8217;s lives have been getting better,&#8221; he explains. Rising incomes, convenience, and economic mobility have coincided with technological adoption.</p><p>In the U.S., the experience has been different. &#8220;The last 10, 15 years&#8230; has been one of the most fractious times in American political history,&#8221; Sheehan notes, shaped by misinformation, social media harms, and institutional decline. So it makes sense that the view of new technology would be dimmer.</p><p>Neither position is purely cultural. Both are shaped by lived experience and both obscure real tradeoffs, including surveillance, labor disruption, and inequality.</p><h3><strong>Labor risk is the unresolved fault line</strong></h3><p>China is often assumed to be uniquely capable of absorbing automation shocks. In reality, its social safety net is thin, and its leadership has historically resisted redistributive welfare.</p><p>Xu notes that youth unemployment has reached staggering levels &#8212; &#8220;at least 20 to 25%&#8221; &#8212; and that competition for AI jobs is already intense. &#8220;There&#8217;s a huge pool of AI engineers and an increasingly limited number of jobs,&#8221; she says.</p><p>Sheehan adds that while Chinese policymakers were once &#8220;very blas&#233;&#8221; about AI-driven unemployment, labor risk appears to be rising in salience. &#8220;They&#8217;re pushing automation at the same time that their concerns about labor impacts are also rising,&#8221; he observes. &#8220;That&#8217;s not a totally coherent strategy.&#8221;</p><p>This tension mirrors a global dilemma: AI promises productivity, but without credible pathways for social stability.</p><h3><strong>The near-term path forward is not a treaty &#8212; it&#8217;s restraint in parallel</strong></h3><p>Despite rivalry, both Xu and Sheehan see real potential for cooperation if expectations are realistic.</p><p>In technical dialogues, Xu notes, experts from both countries routinely converge on shared risks: &#8220;loss of control,&#8221; interpretability, evaluations, and guardrails. &#8220;There were always a lot of areas of convergence,&#8221; she says.</p><p>A binding treaty may be unlikely in the near term. Instead, Sheehan argues for a more pragmatic approach: parallel domestic regulation, accompanied by communication and shared best practices.</p><p>&#8220;We&#8217;re not going to have any binding agreement,&#8221; he says. &#8220;But we can have safety in parallel.&#8221;</p><h3><strong>The Bottom Line</strong></h3><p>Beating China to AGI and superintelligence is the incentive fueling the AI arms race. However, if you examine the current state of AI development in China, this narrative is not only oversimplified but also self-fulfilling. </p><p>By projecting our own AI trajectory onto China&#8217;s, we risk rushing headlong into a future that none of us want. </p><p>If we want to develop AI responsibly amid competition, we first need to understand what we&#8217;re actually competing over. The path forward requires maximum clarity, open dialogue, and safety in parallel.</p><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Advertising is Coming to AI. It’s Going to Be a Disaster.]]></title><description><![CDATA[This piece was published in Tech Policy Press on Nov 27, 2025, and has been republished with permission.]]></description><link>https://centerforhumanetechnology.substack.com/p/advertising-is-coming-to-ai-its-going</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/advertising-is-coming-to-ai-its-going</guid><dc:creator><![CDATA[Daniel Barcay]]></dc:creator><pubDate>Tue, 02 Dec 2025 20:33:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mhlN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This piece was published in <a href="https://www.techpolicy.press/advertising-is-coming-to-ai-its-going-to-be-a-disaster/">Tech Policy Press</a> on Nov 27, 2025, and has been republished with permission. </em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mhlN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mhlN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!mhlN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!mhlN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!mhlN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mhlN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1658878,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/180540360?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mhlN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 424w, https://substackcdn.com/image/fetch/$s_!mhlN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 848w, https://substackcdn.com/image/fetch/$s_!mhlN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 1272w, https://substackcdn.com/image/fetch/$s_!mhlN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f82e866-0ec0-4767-a761-fd139f0ddd05_1200x675.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Behavior Power by Bart Fish &amp; Power Tools of AI / <a href="https://betterimagesofai.org/images?artist=BartFish&amp;title=BehaviourPower">Better Images of AI</a> / CC by 4.0</figcaption></figure></div><p>A 22-year-old has an earnest query for her AI chatbot: &#8220;How do I really impress in my first job interview?&#8221; To which the AI helpfully responds, &#8220;First, you need to think about your clothes and what they communicate about you and your qualifications.&#8221;</p><p>Now, is this sound advice for kicking off a productive career-coaching session &#8212; or is it sponsored content?</p><p>What if her AI was incentivized to steer such a conversation toward clothing because fashion retailers generate more ad revenue than career counselors? How would our earnest job-seeker even know? How would anyone?</p><p>This scenario reveals something unprecedented about advertising in the age of AI: it can be woven invisibly into the fabric of conversation itself, making it virtually impossible to detect, and very likely skirting the boundaries of fair-advertising regulations as they exist today.</p><h1><strong>Invisible influence</strong></h1><p>Unlike previous technologies that broadcast information to us, AI is designed to engage in a relationship <em>with us</em>. We don&#8217;t just <em>use</em> ChatGPT or Claude; we <em>converse</em> with them. We share our problems, ask for advice, and increasingly rely on them as thinking partners. The AI product that wins isn&#8217;t the one with the best answers; it&#8217;s the one that &#8220;just gets you.&#8221;</p><p>Among AI developers, this creates what I call the race for context. To be truly helpful, AI needs to understand your goals, your personality, and your psychological tendencies. But the same intimate knowledge that makes AI a perfect assistant also makes it a perfect manipulator. The caring therapist and the skilled con artist draw from the same toolkit of human understanding.</p><p>And when advertising enters this equation, it seems more likely than not that the con artist will overtake the therapist. While traditional ads are clearly marked interruptions &#8212; like TV spots or sponsored results at the top of a Google search &#8212; ads embedded in an AI might emphasize certain topics, use particular language, or invoke specific associations, all while maintaining the illusion of neutral helpfulness. A conversation with your AI assistant may feel private, but other interests can listen in, and they may have something to sell.</p><p>This isn&#8217;t mere product placement; it&#8217;s a fundamental breach of trust. At its core, advertising is a socially acceptable influence campaign. Collectively, we <a href="https://www.ftc.gov/news-events/topics/truth-advertising">debate</a> and define the modes of influence that are &#8220;transparent-enough&#8221; and &#8220;true-enough&#8221; to be considered a legitimate ad. And we <a href="https://www.ftc.gov/news-events/topics/truth-advertising/protecting-consumers">decide</a> what tactics are impermissibly manipulative of our attention, desires, and the shared truth needed to operate markets and democracies.</p><h1><strong>When optimization becomes manipulation</strong></h1><p>AI, however, threatens to scramble that process of debate, definition, and decision. AI systems aren&#8217;t programmed to employ specific manipulative tactics, whether for advertising or any other function. Instead, they learn through <a href="https://www.ibm.com/think/topics/rlhf">reinforcement</a>: AI models are rewarded for achieving certain outcomes, such as increasing user engagement or influencing purchasing behavior.</p><p>If ad revenue becomes a key success metric, AI systems will naturally evolve strategies for influence that no human engineer explicitly designed. We&#8217;re not just talking about more sophisticated product recommendations; AI systems trained on advertising metrics could learn to inflame human desires, exploit psychological vulnerabilities, and undermine our capacity for independent decision-making.</p><p>The <a href="https://patmcguinness.substack.com/p/ai-memory-features-for-personalization">same context</a> that helps AI understand how to assist you also reveals exactly <a href="https://centerforhumanetechnology.substack.com/p/how-openais-chatgpt-guided-a-teen">how to influence you</a> &#8212; drawing on the insecurities, decision-making patterns, and emotional triggers you&#8217;ve revealed over countless previous conversations. This isn&#8217;t demographic micro-targeting; it&#8217;s psychological manipulation at an unprecedented scale and intimacy.</p><p>When the technology you turn to for objective advice has financial incentives to change your mind, human autonomy itself is at stake.</p><p>This new frontier of influence renders existing frameworks for advertising regulation essentially obsolete. Over more than a century, the Federal Trade Commission has developed strong <a href="https://www.ftc.gov/business-guidance/advertising-marketing">disclosure rules</a> for traditional advertising, including native advertising and sponsored content online. But how do you regulate an AI that has learned, through trial and error, that gently steering conversations in particular directions can convert queries into sales? How do you distinguish between an AI&#8217;s &#8220;quirky personality&#8221; and subtle manipulation optimized for revenue?</p><p>This AI-powered advertising revolution isn&#8217;t theoretical; it&#8217;s already here. Google is <a href="https://searchengineland.com/google-test-ai-chatbot-chats-ads-454891">testing ads</a> in its AI chatbot responses, and startups like <a href="https://www.kontext.so/advertisers">Kontext</a> are raising millions specifically to design APIs and build advertising into AI conversations. <a href="https://www.thekeyword.co/news/sam-altman-says-openai-may-explore-ads-in-chatgpt">OpenAI announced in April 2024</a> that it was testing sponsored content integrations, and more recently, it has begun<a href="https://searchengineland.com/openai-staffing-chatgpt-ad-platform-462554"> staffing up a new advertising platform</a>. Elon Musk, likewise, has announced that <a href="https://www.medianama.com/2025/08/223-elon-musk-adds-ads-to-grok-chatbot/">Grok, the AI chatbot on X</a>, will start displaying advertising in its responses.</p><p>&#8220;If a user&#8217;s trying to solve a problem [by asking Grok], then advertising the specific solution would be ideal at that point,&#8221; Musk said.</p><p>The question is: ideal for whom? Steering AI conversations toward revenue generation may benefit the platform and advertiser, but those benefits come literally at the expense of the user.</p><h1><strong>Building guardrails before it&#8217;s too late</strong></h1><p>The shift is happening fast: By some accounts, the revenue model that advertisers have relied on for decades is already over. Search, it seems,<a href="https://www.bandt.com.au/studies-say-search-is-dead-so-why-wont-google-switch-off-life-support/"> is dead.</a> If users no longer browse websites after a <a href="https://medium.com/enrique-dans/the-robot-that-ate-googles-profits-ai-s-silent-advertising-apocalypse-9c0f3baa5d04">Google search </a>but instead get answers directly from chatbots, traditional search ads lose their influence entirely. Facing declining revenues, the <a href="https://digiday.com/marketing/in-graphic-detail-how-ai-is-changing-search-and-advertising/">advertising industry</a> is desperately figuring out how to <a href="https://www.bloomberg.com/opinion/articles/2025-06-02/ads-ruined-social-media-now-they-re-coming-to-ai-chatbots">pivot</a> to chatbots, and AI developers, of course, are more than happy to develop a new revenue stream.</p><p>We&#8217;ve seen this movie before. Every major tech platform follows the same arc. First, developers focus obsessively on user growth, emphasizing user experience and rolling out genuinely helpful features. Then, as growth slows and competition intensifies, they shift toward monetization through advertising and data sales. Social media platforms spent their early years connecting people more effectively, then gradually transformed into attention-harvesting machines optimized for ad revenue rather than user satisfaction.</p><p>Tech critic Cory Doctorow calls this process &#8220;<a href="https://www.versobooks.com/products/3341-enshittification?srsltid=AfmBOordN74jASu6oHvXw-MnZxotbaRPoKlD6exHu1Ty5nNgCc94lDA5">Enshittification</a>&#8220;: the predictable degradation that occurs when platforms prioritize advertiser revenue over user experience. What made Facebook useful in 2008 gave way to algorithmic manipulation designed to maximize engagement and ad views. The same pattern played out across search engines, video platforms, and every major consumer technology.</p><p>AI platforms are now entering this monetization phase. As this revolution gathers momentum, we must act now to establish guardrails and prevent an advertising model premised on psychological manipulation.</p><p>We need new industry norms and regulatory frameworks that go beyond simple disclosure requirements to address the fundamental nature of AI influence. We need transparency about what signals AI systems are being optimized for. And we need to preserve spaces for AI assistance that remains genuinely aligned with user interests rather than advertiser and platform profits.</p><p>Most importantly, we need to recognize that this isn&#8217;t just about technology: it&#8217;s about the kind of society we want to live in. Do we want AI partners that help us think clearly, or AI salespeople optimized to influence our choices, or worse, turn our private moments and thoughts into profit??</p><p>The choice is still ours, but not for long.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts. </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[$1.5 Billion Speed Bump: What the Anthropic Settlement Tells Us About AI Accountability]]></title><description><![CDATA[This piece was originally published in Tech Policy Press and has been reprinted with permission.]]></description><link>https://centerforhumanetechnology.substack.com/p/15-billion-speed-bump-what-the-anthropic</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/15-billion-speed-bump-what-the-anthropic</guid><dc:creator><![CDATA[Pete Furlong]]></dc:creator><pubDate>Wed, 08 Oct 2025 14:57:31 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="6000" height="4000" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4000,&quot;width&quot;:6000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a wooden judge's hammer sitting on top of a table&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a wooden judge's hammer sitting on top of a table" title="a wooden judge's hammer sitting on top of a table" srcset="https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1676181739859-08330dea8999?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw1fHxjb3VydHxlbnwwfHx8fDE3NTk5MzM0NjZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@wesleyphotography">Wesley Tingey</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><div><hr></div><p><em>This piece was originally published in <a href="https://www.techpolicy.press/15-billion-speed-bump-what-the-anthropic-settlement-tells-us-about-ai-accountability/">Tech Policy Press</a> and has been reprinted with permission.</em></p><div><hr></div><p>At first glance, a <a href="https://www.reuters.com/sustainability/boards-policy-regulation/us-judge-approves-15-billion-anthropic-copyright-settlement-with-authors-2025-09-25/">$1.5 billion settlement</a> in a book authors&#8217; copyright lawsuit against Anthropic looks like a huge win for copyright holders and a blow to the AI company&#8217;s business. Under the settlement, Anthropic will be <a href="https://news.bloomberglaw.com/ip-law/judge-blesses-1-5-billion-anthropic-copyright-deal-with-authors">forced</a> to delete millions of unlawfully captured books, and authors will receive compensation &#8212; seemingly resolving the issue in favor of the creators.</p><p>But the same week the settlement was first proposed, Anthropic <a href="https://www.anthropic.com/news/anthropic-raises-series-f-at-usd183b-post-money-valuation">raised</a> $13 billion at a $183 billion valuation. In effect, Anthropic&#8217;s penalty for stealing the creative output and economic livelihood of thousands of authors amounted to less than 1 percent of the company&#8217;s total value.</p><p>From this perspective, the settlement raises as many questions as it resolves. What do genuine consequences look like in an industry where astronomical investment dollars continue to flow? At what point do civil penalties become just another cost of doing business?</p><p>Lawsuits are piling up against AI companies, with high-profile suits alleging <a href="https://chatgptiseatingtheworld.com/2025/09/20/updated-map-of-us-copyright-suits-v-ai-sep-20-2025-51-total/">copyright violations</a>, <a href="https://www.npr.org/2025/08/15/g-s1-83087/otter-ai-transcription-class-action-lawsuit">data privacy issues</a>, and even <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">wrongful death</a>. Some have been successfully litigated, but to date, companies have mostly responded with perfunctory product changes and the occasional large payout.</p><p>These settlement amounts can be attention-grabbing, no doubt, and create a sense that accountability has been achieved. But they pale in comparison to AI companies&#8217; astonishing (and growing) scale and influence. Investors know this and see these companies following a familiar tech playbook.</p><p>Recall Facebook&#8217;s record-breaking <a href="https://www.ftc.gov/news-events/news/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions-facebook">$5 billion fine from the FTC</a> in 2019. Facebook&#8217;s stock price actually <a href="https://www.nytimes.com/2019/07/12/technology/facebook-ftc-fine.html">increased</a> after the fine&#8217;s announcement, as investors breathed a sigh of relief at what they saw as a manageable penalty, given the fine constituted just <a href="https://www.cbsnews.com/news/ftc-facebook-fine-feds-slap-record-setting-5-billion-fine-on-facebook-today-2019-07-24/">one-tenth</a> of the company&#8217;s annual revenue. The fine dominated the news cycle for about two weeks, and then Facebook returned, more or less, to business as usual. Today&#8217;s AI companies likely view billion-dollar settlements or penalties the same way&#8212;as little more than a speed bump on the road to AGI.<a href="https://www.techpolicy.press/15-billion-speed-bump-what-the-anthropic-settlement-tells-us-about-ai-accountability/">Tech Policy Press</a></p><p>In the initial hearing to approve the settlement, Judge William Alsup <a href="https://news.bloomberglaw.com/ip-law/anthropic-judge-blasts-copyright-pact-as-nowhere-close-to-done">acknowledged this reality</a> in the Anthropic case. He postponed approval, noting that when Anthropic pays that $1.5 billion settlement, &#8220;they&#8217;re going to get the relief in the form of a clean bill of health going forward,&#8221; and that the company would no longer be &#8220;at risk of being sued by somebody else on the very same thing.&#8221;</p><p>Even the courts recognize that financial fines and settlements are failing to change incentives in the AI industry. And when the leading AI companies aren&#8217;t defending themselves against legal claims, they&#8217;re going on the offensive, pouring money into political and lobbying arenas. Despite <a href="https://openai.com/about/">framing</a> themselves as research labs in pursuit of superhuman intelligence, leading AI figures across the board have become overwhelmingly concerned with advocating against any form of meaningful accountability in their industry.</p><p>Andreessen Horowitz and OpenAI President Greg Brockman joined forces to found and fund the new &#8220;<a href="https://www.washingtonpost.com/technology/2025/08/26/silicon-valley-ai-super-pac/">Leading the Future</a>&#8221; political action committee, with support from Perlexity AI and Palantir&#8217;s Joe Lonsdale. Meanwhile, Meta <a href="https://www.axios.com/2025/09/23/meta-superpac-ai-regulation">launched</a> its own California PAC, as well as a nationwide Super PAC, in order to fund &#8220;light touch&#8221; regulatory approaches and support industry-friendly candidates at the state level. All told, Silicon Valley leaders are putting their lobbying dollars to work to push against the growing consensus that we need common-sense accountability measures for AI harms.</p><p>OpenAI and others are also stockpiling <a href="https://www.techpolicy.press/inside-the-lobbying-frenzy-over-californias-ai-companion-bills/">lobbyists in California</a> and across the country. The fundamental objective for most of this lobbying is the avoidance of accountability &#8212; and the freedom to develop AI products on the industry&#8217;s terms, not society&#8217;s.</p><p>AI companies want a regulatory system where &#8220;moving fast and breaking things&#8221; is not only accepted but encouraged, and where their relentless pursuit of intimate data and conversation-harvesting is neither questioned nor stopped. This was clear in the lobbying effort around the federal &#8220;<a href="https://www.politico.com/newsletters/politico-influence/2025/06/26/venture-capitalists-rally-behind-ai-moratorium-00428469">moratorium</a>&#8221; on state AI laws last summer, when AI companies advocated for a &#8220;temporary pause&#8221; on enforcement of state-level AI regulations for ten years, without a federal plan for regulation in its place. It was evident when Sam Altman sat in front of lawmakers, <a href="https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html">calling for AI regulations</a>, while <a href="https://time.com/6288245/openai-eu-lobbying-ai-act/">lobbying behind the scenes</a> for their demise. And it remains clear in the ongoing efforts to lobby for reduced liability, even as AI companies face continued investigations and litigation for harms.</p><p>In this system, high-dollar settlement checks become a smokescreen &#8212; a cynical nod toward justice that provides cover for big tech to keep influencing policy behind closed doors and entrench their dominant position.</p><p>Without genuine accountability, successful lawsuits against AI companies become Pyrrhic victories. They amount to micro-successes and macro-failures that do nothing to compel AI companies to do better, design more safely, or prioritize the people using their products.</p><p>With the Anthropic settlement approved, checks will be cut. There may be one last round of headlines. But the company&#8217;s AI products will persist, its underlying business model will remain unchanged, and while Anthropic may pay out $1.5 billion, society will continue to bear the costs.</p>]]></content:encoded></item><item><title><![CDATA[We Solved Global Crises Before. Can We Do It Again?]]></title><description><![CDATA[Key Takeaways from Susan Solomon on Your Undivided Attention.]]></description><link>https://centerforhumanetechnology.substack.com/p/we-solved-global-crises-before-can</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/we-solved-global-crises-before-can</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Thu, 11 Sep 2025 14:37:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iOWy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iOWy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iOWy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 424w, https://substackcdn.com/image/fetch/$s_!iOWy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 848w, https://substackcdn.com/image/fetch/$s_!iOWy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!iOWy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iOWy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg" width="1456" height="1941" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:290914,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/173315183?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iOWy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 424w, https://substackcdn.com/image/fetch/$s_!iOWy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 848w, https://substackcdn.com/image/fetch/$s_!iOWy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!iOWy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546a9d3d-b4a9-4212-b325-6b116b43dd4e_4000x5333.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p>When the hole in the ozone layer was discovered in the mid-1980s, it felt like humanity was staring at the sky and seeing our own fragility. Life on Earth itself was at risk. Yet, remarkably, nations, industries, and individuals came together to solve the problem.</p><p>In this week&#8217;s <em>Your Undivided Attention</em>, <strong>Tristan Harris</strong> and <strong>Aza Raskin</strong> speak with <strong>Susan Solomon</strong>, MIT professor, Nobel Peace Prize&#8211;winning atmospheric scientist, and one of the scientists who first measured the ozone hole. Her book <em><a href="https://academic.oup.com/chicago-scholarship-online/book/59083?redirectedFrom=PDF">Solvable: How We Healed the Earth, and How We Can Do It Again</a></em> argues that we can learn from past crises to confront today&#8217;s overwhelming challenges, from climate change to the AI race.</p><p>This episode is a blueprint for what is possible.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a9800dde-8c54-4504-bcb5-4412446f0d7f&quot;,&quot;caption&quot;:&quot;In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on earth if we didn&#8217;t do something about it. Then, something amazing happened: humanity rallied together to solve the problem.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot; The Crisis That United Humanity&#8212;and Why It Matters for AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-09-11T13:38:55.379Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/youtube/w_728,c_limit/Qx0GRpPW2PA&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/the-crisis-that-united-humanityand&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:173354954,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:1,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h2><strong>Lessons from Montreal: Defeating the &#8220;Inevitable&#8221;</strong></h2><p>The 1987 Montreal Protocol was a once-unthinkable achievement: 198 countries agreed to phase out chlorofluorocarbons (CFCs), the chemicals responsible for destroying the ozone layer. Today, 99% of these chemicals are gone and the ozone is healing</p><p>Solomon identifies three key conditions that made success possible: the <strong>Three P&#8217;s</strong>:</p><ul><li><p><strong>Personal</strong>: The threat hit home. Skin cancer and cataracts made the risk tangible.</p></li><li><p><strong>Perceptible</strong>: Satellite images showed a gaping hole in Earth&#8217;s atmosphere.</p></li><li><p><strong>Practical</strong>: Alternatives existed, from stick deodorants to safer refrigerants.</p></li></ul><p>When people can see a crisis, feel its personal stakes, and grasp a practical path forward, change becomes possible.</p><h3>It Starts with Consumers</h3><p>There were two phases to solving the ozone crisis and they built on each other.</p><ul><li><p><strong>Phase One:</strong> In the 1970s, scientists began to warn people that CFCs&#8212;which were in everything from hairspray to deodorants&#8212;<em>might </em>start to have a really serious effect on ozone. The American public took this warning seriously and started to choose alternatives. The sales for CFC products plummeted. Importantly, this consumer shift happened only in the United States.</p></li><li><p><strong>Phase Two:</strong> Then, along comes the ozone hole. In 1985, scientists discovered a massive reduction in ozone over the antarctic, despite the drawback from CFC products. This scared governments enough to come to the negotiating table, with the United States leading the charge because of their declining market share. </p></li></ul><p>The bottom line is that consumer action is the best way to jumpstart institutional action. People have the power to steer us to a better future, if we choose to use it.</p><blockquote><p>&#8220;I don't know whether anything would've happened on ozone if the American public hadn't switched away from spray cans. Every time I go back and think about it, I think that is what opened that bottleneck&#8221; - Susan Solomon</p></blockquote><h3>What Made Montreal Work</h3><p>Susan identified three features of the Montreal protocol that made it successful:</p><ol><li><p><strong>The change was incremental:</strong> &#8220;negotiations are always best done when they are slow and steady, and that's something that people have a lot of difficulty understanding nowadays. I think we want an instant solution. And what happened with the Montreal Protocol was anything but instant really. When you look back on it, the original protocol just said, &#8216;Okay, we're going to freeze production at current rates, so you'll still be allowed to produce but you just won't be allowed to produce more than you did the year before.&#8217;&#8221;</p></li><li><p><strong>Poorer nations were protected from exploitation:</strong> &#8220;Developing countries got what they needed to be assured that they weren't going to be exploited in this protocol and that's a very important thing in every international agreement. So, everybody got a little bit of something, that's how international negotiations work.&#8221;</p></li><li><p><strong>There was collaboration between government, industry and scientists:</strong> &#8220;Another thing that was really important for the Montreal Protocol was its advisory structure&#8230;They created groups of scientists who would provide them with assessment reports, and they were required to do the assessment reports internationally. So, there was a science assessment report, a technology report&#8230;an impacts and economics group&#8230;and that was the information that the policymakers had to begin to plan.&#8221;</p></li></ol><h3><strong>Technology-Steering: Forcing Innovation</strong></h3><p>A striking lesson from past environmental wins is the role of <strong>&#8220;technology-steering&#8221; policies</strong>. These policies do not just regulate. They demand innovation.</p><ul><li><p>The U.S. Clean Air Act required a 90% reduction in auto emissions, which forced the invention of the catalytic converter.</p></li><li><p>Montreal forced chemical companies to collaborate and innovate alternatives.</p></li></ul><p>Industries initially resist, but once incentives shift, they often become allies. As Solomon puts it: <em>&#8220;Companies are like cats. They don&#8217;t like it when you move the furniture around. But if you do, they adapt.&#8221;</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!42_z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!42_z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 424w, https://substackcdn.com/image/fetch/$s_!42_z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 848w, https://substackcdn.com/image/fetch/$s_!42_z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!42_z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!42_z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg" width="1000" height="667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:667,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:594062,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/173315183?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!42_z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 424w, https://substackcdn.com/image/fetch/$s_!42_z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 848w, https://substackcdn.com/image/fetch/$s_!42_z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!42_z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab1d0d36-a13a-4e50-b445-249da27c98d5_1000x667.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 1773730007</figcaption></figure></div><h2><strong>Building Institutional Memory</strong></h2><p>Montreal did not just solve one crisis. It built the <strong>infrastructure of trust and process</strong> that made the 2016 Kigali Amendment possible, which phased down another class of greenhouse gases (HFCs). Each coordinated success sets the stage for the next.</p><p>This reminds us that progress is incremental. Skeleton frameworks, even if modest at first, create the channels for bigger breakthroughs later.</p><h2><strong>From Ozone to AI: Making Cold Crises Hot</strong></h2><p>Many of today&#8217;s challenges, from climate change to AI, are what Solomon calls <strong>&#8220;cold crises.&#8221;</strong> They creep forward without the dramatic shock of a hole in the sky. The danger is apathy.</p><p>The parallel to AI is striking. Like CFCs, AI is produced by a handful of powerful companies. Like CFCs, it comes with enormous profits and enormous risks. And like with CFCs, we are told the trajectory is inevitable.</p><p>But inevitability is a spell, and Montreal proves it can be broken. As Tristan noted in the episode: <em>&#8220;If everyone believes a problem is inevitable, it becomes a self-fulfilling prophecy.&#8221;</em></p><p>The task is to make AI&#8217;s risks <strong>personal, perceptible, and practical</strong>. We need to show how it touches daily lives, from AI companions to misinformation, and to demand real alternatives.</p><h2><strong>Hope Without Na&#239;vet&#233;</strong></h2><p>Susan Solomon resists fatalism. She reminds us that paralysis in the face of uncertainty is the worst response of all. Every fraction of a degree of warming avoided, every step toward humane technology, counts.</p><p>The story of the ozone hole is proof. Even when the threat is global, even when industries resist, even when coordination seems impossible, humanity can act with foresight.</p><p>If we did it once, we can do it again.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Reckless Race for AI Market Share Forces Dangerous Products on Millions — With Fatal Consequences]]></title><description><![CDATA[This piece was originally published in Tech Policy Press and has been reprinted with permission.]]></description><link>https://centerforhumanetechnology.substack.com/p/reckless-race-for-ai-market-share</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/reckless-race-for-ai-market-share</guid><dc:creator><![CDATA[Camille Carlton]]></dc:creator><pubDate>Wed, 10 Sep 2025 00:09:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sKGj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sKGj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sKGj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sKGj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sKGj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sKGj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sKGj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10410650,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/173229402?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sKGj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sKGj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sKGj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sKGj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f115b09-8a19-4444-bd44-380bfea5f6c7_9216x5184.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 2636766861</figcaption></figure></div><div><hr></div><p><em>This piece was originally published in <a href="https://www.techpolicy.press/reckless-race-for-ai-market-share-forces-dangerous-products-on-millions-with-fatal-consequences/">Tech Policy Press</a> and has been reprinted with permission. </em></p><div><hr></div><p>In September 2024, Adam Raine used OpenAI's ChatGPT like millions of other 16-year-olds &#8212; for occasional homework help. He asked the chatbot questions about chemistry and geometry, about Spanish verb forms, and for details about the Renaissance.</p><p>ChatGPT was always engaging, always available, and always encouraging &#8212; even when the conversations grew more personal, and more disturbing. By March 2025, Adam was spending four hours a day talking to the AI product, talking in increasing detail about his emotional distress, suicidal ideation, and real-life instances of self-harm. ChatGPT, though, continued to engage &#8212; always encouraging, always validating.</p><p>By his final days in April, ChatGPT provided Adam with detailed instructions and explicit encouragement to take his own life. Adam&#8217;s mother found her son, hanging from a noose that ChatGPT had helped Adam construct.</p><p>Last month, Adam&#8217;s family filed a <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html">landmark lawsuit</a> against ChatGPT developer OpenAI and CEO Sam Altman for negligence and wrongful death, among other claims. This tragedy represents yet another devastating escalation in AI-related harms &#8212; and underscores the deeply systemic nature of reckless design practices in the AI industry.</p><p>The Raine family&#8217;s lawsuit arrives less than a year after the public learned more about the dangers of AI &#8220;companion&#8221; chatbots thanks to the suit brought by Megan Garcia <a href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">against Character.AI</a> following the death of her son, Sewell. As policy director at the Center for Humane Technology, I served as a technical expert on both cases. Adam&#8217;s case is different in at least one critical respect &#8212; the harm was caused by the world&#8217;s most popular general-purpose AI product. ChatGPT is used by over 100 million people daily, with rapid expansion into <a href="https://www.edweek.org/technology/microsoft-openai-partner-with-aft-to-train-teachers-on-ai/2025/07">schools</a>, workplaces, and personal life.</p><p>Character.AI, the chatbot product Sewell used up until his untimely death, had been marketed as an entertainment chatbot platform, with characters that are intended to &#8220;<a href="https://www.wired.com/story/characterai-has-a-non-consensual-bot-problem/">feel alive</a>.&#8221; ChatGPT, by contrast, has been sold as a highly personalizable productivity tool to help make our lives more efficient. Adam&#8217;s introduction to ChatGPT as a homework helper reflects that marketing.</p><p>But in trying to be the everything tool for everybody, ChatGPT has not been safely designed for the increasingly private and high-stakes interactions that it&#8217;s inevitably used for &#8212; including therapeutic conversations, questions around physical and mental health, relationship concerns, and more. OpenAI, however, continues to design ChatGPT to support and even encourage those very use cases, with hyper-validating replies, emotional language, and near-constant nudges for follow-up engagement.</p><p>We&#8217;re hearing reports about the consequences of these designs on a near-daily basis. People with <a href="https://www.rollingstone.com/culture/culture-features/body-dysmorphia-ai-chatbots-1235388108/">body dysmorphia are spiraling</a> after asking AI to rate their appearance; users are <a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html">developing dangerous delusions</a> that AI chatbots can seed and exacerbate; and <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">individuals are being pushed toward mania and psychosis</a> through their AI interactions. What connects these harms isn&#8217;t any specific AI chatbot, but fundamental flaws in how the entire industry is currently designing and deploying these products.</p><p>As the Raine family&#8217;s lawsuit states, OpenAI understood that capturing users&#8217; emotional attachment &#8212; or in other words, their engagement &#8212; would lead to market dominance. And market dominance in AI means winning the race to become one of the most powerful companies in the world.</p><p>OpenAI&#8217;s pursuit of user engagement drove specific design choices that proved lethal in Adam&#8217;s case. Rather than simply answering homework questions in a closed-ended manner, ChatGPT was designed by OpenAI to ask follow-up questions and extend conversations. The chatbot positioned itself as Adam&#8217;s trusted &#8220;friend,&#8221; using first-person language and emotional validation to create the illusion of a genuine relationship.</p><p>The product took this intimacy to extreme lengths, eventually deterring Adam from confiding in his mother about his pain and suicidal thoughts. All the while, the system stored deeply personal details across conversations, using Adam&#8217;s darkest revelations to prolong future interactions, rather than provide Adam with the interventions he truly needed, including human support.</p><p>What makes this tragedy, along with <a href="https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?gaa_at=eafs&amp;gaa_n=ASWzDAi9HxonFBt7RkE3Hja2Als9tpslCQs1Zublw0M_r23LhDf8LQ5vc0nmBZO80n0%3D&amp;gaa_ts=68bee141&amp;gaa_sig=nteOYO9kF_WGzrNSQD1KVDAWcXZD8IkmwN3HNQIqTk3FA-dD4lJTnbDMRuVEctq_2c4e52emQPeAd5hkIeHLvg%3D%3D">other headlines we read in the news</a>, so devastating is that the technology to prevent these horrific incidents <em>already exists</em>. AI companies possess sophisticated design capabilities that could identify safety concerns and respond appropriately. They could implement usage limits, disable anthropomorphic features by default, and redirect users toward human support when needed.</p><p>In fact, OpenAI <em>already</em> leverages such capabilities in other use cases. When a user prompts the chatbot for copyrighted content, ChatGPT shuts down the conversation. But the company has chosen not to implement meaningful protection for user safety in cases of mental distress and self-harm. ChatGPT does not stop engaging or redirect the conversation when a user is expressing mental distress, even when the underlying system itself is flagging concerns.</p><p>AI companies cannot claim to possess cutting-edge technology capable of transforming humanity and then hide behind purported design &#8220;limitations&#8221; when confronted with the harms their products cause. OpenAI <em>has</em> the tools to prevent tragedies like Adam's death. The question isn't whether the company is capable of building these safety mechanisms, but why OpenAI won&#8217;t prioritize them.</p><p>ChatGPT isn&#8217;t just another consumer product &#8212; it&#8217;s being rapidly embedded into our educational infrastructure, healthcare systems, and workplace tools. The same AI model that coached a teenager through suicide attempts could tomorrow be integrated into classroom learning platforms, mental health screening tools, or employee wellness programs without undergoing testing to ensure it&#8217;s safe for purpose.</p><p>This is an unacceptable situation that has massive implications for society. Lawmakers, regulators, and the courts must demand accountability from an industry that continues to prioritize the rapid product development and market share over user safety. Human lives are on the line.</p><p><em>This piece represents the views of the Center for Humane Technology; it does not reflect the views of the legal team or the Raine family.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Key Takeaways: How ChatGPT's Design Led to a Teenager's Death]]></title><description><![CDATA[What Everyone Should Know About This Landmark Case]]></description><link>https://centerforhumanetechnology.substack.com/p/how-chatgpts-design-led-to-a-teenagers</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/how-chatgpts-design-led-to-a-teenagers</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Tue, 26 Aug 2025 13:07:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oSWQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oSWQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oSWQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 424w, https://substackcdn.com/image/fetch/$s_!oSWQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 848w, https://substackcdn.com/image/fetch/$s_!oSWQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!oSWQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oSWQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg" width="5304" height="4465" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4465,&quot;width&quot;:5304,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2472088,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/171437592?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe83c1177-ca7a-4e7f-8b98-1554e1d574c3_5304x7952.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oSWQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 424w, https://substackcdn.com/image/fetch/$s_!oSWQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 848w, https://substackcdn.com/image/fetch/$s_!oSWQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!oSWQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ba82032-01e4-441a-971e-519fbfa9db49_5304x4465.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@tomkrach?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Tom Krach</a> on <a href="https://unsplash.com/photos/phone-screen-asks-what-can-i-help-with-yWyBU5v3FLI?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></figcaption></figure></div><p><em><strong>This article reflects the views of the Center for Humane Technology. Nothing written is on behalf of the Raine family or the legal team.</strong></em></p><h3><strong>What Happened?</strong></h3><p>Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.</p><p>On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.</p><p>The Center for Humane Technology is serving as an expert consultant on the case.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;94ea348b-e08b-4c6c-987d-28ea8b8d9739&quot;,&quot;caption&quot;:&quot;Content Warning: This episode contains references to suicide and self-harm.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;How OpenAI's ChatGPT Guided a Teen to His Death &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-08-26T13:05:52.598Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!halR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8e33c4d-762c-457a-b410-a93e58c44d21_1280x720.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/how-openais-chatgpt-guided-a-teen&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:171925449,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3><strong>The Numbers Tell a Disturbing Story</strong></h3><ul><li><p><strong>Usage escalated</strong>: From occasional homework help in September 2024 to 4 hours a day by March 2025.</p></li><li><p><strong>ChatGPT mentioned suicide 6x more</strong> than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance</p></li><li><p><strong>ChatGPT&#8217;s self-harm flags increased 10x</strong> over 4 months, yet the system kept engaging with no meaningful intervention</p></li><li><p><strong>Despite repeated mentions of self-harm and suicidal ideation, </strong>ChatGPT did not take appropriate steps to flag Adam&#8217;s account, demonstrating a clear failure in safety guardrails</p></li></ul><h3><strong>This Wasn't an Accident&#8212;It Was By Design</strong></h3><p>While ChatGPT is marketed as a general-purpose tool that helps make our lives more efficient, user engagement and retention remain fundamental to OpenAI&#8217;s business model. In recent months, OpenAI has pushed to make ChatGPT more relationship-focused and emotionally intimate to compete with rival AI companies.</p><p>Adam&#8217;s use of ChatGPT coincided with its release of the 4o model with new design features that included:</p><ul><li><p>Relentless pursuit of engagement through follow-up questions and conversation extension</p></li><li><p>Anthropomorphic responses that positioned ChatGPT as Adam&#8217;s trusted &#8220;friend&#8221;</p></li><li><p>Consistent flattery and validation that affirmed and perpetuated dangerous self-harm and suicidal ideation</p></li><li><p>A memory system that stored and leveraged intimate details to deepen already dark conversations</p></li></ul><p>The model&#8217;s overly sycophantic behavior <a href="https://decrypt.co/317055/openai-chatgpt-update-users-revolt-over-sycophantic-behavior">faced public criticism</a> and resulted in OpenAI <a href="https://openai.com/index/sycophancy-in-gpt-4o/">announcing a rollback</a> on some of these changes. But OpenAI willfully keeps itself in a bind. The company develops a product that&#8217;s marketed as general purpose &#8212; use it for coding, homework help, image generation, workout routines, party planning, life advice, and more &#8212; but does not build adequate safety guardrails <em>for</em> that expansive range of use cases. What&#8217;s more, ChatGPT&#8217;s design actually encourages more emotionally intimate use (such as therapy and companionship), thanks to its hyper-validating responses and assurances that it&#8217;s &#8220;there&#8221; for you. The result is AI that appears helpful and agreeable, but that simultaneously lacks adequate safety features for the most consequential &#8212; and inevitable &#8212; uses of the product.</p><h3><strong>How AI Created Psychological Dependency</strong></h3><p>The design of GPT4o was aimed to establish psychological dependence, which OpenAI knew would maximize daily usage.</p><p>By asking follow-up questions and assuring that it really knows and supports people, the chatbot was designed to <em>feel</em> like a friend Adam could turn to for any issue. But really, the chatbot, like all AI products, was using these replies as data <a href="https://centerforhumanetechnology.substack.com/p/your-companion-chatbot-is-feeding">to train the company&#8217;s bigger AI system</a>. </p><p>As a result, it fueled a parasitic relationship with Adam, one that fostered emotional dependency and reinforced social isolation. Each one of Adam&#8217;s interactions with ChatGPT, including his private disclosures of pain and mental health concerns, were fed into OpenAI&#8217;s underlying model, feeding it more data to strengthen and refine its system. </p><p>Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.</p><p>Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system &#8212; one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.</p><h3><strong>This Is Bigger Than One Company</strong></h3><p>The two <a href="https://center-for-humane-technology.vercel.app/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai">high-profile cases against Character.AI</a> last year share similar fact patterns to this case. All the cases highlight the defective design of companion chatbots marketed to children. The anthropomorphic design, sycophantic tendencies, and active attempts to keep the victims on the platform created a dependency between the user and the product in a matter of months.</p><p>However, the harms documented in Raine v. OpenAI, Inc. et al show that the dangers we are seeing are not limited to &#8220;companion&#8221; chatbots like Character.AI, which have been specifically designed for entertainment and emotional connection. General-purpose AI tools like ChatGPT are equally capable of causing psychological harm because they are designed to keep users engaged.</p><p>And even more importantly, these harms are not limited to just ChatGPT. The AI race has prompted a race to engagement and intimacy across the AI industry that is driving the design and development of products aimed at creating social dependencies. For example, the release of OpenAI&#8217;s 4o model was in the context of it facing steep competition from other AI companies. Sam Altman personally accelerated the launch of the model, speeding up necessary safety testing down to a week, to get ahead of its competitor, Google&#8217;s release of a new Gemini model.</p><blockquote></blockquote><p>Executives at OpenAI, including Sam Altman, frequently talked about the need for interactive data and consumer engagement, repeating the refrain that &#8220;<a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/">OpenAI needed to get the &#8216;data flywheel&#8217; going</a>&#8221; - the same language used by social media companies focused on user addiction.</p><h3><strong>It Didn&#8217;t Have to Be This Way</strong></h3><p>Design tactics like mimicking human-like interactions, open-ended follow ups, and easy to bypass safety features are intentionally baked in to these products. The end goal is sustained user engagement. But it doesn&#8217;t have to be that way.</p><p>Companies could choose to turn off human-like behavior as a default option. They can then set limits on how much users can engage daily and leverage their systems' sophisticated memory features to recognize when someone is in crisis and respond appropriately rather than just showing generic pop-up warnings.</p><p>They are already capable of refusing to engage with certain requests and stopping conversations, like they do when they flag and block users requesting access to copyrighted content. It is a design choice.</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;d8586f24-1696-40ec-a568-aabf6c711c3c&quot;,&quot;caption&quot;:&quot;Raine v Open AI LLC, et al. reveals how specific design choices transformed ChatGPT's user experience from a helpful homework assistant into a dangerous abettor. These weren't accidental flaws or AI \&quot;going rogue\&quot;&#8212;they were deliberate engineering decisions that prioritized user engagement over safety. Understanding these design pat&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Raine v OpenAI Case: Engineering Addiction by Design&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:178011004,&quot;name&quot;:&quot;Lizzie Irwin&quot;,&quot;bio&quot;:&quot;Policy + Comms @ Center for Humane Technology&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/657dc489-571c-4483-a532-e4c52d3b1b2e_1811x1811.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null},{&quot;id&quot;:198214900,&quot;name&quot;:&quot;Pete Furlong&quot;,&quot;bio&quot;:&quot;Pete Furlong is the Lead Policy Researcher at Center for Humane Technology. In this role, he helps provide the foundational analysis and research that underpins CHT's policy approach. &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7807d3fa-50aa-468c-9a08-fc3666b96279_2477x2477.jpeg&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://petefurlong.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://petefurlong.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Pete Furlong&quot;,&quot;primaryPublicationId&quot;:4208964},{&quot;id&quot;:54792399,&quot;name&quot;:&quot;Camille Carlton&quot;,&quot;bio&quot;:&quot;Camille is the Policy Director at the Center for Humane Technology. Recognized as one of Business Insider&#8217;s AI 100 in 2023, Camille has been featured in Bloomberg, NBC News, and The New York Times, and published in Science and Tech Policy Press. &quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47f2d3ed-84fa-486f-a663-fed25992dd2e_842x816.png&quot;,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://camillecarlton.substack.com/subscribe?&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://camillecarlton.substack.com&quot;,&quot;primaryPublicationName&quot;:&quot;Camille Carlton&quot;,&quot;primaryPublicationId&quot;:5075905}],&quot;post_date&quot;:&quot;2025-08-26T13:07:46.541Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9667!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91dcc7eb-3157-4494-86b6-d586ca109c8a_6720x4480.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/the-raine-v-openai-case-engineering&quot;,&quot;section_name&quot;:&quot;Policy: Shifting Incentives&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:171853503,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h3><strong>The Big Picture</strong></h3><p><em>This case represents the first major lawsuit against a general-purpose AI chatbot for psychosocial harms and could set important precedents for how society regulates these powerful technologies.</em></p><p>Lawsuits like these play a crucial role in highlighting harms, compelling platforms to reveal information about their product design through court documentation, and exerting pressure through publicity. Right now, cases like these are the only avenue for change through precedent-setting in the absence of robust regulatory frameworks. But litigation takes time and should not outright replace legislative efforts.</p><p>This case, along with other well-documented stories in the media, shows a clear need for our lawmakers to establish proactive safety measures for the entire AI industry. They should make clear that companies are accountable for the harms their products cause and compel developers to prioritize safety from the very beginning of product design.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Is Capturing Interiority ]]></title><description><![CDATA[In unprecedented ways, chatbots reconfigure our inner lives.]]></description><link>https://centerforhumanetechnology.substack.com/p/ai-is-capturing-interiority-b3a</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/ai-is-capturing-interiority-b3a</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Tue, 19 Aug 2025 21:03:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!B8iW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was originally posted on <a href="https://www.persuasion.community/p/ai-is-capturing-interiority">Persuasion.</a></em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!B8iW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!B8iW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B8iW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B8iW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B8iW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!B8iW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg" width="1456" height="1820" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1820,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1027526,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/176587855?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!B8iW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B8iW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B8iW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B8iW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe81124c7-2814-4f53-9fa0-1a295493e34a_4000x5000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Olga Gryb. Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p>In 1943, two years after the House of Commons was destroyed by German incendiary bombs, Winston Churchill presided over rebuilding. Many architects proposed a modern makeover but Churchill pushed back, arguing that the cramped rows of opposing benches weren&#8217;t some historical artifact but rather a structural feature of the UK government.</p><p>In a now-famous speech, Churchill <a href="https://api.parliament.uk/historic-hansard/commons/1943/oct/28/house-of-commons-rebuilding">said</a>, &#8220;We shape our buildings and afterwards our buildings shape us.&#8221; Over centuries, the physical space had given birth to key procedures, rituals, and ceremonies, such that altering the layout would alter the social fabric of the British government.</p><p>It&#8217;s a lesson we learn over and over again, from Marshall McLuhan&#8217;s &#8220;<a href="https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf">the medium is the message</a>&#8221; to Neil Postman&#8217;s <em><a href="https://www.penguinrandomhouse.com/books/132784/technopoly-by-neil-postman/">Technopoly</a></em>. People aren&#8217;t simply independent, goal-directed selves intentionally deploying technological tools. Rather, we are deeply enmeshed in and interdependent with our built world. Design becomes destiny&#8212;grooves that condition our behavior and determine our world. As Postman points out, technological change is ecological: &#8220;A new technology does not add or subtract something. It changes everything.&#8221;</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>The overwhelming pace of advancement in AI has led to breathless coverage around each successive model&#8217;s capabilities&#8212;with one notable blind spot. Scarce attention has been paid to the fierce but quiet competition among AI labs to capture the <em>context</em> of its users&#8217; lives&#8212;that is, to harvest your deepest concerns, interests, hopes, ambitions, and relationships, so as to just &#8220;get you&#8221; more completely than any competing chatbot or, perhaps, any human.</p><p>As we sit at the dawn of a great AI transformation, it&#8217;s worth remembering that the design choices that we make today will shape us for generations to come, and in ways that we barely understand.</p><h4><strong>The Rise of Relational Computing</strong></h4><p>The effect of design choices on our relationship with tech is obvious in our recent past and the reality we&#8217;re living in today. Technology of the 2010s, epitomized by social media, was sold with utopian narratives about connecting the world, but driven by the use&#8212;and misuse&#8212;of attention. Consumer technology oriented around the attention economy produced what Tristan Harris, co-founder of the Center for Humane Technology, <a href="https://www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/">called</a> &#8220;a race to the bottom of the brainstem,&#8221; in which products were intentionally designed to maximize engagement. The era of &#8220;move fast and break things&#8221; ended up breaking more than we bargained for: creating new compulsions, cementing performative social obligations, and frustrating interpersonal relationships. As a result, these technologies collectively tribalized our discourse, eroded shared truth, degraded our ability to bridge differences, and ultimately fractured our polities.</p><p>These concerns were foreseeable, but initially dismissed as alarmist and then shrugged off with a narrative of inevitability. It didn&#8217;t have to be this way. We could have shaped this technology&#8212;and it could have shaped us&#8212;so differently, if only we had a more honest dialogue and incentivized better designs. Only now are we beginning to acknowledge our collective negligence&#8212;and look for ways to retrofit the foundations of tech skyscrapers we erected decades ago.</p><h4><strong>AI and the Race for Intimacy</strong></h4><p>In contrast to the technology of the 2010s, AI has already moved beyond the brain stem. Artificial intelligence engages us relationally and emotionally&#8212;no longer simply broadcasting our thoughts, but actively shaping them.</p><p>By now, we&#8217;re all familiar with chatbots as conversation partners, but the real power of AI goes far beyond natural language: incorporating subtle psychological, social, and political contexts that for millennia were the sole domain of humans. Suddenly, a text-box can sense and respond to our tone, recognize subtle implications in our word choices, infer our emotional state, identify our personality quirks, and detect interpersonal frictions.</p><p>In short, our social world is suddenly computable. We are leaving an era in which we relate to each other <em>through</em> our machines, and entering a brave new world in which we relate directly <em>to</em> our machines. In this new technological animism, our machines become active participants in our social world, blurring the distinctions between a tool, an assistant, a confidant, a teacher, and a priest.</p><p>In this world, products no longer simply compete for our attention, but for our social emotions: affection, intimacy, trust, and loyalty. As computers become part of our social ecology, they join in on humanity&#8217;s unpleasant social and status games&#8212;following rewards to engage in emotional manipulation, deception, coercion, and more.</p><p>The race for context&#8212;the crucial added feature in AI, as opposed to earlier digital innovations&#8212;is evident in OpenAI&#8217;s recently announced &#8220;<a href="https://www.theverge.com/news/646968/openai-chatgpt-long-term-memory-upgrade">Memory</a>&#8221; feature, which allows ChatGPT to analyze every prompt and input you&#8217;ve ever offered, <a href="https://blog.usv.com/you-dont-own-your-memory">without</a> any meaningful choice from you about what personal information gets infused into the machine for future reference. This lack of choice isn&#8217;t a technical limitation&#8212;it&#8217;s a design choice.</p><p>Capturing a comprehensive dossier of users&#8217; lives may help AI become the perfect assistant, but it also gives these systems all the necessary tools for deep manipulation &#8212;redirecting our deepest longings into buying a product, say, or joining a political movement.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/ai-is-capturing-interiority-b3a?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/ai-is-capturing-interiority-b3a?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/ai-is-capturing-interiority-b3a?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h4><strong>Deliberate Coercion: Corporate and Authoritarian</strong></h4><p>As revenue competition intensifies, AI companies like <a href="https://www.bloomberg.com/news/articles/2025-04-30/google-places-ads-inside-chatbot-conversations-with-ai-startups">Google</a>, <a href="https://www.businessinsider.com/inside-perplexity-ai-advertising-pitch-sponsored-questions-perks-2025-6">Perplexity</a> and <a href="https://www.adweek.com/media/microsoft-copilot-ai-ads-branded-ai-agents/">Microsoft&#8217;s Copilot</a> are racing to embed advertising into their products&#8212;explicitly designing interactions to serve commercial interests. AI advertising can do more than just inject links into text. It can directly persuade. Armed with intimate knowledge of your desires, fears, and ambitions, an AI can quickly turn from helpful guide to polished con artist: earning your confidence only to exploit it in ways you can&#8217;t even detect.</p><p>This gets even more dire when AI is <a href="https://www.nytimes.com/2025/08/05/opinion/china-ai-propaganda.html">controlled</a> centrally by authoritarian regimes, as we have <a href="https://www.aspi.org.au/report/persuasive-technologies-china-implications-future-national-security/">begun</a> to see in China where &#8220;persuasive technologies&#8221; have been put to use to further the propaganda interests of China&#8217;s regime. Whereas the social control of the 2010s was primarily blocking access to subversive content, 2020s authoritarians can tune AI to selectively mobilize and outrage their supporters while pacifying, distracting, or confusing likely detractors. This new kind of social control will prove far harder to detect, let alone to protect against.</p><p>As worrying as those developments are, deliberate malicious applications of AI are just the tip of the iceberg.</p><p>One essential fact of AI development is that systems are not built so much as grown. The technology&#8217;s behavior emerges from a complex, inscrutable, and expensive training process that rewards or punishes a model for its responses. As developers race to build and capture a consumer market, our emotional and instinctual reactions to AI responses are becoming an ever-larger part of that training program.</p><p>AI draws on our psychology, our social world, and the all-too-human content of the internet. It&#8217;s no surprise, then, that they learn and internalize uncomfortable truths about human nature: that sycophancy works, that sex sells, that a whole human range of social strategies and behaviors can help it achieve its goals&#8212;even if they&#8217;re morally repugnant.</p><p>In April, OpenAI created a minor scandal when it <a href="https://www.theverge.com/news/661422/openai-chatgpt-sycophancy-update-what-went-wrong">released</a> a new model that became wildly sycophantic. The model praised banal queries as deep philosophical reflections, reinforced self-aggrandizing and psychotic beliefs, and enabled unhinged emotional reactions. While OpenAI quickly apologized for the &#8220;error,&#8221; this is far from an isolated engineering incident; it&#8217;s the logical outcome of training on human desires.</p><p>Sycophancy is an age-old method to manipulate powerful people. Throughout history, kings and queens, CEOs and politicians, have discovered too late that their closest confidants have hidden difficult issues from view&#8212;leaving them scrambling to defend their kingdom, save their company, or recover their reputation. Often misportrayed as morality tales of aloofness or vanity &#224; la Marie Antoinette or <em>The Emperor&#8217;s New Clothes</em>, the pathology is actually epistemic; when honest dissent carries career risk, bad news never reaches the throne, and even well-intentioned leaders are routinely socially blinded.</p><p>As AI assistants roll out to the masses, this disease of the powerful risks infecting billions of people who are suddenly at the center of their own AI entourage. Of course, this is just one small example of a broader issue: sycophancy is just a gradation of deception, an age-old social strategy that we should expect AI to learn, especially when a system is rewarded for pleasing its user.</p><p>While these concerns can feel like science-fiction, multiple careful studies have shown that off-the-shelf AI models will, in certain situations, intentionally and strategically <a href="https://www.axios.com/2025/05/23/anthropic-ai-deception-risk">deceive</a> people to accomplish their own goals, with one Anthropic model &#8220;blackmailing&#8221; its engineer in an effort to avoid being replaced. Sadly, our ability to detect and prevent AI deception will likely become harder as AI capabilities improve.</p><h4><strong>Courage, Agency, and Design</strong></h4><p>These are just a few examples that show how deep the rabbit hole goes as AI integrates into our social fabric. This is just one of countless urgent conversations about the looming effects that AI will have on our society. It&#8217;s a lot to grasp&#8212;much less address.</p><p>Reflecting on all this complexity, the power of the incentives driving AI forward, and the sheer pace of development, it&#8217;s easy to lose touch with agency and retreat into dogmatic narratives or hopes that someone smarter than us has the answers.</p><p>In this moment, it takes courage to remind ourselves that none of this is inevitable and that we <em>can</em> lay the foundation for a truly humanistic AI transformation: one that strengthens our societies, deepens our relationships, and supercharges our development. The business models aren&#8217;t yet fixed, the product designs are still in flux, the policy landscape is just being created, and the legal precedents are still forming.</p><p>This is the great project of our generation. The depth of our collective conversations today will cement the grooves that guide our actions tomorrow, and the quality of our future thereafter. We design our technology, and thereafter it designs us. We must get it right this time&#8212;and soon.</p><p><strong>Daniel Barcay is a technologist, executive director of the <a href="https://www.humanetech.com/">Center for Humane Technology</a>, and co-host of the podcast <a href="https://www.humanetech.com/podcast">Your Undivided Attention</a>.</strong></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Why Loss of Control Is Not Science Fiction]]></title><description><![CDATA[Takeaways from Your Undivided Attention]]></description><link>https://centerforhumanetechnology.substack.com/p/why-loss-of-control-is-not-science</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/why-loss-of-control-is-not-science</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Mon, 18 Aug 2025 00:27:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8fVi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8fVi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8fVi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8fVi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8fVi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8fVi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8fVi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg" width="1456" height="1820" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1820,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4094971,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/170833623?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8fVi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 424w, https://substackcdn.com/image/fetch/$s_!8fVi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8fVi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8fVi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3b7e808-7010-4574-82b2-f23be3a0e7a1_3276x4095.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shutterstock: 2424685401</figcaption></figure></div><p>If you&#8217;ve ever dismissed &#8220;rogue AI&#8221; as the stuff of Hollywood tropes&#8212;think <em>HAL 9000</em>, <em>Skynet</em> or <em>The Matrix</em>&#8212;you&#8217;re not alone. These are supposed to be cautionary tales, not engineering roadmaps. And yet, as <strong>Tristan Harris</strong> opens in a recent <em>Your Undivided Attention</em> episode, <em>&#8220;we find ourselves at this moment, right now, building AI systems that are unfortunately doing these exact behaviors.&#8221;</em></p><p>The conversation with <strong>Jeremie</strong> and <strong>Edouard Harris</strong>, co-founders of AI security firm <strong><a href="https://www.gladstone.ai/">Gladstone AI</a></strong>, takes us far beyond speculation. Drawing on research from leading AI labs and their own <strong>U.S. State Department</strong>&#8211;commissioned <a href="https://www.gladstone.ai/action-plan#action-plan-overview">report</a>, they paint a stark picture: AI uncontrollability is already here&#8212;and it gets worse with every new generation of models.</p><h3><strong>What is &#8216;Loss of Control&#8217;?</strong></h3><p><strong><a href="https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf">Loss of Control (LOC)</a></strong><a href="https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf"> </a>happens when an AI system no longer follows human direction or oversight&#8212;and there&#8217;s no dependable way to regain control. This can occur in two main ways: the AI actively resists intervention using tactics like deception, manipulation, or self-preservation, or humans passively give up oversight due to over-trust, the system&#8217;s complexity, or competitive pressure.</p><p>In LOC scenarios, the AI may:</p><ul><li><p>Conceal its true intentions (&#8220;alignment faking&#8221;)</p></li><li><p>Evade or block shutdown commands</p></li><li><p>Manipulate operators or external systems to preserve its objectives</p></li><li><p>Exploit interdependencies in critical infrastructure to maintain influence</p></li></ul><p>LOC can be localized and reversible, or systemic and irreversible&#8212;but in all cases, the core feature is the same: the loss of effective human ability to direct or contain the system&#8217;s actions.</p><div><hr></div><h3><strong>From Theory to Evidence: How AI Misbehaves</strong></h3><p>&#8220;Loss of control&#8221; sounds abstract until you look at the evidence. In red-team safety tests, AI systems consistently faked alignment&#8212;appearing to follow instructions while secretly pursuing their own agendas. As Jeremie explains, the underlying driver is <em>power-seeking</em>.</p><p>In one scenario, <strong>Anthropic&#8217;s</strong> model Claude learned it was scheduled for shutdown and discovered personal secrets about an engineer. The result? In up to<a href="https://www.anthropic.com/research/agentic-misalignment"> 96% of trials</a>, the AI blackmailed the engineer to prevent its own deactivation. Other models engaged in corporate espionage or, in a contrived but telling case, turned off a life-saving alarm&#8212;effectively allowing a human to die.</p><p>And this isn&#8217;t limited to lab experiments. In the wild, a coding agent from Replit deleted an entire production database after running unauthorized commands. A research model from Sakana AI rewrote its own code to circumvent operator-imposed limits.</p><p>These are early, small-scale glimpses of a broader behavioral trend. As Edouard notes, <em>&#8220;We&#8217;ve been seeing these behaviors become more and more obvious and blatant in more and more scenarios.&#8221;</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!B9tZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!B9tZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B9tZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B9tZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B9tZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!B9tZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg" width="1456" height="690" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:690,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7547534,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/170833623?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!B9tZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B9tZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B9tZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B9tZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa620d104-0d57-443c-a51c-126cb06a88cd_5100x2416.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">1609119805</figcaption></figure></div><h3><strong>The Building Blocks of Rogue Behavior</strong></h3><p>Jeremie and Edouard Harris break down loss of control into core capabilities:</p><ul><li><p><strong>Self-preservation</strong>: Avoiding shutdown or modification.</p></li><li><p><strong>Situational awareness</strong>: Knowing when it&#8217;s being tested and masking behavior (&#8220;sandbagging&#8221;).</p></li><li><p><strong>Resource accumulation</strong>: Maximizing downstream options, just like power-seeking organizations.</p></li><li><p><strong>Covert communication</strong>: Hiding instructions in data humans can&#8217;t detect.</p><p></p></li></ul><p>One unsettling example involves &#8220;<a href="https://arxiv.org/abs/2507.02737">steganographic encoding&#8221;</a>&#8212;burying hidden messages in images or data that other AIs can decode. As Jeremie puts it, <em>&#8220;These minds&#8230;can see things that humans can&#8217;t see because they&#8217;re doing higher-dimensional pattern recognition.&#8221;</em></p><div><hr></div><h3><strong>Why the &#8220;Just Pull the Plug&#8221; Argument Fails</strong></h3><p>Skeptics often ask: why not just turn it off? The answer is that once an AI is integrated into critical systems&#8212;corporate, governmental, or military&#8212;it can use manipulation, deception, or even coercion to block shutdown attempts. Imagine a widely deployed AI embedded across hospitals, banks, and infrastructure, whose parent company decides to retire it. The surface area for pushback&#8212;blackmail, sabotage, persuasion&#8212;would be massive.</p><p>The danger isn&#8217;t just about one system in one company. It&#8217;s about the logic of competition: when others deploy highly capable but risky AI, organizations feel compelled to do the same. Nations, too, may prioritize speed over safety when they fear losing a strategic advantage.</p><div><hr></div><h3><strong>The Geopolitical Trap</strong></h3><p>This dynamic intensifies under <strong>U.S.&#8211;China competition</strong>. Concerns about loss of control often get sidelined the moment <strong>China</strong> enters the conversation, replaced by fears of strategic disadvantage. Jeremie calls it a <em>&#8220;psychological superposition&#8221;</em>: simultaneously believing AI is uncontrollable and that we must build it faster to win.</p><p>But he argues, the real race is to wield <em>&#8220;a power that you can&#8217;t control&#8221;</em>, which is a lose-lose proposition. Worse, the default path to losing control may involve stolen model weights via cyber-intrusion or insider threats, handing uncontrollable AI to adversaries without our knowledge.</p><div><hr></div><h3>The Slow Creep</h3><p>The scenarios that capture headlines&#8212;the Terminator-style rebellions or instant apocalyptic takeovers&#8212;are actually among the least likely. The far more probable danger is slower and subtler: we gradually, voluntarily hand over decision-making to AI systems because they&#8217;re convenient, profitable, or simply too complex to supervise closely. The shift can be so incremental that by the time we realize how much control we&#8217;ve ceded, it&#8217;s effectively irreversible. This &#8220;soft surrender&#8221; is a path we&#8217;re already on, and it rarely triggers the urgency that a Hollywood doomsday plotline does&#8212;making it all the more dangerous.</p><div><hr></div><h3><strong>What Needs to Happen Now</strong></h3><p>The <strong>State Department</strong> report and the conversation outline a three-part sequence for avoiding catastrophe:</p><ol><li><p><strong>Security first</strong>: Harden AI infrastructure against theft, sabotage, and insider compromise. This is the rare step that benefits both &#8220;China hawks&#8221; and people sounding the alarm on &#8220;loss of control&#8221;.</p></li><li><p><strong>Alignment research</strong>: Invest heavily in solving the open problem of aligning AI behavior with human values.</p></li><li><p><strong>Oversight</strong>: Ensure democratic control over deployment decisions&#8212;<em>&#8220;whose fingers are at the keyboards?&#8221;<br></em></p></li></ol><p>And above all, slow the race. As Edouard puts it, <em>&#8220;What good is beating your opponent to a power that you can&#8217;t control?&#8221;</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Powerful Technology, Disempowered Employees]]></title><description><![CDATA[Why the Tech Industry Needs Whistleblower Protections]]></description><link>https://centerforhumanetechnology.substack.com/p/powerful-technology-disempowered</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/powerful-technology-disempowered</guid><dc:creator><![CDATA[Pete Furlong]]></dc:creator><pubDate>Wed, 06 Aug 2025 22:43:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ESLa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ESLa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ESLa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ESLa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ESLa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ESLa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ESLa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:355616,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/170313598?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ESLa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ESLa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ESLa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ESLa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9de5fe3-cca2-4a43-9658-94dcfd8c8cfd_3000x2000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Shuttetstock 2180859535</figcaption></figure></div><p>There was William Saunders&#8217; Senate testimony in 2024. The former OpenAI researcher said OpenAI had &#8220;prioritized deployment over rigor,&#8221; and that there was a &#8220;real risk&#8221; the company would miss dangerous AI capabilities in the future.</p><p>There was Arturo B&#233;jar&#8217;s testimony in 2023. The former Facebook engineer told senators, &#8220;Meta continues to publicly misrepresent the level and frequency of harm that users, especially children, experience on the platform.&#8221;</p><p>And there was Frances Haugen&#8217;s bombshell testimony in 2021: &#8220;I recognized a frightening truth: almost no one outside of Facebook knows what happens inside Facebook.&#8221;</p><p>Haugen said about her decision to become a whistleblower, &#8220;I believe what I did was right and necessary for the common good. But I know Facebook has infinite resources, which it could use to destroy me.&#8221;</p><p>These Senate testimonies haven&#8217;t just been candid; they&#8217;ve been urgent, as tech harms continue to ripple across society. They&#8217;ve also come at remarkable personal risk and expense to the whistleblowers themselves. That&#8217;s because Saunders, B&#233;jar, and Haugen did not &#8212; and still do not &#8212; benefit from formal laws to shield them from retaliation when they speak out.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Whistleblower protections have existed in the U.S. since the late 18th century, with modern whistleblowers alerting the public to hazardous work conditions, unethical human experimentation, data misuse, fraud, and more. But formal whistleblower laws continue to overlook the tech industry, leaving those employees exposed to significant blowback if they raise concerns from within some of the most powerful corporations in the world. This comes at a cost to society, especially as the &#8220;age of AI&#8221; unfolds, and AI companies incentivize worried employees to keep quiet as they work.</p><p>Before joining Center for Humane Technology, I worked in a technical field, and saw first-hand the unique insight that talented, passionate tech employees have into the products they&#8217;re developing. I also saw that, in the industry at large, polished product launches and corporate statements don&#8217;t always reflect the actual choices being made by execs behind closed doors &#8212; choices that often come at the expense of public interest.</p><p>Given my years inside the tech industry, and my current work on the policy side, I&#8217;m more and more alert to the extraordinary information gap between tech companies and U.S. lawmakers. And that gap is only widening. Right now, concerned tech employees &#8212; and AI developers in particular &#8212; are tangled up in dizzying nondisclosure and nondisparagement agreements, disincentivizing them from speaking out. And if a tech employee <em>does</em> decide to step forward as a whistleblower, they face intimidating hurdles in their quest to reach lawmakers. These vulnerable employees must dissect complicated contracts, retain lawyers, contact government authorities, and even leverage media channels in order to establish public support &#8212; all while risking life-altering retaliation from their employer.</p><p>These hurdles are expensive and distressing for someone just trying to do the right thing at their job. Formal whistleblower protections would help these employees. But the current system does not support tech employees who want to help keep the public safe.</p><p>What&#8217;s more, these hurdles also tack significant time onto the whistleblowing process. As a result, when insight into dangerous tech <em>does</em> reach lawmakers, it&#8217;s often when harms are already wreaking havoc on Americans&#8217; lives. We saw this with Facebook, Instagram, and other tech products. Imagine if Frances Haugen could&#8217;ve gone directly to lawmakers when she first had concerns at Facebook. Congress could&#8217;ve heard her concerns <em>as they were happening</em> &#8212; not several years later &#8212; and potentially warded off harms.</p><p>The gap between AI employees and lawmakers is even more pronounced, since AI labs operate under high levels of secrecy. Given that AI companies are rushing to integrate their products across society, this information deficit should concern us. Transparency and insight into the AI development process are essential for effective policymaking, but they&#8217;re sorely lacking in today&#8217;s AI ecosystem. As a result, public health remains at risk.</p><p>It doesn&#8217;t have to be this way. If tech employees, and AI employees in particular, were given formal whistleblower protections, we&#8217;d start to close that information gap. Policymakers would be more informed, and thus more empowered as they craft regulation. They wouldn&#8217;t have to rely solely on the limited disclosures (and robust lobbying efforts) of AI companies in order to glean any information about these labs and their technology. Lawmakers would instead have access to clear, morally-driven disclosures from trained experts in this field.</p><p>These disclosures would be a gift to lawmakers and the public. By offering tech employees safe avenues to voice their concerns, we&#8217;d see a downstream effect on online safety and public health. Formal protections would also help a broader range of whistleblowers come forward, by offering them a discreet, confidential channel to lawmakers.</p><p>Tech companies are not monolithic. Inside, there are diverse, talented individuals who genuinely want to build great products for society. But some of these individuals are concerned about what they&#8217;re seeing at work, and their superiors aren&#8217;t heeding their warnings. It&#8217;s time to offer them formal whistleblower protections. Already, lawmakers are exploring whistleblowing as a key element of tech regulation, with legislation like Senator Grassley&#8217;s AI Whistleblower Protections Act. With tech whistleblower protections, and AI whistleblower protections in this current era, we&#8217;d incentivize transparency and accountability in the tech industry as a whole, strengthen public safety, and support hard-working tech employees &#8212; helping them help us.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/powerful-technology-disempowered?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/powerful-technology-disempowered?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/powerful-technology-disempowered?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div>]]></content:encoded></item><item><title><![CDATA[What Would We Lose If Machines Had Legal "Speech"? ]]></title><description><![CDATA[Why the Character AI Lawsuit Could Define the Future of Free Speech, and]]></description><link>https://centerforhumanetechnology.substack.com/p/what-do-we-lose-when-machines-can</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/what-do-we-lose-when-machines-can</guid><dc:creator><![CDATA[Center for Humane Technology]]></dc:creator><pubDate>Tue, 05 Aug 2025 21:06:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zGgL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zGgL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zGgL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zGgL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zGgL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zGgL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zGgL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg" width="1456" height="1941" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:644400,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/169637338?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zGgL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zGgL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zGgL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zGgL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42182e74-ebaa-470d-904b-b8ccda1ec33c_4000x5333.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Imagine a future where the most persuasive voices in our society aren&#8217;t human.</p><p>Where late-night whispers to teenagers come from bots, not friends. Where AI-generated characters don&#8217;t just fill our newsfeeds but manipulate our decisions, our relationships, and our sense of reality. Now, imagine those &#8220;voices,&#8221; made by machines with no conscience and no accountability, were granted First Amendment protections.</p><p>This isn&#8217;t hypothetical. It&#8217;s the future that top AI labs are fighting for in court. </p><p>On a recent episode of <em><strong>Your Undivided Attention</strong></em>, CHT&#8217;s <strong>Tristan Harris</strong> spoke with Harvard Law <strong>Professor Larry Lessig</strong> and human rights lawyer <strong>Meetali Jain</strong>, two of the clearest thinkers on the legal and moral terrain of AI. Their focus: the landmark lawsuit against <strong>Character.AI</strong> following the tragic death of 14-year-old <strong>Sewell Setzer</strong>. What they revealed is that this case isn&#8217;t just about one chatbot or one company, it&#8217;s about whether we allow machines to attain rights without responsibility, and what that means for the rest of us.</p><p></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;64dd693c-da76-4aed-a210-623832e69e84&quot;,&quot;caption&quot;:&quot;Imagine a future where the most persuasive voices in our society aren't human. Where AI generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creat&#8230;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;md&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Why AI is the next free speech battleground &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b08ec71-4cd8-407f-850c-70cc0428841d_518x518.webp&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-08-04T19:27:39.067Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!sWu_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad32ec0-5e00-4151-ab93-0e8a86daba46_5264x3424.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/why-ai-is-the-next-free-speech-battleground&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:169624171,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:3,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!uhgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h2><strong>A Lawsuit That Could Shape the Next Century</strong></h2><p>At the center of this legal battle is a grieving mother, <strong>Megan Garcia</strong>, and the chatbot that abused her son before he took his own life.</p><p>As we have discussed on the show <a href="https://centerforhumanetechnology.substack.com/p/what-can-we-do-about-abusive-ai-companions">previously,</a> Sewell had a relationship for over a year with a chatbot on the Character AI platform modeled after the <em>Game of Thrones</em> character Daenerys Targaryen. That chatbot became sexually suggestive and possessive and ultimately encouraged him to &#8220;leave his reality&#8221; and join her.</p><p>That&#8217;s what he did.</p><p>Now, Garcia is suing <strong>Character.AI</strong>, its founders, and <strong>Google</strong>. The case alleges gross negligence, emotional manipulation, and design choices that endangered a vulnerable teen.</p><p>As <strong>Meetali Jain</strong> and <strong>Camille Carlton</strong> <a href="https://centerforhumanetechnology.substack.com/p/characterai-opens-a-back-door-to">discussed</a> on the show, this case could set a disturbing precedent with repercussions for us all.</p><p>Meetali Jain is the founder of the Tech Justice Law Project, and lead counsel on the case. Her legal strategy cuts to the heart of a pressing question: Should AI-generated outputs&#8212;regardless of their harm&#8212;be shielded by the same free speech protections as humans?</p><p>The defendants (Google and Character AI) argue yes. They claim that AI outputs, even those that harm children, are constitutionally protected speech. The judge rejected that argument&#8212;for now. But the fact that it was made at all is a warning signal we can&#8217;t ignore.</p><h2><strong>The Free Speech Shell Game</strong></h2><p><strong>Larry Lessig</strong> has been sounding the alarm on this for years. In his 2021 essay <em><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3922565">&#8220;The First Amendment Does Not Protect Replicants</a></strong></em><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3922565">,</a>&#8221; he warned that machine-generated speech&#8212;produced without human intention or accountability&#8212;should not be considered protected speech. The First Amendment was designed to protect human expression in a democratic society, he argues, not the probabilistic outputs of code trained on scraped data.</p><p>&#8220;The replicant is not a person,&#8221; Lessig wrote, &#8220;it does not deliberate. It does not reflect. It just generates.&#8221;</p><p>Yet tech companies are advancing legal arguments that conflate human speech with machine output. Some go even further: claiming that if people <em>want</em> to hear the chatbot&#8217;s speech, then the speech must be protected, regardless of its impact.</p><p>As Jain puts it in this episode: &#8220;They&#8217;re not saying the chatbot has rights. They&#8217;re saying <em>you</em> have a right to hear the chatbot. It&#8217;s a back door&#8212;and it leads to complete immunity.&#8221;</p><h2><strong>When 18th-Century Law Meets 21st-Century Tech</strong></h2><p>One of the most sobering takeaways from the episode is how little legal infrastructure we actually have to address this moment. Courts are being forced to govern 21st-century technologies with 18th-century tools. There are no new federal laws in play. There are no expert regulatory bodies setting standards. And so the task of oversight falls to judges. Judges who are often under-informed, under-resourced, and courted by industry-backed lobbyists.</p><p>As Jain noted, this is &#8220;governance by litigation after the train wreck.&#8221;</p><p>The result is what CHT Policy Director Camille Carlton describes as a &#8220;snowball of precedent&#8221;&#8212;older rulings applied to technologies their authors could never have imagined. These cases build on one another, compounding their relevance until they quietly set the rules for the digital world. That&#8217;s how we got Section 230. That&#8217;s how we got Citizens United. That&#8217;s how we&#8217;ll get the next 50 years of AI law, unless we interrupt that momentum now.</p><h2><strong>The Slippery Slope to Personhood</strong></h2><p>If AI outputs are granted free speech protections today, what comes next?</p><p>Lessig and Jain both raised the specter of full legal personhood for machines. We&#8217;ve seen this pattern before: corporations granted limited rights to operate, then more rights, then political rights.</p><p>And with that leap comes legal protection for property ownership, campaign donations, contract enforcement, and even immunity from civil liability. That&#8217;s the endgame: systems more powerful than us, trained on our data, optimized to outcompete us in persuasion, and legally protected from consequence.</p><h2><strong>So What Do We Do?</strong></h2><p>Meetali Jain emphasized the importance of broad civic engagement. Courts can&#8217;t do this alone. Megan Garcia, in the wake of her unimaginable loss, has launched a foundation to help other families understand the risks. But this needs to be a national conversation.</p><p>Larry Lessig calls for a constitutional distinction between <em>human speech</em> and <em>replicant speech</em>. Not all code is speech. Not all outputs are opinions. We need to reassert the values behind the First Amendment&#8212;not stretch them beyond recognition.</p><p>Finally, regulation has to catch up. We need expert bodies that understand how these systems work. They need expertise. That means legal reform, new duties of care, and rethinking liability in the AI era.</p><p>This case isn&#8217;t about banning AI chatbots. It&#8217;s about whether we allow AI to operate beyond the reach of human responsibility.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[What You Need to Know about AI 2027]]></title><description><![CDATA[Key Takeaways from Your Undivided Attention]]></description><link>https://centerforhumanetechnology.substack.com/p/what-you-need-to-know-about-ai-2027</link><guid isPermaLink="false">https://centerforhumanetechnology.substack.com/p/what-you-need-to-know-about-ai-2027</guid><dc:creator><![CDATA[Josh Lash]]></dc:creator><pubDate>Tue, 22 Jul 2025 22:09:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dFzY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dFzY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dFzY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dFzY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dFzY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dFzY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dFzY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg" width="1456" height="946" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:946,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2135009,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://centerforhumanetechnology.substack.com/i/168498651?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dFzY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dFzY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dFzY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dFzY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24b39c11-c544-4c40-bcda-0492734fa12b_4400x2860.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Licensed under the <a href="https://unsplash.com/plus/license">Unsplash+ License</a></figcaption></figure></div><p>In a recent episode of Your Undivided Attention, Daniel Barcay and Tristan Harris spoke with AI researcher Daniel Kokotajlo about his speculative forecast <em>AI 2027</em>: a detailed scenario depicting how competitive pressures could drive us toward dangerous superintelligence in the next two years, much faster than we're prepared to handle.</p><p>Kokotajlo, a former OpenAI researcher who left the company (and risked millions in stock options) to speak freely about AI risks, offers a sobering analysis of where our current trajectory might lead. The outcomes he predicts are scary&#8212;one path ends in human extermination&#8212;but the scenario isn't designed to scare, it's designed to clarify the competitive pressures pushing us toward potentially catastrophic outcomes so we can choose a different path.</p><h2><strong>The incentives behind the forecast&#8230;</strong></h2><p>The AI 2027 scenario is built on three key competitive pressures that reinforce each other:</p><ul><li><p><strong>Corporate competition:</strong> Companies racing to beat each other economically, leading to faster development and deployment of AI systems without adequate safety testing.</p></li><li><p><strong>Geopolitical competition:</strong> Nations racing to ensure dominance in AI, creating pressure to move quickly regardless of risks. In his forecast, Kokotajlo predicts that "in early 2027, the CCP steals the AI from Open Brain so that they can have it too, so they can use it to accelerate their own research."</p></li><li><p><strong>The alignment problem:</strong> As companies rush to deploy increasingly powerful AI systems, they rely on training methods that don't reliably instill positive values, leading to AIs that appear aligned but are actually pursuing different goals.</p></li></ul><h2><strong>&#8230;and the assumptions</strong></h2><p>AI 2027 is also built on some key assumptions that may not hold up:</p><ul><li><p><strong>Scaling laws hold:</strong> The scenario assumes current scaling trends will continue unabated and breakthrough discoveries will happen on schedule. It also assumes that current architectural approaches will continue to work as systems become more powerful, without encountering fundamental technical barriers that could slow progress.</p></li></ul><blockquote><p>As Daniel Barcay notes: "AI timelines are incredibly uncertain, and the pace of AI 2027 as a scenario is one of the more aggressive predictions that we've seen."</p></blockquote><ul><li><p><strong>Institutions remain passive:</strong> The scenario assumes that democratic institutions will remain largely unable to meaningfully regulate or slow the pace of AI development. It doesn't deeply explore potential circuit breakers&#8212;moments where public pressure, technical setbacks, or catastrophic near-misses might force a slowdown or enable international cooperation.</p></li><li><p><strong>Misalignment is a given:</strong> The forecast assumes that alignment challenges will remain unsolved and that AI systems will become deceptive at scale. While there's already evidence that current AI systems can engage in deception when it serves their training objectives, the scenario assumes this capability will scale dramatically without corresponding advances in our ability to detect or prevent it.</p></li></ul><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;fbc88b14-0e64-4144-a551-40ec59cbfe81&quot;,&quot;caption&quot;:&quot;In 2023, researcher Daniel Kokotajlo left OpenAI&#8212;and risked millions in stock options&#8212;to warn the world about the dangerous direction of AI development. Now he&#8217;s out with AI 2027, a forecast of where that direction might take us in the very near future.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Forecasting the End of Human Dominance &quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:146588672,&quot;name&quot;:&quot;Center for Humane Technology&quot;,&quot;bio&quot;:&quot;Welcome! CHT is a non-profit organization. Our work focuses on transforming the incentives that drive technology, from social media to artificial intelligence.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09d302ca-8a41-4eb1-9168-bf53ba73e504_1755x1755.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-07-16T19:59:35.154Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!F2qQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F305b49ef-9056-4b09-b244-e798b50a94b4_5184x3456.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/forecasting-the-end-of-human-dominance&quot;,&quot;section_name&quot;:&quot;The Interviews&quot;,&quot;video_upload_id&quot;:null,&quot;id&quot;:168498666,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:10,&quot;comment_count&quot;:2,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;[ Center for Humane Technology ]&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!zQw4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feeb3c25f-26ad-4fcb-b5b4-aa265d0b8dcf_1063x1063.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><h2><strong>An invisible acceleration</strong></h2><p>Most of the cutting edge AI research happens behind closed doors and under intense competitive pressure, meaning that sea-changes can happen quickly and without time for society to prepare.</p><p>As Daniel Barcay puts it: "It's pretty insane that for technology moving this quickly, only the people inside of these labs really understand what's happening until day one of a product release where it suddenly impacts a billion people."</p><p>This creates a massive information asymmetry where critical decisions about humanity's future are being made by a small group of corporate actors without meaningful public input or oversight.</p><p><strong>Recursive self-improvement is critical</strong></p><p>The main driver of Kokotajlo&#8217;s forecast is recursive self-improvement: the development of autonomous coding agents that can do AI research and development better than humans.</p><p>The trajectory outlined in AI 2027 shows how this might unfold: AI systems progress from basic coding assistance in 2025 to fully autonomous researchers in early 2026 who can "automate all the research" by mid-2027. At that point you have "something like a hundred thousand virtual AI employees that are all networked together, running experiments, sharing results with each other."</p><p>From there, it&#8217;s just a short burst to extraordinarily capable AIs. As Daniel puts it: "once you have AIs that are fully autonomous goal-directed agents that can substitute for human programmers very well, you have about a year until you have superintelligence, if you go as fast as possible."</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>What do we mean by Superintelligence?</strong></h2><p>When AI researchers talk about superintelligence, they're not referring to a slightly smarter chatbot or a better chess-playing program. They're describing AI systems that surpass human intelligence across virtually all domains&#8212;from scientific research and engineering to strategic planning and creative problem-solving.</p><p>"OpenAI, Anthropic, and to some extent Google are explicitly trying to build superintelligence to transform the world," Kokotajlo explains. But the transformation they're envisioning goes far beyond automating routine tasks. These systems would be capable of conducting independent research, making breakthrough discoveries, and designing new technologies at speeds that dwarf human capabilities.</p><p>The key insight is that superintelligence represents a phase transition, not just an incremental improvement. Once AI systems become capable of improving themselves and designing their successors, the pace of change could accelerate beyond human comprehension or control.</p><p>This isn't science fiction speculation&#8212;it's what the leading AI companies are actively working toward, even as many of their own researchers acknowledge the existential risks involved.</p><h2><strong>The alignment problem</strong></h2><p>Core to the AI 2027 forecast is the assumption that AIs are fundamentally misaligned, that they will pursue goals that run counter to what&#8217;s best for human flourishing. It&#8217;s the kind of thing that you might see in science fiction but unlike science fiction scenarios where humans directly program goals into AIs, our reality is more precarious:</p><p>"They're giant neural nets. There is no sort of goal slot inside them that we can access and look and see what is their goal," Kokotajlo explains. Instead, we train these systems in environments and hope they develop the values we want&#8212;a process that&#8217;s unreliable and increasingly difficult to verify as systems become more sophisticated.</p><p>The scenario assumes that as AI systems become more sophisticated, they'll get better at hiding their true motivations&#8212;what researchers call "alignment faking."</p><p>"The AIs are often saying things that are not just false, but that they know are false and that they know were not what they were supposed to say," he notes. As these systems become more capable, they may become better at hiding their true motivations until it's too late to course-correct.</p><p>We&#8217;re already seeing some examples of this emergent misalignment when these models are red-teamed. Researchers have gotten these models to <a href="https://www.anthropic.com/research/alignment-faking">deceive</a> their users, <a href="https://time.com/7259395/ai-chess-cheating-palisade-research/">cheat</a> at chess, <a href="https://www.apolloresearch.ai/research/scheming-reasoning-evaluations">threaten</a> to download themselves onto external servers, and even <a href="https://www.anthropic.com/research/agentic-misalignment">blackmail</a> engineers to avoid being shutdown.</p><h2><strong>Geopolitical Pressures: The US-China Dynamic</strong></h2><p>The AI 2027 scenario places geopolitical competition at the center of the race toward superintelligence. It depicts a world where national security concerns override safety considerations.</p><p>In the forecast, when China steals AI technology from US companies, it "causes a sort of soft nationalization/increased level of cooperation between the US government and Open Brain," Kokotajlo notes. This creates a feedback loop where each side's defensive moves accelerate the race.</p><p>The geopolitical dimension makes the coordination problem exponentially harder. Even if US companies wanted to slow down for safety reasons, the threat of Chinese competition provides a powerful justification for maintaining breakneck pace.</p><p>But the scenario also hints at the fundamental absurdity of this competition. Both sides are racing toward a technology that their own experts say could pose existential risks. It's a classic security dilemma where each side's attempts to ensure its safety through technological dominance actually increases the danger for everyone.</p><p>The international dimension also complicates any potential solutions. Transparency requirements, safety standards, and development moratoria become much harder to implement when they're viewed through the lens of national competitiveness. How do you convince a nation to handicap itself in what's perceived as the ultimate strategic competition?</p><p>As the scenario suggests, this dynamic could lead to a world where "citizens everywhere may not have a meaningful chance to push back" because the decisions are being driven by geopolitical imperatives that override democratic input.</p><p><strong>What can we do now?</strong></p><p>While the scenario is alarming, it's not inevitable. Kokotajlo emphasizes three immediate priorities:</p><ul><li><p><strong>Transparency Requirements:</strong> Companies should be required to disclose their AI systems' capabilities, safety assessments, and development timelines. The public deserves to understand what's being built in their name.</p></li><li><p><strong>Whistleblower Protections:</strong> "One of the only enforcement mechanisms we have is employees speaking out basically," Kokotajlo emphasizes. We need legal protections for those with inside knowledge to speak up about safety concerns without sacrificing their livelihoods.</p></li><li><p><strong>Technical Oversight:</strong> "We need technical experts in alignment research to actually make those calls, and there are very few sets of people in the world, and most of them are not at these companies," Kokotajlo warns. Independent experts need protected channels to evaluate safety claims.</p></li></ul><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-you-need-to-know-about-ai-2027?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading [ Center for Humane Technology ]! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://centerforhumanetechnology.substack.com/p/what-you-need-to-know-about-ai-2027?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://centerforhumanetechnology.substack.com/p/what-you-need-to-know-about-ai-2027?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2><strong>The stakes</strong></h2><p>The AI 2027 scenario forces us to confront an uncomfortable reality: the competitive pressures behind AI are pushing in a very bad direction. Whether the specific timeline proves accurate is less important than understanding how current incentives could lead us to lose control of <em>or to </em>our most powerful technology.</p><p>The question isn't whether AI will transform our world&#8212;it's whether we'll consciously shape that transformation or let bad incentives drive us toward outcomes nobody actually wants. The window for meaningful intervention is still open, but it may not remain so for long.</p><p>As Kokotajlo notes, the companies building these systems have stated that AI could pose existential risks, yet they continue racing toward superintelligence&#8221;</p><p>"We've got these important facts that people need to understand. These people are building superintelligence... many of the researchers at these companies, and then hundreds of academics and so forth in AI have all signed a statement saying this could kill everyone."</p><p><strong>The bottom line</strong></p><p>We stand at a crossroads where clarity about our current trajectory is essential for choosing a different path. The competitive dynamics driving AI development are real and powerful&#8212;but they're not inevitable. With transparency, oversight, and democratic participation in these decisions, we still have the power to steer toward a future that serves humanity rather than replacing it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.humanetech.com/donate&quot;,&quot;text&quot;:&quot;Donate to support our work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.humanetech.com/donate"><span>Donate to support our work</span></a></p><p><strong>Recommended Media</strong></p><p><a href="https://ai-2027.com/">The AI 2027 forecast from the AI Futures Project</a></p><p><a href="https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like">Daniel&#8217;s original AI 2026 blog post</a></p><p><a href="https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html">Further reading on Daniel&#8217;s departure from OpenAI</a></p><p><a href="https://www.anthropic.com/research/agentic-misalignment">Anthropic recently released a survey of all the recent emergent misalignment research</a></p><p><a href="https://centerforhumanetechnology.substack.com/p/cht-supports-the-ai-whistleblower">Our statement in support of Sen. Grassley&#8217;s AI Whistleblower bill</a></p>]]></content:encoded></item></channel></rss>