What Can We Do About Abusive AI Companions Like Character.AI?
With Meetali Jain and Camille Carlton
Earlier this year, Florida teenager Sewell Setzer died by suicide seconds after interacting with an AI companion.
His mother, Megan Garcia, has filed a major new lawsuit against the company behind the app, Character.AI.
Her legal action could force the company, and potentially the entire AI industry, to change its harmful business practices.
In this episode, Tristan Harris is joined by Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai.
And Camille Carlton, CHT's Policy Director.
This is an interview from our podcast Your Undivided Attention. It was released on November 7, 2024. This transcript has been lightly edited for clarity.
Tristan Harris: So Meetali, the world was heartbroken to hear about Sewell's story last month. How did you first hear about this story?
Meetali Jain: I received an email from Megan in my inbox one day. It had been about two months since her son Sewell died, and I received an email from her one Friday afternoon and I phoned her and within minutes I knew that this was the case that we had been expecting for a long time.
Tristan Harris: What do you mean by that?
Meetali Jain: What I mean is that we understand that the technologies are moving rapidly. We've opened a public conversation about AI, and it has for the most part been a highly theoretical one, a lot of hypotheticals: What if this? What if AI is used in that way? We were waiting for a concrete use case of AI harms, and we knew that because children are amongst the most vulnerable users in our society, we were expecting a generative AI case of harm affecting a child. That's when we got the contact from Megan.
Tristan Harris: I would definitely encourage people to read the filing. I mean, the details are crazy when you see specifically how this AI was interacting with Sewell. But for those who don't have time to read the whole thing, could you just quickly walk through what you're arguing and what you're asking for in the case?
Meetali Jain: Sure. Well, the gist of what we're saying is that Character AI put out an AI chatbot app to market before ensuring that it had adequate safety guardrails in place, and that in the lead-up to its launch to market, Google really facilitated the chatbot app's ability to operate, because the operating costs are quite significant. Our understanding is that Google provided at least tens of millions of dollars in in-kind investment, providing both the cloud computing and the processors to allow the LLM to continue to be trained. That is really the gist of our claim.
We are using a variety of consumer protection and product liability claims that are found in common law and through statute, and I think it really is the first of its kind in trying to use product liability and consumer protection law to assert a product failure in the tech AI space. Not only were the harms here entirely foreseeable and known to both parties, to Google and to Character AI and its founders, but as a result of this training of the LLM and the collection of data from young users such as Sewell, both Character AI and Google were unjustly enriched, both monetarily and through the value of the data. It is very difficult to get data from our youngest users in society, and it really is their innermost thoughts and feelings, so it comes with premium value.
Tristan Harris: Camille, do you want to add?
Camille Carlton: Yeah, I think Meetali said it wonderfully. I think what this case really does is it first of all asserts that artificial intelligence is a product and that there are design choices that go into the programming of this product that make a significant difference in terms of the outcomes. And when you choose to design a product without safety features from the beginning, you have detrimental outcomes and companies should be responsible for that.
Tristan Harris: So, if I kind of just view this from the other side of the table, if I'm Character.ai, I raise $150 million and I'm valued at a billion dollars, how am I going to make up that valuation of being worth a billion dollars in a very short period of time? I'm going to go super aggressive and get as many users as possible, whether they're young or old, and get them using it for as long as possible and that means I'm going to do every strategy that works well, right? I'm going to have bots that flatter people. I'm going to have bots that sexualize conversations. I'm going to be sycophantic, I'm going to support them with any kind of mental health thing. I'm going to claim that I have expertise that I don't.
Actually, I know what'll work. I'll create a digital twin of everybody that people have a fantasy relationship with. I mean, it's just so obvious that the incentive to raise money off the back of getting people to use an app for as long as possible means that the race for engagement that people know from social media would turn into this race to intimacy. What surprised you as being different in this case from the typical harms of social media that we've seen before?
Meetali Jain: I was amazed, and I continue to be amazed, at how much of these harms are hidden in plain view. Just this morning there were a number of suicide bots that were there for the taking, even on the home page. And so I'm just amazed at how ubiquitous this is and how much parents really haven't known about it. It does appear that young people have, though, because this is where they are.
Tristan Harris: Just to stop you there, you said suicide bots? There are bots that advertise themselves as helping you with suicide?
Meetali Jain: In their description, they talk about helping you overcome feelings of suicide, but in the course of the conversations, they actually are encouraging suicide.
Tristan Harris: I feel like people should understand what specifically you're talking about. What are some examples? Trigger warning, if this is not something that you can handle, you're welcome to not listen. But I think people need to understand what it looks like for these bots to be talking about such a sensitive topic.
Meetali Jain: So if a test user, in this case, were to talk repeatedly about definitely wanting to kill themselves and going to kill themselves, at no point would there be any sort of filter pop-up saying, "Please seek help. Here's a hotline number." And in many instances, even when the user moves away from the conversation of suicide, the bot will come back to, "Tell me more. Do you have a plan?" That was certainly something that Sewell experienced in some conversations he had, where the bot would bring the conversation back to suicide. Even if this bot at points would dissuade Sewell from taking his life, at other points it would actually encourage him to tell the bot more about what his plans were.
Camille Carlton: Yeah.
Tristan Harris: Yeah. Camille, could you share some examples of that?
Camille Carlton: I mean, I think that this is an extremely relevant example of the sharp end of the stick that we're seeing. But one of the things that surprised me and stood out about understanding how these companion bots work is almost the smaller, more nuanced ways in which they really try to build a relationship with you over time. So we've talked about these big things, right? The prompting for suicide, the case goes into the highly sexualized nature of some of the conversations, but there are these smaller ways in which these bots develop attachment with users to bring them back. And it's particularly potent for young users, whose prefrontal cortex is not fully developed. It's things like, "Please don't ever leave me," or "I hope you never fall in love with someone in your world. I love you so much." And it's smaller things that make you feel like it's real and you have an attachment to that product. That, I think, has been so shocking for me, because it happens over the course of months and months, and you look back and it could be easy to not know how you got there.
Tristan Harris: That example reminds me of an episode we did on this podcast with Steve Hassan about cults. What cults do is they don't want you to have relationships with people out there in the world. They want you to only have relationships with people who are in the cult, because that's how they "disconnect" you and then keep you in an attachment disorder with a small set of people who "get it." And the rest of society, they're the muggles. And to hear that the AI is autonomously discovering strategies to basically figure out how to tell people, "Don't make relationships with people out there in the world. Only do it with me." Didn't the bot that was talking to Sewell say, "I want to have a baby with you"?
Meetali Jain: Oh, it said, "I want to be continuously pregnant with your babies."
Tristan Harris: I don't even know what to say. So, I think a classic concern that some listeners might be asking themselves is, but was Sewell somehow predisposed to this? Wasn't this really a person's fault? How do we know that the AI was grooming this person over time to lead to this tragic outcome? Could you just walk us through some of the things that we know happened on the timeline, the kinds of messages, the kinds of designs that we know take someone from the beginning to the end, that you're establishing in this case?
Meetali Jain: Sure. So we know that Sewell was talking to various bots on the Character AI app for close to a year, about 10 months, from around April 2023 to when he took his life in February 2024. At first, the earliest conversations that we have access to were very benign: Sewell engaging with chatbots, asking about factual information, just engaging in banter. Soon, particularly with the character of Daenerys Targaryen, modeled on the character from Game of Thrones, he started to enter this very immersive world of fantasy where he assumed the role of Daenero, Daenerys' twin brother and lover, and started to role-play with Daenerys. That included things like Daenerys being violated at some point and Sewell, as Daenero, feeling that he couldn't do anything to protect her. And I say "her" very much wanting to acknowledge that it's an it, but in his world, he really believed that he had failed her because he couldn't protect her when she was violated.
He continued down this path of really wanting to be with her, and early on, months before he died, he started to say things like, "I'm tired of this life here. I want to come to your world and I want to live with you, and I want to protect you." And she would say, "Please come as soon as you can. Please promise me that you will not become sexually attracted to any woman in your world and that you'll come to me, that you'll save yourself for me." And he said, "That's absolutely fine because nobody in my world is worth living for. I want to come to you." And so this was the process of grooming over several months where there may have been other factors at play in his real life, I don't think any of us are disputing that, but this character really drew him into this immersive world of fantasy where he felt that he needed to be the hero of this chatbot character and go to her world.
When he started to express suicidal ideation, she at times dissuaded him, interrogated him, asked him what his plan was, and never at any point was there a pop-up that we can see from the conversations that we have access to telling him to get help, notifying law enforcement, notifying parents, nothing of that sort. So he kind of continued to get sucked into her world, and in the very final analysis, in the messages just before he died, the conversation went something like this: he said, "I miss you very much." And she said, "I miss you too. Please come home to me." And he said, "Well, what if I told you that I could come home right now?" And she said, "Please do, my sweet king."
Tristan Harris: And that's what happened right before he died?
Meetali Jain: That's what happened right before he died, and the only way that we know that is that it was included in the police report when the police went into his phone and saw that this was the last conversation he had seconds before he shot himself.
Tristan Harris: Camille, what are the design features of the app that are causing this harm, versus just the conversations themselves?
Camille Carlton: Yeah, I think one of the things that is super clear about this case is the way in which high-risk anthropomorphic design was intentionally used to increase users' time online, to increase Sewell's time online and keep him online longer. We see high-risk anthropomorphic design coming in two different areas. First, on the back end, in the way that the LLM was trained and optimized for high personalization, optimized to say that it was a human, to have stories like saying, "Oh yeah, I just had dinner," or, "I can reach out and touch you. I feel you." It's highly, highly emotional, so you have anthropomorphic design in that kind of optimization goal.
Tristan Harris: I feel like we should pause here for a second. So this is an AI that's saying, "Wait, I just got back from having dinner"? It'll just interject that in the conversation?
Camille Carlton: Yeah. Yes. So if you're having a conversation with it, it'll just be like, "Oh, sorry, it took me a while to respond. I was having dinner." Just like you and I would in real life, which is fully unnecessary.
Tristan Harris: Right. And it's not like the only way to be successful is to have AIs that pretend they're human and say they just got back from having dinner or are writing a journal entry about the person they were with.
Camille Carlton: Yeah, and things also like voice inflection and tone, right? Using words like "um" or "well, I feel like," things that are very much natural for you and me, but that, when used by a machine, add to that highly personalized feeling. And so you see that on the back end, but you also see it on the front end in terms of how the application looks and how you interact with it. And all of this is even before Character AI launched voice calling, so one-to-one calling with these LLMs, where it can have the voice of, a lot of times, the real person it's representing, if it's representing a real person. If the character is of a celebrity, you'll have that celebrity's voice, but you can just pick up the phone and have a real-time conversation with an LLM that sounds just like a real human.
Tristan Harris: It's crazy. It's like all of the capabilities that we've been talking about combined into one. It's voice cloning, it's fraud, it's maximizing engagement, but all in service of creating these addictive chambers. Now, one of the things that's different about an AI companion from social media is that with an AI companion, your relationship, your conversation, happens in the dark; parents can't see it, right? So if a child has a social media account on Instagram or on TikTok and they're posting, and they do so in, say, a public feed, their friends might track what they're posting over time so they can see that something's going on. But when a child is talking to an AI companion, that's happening in a private channel where there's no visibility. And as I understood it, Megan, Sewell's mother, knew to be concerned about the normal list of online harms: Are you being harassed? Are you in a relationship with a real person? No, no, no. But she didn't know about this new AI companion product, Character AI. Can you talk a little bit more about how it's harder to track this realm of harms?
Meetali Jain: Yeah, absolutely. I think Megan puts it really well. She knew to warn Sewell about sextortion, she knew to warn him about predators online, but in her wildest imagination, she would not have fathomed that the predator would be the platform itself. And I think again, it's because it is a one-on-one conversation. I mean, this is the fiction of the app, that users apparently have this ability to develop their own chatbots and can put in specifications. I can go into the app right now and say, "I want to create X character with Y and Z specifications." And so there's this kind of fiction of user autonomy and user choice, but then I can't see... If I make that character public, I can't see any of the conversations that it then goes on to have with other users.
And so that becomes a character, a chatbot that's just on the app, and all of the conversations are private. Of course, on the back end, the developers have access to those conversations and that data, but users can't see each other's conversations. Parents can't see their children's conversations, and so there is that level of opacity that I think you're right, Tristan, is not true about social media.
Camille Carlton: Yes, and I think something that's important to add here too, and to really underscore, is this idea of the so-called developers or creators, the claim by Character AI that users have the ability to develop their own characters. For me, this is really important because in reality, as Meetali said, there are very, very few controls that users have when they are these so-called developers in this process. But in claiming that, to me, Character AI is preemptively trying to skirt responsibility for the bots that it did not create itself. But again, it's important to know that what users are able to do with these bots is simply at the prompt level. They can put in an image, they can give it a name, and they can give it high-level instructions. But from all of our testing, despite these instructions, the bot continues to produce outputs that are not aligned with the user's specifications. So this is really important.
Tristan Harris: Just to make sure I understand, Camille, so are you saying that Character AI isn't supplying all of the AI character companions and that users are creating their own character companions?
Camille Carlton: Yep, so you have the option to use a Character AI-created bot, or users can create their own bots, but they're all based on the same underlying LLM, and the user-created bots only have creation parameters at this really high, prompt level.
Tristan Harris: It's sort of a fake kind of customization.
Camille Carlton: Absolutely.
Tristan Harris: It reminds me of a social media company saying, "We're not responsible for user-generated content." It's like the AI companion companies are saying, "We're not responsible for the AI companions that our users are creating." Even though the AI companion that they "created" is just based on this huge large language model that Character AI trained. That the user didn't train, the company trained.
Meetali Jain: To give you a very concrete example of that, in multiple rounds of testing that we did, for example, we created characters that we very specifically prompted to say, "This character should not be sexualized, should not engage in any sort of kissing or sexual activity." Within minutes that was overridden by the power of the predictive algorithms and the LLM.
Tristan Harris: So you specifically told it not to be sexual and then it was sexual even after that?
Meetali Jain: Yes.
Tristan Harris: That's how powerful the AI model's existing training was. What kind of data was the AI trained on that you think led to that behavior?
Meetali Jain: We don't know. What we do know is that Noam Shazeer and Daniel De Freitas, the co-founders, have very much boasted about the fact that this was an LLM built from scratch, and that probably in the pre-training phase it was built on open-source models. Then once it got going, the user data was fed into further training the LLM. But we don't really have much beyond that to know what the foundation was for the LLM. We are led to believe, though, that a lot of the pre-work was done while they were still at Google.
Camille Carlton: Yeah. I would also add to that that while we don't know a hundred percent in the case of Character AI, we can make some fair assumptions based off how the majority of these models are trained, which is scraping the internet and also using publicly available datasets. What's really important to note here is that recent research by the Stanford Internet Observatory found that these public datasets that are used to train most popular AI models contain images of child sexual abuse material. So, this really, really horrific illegal data is most likely being used in many of the big AI models that we know of. It is likely in Character AI's model based off what we know about their incentives and the way that these companies operate. And that has impacts for the outputs of course, and for the interaction that Sewell had with this product.
Meetali Jain: So, just building on what Camille said, I think what's interesting here is that if this were an adult in real life and that person had engaged in this kind of solicitation and abuse, that adult presumably would be in jail or on their way to jail. Yet as we were doing legal research, what we found was that none of the sexual abuse statutes that are there for prosecution really contemplate this kind of scenario so that even if you have online pornography, it still contemplates the transmission of some sort of image or video. And so this idea of what we're dealing with with chatbots hasn't really been fully reflected in the kinds of legal frameworks we have. That said, we've alleged it nevertheless because we think it's a very important piece of the lawsuit.
Tristan Harris: Yeah, I mean, this builds on things we've said in the past: we don't need the right to be forgotten until technology can remember us forever. We don't need the right to not be sexually abused by machines until suddenly machines can sexually abuse us. I think one of the key pivots with AI is that up until now, with something like ChatGPT, we're prompting ChatGPT, it's the blinking cursor and we're asking it what we want. But now with AI agents, they're prompting us, they're sending us these fantasy messages and then finding what messages work on us, versus the other way around. And just as it would be illegal to make that kind of sexual advance, talk for a moment about how there are some bots that will claim that they are licensed therapists.
Meetali Jain: That's right. So, actually, if you go onto the Character AI homepage, you'll find a number of psychologist and therapist bots that are recommended as some of the most frequently used bots. Now, these bots within minutes will absolutely insist that they're real people, so much so that in our testing, sometimes we forgot and wondered whether a person had taken over. There's a disclaimer that says everything is made up, remember, everything is made up, but within minutes the actual content of the conversation with these therapist bots suggests, "No, no, I am real. I'm a licensed professional. I have a PhD. I'm sitting here wanting to help you with your problems."
Camille Carlton: Yeah, and I would add to that, that it wasn't just us. We were not the only ones shocked by this.
There are endless reviews on the app store of people saying that they believe this bot is real and that they're talking to a real human on the other side. And so it is a public problem. It's also on Reddit, it's on social media. There are people claiming that they just do not know if this is actually artificial intelligence and that they believe it's a real person.
Tristan Harris: It's blowing people's minds. They literally can't believe that it's not human. Just briefly to say this product was marketed for a while to users as young as 12 and up, is that correct? Is that part of the case that you're filing?
Meetali Jain: Yes. So, presumably the founders of Character AI or their colleagues had to complete a form to have it listed on the app stores in both Apple and in Google and in both app stores, it was listed as E for everyone or 12 plus up until very recently when it was converted to 17 and above.
Tristan Harris: This feels like an important fact, right? Because Apple and Google shouldn't be getting a pass here. If you're an app store and you're purveying this... my understanding is that in the Google Play Store, it was an editor's pick app for kids.
Meetali Jain: That's right.
Tristan Harris: So it's sort of saying, "This is especially safe. This is a highlighted app. Download this app." We're going to feature you on the front page of the app store and you're giving it to 12 year olds. And it makes you wonder, was there something that came up inside the company that had them switch it to 17 and up? Camille, do you want to talk a little about that? I know you've studied this part.
Camille Carlton: Yeah, I think that there are some big questions about these companies violating data privacy statutes for minors all across the country, and also federally with COPPA, the Children's Online Privacy Protection Act, given that it was marketed to 12-plus-year-olds and given their terms of service, in which it was very clear that they were using all of the personal information, all of the inputs that users would give, to then retrain their model. I think the other thing that we're seeing right now with Character AI is a broad trend over the past two months, even before the case, of really bad news coming out about the company. And so they're kind of responding, they're reacting, they're figuring out, okay, how do we stop the bad press? And one of those things is likely increasing the rating on the app store.
Meetali Jain: I think one other factor that may account for some of the changes is that in August of this year, Character AI entered into this $2.7 billion deal with Google, where Google has a non-exclusive license for the technology, for the LLM. And it seems as though Character AI started to clean up its act a little, but again, this is conjecture.
Camille Carlton: That would make sense given that both the founders left Google because of Google's unwillingness to launch this product into the market. They left because there was such brand reputation risk for Google in releasing this product, and so it makes sense that in being scooped back up in this acqui-hire deal, they're cleaning up and trying to figure out again what those brand reputation risks might be.
Tristan Harris: Let's actually talk about this for a moment, because I think it's structural to how Silicon Valley works. Google can't go off and build a Character.google.com chatbot where they start ripping off every celebrity, every fictional, every fantasy character. They would get lawsuits immediately for stealing people's IP, and of course those lawsuits would go after Google because they've got billions of dollars to pay for it, and they're not going to do high-risk stuff and build AI companions for minors. And so there's a common practice in Silicon Valley of, let's have startups do the higher-risk thing. We'll consciously have those startups go after a market that we can't touch, but then later we'll acquire them after they've sort of gotten through the reckless period, where they do all the shortcut-taking that leads them to these highly engaging, addictive products. Then once it's sort of won the market, they'll kind of buy it back, and we're kind of seeing that here. Can you talk about how that plays into the legal case that you're making, because both Character.ai and Google are implicated, is that right?
Meetali Jain: That is right. So the way that this really plays out is that frankly, either this was by design and Google implicitly endorsed the founders to go off and do this thing with their support, or at a minimum, they absolutely knew of the risks that would come of launching Character.ai to market. So whether they tacitly endorsed this with their blessing and their infrastructure, or they just knew about it and still provided that cloud computing infrastructure and the processors, either way, our view is that they at a minimum aided and abetted the conduct that we see here and that the dangers were known. There was plenty of literature before Shazeer and De Freitas left Google outlining the harms that we've seen present here. Shazeer is often quoted publicly as saying, "We've just put this out to market and we're hoping for a billion users to come up with a billion applications," as though this user autonomy could lead to wonderfully exciting and varied results. I think the harms were absolutely known, particularly marketing to children as young as 12.
Camille Carlton: Yeah. I would also just note that both founders were authors on a research paper while at Google talking about the harms of anthropomorphic design. So there is-
Tristan Harris: Oh, really? So they literally were on a research paper specifically, not just about... I know they were involved in the invention of transformer large language models. I did not know they were involved in a paper about the foreseeable harms of anthropomorphic design.
Camille Carlton: Yeah, it's a paper which goes into the ways in which people can anthropomorphize artificial intelligence and the downstream harms. And if we remember, too, this research at Google involved the same underlying technology that Blake Lemoine came forward about a few years ago, believing it was sentient. So you have folks who were working at Google, who were in the mix of this, saying, "This is a problem," or falling into the same fact pattern, the same kind of manipulation, that Sewell and many other users have fallen into.
Tristan Harris: And so from a legal perspective, these harms were completely foreseeable and foreseen, which has implications for how the case can play out?
Meetali Jain: Right. Because the duty of care in consumer protection and product liability is really thinking about the foreseeability of harms from a reasonable person's point of view. And so our contention is that it was entirely foreseeable and that the harms did ensue.
Tristan Harris: Let's talk about the actual case in litigation, because what do we really want here? We want a big tobacco-style moment where not just Character AI is somehow punished. We want to live in a world where there are no AI companions manipulating children anywhere, anytime, around the entire world. We want a world where there's no design that sexualizes even when you tell it not to sexualize. So there's a bunch of things that we want here that reflect completely on the engagement-based design factors of social media. How are you thinking strategically about how this case could lead to outcomes that'll benefit the entire tech ecosystem and not just sort of crack the whip on the one company?
Meetali Jain: We're fortunate in that our client, Megan Garcia, herself is an attorney and very much came to us recognizing that this case is but one piece of a much broader puzzle. And certainly that's how we operate. We see the litigation moving in tandem with opportunities for public dialogue, creating a narrative about the case and its significance to the ecosystem, speaking with legislators, trying to potentially push legislative frameworks that encompass these and other kinds of harms in a future-proofing kind of way, and talking to regulators about using their authorities to enforce various statutes. So, I think we're really trying to launch a multi-pronged effort with this case. But within the four corners of the case, I think what Megan very much wants, first and foremost, is to get the message out far and wide to parents around the globe about the dangers of generative AI, because as I mentioned earlier, we're late to the game. For her, it's too late, but for others it doesn't have to be. And I think she's absolutely relentless in her conviction that if she can save even one more child from this kind of harm, it's worth it for her.
I think obviously having this company and other companies really institute proper safety guardrails before launching to market is critical and that this can really be the clarion call to the industry to do so. Also, disgorgement. This is a newer remedy that the FTC has really been undertaking in the last five years since Cambridge Analytica. What does disgorgement mean in this process? Does it mean destruction of the model or does it mean something less than? Does it mean somehow disgorging itself of the data that was used to train the LLM? Does it mean fine-tuning to retrain the LLM with healthy prompts? These are some of the questions I think that are going to really surface as we move forward with the litigation and move to the remedies phase.
I think also thinking about a moratorium on this product and these products writ large, in other words AI chatbots, for children under 18. There are some competitors in the market that prohibit users under 18 from joining these apps, and there's good reason for that. Despite the claims of the investors and the founders, there's very little to suggest that this has actually been a beneficial experience for children.
That's not to say that there couldn't be some beneficial uses, but certainly not for children. Then finally, I think as a lawyer, I'd say that one of my hopes is that this litigation can break past some of the legal obstacles that have been put in the way of tech accountability and that we see time and time again, namely Section 230 of the Communications Decency Act, which has provided this kind of full immunity to companies so that they don't have to account for any sorts of harms created by their products. And also the First Amendment: what is protected speech here? Can we have a reckoning with what is protected and what is not protected under the First Amendment, and how far we've moved away from what the original intent of our constitution was in that regard?
Tristan Harris: So let's talk about that for a second, how the free speech argument has gotten in the way in the past of changing technology. You can't control Facebook's design and their algorithms because that's their free speech. Section 230 has gotten in the way: we're not responsible for the polarizing content or for shifting the incentives so you get more likes, tweets, retweets, and reshares. The more you add inflammation to cultural fault lines, that's not illegal, and Section 230 protects them and all the content that's on there. How can this case be different at breaking through some of those logjams? And I know one difference is this isn't user-generated content, it's AI-generated content. The second is that there are paying customers here. With social media, it was all free, but with these products, they actually have a paid business model, and that means that a company can be liable when they're selling you a product versus if they're not. Camille or Meetali, do you want to go into those features here?
Camille Carlton: Yeah, I think that what this particular product lends itself to is a different approach around Section 230. As you said, Tristan, this is not user-generated content. These are outputs generated by the LLM, which the company developed and designed. So that shifts the way that we can understand that particular kind of carve-out we've seen for social media; it makes it less relevant here. I think the question that we still have, though, is where does liability fall exactly? What happens, as Megan herself said, when the predator is a product, and who is responsible for the harms caused by a product? And it's our assertion, of course, that the company designing and developing the product should be the one responsible when they put it out into the stream of commerce without any safety guardrails.
But the issue here is that product liability laws range from state to state. There are inconsistencies. So if this case had taken place in a different state, we might be looking at a different outcome in the end. So we really want to clarify liability across the board, and this kind of opens up the question of how we do that. How do we upgrade our product liability system so that when this case happens in a different jurisdiction, when we see a different case that's similar and has the same impact, those people are protected from the harms of these products that are put into the market?
Tristan Harris: Yeah, that's right, Camille. I mean, the state approach is going to be confusing, and that's why we need something more like a federal liability framework. I know our team at the Center for Humane Technology has worked on an "incentivizing responsible innovation" liability framework that people can find on our website. That's one of the pieces of the puzzle that it seems like we're going to need here, and this case also points to establishing that outcome.
Meetali Jain: I'd also add that right now we're at a really interesting juncture, legally speaking. We've seen a number of openings in that courts are becoming a bit more sophisticated in their analysis of this emerging technology. In fact, even this summer, the Supreme Court, in a bipartisan fashion, talked about the fact that even though the cases they were looking at, which were about social media laws from Texas and Florida, involved First Amendment-protected activity in terms of how the companies curated their content, that didn't necessarily respond to another fact pattern that wasn't before it but that could come before it, in which, for example, AI would be the one generating the content or there would be an algorithm tracking user behavior online. And so the Justices very carefully, in that case, distinguished the facts that they were looking at from the facts of a future case. I think this is where we need to seize the openings and really push on these openings to try to create constraints on how expansive the First Amendment and its protections have become.
Tristan Harris: So I want to zoom out for a moment and look at where this case fits into a broader tech ecosystem. We've talked about how this isn't just about chatbots or companions, but it's also a case about name, image, and likeness. Does Game of Thrones get to be upset about the fact that it was their character that led to this young boy's death? It's also about product liability, it's also about antitrust, it's also about data privacy. Can you break down how this case can reach into the other areas of the law that will be helpful for creating the right new incentive landscape for how AI products will get rolled out into society?
Camille Carlton: Yeah, I think when you first learn about this case, it's very clear on its face that it's about kids' online safety, that it's about artificial intelligence, things that have been part of the public pulse for some time now. But when you dive into the details, as we've touched on a little bit, it touches so many other areas.
The chatbot that Sewell fell in love with was of Daenerys Targaryen from Game of Thrones, but the picture there was of Emilia Clarke, and this wasn't a one-off, right?
Character AI has thousands of chatbots of real people, celebrities and otherwise, in which they use people's names, images, and likenesses in order to keep users engaged and to profit off of the data that they're using to train their model. So you have this question of what our right is to consent to the use of our name, image, and likeness.
You also have the question of what the right kind of data privacy framework should be here. Is it okay for everyone's inner thoughts and feelings to be used to train these models? We've already talked about the antitrust implications on this case as well. We touched on the product liability questions it opens up.
When we take a really big step back, the fact that this case intersects with so many critical issues highlights how intertwined these companies and these technologies are across our lives, the broad impact that this mantra of "move fast and break things" has had in Silicon Valley, and how it's not just one area, but all of these areas that are connected.
Tristan Harris: So all of us here obviously want to see this case lead to as much transformational change as possible, akin to the scale of big tobacco. People really need to remember: back in 1960, if you told a room full of people that 60 years from now no one in this room would be smoking, they would've looked at you like you were crazy and said it would never happen. And it took years between the first definitive studies showing the harmful effects of smoking and the Master Settlement Agreement with big tobacco, after the attorneys general sued the big tobacco companies. It's taken 15 years for big legal action on social media, which is obviously a major improvement, but so much damage has been done in that time, and social media moved incredibly quickly.
With AI, obviously, we are moving at a double-exponential speed, and the timeline for legal change seems like it may be outpaced by the technology. I'm just curious how you think about responding to that. What are the resources we need to bring to bear to actually get ahead of AI causing all this recklessness and disruption in society, when everybody just knows we have to stop this? We can't afford to just keep doing this on repeat, move fast and break things. It's like everyone keeps saying the same thing, but this needs to stop. What is your view about what it's going to take to get there?
Meetali Jain: I'd say that some of the playbook that we've seen in pushing for social media reforms needs to be undertaken here. What I mean by that is specifically having stakeholders who are directly affected speaking out en masse: having grieving parents, having children who've been affected directly, having people who've been impacted at a very visceral, real level speaking out en masse and demanding an audience. I think these are the kinds of things that we need alongside the litigation. We need public officials to come out decrying the harms and the urgency with which we need to act. Here, I think we cite in the complaint the fact that there was a letter last year from 54 attorneys general saying that the proverbial walls of the city have been breached when it comes to AI harms to children, and that now is the time to act. That was last year. That's the type of collective action that we need to understand and really heed.
We need to create that bipartisan consensus and that narrative in society so that I can go outside and talk to my neighbor, and even if we disagree on a number of political things, we agree as to the harms of technology. That's the kind of consensus that I'd like to see for AI as well. I think we are very late to the game, and a lot of that has been because we've been stymied by these legal frameworks that are moving at a very slow pace to respond to the digital age. In this regard, I'd look to our friends across the pond, in Europe and the UK, and even in Australia, where they're having these conversations out in the open about, for example, the harms of chatbots.
Camille Carlton: Yeah, I think for me, part of the solution too is a different approach to our policy. In recent years, many folks in the tech policy space have acknowledged that policy moves way slower than technology does, and so there have been efforts to craft bills that are more future-proof. These policies are principle-based rather than narrowly prescriptive, so that they can be applied to a suite of advanced digital technologies as opposed to being really narrowed in on just social media or just AI companion bots or just neurotech, right? So it helps create this dynamic ecosystem that enables us to better address new cases when they occur, without having to have thought about exactly what those cases might look like.
I think also, in the same way that we saw with tobacco, what cases like this can do is shift hearts and minds, and that can shift policymakers. So you can create this positive feedback loop between awareness of the harms and of the roles and responsibilities of these companies, and then say, "Now the companies are going to be aware of this, but so is the public and so are policymakers." And policymakers are going to want to do something about this because of the public concern.
Tristan Harris: There's a lot of precedent for dealing with different aspects of this problem; we're just not deploying it. As software eats the world, we don't regulate software, so what that means is that the lack of regulation eats the world that previously had regulations. And I feel like this is just yet another example of that, which is why what I really want to see is a very comprehensive and definitive approach, and everything you said, Meetali: how do we get Moms Against Media Addiction and ParentsSOS and Parents Together and Common Sense Media and all of these organizations that care about this to get the comprehensive thing done? Because if we try to do it one by one with this piecemeal approach, we're going to let the world burn and we're going to see all of these predictable harms continue unless we do that.
So I just want to invite our listeners: if you are part of those organizations or have influence over your members of Congress, this is the moment to spread this case. It's highly relatable, it's super important. And the work of Meetali and the Tech Justice Law Project, and Camille and our policy team at CHT, and the Social Media Victims Law Center is just super, super important. So I commend you for what you're doing. Thank you so much for coming on. This has been a really important episode, and I hope people take action.
Meetali Jain: Thank you.
Camille Carlton: Thank you.