This Moment in AI: How We Got Here, and Where We’re Going
It’s been a year and a half since Tristan Harris and Aza Raskin laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma.
In this conversation, they discuss what’s happened since then–as funding, research, and public interest in AI has exploded–and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.
This is an interview from our podcast Your Undivided Attention, on August 12, 2024. It has been lightly edited for clarity.
Tristan Harris: Hello, everyone. Welcome to Your Undivided Attention. I'm Tristan.
Aza Raskin: And I'm Aza. And we're actually going to flip it around today and have Sasha Fegan, our executive producer here for Your Undivided Attention, actually get in front of the microphone and interview us.
Aza Raskin: Sasha, welcome to Your Undivided Attention. You are the background actual host of this podcast.
Sasha Fegan: Thanks so much, Tristan, and hi Aza. It's so nice to be around this side of the microphone going from the background host of the podcast to the host for this episode. I'm really excited to be here.
Sasha Fegan: It's summertime in the US. And while things are a little bit slower, and everyone's at the beach, I thought it'd be a great opportunity to just take a breath and reflect about where we are at this moment in the tech landscape. It seems like yesterday, but it was actually a whole year and a half ago that you guys recorded this video called The AI Dilemma, which surprised us all by going viral all around the world. You guys did such a great job of forecasting how the AI race was going to play out, and I'd love to just get a sense of where you think we're headed today.
Sasha Fegan: The other thing I really want to do in this episode is get a little bit of a readout from all of the travels that you've been doing all around the world, including the AI for Good Summit that you went to in Geneva this past spring, and all of the amazing conversations that you have behind the scenes with policymakers and folks in the tech industry.
Sasha Fegan: And the third thing I want to get to in this episode is your reflections on some of the really big developments we've seen in the social media reform space in the US, particularly the passage of some legislation around kids' online safety, which we know is now even more important than ever given how AI is going to supercharge social media harms.
Sasha Fegan: Let's get it started with a reflection on The AI Dilemma. Just give us a top line of what you talked about in that video, and we'll go from there.
Tristan Harris: The essence of The AI Dilemma talk that we did in March of 2023, which really launched this next chapter of CHT's work, extending from social media to AI ... That talk, The AI Dilemma, was really about how these competitive forces drive us to unsafe futures with technology. We saw that with social media, where the competitive forces driving the race to get attention, the race to get engagement, drove the race to the bottom of the brain stem. That then sort of turned our world inside out into the addicted, distracted, polarized society that we have now. And how, with AI, it's not a race for attention. It's a race to roll out. A race to take shortcuts to get AI into the world as fast as possible and onboard as many people as possible.
Tristan Harris: And since The AI Dilemma talk a year and a half ago, we've seen more and more AI models scaled up even bigger with more and more capabilities and society less and less able to respond to the overwhelm that arises from that.
Aza Raskin: The other thing that we talked about in The AI Dilemma was just ... What is this new AI? What's different this time? Why does it all seem to be going so fast? And what we talked about was that, well, it used to be that, because the different fields of AI were separate, progress was pretty slow. And then in 2017, that changed. There was a breakthrough at Google. A technology invented called transformers, which all large language models are now based on. And essentially, they taught computers how to see everything as a kind of universal language. And every AI researcher was suddenly working on the same thing, having AI speak this kind of universal language of everything. And that's how we get to this world now with Sora, Midjourney, ChatGPT, where, if you can describe something, AI will make it. And that was one thing.
Aza Raskin: And the other thing that we talked about was the scaling laws. That is, how quickly AI gets better just by putting more money in. Before, dumping lots of money into making an AI didn't really make it smarter. But after, more money meant more smarts. And that's really important to get. More money means more smarts. That's brand new with this kind of AI. The companies are now competing to dump more billions of dollars into training their AIs so they can outcompete their competitors, and that's what's causing this insane pace.
Sasha Fegan: When we're talking about money, what are the big sums of money?
Tristan Harris: We know, roughly, that GPT-4 was trained with around $100 million of compute, and the next models are rumored to be trained in $1 billion to $10 billion training runs. And when you scale up by a factor of 10, out pop new capabilities.
Sasha Fegan: So much has happened in the last couple of years, and I'm really interested to know about the conversations that you guys are having in the Bay Area. Whenever I talk to you guys, you tell me about interesting conversations that you've been having and how it's checking your perspective on things. I'm just wondering if you can walk us through how those conversations around AI have evolved over the last two years and what you are hearing on the ground, as it were.
Aza Raskin: One of the weird things about wandering around the Bay Area is the phrase, "Can you feel the AGI?" That is, the people that are closest ... I know, right?
Sasha Fegan: Seriously?
Tristan Harris: Feel the AGI. There's T-shirts with this on.
Aza Raskin: [inaudible 00:05:21]-
Sasha Fegan: There's T-shirts with it on? Oh, my God.
Aza Raskin: I've walked into dinners, and the first thing that somebody said to me is, "You're feeling the AGI." He looked at my face. I was really concerned. I actually hadn't been sleeping because, when you metabolize how quickly everything is scaling up and the complete inadequacy of our current government or governance to handle it, it honestly makes it hard for me to sleep sometimes. And I walked in. He looked at my face. He's like, "Ah, you're feeling the AGI, aren't you?"
Sasha Fegan: This is AGI, as in artificial general intelligence, which some people outside of the Bay Area don't ever think we're actually going to get to. You are talking about something which is ... It's just normal in the Bay Area to be working towards that and thinking about it.
Aza Raskin: And it should be really clear here, because there is debate inside of both the academic community and the labs of ... Does the current technology, these transformer-based large language models ... Will it get us to something that can replace most human beings on most economic tasks? That's the version of AGI. The definition that I like to use. And the people that believe that scale is all we need say, "Look, if we just keep scaling and project out the graph of how smart the systems have been: four years ago, it was sort of at the level of a preschooler; GPT-4 is at the level of a smart high schooler; the next models coming out ... Maybe they'll be at PhD level. You just project that out, and by 2026 or 2027 they will be at the level of the smartest human beings and perhaps even smarter. There's nothing that stops them from getting smarter."
Aza Raskin: And there are other people that say, "Hey, actually, large language models aren't everything that we're going to need. They don't do things like long-term planning. We're one more breakthrough away from something that can really just be a drop-in human replacement."
Aza Raskin: Either one of these two camps ... You either don't need any more breakthroughs, or you're just one breakthrough away. We're very, very close. At least, that's the talk inside of Silicon Valley.
Tristan Harris: If you talk to different people in Silicon Valley, you really do get different answers, and it really feels confusing sometimes. And I think the point that Aza was making is that, whether it is slightly longer, closer to, I don't know, five to seven years versus one to two years, it's still not a lot of time to prepare for that. And when artificial general intelligence-level AI emerges, you'll want to have major interventions way before that. You won't want to have done it ... You won't want to be starting to figure out how to regulate it after that occurs. You want to do it before.
Tristan Harris: And I think that was the main mission of The AI Dilemma: How do we make sure that we set the right incentives in motion before entanglement, before it gets entrenched in our society? You only have one period before a new technology gets entangled, and that's right now.
Sasha Fegan: I mean, it's hard sitting all the way over here in the suburbs of Sydney, Australia. And I do have a sense from my perspective that there's been a little bit of hype. Some of the fear about AI hasn't translated. I mean, it hasn't transformed my job yet. My kids aren't really using it at school. And when I try to use it, honestly, I find it a little bit crappy and not really worth my while.
Sasha Fegan: How do you sort of take that further and convince someone like me to really care? And what's the future that I'm imagining, I guess, even for my job five or 10 years into the future?
Tristan Harris: I think one thing that's important to distinguish is how fast AI capabilities are coming versus how fast AI will be diffused or integrated into society. I think diffusion or integration can take longer, and I think the capabilities are coming fast. I think people look at the fact that the entire economy hasn't been disrupted so quickly, and that creates some skepticism around the AI hype.
Tristan Harris: I think certainly, with regard to how quickly this transformation can take place, that level of skepticism is warranted, but I do think that we have to pay attention to the raw capabilities. If you click around and find the corner of Twitter where people are publishing the latest papers and AI capabilities, you'll be humbled very quickly by how fast progress is moving.
Aza Raskin: I think it's also important to note there is going to be hype. Every technology goes through a hype cycle where people get overexcited-
Sasha Fegan: And we're seeing that now, right?
Aza Raskin: And we're seeing that. Exactly.
Sasha Fegan: Millions of dollars. OpenAI is supposed to be losing potentially $5 billion this year. There's a bit of a feel of ... Is a kind of crypto-style crash coming for the energy around AI at the moment?
Aza Raskin: Right. Exactly. And that happens with every technology. That is true, and also true are the raw capabilities that the models have and the amount of investment into data centers and compute centers that companies are making now. Microsoft is right now building a $100 billion computer, essentially a compute supercenter.
Sasha Fegan: I do want to move on now to questions around data because there's been a huge amount of reporting recently about how large language models are just super hungry for human-generated data, and they're potentially running out of things to hoover up and ingest. And there's been predictions that we might even hit a data wall by 2028.
Sasha Fegan: How is this going to affect the development of AI?
Aza Raskin: I mean, it's a real and interesting question. If you've used all of the data that's easily available on the internet, what happens after that? Well, a couple things happen after that.
Aza Raskin: One, and we're seeing this, is that all the companies are racing for proprietary data sets. Sitting inside of financial institutions, sitting inside of academic institutions, is a lot of data that is just not available on the open internet. It's not exactly the case that we've just run out of data. The companies may have run out of easily accessible open data.
Sasha Fegan: Free data?
Aza Raskin: Free data. The second thing is that there are a lot of data sources that require translation. That is, there's a lot of television and movies, YouTube videos, and it takes processing power to convert those into, say, text. But that's why OpenAI created Whisper and these other systems. There's a big push in the next models to make them multimodal. That is, not just speaking language but also generating images, also understanding videos, understanding robotic movements. And it is the case with GPT-4-scale models that, as they were made multimodal, they didn't seem to be getting that much smarter, but the theory is that's because they just weren't big enough. They couldn't hold enough of every one of these modalities at the same time. There's some big open questions there.
Aza Raskin: But when we talk to people on the inside, and these are not the folks like the Sam Altmans or the Darios that have an incentive to say that the models are just going to keep scaling and getting better. What we've heard is that they are figuring out clever ways of getting over the data wall and that the scaling does seem to be progressing. We can't, of course, independently verify that, but I'm inclined to believe them.
Sasha Fegan: Some companies are turning to AI-generated content to fill that void. This is what they call synthetic data. What are the risks of feeding AI-generated content back into the models?
Aza Raskin: Generally, when people talk about the concerns of synthetic data, what they're talking about is sort of these models getting high off their own exhaust, which is that if the models are putting out hallucinations and they're trained on those hallucinations, you end up in this sort of downward spiral where the models keep getting worse. And in fact, this is a concern. Last year, Sam Altman said that one out of every 1,000 words that humanity was generating was generated by ChatGPT.
Sasha Fegan: That's incredible. That is absolutely incredible.
Aza Raskin: Incredibly concerning, right? Because that shows that, not too far into the future, there will be more text generated by AI models, more cognitive labor done by machines, than by humans. That's in and of itself scary. And of course, if AIs can't distinguish what AI generated and what it didn't, and they're trained on that data, you might get that sort of downward spiral effect. That's the concern people have.
Aza Raskin: But when they talk about training on synthetic data, that concern does not apply because they're making data specifically for the purposes of passing benchmarks, and they create data that are specifically good at making the models better. That's a different thing than sort of getting high on your own exhaust.
Sasha Fegan: Right, but it leaves us in a culture where we're surrounded or have surround sound of synthetically created data or non-human-created data potentially.
Aza Raskin: That's right.
Sasha Fegan: Then you have non-human-created information around us.
Aza Raskin: Well, and this is how you can get, without needing to invoke anything sci-fi or anything AGI, to humans losing control. Because this is really the social media story playing out again. Everyone says, "When an AI starts to control humanity, just pull the plug," but there is an AI in social media. It's the thing that's choosing what human beings see, it's already downgrading our democracies, all the things we normally say, and we haven't pulled the plug because it's become integral to the value of our economy and our stock market.
Aza Raskin: When AIs start to compete, say, in generating content in the attention economy, they will have seen everything on the internet. Everything on Twitter. They'll be able to make posts and images and songs and videos that are more engaging than anything that humans create. And because they are more engaging, they'll become more viral. They will out-compete the things that are sort of bespoke human-made. You will be a fool if you don't use those for your ends.
Aza Raskin: And now, essentially, the things that AI is generating will become the dominant form of our culture. That's another way of saying humans lost control.
Tristan Harris: And to be clear, Aza's not saying that the media or images or art generated by AI are better from a values perspective than the things that humans make. What he's saying is they are more effective at playing the attention economy game that social media has set up to be played because they're trained on what works best, and they can simply out-compete humans for that game, and they're already doing that.
Sasha Fegan: It's terrifying. We'll still have art galleries and places that are offline, though, that don't have AI-generated content.
Aza Raskin: It'll be artisanal art.
Sasha Fegan: Artisanal art.
Sasha Fegan: All right. Let's get on to what you guys have been up to, because you are always so busy. I can barely book you into a podcast because you're off jet-setting around the world and talking to important people. I know you went to AI for Good recently. Tell me about that. Where was it? Who did you talk to?
Tristan Harris: We were at the United Nations AI for Good Conference in Geneva with a lot of the major leaders in AI. Digital ministers from all the major countries. We ran into a lot of friends and allies. We saw Stuart Russell who, for those who don't know, wrote the original textbook on artificial intelligence. If you've been through an AI class at a major university, you've read his textbook. He's very much on the side of safety, and he talks about how there's currently at least a 1,000-to-one gap, he estimates closer to 2,000-to-one, in the amount of money that's going into increasing the power of AI versus going into increasing the safety and security of AI.
Tristan Harris: And he gives examples of how that's not true of other industries. For example, he quoted his friends at Berkeley, I think, who work on nuclear issues: for every one kilogram going into a nuclear reactor, there's seven kilograms of paperwork to make it safe. It's not like that with AI. When Sam Altman and co are making GPT-5, it's not that for every $1 they spend on building GPT-5, they spend $7 on the safety work of making GPT-5 safe. If we were at the nuclear ratio, that's what it would look like.
Sasha Fegan: That is such an interesting reflection. Aza, what were your thoughts on the AI for Good Summit in Geneva?
Aza Raskin: I just wanted to name a phrase, actually, Tristan, that you coined when we were there. And this was when I was sitting in the lecture hall and listening. I think it was actually someone from Google who was talking about AlphaFold 3, and she was talking about how, before, it would take 10 years for them to find an enzyme that might, say, break down plastics in our environment, but they had used AlphaFold 3 to discover an enzyme within hours, and how cool that was. It is really cool.
Aza Raskin: But she, of course, didn't then say the next step, which is that the same tool could be used to create an enzyme that might eat human flesh or do any number of terrible things. And in fact, this was a thing we saw time and time again, like in the open source panel, where they were supposed to be talking about the risks and the opportunities of open source. Everyone on the panel only talked about the opportunities. No one would really touch the risks. And what was frustrating is that it was a kind of ... I said it was sort of like gaslighting. And actually, Tristan turned to me and said, "No, this is half-lighting. They're only telling half the truth."
Aza Raskin: And it's frustrating because, if we only talk about the good things, then we are ill-equipped to actually handle the downsides, which means we're much more likely to have the downsides smack us in the face. And so, my big request from everyone is: let's stop half-lighting. Let's acknowledge the good at the same time as we acknowledge the harms, and then we'd be able to wayfind and navigate much better.
Aza Raskin: And one of the experiences Tristan and I had being there is person after person, whether it's just an attendee or a diplomat at the highest level, would come up to us and say, "Thank you for saying what you're saying. Thank you for talking about the incentives. Thank you for not half-lighting us." And it just made it clear to us not that we're so special. It's that there aren't enough people that aren't captured by, say, what their company requires them to say so that everyone has this feeling of just not being told the full truth.
Tristan Harris: The full truth. One of the other things that really blew me away, actually, walking around AI for Good was all of the people who listen to the podcast. I remember, Aza, we met the head of IKEA's responsible AI innovation, and they had used The AI Dilemma to sort of guide some of their policy.
Aza Raskin: We had the Cuban minister, right?
Tristan Harris: The Cuban digital ministry, which works on policy, and they wanted our help with some stuff on autonomous weapons. They had just listened to the episode on autonomous weapons. I was just blown away by how many policymakers who are working on these issues follow the podcast, and I just want to thank all of you listeners, because it makes us feel like our work is really ... We're trying to impact things in the world. And one of the people who came up to us was Swiss diplomat Nina Frey, who told us about some of the work that she's been inspired to do because of the podcast. We actually asked her to send a voice memo after you ran into her, Aza, so let's take a listen to that.
Nina: Hi, Aza and Tristan, this is Nina. I'm a Swiss diplomat currently working on tech diplomacy. I think it was in April 2023 when you released your podcast episodes on the three rules to govern AI.
Nina: After listening to that and your thoughts about bringing the actors to the table to make them cooperate, I thought that would be something that Switzerland could also contribute to. And we launched, together with ETH Zurich, an initiative called the Swiss School for Trust and Transparency, which wanted to contribute concrete actions to really bridge the time gap from now until proper regulation is in place.
Nina: Fast-forward to today. This has led to one initiative, among others, that really tries to create a virtual network for AI, which invites partners to contribute to resource pooling in three pillars, compute, data, and capabilities, to give more equitable access to AI research.
Nina: And your podcast of one and a half years ago was the kickoff initiator of this thought that led to so much. I really wanted to thank you for that and for your continued work toward more safe and equitable access to AI. Thank you.
Aza Raskin: That's so awesome to hear. It can feel really powerless as a human being seeing the tidal wave of AI coming. What can we possibly do? And without being Pollyannaish about it, there is a way that I think clarity can bring agency, and it's not ... This is not the kind of thing that we're going to be able to ever do alone. Not any single one of us. This is always going to be a coordination kind of problem. And seeing that there can be decentralized action where each person who listens to this podcast or otherwise is informed can say, "What can I do in my sphere of agency?"
Aza Raskin: If we all did that, the world would be a much better place. And this is one of those examples of it happening in practice in ways that we could never have possibly imagined.
Tristan Harris: One of my favorite parts about walking around that center in Geneva was the sense that the movement was seeing itself. Feeling the movement. I remember I was talking to Maria Ressa, a former podcast guest who won the Nobel Peace Prize for her work, and what she said after The Social Dilemma launched is that the movement needs to see itself. There's a lot of people who are working on this, but when you're a person who's working on it, your felt sense is, "I'm alone. I don't feel the other humans that are working on this."
Tristan Harris: And so, how do we actually have the humans that are listening to this podcast feel the other humans that are listening to this podcast and then doing real things in the world because of it? And so, one of our thoughts with this episode is trying to bring more of that to light for people so they can feel that there is progress, slowly but surely, being mobilized.
Sasha Fegan: Well, that's a really good segue into what I wanted to talk about next, actually, which is that the work that CHT has been doing on AI is really on a continuum with the work that the organization first started doing on social media, and I think that's something people don't always understand very well. I'd love for you to have a go at explaining that.
Tristan Harris: The key thing to understand that connects our work on social media to AI is the focus on how good intentions with technology aren't enough. It's about how the incentives that drive how that technology gets rolled out or designed or adopted lead to worlds that are not the ones that we want. A joke that I remember making when we were at AI for Good was: imagine you go back 15 years, and we go to a conference called Social Media for Good. I could totally imagine that conference. In fact, I think I almost went to some of those conferences back in the day. Because everyone was so excited about the opportunities that social media presented, me included. I remember hearing Biz Stone, the co-founder of Twitter, on the radio in 2009 talking about someone sending a tweet in Kenya and getting retweeted twice, and suddenly everybody in the United States saw it within 15 seconds. And it's like, that's amazing. That's so powerful. And who's not intoxicated by that?
Tristan Harris: And those good use cases are still true. The question was ... Is that enough to get to the good world, where technology is net synergistically improving the overall state and health of the society? And the challenge is that it was going to keep providing these good examples, but the incentives underneath social media were going to drive systemic harm, a systemic weakening of society. Shortening of attention spans. More division. Less of an information commons driven by truth, and more of the incentives of clickbait. The outrage economy. So on and so forth.
Tristan Harris: And so, the same thing here. Here we are 15 years later. We're at the UN AI for Good Conference. It's not about the good things AI can do. It's about ... Are we incentivizing AI to systemically roll out in a way that's strengthening societies? That's the question.
Aza Raskin: It's worth pausing there, because it's not like we are anti-AI or anti-technology. It's not that we are placing attention on just the bad things AI can do. It's not about us saying, "Let's look at all the catastrophic risks or the existential risks." That's not the vantage point we take. The vantage point we take is ... What are the fragilities in our society that we are going to expose with new technology, that are going to undermine our ability to have all those incredible benefits? That is the place we have to point our attention to. We have a responsibility to point our attention there, and I wish there were more conferences that weren't just AI for Good but AI for making sure that things continue.
Tristan Harris: Just one metaphor to add on top of that, which I've liked using recently and which you've mentioned a few times, is this Jenga metaphor. We all want a taller and more amazing building of benefits that AI can get us, but imagine two ways of getting to that building. One way is we build that taller and taller building by pulling out more and more blocks from the bottom. We get cool AI art that we love, but by creating deepfakes that undermine people's understanding of what's true and what's real in society. We get new cancer drugs, but by also creating AI that can speak the language of biology and enable all sorts of new biological threats at the same time.
Tristan Harris: We are not people who are ... We are clearly acknowledging that the tower is getting taller and more impressive exponentially faster every year because of the pace of scaling and compute and all the forces we're talking about, but isn't there a different way to build that tower than to keep pulling out more and more blocks from the bottom? That's the essence of the change that we're trying to make in the world.
Aza Raskin: And this is why, just to tie back to something you said before, half-lighting is so dangerous because half-lighting says, "I'm only going to look at the blocks I placed on the top, but I'm-
Tristan Harris: That's right.
Aza Raskin: ... going to ignore that I'm doing it by pulling a block out from the bottom."
Tristan Harris: That's right. Exactly.
Sasha Fegan: What are some solutions to these problems? What kind of policies can we bring in on a national level?
Aza Raskin: There are efforts underway to work on a sort of more general federal liability coming out of product law for AI. And I just wanted to have a call-out to our very talented policy team at CHT. Our leaders there, Casey Mock and Camille Carlton. They're often more behind the scenes, but you'll be able to listen to them in one of our upcoming episodes to talk about specific AI policy ideas around liability.
Aza Raskin: And another just very common-sense solution, and we can tie this back to the Jenga metaphor, is how much money, how much investment, should be going into upgrading our governance. We can say that at least 15% to 25% of every dollar, of the trillions of dollars going into making AI more capable, should go into upgrading our ability to govern and steer AI, as well as the defenses for our society. Right now, we are nowhere near that level.
Sasha Fegan: But who makes the decision about what should be spent on safety? I mean, is that something that happens on a federal level? Is that something that happens on an international level? Or do we trust the companies to make those decisions for themselves?
Tristan Harris: You can't trust the companies to make decisions for themselves because then it becomes an arms race for who can hide their costs better and spend the least amount on it, which is exactly what's happening. It's a race to the bottom. As soon as someone says, "I'm not going to spend any money on safety, and suddenly I'm going to spend the extra money on GPUs and going faster and having a bigger, more impressive AI model so I can get even more investment money," that's how they win the race.
Tristan Harris: And so, it has to be something that's binding all the actors together. We don't have international laws that can make that happen for everyone, but you can at least start nationally and use that to set international norms that globally we should be putting 25% of those budgets into it.
Sasha Fegan: This conversation, like a lot of the conversations we have on this show, can feel a little bit disempowering because it can be hard to get a sense of progress on these issues, but there have actually been some big wins for the movement. And I'd love to get your guys' thoughts on these, especially on the social media side.
Tristan Harris: There's actually a lot of progress being made on some of the other issues that CHT has worked on, including the surgeon general in the United States, Vivek Murthy, actually issuing a call for a warning label on social media. And while that might seem kind of empty ... Or, what is that really going to do? If you look back to the history of big tobacco, the surgeon general's warning was a key part of establishing new social norms that cigarettes and tobacco were dangerous, and I think that we need that set of social norms for social media.
Tristan Harris: Another thing that happened is this group, Mothers Against Media Addiction, which we talked about the need for a couple of years ago. Julie Scelfo has been leading the charge, and that has led to in-person protests in front of Meta's campus, in New York, and in other places. And I believe Julie and MAMA were actually present in New York when they passed the ban on infinite scrolling.
Tristan Harris: Recently, 23 state legislatures have passed social media reform laws, and the Kids Online Safety Act just passed the United States Senate, which is a landmark achievement. I don't think anything has gotten this far in tech regulation in a very long time, and President Biden said he'll sign it if it comes across his desk, and that would be amazing. It would create a duty of care for minors who use these platforms, which means the platforms would be required to take reasonable measures to reform design for better outcomes. It doesn't regulate what minors search for on the platforms, which addresses the concern that it would have a chilling effect on free speech, especially for LGBTQ minors. This is, I think, progress to celebrate.
Sasha Fegan: And I just want to say as well some of the most passionate advocates for these bills have been the parents of children who were injured and, in some cases, even died because of the use of these platforms. And I know you guys have met some of those parents and Center for Humane Technology has had a lot of opportunity to work with some of those parents over the past few years, and we've reached out to a few of them to get their stories on the podcast. I would love to get your reactions to some of these tapes.
Kristin Bride: This is Kristin Bride, and I'm a social media reform advocate. I came to this role in the worst way possible. In June 2020, I lost my 16-year-old son, Carson, to suicide after he was viciously cyberbullied by his high school classmates over Snapchat's anonymous apps.
Kristin Bride: When I learned of this, I reached out to YOLO, one of the anonymous apps, which had policies in place that they would reveal the identities of those who cyberbully and ban them from the app. Yet when I reached out to them on four separate occasions letting them know what happened to my son, I was ignored all four times. And it was really at this point that I had a decision to make. Do I accept this? Or do I begin to fight back?
Kristin Bride: I chose to fight back, but I had absolutely no idea where to turn. I had watched The Social Dilemma, and I decided to reach out to the Center for Humane Technology and tell them my story and ask if they could help. They, fortunately, immediately responded and connected me with resources and people who could help. It was really at this point that I started to tell my story and begin my advocacy journey, which for the last two years has been advocating for the Kids Online Safety Act.
Tristan Harris: Well, it's always really hard for me to hear Kristin's story. Actually, just as a small aside, I remember the moment her e-mail came into my inbox, because I was completely inundated when The Social Dilemma came out. We had e-mails and requests coming in constantly. And I remember reading it. And there were just so many things. We almost weren't able to respond to that message.
Tristan Harris: I'm so glad that ... I think it was one in the morning, and I forwarded the e-mail to our mobilization lead, David Jay, and he helped Kristin get going. And it's just amazing to see the advocacy that she's been able to do since then with, unfortunately, so many other parents who have lost their kids because of social media.
Tristan Harris: This is not some kind of moral signaling. This is real people who have real children who've lost their lives because of real issues that we have tried to warn against. Let's just keep making sure that we get this right so we don't have more parents like Kristin that have to face this.
Tristan Harris: And we should celebrate that we were able to pass the Kids Online Safety and, now, Privacy Act. And it passed by a 91-to-3 margin. That's huge.
Aza Raskin: And to connect this back to AI: have we solved any of the misaligned incentives of social media, our first contact with AI? And the answer is, of course, no, we haven't. Which means that, as our systems become more powerful, more persuasive, more omnipresent, these kinds of harms are only going to become more common and more prevalent, rather than less, which means we really do have to move now.
Sasha Fegan: Well, thank you so much, both of you. I've really enjoyed my time interrogating you from in front of the microphone, and I promise I'll give it back to you for the next episode.
Tristan Harris: Thanks so much, Sasha.
Aza Raskin: Thank you so much, Sasha.
Clarification: Swiss diplomat Nina Frey’s full name is Katharina Frey.