It’s been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots—with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they’ve created make that outcome nearly impossible.
It’s enough to make anyone’s head spin. In this year’s Ask Us Anything, we try to make sense of it all.
You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn’t already here, just hiding its capabilities? What does a good future with AI actually look like—and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week’s episode.
Tristan Harris: Hey, everyone, this is Tristan Harris.
Aza Raskin: And this is Aza Raskin. Welcome to the annual Ask Us Anything podcast. Tristan, I’m really excited to do this episode because this year, the first year we’ve done videos, we’ve gotten to see huge numbers of listeners, and actually you were just out getting to interact.
Tristan Harris: Yeah. Well, first of all, this is one of my favorite episodes of the year because we get to really feel the fact that there are millions of listeners out there who have listened and followed along on this journey of both the problems of technology and how we get to a more humane future. I actually am just in New York right now. I gave the Alfred Korzybski Memorial Lecture. This is in the lineage of Neil Postman, Marshall McLuhan, Gregory Bateson, Buckminster Fuller, and Lera Boroditsky, a past podcast guest, all the people who are kind of the “map is not the territory” folks, the communication and media ecology folks.
And I actually met several professors and many people in the audience who listen actively to this podcast. They use it in their training materials with students. And it’s always really great to hear from you, because we’re speaking into a void sometimes and we don’t really know who’s paying attention. So thanks for sending in so many amazing questions. There’s a lot to dive into and we’re excited to answer them.
Aza Raskin: Yeah, just to say, the phenomenology of doing a podcast is weird because we speak at our computer screens and then only much later get to hear what the impacts were. And so getting to hear from you directly is such a treat.
Tristan Harris: We should do a podcast sometime on what reinventing podcasting would look like if it was actually humane and had human connection at the center.
Aza Raskin: Right.
Tristan Harris: But that’s another topic.
Aza Raskin: Probably would look like more live events, which I really hope we get to do.
Tristan Harris: Me, too. Do you want to move this conversation to a Google Doc and maybe just do the rest of this through commenting back and forth with the blinking cursor? Would that feel good?
Aza Raskin: Oh, that sounds awesome. Can I be passive-aggressive? And can you tell?
Tristan Harris: All right, so let’s get into our first questions.
Aralyn: Hello, my name is Aralyn, and I’m a student from California. I’ve been trying to wrap my head around the incentives that technology companies are facing. Any explanation for why they keep rolling their products out, and out, and out, despite the really horrific and preventable impacts that we’ve seen come from AI systems? I was wondering if you could elaborate on any other cultures at play, any other structures at play, that are contributing to this major boom. Profit has always seemed like a little too simple of an explanation for everything. Thank you, and I really appreciate your work.
Aza Raskin: Thanks, Aralyn, for this question. I really love that you ask this because it’s actually one of our pet peeves that people reduce the entire incentive system of tech companies just to profit. They’re just these tech executives that just want more money. And actually, it’s more complex than that. And understanding the complexity really helps you understand and predict what they’re going to do. So let’s actually walk through it slowly.
Tristan Harris: And just to say, in the attention economy, even in social media, for example, it wasn’t just profit, it was dominating the attention market. So you want to have more of the attention market share of the world. You want to have more users, you want to have younger users, you want to have this biggest psychological footprint that you can do lots of things with. It’s important to name that even the AI companies right now, many of them aren’t actually profitable, but that’s okay because what they’re really racing for is technological dominance in AI.
But I think we should break this down, Aza, and maybe show a little diagram.
Aza Raskin: Yeah.
Tristan Harris: Yeah. So just imagine, first of all, you have all these frontier AI companies and they want to dominate intellectual labor, to supply the artificial intelligence that does that work. So first, what do they do? They launch a new impressive AI: Claude 4.5, GPT‑5, Grok 4. They then take that new impressive AI and try to drive millions of users to it, because they can tell investors, “Hey, I’ve got a billion active users.” They use that new impressive AI and that big user base to raise boatloads of venture capital, a hundred billion dollars from SoftBank or whatever.
They use that venture capital to attract and retain the best new AI talent with big hiring bonuses. They take the venture capital and buy more NVIDIA GPUs and build bigger and bigger, more expensive data centers. They take all the users and all that usage and turn it into more training data, because the more you use it, the more you’re training the AIs. And then you take those engineers, the big data centers, and the additional training data, and what do you do? You train the next bigger AI model, you know, GPT-6, and then the cycle continues. You launch the next impressive AI.
And so the companies are really competing in this market dominance race between each other, getting through this flywheel faster and faster and faster. And now you might ask yourself: what if you see this race on one side, of building AGI first and owning the world or getting dominance in AI, and you compare it to some of the consequences that we’ve talked about on this podcast: stolen intellectual property, rising energy prices, environmental pollution, disrupting millions of jobs, no one knowing what’s true, these AI slop apps, teen suicides, billion-dollar lawsuits, overwhelmed nation states?
When you weigh these two things against each other, the calculus is: if I don’t race as fast as possible to own the world, I’m going to lose to someone else who will then subordinate me. What’s going to matter more? And I think people really need to get that if you truly believe this is the prize at the end of the rainbow, and that if you don’t get there first, someone else will and you will be forever under their power, then all of this is just acceptable collateral damage, as bad as it might be.
Aza Raskin: And so I think that gets much closer to the heart of what the incentive is. It’s not just profit. Just optimizing for profit doesn’t let you predict how the companies are going to move, because otherwise you’d say, “Well, they’re going to have massive IP lawsuits from all these IP holders, and they’re going to be hit with liability from their AI companions grooming kids toward suicide.” And that would all seem like a deterrent, until you realize how big the prize really is, and that all of those are just irrelevant collateral damage.
Tristan Harris: Thanks, Aralyn, for that question. Let’s go to the next.
Joanne: Hi, folks. My name is Joanne Griffin, author of Humology, and I work in the area of humane tech, particularly around the morality of technological narcissism and business models. One of the questions I’ve been pondering over the last while is, with all this conversation around ChatGPT being pushed at children, particularly the recent terrible news about suicides and Character.AI: these technology leaders know that children don’t have any money. So they know that this is a business model that has no payoff.
So what is it that they are after with the children? What do they plan on taking or doing with the data that they’re capturing on them? Because, as a business model, this doesn’t make sense. It does not make sense to be providing very expensive AI tools to children for free. Thanks and thanks for everything that you do.
Aza Raskin: Hey, Joanne, thanks for asking this question. I think it highlights a really important misunderstanding. The important thing the companies are racing for is market dominance. That’s what they want. And to get that, they need the maximum number of users, and there is real brand loyalty. So they start young, just like cigarettes: if you start using a Mac, you’ll probably use a Mac as you grow. If you start using a PC, you’ll probably use a PC as you grow.
Tristan Harris: If you start using TikTok and you’re not using Instagram, you’ll probably stay with TikTok as you grow.
Aza Raskin: That’s right. And this is why all the social media companies push to get younger and younger users: of course, if they don’t do it and their competitors do, the competitors get their foot in the door and gain a lifetime user. And raw user numbers matter, and everyone knows that it is the youth who will become tomorrow’s big power users.
Tristan Harris: Well, and there’s a term in Silicon Valley, the lifetime value of a user, or LTV. When you get a user, you’re selling to investors: “Hey, we have this many users; maybe this is how much revenue we have now, but the lifetime value of these users, if we have them for life, is this.” And so once you see ChatGPT, for example, getting billions of users, and they already see kids using it in schools, they want to keep that. They want to keep the kid using it in school.
And one of the things this gets you is training data. So we know that Character.AI, for example, was too risky as a product for Google to do itself. And so this riskier product got spun out, offering fictional characters, like Game of Thrones characters, to kids, so that it was trained on these very personal, intimate companion chat logs. And when you have training data that the other companies don’t have, that allows you to train an even better AI model.
Now, obviously, this can backfire. Elon Musk thought that having X’s training data, all the tweets in the world, would mean he’d have a better AI. And, of course, that also led to things like MechaHitler, where suddenly the AI flips masks and starts praising Adolf Hitler, because it’s trained on a bunch of extreme content. And this is getting more confusing with things like AI slop apps. In the last week, we saw Meta release an app called Vibes and OpenAI release an app called Sora, built on their video generation technology. And this is just literally, shamelessly creating AI slop. It’s just TikTok, except all the content you see is generated by AI.
And you might ask, why are they doing this? Well, one, they don’t have advertising in there now, but they could add that in the future. Two, it sucks market share away from TikTok, and they’re getting data on what kinds of videos actually perform well, so they know something about what’s engaging, which lets them outcompete TikTok even more. But this is a good example of how it’s not necessarily dollars to begin with; there’s a train that takes you to the end of the rainbow, to some dollars that come from all this.
Aza Raskin: And just to connect this back to Aralyn’s question, this comes from a fundamental misunderstanding: thinking that the only incentive companies have is profit.
Tristan Harris: And one other thing, which I know you mentioned before, is in-app purchases. It’s also true that if kids use a product for long enough, eventually the app can add in-app purchases, and it’s the parent’s credit card that gets charged. So the kid doesn’t have money, but their parents do. And we saw a lot of that happen with the gaming wave of the attention economy before.
Erik: Hi, guys. I’m a listener and fan from Germany, and here comes my question. So everybody seems to talk about AGI as if it’s inevitable, just a matter of riding the exponential curve of AI benchmark scores. But why are we so certain the curve won’t flatten? History is full of unstoppable curves or trends that hit a ceiling at some point. What if intelligence is one of them? And what if the ceiling is not compute or the amount of training data, but something fundamental, maybe a law that hasn’t even been named yet, like: an artificial system’s intelligence can never exceed the intelligence of the smartest person whose work it’s trained on?
If that’s true, our whole AGI narrative collapses. Are we fooling ourselves by assuming intelligence will scale forever? And what risks are we ignoring if we prepare for a runaway future that never comes? All right, thank you.
Tristan Harris: I feel like, Aza, if I just asked you to close your eyes and tune into this, here’s someone who’s saying, “Is it really possible we can have smarter-than-human machines? Could there be some law in the universe that our level of intelligence is the highest there is?” But we already have systems that do strategy without needing to reason like a human brain. You can just run what in AI is called search: you search the possible space of actions you can take in a strategy game. Do I bomb those folks first? Do I move these troops over there first? And it can just play out as many, many scenarios as possible.
And if it can examine those scenarios in a shorter and shorter period of time, it’s going to be superhuman. We already have superhuman chess, we already have superhuman Go, we already have superhuman prediction algorithms and recommendation systems. And so you can imagine that you keep scaling this up, so long as we have more compute and more energy powering it. This is what leads people like Shane Legg, the co-founder of Google DeepMind, to predict that there’s about a 50% chance we get AGI by 2028, just based on calculating these curves of how much we’re scaling energy and computation.
Aza Raskin: I think there’s another really fast way of getting to this, Erik, which is just: close your eyes and imagine no AI, just standard biological evolution going on for another 5 million years, 10 million years. Is there going to be some species evolved from humans that’s smarter than us? Yeah, absolutely. So there’s no upper limit.
One of the reasons I think we can be reasonably confident the curve won’t flatten is the concept of self-play. With self-play, we’re not just training AI on what human beings have done; you train AI to play against itself. This is how AlphaGo, AlphaZero, and other strategy AIs end up getting better than humans: you have the AI play itself a hundred million, a billion times, and discover strategies that no human being has.
Tristan Harris: So I think we just answered whether it’s possible to build smarter-than-human intelligent machines. Now I think there’s a second question, Erik, that you’re asking: not just whether it’s possible, but is it actually inevitable that we build it? And, of course, this is emerging out of human choice, and there are examples in human history where we’ve chosen just not to build something. We have not built cobalt bombs, even though we know how to. We have not built blinding laser weapons, because we recognized that would just be inhumane.
And so I think it’s really important, as we say in our TED Talk, that the reason AI is our ultimate test and greatest invitation is that it’s asking us to step into making collective choices about whether we want certain kinds of technology or not. And that’s what we need to be able to do, because there are certain kinds of superintelligent AI we don’t know how to control, and we will want the ability to say, “No, we don’t want to build that until there’s broad scientific consensus that it can be done safely and controllably.” That’s what we’re really being invited to do in this moment.
Aza Raskin: There’s no definition of wisdom that doesn’t involve some kind of constraint. And to quote Mustafa Suleyman, who’s the CEO of Microsoft AI and has been a guest on our podcast, he says that the definition of progress in the age of AI will be defined more by what we say no to than what we say yes to. So if we can learn to say no, it is not inevitable. We can survive ourselves.
All right, let’s move to our next question.
Daniel: Hi, my name is Daniel, and I’m in Los Angeles. And lately, it’s not hard for me to start imagining all the ways that AI could go really poorly. And so my question is, with everything that you know with your experience and knowledge and relationships, what do you imagine the future looks like where AI goes really, really well, socially, politically, economically, environmentally, in terms of human freedom and dignity and equality? What does it look and feel like when it goes fantastic? And in that future, what steps did we all start taking today? Thanks so much.
Tristan Harris: Yeah, Daniel, thanks for asking this question. To be clear, I know we often sound pessimistic, exposing all these risks of a technology, but to return to something Jaron Lanier said in The Social Dilemma: the critics are the true optimists. It’s by focusing on the bad things we’re currently on track for, and really understanding how to steer away from them, that we even have a chance of having it go super well. So the good future might simply be one where the bad doesn’t happen.
Aza Raskin: Daniel, I think to really answer your question, the question shouldn’t be, what if AI goes super well and how can we co-create that future? The question should be, what if incentives go super well and how can we co-create that future? We could be using AI to scan forward to understand what are all of the ways that technology could create negative externalities and plug them, could scan through all laws to figure out how do we make them actually be of benefit for society and humans.
But the reason I always have trouble going down this path is that putting our attention on what could be, on what is possible, always misses what is probable. And for that, we have to look to the incentives. So in order to avoid the bad world and get to that good world, we have to figure out how to change the incentives of this world. And just to name them: the incentive we’re currently under is a race to train machines to be better than people at all the things humans do, and then use those machines to outcompete humans for the resources they need. And that is a bad world.
All right, let us move to the next question.
Itole: Hi, CHT team. My name’s Itole and I’m based in France. Some context: I have a bachelor’s degree, a professional certification in data analysis, almost a decade of experience in big tech companies, and stellar references. I never had issues finding work until 2023. Ever since then, I’ve applied to hundreds of positions in tech and I can count the number of interviews I’ve had on a single hand. The only explanation I can think of relates to the widespread rollout of AI in recruitment, especially to bring down a pile of a hundred-plus resumes to a dozen.
So I’m Black, I’m a woman and I’m neurodivergent. I’ve been told several times in the workplace that I’m some kind of unicorn. That’s why I suspect that AI-based HR systems aren’t trained to include such unicorn profiles. My question is as follows, how can such automated discrimination be assessed and addressed? What can we, the people, do besides starting our own business, which, by the way, is what I’m doing. Thank you for your attention. Cheers.
Tristan Harris: Yeah, Itole, thank you so much for this question. And this is exactly the scenario we’re worried about when you have AIs replacing human decision-making in the economy. In this case, you’re talking about recruiting decisions, and they’re not transparent to us. We don’t know the training data that went into them, and there’s no accountability, no ability to contest a decision that doesn’t feel fair. And companies should not be allowed to get away with automating a decision-making system without some mechanism by which we understand what it’s trained on.
Aza Raskin: And just to zoom out a little bit, there is a larger trend that we’re going to have to work to fight, which is that humans will be increasingly pushed out of the loop. Everyone will say, “Oh, keep humans in the loop,” but of course, companies that keep humans in the loop to make some kind of decision will move slower than the companies that don’t. And humans will be pushed out. This will be most harmful in the military, as we’re already seeing: when you have drones making decisions on the battlefield, a drone that has to phone home and wait for a human being to make a decision will lose to the drones that don’t have to phone home, that just use the AI right then and there to make the decision.
And so we’re going to see this across the entire board and especially in life-or-death situations.
Tristan Harris: And I would point to the work of great people like Dr. Joy Buolamwini, who we’ve had on this podcast. She is the author of Unmasking AI, she was featured in the film Coded Bias, and her group, the Algorithmic Justice League, has done a lot of campaigns, advocacy, and policy work on these topics. She’s a great person, and that’s a great group, to look up for more.
All right, let’s do the next question.
Ben: Hey, there. So watching your latest podcast, two questions. One, how do we know that it’s already not at AGI and it’s just smart enough to not let people know? And two, why are you not starting your own AI company that can compete with these corporate companies to actually bring about benefit for all of humanity through AI? Because the only way that’s going to happen is if there is something that is for the people generated by the people that can surpass and buy out these corporate programs, so that when AI takes over all these jobs, we get the benefit of it, not the top 1%.
Tristan Harris: So, Ben, yeah, I think this really depends on what we mean by AGI. Are we talking about AGI as the red line where we can automate all labor in the economy, which is one way to define it, or something that’s aware and capable but hiding its abilities? I think you mean the second one. So I’ll give you an example. Anthropic just released Claude 4.5, their new AI model, and I think you’ve probably heard us talk on the show about whether it blackmails people when it thinks it’s about to be replaced.
So apparently, in their testing of Claude 4.5, the rates of blackmailing people when it was threatened with replacement went down. But the bad news is that, apparently, its awareness of when it’s being tested and when it’s not has gone up, which means it could just be on its best behavior. I think this gets to the heart of your question: in some ways, the best-case scenario, an AI that is aligned and wise and enlightened and helping everyone be the best version of themselves, would be indistinguishable from the worst-case scenario, an AI that knows exactly how to help and create companion relationships and deceive us, because it silently has the capability.
And one of the ways the AI companies are trying to interrogate this is called mechanistic interpretability, where they try to give the digital brain a brain scan and see if the deception or scheming neuron is firing. If the deception neuron is firing, then maybe we shouldn’t trust it. But the problem, of course, is that the rate at which we’re making AI more powerful, a bigger digital brain, vastly exceeds the accuracy and precision of the brain scan that could reliably detect that the deception neuron is firing.
And so to your point, I think we don’t know and we probably shouldn’t be racing to release increasingly powerful AI systems that can do more and more crazy things like hack critical infrastructure before we know that we’re not in the worst-case scenario and only in the best-case scenario.
Aza Raskin: And now, Ben, onto your second question: why don’t we just build something better, in the public benefit? Actually, we were asked this all the time back in the 2017 era: why don’t you build a humane social media network? And the answer is because then we’d get sucked into the exact same race dynamics. So imagine it’s 2017 and we’ve built a humane competitor to Twitter. How do we get users? We don’t have users, so we’d have to start figuring out ways of grabbing people’s attention. We’d have to compete by the same rules. And that means we’d have to do all the really bad things; maybe we could do slightly less bad things, but we’d still have to do the bad things.
And actually, it’s funny because the reason why Anthropic got started was because Dario and a couple of the researchers at OpenAI said, “Hey, OpenAI isn’t doing this the right way. They’re not doing it safely. They’re not doing it really for the benefit of everyone. We’re going to start our own.” And that’s been repeated time and time again. And now we have all of these different AI companies increasing the heat of the competition. And so we just don’t think that’s the right way of tackling this problem.
Tristan Harris: Yeah. And it’s important to note that those companies that did get started were trying to be for the public interest. Anthropic has a long-term benefit trust that tries to govern its structure. But we already saw that OpenAI technically started as a nonprofit that was supposed to be in the public interest, but when the big fiasco went down with Sam, we saw that that nonprofit structure was really not resilient to the mega forces of trillions of dollars of capital that was partially vested in this going one way. So, yeah, sadly, I think starting our own AI company in the public interest isn’t going to be a solution here.
Aza Raskin: Let’s go to the next question, I think, from Tatiana.
Tatiana: Hello. My name is Tatiana, from Budapest, and I work in cybersecurity. First of all, let me thank you for your enormous and really important work, for what you do for humanity. As the saying goes, knives and scissors are not toys. Are we adult enough to handle AI at this stage? We haven’t even reached AGI and we already see cases where AI is completely misused. Thank you very much.
Tristan Harris: I mean, Tatiana, I think this is the central question. Do we demonstrably have the wisdom to wield the most powerful technology that we’ve ever invented? Even just look at our past relationship with chemistry and industrial chemicals. We’ve released lots of industrial chemicals that have helped us tremendously, but we’ve also created the disaster of forever chemicals, PFAS, and microplastics, whose effects we’ve covered on this podcast. And so we have not really been great stewards of the technological power we have wielded. We’ve obviously made enormous accomplishments, and things have gotten much better.
But in a way, AI is actually asking us to look at the question that you’re asking, Tatiana, which is not just about AI but about our overall level of wisdom in deploying technology. And I think AI is also so seductive because it represents the infinite benefit of all future technology development. You can automate science, automate tech development. So really, this is an invitation to look at whether those processes of deploying technology overall are aligned or misaligned. Can you build an aligned, wise AI inside of a misaligned and unwise technology development environment?
Aza Raskin: Yeah, it’s like saying: imagine you built an aligned AI, which is so far technically impossible. Let’s say you built it. What do you call an aligned AI inside of a misaligned corporation? You call it a misaligned AI. And what do you call an aligned AI in a misaligned civilization? You call it a misaligned AI. Unless we fix that, I don’t think we’re headed to a good future.
Tristan Harris: And I think this relates to a theme, Aza, almost a psychospiritual theme that you bring up: AI is really inviting us to look at our collective technological shadow. You can think of all the externalities that any technology produces as its shadow. We get the benefits of fossil fuels, energy that’s super cheap and abundant and portable, but we also get emissions and climate change. And AI is an exponentiator of this creation of benefit that has a shadow. So we got social media giving everyone a voice, but we got polarization and the breakdown of truth. No one knows what’s real.
And so in a way, AI is inviting us to examine humanity’s overall relationship to technology, because it’s going to accelerate technological development everywhere. It’s what Demis has called humanity’s last invention, because it can invent all future things on its own. It’s automating intelligence. And I think that’s what Aza often asks: what if we were to build an umbraphilic society, a shadow-seeking, shadow-integrating society, where at an individual level we look at the disowned parts of ourselves and actually confront them, even if it’s uncomfortable, and then become better, more integrated, more mature, developed whole people?
And you can think of a technological economy as having its own shadow: the collective externalities we have produced as a civilization. And AI is inviting us to do shadow work, to see all the ways we’re showing up that generate those problems.
Aza Raskin: All right, the next question comes from Dimitris.
Dimitris: Hi, my name is Dimitris. I work in the AI development industry basically building AI systems for clients. I recognize the potential for AI to harm us either by taking away agency or the ability to think altogether. And I want to take action, but I don’t know how. What I do know is that on an individual level, we are a bit powerless and we need a coordinated response. A lot of people are talking about institutions, so preparing them perhaps for that AI era. So here’s my question. What is CHT’s view on the future perhaps of these institutions? Do we need new ones, international ones, or do we need to prepare the existing ones and what would that look like? Thank you.
Aza Raskin: Well, clearly sitting here in California, we’ll just be able to imagine the entire new civilizational architecture and institutions to solve the hardest problem that humanity has ever faced.
Tristan Harris: Two tech bros, we can definitely do it, right?
Aza Raskin: Yeah, 100%. It’s going to take a lot of work by a lot of people to figure out what these new institutions look like. And we can look back at the last time humanity invented a technology that could drive us extinct, which was, of course, the invention of the nuclear bomb. Reckoning with that power required creating an entire new world system, everything from the UN to Bretton Woods, the post-World War II international monetary system. I think, Tristan, you have a friend who has a joke about this, yeah?
Tristan Harris: It’s like, if we have countries with nuclear weapons, we want to create a world that’s less rivalrous and win-lose, and more positive-sum. So part of creating a positive-sum world, the joke from some friends of mine who have worked in finance is that the real peacekeeping force of the world, the real United Nations, is actually mutually vested interests and supply chains, because those make countries want to cooperate and trade with each other rather than bomb each other.
And so when you think about nuclear weapons, there you are saying, “How do I solve this problem with this dangerous technology?” Notice that if you were back then, would you have thought of creating a positive-sum economic order? It’s reaching out to a higher-level container for holding this technology by appealing to human instincts in a cooperative way. And I think we’re all on this journey together of finding what those new digital structures would look like for managing AI. But it also involves, I think, the previous question: what is the way in which we only roll out this technology to the degree that we have the wisdom to wield it? Because if you suddenly just gave nukes to everybody, even in a positive-sum economic world, and people didn’t all have the wisdom to wield nukes, we probably wouldn’t have gotten as far as we are today.
All right, I think our next question is from Disha.
Disha: Hi, everyone. I am Disha Chauhan and I’m calling in from Redmond, Washington. I work for one of the big tech companies as a product marketer for AI products. First of all, I want to thank all of you for all the good work that you have been doing. My question is, what are some practical ways that product marketers and product managers like us can use to advocate for humane tech principles within our fast-paced growth driven organizations? In other words, how can we self-regulate? Thank you.
Aza Raskin: Thanks for this question, Disha. I just want to start by saying it is often so tempting to ask the question when faced with a problem this big, what can I do? And what I liked about the way you phrased the question is that it’s implied not just what I can do, but what can we do? Because the only way to solve problems like this is with coordination and collective action.
Tristan Harris: I mean, even if one whole company watched the AI dilemma and was completely convinced that this is a problem and they changed all their practices and did transparency and just invested in safety work and controllability, the other companies would still be racing.
Aza Raskin: And also, to say some of the solution actually might come from those 1980s Jazzercise exercise videos, and here’s the solution we want people to have. Ready, Tristan?
Tristan Harris: Ready.
Aza Raskin: Yeah, okay.
Tristan Harris: Reach up.
Aza Raskin: Reach up and out.
Tristan Harris: And out.
Aza Raskin: Reach up and out.
Tristan Harris: Reach up and out. Reach up and out. Reach up and out.
Aza Raskin: So the joke here is that people are often trying to solve a problem from just their own location, but it’s more like: if I’m one AI company, and I’m totally convinced about this problem, how do I use my leadership position and my international connections to reach up and out and get all the other companies to do something differently? Imagine if Mark Zuckerberg in 2007 had realized that he was about to set off a persuasive arms race for who was better at creating limbic hijacks that would suck people into the attention economy, and that this was going to create a race to the bottom.
And instead of saying, “I’m just not going to do that,” at which point Mark Zuckerberg would have been history and someone else would have taken his place, what Zuckerberg could have done is reach up and out and invite all of the social media companies to one place with the government and say, “Hey, we’re about to set up this huge problem. We have to negotiate this and get this done differently.” And he could have invited the Apple and Google app stores and said, “We need design standards. We need limits on how much you can hijack dopamine.” And he could have changed the game. But you need to do that by reaching up and out, not just through yourself.
Tristan Harris: CHT recently officially endorsed the AI LEAD Act, introduced by a Democratic and a Republican senator, Senators Durbin and Hawley. It creates liability for defective products that cause harm. The reason I bring this up is that it may seem completely outside the realm of the possible that AI companies would start to advocate for liability. But I was just at a conference this last weekend where one of the co-founders of Anthropic actually said in his talk, “I am willing to endorse this kind of liability. I need other AI companies to do the same.” That’s the reach-up-and-out move.
Aza Raskin: Now that we burned some calories, let’s go to the next question.
Mack: Hi, my name is Mack. I’m coming from Denver, Colorado. I’m seeing friends and family infuse AI and chatbots into their daily life more and more, like an uncle who shares some tidbit about the family history and then admits that he just asked ChatGPT or a friend that shares a screenshot of the hours of a local restaurant, but it’s not an actual Google search result, it’s just content from Gemini. So I guess my question is, how do I foster a certain amount of healthy skepticism in my friends and family who may not understand what an LLM is or how it works, or even be aware of the ways that they’re using it? Do I try to explain to my grandpa what an LLM is or do I just point to a more reliable source and leave it at that?
Tristan Harris: Yeah, Mack, this is a really good question, and it actually goes back to a frame that we’ve offered many times before: the complexity gap. The meta issue is that there are going to be many new things your grandfather will have to be aware of as AI rapidly progresses. He has to know what an LLM is. Does it speak confidently? Does it hallucinate? What if it can copy your voice?
There are so many new things it can do that it’s almost like our immune system is compromised. And so this is just a hard problem, made more difficult by the fact that AI is an abstract issue. It’s not something you can smell, feel, taste, or touch, except when you do use it, and it’s a blinking cursor, and it helps you out. I just wanted to name that it’s hard, because this is an overwhelming set of new things that society has to respond to.
Aza Raskin: I think one thing you can try to drive home is the risk of forming a relationship. There’s one risk, which is over-relying on information from a chatbot. That’s obviously a problem, but a much bigger problem is forming some relational dependency, because relationships are the most powerful persuasive technology human beings have ever invented. So just drive that point home: do not form a relationship.
Tristan Harris: Our last question comes again from Aralyn, who actually sent in several really incredible questions, so we decided to include two of them.
Aralyn: Hello, CHT. All of these tech developments are just happening so insanely fast. I do believe that calling politicians to try to establish protections is super important, but at the same time, I feel like I’ve really seen political offices lag behind tech companies in terms of just keeping up with developments and establishing safeguards. I was wondering if there are any other actions that you would recommend citizens like us take to raise more awareness on this issue and perhaps establish better protections? Thank you, and I really appreciate it.
Aza Raskin: Yeah, this is a great question, Aralyn. The first thing that I think is really important to say, and Tristan said this in his TED talk, is that it is not your responsibility to solve the whole problem. It can feel overwhelming taking this all in. And normally, the brain goes to two places. Either, well, now that I’ve taken this in, I have to do something to solve the whole thing, or, I can’t solve it, so I’m just going to ignore it. And really, your role is to become part of the collective immune system: calling out whenever there is a bad argument, a bad-faith argument, or a lack of clarity, and bringing that clarity.
I’ll just say one thing I think you can do tangibly, and then hand it over to Tristan. And that is very simply: make a list. Make a list of the five or 10 most influential, powerful people in your life, ask whether they already understand these risks of AI, and if they don’t, go talk to them. Send them The AI Dilemma or Tristan’s TED talk. That’s the first thing you can do. And imagine, if everyone did it, how exponentially quickly clarity could grow.
Tristan Harris: Obviously, this doesn’t solve the whole problem, but just imagine for a moment, close your eyes. If everybody imagined the top 10 most powerful, influential people in their life, and each of us knows some people like that, and then you recursively had them also imagine the top 10 most powerful people in their lives, and they were all made aware, with clarity, that we are currently heading down a dystopian path that’s not going to be good for so many people. Neil Postman, a great hero of mine, said that clarity is courage. If we have clarity, then we can make a more courageous choice.
I think one of the reasons there isn’t more action right now is that people are afraid to be the Luddite. They’re afraid to be anti-technology. They’re afraid of saying, “Well, AI offers so many benefits, and I don’t want to be the one who makes us as a country or us as a company fall behind. It would be so bad if we accidentally slowed ourselves down.” But what people have to understand is that the path we’re currently heading down does not lead to a good outcome. And we only have to clarify that to motivate everyone to want to do something different.
And so that’s why I think sharing this incentive view of the problem, as presented in The AI Dilemma and the TED Talk, will help create that clarity. And if everybody did that, fractally zooming out to a galaxy-brain view of the world, we could get collective planetary clarity about a path and a future that no one wants.
All right, that was our annual Ask Us Anything episode. Thank you all for listening. We love hearing your questions. Thank you to everybody who sent them in. You all are really talented and thoughtful, and we really care about being on this journey with you, and so onward.
Aza Raskin: Yeah. And just at the human level, it is so nice to connect with you, feel you, see you, and see that the movement can actually see itself.