Center for Humane Technology
The Interviews
What Would It Take to Actually Trust Each Other?


The Game Theory Dilemma

So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand.

This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today, professor Sonja Amadae, argues that it doesn’t have to be this way. That the logic of game theory is a human invention, a way of thinking that we’ve learned — and that we can unlearn.

In this episode, Tristan and Aza explore the game theory dilemma — the idea that if I adopt game theory logic and you don’t, you lose — with Dr. Sonja Amadae, a professor of Political Science at the University of Helsinki. She’s also the director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of “Prisoners of Reason: Game Theory and Neoliberal Political Economy.”

The history of game theory as an inhumane technology stretches back to its WWII origins. But humans also cooperate, and we can break out of the rationality trap by daring to trust each other again. It’s critical that we do, because AI is the ultimate agent of game theory, and once it’s fully entangled, we might be permanently stuck in the game theory world.

Tristan Harris: Hey everyone, it’s Tristan Harris.

Aza Raskin: And I’m Aza Raskin. Welcome everyone, to Your Undivided Attention.

So Tristan, today, I think, is actually one of our favorite episodes, because we’re diving really deep into a way of seeing the world that feels very obvious, that feels like you’re naive if you don’t adopt it, but that is causing a deadening of the world, and that is game theory.

Tristan Harris: Yeah, and the simple way to boil that down is the logic that you’ve heard on this podcast before around AI and social media: well, if I don’t do it, they will. If I don’t race for that attention and hijack people’s psychological vulnerabilities to build social media doom scrolling machines, then I’m just going to lose to the other company that will. If I’m a movie studio and I don’t release Spider-Man 7 while the other guy’s releasing Batman 10, I’m just going to lose the game of building successful movies. If I don’t build the advanced AI as fast as possible and take all the shortcuts, even though taking shortcuts is bad for humanity, well, then I’ll just lose and they’ll win. And cooperation, therefore, is for suckers. This logic feels inescapable; it feels like a fundamental law of human nature. But this episode with our guest Sonja Amadae is about why it’s not actually a fundamental law. It’s a specific way of looking at the world, a way of looking that was invented by humans.

Aza Raskin: We sort of call this the game theory dilemma, which is to say that if I adopt game theory and you don’t, you lose. Game theory was invented in the 1940s by one of the greatest mathematicians and physicists of all time, John von Neumann. He was trying to understand how you formalize winning parlor games like chess and poker, and this ended up getting used all the way up to our most existential threats, like how the nuclear bomb gets deployed. But something very interesting happened, which is treating all of human endeavor like a chess or poker game that is winnable. And so there’s been this propagation of winnable games as the fundamental substructure of everything from war to AI.

Tristan Harris: So our guest today, Sonja Amadae, argues that it doesn’t have to be this way, that game theory misses fundamental aspects of what it means to be human. She’s Professor of Political Science at the University of Helsinki. She’s also the Director at the Centre for the Study of Existential Risk at the University of Cambridge. And she’s the author of a book on exactly this topic, Prisoners of Reason: Game Theory and Neoliberal Political Economy. Professor Amadae, welcome to Your Undivided Attention.

Sonja Amadae: I’m delighted to be here. Thank you for the invitation.

Aza Raskin: Just to lay out the problem: if I use game theory and you don’t, I will outcompete you, because I’m acting strategically. So if you don’t know game theory, then you’re the sucker, and that sucks everyone into using game theory. But that changes who we are. You’re changing the basis of trust. You’re changing the kind of society that gets created. And we don’t want to live in a society that is purely ruled by game theory. And that’s the game theory dilemma, if you will.

Tristan Harris: The dilemma of game theory itself. So the reason that Aza and I were so interested in doing this episode is that if you look around, the world kind of feels like it’s being colonized by this cold, strategic logic. Let’s give a few examples of where this is showing up across different domains. It struck me, in doing research for this episode, that game theory can colonize dating. Pickup artistry is like a game theory version of dating, where people are making a cold calculus: I’m going to say the thing that will get me the outcome that I want, and I can measure that if I do this action versus that action, it will lead to this result.

If I’m designing software, I should be designing it like Aza’s dad, who started the Macintosh project: thinking about what’s good for people, how do I make this really usable, what’s going to lead to really positive outcomes for society? But then I noticed that there are these other guys making software in a race to hijack human attention, which means they’re racing to hijack human vulnerabilities, which means they’re measuring, using A/B testing: if I design it this way versus that way, I’ll get more results, more engagement, more screen time. I’ll get more people scrolling for longer, and they’ll come back more often. If I make the button red instead of blue, or if I use a notification, or if I highlight that their best friend, or the girl they’ve been spying on, actually liked their post.

And because they’re in this logic of measurement, game theory colonized software design. Or take memetics and culture and political campaigns, where you have a politician who maybe wants to say something authentic and true for them, something meaningful and heartfelt and sincere, but then they’re told by their advisors, “No, you can’t say that. We measured the results of these different communications, and you should say it this way versus that way.” And what it leads to is this kind of deadening of culture, this deadening of dating, of relationships, of software design.

And now you get to AI. Instead of designing AI in a way where we focus on designing cures for cancer, for all of us who have loved ones with cancer right now, and really focusing on that so we can actually get the benefits of the direct outcome that supposedly this is all for, we’re seeing companies in a race to scale these crazy, super uncontrollable, inscrutable, powerful intelligences under the maximum incentives to cut corners on safety.

And so, in every way, game theory has colonized not just technology and software, but more and more of our total world. And I want people to get this because I think it helps explain what you see out there in the world, and there’s almost a good news to it: when the world feels dead or meaningless or cold or strategic, that’s not authenticity, that’s just a world that has been colonized by game theory. So what I want from this episode is to help expose how this logic really took over. Can we tease that out a little bit, just so that people can get a flavor of why this is so critical?

Sonja Amadae: The most basic point would be to look at the original text, which was John von Neumann and Oskar Morgenstern’s Theory of Games and Economic Behavior. Expected utility theory was part of this decision-theoretic breakthrough that allowed social scientists using that approach to claim that anything that has any value at all can be captured by expected utility theory. Von Neumann thought that all value could actually be monetized, which you could argue about, but that’s the way he thought about it. He thought that you could put a monetary value on anything by watching people’s behavior, seeing what they’re willing to pay to have a certain outcome. Basically, he had the idea that you could put a monetary value on everything that would motivate people, that would incentivize people. And expected utility theory let you do that.
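The valuation move Sonja describes can be sketched in a few lines: the worth of a risky prospect is the probability-weighted sum of the utilities of its outcomes, and a monetary price falls out by asking what sure amount an agent would trade for the gamble. A minimal illustration (the numbers and the risk-neutral utility function are hypothetical, chosen only to make the arithmetic visible):

```python
# A minimal sketch of expected utility theory. The worth of a gamble is
# the probability-weighted sum of the utility of each outcome:
#   EU = sum(p_i * u(x_i))
# All numbers and the utility function below are hypothetical.

def expected_utility(lottery, utility):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

# Von Neumann's monetization move: infer what someone would pay for a
# risky prospect by comparing its expected utility to a sure amount.
u = lambda dollars: dollars          # risk-neutral utility, for simplicity
gamble = [(0.5, 100), (0.5, 0)]      # a coin flip: win $100 or nothing

print(expected_utility(gamble, u))   # 50.0: this agent would pay up to $50
```

A risk-averse agent would use a concave utility function instead and pay less than the $50 expected value; the point is only that any choice behavior gets mapped onto a number, which is what the transcript calls the monetization of all value.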

Aza Raskin: Yeah. It’s probably important to let people know a little bit about von Neumann.

Tristan Harris: Yeah, who was John von Neumann? He seems like such a pivotal figure in-

Sonja Amadae: John von Neumann is... Well, first, he was operating in quantum mechanics; he axiomatized quantum theory. So, he’s a mathematical prodigy and genius. He immigrates to the United States prior to the Second World War because it wasn’t safe; he had Jewish ancestry. So, he moves to the United States and takes up a post at Princeton, which was the location from which he ended up playing a pivotal role in the Manhattan Project, in building the atomic bomb that was then used on Hiroshima and Nagasaki. During the Second World War, he was on the committee that actually chose Hiroshima and Nagasaki as targets.

Aza Raskin: And so, just to quickly tie this together, let’s see if I’m getting the history right. Von Neumann is trying to understand how to win at games of chess and poker; he’s trying to formalize these parlor games. And to do that, he has to make an assumption about human nature and an assumption about the game being played, which is that you have to win. There is no such thing as cooperation in chess. Then that model he creates gets picked up, because he’s part of the Manhattan Project, and used to model the “game” between all the great powers. And so now this very dimensionally reduced model of what humans are, one where we don’t cooperate, is the basis for the most important decisions the world is making.

Tristan Harris: We’ve applied a theory of parlor games to nuclear weapons.

Aza Raskin: Yeah.

Sonja Amadae: Yeah, exactly.

Tristan Harris: And that’s how you end up with a world where thousands and thousands of nuclear weapons are built on both sides, enough to destroy the entire world. And that is what keeps the world safe, even though it’s safe only at a hair-trigger level of fragility, where just one little false step could still end the world. And yet, that was the “rational” thing for us to do. But try to escape that logic: you come in as a peace activist and say, “We should just dismantle all nuclear weapons.” Well, how do you stop the other guy from building them? And you end up with game theory feeling inescapable. If I don’t do it, I’ll just lose to the other one that will.

Sonja Amadae: Yeah, and what you see a lot today, in the way that game theory and the Prisoner’s Dilemma are projected onto this arms race over AI, is asymmetric power. The UK security strategy for 2025 is all about asymmetric advantage. And that is a real change of worldview from a classic liberal, multilateral world, where we would be hoping for mutual benefit. Game theory would lead you to conclude there’s no other way to come to a “solution” of this situation. It’s non-negotiable, non-navigable. If I’m the guy that is going to be cooperating, people will trample me. I will not survive and propagate.

You’re seeing game theory everywhere: it’s in public policy, it’s in economics, it’s in political science, it’s in nuclear deterrence, it’s in biology as evolutionary game theory. And the idea in game theory is that you would only ever say something strategically. When you are a game theoretic actor, every time you say anything, it is only what you need to say to get a specific outcome. So, it’s deeply embedded in the architecture of our world.

Tristan Harris: So a moment ago, you heard Sonja refer to the Prisoner’s Dilemma. This is a classic game theory problem showing why two rational individuals might not cooperate, even when it seems beneficial, and how that leads to a worse outcome for both. It’s called the Prisoner’s Dilemma because it imagines a scenario where two prisoners accused of a crime are being interrogated separately. Each one has to decide: do I stay silent, or do I betray the other? If they both stay silent and say that they didn’t do it, then they both get light sentences. But each is tempted to betray the other and say that the other one did it, so that they can go free. And if they both give in to that temptation, then they both end up with harsher sentences than if they had just cooperated.
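The structure Tristan describes can be made concrete with a small payoff table. A minimal sketch (the sentence lengths are illustrative, not from the episode):

```python
# Illustrative Prisoner's Dilemma payoffs: (my_years, their_years) in
# prison for each pair of moves; lower is better. Numbers are hypothetical.
SENTENCE = {
    ("silent", "silent"): (1, 1),   # both cooperate: light sentences
    ("silent", "betray"): (5, 0),   # I stay silent, they betray: they go free
    ("betray", "silent"): (0, 5),   # I betray alone: I go free
    ("betray", "betray"): (3, 3),   # mutual betrayal: harsher for both
}

def best_response(their_move):
    """Pick the move that minimizes my own sentence, given theirs."""
    return min(("silent", "betray"),
               key=lambda my_move: SENTENCE[(my_move, their_move)][0])

# Betraying is "rational" no matter what the other prisoner does...
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
# ...yet two "rational" players land on (3, 3), worse for both than (1, 1).
```

That gap between the individually rational outcome and the mutually better one is the whole dilemma; the rest of the conversation is about how real people escape it.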

Sonja Amadae: In my book, Prisoners of Reason, one of the things I really struggled with is: how do you present the Prisoner’s Dilemma in such a critical way that when people finish reading the book, they question its logic? The whole book is written with the aim of unlearning it, even as it teaches the Prisoner’s Dilemma at the same time, so that people become critical consumers of game theory. And it’s very, very difficult to do that. And then there’s this anomaly: why is it that actual humans don’t necessarily follow the logic of game theory? Especially those that are untutored in game theory, the ones that haven’t been exposed to its logic or taught it methodically in classes; they end up being the ones that are probably more cooperative.

I work in Finland at the University of Helsinki, and I sometimes think it’s a crime of some kind to teach the Prisoner’s Dilemma there, because the students just cooperate; they can’t fathom it. I’ve run these, not experiments, but simulations, and often it’s the foreign students who are more prone to try to take advantage. For the Finnish students, the logic doesn’t make any sense, because Finland is a very high-trust society, and it doesn’t run according to the logic of either game theory or the Prisoner’s Dilemma. Not at the moment, anyway.

Aza Raskin: And is the reason that it would be a crime, or that you feel like it’s a crime, to teach it to the Finnish students, that once they learn it, it even starts to shift some of their thinking and behavior?

Sonja Amadae: Yeah.

Tristan Harris: Finish kids, students, they are naturally more cooperative, creating a more trusting society. And to introduce game theory to them interpersonally means you’re changing the basis of trust, you’re changing the kind of society that gets created. And we don’t want to live in the society that is purely ruled by game theory. We want to look-

Sonja Amadae: Strategic rationality.

Aza Raskin: Exactly, and that’s sort of the game theory dilemma, if you will.

Tristan Harris: Once you see it that way, it’s almost its own kind of memetic infection. It actually infects everyone else’s thinking. The more people think that way, the more people are operating from a calculated place, the more people’s speech is calculated, the more they start to outcompete others, and the more that group starts to outcompete everybody else who’s not operating with game theory. So it has this dominating, totalizing quality. You can see it like a global virus, like coronavirus, but it’s a game theory virus colonizing the world and bringing more people into that mode of reasoning.

So, theoretically, actors could actually find some authentic, trustworthy place. There are jokes about, what was it, Esalen doing hot tub diplomacy, where you had some of the Soviet nuclear scientists with the Americans. I don’t know if they were nuclear folks, but I know there were people that were involved, and there are these jokes about hot tub diplomacy: you’ve got to get people in a hot tub, just actually talking to each other as raw human beings, reckoning with what’s actually at stake. But to do that, you need authentic communication. You need: you are a trustworthy actor who’s communicating with me honestly about what you actually feel, and I’m a trustworthy actor who is receiving your communication and communicating honestly in return. And in a way, the whole problem is trustworthiness.

So, when people start to shift from communication that’s honest to communication that’s calculating, the word communication is almost a false idea; we’re actually signaling to each other. I’m speaking tokens at your brain that I’m calculating, and you know that I’m speaking tokens at your brain, so then you counter-respond with tokens at my brain. You see how game theory starts to make the whole world feel inauthentic, make the whole world feel calculating. And if we don’t do something about it, we end up in this bad outcome, and that’s what nations do. Right? North Korea sends a calculated statement where they use exactly these words, but not those words, because they’re trying to escalate in this tiered signaling regime.

You’re bringing up so many important points about the way that communication is so fundamental, but also about the way that communication itself doesn’t get to be a useful tool, because it becomes itself colonized by game theory.

Aza Raskin: And just to build on that a little bit: the game theory dilemma is that if we can all see that the world created by everyone operating on game theory, and then by AI, which operates on game theory perfectly, is a world that either can’t exist or that nobody wants to live in, then by seeing that, we create the opportunity for choosing something much more human.

Tristan Harris: And just to double underline why AI is so central to this conversation, as we said in the AI Dilemma talk we gave several years ago: AI arms every other arms race. If there’s a military arms race, AI arms and supercharges the military arms race. If there’s a corporate arms race, or an A/B-tested memetic political communication arms race, AI will arm that arms race too. The reason we have to reckon with game theory itself is that AI is like the maximization of game theory logic, which is its own kind of singularity of catastrophe. And so AI is almost like a gift, a chance to actually look at the inadequate framework of game theory. It’s been inadequate all along, but we kept kicking the can down the road; now, because AI makes every problem that comes from game theory so visible, we have to reckon with it.


Aza Raskin: So, in the search for solutions about how we escape game theory, it’s really important for us to look at, well, what are the assumptions that game theory makes about human nature, so we can start finding where there are cracks. So can you outline, what are the assumptions that game theory makes about human nature?

Sonja Amadae: So according to game theory, value has to be scarce. And since game theory says that everything valuable can be accounted for in its metric accounting system of what is valuable, then everything that humans would value would need to be scarce. But look at, for example, my favorite, the Maslow pyramid, where you look at all the different levels of what has value. If you look at esteem, self-confidence, all of the higher levels of the Maslow pyramid, they’re usually positive-sum. If someone gets a good night’s sleep, for example, that usually doesn’t take away from somebody else getting a good night’s sleep. Or if somebody feels self-esteem, that shouldn’t detract from somebody else’s. So right away, we’re in a world where all of the things that we can put a valuation on are scarce, and we’re going to be competing over them. And actual relationships, friendship, love, family, having children, most of what we value, I would argue, is actually these positive-sum goods that are never even going to begin to enter into some kind of game theory payoff. Right? That’s the word: what’s the payoff?

Tristan Harris: And just for listeners, this is Maslow’s hierarchy of needs, a framework that Abraham Maslow came up with for the different levels of human needs, starting at the base foundational level of shelter and sleep and biophysical needs, and going up to more abstract needs of self-esteem, and then eventually self-actualization, love, belonging, community.

And your point is that those things are not zero-sum. If I have esteem, it doesn’t subtract from yours. This is why corporations and organizations are always doing appreciation days: “we really appreciated this employee who did this and this and this.” These are ways of doling out more of a fulfilling society that’s not zero-sum.

Aza Raskin: And there’s also hearing in there the assumption that only things that can be measured matter, because only then can you reason on them. So, how do you put a number on love or on friendship? And so then, game theory just doesn’t have anything to say about it, so it doesn’t model it.

Sonja Amadae: No, it’s worse. It will do a Sophie’s Choice move and say, “No, but you will save one child before the other if there’s a fire.” And that’s the horrible thing about the way game theory does valuation of what’s important to people. That’s what von Neumann would say: “No, you can always put someone in a situation where they’ll need to choose. And when they’re making that choice, then you can do that preference architecture of mapping what people’s desires are, and maybe even their intentions.” So, it’s very insidious, because it lifts us out and it constructs a world. If you’re creating institutions according to this logic, you’re constantly putting people in situations that feel non-navigable, where they start perceiving and acting in a world according to that fundamental assumption that anything valuable is scarce and competitive. It’s very frightening. It’s like a nightmare. It’s like putting ourselves in a nightmare world and then saying, “Oh, but you’ll never wake up from this nightmare.”

Tristan Harris: I think it’s important to note that a world that has been colonized this much by game theory, by what is effective and what is Machiavellian, selects for the dark triad characteristics: narcissism, Machiavellianism, and psychopathy, the inability to empathize with others. Because the better you are at not empathizing with others, the more you can act with cold rationality, the better you’ll do at those kinds of cold games. The more Machiavellian and strategic your mind is, the better you’ll do at these games. And the more narcissistic and self-important you are, the better you’ll do at these kinds of games.

And so when you look out there in the world and you say the world looks like it’s run by psychopaths, well, that’s because a system run more and more by game theory selected for those who would be complicit, who wouldn’t have a problem playing that perverse game. It takes people who might even start out compassionate and warm in their lives, and the ones who don’t want to keep playing the game burn out and do something else. The ones who do want to keep playing are the ones capable of becoming those dark triad folks. And I want people to know that this doesn’t mean that’s the vast majority of people. It’s actually a small set of people who’ve been selected for and put in the top positions of power.

So, you were getting through the assumptions and you just gave us the first one of game theory that was-

Sonja Amadae: The assumptions. The other is this essentialism: the claim that this is not an invention, this is a discovery. This idea that we evolved to be these machines that have to propagate, and the way you do that is to be the perfect strategic actor. So, it’s an essentializing of this rationality, and that reinforces the sense that there’s really no alternative. Those of us who might want to be a different way will get suckered, will fall by the wayside, all of those bad things.

And then the other assumption: that we are programmed to be this way means there is no alternative. That you cannot but be an individual competitor, a strategic competitor, or you’ll pay the price.

Aza Raskin: Let me see if I’m getting it right. So the core assumptions are: essentialism, that we’re programmed to be strategic competitors, so that “if you’re rational, then you do X” becomes prescriptive, not just descriptive. Scarcity, that only scarce things have value, hence competition is inevitable. And the last one, that there’s no alternative: strategic competition is non-negotiable. If you don’t play the game, you lose. If you opt out, you lose.


Tristan Harris: So let’s dive into these core assumptions now. If these are the assumptions that game theory uses to lock in this one way of seeing the world, how would we explore them, one by one, and see where they’re limited?

Sonja Amadae: Well, the first one, value, is easy, because, I’m not sure about everyone, but many people probably do feel that there are aspects of their lived experience, spending time with a loved one, or feeling that a person is in some kind of pain and having that empathy. I think most of us experience the higher levels of the Maslow pyramid and know that those are not zero-sum goods. They’re inherently positive-sum: if one person has self-esteem, it doesn’t take away from another person’s self-esteem. Not if you’re near the top of the Maslow pyramid. Maybe for a narcissist, if someone else has self-esteem, you’d want to destroy it, but not for mature adults who have grown into the top of the pyramid. So, that one I think is pretty easy to grasp. And then it’s just a question: how do we bring that love, empathy, and those positive-sum goods into our world? That would be the next question.

So, I have spent a long time thinking about that, and I think it starts with understanding this logic of the Prisoner’s Dilemma, because if you’re in the world of scarce goods, everything is a Prisoner’s Dilemma, and it is non-navigable. But the way out of it, and I think it’s so simple, is that you just ask yourself the question: if the other guy went ahead and cooperated ahead of me, do I cooperate or not? Did you believe my signaling that I was trustworthy? If I’m actually not a game-theoretic, strategically rational actor, I will cooperate if the other guy does. And then what you’re trying to build is assurance and trust, based on the fact that I am trustworthy. And we all know whether we’re trustworthy, and trustworthiness just comes down to: do I cooperate if the other person does?
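Sonja’s rule, cooperate if and only if the other person just did, is essentially the tit-for-tat strategy that won Robert Axelrod’s iterated Prisoner’s Dilemma tournaments. A minimal sketch (the point values are illustrative, not from the episode; higher is better):

```python
# A minimal iterated Prisoner's Dilemma. "C" = cooperate, "D" = defect;
# payoffs are (my_points, their_points), and the numbers are hypothetical.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect: I'm the sucker
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The purely 'rational' game-theoretic actor: defect no matter what."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the other's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): trust, reciprocated, pays
print(play(always_defect, tit_for_tat))  # (14, 9): the defector "wins"...
# ...but two defectors together earn only (10, 10).
```

Head-to-head, the defector beats the conditional cooperator, which is exactly the “cooperation is for suckers” logic; yet a pair of conditional cooperators (30 points each) does far better than a pair of defectors (10 each). That gap is the crack in the assumption that cooperation is irrational.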

And then you’ve broken out of the Prisoner’s Dilemma, and you’re starting to think about value in ways where it expands into two major concepts. One is solidarity, where you feel solidarity in a common cause with other people and you’ll fight for that cause. And we know this. Look at Tiananmen Square in China. Look at that video that lives on in all of our minds, of the man standing in front of the tank, who probably did get run over. Why did he do that? That was not strategically rational. But the people that were protesting, over and over again in history, like in the Gandhian peace movement, had solidarity, which meant they had a way of connecting and working together that was very powerful.

Tristan Harris: They stepped outside the logic of “all this was inevitable, there’s nothing that we can do,” and they did something that broke out of it. They were trustworthy, and somehow the actions they took tapped into something in the collective consciousness that broke through and popped out of some of the containers.

Sonja Amadae: Yeah, and a lot of work in game theory has been to say that that is irrational; that if you are able to work with solidarity, that’s evil, it’s communist, it can only happen if there’s some kind of a dictator incentivizing people and controlling them. That it’s not natural for people to have solidarity in terms of some kind of a connection and a common cause.

And the other thing is commitment, and commitment basically means that if you promise something, you go through with it. Finland, for example, is such a high-trust society that if you give your word on something, then that is who you are. That is stepping entirely out of the world of game theory and saying, “I will carry through on my promise no matter what.” I mean, so banal, right, keeping one’s word? How did we lose that as fundamental to civil society? How did we lose the idea that that’s just a fundamental choice for being a moral agent in a political economy? That’s just baffling.

We have to combat that by... It’s very subtle and simple, but we have to believe what we say. And believing what we say sounds so trivial, but it’s actually pretty difficult, because how many times do you just say whatever it takes to get some outcome, versus believing what you’re actually saying? A basic duty of being a citizen in society is stating what we believe, and then trying to make our statements true. So, those are three pretty basic antidotes that we’re all able to put into action.

Tristan Harris: So, let’s talk about how this all connects to the AI arms race. RAND, the same nonprofit defense think tank that has been involved in research on nuclear game theory and deterrence, has also been doing research on the military and strategic implications of AI since the 1950s. And AI was framed exactly like nukes: an existential technology requiring strategic dominance, where fear drives the race and game theory legitimizes the fear. If anything, game theory got even more powerful inside the reasoning about AI, because AI is unique in that it can create step functions in my knowledge of physics, or my knowledge of math, or my knowledge of energy production.

And those step functions in any of those scientific domains could create a step function in military domains or a step function in industrial domains. If suddenly you can produce energy an order of magnitude more cheaply than me, or produce all goods an order of magnitude more cheaply than me, or suddenly produce an infinite supply of weapons in a way that I can't, then because AI is a race that arms every other arms race, a race to these step functions, it actually favors this kind of race to an asymmetric advantage. Which then becomes the policy, which then becomes the view that we shouldn't do anything to regulate or set guardrails on this at all. And it's why you currently have in the United States a proposal for federal preemption on AI, meaning we don't want any states to regulate AI. We're going to stop and actively prohibit regulation at the state level, because we need a no-holds-barred race to asymmetric advantages in every sector.

Sonja Amadae: Yeah. And then the AI is programmed to be a strategic rational actor, because rationality is this thing that is game theory. When you put those two together, we interpret that there has to be this AI arms race. The US wants total strategic dominance in AI for that exact reason: that it's going to give the advantage where there's no coming back once the US dominates in AI. It's escalatory in the sense that the AI will keep feeding back that logic of what counts as rational, and then the human makers of policy will say, "But we need to keep the asymmetric advantage." And that's the ultimate winning of this paradigm.

And then it is harder, because you and I can take those easy steps of knowing there's value beyond scarce value. We can be trustworthy. We can believe what we say, and we can cooperate with others and form groups. But how do we break that into the high-stakes policy environment, especially when you see that the people in that environment have been trained for years in this way of thinking? So, how do you redo this, especially since the AI is going to be amplifying that set of beliefs? That's where I think we are right now, and I think that's quite a predicament.

Tristan Harris: This reminds me also of an example, one I think we might've mentioned on this podcast before, of how you break out of this trap. The analogy isn't perfect, but think of vicious spirals in relationships. Two people are in a relationship and they're in a vicious spiral where one starts criticizing the other. The only way the other knows how to respond is, "Well, you criticized me, so tit-for-tat, I'm going to criticize you. Did you know that you left the dishes out, or you did this other bad thing?" And then you end up in a downward spiral where both parties don't feel good at the end of the day, and they're left with a collective relationship commons between them that is degraded by the fact that they've both openly criticized each other.

And if you're operating in that paradigm, it might seem like that's the only thing that could have happened: clearly that person criticized me, so that's the only route we could have gone from there. And then you have Marshall Rosenberg come along, the inventor of nonviolent communication, who says, "Actually, it might appear that way, but it turns out there's this other way of communicating." I don't want to call it a strategy, because that makes it sound calculated and game-theoretic, but you basically respond with what it felt like to receive that: when you said this, I noticed I felt that. And you just start with that. Because I'm sharing what the effect of what you just said was, what it did to me, and in sharing what I feel because of it, now the other person is empathizing with the impact of their actions. So, it's creating connection at a higher dimension than the value metric of who's winning the war of that communication exercise.

And in a certain way, you can think of that as a creative move. Maybe people had it in other languages and other tribes throughout history, but Marshall Rosenberg put a new move onto the menu of human relationship communication dynamics.

And Aza, you've talked about how, just like there was Move 37 in AlphaGo, the AI that Google DeepMind built that beat the world champion at Go, it came up with a new move that no human had ever played, known as Move 37. Imagine AIs that are simulating the way this could go and can actually discover Move 37s that are positive sum, that look for cooperative dynamics in situations where everyone was convinced there's no other move, definitely no better way to do this. And whether it's Move 37 for relationships or for treaties, Aza, you've talked about this for treaties. What would Move 37 for treaties look like, an AlphaTreaty? Maybe there are ways that AI can be a tool in searching for positive-sum games in a world that looks like we're locked in zero-sum games.

Aza Raskin: And that brings to my mind, Tristan, I think both of our favorite work in AI alignment, which is about self-other overlap. Because a lot of what you're saying here about nonviolent communication is that you are internalizing the effect of your words on someone else. It becomes part of you. There are mirror neurons. And in self-other overlap, this research is very interesting: they train an AI not to be able to distinguish between I and you, self and other. So that sentences like "You stole because your family needed food" and "I stole because my family needed food" become the same, because I is equal to you.

Sonja Amadae: I think it's really interesting that AI has been programmed to use the personal pronoun I, when we can wonder whether it has the embodiment of a human communicator. And actually, I put it to some of my colleagues that maybe if we'd never let AI use a personal pronoun, then at least we could have disambiguated it, if that had been a hard-and-fast regulation. And my two colleagues thought that actually would've helped us not be where we are. But if we are trying to solve the alignment problem and we don't really care whether the AI refers to itself as I or not, then it does seem it might be possible to program it to not have that barrier or distinction. But that would be a bit of an experiment.

Aza Raskin: Well, and it’s been tried.

Sonja Amadae: Well, but if we're going to solve alignment with that and we just cast it loose, it would be interesting to see what happens, writ large. But I still have worries about language changing: whether language is a strategic signaling game, and how language would function between I and you if we dissolve that barrier. Because I think we'd want not to look at language, or treat language, or experience language as a means of control. Yeah.

Aza Raskin: And I think this is so important with AI, because up until recently, when ChatGPT launched, we prompted AI. But what's changing in 2025, and certainly in 2026, is that AI prompts us. And so AI is A/B testing us. We've never had this before. We've had politicians and marketers trying to figure out what the most effective language is, but they had a small surface area over our lives. AI is increasingly in relationship with major portions of the population. What is it, one in eight human adults are now in some kind of communication relationship with AI? And so AI can search through all of language space to find the most effective ways to manipulate us.

Sonja Amadae: Yes.

Aza Raskin: And that is a kind of threat that humanity has never had to deal with.

Sonja Amadae: Yeah, and you had that sentence in your video, the main one you have on the website, where you talk about how language is now the fundamental layer under all of these different domains that AI has been unleashed on. Because language is how we socially construct the world, we're letting AI take control of this profound tool of social world-construction, with whatever logic is programmed into how it uses language. And it does have the ability to just totally dissolve our social reality if we don't find a way to control it. I thought that was probably the most profound of many profound moments in your AI Dilemma talk.

Tristan Harris: The real main thing we've been exploring here is whether, with AI bringing game-theory logic to its zenith, there is a way out. And I'm curious about the ability to have this be a jubilee, a break: the maximization of game theory leading to the desire to change game theory, to wake up from the single-cellular, narrow, self-interested logic that dominates the world into a kind of multicellular, collaborative logic, in which we can feel the prospect of all of us losing more strongly than we feel the fear of me losing to you. But in order for that to be true, the way in which all of us lose has to be extraordinarily clear, and trustworthily communicated to and received by every agent who is in charge of making decisions about the way this goes.

Sonja Amadae: I have three thoughts. One, we have a lot of freedom of choice and that starts with being trustworthy. And that starts with, if the other guy cooperates, I will. If the other guy doesn’t cooperate, I’m not going to cooperate, but if the other guy cooperates, I will. So there is freedom of choice that we have fundamentally as agents. Then I was thinking about the nuclear movie, The Day After.
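[Editor's note: the conditional rule Sonja states here, cooperate if the other side cooperated, withhold if they didn't, is essentially Axelrod's tit-for-tat strategy for the iterated Prisoner's Dilemma. A minimal sketch; the payoff numbers are the standard textbook values, chosen for illustration and not taken from the episode:]

```python
# Tit-for-tat in an iterated Prisoner's Dilemma.
# Payoffs are the conventional illustrative values: (my move, their move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first; after that, mirror the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # what each side has seen the *other* do
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two conditional cooperators lock into mutual cooperation...
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# ...while a pure defector gains only a one-round edge before being mirrored.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The point of the sketch is Sonja's: conditional cooperation is a free choice available to each agent, and against another conditional cooperator it outperforms unconditional defection over repeated play.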

Aza Raskin: The movie Sonja is referring to here is called The Day After. It's a 1983 movie that depicts the brutal aftermath of a full-scale nuclear conflict between the US and the Soviet Union. It was seen by millions of Americans; in fact, it was the most watched television film in history at the time, and it was screened for President Reagan and the Joint Chiefs of Staff. Reagan later said that the film changed his mind on US nuclear strategy and encouraged him to pursue de-escalation with the Soviet Union.

Sonja Amadae: Maybe the point there is to create a Hollywood blockbuster that would be that for this moment, one that builds on the idea that we can undermine those assumptions and that we have individual freedom outside of the AI world, to create that sort of wake-up moment.

And then the third thing would be, I don't know about the major programming parties at the AI companies. You guys are probably way more in touch with those people. But there is no reason we need to be stuck with this orthodox, strategic form of rationality. I don't know if the DeepMind scientists' approach is radical enough, but why are we stuck with a Prisoner's Dilemma, prisoners-of-reason type of approach to strategic rationality? Wouldn't it be possible to instantiate a different kind? I mean, I think that if people could be, I don't like the word educated, but if there could be some kind of participatory environment where leaders are exposed to alternative ways of thinking, carefully thought through the way you guys generate content.

But those three things together: making people feel that they can opt out at an individual level and that they have the tools, even knowing where it is hard to opt out; something like a collective imaginative event that captures this moment; and then just going back to the foundations and realizing we have so many alternatives, so much goodwill, and so many alternative realities and constructions of where we could be to draw from. So, I leave this conversation optimistic, thinking that at least those three things, and some others, would take us in a better direction.

Aza Raskin: One of the things, just to summarize, that I think The Day After did was that it made the cost of defection negative infinity; it became existential. So now cooperation becomes the rational thing to do. And I think the point of this conversation is to say that with AI, game theory becomes destiny, and that destiny is something nobody wants, which also has a payoff of negative infinity. And so if we can all see that, and see it clearly, cooperation does become the rational thing.

Tristan Harris: Yeah, clarity, we say in our work, creates agency. And if we have clarity that the current destination is an outcome no one wants, we can choose something else. And it's a difficult picture. It is probably the hardest problem humanity has ever faced, certainly the hardest coordination problem we've ever faced. And yet, in this whole conversation, I'm reminded of a quote I was pointed to recently from Luis Alvarez, the winner of the 1968 Nobel Prize in Physics and perhaps the greatest experimental physicist of the century, who remarked that the advocates of these game-theoretic schemes were "very bright guys, no common sense." There's a kind of over-intellectualization, where highly intelligent people build elaborate abstract models and trust their mathematical formalism too much, but ignore obvious real-world constraints, incentives, human behaviors, and deeper truths of human nature, inside of which may lie the answer to snapping ourselves out of this cold mathematical logic.

And so maybe, since we're appealing to the high-credibility gods here, the inspiring figures of history, we can take Einstein's point: what is the higher level of consciousness we need to be operating from to snap out of the lower-level consciousness of the pure mathematical logic of game theory?

Sonja Amadae: Well said.

Aza Raskin: I wanted to just call back: there were, as I understand it, two competing schools that came after Darwin to interpret Darwin. One said it's just brutal competition. The other said, well, this is about mutual aid and cooperation. I think Darwin was the first person to ask, "Where do the noble traits come from? Altruism and heroism, where do they come from?"

And we have an episode with David Sloan Wilson, who worked closely with the sociobiologist E.O. Wilson, and they have this wonderful phrase that sums it all up. It's why the selfish gene is sort of wrong, why it misses this: "Selfish individuals do outcompete altruistic individuals. But groups of altruistic people outcompete groups of selfish people, and everything else is commentary." Game theory misses the kind of noble traits that come from groups operating together, because noble traits are about giving something up for a greater whole.
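[Editor's note: the Wilsons' aphorism has a simple arithmetic core, a Simpson's-paradox effect that a few lines can show. This toy model and all its numbers are illustrative assumptions, not taken from the episode or from Wilson's own models:]

```python
# Two groups of 10: one mostly altruists, one mostly selfish, as (altruists, selfish).
# Altruists pay an individual cost to boost the whole group's growth;
# selfish members share the benefit without paying the cost.
groups = [(8, 2), (2, 8)]

def reproduce(altruists, selfish):
    n = altruists + selfish
    group_growth = 1 + altruists / n   # more altruists -> faster group growth
    cost = 0.2                         # altruism is individually costly
    return altruists * (group_growth - cost), selfish * group_growth

before = sum(a for a, s in groups) / sum(a + s for a, s in groups)
after_groups = [reproduce(a, s) for a, s in groups]
after = sum(a for a, s in after_groups) / sum(a + s for a, s in after_groups)

# Within EVERY group the altruist share falls (selfish beat altruists locally),
# yet the GLOBAL altruist share rises, because the altruist-heavy group grows faster.
print(round(before, 3), round(after, 3))  # 0.5 0.529
```

Selfish members outgrow altruists inside each group, exactly as the first half of the quote says, while the population as a whole shifts toward altruism because the cooperative group expands faster, the second half.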

Sonja Amadae: Yeah, it's team reasoning. And with team reasoning, you break entirely out of game theory. And really, that's where we are on the planet now, right? I mean, if we don't figure out a way to cooperate rather quickly... We've already been colonized by institutions operating by the game-theoretical logic, but once the AI is building those institutions and changing language and changing what's normal, raising ever higher bars of strategic competition, if we don't find a way to derail from that, it's going to be pretty desperate. But knowing it's an option, that we can be trustworthy and we can believe what we say and we can have value that's not scarce, maybe just that is an inner light that starts to make a different imagining possible. If we can start to believe there is an alternative possibility, then maybe that's the first step. With some very minimal building blocks, maybe we can start to create other social patterns, and not lose hope that we need to be these strategic cutthroat actors.

Aza Raskin: Sonja, thank you so much for coming on Your Undivided Attention. This was, I really think, one of the most important, completely under-the-radar conversations that needs to happen.

Tristan Harris: Yeah, absolutely. Thank you, Sonja, so much for coming on Your Undivided Attention. We're so grateful to have you. And your book, Prisoners of Reason, is just so illuminating in highlighting this for everybody. So, thank you so much for writing it and for coming on.

Sonja Amadae: I’m delighted. Really nice to meet you both.

RECOMMENDED MEDIA

“Prisoners of Reason: Game Theory and the Neoliberal Economy” by Sonja Amadae (2015)

The Cambridge Centre for the Study of Existential Risk

“Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern (1944)

Further reading on the importance of trust in Finland

Further reading on Abraham Maslow’s Hierarchy of Needs

RAND’s 2024 Report on Strategic Competition in the Age of AI

Further reading on Marshall Rosenberg and nonviolent communication

The study on self/other overlap and AI alignment cited by Aza

Further reading on The Day After (1983)
