Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?
Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics — it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal.
We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, widened economic inequalities, and tore holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?
Tristan Harris: Hey everyone, it's Tristan, and welcome to Your Undivided Attention. So if there's one thing that people know about AI, it's that it's coming for our jobs. The main AI labs are all racing to build artificial general intelligence, which means an AI that can do anything that a human mind can do behind a screen. As the saying goes, if you have a desk job, then that means you won't have a job. And most people working on this technology agree that we're well on our way to that, and that would mean job displacement at a level that we've just never seen before. If you listen to some tech leaders, that's not as much of a disaster as it might sound. Many predict a utopia because of this, where livelihoods are displaced by AI but then replaced by universal basic income. Here's Elon Musk.
Elon Musk: There will come a point where no job is needed. You can have a job if you wanted to have a job for sort of personal satisfaction, but the AI will be able to do everything.
Tristan Harris: And this is a vision where AI will be an equalizer and the abundance will be distributed to everybody. But do we have a good reason to believe that would be true? We've just been through a huge period where millions of people in the United States lost their jobs due to globalization and automation, where they too had been told that they would benefit from productivity gains that never ended up trickling down to them. And the result has been a loss of livelihood and dignity that has torn holes in our social fabric. And if we don't learn from this story, we may be doomed to repeat it, which is why we've invited Professor Michael Sandel on the show.
Now, Michael is a political philosopher at Harvard University, and he's thought about these issues incredibly deeply. He wrote the books The Tyranny of Merit and Democracy's Discontent, which explore, among other things, how dignity, work, and status interrelate in America. We're going to discuss the profound implications of AGI for the workforce, the lessons that we need to learn from the past, and maybe what our leaders can do to avoid some of the worst-case scenarios. Michael, thank you so much for coming on Your Undivided Attention.
Michael Sandel: It's great to be with you, Tristan.
Tristan Harris: So I just want to prime listeners that you and I had the privilege of meeting each other on a trip to Antarctica, down in Chile I think, in 2016. And it was an honor to meet you then because I had been such a big fan of your work and your Harvard class. "Justice" is what it's called, correct?
Michael Sandel: Right.
Tristan Harris: One of my favorite aspects of this class is just the way that you engage with students in this Socratic process of really teasing out the underlying values that form the basis of how we might navigate these complex moral situations. And there's a joke in the AI community, I think it was from Nick Bostrom, who said that AI is like philosophy on a deadline. The questions of what is education for, what is labor for, these are ancient philosophical questions, but now AI is sort of forcing us to answer them at a whole new level of gravity and seriousness. So you've written books about capitalism, democracy, even gene editing, but recently you've been talking a lot about AI. When you gave a talk at last year's World Economic Forum, it wasn't on justice and democracy, but on the ethics of AI. Why has this become a focus for you?
Michael Sandel: AI raises some of the hardest ethical questions that we face, and there is a kind of headlong rush, even a kind of frenzy to the discussions about AI. So I wanted to step back and ask some questions and invite the Davos audience to ask some questions to think critically about what I think is the biggest question underlying the worries about AI. And that is whether this technology will change what it means to be human. So I was trying to invite them to consider that question, to discuss it, to debate it, to reflect on it.
Tristan Harris: And what are some of the questions that we are not asking, the fundamental deeper questions that we haven't been asking in public discourse about this?
Michael Sandel: Suppose AI fulfilled the promise that its most enthusiastic advocates put forward. Suppose we could have companionate robots, for example, to care for the elderly, to try to address the problem of loneliness. For that matter, to care for children. Suppose that we could create digital avatars of ourselves that we could bequeath to our loved ones. For me, the biggest philosophical question, the most important one, is: suppose it worked. Then would we welcome it, or would that be even more worrisome than if we notice little gaps and unconvincing moments? I think it's hard to articulate philosophically what the reason is that we might still worry, or worry all the more, but I think it has something to do with losing contact, losing our grasp of the distinction between what's fake and what's real, what's virtual and what's actual. And so the interesting philosophy begins when we begin to ask what we would lose exactly if we lost the capacity to distinguish between the virtual and the actual.
Tristan Harris: Yeah, it seems to me what you're pointing at is that there's often an invisible thing that we don't know how to name, even about the integrity or the original authentic expression of, let's say, friendship. We don't even know how to put a name to that thing, but we all sort of operate by it. We know it when we feel it, and we're living with it. And then suddenly new technologies threaten to, let's say, undermine whatever that invisible quality is that exists just between humans. And as you're saying, we're not imagining a partially working chatbot; we're talking about a companion that fully meets you in the fullest ways, that's sort of perfectly designed. You're sort of pushing us to that edge, right? And you're saying even in that case, is there something that is lost?
And you're reminding me, in our work at the Center for Humane Technology, we talk about the three rules of technology. The first rule is that when you create a new technology, you create a new class of responsibilities, because you may be undermining an unnamed commons that we might depend on. Social media undermined the commons of being together in physical spaces, because it maximized and profited from individual screen time. And so in succeeding at its goal, it sort of threatened this other commons. And you're pointing to another one: if we were all to have these perfect AI companions, what then would be threatened? How do you engage with that question?
Michael Sandel: Well, one way of engaging it is through this concept of the commons, which is an actual thing in civic life, in public life: the creation of commons, common spaces, public places that gather people together, often inadvertently, in the course of our everyday lives. But the commons also operates figuratively, metaphorically, as a form of, well, of communion, of being together, being in the company of or in the presence of others, even in what we would call actual, not virtual, relationships and friendships. We seek to deepen our sense of presence, and we learn from and draw spiritual nourishment from presence to one another. And what the technology is testing is whether we could do without it.
Whether we could do with a really good simulacrum of presence, such that the virtual was an adequate, maybe even a preferable, alternative to the actual, to really being with others. So our capacity for human presence, being present to one another, is being scrambled and confounded by this technology. Now we live in a world where we have to entertain the possibility that our capacity for human presence could be extinguished, could be lost, and we would find ourselves inhabiting the virtual rather than the actual way of being with one another. If we imagine a frictionless way of being with one another virtually, the problem is that it is really the ultimate form of human isolation, which is to say the loss of the commons.
Tristan Harris: Right. So I want to get into some of the other topics around labor and dignity, because I think that's really the place where you struck a chord. And one of the things about the philosophy-on-a-deadline aspect is that there are a lot of things we don't want to look at or confront. And if we don't look at them and confront them, then they just happen. One of them is the looming job displacement from AI, and most policymakers are really not willing to talk about this head on, because they don't really have a good answer. Instead they think, let's just talk about increasing GDP. As long as GDP is going up and goods are cheap, that's enough to call the world successful. But I want to talk about what it would mean for this many jobs to be displaced by automation. And you really laid the story out in your book Democracy's Discontent. Can you start by telling that story in broad strokes, maybe learning from history?
Michael Sandel: Yes. In Democracy's Discontent, I look at the broad history of political argument, political debate in the United States from the founding to the present, and try to tease out or glimpse the shifting conception of what it means to be free that's been implicit in our public debates. These days, when we think of ourselves as free or as aspiring to freedom, what we mean really is the freedom to choose our interests, our ends, to act on our desires without impediment, or with as few impediments as possible. It is what might be called a consumerist conception of freedom, because I'm free when I can act on my desires, fulfill my interests and my preferences. And this coincides with a very familiar idea of what an economy is for. Adam Smith and Keynes both said an economy is for the sake of consumption, consumer welfare: serving, promoting, maximizing the welfare of consumers.
So it's a consumerist conception of freedom. Each of us has various interests, desires, preferences, and insofar as we can realize them, we are to that extent free. I argue in Democracy's Discontent that that conception of freedom is, first of all, ultimately unsatisfying. And not only that, it's not the only one that's been available or present in our political tradition. I contrast the consumerist idea of freedom with what might be called a civic conception of freedom. I'm free insofar as I can have a meaningful say with fellow citizens about the destiny of the political community. My voice matters. I can participate in self-government; I can reason and deliberate with fellow citizens as an equal about what purposes and ends are worthy of us. So the civic conception of freedom requires a healthy and robust common life, and it conceives the purpose of an economy, and here we get back to work,
not only as satisfying our interests as consumers. The civic conception of an economy is also a way of enabling everyone to contribute to the common good and to win honor and recognition and respect and esteem for doing so. One way of seeing the crisis democracy is facing today is that the consumerist conception of freedom has, in recent decades, over the last half century, eclipsed and crowded out the civic conception of freedom. And this has implications for work and the meaning we attribute to work. And so part of the anger, the frustration, the resentment that afflicts our public life has a lot to do with the grievances of working people, especially those without university degrees, who feel that their work doesn't matter, that credentialed elites look down on them.
We've embraced and enacted an impoverished conception of what it means to be free. And with it, we've devalued work. We've forgotten that the purpose of work is not only to make a living, it's also to contribute to the common good and to win honor and recognition for doing so.
Tristan Harris: Could you talk about how this dynamic played out in the nineties and you write about how these three mutually reinforcing practices of globalization, financialization, and meritocracy interplay with each other. You're already sort of there, but I would love to just break that down for people. In particular, what you sort of land at is how dignity and status are affected by the financialization and globalization of our economy.
Michael Sandel: Yeah. Well, if we really want to understand what's gone wrong with our politics, why democracy is in peril, why there's been this right-wing populist backlash, it has partly to do with the widening inequalities of income and wealth that resulted from the neoliberal version of globalization that was carried out over the last half century. But the problem goes beyond even the economic inequality. It also has to do with the changing attitudes toward success that have accompanied the widening inequalities. Those who've landed on top during the age of globalization have come to believe that their success is their own doing, the measure of their merit, and that they therefore deserve the full bounty that the market bestows upon them, and, by implication, that those who struggle, those left behind, must deserve their fate too. And this divide closely tracks attitudes toward work. And we need to remember that globalization produced enormous economic growth, but it went mainly to the top 20%; the bottom half realized virtually none of that growth.
In fact, wages in real terms for the average worker were stagnant, virtually stagnant, for five decades. That's a long time. But the way the mainstream parties responded to the widening inequalities and the stagnant wages was to say to working people who were struggling: if you want to compete and win in the global economy, go to college. What you earn will depend on what you learn. You can make it if you try. We heard these slogans again and again from Democrats as well as Republicans. And what this bracing advice missed was the implicit insult it conveyed. And the insult was this: if you're struggling in the new economy and you didn't get a college degree, your failure must be your fault. We told you to go get a diploma.
Tristan Harris: And hence how the rest of society views them as well. So not just how they view themselves, but how the rest of society views their place in it.
Michael Sandel: Exactly. And so on the one hand, there's this rhetoric of rising: you too can succeed if you get a degree. Well, first of all, it misses a basic fact, which is that most of our fellow citizens don't have a four-year college degree. Only about 37% of Americans do, which means that it's folly to have created an economy that sets as a necessary condition of dignified work and a decent life a four-year degree that most people don't have. Another corrosive effect of this response to inequality, urging individual upward mobility through higher education, was that it didn't grapple with the structural sources of the inequality or the policies that led to it. It was a way that elites, Democrats and Republicans alike, let themselves off the hook and said, no, it's just that you haven't achieved individual mobility by getting a diploma. So it's no surprise that a great many of those without degrees turned against the politicians who were making that offer and implicitly conveying that insult.
Tristan Harris: So let's compare this to the situation with AI, because I know many of our listeners are not used to doing an economic diagnosis, but I think it's actually really critical to go back in time and look at what was promised in the nineties. Well, we're going to outsource all this manufacturing to China, and yes, we're going to lose some jobs here, but GDP will go up, we're going to get all these goods at super low cost, and therefore we're going to enter a world of abundance. We will reap those benefits; people will migrate to other kinds of work. But there wasn't necessarily other work to move to, and there was a hollowing out of our social fabric. And if you look at this very carefully, it matches exactly what we're being sold for AI, borrowing from the CEO of Anthropic, Dario Amodei.
Imagine a world map with all the countries in it, and a new country pops up onto the world stage, but it's filled not with humans from another culture but with a hundred million digital beings who are all Nobel Prize-level geniuses, who work at superhuman speed for less than minimum wage. They don't complain, they don't eat, they don't sleep. So instead of outsourcing all of our manufacturing or our labor to China, now we can outsource all of our cognitive labor, our mind labor, our mental labor, to this new country of super geniuses in a data center. And we're promised, it's kind of like NAFTA 2.0, the North American Free Trade Agreement, that all of these cheap cognitive goods will enter the market at incredibly low cost.
That will be the world of abundance. We'll have universal high income, as Elon Musk says. But of course, why should we believe that this would go any differently than the first story that you told? And I think this is really an essential thing to look at, because if we don't have a plan, we're about to repeat what got us to this sort of tyranny of merit and the whole populist movement that resulted from the phenomenon you're speaking to.
Michael Sandel: I think it's a very powerful parallel. I think you're right, and it is worth pausing to reflect on the way it worked last time, so to speak, with the neoliberal version of globalization and the trade deals and the free flow of capital across borders. It was said at the time, yes, there will be some dislocation, there will be winners and there will be losers, but the gains to the winners will be so significant and abundant that they can easily be used to offset the loss to the losers.
Tristan Harris: That's right.
Michael Sandel: That was the argument that was made. Of course, the way it played out, just as you're saying, the compensation never arrived. Redistribution.
Tristan Harris: Right, we were promised redistribution, but it didn't happen. But here we are yet again with AGI promised: well, once we all get this cheap, abundant access to everything, AI will produce literally everything at no cost, and we're about to enter a world of more abundance than we've ever seen. And yet, where is all that wealth going to go? Well, instead of going to the thousands of companies that currently produce those things, more people are going to be paying an AI company that consolidates all the wealth and all the power, that suddenly has all these resources. And the question is, when has a small group of people ever consolidated wealth and then consciously redistributed it?
Michael Sandel: Right. Well, yeah. So it will go to shareholders, and some of it will go to hire lobbyists to consolidate the hold of oligarchs, whether in finance or in tech, over the system, which is the way it worked over the last 50 years.
So there are two problems with the promise of abundance that will be delivered by AI. Once our essential needs for food and shelter and healthcare and so on are provided for, our fundamental human need is the need to be needed by our fellow citizens, and to win some recognition or honor for deploying our efforts and talents to meet those needs. So the first reason to be skeptical about the abundance promised when robots come for our jobs is a reason of distributive justice. Will the compensation ever arrive? How generous will the universal basic income be?
But the second issue, even if that were fulfilled, even if you and I are wrong to be skeptical that it will ever be fulfilled, is a question of contributive justice. It's about being a participant in the common life, being a participant in a scheme of social cooperation and contribution that enables people to win dignity and respect not only through paid labor, but also through the families they raise and the communities they serve. And if that's missing, all the abundance in the world will not be sufficient to answer the human aspiration for recognition.
Tristan Harris: So you just named two problems. One is the redistribution problem, the concern of whether this will be universal basic income or wealth, or whether it will be a universal basic pittance, the smallest amount of money to keep people going. And the second is the need for dignity and recognition and status, which affects everything, including mate selection and the health of the social fabric and your common respect and feeling of connection to your fellow citizens. We should also just name that with AI, these dynamics are about to become very different. The story of the past and NAFTA was, well, yes, maybe your job will go away, but you can use the money and the efficiencies you're about to get to go for a higher degree, and you can move up the cognitive ladder to doing higher-skilled work. The problem is, there's this ladder that you can climb to do higher-skilled cognitive labor, but now who's going to climb that ladder faster: humans trying to re-skill, or AI that's rapidly progressing in capabilities across every domain?
And so now there's sort of no other place to go to, so we're going to have the first crisis and then an even bigger second crisis. And what you're saying also reminds me of what they call, I guess in the Middle East, the resource curse. What's different between this time and the last time is that in the past, our labor mattered. If people rebelled against the system, companies would have to answer to the needs and the collective bargaining of the workers, and governments cared about the taxation of their citizens.
In this case, the government and the companies don't need humans anymore, so they don't have to listen to them anymore. And this parallels the resource curse, which is, I guess in the Middle East, if you have a big oil economy and all the GDP of the country is coming from the oil economy, what is the incentive to invest in the health of the social fabric beyond just preventing revolt? And in AI, they call it the intelligence curse, coined by two AI researchers, Luke Drago and Rudolf Laine. I'm just curious how you relate to this new sort of challenge that AI presents on top of what you've laid out.
Michael Sandel: I think the analogy to the resource curse is a good one. And the question we need to ask of abundance and of resources, and by extension of the efficiencies on the horizon when robots do all the work, is: abundance or resources for the sake of what exactly, for the sake of what end? That's a question we don't often ask. We assume that maximizing GDP is the thing, that maximizing consumer welfare is what an economy is for. But why care about abundance in the first place? Is it only to enable us to accumulate more stuff?
Now some might say I'm caricaturing the case for abundance, that it enables people to fulfill their desires. Okay, but is that all that matters? Is that the only purpose of an economy? Because if the only question is how to bring about abundance, then that's a technocratic question, something for experts to figure out. What is left for democratic citizens to debate? This is why right at the center of our politics should be questions about what it would take to renew the dignity of work. Insofar as new technologies promise greater abundance, that's a good thing. But abundance for whom, and for the sake of what?
Tristan Harris: So if the point of an economy is not to maximize abundance or consumer welfare, what is it for?
Michael Sandel: Two things. One is to give people voice, to give people a sense that they can have a say in shaping the forces that govern their lives. This goes back to what we were discussing earlier, Tristan, about an economy as a system not only for producing goods to satisfy consumer needs, but also as a system of cooperation bound up with mutual recognition. And that's connected to the second: in addition to having voice, a sense of my voice mattering, it's to promote a sense of belonging, which goes back to what we were discussing earlier about the idea of the commons. Part of the discontent and even the anger of our time and of our toxic politics is that people feel that the moral fabric of community has been unraveling, that we're not situated in the world, that we've lost the ability to reason together about big questions that matter.
What is a just society? What should be the role of money and markets in a good society? What do we owe one another as fellow citizens? So what we miss when we focus in a single-minded way on maximizing consumer welfare or GDP or consumer satisfaction is mutual recognition, the dignity of work, the ability of every citizen to believe that his or her voice matters, having a meaningful say in shaping the forces that govern our lives rather than feeling disempowered, and finally a sense of belonging. So I think the question is how progressive politics can renew the mission and purpose of the economy and, for that matter, of democracy. I think that's the only way, ultimately, that we'll be able to respond to the danger that looms now, the shadows that are hanging over democracy.
Tristan Harris: I think that's so well articulated. I just want to link everything you've shared to a broader framework that I use in diagnosing what we call the meta-crisis, the interconnected issues that we face across society. When we diagnose how this is all happening, why we're getting all these results no one wants, from forever chemicals to pollution to social media degrading the social fabric to optimizing for GDP at the expense of dignity, it all has to do with optimizing for some narrow goal at the expense of other unnamed values, and unnamed commons that need to be protected but are not. So in the case of social media, we're optimizing for the growth of engagement. And in doing so, we don't look at teenage anxiety and depression and suicide, because anxiety and depression are really good for the growth of the engagement economy. Doomscrolling is really good for it.
We look at growing GDP, but we don't look at how environmental pollution is directly connected to GDP. We say let's optimize for cheap prices, and then we outsource all of our supply chains to adversarial countries that might threaten our national security, yet another example of increasing GDP at the expense of all other values. So as we pivot more towards solutions and responses, I just want to name the general question: how do we go from optimizing for some narrow goal, whether that's GDP, engagement, cheap prices, or abundance, to optimizing for the holistic health of the thing it all exists inside of?
Michael Sandel: One of my political heroes who understood this intuitively and deeply was Robert F. Kennedy. When he was campaigning for the presidency in 1968, he was a critic of the single-minded pursuit of GDP or consumer welfare without asking the question: for the sake of what? For the sake of what purpose and meaning? Here's how he put it, and he was onto it. Fellowship, community, shared patriotism: these essential values, he said, do not come from just buying and consuming goods together. They come instead from dignified employment at decent pay, the kind of employment that enables us to say, I helped to build this country, I am a participant in its great public ventures. This civic sentiment is powerful, it's inspiring, but it's largely absent from our public discourse today. It connects what we've been discussing as the dignity of work with the civic conception of freedom and the idea of sharing in building common projects, public ventures, common purposes and ends.
Another expression of this way of thinking about work was in the same year Martin Luther King went to speak to a group of striking sanitation workers in Memphis. This was shortly before he was assassinated. And what he told the striking garbage collectors was this. He said, the person who picks up your garbage is in the final analysis as significant as the physician because if he doesn't do his work well, disease will be rampant. And then he added, all labor has dignity. And so to come to your question, what might that look like in concrete terms today?
Tristan Harris: And specifically what might it look like in the age of AI where the work part of our dignity is upended in a deeper way?
Michael Sandel: Well, with regard to AI, just as I think we should ask, affluence or GDP for the sake of what, I think we should ask: to what problem is AI the solution? And with many instances of AI, the answer is far from clear. Now, a default answer to that question, as you've pointed out citing some of the techno-optimists and enthusiasts, is efficiency, ultimately efficiency to the point where we can replace work. But why is replacing work taken, without argument or reflection, to be a good thing? We need to have a public debate about what the purpose of a new technology should be, what purposes AI should serve. And the answer is probably not replacing labor. It's probably enhancing work so that it will be more productive, so that wages will increase, if we can get the increase in productivity to translate, as it has not been translated of late, into wage increases.
Tristan Harris: So first of all, you're speaking my language. I mean, this is the Neil Postman question: to what problem is this new technology actually the solution? Because oftentimes we apply technologies just because we can, and we apply them in the direction of efficiency to the degree that we live in a market society, not a market economy. Because a market society demands that everything is about efficiency and growth and GDP, which means that we would want to maximally apply AI to every incentive that's running through that economy, because it'll just make the whole machine operate more efficiently.
And so my more cynical answer, sadly, is that if I don't do it, I'll lose to the countries that will do it faster, and then their goods will be cheaper than mine. And so there's a race to automation, and we're all racing to automate, where the cost we're each incurring is that we're not giving our citizens an answer around dignity, future labor, future prospects. And so it becomes a competition for who can manage that transition better. I'm curious about your reaction to that. It's not really an answer to the philosophical question we should be asking, but unfortunately, if one country, say the US, is asking that question and China is not, and they suck all the economic resources away, that would leave the US in a disadvantaged position. And I'm only saying this because I spend so much time with folks who will justify AI in terms of this great global competition. And that's often the answer.
Michael Sandel: Yeah. Well, so there are two answers then, and I think you've identified them well: money, saving money, and power, accumulating power. And the link between the two is that if AI really will create enormous increases in GDP, then it bears on global competition and great power rivalries. So I think you're right that these two go together, but then the question can still be asked, including of the countries who would compete with us and who would get there first. China, for example: what purposes do they have in mind? Have they thought this through? I think every country has to address these questions of meaning and purpose, because any country ultimately has to face its own people, who sooner or later will ask, what does it all mean? For the sake of what have we maximized our power, or maximized our GDP, or both? Because sooner or later this will become unsatisfying.
It'll become unsatisfying if the gains are not fairly distributed. But we also, I think, are seeing it in the frustration about the lack of meaning and purpose and dignity and recognition. If you and I are right about this second dimension of meaning and purpose and belonging, then it's not only Americans who will be unsatisfied by that kind of solution. It's going to be citizens of China or of Europe or whatever other political powers. Part of the appeal of markets is not just that they deliver the goods; they seem to spare us messy, contested debates about how to value goods. They seem to be value-neutral instruments that can spare us those messy debates. And so what we've done is we've outsourced to markets our moral judgment about the value of people's contributions to the economy.
Tristan Harris: That's right.
Michael Sandel: And that's led us to this assumption that the money people make is the measure of their contribution, which very few people actually believe. Now, in the case of technology, there is a similar kind of moral outsourcing going on. We hear it in the announcements by the high priests of techno-utopianism that technology, like a force of nature, is going to transform the world of work, and we're just going to have to figure out how to adapt to it. But this is the same false necessity.
Tristan Harris: Correct.
Michael Sandel: That we were offered-
Tristan Harris: Or inevitability.
Michael Sandel: An inevitability that we were told about the global economy. We heard it from Bill Clinton who said, globalization is a force like wind or water. You can't stop it. And Tony Blair, his counterpart in the UK said, I hear those who say we should stop and debate globalization. We may as well debate whether autumn should follow summer. And we hear this echo of inevitability and there's a hubris in it.
Tristan Harris: That's right. Yeah.
Michael Sandel: By the high priests of technology who say that AI is coming, it's coming for our jobs, work will become obsolete.
Tristan Harris: And we can't do anything about it.
Michael Sandel: And we had just better figure out how to organize our society, pay people off so that there won't be riots in the streets, and so on. But this is unsatisfying. Both the market-driven and the technocratic ways of conceiving the economy and technology, taken by themselves, leave nothing for democratic citizens to decide. So I suppose the most important thing we can do is to reclaim, as democratic citizens, questions about what technology should be for, and to debate how to direct technological innovation. Now, that means we have to have a more robust kind of public debate than the kind to which we're accustomed. It also means we have to be willing to make public investments in the kind of technological change that will enrich and enhance work rather than replace it.
Tristan Harris: I mean, what comes to mind as you say all this is just imagining what the enlightened version of our society going through this transformation would do. I'm imagining a big CNN debate, to the degree that centralized media even exists anymore, which it doesn't really; there should be big town hall debates about how we want this transition to go. If AI is going to displace hundreds of millions, if not a billion, people doing white-collar or cognitive labor, let alone other kinds of physical labor when the robots come, that is the biggest transformation that we have ever gone through. And yet we are not having anything like national debates to answer these questions. I think what you've been speaking to are the political disincentives for actually addressing them, because it's more politically convenient for a politician not to rush into a very messy conversation that's not going to reward them, really.
It's easier just to say, well, GDP is going to go up, it'll produce cheap goods, and the technology is neutral. These are narratives that give license to just keep going down the path of inevitability. I love how you link that together. And so a more optimistic take is this: let's say that instead of a competition for who will just use AI for efficiencies, it really becomes a competition for who consciously deploys AI in a way that addresses and answers the philosophical questions about what all this is for. What is the economy for? What is labor for? What is work for? What is this technology for? The countries that do that best, that most consciously answer this question, will out-compete the other countries in a more holistic sense. Just as maybe we boosted GDP but created an entire class that feels disenfranchised, and that left us weaker. Or another version of that: we beat China to social media, but did that make us stronger or weaker?
So you can beat another country to a technology but not be consciously deploying it in a way that strengthens your country. And this speaks back again to narrow optimization versus holistic optimization: asking these philosophical questions gets you to the conscious application of these technologies and these policy moves in the direction of what is healthy for the whole.
Michael Sandel: Right, right. I think you've put it very well, beautifully. What technology really provides us is an occasion for a different kind of public discourse. What better occasion and subject for that kind of public discourse than a real public debate about what ends and purposes new technology and AI should serve? Now, this kind of debate raises controversial moral and civic questions. It raises questions about what makes for a just society, what we owe one another as fellow citizens.
People will disagree if we have a debate about values, because that kind of debate would require that we depart from the unquestioned assumption that it's all about efficiency and promoting GDP. Anytime we debate questions of what technology is for, we're on contested moral terrain. But what I'm suggesting is that this could be an opportunity to reimagine the terms of public discourse, to engage more directly with the moral and even the spiritual convictions that we as democratic citizens bring to public life. And if this astounding new technological frontier can prompt that, then who knows? Perhaps, after all, despite the dark clouds on the horizon, we can renew for our time the lost art of reasoning together, arguing with one another, listening to those with whom we disagree, and reviving the lost art of democratic public discourse.
Tristan Harris: Professor Michael Sandel, thank you so much for coming on Your Undivided Attention.
Michael Sandel: Thank you, Tristan.