[ Center for Humane Technology ]
The Interviews
America and China Are Racing to Different AI Futures

Is the US really in an AI race with China—or are we racing toward completely different finish lines?

In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China’s AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism.

If we’re going to avoid a catastrophic AI arms race, we first need to understand what race we’re actually in—and whether we’re even running toward the same finish line.

Tristan Harris: Hey everyone. Welcome to Your Undivided Attention. I’m Tristan Harris.

In 1957, two events turned up the heat on the Cold War between the United States and the Soviet Union in a major way. The first was the launch of Sputnik, which showed the world that the Soviets were far ahead in the space race. The second was the release of a government report called the Gaither Report that warned of a “missile gap” between the two superpowers. According to the report, the USSR had massively expanded their nuclear arsenal, and America needed to do the same in order to maintain mutually assured destruction. JFK made the missile gap a central theme in the 1960 election. And after he won, he dramatically accelerated the buildup of American nuclear weapons, starting what we now think of as the nuclear arms race.

But today, we know that the Gaither Report was wrong. Historical accounting based on Soviet documents and early satellite imagery showed that the USSR was actually far behind the US in nuclear capability. Rather than the hundreds of ICBMs that the report claimed they had, the Russians at the time only had four.

The point of the story isn’t that the US shouldn’t have taken the USSR seriously as an adversary. The point is that before we open a Pandora’s box with the potential for global catastrophe, we need maximum clarity and situational awareness, and we can’t be led astray by false narratives or misperceptions. If we had had that clarity in the 1960s, we might’ve been able to do more to avoid the nuclear arms race and seek diplomacy and disarmament instead of racing.

Well, today we’re on the brink of a potential new catastrophic arms race between the United States and China on AI. The US had its own kind of Sputnik moment when DeepSeek launched in January of this year, showing that China’s AI technology was nearly on par with that of frontier American AI companies. And now you’re hearing a lot of top voices in the US government and technology use the same familiar rhetoric of the past: the idea that if we don’t build extremely capable AI, then China will, and we must win at all costs.

So in this episode, we want to get to clarity on what the state of AI actually looks like in China. Do they see the AI race like we do? Are we racing towards the same things? Are we in a race at all? And what kind of concerns do the Chinese government and tech community have about AI, in terms of the risks versus rewards? Today’s guests are both experts on AI and China. Selina Xu is a technology analyst who’s written extensively about the state of AI in China and co-authored a powerful op-ed with Eric Schmidt in the New York Times. Matt Sheehan is a senior fellow at the Carnegie Endowment for International Peace, where his research covers global technology issues with a focus on China. Selina and Matt, welcome to Your Undivided Attention.

Selina Xu: Thank you for having us.

Matt Sheehan: Thanks. Great to be here.

Tristan Harris: So I want to start by asking you both a pretty broad question. What do you each see as the most persistent misconception that Americans have about China and AI?

Matt Sheehan: For me, the biggest misconception is the idea that Xi Jinping is personally dictating China’s AI policies and the trajectory of Chinese AI companies, that he has his hands very directly on all of the key decisions being made in this space. Now, Xi Jinping is the most powerful Chinese leader since Mao. He runs an authoritarian single-party political system, so he clearly has a lot of power. But just on a very practical basis, most of this is happening at levels of detail that he’s just not involved with, and that even senior officials within the Chinese Communist Party are not involved with. There’s a huge, diverse array of actors across China, within the companies, within research labs, within academia, within the bureaucracy, that all have a major influence on China’s AI trajectory, how they see risks, how they see the technology developing. And those people are constantly feeding into the political system. They’re shaping how the government thinks about the technology. And in many cases, they’re developing the technology themselves without hands-on guidance from officials.

And understanding that diversity of actors and the role that they play in the ecosystem is critical to being able to understand where China’s going and in some cases maybe affect where they’re going on this.

Tristan Harris: And just to briefly elaborate on that, because there is just this narrative that China is run by the Chinese Communist Party and Xi runs the Chinese Communist Party. So it feels from external views that he really is running things. How do we know that things are coming from these different places? What’s sort of the epistemology we use?

Matt Sheehan: One of the main focuses of my research is to essentially reverse engineer Chinese AI regulations. So to take a Chinese AI regulation like their regulation on generative AI and say, where did all the ideas in this regulation come from? Can we trace them backwards through time and find, oh, this idea originated with this scholar at this university who essentially popularized this concept?

And I’ll just give one very practical example of this. Their second major regulation on AI was called the deep synthesis regulation. Specifically, what they were trying to do was regulate deepfakes. So for a long time, the conversation in China was: how are we going to regulate deepfakes? And then Tencent, one of the biggest technology companies in China, the creator of WeChat, with a ton of money invested in entertainment, video games, and digital products, all things that use generative AI, started thinking, “Everyone talking about deepfakes all the time isn’t so great. We need to pivot this conversation a little bit.” So essentially they did what a lot of American companies do: corporate thought leadership. They started releasing reports on deep synthesis, arguing that that’s really the better term for this technology and that we should understand all its benefits. And we can see very directly that the term originated inside Tencent, made its way into official discussions, became the title of a regulation, and affected how that regulation was made.

And that’s happening at a bunch of different levels across companies, across academics, think tanks. So yeah, it’s a diverse ecosystem. I think the way to think about Xi Jinping in relation to it or just say senior leaders, they’re kind of the ultimate backstop. If they are directly opposed to an idea and they’re aware that that thing is happening, they’re going to be able to put a stop to it.

But in most cases, they don’t have an opinion on the details of AI regulation. They don’t have an opinion on what is the most viable architecture for large models going forward. And so those things originate elsewhere.

Tristan Harris: That’s super helpful. Selina, how about you? What are some of the most powerful misconceptions about AI in China?

Selina Xu: I think this is one that increasingly more people have started talking about, which is that we’ve heard a lot about the US and China being in a race to artificial general intelligence, or AGI, which is AI at human-level intelligence. But if you look at what’s really happening at the policy level, and in a lot of companies outside of the few frontier labs like DeepSeek, most of these companies are thinking very much about AI applications and AI-enabled hardware, or thinking about, if you’re a local government official, how do you integrate AI into traditional sectors, into things like manufacturing? That is the kind of thing you’re seeing on the ground in China right now, rather than this very scaling-law-motivated, highly leveraged bet on deep learning.

Tristan Harris: Okay. So Selina, ostensibly both countries, or the US at least, think we’re racing toward this sort of super-god-in-a-box AGI, racing toward superintelligence. And that’s what this whole race is about, because if I have that, I get this permanent, dominating, runaway advantage. And you’re saying that China does not necessarily see AGI as the same prize. Could you just elaborate on this? Let’s really get to ground on this, because it is the central thing driving the US approach to AI right now.

Selina Xu: Yeah. One caveat here, first and foremost: it’s hard to know exactly what China’s top leaders are thinking, but we can look at what has been happening on the ground in the industry and in policies. So if you look at the AI Plus plan, for instance, which is this major national strategy that was recently released, there is no mention of AGI. Secondly, when you look at what they’re actually championing, it’s very much embedding AI into traditional sectors like manufacturing and industrial transformation, and also emerging sectors like science and innovation, or even governance. So it’s very much application focused, and all of the stuff they’re trying to push for is, “How do we massively deploy AI so as to actually see a real productivity boost and improve our economy?” That is the way people are thinking about AI.

It is a bit instrumentalist. They aren’t trying to build AGI. They’re trying to make a profit. There isn’t this kind of anthropomorphic machine god or the lingo that you see here in the Bay Area. That might be because of China’s history with other kinds of technologies, which is kind of interesting philosophically. But I think it’s also very much because they don’t have the cultural context that a lot of people in Silicon Valley have been educated on, from The Matrix to Her, thinking about AI in the Turing-test way.

Tristan Harris: Yeah. Let’s break that down a little more, because so much of this comes down to philosophy, almost religion, the historical roots of where your conceptions of AI come from. Would you both comment a bit more on the roots of the AI philosophy in Silicon Valley versus the philosophical, sci-fi, and other cultural lineages and ideas that inform what AI is for each culture?

Matt Sheehan: The leading labs in the United States were founded very much on the belief, and at the time it very much was just a belief, that we were going to get to artificial general intelligence, and that that was going to rapidly transform into superintelligence. And this could have essentially infinite benefits, or it could wipe out the human race entirely. That is baked into the DNA of OpenAI, Anthropic, and some other leaders, a lot of leading researchers in this space.

Tristan Harris: Ilya Sutskever and Sam Altman were writing about this in the 2014, 2015 kind of days, and people like Shane Legg at DeepMind were talking about AGI in the early 2000s on internet forums. This is a very deep, almost transhumanist-influenced cultural idea.

Matt Sheehan: And yeah, it builds on a legacy of the Terminator movies, on a legacy of science fiction. And it’s not to say this is all siloed in the United States. Chinese people also read international science fiction, and many people in China share some of these beliefs. But I’d say when you think about the DNA of the leading companies, it’s very unique in the United States. When it comes to the Chinese companies, again, we kind of have to disaggregate the different actors here, even down to individuals. I think the way Selina characterized the Chinese government’s position on this is exactly correct. They are very focused on application. They’re saying, “How can this technology help me achieve my political, economic, and social goals? How can it upgrade my economy? How can it help us jump over the middle-income trap? How can it empower the party to have greater control?” That’s their focus.

But you also do have some people like the founder of DeepSeek, who is himself, as we’d say in the US, AGI-pilled. He does believe that sometime in the perhaps not-too-distant future, we will achieve something like artificial general intelligence, and that this will probably have a lot to do with how much computing power we put into the models. Pretty similar, I think, from what we can tell from his public statements, to the way people like Sam Altman view this. But he’s operating within an ecosystem. He has limits on the compute he can access, on the government he’s dealing with, on the talent at his disposal. So it’s not to say that because the founder of DeepSeek believes in AGI, that’s where China is heading. But there is this diversity of actors: government, influential policy people, entrepreneurs, engineers.

Tristan Harris: Selina, do you have anything to add on top of what Matt shared?

Selina Xu: Yeah, I would say in response, the main thing here is that DeepSeek has been pursuing a slightly different path than some of the US frontier labs, possibly because of compute constraints. They’re much more efficiency focused, and that’s why they’ve poured so much technical resource and attention into achieving highly efficient models. That is the goal he’s going toward. So in January, when people woke up to DeepSeek, part of the surprise was how good it was bearing in mind the cost and compute, which, even though the numbers are vague and murky, were definitely at least an order of magnitude lower than training costs at some US frontier labs. So that’s a different approach they’re pursuing. They are AGI-pilled, but even then, what they’re doing is not scaling and building ever-bigger data centers that can compare with Anthropic and OpenAI. That’s just not the reality in China.

Matt Sheehan: May I build on that a little bit?

Tristan Harris: Yeah, please go ahead.

Matt Sheehan: One way to think about this is: where is the government putting its resources, and do companies need the cooperation of government resources in order to achieve their goals? In the United States, especially over the last year or two, the way OpenAI has been operating, not just with the US government but with governments around the world, reflects the belief that fundamentally this is going to be a large-scale energy and computation effort with huge financial costs, striking deals around the world to build out the data centers they believe are going to be essential. So if we’re thinking about it through that lens and we look over at China and ask, “Okay, where is the Chinese government putting its bets down?”, I think the AI Plus plan that Selina described earlier is a pretty clear signal that where they are putting their money and their bureaucratic resources down is on applications.

The AI Plus plan sounds a little weird to our ears. It basically means AI plus manufacturing, AI plus healthcare. Essentially: we want to use AI to empower all these other sectors. And they’re telling their local officials, “If you’re going to subsidize an AI company, subsidize an application that makes sense in your area.” They’re not saying, “Hey, let’s consolidate all our computing resources and devote them just to DeepSeek so they can push their one mission.”

Tristan Harris: Well, this is very interesting. And Matt, you said in an earlier interview that the Chinese Communist Party is like a big HR department, that it’s run with performance reviews: they set these top-level goals as a nation and say, “Our goal is to make sure we’re applying AI to all these different industries, and we measure the performance of each local official in each province, and then down to each city, according to how good they are at doing that.” And what you’re saying is they’re not telling all those officials, “We’re going to judge you based on how good you are at creating a superintelligent-god-in-a-box Manhattan Project.” They’re judging them based on the application of AI. Still, there might be some listening to this who say, “Yes, but how would we know if China’s secretly pouring a Manhattan Project-sized amount of money into DeepSeek?”

Because it’s important to recognize that they did recently start locking down and tracking the passports of DeepSeek employees. They’re treating them kind of like nuclear scientists; one could view it that way. I’m trying to steel-man these different perspectives because, as we talked about in the opening with this missile gap idea, there is this deep fear that if we get this wrong, and they are building a Manhattan Project and that is the defining thing, then we could lose here. So how would you further square those pictures?

Matt Sheehan: Yeah, I think it’s very important to steel-man these and to also acknowledge how much we don’t know and can’t know about what’s going on inside China. And I do not rule out the possibility that somewhere deep in a bunker in Western China, they are slowly trying to accumulate some level of chips that would power a supersized data center. We cannot rule that out. I hope our intelligence agencies are very much on this and would have awareness of it before anything came to fruition. But again, to come back to where they are putting their money and their bets down: if that’s what you’re trying to do, we know that China as a country on the whole is compute-constrained. They have a limit on how much computational power, how many chips, they have in the country, largely due to US export controls.

Tristan Harris: And just explain that for a moment, for people who may not be tracking it. The US started these chip controls in what year? We basically stopped selling China these advanced AI chips?

Matt Sheehan: Yeah. So the big restriction came in 2022 and has been updated every year since then: 2022, 2023, 2024. The simplest way to understand it is that in order to train and deploy the best AI models, you need a lot of computing power, and you want that computing power in the form of very advanced chips called GPUs, made by NVIDIA, a super hot company right now. And basically what these successive rounds of restrictions have said is: we will not sell the most advanced chips to China, and we will not sell the equipment needed to make the most advanced chips to China. We’re going to ban the export of these things. Now, these export controls are very imperfect. They have a lot of holes in them. There’s smuggling. And they’ve needed to keep updating the rules because the companies, NVIDIA specifically, are constantly working their way around them.

But despite all those holes in the export controls, they have imposed large-scale compute limits on China. The United States and US companies, if they want to access maximal compute, can do that. Chinese companies and the Chinese government just have less. And if you’re in that situation, say you have five million leading chips, which is probably more than they actually have, and you want to lead this kind of Manhattan Project thing, you’re probably not going to tell your local officials all around the country to be deploying AI for healthcare and manufacturing and all these local scenarios.

Tristan Harris: Because they’d be using up all the chips. So you’re saying if they succeed in this AI plus plan, then it would take away from their success as a Manhattan Project. They couldn’t do both realistically given the finite number of chips that are currently available to them because of these controls.

Matt Sheehan: Yeah, a lot depends on how many chips you end up needing for the “Manhattan Project.” But just in terms of signaling, the signaling that they’re sending to their own officials is focus on applications and they’re deploying resources in that direction.

Tristan Harris: Yeah, Selina, do you want to add to that?

Selina Xu: Completely agree. And I think the TL;DR is just that if they were trying to build a Manhattan Project for AGI in China, then with the sheer number of chips required for that, if those were being smuggled in, there’s no way that any intelligence agency, or NVIDIA itself, would be unaware.

They aren’t trying to build AGI. They’re trying to make a profit. There isn’t this kind of anthropomorphic machine god or the lingo that you see here in the Bay Area. - Selina Xu

Tristan Harris: Selina, you recently attended the World Artificial Intelligence Conference in Shanghai, and I would love for you to give listeners a felt sense of what deployed AI is like there, because I think the physical environment of AI reaching your senses as a human is very different in China than in the US currently. So could you just take us on a tour, viscerally? What was that like?

Selina Xu: Yeah, there are a lot of different kinds of AI, I would say. I don’t know whether you, Tristan, have been to China, but even pre-generative AI and LLMs and chatbots, there were already digital payments, and people paid with their palm or with facial recognition when entering the subway. Those are other kinds of AI that are already very visceral and all around you. This time around in July, for the World AI Conference, on top of all of that, one of the biggest things that struck me was just how pervasive robots were. They were everywhere. It was in this huge expo center, and I think about 30,000 people were there. All the tickets were sold out. A lot of young children, families, even some grandparents. It was a whole-of-society kind of thing, and it was a fun weekend hangout. Everybody was just milling around the exhibition booths, shaking hands with robots, watching them fight each other MMA-style.

There were also robots just walking around, some of them mostly remote-controlled by people. There was a lot of AI-enabled hardware, like glasses and wearables, including some AI-plus-education products, like dolls. So all kinds of innovative applications of AI in consumer-oriented ways. And you just see people interacting with AI in a very physical, visceral way that you don’t really see here in the US. Here, people talk about AI as this far away, machine-god thing. But in China, it was very palpable. It was extremely integrated into the real-world environment.

Some of it is hype. A lot of the humanoids and robotic stuff is still very nascent and not very mature. And you can see some of the limits of that when robots fell down or didn’t really react in the right way. But I think that the enthusiasm and the optimism really was very, very interesting. People were actively excited about AI, versus here it’s more like the Terminator or something.

Tristan Harris: Yeah. I wanted to ask about that, because I feel like if you went to a physical conference like that here, given there are far fewer robots and robot companies in the US, although we do have some leading ones, the US attitude would still be more “this is bad.” A lot of the feeling is just: this is creepy, this is weird, I don’t really like this. But the thing I keep hearing is that when you’re there walking the grounds, everyone is just pumped and excited and optimistic about AI. And I’d like to develop that theme a little more: why one country seems to be quite pessimistic about AI while the other, China, is largely optimistic. But Matt, curious for you to add on to Selina’s picture here. You were, I believe, in China in the 2010s as the mobile internet was coming online, and that has a role, I think, in how China sees technology optimistically versus more pessimistically here.

Matt Sheehan: Absolutely. And maybe first touching on the optimism and pessimism toward technology more broadly, and then we can bring it into AI. There are a lot of questions about exactly what the survey results show, whether they’re good survey results, how we know this; it tends to rely a lot on anecdotes and vibes. But I think maybe the most important factor here is how the rise of information technology, eventually the internet, and now AI has come into people’s lives over the last 45 years, since say 1980. If you look at what happened in China since 1980 versus what happened in the United States since 1980, it’s very different. This has been essentially the biggest, longest economic boom in Chinese history, and normal people have seen their incomes multiply by factors of 10 or even 20 over that period of time. Basically, since information technology came into the world, Chinese people’s lives have been getting better.

In the United States, it’s very hard to say whether Americans’ lives are better. A lot of people associate technology with impacts on labor, with more dysfunction at a political level, misinformation, the damaging effects of social media on kids. And this has just been a period when the United States has largely turned more pessimistic about our society and our prospects, at a national level and, I think, at an individual level. Or you could take it to the last 10, 15 years, since the rise of the mobile internet. This has been one of the most fractious times in American political history, and it’s been, with some exceptions, a pretty good time in China, at least from the perspective of someone who’s just trying to earn more, live better, and have more convenience in their lives.

So that’s a very 30- or 40,000-foot-level take on the optimism and pessimism, but I think it is pretty foundational to how people look at these things. Yeah, I lived in China from 2010 to 2016, and this was really the explosion of the mobile internet in China. Obviously in the US, the mobile internet was expanding rapidly too, but this is when China was very rapidly catching up to and then surpassing the global frontier of mobile internet technologies, in terms of what the mobile internet was doing for ordinary people. And to me, some of the visceral memories from that time are from around 2014, 2015, when mobile payments kicked into high gear and you suddenly had this explosion of different real-world services being empowered by the mobile internet. Here in the United States, obviously, we have Uber and Lyft; these are real-world services empowered by the mobile internet.

In China, they had their own Uber and Lyft, but they also had just a huge diversity of local services. By 2013, 2014, someone would come to your house and do your nails for you with just four clicks. The guy literally selling baked potatoes out of an oil drum had a QR code up there in 2014 so you could pay that way. It was this very visceral feeling that technology was integrating into every facet of our lives, and in large part making things way more convenient. When I got to China in 2010, if you wanted to buy a train ticket, especially during Chinese New Year, it meant you got up really early and waited in a super long line for a very slow, bureaucratic, in-person ticket vendor to sell you the ticket. When WeChat, mobile payments, and all that got integrated into government services, including ticket selling, it suddenly became way more convenient, way easier to do these things.

And of course, mobile internet has led to convenience in both places, but having lived at the center of this in both countries, I just think it had a much more tangible feeling in China and a feeling that it’s genuinely making our lives better at this point in time.

Tristan Harris: Just to add to that, the thing that I hear from people who visit China, or even Americans who’ve lived in China for a while, is that when you visit China, it feels like going into the future: everything just works, like you’re 10 or 20 years further into the future than in the US. And when people who have been in China for a while come back to the US, it feels like going back in time, and things feel less functional and less integrated. I’m not trying to criticize one country or another. I think it’s actually based on leapfrogging: the US had built up a different infrastructure stack and didn’t jump straight into this 21st-century gig economy with immediate mobile payments built into everything, whereas China really did do that.

Matt Sheehan: Yeah. And just on our earlier conversation on China in the 2010s, I should note that simultaneous to this mobile internet transformation was a huge rise in AI-powered surveillance of citizens. Facial recognition everywhere. You want to literally enter your gated community, and in China gated communities are much more common. They don’t indicate wealth. To just enter your little housing community, you might need to scan your face. And so at the same time that we’re pointing to all the conveniences of this, this also has a very much a dark side that is just important to note here.

Tristan Harris: Absolutely. I think it is really important to note that the surveillance-based approach is obviously something we would never want here in the West. The other side of it is the sheer fluency of convenience, where everywhere you walk, you’re already identified, which obviously creates conveniences that are hard to replicate if you don’t do that. And that’s one of the hard trades, obviously.

Matt Sheehan: Yeah, absolutely.

The last 10, 15 years since the rise of the mobile internet…has been one of the most fractious times in American political history, and it’s been with some exceptions, a pretty good time in China, at least from the perspective of someone who’s just trying to earn more, live better, have more convenience in their lives. - Matt Sheehan

Tristan Harris: A recent Pew study showed that 50% of Americans are more concerned than excited about the impact of AI on daily life. And a recent Reuters poll showed that 71% of Americans fear AI causing permanent job loss. What is the public mood in China versus the US on AI and job loss, actually? Because I think this is one of the most interesting trade-offs these countries are going to have to make: the more jobs you automate, the more you boost GDP through automation, but the more civil strife you’re dealing with if, unlike in other industrial revolutions, people don’t have other jobs they can go to.

Selina Xu: I think it’s definitely something on people’s minds, but not necessarily related to AI. In the past few years, youth unemployment has been a very serious issue, to the point where the government stopped releasing the statistic. I think at least 20 to 25% of youth were basically unemployed in China. So that’s something the society has been grappling with and something policymakers are obviously concerned about.

Tristan Harris: Did you say 20 to 25% youth unemployment?

Selina Xu: Of youth. Yeah.

Tristan Harris: Wow. It seems high.

Selina Xu: Yeah, it’s quite crazy. And because it was so high, they stopped releasing the statistics. So we can only speculate how high it is. I expect it to be around the same range. But if you’re talking to young people in China now who are trying to funnel into STEM fields or AI vocations, there is a huge pool of AI engineers and an increasingly limited number of jobs. So I think this is something definitely that young people are facing, and there’s real anxiety. But on the other hand, when you’re talking to policymakers and experts in China, the sense I’ve gotten is they’re strangely mostly positive about AI, and they’re slightly blase about the effects of unemployment. One person I spoke to who basically advises the government talked about the example where they went to do field research in Wuhan, which is a city in China that has a huge penetration of autonomous vehicles, and they talked to some taxi drivers about, “Hey, how concerned are you about self-driving cars?” And the taxi drivers generally told them that they are excited to work fewer hours and are excited about the improvement in labor conditions.

And I’m like, okay, that is the kind of sentiment that they’re trying to basically use to justify, I think, how people are feeling about it. They’re probably slightly concerned, but the main thing is to upskill them, and in general this is a better thing for society. Obviously, the tune could change. I think in China, a lot of times the pendulum just swings based on how policymakers think. Right now, it seems to me they’re pretty positive on AI as more of a productivity booster rather than a drag on labor, but obviously that might change down the road. And in terms of just everyday people, I think youth unemployment is just something that they’re really thinking about and everyone knows and acknowledges. I don’t know how much they tie it to AI, but I’ve heard from friends who work in the AI industry about just how cutthroat it is to get a good job, and the sheer number of PhD graduates who are trying to get the right number of citations in the right journals so as to secure a job at a place like Tencent or Alibaba.

Matt Sheehan: May I chime in on that?

Tristan Harris: Yeah, please.

Matt Sheehan: Yeah. The picture I have of this is slightly different, or at least I think it’s evolved substantially in the last, say, six months to a year. I agree that if you go back maybe a year or two, both Chinese policy scholars, the people advising the government, and, it would seem, the Chinese government were very blase about the unemployment concerns around AI. One of the things I do in my job is facilitate dialogue between Chinese AI policy people and American AI policy people. And in one of our first dialogues, we had everyone from the two countries rank a series of risks in terms of how worried are you about this risk, from existential risk to military applications of AI to privacy, seven or eight different things. And in that risk ranking, which I think took place in early 2024, the Chinese scholars ranked the unemployment concerns second to last out of, I think, eight risks.

It was really low. And when I was thinking about why is this at the time, my shorthand for it was China has undergone just incredible economic disruption and transformation in the last 30 years, and it’s basically come out okay. In the 1990s, they dismantled a huge portion of their state-owned enterprise system. Millions of people became unemployed because of reforms to the economic system. And they’re like, “Basically, if we grow fast enough, this will all come out in the wash.” And of course there are long-term costs to that, but they seem to have this faith that if you can just keep growing at this extremely high rate, then the job stuff will figure itself out.

I think that has changed a bit over the last six months to a year. Again, this is partly anecdotal, speaking to people over there, reading between the lines of some policy documents, but I have heard people saying that this is rising in salience as a concern for the government. And in some ways, the signals they’re sending are somewhat conflicting. On the one hand, it’s essentially all engines go on applying AI in manufacturing and robotics. So they’re pushing the automation as fast as they can at the same time that their concerns about the labor impacts are also rising. We might say that that’s not a totally coherent strategy, but government policy is not always 100% coherent. They’re still feeling out these two things, but people have been suggesting that essentially this is rising in salience and it might end up affecting AI policy going forward. But it’s speculative.

Tristan Harris: That’s fascinating, Matt, that the economic disruption from the past and the fact that they were able to navigate that successfully means that people see that maybe their job’s going to get disrupted, but no big deal. We did that once before, we’ll retrain. Of course, what’s different about AI, especially if you’re building to general intelligence, is that it’s unlike any other industrial revolution before, because the point is that the AI will be able to do every kind of job if that’s what you’re building. So there actually is a secondary benefit of approaching narrow AI systems, this sort of applied, narrow, practical AI, because you’re not actually trying to fully replace jobs. You’re maybe augmenting more jobs, but you’re not having the AI eat every other job. And then when you kind of zoom out, the metaphor in my mind for this visually is something like the US and China, to the degree they’re in a race for AI, they’re in a race to take these steroids to boost the kind of muscles of GDP, economic growth, military might.

But at the cost of getting internal organ failure: you’re hyping up the attention economy, the addiction, the doomscrolling; you’re hyping up joblessness because people’s jobs are getting automated, all at the cost of that steroid-level boost. And so both countries are going to have to navigate this, but it’s interesting that if you do approach more narrow AI systems, you don’t end up with as many of those problems, because people can keep moving to do other things.

Matt Sheehan: I think that’s a great metaphor. I’ve never heard that before, but steroids is about right. On the, “We’ve been through disruption before we can deal with it,” I would say I would differentiate a little bit between the Chinese government, which is thinking in a 100% macro perspective from an individual person. I think if you told an individual Chinese person, “Your job is going to be automated,” they might have something to say about that.

Tristan Harris: I guess the question is, it’s similar to the US question for UBI. If let’s say we live in a completely automated society, people don’t have to work, but is AI going to be able to generate enough revenue to support literally billions of people on universal basic income? The math as far as I’ve heard in the West is that that math doesn’t work out.

Matt Sheehan: Yeah. I mean, does the math math in this situation? I don’t know. I think in many cases it’s going to be a political decision. And I think at a very high level, we might think, okay, China, one-party system, communism, they should be all good with just massive redistribution. And it’s possible that it does pan out that way. But quite interestingly, Xi Jinping, who’s a very dedicated Marxist in terms of ideology, or a Leninist in a lot of ways, he personally, from the best we can tell from good reporting on this, is actually quite opposed to redistributive welfare. He thinks it makes people lazy. And China, despite being nominally a socialist country on its way to communism, has a terrible social safety net. People are largely on their own, much less of a social safety net than the US. And so-

Tristan Harris: Really? Than the US?

Matt Sheehan: Yeah. I mean, they have essentially welfare that is paid to people who cannot work or disabled. It’s extremely low. There’s nothing like Obamacare over there. Maybe a lot of people have health insurance in some form, but access to actually good medical care is really not great. And yeah, it’s one of these contradictions of modern China. They are simultaneously a communist party and sort of deeply committed to certain aspects of communism while at the same time being more cutthroat in terms of individual responsibility than even the United States.

Tristan Harris: That’s so interesting. It’s definitely not, I think, the common view from the outside. Knowing that it’s a communist country, you would think the opposite. Well, let’s just add one more really important piece of color here that I think speaks to a long-term issue that China’s having to face, which is that China’s population is aging very rapidly and they’re facing a really steep demographic cliff. Peter Zeihan, the author, has written extensively about this. There’s this sort of view of demographic collapse. I believe, if I just cite some statistics here, China’s had three consecutive years of population decline, down 1.4 million since 2023. They’re on track to be a super-aged society by 2035, with one retiree for every two earners, and that would be among the first in the world. And so how can you have economic growth if you have this sort of demographic collapse issue?

And this has led a lot of people in the national security world to say that China’s not this strong rising thing. It maybe looks that way now, but it’s actually very fragile and demographic collapse is one of the reasons. Now, some people look at this and they say, but then AI is the perfect answer to this because as you are aging out your working population, you now have AI to supplement all of that. And I’m just curious how this is seen in China because this is one of the core things that has been named as a weakness long-term.

Selina Xu: I think one of the reasons that the Chinese government and also a lot of the companies have been in a frenzy about humanoid robots and other kinds of industrial robots is precisely this. If you’re thinking in terms of the demographic decline, the shrinking workforce, a lot of the gap has to be filled in by automation, and that’s in the form of industrial robots. If you’re looking at installations, I think China has outstripped the rest of the world over the past few years. But if you’re thinking about elderly care, companionship, how do you help the elderly and the growing silver economy continue to expand, you kind of do need AI, not just in terms of AI companions, but also humanoids in some elderly homes, which I think some local governments have already started to push forward in pilot programs. So I think that’s how people have been grappling with that.

But I think apart from that, whether AI and brain-machine interfaces would really be able to help elderly people, that’s still something that people are just starting to research. And I don’t think there’s a very clear sign of how close we are to that.

Matt Sheehan: Yeah. Just building on that, I think the dynamic you described is sort of right on all the fundamentals. And there’s this idea like essentially we have all these problems, this isn’t unique to China or just aging. We have all these problems. They’re getting worse. We don’t have any solution for them, but is AI going to be this rabbit that we pull out of a hat that’s going to resolve them? And I would call that a little bit of magical thinking or at least wishful thinking. It’s important to put the aging stuff in the context of their sort of broader population policies. China for decades had the one child policy, which was the greatest sort of population limiting policy that you can have, even though it was never exactly one child per family. It took them a long time to realize the damage that this was going to have on their economy long-term, but they did realize it.

When I was living there working as a reporter was when they put an end to the one-child policy. And since about 2015, they’ve actually been saying to people, “Actually, have more children, have more children. Here’s subsidies to have children.” And it’s just not having the effect that they want. And it’s a very sticky and intractable problem. And it’s not just China, it’s across a lot of countries in East Asia as well as other societies that aren’t bringing in that many immigrants.

Tristan Harris: Which is another issue for China is that they’re not actually bringing in lots of immigrants from all around the world because they value their... Yeah.

Matt Sheehan: Yeah, absolutely. So is AI going to be the sort of magic wand that gets waved and resolves or solves these problems? I can see why people in government, in society, want to believe that, and it could end up being true, but probably not something that you should bank on if you’re the leader of hundreds of millions of people.

So is AI going to be the sort of magic wand that gets waved and resolves or solves these problems? I can see why people in government, in society, want to believe that, and it could end up being true, but probably not something that you should bank on if you’re the leader of hundreds of millions of people. - Matt Sheehan

Tristan Harris: So now switching gears yet again, in the US, there’s a deep sense that we’re in a major AI bubble. The amount of money that’s been invested and the sort of circular deals that are going on between NVIDIA and OpenAI and Oracle, and this is just a big house of cards. I’m just curious, is there a view that there’s a big bubble in AI in China?

Selina Xu: From my sense, not yet. Maybe in terms of robotics. I’ve heard from several VC people that, hey, there’s totally a robotics bubble right now in China in terms of the sheer amount of funding and new companies. If you’re looking at some AI-adjacent stuff, like self-driving cars, there was a bit of that previously. But now, if you’re thinking about LLMs, a lot of consolidation has happened. And right now in the AI space, I think a lot of the funding has dried up for frontier model training and most of the funding has gone into AI applications. So I think in LLM or AI frontier stuff, there isn’t really a bubble in China.

Matt Sheehan: Yeah. To have a bubble, you need to have huge amounts of money flowing into something and overhyping the valuations. And the very ironic or difficult-to-grasp thing in China today is that despite the headlines, despite how well a lot of leading Chinese models are doing when you compare them on performance, the Chinese AI ecosystem is actually very cash-strapped. They’re very short of funding. That’s one of the biggest obstacles, especially for startups, but also for big companies. And there’s a lot of reasons behind that. I’d say the venture capital community in China is very new. It kind of started around 2010, so it’s only 15 years old. And around the year 2022, that venture capital industry basically collapsed, due to a bunch of things: COVID, the Chinese tech crackdown of that period of time, when they were sort of beating up all their information tech companies, and just the fact that a lot of the first-wave VC investments didn’t pay off.

So when you look at the actual total amount of venture capital that’s being deployed in China, it’s been going down every year since 2021. And even in AI, which is almost hard to wrap our heads around, the venture capital being deployed is actually going down in China. Now there are companies that can get around this. Essentially DeepSeek, they started as a quantitative trading firm, so they can sort of print their own money and don’t have to take on as much venture capital. Some of the big companies, Tencent, Alibaba, have huge profit-making arms that they can funnel the money from. And so it’s not to say that everybody is broke, but the investment is low. Then people might say, well, what about the government? Isn’t the government just flooding them with resources? The government is putting a substantial amount of money into this, but the government is actually also much more cash-strapped today than it has been at any point in the last 20-plus years.

This is in large part due to the collapse of the real estate bubble in China. The one real bubble over there has led to huge shortfalls in local government money, which means the central government has to give money to local governments. It’s a complex system, but I’d say the shorthand is just: while the US seems to have money flooding into it from a bunch of different directions, in China, it’s very cash-constrained. And I’ll just double-tap on what Selina said about robotics. Robotics is one area where there probably is a bubble. You have a bunch of these startups that shot to huge valuations and are trying to list very quickly, and they might have good technology, but it’s basically demonstration technology at this point. It’s not actually being used to make money in factories. And those companies are, I think many people would say, due for a correction.

So we might have our LLM bubble burst and their robotics bubble burst, and then where do we go from there?

Selina Xu: And actually just to add one more thing, I think instead of hearing bubble, the word I hear the most in China the past few years is involution, which essentially just means excessive competition that’s self-defeating, because there’s just ever diminishing returns no matter how much more effort you put in. And that’s been something that has spread from electric vehicles to AI chatbots to solar panels to everything. Essentially, all these companies grind on ever-slimming profit margins and don’t really see a way to get their profit back. And there’s kind of no way out, for the reasons Matt listed. It’s hard to exit, it’s hard to IPO. They want to go overseas, but there’s just so much competition and there’s some pushback in other Western countries. So I think that’s a phenomenon that’s being seen in China right now, like involution.

Tristan Harris: And how does that match with this sort of view from national security people in the West that China’s deliberately making these unbelievably cheap products to undercut all the Western makers of solar panels and electric cars and robots and things like that. And this is part of some kind of diabolical grand strategy to... I’m not saying one way or the other. I’m reporting out things that I hear when I’m around those kinds of people. How do you mix those two pictures together?

Matt Sheehan: I think essentially both things are true. The involution, which basically means price wars: there’s way too many companies that have flooded into the new hot sector, and they’re forced to compete on price, and they essentially sell their products for less than it costs to make them, and it leads to long-term consequences. And that happened in solar panels when I was living there in the 2010s, but it’s one of those things where you can have a price war, a collapse of the industry, and then what emerges at the end is actually still a quite strong industry. That’s what happened in solar panels. The government, I think at a very high level, does have a strategy of essentially: if you undercut international markets on price, you can dominate the market and then you can hold it permanently. It’s what’s called dumping in international trade law.

You sell something for cheaper, you destroy your competitors and then... Yeah, some might say that was what companies like Uber might’ve done to taxis domestically. So it’s both a self-destructive practice that bankrupts tons of companies in China, and it also might be something that the government is okay with on some level. They’re currently having an anti-involution campaign. Policy-wise, they think that this is at this point more destructive than helpful, so they’re trying to limit the damage of this, but it’s a complicated system.

Instead of hearing bubble, the word I hear the most in China the past few years is involution, which essentially just means excessive competition that’s self-defeating because there’s just ever diminishing returns no matter how much more effort you put in. And that’s been something that has spread from electric vehicles to AI chatbots to solar panels to everything. - Selina Xu

Tristan Harris: One of the main things that we often talk about in this podcast is how do we balance the risk of building AI with the risk of not building AI? AKA, the risk of building AI is the catastrophes and dystopias that emerge as you scale to more and more powerful AI systems: whether through misuse or loss of control or biology risks or flooding deepfakes, the more you progress AI, the more risks there are. And on the other hand, the risk of not building AI is that the more you don’t build AI, the more you get out-competed by those who do. And so the thing we have to do is straddle this narrow path between the risk of building AI and the risk of not building AI. And all the way at the far side of that is the risk of these really extreme existential or catastrophic scenarios, which it seems like both the US and China would want to prevent. And yet open sourcing AI has lots of risks associated with it, and China is pursuing that.

And one of the sort of key things that comes up in this conversation all the time is as unlikely as it seems that the US and China would ever do something like what the Nuclear Non-Proliferation Treaty was for nuclear arms, would something like that negotiating some kind of agreement ever be possible between the US and China given shared views of the risks?

Selina Xu: I think it’s very possible. It’s just that there’s a long list of stuff that the two presidents have to talk about. And obviously it doesn’t have to happen in this administration. A lot can change with the technology. I think there is general consensus from experts and policymakers on both sides when they talk in some of these Track Two dialogues, which are basically non-government to non-government, while Track One are government to government. In these Track Two dialogues, people generally can agree on a lot of things. These can include very basic areas of technical research like interpretability: how do you understand what’s actually going on in an AI model under the hood? There’s also things like general safety, guardrails, evaluations, monitoring, things like that. And then some other stuff that was agreed on during Xi and Biden’s Track One dialogue on keeping a human in the loop when you’re talking about nuclear weapons.

So I think there’s a lot of stuff that’s possible. I think it’s more a matter of mutual trust, and that’s something that’s quite lacking today in our political climate. Trying to say we need to cooperate with China on anything seems quite poisonous, but I think if we can expand our imagination a bit and really just grapple with the sheer necessity and the gravity of the situation, there’s a lot that can be done that’s low risk and an easy lift. And I would just say it can start from people to people instead of just government to government. It can be from companies to companies, experts to experts, and stuff like that.

Tristan Harris: And just to elaborate this in a visceral sense for listeners, did you attend the international dialogues on AI safety?

Selina Xu: Yeah, I did.

Tristan Harris: So could you just take us inside the room-

Selina Xu: As an observer, but yeah.

Tristan Harris: Yeah. Could you just take us inside the room? So for listeners who don’t know, there are these dialogues where, just like during the Cold War, the American nuclear scientists met with the Russian nuclear scientists. There actually was the invention of something called permissive action links, which was a way of making nuclear weapons not fire in some accidental way. There’s a control system, and there’s a history of collaboration like that. Could you just take us inside the room, Selina, of what does it feel like? Do you hear Chinese AI safety researchers working with American researchers? Are they agreeing to specific measures?

Selina Xu: Yeah, it’s a great dialogue. This year was the first time I actually was in the room for it, in Shanghai. It happened on the sidelines of the World AI Conference, when you had people like Nobel Prize winner Geoffrey Hinton visit China and participate in both this dialogue and the conference. And then you also had other people from the Chinese side like Andrew Yao, Ya-Qin Zhang, and people like that. So it’s a group of very leading AI scientists, and they get together to basically talk about what are the risks and red lines that they most agree upon, and they issue a consensus statement. So for anyone who’s curious, you can read the Shanghai consensus afterwards. But essentially, I think in any of the sessions, there were always a lot of areas of convergence. Essentially, people always agreed on fundamental things like loss of control.

All of these are very well known, but I think the real issue today is that you need the companies who are building the technology to agree to these things. And right now, the race dynamic, profit incentives, all of that is just not converging to allow them to take these risks very seriously. And even if you have the best scientists agree on these things, the current landscape, in which the companies are the ones building the technology, is very different from the Asilomar conference, when that was very much held in the hands of universities and those labs.

Tristan Harris: So Matt, with that said, when you kind of ask what would it take at the political level if it’s not going to happen at the researcher level, what do you see as possible here?

Matt Sheehan: I think it’s helpful to think of a spectrum of worlds or outcomes. On one end is the most binding regulatory approach where the US and China agree on a very high level of very top-down system where we’re both not going to build dangerous superintelligence. And then that international agreement gets filtered down into the two systems. We regulate domestically and everything is safe. On the other end is just total unbridled competition in which we think the other side is racing as fast as they can. They don’t have any sort of guardrails. And so we need to race as fast as we can and sacrifice the guardrails in the interests of winning. And I think the first one of international agreement that trickles down is at this point quite unrealistic, at least in the short term. In the halls of power, there’s such, such deep distrust between the countries.

That might not apply to the president himself or individuals, but when you look at the entire national security apparatus in the two countries, they tend to see each other as fundamentally in a rivalry.

Tristan Harris: Any promise would just be bad faith. You’re just saying that to slow me down and you’re still going to keep building it in a black project somewhere, and so I got to keep racing.

Matt Sheehan: Exactly. And given that, I think my hypothesis is kind of something in the middle, which what it fundamentally rests on is the idea that I think the most important thing is going to be how the US regulates AI domestically for itself for its own reasons and how China regulates AI domestically for itself and for its own reasons. China actually has a lot more regulations on AI. There’s a lot more compliance requirements, mostly centered around content control, but now expanding beyond that. And essentially, I think both countries are going to be moving in parallel here. They’re both going to be advancing that technology. They’re both going to be seeing new risks come up. And my sort of thesis here is that we have safety in parallel where both countries are moving forward and regulating because the risks are not acceptable themselves. And there can be this sort of light touch coordination or maybe just communication between the two sides.

We’re not going to have any binding agreement. I’m not going to do something in the United States because I 100% believe you’re doing the same thing over there, but we have a best practice over here. We have something that we’ve learned, like you gave the example of permissive action links. We think this is a method by which you can better control AI models. We’re going to do it domestically and we’re going to maybe open source that or we’re going to share it. We’re going to have a conversation with our Chinese counterparts about it. It’s not relying on trusting one another, but it’s sort of building touchpoints and sharing information about how to better control the technology as it advances. And then maybe if we get to a point where both countries have developed really powerful AI systems and they’ve also, in some sense, learned how to regulate them domestically, or at least they’re trying to regulate them domestically, then maybe we’re already in pretty similar places and we can choose to have an international binding treaty around this.

Tristan Harris: There’s also getting to the point where we had so many nuclear weapons pointed at each other that the risk was just so enormous that it was existential for both parties. And even Dario Amodei from Anthropic has said, “Don’t worry about DeepSeek, because we still have more compute and we’re going to do the recursive self-improvement.” When you signal that publicly, you’re telling the other side, “Oh, if you’re going to take that risk, then I’m going to take that risk.” But then that collective risk can be existential for both parties. And I heard you also say the need for basically red phones, like we had, communication between the two sides, and also red lines. How do we have red lines of what we’re not willing to do? And you can imagine there being, at the very least, some agreement of not building superintelligence that we can’t control, or not passing the line of recursive self-improvement.

Or another one I’ve heard is not shifting to what’s called neuralese. So instead of right now, the models are learning on their own chain of thought, which is like their own language, so that models are kind of learning from their own thought in language, but what happens when you move from words that you’re thinking to yourself in to neurons that you’re thinking to yourself in? And when you have that, that’s when you’re in some new danger. So anyway, this has been such a fantastic conversation. I’m so grateful to both of you. And I think this has given listeners hopefully both a lot of clarity around the nature of how these countries are pursuing this technology and the differences and also the possibility for doing this in a slightly safer way than we currently have. Anything else you want to share before we close?

Matt Sheehan: This has been great. I’ve loved talking through this stuff with both of you. And yeah, I’d encourage people to try to read some of the good work that’s being put out there about what’s happening in China on AI and not expecting anybody or everybody to become experts on this topic, but the thing to know is that the Chinese are much more aware of what’s happening in the US than we are aware of what’s happening in China. They’re much more interested in learning from what’s happening in the United States than the US is in learning from China. We have this mentality that that’s an authoritarian system, therefore we can’t learn anything from the way that they regulate technology. They’re a rival, we can’t learn from them. China doesn’t see it that way. They say if there’s a good idea in the United States, let’s adopt it and let’s adapt it to our own ends.

And that’s a huge advantage for them, being willing to learn from the United States. And I think if we can kind of break down some of those mental walls and actually take seriously what’s happening over there and see if there are lessons for the United States, I think that would be a huge boost.

Selina Xu: I 100% agree. And I just think if there is more mutual understanding, and if people try to visit China if you can, or read some of the interesting research or pieces that are coming out, including Matt’s Substack, a gentle plug here, I think that makes for a better world. So if you’re listening to this and thinking, “Oh, what can I do?” Understanding is the first part.

Tristan Harris: Matt and Selina, thank you so much for coming on Your Undivided Attention. This has been one of my favorite conversations. Thanks so much. This has been really great.

Selina Xu: Thank you for the great questions and for having us.
