Audrey Tang: How to Future-Proof Democracy in the Age of AI

What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish?
In this interview, Taiwan’s Minister of Digital Affairs, Audrey Tang, discusses healthy information ecosystems, resilience to cyberattacks, how to “pre-bunk” deepfakes, and more.
This interview aired on Your Undivided Attention on February 29, 2024. It has been lightly edited for clarity.
Tristan Harris: Imagine that 2024 was the year that democracies implemented a broad suite of upgrades that made them future-proof against AI attacks.
Aza Raskin: Upgrades like verified phone numbers; a concept called pre-bunking; multiple backup systems, always assuming that you are going to be hacked; and paper ballots for elections, letting citizens use their own video to verify all the counting.
Tristan Harris: These are the kinds of upgrades that can make a democracy resilient in the age of AI. And the best living example of that is Taiwan. Because about a month ago, Taiwan had its major presidential election in which everyone thought that China would be using the latest AI tools to influence the outcome, and yet Taiwan survived.
Tristan Harris: In this episode, we're going to go on a tour of what Taiwan has done under the leadership of Audrey Tang, who serves as the Minister of Digital Affairs. We've had Audrey on the podcast before, but we wanted to have her back to talk through how she understands this moment in AI development, especially coming off of her party winning a new majority in Taiwan's election at the start of this year.
Aza Raskin: Audrey, we are so excited to have you back again, welcome to Your Undivided Attention.
Audrey Tang: Happy to be back.
Tristan Harris: When we think of AI and election harms, the first thing most people think of is deepfakes of politicians, things that have the power to sway voters in an election. How did this play out in the most recent elections in Taiwan?
Audrey Tang: In 2022, I remember filming a deepfake video of myself with our board of science and technology. This is called pre-bunking: acting before deepfake capabilities fall into the hands of authoritarians and so on. Already two years ago, we pre-bunked this deepfake scenario by filming myself being deepfaked, showing everybody how easy it is to do on a MacBook, and how easy it would soon be to do on everybody's mobile phone. And so, with this pre-bunking, the main message is that even if a video is interactive, without some sort of independent source, without some sort of what we call provenance, which is a digital signature of a kind, do not trust it just because it looks like somebody you trust, or a celebrity. Now, pre-bunking takes some time to take effect, so we repeated that message throughout 2022 and 2023. But the upshot is that in 2024, when we did see deepfake videos during our election campaign season, they did not have much effect, because for two years people had already been building antibodies, or inoculations, in their minds.
Tristan Harris: Yes, I love that example, because you're pointing to the need to understand the threat. Just like in cybersecurity, you let the defenders know ahead of time so that they can build up the antibodies and patch the system before the attack actually gets used.
Aza Raskin: I could imagine people in our audience listening to the example you gave, Audrey, and thinking, all right, we need to pre-bunk by showing people how deepfakes work. But I think the deeper point is, if you don't already have a system that lets you do verification and content provenance, then you don't actually leave people with anything to do except doubt everything. I'm curious about your philosophy there, and how you go about doing that large-scale upgrading.
Audrey Tang: In terms of information manipulation, we talk about three layers. Actor: who is doing this? Behavior: is it millions of accounts engaged in coordinated inauthentic behavior, or just one single actor? Content: whether the content looks fake or true. By content alone, one can never tell. And so, we're asking people to tell whether something is trustworthy by its behavior or by its actor. Starting this year, all governmental SMS, short messages, and so on, from the electricity company, from the water company, from really everything, go out from this single number, 111. When you receive an SMS, whether it is the AI deliberation survey asking you to participate, and we'll talk about that later, or just a reminder about your water utility bill, it all comes from this single number, 111.
Audrey Tang: And this number is not forgeable. In Taiwan, when you normally get an SMS, the sender's number is 10 digits long, or even longer if it comes from overseas. So one can very simply tell by the sender that a message comes from a trusted source. This is like a blue check mark. And already our telecom companies, the banks, and so on, are also shifting to their own short codes. And so, it creates two classes of senders: one is unforgeable, guaranteed to be trustworthy, and for the other class you need to basically meet face-to-face and add the number to your address book before confirming that it actually belongs to a person.
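The sender-based trust model Tang describes can be summed up in a few lines. This is a toy illustration only, not any actual filtering logic used in Taiwan; the function name and the trust labels are invented for the example:

```python
def classify_sender(sender: str) -> str:
    """Toy model of the two sender classes Tang describes.

    Government SMS in Taiwan comes only from the short code 111.
    Ordinary domestic numbers are 10 digits, overseas ones longer,
    and neither can claim that short code on the carrier network.
    """
    if sender == "111":
        return "government: unforgeable, guaranteed trustworthy"
    if sender.isdigit() and len(sender) >= 10:
        return "ordinary number: verify in person before trusting"
    return "other short code: check your bank or telecom's published codes"
```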
Tristan Harris: How about disinformation, how do you target that?
Audrey Tang: We have Cofacts, collaborative fact-checking, where everybody can flag a message in your chat group as a possible scam or spam. And so, what it does is give us real-time sampling of which information packages are going viral. Some of them are information manipulation, some of them are factually true, but either way, we have a real-time map of what's going viral at this very moment. And by crowdsourcing the fact-checking, think Wikipedia, just in real time, we now have not just the package of information, but also the pre-bunking and debunking that go with it.
Audrey Tang: And with newer methods of training language models, like direct preference optimization, the model figures out the logic of what's approved and what's rejected. And with even newer methods, like SPIN, you just show it the way that the fact-checkers do their painstaking work, and it learns from that train of thought. In these ways, our civil society has been able to train a language model that provides basically zero-day responses to zero-day viral disinformation, before any human fact-checker can look at a viral message. We only really have to focus on the three or four things every day that are really going viral.
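For readers who want to see what "direct preference optimization" means concretely, here is a minimal sketch of the DPO objective (Rafailov et al., 2023) in PyTorch. This is not Cofacts' actual training code; the tensor names and the beta value are illustrative:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization on fact-checking data.

    Each argument is a tensor of summed log-probabilities that the
    trainable policy (or a frozen reference copy of it) assigns to
    the approved ("chosen") or rejected response for a batch of
    viral messages. The loss pushes the policy to widen the gap
    between approved and rejected responses, relative to the
    reference model.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_margin - rejected_margin)
    return -F.logsigmoid(logits).mean()
```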
Tristan Harris: I have a question about that, though. Isn't it possible that AI will enable a much more diverse set of future deepfake categories and channels? I could say: what are the political tribes that I want to attack? Then use AI to generate 10 different kinds of deepfakes for 10 different tribes, creating different little micro-realities for each of them. We might have previously lived in a world where most of a country's attention is on a handful of channels, and a handful of stories are going viral, so now we have to expand the horizontality of the defenses.
Audrey Tang: Yes, it does enable a new kind of precision persuasion attack that does not rely on the share or repost buttons. Instead, it relies on direct messaging, basically, and talks to individual people with a model of their preferences. On the other hand, the same technology can also be used to enhance deliberative polling. Polling is when you call a random sample of, say, 4,000 or 10,000 people and ask them the same set of questions to get their preferences. It is used during elections, of course, but also during policymaking. What polling did not do previously is allow the people picking up the phone to set an agenda, to speak their mind, to show their preferences, and to let us, the policymakers, know the current fears and doubts, and also the personal anecdotes that may point to solutions, from each and every individual.
Audrey Tang: We're also investing in deliberative polling technology. It uses precisely the same kind of language model analysis tools that you just talked about, but not to con or scam people; rather, to truly show people's preferences. When we pair up the people who volunteer to engage in these face-to-face or online conversations, in groups of 10 each, we ensure that each group of 10 has a diversity of perspectives and a sufficient number of bridging perspectives that can bring everybody together to someplace where people can live with a good-enough consensus. And so, if we do this at scale, we are no longer limited by the number of human facilitators, whose work is very important and very treasured but simply cannot scale to tens of thousands of concurrent conversations, and then we can get a much better picture of how to bring people together.
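To make the pairing step concrete, here is one simple way such grouping could work. The interview doesn't specify Taiwan's actual matching method, so this greedy max-diversity sketch is purely hypothetical:

```python
import numpy as np

def assemble_groups(opinions: np.ndarray, group_size: int = 10):
    """Greedily build discussion groups with diverse viewpoints.

    `opinions` is an (n_people, n_statements) matrix of agree (+1),
    disagree (-1), or pass (0) votes from an intake survey. Each
    group is grown by repeatedly adding the unassigned person whose
    opinions sit farthest from the group's current average, pulling
    in perspectives the group is still missing.
    """
    unassigned = set(range(len(opinions)))
    groups = []
    while len(unassigned) >= group_size:
        group = [unassigned.pop()]  # seed with an arbitrary person
        while len(group) < group_size:
            centroid = opinions[group].mean(axis=0)
            farthest = max(unassigned,
                           key=lambda i: np.linalg.norm(opinions[i] - centroid))
            unassigned.remove(farthest)
            group.append(farthest)
        groups.append(group)
    return groups
```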
Aza Raskin: One thing I find frightening is that we're not just talking about an influx of AI content, or copy-paste bots, as we've seen in previous elections. We're also talking about AI that has the capacity to build individual, long-term, intimate online relationships with entire swaths of the population, in service of steering or amplifying their beliefs. We're headed towards a world where you'll meet a person on a dating app, start messaging them, search for and find their online profiles, and they'll even send you selfies and introduce you to their friends. We become like the people we spend time with. What's more transformative than a relationship? And because you'll have come to trust this person, and maybe even their friends, you will be influenced by their beliefs; that's how the social psychology of belonging works. And all this time, those people you've been interacting with were never real people at all.
Audrey Tang: First of all, I think pre-bunking also works on this, in that if you let people know that this kind of attack is going on, they will be wary of it. And second, I think instead of point-to-point relationships, people want to belong to communities. Some people have communities where they worship together, or practice art together, or make podcasts together, and so on. And in such communities, generative AI can also help find the potential people who may want to form a community with you. Instead of just satisfying all your preferences, catering to your every need, and so on, it's the other way around: it shows each and every individual that, for the things they care about, there is some existing community, and that leads to much more meaningful ties among multiple people. When people enjoy those kinds of community relationships, including actually participating in fact-checking communities, it is much, much more meaningful than an individual-to-individual companion that may cater to your needs but does not connect you to more human beings.
Tristan Harris: Just to make sure listeners get this: people are vulnerable to a one-on-one attack, where one person creates a fake relationship to influence another. But what you're saying is, what if we group people together into communities where they feel deep care and a deep reflection of their values? That is what your deliberative polling systems and structures do: they invite people into seeing the people who agree with them, not just on some hyperpolarized outrage topic far out in left field, but who agree with them on this more bridging consensus.
Audrey Tang: That is right. And it's not in the content of my speech, but rather in the care and the relationship that it fosters.
Aza Raskin: It reminds me of Hannah Arendt's point that totalitarianism stems fundamentally from loneliness. And so, what I'm hearing you say is that there is a solution not just for better voting, where humans give just one bit of information every four years to decide which way the country goes when we could be doing it at a much higher bandwidth, but one that also brings people together: this deliberative polling actually puts people face-to-face, in community, to work through problems.
Audrey Tang: Yes, it builds longer-term relationships, not just transactions, and it also deepens the connection. It's with a more diverse set of people, but also deeper.
Tristan Harris: Taiwan is constantly under the threat of some kind of intimidation from China: the threat of cyberattacks combined with information operations that try to throw people into confusion, at the same time as they're doing flyovers with their air force, so it makes you feel like your island is under attack. And Audrey, you have a huge amount of practical experience in how to fight back against these kinds of things. Can you give us an example of how that's worked?
Audrey Tang: In August 2022, just before my ministry started, the US House Speaker, Nancy Pelosi, visited Taiwan.
Nancy Pelosi: The Chinese have tried to isolate Taiwan, they may try to keep Taiwan from visiting or participating in other places, but they will not isolate Taiwan by preventing us to travel there.
Audrey Tang: And in that week, we saw how cyberattacks, along with information manipulation, truly work, from the PRC against Taiwan. Of course, every day we already face millions of attempted cyberattacks, but on that day we suffered more than 23 times the previous peak. An immense amount of denial-of-service attacks overwhelmed not just the websites of our Ministry of National Defense or the President's office; the Ministry of Transportation also saw the commercial signboards outside of Taiwan's railway stations compromised and replaced with hate messages against Pelosi. Not only that, but in the private sector, convenience stores' signboards were also hacked to display hateful messages. And when journalists wanted to check what was actually going on, whether it was really true that they had taken over the Taiwan rails (they hadn't, but rumor said they had), they found the websites of ministries and so on very slow to respond, and that only fueled the rumors and the panic.
Audrey Tang: And concurrently, of course, missiles flew over our heads. The upshot is that each of those attack vectors contributes to the amplification of the other attack vectors. The strategic goal of the attackers, of course, was to make the Taiwan stock market crash, and to show the Taiwanese people that it's not a good idea to deepen the relationship with the US. But it didn't work: we very quickly responded to the cyberattacks, people did not panic, and we very quickly reconfigured our defenses against this kind of coordinated attack. But all in all, the battlefield is in our own minds; it is not in any particular cyber system, which can be fixed and patched and so on. If they create the kind of fear, uncertainty, and doubt that polarizes the society and makes part of the society blame the other part for causing this kind of chaos, then that leaves a wound that is difficult to heal.
Audrey Tang: And so, we've been mostly working on bridging those polarizations, and I'm really happy to report that after our election this January, all three major parties' supporters feel that they have won some, and there's actually less polarization compared to before the election. We overcame not just the threat of polarization, of precision persuasion turning our people against each other, but also used this experience to build tighter connections, like a shared peak experience that brought us together.
Aza Raskin: One of the ways you've worked to heal polarization is by implementing what you call deliberative polling; I wish there were a better term for it. But that's where you synthesize input from a large number of Taiwanese citizens in a very clever way, and then take it straight to policymakers. When we look at why people don't trust democracy, I always think of a very telling graph from the political scientists Martin Gilens and Benjamin Page. It plots the average citizen's preferences against what policies actually get passed, and there's no correlation. Everyday citizens' preferences make no difference to the agenda of what government cares about. But of course, there is a correlation for the preferences of economic elites and special interest groups. No wonder there's low trust in our institutions. This is obviously a huge problem, and one that deliberative polling seeks to address. Can you explain how it works in more detail, and then give us an example of what it looks like in practice?
Audrey Tang: Sure. The first time we used collective intelligence systems on a national issue was in 2015, when Uber first entered Taiwan. There were protests and everything, just like in other countries. But very differently, we asked the Uber drivers, the taxi drivers, the passengers, and everyone really, to go to this online prosocial medium called Polis. And the difference with that social media is that instead of highlighting the most clickbait, most polarizing, most sensational views, it only surfaces the views that bridge across differences. For example, somebody might say, "Oh, I think surge pricing is great, but not when it undercuts existing meters." This is a nuance, and nuanced statements like this usually just get scrolled past in other, antisocial social media. But Polis makes sure they are up front. The same algorithm that powers Polis would eventually find its way into Community Notes, a jury-like moderation system for Twitter, nowadays X.com.
Audrey Tang: And so, because it's open source, everybody can audit it to see that their voice is actually being represented in a way that is proportional to how much bridging potential it has. It also gives policymakers a complete survey of the middle-of-the-road solutions that will leave everybody happier. And much to our surprise, most people agree with most of their neighbors on most of the points, most of the time. It is only the one or two most polarized points that people keep spending calories on. Now, because of that peak experience, we've applied this method also to tune AIs. With the Collective Intelligence Project, we worked with Anthropic, with OpenAI, with Creative Commons, with GovLab, and many other partners, and when the resulting input was used to train Claude, Anthropic's AI, it was as capable as Anthropic's original version, but much more fair and much less discriminatory.
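Polis is indeed open source, and the core of what Tang describes is its group-informed consensus idea: cluster participants by their voting patterns, then score each statement by how strongly all clusters agree with it at once. Here is a simplified sketch, with the clustering and smoothing details reduced to their essentials rather than copied from the actual implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def bridging_scores(votes: np.ndarray, n_groups: int = 2) -> np.ndarray:
    """Score statements by cross-group consensus, Polis-style.

    `votes` is an (n_voters, n_statements) matrix with +1 agree,
    -1 disagree, 0 pass/unseen. A statement only scores high if
    every opinion group tends to agree with it, so polarizing
    statements that thrill just one side are filtered out.
    """
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(votes)
    scores = np.ones(votes.shape[1])
    for g in range(n_groups):
        block = votes[groups == g]
        agrees = (block == 1).sum(axis=0)
        seen = (block != 0).sum(axis=0)
        # Laplace-smoothed probability of agreement in this group;
        # the overall score is the product across all groups.
        scores *= (agrees + 1) / (seen + 2)
    return scores
```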
Tristan Harris: Audrey, how do you get over the selection bias effects that there's going to be certain kinds of users, maybe more internet-centric, digital-centric, digital native users, who are going to use this, but then it leaves out the rest? And how do you deal with the problem of selection?
Audrey Tang: Well, in Taiwan, broadband is a human right, and broadband connectivity is extremely affordable, even in the most remote places. For 15 US dollars a month, you get access to unlimited bandwidth. And because of that, we leave no one behind. So we just randomly send SMS messages, using the trusted number 111, to thousands and tens of thousands of people. And the people who find some time to answer a survey, or just to take a call, can speak their mind and contribute to the collective intelligence. While, of course, this is not 100% accessible (there are still people who need, for example, sign language interpretation, which we're also working on, and translation into our other 20 national languages), I think this is a pretty good first try, and we feel good about the statistical representativeness.
Aza Raskin: Audrey, the nuance in how you create spaces in which conversation happens is, I think, actually critical and deeply thought through. For instance, there's no reply button in your systems. So, okay: how do you have a conversation without a reply button?
Audrey Tang: Yes: Polis, the new petition system, Community Notes on X.com, they all share this fundamental design that there is no reply button. And through this bridging-bonus algorithm, we give the bridging statements more and more visibility, so people can construct longer and longer bridges across higher and higher differences between people's ideologies, tribes, experiences, and so on. It's mentally very, very difficult to bridge long distances; this is true for anyone. But just explaining an idea to somebody who has slightly less experience, well, that's just sharing your knowledge. That kind of bridging, everybody can do. And so, by visualizing which gaps still remain to be bridged, it almost turns into a game, challenging the people with a knack for it to cross the bridges between left and right that could be made. This system that gamifies the bridge-making activity is, I think, very, very powerful, and it's at the core regardless of which kind of space we choose to design.
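The bridging bonus in Community Notes comes from X's open-source scoring algorithm: it fits a matrix factorization to helpfulness ratings, and a note earns visibility only through its intercept term, the part of its appeal that the ideological factor cannot explain. A minimal PyTorch sketch of that model follows; the dimension sizes, training loop, and exact thresholds are simplified:

```python
import torch

class BridgingModel(torch.nn.Module):
    """Matrix factorization in the style of Community Notes scoring.

    rating ~ mu + user_bias + note_bias + user_vec . note_vec
    The low-dimensional factors absorb ideological agreement, so a
    large note_bias means raters on *both* sides of the main divide
    found the note helpful: that is the bridging bonus.
    """
    def __init__(self, n_users: int, n_notes: int, dim: int = 1):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(1))
        self.user_bias = torch.nn.Embedding(n_users, 1)
        self.note_bias = torch.nn.Embedding(n_notes, 1)
        self.user_vec = torch.nn.Embedding(n_users, dim)
        self.note_vec = torch.nn.Embedding(n_notes, dim)

    def forward(self, users: torch.Tensor, notes: torch.Tensor):
        dot = (self.user_vec(users) * self.note_vec(notes)).sum(-1)
        return (self.mu + self.user_bias(users).squeeze(-1)
                + self.note_bias(notes).squeeze(-1) + dot)

# Training (omitted) fits this to observed helpful/not-helpful votes,
# with regularization that penalizes note_bias hardest, so only
# genuinely cross-partisan appeal survives in the intercept. Notes
# whose fitted note_bias clears a threshold get shown to everyone.
```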
Tristan Harris: And just to link this for listeners who know our work on social media: instead of rewarding division entrepreneurs, who keep identifying new, creative ways to sow division and inflame cultural fault lines, this rewards the bridging and synthesis entrepreneurs. And per our frequent referencing of Charlie Munger's quote, "If you show me the incentives, I'll show you the outcome," what I love about Audrey's work is that she's changing the incentives so that we get to a different outcome.
Tristan Harris: Now, I know election officials from all over the world ask you for advice on how to make elections more resilient to cyber attacks and disinformation. What is one takeaway you can give to other countries undergoing elections?
Audrey Tang: The one takeaway that we would like to share from our January election, and I understand this is controversial, is to use only paper ballots. We in Taiwan have a long tradition in which each counter in each counting station raises each and every paper ballot above their head. There's no electronic tallying, there's no electronic voting, and YouTubers from all three major parties are in practically every station with a high-definition video camera, recording each and every count. We use cutting-edge technology, broadband, high-definition video, and things like that, only on the defensive side, that is to say, to guard against election fraud. So far, there is no better technology than asking each of our citizens: if you want, bring your high-definition camera, which may just be your phone, and contribute your part by witnessing the public counting of a paper-only ballot at your nearby station.
Audrey Tang: Information manipulation attacks do not target any one platform. What they seek is for people to no longer trust democratic institutions. Because we entirely pre-bunked the election fraud narrative, when doubts did appear right after the election, there was no room for them to grow. Whatever the accusation was, you could find, in that particular counting station, three different YouTubers belonging to three different parties who did have an accurate record of the counts. And still, we got a result within four hours or so, so the process is not particularly lacking in speed.
Tristan Harris: I would say there's a grand irony in saying, here's the 21st century upgrade plan for 18th century democracies, and one of the big takeaways is...
Aza Raskin: It is ironic, but to Audrey's point, it's using the technology in a defensive way. It's saying: bring in the 21st-century technology to make sure that everyone sees the same thing at the same time. It creates a shared reality to fight disinformation and other attacks against the legitimacy of the election.
Audrey Tang: Really, just trust the citizens. The citizens have mostly already figured out the right values, the right direction to steer AI, our technologies, and our investment. It's just that citizens have had a very small bit rate, essentially just a few bits per four years, to voice their concerns. Simply investing in increasing that bit rate, so the citizens can speak their minds and build bridges together, does wonders to make sure that your polity moves on from those isolated voids, those vacuums of online meaning, so that people do not get captured by addictive, persuasive bots, but can instead move on to alignment assemblies, to jury-like duties, to participating in deliberative polls, to crowdsourced fact-checking, and many, many other things.
Tristan Harris: Taiwan has a bigger role than most countries in the development of AI, which is that 90% of the world's supply of GPUs, the chips that power AI, come from one company, TSMC, which is based in Taiwan and partly controlled by the Taiwanese government. That gives Taiwan enormous leverage over the development of AI, and also some responsibility to make sure that AI is developed safely. To what degree is that burden being discussed in Taiwan?
Audrey Tang: First of all, it is true that Taiwan-produced chips power pretty much everything, from advanced military to scientific applications to generative artificial intelligence, to everything really. It's one of the most general-purpose technologies imaginable. And because of that, I think people trust Taiwan to protect TSMC and its supply chain against cyberattacks and so on. So we enjoy the trust of people around the world, and we take it very seriously. We just established the AI Evaluation Center to test against the broadest range of potential AI risks. We test not just privacy, security, reliability, transparency, and explainability, which are the standard criteria, but also fairness, resilience against attacks, and safety. We're taking our burden quite seriously, in that, yes, we did produce the chips that could potentially lead to the weaponization of generative artificial intelligence. But we're also taking our role very seriously in making sure that we invest more in the defensive side, the evaluation and, eventually, certification side, as compared to the offensive side.
Tristan Harris: It's great to hear you're making that investment into AI safety in Taiwan. What about international cooperation?
Audrey Tang: What we're advocating is a race to safety: a race to increase not the speed, but rather the steering wheel's capability, the horizon-scanning capabilities, the threat intelligence network, so that we can let people know when a small-scale disaster is just about to happen. It is as if we're crossing a frozen sheet of ice above a river, and we don't yet quite know which places in that ice sheet are fragile. In the worst case, the ice sheet breaks and everybody falls into the water. That's the worst-case scenario. And we correspond closely with the US AI Risk Management Framework and its task force, with the European counterpart and its Ethics Guidelines for Trustworthy AI, with the UK counterpart, the AI Safety Institute; the list goes on. So I think we are cautiously optimistic about our horizon-scanning capabilities. For each harm that is discovered, we design the liability framework; if that doesn't work, then we decide on countermeasures defensively. And only when that fails to work do we talk about more drastic measures.
Tristan Harris: Let's keep pushing this all the way to the extreme, where, as you're saying, we're on this thin ice, and it's getting thinner, but no one knows exactly where the breaking point really is, where the ice is going to break underneath our feet. Is there some critical point, Audrey, where something else would need to happen, some other emergency brake, whether it's TSMC shutting down the flow of chips in the world or something else? How do you think about that question? Because that's not that many years away.
Audrey Tang: And this did happen. This did happen. People saw very clearly, back when I was a child, that the ozone layer was being depleted by refrigerators, of all things, because Freon, the chemical compound used in them, was rapidly depleting the protective ozone layer. The point I'm making is: if we're racing blind, if nobody had known back then that the ozone was being depleted, then yes, drastic measures would be called for when we suddenly discovered that we were all going to die from cancer.
Audrey Tang: But because people did invest in the sensing capabilities, and in commitment across the political spectrum through the Montreal Protocol, they basically set a sunset line, so that by such-and-such a year we were committed to finding commercially viable replacements. And so we need more Montreal Protocols against the specific harms that AGI could bring. And I totally agree with you that we need to continue our message of treating this as seriously as a pandemic, or the proliferation of nuclear arms, or even the climate emergency. Only if we continue to do that do we create more pressure on the top labs to commit to these kinds of sensing and safety measures.
Tristan Harris: And I think we deeply agree that we are currently racing towards a very dangerous, uncontrollable, dark outcome. And we agree that there needs to be some form of international cooperation, international agreements, and agreements between really any of the top labs that are racing to that outcome, so that we know how to manage it as we get close. The difficulty, I think, is the ambiguity of where those lines are, and the many different horizons of harm and the different kinds of risks, the range of risks, that occur as you get closer. Some could argue that without the Audrey Tang upgrade plan for democracies, the existing AI we have already proliferated is enough to basically break nation-states and democracies. So there are already risks that have the ability to break the very governments that would need to be party to those international agreements, and their legitimacy.
Tristan Harris: And so part of what I hope to instill in this conversation, at least my take, and in listeners, is that we need to create a broader sense of prudence and caution, and channel that into safety, coordination, care, and an understanding of those risks. Your work is an embodiment of foregrounding and making primary the vulnerabilities, the fragility of society, so that we can care for it, and of focusing the incentives instead on the bridging, the health, the transparency, the strengthening aspects of society. Audrey, you're a genuine hero, in that your work is not only an actual plan and possibility space for upgrading democracies, but also factors in the race for AI itself and what it will take to correct it. My hope is that people will share this episode around as a blueprint for what it could take to get to that place.
Aza Raskin: Thank you so much, Audrey.
Audrey Tang: Thank you.
Aza Raskin: Two things we didn't get to discuss in this podcast, but that we've discussed with Audrey before, are design criteria for how democracies can be future-proofed. One of them is that the capacity of governance has to scale with AI; otherwise, the engine in your car is going faster and faster, but your steering wheel just isn't keeping up, and that car is going to crash. So that's one. Two is that our collective intelligence has to scale with AI; otherwise, humanity's collective intelligence will be dwarfed by AI. And that's another way of saying humanity has lost control. As we think about future-proofing democracy, these are two criteria for all technologists to keep in the back of their minds.