
One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured.
Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.
In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against its tactics. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.
Tristan: Daniel, something I think about often is how throughout history, society takes a lot of time to confront the harms caused by certain industries. I think about Upton Sinclair writing about the meat-packing industry in the early 20th century. I think about Rachel Carson talking about Silent Spring in the 1960s and the problems of pesticides, or tobacco in the 1990s. And with social media, we're seeing it happen again. The can just keeps getting kicked down the road. And with AI moving so fast, it feels like the normal time that it takes us to react isn't compatible with doing something soon enough. We can become aware of serious problems, but if it takes too long to respond, meaningful action won't follow.
Daniel: Totally. And I think this has to do with the way that we manage uncertainty in our society. With any new thing, with any industry, it's important that we sit with the uncertainty as we discover what's happening. But uncertainty is also scary. And it's really easy for us to react to that fear we experience sitting with uncertainty by avoiding thinking or speaking about topics when we feel uncertain. And then as a society, I often think about how, when we're uncertain about what's true or who to trust, we struggle to make collective, informed decisions. And when we watch experts battling it out in public, when we hear conflicting narratives and strong emotions, it's easy to start to doubt what we think we know.
Tristan: And it's important to recognize that that's not by accident. It's because companies and individuals with a lot of money and a lot of power want to hide the growing evidence of harm. And they do so with sophisticated and well-funded campaigns that are specifically designed to create doubt and uncertainty. And so how do we sit with this? Our guest today, historian Naomi Oreskes, knows this better than anyone. Her book, Merchants of Doubt, reveals how this playbook has been used repeatedly across different industries and time periods.
Daniel: And Naomi's most recent book, The Big Myth, just came out in paperback. So how do we make bold decisions with the information that we have right now while being open to changing our minds as new information comes in? How should we sit with uncertainty, which is everywhere and unavoidable, while inoculating ourselves against weaponized doubt? We discuss all of these themes and more. This is such an important conversation and we hope you enjoy it.
Tristan: Naomi, thank you for coming on Your Undivided Attention.
Naomi Oreskes: Thank you for having me on the show, and thanks for doing this podcast.

Daniel: So, Naomi, 15 years ago, you and your co-author Eric Conway wrote this book, Merchants of Doubt, which really started this conversation about the ways that uncertainty can be manipulated. Let's start with a simple question. Who were the merchants of doubt?
Naomi: The original merchants of doubt were a group of physicists. So they were scientists, but they were not climate scientists. They were Cold War physicists who were quite prominent. They had come to positions of power and influence, and even fame to some degree, during the Cold War for work they had done on US weapons and rocketry programs. So they had been in positions of advising governments. They were quite close to seats of power. These four physicists, who had been very active in attacking climate science, it turned out, had also been active in attacking the science related to the harms of tobacco. And that was the first indication for us that something fishy was going on, because that's not normal science.
Normally physicists wouldn't get involved in a debate about public health. They probably wouldn't even get involved in a debate about chemistry. I mean, maybe if it overlapped with their work. But these guys were so outside their wheelhouse that it was pretty obvious that something fishy was going on. The other real tell was that the strategies they were using were sort of taking legitimate scientific questions but framing them in a way that wasn't really legitimate. So it's normal in science to ask questions. How big is your sample size? How robust is your model? How did you come to that finding? Those are all legitimate questions. But they began to pursue them with a kind of aggression that wasn't really normal, and, the real tell, to do it in places that weren't scientific.
So we expect scientists to pose questions at scientific conferences, at workshops, in the pages of peer-reviewed journals. But that's not what these guys were doing. They were raising questions in the pages of the Wall Street Journal, Fortune, and Forbes. So they were raising what on the surface looked to be scientific questions, but they were doing it in an unscientific way and in unscientific locations. So as historians of science, it was very obvious to us that something was wrong, and that's what we began to investigate.
Daniel: Right. But if I'm naive to that story, I might come to it and think, here are people who might be curmudgeons. Here are people who might be fed up. Here are people who might be angry. But that's not the claim, right? The claim is deeper than that: that these were people who were actually deeply incentivized.
Naomi: Curmudgeons are normal in science, and they're not necessarily bad. I mean, they can be a pain in the ass. There's nothing per se wrong, particularly there's nothing epistemologically wrong, with being a curmudgeon. But there is something pretty weird when you start questioning climate science in Women's Wear Daily, right? And so we started looking into it, and then that's when we discovered this connection to the tobacco industry. And so then we thought, well, why the heck would anyone, anyone at all but much less a famous prominent scientist, make common cause with the tobacco industry? And one of the key players here was a man named Frederick Seitz, who was an extremely famous physicist. Someone who was very close to people who had won Nobel Prizes. He had been the president of the US National Academy of Sciences, so the highest level of science in America. And the president of the Rockefeller University, one of America's most prestigious scientific research institutes.
So why would this man, famous, respected, successful, make common cause with the tobacco industry? And this is where being a historian is a good thing, because you can go into dusty archives and you can find the papers where people answer these questions in their own words. And what we found was that all four of these men did this for what were essentially ideological reasons. That is to say, it had nothing to do with the science. In private, they're not having robust conversations about how good the evidence is that smoking causes cancer. No, that's not what they're saying. What they're saying is, this threatens freedom.
They're saying, if we let the government regulate the economy, if we let them ban smoking or even regulate it strictly, like banning advertising, and this is a really important point we'll come back to about free speech issues, if we let them ban tobacco advertising, then we're going to lose the First Amendment. If we let them ban smoking in public places, the next thing you know they'll be telling us where we can live, what jobs we can have, what cars we can drive, and we'll be on the slippery slope to totalitarianism.
And so for them, it's deeply connected with their work in the Cold War. So the Cold War part of the story is not just incidental, it's actually central. They feared that if the government became more involved in regulating the economy through environmental or public health regulation, it would be a backdoor to communism. So there's this sort of slippery slope in their own argument. They're accusing their enemies of being on a slippery slope, but they themselves slide down their own slippery slope, going from climate scientists doing the work that shows why we might want to regulate fossil fuels, to accusing those scientists essentially of being communists and wanting to see a kind of communist government in the United States.
Daniel: Sure. And honestly, this is one of the oldest debates in science. The whole Enlightenment story that really stuck was the story of Galileo versus the Pope, with the Pope saying, "You basically can't say this because it would erode a lot of things about the world." And so there's always been this question with science of how we tell the truth separately from the values we may care about.
Naomi: If I can just say on that.
Daniel: Sure.
Naomi: One of the ironies of this though, and we see this throughout this story, these guys like to present themselves as if they are Galileo, that they're the ones who are standing up for truth. But of course it's the opposite. They're on the side of very, very powerful corporations like the fossil fuel industry, but they try to flip that script and claim that they are the martyrs, they're the oppressed ones. And we see that going on even today.
Daniel: And that's one of the reasons we wanted to have you on the podcast, because it's a really confusing time to be a person in our news environment today, trying to figure out who is being suppressed, which opinions are real, which opinions are manufactured. And so we really want to come back to that theme again and again as we talk, because it has such relevance to where we are today. But before we do that, I want to go back and talk about some of the mechanics of how doubt is seeded. Can you talk a little bit about the way that they did this?
Naomi: Absolutely. Thank you. Well, the name of the book really tries to convey the key thing. The idea is that they're selling doubt. They're trying to make us think that we don't really know the answer, that the science is unsettled, that it's too uncertain, that the uncertainties are too great to justify action. And it's a super clever strategy. These people are very smart, right? They're not dumb. Because they realize that if they try to claim the opposite of what climate scientists are saying, they will lose. So if climate scientists are saying the Earth is heating up and it's caused by human activities, and they were to try to say, no, it's not heating up, they would lose that debate. They have already lost that debate, because the scientific evidence is overwhelming. But if they say, "Well, we don't really know. We need more data, we should do more research. And there's a lot of uncertainty." The uncertainty is a key part of this story.
That's a much harder thing for scientists to argue against, because if I say, "There's a ton of uncertainty," and you say, "Well, I mean, yeah, there is uncertainty, of course there's always uncertainty in science, but it's not that bad," the scientist is now on his back foot, or her back foot.
Daniel: Right.
Naomi: The scientist is now put in a defensive position, because they cannot deny categorically that there are uncertainties. And the other reason why this strategy is so clever is that they're saying it's uncertain, the science isn't settled, there's a big debate.
Daniel: Correct.
Naomi: And then they say, in fact, I will invite you to debate me on my podcast, on Fox News, in the pages of the Wall Street Journal. Now the scientist often agrees, because the scientist believes in free and open conversation. The scientist thinks, I have nothing to hide, why wouldn't I debate? But the fact is, by agreeing to debate, the scientist loses before he or she has even opened their mouth, because the purpose of this argument is to make it seem that there's a debate.
Tristan: Right. They win as soon as there is a debate.
Naomi: Then the audience says, "Oh, there is a debate." Bingo, the merchants of doubt have won.
Tristan: Exactly. That's right. So people's minds are left with the idea that there is a controversy, that we still don't really know. And there are so many other strategies that I'd love you to talk about: keeping the controversy alive; delaying, let's commission an NIH study or a study to figure out what the true effects are; astroturfing, these fake organizations that sort of get spun up. You talk about Citizens for Fire Safety or the Tobacco Institute. Can you just give us more of a tour? Like basically, how is our information landscape getting weaponized so that it's harder to see the truth? Because unless we have antibodies for understanding these different strategies, we're vulnerable to them. So essentially you are kind of a little vaccine here to help us have the antibodies to understand these strategies.
Naomi: Yeah. And it's interesting, because some of my colleagues have now started to talk about inoculation in the context of bad information. But of course that's a really tricky metaphor, given that we have lots of fellow Americans who are suspicious now about vaccinations and inoculation. So it's a tricky landscape. But it's all the things you just said. So one of the strategies kind of involves buying out scientists, I hate to say this but it's true. One of the strategies is to say we need more research, it's too soon to tell. And it sadly is relatively easy to get scientists to agree to that, because the reality is, well, scientists love to do research and there always are more questions that can be asked.
And as I've already said, there are always some legitimate uncertainties that we would do well to look more closely at. So it's proved very easy to get scientists to buy in sort of inadvertently by just saying, "Oh, let's have a big research program." So for example, back in the first Bush administration, President Bush established the U.S. Global Change Research Program. Now back in 1990, that wasn't necessarily a bad or malicious thing to do, but it contributed to this narrative that it was too soon to tell, that we needed to do a lot more research, even though in 1992 President Bush signed the United Nations Framework Convention on Climate Change, which committed the United States to acting on the available knowledge, which was already quite robust at that time. Another thing you mentioned were the astroturf organizations. So now we're going from less dishonest to more dishonest.
So there's a whole range of activities, from some that are catastrophically dishonest and deceitful and really appalling, and maybe even illegal, to others that are more manipulative. So astroturfing involves creating organizations that purport to be citizens' groups, or purport to represent important stakeholders like firefighters, and getting them to do the dirty work of the industry. So you mentioned Citizens for Fire Safety. This was an organization that was created and wholly funded by the tobacco industry to fight tobacco regulation by pushing back against the overwhelming evidence that many house fires were caused by smoking, particularly smoking in bed. And so there were all kinds of campaigns that pointed this out to try to discourage people from smoking, particularly from smoking in bed. The tobacco industry made the claim that the real culprit wasn't the cigarette, it was the sheets and the pillowcases.
Tristan: Right.
Naomi: And that these things needed to be fireproofed. And so they persuaded people across the country, states, the federal government, to pass regulations requiring flame retardants in pajamas. And I remember, as a parent, it was incredibly hard to find comfortable cotton pajamas for my children because they were all made out of these disgusting synthetic fabrics filled with flame retardants. That was pushed heavily by this group called Citizens for Fire Safety, represented by firefighters who were in the pay of the industry. So these were true industry shills.
Tristan: People should just stop here for a moment and recognize just how diabolical this is.
Naomi: It's very diabolical.
Tristan: So you've got a product that is literally causing houses to burn down. And instead of actually changing that product, because they don't want to change it, they can't really change it, it's not really changeable, they want to externalize the source of this harm, this thing that's happening in the world, and say, "Well, there's another place that it's coming from. It's coming from these flammable materials." Let alone the fact that that probably gave us more PFAS and forever chemicals in all of our furniture and bedsheets.
Naomi: Which it did. We now know that for sure it did.
Tristan: Right. And there's this idea that I think most people don't know, this sort of asymmetry: just how much effort would an incentivized actor go through to spin up lots and lots, or dozens, of fake organizations, fake institutions, in order to sow doubt about this thing? And so that's why I was so excited to have you on, because I just don't think people understand. So in the case of social media, they might say, well, we need to do research, or let's fund parent education programs so that parents are better educated about how to manage their kids' screen time. Which is of course not an actual solution to the fact that they've created network effects and lock-in, hyper-addictive products that continue to manipulate people far more powerfully than any parent education could counter. And so there's this sort of common strategy of distracting people from the true source of the problem.
Naomi: Exactly. And the word diabolical is so apt here, because this is also happening now with this issue of hyper-palatable foods, ultra-processed foods that are really hard to stop eating. And you might be too young to remember this, but when I was young there was an advertising campaign on television for Lay's potato chips. And it featured a young girl, a blonde, very pretty young girl, and she's talking to the devil, and the devil hands her a potato chip and says, "I bet you can't eat just one." And I look back on that ad now and my mind is blown, because in a way they're admitting what they were doing. It turned out they were doing research to figure out how to manufacture a potato chip that you couldn't eat just one of, or five, or 10, that you would eat the whole bag. And it was deliberate and it was knowing, and they even weirdly tipped their hand in the ad, except none of us realized that that's what they were doing.
Tristan: Well, just to do a couple more here, there's another strategy, which is emphasizing personal agency. Saying, "Well, it's up to you to have personal responsibility with how many Doritos you have." "It's up to the person who's addicted to cigarettes to choose, do they really want to be addicted or not? They can still choose that." With social media, it's up to you to manage that. BP saying, "Here's your personal carbon calculator where you can calculate your own personal carbon footprint," which distracts attention from the systemic issue, which would threaten trillions of dollars of value if they had to change in any way.

Naomi: Yes. Well, the agency one is crucial, and it relates to the sort of bigger framework, which is the framework of freedom. So as you pointed out, there are many ad campaigns, both on social media and in legacy media, basically trying to shift the burden away from the producer of the damaging product to the consumer, and to say, "Well, this is our fault because we drive too much." And so BP ran a big ad campaign that many of us have seen, and it was super successful: calculate your own carbon footprint. And how many of us even now think about that? We'll say, "Oh, I'm traveling less because I'm trying to reduce my carbon footprint." And, of course, reducing your carbon footprint isn't a bad thing. If you can do it, it's a good thing.
But the net result of this is to shift agency, to shift it away from the producer that is knowingly making a harmful product and saying, no, it's my fault because I made that choice. But it wasn't entirely a choice because at the same time, the industry is fighting regulations that would restrict fossil fuels. They're fighting tax credits for electric cars. So I'm not really making a free choice, I'm making a choice that is heavily affected by what the industry has done. This is another strategy that we can track back to the tobacco industry.
Early on, the tobacco industry realized, and again, this is in the documents, we can find them saying it in their own words, that they would not succeed if they said to the American people, "Yeah, we know cigarettes will kill you, but oh, well enjoy it while it lasts." No, that was not a message that would work. Lots and lots of people would say, "Oh, I should try to quit." But if they said, "This is about freedom. This is about your right to decide for yourself how you want to live your life, do you want the government telling you whether or not you can smoke?"
And that was a very powerful message, I think for two reasons. One is because none of us want the government telling us what to do. I think most of us feel like, yeah, I want to decide for myself where I live, where I work, whether I smoke or not. But also because it's tied into this bigger ideal of America as a beacon of freedom, that what makes America America is that this is a country of freedom. And so the industry ran all kinds of campaigns with American flags, with the Statue of Liberty. And we talk about this in our new book, The Big Myth: we can actually track this back into the 1920s and '30s, to newsreels and documentaries evoking all these icons of American freedom. And this was a very powerful argument, because it meant that you weren't fighting for a deadly product, you were fighting for freedom. And who was going to argue against that?
Daniel: Yeah. So it occurs to me that when we talk about this, what we're really talking about is not doubt itself. What we're talking about is a set of unfair conversational moves. It's unfair to turn a fact conversation into a values conversation. It's unfair to pretend that everyone is just saying this when you're bankrolling it. And so I kind of want to come back to this, because I have to admit I bristle slightly at just focusing on doubt, because science and the process of honest inquiry demand that we sit with uncertainty. And it's part of our ability to act in this world. We don't know things. Sometimes longitudinal studies do take 20, 30, 40 years. What is the difference between manufactured doubt, this deeply unfair conversational move that destroys our ability to be together, versus a more wise sitting with doubt?
Naomi: Yeah, that's a great question. And it's one of the things we talked about in the book originally, that the doubt strategy is very clever because it's a kind of jujitsu move. It's taking what should be a strength of science, the fact that scientists are motivated by doubt, which in a different context we call curiosity. Scientists do spend a lot of time worrying about uncertainties and how to characterize them accurately, fairly, and honestly. And without some degree of doubt, there wouldn't be progress in science. So that's a good thing. But the merchants of doubt take that good thing and they turn it into a liability. And they want to make us think that unless the science is absolutely, positively 100% certain, we don't know anything and can't act.
And so it's really about exactly what you said, that we as citizens have to understand that we have to live with uncertainty. I wrote a paper once, it was called Living with Uncertainty. And the reality is we do that in our ordinary lives all the time. We get married, we buy a house, we buy a car, we invest for retirement even though we might die beforehand. So we live with uncertainty in our daily lives all the time. And we trust ourselves to make judgments about uncertainty in our daily lives because we think we have the information we need to make those choices. And so this leads to another strategy we haven't talked about, which is the direct attacks on scientists. Part of the way this works also is to try to undermine our trust in science generally to say that scientists are crooked, they're dishonest, they're in it for the money, which is again, pretty ironic coming from the tobacco industry.
Tristan: Very common.
Naomi: And this is one thing that we've tracked in our work that's particularly distressing about what's going on right now. Many of the things we studied began as attacks on particular sciences that seemed to show the need for regulation, like science related to tobacco, the ozone hole, climate change, also pesticides. But then it spread. And what we've seen in the last 10 years, really since we published the book, is this broader expansion to trying to cast doubt on science more generally. So this broad attack on science and scientists in order to make us think we can't trust scientists. But then who should we trust? So as you say, now we're in this saturated media landscape with information coming at us from all directions, and it's really, really hard for anyone to know who they should be trusting.
Tristan: I feel like there's a distinction between reflexive mistrust, which is a problem, and then reflexive trusting, which is also a problem. And what we're looking for is warranted trustworthiness.
Daniel: And one of the things I'm worried about the most in this space is that I've seen the response of scientists, even friends and colleagues, is to push for more certainty. They'll say, "No, no, we know this. We are more certain." And I have to admit, I sort of doubt that that's the right response. I kind of think we all need to sit with more uncertainty. I mean, if anything, I blame the marketing teams. In the tobacco example, I blame the "Cigarettes are safe, 8 of 10 doctors agree" messaging for pulling us up to a place where we believed they were safe. And so how do we counteract that? Because I'm a little worried that science will become a race to the bottom of people shouting and claiming what we know with a sort of false certainty in reaction to this very combative environment.
Naomi: Yes, I agree. I think you're absolutely right. I think it's a big mistake for a scientist to say, "Oh, we know this absolutely." I think it's much better to say, of course there's uncertainty in any live science. The whole point of science is to learn, right? It's a process of discovery and learning. And this is of course where history of science is so helpful, because of course we learn new things and that's good. But we have an issue right now. We have to make decisions that in some cases are literally life and death. And in a case like that, it does not make any sense to say, "Oh, well, I need to wait another 10 years till we better understand this virus," or, "I have to wait until sea level is on my windowsill."
Tristan: Right.
Naomi: Because then it's too late to act. We make decisions based on the best available information we have right now, but we also prepare to change in the future if we need to. And we have a term for that in science, it's called adaptive management. And it was used very, very successfully in the ozone hole case. The international convention, the Montreal Protocol, which was signed to deal with the ozone hole, had a feature in it for adaptive management, because scientists knew that there were still things they didn't understand about ozone depletion. And so the politicians put in a feature that as they learned more, the regulations could be made more strict or less strict. And we could do the same thing for climate change. I mean, it's what we should do. We should always start with the least regulation that we think will get the job done, but be prepared to tighten the regulations if more science tells us we need to, or to lessen them as the case may be.
Tristan: What I love about the example you're giving with the Montreal Protocol is that it's law that recognizes its own humility, that it's not always going to be accurate. The letter of the law and the spirit of the law are going to diverge, and we need to be able to update the assumptions of the law as fast as the situation requires. And that's building in kind of the right level of uncertainty.
Naomi: Yeah, and if I could jump in on that. You know, a lot of people have criticized the IPCC for a variety of different reasons. But I think it's really important for people to understand that the UN Framework Convention on Climate Change was modeled on the ozone case. Because the ozone case was such an effective integration of science and policy, and it has done the job it was intended to do, the UN Framework Convention was modeled on it. Now it hasn't worked, but I think the main reason it hasn't worked is because of the resistance of the fossil fuel industry. And we've now been witness to 30 years of organized disinformation and campaigns to prevent governments from doing what they promised to do back in 1992.

Tristan: So, Naomi, one of the things you write about in your new book, The Big Myth, is how those who are advocating for the maximum unregulated sort of free market approach have a selective reading of history. And you have this great example of Adam Smith. Could you speak to that?
Naomi: Yeah. So one of the things we talk about in the book is how the Chicago School of Economics really misrepresented Adam Smith, and how many of us have this view of Adam Smith as the father of capitalism and an advocate of unregulated markets, that business people should just pursue their self-interest and all good will come from people pursuing their self-interest. That is not what Adam Smith wrote in The Wealth of Nations.
In fact, he has an extensive discussion of the absolutely essential nature of banking regulation. He says, "If you leave banks to bankers, they will pursue their own self-interest and they will destroy the economy," or at least put the economy at risk. "You can't let factory owners just pursue their self-interest or they'll pay their workers starvation wages." And he has multiple examples of this, which he goes on to describe at quite great length. Yet all of this has been removed from the way Adam Smith has been presented in American culture since 1945. And in fact, I teach agnotology, the study of ignorance, and it's really interesting to see how this is a beautiful example of it. Because in the 1920s and '30s, there were people even at the University of Chicago saying, "No, that's not what Adam Smith said." But by the 1950s that had all been erased, it had been expunged, and they were producing edited volumes of Adam Smith that left out all of his discussion of the rights of workers, the need for regulation, et cetera.
Tristan: So I want to take us in a little bit of a different direction, which is another way that science can get weaponized. So one of the other areas of our work, Naomi, is around AI risk. And artificial intelligence is the most transformative technology in human history. Intelligence is what birthed all of our inventions and all of our science, and if you suddenly have artificial intelligence, you can birth an infinite amount of new science. It is so profound and so paradigmatic, I think it's hard for people to get their minds around it. There's obviously a lot of risk involved in AI. And one of the things that I've noticed with some of the major frontier AI labs, like OpenAI: these whistleblowers left OpenAI saying, "Hey, we have safety concerns."
And what they said in response was, "We believe in a science-based approach to studying AI risk." Which basically meant they were pre-framing all of the people who are safety-concerned as sci-fi oriented, that they were not actually grounded in real risk here on Earth, pal, but were living in sort of the Terminator scenarios of loss of control and sci-fi. And that's one of the reasons I wanted to have you on: I want to think about how our collective antibodies can detect when this kind of thing is going on. Because that sounds like quite a reasonable thing to say, we want a science-based approach to AI risk, and we don't want to be manufacturing doubts or thinking hypothetically about scenarios. Just curious about your reaction to that.
Naomi: I have to say, I do sometimes get a little nervous when I hear people say we want a scientific approach, because I want to know, well, who are those people and what do they mean by a scientific approach? Because I could show you people in the chemical industry saying that, the tobacco industry saying that, and using it as an excuse to push off regulations. So I would need to learn more about who those people are and what they mean by a science-based approach. But I guess what I would say, it's interesting as a historian thinking about how radical this is and how serious the risks are. Because I agree with you, I think it is radical, and I think both the risks and the potential rewards are huge. But it does remind me a little bit of the chemical revolution, because many of the same things were said about chemicals, particularly plastics, but also pharmaceuticals and other chemicals, in the early to mid 20th century.
And chemicals did revolutionize industry. They revolutionized textiles, plastics was huge, all kinds of things. And similarly, there were many aspects of the chemical industry that were very helpful to modern life, and there were some aspects that were really bad. And so how do we make sense of that? And I think one thing we know from history is it goes back to my favorite subject that people in Silicon Valley love to hate, which is regulation. That part of the role of government is to play this balancing act between competing interests. In fact, you could argue the whole role of government is to deal with competing interests. That we live in a complex society, what I want isn't necessarily the same as what you want. And in a perfect world, we'd all get what we want. In a perfect world, we could be libertarians, we all just decide for ourselves. But it doesn't work because what I do affects you and vice versa. And so that's where governance has to come in. And it doesn't have to be the federal government, it could be corporate governance, it could be watchdogs.
But I do think that the way in which some elements of the AI industry are pushing back against regulation is really scary and really bad. Because if we don't have some kind of set of reasonable regulations for this technology as it develops, ideally with adaptive management, we could find ourselves in a really bad place. And one of the things we know from the history of the chemical industry is that, I think it's fair to say, many chemicals were under-regulated. You mentioned PFAS a few minutes ago. Again, DuPont knew a long time ago that these chemicals were potentially harmful and were getting everywhere. So the industry knew that this was happening and pushed hard against revealing the information they had, pushed hard against regulation. And we now live in a sea, a chemical soup, where it's become almost impossible to figure out which particular chemicals are doing what to us, because it's not a controlled experiment anymore.
Daniel: Well, I think that points at one of the core problems here, which is that as much as you want good science, good science takes time and the technology moves faster than the science. And so the question is, what do you do when the technology is moving and rolling out much faster than the science? What does it mean to regulate this wisely? You talked about one thing, which is adaptive management. Are there other tactics that can make sure that as we figure out how to roll this out, the regulation actually helps us adapt and helps us stay with the science?
Naomi: Yeah, that's a great question. And, again, the good news here is that we do have the ozone example. We have at least one example where it was done right, and we can look to that example. And I think one thing that we learned from that case has to do with the importance of having science, industry, and stakeholder voices involved. Because I thought one of the really terrible things that someone said recently about AI, I think it was Eric Schmidt, correct me if I'm wrong, but he said something like, "Well, no one can regulate this besides industry because we're the only ones who understand it." Do you remember that? And I thought that was a very shocking and horrible thing for an otherwise intelligent person to say. Because first of all, I don't think it's true. I mean, I could say the same thing about chemicals, I could say the same thing about climate change. But intelligent people who are willing to work and learn can come to understand what these risks are.
Tristan: And you talked about this in your book as epistemic privilege. And one of the challenges that's sort of fundamental to all industries is that the people inside the plastics industry or inside the chemicals industry do have more technical knowledge than a policymaker and their policy team are going to have. That doesn't mean you should trust them with maximum agency and freedom, because their incentives are completely off. We've covered that on some of our previous episodes. But that's actually one of the questions we have to balance: okay, we want the regulation to be wisely informed. We want it to be adaptive and never fixed. We want to leverage the insights from the people who know the most about it, but we don't want those insights to be funneled through these bad incentives so that we end up without a result that has the best interest of the public in mind. And I feel like that's the eye of the needle that we're trying to thread here.
Naomi: Yeah, exactly. And so I think that really feeds into the point I want to make here, which is absolutely, the technologists know the most about technology and so they have to be at the table, and they definitely have to be involved. But they don't necessarily know the most about how these things will influence the users.
Tristan: Yeah.
Naomi: They don't necessarily know the most about how you craft a good policy. And so for that, in this case, you might want people who were involved in the ozone regulation, who know something about how you craft good policy. Or stakeholders. Or what about labor historians who have looked at automation in other contexts? I mean, one of the big worries about AI is that a lot of us will be put out of work, and that can be really socially destabilizing. Well, there are people who are experts on that. And so you could imagine bringing to the table some kind of commission that would bring together the technologists, policy experts, and people who could represent the risk to stakeholders, maybe even some psychologists who study children. I mean, the point is there's more than one kind of expertise that's needed here; the technical expertise is necessary but not sufficient.
Daniel: Yeah. And I certainly agree with you that we need all of society to come together to figure out how to do this well. But having lived through the early internet, and Ted Stevens' "the internet is a series of tubes," and the inability of Congress to understand what they were dealing with, I have a certain amount of sympathy for this learning curve that we're all on together. I mean, Tristan and I can't even keep up with the news, and this is our full-time job. And so I'm curious, because not only will people say that certain people outside of industry don't understand, but people say that our society has become over-regulated, or that the regulatory apparatus is too slow.
Not just from the right, but from the left. People will say that building new housing is too onerous because of environmental regulations, for example. And I'm curious how you respond to that, because you want to pull in all of society, you want to build committees, you want to do this. And I think I agree with you from a values perspective that we need more of society in this conversation, but I'm not sure how good we are at doing that.
Naomi: Yeah. No, you're absolutely right. And I don't want to come across sounding like a Pollyanna. Although I should always point out, the moral of the Pollyanna story is that the world becomes a better place because of her optimism. And I think we often forget that. We think calling someone a Pollyanna is a criticism. But I guess I would say two things about that. First, I'd want to slightly push back on the idea that we have people on the left as well as the right who are anti-regulation. I mean, yeah, there are, but I've just written a 500-page book about the history of business opposition to regulation in this country, and it's almost all from the right. There are some examples, but even the housing stuff, I was just talking to an urban historian the other day about how the real estate industry is really behind a lot of this pushback against housing regulation, not communities. I mean, there are some exceptions, particularly in California.
But there's been a 100-year history. I mean, this is the story we tell in The Big Myth, of the business community insisting that they are over-regulated, and they've used it to fight back against regulation of child labor, protections of worker safety, tobacco, plastics, pesticides, DDT. And also saying that if the government passes this regulation, our industry will be destroyed. The automobile industry claimed that if we had seatbelt laws, the US auto industry would be destroyed. And none of that was true. Every time a regulation was passed, industry adapted and typically passed the cost on to consumers, which, you know, maybe wasn't always great. Maybe sometimes we paid for regulations we didn't really need. But in general, the opposition to regulation comes from the business community, who want to do what they want to do, and they want to make as much money as they want to make, and make it as fast as possible.
So it gets back to what Tristan said about the incentives. I understand that. If I were a business person, I would probably want to run my business the way I want to run it as well. But in a democratic society, we have to weigh that against the potential harms to other people, to the environment, to biodiversity, to children. And so this gets back to another thing that's really important, especially in Silicon Valley, which is the romance of speed. American society has always had a romance with speed: railroads, automobiles, space travel. We love speed, we love novelty, and we like the idea that we are a fast-paced, fast-moving society. But on the other hand, sometimes moving too fast is bad. Sometimes when we move fast and break things, we break things we shouldn't have broken. And I think we are witnessing that in spades right now. I mean, we have a broken democracy, in part, in my opinion, because we moved too fast with telecommunications deregulation.
Something that was supposed to be democratizing and give consumers more choice has ended up giving us less choice, paying huge bills for our streaming services, and really contributing to political polarization because of how fragmented media has become. So that's a really-
Tristan: I have an idea, let's go even faster with AI.
Naomi: Yeah, exactly. So this is a really good moment to be having this conversation because one of the things we're seeing now is exactly what we wrote about in our last book, The Big Myth, which is the business attempt to dismantle the federal government because they resent the role that the federal government has played in regulating business in this country. And this is a story that has been going on for 100 years, but is suddenly unfolding in real time incredibly rapidly in front of us. And part of this argument has to do with this idea that government regulation is a threat to freedom, and that any restriction on business puts us on this slippery slope to loss of freedom. But of course, it's not true because we make choices all the time.
And so one of the examples I like to cite is actually from a debate among neoliberals in the 1930s about what it meant to be a neoliberal. And one of them said, "Look, being against regulation because you think it eliminates freedom is like saying that a stoplight or a stop sign is a slippery slope on the road to eliminating driving." No one who thinks we should have stop signs on roads is trying to eliminate driving, we're trying to make driving safe. And many regulations that exist in the world, probably most, have to do with safety, have to do with protecting workers, children, the environment, biodiversity against other interests. And so it's always a balancing act. Of course we want economic activity, and of course we want jobs, and of course we know that business plays an essential role in those things, but we also don't want business to kill people with dangerous products. And we don't want business to trample the rights of working people. We don't want business to exploit children.
Tristan: Absolutely.
Daniel: As we talk about the urgency that we're all feeling, the urgency of these problems and how AI makes it even worse, I want to fold in that everything feels so urgent. Some of that urgency is real, in that we're hitting really real limits and we're undermining parts of our society. And other parts of it seem like a hall of mirrors that the internet has created, where no one can slow down to even think about a problem because it's all so urgent, we just have to act now. So I can't even sit with my uncertainty on something. How do you think this compression that we're all feeling, around conversations where it may take a decade to settle the science, plays into the problem, and what would you do?
Naomi: Yeah, I think that's a great question. And I feel like, in a way, it's one of the paradoxes of the present moment. We are facing urgent problems: climate change is irreversible, so the longer we wait to act, the worse it gets and the less we're able to fix it. So there should be some sense of urgency about it. And the same with AI, right? I mean, as we've been talking about this whole hour, this technology is moving very quickly. It's already impacting our lives in ways we wouldn't have even imagined five or 10 years ago. But at the same time, I think it would be really bad to panic. Panic is never a good basis for decision-making. And there's a way in which the very urgency of it really requires us to stop and to think and to listen.
And especially if we think about adaptive management. Adaptive management is all about not overreacting in the moment, making the decision that makes the most sense based on the information you have, but being prepared to adjust in the future. And one of the ways the Montreal Protocol worked was by setting specific deadlines, dates at which the people involved would review the evidence and decide whether an adjustment was needed. And I think that's a beautiful model, because it incorporates both acting on what we know now, not delaying, not making excuses to delay, but also recognizing human frailty, recognizing the benefits of learning more, and building that benefit in, in a structured way. So it wasn't just a sort of promise, "Oh yeah, we'll look at that again next week." It was actually structured into the law.
Daniel: That feels like something that all laws should be doing, actually. Especially all laws that have to do with emerging science or technology. Is this a common practice or is this a one-off that Montreal did?
Naomi: Yeah, that's a great question. It'd be a good thing to study. I don't really know the answer to that. I certainly know that some agencies have been created with sunset clauses, although mostly not. So I do think the conservatives are right about that: we should have better mechanisms, so that if we set up a government agency, we think about how long we want this agency to operate, and whether there should be some mechanism for deciding, after 10 years, if we want to renew it. Almost like when you take out a library book, you could renew it. I think that would be a useful thing to do.
And certainly, you know, one of the things that Eric Conway and I write about in our new book is that in the 1970s, it was absolutely the case that there were regulations from the '20s and '30s that needed to be revisited. I mean, there was a whole world of trucking regulation that made no sense given that we now had airlines. Telecommunications: it was absolutely right in the Clinton era that we revisited rules that were based on radio, now that we had the internet. But again, there wasn't a good mechanism for doing that, and I think the Clinton administration moved too quickly and made some really big mistakes and broke some really serious things. So I think that Montreal is a good model for thinking about how we could do something like that, maybe for AI. Maybe we should have some kind of commission on AI safety that has a 10-year term, but that is renewable if Congress or whoever votes to renew it at that time; otherwise it sunsets.
Daniel: That's really striking, because this is a new thought for me. You either hear people saying, "Look, there are too many regulations," or people saying, "Well, it's not regulated enough." But what you're saying is it's both at the same time. We always have old regulations that we need to pull off, and we have new ones that we need to put on because we aren't being protected in the ways we need. And we should expect to be doing that continuously.
Naomi: Yeah. I like that way of putting it. And it's kind of like that saying about generals always fighting the last war.
Tristan: Yeah.
Naomi: I mean, one of the problems of history, and as a historian I believe absolutely in the value of history and all the lessons we can learn, is that sometimes people learn the wrong lessons, or they carry forward experiences from the past that aren't necessarily relevant now. So we need some balance between creating the thing we think we need now and also creating a mechanism to revisit it and learn from our mistakes.
Tristan: There's also a way that AI can play a role in helping to rapidly accelerate our ability to find those laws that need updating or are no longer relevant, to help craft what those updates would be, and to find laws that are in conflict with each other. I'm not trying to be a techno-solutionist or say that AI can fix everything, but I think to the degree that law is actually part of how we solve some of these multipolar traps, the "if I don't do it, I lose to the guy who will" dynamic, law is the solution. But the problem is, people have seen so many examples of bad laws, bad regulation. And so this is about how we get more adaptive, more reflective ways of doing this. And AI can actually be part of that solution when I think about a digital democracy.
Daniel: So we've talked a lot in this podcast about how hard it is to make sense of the world right now, these competing doubts and over-certainties and these different cult-like takes that social media has riven our world into. What are ways that individuals can actually stay grounded and understand when something is distorted? What are the antibodies that prevent people from being so susceptible to disinformation right now?
Naomi: Well, I think this is a really tricky question, and if I had a simple answer, that would be my next book, right? 10 Ways Not to Be Fooled by Nonsense, or something like that. And maybe I'll write that book. But I think an important thing to realize is that we all have brains and we all have the capacity to use them. So I really encourage people to embrace their own intelligence and then to ask questions. So if someone is telling you something, the most obvious question to ask is, okay, well, who is this person? And who benefits from what they're saying? And what is their interest?
And that can be used in a hostile, skeptical way, and it sometimes has been. But in general, it's always legitimate to ask, well, what does this person get out of it? So I admit freely, I want you to read my books. I get some money from my books, but not a lot. It's like a buck a book, I can't quit my day job. As opposed to the fossil fuel industry that is looking at trillions of dollars in profit. So if you ask, who are you going to trust about the climate? Climate scientists, most of whom get paid good middle- to upper-middle-class salaries, but who don't get paid any more if they say climate change is serious than if they say it's not serious? Or the fossil fuel industry that stands to earn trillions of dollars more if they get to continue doing what they're doing? So the vested interests there are pretty lopsided, and you don't have to be a brainiac or a Harvard professor to see that difference.
Tristan: I remember when we gave our AI Dilemma talk about AI risk, and people said, "But these guys profit from speaking about risk and doomerism, and here's all the problems of technology," as if that's what is motivating our concerns. And to the degree that we profit in any way from talking about those concerns, how does that compare to the trillions of dollars that the guys on the other side of the table can make? And I think about how one demonstrates that they are a trustworthy actor, that they are coming from a place of care about the common good. That's built over time. And I think, especially in the age of AI, when you can basically sow doubt about everything and people don't know what's true, the actors that consistently show up with the deepest care and trustworthiness will sort of win in that world as trust erodes.
Naomi: Yeah, I think that's right. And that's one area where I think scientists could do a better job. I mean, a lot of scientists, we've been trained to be brainiacs, to use technical knowledge, to use mathematics. And in our science, those tools are important and good, but we also have to recognize that when you talk to the broader public, those tools are not necessarily the best ones. You have to relate to people on a human level. One thing I've been thinking a lot about in recent years is that in academia we are taught to talk, right? We're taught to get our ideas out, to write books. And it's all about getting my ideas out there; we aren't really taught to listen.
Tristan: I agree.
Naomi: And so I really think it's important for anyone who's in any controversial space, whether they're coming at it as a scientist, a journalist, a technologist, whatever, to recognize the importance of listening and to try to understand people's concerns. I spent some time in Nebraska some years ago talking with farmers, and one of the farmers said to me, "I just don't want the price of my fuel to go up." And I thought, well, that's totally legitimate. If I were a farmer, I wouldn't either. So it means that if we think about climate solutions, we have to think about solutions that don't hurt farmers. Tax credits, fee-and-dividend systems for carbon pricing, people have talked about these, but we have to be mindful of how this affects people and how we can structure solutions that take those considerations into account.
Tristan: Naomi, thank you so much for coming on Your Undivided Attention. Your work on Merchants of Doubt and The Big Myth is really fundamental, and we deeply appreciate what you're putting out in the world.
Daniel: Yeah. Thanks, Naomi.
Naomi: Thank you. It's been a great conversation.