Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

In June 2024, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. The letter came after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI had gone to great lengths to silence would-be whistleblowers.
The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and lay out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February.
William broke his silence to talk to us about what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.
This interview aired on Your Undivided Attention on June 7, 2024. It has been lightly edited for clarity.
Tristan Harris: Hey everyone, this is Tristan.
Aza Raskin: And this is Aza. You might've heard some of the big news in AI this week, that 11 current and former OpenAI employees published an open letter called "A Right to Warn," which outlines four principles meant to protect the ability of employees to warn about under-addressed risks before they happen. And Tristan, I wanted to turn it over to you because they're talking about the risks that go unaddressed in a race that rewards taking shortcuts.
Tristan Harris: Yeah. So, just to link this to things we've talked about all the time in this podcast, if you show me the incentive, I will show you the outcome. And in AI, the incentive is to win the race to AGI, to get to Artificial General Intelligence first, which means the race to market dominance: getting as many users onto your AI model, getting as many people using ChatGPT, doing a deal with Apple so the next iPhone comes with your AI model, opening up your APIs so that you're giving every AI developer exactly what they want and as many plugins as they want, doing the trillion-dollar training run and showing your investors that you have the new exciting AI model.
Tristan Harris: Once you do all that and you're going at this super fast clip, your incentives are to keep releasing, to keep racing, and those incentives mean you keep taking shortcuts. Given that there is no current regulation in the US for AI systems, we're left with, well, who sees the early warning signs? It's the people inside the companies, and those people can't really speak out, which is what we're going to get into in this interview.
Aza Raskin: Just days after OpenAI released their latest model, GPT-4o, two of their most senior researchers, co-founder Ilya Sutskever and Jan Leike, announced that they were leaving the company, that they'd resigned. And in fact, they're part of a steady stream of engineers and researchers leaving OpenAI. And they've all given pretty much the same reason for their departure: that OpenAI is prioritizing market dominance and speed at the expense of safety, which is exactly what the incentives predict. And that brings us to the letter and our guest today. William Saunders is a former engineer at OpenAI, where he worked for three years until his resignation in February. He helped write the letter to draw attention to the safety issues and the lack of transparency from within all of the AI companies. As one of the very few insiders talking publicly, we are keen to share his insights with you. So, William, thank you so much for joining us.
William Saunder...: Thanks for talking to me.
Tristan Harris: So we absolutely want to dive into the open letter, The Right to Warn, that you helped write, but first we need to give listeners a little bit of context about who you are. Can you tell us what your role at OpenAI was, how long you worked there and what did you do?
William Saunder...: Yeah, so I worked at OpenAI for three years. I did a mixture of research and engineering necessary to do that research. I was working on the alignment team, which then became the superalignment team. And I was always motivated in my work to sort of think about what are the issues that are going to arise with AI technology, not today, but tomorrow. And sort of thinking about how can we do the technical work that we need to prepare to address these issues.
Tristan Harris: Could you quickly just define the difference between alignment and superalignment?
William Saunder...: Alignment is broadly trying to make systems that you know will sort of do what the user of the system wants and also be good citizens in the world, not do things that people and society generally wouldn't want. And then superalignment was more specifically focused on the problem of how can you do this when the systems that you're building might be as smart or smarter than you. So that was sort of the first area that I worked on, and I think we were working on this before it became the sort of big problem that has now been in the news recently, with Google search's AI assistant producing incorrect answers. And then from there I transitioned to working on interpretability.
Aza Raskin: So William, what is interpretability research and why is it important to safety?
William Saunder...: Interpretability is about trying to understand what goes on inside of these large language models. So these large language models and other machine learning systems that we produce are somewhat unique amongst technologies in that they aren't produced by humans putting together a bunch of pieces, where humans have designed each piece and understand how they fit together to achieve some goal. Instead, these systems are produced by starting with what you want the system to do. So for example, you produce a system that takes some piece of text from the internet and then tries to predict what would be the next word that comes up in this text.
William Saunder...: And then the machine learning process produces at the end a system that can do this task very well. But there is no human who understands the individual steps that go into this. And this can be a problem if, for example, you train the system to do one thing, but then in the world it's applied in a new context, then it can sometimes be hard to predict what the system will actually do. So we worked on doing this and then doing further research, building on these techniques to try to understand what is going on inside any given language model, like the reasoning process it went through.
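To make the next-word-prediction setup William describes a bit more concrete, here is a minimal illustrative sketch. It uses a toy word-count model rather than a neural network, and the corpus and function names are invented for this example; it is not OpenAI's code, just a stand-in for the idea that you specify the task (predict the next word) and a procedure, rather than a human programmer, produces the system that performs it.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "text from the internet" (invented for illustration).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word tends to follow which, instead of hand-coding rules.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after `word` during training."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("sat"))    # "on" -- the most frequent continuation in the corpus
print(predict_next("the"))    # one of "cat"/"mat"/"dog"/"rug" -- they tie in this tiny corpus
print(predict_next("zebra"))  # "<unknown>" -- a novel context, where behavior is harder to predict
```

Even in this toy version, the "reasoning" lives in accumulated statistics rather than in steps a programmer wrote down, which is the gap interpretability research tries to close.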
Aza Raskin: I think something that most people listening to this podcast might not understand is that when you as an engineer are working on an AI system, you're not coding how the system works line by line. You are training the system and it develops sort of emergent capabilities. A sort of metaphor that came into my mind is that our genes, like DNA genes, they just want to propagate, replicate, make more of themselves. And in the process of doing that they create all of this extra behavior, which is humans, human culture, language. These are not things that the genes ever asked for, it's just them following this very simple process. And the job of interpretability is sort of like saying, "I'm looking at a whole bunch of DNA as a scientist, what does it do? How do I figure out which things are proteins, how that interacts with the system?" And it's a very complex, very challenging thing to do because DNA is also sort of a black box. Is that a good way of describing it?
William Saunder...: Yeah, I think that's a really good analogy. The way that machine learning systems are produced, one analogy you can think of is you take a box with a bunch of parts, it's got a bunch of gears and a bunch of springs and a bunch of levers or whatever, and then you give the box a shake. So, it starts off in some kind of random configuration. And then suppose on one end of the box you enter some input, and on the other end of the box it prints out a word, and you can see, given the inputs, does the box print out the right word at the end? The next step is you give the box a shake and you try to only change the pieces that aren't working well and keep the pieces that are performing the task you want and are predicting well. And you do this millions and billions of times, and eventually at the end you produce a box that can take in a sequence of text and then produce coherent answers at the other end.
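And here is a hedged, toy rendering of the "shake the box" loop William describes: random perturbations to the box's internals are kept only when they improve performance on the task. Real training uses gradient descent at enormous scale; the tiny dataset, weights, and numbers below are invented purely to illustrate that no human writes the individual steps the finished system ends up using.

```python
import random

# A made-up task: find weights w so that w[0]*x[0] + w[1]*x[1] matches each target y.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]

def loss(w):
    # How badly the current "box" performs on the task (mean squared error).
    return sum((w[0] * x[0] + w[1] * x[1] - y) ** 2 for (x, y) in data) / len(data)

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # start from a random "shake"

for step in range(10_000):
    # Give the box another small shake...
    candidate = [wi + random.gauss(0, 0.05) for wi in w]
    # ...and keep the change only if the box now does the task better.
    if loss(candidate) < loss(w):
        w = candidate

print(w, loss(w))  # w drifts toward roughly [2.0, -1.0]; nobody coded that answer in
```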
Tristan Harris: And after that, your hands and your arms are very tired from shaking the box so many times. Except that you had a computer do it billions of times for you. So, to ground this for listeners, you're taking a big risk here with your colleagues at OpenAI and you're coming out and saying, "We need a right to blow the whistle about important things that could be going wrong here." So far what you've shared is more of a technical description of the box: how do we interpret the neurons in the box and what are they doing? Why does this matter for safety? What's at stake if we don't get this right?
William Saunder...: Suppose we've taken this box and it does the task, and then let's say we want to integrate this box into every company in the world, where it can be used to answer customer queries or process information. And let's suppose the box is very good at giving advice to people, so now maybe CEOs and politicians are getting advice. And then maybe as things progress into the future, maybe this box is generally regarded as being as smart or smarter than most humans and able to do most jobs better than most humans. And so, now we've got this box that nobody knows exactly how it works and nobody knows how it might behave in novel circumstances. And there are some specific circumstances where the box might do something that's different and possibly malicious.
William Saunder...: And again, this box is as smart or smarter than humans. It's right in OpenAI's charter that this is what OpenAI and other companies are aiming for. And so, maybe the world rewards AIs that try to sort of gather more power for themselves. If you give an AI a bunch of money and it goes out and makes more money, then you give it even more money and power and you make more copies of this AI. And this might reward AI systems that really care about getting as much money and power in the world as possible, without any sense of ethics or what is right or wrong. And so then suppose you have a bunch of these questionably ethical AI boxes integrated deeply into a society, advising politicians and CEOs.
William Saunder...: This is the kind of world where you could imagine, gradually or suddenly, waking up one day and humans are no longer really in control of society, and maybe these systems can run subtle mass persuasion to convince people to vote the way they want. And so, it's very unclear how rapidly this kind of transition would happen. I think there's a broad range of possibilities, but some of these are on time scales where it would be very hard for people to sort of realize what's going on. This is the kind of scenario that keeps me up at night, that has sort of driven my research. You want some way to learn if the AI system is giving you bad information, but we are already in this world today.
Aza Raskin: I think what we've established is a couple of things. One is that, William, you're right there at the frontier of the techniques for understanding how AI models work and how to make them safe. And I think what I'm hearing you say is that there are two major kinds of risks, although you said there are even more. One of them is that if AI systems are more effective at certain kinds of decision-making than us, then obviously people are going to use them and replace human beings in the decision making. If an AI can write an email that's more effective at getting sales or getting responses than I am, then obviously I'm sort of a sucker if I don't use the AI to help me write that email. And then if we don't understand how they work, something might happen. And now we've integrated them everywhere and that's really scary. So, that's sort of risk number one.
Aza Raskin: And then risk number two is that we don't know their capabilities. I remember GPT-3 was shipped to at least tens of millions of people before anyone realized that it could do research-grade chemistry, and GPT-4 had been shipped to a hundred million people before people realized it actually did pretty well at theory of mind. That is, being able to strategically model what somebody else's mind is thinking and change its behavior accordingly. And those are the kinds of behaviors we'd really like to know about before a model gets shipped. And that's in part what interpretability is all about: making sure that there aren't hidden capabilities underneath the hood. And it actually leads me to sort of a very personal question for you, which is, if you've been thinking about all of this stuff, why did you want to work at OpenAI in the first place?
William Saunder...: One point to clarify: interpretability is certainly not the only way to do this, and there's a lot of other research into trying to figure out what the dangerous capabilities are and even trying to predict them. But the field is still in a place where nobody, including people at OpenAI, knows what the next frontier model will be capable of doing when they start training it, or even once they have it.
William Saunder...: But yeah, the reasoning for working at OpenAI came down to wanting to do the most useful cutting-edge research. And so, both of the research projects that I talked about were using the current state of the art within OpenAI. The way that the world is set up, there's a lot more friction and difficulty if you're outside of one of these companies. So, if you're in a more independent organization, you have to wait until a model is released into the world before you can work on it. You have to access it through an API and there's only a limited set of things that you can do. And so the best place to be is within one of these AI labs, and that comes with some strings attached.
Tristan Harris: What kinds of strings?
William Saunder...: So while you're working at a lab, you have to worry about whether, if you communicate something publicly, it will be something that someone at the company will be unhappy with. In the back of your mind, there is always the possibility of being fired. And then also there's a bunch of subtle social pressure. You don't want to annoy your coworkers, the people you have to see every day; you don't want to criticize the work that they're doing. Again, the work is usually good, but the decision to ship, the decision to say, "We've done enough work, we're prepared to put this out into the world," I think is a very tricky decision.
Tristan Harris: Maybe we should just quickly ground that for listeners, because my understanding is you also wanted to work at OpenAI because they were a place that wanted to do AI safely.
William Saunder...: Yes.
Tristan Harris: But what you saw was that there were decisions to take shortcuts in releasing AI systems that would provoke this sort of need to speak up. Can you talk about what kinds of shortcuts you're worried about, what kinds of shortcuts you might've seen taken in the past that are safe for you to talk about? Or maybe one of the points of your letter is that there are many you can't talk about, and you want protection so that you can, because it's important for the world to know.
William Saunder...: Yeah, so I think it's like if you have this new model, there's always a question of what is the date that you ship and then how much work do you do in preparing the model to be ready for the world? And I accept that this is a complex question with a lot of trade-offs. However, I would see these decisions being made in a way where there was additional work that could be done to make it better, or there was work that was rushed and not very solid, and this would be done in service of meeting the shipping date. And it's complicated, because there were also times when they would say that the ship date is pushed back so that we can do more safety research. But overall, over time, over multiple instances, I and other people at the company felt that there was a pattern of putting more pressure to ship, of compromising processes related to safety, and of problems happening in the world that were preventable.
William Saunder...: So for example, some of the weird interactions with the Bing model that happened at deployment, including conversations where it ended up threatening journalists. I think that was avoidable. I can't go into the exact details of why I think that was avoidable, but I think that was avoidable. What I wanted from OpenAI and what I believed that OpenAI would be more willing to do was, let's take the time to get this right. When we have known problems with the system, let's figure out how to fix them. And then when we release, we will have some kind of justification for here's the level of work that was appropriate and that's not what I saw happening.
Tristan Harris: Was there a single thing that made you want to resign?
William Saunder...: I think it was, again, a pattern of different incidents over time. And I think you can go from an attitude, like when I started, of hoping that OpenAI will do things right and hoping that they will listen, and then you slowly move towards an attitude of I'm kind of afraid that things are not happening correctly, but I think it's good for people like me to stay at the company in order to be a voice that is pushing more for taking things more cautiously and carefully and getting it right before shipping.
William Saunder...: But then you go from that to a perspective of they are not listening to me and I'm afraid that they won't listen to me even if the risks and the stakes get higher and there's more immediate risk. I also think as time goes on, there will be even more gigantic piles of money going into these systems. Every generation is sort of like another multiplicative increase in the amount of money that's required. And so there will also be this extraordinary opposing pressure to get the value from this massive investment. And so I don't think you can trust that this will improve.
Aza Raskin: I think this is such a critical point. I think it was GPT-3 was roughly $10 million to train, GPT-4 roughly $100 million, and GPT-5 roughly a billion dollars, and that's the multiplier. And I think the point you're making is that if you've just spent a billion dollars to train a model, or 10 billion dollars to train a model, your investors are going to require a return, and so there's a very strong incentive to release, to productize. And let's return to that. But I really am curious about your experience inside: did you specifically raise safety concerns that were then ignored, or how did that process go?
William Saunder...: Let's see. Maybe I can talk about some public comments from another member of the superalignment team, who talked about raising security concerns, then being reprimanded for this, and then later being fired with this cited as one of the reasons, that they had raised the concerns in an improper way. I don't think the concerns that were raised were addressed in a way that I felt comfortable with. I was worried about the concerns that were raised there, and it was even more worrying to see that the response was to reprimand the person who raised the concerns for raising them in a way that was impolite, rather than being like, "Okay, this wasn't the best, but yes, these are serious concerns and we will address them." That would've made me feel a lot more comfortable rather than the response that I saw.
Tristan Harris: I know that in your pre-interview with Sasha, you also mentioned that you thought the investigation into Altman wasn't satisfactory. I don't want to make it personal in any way, but I just want to make sure we're covering any bases that are important for people to know, and then we can move on. And if not, it's fine, we can skip it.
William Saunder...: Yeah, I think I want to be careful what I say. When Sam Altman was fired, and I think ever since then, it was sort of a mystery within the company what exact events led to this. Sort of like, what interactions were there with Sam Altman and the board or other people within the company that led to this? And when the investigation reported, they said words to the effect of, "We conclude that the board acted within their authority to fire Sam Altman, but we don't believe that they had to fire Sam Altman."
William Saunder...: And I think this leaves open the mystery of what happened and how serious it is, because this is perfectly compatible with Sam Altman having a history of unethical but technically legal behavior. And I really would've hoped that the investigation would've shared more, at least with the employees, so that people could decide for themselves how to weigh this instead of leaving open this mystery. And so, I can't even tell whether Sam was wronged by this process. And so, yeah, I felt very uncomfortable with how the investigation concluded. It did not make me feel like the issue was closed.
Aza Raskin: One of the pieces of rhetoric we're starting to hear from companies is about "science-based" concerns. The kinds of risks that you were talking about at the beginning, they're sort of trying to paint as fantasy futures, while what the companies care about are science-based risks. And I just would love for you to talk a little bit, maybe debunk, what's going on there when they're trying to use this phrase to discredit the risks that even Sam Altman has talked about in the past.
William Saunder...: One way to maybe put this is suppose you're building airplanes and you've so far only run them on short flights over land and then you've got all these great plans of flying airplanes over the ocean, so you can go between America and Europe, and then someone starts thinking like, "Gee, if we do this, then maybe airplanes might crash into the water." And then someone else comes to you and says, "Well, we haven't actually had any airplanes crash into the water yet. You think you know that this might happen, but we don't really know. So let's just start an airline and then see if maybe some planes crash into the water in the future. If enough planes crash into the water, we'll fix it. Don't worry."
William Saunder...: I think there's a really important but subtle distinction between putting in the effort to prevent problems versus putting in the effort after the problems happen. And I think this is going to be critically important when we have AI systems that are at or exceeding human-level capabilities. I think the problems will be so large that we do not want to see the first AI equivalent of a plane crash. I don't know to what degree we can prevent this, but I would really, really, really want us to make our best shot. And I would really, really want a strong reason for doing something like releasing earlier instead of making our best shot. And I never saw anything like that that convinced me while I was at OpenAI.
Tristan Harris: One of the overall themes that's related to this interview and your letter, A Right to Warn, is the ability of people to speak up safely about an issue. My understanding is that WilmerHale, the law firm that was tasked with investigating and interviewing people about Sam's behavior, did not grant confidentiality to the people that they interviewed. And so, there's sort of a parallel here: if you don't have confidentiality to share certain things about what's going on, you're not going to be able to share important information. Which brings me back to the letter that you co-wrote, which has 13 signatories, 11 of them current and former OpenAI employees. It's also been endorsed by some of the biggest names in AI: Yoshua Bengio, Geoffrey Hinton, Stuart Russell. And in the letter, which you call A Right to Warn, you lay out four basic principles. Can you give a brief overview of what the Right to Warn principles are?
William Saunder...: Yeah, I think the current incentive structure is the only people with the information are inside of these companies. And then they're sort of, in some ways subtly and in some ways more overtly, discouraged from interacting with the media or posting things in public when they might be contrary to the image projected by the company. And I think a big part of this problem here is if there is something really big, if there is a group of people who are being seriously harmed, I really think that people will sort of ignore these incentives and speak up. But when there are some things that are starting to go wrong, when it's like the process wasn't great, but the outcome was okay, the thing we were shipping wasn't dangerous, I think these kinds of things aren't as big and dramatic, but they need to be addressed. And so these incentives really prevent people from talking about this category of things, which then means that the company doesn't face any pressure to address them before there's a big crisis.
William Saunder...: So the first principle really arose out of the situation at OpenAI where, when I resigned from the company, I was given a non-disparagement agreement and I was informed that if I did not sign this non-disparagement agreement, my vested equity would be canceled, which is something like shares in the company that I was hoping I would be able to sell at some later date. And the value of these shares was like millions of dollars. And then this agreement said that you can't disparage the company, which means that you can't even make statements that are based purely on public information but that are negative about the company. And then this agreement itself had to be kept secret. So you couldn't say to anyone like, "Oh, OpenAI forced me to sign a non-disparagement agreement, so I can't say anything negative." And so, effectively, you can only say positive things or say nothing at all.
William Saunder...: And I think this is a really unethical practice. And in the original version of this agreement, I was told that I would have to decide whether to sign within seven days, which places a lot of time pressure. So, all of this I think was an extraordinarily unusual practice. The only other place that I've seen evidence of this kind of thing is TikTok, which is not very illustrious company to be in. Forbidding statements that are based on public information and that are critical of the company is just unacceptable, especially when it potentially applies for the rest of your life, even after you've left.
William Saunder...: The company told me in a letter that they do not intend to enforce the agreement that I signed. However, they still have the power to never let me sell these shares that I have in the company. And there are no limits on this power. They have not clarified as of yet how they would use this power. And so, by talking to people like you and the media, it's possible that the company has already decided that they will never let me sell. And I think you should just never have to face the choice: do I lose millions of dollars, or do I say something that's slightly critical of the company? That's pretty ridiculous.
Tristan Harris: Right. So we've heard you say that this mix of legal threats and loss of equity is discouraging people from speaking out. And the second principle you wrote is to make sure companies establish an anonymous process for employees to raise risk-related concerns to the board and regulators?
William Saunder...: Yes. We wanted to go on from this and then ask, what would a truly responsible and open company actually do? And that sort of generated the ideas for the other principles. So, principle two is about what's the way that's most compatible with the company's legitimate interest in protecting confidential information and intellectual property, but still allows the concern to go to the appropriate parties, so that an employee can feel like it will be properly addressed and it can't sort of be dismissed or hushed up or downplayed. And so, the idea here is you have some kind of hotline or process for submitting concerns where the person submitting the concerns is anonymous so they can't be retaliated against. And then the concern simultaneously goes to the board of directors of the company, to any appropriate regulators, and also to someone who is both independent and has the technical expertise to be able to evaluate the concern.
William Saunder...: And I think this is important because a lot of these kinds of risks are not necessarily covered by existing forms of regulation. And even if they are, the regulatory bodies might not have the expertise to be able to understand and evaluate them. And for a lot of the concerns that I had, if they could go to someone who is independent and understands them, and they say, "Okay, William, I understand what you're saying, but I think this isn't a critical problem, I think this isn't creating a huge issue," or, "It's fine if the company takes some time to address this later," I would be much more at ease.
William Saunder...: And then I think principle three is about creating a culture where it is okay to say something that might be critical of the company, and it is okay to talk about things inside of the company. Right now the confidentiality provisions can cover any non-public information. So, the idea is just to make it clear that it is okay for everybody to talk about these kinds of things that don't touch core intellectual property or trade secrets. And then principle four is sort of what happens if these other processes aren't implemented or if these other processes have failed to adequately address the concern. And this is more like a request to companies: if someone has tried to submit this through proper channels and it's clear that this has not been adequately addressed, it's a request to not retaliate against employees who then go public.
Aza Raskin: Yeah, what I hear you saying is you're trying to be balanced in not giving whistleblowers carte blanche to just say negative things. But at the same time, it's sort of an analogy: imagine a power company was developing nuclear fusion technology, and they believed that that nuclear fusion technology was extremely dangerous, and then the safety team left and was blocked from being able to talk about their safety concerns. That would be extremely worrying. And that's sort of the place that we are now. It's sort of like the safety teams, and you, are a canary in the coal mine for extremely powerful technology. And the public has a right to know if the canaries are dying or feeling ill. And that's what this letter is asking for. It's the Right to Warn. That's what you're asking for here.
William Saunder...: Right. And I think really adopting these principles would lead to less of the conflict and drama around this that I don't think any of us want. And I think another analogy to make here is just the history of social media. If you're dealing with a social media algorithm and there's a way that it is set up that is clearly not in the public interest, the company would say, "Oh, this is confidential information." And so again, it's a similar situation where the only people who know aren't allowed to talk about it. And so there's no public scrutiny of what goes on. And I think adopting these kinds of principles would've helped a lot in those kinds of situations in the past.
Tristan Harris: And moreover, we have experience speaking with whistleblowers in social media, Frances Haugen among them, and the net effect of people like Frances speaking out is that companies have an incentive to shut down their internal research on safety and trust and integrity, because if they looked and they know, then they're liable for what they found, what they saw. And so, what we've seen at Facebook, for example, is that whistleblowing disincentivized more research, and it shut down CrowdTangle, which was the internal tool to help determine what's going on in various countries. And so, how do we incentivize that earlier on? And I just love that provocation. If we had had the right to warn in social media earlier, in such a way that we also required companies to look at things at the same time, rather than allowing them to shut down internal research on safety, where could we be now?
William Saunder...: Yeah. And if you look at OpenAI, like look, the people that I trusted most to try and look at the problems that could happen as this technology is scaled up and try to work on preventing them, they're leaving. That is happening. They will no longer be there.
Aza Raskin: Yeah.
Tristan Harris: That should be very concerning for listeners.
Aza Raskin: Take a breath. You've taken a really major stand, right? At risk are millions of dollars, potentially your reputation, potentially your ability to work at other companies. You are really seeing something that's important. And from that vantage point, what do you really want people to know? What's the most important thing for the listeners of this podcast, who include regulators and technologists, to take away?
William Saunder...: I think if any company in this industry comes to you and makes some claim like, "We have done a bunch of work to make the product safe before deployment," or, "We have a team that is responsible for making the product safe before deployment," or, "We have this process that we follow to make sure that the product is not dangerous for deployment," I think nobody should take this at face value from any company in this industry. Again, there are a lot of people who really want to get this right. There is a lot of genuinely good work that happens. But when it comes to the decision of whether this is sufficient, or whether the amount of work that has been done is in line with the public interest, I don't think you can trust anyone at these companies to be making that judgment themselves.
William Saunder...: And so, really, the world that I would like to move towards is one where there can be some body that can independently evaluate: have you gone through the process properly? Have you met the commitments that you've previously made? Have you addressed the known problems? And I think that's sort of the only way to move forward. But I think you shouldn't allow companies to deflect this concern by saying, "We did this list of five things," or whatever. Because you can be like, "Yes, we did these five things, but there's a bunch of other things we didn't do, and we don't know how much of the problem these five things would've addressed." And so, again, the only way I can see is independent experts who have access to see what is actually being done and who can actually make this call without having a massive conflict of interest. Maybe one final sentiment here is I do think we can figure this technology out. I do think we can do it properly. I think that is totally possible, but we have to be willing to put in the hard work to do that.
Aza Raskin: William, thank you so much for coming on Your Undivided Attention and for really taking the risk that you're taking to have the world understand the risk that we're under and what we need to make it more safe. Thank you, on behalf of all of us.
William Saunder...: Thanks for having me on this podcast.
Tristan Harris: Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer. And our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudeikin, original music by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts and much more at humanetech.com. And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show. And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
![[ Center for Humane Technology ]](https://substackcdn.com/image/fetch/$s_!uhgK!,w_80,h_80,c_fill,f_auto,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f9f5ef8-865a-4eb3-b23e-c8dfdc8401d2_518x518.png)
