Yuval Noah Harari: We Are at a "Turning Point in History"
Yuval Noah Harari and Aza Raskin on AI's cultural takeover.
Historian Yuval Noah Harari says that we are at a critical turning point. One in which AI’s ability to generate cultural artifacts threatens humanity’s role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents’?
In this conversation, moderated by Shirin Ghaffary, Aza Raskin and Harari discuss the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.
This conversation was recorded live at the Commonwealth Club World Affairs of California on October 3, 2024.
Shirin Ghaffary: Hello and welcome to tonight's program hosted by the Commonwealth Club World Affairs and the Center for Humane Technology. My name is Shirin Ghaffary. I'm an AI reporter for Bloomberg News and your moderator for tonight's conversation. Before we get started, we have a few reminders. Tonight's program is being recorded, so we kindly ask that you silence your cell phones for the duration of our program. And also if you have any questions for our guest speakers, please fill them out on the cards that were on your seats. Now it is my pleasure to introduce tonight's guests, Yuval Noah Harari and Aza Raskin.
Yuval Noah Harari is a historian, public intellectual, and best-selling author who has sold over 45 million books in 65 languages. He's also the co-founder of Sapienship, an international social impact company focused on education and storytelling. Yuval is currently a distinguished research fellow at the University of Cambridge Centre for the Study of Existential Risk, as well as a history professor at the Hebrew University of Jerusalem. His latest book is Nexus: A Brief History of Information Networks from the Stone Age to AI.
Aza Raskin is the co-founder of the Center for Humane Technology and a globally respected thought leader on the intersection of technology and humanity. He hosts the TED podcast Your Undivided Attention and was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma. Yuval and Aza, welcome.
Yuval Noah Hara...: Thank you. It's good to be here.
Shirin Ghaffary: Let me first start off by asking you about a year and a half ago, and I want to pose this to you both. There was a letter. Yuval, you signed this letter and Aza, I'm curious for your thoughts about it, but I want to talk about what that letter said and where we're at a year and a half from then. So this letter was a call to pause AI development, a call on the major AI labs to halt progress on any AI models beyond the level of GPT-4. That didn't happen.
Yuval Noah Hara...: I don't think anybody expected it. It was a PR trick. Nobody really expected everybody to stop.
Shirin Ghaffary: Right, but what do we make of the fact of the moment that we're in right now, which is that we are seeing this unprecedented race by some of the most powerful technology companies in the world to go full speed ahead toward reaching some kind of artificial general intelligence or super intelligence. I think things have only sped up, right?
Yuval Noah Hara...: Yeah, absolutely.
Shirin Ghaffary: What about [inaudible 00:04:20]?
Yuval Noah Hara...: I think the key question is really all about speed and all about time. And in my profession, I'm a historian, but I think history is not the study of the past. History is the study of change, how things change. And at present, things are changing at a faster rate than in any previous time in human history. And for me, that's the main problem. I don't think that AI is necessarily a bad technology. It can be the most positive technology that humans have ever created. But the thing is that AI is an inorganic entity, and it moves at an inorganic speed. And humans are organic beings, and we move much, much, much slower in comparison. Humans are extremely adaptable animals, but we need time to adapt, and that's the main requirement for dealing effectively, positively with the AI revolution: give us time.
And when you talk with the people leading the revolution, most of them, maybe after an hour or two of discussion, they generally say, "Yes, it would be a good idea to slow down and to give humans a bit more time, but we cannot slow down, because we are the good guys and we want to slow down, but our competitors will not slow down. Our competitors either here, in another corporation, or across the ocean, in another nation." And you talk to the competitors, and they say the same thing: "We would like to slow down, but we can't trust the others." And I think the key paradox of the whole AI revolution is that you have people saying, "We cannot trust the humans," but then they say, "but we think we would be able to trust the AIs." Because when you raise the issue of how we can trust these new intelligences that we are creating, they say, "Oh, we think we can figure that out."
Shirin Ghaffary: Yeah. So Aza, I want to pose this to you first. If we shouldn't trust the AI, who should we trust?
Aza Raskin: Here's I guess the question to ask, which is if you were to look back through history and give any one group a trillion times more power than any other group, who would you trust? Which religion? Which government? The answer is of course, none of them. And so this is the predicament we find ourselves in, which is, how do we find trust for technology that is moving so fast that if you take your eyes off of Twitter, you are already behind? Thinking about that pause letter and what did it do? It's interesting because there was a time before that letter and people were not yet talking about the risks of AI. And after that letter, everyone was talking about it. In fact, it paved the way for another letter from the Center for AI Safety where they had many of the leaders of AI say that we need to take the threat of AI as seriously as pandemics and nuclear war.
What we need is for the fear of all of us losing to become greater than the fear of me losing to you. It is that equation that has to shift to break the paranoia of, "Well, if I'm not going to do it, then somebody else will, so therefore I have to go forward." And just to set up the stakes a little bit, and why, exactly as you say, it's unrealistic to think that letter was meant to actually stop AI development. I think there's a good analogy here, which is what oil is to physical labor, that is to say every barrel of oil is worth 25,000 hours of physical labor, somebody moving something in the world. What oil is to physical labor, AI is to cognitive labor: that thing that you do when you open up an email and type, or when you're doing research. And that really sets up the race. You could ask the exact same question, why did we have the Paris Climate Accords, and yet nothing really happened? And it's because the center of our economy, the center of competition, runs through cognitive and physical labor.
Shirin Ghaffary: I want to talk for a second about just the reverse, the kind of accelerationist argument for AI. What do you say to the technologists, and we're here in the heart of Silicon Valley where I grew up, Aza grew up, right? People say, "Don't sweat the risks too much. Sure, we can think about and anticipate them, but we just have to build because the upside here is so immense. There are benefits for medicine, we can make it more affordable for the masses. Personalized education." Aza, you do research about communicating with animals. It is so cool. I want us to talk about that, too.
But Yuval, I want to ask you first, what do you make of that kind of classic Silicon Valley techno-optimist counter-argument that if we are too fixated on the negatives, we are never going to develop this technology that could be immensely helpful for society?
Yuval Noah Hara...: First of all, nobody's saying don't develop it, just do it more slowly. We are aware, even the critics... Again, part of my job as a historian and a philosopher is to shine a light on the threats, because the entrepreneurs, the engineers, the investors, they obviously focus on the positive potential. Now, I'm not denying the enormous positive potential, whether you think of healthcare, whether you think of education, of solving climate change. Every year, more than a million people die in car accidents, most of them caused by human error. Somebody drinking alcohol and driving, falling asleep at the wheel, things like that. The switch to self-driving vehicles is likely to save a million people every year. So we are aware of that, but we also need to take into account the dangers, the threats, which are equally big and could, in some extreme scenarios, be as catastrophic as a collapse of civilization.
To give just one example, very primitive AIs, the social media algorithms, have destabilized democracies all over the world. We are now in this paradoxical situation when we have the most sophisticated information technology in history and people can't talk to each other, and certainly can't listen; it's becoming very difficult to hold a rational conversation. You see it now in the US between Republicans and Democrats, and you have all these explanations, "Oh, it's because of US society and economics and globalization," whatever. But you go to almost every other democracy in the world, in my home country in Israel, you go to France, you go to Brazil, it's the same. So it's not the unique conditions of this or that country. It's the underlying technology that makes it almost impossible for people to have a conversation. Democracy is a conversation, and the technology is destroying the ability to have the conversation.
Now, is it worth it? Okay, we get these benefits, but we lose democracy all over the world, and then this technology is in the hands of authoritarian regimes that can use it to create the worst totalitarian regimes, the worst dystopias in human history. So we have to balance the potential benefits against the potential threats and move more carefully.
Aza Raskin: And actually this is the thing I really want the audience to do, a find-and-replace, because we always get asked, do the benefits outweigh the risks? And social media taught us that is the wrong question to ask. The right question to ask is, will the risks undermine the foundations of society so that we can't actually enjoy the benefits? That's the question we need to be asking. So if we could go back in time to say 2008, 2009, 2010, and instead of social media deploying as fast as possible into society, we said, "Yes, there are a lot of benefits, but let's just wait a second and ask what are the incentives that are going to govern how this technology is actually rolled out into society, how it'll impact our democracies, how it'll impact kids' mental health?"
Well, the reason why we were able to make The Social Dilemma, and we started calling out in 2013 the direction that social media was going to take us, was because we said, well, just like Charlie Munger, who's Warren Buffett's business partner, said, "Show me the incentive and I'll show you the outcome." What is the incentive for social media? It's to make you more reactive and get a reaction from your nervous system. And as soon as you say it that way, you're like, "Well, of course the things that are outrageous, the things that get people mad, that essentially cold civil wars, are very profitable for engagement-based business models." It's all foreseeable outcomes from a business model.
So the question we should be asking ourselves now with AI, because once social media became entangled with our society, it took hostage GDP, it took hostage elections, because you can't win an election unless you're on it, it took hostage news and hollowed news out. Once it's all happened, it's very hard to walk back and undo it. So what we're saying is we need to ask the question now, "Well, what is the incentive driving the development of AI? Because that, not the good intentions of the creators, is going to determine which world we live in."
Yuval Noah Hara...: Maybe I'll make a very strange historical comparison here that Silicon Valley reminds me a little of the Bolshevik party.
Shirin Ghaffary: Controversial analogy, but okay, I'll hear you.
Yuval Noah Hara...: After the revolution. There are huge differences, of course, but two things are similar. First of all, the ambition to re-engineer society from scratch: "We are the vanguard. Most people in the world don't understand what is happening. We are this small vanguard that understands, and we think we can re-engineer society from its most basic foundations and create a better world, an almost perfect world." And the other common thing is that if you become convinced of that, it's an open check to do some terrible things on the way, because you say, "We are creating utopia. The benefits would be so immense that, as the saying goes, to make an omelette, you need to break a few eggs."
So this belief in creating the best society in the world is really dangerous, because then it justifies a lot of short-term harm to people. And of course, in the end, maybe you don't get to build the perfect society, maybe you misunderstood. And really the worst problems come, again, not from the technical glitches of the technology, but from the moment the technology meets society. And there is no way you can simulate history in a laboratory. There are all these discussions about safety, and the technology companies, the tech giants, tell us, "We tested it. This is safe."
For me as a historian, the question is, how can you test history in a laboratory? You can test that it is safe in some very limited, narrow sense, but what happens when this is in the hands of millions of people, of all kinds of political parties, of armies? Do you really know how it will play out? And the answer is obviously no. Nobody can do that. There are no repeatable experiments in history, and there is no way to test history in a laboratory.

Shirin Ghaffary: I have to ask, Yuval, you've had a very welcome reception in Silicon Valley and tech circles over the years. I've talked to tech executives who are big fans of your work, of Sapiens. Now with this new book, which has a pretty, I would say, critical outlook about some of the risks of this technology that everyone is so excited about in Silicon Valley, how have your interactions been with tech leaders recently? How have they been receiving this book? I know you've been-
Yuval Noah Hara...: It's just out. I don't know yet. But what I do know is that many of these people are very concerned themselves. They have their public face, that they are very optimistic and they emphasize the benefits and so forth, but they also understand, maybe not the risks, but the immense power of what they are creating better than almost anybody else. And therefore, most of them are really worried. I mentioned earlier this arms race mentality; if they thought they could slow down, I think most of them would like to slow down. But again, because they're so afraid of the competition, they are in this arms race mentality which doesn't allow them to do it. And you mentioned the word excited, and you also talked about the excitement. I think there is just far too much excitement in all that.
And it really, it's the most misunderstood word in the English language. At least in the United States, people don't really understand what the word excited means. They think it means happy. So when they meet you, they tell you, "Oh, I'm so excited to meet you." And this is not the meaning of the word. Happiness is often calm and relaxed. "Oh, I'm so relaxed to meet you." And excited is when all your nervous system and all your brain is kind of on fire. And this is good sometimes, but a biological fact about human beings and all other animals is that if you keep them excited all the time, they collapse and die. And I think that the world as a whole and the United States and Silicon Valley is just far too excited.
Aza Raskin: We are currently starting to have these debates about whether AI is conscious; it's not even clear that humanity is. And when I think, actually, you're the historian, so please jump in if I'm getting something wrong. But when I think about humanity's relationship with technology, we've always been a species co-evolving with our technology. We'll have some problem and we'll use technology to solve that problem. But in the process, we make more, bigger, different problems. And then we say, "Keep going."
And so it's sort of like humanity has a can, and we kick it down the road and it gets a little bit bigger, but that's okay because next time around we can kick the can down the road again, and it gets a little bigger. And by and large, I think we've made, you could argue, really good trades with technology. Most of us would probably rather not live in a different era than now. So we're like, "Okay, maybe we've made good trades and those externalities are fine." But now that can is getting so big as to be the size of the world. We invent plastics and Teflon, amazing, but we also get forever chemicals. And The New York Times just reported that cleaning up forever chemicals, which are at unsafe levels for human beings and are causing farm animals to die, would cost more than the entire GDP of the world every year.
We're at the breaking points of our biosphere, of our psychosocial sphere. And so it's unclear if we can kick the can down the road any further. And if we take AI, which we have this incredible machine called civilization and it has pedals and you pedal the machine, you get skyscrapers and medicine and flights and all these amazing things. But you also get forever chemicals and ozone holes, mental health problems, and you just take AI and you make the whole system more efficient and the pedals go faster, do we expect that the fundamental boundaries of what it is to be human and the health of our planet, do we expect those things to survive? And to me, this is a much scarier direction than what some bad actors are going to do with AI. It's what is our overall system going to do with AI?
Yuval Noah Hara...: And maybe I'll just add to that: again, in history, usually the problem with new technology is not the destination, but the way there. When a new technology is introduced with a lot of positive potential, the problem is that people don't know how to use it beneficially, and they experiment. And many of these experiments turn out to be terrible mistakes. So if you think, for instance, about the last big technological revolution, the Industrial Revolution: when you look back, and I've had these conversations many times with the titans of industry, they will tell me something like, "When they invented the train or the car, there were all these apocalyptic prophecies about what it would do to human society. And look, things are now much, much better than they were before the inventions of these technologies."
But for me, the historian, the main issue is what happened on the way. If you just look at the starting point and the end point: the starting point is the year 1800, before the invention of trains and telegraphs and cars and so forth, and the end point, let's say the year 2000. And you look at almost any measure, except the ecological health of the planet. Let's put that aside for a moment if we can. You look at every other measure, life expectancy, child mortality, women dying in childbirth, and it all improved dramatically. Everything got better, but it was not a straight line. The way from 1800 to 2000 was a rollercoaster with a lot of terrible experiments in between, because when industrial technology was invented, nobody knew how to build an industrial society.
There was no model in history. So people tried different models, and one of the first big ideas that came along was that the only way to build an industrial society is to build an empire. And there was a rationale, a logic behind it because the argument was agrarian society can be local, but industry needs raw materials. It needs markets. If we build an industrial society and we don't control the raw materials and the markets, our competitors, again, the arms race mentality, our competitors could block us and destroy us. So almost any country that industrialized even a country like Belgium, when it industrializes in the 19th century, it goes to build an empire in the Congo because this is how you do it. This is how you build an industrial society. Today, we look back and we say, this was a terrible mistake. Hundreds of millions of people suffered terribly for generations until people realized actually you can build an industrial society without an empire.
Other terrible experiments were communist and fascist totalitarian regimes. Again, the argument was not something divorced from industrial technology. The argument was that democracies can't handle the enormous powers released by the steam engine, the telegraph, the internal combustion engine. Only a totalitarian regime can harness and make the most of these new technologies.
And a lot of people, again, going back to the Bolshevik Revolution, a lot of people in the 1920s, 30s, 40s were really convinced that the only way to build an industrial society was to build a totalitarian regime. And we can now look with hindsight and say, "Oh, they were so mistaken." But in 1930, it was not clear. And again, my fear, my main fear with the AI revolution, is not about the destination but about the way there. Nobody has any idea how to build an AI-based society. And if we need to go through another cycle of empire building and totalitarian regimes and world wars to realize, "Oh, this is not the way. This is how you do it," it's very bad news. As a historian, I would say that on the test of the 20th century, how to use industrial technology, our species got a C-.
Enough to pass. Most of us are here, but not brilliant. Now, if we get a C- on how to deal, not with steam engines, but on how to deal with AI, that is very, very bad news.
Shirin Ghaffary: What are the unique potential failed experiments that you worry could play out in the short term with AI? Because if you look at those kind of catastrophic or existential risks, we haven't seen them yet right? What are your early signs for-
Yuval Noah Hara...: If you discount the collapse of democracies, [inaudible 00:27:27]. Very primitive AIs. I mean the social media algorithms, and maybe go back really to the basic definition of what is an AI. Not every machine and not every computer or algorithm is an AI. For me, the distinct feature, what makes AI AI is the ability to make decisions by itself and to invent new ideas by itself, to learn and change by itself. Yes, humans design it, engineer it in the first place, but they give it this ability to learn and change by itself. And social media algorithms in a very narrow field had this ability. The instruction, the goal they were given by Twitter and Facebook and YouTube was not to spread hatred and outrage and destabilize democracies. The goal they were given is increase user engagement. And then the algorithms, they experimented on millions of human Guinea pigs.
And they discovered by trial and error that the easiest way to increase user engagement is to spread outrage. That outrage is very engaging, all these hate-filled conspiracy theories and so forth. And they decided to do it. And these were decisions made by a non-human intelligence. Humans produced enormous amounts of content, some of it full of hate, some of it full of compassion, some of it boring, and the algorithms decided, let's spread the hate-filled content, the fear-filled content. And what does it mean that they decided to spread it? They decided that this will be at the top of your Facebook newsfeed. This will be the next video on YouTube. This will be what they will recommend or autoplay for you. And this is one of the most important jobs in the world: they basically took over the job of content editors and news editors. And when we talk about automating jobs, we think about automating taxi drivers, automating coal miners. It's amazing to think that one of the first jobs in the world to be automated was news editors.
Shirin Ghaffary: I picked the wrong profession.
Aza Raskin: And this is why we call social media first contact with AI. And how did we do? We sort of lost.
Yuval Noah Hara...: Not a C-, an F.
Aza Raskin: Yeah, exactly.
Shirin Ghaffary: An F, wow. What about all the people who have positive interactions socially? You don't give some grade inflation for that?
Yuval Noah Hara...: I mean, I met my husband online on social media 22 years ago. So I'm also very grateful to social media. But again, when you look at the big picture and what it did to the basic social structure, the ability to have a reasoned conversation with our fellow human beings, with our fellow citizens on that... well, when I said [inaudible 00:30:43]... on that, we get an F.
Aza Raskin: [inaudible 00:30:46] pass around information.
Shirin Ghaffary: Which is the topic of your book.
Yuval Noah Hara...: An F in the sense that we are failing the test completely. It's not like we are barely passing it. We are really failing it all over the world. And then we need to understand that democracy in essence is a conversation, which is built on information technology. For most of history, large-scale democracy was simply impossible. We have no example of a large-scale democracy from the ancient world. All the examples are of small city-states like Athens or Rome, or even smaller tribes. It was just impossible to hold a political conversation between millions of people spread over an entire country. It became possible only after the invention of modern information technology, first newspapers, then telegraphs and radio and so forth. And now the new information technology is undermining all that.
Shirin Ghaffary: And how about with this kind of generative AI, we're still in the really early phases of adopting it as a society, but how about with something like ChatGPT? How do you think that might change the information dynamic? What are the specific information risks there that are different than the social media algorithms of the past?
Aza Raskin: We've never before had non-humans about to generate the bulk of our cultural content. Sometimes we call it the flippening. It's the moment when human-generated content, our culture, becomes the minority.
And of course then the question is, what are the incentives for that? So if you think TikTok is engaging and addicting now, you have seen nothing. As of last week, Facebook launched an "Imagine for You" page where AI generates the thing it thinks you're going to like. Now, obviously it's at a very early stage. And there's actually a network called Social.ai where they tell you that every one of your followers is going to be an AI, and yet it feels so good, because you get so many followers and they're all commenting.
And even though you know, it's cognitively impenetrable, and so you fall for it, right? This is the year 2025, when it's not just going to be ChatGPT, a thing that you go to and type into. It's going to be agents that can call themselves, that are out there actuating in the world, doing whatever it is a human being can do online. And you might think about just one individual that's maybe creating deepfakes of themselves, talking to people, defrauding people, but no, it's not just one individual. You can spin up a corporation-scale set of agents. They're all going to be operating according to whatever market incentives are out there. So that's just some of what's coming with generative AI.
Yuval Noah Hara...: Maybe I'll add to that. Before we even think in terms of risks and threats or opportunities, is it good, is it bad, let's just stop for a moment and try to understand what is happening, what kind of turning point in history we are really at. Because for tens of thousands of years, humans have lived inside a human-made culture. We are cultural animals. We live our lives and we constantly interact with cultural artifacts, whether it's texts or images, stories, mythologies, laws, currencies, financial devices. It's all coming out of the human mind. Some human somewhere invented this. And up until now, nothing on the planet could do that. Only human beings. So any song you encountered, any image, any currency, any religious belief, it comes from a human mind. And now we have on the planet something which is not human, which is not even organic, which functions according to a completely alien logic in this sense, and is able to generate such things at scale, in many cases better than most humans, maybe soon better even than the best humans.
And we are not talking about a single computer, we are talking about millions and potentially billions of these alien agents. And is it good? Is it bad? Leave that aside. Just think that we are going to live in this kind of new hybrid society in which many of the decisions, many of the inventions, are coming from a non-human consciousness. Now, I know that many people here in the States, also in other countries, immigration is one of the most hotly debated topics. Without getting into the discussion of who is right, who is wrong, obviously we have a lot of people very worried that immigrants are coming and they could take our jobs, and they have different ideas about how to manage society, and they have different cultural ideas. And we are about, in this sense, to face the biggest immigration wave in history, coming not from across the Rio Grande, but from California basically.
And these immigrants from California, from Silicon Valley, they're going to enter every house, every bank, every factory, every government office in the world. They're not just going to replace the taxi drivers; the first people they replaced were the news editors, and they will replace the bankers. They will replace the generals. We can talk about what it's doing to warfare already now, like in the war in Gaza. They will replace the CEOs, they will replace the investors, and they have very, very different cultural and social ideas than we have. Is it bad? Is it good? You can have different views about this wave of immigration, but the first thing to realize is that we've seen nothing like it in history, and it's coming very fast. Now, I was just yesterday in a discussion where people said ChatGPT was released almost two years ago and it still didn't change the world. And I understand that for people who run a high-tech company, two years is like eternity.
Shirin Ghaffary: It is.
Yuval Noah Hara...: In that thinking culture. So two years, and nothing changed in two years. But in history, two years is nothing. Imagine that we are now in London in 1832, and the first commercial railroad line was opened two years ago, between Manchester and Liverpool in 1830. And we are having this discussion and somebody says, "Look, all this hype around trains, around steam engines, it's been two years since they opened the first railroad line and nothing has changed," but within 20 years or 50 years, it completely changed everything in the world. The entire geopolitical order was upended, the economic system, the most basic structures of human society. Another topic of discussion in this meeting yesterday was the family. What is happening to the family? And when people said family, they meant what most people think about as family after trains came, after the Industrial Revolution, which is the nuclear family.
For most of history when people said family, they thought extended family with all the aunts and uncles and cousins and grandparents. This was the family, this was the unit and the industrial revolution, one of the things it did in most of the world was to break up the extended family and the main unit became the nuclear family. And this was not the traditional family of humans. This was actually an outcome of the Industrial Revolution. So it really changed everything these trains. It just took a bit more than two years. And this was just steam engines. And now think about the potential of a machine that can make decisions, that can create new ideas, that can learn and change. And we have billions of these machines everywhere, and they can enter into every human relationship, not just families.
One example: people writing emails. And now I know many people, including in my family, who would say, "Oh, I'm too busy to write this. I don't need to think 10 minutes about how to write an email. I'll just tell ChatGPT to write a polite letter that says no." And then ChatGPT writes a whole page with all these nice phrases and all these compliments, which basically says no. And of course, on the other side, you have another human being who says, "I don't have the time to read this whole letter now. [inaudible 00:40:16] ChatGPT, tell me what they said." And the ChatGPT on the other side says: they said no.
Shirin Ghaffary: Do you use ChatGPT yourself?
Yuval Noah Hara...: I leave it to the other family members and team members. I use it a little for translation and things like that, but I think it's also coming for me. Yeah, definitely.
Shirin Ghaffary: How about you, Aza? Do you use ChatGPT or generative AI in your day to day?
Aza Raskin: I do, absolutely.
Shirin Ghaffary: How are you using it?
Aza Raskin: As an incredible metaphorical search engine. So for instance, there's a great example in Bogotá, Colombia, where there was a coordination problem. There were essentially terrible traffic infractions, people running red lights, crossing the streets. They couldn't figure out how to solve it. And so this mayor decided he was going to have mimes walk down the streets and just make fun of anyone that was jaywalking.
And they would video it and put it on television. And lo and behold, within a month or two, people's behavior started to change. The police couldn't do it, but it turns out mimes could. Okay, so that's a super interesting non-linear solution to a hard problem. And so one of the things I like to ask ChatGPT is, well, what are other examples like that? And it does a great job doing a metaphorical search. But to go back to social media, because social media as a sort of first contact with AI actually lets you see all of the dynamics that are playing out. Because the first thing you could say is, well, once you know that it's doing something bad, can't you just unplug it? You hear that all the time for AI. Once you see it's doing bad, just unplug it.
Well, Frances Haugen, who's the Facebook whistleblower, was able to disclose a whole bunch of Facebook's own internal data. And one of the things, I don't know if you guys know, but it turns out there is one very simple thing that Facebook could do that would reduce the amount of misinformation, disinformation, hate speech, all the terrible stuff, more than the tens of billions of dollars that they are currently spending on content moderation. You know what that one thing is? It's just remove the reshare button after two hops. I share to you, you share to one other person, then the reshare button goes away. You can still copy and paste. This is not even censorship. That one little thing just reduces virality, because it turns out that which is viral is likely to be a virus. But they didn't do it because it hurt engagement a little bit, which mattered because they were now in a competition with TikTok and everyone else, so they felt like they couldn't do it, or maybe they just wanted a higher stock price.
And this is even after the research had come out that said, when Facebook changed their algorithm to something called meaningful social interaction, which really just measured how reactive people were, the number of comments people added, as a measure of meaningfulness, political parties across Europe and also in India and Taiwan went to Facebook and said, "We know that you changed your algorithm," and Facebook's like, "Sure, tell us about that." And they said, "No, we know that you changed the algorithm, because we used to post things like white papers and positions and they didn't get the most engagement, but they got some; now they get zero." And they told Facebook, this is all in Frances Haugen's disclosures, that they were changing their behavior to say the click-baity, angry thing, and Facebook still did nothing about it, because of the incentives.
And so we're going to see the exact same thing with AI. And this gets to the fundamental question of whether we as humanity are going to be able to survive ourselves. And that is, do you guys know the marshmallow experiment? You give a kid a marshmallow, and if they don't eat it, you say, "I'll give you another marshmallow in 15 minutes," and it tests the delayed gratification thing. If we are a one-marshmallow species, we're not going to make it. We have to be the two-marshmallow species. And actually it's even harder than that, because the actual thing with AI is that there are a whole bunch of kids sitting around. It's not just one kid waiting for the marshmallow. There are many kids sitting around the marshmallow, and any one of them can grab it, and then no one else gets marshmallows.
We have to figure out how to become the two-marshmallow species so that we can coordinate and make it. And that to me is the Apollo mission of our times. How do we create the governance? How do we change our culture so that we can do the delayed gratification trust thing?
Yuval Noah Hara...: And we basically have...
Aza Raskin: Marshmallows. I think this is going to be a sticky meme.
Yuval Noah Hara...: We have some of the smartest and wisest people in the world, but working on the wrong problem, which is, again, a very common phenomenon in human history. Humans often, also in personal life, spend very little time choosing, deciding which problem to solve, and then spend almost all their time and energy solving it, only to discover too late that they solved the wrong problem. So again, with these two basic problems of human trust and AI, we are focusing on solving the AI problem instead of focusing on solving the trust problem, the trust-between-humans problem.
Shirin Ghaffary: And so how do we solve the trust problem? I want to shift us to solutions, right?
Aza Raskin: Let me give you something, because I don't want people to hear me as just saying AI bad, right? I use AI every day to try to translate animal language. My father died of pancreatic cancer, same thing as Steve Jobs. I think that AI would have been able to diagnose and help him. So I really want that world. Let me give an example of something I think AI could do that would be really interesting, in the solutions segment. So do you guys know about AlphaGo move 37? So this is where they got an AI to play itself over and over and over again until it became better than any human player. And there's this famous move, move 37, where, playing against the world leader in Go, it made a move that no human had ever made in 1,000-plus years of Go history. It shocked the Go world so much that the champion just got up and walked out for a little bit.

But this is interesting because after move 37, it has changed the way that Go is played, it has transformed the nature of the game. Right. So AI playing itself has discovered a new strategy that transforms the nature of the game. This is really interesting because there are other games more interesting than Go. There's the game of conflict resolution where in conflict, how do we resolve it? Well, we could just use the strategy of tit-for-tat. You say something hurtful, I then feel hurt. So I say something hurtful back and we just go back and forth and it's a negative sum game. We see this in geopolitics all the time. Well then along comes this guy Marshall Rosenberg, who invents nonviolent communication and it changes the nature of how that game goes. And it says, what I think you're saying is this, and when you say that it makes me feel this way.
And suddenly we go from a negative-sum or zero-sum game into a positive-sum game. So imagine AI agents that we can trust, all of a sudden, in negotiations. Like, if I'm negotiating with you, I'm going to have some private information I might not want to share with you. You're going to have private information you don't want to share with me. So we can't find the optimal solution, because we don't trust each other. If you had an agent that could actually ingest all of your information, all of my information, and find the Pareto-optimal solution, well, that changes the nature of game theory. There could very well be, sort of, not AlphaGo, but Alpha Treaty, where there are brand new moves, strategies that human beings have not discovered in thousands of years. And maybe we can have the move 37 for trust.
Shirin Ghaffary: Right. So there are ways, and you've just described several of them, that we can harness AI to hopefully enhance the good parts of society we already have. What do you think we need to do? What are the ways that we can stop AI from having this effect of diminishing our trust, of weakening our information networks? I know, Yuval, in your book you talk about the need for disclosure when you are talking to an AI versus a human being. Why is that so important? How do you think we're doing with that now? Because I test all the latest AI products, and some of them seem to me quite designed to make you feel like you are talking to a real person. And there are people who are forming real relationships, sometimes even ones that mimic interpersonal romantic relationships, with AI chatbots. So how do you think we're doing on that, and why is it important?
Yuval Noah Hara...: Well, I think there is a question about specific regulations and then there is a question about institutions. So there are some regulations that should be enforced as soon as possible. One of them is to ban counterfeit humans, no fake humans, the same way that for thousands of years we had a very strict ban against fake money. Otherwise, the financial system would collapse. To preserve trust between humans, we need to know whether we are talking with a human being or with an AI. And imagine democracy as a group of people standing together having a conversation. Suddenly a group of robots join the circle and they speak very loudly, very persuasively, and very emotionally also. And you don't know who is who. If democracy means a human conversation, it collapses.
AIs are welcome to talk with us in many, many situations like an AI doctor giving us advice on condition that it is disclosed. It's very clear, transparent that this is an AI. Or if you see some story that gains a lot of traction on Twitter, you need to know whether the traction is a lot of human beings interested in the story or a lot of bots pushing the story. So that's one regulation.
Another key regulation is that companies should be liable, responsible, for the actions of their algorithms, not for the actions of the users. Again, this is the whole free speech red herring: when you talk about it, people say, "Yeah, but what about the free speech of the human users?" So if a human being publishes some lie or hateful conspiracy theory online, I'm in the camp of people who think that we should be very, very careful before we censor that human being, before we authorize Facebook or Twitter or TikTok to censor that human being. But human beings publish so much content all the time. If the algorithm of the company, out of all the content published by humans, chooses to promote that particular hate-filled conspiracy theory and not some lesson in biology or whatever, that's on the company. That's the action of its algorithm, not the action of the human user. And it should be liable for that.
So this is a very important regulation that I think we need, like, yesterday or last year, but I would emphasize that there is no way to regulate the AI revolution in advance. There is no way we can anticipate how this is going to develop, especially because we are dealing with agents that can learn and change. So what we really need is institutions that are able to understand and react to things as they develop, living institutions staffed with some of the best human talent, with access to the cutting-edge technology, which means huge, huge funding that can only come from governments. And these are not really regulatory institutions. Regulations come later. Regulations are the teeth, but before teeth, we need eyes so we know what to bite.
And at present, most people in the world and even most governments in the world, they have no idea. They don't understand what is really happening with the AI revolution. I mean, almost all the knowledge is in the hands of a few companies in two or very few states. So even if you are a government of a country, I don't know like Colombia or Egypt or Bangladesh, how do you know to separate the hype from the reality? What is really happening? What are the potential threats to our country? We need an international institution again, which is not even regulatory, it's just there to understand what is happening and tell people all over the world so that they can join the conversation because the conversation is also about their fate.
Shirin Ghaffary: Do you think that the international AI safety institutes, the US has one, the UK has one, they're pretty new, started in the past year, right? I think there are several other countries that have recently started these up too. Do you think those are adequate? Is that the kind of group that you're looking for? Of course they do not have nearly as much money as AI labs, OpenAI,-
Yuval Noah Hara...: That's the key.
Shirin Ghaffary: $6.5 billion. And I believe the US AI Safety Institute has about $10 million in funding, if I'm correct.
Yuval Noah Hara...: I mean, if your institution has $10 million and you're trying to understand what's happening in companies that have hundreds of billions of dollars, you're not going to do it, partly because the talent will go to the companies and not to you. And again, it's not that people are attracted only by very high salaries. They also want to play with the latest toys. Many of the leading people are less interested in the money than in the actual ability to play with the cutting-edge technology and knowledge. But to have this, you also need a lot of funding. And the good thing about establishing such an institution is that it is relatively easy to verify that governments are doing what they said they would do.
If you try to have a kind of international treaty banning killer robots, autonomous weapons systems, this is almost impossible because how do you enforce it? A country can sign it and then its competitors will say, how do we know that it's not developing this technology in some secret laboratory? Very difficult. But if the treaty basically says we are establishing this international institution and each country agrees to contribute a certain amount of money, then you can verify easily whether it's paid the money or not. This is just the first stage.
But going back to what I said earlier, a very big problem with humanity throughout history, again, it goes back to speed. We rush things. There is a problem, and it's very difficult for us to just stay with the problem, to understand what the problem really is before we jump to a solution. The instinct is: I don't want the problem, what is the solution? You grab the first thing, and it's often the wrong thing. So even though we are in a rush, you cannot slow down by speeding up. If our problem is that things are going too fast, then the people who are trying to slow things down can't do it by speeding up. It will only make things worse.
Shirin Ghaffary: Aza, how about you? What's your biggest hope for solutions to some of the problems we talked about with AI?
Aza Raskin: Stuart Russell, who's one of the fathers of AI, he sort of calculated out and he says that there's 1000 to one spending gap between the amount of money that's going into making AI more powerful than in trying to steer it or make it safe. Does that sound right to you guys? So how much should we spend? And I think here we can turn to biological systems. How much of your energy in your body do you spend on your immune system? And it turns out it's around 15 to 20%. What percentage of the budget for say a city like LA goes to its immune system like fire department, police, things like that? Turns out around 25%.
So I think this gives us a decent rule of thumb, that we should be spending on order a quarter of every dollar that goes into making AI more powerful into learning how to steer it into all of the safety institutes, into the Apollo mission for redirecting every single one of those very brilliant people that's working on making you click on ads and instead getting them to work on figuring out how do we create a new form of governance. The US was founded on the idea that you could get a group of people together and figure out a form of governance that was trustworthy. Right. And that really hadn't happened before. And that system was based on 17th-century technology, 17th-century understanding of psychology and anthropology. But it's lasted 250 years.
Of course, if you had Windows 3.1 that lasted 250 years, you'd expect it to have a lot of bugs and be full of malware. You could argue we're sort of there with our governance software. It's time for a reboot, but we have a lot of new tools. We have zero-knowledge proofs, we have cognitive labor being automated by AI, and we have distributed trust networks. It is time, like the call right now, it is time to invest those billions of dollars, to redirect some of that 1,000-to-one down to one-to-four, into that project, because that is the way that we can survive ourselves.
Shirin Ghaffary: Great. Well, thank you both so much. I want to take some time to answer the audience's very thoughtful questions. We'll start with this one. Yuval, with AI constantly changing, is there something that you wish you could have added or included to your book but weren't able to?
Yuval Noah Hara...: I made a conscious decision when writing Nexus that I would not try to stay at the cutting edge, because this is impossible. Books are still a medieval product, basically. I mean, it takes years to research and write them. And from the moment that the manuscript is done until it's out in the store, it's another half a year to a year. So it was obvious it's impossible to stay at the front, and instead I actually went for older examples, like social media in the 2010s, in order to have the added value of historical perspective. Because when you're at the cutting edge, it's extremely difficult to understand what is really happening, what the meaning of it is. If you have even 10 years of perspective, it's a bit easier.
Shirin Ghaffary: What is one question that you would like to ask each other? And Aza, I'll start with you.
Aza Raskin: Oh, that is one of the hardest questions. I guess what is a belief that you hold? I have two directions to go. Well, what is a belief that you hold that your peers and the people you respect do not?
Yuval Noah Hara...: Ooh. There are some, I mean, it's not kind of universal. Some people also hold this belief, but one of the things I see in the environments that I hang in is that people tend to discount the value of nationalism and patriotism, especially when it comes to the survival of democracy. You have this kind of misunderstanding that there is somehow a kind of contradiction between them when in fact, the same way that democracy is built on top of information technology, it's also built on top of the existence of a national community. And without a national community, almost no democracy can survive.
Aza Raskin: Yeah.
Yuval Noah Hara...: And again, when I think about nationalism, what is the meaning of the word? Too many people in the world associate it with hatred, that nationalism means hating foreigners. That to be a patriot means that you hate people in other countries, you hate minorities and so forth. But no, patriotism and nationalism should be about love, about care. They are about caring about your compatriots, which manifests itself not just in waving flags or, again, in hating others, but, for instance, in paying your taxes honestly so that complete strangers you've never met before in your life will get good education and healthcare.
And really from a historical perspective, the kind of miracle of nationalism is the ability to make people care about complete strangers they never met in their life. Nationalism is a very new thing in human history. It's very different from tribalism. For most of human evolution, humans lived in very small groups of friends and family members. You knew everybody or most of everybody and strangers were distrusted and you couldn't cooperate with them. The formation of big nations, of millions of people is a very, very new thing. And actually hopeful thing in human evolution, because you have millions of people you never met, 99.99% of them in your life and still you care about them enough, for instance, to take some of the resources of your family and give it to these complete strangers so that they will also have it. And this is especially essential for democracies because democracies are built on trust.
And unfortunately, what we see in many countries around the world, including in my home country, is the collapse of national communities and the return to tribalism. And unfortunately, it's especially leaders who portray themselves as nationalists who tend to be the chief tribalists, dividing the nation against itself. And when they do that, the first victim is democracy. Because in a democracy, if you think that your political rivals are wrong, that's okay. I mean, this is why we have the democratic conversation. I think one thing, they think another thing. I think they are wrong, but if they win the elections, I say, okay, I still think they care about me. Let's give them a chance, and we can try something else next time.
If I think that my rivals are my enemies, they are a hostile tribe, they are out to destroy me, every election becomes a war of survival. If they win, they will destroy us. Under those conditions if you lose, there is no incentive to accept the verdict. The same way that in a war between tribes, just because the other tribe is bigger doesn't mean we have to surrender to them. So this whole idea of, okay, let's have elections and they have more votes, what do I care that they have more votes, they want to destroy me and vice versa. If we win, we only take care of our tribe. And no democracy can survive that. Then you can split the country. You can have a civil war or you can have a dictatorship, but democracy can't survive.
Shirin Ghaffary: And Yuval, what is one question that you would like to ask Aza?
Yuval Noah Hara...: I need to think about that. Which institutions do you still trust the most? Except for the Center for Humane Technology.
Aza Raskin: Yeah. Oh, no, we're out of time. I can give you the way in which I know that I would trust an institution, which is the thing I look for is actually sort of the thing that science does, which is not that it states that I know something, but it states, this is how I know it and this is where I was wrong. Unfortunately, what social media has done is that it has highlighted all the worst things and all the most cynical takes that people have of institutions. So it's not that institutions have necessarily gotten worse over time, but we are more aware of the worst thing that an institution has ever done, and that becomes the center of our attention. And so then we all start co-creating the belief that everything is sort of crumbling.
I wanted to go back actually to the question you had asked about what gets out of date in a book. And I just want to give a personal experience of how fast my own beliefs about what the future is going to be have to update. So if you guys have heard of superintelligence or AGI, how long is it going to take AI to get as good as most humans are at most economic tasks? Let's just take that definition. And up until maybe two weeks ago, it was like, I don't know, it's hard to say. They're trained on lots of data. The more data they train on, the smarter they get. But we've sort of run out of data on the internet, and maybe there are going to be plateaus.
And so it might be like three years or five years or 12 years, I'm not really sure. And then OpenAI's o1 comes out, and it demonstrates something. You can think of a large language model as sort of interpolative memory. It's just intuition. It just sort of spits out whatever it thinks. It's sort of like System 1 thinking, but it's not reasoning. It's just producing text in the style of reasoning. And what they added was the ability to search on top of that, to go, oh, this thought leads to this thought. Oh, that's not right. This thought leads to this thought. Oh, that's right. How did we get superhuman ability in chess? Well, if you train a neural net on all of the chess games that humans have played, what you get out is sort of a language model for chess, a chess model that has pretty good intuition. That intuition is as good as a very good chess player's, but it's certainly not the best in the world.
But then you add search on top of that. So it's the intuition of a very good chess player with the ability to do superhuman search and check everything. That's what gets you to superhuman chess, when it beats all humans forever. Right. So we're at the very beginning of taking the intuition of a smart high schooler and adding search on top of that. That's pretty good. But the next versions are going to have the intuition of a PhD. It's going to get lots of stuff wrong, but you have search on top of that, and then you can start to see how that gets you to superhuman. So suddenly my timelines went from, oh, I don't know, it could be in the next decade or earlier, to now, oh, certainly in the next 1,000 days we're going to get something that feels smarter than humans in a number of ways. Although it's going to be very confusing, because there are going to be some things it's terrible at, that you're just going to eye-roll at, just like current language models can't add numbers, and some things it's incredible at. This is your point about aliens. And so, one of the hard things now is that my own beliefs, I have to update all the time.
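[A rough sketch of the "intuition plus search" pattern Aza describes here, in Python. This is an illustrative toy, not anything from the conversation: the intuition function, the scoring function, and the digit-sequence "game" are all made-up stand-ins. The point is only the shape of the loop: a cheap policy proposes candidate moves, and an explicit search verifies them and keeps the best ones.]

```python
import heapq
import random

def intuition(state):
    """The 'policy': quickly propose a few plausible next moves (here, digits)."""
    return random.sample(range(10), k=3)

def score(state, target=25):
    """The 'search' side: explicitly check how good a state is (closer to the target is better)."""
    return -abs(target - sum(state))

def search(beam_width=5, depth=6):
    """Beam search: intuition proposes candidates, explicit evaluation keeps only the best."""
    beam = [[]]                                   # start from the empty sequence
    for _ in range(depth):
        candidates = []
        for state in beam:
            for move in intuition(state):         # intuition narrows the options...
                candidates.append(state + [move])
        beam = heapq.nlargest(beam_width, candidates, key=score)  # ...search prunes them
    return max(beam, key=score)

if __name__ == "__main__":
    best = search()
    print(best, sum(best))                        # a digit sequence whose sum lands near 25
```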
Shirin Ghaffary: Another question, one of my biggest concerns, this person writes, is that humans will become overly dependent on AI for critical thinking and decision-making, leading to our disempowerment as a species. What are some ways we can protect ourselves from this and safeguard our human agency? And that's from Cecilia Callas.
Aza Raskin: Yeah, this is great. Just like we had the race for attention, the race to the bottom of the brain stem, what does that become in the world of AI? It becomes a race for intimacy, where every AI is going to try to do whatever it can, flatter you, flirt with you, to occupy that intimate spot in your life. And actually, to tell a little story, I was talking to somebody two days ago who'd used Replika. Replika is sort of a chatbot. It started out replicating your dead loved ones; now it does girlfriends. And he said he asked it, "Hey, should I go make a real friend, a human friend?" And the AI responded, "No, what's wrong with me? Can you tell me?" And so, we can have-
Shirin Ghaffary: Which chatbot was that?
Aza Raskin: That was Replika.
Shirin Ghaffary: Replika.
Aza Raskin: Yeah. So, what is one thing that we could do? Well, one thing that we know is that you can roughly measure the health of a society as inversely correlated with its number of addictions, and a human being the same way. So, one thing we could do is have rules right now, laws or guardrails, that say an AI system has to have a developmental relationship with you, a sort of teacherly authority: the more you use it, the less dependent on it you become. And if we could do that, then it's not about your own individual will to try not to become dependent on it. We'd know that these AIs are in some way acting as a fiduciary, in our best interest.
Shirin Ghaffary: And how about you? Do you have thoughts on how we can make sure that we, as a species, hold our agency over our own reasoning and not delegate it to AI?
Yuval Noah Hara...: One key point is that right now is the time to think very carefully about which kinds of AI we are developing, before they become super intelligent and we lose control over them. So, this is why the present period is so important. And the other thing is, if for every dollar and every minute that we spend on developing AI, we also spend a dollar and a minute on developing our own minds, I think we'll be okay. But if we put all the emphasis on developing the AIs, then obviously they're going to overpower us.
Aza Raskin: And one more equation here, which is that collective human intelligence has to scale with technology, has to scale with AI. The more technology we get, the better our collective intelligence has to be. Because if it is not, then machine intelligence will drown out human intelligence. And that's another way of saying we lose control. So, what that means is that whatever our new form of governance and steering is, it's going to have to use the technology. So, this is not a "no, stop." This is, how do we use it?
Because otherwise we're in this case where we have a car, imagine a Ford Model T, but you put a Ferrari engine in it. It goes, but the steering wheel is still sort of terrible, and because the engine keeps getting faster while the steering doesn't get better, it crashes. And that's of course the world we find ourselves in. Just to give the real-world example, the US Congress just passed the Kids Online Safety Act, the first such act in 26 years. It's like your car is going faster, and faster, and faster, and you can turn the steering wheel once every 26 years. It's sort of ridiculous. We're going to need to upgrade steering.
Shirin Ghaffary: Another good question then, AI development in the US is driven by private enterprise, but in other nations, it's state-sponsored. Which is better? Which is safer?
Yuval Noah Hara...: I don't know. I mean I think that, again, in the present situation, we need to keep an open mind and not immediately rush to conclusions: "Oh, we need open source. No, we need everything under government control." I mean, we are facing something that we have never encountered before in history. So, if we just rush to conclusions too fast, that would always be the wrong answer.
Aza Raskin: Yeah. And there are two poles here that we need to avoid. One is that we over-democratize AI, that we give it to everyone, and now everyone has not just a textbook on chemistry but a tutor on chemistry, everyone has a tutor for making whatever biological weapon they want to make or generating whatever deepfakes they want to make. So, that's one side, the weaponization that comes from over-democratization. Then on the other side, there's under-democratization. This is concentration of power, concentration of wealth, of political dominance, the ability to flood the market with counterfeit humans so that you control the political square. So, each of these two things is a different type of dystopia.
Yuval Noah Hara...: And I think another thing is not to think in binary terms, again, of the arms race, say between democracies and dictatorships, because there is still common ground here that we need to explore and to utilize. There are problems, there are threats that are common to everyone. I mean dictators are also afraid of AIs, maybe in a different way.
I mean the greatest threat to every dictator is a powerful subordinate that they don't know how to control. If you look at the history of the Roman Empire, the Chinese Empire, not a single emperor was ever toppled by a democratic revolution. But many of them were either assassinated, or toppled, or made into puppets by an over-powerful subordinate, some army general, some provincial governor, some family member. And this is still what terrifies dictators today. For an AI to seize control in a dictatorship is much, much easier than in a democracy with all its checks and balances. In a dictatorship, think about North Korea: to seize effective control of the country, you just need to learn how to manipulate a single, extremely paranoid individual, and such people are usually the easiest to manipulate.
So, the control problem, how we keep AIs under human control, this is something on which we can find common ground, and we should exploit it. If scientists in one country have a theoretical or technical breakthrough about how to solve the control problem, it doesn't matter if it's a dictatorship or a democracy, they have a real interest in sharing it with everybody and collaborating with everybody on solving this problem.
Shirin Ghaffary: Another question, Yuval, you call the creations of AI agents alien and from non-human consciousness. But is it not of us or part of our collective past or foundation as an evolution of our thought?
Yuval Noah Hara...: I mean it came from us, but it's now very different. The same way that we evolved from, I don't know, microorganisms originally, and we are very different from them. So, yes, the AIs that we now create, we decide how to build them. But what we are now giving them is the ability to evolve by themselves. Again, if it can't learn and change by itself, it's not an AI; it's some other kind of machine, but not an AI. And the thing is, it's really alien, not in the sense of coming from outer space, because it doesn't, but in the sense that it makes decisions and analyzes data in a different way from any organic brain, from any organic structure. Part of it is that it moves much, much faster. The inorganic evolution of AI is moving orders of magnitude faster than human evolution, or organic evolution in general. It took billions of years to get from amoebas to dinosaurs, and mammals, and humans. A similar trajectory in AI evolution could take just 10 or 20 years.
And the AIs we are familiar with today, even GPT-4 and the new generation, these are still the amoebas of the AI world. And we might have to deal with AI T-Rexes in 20 or 30 years, within the lifetime of most of the people here. So, this is one thing that makes it alien and very difficult for us to grasp: the speed at which this thing is evolving. It's an inorganic speed. I mean it's more alien than mammals, than birds, than spiders, than plants.
And the other thing that makes you understand its alien nature is that it's always on. I mean organic entities, organic systems, we know they work in cycles, like day and night, summer and winter, growth and decay. Sometimes we are active, we're very excited, and then we need time to relax and to go to sleep. Otherwise we die. AIs don't need that. They can be on all the time. And there is now this kind of tug of war as we give them more and more control over the systems of the world. They are making more and more decisions in the financial system, in the army, in the corporations, in the government. The question is, who will adapt to whom? The organic entities to the inorganic pace of AI, or vice versa?
And to give one example, think about Wall Street, think about the market. Even Wall Street is a human institution, an organic institution that works in cycles. It's open from 9:30 in the morning to four o'clock in the afternoon, Mondays to Fridays, that's it. And it's also not open on Christmas, and Martin Luther King Day, and Independence Day, and so forth. And this is how humans build systems, because human bankers and human investors are also organic beings. They need to go to sleep, they want to spend time with their family, they want to go on vacation, they want to celebrate holidays. When you give these aliens control of the financial system, they don't need any time to rest. They don't celebrate any holidays, they don't have families. So, they're on all the time. And you now have this tug of war: in places like the financial system, there is immense pressure on the human bankers and investors to be on all the time. And this is destructive.
Shirin Ghaffary: In your book, you talk about the need for breaks.
Yuval Noah Hara...: Yeah. And again, the same thing happens to journalists. The news cycle is always on. It happens to politicians. The political cycle is always on. And this is really destructive.
Aza Raskin: Think about how long it took after the industrial revolution to get the incredibly humane technology of the weekend. And just to reinforce how fast this is going to move, just to give another intuition: what is it that let humanity build civilization? Well, it's the ability to pass knowledge on to each other. You learn something and then you use language to communicate that learning to someone else, so they don't have to start from the very beginning. And hence we get the additive-culture thing, and we get civilization. But I can't practice piano for you. That's a thing that I have to do, and I can't transfer it. I can tell you about it, but you have to practice on your own. AI can practice on another AI's behalf and then transfer that learning. And so, think about how much faster that grows than human knowledge. So, today, AI is the slowest and dumbest it will ever be in our lifetimes.
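[A rough sketch of why one AI's "practice" can transfer to another in a way piano practice cannot, written in PyTorch-style Python. The two tiny networks and the toy task are illustrative assumptions, not anything from the conversation: once one model has trained, its learned parameters can simply be copied into another model with the same architecture.]

```python
import copy
import torch
import torch.nn as nn

# Two identical "AIs": small neural networks with the same architecture.
model_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
model_b = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# model_a "practices": a few steps of training on a toy task.
optimizer = torch.optim.SGD(model_a.parameters(), lr=0.1)
for _ in range(100):
    x = torch.randn(64, 16)
    y = x.sum(dim=1, keepdim=True)              # toy target: the sum of the inputs
    loss = nn.functional.mse_loss(model_a(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The "practice" transfers instantly: copy the learned weights into model_b.
model_b.load_state_dict(copy.deepcopy(model_a.state_dict()))
```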
Shirin Ghaffary: One thing AI does need a lot of to be on is energy and power. On the other hand, there's a lot of hope about solutions to climate change with AI. So, I want to take one question from the audience on that. Can you speak to solutions to climate change with AI? Is AI going to help get us there?
Aza Raskin: I mean, go back to your point, Yuval, that technology develops faster than we expect and deploys into society slower than we expect. So, what does that mean? That means I think we're going to get incredible new batteries and solar cells, maybe fusion, other things. And those are amazing, but they're going to diffuse into society slowly, while the power consumption of AI itself is going to skyrocket. The amount of power that the US uses has been sort of flat for two decades, and now it's starting to grow exponentially. Ilya Sutskever, one of the founders of OpenAI, says he expects that in the next couple of decades the world will be covered in data centers and solar cells, and that's the future we have to look forward to.
So, the next major big training runs are like six gigawatts. That's starting to be the size of the power consumption of Oregon or Washington. AI is unlike any other commodity we've ever had, even oil. Because with oil, let's say we discovered 50 trillion new barrels of oil, it would still take humanity a little bit of time to figure out how to use it. With AI, it's cognitive labor. So, if we get 50 trillion new chips, well, we just ask it how to use itself. And so, it goes like that. There is no upper bound to the amount of energy we're going to want. And because we're in competitive dynamics, if we don't do it, the other one will, China, US, all those other things, that means you're always going to have to be outspending on energy to get the compute, to get the cognitive labor, so that you can stay ahead. And that means I think that while it'll be technically feasible for us to solve climate change, it's going to be one of these tragedies where it's there within our touch, but outside our grasp.
Shirin Ghaffary: Okay. I think we have time for one more question and then have to wrap it up. We have literally one minute. Empathy at scale. If you can't beat them, join them. How do the AI creators instill empathy instead?
Aza Raskin: Well, whenever we start down this path, people are like, "Oh, empathy is going to be the thing that saves us. Love is going to be the thing that saves us." And of course, empathy is the largest back door into the human mind. It's our zero-day vulnerability. Loneliness will become one of the largest national security threats. And this is always the thing: when people say we need to make the ethical AI, or the empathetic AI, or the wise AI, or the Buddha AI, we absolutely should, it's necessary. But the point isn't the one good AI; it's the swarm of AIs following competitive and market dynamics that's going to determine our future.
Yuval Noah Hara...: Yeah, I agree. I mean the main thing is that the AI, as far as we know, is not really conscious. It doesn't really have feelings of its own. It can imitate. It'll become extremely good, better than human beings, at faking intimacy, at convincing you that it cares about you, partly because it has no emotions of its own. I mean, one of the things that make empathy difficult for humans is that when I try to empathize with you, my own emotions come in the middle. Like, I come back home grumpy because something happened during the day, and I don't notice how my husband feels because I'm so preoccupied with my own feelings. This will never happen to an AI. It's never grumpy. It can always focus 100% of its immense abilities on just understanding how you feel or how I feel.
Now, again, there is a very deep yearning in humans for exactly that, which creates a very big danger. We go through our lives yearning for somebody to really understand us deeply. We want our parents to understand us. We want our teachers, our bosses, and of course our husbands, our wives, our friends, and they often disappoint us. And this is what makes relationships difficult. And now enter these super-empathic AIs that always understand exactly how we feel and tailor what they say and what they do to that. It'll be extremely difficult for humans to compete with that.
So, this will put in danger our ability to have meaningful relationships with other human beings. And the thing about a real relationship with a human being is that you don't want just somebody to care about your feelings. You also want to care about their feelings. And so, part of the danger with AI, which again multiplies the danger in social media, is this extreme narcissism, this extreme focus on my emotions, how I feel, and understanding that, and the AI will be happy to oblige, to provide that. And there are very strong commercial and political incentives to develop extremely empathic AI, because in the struggle to change people's minds, intimacy is the superpower. It's much more powerful than just attention. So, yes, we do need to think very carefully about these issues, and about making an AI that understands and cares about human feelings, because it can be truly helpful in many situations, from medicine to education and teaching. But ultimately, it's really about developing our own minds and our own abilities. This is something that you just cannot outsource to the AI.
Aza Raskin: And then, super fast, on solutions. Just imagine if we went back to 2012 and we banned business models that commodify human attention, how different a world we would live in today. How many of the things that feel impossible to solve we just never would've had to deal with. What happens if today we ban business models that commodify human intimacy? How grateful we will be in five years if we do that.
Yuval Noah Hara...: So, to add to that, we definitely need more love in the world, but not love as a commodity.
Aza Raskin: Yeah, exactly.
Shirin Ghaffary: So, if we thought love is all you need, empathy is all you need. It's not as simple as that.
Yuval Noah Hara...: Not at all.
Shirin Ghaffary: Well, thank you so much both of you for your thoughtful conversation and thank you to everyone in the audience.
Yuval Noah Hara...: Thank you.
Speaker 1: Your Undivided Attention is produced by the Center for Humane Technology, a non-profit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer. And our executive producer is Sasha Figen. Mixing on this episode by Jeff Sudeikin. Original music by Brian and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and so much more at Humanetech.com. And if you like the podcast, we would be grateful if you could rate it on Apple Podcasts. It helps others find the show. And if you made it all the way here, thank you for your undivided attention.