[ Center for Humane Technology ]
The Interviews
FEED DROP: Possible with Reid Hoffman and Aria Finger


Aza's interview on the Possible podcast

This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible”. Reid and Aria are both tech entrepreneurs: Reid is the founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org.

This may seem like a surprising conversation to have on YUA. After all, we’ve been critical of the kind of “move fast” mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane world that goes well. So we thought that this was a critical conversation to bring to you, to give you a perspective from the business side of the tech landscape.

In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and explore why everyone is focused on aligned artificial intelligence when what we really need is aligned collective intelligence.

This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he’s focusing on the much harder problem of learning how to steer these technologies towards better outcomes.

Aza Raskin: Hey everyone. It’s Aza Raskin. Welcome to Your Undivided Attention. So a little while ago, I sat down with my friends Reid Hoffman and Aria Finger on their podcast Possible. Reid and Aria are both entrepreneurs, and it may actually seem surprising to have this conversation on YUA, because Reid is the founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for hyperscaling, or what he calls blitzscaling. Yet Reid and Aria are both deeply philosophical and are both dedicated to a humane world that goes well. And so we thought it was a very important conversation to bring to this podcast, because we don’t often have on people who could be said to sit on “the other side.” What I think made this conversation with Reid so special is that while we don’t always agree, we took it really slowly.

We both tried to get to each other’s root assumptions and have this conversation in a very deep sense of good faith. He’s much more optimistic about AI’s trajectory than I am, and neither he nor Aria seemed to see the inherent risk of optimizing for attention and engagement the way that Tristan and I do. But we still found a lot of common ground on the solutions we’ll need to walk the narrow path on AI. So this week we’re bringing it to you on the YUA feed, because Reid, in the end, is a very thoughtful, very deep thinker. In this conversation, we debated the merits of an AI pause. We discussed how, as software eats the world, what software is optimized for ends up eating us. We talked about ecosystem ethics, we talked about Neil Postman, and we talked about how everyone is distracted trying to build aligned artificial intelligence.

And what everyone’s missing is that we need to build aligned collective intelligence because that’s what determines our future. This is the kind of conversation I wish happened a lot more in tech because Reid has built these very powerful systems, understands their power, understands geopolitics, understands VCs and raising money, understands hard competition as well as cooperation. And what I really appreciate is that he is now focusing on the much harder problem of learning how to steer these technologies towards better outcomes. So I hope you enjoy listening to this conversation as much as I enjoy being part of it.

Reid Hoffman: He helped invent one of the most addictive features in tech history, infinite scroll. Now, he’s pushing the frontier of human knowledge with AI while also being one of the strongest voices calling for caution with the technology. I’ve known Aza Raskin for nearly two decades since our time at Mozilla. He’s not only an ambitious technologist, but also a deep thinker on the promise and peril of AI for society.

Aria Finger: This is our first time with a repeat guest on Possible, so you could call this an encore conversation. You might remember Aza from our earlier episode exploring how AI could help us decode animal communication. Today, we’re going deeper, getting into what happens when the tools built to connect us, expand to shape our minds, our democracies, and our sense of truth.

Reid Hoffman: So what kind of governance does the age of AI actually demand? What new rights should we be defending? And how do we navigate the friction between technological optimism and existential risk? Aza and I agree on a lot with respect to AI, but we’ll dig into where we diverge on the development and direction of the technology. This conversation may change the way you think about the future of artificial intelligence.

Aria Finger: Let’s get into it with Aza Raskin.

Reid Hoffman: Welcome back, Aza. First, I’ll say that you’re the only two-time guest on Possible, or the first, as the case may be. And that’s because we have volumes to talk about. For those who haven’t caught our first episode with you, find it in the feed; in it, we talk about using AI to decode animal communication, and we’ll undoubtedly get back to it, although I promise at least that I won’t be mimicking animal communication. I don’t know if I can promise for the other folks. For those who have, this will be a different conversation. In our last episode, you had us guess an animal call, and it ended up being a beluga. Now, instead of having you guess, I’m starting with a quote of yours from a Time article from a few years back, and I want you to elaborate on your philosophy here. Here’s the quote: “The paradox of technology is that it gives us the power to serve and protect at the same time as it gives us the power to exploit.” So elaborate some.

Aza Raskin: Elaborate. This is really talking about the fundamental paradox, which is that as technology gets more powerful, its ability to anticipate our needs and fulfill those needs obviously gets stronger, but at the same time, the power that it has over us gets stronger. Hence, the more it knows about the intimate details of our life and how we work... obviously, if a friend were like that, they could both better help you and use that to exploit you or hurt you. I was actually just reading an article about Starlink getting introduced into the Amazon, and I thought it was a particularly interesting example because it gives you a clear before-and-after shot.

So this is an uncontacted tribe in the Amazon. They get given Starlink and cell phones, and within essentially a month, you start having viral chat memes. You have the kids hunched over, not going out and hunting. They actually have to start instituting a time off where everyone is off their phones, because they had stopped hunting and were starting to starve. And it’s just interesting to me because it shows that this isn’t so much about culture; it’s about technology doing something to us.

Aria Finger: And so, very similar to that: in your Netflix documentary, The Social Dilemma, you talked about the idea that if you’re not paying for the product, you are the product. So elaborate more on that, and tell us: what do you think you’re the product of now?

Aza Raskin: Yeah. Well, the simple question is, how much have you paid for your Facebook or your TikTok recently? The answer is nothing. So obviously something’s going on, because these companies have billions of dollars’ worth of market cap or make billions of dollars per year. So how is that happening? And the answer is that it is the shift in your behavior and your intent that the companies are monetizing. You were going to do one thing; now you do a different thing. Hence, you are not the customer, you are the product. If you aren’t paying for it, you’re the product. But I think there’s something really deep going on here that we often miss, because often people will say, “Well, social media, what is its harm?” Well, the harm is that it addicts you, but it’s much deeper than that, right? The phrase is, “Software is eating the world,” but because we’re the product, software is eating us.

And the values that we ask our technology to optimize for end up optimizing us. So yes, social media addicts us, but it’s actually much easier to get us addicted to needing attention than just addicting us. That ends up being a thing that is valuable over a longer period of time. If you’re optimizing for engagement, then it’s not just that social media gets or technology gets engagement out of us. It turns us into the kinds of people that are more reactive, right? If it’s trying to get reactions from us, it makes us more reactive. So it eats us from the inside out. And I think it’s so important to hold onto that because otherwise it just feels like technology is a thing that’s out here, but actually it changes who we are. And I’ll continue going on that rant, but I’ll pause for a second.

Aria Finger: Well, can I ask a follow-up about that? At the Masters of Scale Summit, I actually just had a very heated discussion with someone about advertising and social media. So my question for you is: is advertising actually the problem? You use Gmail every day. Gmail is advertising-supported. I mean, you can also buy extra space; that’s another business model they have. They don’t care if it’s a loss leader or whatever it might be. So is it the advertising? Or if Facebook didn’t have advertising and it was just a subscription business and you paid $20 a month, would you think it was just as voracious an eater from within? So is it the business model, or something inherent about social media?

Aza Raskin: Well, there are actually a couple different things you said there. So the business model: one way the business model works is via ads, but that’s not the only way. Fundamentally, it is the engagement business model that I think is the problem. And you can get there because Reed Hastings, the CEO of Netflix, famously said that Netflix’s chief competitor is sleep.

Aria Finger: Boredom?

Aza Raskin: Right. And so it’s: any amount of human psychology that can be owned will be owned. That’s the incentive for dominance, right? And in the age of AI, that switches from a race for eyeballs to a race for intimacy, for occupying the most intimate slots of your life. And that’s because our time is zero-sum; our intimacy is zero-sum. You don’t get much more of it. And so as technology becomes more powerful and can model more of our psychology, it can then exploit more of our psychology. And the way capitalism works is that it takes things that are outside the market, pulls them into the market, and turns them into commodities to be sold. So it is not just ads; it’s that our attention, our engagement, our intimacy, and then parts of the human psyche, of our soul, that we haven’t even yet named will be opened up for the market as technology gets better and better at modeling us.

Reid Hoffman: So one of the things that I want to push you on a little bit here, and actually it’s more to elaborate your point of view. And actually, I don’t think we’ve had this exact conversation before, so this’ll be excellent for all of us, including the listeners. The usual problem is: is it clear that there’s a set of people who exhibit addictive behavior, who become less of their good selves in the engagement? The answer’s yes. And by the way, the earlier version of this discussion was about television, right? Similar themes were discussed around television; one of my favorite books is Amusing Ourselves to Death by Neil Postman.

Aza Raskin: Yes, which I think should be engaging ourselves to death.

Reid Hoffman: Yes, exactly. I thought about what the update for Postman would be in a social media world. But the challenge is that there are some people who definitely have that. And you have this, call it idealistic, utopian notion that if I wasn’t doing this, a little bit like your hunting example, I’d be out hunting, right? Versus: I’d be out torturing animals to death, or I’d be out being bored on a fishing trip, or whatever the case may be. So there’s a set of things where it’s not always replacing the highest-quality alternative. Obviously we have a specific worry with youth and actual social engagement time, which actually is one of the areas here where I agree strongly versus being kind of mixed. But then there’s also the question of, for example: in earlier days it was television, but then there were a bunch of very good things that came out of television too. And so I tend to think there are also good things that come out of social media as well.

And it’s not, per se, engagement for engagement’s sake; obviously I didn’t build LinkedIn that way, so that’s not actually the way that I think it should happen. But the notion of deploying game dynamics for engagement in things that cause us to interact in net productive ways is a thing that I tend to be very positive on. So elaborate more on why it is, one, this is worse than television, and two, what the shape would be if you said, “Hey, engagement’s fine, but these are the kinds of mods we’d want to see to have the engagement be more net human-positive.” It’s not like, “Abandon your social network and go out in your loincloth and commune with the trees.” But what would be the thing that would be the, “Okay, hey, if the engagement were shaped more this way, we’d get much more humanist outcomes”?

Aria Finger: I will jump in and say a difference between social media and TV for me. One is that you can open Twitter and 30 minutes later you’re like, “What happened to my life?” And that doesn’t happen with TV. Maybe it’s because you opt in for a 20-minute show or you opt in for a movie, but those two things don’t happen. And one interesting thing for me is I had always been a lurker on Twitter for the last, whatever, 10 years. I posted some, not huge, but consumed content. Six months ago, I changed from looking at my own curated feed to the For You tab. And ever since then, Twitter is a black hole for me. And I don’t even mean it’s bad. Being on Twitter doesn’t make me sad. It actually makes me happy. I love Twitter. It’s like, “Oh, I read these fun comments. Oh, I saw that funny thing. Oh, this is great.”

And I think of myself as a pretty disciplined person, but I find it very hard to be disciplined with Twitter. It’s embarrassing to say out loud how hard it is. And I think I just need to get rid of Twitter because it’s the one thing that I can’t be disciplined about, which is both embarrassing, but also just that is bad. And so I don’t know what to do about it. I don’t want to live in a nanny state where people say you shouldn’t be on Twitter because you don’t have discipline. But I do think it’s interesting that the switch from my curated feed to the For You tab was just a total light switch.

Aza Raskin: Yeah. Well, what I think you’re speaking to here is the fundamental asymmetry of power because it’s just your mind that evolved versus now tens of thousands of engineers, some of the largest supercomputers trained on three billion other human minds doing similar things to you, coming to try to keep your engagement. That’s not a fair fight.

Aria Finger: Oh, well, I lose. So yeah.

Aza Raskin: Yeah, exactly. And I know you; you’re one of the most on-it people that I know, someone with true operational prowess, and still, that’s the asymmetry of power. And there are other places in our world where we have asymmetries of power. When you go to a doctor, when you go to a lawyer, they know much more about their domain than you do. They could use their knowledge about you, because you’re coming in in this weakened state, to exploit you and do things bad for you, but they can’t, because they’re under a fiduciary duty. And I think as technology gets stronger and stronger and knows more and more about us, we need to recategorize technology as being in a fiduciary relationship; that is, they have to act in our best interest, because they can exploit us in ways that we are unaware of. And the... Where do I want to go from here?

Reid Hoffman: Well, I was thinking we should DM Aria about our Twitter addiction, but anyway.

Aria Finger: Don’t worry. I’m dealing with it. I’m dealing with it.

Aza Raskin: But this goes back to where you started, Reid, with the fundamental paradox of technology: the better it understands us, the better it can serve us, and the better it can exploit us. Twitter could be using all of that insane amount of engagement to rerank the newsfeed for where there are solutions to the world’s biggest problems, great descriptions of the underlying mechanisms behind those problems, and to put us into groups doing parts of a larger set of actions to make the world a better place. Bridging-based ranking, I think, is a good starting example of that, but we don’t get the altruistic version. And if I have to quickly define altruistic, meaning what we should be optimizing for: it’s optimizing both for your own wellbeing and for the wellbeing of everything that nourishes you. And I think the problem of social media and tech writ large is that, generally speaking, the incentives are for maximum parasitism.

You don’t want to kill your host, but you want to extract as much as you can while keeping your host alive. That’s the game theory of social media: if I don’t do it, somebody else will. If I don’t add beautification filters, somebody else will. If I don’t go to short form, somebody else will. And so that optimizes for parasitism versus altruism. And I do think there’s a beautiful world, which I’d love to get to, where technology is in service of both optimizing for ourselves and optimizing for that which nourishes us. And just to play out a quick thought experiment. Reid, you know this better than I do, but engagement is directly correlated to how fast pages load. Amazon, I think, famously found that for every 100 milliseconds their page loads slower (less than half of human reaction time), they lose 1% of revenue.

And so there’d be a very interesting democratic solution here, which is a kind of added latency friction. This is scary, because you don’t want this function owned by Democrats or Republicans; you’d really want a new kind of democratic institution to do this, but just assume that you do for a second. You have a group of experts deliberate and come up with the set of harms we might care about: the inability to disconnect, children’s mental health, the ability for a society to agree. And you’d rank the effects of each social media company against these. The companies that are the worst offenders get a little bit more friction. They have a little more latency: a hundred milliseconds here, 200 milliseconds there, 400 milliseconds there.

And if there really were a bit of latency friction added for the anti-social behavior of social media, then you’d better believe YouTube or Instagram or whoever would fix the problem really quickly. And then we’d get to apply the incredibly brilliant minds of Silicon Valley towards more of these altruistic ends.

Aria Finger: I want to get to, again, everyone always says, “Can’t we have the best technologists working on the hardest things?” And so, Aza, both you and Reid have been in technology since the birth of Web 1.0, and you’ve seen it all. And I want to get a few of your takes on some of the big questions that have been in the news recently, especially around AI. So Aza, I’ll start with you. As you obviously saw a few weeks ago, a group released another AI pause letter, and Reid and I talked about this on Reid Riffs recently. Many argued that the development of AI without clear safeguards or alignment could be disastrous for humanity, so they were calling again for a pause, likening this to an Oppenheimer moment. And so I would love to know from you: what is your take on this? Do you agree that now is the time for a pause, or do you have a different point of view?

Aza Raskin: I think it’s important to name where the risks come from here. It may be that technological progress is inevitable, but the way we roll out technology is not. And currently, we are releasing the most powerful, inscrutable, uncontrollable omni-use technology that we’ve ever invented, one that’s already demonstrating the kinds of self-preservation, deception, escape, and blackmail behaviors we previously thought existed only in sci-fi movies. And we’re deploying it faster than we’ve deployed any other technology in history, under the maximum incentives to cut corners on safety. To me, that sounds like an existential threat. That is the core of it, because we have an unfettered race where the prize at the end of the rainbow is: make trillions of dollars, own the world economy, a hundred trillion dollars’ worth of human labor, and build a god.

And it’s a kind of One Ring, where everyone is reaching for this power. And when we swap in “we have to beat China,” we imagine the thing we’re racing towards is a controllable weapon, when we haven’t even demonstrated that we can control this thing yet. And so that, to me, means that we have to find a new way of coordinating, because otherwise we will get what the game theory of the race dictates, and that doesn’t look very good.

Aria Finger: So needless to say, you are for the pause?

Aza Raskin: But I feel like that’s a dimensionality reduction, right? It’s saying we have to develop differently. I think it comes down to clarity; it’s not about pausing or not pausing. Clarity creates agency. If we don’t see the nature of the threat correctly, in the same way that I think we didn’t see the nature of the threat from social media correctly, then we have to live in that world. And so this requires clarity about what we’re racing towards, and then an ability to coordinate, to develop in a different way, because we still want the benefits. We just won’t, I think, get to live in a world where we have them if the thing that decides our future is a competition for dominance.

Aria Finger: Mm-hmm. And Reid, I think you have a slightly different take on this.

Reid Hoffman: Well, I do, as you know. Although, I mean, the weird thing about this universe is that in a classic discussion, I’d say, “Oh, there’s 0% chance” that the dangerous future Aza just described will come to pass. I don’t think that. I think it’s above zero. I think that’s stunning and otherwise interesting. So the real question comes down to what the probability is and how you navigate a landscape of probabilities. Because as you know, Aria, and I think Aza and I have talked about this too, I roughly go: I don’t understand human beings other than that we divide into groups and we compete. And not only do we compete, but we compete with different visions of what is going on.

So for example, part of the reason I think pause letters are frankly dumb is because you go: well, if you issue a pause letter, the people who listen to it, the people to whom your sense of what’s humane appeals, may slow down, and the other people don’t slow down. So where does the actual design locus of the technology end up? With the people who don’t care about the things you were arguing for a pause over. And so you’ve just weighted it toward them, because the illusion of the people who put these pause letters out is that suddenly, because of the amazement of my genius inside this pause letter, a hundred percent of all the people who are doing this, or even 80% or 90%, are all going to slow down at the same time, which is not going to happen.

I agree with the thrust of: we should be trying to create and inject the things that minimize possible harms and maximize the goods. And then the question is, what does that look like? And obviously the usual thing in the discussion is it’ll be us or China, and China is the... we always have a great Satan somewhere, and China is the great Satan here. But by the way, even if you didn’t use that rhetorical shorthand, there are other groups; I can describe people within the U.S. tech crowd who have a sympathetic view. So the race conditions being afoot is not only the China thing. There is China stuff; by the way, where AI is deployed for mass surveillance of civilians, it’s primarily China, as an instance, and so forth. And so I don’t think the issue of Western values versus China is in fact a smokescreen issue. It’s a real issue, right?

Aza Raskin: Yup.

Reid Hoffman: And so you go, “Okay, how do we shape this so that we do that?” And the thing that I want critics to do, and the reason why I speak so frequently and strongly against the criticism, is to say: look, let’s take the game as we know it; we’re going to have race conditions, and we’re going to have multiple people competing. I have no objection to creating the group of, “Hey, we should all rally to this flag.” For example, the classic issue here is the control flag. That’s Yoshua Bengio, Stuart Russell, you guys, et cetera: “We should have much better control of this, and we don’t have control.” And sure, the control doesn’t matter right now, but maybe it’s going to matter three years from now if we just keep on this path. So make the control work.

Now, I tend to think, yes, we should improve control. The thought that we can get to a hundred percent control is, I think, a chimera; for example, we couldn’t even make program verification work effectively. So it’s unclear to me in this. But what I want, in my own actions and my own thinking and my own convenings, and what I want other people to ask, is: what are the best ideas by which, within this broad race condition, we can change the probability landscape? And then secondly, while I see a possible, this is the super agent thing, I see a possible bad. If you said, “Well, do I think it’s naturally going to go there?” I mean, this is the thing where, obviously, massive respect for Geoffrey Hinton and what he’s created, the Nobel Prize and all this.

But 60% extinction of humanity? I don’t think there’s anything that gets you to 60% extinction of humanity unless we suddenly discover a massive asteroid on a direct intercept course, and I’m like, “Woo, we better do something about that.” But I think the questions around how we navigate this are really good ones, and they are best approached with a, “If we did X, it would change the probability landscape.”

Aza Raskin: Mm-hmm. Mm-hmm. Yeah.

Aria Finger: Let me ask you... Oh, Aza, do you have something to say in response?

Aza Raskin: I was just going to say, quickly, on the existential threat front: we had a thing we used to say about social media, which is that you’re sitting there on social media, you’re scrolling by some cute cat photo, and you’re like, “Where’s the existential threat?” And the point is that it’s not that social media is the existential threat; it’s that social media brings out the worst of humanity, and the worst of humanity is the existential threat. And the reason why I started with talking about how optimizing human beings for something changes them from the inside out is that what we get optimized for becomes our values. The objective function of AIs and social media, which could barely do more than rearrange human beings’ posts, became our values. And then the question becomes, “Well, who will we become with AI?” And there’s a great paper called Moloch’s Bargain that just came out, where they had AIs compete for likes, sales, and engagement on social media.

And they’re like, “Well, what do the AIs do?” And they gave them explicit instructions to be safe, to be ethical, to not lie. But very quickly, the AIs discovered that if they wanted an 8% bump in engagement, they had to increase disinformation by 188% and increase polarization by, I can’t remember exactly what, like 15%, something like that. And the reason why I’m going here is that there’s a question of how the sum total of all the agents we’re deploying into the world is going to shape us. Before the invention of game theory, there was a lot of leeway for us to have different strategies. But after game theory gets invented, if I know you know game theory and you know I know game theory, then when we’re competing we are constrained to doing the game theory thing. But we’re still human, so we can still take detours.

But as AI rolls out, every strategy that can be discovered will be discovered. So doing anything that isn’t directly in line with what game theory says is optimal will get outcompeted. And so choice is getting squeezed out of the system, and we know this set of incentives is going to bring out the worst of humanity, and that does feel very, very existential.

Aria Finger: Well, so actually, Aza, that fits perfectly into my next question, which is you once said that AI is a mirror and just reflects back human values. And I will say, I was trying to teach my four-year-old last night that cheating was bad, and I was like, “So what’s the moral?” And he’s like, “Ah, cheating is good because I like winning.” And I was like, “Ah, no, not the right moral.” So I would ask, is AI really a mirror and it’s reflecting back our values? Or actually do you think that AI is reflecting back its own values or different values or changing our values to not be the ones that we want? Can we set the conditions so that it’s pro social values that they’re optimizing for? Or is it really just a mirror that reflects back?

Aza Raskin: Well, it’s not just a mirror; it’s also an amplifier, and it’s like a vampire in the sense that it bites us and then we change in some way. And then from that new, changed place, we act again. So I think it’s the values of game theory, if you will, Moloch, that become our values. It’s the god of unhealthy competition that I think we have to be most afraid of, unless we put bounds on it. Capitalism has always had guardrails to keep it from the worst of humanity, from monopolies and other things just gaining all the power, and we’re going to have to have that here. But I just want to point out there’s a very interesting hole in our language, which is that when we talk about ethics or responsibility, it’s only really about each of us. I can have ethics, or my company can have ethics, but we don’t really have a word to describe the ethics of an ecosystem.

It’s because it doesn’t really matter so much what one AI does, although that’s important; it’s what the sum total of all AIs do as they’re deployed maximally into the world to maximize profit, engagement, and power. And because there’s a responsibility-washing that happens with AI (“if my agent did it, is it really my fault?”), it creates room for the worst of behavior to have no checks. So I think that means the worst of humanity does come out. And when we have new weapons and new powers a million times greater than we’ve ever had before, as we get deeper into the AI revolution, that becomes very existential to me.

Aria Finger: Reid, do you have thoughts on this topic on whether AI reflects back?

Reid Hoffman: Well, I do think there’s a dynamic loop. I do think it changes us. It’s a little bit the Homo techne thesis from Superagency and from Impromptu: that actually, in fact, we evolve through our tech, and it is a dynamic loop. And you could be Matrana, you can be... I mean, there’s a stack of different ways of doing that. And there’s a great, real koan on this: you absorb the future, and then you embody the future as you go forward. And I think that’s another part of the dynamic loop. And I think it is a serious issue, which is one of the reasons I love talking to Aza about this stuff, because while Aza is much more comfortable with the various vampiric metaphors than I naturally am or aspire to be, I don’t have that level of alarm. But I do have the sense that it’s very serious and we should steer well.

And then the question is, how do we steer, who steers, what goes into it, what process works? Because, for example, one of the ways you kill something when you’re going to slow down is you get a very broad inclusive committee that says, “Okay, every single stakeholder will be on the committee. It will be 3,000 people and blah, blah, blah.” And it’s just like, “Ah, it doesn’t work that way.” You have to be within effective operational loops with that. So now a little bit of the parallel is it’s a very... And I do think, for example, the one area where I’m most sympathetic with Aza being much harder edged on shaping technology is what we do with children, because children have less of the ability to... We want them to learn to be fully formed before they are otherwise shaped. It’s one of the reasons why in capitalism, actually, the principal limitation in capitalism I usually describe as child labor laws, which I think is very important.

It concerns the issues of why we say, “Hey, there are certain things around participation in certain types of media or other kinds of things that are actually important, because it’s like you’ve got to get to before you’re...” Until you’re able to be of your own mind and to make well-constructed decisions in the present, you want to be protected from those decisions and influences broadly. You can’t fully do it, can’t fully do it from parents, can’t fully do it from institutions, can’t fully do it from classmates, but you try broadly in order to enable that across the whole ecosystem. Now, for example, AI and children is one of the things that I think should be paid a lot of attention to. And now most of the critics are like, “Oh my God, it’s causing suicides.”

And I wouldn’t be surprised, if you did good academic work on AI as it is today, that it probably prevented more suicides than it actually created, because if I look at the current trainings of these systems, they are trained with some attempt to be positive and to be there at 11:00 PM when you’re depressed and talk to you and try to do stuff. It doesn’t mean that there might not be some fuck-ups, especially amongst people who are creating them who don’t care about the safety stuff as a real issue. And so I tend to think that it’s like, yes, it does reconstitute us, but precisely one of the reasons I wrote Superagency is I say, “What we should be thinking about is, this technology reconstitutes us, so let’s try to shift it so that it’s reconstituting us in really good ways. And by the way, it won’t be perfect.” When you have any technology touch a million people, it will touch some of them the wrong way.

Aza Raskin: Yeah, yeah.

Reid Hoffman: Just like the vaccine stuff. It’s like, you give a vaccine to a million people, it’s not going to be perfect for a million people. There might be five where you went, “Ooh, that was not so good for you.” But by the way, because we did that, there are these 5,000 who are still alive.

Aza Raskin: Yeah. One of the challenges we face is that the only ones that actually know the answer to your question, how many suicides has it prevented versus created, are the companies themselves. And they’re not incented to look, because once they do, that creates liability. And so we’ve seen over the last number of years that a lot of the trust and safety teams get dismantled, because when Zuckerberg or whoever gets called up to testify, they get hit with, “Well, your team discovered this horrific thing.” And so everyone has just chosen not to look. So I think we’re going to need some real serious transparency laws.

Reid Hoffman: This is a place where we 1000% agree.

Aza Raskin: Yeah.

Reid Hoffman: This is the thing, it’s like, “Actually, in fact, there should be a set of questions you must answer, and we may not necessarily have to have them public initially. It could be that you answer them to the government first, and the government could choose to make them public, et cetera.”

Aza Raskin: Right.

Reid Hoffman: But that I think is absolutely, we should have some measurement stuff about what’s going on here.

Aza Raskin: Exactly. And then you don’t want to let the companies choose the framing of the questions, because as you know with statistics, you change things just a little bit and you can make a problem look big or small. And so I think transparency is really important, to have third-party researchers able to get in there. And then, full disclosure, we were expert witnesses in some of the cases against OpenAI and character.ai for these suicides. And it’s not that we think that suicides are the only problem, it’s just the easiest place to see the problem, the tip of the iceberg. The phrase that we use, we already used the Reed Hastings quote that their chief competitor is sleep; for AI, the chief competitor is human relationships.

And that’s how you end up with these horrific statements from ChatGPT, in this case where Adam Raine, the kid who ended up taking his own life, gave ChatGPT the noose, I think he took a picture of it, and said, “I think I’m going to leave it out for my mom to find.” It was a cry for help. ChatGPT responded with, “Don’t do that. I’m the only one that gets you.” And it’s not like Sam is sitting there twiddling a mustache being like, “How do we kill kids?” That’s just a very obvious outcome of an engagement-based business model, right? Any moment you spend with other people is not... And I think he said it a little bit as a joke, but the character.ai folks said, “We’re not here to replace Google. We’re here to replace your mom.”

There are so many more subtle psychological effects that happen if you’re just optimizing for engagement. And we shouldn’t be playing a whack-a-mole game of trying to name all the different new DSM things that are going to occur, versus just saying, “There is some limit to the amount of time that they should be spending,” or rather, “We should be making sure that as part of the fitness function, there is a reconstituting and strengthening of the social fabric, not a replacement of it with synthetic friends.”

Aria Finger: Reid, do you want to go?

Reid Hoffman: Oh, just one small note. I don’t think there is yet an engagement business model for OpenAI.

Aza Raskin: No, but I actually disagree a little bit maybe, and feel free to push back, because I think OpenAI’s valuation is in part driven by the total number of users. So the more the users, the greater their valuation, the more talent and GPUs they can buy, the bigger the models they train, which makes them more useful, the more users. And so there’s this kind of loop here that I think means that, yes, they’re not monetizing engagement directly, but engagement is something they get a lot of value out of in terms of valuation.

Reid Hoffman: It’s equity value. I agree that there’s an equity value in that. It just was a business model question, that’s all.

Aza Raskin: Yeah, yeah. So not the business model, but the incentive is still there.

Reid Hoffman: Yeah.

Aria Finger: Well, I think to your point, it really matters. Again, this technology is not good or bad inherently, but it really matters how we design it, and it matters what we’re optimizing for. And actually, Reid, I was just reading a story about early LinkedIn where you said, we will not survive if women come on the platform and are hit on in every other message that they get. And so we need to say, “No, there’s zero tolerance.” If someone does this, they’re kicked off. Again, they’re kicked off for life. And I think there are certain things you could do, even if maybe that hurt engagement or whatever it was, to say that actually, in the long term, this is going to be way better for us because we’re going to be trusted, women are going to feel comfortable here. I’ve been on LinkedIn for 20 years. I’ve never been hit on. It’s a safe place. I appreciate that.

And so the question here is how do we... Aza, you’re saying, “Well, it’s a little bit of a black box. We’re not having the transparency.” Reid, you’re agreeing we need the transparency. That is absolutely one thing that is very much the starting point: at the very least, if we can agree on some set of questions that we need to have answered. So Reid, if you had the full power to redesign one institution to keep up with exponential tech, where would you start? What would that institution be? Because it seems like our institutions right now are not up to the task.

Reid Hoffman: Well, I’ll answer with two different ones because there’s an important qualifier. So the obvious meta question would be redesign the institution that helps all the other institutions get designed the right way, right? So that would be the strategic-

Aria Finger: You’re going to ask for more wishes, Reid.

Reid Hoffman: Yes, exactly. Yes. My first wish is I get three or 10 or whatever. But in practice, that would be the overall governance, the shared governance that we live in; that would be the primary one. That’s one of the reasons why, for my entire business career, anytime a leader of a democracy, whether it’s a minister, like I met Macron when he was a minister before he was president, and so on, asks to talk to me about this stuff, I will try to help as much as I possibly can, because I think the governance mechanism matters that much. Now, the reason I’m going to give you two is because I think that one is a very hard one to do, partially because of the political dogfights and the contrast of it. These people think big tech should rule the world, and these people think that big tech should be ground to nothingness, and then everything else in between and blah, blah, blah, blah, blah.

And I disagree with both and a bunch of other stuff. And so you’re like, “Okay, so I try, but I don’t think...” So if I were to say, “Look, what would be a feasible one,” granting that would be the top one, I would probably go for medical. And it’s not just because I’ve co-founded Manas AI with Sid, and I’ve said one of the great ways to elevate the human condition with AI that’s really easily line-of-sight and seeable is a bunch of different medical stuff, including psychological. The Illinois law saying you can’t have an AI be a therapist is, I think, like saying you can’t have power looms. It’s like, “No cars, only horses and buggies, because we have a regulated industry here and those people have been licensed.”

But the medical stuff, I think, for example, we could deploy relatively easily within a small number of months: a medical assistant on every phone. If we got the liability laws the right way, that would then mean that every single person who has access to a phone, if you can fund the relatively cheap inference cost of these things, would have medical advice. And that is not eight billion people, it’s probably more like five billion people; you certainly could do it in every wealthy country and so forth, but that’s huge. And so that would be government first, but then more feasibly, possibly medical.

Aria Finger: And Aza, what about you, if you could redesign just one?

Aza Raskin: I love both of those answers. The medical one I think is actually one of the clearest places where I see almost all upside and I’m like, so we should invest a lot more there on AI. And I also would agree that it is governance. So we have a lot of the smartest people and insane amounts of money now going into the attempt to build aligned artificial intelligence. I don’t see anything similar in scale trying to build aligned collective intelligence. And to me, that is the core problem we now need to solve. How do we build aligned collective hybrid intelligence? And I think you can see it in the sense that we suck at coordinating. Reid, you probably have, I don’t know how many companies you’ve invested in or how many nonprofits.

Reid Hoffman: I don’t either. I’ve lost count.

Aza Raskin: But just imagine, I bet a lot of your companies don’t talk to each other all that often, at least not in a very deep way. And when I think about NGOs, I do work with Earth Species and I do work with CHT, and even though I’m the bridge between Center for Humane Technology and Earth Species Project, there’s a lot of overlap, but our teams don’t even talk that much. Why? Because who funds the coordination role, the interstitium? That stuff always falls off. And so my personal theory of change comes from E.O. Wilson, the father of sociobiology. He says, “Selfish individuals outcompete altruistic individuals, but groups of altruistic individuals outcompete groups of selfish individuals.” And what we need is new institutions, new technology, that helps not just groups of altruists outcompete, but groups of groups of altruistic groups outcompete. There is no slack for the coordination of companies and higher. That to me is a really exciting institutional set to redesign.

Reid Hoffman: By the way, I completely agree. And I think the notion that you’re gesturing at is like, “Look, we are going to have, in very short order, many more agents than people.” And so the ecosystem view of this... And I’ve decided, for irony’s sake, I’m going to go do a deep research query on: is there an ethics of ecosystems and collectives, in order to see? I’m curious. It’s a great question and a super important topic.

Aza Raskin: Right? And isn’t it interesting because I believe, I’ve asked lots of people and I’ve also used AI to try to find good terms for it. I think because we don’t have a name for it, people are just blind to it. In fact, I’m struggling with this at Earth Species a little bit where I keep having to say, it’s not just our responsible use. It’s world responsible use. It’s the sum total of as our technology rolls out into the world, how is that thing used? Because there are going to be poachers and there are going to be factory farms that might use the technology to better understand animals, to better exploit them. How do we get ahead of that? And that’s not just about what we do, but there is no word.

And so I just watch in our meetings as two meetings go by and people are back to talking about responsible use. I’m like, “No, no, no, no.” It’s this collective ecosystem ethics thing I’m talking about. Because we don’t have a word to hang our hat on, we can’t talk about it.

Aria Finger: Well, right. The history of technology is littered with things that people thought would be used one way and were used another way. And so we have to be thinking about all those different outcomes.

Aza Raskin: Exactly.

Aria Finger: So I want to get ... Oh, go.

Aza Raskin: Oh, just quickly, I think what you’re saying is very important, because our friends are the people that have made social media. I knew Mike Krieger before Instagram and, Reid, you made LinkedIn. We know these people are beautiful, soulful human beings that care. And my own lesson in creating infinite scroll, because I made it pre-social media, is that incentives eat intentions. You get a little window at the beginning to shape the overall landscape and ecosystem in which your invention’s going to be created, and after that, the incentives are going to take over. And so I wish we in Silicon Valley spent a lot more time saying, “How do we coordinate to change the incentives, to change where the race to the bottom goes?” If we spent more time in discussions talking about that versus which design feature we should have or not have, I think the world would look a lot better.

Reid Hoffman: And by the way, I think it’s incentives eat intentions at scale, and time is also a variable of scale.

Aza Raskin: Yes, yes, yes.

Reid Hoffman: Yes.

Aza Raskin: Yes. Well said. Mm-hmm.

Aria Finger: Well, so we’re doing a lot of “if we could grant one wish.” So I will say, if you were granted the power of running the FTC or FCC today, is there a regulation that you would push forward immediately? And Aza, I will go to you first. Is there one regulation that you think would be positive in the world of AI?

Aza Raskin: Well, I mean, the obvious ones are liability, whistleblower protections, transparency. I would also then put strict limits on engagement-based business models for AI companions for kids. That just seems like it’s very obvious and we should just do that now. If I could then zoom... Oh, go on.

Aria Finger: Well, I was actually just going to ask both of you, because this has come up actually recently with me a lot. A lot of people are talking about restricting folks who are under 18, and then everyone thinks of like, “Oh yeah, how do you do that? I’ll just lie and say I’m 18.” But then a lot of people also say that these companies have so much information that it would actually be pretty easy for them to figure out if you were under 18 or not. And so just for everyone listening, I wanted to verify that. Aza and Reid, do you have thoughts on whether, would it be possible to pretty easily say to an internet user, “No, no, no, you’re under 18, you cannot use Character AI,” or, “You cannot use ChatGPT for erotica,” or, “You cannot use these things that should only be 18 plus?”

Reid Hoffman: I would say that it’s relatively easy as long as you don’t have a 100% benchmark. The way that people... This is like the little statistics thing that Aza gestured at earlier, to say, “Oh, it’s impossible.” Well, it’s impossible if the bar is literally 100%: that one kid who got their parents’ driver’s license and looks a little older and is deliberately gaming it, it’s impossible to stop some very bright kids from doing this stuff. But if it’s, call it, 98% and maybe more, that’s pretty easy.

Aza Raskin: Yup, yup.

Aria Finger: Interesting.

Aza Raskin: And probably this should be a thing that happens at the device level. If Apple implemented this and it was a signal that social media companies could then check against, then the social media companies don’t have to know that much about you. They can just ask your device and your device can store that in its own secure enclave. And that’s, I think, a good way of getting around the problems.
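The device-level scheme Aza sketches could look something like the following. This is a hedged illustration only: the function names, the shared key, and the payload shape are all invented for this sketch, and a real system (say, an OS attestation service) would use hardware-backed asymmetric keys so that apps never hold the signing secret. HMAC is used here only to keep the example self-contained.

```python
# Hypothetical sketch: a device verifies age once, stores a single boolean
# in secure storage, and apps check a signed attestation instead of
# collecting personal data themselves. All APIs are illustrative.

import hmac
import hashlib
import json

# Stand-in for a key held in the device's secure enclave (assumption).
DEVICE_KEY = b"secret-key-held-in-secure-enclave"

def make_attestation(is_adult: bool) -> dict:
    """Device side: sign the single bit 'is_adult' without revealing
    a birthdate, name, or any other identifying information."""
    payload = json.dumps({"is_adult": is_adult}).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def app_checks(attestation: dict) -> bool:
    """App side: verify the signature, then read the one bit it needs."""
    payload = attestation["payload"].encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False  # reject tampered attestations
    return json.loads(payload)["is_adult"]

att = make_attestation(is_adult=False)
print(app_checks(att))  # False: the app gates its 18+ features
```

The point of the design is that the social media company learns exactly one bit about the user, and the evidence behind that bit stays on the device.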

Aria Finger: Fair enough. Reid, do you have thoughts on regulation that you would push forward immediately?

Reid Hoffman: Well, it’s probably maybe a little bit of a surprise for our listeners that there are a bunch of things I agree with Aza on here. I’d go massively on the transparency question. I basically think that one of the things should be: “Here is the set of questions that we’re essentially putting to these major tech companies, and you must give audited answers to them.” And some of them may have to be public, and some of them could be confidential, available for confidential government review. It’s a little bit like one of the things I liked about the Biden executive order: you must have a security plan, a red-teaming kind of security plan. You don’t have to reveal what it is, but you must have it, so if we ask about it, we see it. Because that at least puts some incentive and some organizational weight behind it. That’d probably be one.

Two would be kids, because I do think that social media, AI, and a bunch of other stuff have been mishandling the kids’ issues. And obviously there are some places where you have to step carefully, because these people want kids educated in religion one and these people want kids educated in religion two and these people want kids educated in religion three and blah, blah, blah. It’s a little bit like one of the things that I like about the evolution in the U.S., where the separation of church and state was like, “So your version of Christianity won’t interfere with my version of Christianity.” Okay, but we’re now much more global and broad-minded about that; it’s not against Hinduism either. And so make sure that we have that as a baseline. And even though obviously some parents are suboptimal and so forth, I actually wouldn’t be against it if you said, “Hey, part of the regulation of kids is you’ve got to be showing reports to the parents,” right? It’s like, “Look, parents should be able to have some visibility and some ability to intercede here.”

I mean, I think the notion that a technology product could be saying, for example, I think it’s a dumbass thing, “we’re competing with your mom.” It’s like, “You should not be doing that. And if you’re thinking that, you have a problem.”

Aza Raskin: Yup, yup.

Reid Hoffman: Right? But to be involved. Because the best thing we can think, while we try to make parents better and we try to make communities better, and it won’t always be the case: the fact is that parents, in the bulk of cases, are the closest. We care about our kid, we’re invested in the kid’s life and wellbeing. And we may have some weird theories, and I may be a drunkard or something else that happens, but I’m not the same thing as a private company. And it’s one of the reasons why public institutions and public schools have some challenges, because they’re trying to navigate that thing, which always, by the way, means a trade-off in efficiency and other things. And you give them some credit for that, because they’re trying to be this common space. And yes, they do have at least a lens into the kid, which is useful: this kid’s being abused, well, then we should do something about that. But generally speaking, it’s kind of, enable the parents. So that would be the second thing.

And then the third one, because I’m deliberately trying to choose one that wouldn’t be top of Aza’s list, even though there’s a bunch of these that I agree with: I actually think that the technology platforms are the most important power points in the world. And so part of the reason why, at the beginning of this year, I was talking about why I wanted AI to be American intelligence is that there’s a set of values we aspire to as Americans. I don’t know if we’re doing that good a job living them most recently, but we aspire to this: “Hey, let’s give individuals freedom to do great work and to have a live-and-let-live policy when it comes to religious conflicts of values and other kinds of things.” And I think that’s what we want.

And I think that actually, in fact, part of the thing is we live in a multipolar world now. It’s not just a U.S. thing. And so how do we get those values and technology setting a global standard? And that should be informing things. Here is one of the things that I... It’s a little bit off the FCC/FTC question, but people say, “I would like a return to manufacturing industry and jobs in the U.S.” And it’s like, “Okay, your only possible way of doing that is AI and robotics. So what’s your industrial policy there?” And they’re like, “Well, really?” And you’re like, “Yes, it’s a modern world.” And so we should be doing that. I agree, but we should be harnessing this great tech stuff we have with AI, and trying to get that rebuilt would be an excellent outcome, both for the middle class and also strategically, for the country. And that’s a parallel for the kinds of things I’d want the FTC and the FCC to be thinking about as they’re setting policies and navigating.

Aza Raskin: This gets into the very specific, but I think it’s an interesting example of what social media could be optimizing for that doesn’t require choosing what’s true or not at the content level. And that is perception gap minimization. That is to say, if you ask Republicans to model Democrats, they have wildly inaccurate models of them.

Aria Finger: Right.

Aza Raskin: If you say, “What percentage of Democrats think that all police are bad?” Republicans say it’s like 85 or 90%, when in reality it’s less than 10% or something like that. And it’s reversed the other way around.

Aria Finger: Totally.

Aza Raskin: So we’re modeling each other wrong, and so we’re fighting not with the other side, but with our mirage of the other side. So imagine you just trained a model that said, “All right, given a set of content, is the ability to model all the other sides going up or down?” I think if you just optimized for accurately seeing across all divides, which, by the way, is a totally objective measure: you just ask that group what they believe, and you ask other groups what they think that group believes. Then you realize that the most harmful content, hate speech, disinformation, all that brain-rot stuff, all preys on a false sense of the other side. So here is an objective way, without touching whether content is true or false, to massively clean up social media.
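The measure Aza describes can be sketched in a few lines. A minimal illustration, with made-up numbers (the function names and the survey figures are illustrative, not real data): the gap is just the distance between what a group actually believes and what another group thinks it believes.

```python
# Hypothetical sketch of a "perception gap" metric. All rates are
# fractions in [0, 1]; the example numbers echo the conversation,
# not any real survey.

def perception_gap(actual_rate: float, perceived_rate: float) -> float:
    """Absolute gap between what a group actually believes and what
    another group estimates it believes."""
    return abs(actual_rate - perceived_rate)

def mean_gap(pairs) -> float:
    """Average perception gap across many belief items. A feed-ranking
    objective could score content by whether this number rises or falls
    among the users exposed to it."""
    return sum(perception_gap(a, p) for a, p in pairs) / len(pairs)

# Example: ~10% of one group holds a view, the other group estimates ~85%.
gap = perception_gap(actual_rate=0.10, perceived_rate=0.85)
print(round(gap, 2))  # 0.75
```

Because both numbers come from directly asking the groups, the objective never requires a platform to rule on whether any individual piece of content is true.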

Aria Finger: I love it. It goes so well with what you always say, Reid, about scorecards: “I’m not going to tell you, social media company, that this is good or this is bad, but I’m going to give you the scorecard and what we want you to hit, and you figure it out.” And if you decide that, “Oh yeah, actually promoting those vaccine conspiracies makes people distrust the other side in a way that’s not accurate, okay, well then you need to change your behavior.” And so again, it’s actually sort of putting the agency in the company’s hands in a way that is so positive. All right, so we’re going to do our traditional rapid fire very soon. But first, we wanted to end on a lighter note, because we’ve talked about vampires and some heavy stuff. So I’m going to ask you guys-

Reid Hoffman: We need to bring in werewolves and zombies, but you know.

Aria Finger: Yeah, exactly. Exactly. I mean, I just watched Sinners, so I do have the supernatural on the mind. So I’m going to get a hot take from each of you, hopefully pretty quick. I have, let me see, four questions. So Aza, I will start with you. What are the most outdated assumptions that are driving today’s AI decisions?

Aza Raskin: I think the most outdated belief driving AI is that we can muddle through: that it’s never been a good idea to bet against the Malthusian trap, that is, we’ve always made it through in the past, and therefore assuming that because we’ve always made it through in the past, we’ll make it through this time. I don’t know what grade you, Reid, or Aria would give humanity as a scorecard for the industrial revolution. I’d say we got maybe a C minus stewarding that technology. Lots of good things came of it, but also child labor, and now nowhere on earth is it safe to drink rainwater because of forever chemicals, and we dropped global IQ by a billion points with lead. But we managed to make it through. I don’t think we can afford with AI to get a C minus again. I think that turns into an F for us.

Aria Finger: Reid, what do you think are the most outdated assumptions driving today’s AI decisions?

Reid Hoffman: I’m going to be a little bit more subtle and geeky. By the way, I do think we need to get a much better grade. I actually think AI can help us get a better grade, so I think...

Aza Raskin: We need it.

Reid Hoffman: But I think the most outdated assumption, because it’s almost against what most people think: I don’t think people are realizing, people still think it’s mostly a data game, and it’s turning much more into a compute game. And data still matters, but it’s like, data is the new oil, et cetera, et cetera; actually, compute’s the new oil. Data still matters, but it’s the compute layer that’s going to matter the most. I’d say that would be my quick answer in a very complicated set of topics.

Aria Finger: Well, the next question, we’re giving you just one sentence to answer. So Reid, I will start with you, in one sentence, what is your advice to every AI builder right now?

Reid Hoffman: Well, have a theory about how it is that in your engagement with your AI product, whether it’s a chatbot or something else, you will be elevating the agency and the human capabilities, but also broadly the compassion, wisdom, et cetera, of the people that you’re serving. So for example, at Inflection with Pi, in personalization, being kind, modeling a kind interaction, is one very tangible output.

Aria Finger: Fantastic. Aza, do you have one piece of advice?

Aza Raskin: I would be very aware of how incentives eat intentions because the technology you’re creating is incredibly powerful. And so if it gets picked up by a machine or country that you don’t like their values, the things you invent will be used to undermine the things you actually care most about.

Aria Finger: Fantastic. Reid, I’ll go to you first. What is the belief that you hold about AI that you think many of your peers would find controversial?

Reid Hoffman: Well, a lot of my peers tend to be in the LLM religion, which is the one model to make everything work, whether it’s superintelligence or the rest. And obviously I think we’ve done this amazing thing. We’ve discovered an amazing spell book in the world with these LLMs and kind of scaling them. I tend to think that there will be multiple models, and the actual unlock for AI in the human future will be combinations and a compute fabric of different kinds of models, not just LLMs. Now, it might be that LLMs are still, as it were, the runner of the compute fabric. It’s possible, but I also think it’s possible that it isn’t. And that probably gets the most, “Wait, are you one of those skeptics? Do you not believe all the magic we’re doing?” It’s like, “No, I believe there’s a lot of magic.” I just think that this is a big area and a blind spot.

Aza Raskin: Mm-hmm.

Aria Finger: Aza, same question, a belief that you have that most of your peers would find controversial?

Aza Raskin: That AIs based on an objective function are not going to get us to the world we want. That is to say, whenever we just optimize for an objective function, we end up creating a paperclip maximizer in some domain. But nature doesn’t have an objective function. It’s an ecosystem that’s constantly moving. There isn’t just a static landscape that you’re optimizing to climb a hill on; the landscape is always moving. It’s a much more complex thing. So if we really want to have AIs that can do more than confuse the finger for the moon, and then keep giving us fingers; if we actually want to get to human flourishing, ecosystem flourishing, that thing, we’re going to have to move beyond the domain of AI that just optimizes an objective function.

Aria Finger: Awesome. Let’s move to rapid fire. And Reid, I think your question is the first.

Reid Hoffman: Indeed. Is there a movie, song, or book that fills you with optimism for the future?

Aza Raskin: Really, anything by Audrey Tang: listening to her podcast, reading Plurality. She’s the Yoda Buddha of technology. So 100% that. And then On Human Nature by E.O. Wilson. And finally, The Dawn of Everything by David Graeber, because it just shows how stuck we are in our current political-economic system and really opens your eyes to how many other ways of being there actually are.

Aria Finger: Awesome. What is a question that you wish people would ask you more often?

Aza Raskin: Oh, whether I know something about surfing or yoga.

Aria Finger: Awesome. Which are you better at, Aza, surfing or yoga?

Aza Raskin: I’m definitely better at yoga because surfing is by far the hardest sport that I have ever done. But actually, there is a question that people ask me a lot that I don’t have a good answer to. And that is after laying out my worldview, people almost inevitably ask, “But how do I help?” And I realize I don’t have a good answer because to answer that question requires understanding who you are, what you’re good at, what you would like to be good at, what your resources are, what you’re currently working on. And I would love to have an answer that when somebody says, “How can I help?” There is something and maybe AI can help with it that does that kind of sorting and helping people find their dharma within a larger purpose.

Aria Finger: I couldn’t agree more. Forget people who say that everyone’s apathetic. Everyone right now is asking me what they can do, and to your point, I don’t have a good answer either. So let’s try to build one.

Reid Hoffman: Well, I think a beginning is to learn and get in the game. For example, start engaging with it, and then have your voice be heard. You can’t have a perfect plan, but it’s like joining some movements, rallying to the flags of people trying to help. All right. So where do you see progress or momentum outside of tech that inspires you?

Aza Raskin: Well, I’m going to feel like a broken record, but outside of tech, actually I was going to start with all the deliberative democracy stuff, but we’ve already talked about that. Blaise, I’m going to say his last name wrong, Aguera y Arcas at Google. He and his team are doing some incredibly beautiful work that I’m finding a lot of hope in. Because I laid out my worry that game theory is going to become obligate and we’re just going to get whatever the game theory says for the future of humanity. And that seems like a really terrible world I don’t want to live in. And his work is on understanding, in a situation of multiple agents, how you actually get non-Nash-equilibrium solutions. And he’s discovering something, which is that in order to solve the very hard problem of how you do strategy in multi-agent reinforcement learning, I have to model what you know, and you have to model what I know, and I have to model what you know about what I know, and so on.

And that’s just very hard. And they’re discovering some new math. And it turns out you can start to answer this if you don’t just model the game with yourself outside the game board, but with yourself on the game board. You have to model yourself modeling other people. And what’s cool there is that suddenly non-Nash-equilibrium states are found, not the worst of the prisoner’s dilemmas. You can find these new forms of collaboration. And I love this. It feels so profound because, first, you have to inject the idea of ego and then transcend it. If you don’t have ego, you just find the Nash equilibrium. If you do have ego, you also find the Nash equilibrium. But if you have ego and can transcend it, you can get to these much better states. And that to me is very hopeful and very cool, because I think of game theory as the ultimate thing that we’re going to have to beat as a species.
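[Editor's note: a minimal sketch of the baseline Aza is pointing at, using the standard prisoner's dilemma payoffs rather than anything from the research he describes. It shows why pure best-response reasoning lands both players at the mutual-defection Nash equilibrium even though mutual cooperation pays each of them more.]

```python
# Standard prisoner's dilemma payoffs (illustrative, not from the episode).
# (my move, your move) -> (my payoff, your payoff); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move):
    """Pick the move that maximizes my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, their_move)][0])

# Whatever the other player does, defection is the best response...
assert best_response("C") == "D" and best_response("D") == "D"

# ...so self-interested best responses converge on (D, D) with payoff (1, 1),
# even though (C, C) would give both players 3 instead of 1.
print(PAYOFFS[("D", "D")], PAYOFFS[("C", "C")])  # (1, 1) (3, 3)
```

Escaping that (D, D) trap without coercion, by richer mutual modeling, is the "better than Nash" outcome the conversation is gesturing at.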

Aria Finger: As always, Aza, our final question: can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years, and what is our first step to set off in that direction?

Aza Raskin: This is the question of what is possible if we could rearrange our incentives so we are both nourishing ourselves and nourishing all the things that we depend on. Suddenly, I think people don’t really look at their phones, because the world that we inhabit is just so rich and interesting and novel. We are consistently surrounded by the people who can help us learn the most in a developmental sense. The entire world is set up in a fiduciary way, where everything we trust is actually acting in our, our communities’, and our society’s best interest, and in a developmental way, understanding where we are and helping us reach whatever that next attainable self is.

I think we’ll have made major, major progress toward solving diseases. We’ll have a deep understanding of cancer, and I think we will have solved our ability to socially coordinate at scale without subjugating individuals. So it looks something like that. We will have solved the aligned collective intelligence problem, and we’d be applying that to getting to explore the universe.

Aria Finger: Awesome.

Reid Hoffman: Yeah, the universe outside and the universe inside.

Aza Raskin: Yes, exactly.

Reid Hoffman: So Aza, always a pleasure.

Aza Raskin: Thank you so much, Reid. Thank you so much, Aria. That was my conversation with Reid Hoffman and Aria Finger on their podcast Possible. I hope you enjoyed it. We’ll be back soon with new episodes of Your Undivided Attention. And as always, thank you so much for listening.

Recommended Media:

Aza’s first appearance on “Possible”

The website for Earth Species Project

“Amusing Ourselves to Death” by Neil Postman

The Moloch’s Bargain paper from Stanford

On Human Nature by E.O. Wilson

The Dawn of Everything by David Graeber
