[ Center for Humane Technology ]
The Interviews
What if we had fixed social media?

An alternative history

We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path?

In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them.

This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

Tristan Harris: Hey, everyone, welcome to Your Undivided Attention. I’m Tristan Harris.

Aza Raskin: And I’m Aza Raskin.

Tristan Harris: So I’d say that when Aza and I are running around the world and talking to everybody, there’s really just one question that’s the most popular question we get asked, which is: so what do we do about all this? How do we get out of this trap, and what would it look like if we got this right? And they’re really mostly talking about social media. So we did this Ask Us Anything episode where you sent us all your questions, and this was the most popular question we got asked. Here’s Max Berry.

Max Berry: Hey, Tristan, it’s Max Berry from Canada. It really seems like we’re all stuck in these feeds and the companies are stuck too, because they need the money from the ads. It’s like we’re all trapped by the same algorithm. Is there actually a way out of this whole thing?

Tristan Harris: So we’re going to do a little thought exercise, just follow along here. Imagine that we actually took action. And we’re not saying that we might or we should, we’re saying imagine, in past tense, that we did. What would it look like to comprehensively respond, with the cultural changes, the design and product changes, the legal changes, the incentive changes, and the litigation and lawsuits that led to those incentive changes, so that we could comprehensively reverse this problem?

Aza Raskin: My favorite thing about this is that the world we live in really can feel so bleak and so inevitable, and that’s often just because we can’t articulate an alternative. So that’s what we’re going to try to do here. Let me set this up for you, Tristan. Zoom your mind back to 2012: it was all looking really bleak. We had falling attention spans, rising polarization, the most anxious and depressed generation in history, a loneliness epidemic, mental health crises. And then what happened?

Tristan Harris: Well, we sprang into action. We realized we had a problem, and we replaced the division-seeking algorithms of social media with ones that rewarded unlikely consensus, using Audrey Tang’s bridge ranking for political content. So now, instead of scrolling through infinite examples of violence and inflammatory content that made you pessimistic about the worst things happening around the world every day, you were suddenly seeing optimistic examples of unlikely consensus from people around the world. And that started to turn the psychology of the world around.

Aza Raskin: Mm-hmm.

Tristan Harris: And just like we have emission standards on cars, we just put in these sorts of dopamine emission standards, recognizing that too many of the apps were incentivized to get into limbic hijacks and slot machine behaviors. And then suddenly when we had these emission standards for dopamine, using your phone didn’t make you feel dysregulated, didn’t make you feel sort of anxious, and you had more control as you were using technology.

Aza Raskin: Yeah, we began subsidizing solutions journalism, so that every time you were on a feed and saw a problem, it was contextualized with real-life solutions from around the world that gave us learned hopefulness, not learned helplessness.

Tristan Harris: We realized that our phones were not just phones or products that we used, they were more like a GPS for our lives. They were kind of like a brain implant, and we were only as good as the menus that we lived by inside of those GPSs. And we realized that the attention economy was creating a GPS that only ever steered us towards more content. So instead, we reclassified these phones and devices as attention fiduciaries for making life choices. We made the radical choice to treat technology companies the way that we do every other kind of company, which is to say that there are rules they have to follow. And just like we have zoning laws in cities, with different building and noise codes, we realized that we needed an attention economy with a kid zone, a sleeping zone, a residential zone versus a commercial zone.

Aza Raskin: And we realized that actually that wasn’t a radical proposal: in the same way that we added a cigarette tax at the point of purchase to change behavior, or put age restrictions on drinking and driving, obviously there should be restrictions on the most powerful technology affecting us.

Tristan Harris: And groups like Moms Against Media Addiction, or MAMA, and The Anxious Generation rallied public support to ban social media in schools. And now you had tens of thousands of schools going phone-free all around the world. And once that happened, laughter returned to the hallways, and attention spans started to recover. We implemented age-appropriate design codes so that we didn’t have autoplaying videos in any of these social media apps.

And in terms of systems thinking, this wasn’t just about making design changes, it was about changing the incentives. And once we reckoned with the total harm that all of this had caused, what Michael Milken, the great capitalist, has called the trillions of dollars of damage to social capital in the form of mental healthcare costs and lost GDP and productivity, once we accounted for that, there was a trillion-dollar lawsuit against the engagement-based business model.

And just like the big tobacco lawsuit that ended up funding ongoing public awareness campaigns that educate people that smoking kills, this funded ongoing digital literacy campaigns for young people so that the problems of technology were understood at the speed at which they were entering society. And as part of that, it funded community events and rewiring of the social fabric and refunding local news and investigative journalism all around the world that had previously been bankrupted by that engagement-based model.

And this funded the mass rehumanification of connecting people to in-person events and nature. So suddenly the smartest minds of our generation were thinking about how to design interfaces that were all about hosting events in community. And as part of that, we replaced the swiping industrial complex of dating apps like Tinder and Hinge and Raya, which were really just preying on people’s loneliness and causing people to send messages and never meet up. And suddenly there was a simple change to all these dating apps that made the world so much better, which was that they were required to actually spend money to host real-world events every week in every major city in many venues, and then use AI to route everybody who had matched with each other into these common places. So suddenly, every week there was a place where people who were lonely had an opportunity to meet all sorts of people they had matched with. And it turned out that once people were in healthy relationships, polarization started to go down, because about 25% of the polarization online was actually just due to people feeling disconnected from themselves and unhappy.

Aza Raskin: And we realized that Marc Andreessen was right, or at least about one thing, which was that software was eating the world. But because software doesn’t have the same kinds of protections that we’ve built up in the real world, as software ate the world, we lost those protections. And we realized that you couldn’t take over the world without caring for the life-support functions of the society that constitutes that world. So we realized you couldn’t take over childhood development without a duty of care to protect children’s development. Or you couldn’t take over the information environment without a duty of care to protect the integrity of the information environment. And by passing a duty-of-care act for all of technology, so that as software and technology eat the world, the world doesn’t end up chewed up, we solved many of the problems.

Tristan Harris: Yeah, and there were so many aspects of life that software was taking over, including our ability to unplug from technology. And when technology ate the ability to unplug, it also needed to care for our ability to unplug. So our entire technology environment started actually protecting that ability and making it easy to unplug. There you are in email, and it makes it easy to say, “I need to go offline for three days.” There you are in news, and you say, “Hey, I’m going to go offline for five days.” And when you come back, it just summarizes all of the news that you missed, so you don’t actually have to check it constantly. And so, suddenly, using technology felt more balanced; it was more in touch with the real world, balancing the real world with the online world. We also realized that so much of this was that personnel is policy, and we didn’t have enough people who were actually trained in humane technology.

Aza Raskin: Just like the show The West Wing caused a 50% increase in enrollment at the Kennedy School, The Social Dilemma, and then a whole bunch of new shows centered around what humane technology feels and looks like, created a massive wave of humane technologists.

Tristan Harris: Can you imagine having Netflix shows that continually cover what it would look like, in these fictional rooms where people were making design choices at technology companies, to protect against and deal with these societal issues? To cite the work of Donella Meadows, the great systems-change theorist, who asked, “How do you change a system?” And she said, “You keep pointing out the anomalies and failures in the old paradigm, you keep speaking, loudly and with assurance, from the new one, and you insert people with a new paradigm of thinking into places of public visibility and power.”

And so, once we had all these people watching these Netflix shows of humane technologists making thoughtful decisions about how to make trade-offs and make technology work for society, and once you had humane technology graduates, who had all taken these foundations-of-humane-technology courses, in these positions of public visibility and power, the technology that we used every day actually started to feel like it cared about the society it was operating in.

Aza Raskin: And this, I think, might be my favorite one: just like we banned the sale of human organs, something that’s sacred to us, that we need, we realized that we could ban engagement-based business models. And that immediately made technology much more humane, but it did something even deeper. It freed up two generations of Silicon Valley’s most brilliant minds to go from getting people to click on ads to solving actual real-world problems like cancer drugs and fusion. And in the wake of that, Silicon Valley went from being reviled to loved again.

Tristan Harris: And we saw that countries that adopted these comprehensive humane technology reforms, and became less dysregulated, less distracted, less polarized, started to actually out-compete the countries that didn’t regulate technology and still had these parasitic, engagement-driven business models. And there was also a national security side to all of this: we realized that authoritarian societies like China were consciously deploying technology to reinvent 21st-century digital authoritarianism, and they were using tech to strengthen that model.

And in contrast, democracies were not consciously deploying technology to upgrade and create 21st-century democracies. Instead, we had inadvertently allowed two decades of these pernicious business models to profit from the degradation of the health and cohesion of democracies. But once we sprang into action, it really wasn’t that hard to change these design patterns, change these incentives, and set in motion a totally different trajectory toward a healthier, less lonely, less dysregulated, more coherent society with more belonging and more community. And so, the beautiful world our hearts know is possible isn’t nearly as far away as we think, if we can just start to see and feel how a few changes like this could make a big difference.

And we went from upgrading the machines and downgrading the humans, to upgrading the machines to upgrade the humans.

Aza Raskin: All right, I want everyone now to close your eyes, and I’m going to ask Tristan to lead us in a little meditation. Assume all of these things have actually happened and we’re living in that world. What does that world feel like?

Tristan Harris: Yeah, so just imagine stepping into this other world that we just described for a second. There you are, holding this device that’s designed totally differently. It doesn’t make you feel dysregulated, because you don’t have autoplaying videos and dopamine hijacking happening everywhere. When you’re scrolling news feeds, suddenly 30 to 40% of what you’re seeing are things that you can do with real friends and real community in your environment. So suddenly you’re using technology and it’s actually encouraging you to disconnect and take breaks, and it’s easier, built in across all of these messaging applications, to do that.

Aza Raskin: Tristan, you said something that still resonates in my ears from The Social Dilemma: “So there I am scrolling on social media, one more cat video. Where’s the existential threat?” And your point was that social media isn’t the existential threat. Social media brings out the worst in humanity, and the worst in humanity is the existential threat. So when I close my eyes and I imagine this alternative world, I’m no longer seeing the worst of humanity, I’m starting to consistently see the best of humanity. And instead of existential threat, I’m seeing existential hope.

Tristan Harris: But now imagine the most violent thing that has happened today, and imagine pointing your attention, over and over again, at more and more examples of it. Just notice what happens in your nervous system when you do that. I think one of the most pernicious aspects of the way this system has hijacked us is that we don’t even really notice how profoundly different our inner environment, the world we’re actually living in, has become, because we’ve been living in it for so long. And what inspires me about the narrative we have just described is that it actually doesn’t take a lot: just these small changes create a very different-feeling world psychologically. And a different-feeling world psychologically starts to translate into a differently constructed world.

Aza Raskin: So really imagine that we’ve done all of these things and people just can see each other’s humanity. We are bridging our divides. We are spending more time in person as societies. We are stronger and more coherent, and we can see that we are making better decisions over time. In that world, suddenly AI seems much easier to deal with.

Tristan Harris: Once we saw that we had successfully dealt with social media, we knew that we were a society capable of dealing with technology problems. They weren’t insurmountable; it was just a matter of seeing the underlying incentives and design that were leading us to a world that no one wanted. And once we had made those changes to social media, we had the confidence in ourselves that we could do something about AI, and that it wasn’t too late.

This is just one path through a set of things that could happen, but we want all of you to be thinking about what your version of this narrative is, and what this narrative would look like for AI. We all need to tell the story of how this went in a different direction. Because if we collapse into “the current path that we’re on, which is reckless and sort of dystopic, is just inevitable,” then we’re never going to get there. And so, we hope this episode is an example of what it looks like to step into a version of what we did do, past tense, that was obvious once we saw the problem clearly.

So some of you might be feeling depressed even after hearing this alternative narrative, but let me give you just a little bit of actual hope. More than 30 attorneys general have actually sued Meta and Instagram for consciously addicting young people to their products. There is a big-tobacco-style lawsuit underway. There are bills in Congress, like the Kids Online Safety Act, trying to create things like the age-appropriate design code. There is work being done by Audrey Tang to actually get X to implement BridgeRank at the center of how it ranks content for the world. And so, it doesn’t look great out there, but if you look at the road, you can almost see these trailheads where there is a path for more solutions to happen, if there is a comprehensive and concerted effort to make it happen.
