Tristan Harris on How AI Worsens Ills Caused by Social Media
The only cure is to change AI firms’ incentives, argues Tristan Harris in The Economist
May 29th 2024
AS SOCIAL-MEDIA platforms gained dominance over the past decade, society was transformed. In its early days, that transformation was billed as an unprecedented good by social-media companies including Facebook, Instagram and Twitter—they were, after all, connecting the world as never before. Twitter’s tagline in 2014 was the succinct and bright “What’s Happening?” Instagram’s was “Capture and Share the World’s Moments.” Facebook’s login page declared that “Facebook helps you connect and share with the people in your life.”
But as Charlie Munger, Warren Buffett’s late business partner, once said, “Show me the incentive and I’ll show you the outcome.” And those taglines obscured a warped incentive structure within social-media platforms—an invisible engine that would come to drive the psychological experience of billions of people. Darker realities emerged. As social media tightened its grip on our everyday existence, we witnessed the steady shortening of attention spans, the outrage-ification of political discourse and big increases in loneliness and anxiety. Social-media platforms fostered polarisation, pushing online harms into offline spaces, with at times tragic, fatal results.
I began to worry about the damaging effects of social media more than a decade ago, when I was a design ethicist at Google. It became clear to me that social media was engineered to capture and hold our attention, and to capitalise on our subconscious instincts, all without consideration for the long-term ramifications. The perverse incentives in its business model—to maximise user bases and engagement—were making society more addicted, distracted, validation-seeking, outraged and polarised.
Now the world is grappling with an emerging technology: generative artificial intelligence (AI). It is common to hear people say that it is simply too early to tell how it will affect society. But I believe we can predict the outcome now, just as those who looked closely enough were able to predict the outcome with social media: by examining the incentives that drive the technology’s development and roll-out.
Social media was our first large-scale contact with AI—that is, “curation” AI, which simply picked which posts, videos and tweets would hit eyes and ears. Curation AI was programmed with a simple incentive: to drive engagement on the platform, in order to then drive advertising revenue. As it turns out, using engagement as an incentive can create profound dysfunction in society, culture and politics. Ten years of living with curation AI and its current incentives have been enough to rewire global information flows, break shared reality and fuel unprecedented mental-health crises in the young.
Society is now beginning its second contact with AI—generative, or “creation”, AI. A new class of technology is being unleashed, from chatbots that generate text and AI copilots that generate code to deepfake images and voice-cloning audio generators. Although tech companies market generative AI as a source of productivity gains and other benefits, its consequences have already proved disturbing, even destructive. Financial scams against the elderly have been boosted by voice clones of loved ones; “nudification” apps are being weaponised against teenagers; and deepfake audio is being used to blackmail people of all ages.
Generative AI heightens pre-existing dysfunction in the digital ecosystem because it greatly reduces the friction involved in creating content. Images that once required advanced Photoshop skills can now be created almost instantly from a single-sentence text prompt. Political disinformation campaigns that once required many people can now be generated and deployed at scale, and with surgically precise voter-targeting, by just a handful of human agents.
What is driving a potentially dangerous technology like generative AI to grow at such a fierce rate? The answer can once again be found in the perverse incentives at play—especially the pressure to be first to market.
Despite warnings from employees and external researchers alike, many cutting-edge AI companies are rushing to release risky, unreliable, insecure and even unethical AI products in the hope of dominating the market. In the process, they are whitewashing harms and offloading on to governments and the public the job of solving the problems their products create.
Governments have the power to intervene and change the incentives. However, elected representatives face an age-old challenge: emerging technologies have historically been difficult to regulate, going back to the advent of the railroad and the telegraph. That is partly because new technologies are often not well understood. It is also because tech pioneers generally go to great lengths to minimise oversight. Today, half-hearted, even bad-faith statements by AI-industry CEOs about their openness to regulation are routinely followed by intense, well-funded lobbying efforts that help to create political gridlock.
Politicians need to find a way out of this gridlock. Tech giants must be held accountable for the harms their products cause, not merely encouraged to innovate. So far, companies such as Meta, Google, Amazon and Microsoft have borne little responsibility for their actions. They must be subjected to a liability framework that exposes them to meaningful financial losses should they be found responsible for harms. Only then will they take safety more seriously, both during the AI-development process and “downstream”, once products are deployed. Liability has the power to wire new incentives into the foundations of AI businesses.
A decade ago, with social media, the world took a wait-and-see approach to how that technology would change society. The results have been devastating. With AI, we cannot afford to nod along with taglines and marketing campaigns. What is driving AI research, development and deployment is already clear: a dangerous incentive to race ahead. If we want a better outcome this time, we cannot wait another decade—or even another year—to act. ■
Tristan Harris is a co-founder of the Center for Humane Technology.
This article appeared in The Economist on May 29th 2024 and has been shared with permission.