We’re Not Just Racing Toward AGI. We’re Racing Toward Fragile Institutions.
Key Takeaways from Sam Hammond, Chief Economist at the Foundation for American Innovation

As the race to build increasingly powerful AI accelerates, the risks are starting to come into focus: on one end of the spectrum, the threat of authoritarian control; on the other, ungoverned chaos.
But there’s a third path—a narrow one—where technological power is matched with institutional responsibility and democratic resilience. This episode explores how we might actually find and walk that path.
Our guest for this episode was Sam Hammond, chief economist at the Foundation for American Innovation. Sam comes to the conversation from a different perspective: he's more innovation-forward, a former techno-optimist turned realist. But what emerges is surprising alignment.
Daniel and Aza did not agree with Sam on everything, but they shared the same urgent goal: to modernize the systems that hold society together before AI breaks them apart.
Institutions Aren’t Ready for What’s Coming
Sam warns that we’re approaching what he calls an institutional singularity: a moment when legacy systems like the courts or the FDA just can’t keep up with what AI enables.
Imagine, he says, AI lawyers in everyone's pocket flooding the courts, or AI researchers generating biomedical breakthroughs faster than our clinical trial frameworks can evaluate them.
His concern isn’t rogue AGI—it’s scale. Even well-intentioned AI, deployed into outdated bureaucracies, can create cascading system failures.
“You can imagine if we all had AI lawyers in our pocket… the courts are going to get overwhelmed unless they adopt AI judges in some form.”
— Sam Hammond
Surveillance Is Becoming Ubiquitous
Aza lays out the near-future surveillance landscape: 6G networks that can detect heart rate, gestures, and micro-expressions in real time. Cities that can see and sense everything.
Sam doesn't dispute that trajectory; in fact, he sees it as likely. The real question isn't whether it happens, but who governs it, under what rules, and with what values built in.
This Isn’t Just About Alignment
A lot of the policy conversation today focuses on technical alignment: how to make sure models do what we ask. But Sam pushes the frame wider, arguing that even aligned AI can destabilize fragile institutions.
He draws a parallel to gain-of-function research in biology: in both cases, powerful systems are being built in competitive environments with little coordination and little capacity to absorb the fallout.
The Real Race Is for Institutional Capacity
Sam challenges the narrative that the West just needs to "win the race" to AGI. In his view, the real imperative is making democratic institutions more agile.
That means keeping a lead in key infrastructure—compute, hardware, safety tools—but also updating regulatory systems and legal frameworks to avoid collapse under AI’s weight.
He calls for a kind of Manhattan Project—not to build AGI, but to modernize how we govern it:
“I’d have more rigorous oversight over AI labs — more AI-specific rules and standards for the development of powerful forms of AGI — at the same time as I am essentially doing a jubilee on all the regulations that currently exist in most sectors. Not because we want a world that's totally deregulated, but because those regulations are starting to lose their direction of fit.”
— Sam Hammond
AGI Isn’t Inevitable—It’s Ideological
Sam pushes back on the idea that building a unified, god-like general intelligence is simply what comes next. He sees that goal as ideological: not necessary, not inevitable, and certainly not universally shared.
“China is not racing to build an AI sky god. They’re racing to build automated factories. They’re much more pragmatic and practical. It’s going to come down to the CEOs of a handful of companies with a kind of glint in their eye.”
— Sam Hammond
We’re not being dragged toward AGI. We’re choosing to build it. And we can choose differently.
Final Window of Choice
Despite different priors and politics, all three voices in this episode agree: we’re in a narrow and closing window before AI becomes fully entangled with the systems that shape our world.
We still have agency. But that agency depends on our ability to build institutions that can adapt as quickly as the technology itself.
“We can’t be utopian and know what the end state is, but we can apply general principles for complex adaptive systems. And one of those is rapid feedback loops, experimentation, fail-safe testing, and we just at the moment completely lack the infrastructure to do that. And so there’s some work that needs to be done.”
— Sam Hammond
What we do next will determine whether AI strengthens or fractures democracy, whether our institutions evolve or erode, and whether we steer the future or get swept along by it.