[ Center for Humane Technology ]
The Interviews
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

With Sam Hammond

The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

Aza Raskin: Hey, everyone, welcome to Your Undivided Attention. This is Aza Raskin.

Daniel Barcay: And this is Daniel Barcay.

Aza Raskin: So today's guest is Sam Hammond. He's the chief economist of the Foundation for American Innovation, and I'm very excited to have this conversation with Sam in part because we just come from different backgrounds. We have different worldviews, take different stances about the world. And yet on the biggest thing, we seem to agree. And so we really wanted to have this be a conversation about, well, how AI is going to go and how it can go well.

The recap is: AI companies and global superpowers are in a race to develop ever more powerful models, moving faster and faster without guardrails. And of course, this dynamic is unstable. Putin has said whoever wins AI wins the world. Elon Musk says AI is probably the way World War III starts. We've just passed the threshold of the latest Anthropic models starting to have expert-level virology skills.

And there are really two end states that we've talked about on the podcast, and I think Sam sees two, and that's either we end up in a dystopia where a handful of states and companies get a previously unimaginable amount of wealth and power, or that power is distributed to everyone and that world ends in increasingly uncontrollable chaos, which will then make the move to dystopia even more likely.

And so there is a narrow path to getting it right, where power is matched with responsibility at every scale. But right now we aren't on that path. And so today's episode is really about how we might get on that path. So Sam, thank you so much for coming on Your Undivided Attention.

Sam Hammond: Oh, thank you for having me.

Daniel Barcay: Now, Sam, as you could probably hear, I'm just getting over a little cold, but I was really looking forward to this conversation, so I wasn't going to let that stop me. So Aza just talked about this in the intro, but you come to the AI conversation from a different perspective than a lot of our guests. A lot of the people that we have on come primarily from AI safety or harm reduction, and here at CHT, that's our priority as well. But you have what some might call an innovation-first approach to AI development.

And you've described yourself as a former accelerationist, a techno-optimist in the Marc Andreessen vein, but you also talked about updating your worldview because of the fragility of our institutions. Can you just tell our listeners a little bit about where you're coming from and the top line of how you think about technology and AI?

Sam Hammond: Sure. So I've always thought of myself as maybe a techno-realist more than an optimist per se. I got into this area as a young kid interested in philosophy of mind, cognitive science, evolutionary biology, debating whether the mind is a computer, coming to the conclusion at a pretty young age that by some description it is, and trying to reverse-engineer that.

Intellectually, I think one of my earlier philosophical transitions was being very much a hardcore libertarian and coming to realize, via an understanding of history, that institutions like property rights, the rule of law, religious freedom, these things are actually new constructions and are not natural. They're part of recent Western history, and they don't necessarily result from a weak state.

They resulted from, in some cases, the strengthening of the state out of the Feudal Era with early technological growth that favored the consolidation of militaries, the early ability to collect taxes, form bureaucracies. These things were driven by the printing press, by other technological currents. And so this way in which technology shapes and alters the nature of our institutions became very apparent to me.

At the same time, I was also very interested in political philosophy, and a lot of my interest in innovation and technology came from understanding the history of the Industrial Revolution and how out of equilibrium we are. Most inventions that we use on a daily basis, the biggest-impact things, were invented in a span of less than 100 years. And you can think of that as the first singularity.


Deirdre McCloskey has what she calls the hockey stick curve of history, where you look at GDP growth over time, and for most of human history it's basically zero. And then sometime around the late 1700s, early 1800s, it goes vertical. We're on that vertical curve, and everything we owe to our civilization is a result of that stupendous economic growth. And so that raises the question: could this happen again? Is there a further inflection point in the near future?

Daniel Barcay: And so this first singularity you talked about, the industrial singularity, we're living in a world of just pure industrialization now, and it's so different from what we had before. And you're saying that moving forward into this other world, it could be as different as it was between industrial society and pre-industrial society. Talk a little bit about how you think of that transition, because you also come at this from this mixture of: this could be a beautiful transition, but it could also be quite a chaotic one.

Sam Hammond: But if there is going to be another technological transition, we should, I think, by default assume that there will be institutional transitions of similar magnitude. And no one could have foreseen, circa 1611 with the first publication of the King James Bible, that in a few hundred years we'd have the first railroads, the Enlightenment, telegraph networks. So I just fully expect that if we do get to AGI, and I think we're quite near, we'll have a similar transition, but probably one that's much more compressed in time, and that will challenge all our assumptions about effective governance, right-sized institutions, and all the trappings of modern nation-states.

Aza Raskin: I first learned of your work, Sam, when we first met at a little conference in Berkeley last year and you were giving a talk. Actually, we borrowed a little bit of your talk for Tristan's TED Talk on the narrow path. And I think you borrowed some of the things on surveillance and other bits from our AI Dilemma talk. It's sort of nice to see the reciprocity.

And actually, I'm going to ask you in a second to recapitulate a five- or ten-minute version of that talk, because there are so many really great points in there. But I think we should start with the thought experiment that you give in that talk: you invent a new technology, it uncovers a new class of responsibilities, and then society has to respond. You illustrate that with x-ray glasses, and so I'd love for you to just give that example.

Sam Hammond: Yeah. So the intuition here, by the way, comes from looking at the ways in which even small technical changes can lead to very large qualitative outcomes. The birth control pill drove qualitative changes to society. And so in the talk I give, I open with a thought experiment. Imagine one day we woke up and, just like manna from heaven, we had x-ray-style glasses that we could put on and see through walls, see through clothing, everything you could do with x-ray glasses. There are really three canonical ways society could respond. There is the cultural evolution path, which is we all adapt to a world of post-privacy norms. We get used to nudism. Then there's the adaptation mitigation path-

Daniel Barcay: Well, can I slow that down just a little bit? Just because if you're listening to this, so if you invent x-ray glasses and everyone can all of a sudden do what? They can see through walls, see through clothing, a bunch of parts of our society that we sort of depend on that we've gotten used to being opaque suddenly become transparent and things break, right? So anyway, keep going.

Sam Hammond: Then there's the adaptation mitigation path. So we could retrofit our homes with copper wiring or things that could block out the x-rays. We could wear leaded underwear; we could take a variety of mitigations. And then there's the regulation and enforcement path, which is maybe government uses its monopoly on force to pull all the x-ray glasses and say, we're the only ones allowed to use the x-ray glasses. And probably society would have some mixture of all three of these things, but what wouldn't happen is no change, right? It's a classic collective action problem.

Daniel Barcay: So what I love about this example is that on one hand, if everyone gets the x-ray glasses, you're thrust into this kind of chaos where all these people are doing things they shouldn't, understanding when buildings are unlocked, when people aren't home, it can cause chaos in society. On the other hand, if only the government has the x-ray glasses, then you're entering this kind of dystopia where it's sort of corruption, state power overreach.

Or in the third case that you're saying, where we're just adapting to all of this, it's like: throw out all of the social norms, we have to invent a new society from scratch. And it feels like we don't want any of the three of these things. We want to find a narrow path where we don't have to worry about everyone wreaking havoc, we don't have to worry about the government having all the power, and we don't have to worry about all of the social norms that we built our whole society on unraveling. We here at CHT are committed to finding that narrow path between all three of these bad outcomes. We may have some differences of opinion about how we get there, which we can discuss, but I wanted to give you a chance to set the stakes for this conversation. What are the different pitfalls of us doing this wrong, and why should people care that we get this transition correct?

Sam Hammond: Yeah. Connecting this back to my libertarian evolution, part of it was understanding that the Industrial Revolution was a package deal. The economist Tyler Cowen has an old essay called "The Libertarian Paradox" where he points out that it was sort of libertarian ideas around laissez-faire markets and capitalism that spawned the Industrial Revolution and kicked off this tremendous phase of growth. But by the same token, it set off a series of dynamics, new technological capabilities, new kinds of negative externalities, that necessitated the growth first of bureaucracies to regulate things like public health and safety, and then of welfare states to facilitate compensation for people who lost their jobs through no fault of their own. And so there's always going to be these trade-offs.

And so that concept of the narrow path comes from Daron Acemoglu, the now Nobel Prize-winning economist, who has a book called The Narrow Corridor, which is a history of the transition into modernity: how, following the English Civil War and the wars of religion, there was a realization that we needed to consolidate power within nation-states while also maintaining respect for freedom of religion, for rule of law, for equality under the law, and striking a balance between the power of the state and the power of society.

And so the challenge is, almost like in terms of differential calculus or something like that, how do we stay on this stable path and deal with the shocks that are simultaneously strengthening the power of the state and the power of society? Because AI is not merely enabling security agencies to do more bulk data collection and things like that; it's also, in aggregate, empowering individuals to have the power of a CIA or a Mossad agent. And what does that mean in aggregate as society, just by dint of there being way more computation available to everyone else, starts to overwhelm the capabilities of the state?

Daniel Barcay: So many people assume that you can have a change in technology and it won't change society nearly as deeply as it does. And I think you're right about that, but I also believe we can do this better or worse. I hear people kind of throw up their hands and say, oh, it's just inevitable, we are going to have to change everything and that's okay. Whereas I kind of worry about this, right? I think we can do radically better or radically worse at these transitions. And I'm worried that if you just say it's a package deal, then we're not factoring in our own agency to make this go differently.

Sam Hammond: There are hinge points in history where human agency matters a lot, but you can't just... To be a good surfer, you need to know when to catch the wave. It's a necessary but not sufficient condition that you know how to surf, and there are better and worse surfers, but if the wave is not cresting, then you're not going to do anything. And so there are these big tidal forces in history, and then there are ways in which things really are package deals because of the way they alter the kind of coordinating mechanisms we have in society. We had a gala last year for our 10th anniversary where we had Kevin Roberts, the president of the Heritage Foundation, a very conservative organization, speak, and Dwarkesh Patel of the Dwarkesh Podcast interviewed him and asked him for his takes on superintelligence, which I thought was fun. And he said, if we have superintelligence, we might have 10% GDP growth or greater, but we'd also potentially rapidly go down a post-human branch of the evolutionary tree.

And Kevin Roberts was like, oh, I love... I'm a conservative. I love GDP growth. I am all for that 10% GDP growth, but I am also a Christian conservative. I don't want to become post-human. And to point out that it's a package deal is not to deny his agency, but just to make us reflect on the ways in which you can't have one without the other in some cases. And if we are going to go into this future with clarity, then we need to be realistic about the ways in which these things are bundled together.

Aza Raskin: So I hear that, which is to say technology always changes the landscape on which the game is played. So the game is going to change and you can't help that, but which game you decide to play on top of that game board is still up for grabs, and the initial conditions matter a lot. But I do want to see if I can get you to tease out a little bit of the counterfactual, just imagining what an F grade might've looked like for the Industrial Revolution. And the reason why I want you to paint that out is that I would argue right now we have very little state intervention trying to put guardrails on AI. And I'm curious how that would've looked: if the Industrial Revolution had been in the same place we're in now, what would that have looked like?

Sam Hammond: Yeah, I mean, we could have had a nuclear holocaust. We could have had the Third Reich march through Europe, or the Soviets taking over the world. And you can get situations where you get locked into a less-than-ideal equilibrium. I think it is kind of miraculous that we haven't blown ourselves up so far. Obviously the Industrial Revolution was a massive boon for human living standards, wellbeing, and knowledge creation and understanding. And I think it was worth it even with all the calamity that we had to pass through. By the same token, the printing press, a much inferior technology, arguably precipitated the wars of religion. And yet I don't think we would undo that: I'm glad that we have the written word and books and academic publishing and all these things. A former colleague of mine, Eli Dourado, has a blog post called On the Collapse of Complex Societies, reviewing some of the literature on how complex civilizations collapse.

And one of the recurring themes is that you often will have these sorts of technological trends that are moving quicker than institutions can adapt. And partly one of the reasons institutions fail to adapt is because there's an incumbent that is forestalling the eventually necessary adaptation. I see this playing out with debates around artificial intelligence: we're going to have to give some things up. Maybe one of those things is our understanding of intellectual property. We may want to have restrictions on the level and degree of surveillance, but at least from my vantage point, it seems like, and I'm not saying this is good or bad, some kind of surveillance state probably is going to be inevitable in our future.

And the question is: what are the guardrails and limitations on that, and how is it actually governed? And so there are, I think, co-equal risks in trying to steer the narrow corridor in a way that's not really progressing anywhere, that's not taking the developmental trajectory of a society seriously, that's actually, in a weird way, trying to hold on to some aspect of the ancien régime.

Aza Raskin: I actually think that point you just brought up on total ubiquitous technological surveillance is a thing I don't hear talked about enough: without AI, total ubiquitous surveillance is impossible, but with AI, it's inevitable. Already, AI is enabling things like Wi-Fi routers and 5G to see through walls. And certainly with the next generation of cell phones, 6G, companies like Nokia and Ericsson are talking about "network as a sensor" as a feature. Because it's in the terahertz range, the network can tell your heart rate, your facial gestures, micro-expressions, where your hands are. And that means everywhere human beings are in cities, everything is known.

And how do you possibly fight an enemy when you have no secrets? That just seems like a thing that we're not talking about enough. And that gets into this next question of the stakes: right now in Congress, someone is trying to sneak in a provision that says states cannot make their own rules around AI, that it can only happen at the federal level. So it means there can be no state-level regulatory innovation, only rules for the entirety of the United States. And so getting onto the narrow path, what we should be doing as a society, is right here, right now. And I really want to get you to talk about this, because you've said you're not a fan of preemptive regulation: what should we be doing, in your mind? How do we get onto a narrow path?

Sam Hammond: So I think there are different buckets of things that seem obviously good. One to start with, going back to this initial-conditions point: I look at the experience of the Arab Spring, where weaker states actually failed, effectively because of information technology, Facebook, and were much less adapted to a world of suddenly ubiquitous ability to coordinate and mobilize and critique government actors and expose corruption and hypocrisy. Now, coming out of that, China and other countries saw what was going on and said, we need to get control over the information ecosystem.

And in a sense, China is now well adapted to a world of ubiquitous open source AI models and all kinds of powerful information technology, because they control the pipes. So the question is, from these initial conditions, if China or the West gets to very powerful intelligence first, is there a kind of winner-take-all dynamic where one of them pulls ahead in the same way the US pulled ahead of the Soviet Union in terms of GDP and technological capacity, potentially exporting technology that enables weaker states to surveil their citizens in a way that doesn't respect human rights and civil liberties?

So point A is: if we care about Western liberal democratic values, it's incumbent on the West to maintain and grow its lead in AI and then to export its technologies around the world. And in some cases, to export tools that will be used for surveillance but that have privacy- and civil-liberties-enhancing principles and values embedded within them.

Daniel Barcay: Well, and you add onto that the idea that it's not just about surveillance. Technology in general, but especially AI, may radically change the game theory of centralized versus decentralized states. The fact that capitalist democratic states ended up out-competing in the 20th century might have been an artifact of the technological environment of industrialization, but now AI might give an advantage to highly centralized governments. And to your point, I want to live in a world where we maintain human rights, democratic values, some of these things, but we have to figure out how that works within an AI world.

Sam Hammond: Yeah, absolutely. But I think that that's going to be ongoing learning-by-doing in many cases. And the question is: who's doing that learning by doing? And so my zeroth-order policy recommendation is always to do whatever it takes to ensure that the US and the broader West maintain their AI advantage, in hardware, in the models themselves, and in energy and the other inputs that go into these models, and then to proactively engage the rest of the world for adoption purposes.

Daniel Barcay: So let's double-click on that a bit. You have called for a Manhattan Project for AI to try to do some of this stuff. Tell us a little bit about what you think should happen in order to maintain that competitive advantage, in order to make sure that AI strengthens our society.

Sam Hammond: To be clear, the piece I wrote was a Manhattan Project for AI safety.

Daniel Barcay: I might've co-opted it a little bit for the conversation.

Sam Hammond: I've been critical of this idea that the federal government should have some secret black-site, five-gigawatt data center and build an AI in a lab. I think that would be very dangerous, and actually in some ways decelerationist, because if we just let the companies proceed, they're going to move much faster than the Department of Defense. So there's going to be a component of this that is international standard-setting. A component of this is fixing our own internal problems around energy permitting and data center infrastructure, and then controlling the export of our most advanced hardware, like NVIDIA chips. China has a trillion-yuan, roughly $138 billion, state VC fund that is their Stargate project in a sense. It's doing this big push to build data centers for their leading tech companies, and right now the best chips in the world are export controlled. And so there's this cat-and-mouse game going on in how we allocate global compute. And I think that's a very important vector for maintaining the aggregate amount of compute that is in the jurisdiction of Western countries or our close allies.

Aza Raskin: I think one of the cruxes in this conversation often slips by, and I'm curious how you'll react. AI as a technology is very different from every other technology, because with other technologies, if you need to build a more powerful airplane, you need to understand more about how airplanes work. If you want to build a taller skyscraper, you need to understand more about the foundations of building. But with AI, you don't actually need to know more about how the internals of AI work to build a bigger, faster, more powerful, more intelligent AI. And that means I think there's a confusion when we say we need to beat China; there's a smuggling in of, well, that means that whatever we're building, we can control. But actually what we've seen is that the more powerful the models, the less able we are to control them. And so shouldn't the race be towards strengthening society, versus racing for a technology that we don't yet know how to either individually or cybernetically control?


Sam Hammond: I guess I question the premise. I think as these models have gotten more powerful in some senses, they've gotten easier to align. I think we're rapidly moving to a world of reinforcement learning and I think that's going to open up a whole host of other problems. But where we stand today, in some sense, the biggest, most powerful LLMs are vastly more aligned and controllable than the ones that we had two or three years ago.

Aza Raskin: Although we're also seeing increasing rates of deception, like o3 deceives a lot more than previous models.

Sam Hammond: And I don't think any of those things are insurmountable. A lot of these AI safety debates end up conflating a variety of different concerns people have. One of them is the classic alignment problem: how do we control very powerful superintelligence? The things I've been talking about are more like: suppose we have very controllable superintelligence, the world still looks very different. And I think we're going to move into a world where there's not just one singleton AI that takes over, but one where there's a diffusion of very powerful capabilities in many people's hands and powerful AI agents all doing more or less what you ask them.

And there will probably be cases of "rogue AIs" that shut down the Colonial Pipeline or do a ransomware attack or something like that. But I think with just basic access control techniques... I don't think we need to have a full mechanistic, interpretable understanding of how these models work to know, almost at the level of physics, that they have the behavior that we desire. We've still entered into a very new world, and most of the biggest problems are still unresolved, because obviously people have very different interests, right? An ex-boyfriend could sic a malicious AI bot, one that is fully aligned in the sense that it does exactly what he asks, on his ex-girlfriend, and have that thing autonomously replicate itself and constantly terrorize her. That's not an alignment failure on the AI's part; it's the humans who are not aligned.

Daniel Barcay: Okay. I mean, fair enough. So if we step back a bit, what you're talking about is two big problems. Alignment is typically a question of: is the AI doing what you're asking it to? And then there's the question about the fragility of our institutions. So forget about the alignment problem for a second. Let's assume you're right. I question that a little bit; I think it's going to be much harder to make sure that these things are aligned, but never mind. Let's keep that to the side for a second.

You talk a lot about the fragility of our institutions, and I really want you to go more deeply into that, because I'm worried that our institutions are not going to be able to keep up with this and that, even with aligned AI, we're going to enter a period of very chaotic transition that is quite avoidable, in my opinion. And I think from some of the reading I've done of your work, we're on the same page here: we need to really watch out to make sure that deploying this recklessly across our society doesn't create a whole bunch of chaos that we wish we had never caused. So can you walk us through a little bit of your view on institutional fragility with AI, and a little bit of what we can do to avoid it?

Sam Hammond: Sure. So, big picture, what I see is this differential arms race between public-sector and private-sector AI diffusion, where AI is diffusing much more rapidly into the private sector than the public sector. And so I think there's a need for more accelerated adoption in the public sector. That's point number one. But that doesn't really go quite far enough, because think about the different vectors in which AI is going to cause disruption, not from misuse, but just from valid use at unprecedented scale: essentially institutional denial-of-service attacks. You can imagine, if we all had AI lawyers in our pockets, we could all just be suing each other constantly. The courts are going to get overwhelmed unless they adopt AI judges in some form.

Because I don't foresee our court system, which still uses human stenographers, adopting AI at that pace. I think there is a world where there's a kind of displacement effect, in just the same way that Uber and Lyft and modern ride-hailing technology displaced licensed taxi commissions, right? Maybe we will have the Uber-but-for-drug-approvals, or the Uber-but-for-adjudicating-contracts-and-commercial-disputes.

Aza Raskin: And when you say Uber, do you mean that there's some startup somewhere that says, aha, I'm going to solve law. There's going to be a new... It's like court, but without all the vowels, and then what was part of the state moves into something that is private and that private thing is now subject to all of the VC incentives, is that what you're saying?

Sam Hammond: Yeah, and it might not be one company; it might be something that's done competitively or bottom-up. You could easily imagine a world where a lot of things that today require formal institutional processes and bureaucracies get sort of pushed out to the edge, using various kinds of raw intelligence. And that may not come from any one company, but it will look like a very different way of doing business.

Daniel Barcay: Can you tell us which institutions do you think are the most vulnerable for disruption around AI and what are the kinds of disruptions you're expecting to see?

Sam Hammond: First of all, you look at where there's going to be the most rapid progress, what institutions we already have to govern those processes, and then what their willingness is to actually adapt and co-evolve. Look to where there's likely to be very rapid AI progress. Dario Amodei, in his Machines of Loving Grace essay, talks about the potential for, in the very near term, AI scientists that could perform basic R&D in biology and other areas autonomously, in parallel with thousands of other AI scientists. That could in theory lead to a speed-up of scientific discovery, collapsing what used to take a century into a decade or less. But we have institutions like the FDA that are in charge of approving drugs, and they do that through a three-phase clinical trial process where you have to get human volunteers or patients assigned to treatment and control groups.

And it's a very long, drawn-out process. If you stretch this out long enough, you could even imagine us one day having human models in silico that could completely characterize the effect of a drug on a particular disease without ever needing human trials, and have that validated against humans to prove that it's accurate. But the barrier is not that the people at the FDA don't see this coming; it's that these things are written into law. And the question of how fast the FDA can adapt is fundamentally the question of how fast Congress can write new laws and how forward-looking they are. What makes it so hard is that it's not just the FDA, it's not just drugs, it's not just science. It's going to be everything everywhere all at once. And so this is what makes me gravitate towards seeing us not quite getting this all right, and towards things shifting, often, to private-sector solutions.

Daniel Barcay: But help me see the balance there, because in your blog post 95 Theses on AI, one of the things you say is that periods of rapid technological change tend to be accompanied by utopian political and religious movements that usually end badly. When you say, okay, we're going to revolutionize the FDA and just let this play out even though we're not prepared for the changes that are coming, that's what comes to mind for me. It feels like a utopian movement saying we can just allow AI to run roughshod across our institutions, and that usually ends badly. So what's the balance here? I agree that we don't want our existing institutions dragging this into the dirt, but at the same time, I don't want sloppy thinking, this sort of religious belief that it will all end well, to keep us from thinking hard about how we roll this out.

Sam Hammond: Yeah, I certainly don't have a religious belief that it'll go well. I think what characterizes the utopian movements is believing in some end state or knowing how the story ends and trying to move us closer to the end of the story. And I don't know how the story ends, and I don't think anyone really knows how the story ends. I think what we can know are sort of general principles for complex adaptive systems. There's no one in charge of the thing we call America. And when I look at the things that are barriers to AI, you mentioned earlier the AI moratorium that's been proposed.

I think that's built on the faulty assumption that the things slowing down AI are AI-specific laws, when actually the thing that's going to slow down AI diffusion is all the laws that deal with everything else: the laws in healthcare or finance or education. So if I could wave my magic wand and do two things at once, I'd, A, have in some sense more rigorous oversight over AI labs, more AI-specific safety rules and standards for the development of powerful forms of AGI, at the same time as I'm essentially doing a jubilee on all the regulations that currently exist in most sectors. Not because we want a world that's totally deregulated, but because those regulations are starting to lose their direction of fit.

Aza Raskin: What I'm hearing you say is that we're going to need new-paradigm institutions. It reminds me of a moment at the Insight Forum, that moment in history when for the first time Congress called all of the CEOs, from IBM and OpenAI and Google, along with Elon Musk and Mark Zuckerberg, to come to the Capitol to answer the question: what's about to happen? How can this go well? It was a funny thing for me to be sitting there across the table from $6 trillion of wealth. But after that event, I went for a long walk in DC and somehow ended up at the Jefferson Memorial. On the southeast portico, I saw a quote of his that I'd never seen before. It said, "I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind.

As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors." And that just really hit me, because I'm sitting in the Capitol and we're basically having a debate where we're not even really talking to each other. This is such an old institution. There are many ways of updating our institutions using the new technology so that they can scale with AI. So I'd just love for you to get specific. You were starting to talk about some of the other ways we might fundamentally upgrade our institutions.

Sam Hammond: Take the fact that right now Congress is debating the "big, beautiful bill," this big tax bill that runs over a thousand pages. Why don't members of Congress, who often have 24 or 48 hours to read these things, have AI tools where they can just drop the bill in and ask: what does this do for my state? Does this have any poison pills? Are there any provisions in this law that say one thing but could be used to do something else? You could imagine this would be incredibly useful, not only in itself, but because Congress is notoriously short-staffed. This is just one area, and I've done a little bit of work on pushing Congress to modernize its tech stack and actually begin embracing these tools, because as it stands today, most congressional offices that I talk to use ChatGPT on a regular basis, but in violation of their own guidelines.

And you see this up and down the federal agencies as well. This goes back to my FDA point: it's not enough to give FDA officials an AI copilot. We're going to need fundamental process reform. And I think a lot of the more scalable mechanisms are going to look something like Twitter's transition from having a trust and safety team to having Community Notes, where they went from a kind of elect deciding which posts violate the rules to something bottom-up.

We can critique Elon Musk's broader interventions and his own trustworthiness, but the Community Notes algorithm is incredibly inventive. It actually aligns incentives so that when groups of people who tend to disagree agree on a particular note, that note gets amplified. So are there other Community Notes-like solutions for the things that government does? And then, are there areas of government that are just genuinely obsolescent? I think this is where the biggest tooth-pulling exercise will be, because certain things governments do are technologically contingent. Will we need a National Highway Traffic Safety Administration if all the cars are autonomous and we don't have a single traffic death? Will it just wither away, or will it metastasize into some other beast? I think that's going to be one of the biggest fights.
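[Editor's note: Sam's one-line description of the mechanism, that a note is amplified only when raters who usually disagree both find it helpful, can be made concrete. The toy sketch below is in the spirit of the open-sourced Community Notes "bridging" approach, a matrix factorization where a note's intercept captures helpfulness left over after a viewpoint factor absorbs partisan agreement. It is not the production algorithm; the function name, hyperparameters, and two-camp synthetic ratings are all illustrative.]

```python
import numpy as np

def bridge_rank(ratings, n_users, n_notes, dim=1, epochs=2000, lr=0.05, reg=0.1, seed=0):
    """Toy bridging-based ranking (illustrative, not the production algorithm).

    ratings: list of (user, note, value) with value in {0, 1} (not helpful / helpful).
    Fits rating ~ mu + b_u + b_n + f_u . f_n by stochastic gradient descent.
    A note's score is its intercept b_n: the helpfulness that remains after
    the user-note "viewpoint" factors f_u . f_n absorb same-camp agreement.
    """
    rng = np.random.default_rng(seed)
    mu = 0.0
    b_u = np.zeros(n_users)
    b_n = np.zeros(n_notes)
    f_u = rng.normal(0, 0.1, (n_users, dim))
    f_n = rng.normal(0, 0.1, (n_notes, dim))
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            # compute both factor updates from the pre-update values
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))
    return b_n  # higher intercept = more cross-viewpoint "helpful"

# Two camps of raters (users 0-2 and 3-5). Note 0 is rated helpful by BOTH
# camps; notes 1 and 2 are each backed by only one camp.
ratings = [(u, 0, 1) for u in range(6)]
ratings += [(u, 1, 1) for u in range(3)] + [(u, 1, 0) for u in range(3, 6)]
ratings += [(u, 2, 0) for u in range(3)] + [(u, 2, 1) for u in range(3, 6)]
scores = bridge_rank(ratings, n_users=6, n_notes=3)
print("top note:", scores.argmax())
```

Even though notes 1 and 2 get the same raw number of "helpful" votes per camp, the cross-camp consensus note should come out on top, which is the incentive-alignment property Sam is pointing at.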

Daniel Barcay: I have to admit, I'm both hopeful and worried. I love the pro-democracy tech angle, especially using LLMs to find ways of supercharging our governance, not using this 19th- and 20th-century system to try to govern but actually getting inside and changing some of this stuff. But I'm also worried about these short timelines: introducing a technology into the heart of some of our most important facets of government when we still don't really know how it works, or whether it's ready. Do you have any ideas on how those transitions should go, and over what timeframes? When is the tech ready to integrate? When the rubber meets the road, do you have any specific recommendations?

Sam Hammond: Yeah, I'm going to say things that will sound a little contradictory. On the one hand, I think it's important to open up the ability for folks within government to experiment. Right now the way the rules are written, for IT procurement for instance, is all about compliance and minimizing risk, in this very risk-averse culture. But that risk aversion came from codifying processes that worked in the past. The analogy I sometimes give is that when you're designing, say, a park or the quad for a university, you could lay down the sidewalks you think are the right sidewalks, or you could leave the field bare and let people choose the paths they walk. And when you start to see a path forming, that's where you build the sidewalk. There are going to be analogous things with AI, because it's so general-purpose that we don't know all the ways people could use it productively.

So we need pilot programs and a more permissive ability for individuals within government, and within corporations and other large institutions, to experiment without needing permission and see what works. Only later do you start codifying things. At the same time, we've also seen in government, when they do these mega-projects or big pushes for adoption, that it's really important not to be too early. When you're even a few years too early to a technology, where everyone sees where the ball is going, you can get locked into something inferior.

So the way things have historically worked is that the US government has been a fast follower of the corporate sector. I think we're going to need to see something similar in this era, where the hyperscalers of our day are like the Carnegies and the Rockefellers of an earlier era, and they need to bring their learnings into government and make the government hyperscale too.

Aza Raskin: What it seems like you might be advocating for here is treating the government a little more like a corporation. We've just seen some version of that with DOGE and Elon: a whole bunch of twenty-year-olds rushing into the government. When we last talked, at the Curve conference, you were very optimistic, saying we might be living in the best possible world. So now that we've seen it play out: are we living in the best possible world? How has that gone?

Sam Hammond: Yeah, ex ante I thought, and still think, that if there was going to be this narrow corridor path, it would take something like DOGE, something detached from all the political constraints and public choice problems that would hold back more dramatic reform. As DOGE has played out, it's obviously been a huge mixed bag, and that's probably because it's not a singular thing. DOGE is in part a tool to enact the president's agenda. The reason they went after USAID as their first target was that the president signed an executive order putting a pause on all foreign aid. It wasn't that Elon or the DOGE kids had it out for foreign aid; they were tasked with using information technology as the conduit to reestablish executive control over the bureaucracy, and that just happened to be the way it played out.

At the same time, and this has not been nearly as well reported, behind the scenes there's a lot of genuine modernization going on. I have a friend at Health and Human Services who's now the CIO, and HHS is a sprawling agency with, I think, 17 or 19 sub-CIOs. One of his problems right now is that no one in HHS can share a file with anyone else in HHS because they're all using different file systems. It's this mundane fragmentation, accumulated over time, that I think DOGE should be trying to solve, because at some point we are going to have very powerful AI bureaucrats, for lack of a better word, tools and agents that could replace hundreds of thousands of full-time employees within the government, and we need some of that infrastructure in place. There are some basic firmware-level government reforms that are needed, and DOGE is addressing them while also being a bull in a china shop.

Daniel Barcay: So it seems like we're stuck between this adaptation regime you were talking about, making the US government resilient by adopting AI, and your worry that this disruption may be enough to cause these institutions to collapse entirely. It seems like we're back in paradox territory. Can you talk a little about that? Do you think adaptation is going to work?

Sam Hammond: I've mentioned the innovator's dilemma before. Think of whether the taxi commissions would have built their own Uber and Lyft: by default, they don't. The question is, can you do the impossible and defy the innovator's dilemma? One of the big disadvantages public institutions like the federal government have relative to private institutions is that private companies are constantly being born and dying, in a constant rejuvenation process, and we only have one federal government. That being said, the US government has undergone what you could call refoundings or reboots, whether that was Lincoln or FDR, and you could say the Great Society was a partial one. We've gone through these constitutional resets in the past, and I see the Trump administration, more broadly, as trying to facilitate another one. Now, it's bundled up with all kinds of other political commitments, around trade, around immigration, things I don't necessarily agree with.

What it comes down to is this: is the bureaucracy a headless beast that just keeps going, business as usual, on autopilot, or is there some source of agency within government that can actually begin reorganizing it and preparing it for major change? This gets back to my earlier point: we can't be utopian and know what the end state is, but we can apply general principles for complex adaptive systems. Among those are rapid feedback loops, experimentation, and fail-safe testing, and at the moment we completely lack the infrastructure for that. So there's work to be done.

Aza Raskin: I would love that. Just as I think you believe there needs to be a Manhattan Project for AI safety, I think we need an Apollo mission for massively upgrading society's defenses, because VCs generally are not going to put money there, so the market isn't going to get there until it's a little too late. We've seen examples of this in cybersecurity: as our infrastructure digitized, there was no strong incentive for private corporations to massively invest in their defenses, and as a result, America's cyber capacity is deeply vulnerable. I think we're going to see something similar with AI unless we can mount a kind of large-scale Apollo mission, which is not to say some big centralized thing, but we certainly need enough resources to accelerate our defenses.

Daniel Barcay: I keep coming back to your 95 Theses on AI, because there are so many gems in there, but there's one I really love: building a unified superintelligence is an ideological goal, not a fait accompli. Something in there really resonates with me. You'll hear people say, we're building AGI, this is the goal. And you'll hear others talk about it as if it's not even a goal but a foregone conclusion, just the tech path in front of us. Aza and I, I think, are quite aligned that how we build this technology is very much up to us, and that whether we race to one goal or another is a choice. So can you talk about why you say it's an ideological goal, and how you see that?

Sam Hammond: Yeah, I mean, machine learning and deep learning are general-purpose technologies, and we could use them to construct better weather forecasts or to solve protein folding. But this idea that we need a single, coherent, unified system, with agency, sentience, and so forth, that is vastly superior to human intellect in every possible way, doesn't seem necessary to me. It's not like there's a big market demand for that. I definitely see the case that there's market demand for human-level agents that do routine work, office work and things like that.

So I do worry, and this gets into the ideological undercurrent in Silicon Valley, that there's a strong, almost messianic milieu of: we are going to bring on the sky god. And I don't think we know whether that's inevitable or not. It does seem clear that if something like that were to happen, it wouldn't be some big structural thing. China certainly isn't racing to build an AI sky god; they're racing to build automated factories. They're much more pragmatic and practical. It's going to come down to the CEOs of a handful of companies with a kind of glint in their eye.

Daniel Barcay: I think that's such an important point. Jaron Lanier, for example, talks about how we should be building AI like tools and not like creatures. And personally, I think it's a real choice that we have, not some foregone conclusion. We can build a more tool-like future with AI and not just build the sky god.

Sam Hammond: I agree with that. It would certainly be safer.

Aza Raskin: And not building such a thing, or deploying these systems safely, will require human beings to do perhaps the hardest thing: solving multipolar traps, learning how to coordinate when our behavior is so often bound by the fear of me losing to you, my company losing to your company, my country losing to your country. The fear of all of us losing has to become greater than that paranoid fear of me losing to you. That, to me, is the calling card of how to walk the narrow path, or the narrow corridor: solving the ability to coordinate at scale while still maintaining honest rivalry.

Sam Hammond: Yeah, 100%. There are few enough actors in the world that will be able to build those systems in the near term that they should, at least in theory, be able to coordinate, in the same way we have coordinated over nuclear weapons proliferation, biological weapons, chemical weapons, even now gain-of-function research. And in many ways, the stuff going on in these AI labs is a kind of gain-of-function research.

Daniel Barcay: Right. For listeners who don't know, gain-of-function research in biology is where you deliberately engineer into a biological organism an undesirable characteristic, for example, the ability to jump species or to become more infectious. Then you study it to figure out what makes that happen, in theory so you can prevent it from happening. But that's where the hubris comes in. It's very Promethean: you're giving an organism the ability to do something you don't want it to do, and then assuming you can control it.

Sam Hammond: The one saving grace is that AI models don't get into your respiratory tract.

Daniel Barcay: Right. Right. They just get into your economy and get into your politics and get into-

Aza Raskin: And then get into your mind. Well, I just want to say, Sam, it's been such a pleasure having you on the podcast. We really are in the closing window where we can still shape what AI is before it becomes fully entangled with our GDP and we can't make changes. We're in that final period of choice. And even though we come from, as I said, very different ideological stances, it seems like there's a lot we've agreed on, and some we haven't. I'm just very grateful to have had this conversation and to get it out to a wide group. So thank you so much for coming on Your Undivided Attention.

Sam Hammond: Thank you. It was a lot of fun.

Aza Raskin: Yeah. Thanks Sam. This is great.
