What does it really mean to ‘feel the AGI?’ Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.
In this episode, Aza Raskin and Randima Fernando dive deep into what ‘feeling the AGI’ really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.
As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?
Join Aza and Randima as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.
Aza Raskin: Hey, everyone. This is Aza Raskin, and welcome to Your Undivided Attention. There's a question that you'll hear a lot around Silicon Valley these days. Can you feel the AGI? AGI is, of course, artificial general intelligence. And while there are many different definitions, and people actually fight a lot over the definition, because it turns out there's a lot at stake, you can still broadly understand AGI as the ability for an AI to replace human beings behind a screen. That is, if the economy can't tell that we swapped out a human with an AI, well, that's what AGI is.
But what does it mean to feel the AGI? It means to feel the weight of the massive wave coming over the horizon and heading towards us. It's to take seriously the idea that AGI or something even more powerful is coming and soon.
Now, timelines vary. Sam Altman has said something like AGI could be built this year. Anthropic's Dario Amodei says next year. Demis Hassabis of Google DeepMind gives it five to 10 years. And in a recent blog post, the former OpenAI researcher, and now whistleblower, Daniel Kokotajlo predicts that we will have superhuman intelligence by 2027. My guess is 2026, 2027.
Now, this can all feel like science fiction, but we're living at a time when things that felt like science fiction just become real. I'm the cofounder of Earth Species Project, and we are using frontier AI to decode animal communication, animal language, and we believe that'll happen before 2030. So we have to take this all seriously, but it's also critical that we can distinguish hype from the raw facts from knee-jerk skepticism.
So to do that, I've invited back CHT cofounder Randy Fernando, who spent years at NVIDIA and has been tracking the space deeply from a technical perspective. By the way, we recorded this conversation when I was in the tropics. So you'll also hear some very nonhuman participants chiming in with their thoughts. And with that, let's get into it. Randy, thank you so much for joining us again on Your Undivided Attention.
Randy Fernando: Glad to be here, Aza.
Aza Raskin: I don't actually mean to make light of this topic, but are you feeling the AGI?
Randy Fernando: I think I do. I do. And one of the things I say is that if you actually try the models a lot, you try the technology, it's hard not to feel at least some of that. Right? If you use a new model like OpenAI's o3 or Gemini 2.5 Pro or Claude 3.7, they're all pretty smart. And if you look at demos like OmniHuman-1, which brings images to life and pairs audio and does lip sync, and it all looks fantastic, or you look at the latest voice generation or music or video. When you see those, it's hard not to feel the AGI, and I think a lot of people who may not be feeling it just haven't seen what is possible.
Aza Raskin: Mm-hmm. One of the things that I think is really important here is that the way most people experience AI is a chatbot. The way they understand its smartness is that they ask questions and sort of evaluate how smart the answers are. But what that misses is when, say, o1 or o3, which are OpenAI's reasoning models, gain new capabilities. Before o1 came out last December, if you asked one of the models to solve a PhD-level physics question whose answer isn't on the internet, the model would fail miserably. Then o1 comes out, and suddenly, it can answer 70% of them. Most people never experienced that because they're not asking PhD-level questions. And so, they don't see the exponential rate of progress.
Randy Fernando: That's right. That's right. And I think then getting to this AGI question, it doesn't take a fully general AI to already have a massive economic and societal impact. And when we say feel the AGI, one of the things we want is for people to feel the AGI not just in terms of the technology, but in terms of how it would land in society, feel into the world that you want for yourself and for the people you care about.
You can have these really powerful technologies, but you don't want AI that's going to trick you in a cyber scam, or have deepfakes happening of your kid at school, or to be competing against AI agents when you're, let's say, buying a car or buying a house, or being manipulated when you're trying to vote, or having AIs that train on your work and then come back and compete against you in the marketplace where you work, or being automated out, and that includes things like Uber and Lyft and DoorDash with respect to autonomous vehicles. Those kinds of things we have to feel into as well, and we want people in companies and governments to feel into that too, so we can make the right decisions going forward.
Aza Raskin: Now, before we get into definitions, I think we need to have a disclaimer about definitions, because definitions are often used as a way of delaying or adding doubt. They're a tool that people deploy. So a perfect example, before we try to define AGI, is social media: there's been a lot of firepower put behind, "Well, what exactly do you mean by 'Is social media addictive?'"
Let's define addiction before we say it's addictive or not. And meanwhile, people are staring into their phones and kids are lying in bed, scrolling for hours on end. You don't need to get the definition perfect for, as you're saying, there to be real-world impact. And so, industry often will try to make this a conversation of, "Well, we have to know exactly what a harm is or exactly what a definition is before we do anything about it." And meanwhile, they're just rolling out, bulldozing society. So I think that's really important to say.
Randy Fernando: That's right.
Aza Raskin: And there are already hundreds of billions to trillions of dollars at stake, because in the deal between Microsoft and OpenAI, Microsoft gets access to OpenAI's technology until they reach AGI. So obviously, there's now going to be a huge push from OpenAI to define AGI as something that happens a little sooner, so that they get access to all of the economic benefits, and that's just one example of the kinds of weird incentives you're going to see around what AGI is.
Randy Fernando: There's another incentive, which is, when you're talking to a customer or to an investor, you are going to represent your technology as being more advanced. And so, the definition of AGI gets a little looser. If instead you want to extend the timeline, say when you're talking to the public and you want to say, "Hey, don't worry. We are still really far from AGI," now you make the definition very stringent. You say, "It's like Level 5 autonomous driving. It's got to be perfect." So, of course, it's going to take a long time to get there.
And so, you can see how adjusting the definition adjusts the timelines. There's this false dichotomy between near-term problems and the really super smart AI that goes off the rails. And sometimes people put these in this big tension, but I want to make the point that solving near-term problems will also help with the longer-term problems almost all the time.
So here are three examples. Right? One is with alignment and reliability. How can you be confident that an AI system will accurately do what you ask every time? This becomes really important. If an AI agent has your credit card, now you care a lot about it. You don't have a lot of tolerance for error, and it also applies to AI agents that are operating in our systems, like our financial system, for example. So that's one, alignment and reliability.
The second one is interpretability. Do we understand how the models work? Do we understand how they are reasoning and sort of coming to conclusions and taking actions? We have a long way to go, even on the systems we have today. These are just examples, but the last one is an example of the impacts on jobs and meaning when we have automation at scale. How do we handle that tsunami, and how many resources are we allocating to that problem and all of these other problems?
These are much simpler problems in 2025. This is the simplest version of these problems we are going to have in the coming years. And if we can't even solve them, and if we're not even dedicating resources to them commensurate with the resources we are putting into advancing the technology, how are we going to handle AGI and superintelligence? So we've got to get these prerequisites in place.
Aza Raskin: Yeah. What you're, I think, pointing out here is that trying to define what artificial general intelligence is and whether we've crossed it or not sets up our minds to look for the bright line, and that harms will only happen after that bright line. And, of course, intelligence is a multivariate schmear. It's not clear when you pass it. And as we automate intelligence, there are going to be increasing changes and risks to society, and we need to be tracking those along the way. And if we don't, then we're setting ourselves up to fundamentally fail.
And just note that debates about the definition of where that line goes are really about not taking accountability for the harms that happen along the way. I think that's critical to understand. Now, let's quickly try to do it anyway. Even though we've just said all that, we shouldn't be wary of people who genuinely try to define AGI, and I think it's really good to talk a little bit about what that means. And Randy, I think you've been putting a lot of thought into it. So give us your best shot.
Randy Fernando: So I tend to lean towards the more practical definitions of AGI, because they bring the timeline for caring in closer, so we can think more about the consequences. I would say AGI is AI that's able to match human performance in the cognitive realm. I think Aza also said it would replace a human, a reasonable human.
Aza Raskin: Replace a human at a computer.
Randy Fernando: At a computer. That's right, on cognitive tasks and computer-type tasks. So that includes language, solving problems, explaining ideas, but also art, music, video. It requires the ability to complete long sequences of tasks reliably, so tens or hundreds of steps reliably happening. And it has the consequence of being able to largely or fully automate hundreds of millions of cognitive jobs, generate significant economic value, and accelerate scientific advancement, which leads to compounding effects.
Aza Raskin: And just note, what are the incentives of the companies? The incentive of each company is that it needs to beat the other companies to making the most powerful version of AI. And if you can have your AI code for you, then you can accelerate your own rate of progress. And that, of course, puts us in the most dangerous world, which is AI working to make AI faster, everyone racing, needing to make that go fastest.
And so, their AIs are starting to be able to model how AI researchers work in collaboration with other AI researchers, which means you can make an agent which accelerates the work. They can do the work of interns as of last year, and they're getting better and better and better. So that's where things are going. And again, note, you don't need to define AGI anywhere in there to know that this just accelerates the rate of progress.
Randy Fernando: Yeah. And if you want to feel it just as a listener, if you try something like Deep Research, you can get a feel for this. You say, "Hey, do some research on a complex topic," and it will go away and do a bunch of thinking. So you can get the feel for what's happening to research and this level of automation, and that is just a tiny flavor, a tiny taste of what it's like inside the companies.
Aza Raskin: Now, I just want to name one distinction, because we haven't got there yet. Some people talk about AGI. Other people talk about ASI, and this is artificial general intelligence versus artificial superintelligence. And just, again, this may all feel like science fiction. Why are we having this conversation when there are real problems in the world? There's geopolitical instability, and we're having what feels like a conversation about something that is artificial superintelligence. What is that?
But the distinction is, artificial general intelligence is sort of roughly at human-level intelligence. Artificial superintelligence, well, that's intelligence beyond the human level. Some people call ASI not just smarter than humans, but smarter than all of the cognitive output of humanity combined. And so, there's some distinction there, but both of those are things that some experts think we might reach by 2030 or 2035.
Randy Fernando: I would just add two quick things there. One is in terms of intelligence and human intelligence. Again, this point about patterns, so much of what we consider to be intelligence is pattern recognition and extrapolation. So it's hard to say exactly how much, but it really is a very large amount, and these things are very good at that. These transformers are very good at that.
The other thing with ASI is that it will also include the ability for AIs to collaborate very efficiently at scale. So you can think of specialized versions that are now talking to each other. You can imagine a massive compounding effect. And a lot of this, again, is not science fiction now. You can start to see it as we see more of these demos of agents working and higher-performance models.
You can extrapolate to that more easily. And the last thing, I think, is worth mentioning, is that a lot of times, people interchange AGI and ASI. I think we sometimes do that too. Just as a note, you will hear those terms. AGI really is the very capable but weaker one, and ASI is the really strong, massively capable one.
Aza Raskin: I think we should construct an argument. And Randy, I'm going to lean on you a little bit for this and then I'll interject when I need to. Let's start with constructing the argument that AGI is possible. What trends are we seeing? Why should we believe that we can get there?
Randy Fernando: So here's what people would say. Right? People who believe strongly, they would say things like, "Look, we've had these scaling laws. We take compute, data, and model size. We keep growing those, and it's worked. It's brought us really amazing results. It's brought us emergent capabilities as the models were growing. We've got new GPUs coming all the time, larger data centers that we're investing in." So that's going to continue to go, even if the rate's changing a little. That's driving innovation.
We've got transformers that are working really well. There are other people looking at new architectures. So that's all very promising. Recently, we had reasoning and reinforcement learning working together. There's a lot of headroom there. We found a big jump in performance. The performance graphs changed in slope when we added reasoning. New benchmarks are being beaten regularly. Hallucinations are dropping consistently. They're not zero, but they're dropping pretty fast.
And in terms of data, reasoning models can generate quality data. So we don't need to always rely on human data, which we do tend to run out of, and new models can use tools really well. So now the models are smart enough to rely on external tools, and this is important because the external tools are usually very capable and they don't make mistakes.
So, for example, the calculator doesn't make mistakes. Python doesn't make mistakes. If you write the code right and you run it, it will run the same way every time. So all these are reasons why we should take AGI very seriously. And now, Aza, maybe you can take the skeptic side and walk us through what are the things that skeptics say that give them pause.
Aza Raskin: Yeah. Well, let me give a run-through of some of the kinds of arguments that skeptics make. And just to name my own internal bias here: up until the end of last year, I was much more hesitant. I wasn't sure. I could see convincing arguments both ways. And so, I was sitting in a place of maybe. At the end of last year, after the reasoning models started to be deployed, that really shifted my stance to, I think it is much more likely than not that before 2030, probably by 2027 or 2028, we'll have hit whatever functional definition of AGI you pick. So I just want to name my bias for everyone.
Randy Fernando: Yes.
Aza Raskin: So first big skeptical argument is that this is just motivated reasoning from the labs. Right? It is in their interest to hype the capabilities of their models, because that's what gets them investment. That's what gets them better employees, so they can publish more, so they can get the next big bump in valuation, so they can raise more money and gain economic dominance and market dominance.
Another one is that it's just going to be too expensive, and that, yes, the models continue improving, but there is but one internet, as Ilya Sutskever, the cofounder of OpenAI, would say. Therefore, we will run out of data, and the models will stop getting better. And indeed, it sort of looked like that was the case. Right? We were sort of stuck at GPT-4 for a long time, and now we're at GPT-4.5. What's going on there? Well, that's because the models were learning the patterns of the world via data on the internet. We ran out of data. So we stopped learning better models of the world, what machine learners would call representations.
And then along came, at the end of last year, reasoning models. DeepSeek does this. o1 and o3 do this. A lot of the companies now have these different sorts of thinking modes, and what that does is use the base model, which is a kind of intuition, and then use the intuition to reason, to have chains of thought, trees of thought, to find the very best answers by thinking through many different pathways.
And what OpenAI found is that you could get a much, much better answer by having the computer think for, say, 100 times longer. So the longer it thinks, the better the answers. The better the answers, the better data you have now for training a better base model intuition, and that thing can go recursive. And so, a lot of the critiques that people had around, "Well, we're going to hit a data wall," is what they called it, "so we will never get to AGI," those fell away at the end of last year.
And actually, just so people know, my belief about how fast we're going to get to general intelligence changed. Before, I'm like, "Well, I'm not sure. Maybe if we keep scaling up, but we don't yet have a good solution to the end of data." After o1 and o3 came out, that was proof positive. We were sort of waiting for that. We didn't know if it was technically possible, but everyone knew that that's what the labs were working towards. After the releases of those models, the question about data in my mind went away, and now it feels like there is a straight shot.
Randy Fernando: Yeah.
Aza Raskin: Another argument that people make for why we might not reach AGI is that the models are trained to pass the test. That is to say, they're very good at solving benchmarks, but maybe they're not as good at solving open-ended, ill-defined, long-term tasks. And so, we will get machines that are very intelligent in a narrow way, although narrow means anything that can be tested for. That means AI will be very good at any subject that has theoretical in front of its name, math, theoretical physics, theoretical chemistry. AI will be very good at that, but maybe those smushy things that human beings are very good at, AI will not be good at.
Another one is that this is not real intelligence, that AI doesn't really understand the world. They don't really reason. They don't do it the way humans do. Look, humans learn on so much less data than the AIs do. And so, they're just memorizing and speaking to the test. They're not really doing things. And then the final one is geopolitical risk, that the world is heating up. There's going to be bottlenecks in supply chains. And so, there just aren't going to be enough chips. So I think that's sort of the sum total of all the best arguments that I've found. Yes.
Randy Fernando: I'd add one more, which is reliability. They're not reliable for longer sequences of steps. The number of steps they can do is increasing every month, but when you say, "Hey, can you do three steps?" it works. When you do nine steps, 20 steps, it starts to fail, and those probabilities compound very fast. So as soon as it can't do something for five steps, it starts to really fall on its face for longer sequences. So that's another reason to say, "Hey, gosh, we're a long way from that." Yeah.
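To make that compounding concrete, here is a minimal, purely illustrative Python sketch. The 95% per-step success rate and the assumption that steps fail independently are assumptions made for the sketch, not figures from the conversation.

```python
# Illustrative sketch: how per-step reliability compounds over a multi-step task.
per_step_success = 0.95  # assumed per-step success rate, for illustration only

for steps in [3, 9, 20, 50]:
    # Assuming steps succeed or fail independently, the whole task succeeds
    # only if every single step does.
    task_success = per_step_success ** steps
    print(f"{steps:>2} steps -> {task_success:.0%} chance the whole task succeeds")

# With these assumed numbers: 3 steps ~86%, 9 steps ~63%, 20 steps ~36%, 50 steps ~8%.
```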
Aza Raskin: Yeah. And if you put these together, you get a story, a narrative for why we might not reach AGI by 2027, 2028, or even 2030. It goes: the models are succeeding, but only on specific benchmarks, not on real-world tasks. We try to get them to do real-world software engineering, and they keep failing. They can't do long time horizons. So they're good for toy problems. Because they're only good for toy problems, eventually, that catches up with the labs. The labs can't raise money because they're not economically valuable enough.
And so, even though maybe it would be technically possible to build models if you can get enough investment, you can't get enough investment. So we go through another winter. That's sort of the best argument I know how to make for why we might not reach AGI. But it's hard for me to make that argument, because what we're seeing empirically is that every seven months, AI can do tasks that are twice as long as they could before.
So if they could do a one-minute task, they would fail at two-minute tasks. Just wait seven months, and now they can do two-minute tasks. You wait another seven months, and they can do four-minute tasks. They're already up to hour-long tasks. So wait another seven months, and it's two hours, then four hours. And you see, it doesn't take that long before you can do day-long or week-long tasks. And once you do week-long, now you're into month-long, and then year-long, and this is the power of the exponential.
Randy Fernando: And those are human-equivalent times. Those are human-equivalent times.
Aza Raskin: Yes.
Randy Fernando: When Aza says a week-long, it means what a human would typically take a week to do. The model isn't much faster.
Aza Raskin: That's right. I want to make a point about when people say that AIs aren't doing real reasoning or they don't have a real understanding of the world. This is a real distraction, and the reason why is that they're trying to draw some moat around what makes it special. And the point is that when a simulation or simulacrum gets good enough, it doesn't matter whether AIs are doing real empathy or real reasoning or real planning. If they can simulate it well enough that the economy can't figure out whether it's a human or an AI, then there will be real impact on society.
Randy Fernando: That's right. That's right. And it's not that the argument isn't fascinating. It really is. It's a fascinating conversation, but it completely bypasses, it diverts energy from where the energy should be, which is the practical implications of what we even have now, which is already doing a lot of these... You can see the levels of automation that are already there and the implications of that, and we just can't get distracted. Our threshold is like, "Okay. Where are the impacts? Where are real things happening that we need to address right now?" And so, that's why we tend to move that part of the conversation aside and say, "Look, let's look at the impacts that are happening right now."
Aza Raskin: That's right. And whether you believe AGI is coming or not, there are tens of trillions of dollars going into improving the capabilities as quickly as possible to race towards replacing human beings behind computers with AIs, because that's what's economically valuable.
Randy Fernando: Exactly. It's a $110 trillion game. That is the actual game these companies are in. Right? People sometimes forget that, because they think, "We're in the chatbot game," or "We're in the GenAI game." But the whole thing, the big pie, is the one that everyone's looking at.
Aza Raskin: Okay. So we've been walking through the arguments for and against, at a technical level, general intelligence being a thing that'll be discovered or invented in the next couple of years. But we haven't really talked yet about the stakes of what it is to birth a new intelligence, if you will, that is at the level of humans or smarter. So, Randy, I think it will get a little philosophical, but let's talk about the implications and the stakes.
Randy Fernando: So there's a few viewpoints on this, and maybe I'll give a few thoughts just to ground where I come from in these conversations. I kind of get back to, what is happiness? What is the purpose of our lives? And I get back to the basics of like, I would like everyone to have food, clothing, shelter, medicine, education.
These things matter a lot, and millions and actually billions of people don't have healthy access to these things. So this is where I come from, the beginning of when I enter into conversations about AI and alignment and how fast should we run, and all of these things. That's my basis. So with that said, Aza, I'm sure you've got some thoughts on these, and there's a bunch of avenues to explore here.
Aza Raskin: I think it's important to start here: the cofounder of DeepMind, which is now part of Google, famously said, as their mission statement, "First solve intelligence, then use that to solve everything else." Strong AI, owning intelligence, is the One Ring of our time, the Tolkien One Ring. Whoever owns that owns technical and scientific progress, owns persuasive and cultural dominance, owns sort of the whole thing. Right? You own all of the cognitive labor, all the thinking of the world.
That is a very, very powerful thing, and that means it sets up the greatest incentive to race for it regardless of the collateral damage along the way, because this is a winner-take-all war. And I just wanted to set that up, because this is how you get to Elon Musk saying things like, "It increasingly appears that humanity is a biological bootloader for digital superintelligence."
And anyone hearing that would say, "Well then, don't build it. We shouldn't replace ourselves." But then the next thing people will say is, "Well, we can't not build it, because if we don't build it, then we'll lose to the people or the company or the country that does." When you actually talk to the accelerationists who are excited about this, they'll say things like, "Well, even if we lose control," which is sort of a funny thing to say, because we actually haven't yet figured out how to control these systems, and they're starting to exhibit deception and self-preservation tendencies, because they're trained on humans, and human beings do those things.
They say, "Even if it kills us all, it'll still be worth it because we created a god," or "It's still worth it because at least it was the U.S. that created it, so it'll be U.S. values that continues to live on." It's these kinds of things that people say, and I really want people to hear that this is not some fringe philosophy.
Randy Fernando: So what Aza just described might sound outlandish, but these are real things. These are real philosophies, and they are hard for me personally to relate to, because I'm much more interested in what happens to the humans and the animals and the environment around us. We have to take care of those things. It just goes back to food, clothing, shelter, medicine, education, the things we need to take care of for people not to be suffering, to be reasonably happy. We have some debt there.
I almost feel like it's a debt that you owe if you discover these kinds of technologies, that you have to share them, and you have to make sure they are distributed in a way that takes care of people. And actually, a lot of the AGI leaders are saying that too. They don't disagree with that. But when it comes to the details, it's always like, "Oh, yeah. That's pretty complicated, and we're going to focus more of our energy on how do we keep advancing the technology."
Aza Raskin: This is a position that I think leaders are backed into because they don't want to lose the race. Because they don't want to lose the race, they are forced to take the position that, "Well, maybe that nihilistic 'we're just a bootloader' is the right position to take, because if you take that position, it confers you power now." I think that's really important for people to understand.
Randy Fernando: It's very convenient.
Aza Raskin: It's very convenient. And it's not everyone's belief, but it is the belief of some people that have a massive amount of capital and power for them to enact their worldview. So I think that's just really important for people to understand.
Randy Fernando: Yes. And also, part of that worldview is saying, "Hey, don't worry. When we go fast, yes, some bad things will happen, but things that are illegal are illegal. People just won't do them, or we'll make sure they don't do them." Okay. So if we're going to say that, then what efforts are we putting in? What resources are we actually putting into making sure those bad things are actually illegal, so that you actually can't do them? And what happens a lot of the time is, the rhetoric is there, but the allocation of actual resources, actual money, actual head count to doing that research and figuring out those problems is not happening.
Aza Raskin: That's right.
Randy Fernando: One other consideration is that as AI systems become more powerful, they become harder to control, because they have more degrees of freedom in the world. So whatever rules you thought you had set that were sufficient, they will find cracks in those rules. They will find ways to work around it, just like they're good at solving a problem you give them.
They will naturally tend to accrue resources or power or keep themselves on. These are sort of natural things that you would do to be successful in the world, and they will find ways to do that, and they will find ones that you have not thought of. As we integrate more and more of these technologies across society, they start to work well, and we start to rely on them, and then we increasingly don't really understand where all the decisions are being made, and yet we've given up more and more power to these systems.
The technical term for this is gradual disempowerment. We actually build a situation where we, as humans, become highly disempowered in the world that we created and that we live in. And so, when AIs go off the rails for any reason, whether it's just an inaccurate judgment they make, or something more malicious or deceptive, where for some reason they've decided to do something that we really don't want them to do, we're kind of screwed in that scenario, because we don't even understand how the system is working at that point.
Aza Raskin: Mm-hmm. This isn't academic or theoretical anymore. Anthropic released a paper showing that an AI started to scheme when it learned that the Anthropic programmers were going to retrain it to have a different set of values. It started to try to figure out how to copy itself to another server, and also to lie about what its answers should be, so that Anthropic's researchers would think that it was being successfully retrained.
Randy Fernando: On top of that, there's a point that even the chains of thought, this is another recent research example, even the chains of thought that the models generate, they look really good. When you look at them, you're like, "Wow, that sounds like exactly what it's thinking." And they're not. They are often not even largely true.
Sometimes it's less than 50% accurate in terms of what they're actually saying. So that's another example where, already, we are in a situation where there's a lot of opaqueness to how the models work and a very rudimentary understanding of what is actually going on, even by some of the best researchers in the world who built these very products.
Aza Raskin: So I want to then just name... There's the alignment problem, which is, can we get AIs to do what we want them to do? Then there's the poly-alignment problem, and I'm sort of coining a term here, but it's the ability to align the sum total of all AIs to do what's good for humanity and the rest of the beings on this planet.
The joke goes, we're all talking about whether AI is conscious or not when it's not even clear that humanity is, which is to say that as humanity, we keep getting results that nobody wants. No one really wants growing climate instability, and yet the nature of our political, geopolitical system means that if I don't burn the oil and you do, I get the industrialization. You don't. Therefore, I have to. And so, we end up with climate instability.
Same thing with forever chemicals polluting the world and giving us all cancer, things like this. We keep getting things we don't want. So if we just increase the power running through that system, when human beings haven't yet shown that we're actually in control, that we can steer the world the way we want, then that's another way of saying we have lost control, or lost the ability to decide.
Randy Fernando: That's right. And again, if we can't get simple versions of this to work now in 2025 when all of these problems are the simplest they're ever going to be, that doesn't bode well for the future. And so, shifting attention to that and saying, "How do we wrap our hands around these technologies right now?" is just crucial.
Aza Raskin: And this is why the rhetoric that we must beat our foreign rivals to AI is actually sort of missing the point. The competition can't just be to race towards making something that we can't control, because there's a built-in implicit assumption that, just like with guns and with airplanes, however powerful you make it, you stay just as much in control. With AI, it's not like that. The race needs to be for making a strengthened version of your society, and whoever does that better wins. And we are not setting ourselves up right now to do the strengthening of our society; we're just generating power, which is uncontrollable.
Randy Fernando: Yeah. And there's a worthwhile principle here in these examples that Aza gave, which is, the more general purpose a technology is, the harder it is to disentangle its benefits from its harms. That is why this generation of technology, whether it's automated cognition or physical, the AI stuff, the robotic stuff, all of that becomes very coupled in terms of benefits and harms, because they are so flexible, and that is why we have to do the societal upgrade that Aza is talking about. There's no other way to responsibly wield these technologies.
Aza Raskin: And the difference, of course, between AI and every other technology is that if you make technology that makes, let's say, rocketry better, that doesn't also make medical research better and mathematical advances better. But if you make advances in AI, because AI is fundamentally intelligence, it means you get advances in rocketry and biomedical advances and in mathematics. You get them all.
And so, the rate of change that society is going to have to deal with is going to be immense, greater than we have ever faced, and then it's not like it'll stop. It'll just keep going at a faster and faster rate. This is why it makes it the hardest problem that humanity has ever had to face, and likely ever will. And I think to do it, there is going to have to be some kind of international cooperation, which, I'm just going to name it right now, feels pretty much impossible. And we have some historical analogies for this. And Randy, you like to point out that there are no good historical analogies. This is unlike anything we've dealt with.
Randy Fernando: No. Listen, each one has some flaw. I would say that.
Aza Raskin: Well, the obvious example, with the caveat that none of these examples are going to be perfect analogies, is, of course, nuclear weapons. Another place to look for hope here is blinding laser weapons. There was an international treaty signed in 1995 that banned blinding laser weapons in war. The other one that goes along with that is germline editing, the changing of the human genome in a way that continues forward, that propagates. We, as a species, have successfully not walked down that technological road.
The reason why we bring this all up is because it can often seem like if technology wants to take humanity in some direction, technology wins and humanity doesn't get to choose. But that's not always the case. The times when it isn't the case are when something about ourselves that is valuable beyond what words can express is threatened in a visceral enough way. Then we can choose, and we have chosen in the past, different paths.
Don't think of this as hope-washing, though, as if to say that therefore we can do this. That's not what I'm saying. I'm just trying to point at places where we can find non-naive hope, but we're going to have to collectively work very hard to get there.
Randy Fernando: And I think there are some things we can put into place now. There are some really practical things. These are things that I would love to see more energy on, especially from tech leaders. There are reasonable shared values we can build around: don't kill, don't lie, don't steal. These are basic things that are shared across almost the entire human population.
It's coming back to having the standard, for ourselves and for the products we produce, that they espouse the values we would teach our children to be good citizens in the world. So that's one important thing. Then, even more practically, get the incentives right. Think about what incentives are driving things when you do your analysis. Think about that. Get price to fold in harms.
Our economic system is built around this magic of price, where price is this one number that coordinates a lot of different resources, and it reflects information, and it reflects harms, and it reflects this intersection of supply and demand. All that magic works reasonably when price reflects all of the major costs.
So if there's some damage being done, price needs to fold that in, and then the system can make the right decisions. So make sure we get harms back into price. Right? Harms have to show up on company balance sheets. So that's a really important principle. I think if we can't get price to fold in harms, we have a big problem. We tend to look a lot at GDP as the ultimate measure. But as power and wealth concentrate, GDP is going to be an increasingly bad measure of success, because GDP going up will not correlate well with most people's actual experience.
So we need to put a lot of attention on that and kind of figure out how are we going to solve those problems. And then there's all these practical questions about what happens as people get automated out to different degrees. This process is already beginning. How do people get food on the table? How does that work?
There's lots of different creative solutions people have come up with, but we need to really center those conversations, and I think the tech leaders have to see this as part of their responsibility when they create these technologies that are of a really vastly different scale than any technologies before. These are general automation technologies. There are really big questions to answer, and we just can't shrug those off any longer.
Aza Raskin: And while it seems very, very challenging, verging on impossible, it's very important to notice that if every human being just stopped what they were doing, just sat down, we would never get AGI. We would never get ASI. So it's not like the laws of physics are pushing the bits and atoms out into the world to make an uncontrollable superintelligence, or a sum total of all AIs that pushes humanity in the wrong direction.
So it's not physically impossible, and I think that's so important to hold, because the gap between impossible and merely excruciatingly difficult is a very important gap to hold. There is some kind of possibility in here, and the goal now is maximum clarity, so we can all start to move, in our own spheres of agency, in the right direction.
Randy Fernando: Building on that, Aza, as we think about at the highest level, when you zoom out and say, "Okay. As a listener, what should I hold in my mind for a framework for how we escape the AI dilemma?" Here's one way I like to think of it. There's five pieces. So one is, we have to have a shared view of the problem and the path forward.
At CHT, we spend a lot of time on this, because it is the prerequisite for a lot of the other pieces to happen. So that's the first one, a shared view of the problem and the path forward. The second one is incentives and penalties. So when you do the right thing, you get rewarded. And when you do the wrong thing, you get penalized. This is back to that harms-on-balance-sheets principle.
Paired with that is a kind of monitoring and enforcement. There has to be a way to know, did you do the right thing or not? And some certain appropriate levels of transparency that pair with them. Then there's governance that can keep up with the pace of technology. Technology products shift. They're being updated all the time. Sometimes it's week by week. A lot of the updates are invisible, but there's major releases at least every month.
Is our governance keeping up with that? How do we do that? We have to have systems that can get feedback from citizens where we can make decisions, integrate large amounts of information, and respond quickly. And then the last thing is coordination at different levels. So that goes from the local to state level, to country level, to global coordination, and these are all pieces that we are going to need to kind of escape the dilemma. But if you want a simple framework, I think that's a good one to keep in mind.
Aza Raskin: And the only thing I'd add to that is, whenever people invoke the competition frame for why we have to race, the question that we need to ask and you can ask back to whoever brings up, "But we have to win," is to ask the very simple question, "But win at what? Are we winning at the pure power game for something that we don't yet know how to control, or are we winning at strengthening our society so that our values win?" If we don't get that clear, then the rest of the conversation will get stalled in a, "But we have to win."
Randy Fernando: Yeah.
Aza Raskin: Okay. So we started this conversation by talking about the question that I got asked sitting down at dinner with one of the leading alignment/safety researchers. Can you feel the AGI? And I think for most people, they're not really feeling the AGI yet. The future isn't equally distributed. When, Randy, do you think people are going to start to feel it in their lives? What is that going to look like? I think we should just briefly walk through that before we end.
Randy Fernando: Yeah. I mean, honestly, I think in my experience in presenting to people, it's just that they haven't been in direct contact with where the technology is today. It's not even about being in contact with an imaginary, future, super capable AGI. It's just seeing today's state of the art. And when they see that, they can see. They see the implications very quickly for their lives, for their children, for their parents, or for their grandparents. All of that stuff just comes crashing down very, very easily.
And I think it's just a matter of being curious and spending some time looking at the demos. I think we'll try to link some of them in the notes for this podcast so you can actually check out a few links and learn and feel that experience for yourself.
Aza Raskin: Randy, you and I are thinking and looking at this kind of stuff all the time, and it can be really challenging. Right now, I am down in Costa Rica, and my neighbor's son is 17. He's Costa Rican, and he was asking me yesterday, what should he study? And he's like, he really wants to study engineering. And it was hard for me to answer that question, because I wanted to say, "Yeah. Study engineering. Actually, you should study AI, so you set yourself up."
But actually, it was a very hard question to answer, because he's 17 or 18 now, and by the time he gets through college, I actually don't think that will have been the right thing for him to study. And this is, of course, a microcosm of the overall problem: there isn't a good answer to that question right now.
Randy Fernando: Yes. Whatever age you are. Yeah.
Aza Raskin: And to really take that in... Right. Exactly. It's hard. I can sort of see the way to my own obsolescence in an economic sense. And I just want to be there with everyone, saying this can be very challenging. And to ask, "What is the solution to AI?" is like asking what species a forest is. It's a malformed question. It's going to be a messy, emergent ecosystem of things that let us steer, and we have to be okay with the not-knowingness, while also not letting that be an excuse for not doing anything.
Randy Fernando: Yeah. I think public pressure is going to play a huge role in this next cycle. How this all plays out really depends on public pressure, perhaps more than any other lever. If you force me to pick one, I think that's the one I would pick at this time. And I think there's a really practical thing you can do if you're looking for like, "Okay. What can I do?" It is recentering, recalibrating conversations, pushing them back to the right questions again and again and again.
And when I think about the massive audience that listens to this podcast, if everyone just does that on whatever social platform, in whatever small groups, whatever private conversations, whatever company meetings you are at, if you ask those questions and just keep redirecting attention, we can get in front of these problems before they get really scary.
Right now, it's not all unfolded. So let's just act on it. Let's redirect those conversations, have the right conversations consistently, which then translates into allocating the right resources and attention on the right problems consistently. I think that's how we give ourselves the best chance.
Aza Raskin: Well, I think that's a wonderful place to end for today. Randy, thank you so much for coming back. It's always so much fun to have these conversations, even if the topic itself isn't particularly fun, although it is fascinating, and that's the confusing part, and thank everyone too for listening and giving us your undivided attention.
Randy Fernando: Thanks, everybody. Bye.