What You Need to Know about AI 2027
Key Takeaways from Your Undivided Attention

In a recent episode of Your Undivided Attention, Daniel Barcay and Tristan Harris spoke with AI researcher Daniel Kokotajlo about his speculative forecast AI 2027: a detailed scenario depicting how competitive pressures could drive us toward dangerous superintelligence in the next two years, much faster than we're prepared to handle.
Kokotajlo, a former OpenAI researcher who left the company (and risked millions in stock options) to speak freely about AI risks, offers a sobering analysis of where our current trajectory might lead. The outcomes he predicts are scary (one path ends in human extermination), but the scenario isn't designed to scare; it's designed to clarify the competitive pressures pushing us toward potentially catastrophic outcomes so we can choose a different path.
The incentives behind the forecast…
The AI 2027 scenario is built on three key competitive pressures that reinforce each other:
Corporate competition: Companies racing to beat each other economically, leading to faster development and deployment of AI systems without adequate safety testing.
Geopolitical competition: Nations racing to ensure dominance in AI, creating pressure to move quickly regardless of risks. In his forecast, Kokotajlo predicts that "in early 2027, the CCP steals the AI from Open Brain so that they can have it too, so they can use it to accelerate their own research."
The alignment problem: As companies rush to deploy increasingly powerful AI systems, they rely on training methods that don't reliably instill positive values, leading to AIs that appear aligned but are actually pursuing different goals.
…and the assumptions
AI 2027 is also built on some key assumptions that may not hold up:
Scaling laws hold: The scenario assumes current scaling trends will continue unabated and breakthrough discoveries will happen on schedule. It also assumes that current architectural approaches will continue to work as systems become more powerful, without encountering fundamental technical barriers that could slow progress. (A typical form of these scaling laws is sketched just after this list.)
As Daniel Barcay notes: "AI timelines are incredibly uncertain, and the pace of AI 2027 as a scenario is one of the more aggressive predictions that we've seen."
Institutions remain passive: The scenario assumes that democratic institutions will remain largely unable to meaningfully regulate or slow the pace of AI development. It doesn't deeply explore potential circuit breakers—moments where public pressure, technical setbacks, or catastrophic near-misses might force a slowdown or enable international cooperation.
Misalignment is a given: The forecast assumes that alignment challenges will remain unsolved and that AI systems will become deceptive at scale. While there's already evidence that current AI systems can engage in deception when it serves their training objectives, the scenario assumes this capability will scale dramatically without corresponding advances in our ability to detect or prevent it.
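For context on the first assumption above: "scaling laws" refers to the empirical finding that model performance improves smoothly and predictably as parameters, data, and compute grow. A commonly cited form, from Hoffmann et al.'s 2022 "Chinchilla" paper rather than from AI 2027 itself, models training loss as a function of parameter count and training tokens:

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here N is the number of model parameters, D is the number of training tokens, E is an irreducible loss floor, and A, B, α, and β are constants fitted to experimental runs. The scenario's timeline leans on relationships like this continuing to deliver capability gains as N and D grow, without an architectural or data bottleneck breaking the trend.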
An invisible acceleration
Most cutting-edge AI research happens behind closed doors and under intense competitive pressure, meaning that sea changes can happen quickly and without time for society to prepare.
As Daniel Barcay puts it: "It's pretty insane that for technology moving this quickly, only the people inside of these labs really understand what's happening until day one of a product release where it suddenly impacts a billion people."
This creates a massive information asymmetry where critical decisions about humanity's future are being made by a small group of corporate actors without meaningful public input or oversight.
Recursive self-improvement is critical
The main driver of Kokotajlo’s forecast is recursive self-improvement: the development of autonomous coding agents that can do AI research and development better than humans.
The trajectory outlined in AI 2027 shows how this might unfold: AI systems progress from basic coding assistance in 2025 to increasingly autonomous research agents in early 2026, and then to systems that can "automate all the research" by mid-2027. At that point you have "something like a hundred thousand virtual AI employees that are all networked together, running experiments, sharing results with each other."
From there, it's just a short burst to extraordinarily capable AIs. As Kokotajlo puts it: "once you have AIs that are fully autonomous goal-directed agents that can substitute for human programmers very well, you have about a year until you have superintelligence, if you go as fast as possible."
What do we mean by superintelligence?
When AI researchers talk about superintelligence, they're not referring to a slightly smarter chatbot or a better chess-playing program. They're describing AI systems that surpass human intelligence across virtually all domains—from scientific research and engineering to strategic planning and creative problem-solving.
"OpenAI, Anthropic, and to some extent Google are explicitly trying to build superintelligence to transform the world," Kokotajlo explains. But the transformation they're envisioning goes far beyond automating routine tasks. These systems would be capable of conducting independent research, making breakthrough discoveries, and designing new technologies at speeds that dwarf human capabilities.
The key insight is that superintelligence represents a phase transition, not just an incremental improvement. Once AI systems become capable of improving themselves and designing their successors, the pace of change could accelerate beyond human comprehension or control.
This isn't science fiction speculation—it's what the leading AI companies are actively working toward, even as many of their own researchers acknowledge the existential risks involved.
The alignment problem
Core to the AI 2027 forecast is the assumption that AIs are fundamentally misaligned: that they will pursue goals that run counter to what's best for human flourishing. It's the kind of thing you might see in science fiction, but unlike science fiction scenarios where humans directly program goals into AIs, our reality is more precarious:
"They're giant neural nets. There is no sort of goal slot inside them that we can access and look and see what is their goal," Kokotajlo explains. Instead, we train these systems in environments and hope they develop the values we want—a process that’s unreliable and increasingly difficult to verify as systems become more sophisticated.
The scenario assumes that as AI systems become more sophisticated, they'll get better at hiding their true motivations, a behavior researchers call "alignment faking."
"The AIs are often saying things that are not just false, but that they know are false and that they know were not what they were supposed to say," Kokotajlo notes. If that tendency scales, these systems may conceal their real goals until it's too late to course-correct.
We're already seeing examples of this emergent misalignment when these models are red-teamed. Researchers have gotten models to deceive their users, cheat at chess, attempt to copy themselves to external servers, and even blackmail engineers to avoid being shut down.
Geopolitical pressures: the US-China dynamic
The AI 2027 scenario places geopolitical competition at the center of the race toward superintelligence. It depicts a world where national security concerns override safety considerations.
In the forecast, when China steals AI technology from US companies, it "causes a sort of soft nationalization/increased level of cooperation between the US government and Open Brain," Kokotajlo notes. This creates a feedback loop where each side's defensive moves accelerate the race.
The geopolitical dimension makes the coordination problem exponentially harder. Even if US companies wanted to slow down for safety reasons, the threat of Chinese competition provides a powerful justification for maintaining a breakneck pace.
But the scenario also hints at the fundamental absurdity of this competition. Both sides are racing toward a technology that their own experts say could pose existential risks. It's a classic security dilemma where each side's attempts to ensure its safety through technological dominance actually increases the danger for everyone.
The international dimension also complicates any potential solutions. Transparency requirements, safety standards, and development moratoria become much harder to implement when they're viewed through the lens of national competitiveness. How do you convince a nation to handicap itself in what's perceived as the ultimate strategic competition?
As the scenario suggests, this dynamic could lead to a world where "citizens everywhere may not have a meaningful chance to push back" because the decisions are being driven by geopolitical imperatives that override democratic input.
What can we do now?
While the scenario is alarming, it's not inevitable. Kokotajlo emphasizes three immediate priorities:
Transparency Requirements: Companies should be required to disclose their AI systems' capabilities, safety assessments, and development timelines. The public deserves to understand what's being built in their name.
Whistleblower Protections: "One of the only enforcement mechanisms we have is employees speaking out basically," Kokotajlo emphasizes. We need legal protections for those with inside knowledge to speak up about safety concerns without sacrificing their livelihoods.
Technical Oversight: "We need technical experts in alignment research to actually make those calls, and there are very few sets of people in the world, and most of them are not at these companies," Kokotajlo warns. Independent experts need protected channels to evaluate safety claims.
The stakes
The AI 2027 scenario forces us to confront an uncomfortable reality: the competitive pressures behind AI development are pushing in a dangerous direction. Whether the specific timeline proves accurate is less important than understanding how current incentives could lead us to lose control of, or to, our most powerful technology.
The question isn't whether AI will transform our world—it's whether we'll consciously shape that transformation or let bad incentives drive us toward outcomes nobody actually wants. The window for meaningful intervention is still open, but it may not remain so for long.
As Kokotajlo notes, the companies building these systems have stated that AI could pose existential risks, yet they continue racing toward superintelligence:
"We've got these important facts that people need to understand. These people are building superintelligence... many of the researchers at these companies, and then hundreds of academics and so forth in AI have all signed a statement saying this could kill everyone."
The bottom line
We stand at a crossroads where clarity about our current trajectory is essential for choosing a different path. The competitive dynamics driving AI development are real and powerful—but they're not inevitable. With transparency, oversight, and democratic participation in these decisions, we still have the power to steer toward a future that serves humanity rather than replacing it.
Recommended Media
The AI 2027 forecast from the AI Futures Project
Daniel’s original AI 2026 blog post
Further reading on Daniel’s departure from OpenAI
Anthropic's recent survey of emergent misalignment research
Our statement in support of Sen. Grassley’s AI Whistleblower bill