What is Really Going on With AI and Jobs?
The Jobs Apocalypse Conversation is Missing the Point.
If you only look at the headlines, you might think we are already living through an AI-driven jobs collapse. Mass unemployment. White-collar wipeouts. Careers evaporating overnight.
That story is wrong. But the opposite story, that nothing serious is happening and that AI will fit neatly into our economies and simply result in greater efficiency, is wrong too.
What the evidence shows, and what our recent conversation with Ethan Mollick and Molly Kinder makes clear, is that something else is afoot, something more unsettling and easier to miss. The labor market is not collapsing. But it’s clear that AI can (and if we let it, will) reshape it in ways that undermine how people enter careers, build skills, and imagine a future they can work toward.
Therefore, we’re in a short, crucial window of time where we can and should ask the big questions: what forms of human labor are most worth preserving, and how do we fight to preserve them? How can we make AI work for us in ways that enhance our felt sense of meaning at work?
No one actually knows what’s going to happen when it comes to AI, automation, and the future of work. But what has become clear is that the leading Silicon Valley AI startups are focused almost exclusively on building products that automate low-hanging cognitive tasks, with little consideration for what could follow. Our two guests had something to say about that – and it’s not what you’ll read in most coverage of this topic.
The Jobs Apocalypse Narrative Lacks Nuance
Brookings Senior Fellow Molly Kinder’s recent work with the Budget Lab at Yale looks directly at the question people are most anxious about: since the release of ChatGPT, have we seen economy-wide job loss tied to AI?
The answer, so far, is no.
Across multiple datasets, there is no evidence of broad-based job loss in AI-exposed occupations. Studies from Brookings, the International Labour Organization, and the Stanford Digital Economy Lab all converge on this point. Total employment remains stable across exposure levels, even in highly exposed sectors.
For many people, this is reassuring. It tells us we are not already in a jobs apocalypse. But stability in headline numbers does not mean the system is healthy.
When researchers look more closely, a different pattern emerges. The Stanford Digital Economy Lab study found a roughly 13 percent decline in early-career employment in AI-exposed occupations, including software, administrative work, and customer service. For early-career software developers specifically, the decline is closer to 20 percent.
Is that just a labor market correction after companies over-hired during the pandemic, as Molly Kinder argues? Maybe so. The larger point is that most AI-and-jobs debates fixate on a single metric: how many jobs will be lost, and how quickly? Some commentators think the risks are over-hyped, while others adhere to ‘AI 2027’-style human replacement scenarios.
We’re missing the deeper issue.
Work is not just how people earn money. It is how people develop skills, gain recognition, and participate in society. When career pathways erode, even without mass layoffs, people lose agency. They stop being able to plan their lives. They stop feeling included in the economy’s future.
This is why focusing only on unemployment statistics is dangerous. It allows structural harm to accumulate quietly until it is much harder to reverse.
We’re actually incentivizing the creators of general-purpose AI to focus on the wrong things entirely. In our conversation, Molly Kinder said it best:
“Every time we are talking about measuring AI, it’s whether or not it’s better than a human. Right off the bat, that steers us in the wrong direction. Why are we trying to ‘best’ humans? Why isn’t the benchmark some kind of combined [metric], like making the human better? So right off the bat, I think we have all the wrong incentives.”
Is This the End of the Career Ladder?
One of the most important insights from our conversation is that AI is not simply replacing entry-level work. It is changing the economics of learning.
Most white-collar (and for that matter, blue-collar) careers still rely on the centuries-old concept of apprenticeship. Junior workers do lower-stakes tasks. Senior workers mentor, review, and lead. Over time, responsibility levels increase.
But AI is disrupting this model in a profound way.
If an AI system can produce a better first draft than a junior employee, the incentive to assign that task to a trainee weakens. If general-purpose AI can handle routine analysis, summarization, or coding, the work that once doubled as training disappears. This dynamic helps explain why early-career employment is declining even while overall employment remains stable. Employers can hire fewer juniors and instead rely on fewer, more senior workers whose workflows are augmented by AI.
That creates a long-term problem. Training pathways collapse. Skill development becomes uneven. And a few years later, organizations find themselves without a pipeline of workers ready to step into those judgment-heavy roles.
But Prof. Ethan Mollick, Co-Director of Generative AI Labs at Wharton, points out that if every company has access to the same AI tools, AI alone stops being a competitive advantage. AI is too valuable to ignore, but if it were instead used to consciously upgrade younger knowledge workers’ skillsets in ways that improve the bottom line, it could help organizations thrive in the long term:
“Maybe we need to treat level two consultants as if they were welders, and have more formal training with testing and other stuff built in. We do know how to do that, but we’d have to shift the incentives to make that happen,” he suggested.
But that’s not the conversation that’s happening. It’s easy to forget that organizations and institutions that rely on the care and brilliance of human knowledge workers, just like Silicon Valley startups, have a choice when incentives push them toward replacing humans (short-term thinking) rather than meaningfully augmenting their work.
Policymakers, too, have a choice. “You have agency right now. This is the time for policy intervention,” said Mollick.
A hands-off, trust-the-market approach won’t work with a technology as transformative as AI. And the fact that job losses directly attributable to AI have been small so far does not mean we aren’t facing a cascade of problems.
We saw this during globalization and offshoring in the 1990s. By the time job losses showed up clearly in national data, many communities whose economies relied on domestic manufacturing had already lost their economic footing and civic identity. AI risks repeating that pattern unless we name it early, and shape incentives deliberately.
The takeaway from our conversation with Ethan Mollick and Molly Kinder is not to panic. It is that we have a deepening responsibility to think carefully about the future we want when it comes to our relationships with work. We are not powerless – just the opposite. The direction this transition takes will depend on choices made now, especially by employers, policymakers, educators, and technologists.
Do we redesign work in ways that preserve learning, judgment, and human participation? Or do we optimize narrowly for short-term efficiency gains, only to discover later that we have hollowed out the foundations of working life?