No matter where you sit within the economy, whether you’re a CEO or an entry-level worker, everyone’s feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing.
In this episode, Daniel Barcay sits down with two experts on AI and work to examine what’s actually happening in today’s labor market and what’s likely coming in the near-term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?
Daniel Barcay: Hey everyone, this is Daniel Barcay. Welcome to Your Undivided Attention. No matter where you sit within the economy, whether you’re a CEO or an entry-level worker, a software engineer or a teacher, everyone’s feeling pretty uneasy right now about AI and the future of work. Unease about our career progressions, about what our job might look like in a few years’ time, or quite frankly, whether we’re going to be able to find a job at all. All of this unease, this fundamental uncertainty, makes it really hard to plan for our future. What should I study in school? What new skills do I really need to grow my career? Will my work be supercharged by AI or will AI replace my job entirely? And do I have enough certainty to really buy that house or start a family? Or should I be saving to weather the storm?
Doing good work and ultimately living a happy life depends on having some predictability, some stable understanding of what our place is in the world. And AI has injected some serious uncertainty into that picture. And many of us feel caught in the middle of some strong narratives. On the one hand, rosy visions of our creativity being unleashed at work and on the other, some pretty dire warnings of being replaced entirely. So today we’re going to try to cut through some of that confusion. We’re going to look at what’s already happening in the labor market right now and talk about what’s likely coming in the next few years as this technology becomes more capable and more embedded in the workforce. And we’re going to ask the crucial question, how do we get this right? Can we create the conditions for an AI economy that really enriches our work and our careers or are we headed towards a much more unstable economic future?
Our guests for today are two economists who’ve been paying very close attention to how AI is already changing the nature of work. Molly Kinder is a senior fellow at the Brookings Institution where she researches the impact of AI on the labor market. And Ethan Mollick is a professor at the Wharton School of Business at the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He’s also the author of Co-Intelligence: Living and Working with AI. Ethan and Molly, thank you so much for coming on Your Undivided Attention.
Molly Kinder: Thanks for having us.
Ethan Mollick: Glad to be here.
Daniel Barcay: So I want to start our conversation today with a snapshot of how AI is already impacting the labor market in the fall of 2025. Molly, you recently worked with the Budget Lab at Yale and you put together a report to try to do exactly that. What did you find?
Molly Kinder: Great. Well, first, let me say why we took on this report. So I think we are in a moment of national anxiety. People are very worried about the impact of AI on jobs. And because of a lot of the very sensational headlines, it can often feel like we are already in the midst of a jobs apocalypse, that already the labor market is being dramatically disrupted and people are losing their jobs left and right. That is often how we feel in the moment. So I teamed up with the Yale Budget Lab, Martha Gimbel, Joshua Kendall, and Maddie Lee. And we did a really deep dive into labor market data to ask the question, since ChatGPT’s launch three years ago this month, have we seen economy-wide disruption to the labor force? And this was really trying to ground where we are today in the data. And the headline is surprising to many people given this sort of state of national anxiety we find ourselves in.
Overall, we actually found a labor market more characterized by stability than disruption. So what we did was we looked at based on exposure to AI, are we really seeing the mix of jobs moving away from the sort of more exposed jobs to less exposed jobs? And the headline is we really aren’t. We’re not seeing evidence of true economy-wide major disruption. Now, it’s important to note that doesn’t mean that AI has had zero impact on jobs. Absolutely, there could be some creative jobs, some coding jobs, some customer service jobs that have been negatively impacted. Our methodology is not meant to look at very granular jobs. It’s really looking at zooming out across the entire economy to say, are we really seeing major disruption? And for us, our answer was no, with one very important potential caveat. Our data did see some disruption to the youngest workers, so to early career workers.
It isn’t clear from our data whether that’s because of AI and whether some of those disruption trends predate the launch of ChatGPT, but certainly we are seeing more occupational churn amongst the earliest career workers, which resonates with some recent data out of Stanford that did find elevated unemployment amongst young people.
Daniel Barcay: And that was the Canaries in the Coal Mine study, right?
Molly Kinder: That’s correct. Yeah.
Daniel Barcay: And I’m really curious about your take on that, because when I read the Canaries in the Coal Mine study, I came across this very different picture that said there are some really strong early warnings of labor displacement, and yet you seem a little more muted in what you want to say about the economy.
Molly Kinder: There actually isn’t a lot of daylight between our paper and the Canaries paper, except maybe some of the newspaper headlines that framed the findings. If you look overall at that Canaries paper, they did not find any substantial labor market change in that ADP data since ChatGPT’s launch for any age group other than the earliest workers. So actually, if you zoom out and just took a snapshot of the overall labor market, and not just that segment of 25 and younger, their finding is the same as ours, which is we’re really not seeing much of a discernible impact, broadly speaking, on the labor market. They found quite sizable impacts on young workers in AI-exposed careers, and our data does not counter that. What we can’t say, though, is whether or not AI is causing that. I don’t think economists have yet teased out exactly the isolated effect of AI versus other economic impacts, say the uncertainty of the economy, interest rates, tariffs, cyclical changes like the over-hiring of coding workers during the pandemic.
So there’s a lot of factors that are playing into the weak job market for young people. I believe AI is contributing to the picture. It’s just we have to be a little careful about suggesting all of it is from AI when there are other factors.
Daniel Barcay: And Ethan, your work looks at more of the nuts and bolts of workers and organizations with AI. What do you make of this?
Ethan Mollick: I would be absolutely shocked if you saw a large-scale impact immediately. I think things have been changing very rapidly in the last four or five months, but I think that in terms of actual impact, that would be kind of surprising. Now that being said, what we are finding from study after study, from my co-authors’ work and other people’s, is that AI has a broad impact on productivity and performance, on creativity and innovation. Basically, any job that you take that is a highly educated, highly creative, highly paid job, there’s an overlap with AI. An overlap means transformation at a minimum, and we’re starting to see that stuff happen. So we do have pretty strong beliefs that AI is going to be transformational. I don’t think the macro patterns would pick up something very large yet. Even if you just take something like coding, for example, it really took until Cursor introduced agentic coding in 2024, and we now have some data that just came out in a paper showing a 39% improvement in productivity from that.
So everything we have is that early AI models were much less impressive from a productivity impact and broad economic impact. And I think we’ll see that in the future, not today.
Basically, any job that you take that is a highly educated, highly creative, highly paid job, there’s an overlap with AI. An overlap means transformation at a minimum, and we’re starting to see that stuff happen. - Ethan Mollick
Molly Kinder: And I would just add to Ethan’s point, when you look at history, this is not surprising at all. In our paper, we actually compare these first nearly three years of occupational change since ChatGPT’s launch to previous waves of technology, so the computer and the internet. They’re on a very similar trajectory, and there’s lots of reasons why there is a gap between the speed of the technology and how much it’s really being adopted in the workplace. I think that gap right there is responsible for a lot of the more muted early impacts.
Daniel Barcay: So I think that’s really important for people to get. You’re saying that people are much more exposed to this transformation than we’re currently seeing happen in the job market.
Molly Kinder: Yes. So there is a very large gap between the exposure of occupations and sectors to this technology and the actual usage in the workplace. So what we see when we look across sectors at usage is highly uneven. There’s a handful of sectors that are way out in front with very widespread adoption. Ethan would be very thoughtful in reflecting on this. There are some sectors where there’s not a lot of friction. It’s really easy. In research, I can just turn to ChatGPT to do research. There’s no friction. There’s no regulation. It’s easy for me to do. In coding, it’s very easy to turn to Cursor. In other sectors, there’s a lot of friction, whether it’s skittishness about privacy, or in healthcare, even some in finance, a lot of companies are worried about their proprietary data, and so there’s just very highly uneven usage. So even within the jobs that are “exposed,” at least at a sort of medium to high level, we are really not yet seeing the potential of the disruption realized because of these lags in usage and also lags in technological quality.
Daniel Barcay: Ethan, a few years ago, you introduced this concept, the Jagged Frontier to talk about this, about the different capabilities that AI has. Can you walk us through what the jagged frontier is and how that helps us think about this?
Ethan Mollick: Sure. So the idea of the jagged frontier is that AI is good at some stuff and bad at some stuff, to put it in the most basic way. And it’s hard, a priori, to know what that is, especially if you don’t use the systems a lot. So in the early days, when we talked to GPT-4, we would’ve said math is a weak spot. The AI hallucinates math all the time. Or citations are a weak spot.
Daniel Barcay: And so what are the implications of that that you’ve seen and how people are using AI in their jobs?
Ethan Mollick: Well, I mean, one thing is the frontier is filling in and expanding. So Phil Tetlock and company have this forecasting group, and they get a bunch of experts together to forecast the future. In 2022, the year that ChatGPT came out, the forecast was that there was a 2.5% chance that AI would be able to win the International Math Olympiad in 2025. And not only did two models do this, OpenAI and Gemini, but they won it with pure LLMs. The thought was you would need a large language model using some math tool. Nope. It turns out now we’ve figured out how to make LLMs good at math. And so on a whole frontier that used to be very bad at math, they’re now PhD level in many cases, though not all.
Daniel Barcay: I’d really love to make sure we ground people in how is this playing out in the world. So with that jagged frontier, how is that affecting the way people are using it now in the corporate environments?
Ethan Mollick: So AI still has some strengths and weaknesses. Some of those are the models themselves. Some of those are the interfaces we use to talk to those models. And as a result, there are these gaps of things AI can’t do. I mean, obviously some of that is it doesn’t have legs and won’t walk across the room, but also there are capability gaps that appear in any job. That means that if you’re highly exposed to AI, you still probably have a couple of things that the AI cannot possibly do, because it is either not built for it or the models aren’t good enough yet, and that changes how you operate. The goal of the AI labs is to fill in those gaps, or to push the frontier past the point where your error rate is lower than a human’s, so who cares?
Molly Kinder: So I have a strap-line that if you can do your job locked in a closet with a computer, you’re far more at risk in the future with AI than if you can’t. It actually is the kind of opposite of the pandemic where the jobs that had to be in person were sort of at risk of COVID, and those of us in white collar jobs who can work from home were safe. It’s kind of the reverse now. If your job really can be done sitting in a closet with a computer with no human interaction, that’s a much more problematic job, but we aren’t there yet. I mean, I think a really major deterrent to widespread adoption in the workplace has been the fact that these models still mess up or the idea that you still need a human in the loop to oversee it.
Ethan Mollick: But I don’t know if that’s true. I don’t know if that’s true of the current models that are out in the last month or two. I don’t know if it’s true so much of the pro and thinking-level models that are out there. I think people talk about models messing up, and then they’re using ChatGPT-5, which is a router, and it often routes them to a dumber model. I don’t actually think it is well documented at this point that the mess-up rate is that high compared to humans, or that the hallucination rate is still where it was. And I think when we say the models mess up, we’re assuming it’s like a year ago. If you’re using a weaker model, absolutely you’re going to get hallucinations and mistakes. I’m not sure that that is present with the current generation of technology coming out right now.
If you can do your job locked in a closet with a computer, you’re far more at risk in the future with AI than if you can’t. - Molly Kinder
Daniel Barcay: But regardless of the state of that conversation, Ethan, this has led you to write a lot about how people are hiding their use of AI. I mean, people may be afraid that they’re going to get wrist-slapped for it. They may be using AI in the workplace but actually hiding it and saying, “No, I’m not using AI.” Can you talk about what you’re finding there?
Ethan Mollick: Yeah, I think that there’s a whole bunch of reasons. Let’s go back to the main thing that people talk about using AI for, and we’re somewhat to blame because we kicked off the discussion partially about productivity with our early research. But if you think about it, let’s say that you are using AI at work. AI is very single-player right now. I work with an AI system. We’re just barely in the days of how do we build a system for the entire organization. So it’s very much an individual worker using it. Now think about their incentives. First of all, they look like geniuses right now because they’re using AI to fill in their gaps. Do they want everyone to know they’re not the genius, that it’s the AI that’s the genius? No. Second, there’s an AI policy in place that usually is based on old understandings of what AI could do.
Often data fears that aren’t really an issue anymore, but it means that you get fired if you use AI wrong, so no one’s going to show they’re using AI. Or they don’t even know who to talk to if they’re using it. So who would they show they’re using AI to? So AI use, this sort of secret cyborg phenomenon I talk about, is ubiquitous. We know over 50% of Americans say they’re using AI at work, at least in the survey data, which you can have doubts about one way or another. They’re claiming that for the one-fifth of tasks they use AI for in these surveys, they’re getting a three-times productivity gain. And then even assuming you get the productivity gain, let’s say I can now produce PowerPoints in one-tenth of the time. The bottleneck becomes process. What do I do with 10 times more PowerPoints? Or even more directly, coders are more productive.
We have not built a replacement for agile development, which is what people still use to code. How do I have a two-week sprint when my coder’s a hundred times more productive? What do my daily stand-ups look like? How do I change how work operates? What are the barriers? So I think the technology is being adopted very quickly. I think people are seeing very big productivity impacts individually. I think the question of how you translate this into organizational impacts is partially an economic and process one, but also a motivational one.
Molly Kinder: Personally, I use AI all the time in my job. Not because my employer told me to or even really encouraged it. I’m just finding so many ways it’s enhancing my research, saving me time, making me more productive, and really enhancing my thinking very much in Ethan’s book, sort of this co-intelligence. But when I look at my own institution, we haven’t fundamentally re-engineered any of our workflows across any of our divisions because of AI. And it’s very much up to individuals to adopt and find sort of individual tweaks from it. So my gut is that as organizations figure out how to really embed this technology and not just count on individuals using ChatGPT, but really embed the API and re-engineer their workflows, that’s where you might see not only more productivity, but also frankly, more labor displacement as well.
Daniel Barcay: I think you both seem to agree that we’re not seeing massive transformations yet at the macro level, but you’re also saying that we need to be watching for early signals of that transformation. So what should we be looking at? What would be the canaries in the coal mine that this transformation is starting?
Molly Kinder: Yeah. We say in our paper that our methodology was very purposely broad and big. It could catch if the house was on fire, not if there was an individual stove fire in a small room. The methodology of looking at the labor market broadly is not going to pick up the early canaries. I think the headlines have been so sensational. They have instilled far more fear than is justified. And that could be its own self-fulfilling prophecy. Companies are looking over their shoulders, they’re hearing all about these layoffs. They’re thinking, should I be laying off employees? Should I stop hiring? And that can feed on itself. So I think we need to have a grounded sense of really where we are. But I think the reason why I do my job at Brookings is that I thoroughly believe in the transformative potential of this technology to reshape work.
And I don’t think that where we are today is necessarily where we’re going to be tomorrow. I think it’s imperative that we track very closely the labor market impacts, especially in some of the sectors where we saw the greatest adoption. Well, what about the early movers? What about customer service? What about coding? I’m looking at finance. I’m looking at marketing. What’s happening with early career workers? That’s where the greatest noise is right now. So I think the public should be reassured that we are not in the midst of a jobs apocalypse, but we should be very concerned that this is a technology that will reshape the workforce, and we have to stay vigilant about it.
I think the public should be reassured that we are not in the midst of a jobs apocalypse, but we should be very concerned that this is a technology that will reshape the workforce, and we have to stay vigilant about it. - Molly Kinder
Ethan Mollick: I would add something else. If I had a problem with this conversation, which has been really interesting, it’s that it makes the technology external. This is a thing that’s being done to us, and its consequences are inevitable and destructive, and that’s it. And I don’t think that’s necessarily the case. I think we have agency over how this stuff is used, and the AI labs are still trying to figure this stuff out. I talk to them all the time. And by the way, look at the differences in announcements from, say, Walmart, where the CEO has said, my goal is to keep all three million employees and to figure out new ways to expand what they do. And you could say, are they going to do it or not? But that’s the statement, versus Amazon, which might be like, we’re going to get rid of as many people as possible. There is this chance to show a model that works.
The fact that everybody has a consultant at their disposal might have an impact on consulting jobs, but maybe that actually superpowers all the jobs where management was lacking. The fact that a product manager can now do coding and do some prototyping can expand what we do. The fact that this tool works for innovation makes a difference. And I think that it’s up to people and organizations to figure out how this is used, and there are competing models of use. And I think it would behoove us to spend more time thinking about what the twist is going to be, what we want this to be used for, rather than just talking, as important as that is, about how job loss is inevitable: we haven’t seen it yet, but don’t worry, everyone’s going to lose their job soon.
Daniel Barcay: So that’s the direction we want to go. You contrasted Walmart with Amazon and you’re saying, okay, we want to be in a world of much more creative management, a much more creative understanding of how we can all play a part. But I’m not convinced that that’s the world we’re going to end up in. My worry is that these beautiful stories about AI unleashing our productivity are going to feel relatively short-lived as eventually entire job functions get replaced and the pressure is to just do away with them. Are we pulling up the job ladder underneath us? Are we removing all these entry-level positions?
Ethan Mollick: In a lot of ways, this conversation really has the same story behind it as every other conversation about AI, which is that what we’re really asking is how good will the models get, and how fast? I think that the GPT-5 class models are good enough to transform all of work, but they will transform it gradually over the next 10 years as people figure stuff out, which is enough of a chance to say, what should we do differently? And by the way, part of the reason why you might not want to just turn productivity gains into job losses is that if your productivity gain is the models doing the work, all your competitors and every person in the world have exactly the same model as you do. There are like nine AI models that matter in the world right now. And with these nine models, there’s no source of competitive advantage in the long term in having the same AI as everyone else run your decision-making process.
So there might be reasons you want to still have things done by humans, or done differently. But the bigger question that we’re all asking is how good do these models get, and how fast? And the goal of every AI company is AGI, artificial general intelligence, a machine smarter than a human at every intellectual task. They think they will get there in the next two years. Some people already think they’re there. But you could see that may not transform jobs overnight. That is the question, though. If models are better than humans across a wide variety of tasks, then it’s a matter of time, and we have to figure out what everyone does with their lives. If that doesn’t happen, and the technology stalls out or the jagged frontier is too jagged, then we’re in a world where we’re going to see competition between people who use AI as augmentation and people who use it as automation.
I think augmentation will often win over automation, but we don’t know yet. And that’s really the big question.
Daniel Barcay: So let me back up for a second. One of the things we often cover at the Center for Humane Technology is that people radically underestimate how transformative, to the good and to the bad, a technology is. And they come with simple narratives about what it is going to do. And then we’re surprised five or 10 years later when the technology was so much more complex than we thought, when we drove the car into one ditch or the other by the side of the road because we didn’t stop to imagine what it would do to our world. And I guess what I’m trying to ask you is, if we look at the next few years, what are the transformations coming to our labor market that you two can see, but that people won’t have thought about?
Ethan Mollick: On my end, I would say I think that people are underestimating the level of quality of work that these systems can produce. And I’m partially at fault. When I wrote Co-Intelligence, intern was the right analogy to use for AI. It is not working at intern level anymore. And I think that one of the things that will blindside people a bit is how capable these systems are. I am now getting fully automated papers out of these systems that I would be impressed by a second-year graduate student producing. We’re not there yet in replacing me as a professor, but if you had told me I could get a high-quality academic paper out of these systems, I wouldn’t have believed it. And if I throw something into GPT-5 Pro, it finds errors in my papers that 10 seminars, the review process, and a thousand citations since have never located before. The changes to high-level intellectual work, I think, are coming faster than people are expecting.
And then I think the big bet, the possibility one way or the other, is agents, which just in the last four months, for a variety of really interesting reasons, have just started to work. And the question is, are they going to get as good as what people think? Because then it becomes very different when I can just say to the AI, “Hey, go through my email, figure out what my priorities are, email our top sales prospects that I haven’t paid attention to, go back and forth with them, build the customized products and proposals they need, and just take care of stuff.” That is what the labs are aiming for. And if we’re there, that’s a very different change than I turn to the AI and ask it to write the proposal. It’s not a good proposal. I ask it to change the proposal again.
And then I check my email because the AI can’t check my email and then it misses some of the context of who the person is. I think that people are not expecting models to get as good as I think they’re already getting.
Daniel Barcay: But all of that leads me back to the question we started with at the top, which is, I’m afraid this notion of a gradual labor transition, where we wake up one day and it’s at 10%, then it’s at 20%, then it’s at 30%, is not how this is going to be. We’re going to wake up one day to realize that the connective tissue between the different tasks that make up our jobs is suddenly something an AI can do, and suddenly an entire function is automated. Aren’t we likely to see these big punctuated changes, where radiologists are safe right now because they’re overseeing the AI, and then all of a sudden you wake up next month and, you know what? We don’t need radiologists anymore.
Ethan Mollick: Yeah. If the agent stuff works the way the AI labs want, and there are lots of ifs in that statement that we could talk about, if it does, then yes, it will be slowly and then all at once, because the problem with substitution is everything we’re talking about with the process. If the system isn’t very good, if you have to do a lot of work building custom solutions, if you have to ask mid-career people to replace themselves with AI, you’re going to have all sorts of forms of resistance. But if I can just go ask an AI agent, do this task, figure it out, then we have a very sudden change. And that is the world that people are aiming for. And so again, we don’t know.
Daniel Barcay: And Molly, how does that affect your work? What do you think?
Molly Kinder: I think this notion of a drop-in remote worker, vis-a-vis an AI agent, is what is driving fear in people, because that is unbelievably disruptive. If the AI labs can truly create an agent that you can literally just drop in, that covers certain functions and is basically a virtual teammate, that vision is extremely disruptive. Personally, I think we are overestimating how quickly that’s going to come and underestimating how many bottlenecks there are that are very interpersonal and systemic. I mean, most of our jobs don’t look just like coding, and I think there’s a reason why coding is out in front. The real world is far messier. When I sit in Washington DC, I often work out of Le Pain Quotidien on Capitol Hill, and I’m surrounded by lobbyists and people whose whole world is relationships and influence. And when I go to Silicon Valley, they live in a world of coding. There are many aspects of our jobs that I think are not going to be so easy to replace with a drop-in remote worker.
The problem with substitution is everything we’re talking about with the process. If the system isn’t very good, if you have to do a lot of work building custom solutions, if you have to ask mid-career people to replace themselves with AI, you’re going to have all sorts of forms of resistance. But if I can just go ask an AI agent, do this task, figure it out, then we have a very sudden change. And that is the world that people are aiming for. - Ethan Mollick
So I don’t have the same AI 2027 fear that we’re staring that down a year from now, and I agree with Ethan that I expect this to be more gradual than what you’re hearing from Silicon Valley, but there could be pretty dramatic punctuations. If agents get really good, I think it will start moving a lot faster. The other thing I would say is I totally agree with Ethan and you, Daniel, that the public in many ways is underestimating how good these models are getting at certain very skilled, highly cognitive tasks. When ChatGPT research came out, that is my job. So I had that experience, this moment that Ethan talked about in his book. I felt it. I mean, my hair is standing up on my arms right now, because I had that out-of-body experience when I got access to it.
I asked it to write a paper I have wanted a famous economist to write for years, which is, what can we learn positively from the last few decades of technology automation from women, because women have been a lot more resilient than men. So I gave it a bunch of really high-quality papers and some people ... The paper that ChatGPT put out was so well done. I’ve shared it with lots of extremely influential economists as my example of how good this is. And this is going to creep up in so many different very expert, high-quality knowledge jobs. And that for society is dramatic change. Just a few years ago, if I had been on this podcast before ChatGPT’s launch, which was three years ago this month, I never would’ve identified these highly skilled, highly cognitive roles as being susceptible. So I still think, in the real world, it’s going to be slower.
To your point about radiologists, I actually think it’s going to move slower to fully replace humans in some of these roles, but I think businesses and sectors are going to be disrupted, roles are going to be disrupted, it’s going to be uneven, but it will happen. And I think what instills fear in the heads of so many Americans is this sense of Russian roulette. Are you going to be the person that’s going to wake up one day and there’s a version of ChatGPT research that can do your job? And I think that’s terrifying to people. These are careers people have spent a lot of money and a lot of time on, their education, years of experience, and I think people feel quite vulnerable. But again, the caveat to that is I don’t think in two years we are facing down PhD-level drop-in remote workers that are going to substitute for most of us.
But I have three kids; my oldest is 10. So when I look out 10 years from now, that’s still when he’s in college. This is still within the lifetime of a lot of us, especially those of us with kids. Where this could go could be mind-boggling, but I think we should take some comfort that tomorrow our organizations are not going to be full of drop-in remote workers.
Ethan Mollick: And I agree. But I feel like what ends up happening sometimes in these discussions, and I think, Molly, we’re on the very same page about this, and Daniel, is that there’s this view that it’s either all hype, or it’s 2027 and there’ll be superintelligent machines and we’re all just going to be building machine pyramids for them or something like that. And I think there’s a tendency to swing to one side or the other.
Molly Kinder: Exactly.
Ethan Mollick: And especially for people who are kind of rational people who study this field, like us, the tendency is to say, “The hype is overblown.” The hype is off, almost certainly, but it’s not off by as much as people who ... That doesn’t mean things look normal in the near future.
Daniel Barcay: Right. Well, it’s like the hype is overblown, but the skepticism is overblown too.
Ethan Mollick: Right. And the timeline is there. There’s enough value in the models now that people will figure out a way. Let’s say there’s a financial collapse of AI stuff. I’m not convinced that there’s a bubble, but there could be a bubble; I don’t have any idea. I don’t think that matters very much. A lot of people think that something is going to make this all go away, that we’re going to hit some limit and then AI is done for. So either you can ignore it, or you have to panic all the time. And I think we are in the world’s either best or worst place, which is: you have agency right now. This is the time for policy intervention. This is the time for companies to show models of good use. But it is not a time where either we’re all doomed or we’re all saved.
I think we are in the world’s either best or worst place, which is: you have agency right now. This is the time for policy intervention. This is the time for companies to show models of good use. But it is not a time where either we’re all doomed or we’re all saved. - Ethan Mollick
Molly Kinder: I love that statement so much. And actually, that was partly the motivation of the research paper I put out with Yale: not to say there’s nothing to see here. I very firmly believe that this technology has enormous capability. But it was to say, look, we have a moment to catch our breath and shape the way this is going to play out. I don’t like the fear-mongering coming from Silicon Valley, in a way that strips us of our agency: this thing is coming tomorrow, there’s nothing we can do to stop it, it’s this inevitable force, every job loss is all about AI, this is coming for you, don’t even go to college. I mean, that is sometimes the tenor of the conversation. Part of what we wanted to do was ground the conversation. To say that today we are not yet in a jobs apocalypse is not to say it will never come.
It’s to say: let society catch our breath and let us steer this. Let us have agency, because this is not going away, and every day it’s getting better. So we do have to make sure that we are steering it. And again, a lot of the incentives in the system are not steering us toward a pro-worker vision.
Daniel Barcay: Earlier in this conversation, you said, Molly, that you’re not worried about the tech or organizations, you’re worried about the wrong incentives. Pull us into that. What are the incentives that you’re seeing and why does it worry you?
Molly Kinder: Yeah. So first of all, I worry that we are spending an absolutely mind-boggling amount of money investing in these systems. And one of my fundamental worries is: are investors expecting an economy with a lot of those drop-in remote workers?
Daniel Barcay: So just to be clear, you’re saying that because a trillion dollars has already been poured into this, there’s an expectation of a return on that capital, and that expectation could mean turning the screws on business models, turning the screws on workers. Is that what you’re saying?
Molly Kinder: These are decisions that are going to be made at the employer level. It is going to be the decision of employers how much this is used to get more out of your workers, to augment, to unleash new possibilities, to grow, versus simply as a cost-cutting exercise and a race to the bottom. And my worry with a lot of the pressure on the C-suite is: we’ve got to show some return on our investment in the short run. And one of the quickest ways to get there is this kind of race to the bottom with labor savings. Then you see Morgan Stanley coming out with, here’s the potential return on all this investment, and it’s a huge number, and over half of it was coming from labor savings. It does make you question what the incentives of this are.
Let society catch our breath and let us steer this. Let us have agency because this is not going away and every day it’s getting better. So we do have to make sure that we are steering it. And I think, again, a lot of incentives in the system are not steering us toward a sort of pro worker vision. - Molly Kinder
And are we operating in a world where, if you take the long view, these employers are going to need to train up their future level threes and level fours, who will be able to do things that technology can’t do? Or are they just thinking about their short-run costs? Let’s cut our entry level, be damned if that means in three years we won’t have a pipeline of talent. So I worry that some of these incentives are going to push us into a world that is not optimal for workers, and might steer us into a world with pretty phenomenal inequality. Who benefits from this technology and who doesn’t? That is really what keeps me up at night.
Daniel Barcay: Ethan, do you see the same picture?
Ethan Mollick: Yeah. I think that’s a wise point: what are the incentives? Leaving aside bubbles or not bubbles. On the other hand, if you talk to the AI labs, I think they still view this as: scientific research gets accelerated, and it’s abundance for everybody. We just don’t have a path that leads from where we are now to abundance for all. There’s a policy decision to be made. There’s the question of what that looks like. And there’s just the fact that even if everything works out great, living through industrial revolutions historically sucks.
It was a tough time in the early industrial revolution. Lifespans fell before they rose again. And so I don’t know the model there, but I do think there is concern about whether there’s a gentle pathway ... There’s a lot of attention paid to hard takeoffs of technology. One thing Molly’s pointing out, which we should be talking more about, is hard takeoffs of automation, versus having a period with more competing designs for how we approach using AI, more humane designs. I have a feeling some of those will win. I think there are more solvable problems with output than people think. I think the bitter lesson is that if you want a particular output, AI is really good; you can teach an AI to produce that output. But if process matters ...
And so the answer to the Brookings problem isn’t just that everyone could do these reports. It’s that a better report would come from everyone debating with each other while writing it; you’ll end up with a better report in the end. So the question is, how do we reestablish the idea that process matters, that interaction matters? And I think giving ourselves more time to decide would probably be helpful.
There’s just the fact that even if everything works out great, living through industrial revolutions historically sucks. - Ethan Mollick
Daniel Barcay: So given all these powerful incentives, these cost-cutting incentives, these labor-replacement incentives, how do we shift them toward that future you’re pointing to, Ethan? If we could design something different, in policy, in the way we roll this out in companies, in the way people use it, what are the levers to end up with a better outcome?
Ethan Mollick: My self-serving view, from being in a university, is that this is the time universities actually could be extremely helpful, because we might need to bolt on an extra session that is apprenticeship, but for knowledge workers, something we’ve always trusted to happen inside organizations. Maybe we need to treat level-two consultants as if they were welders and have more formal training, with testing and other things built in. We do know how to do that, but we’d have to shift the incentives to make it happen. The other example is more R&D effort going into positive use cases for AI. I do a lot of work on AI in education, and it baffles me that there has been no crash effort to build the universal tutor yet. As somebody who’s spent, at this point, 20 years building technology for education, there’s a lot of cynicism in the education community about how technology works, but we actually have some early evidence that AI tutors are amazing.
And certainly for people who don’t have access to enough schooling or something similar, we need crash programs like that. What’s the crash program for how humans can work with AI workers? I think the incentives are misaligned in that direction. A lot of academia and policy institutes, Molly aside, aren’t taking very seriously that this is actually a big disruption. And there is some intellectual lift required right now to incentivize people to show: here’s a way humans can work with AI to be better than the AI alone. That’s not happening yet.
Molly Kinder: Yeah, it’s really hard to come up with big, bold ideas that can change the incentives. That has been my express mission; 2025 is a year of solutions. So I’ve been batting around some big ideas. First, at a very high level, and I want to acknowledge my friend Stephanie Bell at the Partnership on AI, who has several times shared this idea with me: starting with benchmarks, every time we are measuring AI, it’s whether or not it’s better than a human. Right off the bat, that steers us in the wrong direction. Why are we trying to best humans? Why isn’t the benchmark some kind of combination, making the human better? So right off the bat, we have all the wrong incentives: we’re measuring the thing that is actually probably not good for society. Then you can imagine funds.
We’ve got DARPA; we have all sorts of federal money going toward innovation. Why are we not steering that toward a new benchmark, where you can prove that the output you’re aiming for is leveling up humans in some way? You could imagine tying some innovation funding to that, and that could be somewhere the public sector can really make a difference. Another area that’s really important: I’m thinking a lot about employers. When we think about how AI is going to impact work and workers, it’s going to happen in the workplace. So the question becomes, what are the incentives of employers, and what levers do we have to nudge things in a better direction for workers? That could be everything from more of a focus on augmentation versus automation, to sharing the gains.
Every time we are measuring AI, it’s whether or not it’s better than a human. Right off the bat, that steers us in the wrong direction. Why are we trying to best humans? Why isn’t the benchmark some kind of combination, making the human better? So right off the bat, we have all the wrong incentives: we’re measuring the thing that is actually probably not good for society. - Molly Kinder
What happens when there are big productivity gains? Are workers going to get paid more? Are they going to get more time off? There are big questions around that. What are the levers by which we can steer in a better direction, and can public policy play a role? I’ve been working for a few months on a big idea, which I hope to publish soon, around how we can change the incentive structure of employers vis-à-vis these entry-level hires. Ethan was saying, yes, we can certainly imagine a world where, through universities, you take on more schooling to get that apprenticeship. But then those costs fall to the young person, while the employers are the ones getting the cost savings and the extra profit from cutting. What kinds of incentives, what carrots and sticks, can we use so that employers still do some of that training?
Daniel Barcay: I’m searching for credible visions that paint a pro-social version of those incentives, but I have to say, I’m not very optimistic on that front.
Molly Kinder: Well, Daniel, one of the reasons I feel some pessimism is something I’ve documented with colleagues: the great mismatch. If you look at the sectors of the economy with the greatest exposure to AI, meaning where we expect the greatest disruption, they have the lowest union density across the entire economy: typically 3 or 4 percent, as low as 1 percent in finance. That means 90-plus, 95-plus percent of workers in these sectors have no collective bargaining. If we lived in a country with more collective bargaining, then if there were gains, if workers became more productive because of AI, could do far more, and could almost level up to a new role, you could imagine a process by which workers figure out some gain-sharing. We don’t have that kind of power in the workplace. So it’s either going to be left to employers to voluntarily take a high-road approach.
And I will say, we have no definition in this country of what it looks like to be a high-road employer on AI. We do for things like wages, but we don’t have that high road yet, and I think we should develop it and build consensus around it. Or is there going to be some public policy that forces this? I don’t think we’re anywhere close to that right now, given where we are on the AI trajectory, but could you imagine legislation that imposes something like a four-day workweek? What are the mechanisms by which there is gain-sharing? That is one of the big questions.
I think right now so much of the policy discussion is either re-skilling or redistribution through something like UBI. Nothing is about the work itself and how we make sure workers benefit as they become more productive with AI. And these are some of the north-star big ideas that we as a policy community need to come up with.
Daniel Barcay: We’ve spent so much time in this conversation talking about AI’s effects on people just entering the labor market, the ladder being pulled up behind us and everything. If you each had one piece of advice for people entering the labor market, what would it be? And I want to start with you, Ethan, because you’re a professor at Wharton: what is your advice for your students in an age of AI?
Ethan Mollick: My first joke to everybody who asks this is that they should go into a regulated industry that can’t be changed because of too much government oversight, or enough government oversight. But outside of that, I think jobs that are bundles of tasks, incredibly diverse ones covering a lot of different kinds of interaction, are safer. I think of doctor as one of these. You wouldn’t expect someone to be equally good at hand skills and empathy and administration and diagnosis and keeping up with the research. That’s a nice example. Professor is one where my job is clearly going to be disrupted, but a professor does many things. Many of our jobs are very complicated.
So I think a single-serving job, where you’re doing one narrow thing like writing a press release every day, is a much riskier job than one with many interactions with many sets of people, at different kinds of levels, in the real world. There are a lot of good jobs like that.
Daniel Barcay: That’s really interesting. That’s like the career you should be taking is one of breadth and-
Ethan Mollick: I think so, because what does this help you with? Also, I’m an entrepreneurship professor. When you think about it, entrepreneurship is all about being really good at one thing and hoping that none of the other stuff you’re terrible at destroys you. And this is a great time for entrepreneurship too, because the AI stops you from being at the zeroth percentile of the few things you would’ve been at the zeroth percentile of; now you’re at the eightieth percentile of everything you’re not amazing at. So jobs where you’re held back by one or two skills might actually be really interesting places for the future too. Where I’m not a good writer but I’m incredibly good at working with people, maybe a sales job I couldn’t do before is now doable in a way it couldn’t be. So I’d be focusing on bundled jobs, complex jobs, jobs that require many sets of skills.
Molly Kinder: So I think for young people: be good at being a human. Relational skills, being influential, being able to get up and speak and motivate and connect with people, are definitely things that AI is not going to be able ... Anything embodied like that, I don’t think AI is going to be very good at right now. I would also say AI has so many superpowers. Embrace AI, find your passion, and make sure, again, that you’re as much flexing your humanness as you are being a vessel by which AI makes you powerful.
Daniel Barcay: I think neither of your jobs is under threat right now, but both of your jobs are going to change wildly over the next few years. And I look forward to keeping up with both of you as we ride this wave. Thanks for coming on.
Molly Kinder: Thanks, Daniel. Really appreciate it.
Ethan Mollick: Thanks for having me.