Podcast
- 26 Mar 2025
- Managing the Future of Work
Erik Brynjolfsson on how AI is rewriting the rules of the economy
Bill Kerr: It’s spring 2025, and generative AI’s disruptive potential is a major fixation. Will it replace or boost workers, and who will ultimately benefit?
Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Bill Kerr. It’s my pleasure to welcome Stanford economist Erik Brynjolfsson. Erik is a leading expert on the economics of information systems and a widely cited observer of the impact of artificial intelligence. Erik is director of Stanford’s Digital Economy Lab, a fellow at the University’s Institute for Human-Centered AI, and a research associate at the National Bureau of Economic Research. His books include The Second Machine Age and Machine, Platform, Crowd, both written with Andrew McAfee. He was also the co-chair of the National Academies’ report on “AI and the Future of Work.” We’ll look at AI’s impact on productivity and worker experience, the evolving skills picture, and the implications for education. We’ll also talk about what’s behind the underwhelming early business returns and how firms can revamp their processes to leverage the technology. And we’ll consider how governments and businesses are addressing AI safety. Erik, welcome to the podcast.
Erik Brynjolfsson: Great to be here, and great to talk to you, Bill.
Kerr: Erik, you have been at the forefront of studying technology for a long time, and I dare say, you were studying AI before it was even cool to be doing so. So I’d love for you to tell us…
Brynjolfsson: It’s always been cool. What do you mean?
Kerr: Yeah. Well, you were saying that, but people were sometimes skeptical. But they’re not probably now. And so I’d love you to tell us a little bit about that journey and then also whether, for you, this does seem like a special inflection point.
Brynjolfsson: Sure. Well, I could go back to when I was a teenager, when I spent a lot of time reading science fiction. Right after college, in the late 1980s—that’s how old I am—I actually started teaching a course on artificial intelligence at the Harvard Extension School with my classmate Tod Loofbourrow, about expert systems, which are these rule-based systems, and about applications of artificial intelligence. And that was a lot of fun. We got to meet a lot of people in industry who were building these systems. And we actually started a company, Foundation Technologies, in homage to Isaac Asimov, my teen hero. So I have been involved in this a long time. And when I went to MIT for my PhD, I tried for a little while to do both AI and economics at the same time, but I was the only one trying to do that, and it didn’t really work. So I ended up sticking more to the economics and business side, while keeping a kind of avocation, playing with the technology on my own and, of course, writing and teaching about it. It’s great to see how AI is having, well, more than a rebirth, just an explosion, especially since ChatGPT in November 2022.
Kerr: Erik, you’ve been a part of that upward slope for a long time, but there have been times when people anticipated a transformational moment and it turned out not to be one. So I’m curious about the current inflection point. Do you think now is the moment? We’re already seeing significant things, but is it going to continue to compound at this rate?
Brynjolfsson: Yeah. I don’t think we’re going to have another AI winter. That’s what it was called in the ’90s when things kind of went backwards. These exponential trends are difficult to predict. History is littered with people, going back to Marvin Minsky in the ’50s and ’60s, predicting that [powerful] AI was just around the corner. But now, anyone who uses ChatGPT or Claude or Gemini can see that it’s actually answering some questions pretty well. Yesterday I spent a couple of hours working with Deep Research, a great tool that a lot of professors and a lot of other folks are working with. I would compare it to a good grad student: it doesn’t necessarily have all the answers, and it doesn’t always get everything right, but it totally turbocharges my productivity.
Kerr: I have to say, there have been several of these moments for me that were like, “Oh my goodness!” Probably the first one came with ChatGPT, when you ask it to write a Shakespearean sonnet and it magically does it. But even more important, I think, was a more recent one, when I used Deep Research: I put in a question about a topic I knew a fair bit about and asked it to create a 4,000-word summary. And I was rather impressed with what came out on the other side.
Brynjolfsson: Yeah. I was annoyed last year when I asked one of these systems about a book I’ve been working on. I put in some of my ideas and asked it to work on them. And it came up with these other ideas that I thought only I had thought of; it was already thinking of them. So I was like, “Uh-oh, I better get working on this faster, because even the AI systems are …”
Kerr: I think all of us should satisfy those book contracts as quickly as possible. One of your recent academic studies with Lindsey Raymond, who’s now at HBS, and then also Danielle Li at MIT introduced generative AI into the call center environment and was able to look at the impact on workers. So tell us a little bit about that study.
Brynjolfsson: So this was a company that rolled out a large language model [LLM] to assist call center workers. They rolled it out to about 5,000 workers. Some of them got access to the technology, some of them didn’t. So we had basically a controlled experiment. We used some of the tools from the credibility revolution [of data-driven validation] to get causal estimates of whether or not the AI system was actually having an effect on a bunch of different KPIs, key performance indicators—things like average handle time, call volume, customer satisfaction, customer sentiment, and employee turnover. And what we found was that very quickly there were big benefits for the people who got access to the technology. On average, they were about 14 percent more productive, although some workers got about a 35 percent productivity gain. Customer satisfaction went up. And even the employees seemed to be happier: there was less turnover among them. So really all three groups—stockholders, customers, employees—got measurably better off.
Kerr: One of the things I’ve found interesting about that study is its similarity to a study on the consulting side done by researchers from several universities. Both studies found that it was the least experienced or newest workers who benefited the most. And I’m curious whether you think that’s the most likely way AI will affect different parts of the workforce differentially.
Brynjolfsson: Yeah, that’s such an important question. And that Jagged Frontier study was really interesting, where they worked with the BCG consultants and found a similar result. In our study, the bottom 20 percent of workers, whether you measure it by skill or by experience, had the biggest gains. The top workers, the top quintile, actually had basically zero gains. I think what was going on there is that the LLM basically looks at these millions of transcripts and learns—it’s machine learning—learns what are the best ways to answer certain kinds of questions, and then it makes them available to everybody. LLMs are great at capturing tacit knowledge, something that wasn’t really possible before machine learning. Well, it turned out that the best workers were kind of getting their own answers fed back to them. They’re like, “Yeah, I know that. That’s what I just said.” Whereas the less-skilled workers were actually getting some benefits from it. Now to the second part of your question: Is this a general pattern? Well, we’re seeing it in a lot of places. But also, recently, there was just another terrific study by Aidan Toner-Rodgers, an MIT PhD student, and he found the opposite. He found that, while the systems were improving productivity, in his case, the scientists who were most experienced got the biggest gain, and those who were less experienced got smaller gains. And there have been some studies in medicine and elsewhere where the effect also flipped around. So I don’t think this is a general phenomenon. I’m trying to think hard about when it’s one way versus the other. I have a nascent hypothesis I can test with you. I think that when there’s a right answer or a ceiling, then the LLM brings people up to that—this is how you reset your password, or whatever it is that people are calling the call center about. And so it sort of makes sense that it levels things out. You don’t get a better answer than the correct answer.
But when it’s something like scientific discovery, where there isn’t really a ceiling—at least not one that I know of—then there’s room for the best to get even better than they were before, and that seems to be what’s happening. And there also have been some studies of medical systems where it was sort of a sad result. Eric Horvitz and others published a paper in the Journal of the American Medical Association last fall where they had three conditions: no AI, AI alone, and AI plus human. And all of us would have assumed that AI plus human would do at least as well as, or better than, the AI alone. It turned out not to be the case—the AI plus human did worse than the AI alone. And looking more closely at it, I think it’s similar to the Jagged Frontier result. The human users didn’t always understand when the AI was making a mistake and when it was helpful, and they actually were more likely to overrule it when it was doing the right thing. So this really highlights the importance of having users who understand the system. It also highlights, perhaps more importantly, the importance of having an AI system that can explain its reasoning. If you’re a medical doctor and the AI system says, “Hey, you need to cut off the patient’s left leg,” and the doctor says, “Why?” and it says, “Well, I’m 90 percent sure. Just do it.” I don’t know how many doctors would just say, “Okay.” They want some explainability, some interpretability. And unfortunately, for all their strengths, one of LLMs’ weaknesses is that they have hundreds of billions or even trillions of parameters, and it’s very hard to explain what’s going on under the hood. And until we crack that, it’s going to be harder for humans and machines to work together in a trusting way. I don’t think a black box is going to work as well with humans as one that is explainable.
Kerr: Erik, you were recently at a conference in Paris, I believe, that was about the AI safety questions and what is the future of governance on this space. I know it’s an impossible question, because we’re going to be with this for decades to come. But what do you see as the most near-term things that companies and the providers of the technology and so forth are going to have to grapple with on the safety dimensions and the boundaries that we set up?
Brynjolfsson: Yeah, it’s a big question, and it’s getting more and more important. One thing I noticed in Paris, actually, was that, while safety and security and privacy are incredibly important, they actually de-emphasize them a little bit, compared to what I’ve seen in previous years. Everyone knows that the Europeans are world experts at regulation, and they’re leaders in that. The most common question I got in the different sessions was, how can we be more like America? How can we be innovating faster? How can we unburden some of our companies to move more quickly? Certainly, done right, regulation can be helpful. It can make the users more comfortable and secure. Right now, a lot of people are afraid that they’re infringing on someone’s copyright or that there’s a safety hazard. And so, if you can alleviate those concerns, it actually could speed adoption. But done wrong, it can really slow down innovation. And so having smart regulation that protects people without unduly burdening companies, ultimately, is that delicate balance that needs to be found. I know that both the United States and Europe are trying to work their way through that. And over time, I think it’s less a matter of turning the dial toward more or less and more a matter of being smart about which regulations to implement. AI is capturing intelligence, and it’s hard to think of things that are much more important than that. And if we can improve or solve intelligence, we can do so much more with healthcare, with poverty, with the environment.
Kerr: And just to make a connection back to your book, The Second Machine Age, which we mentioned in the introduction: you begin that book by describing how, if you look at human history and human progress, it’s this very slow, creaking upward line until there’s a significant bend upwards. And that bend came with the industrial revolution and technology. Notably, it didn’t come with the many different cultures, the many different religions, or the many other things that happened along that stretch of time.
Brynjolfsson: That’s right. The thing that really bent the curve—like literally bent the curve, depending on how you measure human living standards—is technology. It’s the steam engine that ignited the industrial revolution. We call [the new era] “The Second Machine Age” because those [earlier] technologies basically augmented and automated our muscles. Now we’re beginning to do the same thing for our brains, our minds, and that’s got to be at least as consequential, if not more so.
Kerr: All right, let’s go back and then think about how this is going to play out in the world and in jobs. And we often think of jobs as being bundles of tasks with potentially the call center workers being maybe a type of job that would have a relatively narrow range of tasks that are being associated with it, but other jobs being a more complex bundle. Talk to me about how you’re working through both academically. And you also have a firm, I believe, that’s connected to this practice. How is this going to play out in the labor market? What do you anticipate this meaning in terms of the types of jobs available for people? And will it be that everyone’s being partially affected, or will some be more significantly impacted?
Brynjolfsson: Yes, that’s a great question. And you’re absolutely right that the way I think about it, and the way I think everyone should be thinking about it, is not at the level of whole jobs but by breaking things down into individual tasks. Most jobs consist of dozens of specific tasks—even call center workers, radiologists, economists, professors, truck drivers. There are many different individual tasks they do. And when you look at it that way, you can see that there are some of them that AI can help a lot with: writing memos, for instance. There are others that AI is completely useless for—at least current gen AI can’t lift a box. We humans, at least for now, still have a comparative advantage in improvisation and exception processing. So what we’re seeing is that, in basically every occupation we look at, there are parts that are being automated and augmented, and there are others that are unaffected. It never seems to run the table and do everything, at least not yet. Maybe someday. And what that means is, we won’t see mass unemployment or job loss, but rather a lot of restructuring and reorganization. The call center workers maybe have simpler jobs, compared to some other jobs. But even there, what we found was a sort of Pareto curve of tasks. There were some kinds of questions that were very common, and there were other questions that only appeared once or twice in the data set. And those rare questions were the ones where humans had a comparative advantage. You mentioned the company. We have a company called Workhelix, which is taking this academic research and bringing it to companies. Our goal is to make sure they’re not wasting so much money on inappropriate uses of the technology and are focusing on the tasks where the payoffs are high. We actually have a very fine-grained taxonomy of about 250,000 tasks.
For most companies, we can identify numerous opportunities where there’s a very high return and steer them away from the many others where, frankly, they’re wasting money right now because the returns are not that high. By using this task-based approach, I think we’re going to shorten the time between amazing technology and big business payoff to something much more manageable than what we’ve seen before.
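[Editor’s note: To make the task-level arithmetic concrete, here is a minimal sketch of scoring an occupation as a bundle of tasks. This is an editorial illustration, not Workhelix’s actual methodology; the task names, time shares, and suitability scores are all hypothetical.]

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    share_of_time: float   # fraction of the job spent on this task
    ai_suitability: float  # 0 = AI useless here, 1 = fully automatable/augmentable

def augmentation_potential(tasks: list[Task]) -> float:
    """Time-weighted share of a job that AI could plausibly help with."""
    return sum(t.share_of_time * t.ai_suitability for t in tasks)

# A toy call center job: common questions score high, rare exceptions low.
call_center = [
    Task("answer common password-reset questions", 0.5, 0.9),
    Task("handle rare, one-off complaints", 0.3, 0.2),
    Task("log and escalate tickets", 0.2, 0.7),
]

print(f"{augmentation_potential(call_center):.2f}")  # 0.65
```

Ranking occupations (or business processes) by a score like this is one simple way to steer investment toward high-payoff tasks rather than whole-job automation.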
Kerr: Staying with that: clearly the technology is astounding and getting more so, but a lot of companies are frustrated when they look for its impact on their bottom line. What are the use cases they think will be great, where they start down that route but then find the benefit isn’t what they anticipated?
Brynjolfsson: I think they have overly high expectations about what the technology can do in terms of automation. So we see them sometimes trying to implement 100 percent automation in the call center, or complete self-driving on the road. These turn out to be a lot harder than you might have thought initially. I think both engineers and managers are subject to an optimism bias, where they read the science fiction, and they think the machine can do everything. Most of the time, a partnership between humans and machines works better than complete automation. That allows each to do the things they are strong at, and you end up with a much more robust system. Also, when you focus on that, you start identifying KPIs that involve higher quality and better customer service and new products, doing things better than you did before, rather than just squeezing the costs out. And the last benefit of doing it that way is that it’s a lot easier to get buy-in from your employees when they see this as a tool that augments what they’re doing and allows them to do their job better, rather than one focused on removing costs and eliminating headcount.
Kerr: This perception among employees leads me to a hypothetical. Let’s say a company starts at some size but is going to end up with 100 employees once AI comes up the curve. Do you anticipate that all 100 of those employees will be, more or less, directly interacting with the AI systems, with a hands-on-keyboard kind of approach? Or will it still be that two-thirds of those employees are doing things that don’t bring them into daily contact with the AI systems wrapping around the company, while one-third are concentrated on that complementarity?
Brynjolfsson: Well, I think all of them will have their jobs affected by AI. Now whether it’s very visible in terms of hands-on keyboard, that’s not necessarily true. It may be that a truck driver is having their routes optimized by AI and other AIs in the background doing a lot. Or even right now, even when people are doing searches or whatever, they don’t realize how much AI is under the hood. But more and more, people will also be directly working with AI. In an article in the Harvard Business Review called “The Business of Artificial Intelligence,” Andy McAfee and I, the last line we put in that article was, “AI will not replace managers, but managers who use AI will replace managers who don’t.” And I still very much believe that. So the thing to do is think about how you can use AI to enhance your job, and that’s going to give you a competitive advantage over other companies. In just about every occupation, there are numerous tasks where that’s true, every process. So the way to find those is with the task-based analysis. It can happen at the top of the organization with the C-suite, but even all the way up and down the organization.
Kerr: Do you think going back to the credibility revolution, which kind of brought causal inference to economics, that companies will need to do a lot more of that internally? Is it the case that they’re going to know through these taxonomies what to do? Or is it more of we don’t really know the direct way that this technology will impact workers? It will, but let’s try out several different things and find the one that works best.
Brynjolfsson: Yeah. I’m so glad you brought that up, because I think that’s a huge opportunity for businesses, and I hope your listeners go and Google and read about the credibility revolution and think about ways of adopting it. Basically, the credibility revolution is all about taking all this observational data that business executives and economists have been looking at—and we’ve got more digital data now than we’ve ever had before—and understanding those correlations and when they can be interpreted as causal estimates. We’ve all heard the slogan, “Correlation is not causality,” but sometimes it is. If you do the right controls, if you use synthetic difference-in-differences or instrumental variables and do things correctly, you can credibly infer that A is causing B, that AI is causing business value. I think if more and more companies adopt this data-driven approach that economists have been using in our papers, we’re going to be able to get a lot more value from the technology.
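[Editor’s note: For readers curious what a difference-in-differences estimate looks like mechanically, here is a minimal sketch. The figures are invented for illustration and are not from the call center study; real analyses add controls, standard errors, and checks of the parallel-trends assumption.]

```python
from statistics import mean

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    # Change for the treated group, net of the change for the control group,
    # which absorbs any time trend common to both groups.
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

treat_pre  = [9.5, 10.0, 10.5]   # resolutions per hour before the AI rollout
treat_post = [11.3, 11.8, 12.3]  # after getting the AI assistant
ctrl_pre   = [9.8, 10.0, 10.2]   # workers never given access
ctrl_post  = [10.2, 10.4, 10.6]

print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))  # ≈ 1.4
```

The naive before/after comparison for the treated group (1.8) overstates the effect, because the control group improved by 0.4 over the same period; the net 1.4 is the credible causal estimate.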
Kerr: We’re recording this in March of 2025, and I think at various moments through early 2025, DeepSeek has been in the news quite a bit. And I’d love for you to share with us a little bit about where you see the future around these lower-cost models, versus those that are the frontier models, and how this world is evolving.
Brynjolfsson: DeepSeek is still reverberating through the AI universe. It was a model that was 100 times cheaper and faster than some of the models that had been popular otherwise. It performed as well, or almost as well, as the frontier models. And that was shaking up what we expected, especially coming from China. One of the things that this really highlighted for me was that these much smaller models could be incredibly valuable. And even though most of the dollars and most of the attention has been to the big hyperscalers—the so-called “frontier models” that come out of the leading AI companies, where they’re spending not just tens of billions, but they say they’re going to be spending hundreds of billions of dollars for the very best models—you can train a model for much, much less that’s still quite useful. And when I think about the typical business, I think there are a lot of opportunities for these small and medium models. There will be people doing scientific research or at the Department of Defense that want the very biggest and best models, and it’ll be very expensive. But there’ll be other situations where a small operation could do just fine with a model that literally runs on an iPhone and solves all the problems that they need to have solved. So just as with other parts of the economy, there’s room for lots of different-sized companies.
Kerr: Well, bringing the economy back into this, you’ve described a J-curve effect around productivity that can help both reconcile the technology we observe, the individual tasks that we see, but then also why businesses can struggle beyond the reasons that you initially outlined. So walk us through what that curve may look like. And how will we know maybe when we’re at the bottom of the trough and starting to move back in the other direction?
Brynjolfsson: Well, the J-curve is, I think, a really important way of understanding what’s happening in the economy today. Whenever there’s a very powerful general-purpose technology—economists usually just say “GPTs,” but the AI researchers stole that acronym from us—things fundamentally change. General-purpose technologies include things like the steam engine we talked about earlier, electricity, and, I think, AI. The good news is that they fundamentally change the economy; almost all increases in living standards come from these few really powerful technologies that reverberate through every corner of the economy, often spawning complementary innovations. The bad news is that, when they’re powerful—in fact, especially when they’re powerful—they often take years to have their full impact, because they do require those complementary innovations. So in the case of electricity, it was about 30 to 40 years before the full impact occurred. During those 30, 40 years, people were figuring out how to use electricity effectively. The first factories, when they installed electric motors, just put them in the same place where the steam engine had been, and nothing really fundamentally changed. Only after a new generation of managers came in, 30 years later, did they start redesigning the factories around small electric motors. They laid them out based on the flow of materials, and that led to a doubling of productivity, a 100 percent increase. Similarly, today, to really take advantage of AI, it’s not enough just to take out a person and plug in AI where they were. You need to rethink the organization. That takes a lot of management and entrepreneurial ingenuity. And during that time, while you’re investing in human capital, organizational capital, and business process redesign, you’re not instantly getting big increases in productivity.
So all that effort going in and nothing coming out on the other side looks like a decrease in productivity. That’s the trough of the J. But once you’ve made those investments, you start harvesting the new business processes, you start harvesting the newly trained employees and the benefits they bring to the company and to the customers, and now you’re in the upward part of the J-curve, and things start really taking off. Daniel Rock, Chad Syverson, and I wrote a paper called “The Productivity J-Curve,” and we calibrated it with some of the data that exists in the economy. I think we’re close to the trough of the AI J-curve right now. I’m pretty optimistic about productivity taking off over the next five years or so. I think this J-curve is going to be a lot shorter than the one with electricity; I expect the payoff to happen a lot faster. We talked a little bit about the call center example earlier, and there the companies were getting returns very quickly. It’s not showing up economy-wide yet. Official productivity growth for Q4 2024 was about 1.2 percent, which is about half of what it was in the roaring 1990s and early 2000s. But I think it’s going to tick up quite a bit. That will be good news not just for businesses and consumers; it’s also going to help the budget deficit and a lot of our other economic challenges.
Kerr: Do you have misgivings about whether all the benefits of AI that we talk about show up in GDP statistics? Is the measure capturing them?
Brynjolfsson: Oh yeah, more than misgivings. I’m quite sure that it is not capturing all the benefits. GDP, which is one of the greatest inventions of the 20th century, according to Paul Samuelson, and I agree, measures the things that are bought and sold in the economy, with few exceptions. If something has a zero price, then it’s not captured in GDP, or at least not very well. And, of course, a lot of digital goods are not captured. I think this podcast is free, right?
Kerr: We always accept donations, but it is free.
Brynjolfsson: Exactly. And so much of what we consume, whether it’s Wikipedia or Google searches, is free to consumers. And that means it’s largely missing from GDP, or not captured well. In fact, just to give you a sense of the magnitude: estimates are that the average American spends about eight and a half hours a day looking at screens like I am right now—sometimes little screens, big screens, different sized screens. That’s slightly more than half your waking hours. So when we’re voting with our time, we are voting to consume digital goods and services more than those made out of atoms. But those are exactly the ones that aren’t being measured well. So we definitely have a problem in not capturing the real benefits of the digital economy. Now the good news is, I am working on a project to address that. We have a team working on what we call “GDP-B,” where the B stands for measuring the benefits, not the costs. And the way we do that is with a series of massive online choice experiments. We run millions of these experiments, where we offer to pay people a small sum to stop using a digital good: we’ll pay you $30 to stop using Facebook for a month, or $10 to stop using Wikipedia, et cetera. And if you do enough of these experiments, and sometimes we actually enforce them, then you start getting a sense of what the demand looks like for these different goods. Some people you need to pay $100, some people you can pay $1, and you get a whole downward-sloping demand curve for the value of these goods to different people. And for the economists in the audience: now you can measure the consumer surplus under that demand curve, how much people would be willing to pay if they had to, versus how much they actually have to pay, which is typically zero. That difference is called “consumer surplus.” Well, it turns out that it adds up to trillions of dollars in the U.S. 
economy of value that people are getting from these goods without having to pay for them. And as we look at more and more goods and services, including some traditional goods and services—we’re looking at apples and cars and chicken and toothpaste as well as Wikipedia, Facebook, and Google searches—you start getting a sense of where the biggest consumer surplus is and ultimately where real value is being created in the economy.
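[Editor’s note: The consumer-surplus calculation behind GDP-B can be sketched in a few lines. This is a stylized illustration with invented willingness-to-accept numbers, not data or code from the actual GDP-B experiments.]

```python
# Each entry: the smallest payment that persuaded a (hypothetical) participant
# to give up a free digital good for a month, i.e. their willingness to accept.
wta_values = [120, 75, 40, 30, 10, 5, 1]

def consumer_surplus(wta, price=0.0):
    # Surplus for each user is valuation minus what they actually pay
    # (zero for a free good). Summing over users gives the area under the
    # downward-sloping demand curve traced out by the sorted valuations.
    return sum(max(v - price, 0.0) for v in wta)

print(consumer_surplus(wta_values))  # 281.0
```

Scaled up from seven hypothetical users to millions of real ones, this is how a zero-price good can contribute billions in unmeasured welfare.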
Kerr: Erik, when we lost you from the Boston area to the West Coast, you went to Stanford in part to set up the Digital Economy Lab. We’ve covered a lot of different things in this podcast, but is there something you’re up to at that lab that you’d like to share with us that we haven’t yet covered?
Brynjolfsson: Sure. Well, first off, I love coming back to Boston whenever I can. It’s my second favorite city. Out here on the West Coast, I am spending a lot more time focusing on AI and digitization. A new project we’re just getting underway, in addition to the GDP-B project, is looking at the economics of transformative AI. What I mean by that: people throw around terms like “AGI”—artificial general intelligence, human-level AI. I’m an economist, so I’m focusing on the part that transforms the economy, so we’re calling it “transformative AI”—AI that’s as transformative as the steam engine was for the industrial revolution. If and when that AI becomes available—and the people I talk to around here think it’s going to happen very soon; I think quite plausibly within a decade—the economy will be changed like it never has been before. Work, employment, concentration of wealth and power, productivity and growth, scientific invention and discovery, even how you measure well-being, they’re all going to be affected. So our program on the economics of transformative AI is bringing together a group of top economists to look at that. Ultimately, what we hope to do is lay the foundation for an economic understanding of this new economy. As we talked about a few minutes ago, there are hundreds of billions of dollars being spent on advancing the technological frontier. There’s almost nothing being spent on understanding the economic implications. We want to address that gap by speeding up the research on the economic implications.
Kerr: As you and this team work on this question, are there any critical signals you’re looking for to tell you when transformative AI is really just 12 months away? Right now the estimates span a wide range: some would say we’re already at the 12-month mark, others would say a decade or longer. What are the most important signals on your horizon?
Brynjolfsson: Well, we’re creating sort of a dashboard for this. First off, there’s something called the “AI Index” that I helped create, which is mostly technical indicators. Anybody can Google “Stanford AI Index”; it’s about a 520-page report with a lot of metrics of progress. But for the economics of transformative AI, we’re specifically creating a dashboard of economic indicators. We are creating a set of 60 metrics, and we’ll be updating them on a monthly basis to serve as an early-warning indicator along the lines of what you were just suggesting, to let us know what’s changing. Some of them, I think, will change faster than others. You can look at job postings on Indeed and Lightcast, and certain categories look like they may already be changing. I think coders are kind of the canary in the coal mine, along with some sales jobs and some call center jobs. We’ll look at the capital-labor ratio, and we’ll look at investment in some of the different categories of AI. As we know, there are just huge investments going into building these data centers. Those will all be among the metrics. And over time, some of the other metrics, like productivity, will start moving as well, along with the capital-labor ratio, employment, and wages. But it would be wise for us to get some early indicators too.
Kerr: Erik, we’re glad that you and the rest of that very elite team are working on this issue, because it’s certainly going to be on our horizons for decades to come, with even longer-lasting consequences. Thank you so much for joining us on the podcast today.
Brynjolfsson: It was an absolute pleasure. Always good to talk to you, Bill.
We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.