Podcast
- 29 Jan 2025
- Managing the Future of Work
Positive prompts: Sal Khan on AI in the classroom and beyond
Bill Kerr: Covid accelerated the adoption of online learning and, in the process, revealed both its limitations and its potential to exacerbate inequality. Amid mounting evidence of post-Covid achievement gaps, AI presents an even bigger challenge. Can the transformative and much-hyped technology boost outcomes for all groups and prepare students for the labor market that it is in the process of reassembling? And can it improve conditions for hard-pressed teachers who might view it as a threat?
Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Bill Kerr. We first spoke to Sal Khan in mid-2020 on a special Covid-19 Dispatch episode. Back then, his nonprofit Khan Academy was ramping up to handle a surge in demand for its free online materials. As the technologist behind a trusted project, the HBS grad is uniquely positioned to influence how AI works its way through K–12 education and beyond. We’ll discuss his evolution on the subject as outlined in his book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). We’ll talk about maintaining the human connection and navigating the ethical issues of AI. We’ll also look at Khan Academy’s growing course catalog and partnerships. And we’ll delve into its workforce development, competency-based learning, and skills-based hiring. Sal, welcome back to our podcast.
Sal Khan: Thanks for having me.
Kerr: Sal, 2020 seems a long time ago, ages ago. Bring us up to speed on what’s been going on at Khan Academy since those early days of Covid.
Khan: Yes, you’re right. It was a long time ago. Well, the pandemic itself was interesting. We saw a huge spike in Khan Academy usage when everyone was caught flat-footed. And then the school districts started to try to implement their own version of distance learning, remote learning, once they realized that the pandemic was going on a little bit longer. And I think that created a screen fatigue, a Zoom fatigue, and we actually started to see a decline in all forms of EdTech usage, including Khan Academy. But then once we got out of the pandemic, as you mentioned, there was significant learning loss, and people became more interested in how you personalize education and address some of that learning loss. At Khan Academy, we’ve been talking about ideas like mastery learning from the beginning—this idea that you shouldn’t just keep going and accumulate all these gaps; you should always have the opportunity, and the incentive, to fill those gaps. There were tens of billions of dollars of federal funds, ESSER funds, going into public schools. Most of that went into live tutoring, which had mixed results—for the most part, no results, because it was a little bit disjointed from what was going on in the classroom, wasn’t connected to teachers, and the students who needed it most weren’t showing up. But while all of that was going on, there have been a few projects on the Khan Academy side. During the pandemic, we started another nonprofit called schoolhouse.world, which is all about free live tutoring over Zoom. And that’s even evolved into ways for students to show what they know, especially if they become tutors. A lot of colleges have taken that very seriously in admissions. But also, right as the pandemic was winding down, in 2022, OpenAI reaches out to us. And this was five, six months before ChatGPT comes out.
And they gave us access—I believe we were the first organization outside of OpenAI to see what would become GPT-4. And that blew our minds, as it would soon blow the whole world’s mind. And we started immediately wrestling with a lot of the questions that folks are still wrestling with. This has a lot of possible downsides—cheating, deep fakes, et cetera, et cetera—but a lot of potential upsides. So we’ve been heads-down working on what we call “Khanmigo,” which is an AI tutor that really acts on top of Khan Academy, and also “Teaching Assistant,” helping teachers solve problems like lesson plans and writing progress reports, et cetera. We launched in 2023, at the same time as GPT-4, and every day we’re investing more and more on that front.
Kerr: Great. So maybe tell us a little bit about the overall size of Khan Academy. You’ve mentioned these new programs coming out. Has the headcount been growing over the last four years? And is it still principally funded through philanthropic donations and support?
Khan: Yes. So we have now grown to over 300 full-time employees, and we are primarily philanthropically supported; roughly 75 to 80 percent of our funding is philanthropic. For those of you who are listening, any donation is much appreciated. But we do have a bit of an earned revenue stream. About six, seven years ago, we went to a lot of school districts, and we said, “Look, a lot of your teachers are already using us. Here are all of our efficacy studies. I think we’re the most studied platform out there.” And superintendents, chief academic officers, universally told us, “Oh, we would love to use Khan Academy more.” Almost all of them had a personal story about how it benefited them. But they said, “If you want us to adopt it formally at our school district, you have to give us professional development, support, training, integration with our rostering systems, district-level dashboards.” And so that’s when we said, “We are a not-for-profit. Our mission is free world-class education for anyone, anywhere, which is primarily funded with philanthropy. But to do this bespoke work, we need to have the districts put a little skin in the game.” And so we charge them $15 a year to essentially cover that incremental cost. That’s our earned revenue stream for that work.
Kerr: And it sounds like, even with the new online tutoring programs, you’ve continued to extend free access to world-class education through your latest product advances.
Khan: One of the interesting things about generative AI, as a lot of listeners might know, is that it’s very computationally intensive. Even prior to AI, our cloud-hosting and computation expenses weren’t small. We spent $6 million, $7 million a year to serve tens, if not hundreds, of millions of folks every year. But generative AI introduces a whole new level. When we first launched as part of the GPT-4 launch, we estimated that it would cost roughly $50 a year in computation alone for a pretty active user. The good news is, we’ve found efficiencies on our end, the models themselves have become more efficient, and we’ve gotten some significant compute grants. Microsoft, famously, gave us a $32 million compute grant in May of 2024. And so that has allowed us to give free access to the teacher tools for any teacher in the United States—and soon any teacher in the English-speaking world. And we are trying to bring the costs down as much as possible. So now, for the districts using our district offering—and we are pushing a million students in that offering—we’re including Khanmigo and the AI: they provide that $15 a year for support, training, and integration with the rostering systems, and now they’re also getting student access to the AI. But our goal as a nonprofit is to make this as accessible as possible without sacrificing quality.
Kerr: Tell us a little bit about Khan Academy’s course offerings, how they’re created, the partnerships you have there, and where the significant growth has been over the last four years. Ever since your very first video, Sal, you’ve clearly been helping people with their math education, all the way up through K–12. But where’s the greatest growth in the course catalog happening?
Khan: We started in math, a lot of folks know that. But our goal has always been all of the core academic courses from pre-K through the core of college. So on Khan Academy, yes, you can start at kindergarten-level math and literally cover every standard all the way through calculus, multivariable calculus, differential equations, et cetera. But we also have science that starts at a fairly basic level, late-elementary, middle-school level, and goes all the way to college-level physics, chemistry, biology, economics, environmental science, et cetera. We’re doing a lot more of the humanities as part of the AI work. We’re actually launching ways that the AI can help you write—not write for you, but help you write—give teachers more information, help you do reading comprehension. A lot of what we’ve been doing in the last couple of years is continuing to expand the library to be aligned with more standards. We’ve aligned to the Texas standards, the Florida standards. We had always had the Common Core. We’re aligning to more countries. We have a lot of active efforts. In terms of new domains, there’s financial literacy, which has always been close to my heart. Just before I got on for this podcast, I actually cranked out a video on individual retirement accounts. We also just launched a course that I recommend for people of any age and any education level, a Constitution 101 course with the National Constitution Center. And this has videos from Supreme Court justices and ex-governors and senators and some of the leading academics, right and left of center, and exercises with primary documents. If you go through this course, you can go toe-to-toe with anyone in your understanding of civics and the Constitution. So, yes, we continue to add more and more content. We’re doing a lot in computer programming, but I could keep going on.
Kerr: That’s great. Hopefully some of our listeners will go take a look at some of those videos, and remember that you take donations of any size, as you pointed out a little earlier. Tell us a little more about your personal AI evolution. You got early insight into what OpenAI might accomplish, and you’ve obviously always been digital, taking your videos and using the big platforms to get them out. But how has your thinking changed since we last spoke?
Khan: In my book, in the last chapter, not to give it away, I make a confession that I used to think I was going to become an AI researcher. So I was interested in AI for a very, very long time. My freshman adviser in college was Patrick Henry Winston, who’s considered one of the founding fathers of AI. And then I took classes with Marvin Minsky and other leaders of AI. I was very disillusioned in the late ’90s by what was possible with AI then. And when Khan Academy became a thing, we definitely experimented with AI, let’s call it “early non-generative AI,” to explore using it to make inferences about what a student knows or doesn’t know or to make recommendations. In some ways it was valuable, but it was also a black box, and people didn’t know why it was giving one recommendation over another. But then we saw what GPT-4 could do. It still had issues. It would still hallucinate, make up facts, and it was way worse at math than most people would expect a computer to be. There were real questions around cheating, real questions about inappropriate use, because you couldn’t control how it was used in a lot of ways. But just with a little bit of prompting, you could get it so much closer to what a great tutor would do. And that’s, at least in my brain, when I said, “This is something that we have to really lean in on and hopefully lead.” I’ve told the team at Khan Academy, “We’ve got to go all in on this.” There were other folks who were like, “Hey, hold on a second. This has a lot of issues. Why us?” And I tell our team, if we’re serious about free world-class education, this is going to allow us to get that much more world-class and hopefully freer as well. And if not us, yes, there are probably 500 AI EdTech start-ups already out there, but are they really going to be focused on making sure we’re reaching everyone, making sure that this is a human right, making sure it’s high quality?
There are already, unfortunately, a lot of stories about a school district going with some start-up and then it blowing up. LA Unified, most famously, gave a start-up $6 million and then got word that the start-up had disappeared overnight, and it looks like there was fraud involved. A lot of the start-up community is more focused on cheating than on actually helping a student learn material. And so, as a trusted party, we can go in where people are excited about AI but also afraid of AI, and say, “Look, there is a way to do this right. There’s a way to do this that is more equitable, that is more pedagogical, that supports teachers, as opposed to some narrative around replacing humans.” And I think that’s our role.
Kerr: Continuing on that, many companies, even well outside the EdTech space, face this. We have executives listening to this podcast who are themselves trying to take a business and orient its processes and its culture more toward an AI-based future. Is there anything in particular you recall as being important for getting Khan Academy aligned around this future? Obviously, connecting it to the mission and pointing to what others were doing were important. But at what point did you feel you had the critical mass to really take this to the next level?
Khan: It took a lot in the early days of just reminding the team, especially the folks who wanted to take a little bit more of a wait-and-see attitude or might’ve even had some fear around AI, to make it clear that wait and see has risks associated with it too. You could very easily be irrelevant in a few years. At Khan Academy, we openly talk about notions of a horizon of irrelevance, and it can be a sensitive subject sometimes. What about when the AI can make videos as well as I can—or better? A little part of my ego likes to think there’s always going to be something special about what I do, but I don’t want that to put our organization—or more importantly, our organization’s mission—in danger. AI can help with content creation. So how do we use that to shepherd our donors’ dollars better? I was just talking to the engineering team today, where we are seeing productivity gains from AI helping with code, but some engineers are reporting 30, 40, 50 percent acceleration, and some are reporting very little. It might be based on what they’re doing, but I suspect it might just be based a little bit on mindset. And in my book, I talk about how organizations that are flat-footed here are definitely going to lose out from a productivity point of view; they’re not going to be able to compete. So, yes, I think it is very important for leaders to lead right now, to put a stake in the ground and push their organizations maybe a little bit out of their comfort zone.
Kerr: That scariness is something many companies will face. So maybe tell us a little bit about how you retain the human connection in education. How important do you think that will be going forward, and are you combining it in new ways with Khanmigo or with other products a school district might use?
Khan: What I’ve always tried to do, and hopefully our team does as well—and this is well before AI entered the scene—is to not get too enamored with the technology, which is easy to do, but always take a step back and ask, what problems are we trying to solve? What does ideal look like? And for our journey, I always think ideal looks like Alexander the Great having Aristotle as a personal tutor. But when we introduced mass public education, which has done wonders for the world—building a middle class, increasing literacy rates, et cetera—we had to make a compromise. We borrowed industrial-age tools: batch processes, assembly lines moving people together at a set pace, and assessments where, okay, some folks are getting it, some aren’t, and at some point we’ll sift some people into certain tracks and other people into others. And there was never anything to do about it. But maybe now technology can get us to not having to make that compromise: at scale, getting the kind of personalization that young Alexander had with Aristotle. So that’s the problem we’re trying to solve. A lot of folks know I got started on this journey back in 2004. I was originally in tech; then, after business school, I found myself as an analyst at a hedge fund. But my cousins needed help, and I was tutoring them. And I started making the first software of Khan Academy and the first videos as a way to help my cousins, really as a way to help me scale as a tutor and to approximate what a tutor could do. And I never viewed it as a replacement for myself. I always said, “Hey, if I can offload some of this to the technology, then when I get on the phone with my cousins, I can do more with them.” And that’s been the true north of Khan Academy ever since.
But it’s always been about putting teachers and parents and districts in the loop, because no matter how good a piece of software you make—whether it’s using AI or game mechanics—yes, it will lower the activation energy to learn. It’ll make it easier to learn. But if you want to reach most students, teachers and parents are really the main drivers, the main motivators, the main un-blockers for students. So we’ve always asked ourselves, how do you work in symbiosis? I’ve always said, if I had to pick between an amazing teacher and no technology or amazing technology and no teacher, I’d pick the amazing teacher every time, for myself, for my own children, and by extension, anyone’s children. And we still believe that. So our true north now with AI is, yes, if we can support students better, and if we can help teachers with a lot of the tedious things they have to do that aren’t student facing, like lesson plans and grading papers and filling out this form or that form or writing progress reports, that’s a win. But our ideal classroom of the future is definitely one where teachers can lean more into the human element. Students get more time making personal connections, not just with the teacher, but also with each other. Now, we also think about how to raise the floor where there might not be access to a great teacher, or any teacher at all. We have stories of young girls in Afghanistan using Khan Academy as their school system when the Taliban takes over. There are orphans in Mongolia who are using Khan Academy. There are kids in rural Idaho who are using Khan Academy because they don’t have access to a calculus class. Or there are kids who do have access to a calculus class, but it’s not a real calculus class at a real level of rigor. And so we also want to raise the floor there.
Kerr: Those are all important, and I think we can all sort of appreciate how AI could help you improve and narrow the achievement gap. Is there a scenario that you worry about, where AI actually widens the gaps that exist already or leads to some more dire consequences?
Khan: Yes and no. I think that’s always a risk. That’s why we work with school districts, and especially large urban school districts, so that we can make sure that it’s not just based on people who are self-selecting in; that the tools can benefit everyone, especially the folks who could use it. Because the upper-middle class, a lot of them are using Khan Academy even though they can afford tutoring. But if Khan Academy were to go away, well, then they’re just going to go back to what they were doing before—while a student whose family makes less money has no other option. If we can make these things like personalized education, access to world-class materials and learning, essentially riding on the cost curve and adoption curve of the internet and AI, which are very low-cost and high-scale and fast adoption curves, versus the adoption curve of $50 or $100 an hour tutoring, then this is definitely, I believe, going to help level the playing field. I’ll never claim that the playing field is going to be completely level, but it will help.
Kerr: No, absolutely. Absolutely. Tell us a little bit about how you think of ethical AI in the education context. What does that mean, and what is being put into place at Khan Academy and beyond?
Khan: I think ethical AI is kind of the same as ethics for anybody helping a student. If a parent is doing the homework for the student or writing the essays for the student, well, that’s unethical. And the same thing is true of AI. And what we try to do on the Khan Academy side is that Khanmigo has been instructed not to do the work for you. It’s there to support you, it’s there to be Socratic, it’s not there to help you cheat. But on top of that, we try to make it transparent to parents and teachers what’s going on, so that if a student does try to cheat, the teachers can see it. If a student is doing something that could be unproductive or unhealthy with the AI, then the teachers can get proactively notified. So we’re definitely putting those guardrails in place. Transparency tends to help on a lot of fronts. So that’s the tactic we’re taking.
Kerr: Is there also just an element of teaching students how to use AI for the future? In part, it can deliver the lessons on mathematics or history or constitutional law, but is it also a matter of, this is how you should be prepared to enter the labor market?
Khan: There are definitely two dimensions here. There’s AI to help improve your, let’s call it “traditional skills,” which we’ve been spending a lot of energy on. And then there’s also AI as a skill itself, or as a tool itself. Especially once you get into high school and college, some aspect of what students do should actually ask them to use AI, even general-purpose AI tools like ChatGPT. The trick is, how do you get a high school student to use it and stay on the right side of that, to some degree, not-so-clear line between what’s ethical and what’s not? And there, I think, transparency could be a big part of it, maybe some of the same labeling, some of the same notifications that we already do for Khanmigo. But, absolutely, students should be learning to use these tools. I’m using them all the time. For that video on IRAs that I just made, I was hanging out on ChatGPT, and I got it to make a little image for me. And then there were a couple of edge cases I was trying to understand between two different types of IRAs, and it helped me with those. Then I verified it on the internet with legitimate sources before I embarrassed myself and tried to teach it in a video. But it’s a super important skill now.
Kerr: You got early access, maybe the first access, to OpenAI’s GPT-4 model. Have you continued to develop that partnership? And how are you working with other AI vendors and technology players?
Khan: I’ve been very impressed with the OpenAI team, but we’re also close to Google, which has been a longtime partner and funder of Khan Academy. I mentioned the Microsoft donation, a significant compute donation. We’ve been in conversation with the Anthropic folks. So we are in a fortunate position: I think people see us as a positive use case of AI, an organization that people trust, hopefully an organization that has some thought leadership here now. And I suspect, if you fast-forward two or three years, Khanmigo, which right now is primarily based on OpenAI models, will also be drawing on some of the Google models and the Anthropic models and the Microsoft models on top of that. We’ve talked to the xAI folks as well. So we’re talking to all of the major players, doing whatever we can. If it’s in service of students and teachers, we’re up for it.
Kerr: So think about a set of skills, social skills, creativity, things that can sometimes be hard to learn in a digital environment or in one-to-many communication. Maybe to frame it: where’s a place you’ve been really impressed with what we can now do with the new technology? And where’s a place that remains stubborn, where you wish we could do more, a challenge that even generative AI hasn’t solved?
Khan: I think we’re a little bit in between those two things. There’s a lot of very promising, bleeding-edge technology that is almost ready for prime time, but not quite. If you look at some of the advanced image and voice capabilities you’re starting to see, from folks like OpenAI, and Gemini has some interesting things, too, they’re very natural in how they speak. They can see things in the real world. They can see a facial expression, they can understand tone. And so it’s not hard to imagine that, in the not-too-distant future, you could have a new type of practice or a new type of assessment, where an AI is leading you through a simulation. It feels like a management consulting interview, where they’re asking you, “How many quarters could you fit in a 747?” And they’re really trying to see how you think through the problem, more than whether you get the exact number of quarters right. Then it might ask you to riff on brainstorming for a business or to draw something together, and get a sense of creativity and judgment and things like that. And that’s exciting, because this type of practice, this type of assessment, has never been scalable. In fact, it has not been accessible to most people. You definitely don’t see it on your traditional standardized tests, like the SAT or AP exams. And the only way to assess it has been very resource intensive, like job interviews. And even those are super imperfect, super inconsistent, and, as we said, super resource intensive. And so I’m hopeful about AI here, though it isn’t quite there yet; those models are awesome for demos right now, but my guess is we’re about three to five years away. We do have an initiative at Khan Academy around assessments. Reed Hastings, founder of Netflix, is on our board. He was very intrigued by this, and he has funded a whole initiative here.
And we are creating assessments that can be used like traditional interim assessments, just to figure out what percentile a student is in and how much they’ve grown. But we’re starting to introduce aspects of generative AI, where a student can explain their reasoning. Reading comprehension no longer has to be, “What do you think the author’s intent was? Pick choice A, B, C, or D.” It could be, “What do you think the author’s intent was? Type it in right over here.” And we’re getting early signals that the AIs are potentially good enough to grade pretty consistently and pretty well.
Kerr: That’s amazing. You’ve long advocated for and thought about things like competency-based education, hiring based on skills, and so forth. Do you think we’re getting closer to that now, with the generative AI products and this capacity to do things like the assessments you just described?
Khan: I think we’re going to get closer and closer. You already see pretty significant movement in that direction in certain parts of the economy. Amazon, for example: I haven’t applied for a job there, but I hear they essentially give a coding test to almost anyone who’s willing to take it. So that’s very competency based. You’ve had the Google certifications in things like project management, which are general-purpose capabilities, and other employers are starting to value those as well. A lot of employers are frustrated not just with the scarcity of people who check the boxes, went to the right universities, and majored in the right things, but also with the inconsistency of what that means. Say someone graduates from Harvard Business School. There are a lot of very impressive young people from Harvard Business School, but some might not, I’ll say politely, be the exact right fit for whatever it might be. So can you create more competency-based signals there? That’s definitely something I’m personally intrigued by. There’s a lot of talk about reskilling, et cetera, and that’s going to be a piece, especially as AI takes over more and more and you have job dislocation, which we have to think very seriously about. But when you really think about most good jobs we could imagine, the types we started our careers having or would want our children to have, they were really just looking for: Did you have really strong critical-thinking skills, really strong communication skills? Could you write reasonably well? Could you show up at work and deal with adversity reasonably well? And maybe a college degree was a bit of a proxy for that. If you want to be a product manager at Google, an analyst at McKinsey, an analyst at Goldman Sachs—they’re not assuming you have a lot of, let’s call it “hard skills.” They assume you’re going to learn that on the job.
And I’m hoping that, whether it’s using AI or other things that could be simpler than AI, we’re going to start to create really consistent signals. Another thing I’ve been thinking a lot about, and I know other folks have as well, is job interviews, which are very expensive and inconsistent. But let’s say they’re done well. You go through this whole process: 500 people apply for a job, 20 of them get a phone screen, maybe three of them end up as finalists, and then they pick one, even though there are two or three or four other really great people. And then all of that information is lost. Are there ways of capturing that? Is there information in networks that can be scaled? Maybe making some prototypes, maybe using generative AI to get a read on what people think of you, maybe streamlining the reference-check process, things like that. So the sky’s the limit. There’s a lot there. Reskilling is one thing, but people already have skills. How do you know they have those skills, and how do you reduce the frictions in the matchmaking process we have right now in labor markets?
Kerr: And what role would you see Khan Academy taking in some of this credentialing or kind of pushing the assessment boundary beyond just helping people actually acquire the skills?
Khan: I definitely see it as part of our mission. When you say free world-class education for anyone, anywhere, education is not just about learning and practice and assessment. I think it’s also about credentialing. How do you signal what you know, and how do those signals matter? And so, for the last 15 years, I’ve been having fun conversations with people about this. We have a few places where we’re kind of giving credentials at Khan Academy, but I’m getting a lot more serious about it, because I do think we’re going to have massive change and massive economic dislocation over the next 10 to 20 years, and someone needs to be working on it. It’s not just going to be us, but I think we’re in a pretty good position. We have reach, and hopefully people trust us as a not-for-profit not to water down credentials or turn them into something else. And so, if we can play a role that helps solve this problem, we’ll definitely try.
Kerr: Well, since we’re on big topics, let me give you a huge one here. As we emerge from the pandemic, are there particular directions in the U.S. public education sector that you’re focused on—changes that have captivated your work?
Khan: Our work in the U.S. is very much about, how do we get in there and make sure that people’s academic outcomes improve? And they’re not where they need to be. A majority of American kids who go to college take placement exams in writing and in math, and for a majority, the results show that they are at essentially a middle-school level, despite having sat many years in classrooms. How do we help them fine-tune those skills in writing and reading comprehension and math? How do we give better data to teachers and district administrators? How do we give teachers, especially, tools to take some of the not-fun stuff off of their plates, that 10 to 15 hours they spend grading papers? The product manager at Khan Academy who leads what we call our “writing coach” efforts, Sarah, used to be an English teacher, and she had 100 students. She would limit herself to 10 minutes per paper, but that’s still, what, 16, 17 hours of grading papers. If we can bring that down, then, one, we give teachers an incentive to lean in more to this AI revolution. I’m hoping that teaching might be one of the first professions that gets really empowered, in a good way, by AI. Then I think we’re making a positive dent.
Kerr: That’s great. So maybe a final question here. It’s been four years since our last conversation. Looking ahead, what 5- to 10-year goals are you setting for the organization?
Khan: Who knows? We might be able to plug something into our brains and have lucid dreams that the AI constructs for us, or we’ll just be augmented. But in five years, to what we were just talking about, I hope you’re going to see Khan Academy in a position, for sure, to give high school credit where appropriate, and college credit; that we are part of a solution to rethinking alternative pathways to college, and maybe also helping the college pathway itself, maybe finding ways to get a college credential with high signaling value dramatically cheaper, maybe even close to free. On the other end, we are also working on early learning. We have an offering, “Khan Academy Kids,” which already covers all the major standards in math, reading and writing, and social-emotional learning for ages two to seven. And we’re working on early language acquisition. We’re already in 50 languages, and we’re looking at how to expand internationally. So I’m hoping that in five years you’re going to see many people using Khan Academy, and AI really transforming a teacher’s life or a student’s life for the better, not just in the U.S., but also in countries like Brazil and India and Vietnam and the Philippines and Bangladesh and Pakistan, all areas where we have pretty significant efforts going on. And in the Middle East, there’s something brewing there, too. That part of the world needs it, needs some good news, and I’m hopeful about that. It sounds like a lot for five years, but I think that’s about right.
Kerr: That’s awesome. Sal Khan is the founder and CEO of Khan Academy. His most recent book is Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). Sal, thanks so much for your insights.
Khan: Thanks for having me.
Kerr: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.