OpenAI Forum

The Future of Work Series: The Effects of AI on Talent Management and Workforce Development

Posted Aug 01, 2025 | Views 177
# AI Economics
# Career
# Future of Work
# OpenAI Leadership

speakers

Ronnie Chatterji
Chief Economist @ OpenAI

Aaron “Ronnie” Chatterji, Ph.D., is OpenAI’s first Chief Economist. He is also the Mark Burgess & Lisa Benson-Burgess Distinguished Professor at Duke University, working at the intersection of academia, policy, and business. He served in the Biden Administration as White House CHIPS coordinator and Acting Deputy Director of the National Economic Council, shaping industrial policy, manufacturing, and supply chains. Before that, he was Chief Economist at the Department of Commerce and a Senior Economist at the White House Council of Economic Advisers. He is on leave as a Research Associate at the National Bureau of Economic Research and previously taught at Harvard Business School. Earlier in his career, he worked at Goldman Sachs and was a term member of the Council on Foreign Relations. Chatterji holds a Ph.D. from UC Berkeley and a B.A. in Economics from Cornell University.

Joseph Fuller
Professor @ Harvard Business School

Joseph Fuller is a Professor of Management Practice in General Management and Entrepreneurship. He founded and co-leads the school's project, Managing the Future of Work, as well as the Harvard Project on the Workforce. He currently leads the FIELD Global Capstone course in the first year of the MBA program. FIELD Global Capstone is the school's premier experiential learning course, in which more than 900 first-year MBA students travel to fifteen countries around the world to work directly for local companies on a product or service challenge. He formerly developed the Making Difficult Decisions course in the second year of the MBA program and headed the required course The Entrepreneurial Manager.


SUMMARY

Joe Fuller, a professor at Harvard Business School, and Ronnie Chatterji, Chief Economist at OpenAI, share insights on how AI is reshaping the workforce, emphasizing both the opportunities and challenges it presents. Fuller highlights the profound impacts of AI adoption across industries, especially in white-collar roles, the shifting skill requirements that put a premium on social and interpersonal skills, and the critical role of democratic AI and infrastructure investment for sustained competitiveness and innovation.


TRANSCRIPT

Joe Fuller, welcome to the OpenAI Forum. Harvard Business School legend, mentor to me as well during my time at Harvard Business School. Joe, thanks for being here to talk about the future of work and AI.

Ronnie, I'm really pleased to catch up and join you and your audience.

Well, first, Joe, let's talk a little bit about your history because you've had a really impactful career at HBS, but also before in the business world. It's really given you, I think, a unique lens on this question about how AI is going to affect work. Walk people a little bit through your experience, your career in business, your role at Monitor, and now what you do at HBS.

Well, Ronnie, when I graduated from business school, I had a difficult choice: would I stay here to get a PhD and try to join the faculty, or would I start a business that I'd been working on with several colleagues, most notably the famous strategy professor Mike Porter? I had been his research assistant starting as an undergraduate at Harvard, and then I went directly to business school. And we were already getting a lot of uptake with companies using the concepts that he had pioneered. So we decided to start a consulting firm, and I was the first employee and the long-time CEO of that company, which was called Monitor Company.

Several other recent graduates, classmates of mine, were involved in the starting group. And it became, I don't know if this is much to brag about, but it became the fourth real competitor in the high-end strategy consulting market.

And after being there for about 25 years, I decided to make a change. There's a funny dynamic in professional services firms with founders and when they need to leave the stage. And oddly, when I decided to leave, I nearly joined a large client of mine as a senior executive. But frankly, what happened is the lead director of that company called the former Dean of the Harvard Business School, the late, great John McArthur, to ask what he knew about me. And John gave me a nice reference, but then called me up and said, I think you ought to come here instead. So that led to a conversation, and I joined this faculty about 12, 13 years ago. And I've taught courses, including with you. But really, my principal activity here has been setting up a couple of projects to study how work's evolving, the various forces that are reshaping the way people structure work, how organizations should be shaped, and what the implications are for executives and policymakers.

Joe, it's fascinating. I want to pick up on three threads, I think, that are relevant, because your career is really diverse, but gives you a sense, I think, on sort of the future of work and AI from three really cool perspectives. One is that early career, doing the MBA, thinking about doing the PhD, being back now at Harvard Business School in the midst of all these really interesting projects.

You've always had a mind for research and data in terms of what's happening and why it matters. And I think when we look at AI and the workplace right now, there are lots of different theories about what's going on. But I think what you're saying is that research and data are a really important part of understanding what's going on, and I know you bring that lens to it. I also think just running a large professional services firm, professional services, as you mentioned, where you guys were in that top four or five of the league tables, that's an area where people think AI disruption is going to be significant. And probably no one better than you to really opine on what you see in terms of the future of professional services, one of the key white-collar occupations that people are talking about with regards to AI.

And the last piece is as a teacher and a professor. And I learned so much from you in the classroom, team-teaching with you at Harvard Business School, about what our MBA students should be doing and thinking about their own careers, as they're at the beginning of these really interesting journeys. So can you talk a little bit about your perspective on AI and work from those three lenses? I think it's a really unique perspective, and I think people will benefit from hearing about it.

Well, I think on a broad basis, I'm bullish on what AI can do for entities, businesses, bureaucracies, and slightly more bearish or anxious about what the overall societal impact's going to be unless it's managed carefully.

Later this year, I'll publish some research that we've been doing. I've been partnered with Accenture Research, the think tank part of Accenture, looking at how our model estimates the degree to which AI will replace, either through automation or augmentation, working hours across the whole inventory of the so-called O*NET codes, which you'll be very familiar with, the codes the Department of Labor tracks for 900 different jobs in the United States.

And that model indicates an average impact of about 41 percent. But that's across 900 jobs, so an average isn't a very interesting number. Job number 900 is a roofer; it turns out AI cannot replace the roof on your vacation home.

I hired the wrong person for that one, Joe. I've got to go back to the drawing board.

Well, it can improve a roofer's productivity by about 9 percent, it turns out, at least according to our model. But also, you get into the white-collar, cognitive work categories, credit analysts, financial analysts, things like that.

You get well into the 70s and 80s in terms of the percentage of work that can be done economically by AI as we currently understand it. And of course, I would argue that the technology's advanced faster than predicted. I'm also gonna say adoption in companies is slower than I would have predicted. And we can talk about that later if you want. Yes. But the impact on organizations is really gonna be profound.
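(To make the arithmetic behind that kind of estimate concrete, here is a minimal sketch of how a figure like "about 41 percent across 900 jobs" could be rolled up from per-occupation exposure estimates. The occupations, shares, and employment weights below are invented for illustration; this is not the Accenture/HBS model or its data.)

```python
# Minimal sketch of rolling up per-occupation AI exposure into an average.
# All occupations, shares, and employment weights are made up for illustration;
# this is not the Accenture/HBS model or its data.
occupations = [
    # (occupation, share of working hours AI could automate or augment, employment weight)
    ("Roofers",            0.09,   135_000),
    ("Credit analysts",    0.75,    74_000),
    ("Financial analysts", 0.80,   291_000),
    ("Software engineers", 0.60, 1_500_000),
]

# Unweighted average across occupations, the "about 41 percent across 900 jobs" style number.
simple_avg = sum(share for _, share, _ in occupations) / len(occupations)

# Employment-weighted average, which can tell a very different story.
total_emp = sum(emp for _, _, emp in occupations)
weighted_avg = sum(share * emp for _, share, emp in occupations) / total_emp

print(f"Unweighted average exposure:  {simple_avg:.0%}")
print(f"Employment-weighted exposure: {weighted_avg:.0%}")
```

(Either way, the point made in the conversation stands: the average hides the spread, and the per-occupation numbers are what matter.)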

We built models of six major industries and populated those models with their current configuration. So one of the industries is software platforms. About 60% of the workforce in a software platform company are what are called individual contributors. Think of your typical run-of-the-mill software engineer. Well, that job can be made almost 60 percent more productive using tools like Cursor and Copilot. When you overlay the AI model on top of the way these industries are currently structured, you can start seeing how the geometry, if you will, the ratio of different levels in those pyramids, changes. The changes are not uniform across all industries, and they have really profound implications for the way we manage talent.
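(The "geometry of the pyramid" point can be illustrated with a toy calculation: give each level of a hypothetical organization a different AI productivity multiplier and see how the headcount ratios between levels would shift for the same output. The levels, headcounts, and multipliers below are invented, not the six-industry models described here.)

```python
# Toy org pyramid: level -> (current headcount, assumed AI productivity multiplier).
# Numbers are invented for illustration, not the six-industry models discussed above.
pyramid = {
    "Partners / executives":   (50,  1.05),
    "Managers":                (300, 1.15),
    "Individual contributors": (600, 1.60),  # e.g. coding assistants boosting engineers
}

# Headcount each level would need to deliver today's output with its multiplier applied.
for level, (headcount, multiplier) in pyramid.items():
    needed = headcount / multiplier
    print(f"{level:25s} {headcount:4d} -> {needed:5.0f}")

# The "geometry" change: ratio of individual contributors to managers, before and after.
ic_head, ic_mult = pyramid["Individual contributors"]
mgr_head, mgr_mult = pyramid["Managers"]
print("IC-to-manager ratio before:", round(ic_head / mgr_head, 2))
print("IC-to-manager ratio after: ", round((ic_head / ic_mult) / (mgr_head / mgr_mult), 2))
```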

Very interesting. I mean, so Joe, this goes back to the foundations of strategy and differences across industries, which is going back to your work with Michael Porter and the founding of Monitor. We're going to see the impact of AI unfold differently across different industries, depending on sort of industry structure, things like the number of competitors, the role of regulation, the talent pool, things like that, which I think is really interesting and probably different than what a lot of people are expecting. I think people might be expecting AI to happen all at once, as a binary event, and I think it's probably going to happen across industries in an interesting fashion, perhaps predicted by models like yours.

Can you use that to guide students? Because that's the other part of your portfolio. You deal with these fantastic HBS first-year students, at least the last time I worked with you, and it might be a broader set now, who are thinking about their careers, their summer internship, and what happens after that.

Can you use research like you're doing with Accenture to inform that?

I think what we can do is point out the types of skills that employers are going to be most attracted to. The balance of skills in these different roles is going to shift, and I think it's now just widely assumed, widely accepted I probably should say, that social skills, sometimes called soft skills, are going to become more important.

If you think about any role that one of our students is going to be pursuing, AI is going to clearly affect the nature of those jobs because our students are drawn to what I call cognitive, non-routine work jobs, right?

And a lot of the early positions they aspire to today have what I'm going to call rules-based tasks. How do you do a certain form of analysis? Let's say you're doing a model for an investment bank or a PE fund. Wow, that's very, very amenable to using advanced AI tools like OpenAI's. So if those tasks go away, what's left? And what's left are things technology is not, at least currently, good at.

And those are going to be skills that are related to interacting with other humans. It's going to be rather amorphous skills like judgment. I like to think of it really more around the term heuristics. How does someone process the world around them? Yes. And I've just published, with Matt Sigelman and some colleagues at the Burning Glass Institute, this look at how AI is going to affect jobs, both by making a lot of entry-level jobs really uneconomical to staff with new, inexperienced workers, and by simultaneously removing some of the technical requirements that other jobs have, because AI will take on tasks that previously required technical credentials that were hard for people to get.

So it shows you this kind of balance: some doors are going to close, some doors are going to open, maybe the pitch on the StairMaster is going to get higher in some jobs and lower in other jobs. But we can give people a better sense of what types of skills they want to cultivate and make sure to feature as they present themselves as candidates for positions. And those are going to be different than they have been historically.

I mean, Joe, this is fascinating because, you know, we've been hearing one side of the story in a lot of the media coverage, which is a really interesting one, saying, hey, for entry-level jobs, here's why AI is going to make it more difficult. Now, I'm not sure the data heretofore is backing that up if I look at the different factors, but there's a lot of worry that this is going to happen.

On the other hand, you're pointing to another mechanism from your research, which is that AI can actually knock out some of the technical requirements and open up a whole new set of jobs, making them, you know, presumably entry-level. And I think about coding: to the extent that you need to know how to code to do a particular job, now, using AI tools, you can do a lot of that coding or learn it more quickly, which lowers the barrier to entry for different tasks. I think, as you say, some doors will open, some doors will be closed, and new opportunities that we could never imagine will be created. For students graduating, that can be scary, confusing, potentially exciting, but maybe the best that we can do as researchers is give them the information they need to try to make those decisions about where to allocate their skills and talents. That's at least the way I've been thinking about it.

If you were running a consulting firm today, how would you think about that incoming analyst class? The group you used to hire at Monitor, that a lot of the consulting firms are looking at today, how would you look at that class? Would you be hiring more, less, different? How should you think about it? I'd start by acknowledging that we're in a period of transition, so we expect to be able to do a lot of things in the future that we can't do today. Expecting the intake class to be as adept with this new skill set as we've expected previous classes to be with the old skill sets, that's not a reasonable expectation.

I do think, Ronnie, and maybe this is my inner Porterian speaking, but I'd be working back from the way I think the service is going to be configured in the future.

Were I a CEO in the industry today, I would get a partners meeting together, and not on the first day, because not everyone's thinking so clearly, but on the second day, after the first evening, I'd say: people, this is the last year we're delivering PowerPoints as a product. From now on, we're delivering bots as a product. I'm not going to tell you to delete PowerPoint from your laptops, but let's work backwards from how we're going to do that.

I think the way this is going to affect the professional services industries is, first of all, by speeding up those fairly routine, rules-based tasks. But it's really going to change the nature of demand, because as large corporates start making better use of the tools, and as I mentioned, there are a lot of barriers to that that aren't being sufficiently addressed yet by management teams, they're going to ask fewer questions that can be answered by what I'm going to call information arbitrage. A lot of consulting is information arbitrage, and I'm not trying to suggest that there's something insidious about it, or, you know, that it's really a con job or something.

But if you think about what particularly those higher-level strategy firms do, they're often relating knowledge they have of a market and of a competitive set with the ability to go into the organization and navigate across organizational barriers, rappel up and down silos and whatnot, and pull together an integrated set of data that will address the client's problem.

There are tools, you know, I'd point you to companies like Aera Technology.

Certainly, there's this whole class of tools around decision intelligence, which Gartner is going to come out with a Magic Quadrant on shortly. Palantir is really working hard in the space.

You're going to get these overlay decision tools that can mine your live data from your ERM systems, like Oracle and Workday and Salesforce, and integrate it, to the point where the analysis takes less time than it does to populate your display to see what the answer is.

So you're really going to have to rethink what professional services firms do. I would count 50% of their revenue, at the very least, as being highly likely to be negatively affected.

Now look, I think that shifting from our current org structures, the way processes are configured, job descriptions are written, and incentives and metrics are deployed, to what AI will enable is going to be the most profound and far-reaching transformation exercise in corporate history.

So that's going to unleash a lot of work.

So just like we were saying, some jobs will be easier to get, some jobs will be harder to get, and the work of consultants is going to change. But I think there should be lots of opportunities for those that move fast. Those that decide to stand and fight on the old ground, I think, might be in for an ugly surprise. I'd be worried about it.

It's very interesting, though, because you're also opening up the opportunity for the consultants and professional service providers who kind of see ahead of the curve.

First of all, they could have a lot of work to do as we make these transitions in the enterprise to becoming sort of AI-enabled, and maybe AI-native at some point. At the same time, there's going to be a need for strategic advice, maybe the equivalent of the soft skills for MBA students, but sort of higher-level tacit knowledge around leadership, teams, and implementation, that can be really important in professional services, so the other 50% could become more important.

So this is really pointing to a shift that some firms are gonna have to undertake and some won't be able to do it.

But luckily, they're populated by consultants who should be studying this stuff and knowing it. So that makes us feel maybe a little bit better.

I want to hit this point on enterprise AI. But before I do, I just want to go back to your origin story with Michael Porter, because there's something really interesting here. When Porter was doing this work on strategy, and of course, this is my background as well, so I'm fascinated by it.

Did you have a sense that you were onto something really, really big in terms of these unanswered questions about industry structure and what makes firms have a competitive advantage? Do you feel that way today about AI research? I have the sense for the first time in a long time that a whole set of questions are going to be opened up that are going to really rewrite the rules of the game in terms of people who study firms, strategy, innovation.

I'm wondering, for you, someone who started his career really in the cradle of that amazing period of strategy, really the growth of strategy as a field, now seeing it from the other end and looking at AI, what are the similarities and differences between what you saw in that previous revolution in research and what you're seeing now with AI?

Back in the day, I think many of your listeners will not know that in the 1980s, no business school had a strategy department or area. There was no strategy bookshelf in the business section of the university bookstore.

And so, yeah, we could tell that we had something that was momentous. We could tell by the way executives reacted to it. And perhaps a little snidely, I'll say that we could also tell by the way some senior faculty reacted to it, which was not generous.

Really, that early part of what we did, what Mike and I and others did together, and then what we did at Monitor, really what we were doing was taking classic industrial organization theory and mapping it into business. And I think you'll agree, Ronnie, that a lot of innovations do have their inception in taking some insight from one discipline and figuring out how it applies to another.

The big change was the first book that we published at Monitor, which was a book called Competitive Advantage, where we started trying to think at the activity level about how you execute strategy. And I think when I look at the research about AI currently, I see kind of two pools. One is classic, really smart, really detailed academic research, but there just aren't a lot of human beings in the discussion. It's kind of neutron research: the buildings are still standing, but there's nobody in them.

Where we're trying to differentiate our research here, and the work I'm doing with everyone from Accenture Research to the Burning Glass Institute and whatnot, is that we're trying to work backwards from the human organization.

I think when we start thinking about the types of innovation, the game-changing ability to understand markets and the interaction of markets and decision structures, I couldn't be more excited about the rate of change. This is the most important thing for large institutions since controllable electricity. And I'm very envious of the position you have, not that I'd be qualified to be in it, unless you've got, you know, a subcategory of old war horses that you're hiring. But this is going to change the nature of management. It's going to change the nature of decision rights. And that's why companies are going slowly, because most companies I talk to say, how can we use this to improve our current process?

And then, I think you know me well enough to know that I'm going to say something maybe a little bit, I don't know, provocative back to them. I say, well, that's entirely the wrong question. The right question is, how do I reconfigure my process to maximize the impact of this technology? And that's when it gets scary, and that's when they go slow.

Let's double click on this enterprise AI point, because I think people are going to find this fascinating. And I'll just say, I mean, what you, Professor Porter, the other people who really pioneered strategy, you're the people who trained me, right, through your work, through your research. That is kind of what I've been using to guide my understanding of the economics of AI.

So in some sense, that impact, I'm feeling it today, being really at the frontier of this new revolution about how we're going to understand the economics of AI, and maybe really importantly within that, how businesses are going to adopt AI. It's something that I've been fascinated by, and I think I owe a lot to the strategy field for helping me understand that in a really useful way. I do think this point you've been making, which a lot of people have been talking about here at OpenAI and around the world, is this notion of enterprise AI and the speed of adoption. We're seeing with ChatGPT on the consumer side, I mean, the fastest-growing consumer app in history, at least in terms of how I can count it. I mean, hundreds of millions of people using this, really with no marketing, kind of coming out of nowhere. And of course, there are other fantastic products out there as well. The enterprise piece, people are watching really closely, because a lot of the economic bang for the buck in terms of productivity and hitting things like GDP will come from enterprise. You and others have noted, hey, maybe it's moving a little bit slower there. You talked about risk appetite. You talked about the unwillingness to re-engineer workflows. What else should we be thinking about with enterprise AI?

What are the sort of canaries in the coal mine we should be looking at, the indicators to say, hey, this is speeding up, or some firms really get this? I think everyone's trying to unlock that puzzle. How do you see it?

I think, if you asked me this two years ago, I would have said, we know that the LLMs, the large language models, are going to win. Mm-hmm. And that's a hard thing for enterprises to take on, the notion that they're going to somehow take an LLM and repurpose it for themselves and build their own.

Now, I think architecturally, with more agentic AI, we're seeing that you'll probably have what's going to look more like a telecoms network, where you've got smaller language models that are deeply trained on specific phenomena and are nested under large language models which govern a set. Let's say the finance LLM has got the capital budgeting small language model, the accounts payable small language model, or the chart of accounts small language model. It also acts as a bit of a buffer, because the small language models aren't going to hallucinate as much as the large language models, since they have much more restricted data.

I'm no architect, and I'm sure there are people who work with you at OpenAI who are laughing into their coffee, saying, who is this guy? But at least that's how I begin to understand it.
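(As a rough sketch of that "telecoms network" idea, the snippet below shows a general router model delegating finance questions to narrow, domain-trained small models and falling back to a general answer otherwise. The class names, keyword routing, and model names are hypothetical illustrations, not OpenAI's or any vendor's actual architecture.)

```python
# Hypothetical sketch of small domain models "nested under" a governing model.
# Names, keywords, and routing logic are illustrative only, not a real product design.
from dataclasses import dataclass


@dataclass
class SmallDomainModel:
    name: str
    keywords: tuple[str, ...]

    def answer(self, question: str) -> str:
        # A real small model would be trained on a restricted corpus, which is the
        # "buffer" idea: it stays inside its domain and is less prone to hallucinate there.
        return f"[{self.name}] grounded answer to: {question!r}"


class FinanceRouter:
    """Plays the 'finance LLM' role: governs a set of small models and buffers them."""

    def __init__(self, experts: list[SmallDomainModel]) -> None:
        self.experts = experts

    def answer(self, question: str) -> str:
        q = question.lower()
        for expert in self.experts:
            if any(keyword in q for keyword in expert.keywords):
                return expert.answer(question)
        # No narrow expert matched, so fall back to the general model.
        return f"[general model] best-effort answer to: {question!r}"


router = FinanceRouter([
    SmallDomainModel("capital-budgeting SLM", ("npv", "capital budget", "irr")),
    SmallDomainModel("accounts-payable SLM", ("invoice", "accounts payable", "vendor payment")),
    SmallDomainModel("chart-of-accounts SLM", ("chart of accounts", "gl code", "ledger")),
])

print(router.answer("What's the NPV of the proposed plant expansion?"))
print(router.answer("Which vendor payments are overdue this month?"))
print(router.answer("Summarize last quarter's hiring trends."))
```

(The fallback branch is where the "buffer" framing shows up: the narrow models only answer inside their restricted domains, and everything else goes back to the general model.)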

With that in hand, I think companies will be more able to digest the lessons and to experiment.

The second thing is, what I keep telling people is that in this world, it's those with the most data who start using it earliest. So you can be cautious, but you'd better hope that you're the least cautious person in your sector, I say, or else you're going to wake up dead one morning.

And that is, I mean, this is a really interesting idea too, that has, I think, an analog to the enterprise software revolution, where a lot of the billion-dollar companies were built in the application layer, right? For specific applications.

And I think what you're saying is, if you're in finance or healthcare or energy, there's going to be a set of models that are fine-tuned or designed for your applications, and maybe the foundation models, the kind of things OpenAI and other labs are building, are going to be what everything else is getting built on. But there's going to be a really important role for application-specific models and solutions. And by the way, that's great work to do inside existing organizations. Startups could come into this space, and even our consultants, our professional services, might be needed to make sure that these technology integration projects actually work inside these big companies. So I feel like there's a lot of potential for new work from this potential enterprise revolution.

Let me ask you about startups, because often, when I talk to folks about corporate transformation, I explain the same thing you've said, which is that we're going to have to re-engineer the entire workflow to really get the most out of AI, and many, many firms are unwilling to do that. The obvious implication is, well, the startups coming out of the top accelerators, somebody who's working on this in Silicon Valley or the Research Triangle Park, they're the ones who are going to start without legacy workflows. And they're going to start an AI-first company, and it's going to be those companies, with the backing of risk capital, that really change the industry structure and disrupt other companies. How do you see that working? How long should we wait for cues that that's happening? Do you see some indications? You mentioned a few companies earlier, that that's already happening in different areas.

I think these native generative AI, and probably also native remote or native hybrid, companies are going to be an incredible threat. And going back to consulting, I was, I guess, flattered to be headhunted to leave Harvard to become the non-executive, I'm sorry, the executive chairman of a company that aspires to be a native generative AI strategy consulting firm, which I think actually can be a very effective model.

Having had a somewhat similar job in my 20s, and it nearly killed me then, I decided in my 60s it wasn't a good idea to start trying to relive my childhood. That's wisdom. That's called wisdom, Joe.

Yeah, exactly. But I think the capacity is there for particularly focused startups in areas that are, once again, rules-based heavy. We're already seeing it a lot, obviously, in places like hedge funds.

I mean, there's the John...

The giant sucking sound for AI talent, other than between the tech giants, is among the financial services companies. The people they're hiring into those entry-level finance jobs, in their asset management divisions, in their sales and trading divisions, are not just quants but people who are very comfortable with AI, and they're hiring top AI talent to run their activities in AI.

It's because of the nature of competitive advantage in that industry: the most important activities are ones that are highly, highly amenable to widespread deployment of generative AI.

You touched on something I just want to come back to. What I've observed with large companies, as it relates to their enterprise resource management systems, their Oracles, their Salesforces, is that over time they become highly dependent on the customer success function of their vendors. And I think a really interesting question in the foundation model business is to what extent does an OpenAI, or do competitors of OpenAI, need to cultivate that, or are you going to rely, for example, as IBM did in the seventies, on the creation of services companies it put in the business, starting with what's now called Accenture, but which was then the Arthur Andersen business consulting division in the late seventies and early eighties.

Which happened, by the way, to be the third client of Monitor. So I was present at the creation of what became one of the defining tech advisory companies in history.

I mean, but see, this is very interesting because this is a boundaries of the firm question, right? So how much of this is going to be done by the foundation labs?

You know, at OpenAI, we're proud to have these forward-deployed engineers that are working with our big customers to make sure these implementation projects, which are complex, can actually work. And that's an approach that is really important.

There are also intermediaries and consulting firms, the Accentures of tomorrow, right, given that origin story, that are AI experts helping make sure these projects work inside big corporates. And so I think that's something to watch in terms of the space, and another area of a lot of potential jobs and value creation, frankly, for people with the right skills.

which makes us optimistic. Let me ask you about skills, because some of your prior work that really has inspired me, some of the stuff you've done with Bill Kerr and others, is about skilling and the workforce. The most common question I get as a dad of three kids is, what skills should my kid be learning at a younger age? How do we get people up to speed? Maybe they're not at Harvard Business School, but they're starting their educational journey at a community college.

How do they think about what skills they should learn, where AI comes into that, and are there good or better ways to learn AI, to become AI-literate, in an area that's moving really, really fast? Have you spent time thinking about that, and any thoughts for us on it?

Well, I have quite a bit, for a couple of reasons. One is my research, and the second is, I'm the Chairman of the Board of Trustees of Western Governors University, which is the largest online university in the United States. We don't do a lot of television advertising or whatnot, but we're the largest online university and have the best student success ratios on all measures, by a lot. Sorry for the advertisement, but that's an asynchronous environment where we need to understand, in healthcare, in our College of Business, and in our College of Technology, how AI is going to be integral both to career success for our learners and to the delivery of content. And we're also keeping a close eye on, and we're in active conversations with, Sal Khan of Khan Academy, an alumnus of our school, and the way they're thinking about learning.

I think this is something that is going to have to involve parents heavily, to be honest with you. The ed sector is not built for speed, and it's not built for pivoting. AI is both a blessing, if it's being engaged in the right way, to help people actually learn concepts that they find difficult, where, whether it's the failing of the school district they're in, or their individual instructor, or just the way their mind works, their learning aptitudes and modalities, AI can backfill a lot of the places where they've become weak.

I mean, we know what happens in mathematics: when people at some point start losing their grasp on some core concepts, they get into what amounts to a death spiral in terms of their ability to go further in mathematics, and then they start doubting themselves, and then they decide they're bad at it, or whatever else.

On the other hand, there's its ability to give students a lazy person's way around things.

Here at Harvard Business School, what our administration decided to do is tell students: this is just another tool, just as we wouldn't tell you not to use R or Excel or not to build spreadsheets.

Of course you should use it. We want you to cite it. If you went into a management meeting at an employer and said, you know, I got on Workday today and I figured something out, you'd be applauded by your boss if it was clever and advanced the conversation. It's the exact same thing coming into our classroom and saying, I got on ChatGPT last night, we are a ChatGPT school, and I did these prompts and some interesting data came out. Our faculty will and should say, well done.

But if you're asked to write a personal reflection on something that happened in class, and you do it by putting five sentences into a prompt and you print out what generative AI created for you, that's not learning.

And this goes to what makes us human. I was really interested to return to this theme as we start to get closer to the end here, which is that you mentioned early on that some of the skills that would be at a premium in the workforce would probably be softer skills. And our friend and your colleague David Deming has written about this in terms of social skills, leadership, being able to connect with people. These are things that I think you're articulating, and others have said, can be really, really important. How do we use AI to make sure that students aren't just learning how to write prompts, but actually learning those human skills as well?

And where can AI get in the way of some of those things where maybe as instructors, we just say, okay, yeah, everyone put their screens down, put their phones down. It's time to do something that connects us to other humans. How do you think about that in the classroom as a professor?

On the first part of the question, I'm a big believer that we're going to have highly effective AI-driven simulations. If you look at companies like Immersion, they can mimic events in a corporate setting, an interpersonal setting, where you can try again, or where you're having a dialogue with the AI. And as AI gets more capable, and we get better at training it to take on personas, I think people will be able to practice.

I do think, however, there's something about real-time presence, physical presence, subtlety of expression, tone of voice, where we'll be a long way from being able to really mimic the full experience of interpersonal dynamics. I think that the reverse classroom that we have at the Harvard Business School, teaching by the Socratic method, actually benefits greatly from AI, because our challenge, as you will appreciate having been a colleague of mine here, is that it is hard to teach analytical techniques through a dialogue.

But if the analytical technique can be learned through an interaction with, let's say, a small language model that's focused around that technique, then when you come into our classroom, you're discussing the so-what of this. And you're also probably discussing your prompts.

That's right. And how you got to those conclusions. And you're learning how to engage with the tool, but in a way that allows you to engage better with colleagues and peers and people you're trying to help and influence. And that's really gonna benefit so many in-person experiences. I mean, some people will get sort of a more scaled digital experience to get specific skills. When it comes to a group of people who wanna discuss the so-what with other humans, that could be even more valuable 10 years from now than it is today. And I think it bodes well for parts of business school education, depending on how business schools decide to plot their own strategies.

Joe, just one final question for you, given your experience, and I think it's just notable: your business background, your academic work at Harvard Business School, leading a professional services firm, and now seeing the classroom from the perspective of the MBA students. You once were one, now you're teaching them, with those different perspectives on the future of work and AI. Is there anything that I should have asked you that I did not ask you?

Well, there's something you asked me that I didn't answer sufficiently. I'll take that. That's fine. Let me go there.

I think for educators, but very importantly for learners like your kids, what I'd strongly suggest is they look for opportunities to engage in what's called experiential learning, so that what they're doing is not simply studying something, but taking something they're studying and doing it, not as their first job, but as part of a course, or part of a co-op program, or as an internship or a micro-internship with companies like Parker Dewey or Riipen. That's going to be essential to learning at all levels. I've been meeting with a couple of governors recently about skills development. And what you find with social skills is that they're really learned in childhood.

And if you get well into your teens or in your early 20s, and you really have underdeveloped social skills, it's going to be much harder to get a job, unless you've got a very in-demand technical kind of certificate or other skill, to really get on a path to earning an income that's going to sustain a household, which is the way we should think about people's futures, I think.

So that means we have to help learners in K through 12 develop social skills through teaching by design. Like having to stand up and explain something when you're a fourth grader, every week, in every class, no exceptions. And having faculty who now form teams mostly along the path of least resistance, I'm going to let the basketball players be a team in the frog dissection project in 10th grade biology, and the smart girls be a team, and the goths be a team, because it's just more trouble than it's worth to try to get the mix.

Now your teaching objective is that kids you've never seen work together have to do something collaboratively. Those are examples of experience-based learning, the experience of being on an unfamiliar team.

But seeking opportunities for our students, even at the level of an MBA program, to have more experiential learning, and for our learners to have more as they mature, I think that's going to be an integral part of realizing their full potential, but also of accommodating and adapting to what's going to be this mind-blowingly exciting world, one that relies on the large language models and the foundation models that OpenAI has pioneered.

Joe, I can say the experience of working with you and learning from you and interviewing you has been really beneficial to me. And I just want to thank you for all your contributions in business and the Academy.

Professor Joe Fuller from Harvard Business School, thanks for joining us on the OpenAI Forum.

Ronnie, a pleasure to be with you as always.
