Event Replay: Understanding the Labor Market Through Real-World Usage Data
SPEAKERS

Alex Martin Richmond is a Labor Economist on the Economic Research team at OpenAI, where she studies AI's impact on the labor market and how AI will transform the future of work. Previously, she was an Economist at the Burning Glass Institute, where she led research on workforce dynamics and labor market trends. She holds a PhD in Economics from the Massachusetts Institute of Technology. Prior to graduate school, she worked as a research assistant at the Federal Reserve Board of Governors in the Division of Financial Stability. She also holds a B.A. in International Studies and Mathematics from the University of Mississippi.

Daniel Rock is an Assistant Professor at The Wharton School, where his research focuses on the economics of technology, with a particular emphasis on artificial intelligence and its impact on labor markets and organizations. He is also a Co-Founder of Workhelix, a company focused on helping organizations better understand and implement AI in the workplace. Prior to joining Wharton, Daniel was a postdoctoral researcher at the Massachusetts Institute of Technology, where he also earned his PhD in Management Science. His work explores how emerging technologies reshape productivity, skills, and the future of work. Earlier in his career, he worked as an algorithmic trader at DRW Trading Group, specializing in fixed income, foreign exchange, and commodities markets.

Gregor Schubert is an applied microeconomist with expertise spanning real estate, labor economics, and finance. His research focuses on the impact of emerging technologies, including generative AI and robotics, on firms, labor markets, and the broader economy. He also develops AI-driven tools and methodologies for research in finance and economics, with additional work centered on housing markets and real estate technology. He has designed and taught courses for managers on how to successfully deploy artificial intelligence and machine learning initiatives, as well as how to navigate organizational change in response to technological disruption. Prior to his academic career, Gregor worked as a strategy consultant at The Boston Consulting Group, where he contributed to projects involving data analytics, survey design, and large-scale technology transformations. He continues to advise companies on data and technology implementation, as well as real estate valuation.

Aaron “Ronnie” Chatterji, Ph.D., is OpenAI’s first Chief Economist. He is also the Mark Burgess & Lisa Benson-Burgess Distinguished Professor at Duke University, working at the intersection of academia, policy, and business. He served in the Biden Administration as White House CHIPS coordinator and Acting Deputy Director of the National Economic Council, shaping industrial policy, manufacturing, and supply chains. Before that, he was Chief Economist at the Department of Commerce and a Senior Economist at the White House Council of Economic Advisers. He is on leave as a Research Associate at the National Bureau of Economic Research and previously taught at Harvard Business School. Earlier in his career, he worked at Goldman Sachs and was a term member of the Council on Foreign Relations. Chatterji holds a Ph.D. from UC Berkeley and a B.A. in Economics from Cornell University.
SUMMARY
This conversation focused on how real-world OpenAI usage data can give a more grounded picture of AI’s labor-market impact than simple exposure measures alone. Ronnie Chatterji, Alex Martin Richmond, Daniel Rock, and Gregor Schubert argued that widespread AI adoption does not automatically translate into immediate job loss, and that economists need better ways to distinguish between automation-prone work, demand-expanding work, and roles where human judgment remains central. Across the discussion, the panel described how people are already using AI in practice for writing, technical help, planning, summarization, research, and other day-to-day knowledge work, while also noting that many of the broader productivity effects may take time to show up in traditional data. The conversation also emphasized training, access, and experimentation as key factors in helping workers and institutions benefit from these tools rather than be left behind by them. Audience questions reinforced those themes, especially around what students should study, how to interpret personal-account usage for work, how to measure quality and meaningful work, and what policymakers should watch as adoption accelerates.
TRANSCRIPT
[00:00:00] Natalie Cone: Hi, everyone. I'm Natalie Cone, your OpenAI Forum Community Architect, and I'm joining remotely today from my home in Austin, Texas. I'm so glad you're all here with us. Artificial intelligence is rapidly becoming part of everyday life, shaping how people work, learn, and solve problems. But alongside its growing adoption, important questions remain. How do we measure its economic impact, its effect on society, and the ways it's already changing daily life? We hear a lot about how AI might impact our future. Today, we're focusing on what we already know about how people are using AI right now and the ways it may already be improving how we work, learn, and navigate everyday life. How are people already using AI in everyday tasks, workplaces, and learning environments? Where is it driving productivity and creating value? Why are some impacts easy to feel personally but harder to see in traditional economic data? And what might today's usage patterns tell us about the future? To help us unpack all of that, we're joined by an outstanding panel: Ronnie Chatterji, Chief Economist at OpenAI, will join us today along with Alex Martin Richmond, Labor Economist on OpenAI's Economic Research team; Daniel Rock, Assistant Professor at the Wharton School; and Gregor Schubert, Assistant Professor of Finance at UCLA Anderson. What I love about this conversation is that it's grounded in evidence, not speculation. We'll explore what OpenAI usage data and broader research reveal about how AI is adopted across the world and what that means for workers, learners, institutions, and society more broadly. So, as you listen, think about this: How do we ensure AI helps people work better, learn faster, solve harder problems, and access greater community and opportunity? Well, let's get started. Please join me in welcoming Ronnie, Alex, Daniel, and Gregor to the forum stage.
[00:02:20] Ronnie Chatterji: All right, great to see everybody. Welcome to the OpenAI Forum. It is good to be back. I was talking to Natalie about how much I enjoy doing these, and it is a great time to be an economist, especially a young economist looking at the impact of AI on the labor market. I'm really excited to have Daniel, Gregor, and Alex here today with me to answer some of the most important questions about the economics of AI and really focus on the labor market work that all three of them have done. When you talk to three researchers like this, you're really talking to people who are getting their hands dirty with the data, looking at every data point, trying to figure out what's going on, and that's what's going to make this an interesting conversation. There's a joke, and I'll start with Daniel on this one, about economists always having two hands. On one hand, Daniel, we're seeing rapid AI usage. Oh my goodness, we have over 900 million weekly active users. You're seeing a lot of work in enterprise. You're also seeing capabilities advance really rapidly, and you documented some of this in your own work. But on the other hand, being an economist, there's somewhat unclear data on the labor market, and a lot of the data we're seeing on usage and capabilities doesn't seem to be mapping to the labor market impacts that many people are expecting. What do you think we are getting wrong about this moment, and why are there seemingly contradictory stories going on when it comes to the labor market and AI?
[00:03:37] Daniel Rock: That's a wonderful question, Ronnie, and I want to say quickly thank you to the whole team for having us on here. This is super fun. I don't necessarily think that we're getting anything wrong. In 1987, I think, Bob Solow famously said, you see computers everywhere except in the productivity statistics. I think we're seeing a version of that again here. Our first instinct, at the very high level, is to ask how we can use AI in the work or the activities that we currently do to turbocharge what we're up to. It takes a long time to discover those new use cases, or even new products and services that didn't exist before. That's where we start to see a lot of our investments in these capabilities start to pay off longer term. With my colleagues and co-authors Erik Brynjolfsson and Chad Syverson, I wrote a paper a while ago called The Productivity J-Curve. In the beginning, it's like trying to steer a ship with a lot of forward momentum. You want to go a different direction. You have to put real money, real investment, in to get something out that's sort of intangible and new, but harder to measure. It looks like we get a little bit of a drag, perhaps, on productivity, productivity being outputs per unit of input. Then longer term, that thing we built up, new culture, new processes, a new factory floor, is set.
[00:04:58] Daniel Rock: A new factory floor, as Sendhil Mullainathan and Ashesh Rambachan might call it. You build that, and then all of a sudden it's like, oh hey, we're getting free money and explosive capability. Well, no, it's about making those investments upfront. So I think it's coming. And in the meantime, there's plenty of power users and exploration and experimentation happening right now that can light the way for the rest of us to learn from.
[00:05:23] Ronnie Chatterji: Well, and that is a perfect segue to Gregor, because you're really doing a lot of work, and you're distinctive in this, on that new factory floor, thinking about enterprise adoption, how firms are using AI. So how do you think about this question, Gregor, as you look at firms using AI, using it more intensely, using it for new use cases, but not necessarily seeing that map to some of the indicators Daniel was talking about?
[00:05:46] Gregor Schubert: I think there's an important element here, and I think Alex is going to be able to speak to that as well, which is that there are original measures of firm and occupation exposure to AI and where the potential is, where these technologies can be used productively. Then there's trying to relate that to whether firms actually end up using the technology. There's sort of one filter of just translating the potential for AI into actually trying to apply it. There's a lot of variation still in terms of how well firms are actually able to translate some of these potential benefits into actual benefits. I have some research looking at whether pre-existing technological capabilities at the firm level make it easier or harder for firms to implement this. We definitely see that firms with existing technological capabilities are faster and better at actually adopting these tools.
[00:06:39] Gregor Schubert: Partially, it is just that there's a lag between the technology coming out, the productivity benefit existing in theory, and then actually translating that into organizational gains. I think that is compounded by what Daniel was mentioning with the sort of J-curve effect: that even once you get going and try to implement these technologies, it takes a while to restructure your organization around that. You spend a lot of time investing in intangible and organizational capital, which from the outside looks like you're hiring more people and spending lots of money without seeing a lot of productivity benefits yet. But that's because you're sort of building, in some sense, the toolkit before you can apply it.
[00:07:22] Gregor Schubert: I teach a course on how organizations can restructure and how managers should think about managing AI workflows. One important piece I hear from my students is that it takes a while to get everyone aligned and figure out what the new roles are. Jobs don't stay the same; there's lots of experimentation with regard to how new jobs should function. Product managers are becoming very different roles that span marketing, development, and things like that. A lot of firms are experimenting and trying to figure this out. Until they figure out the playbook, I think we're going to see more of these new roles popping up, new people being hired, and some people being let go, but not necessarily just a one-way line in terms of productivity.
[00:08:14] Ronnie Chatterji: Gregor, it's so interesting you talk about job redesign, because we're lucky enough to have someone on our OpenAI Economic Research team who can talk a lot about how jobs are designed and how they change. Usually, we have external guests, and it's really fun for me to talk to people from outside of OpenAI. This time we really couldn't do this without talking to Alex Martin Richmond on my team, who is doing amazing work on labor markets for us. Alex, we've talked about this so much. What are you seeing in the data around job changes, role changes, and how the labor market in general is dealing with the advent of AI and the acceleration of AI?
[00:08:49] Alex Martin Richmond: You know, it seems so clear from looking at how AI is starting to show up in job ads, especially in roles heavily aligned with the current most effective uses of AI, like coding, that this is transforming how people work. I think task composition for lots of folks is already changing, especially for people who work at OpenAI. They're doing lots of work, both knowledge work and coding, very differently than they were even a few months ago. My analysis workflow is entirely different than when I joined OpenAI in December, in terms of how much code I write and how I design data analyses and tests. I think that, sort of to what Gregor and Daniel are saying, we are continuing to see people experiment with these things and their workflows gradually change over time.
[00:09:43] Alex Martin Richmond: One thing that we have seen a lot of, though, is that while we're seeing usage rapidly expand—especially in use cases where AI is just getting good at something and the capabilities are evolving so rapidly—there are things that AI is great at now that it might not have been six months ago or a year ago.
[00:09:56] And so we do see this sort of, we've been calling it the capability overhang on the team, where we see that lots of people are using the tools but are still figuring out how to get to the frontier of usage and take full advantage of AI capabilities in their workflow.
Ronnie Chatterji: It's amazing. I mean, I'll stick with you on this, Alex, for a second, because I feel like within our team, if you see the diffusion of skills that you're using in Codex, or that other people are picking up, it's really interesting to see how what your peers, your co-workers, your colleagues are doing can affect how you use AI. And I think that's something that is going to show up in a lot of enterprise studies as well.
[00:10:34] Right now, you're describing AI as a complement to your work, Alex, not a substitute as much. And that is consistent with a lot of the prior research on how technology ultimately changes work. What should you be watching, though, in terms of that story going forward with complements and substitutes? When we look at labor market data, when you think about your own job, this is something people are paying a lot of attention to as they think about the pace of change and whether AI will continue on its current trajectory. How are you thinking about that question?
[00:11:00] Alex Martin Richmond: Absolutely. It's really important. Measuring automation versus augmentation is really difficult, and it gets to the fundamental questions of what's work and what's a task. You know, maybe we're getting to the point where AI can do some tasks, like I can assign something to Codex. Then I have this new task that didn't exist before, where I have to go verify what Codex did before I send it to Ronnie and say that it's correct. Did it really automate that task, even if it did the analysis for me correctly, when there's this new human step where I have to do this augmentation work? We are very much working on the team on how to measure these automating and augmenting capabilities. As we continue to see the rise of agentic coding, where we can observe exactly how you're interacting with code and documents and slides and emails on your computer, I think we're getting an ever sharper picture of how exactly workflows are evolving in that dimension.
[00:12:25] And yeah, it's trickier than it seems because, like I said, even for a task that AI appears to be completely automating, we're often not seeing the full picture of automation. But its power to augment what we're doing, to make coding faster, to make analysis faster, to help you understand a data set or a code base, is pretty rapidly expanding.
[00:12:36] Ronnie Chatterji: And Gregor, the firms that you study in a lot of your work are full of people like Alex, who are using tools like Codex and other coding tools to really improve productivity, and they're taking on more work. What are you seeing in the enterprises you study and from the students you teach? Are they trying to hire more people who are using these tools? Are they trying to get more output from each worker? Are they thinking about teams differently? What are some things we should be looking at in organizations in terms of how they're trying to optimize for power users like Alex?
[00:13:54] Gregor Schubert: Yeah, I like what Alex is saying with regard to the fact that you suddenly have new tasks that pop up as a result of using these tools, right? So I like the phrasing that AI does a lot of things middle to middle rather than end to end, right? And so you end up adding new tasks at the beginning, in terms of planning and setting up tasks, and then the validation. I think the same is true for broader organizational structures, where you actually end up having to design workflows where, rather than handing something to a human who does the tasks internally and hands it back to you, you have an AI workflow in the middle. Then you invest a lot more time and specialized skills in setting up roles that are about structuring data and structuring tasks that go into an AI workflow, and also having specialized roles that are targeted at evaluation and validation of the outputs that come out the other end. And it changes where and how firms structure themselves and hire.
[00:14:45] I was recently talking to the CEO of a Korean bank who was telling me that they're shifting a lot more resources towards hiring people who have subject matter expertise and then giving them some technology enablement so they can implement AI workflows, rather than leading what sounds like a technology implementation out of the IT department. Because it turns out that a lot of these applications, for instance designing good validation and designing good task inputs, ultimately require subject matter expertise. This is almost like designing internal products, designing these workflows. And so you can't leave that to the pure technical backend; the people in the firm who don't know what a good output looks like should not be designing the AI workflows that ultimately have implications for the output you're going to put in front of customers, or internal outputs and things like that. And so there are new roles coming up, new workflows.
[00:14:49] Ronnie Chatterji: Yeah. I love, Gregor, this idea of middle to middle rather than end to end, and the proliferation of these new roles at either end, in terms of how you structure the questions and how you do the validation. That's a really important way for us to think about the labor market for early-career workers, younger workers, especially as they try to complement advancing machine intelligence rather than be substituted by it. Daniel, you've been doing some of this implementation work too. How are you seeing this frontier open up? Are you seeing the kinds of roles that Gregor is talking about? Are you seeing job redesign the way Alex is talking about? What do you advise companies, particularly fast-growing entrepreneurial startups, which we haven't talked about as much here and which you know a lot about? How do you advise them to take advantage of AI and think about the complement versus substitute question? Because it's a little bit different in smaller organizations.
[00:15:30] Daniel Rock: Right, right. And either way, you're trying to design a new configuration of work that takes advantage of the technology. I don't think there's a company out there that was perfectly designed for what the technology looks like to begin with, unless maybe it's OpenAI or, you know, some of the other labs. Everyone's got some work to do to transform their workflows. So yeah, through my startup Workhelix, we work with a lot of large enterprises, and we've been connected to a bunch of startups to pick up on some best practices. I would encourage everyone to get into the group chats, build communities for yourself, and see what your friends can discover, and their friends. But, so, a few practices I think we see emerging.
[00:16:18] Daniel Rock: At the cultural level, I think celebrating people who discover new and exciting use cases is key. There's a group of folks who get really excited about using this. Often they're some of the best employees of the company generally. When they discover something, being able to propagate what they've discovered to others and have them incorporate it in their workflows, which requires a culture that allows for experimentation and failure, that is a really strong set of complements for advancing faster with AI. At the same time, really respecting, I call these folks craftspeople, respecting craftspeople who say, I want to do a really good job at work. And sometimes AI can pose a risk to that. Those are people who might abandon the technology when it makes a mistake. They don't have patience for things that create risks. So acknowledging their preference, but then also saying, great, help me build guardrails and help me hold other workers or other people on the team accountable. You can sometimes pull them into being gigantic technology evangelists once they discover how to do things properly. And then there's a lot of other folks who are like, hey, I just want to learn what some best practices are and move from there.
[00:17:28] Daniel Rock: I think at a bigger-picture level, there's something going on that is almost an organizational equivalent to the steam engine to electric power transition. For those who don't know this old chestnut, this great story about the steam engine: you used to have a big engine in the middle of the room, and it powered a bunch of other machines off of it. You have pulleys, you have belts and cables and things; it's a centralized source of power, but lots of different machines running off of it. And when the electric dynamo first came out, the idea was, let's just replace that steam engine with a giant electric dynamo in the middle and do the same thing. You get some improvement doing that, but it took decades before people figured out that we should modularize that electric power and make small power sources. Then you can completely reconfigure work. Now, imagine most companies have a centralized technology function. That's what Gregor and Alex were referring to already. You have almost a hub-and-spoke model. You serve the rest of the folks in the company by building technology for them.
[00:18:28] Daniel Rock: But if everybody can build tools for their own purposes, now we should really modularize that technology function, because as much as the IT function might not know the best way to do something in sales or marketing or procurement or whatever, they do know how to build secure, maintainable, reliable software. So as everybody's building stuff, we might still need some adult supervision from those who have hard-fought knowledge of how to build these new tools. I've seen a couple of organizations start to deploy their own engineers into the rest of the organization. I think that's another practice that leads to good productivity gains at a faster clip.
[00:19:11] Ronnie Chatterji: And that's interesting. And Daniel, when you think about what people are doing at work, do you think this is also carrying over to what they do at home? I think this contrast between consumers and business is really interesting. This seems to be one of those technologies where we're getting really advanced capabilities in the enterprise, but also for consumers, almost at the same time, which is really interesting to think about. Versus some of the technologies you mentioned earlier, say the steam engine and the electric dynamo, which appeared at work in the factory before they appeared at home, I would think. So what's going on here where folks are using AI intensely as consumers versus the business side of the adoption? And how do you think about what that means for economic impact? When people say, when will it show up in the statistics, how does the consumer angle versus the business angle affect your answer to that question?
[00:19:52] Daniel Rock: Well, I would say that Gregor definitely knows more about this than me, having looked at the consumer stuff, or Alex, if you've looked at some of the actual data going through these systems. But one thing I will say is, it's almost like companies are getting a ton of training for free, to a degree. If you think about that direction of things, people are using this stuff at home. I mean, I ask my various AI models all sorts of questions that are not work-related, and I feel like it makes me better at the work that I do. One difference I do see with some folks in terms of how they use it as a consumer versus at work: to take as much advantage of expertise as possible, I like to have models ask me questions until they have enough information to help me build documents or artifacts for work. I probably do a little bit less of that when I'm using it as a consumer. It's more one way. I wonder if that has some implications; maybe I should be doing it differently. But the idea that the consumer side of things can actually drive better outcomes on the work side seems probable. I haven't looked at the data on that. And I'm sure there are some advantages going the other direction. I just don't know. I defer to my fellow panelists here.
[00:21:12] Ronnie Chatterji: No, that's great. Gregor, what do you think about that? It's an interesting idea that Daniel's putting out there. Are you thinking about that when you study things like enterprise enablement? You know, look, a lot of consumers are doing a lot with AI at home, so it's not like the first time they're encountering it is at work. Anything you're seeing there?
[00:21:26] Gregor Schubert: Yeah, it's interesting. I love this idea that Daniel's bringing up, that home experimentation in some sense probably helps productivity at work to some degree. And I think it's definitely true that, I mean, all of us here on the panel, but also many people who are really excited about AI applications, probably find ourselves using them both in our private lives and at work. People don't have that strict separation. It's currently still very hard to pick up what exactly that spillover is. One thing that we see, so I have some recent research with my co-authors, Michael Blank and Ben Zhang, where we look at what people are doing at home with AI. It's built on this observation that it actually looks like more people are using chatbots at home than at work so far. And so there's a sort of iceberg of large consumer benefits that are happening in households that are not captured by labor market indicators or GDP measures. People are actually doing a lot of productive stuff; it just doesn't necessarily end up in the market economy. When people are planning their next trips, planning dinner parties and shopping lists, researching health issues, things like that, all of that is economic value that's being created, and some of it maybe even prepares people for doing similar tasks in the market economy later on, or transferring some of that to their work lives. But we're currently not necessarily capturing that in official macroeconomic statistics; none of that would show up in GDP growth because it's all consumer surplus in some sense. People pay very little for the most advanced models, but they get a lot of use out of them, and that's what we find in this research: people use it for these productive workflows and seem to get a lot of benefit from that.
Ronnie Chatterji: And this is interesting because, Alex, we found this in our "How People Use ChatGPT" paper as well: there's a tremendous amount of home production being done with AI that would not show up in GDP.
[00:23:33] How are you thinking about this? We hear about these amazing productivity gains individuals are having, but it's not necessarily showing up in the macro data. On the consumer and business side, Gregor and Daniel have given us ways to think about it. What are you looking at as you think about how that's going to translate down the road into our economic indicators?
[00:24:07] Alex Martin Richmond: Absolutely. You know, about 30% of the usage we see on consumer ChatGPT is work-related, based on signals in the data, so it's clear that people are blurring those boundaries. It's really hard, and I think there's ongoing research on this, to think about how people value these kinds of consumer uses of ChatGPT. I think Gregor's research supports that it's huge in terms of personal productivity gains and welfare, but if that time isn't market time, then how it translates into GDP, and then into aggregate productivity, is really difficult to measure. As we start to see professional workflows evolve, we will probably start to see more and more of that pickup in the productivity statistics. I would love for us to have better ways in the US to measure these kinds of changes at the occupation level. In the US, we don't have great administrative measurements linked to tax or wage records.
[00:24:50] We have to rely on smaller surveys to make inferences about changes in employment levels by occupation, which would help us better link these productivity estimates and these usage changes from consumer ChatGPT to aggregate statistics. I'm optimistic that maybe we'll make improvements on that front, but we're not there yet in terms of aggregate data.
[00:25:17] Ronnie Chatterji: That's a good question. I can think, Alex, about some of those indicators, and policymakers are watching the jobs indicators in particular, but other indicators as well, like price levels and GDP. Let's suppose the job transition happens faster than policymakers expect. What kinds of policy solutions should be on the table? How do economists think about this? There's so much discussion on social media and in the papers covering this, but academics take a long time to get to the party to talk about it. So, what are things economists are thinking about in this space?
[00:25:52] I've heard lots of really interesting solutions to this problem. One that we don't give enough credit to is that the unemployment insurance system in the U.S. is pretty well-designed for this purpose, to respond to people when they have a negative employment shock and help them bridge the gap into finding their next opportunity. We should be open to the idea that our current unemployment insurance system infrastructure is better suited to these short-term gaps. For example, if your particular plant or firm has a layoff, and you need some time to find another plant or firm that needs someone with roughly similar skills, maybe you do make some kind of occupation transition, but it's not typically a large leap in terms of the job you left and the job you're looking for.
[00:26:54] With AI, though, we could see people needing to make job transitions that are larger in some skill-distance sense, meaning they need to transition to something that looks quite different from what they were doing before. So making the unemployment insurance system more generous, or extending its duration, could help cope with the fact that these are potentially more radical changes. I also think the tax system is, in some ways, designed to assist with these sorts of things and make transfers from those making more money to those making less, as we do have a progressive tax system in that respect.
[00:27:50] While policies like wage insurance are really interesting and we should continue to evaluate precisely how we should do this and think as carefully as we can about providing re-skilling or retraining, I think we shouldn't underestimate the ability of existing policies to do some of this work, provided that they have adequate support, funding, and perhaps some tweaks to make them more effective for this case.
[00:27:59] Speaker 2: It's really useful to build on existing institutions, and there are some good ideas related to this in the AGI industrial policy work that we released at OpenAI a couple of weeks ago as well.
[00:28:00] Gregor, how about you? When you think about the job transition unfolding faster than maybe people expect, what kinds of policy decisions are interesting to you? What should we be thinking about? Are there new areas we should consider?
[00:28:11] Speaker 3: One thing I would consider is investing in AI enablement to some degree. If we think that a lot of new roles will involve both using AI tools to enhance productivity and working in broader AI-enabled organizations, we need to provide people with access to frontier models and some sort of capability enablement to figure out how to work with these tools.
[00:28:38] One thing that always surprises me: I teach an MBA course aimed at enabling managers in this area. It runs five weeks, and the students come in with a fairly high level of knowledge; these are graduate students in business. Yet five weeks is barely enough to get them to a level where they feel really confident using these tools.
[00:29:01] Given that most of the workforce does not come in with the same level of preparation, it would take at least that much, say 10 to 20 weeks of AI training, for everyone to become fully fluent in these tools. That necessitates significant public investment in funding some of this training as a public good, since it can't all happen within corporate environments; that would only benefit those who currently have jobs.
[00:29:34] It might also mean ensuring that everyone has access to some of the best models. One of the biggest gaps we often observe is that those who haven't worked with frontier models tend to have a distorted view of the capabilities available, and are less able to envision where to use these tools productively. So I think a lot of investment can go into that.
[00:29:48] Speaker 1: It's really interesting, democratizing AI in that way. That's something we emphasize a lot: making sure people are more familiar with the tools and closing that capability overhang Alex was talking about before.
[00:30:07] Speaker 1: Daniel, how about you? When you think about the job transition happening more rapidly than people expect, what are you watching, or what should be on our radar in terms of policy solutions?
[00:30:16] Speaker 2: Yeah, so, former options trader here, to a degree. When I see an environment with this much uncertainty, I think: how can we get an option into our policy agenda? I'll credit Anton Korinek for alerting me to this possibility at some point. Can we have a meta-policy of small-scale experiments we might try, so that if one of them looks like it's going to work out for a particular policy situation, we can scale it up quickly?
[00:30:49] Speaker 2: I think Alex and Gregor made great points. Alex, totally agreed: the unemployment insurance program is pretty well designed for solving certain types of problems. And I think about training as another. In the GPTs are GPTs paper I worked on with Pamela Mishkin and Tyna Eloundou at OpenAI, and Sam Manning, who is now at GovAI, we thought about exposure in the labor market. I think people often interpret exposure in a negative sense. We made no claim that it was a negative thing at all. It's just: hey, there's potential for these tasks to change.
[00:31:26] Speaker 2: And I think a lot of the time it's going to be good. So to Gregor's point about AI enablement, you may want to be training people to rush into AI-intensive work. To me, expertise in the labor market is primarily about problem identification and then solution engineering; those are really important areas where expertise helps. Can we create more nimble programs that enable people to develop expertise very, very quickly for those more distant gaps Alex was talking about?
[00:31:57] Speaker 2: We may want people to rush into this industry and benefit from it as opposed to thinking like, oh no, keep the AI away from me. It's like, no, it's quite the contrary. You got a lot of problems only you know about if you've got some expertise. How can we enable the solution to be something that you build yourself?
[00:32:17] Speaker 1: Fantastic. It's really interesting to think about how providing that training can also close some of the gaps, in addition to helping people learn how to use AI and frontier models better. Well, with that, I think we're going to go to some Q&A. So I want to invite Natalie Cohn back into the discussion. Natalie, you kindly agreed to moderate our discussion here.
[00:32:38] Speaker 1: And I was told that I should hang around and try to answer any of the softball questions that come up and deflect the hard ones to my three esteemed panelists. But I'm happy to stay and participate in that as well. That's the plan.
[00:32:48] Speaker 1: First, I wanted to give a little shout-out to some of the awesome people here in the audience. We'll start with a few OpenAI team members; it's always cool when your team shows up. From Anirani's team, Cassandra Duchon Solis. We would never be able to pull any of this together without Cassandra, so glad to see her here.
[00:33:07] Speaker 1: Also Tyna Eloundou, whom some of you may know. Daniel, maybe you've even worked with her before. Tyna was instrumental in kicking off our very first AI Economics talk with David Autor. So it's really cool to see that our team family is still showing up for us.
[00:33:25] Speaker 1: And then, oh my gosh, there's so many amazing community members here. We've got Jordan Holtzmann coming from Berkeley. He's the MBA program director. Got Kevin Braza. Oh my gosh, just like, I can't believe all the amazing people that are here. Mohsen Fatsadeh, Paul Asito, and Renee Rodriguez. So many. I just have to give a shout out since we can't see their faces, it's always nice to say hello to them.
[00:33:57] Speaker 1: Okay, let's go to some audience questions, folks. We've got a lot of really great ones here.
[00:34:02] Speaker 2: Yeah. So maybe I'll kick this one to you, Daniel: since AI is changing the labor market so much, what would you recommend adolescents study when starting university?
[00:34:10] Speaker 2: Oh boy, I get this question a lot and I'm not sure I always have a great answer. The punt answer is, of course, AI, right? And then the second answer is economics. But beyond that, I think it's more about problem solving.
[00:34:28] Speaker 2: So, engineering is a great place for this. I think engineering and science are going to be perennially great, because this is, as some colleagues have called it, an invention of a method of invention.
[00:34:46] We have new tools that will allow people to discover all sorts of new things. But at the same time, as these systems change how we operate, we really need the humanities folks to come in and explain how this is changing the world for us, and whether that's fruitful or not. So I really don't think you can go wrong studying all sorts of different things. It's about how you position yourself within those categories to take advantage of this stuff. And I would recommend not fighting these technologies so much. Not that everybody is super interested in doing that at first blush. But rather integrating them and saying, hey, this is going to be part of our lives, so how can we make sure it goes well?
[00:35:34] Speaker 2: Oh, that was awesome. Thank you, Daniel. And Gregor, did you have anything to add to that?
[00:35:39] Speaker 3: Oh, yes, I do. We have this debate a lot, because we have the same discussion with our PhD students: what should they be doing? Is it worthwhile doing a PhD anymore? And I think that also applies to undergrads. What I tend to tell them is that it's not really that individual subjects get devalued. Similar to what Daniel was saying, there's going to be a role for people studying all sorts of things, because the needs of society don't go away. It's just that specific tasks within these fields are becoming less valuable, so you need to figure out what you want to be doing within those fields that is still going to be valuable. In a lot of places, I think this brings back the piece about AI doing the middle-to-middle versus the end-to-end: there are elements of execution in some of these fields.
[00:36:17] For instance, if you're studying history, doing archival work is going to look very different going forward; it might look a lot more like structuring automated review of archival documents and figuring out how to build the tools for that. In economics, the skills of solving problem sets and doing algebra really well are going to be less important going forward, but it becomes much more important to find interesting datasets and come up with interesting questions to ask. So in a lot of fields, it just means you want to study them differently. You want to put more effort into being able to ask good questions, figure out where the big problems are, and develop more of that agency and judgment for driving things forward and building your own things, rather than focusing on some of the execution pieces, which are going to become less important.
[00:37:19] I've interviewed a lot of economists, so if you guys don't mind, I'll jump in here. Some of this I find really exciting because I'm not a technical person, but I've already started adopting Codex, I'm really finding value in my workflows, and I was genuinely surprised to learn that I'm absolutely capable of learning how to use these tools. One thing that makes me very excited is that we're learning, across different domains and disciplines, that we're all becoming higher-agency. Historically, I would say I was almost relegated to being an operator. But now I have this tool that lets me build things; I can imagine something and build it. And that's very exciting for people who have a humanities background.
[00:38:10] My background is in history of art, and I'm a painter. It's very exciting for people in the social sciences who are now developing completely new methodologies for research, such as Karen Elkins at Kenyon College. And it's exciting for creatives, for artists: you're so good at coming up with ideas, approaching a blank canvas, and building something from nothing. Now you can take that entire wealth of wisdom you have and start to build the way previously only technologists could. And something we learned, Ronnie, through you and Karin Kimbrough, from last year's data coming out of LinkedIn (we haven't re-interviewed Karin, but we need to) is that two skills were way, way in demand compared to anything else, kind of out of the blue. One was AI literacy; obviously we've touched on that, and we know we have to learn to use these tools. The other was soft skills: the ability to manage, the ability to mediate conflicts. For me, that's very exciting, because I spent the first 10 years of my career as a restaurant manager.
[00:39:30] And it took me a long time to be able to convince people in a different domain, especially technology, that I had skills that were applicable.
[00:39:44] Speaker 1: Now, if you're working in jobs that traditionally have a lower barrier to entry, but you've actually learned skills and become very good, an expert in those things, and you retool with AI: oh my God, the world is our oyster. I just always want our audience to know how this tool is making us higher-agency. I'm definitely not an economist, so forgive me for jumping in there, guys, but I've learned so much from you. That is amazing.
[00:40:11] Speaker 1: Okay. Another question from the audience. I think maybe we'll start, Alex, why don't you take a stab at this one? Where do you see AI creating the biggest opportunity to make work more meaningful, creative or less repetitive?
[00:40:21] Speaker 2: We were talking about how jobs reorganize, and I think this is the kind of place where, if you can offload the things you really didn't want to do, the forms you didn't want to fill out, the emails you were putting off sending, you're automating the most repetitive, boring parts of your day. You could spend a lot more time doing the most interesting, human parts of your job: more reading papers, more understanding, more writing. And I think people should look for ways to get more minutes devoted to the actually interesting, cognitively intensive parts of their work.
[00:41:03] Speaker 2: There's actually a group of us from OpenAI that got to go to the National Gallery of Art last week and look at how curators and conservators work with art. One thing they're trying to figure out is how to use AI to automate the record-keeping part of the conservation process. They spend a lot of time documenting exactly what they do to paintings, which in a very real way takes away minutes they could spend actually restoring and preserving art. I think people will continue to find ways to use AI to get more time for the parts of their work that are the most interesting.
[00:41:47] Speaker 1: And Gregor, I read an article recently that was basically underpinned by your research about how people are finding more joy in their everyday lives. A little bit different than this, but can you explain your vantage point on this? How can we be using AI, whether at work or in our personal lives, in ways that will just elevate our quality of life in general?
[00:42:10] Speaker 3: Yeah. I think what Alex is saying is exactly right: there tend to be these pieces of our lives that are the chores, the grunt work in some sense. I think back to when I used to work in strategy consulting. Strategy consulting is sold to you, as a young graduate, as spending a lot of time solving big strategic problems. In practice, it's mostly making slides, taking meeting notes, and summarizing things.
[00:42:34] Speaker 3: And it turns out that all of those boring pieces, AI is very good at, right? It's very good at summarizing text, and taking meeting notes, and making slides, and things like that. And so as a result, that means that both in jobs like that, but also in our personal lives, we can suddenly focus a lot more on the high-level strategic things. And so maybe the sales pitch becomes more true that you suddenly have these jobs where you just get to think the big thoughts.
[00:43:04] Speaker 3: And I think this is also true in our personal lives, where we do a lot of things that don't feel rewarding, right? For instance, researching medical issues for hours on end by going through different online forums and websites, or travel planning, or applying for jobs, where there's just a lot of filling in endless forms and reformatting documents that none of us wants to do. To the degree that AI tools make some of that easier, or let us do it at a higher strategic level, they give us back more time for the stuff we actually enjoy.
[00:43:47] Speaker 3: What we find in our research is that people seem to be getting the chores, so to speak, in their personal lives done faster, and then spending more time on the things they actually enjoy: social media, streaming, gaming, whatever your passion is. And that, I think, is the goal: to bring people toward being able to do the things they enjoy, because AI is taking over some of the other parts.
[00:44:15] Speaker 1: And imagine if we weren't on this constant cognitive overload all day long. The strategy, the intellectual rigor that we're able to bring back to our work lives: we're making space for that. And that's very exciting.
[00:44:30] Speaker 1: Just yesterday, my teammate and I were learning Codex; we're not a technical team at all. In order to host these events, we have to create a memo before we can even start planning.
[00:44:42] Speaker 1: We have to get everyone on board and make sure it's the right kind of conversation to have; cross-functional teams have to review it. And you're drawing on many conversations you've had with external stakeholders. Take this group: we're talking to the four of you, plus Ronnie's chief of staff, plus all of the other things that influence and drive the type of events we want to host. Usually creating that memo takes maybe three hours. Well, Caitlin got Codex to connect to her email, so we're not even hunting for the conversations, and to connect to the transcripts of the meetings where we're planning and discovering ideas. And in one day, she knocked out three memos on top of everything else. And this is a first-time, non-technical Codex user. So I mean, we're literally jumping for joy. It's very exciting.
[00:45:38] Speaker 1: Okay, last question. And I wish it wasn't the last question, because there's a lot of good ones in here. But we'll have to do this again. So what are the most important questions for governments around the world to focus on solving right now to support civilians? And how can AI support this? I think we started having this conversation related to unemployment and the EDD with Alex. I would love, maybe, Ronnie, do you want to take this one on, just to start?
[00:46:07] Speaker 2: Yeah, sure. I want to thank my fellow panelists and you, Natalie, for a great Forum event, and I'll do the best I can to answer a really important question. From the government standpoint (I worked in economic policy in government for many years), I think it starts with a focus on the individuals who are going to be most affected by AI. In a lot of the popular discussion, there's a sense of jobs being exposed to AI as a technology, without digging deep enough into which jobs are going to be most disrupted by AI versus which will probably be enhanced by it. We're doing some work in that area right now, so I'd encourage policymakers to use that kind of data, including the data we're going to release on our website, Signals, as well as other frameworks and approaches, to figure out where the impacts are going to be felt. The second thing is delivering direct support to those individuals, hopefully at the beginning through existing institutions and programs, and then building new programs through a process of experimentation, the way Daniel talked about; that's going to be really, really important. The last piece is nerdy and wonky but really important: we should also be measuring what we're doing and looking at long-term outcomes, for folks who are affected by AI and receive support, folks whose jobs are changing because of AI, and folks who are doing more at work, like the types of things you talked about, Natalie, to figure out how persistent these effects are and how they're affecting people's lives. At the end of the day, if we can do those things, we'll channel effective policy to the people who need it most, and we'll build a body of evidence to keep us aligned with benefiting humanity.
[00:47:41] Speaker 1: Does anybody else want to add to that before we move to our closing remarks?
[00:47:43] Speaker 2: Okay, folks. Well, Ronnie, yes, please, let's keep pursuing those policy bullets you just talked about, because I want more free time. I want to run, I want to be outdoors, I want to paint. That sounds really great.
[00:48:02] Speaker 1: But Gregor, Daniel, Alex, Ronnie, such a pleasure to have you here in the Forum. A fun fact, Ronnie: a couple of weeks ago, I didn't know Gregor was on this panel. I knew we were doing this, but I didn't know the specific external friends we were bringing to the table. Then I saw Gregor's research being written about somewhere, The New York Times, or The Atlantic, or The New Yorker, and I reached out and DMed him on LinkedIn: wow, this is fantastic, I want to host this in the Forum. And he said, I'm already doing something at OpenAI, and it was this event. So, yay, I was really excited to come here today. And, Alex, welcome to the OpenAI Forum family; really cool to have you. I also want to thank Cassandra and Caitlin, our teammates, who underpin a lot of the work that makes this possible. That was an awesome talk.
[00:49:00] Speaker 1: And we're going to have another one next week: Decoding Biological Intelligence, Building AI Agents for the Brain and Genome. For that event, we're inviting Grace Tseng, co-founder of Perturb AI, and Shen Jin, co-founder of Perturb and a professor at Scripps Research. We have a lot of friends at Scripps; they've been pioneering with AI since the very beginning. They're going to show us how we're moving from mapping the brain to predicting how it works. In partnership with OpenAI, they've built one of the largest brain datasets ever created, capturing activity across 8 million cells, and they're using AI to uncover the rules that drive biology. And what does this mean for us here in the audience? A new era of biology and medicine that's more predictive, more precise, more effective.
[00:49:40] Speaker 1: And maybe even faster at finding new treatments for diseases where we've done a lot of research but haven't been as quick to find solutions that improve people's quality of life. So I hope you'll tune in for that.
[00:49:57] But most of all, happy Wednesday. It was lovely to see you all. Ronnie, thank you so much for coming back to the forum. We'll see you again soon. Don't be a stranger everyone, and we'll see you guys next week. Thank you.

