OpenAI Forum

Event Replay: How People Really Use ChatGPT: The Who, the What, and the How of ChatGPT

Posted Sep 17, 2025
# AI Economics
# AI Research

SPEAKERS

Ronnie Chatterji
Chief Economist @ OpenAI

Aaron “Ronnie” Chatterji, Ph.D., is OpenAI’s first Chief Economist. He is also the Mark Burgess & Lisa Benson-Burgess Distinguished Professor at Duke University, working at the intersection of academia, policy, and business. He served in the Biden Administration as White House CHIPS coordinator and Acting Deputy Director of the National Economic Council, shaping industrial policy, manufacturing, and supply chains. Before that, he was Chief Economist at the Department of Commerce and a Senior Economist at the White House Council of Economic Advisers. He is on leave as a Research Associate at the National Bureau of Economic Research and previously taught at Harvard Business School. Earlier in his career, he worked at Goldman Sachs and was a term member of the Council on Foreign Relations. Chatterji holds a Ph.D. from UC Berkeley and a B.A. in Economics from Cornell University.

David Deming
Academic Dean and Professor @ Harvard Kennedy School

David J. Deming is the Danoff Dean of Harvard College, the Isabelle and Scott Black Professor of Political Economy at the Harvard Kennedy School, and a Professor in the Harvard Economics Department. He also served as Academic Dean of HKS from 2021 to 2024.

His research focuses on higher education, economic inequality, skills, technology, and the future of the labor market. He serves as a Principal Investigator—alongside Raj Chetty and John Friedman—at the CLIMB Initiative, an organization devoted to studying and improving the role of higher education in social mobility. He is also one of the faculty leads of the Project on Workforce, a cross-Harvard initiative spanning HKS, HBS, and HGSE that concentrates on the future of work.

Together with Ben Weidmann, he recently co-founded the Skills Lab, which develops performance-based measures of “soft” skills such as teamwork and decision-making.

In 2018, he received the David N. Kershaw Prize for distinguished contributions to public policy and management granted to individuals under the age of 40. In 2022, he was awarded the Sherwin Rosen Prize for outstanding contributions to Labor Economics.

His writing appears semi-regularly in The New York Times and, more recently, The Atlantic. He also maintains his newsletter at Forked Lightning.


SUMMARY

The discussion between OpenAI Chief Economist Ronnie Chatterji and Harvard Dean David Deming centered on findings from the largest study to date of ChatGPT usage. They highlighted how AI adoption is broad, fast, and practical, with strong evidence of democratization—closing demographic gaps, benefiting novices as much as experts, and being widely integrated into both personal and work contexts. They also emphasized the productivity and decision-support roles of ChatGPT, pointing toward shared societal benefits and new frameworks for understanding AI’s economic impact.


TRANSCRIPT

Hi everyone, I see so many familiar faces in the audience already, Pam, Oral, welcome back to the forum. So nice to have you in the audience. So welcome everyone. I'm Natalie Cone, head of the OpenAI Forum, the expert community hosting our conversation this evening. In the forum, we spotlight discussions that reveal how AI is helping people tackle hard problems. And we share cutting-edge research that deepens our understanding of this unprecedented technology's societal impact.

AI is an innovation on the scale of electricity. It's transforming how we live, work, and connect. OpenAI's mission is to ensure that as AI advances, it benefits everyone. We build AI to help people solve the toughest problems and challenges, because solving hard problems creates the greatest benefits, driving scientific discoveries, improving healthcare and education, and boosting productivity across the world.

Speaking of boosting productivity, tonight we're hosting the co-authors of research released just yesterday, How People Use ChatGPT, jointly written by OpenAI and David Deming.

The paper is the largest study to date of consumer ChatGPT usage. It shows demographic gaps such as the gender gap shrinking and economic value being created through both personal and professional use.

Tonight we'll be welcoming OpenAI's Chief Economist. We know him very well in our community because he leads the Future of Work series that we've come to depend on for the most up-to-date insights on AI economics.

Dr. Ronnie Chatterji is the first chief economist at OpenAI. He's bridged policy, business, and academia throughout his career, serving as the White House CHIPS coordinator, acting deputy director of the National Economic Council, and the chief economist at the Department of Commerce. He's a professor at Duke and has taught at Harvard Business School with earlier experience at Goldman Sachs. Ronnie holds a PhD from UC Berkeley and a BA from Cornell.

Our special guest this evening is the co-author of the research we're about to share with you, David Deming. Deming is the Danoff Dean of Harvard College, the Isabelle and Scott Black Professor of Political Economy at the Harvard Kennedy School, and a professor in the Harvard Economics Department. He also served as academic dean of HKS from 2021 to 2024. His research focuses on higher education, economic inequality, skills, technology, and the future of the labor market. He serves as principal investigator alongside Raj Chetty and John Friedman at the CLIMB Initiative, an organization devoted to studying and improving the role of higher education in social mobility.

He is also one of the faculty leads of the Project on Workforce, a cross-Harvard initiative spanning HKS, HBS, and HGSE that concentrates on the future of work. Together with Ben Weidmann, he recently co-founded the Skills Lab, which develops performance-based measures of soft skills, such as teamwork and decision-making.

In 2018, he received the David N. Kershaw Prize for Distinguished Contributions to Public Policy and Management granted to individuals under the age of 40. His writing appears semi-regularly in the New York Times and more recently, the Atlantic. He also maintains his newsletter at Forked Lightning. If you'd like to learn more about David or drill into any of those accolades that we just shared with you, please visit his profile in the forum.

So without further delay, please help me welcome Ronnie and David to the OpenAI Forum stage.

Oh man, thanks Natalie. With that kind of introduction, David, I feel like we should just drop the mic right there. Yeah, I'm done. Nothing else. Yeah, exactly, that's all. Natalie, we're looking forward to talking to you later during the Q&A, but thank you for the kind introduction. And David, I just have to say, this is one I've been looking forward to for a long time because-

Me too.

When I came to OpenAI, I wanted to write a paper like this. And when I thought about the people I wanted to work with, you were at the top of the list, partly because of your work on soft skills that Natalie mentioned earlier, which I thought was super innovative. But let's start a little bit at the beginning, because I think people in the audience might be interested in this. Given all the work you've done on higher education and labor markets, and all the leadership positions you've held in academia too, what got you interested in AI? You know, it's an interesting question to think about how you got into this. And then we'll talk about the paper we released on Monday.

Yeah, sure. Sure, Ronnie. Well, so first of all, it's great to be here. Thanks everybody for coming. So I guess I would describe myself as techno-curious. I wouldn't say that I'm the most tech-savvy person out there, but I love new things and like to explore technology. And so when ChatGPT was released, I think in late November, early December of 2022, I was just interested to see what it was. And I was playing around with it, even in the form it was in, which was,

you know, not as good in some sense as it is today. And it was just obvious to me immediately that this was gonna change a lot about the way we work, that it was just a completely novel way of interacting with information that is vacuumed up from all corners of the internet and everywhere else, and that it had this ability to kind of play an advisor and guide type role, answer questions, and respond to things that I would customize, which is different than Google search, which is kind of static.

And so I just played with it a bunch and got really interested in it. And because I'm a labor economist and I study labor markets, it was just so obvious to me this was gonna be a big deal. And I found it really energizing, Ronnie, because, you know, I'd been working on the soft skills stuff, and I'd done a lot of work on higher education and jobs. And I'd been working on this stuff for a while, and I felt like, okay, I kind of know what's going on, I know the big papers. And all of a sudden it's like, here comes AI, and it's so clearly gonna shake up the snow globe, which for a researcher like me is just really,

really exciting. Like, where is this going? I wanna see where it's going next, track it in real time. And so I was just kind of off to the races after that.

That's amazing. And we saw some of the products of that early work in the surveys you did. Tell us what you found in those early surveys, because I think for a whole generation of economists, that was their first introduction to larger cross-sectional and eventually longitudinal data on AI usage.

Yeah, it was a good example actually of how fast AI is moving. So the paper you're referring to is a paper I wrote with Alex Bick and Adam Blandin called The Rapid Adoption of Generative AI. And in that paper, written long ago in the ancient history of 2024, we conducted the first nationally representative US survey of generative AI usage. To make a long story short, we created a survey that mimicked the design of the Current Population Survey. We called it the Real-Time Population Survey. And the CPS is

the most widely used and trusted source of labor market information; the monthly jobs day, the unemployment rate, and all those things come from the CPS. So we created a pseudo-CPS where we asked all the same questions in all the same order as the CPS, but then we added a bunch of questions about AI at the end. So you want to think about: what if the CPS added questions about AI? That's kind of the experiment we were running.

And when we did this way back in, I think our pilot was in May of 2024, we found these just shockingly high numbers. 35%, 40% of people were using AI, which doesn't seem so shocking now. But it was really shocking at the time. And I just was racking my brain, like, is this right? Do we have a biased sample? And so I'd go around, we'd do different things like change the order of the questions from no to yes to yes to no, and just messing with the survey, basically trying to see if we could make the result go away.

And we did another wave in July, and it really didn't.

And I was just trying to understand this. So I'll tell you a funny story. The idea I had for the paper was actually because I live on a college campus. I live at Kirkland House, one of the 12 undergraduate houses at Harvard. My wife and I were deans of Kirkland House until recently. And I'd eat in the dining hall with students, undergrads.

And I talked to my middle-aged academic colleagues about AI, and it was kind of like a curiosity. Like, oh, that's kind of interesting. What's going on over there? Like nobody was really using it that much. And then I went to the dining hall and I'd ask students, you know, are you using AI? And they were kind of sheepish about it.

And then I'd say to them, okay, well, what share of your friends are using AI? It was like, oh, 80%, 90%, 100%. Everyone's using it. So I thought, that's pretty interesting. And that's what gave me the idea to do the survey. And, you know, we had to fight with people. It was like, oh, there's something wrong with these numbers, they're too high. And now nobody's fighting. Now it's just totally accepted that, you know, half the population,

or so, is using AI regularly. At Harvard, the AI safety group did a survey of undergraduate students last year that found that 87% of students were using AI on a regular basis. So it's just like, within a span of less than three years, it's just completely overtaken our lives. And that's what led us to write this paper together, Ronnie.

That's right. I mean, and then in the before times, otherwise known as 2024, you came and presented that work at OpenAI. And I joined as chief economist. It was one of my first meetings.

You were actually here in person in San Francisco for that. I remember we were sitting in the conference room with Tom Cunningham, who became our co-author on this. Zoe Hitzig, who became our co-author on this. And Carl Shen, who became our co-author on this. And a couple others we'll mention here in a second.

And those three and I, and a few others, were listening to you present this. And for us, being inside OpenAI, we were less surprised. We were really interested in your methods and how you were doing it. And since it was one of our first meetings,

One of my first meetings at the organization, I said, oh my gosh, we have to do research on this. I mean, one of the reasons I wanted to join OpenAI is because of the great research tradition, which had obviously been focused a lot on AI itself, but I wanted to do the economics of AI.

Yeah. I even hatched a plan from there to write this paper. And I think, just for folks at home who are seeing this research come out, they're seeing something new every day on AI. David was able to survey people and ask them about their AI usage, which is really valuable, especially when you do it as a repeated panel and compare over time.

And I think the piece that's missing from a lot of work, which we were able to do with our new paper, is we actually were able to analyze message data to figure out how people are using AI. So there's one thing about the incidence of use, and there's another thing about how they're using it.

And then you're going to add a little bit about who they are. So it's the what, the who, and the how of AI, which, up until this paper we released on Monday, How People Use ChatGPT, a National Bureau of Economic Research working paper

for now, no one had really done. So this all started in 2024.

I think one thing I want to mention too, David, as we got the group of authors together: you know, putting together these teams is not easy. Getting the work done isn't easy.

So two things happened. One is we added another member to the team, Kevin Wadman, a fantastic data scientist on my team. And then I have to say the unlock was really your pre-doc, Chris Ong, who was able to join us to work on this.

Chris came with a ton of enthusiasm, rounded out the author team, and it's that group that's on this paper that people are reading right now.

Talk a little bit about the key results and what surprised you about them. And if you don't mention the ones that surprised me, I'll add those too.

But what surprised you about our main results?

Yeah, so first of all, great to shout out the team, just an all-star team of co-authors, you know, Chris and Kevin, keeping the chains moving every day, Tom and...

Zoe and Carl and you, it was like, what a great team. It was just really fun to work with all you guys and I really learned a lot from it.

So, things that surprised me about the paper, I would say a couple of things. I mean, I'm sure we'll talk about all the results.

I think, you know, part of it for me, I know it wasn't a surprise to you guys, but was not just the scale, but the speed at which message volume is increasing. You know, basically doubling every eight months off a pretty high baseline is a crazy rate of adoption for a technology. The idea that basically 10% of the world's population uses ChatGPT at least weekly, it's just really hard to fathom that it would happen that quickly.

So I guess I kind of knew that because, you know, OpenAI had released these numbers, but just to see it in the data was pretty shocking.

And then the other thing that I thought, or a couple other things, one is I was really

surprised to see that the usage is really quite practical and mundane, in a good way. People talk a lot about using AI in an agentic way to write all the code in an organization, or people using it as a therapist. There's a little bit of that, but it's actually not even close to the dominant use, as we reported in our paper.

Something like 80% of all messages are in one of three categories, practical guidance, seeking information, and writing. Those are things that we all do all the time, every day, that are practically useful, but it's not like some super AI that's going to kill us all or do something terrible. It's just actually helping you move the ball forward in your daily life, both at work and in your personal life, giving you advice, acting like a co-pilot, an oracle, an advisor, a research assistant, just kind of a really useful tool by your side. Again, not ...

earth shattering, but surprising just because that's really the use case people are gravitating towards. Those are also the fastest growing use cases in many cases. So it just seems like that's where it's going.

And then I guess there are two more things I'll say that surprised me. So one was some of these demographic gaps closing. You know, in retrospect, I shouldn't have been surprised at the fact that the gender gap is closing, and the gap by country income. So, you know, ChatGPT usage is growing faster in middle-income countries than high-income countries, because once you're getting to 10% of the world's population, it's kind of hard for gaps to stay there forever; if everybody's using it, then those gaps are necessarily going to close. But I did not expect the gender gap to close completely, as we report that it has.

And then the very last thing I'll say, Ronnie, and I'm sure you've got some stuff you want to jump in on, is I was pretty surprised at how work usage

is concentrated in a set of information-intensive activities, I should say, that are common across all occupations. So people are using ChatGPT to get information and to analyze it, synthesize it, in order to support better decision making, problem solving, and creative ideation, which is something you do in all kinds of jobs. But what we found in our paper is that, you know, if you're a teacher, you're actually using ChatGPT pretty similarly to people who are in sales, to people who are in management, to people who are in, you know, engineering. And so there are some differences across occupations, but overall, it just seems like a tool that's incredibly useful in all jobs, and in broadly similar ways.

So I was a little bit surprised by that too. And, you know, maybe I shouldn't have been, in retrospect. No, I think this is super interesting to me. You take a step back and you think about what we did, and this is why you needed sort of everyone to contribute on the

team. It's a large author team, but these really involved econ papers now often have a lot of authors. You have this massive data set of over a million messages. Then you have to think about how to classify what those messages are. And of course, we don't read the messages, so we have an automatic classification system. And then you have demographic information from surveys and other matched data sets, so we can know a little bit about the occupation, the seniority, the education of the person who is sending that message. And then we have other measures about geography, for example, and the income of the country where that person resides. It allows us to do some really interesting things here.

And when you look at it, I found the same surprising facts as you, David. One is the gender gap. That's been a really important topic in the public debate about AI. And this is why we do research, right? Because nobody else really has access to such a large platform, to be able to measure something like that with the real data.

Our big contribution here, when we think about the gender gap, is that we're the biggest platform, the one where most people are using it. So we can say something about the average user of AI that no one else can. And when you look at that, and you think about the gender gap closing, we now can add a really useful data point to the public discussion on this.

And I think you're right. As you get bigger and bigger, those disparities naturally disappear or reduce. And that's a huge part of just democratizing AI and having more people use it. So that was cool and interesting. I didn't realize it would close that quickly, but I felt the same way you did.

I think some of the other things we found were surprising to me as well. I'll flag two. One is the share of messages that are being used for work. Now, this one was talked about a lot in the press, and I think it's super interesting. About 30% of the ChatGPT messages were for work. And before you think, oh, wow, that means it's not being used too much in firms, remember, this is the consumer ChatGPT.

So we're talking about the fastest growing consumer product in history, but 30% of the messages actually seem to be for work. It'd be very different, I'm sure, if we looked at the enterprise data, API data, people building things on top of it, Codex and other products. This is the consumer side of ChatGPT. The percentage of a consumer tool being used for work was really, really interesting to me and something I wasn't expecting. And I think it'll be really interesting in future research to see what those patterns look like across different kinds of products, but at least for the consumer side, and the massive platform we have on the consumer side, that was super interesting.

I think the second thing, and this made me feel good, we both have kids who are teenagers, it made me feel good about the writing piece, because a lot of educators and parents are worried that kids are going to basically outsource writing completely to ChatGPT rather than doing it themselves. And it seems that the preponderance of our writing examples are people who are supplying ChatGPT with some text,

presumably text they've written, and then asking it to edit and critique it. Which, if that can be the more dominant use case on writing, I think we have a case for improving writing and critical thinking. And I was happy to see that, because I didn't really know what we'd find there. So those two things, the higher than expected share of work use cases, and the writing being much more editing and critiquing, right, rather than creating from whole cloth. I thought that was super interesting, and things that people just don't know, because they don't have access to this kind of data. Yeah, and I think if I zoom out to the bigger picture on what these patterns mean, in closing of gaps and in work versus non-work: if you look at some of the early cohorts, and we show this in the paper, the early cohorts of people who signed up for ChatGPT, like me, signed up right away, or kind of power users, they're people who use it much more for things like coding and practical writing things on the job. So there's this kind of early idea that the use case of

generative AI and ChatGPT would be focused on doing workplace tasks and replacing work. But then I think what even the people who started that way discovered over time, and our research shows this, is that they started to broaden to more non-work activities, and they found that's actually where ChatGPT is most useful.

We have this classifier, interaction quality, which measures some sense of the user's affect and asks: does it seem like the user is happy with the interaction they're having? The highest rated interactions tend to be queries related to things like asking questions and getting information.

So it just really seems like as people's usage of ChatGPT evolves, it becomes broader and it becomes more like an advisor. And it's also the kinds of things like writing and advising where you see women using it more than men, whereas men are more likely to use it for coding and technical help.

I think what you see is the tech-intensive people started using it earlier, using it for tech-intensive, work-intensive things, but over time, it's become something much broader.

Obviously, there are huge economic impacts, but the societal impacts are even bigger, because, as you said, 70% of the messages are not related to work, but they're still quite practical.

It's like I'm getting a workout routine. I'm figuring out what products I should buy. It's just things that are helping you plug along in life and have more time for the things you want to do and less time searching on the web.

You're so right about that, the stage of development for a platform, because early adopters might be really tech-savvy, and their use cases can be really different than when you have, let's say, the most popular product on the market.

It's used by 700 million people, and I think you'll see a lot of results around AI from different samples.

And look, you have to look at the underlying sample. Something could be really interesting for specific types of occupations, like developers, or people in certain parts of the world, say India. But we're able to look at the entire spectrum and tell you what the average chatbot user is doing. And I think that's an important contribution, which I enjoyed about it.

I remember when you kind of coined this phrase, and we've been talking about it for a long time. Like, you know, you and I, we write these papers not just to create descriptive data. We actually want to understand the world. And I think what you've done really well in a lot of your prior work is figured out frameworks, taxonomies, terms to be able to kind of explain what's going on.

I think soft skills and the way you operationalized that is really interesting in your work. Here, you came up with asking, doing, and expressing. It's like, Ronnie, it's like branding for nerds.

Yeah, exactly. But it's important, right? Because the way we academics take ideas and cite ideas, they have to be in a framework and a menu that we understand. And honestly, that's something

a lot of academics maybe don't appreciate enough. Ideas travel on their own. So: asking, doing, expressing. I think that's gonna be, for many people, the big takeaway from the paper, which is, what is AI today when it comes to ChatGPT? It's asking, it's doing, and it's expressing, with asking and doing being the most dominant in the intent of the user. Talk about how you thought about that taxonomy, and also what it means for economists and other social scientists as they look at AI and its impact on society. Because I think we're pivoting a little bit from the prevailing discussion on AI in a really important way.

Yeah, yeah, thanks for teeing that up. So I think when you look at the different ways that people use ChatGPT, one of the ways you can classify messages, which we do in the paper, is by conversation topic. So is it about writing? Is it about recipes? Is it about tutoring? So those are like topics,

you know, that you're talking about. But I think there are some broader buckets it falls into, as you mentioned, and it divides into two different ways that people often use the models. One you could think of as using it like a coworker: you're gonna outsource some task to the model, you're gonna ask it to do something for you, like write me an email so I can send it to my boss or my coworker or whatever, or make me a grocery list. So you're asking it to do something, you're offloading effort to the chatbot.

And that's the thing people typically think about. Certainly economists think about is like, oh, I've got a bunch of tasks I have to do, okay? And instead of me doing them all myself, I'm gonna outsource some of them to the chatbot, that's gonna make me more productive, free me up to use my time differently. And that's kind of like a standard way that economists think about not just AI, but any technology. It's like a technology that can do some tasks for you or help you be more productive in certain tasks.

But when you step back and you look at not what the model is capable of, but what people actually use it for, it's not really that as much. I mean, some of that, the doing tasks as we call it, is a decent chunk, but it's not as big as another category, which we call asking, which is really using AI like a copilot or an advisor or a guide, where you're not actually telling it to do anything.

You're seeking information. You're seeking a way to think about something. You're ideating with it. Like, give me some ideas for columns or, you know, a name for my fantasy football team or whatever it is. And so you're not asking the model to do something for you. You're using it as a thought partner or a guide or a coach or a mentor.

And so that's really asking the model for information or advice that helps you make better decisions in your life, either at work or at home. And that's what we call asking. So asking is seeking information or, you know, support with decision making in order to do something differently,

rather than offloading work to the model. And when we break that up, we find that asking is actually the dominant use case, and it's grown relatively faster than doing. And it really has, I think, very different implications for how these models are gonna be used going forward.

And it kind of takes out, at least for the consumer plans, the idea that people are finding the most value in the agentic uses of the model. They're actually finding more value in the advisor and guide side of the model. And that's really important, because that's useful in all walks of life.

Like you can ask it for all kinds of different things, depending on who you are, what your job is, what your personal life is. It's just very broad and flexible in the ways that you can ask for help. And so I think that's why we see such big growth in that because the models are quite good at that.

You know, it's basically like, instead of asking your friend down the street, you've got some statistical combination of all the

people in the world that you can ask for advice, and you can tell the model which corner of that space to search in. So you can get advice from an expert, you can get advice from a novice. You can really tell the model what you want, and it will give you something that's really tailored to your situation.

I love it. And for the economists in the audience too, I mean, this portends a different way of explaining how AI will diffuse through the economy. The task-based models are really interesting, and have important implications, and have driven a lot of interest in the O*NET classifications and how we think about which tasks are exposed to AI. Really important work.

This is leading us in a different direction, which is, look, with the task-based model, you imagine AI just picking out those tasks, right? And a job is a set of tasks (we're not sure actually that's what jobs are, they're probably more complicated). But if you think about this, this is decision support. This is the idea that we have a bunch of decisions, and AI can help you make those decisions. And that is, as David was saying, broadly applicable, and probably calls for a different way of thinking through

the economic theory and the implications downstream from that, which I think is really interesting, something that we're gonna do some future work on. And when you think about how that's going to affect jobs and productivity, this is also gonna change a lot of the predictions out there. And one thing I wanted to highlight, David, is that a lot of people are looking for the impact of things like AI on GDP.

We find a lot of evidence of what economists call consumer surplus, right, in our kind of taxonomy. Erik Brynjolfsson and some other people, not related to this paper, but fantastic economists, find that consumer surplus could be really significant for AI, maybe $100 billion a year. But explain some of the differences between how we can create economic value and why it might not show up in labor productivity and GDP right away, because I think that's something people are really conscious of when they read the popular press on AI.

Yeah, so, I mean, you know, if you're using AI to... okay, let me just give you a concrete example. So one thing that I asked ChatGPT to do pretty recently: my parents are getting to the age where they're thinking about moving into an assisted living facility. Not tomorrow, but someday, like someday soon. And they live up in Maine. They moved here to be closer to us. I'm from Tennessee originally, so they don't really know the area. And they were looking around at places, but they had no way to think about what the right choice is. And the finances of these things, as anybody who has experience with this knows, are very complicated: how you buy in, they have different structures. So I asked Deep Research at the time; now it's GPT-5, it's all wrapped in, but at the time I was using Deep Research.

And I asked it to write me a 15 page research paper about assisted living options in the New England area. Give me different options in different states. Here are my parents' budget. Here are the things they really want. They want to be close to nature. What house they want, like, you know, all kinds of details. And I got an incredibly detailed customized report giving me ranked options, trade-offs, et cetera.

That one thing alone saved me a month of work. Absolutely not gonna show up in GDP, but it was a huge weight off my shoulders.

And the consumer surplus, and what does that really mean, consumer surplus, it means what would I have been willing to pay for somebody to deliver this report to me?

The truth is, Ronnie, I probably would have been willing to pay several thousand dollars for that report. Instead, I pay, I use the pro plan, so I pay $200 a month for my subscription.

So that one paper alone basically paid for a couple of years' worth of a ChatGPT Pro subscription, but it doesn't show up at all in the GDP statistics. But it has huge value to me, because I'd actually be willing to pay a lot more than I had to for that.

So the surplus is the difference between my $200-a-month subscription and all the value that I'm getting out of the product, which, I gotta say, and I'm not just saying this because I'm on the OpenAI Forum, is way higher than $200 a month for me, because I'm a pretty heavy user of it.

So that's consumer surplus, it's the difference.

between what you're paying and what you'd be willing to pay. And so Erik Brynjolfsson and his colleagues, as you mentioned, have been doing a bunch of surveys where they ask people about their willingness to pay for digital products like generative AI and the internet. And they basically say: if we turned it off and you couldn't use it anymore, how much would you be willing to pay to get it back? And people give numbers that are a lot bigger than what these products often charge. Then you use that to calculate the consumer surplus, and you get extremely big numbers. And I suspect you would get a big number for generative AI as well.
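The arithmetic behind this is simple enough to sketch. As a rough illustration in Python (the dollar figures below are just the ones from David's anecdote, not measured data):

```python
# Consumer surplus: what you'd be willing to pay minus what you actually pay.
# Illustrative figures only, drawn from the anecdote above.

def consumer_surplus(willingness_to_pay: float, price_paid: float) -> float:
    """Surplus the consumer captures on a purchase."""
    return willingness_to_pay - price_paid

# David's example: one Deep Research report he valued at several
# thousand dollars, versus a $200/month subscription.
report_value = 3000   # hypothetical willingness to pay, in dollars
monthly_price = 200   # subscription price, in dollars per month

print(consumer_surplus(report_value, monthly_price))  # 2800
print(report_value / monthly_price)                   # 15.0 months "paid for"
```

The point of the sketch is that the $2,800 of surplus here is invisible to GDP, which records only the $200 transaction.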

It's super interesting, too, because so many of the workers who are the topics of these media discussions around AI are the same workers who are almost professionalizing a lot of their tasks at home. Your research paper about your parents' assisted living choice is an example: someone with a very knowledge-intensive job, who writes research papers for a living, wants to make a decision at home on a personal matter, wishes he had a research paper, and writes the research paper, which creates all this consumer value for you.

And so in some sense, these are workers who really benefit from it. On the other hand, there are lots of people who couldn't have written that research paper on their own, who don't have experience in the market, who can now press the same button and use the same prompt you do and get a really high-quality report. Arguably that creates even more surplus for them, because you probably could have pieced some of that together in other ways, even if it wouldn't have been as good, but someone with no experience and no formal training has no way to compare assisted living centers or think through these things in the area. So I feel like that's the thing people are maybe missing from a lot of the AI discussion: the value being created for people across the income and age distribution and around the world, right? Because if you think about the success that we have-

Yeah. And an interesting implication of what you say, Ronnie, is that we tend to have these stories about technology, whatever's happening, that it'll increase inequality. So like these new technologies

are going to be used more by certain people in a way that exacerbates the gaps between the rich and the poor and so on. And that may be true with AI, but actually a lot of the early patterns suggest the opposite. Like the fact that we're basically closing gaps by country income within a few years; that the technology, as many other researchers have shown, tends to benefit novices more than experts. It helps you get to an acceptable level of expertise. In the case of learning a new language, it'll help you be conversant much faster, and so on. So it kind of brings up the floor, which sure does seem like a story that reduces inequality rather than increases it. But we'll have to see, obviously. The point is, the early returns on the technology are actually kind of egalitarian relative to other technologies in the past.

That's right. And this is really important. I have the same ideas you have, which is: one, the early evidence does seem to point in that direction. There's a paper by Lindsey Raymond and Erik Brynjolfsson on this. You know, there are other ones with management consultants that find more of the typical spread, the skill-biased technical change. So it depends on the task. But this is why we do research, right? Because to understand how this is going to impact, let's say, inequality, we're going to have to do serious research using econometric techniques and real data to figure it out. And there's really no other way to do this kind of work. So what I love about this opportunity is being able to partner with folks like you to do that research using this kind of data.

And I feel like, when you think about the mission of OpenAI, the reason I joined, to benefit all of humanity, my little slice of that is trying to create studies like this to inform people about how things are working. And whether it shows that gaps are closing or widening, I think it's just really important to do the work. This gender gap finding was really interesting, the fact that it was closing. I'm interested to see in future research whether other kinds of gaps are closing. There are some things, obviously, we don't know in terms of...

demographics that we can't measure, and it'd be interesting to see whether those gaps exist and persist. So those are some things I'd love to do in future research. For you, what are the... I mean, every time you write a great paper, and I shouldn't call it great since I wrote it with you, but like we love our kids, we love all our papers equally. As we think about this paper we're really proud of, there are always things like, okay, I wish I could have done this, I wish I could have done that. What is that thing for you, those unanswered questions you still have about AI? And I'll give mine too, to make it fair, after.

Yeah, yeah, no, it's a great question. Of course, there's always value left on the table, so to speak. So I'd say there are two things that I'd still like to do. One of them is answering some of these macro questions you mentioned, like the impact of AI on the economy. All that said about consumer surplus, I would love to look at a bigger picture. So like: regions that are adopting AI faster, or industries or companies that are adopting AI faster.

Are they benefiting? How are their workers changing their skills? Kind of a more zoomed-out view. This paper was very much about how individuals are using ChatGPT, but I'd love to know: how are businesses using it? How are industries using it? How is it being integrated into the economy in a more macro sense?

And then the other thing, kind of related: I do think there'd be tremendous benefit to getting some relatively real-time information out there about AI usage, how it's changing, how it varies across geographies and industries and so on. I've been building this kind of tracker for individual usage at the survey level. You mentioned the survey paper; we've been doing a survey every three months, and we're standing up a tracker that measures it. But it'd be great to do something like that using internal message data too.

I think it'd be really informative for decision makers. You could think about it as the new CPS for AI, providing real-time information to the markets, to central bankers, to the public about how AI is being used and how usage is changing. I think it'd be really cool.

I really like this idea of regions or industries that are adopting AI more, or are more likely to adopt it, and how that affects things at the more macro level. Because right now, our work so far is about individuals and messages, and that's great. But thinking about it in terms of an ecosystem effect, a regional effect: if there are some regions of the world, or even of the United States, that are more AI-forward, being able to show that that has benefits and spillovers, the same kind of spillovers that people have found in the agglomeration of key tech industries, that would be really, really interesting.

And I agree, it's something we could do. And you need detailed data like the data we have to do it, but obviously matched with other regional characteristics and patterns of adoption. One thing I'd just add: you know, we have some good data on interaction quality, and we've all talked about this as a co-author team, but I'm really excited to bring

this into the field and look at the decisions people actually make, let's say off-platform: deciding how much to save, whether to invest in life insurance, saving money for retirement or assisted living like the things we talked about, or even just basic financial decisions when you're graduating from college. I'm really interested in those kinds of things, in whether AI is actually showing up as decision support in an effective way in the field. I think that's an area, based on the study we did and seeing how much people are relying on it for decisions, that I really want to pursue. Because interaction quality is great, but the next step to me is what actually happens when they make those decisions in the real economy.

Yeah, that's a great point. That's the next paper we'll do. That's the next paper, right? We never run out of these things as we go forward.

I think maybe the one thing I'll say, David, on the paper itself is to remind people: for a lot of folks, they saw the paper and they're like, congratulations, it's out. And we know it's a working paper, so we actually are super excited to get comments from folks. If you're in the audience and you haven't checked it out, please feel free. I actually already got a bunch of emails earlier today with comments and questions. It's really great how quickly people can read it, and we really encourage people to read the paper.

Even if you're not an economist, you can read the abstract, introduction, and conclusion. And hopefully, if we've written it well, and that's on us, it summarizes the main results and why we think it's important. It's a working paper at the NBER; you can find it on their website. You can also find it on OpenAI surfaces, including the Global Affairs blog and some of our sites here.

So if you want to read it, send us some feedback, send us some thoughts on what you think is interesting about the paper. And if you give us ideas, we're going to continue to update this. Our vision here is to submit this for peer review. And I should say that was always the other goal here.

Like, I think the great thing about OpenAI for me is the freedom to actually publish this kind of work, or aim to publish this kind of work. So not just share it with the world as a working paper, but work with someone like David to try to submit it to a journal. And we have work to do before we're gonna do that. We intend to write a better, revised version in the coming months and submit it. So we'd love your comments and feedback on that as well.

Absolutely. Yeah. And if you're pressed for time and can't read the whole thing, you can also have ChatGPT summarize it for you.

That's a good point. Actually, I'm interested to see how ChatGPT will summarize a paper about ChatGPT. That's actually very meta.

It's very meta. Yeah, exactly. Very meta question in terms of how it'll do and what it'll say about the authors.

And I do wanna give one final shout out to the co-authors on this. They've done such fantastic work. Tom Cunningham, Zoe Hitzig, Kevin Wadman, Carl Shan,

Christopher Ong, fantastic team. All-star team, love you guys. Looking forward to working with them in the future. I think maybe I've seen some questions pop up already. Should we hit it now, David? Are we ready for Q&A?

I'm ready, let's do it. And can I invite the fabulous Natalie Cohn back to join us and help us facilitate the Q&A as we go forward?

Hi, fellas. That was fascinating, so inspiring, so hopeful. David, I was using ChatGPT recently for the same exact use case: trying to understand assisted living, the options, the way it gets funded. It's so interesting how we see these human through-lines in use cases.

Our Global Affairs team has a prompt newsletter, and we were just talking also about fantasy football. It's a giant use case.

We all share, we have so much in common and it's kind of reflected in the way that we use this amazing tool.

Okay, on to the Q&A. So long-time forum member Melanie McDonoghue, Chief Innovation and AI Officer at the City of Lebanon, asks: Of the trends you found narrowing the gender gap (training, mentorship, flexibility), which matters most for future progress, and what should be prioritized next?

Well, I'll start, maybe, David, and you can jump in. I think the question is: to the extent that those factors, training, mentorship, are associated with reducing the gender gap, what's the most important thing? I'll say a couple of things. We found that the gender gap has been reduced. It's been a big priority, obviously, at OpenAI to make sure more people get the tool in their hands. So I think that's great. You need an effort from those who are designing the products, disseminating them, putting them out there in the world, to make sure more people can use them. So that's really good, when you have the producers of the technology on board with that. I think more broadly, we just have to make sure we design technology with different kinds of users in mind. It's not just the early technology adopter who's a developer. It can be used for lots of different cases, right? And I think you see this with AI.

The one amazing thing about AI, it's so flexible. I think in terms of specific interventions, I think demystifying AI and showing that it can be used by anyone is a really good way to reduce all kinds of gaps. I think also the utility, the practical utility is really important. There's a set of people who want to experiment with anything new, whether it's useful or not yet. And there's a set of people who are really busy, and they're saying, tell me when it's useful. Tell me a thing I can do with it.

And I have a feeling that when you start to show people the use cases, you can use this to solve this problem. So if you're facing this in Lebanon, Melanie, you can think about: where can I show people, what's the hardest problem

you have today, can AI be used to solve it? That's a good way to get more people engaged, and the early tech adopters will be there for all the cool products, but I think to get the mass market and close those disparities is often identifying that killer use case that makes sense to them. AI is really flexible, so I think it's likely to at least make some sort of progress on that.

I don't know, David, how you think about that.

I mean, I think as to why the gender gap closed, we don't know for sure. But one possibility: if you look at the timing, usage really started to increase in early 2025, which is when growth in overall message volume accelerated, even from a very high baseline. What we saw is that all of the user cohorts, including the people who signed up at the very beginning, started using ChatGPT more all around the same time in early 2025. So what that suggests to me is either the model got better, maybe with some of the more recent releases, or it's something about the social acceptability of ChatGPT that changed in a way that made everybody feel like they could use it; the usage really broadened around that time.

And that's exactly when the gender gap closed. So whether it's a cultural thing, or whether it's something about model capabilities, or both, I don't know if we can say for sure. But the closing of the gender gap was coincident with some of the other things we were talking about, this increase in asking as a use case and using it for decision support. That all seemed to really coalesce about eight months ago.

Yeah, it's a good time for future research. I agree to answer why, and then try to inform that question. Very cool.

Okay. From Micah Gaudet, deputy city manager, city of Maricopa.

Were there any early hypotheses that turned out to be completely wrong, or surprisingly right, as the data started to roll in?

Wow, I think for me, it was the work stuff. I'm really obsessed, David knows this, all the co-authors do with like the work usage. And I really, you know, maybe I just hang out with too many people who aren't using it for work as much or only use it for personal things.

I was surprised by the share of work usage. I thought there was more of a bright line: hey, this is a consumer product, people are using it to do really cool stuff on the consumer side. I knew it could help people make decisions in what we call home production, in the language of economics, decisions like the one David was talking about, but I didn't think as much of it would be related to work. And it's just surprising for any consumer product to have that kind of share of work.

And so that was really surprising, a hypothesis that was, for me, proved wrong; I thought the percentage would be much lower. I had an idea in my head, and this comes back to the asking, doing, expressing thing, that people were using it to offload work or personal stuff. So they were using it primarily, and I don't think agentic is quite the right word, to do things for them. And part of the idea we got for asking, doing, expressing came from a public database of ChatGPT messages called WildChat, where users have consented to make their messages available. We used that to look at some actual conversations, to see if our classifiers were on the right track.

And when you just look at some of these messages, it's very clear that people are not primarily using it that way. They're primarily using it, as we said, for decision support, like a guide, oracle, advisor, you know, kind of conversation partner.

I guess it just seems obvious now, but when I started the project, that was not on my mind. I guess because I'm an economist, I was thinking about economics and tasks and jobs, and I'm a labor economist.

So that was just my frame. So it really has changed the way I think about how people use AI. And in some ways, the dominant economic value of AI, I think, is different than what I thought it was coming in. So that's been really fun. I like having my prior beliefs upended. That's what we do as researchers; that's why we love it when the data does that, right?

And I would say the expressing part of the trio, asking, doing, expressing: expressing is the distant third of those three. And you know, there's a lot of attention and discussion on those topics, and that's really, really important. But it's not the dominant use case of ChatGPT; asking and doing are just much more dominant. So that was interesting. I don't know if I had a hypothesis about it, but I was interested to see the data.

And there's been sort of other things that have come out not actually using the data that have conjectured different rank orderings on it. We were able to actually look at the data and make that decision, or make that sort of inference. So that was really cool.

Yeah, Ronnie's referring to a study. I'm gonna name names. Ronnie's referring to a study that was in the Harvard Business Review a few months back, looking at Reddit posts and other

forums, suggesting that therapy and companionship was the most common use case for AI, and that just turns out not to be true at all. It turns out Reddit and Quora are not representative of the way people use AI. And so, I mean, we didn't know, but that turns out to be pretty wrong. Actually, conversations about relationships and feelings are about 1.9% of all conversations. So they're still there, but it's a small share, relatively speaking.

Mm-hmm. Wow. Awesome. Great work, guys, and beautiful question. Thank you so much for that awesome question. I love this community, David. You have to come back.

I'd love to. Jalal Sarbadani, Assistant Professor of Information Systems at San Jose University, asks, I'm curious to know about the next research questions you're excited to explore based on the findings from this.

Well, I think, I mean, one is we're continuing to be interested in doing more field studies. This was a study of the data on ChatGPT. You could also think about going into the field, into organizations, into schools, into places where people are actually making those decisions, and try to figure out what the impact of using AI in actual decisions is in the economy. That'd be super interesting. If there are organizations and institutions listening that want to do that kind of work and are interested, please come to us. That's exciting.

I think the other piece is to explore more of the work-related uses of ChatGPT, as well as how people use the API and coding tools. That's really, really important. So the work results from this really inspired me, at least, to think a lot more about that side of the equation, in addition to doing some work in the field on implementing AI in decision support.

Yeah, I would just say, I mentioned earlier the kind of macroeconomic effects, and then also trying to do some kind of real-time tracker. Another thing that I didn't mention but want to add: in my other day job, besides being an economist, I'm also the Dean of Harvard College. So I think a lot about undergraduate education. And it's pretty clear to me that AI has tremendous potential to improve learning, but it's not currently being used that way in many educational settings. It's kind of the Wild West out there. People are using it in different ways. Some faculty are super aware of it; some are pretending it's not there; some want to ban it. It's kind of all over the place. What I would love to do is, in a systematic way, put together some research to try to understand what the really good educational use cases of ChatGPT and other generative AI technologies are, and how we can redesign the classroom and learning and assessments to take advantage of this, basically,

intelligence on tap that wasn't available for most of human existence. It seems like we ought to be able to use this to do things better, and that includes education. So I'd love to dig into that more in the future.

Thank you so much, David. And actually, we're going to be hosting a convening for higher education stakeholders, faculty, and leaders in October. All right.

I'm a stakeholder. Very last question, and then the gentlemen have to run. And there are so many questions left over, so I'm going to try and maybe send them your way through the forum, fellas. And if you have time throughout the next month, we can address them, hopefully.

But this question's from Kush Amarasingh. I've known Kush for a long time. He was at Adobe and Amazon. And Ronnie, he was with us in Singapore; he joined us for the events there, so you have actually met him before.

Do you have any qualitative learnings to share about how people feel during these interactions?

Yeah, so good to hear from you again. We have these measures of interaction quality that are also in the paper and David had mentioned some of them before and we could look at, for example, when people are engaged in a message that seems to be about asking and doing, these are relatively highly rated interactions. And so if you look in the paper, lots of data on that. I think this is also just important inside OpenAI. David can talk a little bit more about the specific results there too.

I just mean to say, this is important when you're designing a product that people want to use. So for our people on the business side who are thinking about the product, it's really, really important. But for economists and researchers, it's also really important. Because once you think about asking, doing, expressing, if you pick up different interaction quality across those, it really lets you understand how useful AI is for those different things. And if it's going to impact the

economy, one way or the other, the things where people feel like they're getting the best answer from, let's say, an ask versus a doing request or an expression, that can really matter in terms of continued use. For us, it really matters from the research standpoint, and we do have interaction quality, and some of those asking ones are highly rated.

David, thoughts on that one, though?

Yeah, yeah. Great question, Kush. I'll try to get under the hood a little bit on it. What we mean when we say interaction quality: basically, the model classifier asks, does it seem like the user is satisfied or happy with the answer? If you're having a conversation with somebody, you might say, that's great, now give me 10 more of those, or, thank you for that. It's basically the user's affect when they're having a conversation with the model. Whereas if you're not happy with it, you might say something like, no, that's not what I asked for, I want this instead. Or, no, that's not quite right. It's not that you're mad; it's more that the model's not really giving you what you want. And so we feel like that's really important.

And what we found was that in doing conversations, when you ask it to do something, you're much more likely to have a neutral or negative affect, whereas in the asking conversations, people are much happier with the output they're receiving, qualitatively. So it's qualitative in the sense that it's GPT-5 making a judgment about whether people are happy, but it's quantitative in the sense that we're doing this for a lot of messages.

But it does give evidence, I think, that people seem to be happier with the output they're receiving when they're using AI for decision support rather than as a co-worker to offload tasks entirely. And I think you also see that in a larger sense because people vote with their interactions.

You can see people engaging in more asking and more practical guidance type queries as they gain experience with the model, which is another way of observing people's preferences.
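As a schematic of how that kind of tally might work, here is a sketch in Python. The conversation types are the paper's asking/doing/expressing taxonomy, but the affect labels and the records themselves are invented for illustration; the actual classifier is a model-based pipeline not shown here.

```python
# Hypothetical records: (conversation type, affect label from a classifier).
# Both the labels and the counts are made up for illustration.
conversations = [
    ("asking", "positive"), ("asking", "positive"), ("asking", "neutral"),
    ("doing", "positive"), ("doing", "neutral"), ("doing", "negative"),
    ("expressing", "positive"),
]

def positive_share(records, conv_type):
    """Fraction of conversations of this type labeled positive."""
    labels = [affect for kind, affect in records if kind == conv_type]
    return sum(1 for a in labels if a == "positive") / len(labels)

print(round(positive_share(conversations, "asking"), 2))  # 0.67
print(round(positive_share(conversations, "doing"), 2))   # 0.33
```

Comparing these shares across conversation types is the quantitative step David describes: each label is a qualitative judgment, but aggregating millions of them yields a measurable pattern.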

It all points in that direction: this is actually the thing that people find AI most useful for.

Thank you so much, David. Thank you so much, Ronnie. That was the perfect question to end on.

Really glad I chose Kush's question. He always delivers. It was wonderful to host you guys. What an honor that we actually got to host you the day after we released this unprecedented research paper.

David, we really hope to have you back very soon. We'll be in touch. Thank you so much for your generosity of time and spirit and being here.

Ronnie, you're always our hero. Thank you so much for being here today. So we're going to say goodbye to now so you guys can get back to your lives, but we will be in touch. Thanks so much.

It was great to be here. Thanks for having me. Thanks, everybody. And to close us out, I'm just going to give us a little teaser of what's on the horizon in the

OpenAI Forum. So next week, we're hosting the second session in the Careers at the Frontier series: Inside OpenAI's Recruiting Process. This event is for you if you're interested in working at OpenAI, if you're interested in learning how the people who work at OpenAI got here, what we're doing here, who the technologists behind ChatGPT are. Please join us. It should be a really beautiful conversation with my colleagues, Red and Sarah, who've been doing this work for a very long time.

And then later in the week, we're going to be hosting California's 2024 Teacher of the Year, Casey Cuny, and he's going to discuss how he uses AI to make good teaching even better. He's a fabulous speaker. I have learned so much from planning this talk with him. I hope you guys join us.

And then finally, in October, we're going to bring back one of OpenAI's researchers, Lucas Kaiser. He is one of the co-authors of the Transformer paper, and he is going to be presenting in the forum: Learning Powerful Models, from Transformers to Reasoners and Beyond.

Lucas is absolutely a foundational researcher in this field, and it would be an honor for any of us to have a seat at the table for this talk. I'm really looking forward to it. He's also very kind and fun to listen to, so I hope to see you all there.

If you're new to the community, thank you so much for being here tonight. This was a really beautiful talk. Ronnie never fails to teach us something new. I hope to see you again, and until next time.

