
Event Replay: Sam Altman on Building the Future of AI

Posted Apr 06, 2026 | Views 2.1K
Topics: OpenAI Leadership, AI Governance, AI Safety, Economic Opportunity

Speakers

Sam Altman
Co-Founder & CEO @ OpenAI

Sam Altman is the co-founder and CEO of OpenAI, the AI research and deployment company behind ChatGPT and DALL·E. Sam was president of the early-stage startup accelerator Y Combinator from 2014 to 2019. In 2015, Sam co-founded OpenAI as a nonprofit research lab with the mission to build general-purpose artificial intelligence that benefits all humanity. The company remains governed by the nonprofit and its original charter today.

Adrien Ecoffet
AI Researcher @ OpenAI
Joshua Achiam
Chief Futurist @ OpenAI

Josh Achiam is the Head of Mission Alignment at OpenAI, supporting the organization in defining and evangelizing the mission to ensure that AGI benefits all of humanity. He joined OpenAI in 2017 as a research scientist and has worked on AI safety research and operations, AI impacts research, and educational resources (including Spinning Up in Deep RL). Previously, Josh earned his PhD in Electrical Engineering and Computer Sciences from UC Berkeley and BS degrees in Physics and Aerospace Engineering from the University of Florida.

Chris Nicholson
Member of Global Affairs Staff @ OpenAI

Chris V. Nicholson serves on OpenAI’s Global Affairs team, where he uses data and storytelling to document major AI use cases and support the company’s economic research. He co-founded the deep learning company Skymind (Y Combinator W16), which created the open-source AI framework Eclipse Deeplearning4j. He previously reported for the New York Times and Bloomberg News. Born in Montana, he now lives in the San Francisco Bay Area with his family.


SUMMARY

This Forum conversation centered on OpenAI’s newly released blueprint on superintelligence and why the company believes the public debate needs to begin now. Chris Nicholson moderated a discussion with Sam Altman, Josh Achiam, and Adrien Ecoffet on the accelerating pace of AI progress, the possibility of extremely capable models arriving in the near term, and the need for society, policymakers, and institutions to prepare before those changes are fully felt.

The panel balanced urgency with optimism. Sam highlighted the potential for AI to dramatically accelerate scientific discovery, improve healthcare, unlock new materials and energy breakthroughs, and lower the barrier for people to build startups, software, and other creative projects. At the same time, the speakers stressed that these gains will only be sustainable if society invests in resilience, including stronger cybersecurity, biosecurity defenses, incident reporting systems, and broader institutional readiness.

A major theme was how to ensure AI benefits are shared widely. Josh spoke about the responsibility to build policies and systems that support working people, middle-class families, and people in lower-income countries, while also helping workers participate in decisions about how AI is deployed in the workplace. The conversation also explored ideas such as portable benefits, transition assistance, new tax models, and broader access to compute so that AI does not become concentrated in the hands of a few.

The panel closed on a deeply human note, emphasizing that qualities like compassion, creativity, character, and connection will remain essential even as AI grows more capable. They also pointed to healthcare, education, and caregiving as areas where AI can reduce friction, expand access, and free people to focus more on human-centered work. Overall, the event framed the blueprint as an invitation for broader public debate on how to shape an AI future that is both innovative and equitable.


TRANSCRIPT

[00:00:30] Speaker 1: Good afternoon everyone and welcome to the OpenAI Forum.

[00:00:59] Speaker 1: I'm Chris Nicholson and I'm glad to be here with all of you today. The forum is a place for serious conversation about how AI is being used in the world, what we're learning from that, and how more people can help shape its trajectory. Today's conversation focuses on one of the biggest questions in technology, what it will mean as AI systems grow dramatically more capable and how we should think about their implications for science, work, our life together, and governance. To discuss that, I'm joined by Sam Altman, co-founder and CEO of OpenAI, Josh Achiam, OpenAI's Chief Futurist, and Adrien Ecoffet, a long-time researcher here. So, let's get started.

[00:01:42] Speaker 2: Sam, the blueprint we released this morning talks a lot about superintelligence. A big question on my mind is, why are we doing that now? And what are some things that you can see from the inside that you wish everybody knew?

[00:01:56] Speaker 2: The biggest reason is simply that the rate of progress is continuing to accelerate, and we believe we are very close now. And this won't be a one-time thing. This will be, you know, a ramp over the next few years to powerful models that will impact the world in important ways. The researchers that are working on these models did, I think, an incredible job with this set of ideas, and these are meant to be early ideas to start a discussion. You know, I'm sure we'll get to much better ideas as the world debates all of these and stares down the models that are coming, the pipeline in front of us. And, you know, we may be wrong. We may hit some wall. We are imperfect. But given what we see, we expect to be in a world of extremely capable models quite soon.

[00:02:44] Speaker 2: And then we expect the ramp of capability to continue to increase. I think this will have huge impact on the economy, on the way we live, on what we can do. And one thing I've observed from watching the world go through some number of transitions is that the more time the public, our leaders, and the political system have to debate ideas before a decision really has to be made, the more likely they are to make a good decision. So, given what we see coming, I think it's important to start this now, before it becomes a very large issue of public debate.

[00:03:19] Speaker 1: For sure. And speaking of debate, we brought in a lot of researchers, Adrien, very early in this process. I think it was maybe a unique exercise here in how many researchers were working closely with the folks who also think about policy a lot. What was that like for you and for the research group as a whole?

[00:03:38] Speaker 3: Yeah. You know, I think it was a very interesting experience. It was my first time working actively on a policy doc. It was a little bit humbling in some cases. Sometimes as researchers, we can have these abstract ideas of, oh, we should really be thinking about the economic impact of this or about policy for safety. But, you know, it's one thing to think about this and another thing to actually put pen to paper and write concrete policy ideas that are going to be debated by your peer researchers. So that was an interesting experience for me.

[00:04:18] Speaker 3: I hope it was very helpful for the final product that we had the researchers involved so early and so deeply, especially since it's so forward-looking. To Sam's point, you need people who are dealing with this technology every day, who know how to build it and how the safety stack works, and who are also seeing the speed of progress. And one thing that I remember from the past few months that we were working on it is that a lot of researchers went through a transition.

[00:04:58] Speaker 1: They went through a transition from writing most of their own code to having AI write most of their code. I think that, to some extent, led them to bring a lot of urgency, a sense that this technology is real and important and moving fast in a way that maybe not everyone can see. And that's part of your earlier question about why now: because we are seeing this urgency. Can I tell a quick story that this reminded me of? There was a night, I guess it would have been late January or early February of 2020, when the OpenAI researchers kind of got obsessed with COVID before the rest of the world did. We were talking about it all the time, watching the numbers every day, and we were like, this is gonna happen. We were making plans to go work from home, and there was some article that came out mocking us, about these crazy people at OpenAI.

We had put some copper or something on some of the door handles, and some journalist wrote about this, and there were all these things going on. We kind of made our plans, and we assumed a shutdown was going to happen. We were like, there's this thing coming, and the world's not paying attention. For whatever reason, something about working on exponentials makes you understand these things better, so I think we were a group of people that were primed for it. Then there was this one night, a very cold night. I lived in the Mission at the time, and I said, I'm about to get locked in my house for a while, so I'm going to go for a walk one more time, because who knows what's going to happen.

I went on this long walk through the city, hours, on a cold night, and I was watching people breathing in each other's faces in restaurants and bars through the windows. I was wearing my mask, looking crazy. There was one other dude out wearing a mask, and we nodded at each other. But other than that, life felt totally normal. I have not felt that so acutely as I do again in this moment, where there is this crazy change. The change has already happened, like the models have already hit some level, but society has not digested them yet. We feel like we see it clearly. We're trying to tell the world that it's gonna happen, and it is hard to get this across, but it feels like walking through the streets at the very beginning of COVID again.

[00:07:25] Speaker 2: That's really interesting, and you've talked a bit about the upsides to unlock, and a lot about science as a means to do that. So, if we're going to contrast this with COVID, what are some of the hugely positive things in the pipeline for us?

[00:07:40] Speaker 1: I will answer that. I think the positives are so positive. We should talk more about all the things that we're thinking about here. You mentioned science. If we can really go make a decade's worth of scientific progress in a year, if we can go cure a ton of diseases, if we can come up with personalized medicine for people, find new materials to sort of make cheap, safe energy. If we can make it such that anybody who can come up with an idea for a startup can have the AI implement it, write the piece of software they want, have a custom video game that's the most fun for them to play, this stuff is all wonderful.

Now, part of the reason for putting this out with urgency is that there are a lot of things coming that we'll need to mitigate. I assume we'll figure it out. I'm always an optimistic person by nature, but also, the benefits of this technology so change the options in front of us and what we can do. My first reaction to reading the blueprint after the researchers wrote it was that it is awesome, but also a little crazy that we can credibly be talking about these kinds of things. If the AI is as good as we think and all these wonderful things happen, we also have an incredible new tool at our disposal to help us mitigate potential downsides. That's a tool for everyone.

[00:09:07] Speaker 2: Josh, I think we've talked about this as social infrastructure that changes the way people work and learn and participate in society. What responsibilities do we have, and do institutions have, to help society get ready for that, and how do you think about it?

[00:09:20] Speaker 1: One of the things that I think about here, this does kind of come back to the broad benefits for everyone thing. For a long time in society, we've had this great aspiration that everyone could have food, shelter, electricity, healthcare, and we've always wanted to see there be some kind of new step in the direction of providing these sorts of things for everyone so that everyone can focus on the things that are most important in their lives.

We've been told over and over again, you know, actually we can't have that, it's too expensive, there's no way to pay for it.

[00:09:56] Speaker 1: It's nice, but it would be too burdensome for society to do all of that. And I think what AI and superintelligence will unlock is the freedom to do all of it at a much lower cost than has ever been possible. And correspondingly, for those of us who are the stewards of the technology, we have a special responsibility to make sure that we actually do fully realize those benefits and be a part of building policies and systems that help working people, middle-class people, and people in low-income countries, to really make sure that it benefits everyone and not just the very wealthy. So those are the things that I think of as key responsibilities here, and I'm very excited about it. At the same time, the downside risks are also quite serious, and there are systemic shocks that could happen. We have to prepare. We have to be thoughtful. We have to be honest about them.

[00:10:49] Speaker 2: Yeah. So that sounds like a resilience question, and when I look at the document and speak with you folks, it sounds like you think of resilience in layers, right? And a lot of it is not just before the AI goes out; it's also after the AI goes out, in terms of our responses to AI and how people are prepared for it. I'd actually like to ask each of you how you think about resilience. Adrien, why don't we start with you?

[00:11:15] Speaker 3: Yeah. I think maybe one distinction that I would draw is this. Classically, we've thought about safety as making sure that we run safety evals, that we red-team models, that we implement mitigations, and to your point about resilience as layers, this is a very important layer that we should keep doing and keep expanding. But at the same time, you want society to be prepared for the possibility that maybe there will be some actors who do less safety testing, and what happens then? How is society resilient to risk from AIs in those cases? Maybe there will be incidents in spite of our safety testing, or near misses. In the blueprint, we talk, for instance, about incident reporting that's modeled a little bit after how the aviation industry does things: whenever there's a near miss or any incident, however minor, it gets reported to a database, so that all the companies can know, okay, this is a risk, and perhaps here are mitigations that you could implement. So there's a lot that could happen at a society-wide level. Another thing would be defending against risks. We're talking about models that can code a lot better now. That also implies cyber capabilities: could they help bad actors run cyber attacks? And I think part of resilience is making our software systems more secure and using AI to do that. So again, as you said, there are all these layers. That's how I take the resilience question.

[00:13:17] Speaker 2: Sam, before the show, we were talking about how prosperity can be emergent when everybody has access to AI. It also strikes me that resilience can be emergent. How do you think about it?

[00:13:26] Speaker 4: Yeah, I think our original AI safety thinking, and the field in general, what I'll call classical AI safety thinking, was that there are going to be a very tiny number of AIs in the world. The only thing that matters is making sure those AIs do the right thing. And as long as you align them and they don't do unsafe things, the world will be okay. I think the picture now is actually more stable, but more complex. There are going to be many AIs in the world, and it will not be enough to just say that this one company is going to make sure the AI never does something it shouldn't do. There will need to be an emergent response across society. You know, Adrien talked a little bit about some of this, but if we just take a few examples of the threats that we expect to be coming: cybersecurity is definitely going to become a huge issue. AI will be incredibly good at finding vulnerabilities in software, and I think the world will find that its software is much more brittle and much less secure than we thought. Humans just have limited capacity to find the exploits. It is not enough to say that one or two or three model providers are just going to make sure that their systems won't do this, because the same skill that makes code useful, being good at writing code, can also help find security problems. And even if all of us somehow could prevent our models from ever being used for this, there will be open-source models coming soon.

[00:14:54] Speaker 1: Those open-source models will be good at code and thus good at security exploits. So what has to happen is the world has to use these models, and there can be differential access; you can give it to good, known, trusted defenders first. We have a program for that; other companies will do similar things. And you have to empower the companies that defend software, because there will be some power plant whose software no one has understood for 20 years, no one can patch it, and there's a big problem with it. You'll have to do something about that. But a resilience approach here says: okay, there's this new thing in the world, there's AI that is really good at exploiting computer systems, so let's use AI to defend against it. And that is not a one-company thing; that's gonna require a huge effort. If we go a little bit further, I think there will be a bio version of this, where classically people have said, well, we're just gonna restrict our models from being able to develop pathogens. Someone at some point is gonna use some model to develop a pathogen, and the world needs defense shields against that: detection systems, rapid response treatments, a whole bunch of other things. And this doesn't get us off the hook in any way on aligning our systems and building safe systems; we still have to do that. We get a time advantage there as long as we stay at the frontier, but we do need the world to do its thing, society to have this emergent magic and build these layers of defensive shields. There are many threats besides those two, but that's enough given we have short time.

[00:16:22] Speaker 2: For sure. Josh, you have shared some interesting thoughts with me about how each huge technological shift has produced new institutions and new democratic mechanisms. And you've been thinking a lot about new institutions that might emerge now. What are your most exciting ideas, and what would you really like to think about in terms of the collective response to superintelligence?

[00:16:50] Speaker 3: Yeah, certainly. So one thing I'll say first, on the subject of resilience, just as grounding: a lot of the problems that we're concerned about AI creating new externalities for are problems and vulnerabilities that exist in the world regardless of whether AI is present, and AI just increases the urgency of action. But coming back to COVID, we found back then that everyone had a much deeper dependency on the functioning of supply chains than most people were previously conscious of. Supply chains are super important: supply chains for food, for goods, for everything that sustains civilization. And the same goes for other types of vulnerabilities, like democracy. We've been debating for years the real risks of people influencing society inappropriately. And we're worried that AI is gonna make these things potentially easier to attack in the near term, because there will be tools at people's disposal that they haven't had before. I'm excited about the possibility that we can build new institutions and state capacities to use AI to rectify some of these vulnerabilities. We can systematically identify them with AI in ways we couldn't in the past; we can systematically close them in ways we couldn't in the past. And we can use AI to scale up efforts to combat certain types of issues in ways where we can potentially make it too expensive for attackers to really do something. So, you know, on cyber and bio, I am optimistic that maybe we can build an ecosystem of defenders that altogether can make it so expensive for attackers to try to run a cyber attack that there just won't be that much of an incentive to do it. And if we can fully implement the bioresilience side of things, not just for pathogens that might impact humans, but especially for the food supply chain (this is a hobby horse of mine, and I'm gonna talk about it every chance I get; I think people underattend to food-supply-chain biorisks), we can use AI to make that resilient at scale in a way that today is cost prohibitive. So I'm very excited about the things we can build there.

[00:19:05] Speaker 2: Yeah, neat. One more question. Let's talk about the individual transition for many people: what work looks like, where value shifts to. I know you've been thinking about that a lot. What do you see happening as folks transition to other ways of creating value in an AI economy?

[00:19:20] Speaker 3: Oh, oh goodness, that's very broad. I think people will have a lot more opportunity to exercise agency. If you can start a new business and have a team of AIs that handle all of the functions of putting the business together that you have no expertise in yourself, you can get something off the ground an awful lot easier. And so there are a lot of ways that the economy just fundamentally changes when you give people access and tools to help them do more things than they could have before.

[00:19:40] Speaker 2: Yeah, for sure. Sam, similar question. You have so much experience with startups, running them, managing Y Combinator, and just being in the ecosystem for so long. How do you see startups changing, and our potential to realize new ideas changing, with AI?

[00:19:52] Speaker 1: I'm obsessed with trying to explore this space. I don't know exactly what it's going to look like, but this idea of one person or a very small team being able to create an entire startup, as Josh was talking about, pretty quickly, all my instincts say there's something deep and important to figure out here. Every time in our industry that the friction, the costs, whatever you want to call it, of starting a startup has come down a lot, amazing new things have happened. I remember one of these transitions, which I was doing a startup during: when AWS came out, all of a sudden there was this idea of a cloud, and a small startup didn't have to go do all the crazy things you used to have to do, like managing racks in a closet. And that was an amazing change to what you could do as a small startup with a few people. This one that's coming is much bigger, and there have been several in between, but I really want to find out what it looks like when a startup is two or three people and a ton of GPUs, and you can really democratize who can start a startup.

[00:21:09] Speaker 2: Yeah, yeah. It's that democratization aspect, and it gets down to the widespread availability of AI. What's the best frame for thinking about bringing more people in and democratizing AI access?

[00:21:27] Speaker 1: I think when people talk about democratization of AI, they mean two different things. One is shared access: making sure that everybody gets to use sufficient AI to improve their own lives, build things for other people, all that. And the other is a sort of voice in where it's all going to go. I think both are very important. Part of why we do things like this blueprint, and part of the reason we actually released products in the first place, is so that society debates these issues; otherwise people don't have a feel for this. People talking about this is kind of a prerequisite to people being able to have input. But it's not enough. You also have to have a way to listen, so that people's input is captured back into the system. So that's one thing that I think is really important. And the other is that we need to put not just services like ChatGPT, but the real deal, high-compute, valuable services, where people can start a startup or make a scientific discovery or whatever, broadly into many people's hands.

[00:22:26] Speaker 2: Yeah. And that's going to take new economic models, or much cheaper inference, to reach more people.

[00:22:35] Speaker 1: Right. And that's an infrastructure problem, among many other problems. You know, we used to talk for years at OpenAI about when we were going to get through the compute crunch, when we were going to build enough compute that we wouldn't be so strapped. I don't think we ever get out of it. If we do our job, if we keep driving the cost of intelligence down and the capability of intelligence up, then effectively the demand is uncapped. And in the worlds where we don't build enough infrastructure, I think you get a crazy concentration of power and concentration of compute, because people will just bid the price up and up and up. So the only thing that I really believe in as a long-term democratization strategy is to make so much infrastructure available, and make the models so good and so capable, that we get to a world where people say, I need help coming up with ideas for how to use all this compute. I don't think we will in practice. But I certainly think that if compute is very limited, the richest people and the richest companies in the world will just bid up the price to an extreme degree. It'll be another kind of scarcity that gets monopolized. Whereas building more data centers is actually a very egalitarian initiative, in the sense that it can make AI access more widespread.

[00:24:04] Speaker 2: I would look at many examples throughout history. One of the best things that we ever did for really increasing people's quality of life was to drive the price of electricity way down, the price of energy broadly way down. Energy correlates incredibly well with quality of life, or at least it has for a long time. Maybe now it'll be more about AI. And by making energy abundant and shockingly cheap relative to what it cost 50, 100, 200 years ago, we have done quite a bit to lift the entire world up.

[00:24:28] Speaker 1: I think we should do the same thing with AI. And I think that means you need a lot of it. And just like with energy, you need to innovate new ways to make it much more affordable.

[00:24:36] Speaker 2: Can I give a thought on this whole issue as well? We've talked a little bit about broad access to AI, which I think is very important and can create a lot of new products that will be very helpful for the world, very useful for people. One thing that we tried to think about as well in writing this blueprint is ensuring that ordinary people, who maybe aren't going to start a startup, aren't left behind by the technology. I think this is related to something that we might talk about, which is how AI changes the composition of the economy. Does it move it more towards capital or towards labor, the labor that is done by AI, really? And one of the things that we talked about in the blueprint is how you modernize the tax base for an economy like this, and how you distribute the prosperity that will be created by this technology. How do you make sure that this is prosperity for everyone, and not wealth for a relatively few people? This is something that we try to address in the blueprint, but I'm curious if we can talk about it here as well.

[00:26:03] Speaker 2: We totally should. A couple of the big ideas that I saw were about workers co-authoring how AI is deployed in their workplaces. I'd like to ask you about that, Josh, and I'd also like to ask all of you, more broadly, how society can capture the upside and what institutional forms that will take. But let's start with you, Josh.

[00:26:24] Speaker 3: Sure. On this compute question, for one second: I want to say the compute allocation problem, figuring out what things we use compute to help people do with AI, will probably be one of the most important society-wide questions to navigate over the next few years while compute is relatively scarce. And we should try to boost the amount of compute in the world as much and as quickly as we possibly can, so we don't have to face painful trade-offs where there's some extraordinary good we could provide to everyone, and we're stuck with someone asking, well, how are you going to pay for that, because the cost of compute is so high.

[00:27:15] Speaker 3: On getting workers involved in AI, I actually kind of want to back up and acknowledge an elephant in the room, which is that a lot of workers are concerned about AI; they're worried about what AI means for them. They are not immediately excited at the prospect of figuring out, all right, how are we going to use AI in our workplace? They're thinking, oh my gosh, is the AI going to replace me? What I think is the important first step here is that those of us who are working on AI, and who are kind of the stewards for this, have to be putting out things like this blueprint, where we say, here's how we're going to advocate for policies to make sure that the economy is fair and that you are supported no matter what. And then, given that we've built that safety net, we can talk together and have a good conversation with some confidence. We should figure out how to empower unions to make wise choices about where and how to use AI, and how to empower workers to participate in conversations about the acceptable use of AI in the workplace. A lot of folks are rightly concerned about surveillance in the workplace, so making sure that workers are part of the decisions about those kinds of things feels very important. And we need a big push on AI literacy, to ensure that folks get the tools they need to use AI to make their lives better, to start small businesses, to do all the kinds of things that will help them realize the potential.

[00:28:25] Speaker 1: Sure. What are some institutions that you've considered that allow everyone to capture some of the upside, Josh?

[00:28:30] Speaker 3: So, let's see. For new kinds of institutions: I think we need the capacity to measure pieces of the economy at a much more granular level, so that there can be responses to economic shifts as they take place. I think AI is actually a very exciting tool for making that more scalable and less expensive than it would have been in the past. I also think that there can be more institutions that sit in between corporations and governments, which have very, very different levels of accountability governance-wise. Maybe there's a need for something with a more in-between level of accountability that can provide social-safety-net-type services, something between a corporate board and the regulator.

[00:29:16] Yeah, yeah, yeah. Like there's, um, like this can't all just happen in private companies that have very minimal governance. And we also don't expect that government, which moves fairly slowly, is going to do all of it quite quickly. And so we sort of maybe need some in-between institutions that can help us prototype things. Um, this is like a little bit spitballing, uh, and I don't think this is quite covered in the blueprint, but, uh, you know, you asked. And so that's, I think, an innovative new institution that I think we can do.

[00:29:37] Speaker 2: Cool. Um, Sam, institutionally, how are you envisioning that people might broadly participate in the upside? I think we talked earlier about...

[00:29:48] Speaker 1: We talked earlier about very broad access and giving a lot of people a lot of compute. You hear these ideas like universal basic compute or other things with nice branding, but really what they mean is: instead of the traditional thinking of we're gonna give people a monthly stipend or money or whatever when AI does all the jobs, I think it's way better to say, actually, people are pretty good about knowing what they need and pretty creative about how to use things. But if people are boxed out of access to this resource, that will be a challenge. I do suspect that we're gonna have to make changes to how we tax in a world where AI is doing most of the intellectual work, or at least most of the work of today. We probably are going to need to explore some way to tax that instead of taxing human income in the traditional sense. I suspect that we will need to provide new kinds of transition assistance, unemployment support, and things like that. I suspect that eventually we will need to think about how people get to be an owner in the upside of all this in new ways. Capitalism is dependent on a certain balance between labor and capital, and if that gets totally out of whack, then the current system is not going to work. There will have to be some sort of evolution. What that is, I think, is a very open question, and, again, part of the goal here is to throw out some ideas, but there are many more. I will always leave some room and say maybe we're wrong, and maybe no change at all is required, and somehow this just works differently than we think. But, again, in a spirit of trying to use the time we have to think and debate, this seems like a good time to start sharing ideas here.

[00:31:31] Speaker 2: Can I say something on this maybe we're wrong aspect? I think one thing that in the blueprint, we have proposals around modernizing the tech space and we talk about maybe even a 32-hour work week and these types of things. I think there's something important here about trying to create counter-cyclical measures where conditional on disruption from AI, we have additional unemployment insurance, we have these measures like the 32-hour work week. It's, I think, fairly important to me that some of these measures, maybe some of these measures are good in the current world, but I think many of them would be quite disruptive and we're really talking about a world that changes a lot. To me, institutionally, something that we need is kind of some thinking about what are potential disruptions that could occur from AI and what are things that we can implement as we see those coming to kind of counter the disruptions and distribute benefits broadly at that time.

[00:32:49] Speaker 1: Yeah, and so one of the ideas in the report was portable benefits, since benefits are so linked to employers in America now.

[00:32:55] Speaker 2: Correct, yeah, yeah.

[00:32:58] Speaker 1: And this was, I think, an example of one of the more US-focused proposals, of course, but I think it's a great idea. I think it's insane we don't already have that; the way the US benefits system has evolved is really bad. No one should lose their healthcare if they lose their job.

[00:33:17] Speaker 2: Like, that just shouldn't happen.

[00:33:19] Speaker 3: Mm-hmm, I agree. And from inside the research organization, you're seeing this acceleration, it's real for you. You can see some of the scientific kind of progress that's being made with these models. We want institutions, we want society to keep pace with technology. What do you think the window is for adaptation?

[00:33:41] Speaker 1: So, you know, it's... Well, I was going to say it's hard to put a number on it. I would say that there's a lot of uncertainty about these numbers. We've talked, I think, about having an automated researcher in 2028, early 2028.

[00:33:57] Speaker 2: And I want...

[00:33:58] Speaker 1: March of 2028 is the official goal?

[00:34:00] Speaker 2: What's that?

[00:34:01] Speaker 1: March of 2028 is the official goal. Thank you. And I think one useful thing to think about here is, once you have this automated researcher, an AI capable of doing AI research itself, you potentially have kind of like a double whammy of disruption, I would say. Like, first of all, you have a model that is capable of advanced cognitive work, clearly, which AI research is, and so that in itself is disruptive. But it might accelerate further AI progress. And so, you know, I can't tell you exactly how much progress will have been made, like, a year from then, but it's probably, you know, faster than the pace of progress we've been making so far. And so, this is kind of the type of window that we're talking about.


[00:34:47] Speaker 1: Yeah, thank you.

[00:34:48] Speaker 1: OK, so we are at the point in the conversation where we open up questions. It's not just me anymore. It's members of the community. So I've got one here. It's from Svetlana Romanova.

[00:35:01] Speaker 1: As AI becomes more capable, which human qualities do you think will matter most in the future? I'd like to ask each of you one by one. Josh.

[00:35:10] Speaker 2: You know, I think there are human qualities that are timeless and that will always matter. Character, commitment, effort, compassion. I think they're going to matter a lot. All of those, creativity, understanding what other people want. But I will share a recent anecdote that really struck home for me that things don't always go the way you expect. I went to my first robot cafe. Like, I was so excited to try it. I thought I was going to love it. And it was the most underwhelming experience. I thought I was someone who did not need a barista at Starbucks to smile at me and say hi and ask how my day was going. I really thought I didn't care about that. Turns out that I really want that. Walking in, pushing on the screen, and having the robot do the thing and give you a delicious cup of coffee was a deeply unfulfilling, I-don't-want-this experience. And I thought I wouldn't care.

[00:36:01] Speaker 3: I totally agree. Just those small interactions throughout the day, I really appreciate them too.

[00:36:06] Speaker 1: OK, here's one. How about you, Adrien?

[00:36:09] Speaker 3: That's fine. Well, I mean, I think what I was going to say is a little bit along the lines of what Sam said. And I believe it's in the blueprint. Humans need each other. I think we care about other humans. In fact, to some extent, one of the scariest things about the development of technology, social media, video games, and stuff like that is that maybe it has gotten us to lose a little bit of the connection that we used to have. But I still think that it's something that matters tremendously to people, and that they will recognize this more and more, in fact, as AI becomes more advanced and can do some of these other tasks that don't require a human connection. And so that's one big thing that I believe will expand. It seems like a good thing to me that this can be, of course, a type of work: nursing and all the kinds of work in the care economy, teaching. But it's also just important to our lives. And I think that's, to me, going to be the big quality, like how good you are as a person to other people.

[00:37:24] Speaker 1: Yes, I agree. So in America, things like health care, child care, and elder care are very expensive. Here's a question: How can AI expand access to those for everyday people? Josh?

[00:37:39] Speaker 2: One thing I am so excited about is the way that AI can help provide the best health care in the world to everybody. I've heard a lot of stories from folks navigating the health care system that they didn't know where to go, how to navigate the insurance, what specialists to talk to. You hear stories of people bouncing around for a diagnosis for years and years and not getting anywhere. You hear about folks who are stuck in a health care system, or even if you have the most caring, compassionate nurses and doctors, there's not enough time to give everyone the best quality care. I think that AI is going to make it possible to deliver the best quality care at scale. I don't think it's going to replace doctors. I think it's going to make their workloads manageable. I think it's going to make it possible for patients to get the best experience possible.

[00:38:25] Speaker 3: Yeah, I agree with that. A lot of patient empowerment out there.

[00:38:27] Speaker 4: We have all told stories many times of things we've seen on social media about people having these amazing health care experiences. I don't have anything that amazing, but I'll tell my own, because it happened to me. I recently got a blood test. There was nothing seriously wrong, but a few markers were just out of their range. You scan down; there are like a hundred things, and you just look at the flagged ones that are a little bit out of range. I asked my doctor. He was like, probably those are all close enough; it's fine. I put it in ChatGPT. It's like, yeah, you're fine, but here's what's going on: take this one supplement, get your blood test again in a month, and you should be OK. And it worked. Again, I wasn't that sick. But the fact that I could just upload my blood test and instantly get the right answer for a kind of complex thing was an amazing experience. I think there will be many things like that across health care, education, and all these other areas. Elder care, that's one where I think we do want people doing it. But there will be a lot of ways we can really drive down the cost of health care and education and things like that.

[00:39:32] Speaker 3: For sure. And even personalized learning. You can still be humans, but you can figure out how to teach individuals, right?

[00:39:39] Speaker 1: How do you think about it?

[00:39:40] Speaker 2: Yeah, I mean, I would say similar things, of course. I mean, another basic thing in terms of...

[00:39:44] Speaker 1: So I think another basic thing in terms of healthcare, one that I hope for at least, is simply that AI will be able to help medical research, and that's like a basic thing that would make healthcare better for people. But also, I just talked about the care economy, and to the extent that AI can take over some of the more bureaucratic, frankly, aspects of healthcare and help us have more people actually providing the actual healthcare to individuals, that seems like a positive thing. Like, it's not that AI would be doing this work itself; it would be freeing up people to actually do it.

Yeah, we've talked a lot about the capability overhang this year, how AI is now capable of so many things that most folks are not leaning into. And I sometimes suspect that it's because they're resigned to the world as it is, right? I think PG calls it schlep blindness, right? They have ceased to consider that something better is possible. So I think they're gonna start leaning in more, right? And I think when they lean in, their behavior is actually what will change society as much as any technological breakthrough; it's that moment of adoption. In your lives, what are the moments where you've seen somebody kind of light up and realize, oh, I can do this? I don't have to live resigned anymore to the old ways?

Wait, I've taken a lot of the questions first. Adrien, do you want to take a crack at it? Now you have time to think about it.

[00:41:27] Speaker 2: Oh my God. Wow, wow. Yeah. Putting you on the spot. Yeah, maybe I do want to think about this. To your point, I think it's definitely the case that people take a while to adapt to new technologies. I think to some extent, the capability overhang might be large in terms of capabilities versus how much people are actually using them. I think there's an extent to which that's a function of how fast the capabilities are improving versus how fast people are used to things getting better.

I have a favorite to mention. There are many options here, but my favorite of all is watching parents who are coders by training watch their kids use Codex for the first time: a kid who has a bunch of ideas and no idea what the traditional limits are, what would be hard or easy, just starts describing the video game and having Codex make it. And the kind of creative journey the kid goes through. Often you see the kid doing this mostly by voice, and the parent is just like, that's not gonna work, and then it works, and then they're like, wow, my kid is gonna grow up in a world where he or she just expects this to happen. And I kind of still wouldn't have even thought to try that, because I've been so certain it wouldn't work. So watching the kid do it for the first time through the parent's eyes, especially if the parent is a software engineer by training, is awesome.

This is a tale as old as time, right? The kids always know how to use the VCR in the 80s or whatever, and yeah. Interesting. That's funny.

[00:43:16] Speaker 1: Just on the capability overhang and how fast people notice when a capability has arrived, there's an interesting time scale mismatch where people who aren't super in the know on AI have this like, distant awareness that something is happening. They know there's a product out there. Once every few months, they might check it out. They don't immediately and instinctively probe it to the maximal extent of its capabilities. And they often don't put it on the thinking setting. Like, they don't know that reasoning models have happened. They stay on the default chat model that's right out there.

And so they wind up with this misperception that things aren't moving as fast as they are. And you hear people talk about, well, there's hallucinations, it's slop, it's making mistakes, it's inaccurate. Why would they ever believe us that it's gonna do these great things? And this, like, visceral belief gap is, I think, an issue that will get overcome when they start to see other folks and institutions very successfully use AI at sort of the maximal reasoning settings, at the most capable settings, in ways that are shocking to them. Like this video game example, but sort of at scale for society as a whole. They're gonna see a lot of people get diagnoses that they wouldn't have expected someone could get quickly. And it's gonna update them. And I think that's just an interesting phenomenon: the timescale for AI progress is weeks and months.

And the timescale at which people currently check back in is like every half year or something. And yeah, some big change will happen when people realize the maximal extent of today's capabilities.

[00:44:40] Speaker 2: Yeah. I agree.

[00:44:42] Speaker 1: I think the point you brought up, Sam, about the kids really illustrates for me what an advantage creative people have. Like, there are some folks who are just a font of ideas. Many of those ideas have never been realized. Some are scientists, some are artists, many are children. It feels like the floodgates are opening for them to realize more things. I think you mentioned that you would actually burn through your Codex list and you're not having it run all night anymore. We just need to make a model that helps you come up with new ideas. I actually think this will be one of the most exciting things to do. I don't think we're that far away from a model where you can say, go look through all my text messages, all my email, look at my entire computer, anything you can find about me, and just suggest ideas that I've gestured at as fragments or that might be interesting to me. And then I'll build those.

[00:45:34] Speaker 2: Yeah, yeah, agreed. A thought partner. I see a lot of people using it like that.

[00:45:37] Speaker 1: Well, I think we're wrapping up here. So thank you all for joining us today. And thank you, Sam, Josh, Adrien. This has been pretty cool. It's been a lot of fun.

[00:45:48] Speaker 3: Thank you.

[00:45:49] Speaker 4: Yeah.

[00:45:50] Speaker 1: So the ideas discussed may sound ambitious to you, and they're meant to be. But we know that they're also early and exploratory. We're offering them not as a final plan, but as a starting point in a very public conversation with policymakers and everyone in society, to encourage more discussion, research, and debate. OpenAI wants this conversation to continue. We're inviting feedback through a new email address: newindustrialpolicy.com. Please send us your best ideas. We're launching a pilot program of fellowships and focused research grants for up to $100,000 in funding and up to a million dollars in API credits. We're convening further discussions at the new OpenAI workshop opening in Washington, D.C., in May. So thank you again for being part of this forum. We appreciate the time and thought that everyone is bringing to this conversation, and we look forward to continuing it.
