
Event Replay: OpenAI's Chief Futurist on AGI and What's Next

Posted Feb 26, 2026
# OpenAI Leadership
# OpenAI Team
# Responsible AI

Speakers

Joshua Achiam
Chief Futurist @ OpenAI

Josh Achiam is the Head of Mission Alignment at OpenAI, supporting the organization in defining and evangelizing the mission to ensure that AGI benefits all of humanity. He joined OpenAI in 2017 as a research scientist and has worked on AI safety research and operations, AI impacts research, and educational resources (including Spinning Up in Deep RL). Previously, Josh earned his PhD in Electrical Engineering and Computer Sciences from UC Berkeley and BS degrees in Physics and Aerospace Engineering from the University of Florida.

Chris Nicholson
Member of Global Affairs Staff @ OpenAI

Chris V. Nicholson serves on OpenAI’s Global Affairs team, where he uses data and storytelling to document major AI use cases and support the company’s economic research. He co-founded the deep learning company Skymind (Y Combinator W16), which created the open-source AI framework Eclipse Deeplearning4j. He previously reported for the New York Times and Bloomberg News. Born in Montana, he now lives in the San Francisco Bay Area with his family.


SUMMARY

In this OpenAI Forum event, Chief Futurist Josh Achiam discussed how his role focuses on helping society prepare for the transformative global impacts of AI while ensuring AGI ultimately benefits all of humanity. Josh emphasized that AI safety is not only a technical challenge but a sociotechnical one that requires democratic input to define acceptable risk and guide how systems are deployed across society. He framed AI as a general-purpose technology comparable to the agricultural, industrial, and internet revolutions—but advancing at unprecedented speed—with the potential to automate economically valuable work and meet global material needs if governed responsibly. Josh highlighted how scaling compute and inference capabilities may reshape geopolitical competition, creating national incentives to build AI infrastructure in order to maintain economic and strategic advantage. This dynamic reinforces the importance of building resilient, democratic AI ecosystems and investing in compute capacity as a foundation of national power and global stability. He also underscored that broad access to AI tools and literacy is critical for economic resilience, enabling workers, firms, and communities to adapt to rapid technological change. Ultimately, Josh positioned the democratization of AI access as essential for ensuring productivity gains, scientific advancement, and social benefits are widely shared rather than concentrated.


TRANSCRIPT

[00:00:00] Speaker 1: Hello everyone, and welcome. I'm Chris Nicholson from OpenAI's Global Affairs team, and I'm here with my colleague Josh Achiam. This is his first appearance as OpenAI's Chief Futurist. So today, Josh will share more about himself, his role, his mission, which is helping the world prepare for a future shaped by AI by thinking seriously about the benefits, which we see now, the risks, and the choices that can make the future better. Over to you, Josh.

[00:00:42] Speaker 2: Thanks so much for the intro, Chris. So okay, actually before we get started I just want to give a quick shout out. Hi Mom. Hi Jenny. I know you're watching. Thank you so much. I love my family and doing these things. Okay. So hi, I'm Josh. I've been at OpenAI for a long time. I want to tell you a little bit about my background and kind of how I got here. So early in my career I kind of aspired to get to faster-than-light space travel. I wanted to see humanity do the awesome space exploration thing. So I went to undergrad for Physics and Aerospace Engineering. And then when the time came to figure out where to go to grad school and, you know, what to really focus on, I realized that the kinds of problems I wanted to solve might be much better addressed if we had something like artificial intelligence to help us crack through them.

[00:01:35] So you know, I thought really hard about whether or not I wanted to go to grad school for five years and just be in a room doing math nonstop that whole time. And don't get me wrong, that sounded amazing. But I thought actually if I did something a little bit more practical that had broader applications and would help solve the problem I really cared about solving, that was exciting to me. I got very lucky. This was around 2013 that I was applying to grad school and I decided that I wanted to work on AI. I got to go to UC Berkeley. This was right around the time that a super big revolution in AI research was kicking off. That was when people started to notice that deep learning kind of worked as a technology. I began to focus my research on that and on getting AI systems to learn from trial and error in a subfield called reinforcement learning.

[00:02:41] When you spend a lot of time thinking about what AI can potentially do and when you believe that getting to very general artificial intelligence is possible in the near term, you kind of can't help but notice that the consequences and risks from that would be fairly profound. The opportunity is enormous, but the risks are quite substantial, too. I spent progressively more time thinking about and doing research on AI safety. I started at OpenAI in mid-2017 as an intern and then became a full-time research scientist. I put a lot of time and effort into thinking about what is going to happen in the long term with this technology, how do we make sure it goes well, how do we build it safely, and I was focused initially on the technical side of safety.

[00:03:31] Gradually over time, I realized actually some of these questions about what is or isn't safe come not from having an algorithmic implementation of a safety method, but rather from negotiation with society as a whole about what counts as acceptable risk. Sometimes we talk about sociotechnical systems in the context of safety, that it's not just the technical thing that you have to make safe. It's the entire social system into which the technology is embedded that has to ultimately have good and acceptable outcomes for everyone who's a part of it. And in order to figure out what kinds of outcomes are acceptable, you have to get people talking to each other saying, this is what I feel good about, this is what I feel bad about, gradually translating those things into specifications that everyone agrees on. And then you operationalize that into technology.

[00:04:02] So I kind of graduated over the years of working on technical safety research into thinking about this bigger picture stuff. And for the last year, I served as Head of Mission Alignment at OpenAI, because I was thinking a lot about the ways in which the technology enables us to fulfill the mission, which is to ensure that AGI benefits all of humanity, and the types of things we have to think very deeply about. I focused a lot on what might happen in international security as a result of AI, what the opportunities are for philanthropies to use AI better to help people, and what novel classes of risks might go beyond present-day safety frameworks. And through all of it, one of the biggest through lines has been just this sense that there's so much we could do with this technology to help people.

[00:04:58] Speaker 2: Once you start to see that it's possible to get to a world where everyone's basic material needs are met, because automation enables you to provide all of the goods and services that people depend on at low to no cost, you kind of can't help but think, wow, that's actually really worth it. That's like the most important thing in the world to shoot for. We have to make sure that this goes well in the sense that we don't realize the risks. We also have to figure out how we fully realize these benefits. I also spent a lot of time just talking to people who weren't in AI at all, and that, to me, is where you really get the most profound feeling about what matters. You hear from folks who are thinking about how they live good lives. How do they spend time with their families? How do they navigate the pressures and confusions and stresses of a modern economy that requires them to work around the clock to make ends meet? Figuring out how to get AI to serve everyone, to serve people, so that they don't have those types of stresses feels of paramount importance to me. So coming into the Chief Futurist role, this is kind of a novel and experimental role in the same way that a lot of things at OpenAI are novel and experimental. But the main idea is that I'm going to work on figuring out how to comprehensively think through AI's impacts on the world. Try to be an open and honest communicator with the public about the things that can go wrong, but also the things that can go right. Think through what will hopefully be a fairly compelling vision for what we could get to and how we can get there. Engage deeply with a lot of subject matter experts along the way to make that vision increasingly realistic and realizable. I'm really quite hopeful and optimistic about this technology. I also try to be clear-eyed about it.
And so that's the kind of thing I'd like to talk with all of you about over time. And Chris, I'm really excited to chat with you about this stuff in more depth.

[00:06:57] Speaker 1: Cool. You have already answered parts of my first question. You've been working on the mission for a long time. So I'd like to know: what does a good week feel like? What's a solid, substantial week where you think, wow, I was deep in it and this was meaningful?

[00:07:13] Speaker 2: Yeah, that's a great question. My work involves a lot of different things. So part of it is systematic thinking and reading a lot and getting information from a lot of sources. I don't think you can have a good view on what AI might do in the world if you're not really trying to learn about and engage with many different fields. There's a breakdown that I like to think about, where I split things up in four ways. AI will have these big social impacts: it's going to reshape the social graph, how we interact with each other, and how we think about ourselves and our own inner thoughts, because we're now talking with AIs that help us expand on them and give us new ideas. It's going to have consequences for the broader economy, so certainly we expect it to change the nature of work. It's going to have consequences for science and technology. A big thing that folks in AI are thinking about and actively working on is how we get AI to contribute to scientific research and frontier research and development across many different science and technology fields, and we see the early signs that this is successful. I think you've written about this, actually: AI today is capable of contributing to solving Erdős problems, famous long-standing open problems in math that were considered interesting. AI is able to design novel experimental protocols in wet labs, and AI is able to contribute to automated laboratories. And these things have profound structural consequences. It's not just that we are taking a thing that happened before and making it a little bit easier; there are long-term, profound structural transformations that result from doing this. And then there's the security impact: the foundations of national power change, and the way that national security is operationalized changes.
And correspondingly, nations have to think carefully about how they interact with each other to make sure that this goes smoothly. So I think about those four areas.

[00:09:15] I try to pay attention to the general discourse from folks who are very plugged in. I'm on Twitter way too much. It's a blessing and a curse to be very online, right? But you hear from a lot of different people and get exposed to a lot of different ideas. I try to read actual textbooks from time to time. I don't get nearly enough time for this anymore. I love textbooks. I have a big textbook collection; it's one of my things. I try to get textbooks, and what are considered to be strong texts, from many different areas. I love getting book recommendations. So folks, if you want to add me on Twitter and just send me recommendations for the books you think are most important in your field, I'm super interested. I will order it on Amazon. It will take me six months to read it.

[00:09:56] Speaker 2: But I'll get there. I also try to talk a lot with experts in various fields. So I think it's really important to listen to experts. Not that experts have a monopoly on what's true or not; I don't think a cult of expertise is the right way to go. But I think you've got to talk to people who are deeply in the know, who are opinionated, and who have been studying what the impacts of AI in various areas are. So I try to talk to a lot of experts, I try to talk to policymakers, and, coming back to earlier, I try to talk to folks who are just in communities and thinking about how it's going to impact them. So through this mix of a lot of information gathering, I try to build a better picture. That's one part of a successful week. Other parts involve actually writing and trying to communicate internally and to the outside world. So I love writing. I've actually been writing fiction creatively since I was a kid. Most of it's terrible, so it's not published anywhere. I have a couple of pieces of micro fiction on my website. They're there, okay. But no, I think that writing detailed explainers about different aspects of the technology or its strategic consequences is an important part of the week. And this could be on any number of topics. One of my plans for this office is to ultimately have some kind of blog-type surface area where me and Jason Prude, my colleague who's working with me in this office, plan to put out something, I'm hoping once every two weeks, that deeply explores an interesting and important idea in one of these different domains: social, economic, security, scientific, and tries to lay out what might happen in the long term.
Other things are part of a successful week, too. At OpenAI we deal all the time with very complicated issues and questions about how our technology will show up in the world. I like to be a participant in those conversations, give my two cents, and advocate for what's in the public interest. Making sure that we ultimately help people is constantly top of mind for me. Where we see an opportunity, I try to encourage us to take it, and where we see a risk, I try to encourage us to address it in a very sober, serious, and thoughtful way, which is generally quite difficult. I mean, we deal with tons of shades of gray here. But that's kind of what a successful week looks like, dealing with all of these things.

[00:12:29] Speaker 1: That's great. That sounds really multidisciplinary. It strikes me that history is maybe one of the most multidisciplinary fields out there. Do you study history?

[00:12:37] Speaker 2: In such an amateur way, but I do love history. I've always felt a real attachment to the human story as a whole, from the very earliest days, pre-civilization, to how it got made. I love the Epic of Gilgamesh. I come back to that one a lot. It's so beautiful. It's not strictly history, but it is a very important piece of history, one of the first stories that we told, and it captures what the early conception of history was. It's the origins of history.

[00:13:09] Speaker 2: And going through that, I think if you want to have a detailed understanding of what people are trying to do today, you really do have to look at history: what cultural trends converged to this moment, what were the histories of nations and the stories they told themselves, what are the histories of peoples and the stories that are most dear to them? You can't really understand where people are coming from unless you're willing to take some time and study their history. So yeah, this is super important to me. I love reading books about this, and I love randomly watching YouTube videos about these things. Oh, and one quirky bit that I think is fun is the history of food. I love watching videos of people trying to make ancient recipes, or recipes from a few hundred years ago, and seeing how food evolved over time. I find that stuff super neat.

[00:14:04] Speaker 1: I love that. Okay, so in history, what do you think are some moments that might rhyme with a technological change close to the order of magnitude of AI? Is it fire? Is it electricity?

[00:14:14] Speaker 2: It's a few of these things. So obviously the big revolutions are the agricultural revolution and the industrial revolution, and then there are a few things that are sort of part of the industrial revolution but that I think of as a little bit separate. There was a lot of development in chemistry, physics, and quantum physics in the early 1900s, 1910s, and 1920s, and that profoundly changed the nature of the technology that was available to people. Certainly, the computer revolution and the Internet revolution rhyme a little bit with AI. The way in which they rhyme with AI is that a new...

[00:14:54] Speaker 2: A new kind or category of technology was developed. This allowed people to change an awful lot of things at roughly the same time, and it had profound, wide-reaching consequences socially, economically, scientifically, and in terms of security. So, in the agricultural revolution, people obviously went from being hunter-gatherers to settling in towns and cities.

Speaker 2: The development of a town or a city, a center of human population, was a pretty profound structural societal innovation, and people had to develop new laws and customs that enabled them to live together in cities. You get to the industrial revolution, and one of the things that changes is that industrialization makes it so much more efficient to produce food and goods of all kinds that the proportion of the economy focused on agriculture, on people being subsistence farmers, shrinks dramatically over a couple hundred years. What it is possible for states to do changes dramatically.

Speaker 2: You can begin to have all kinds of new conveniences for people, and what their daily life looks like changes radically. Things get a lot better, but also they get a lot more confusing and, in some cases, worse. None of these profound revolutions in technology and society are uniformly good. They all come with substantial challenges. In the Industrial Revolution, you had all kinds of great things. You also had extraordinary pollution. Cities got a lot dirtier, right? You had awful air, awful water pollution, and people were sicker living in cities than maybe they had been before.

Speaker 2: But these types of things gradually got solved as society figured out: OK, here's the nature of the problem. We're going to identify it, we're going to discuss it, we're going to figure out some new standards, and we're going to set out to reckon with it. And I kind of expect and hope that in the AI era we're going to have similar conversations and develop new kinds of standards that let us reckon with the externalities of AI. A profound difference, though: all the previous revolutions happened over a much longer period of time.

Speaker 2: The Agricultural Revolution stretched out over a few millennia. The Industrial Revolution stretched out over a few centuries. Computer technology took a few decades to develop; the internet took a couple of decades to develop. With AI, if you look at papers on AI conversation models from 2015, they look really silly. They're very cute; they have slightly cherry-picked examples, and the AI is saying things that are borderline gibberish, but there's a little bit of a something there.

Speaker 2: And then you get to 2025, and AI is cracking through extremely hard open research problems, having deep conversations, expanding on philosophy, and writing shoddy prose. It's not so great at prose yet; maybe a little bit better now with some of the more recent models. But the leap from 2015 to 2025 was extraordinary, and the changes in society are, I think, now lagging the technology somewhat significantly, but they're also happening quite rapidly.

Speaker 2: So I think going from 2022, when no one had heard of ChatGPT because it wasn't really a thing, to now, when there are hundreds of millions of people for whom this is an important part of their daily life and getting things done, is an astonishing pace of progress. So we rhyme with history, but we're happening at breakneck speed. These previous transitions have landed on a new way of life. Each has usually produced a great surplus that has allowed us to do more things in society and made our civilizations more complex. But this one is happening much faster.

Speaker 1: Now you're thinking about some of the second and third order effects of this technology, the wider repercussions and how to deal with them. People sometimes talk about that as resilience. How do you think about resilience?

[00:19:02] Speaker 2: I think resilience is an extremely useful framework. The world is a complex adaptive system, and a resilient adaptive system has a lot of mechanisms for ensuring that it's gonna do well and thrive and flourish under whatever kind of adverse impact or change in its circumstances.

Speaker 2: As for how to think about this—how to think about the second and third order consequences—I like to think of impact as coupled across domains. So certainly, the state of the economy influences culture, right? If things get tough, people react to it; they develop new kinds of social...

[00:19:52] Speaker 2: New kinds of social interaction patterns. They'll organize their communities differently. They'll reflect it in art and culture. Science and the economy also have this kind of interaction pattern. So progress in science may trigger the development of something that makes it much more efficient or easy to generate goods of some kind. You can find an innovation in science. Maybe you make a new material. Maybe you make a new process. And it drops the cost of something. Now all the firms that were previously doing things the old way either have to adapt or they go out of business. They get out-competed by the new firms that are doing things in the new way that was uncovered for them.

Speaker 1: Creative destruction.

[00:20:35] Speaker 2: Yeah, there's creative destruction that happens. What I think is kind of interesting and odd about the AI-related changes is: one, if they're happening really fast, does the standard market creative-destruction process work out well? Do we get to the adaptations quickly enough, or do we maybe, frankly, need some new social safety nets to address some of the changes and the pace of change? I personally am in the camp that thinks we should be thinking about how to develop new kinds of social safety nets, or expand existing ones, so that if something changes, we are able to be more resilient to an adverse impact.

[00:21:18] Speaker 2: That said, I also think that sometimes the way it happens might be a little bit unexpected. So, I don't know if folks have seen this: apparently IBM's stock price dropped by something like 10% in response to chatter that an AI product on the market might be able to write COBOL, which is a programming language that undergirds a lot of the financial transaction systems. This is a really important thing. Maintaining legacy COBOL code is a giant pain in the neck, but it's very important, and IBM is one of the companies with a concentration of expertise on this topic.

[00:21:58] Speaker 2: And the reaction from the market was an expectation that more people having the ability to use an AI tool to do COBOL might impact IBM's business. I actually think there's maybe a counterintuitive effect here, where the folks who are trusted to deliver in an area are going to be able to leverage the tools to overperform what they would have done before, and maybe IBM gets a win out of this in the long term. But part of getting to resilience is that everyone who could in theory use this tool to do something better and more efficiently, and improve their economic outcome, understands the tool and has access to it. So I think a big piece of resilience is access and literacy, and I want to be a part of making sure that people get those things.

[00:22:43] Speaker 1: Very cool, so democratization of AI is a form of resilience, because with access to intelligence, people can solve some of the problems that come up. Is that right?

[00:22:52] Speaker 2: I think so. I think democratization is a critical pillar of resilience.

[00:22:54] Speaker 1: Yeah, very cool. I think we are close to taking some questions here.

[00:23:01] Speaker 1: Is that right? I know we've got some questions coming in from the community. Let me see, okay. What have I got? So, first of all, thank you everybody for sending in these questions. I can't wait to talk about these. This is from Carmen Dominguez, White House Innovation Fellow.

[00:23:15] Speaker 1: Can you define what you mean by AGI? How is it different from current AI, and what would it be able to do that the current tech cannot? How would it make everybody's lives better or worse?

[00:23:30] Speaker 2: I think that's a terrific question. So I'll go back to the definition of AGI that's in the OpenAI charter: an AI system that can do most of the economically valuable work that people do today. Now, that said, there are a lot of different ways to try to take this apart.

[00:23:46] Speaker 2: As for how it differs from the tech we have today, to be frank, it might not differ that much. I think we're pretty close by most subjective measures to what historically would have been considered AGI. There are a couple of problems that people in the community generally point to to say, well, here's why we're not there yet. One is that it doesn't yet outperform humans at everything: there's still a long tail of tasks at which we're at less-than-human performance. Two is the length of task horizon that AIs are able to reliably handle. There is a wonderful graph from a third-party evaluation organization called METR that became the star of the show for how people in the AI community think about this, and rightfully so.

[00:24:35] Speaker 2: It's a great methodology, and it's a great way of thinking about it. What they show in this graph: the X-axis is the date at which a model was released, and the Y-axis is some estimate of the maximum length of task that the AI can perform with about 50% probability of success.

[00:24:50] Speaker 1: About 50% probability of success. So, what's the coin flip odds length of task that you'll succeed at with AI? And they show that over the course of time, this has been going up, now it's at somewhere in the order of magnitude, like five to 10 hour range for the frontier models, and it's been doubling about every seven months. And people all debate about whether this has now been saturated, whether this is now still a good measure, but what we, I think, are seeing is that the critique that the distinction between modern AI systems and AGI is at least partly the length of tasks that can autonomously accomplish is one where there is consistent progress being made over time and an almost predictable empirical curve fit for when they'll be able to do very long horizon tasks. So, outperforming people at most tasks is one thing that's a critique, and I think that'll gradually happen as the research continues. Task length and ability to execute autonomously is another, and I think that's changing as well over time in a fairly predictable way. And then the third thing is continual learning, so the ability to learn from a small number of examples of experience. I also am kind of inclined to think that this is the type of thing where we're just collectively on track for it. We saw in the sort of early era of the GPT-type models, GPT-3, that they can do some amount of in-context learning. So, you put stuff in the context window and it's able to kind of extrapolate from that. I think we'll progressively see more and more of that as we get to very long-horizon tasks. You should definitely ask the research team at OpenAI this question more so than you should be asking me right now because I wanna make sure that they're like the real authorities doing the actual hard work on a day-to-day basis on how they can get there. But this is like my personal rough picture of this. So, that's how I think about AGI. 
That's how I think we're kind of on track to get there. I don't think it's drastically different from what we have today. I think within a couple of years, we will look back and say we crossed the threshold at some point. We might even have a hard time pointing to exactly when.
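The doubling trend Josh describes can be turned into simple arithmetic. A minimal sketch, using the transcript's illustrative numbers (a roughly five-hour horizon today, doubling about every seven months) and assuming the trend stays exponential, which is itself a strong assumption that Josh notes is debated:

```python
from math import log2

def projected_horizon_hours(current_hours: float, months_ahead: float,
                            doubling_months: float = 7.0) -> float:
    """Extrapolate the 50%-success task horizon, assuming it keeps
    doubling at a fixed rate (the trend could saturate instead)."""
    return current_hours * 2 ** (months_ahead / doubling_months)

def months_until(target_hours: float, current_hours: float,
                 doubling_months: float = 7.0) -> float:
    """Months until the horizon reaches target_hours, under the
    same constant-doubling assumption."""
    return doubling_months * log2(target_hours / current_hours)

# Starting from a 5-hour horizon, doubling every 7 months:
print(projected_horizon_hours(5, 14))  # two doublings -> 20.0 hours
print(months_until(160, 5))            # five doublings -> 35.0 months
```

Under these assumptions a month-scale (~160-hour) work horizon would be about three years out; if the curve saturates, as some argue, the extrapolation simply breaks down.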

[00:26:57] Speaker 1: Great. We've got Joel Stein, Deputy Director at the Collective Intelligence Project. He says, I'm very curious to hear more about the geopolitical impacts of scaling inference compute and how you plan to build society-scale resilience for that.

[00:27:14] Speaker 2: Oh, that's a fabulous question. Did you know I wanted that question? So, when I think of the geopolitical implications of inference compute, there are a few different pieces of this. And let's tell people what inference means.

[00:27:28] Speaker 1: Yeah, yeah.

[00:27:29] Speaker 2: So, when we talk about compute, first and foremost, what we mean is the ability to use specialized chips for running AI. And when we talk about inference, it's funny, because these pieces of jargon are descended from a number of different subfields getting together and then, over time, diluting the meaning of a particular word. Now, when we say inference, we just mean running the AI: you send a query, you get a response. So, what happens when you have a ton of compute dedicated to running AI, having it think for longer, and getting tokens out of it? Right now, if you ask AI to think for a very short period of time about a task, it'll give you a gut reaction, and the gut reaction is probably decent. If you let it think for longer, like 10 seconds, it'll probably produce a stronger result for the query you're interested in. Now, granted, it depends on what kind of query you come with. If you're asking, how's the weather, then thinking longer isn't going to help much. If you're asking, what happens if I were to do this, or how would I solve this hard math problem, letting it think longer allows it to explore more possibilities, compare them, and produce a stronger and more rigorous result. The longer you let it think, the better the result could become, in principle. It takes more R&D to ultimately make it so that it can think for a really, really long time and translate all of that extra thinking time into better results, but eventually we'll get there.
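One way to see why thinking longer buys quality is a toy independence model: treat extra inference compute as extra solution attempts and keep the best one. This is a simplification I'm introducing for illustration, not how reasoning models actually work, but it captures the scaling intuition:

```python
# Toy model of test-time compute: treat "thinking longer" as making
# n independent attempts at a hard problem and keeping any success.
# Real chain-of-thought is not independent retries -- this is only
# an illustrative sketch of why more inference compute helps.

def best_of_n_success(p: float, n: int) -> float:
    """Probability that at least one of n attempts succeeds, if each
    attempt independently succeeds with probability p."""
    return 1 - (1 - p) ** n

# With a 10% per-attempt success rate, repeatedly quadrupling compute:
for n in (1, 4, 16, 64):
    print(n, round(best_of_n_success(0.10, n), 3))
```

The marginal value of each extra attempt shrinks, which fits Josh's point that translating very long thinking into better results still takes R&D, not just more tokens.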

[00:29:02] Speaker 2: So, on the geopolitical application side, the AIs that are able to think for longer are going to produce better results. And if you're a nation that's trying to pursue your national advantage, trying to win a geopolitical contest of some kind, getting better strategic advice from an AI that can think for longer is probably very useful to you. There's a thing I like to go back to: consider two AIs that are playing chess against each other. An AI chess agent thinks into the game tree, thinks some number of moves ahead, and says, if I do this, what happens in response? What happens in response to that? And in response to that? If I get to the end of playing out this variation, do I win or lose? It propagates that information back to the start and says, okay, I know that if I pick this action,

[00:29:48] Speaker 1: If I pick this action, I am guaranteed to win in 17 moves or whatever, or I'm guaranteed to lose in five. The ability to think deeper into the game tree confers an advantage. An AI chess player that can think 10 moves ahead is probably always going to win against an AI that can think just five moves ahead.
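[Editor's note: the "play out the variation, then propagate the result back to the root" idea can be sketched with minimax on a toy game. Chess is too large to search fully, so this hypothetical example uses a subtraction game: players alternately remove 1 to 3 stones from a pile, and whoever takes the last stone wins.]

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(pile: int) -> bool:
    """True if the player to move can force a win from this pile size.

    Full game-tree search: try each move, recurse on the opponent's reply,
    and propagate the win/loss verdict back up to the current position.
    """
    if pile == 0:
        return False  # no stones left: the previous player took the last one
    return any(not can_win(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile: int):
    """Return a winning move if one exists, else None (position is lost)."""
    for take in (1, 2, 3):
        if take <= pile and not can_win(pile - take):
            return take
    return None
```

From a pile of 21, `best_move` finds the guaranteed win (take 1, leaving the opponent a multiple of 4); from a pile of 20 there is no winning move. An agent that searches the tree deeper than its opponent sees these guarantees sooner, which is the advantage being described.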

[00:30:10] Speaker 1: So back to the geopolitical implications side of things: if you've got two nations that are in a contest with each other and one has much more compute than the other for thinking ahead with AI, it's probably going to come up with a better strategy, and it's probably going to enact that strategy more successfully. So nations now have an incentive to have compute parity or compute overmatch in order to successfully win contests. But there's a risk: if one nation starts pulling too far ahead, that's threatening to other nations. So there are some dynamics that come from that.

[00:30:44] Speaker 1: I think this is an area that requires systematic and rigorous study. The frontier of strategy here, and this is a spicy analogy to make, is a little bit similar to the developments in strategy that had to happen in the 1950s and '60s as nuclear weapons and missile technology became progressively more advanced. Folks had to think, all right, there's this entirely new set of technological options at our disposal. We have to understand the consequences of those things. We have to figure out how to have a strategy for winning a conflict, but also how to deescalate a conflict if one starts to escalate.

[00:31:24] Speaker 1: I think a lot about how important it is to maintain peace, to ultimately maximize human wellbeing. For the good of humanity, we have to figure out paths to peace. That means that if there's a risk of escalation, we need to understand how to deescalate. So I think there needs to be research similar to the deescalation research that came out of the crises of the '50s and '60s: new research on AI-related technology accelerations and the geopolitical implications of having more compute, so that anywhere this translates into a risk, we have an off-ramp. We need off-ramps.

[00:32:01] Speaker 2: Yes, I agree. I want to talk lots more about this with you, but I think it's a wrap for today, so we're going to do that in another talk. Thank you. It's been a pleasure to host you here at the Forum today, and thanks to the community and to everybody who sent great questions that I didn't even get to.

[00:32:20] Before we wrap, we'd actually like to continue learning from you. We regularly feature stories about Forum members. I'm the guy writing some of those stories, and I want to learn how you're using ChatGPT and our API in your work and in your life. Caitlin's going to drop a link in the chat where you can tell us more about that. Also, we're going to host a virtual networking session immediately after this talk. So if you'd like to connect one on one with other very interesting folks like yourselves in the Forum, please stick around, join us, and select the pop-up notification when it appears.

[00:32:56] And finally, our next events are kind of incredible. We have a number of exciting gatherings. OpenAI's Chief Research Officer Mark Chen is going to join us for a conversation about accelerating mathematics and physics with AI, which is happening very quickly and very powerfully. We'll be in Washington, DC in person, hosting journalists to discuss how AI is reshaping the newsroom. And I can tell you, as a former journalist, it alleviates a lot of tedium. Nobody has to transcribe a phone call anymore.

[00:33:25] And we'll be at SXSW with OpenAI's Chief Policy Officer Chris Lehane, along with RC Buford, CEO of San Antonio Spurs Sports and Entertainment. I love the Spurs. They're going to be exploring AI policy and the future of sports and community impact. And those two things, sports and community impact, are very closely related.

[00:33:45] So thank you again for being part of the OpenAI Forum. We hope to see many of you in the networking session right after this talk. I know you'll have fun there. Until next time.
