Event Replay: A New Chapter for OpenAI: Mission, Momentum & the OpenAI Foundation
SPEAKERS

Josh Achiam is the Head of Mission Alignment at OpenAI, supporting the organization in defining and evangelizing the mission to ensure that AGI benefits all of humanity. He joined OpenAI in 2017 as a research scientist and has worked on AI safety research and operations, AI impacts research, and educational resources (including Spinning Up in Deep RL). Previously, Josh earned his PhD in Electrical Engineering and Computer Sciences from UC Berkeley and BS degrees in Physics and Aerospace Engineering from the University of Florida.

Jason Kwon is the Chief Strategy Officer at OpenAI, overseeing policy, legal, and social impact research teams. Previously, he was the General Counsel of Y Combinator Continuity, Assistant General Counsel of Khosla Ventures and an associate attorney at Goodwin Procter. He was a software engineer and product manager before practicing law. Jason has a JD from UC Berkeley Law and a BA from Georgetown University.
SUMMARY
A nonprofit with unprecedented resources, and the ability to start putting them to work immediately. A for-profit with enough capital to build the compute needed to unlock AI’s next advances. And a future marked by research breakthroughs in health, science, and AI safety made possible by OpenAI's innovative new structure.
That sense of possibility came through clearly in an OpenAI Forum conversation with Chief Strategy Officer Jason Kwon and Head of Mission Alignment Joshua Achiam, who walked members through OpenAI’s historic recapitalization earlier this week. The shift creates a nonprofit OpenAI Foundation that will hold roughly $130 billion in equity in OpenAI’s for-profit arm, now a public benefit corporation called OpenAI Group PBC. The PBC carries the same mission as the Foundation and remains under its oversight — ensuring the company’s success directly strengthens the public good and expands access to AI and its benefits.
Jason said the restructuring removes the biggest bottleneck in advancing AI: access to compute and the infrastructure behind it. Under the new structure, OpenAI’s PBC can more easily raise the vast sums required to build more compute and support frontier research. As he explained, that shift could determine whether “you are driving a lot of this technological change or if you’re just one of the companies doing so.”
For his part, Joshua noted that the restructuring gives the Foundation access to significant financial resources, allowing it to quickly begin supporting meaningful work such as curing diseases and building AI resilience. As he put it, “great things can start happening much sooner.”
“We want to help all of humanity. We want to do something good for people. That's why we show up for work,” he said. “Critics think we’re not sincere, but we are in fact totally sincere.”
Jason closed with a story that captured what this moment unlocks. Early in his time at OpenAI, Jason asked then–VP of Research Bob McGrew if he could share more visibility into what was coming next. Bob simply smiled and said: “It’s research.” The lesson, Jason said, is that discovery doesn’t follow a fixed roadmap — and breakthroughs often arrive unexpectedly.
"The point that I took away is that you have to operate the company on a path of discovery and surprise, and you have to embrace that," Jason said. "That sometimes means the thing that you're doing right now may not be the thing three months later because of where the science is taking you, and you have to be comfortable with that."
Summary contributed by Yochi Dreazen
TRANSCRIPT
Hi, everyone. I'm Natalie Cone, your OpenAI Forum community architect and a member of the Global Affairs team. Welcome to the OpenAI Forum.
Today's conversation comes at an exciting moment in OpenAI's journey. We've simplified our structure to reinforce our mission alignment. We now have a nonprofit, the OpenAI Foundation, which governs a public benefit corporation, OpenAI Group, allowing us to better align resources with our goal of ensuring that AI benefits everyone.
The foundation begins this new chapter with a $25 billion commitment to accelerate scientific discovery, advance healthcare breakthroughs, and drive positive economic impact in a rapidly evolving AI era.
This makes the OpenAI Foundation one of the most well-resourced philanthropic organizations in the world. And as Sam shared yesterday, the foundation now has the ability to deploy capital relatively quickly to maximize impact. As our models become more capable, potentially driving new scientific discoveries within just a few years, we remain focused on ensuring those benefits are widely shared and serve the public good.
We're deeply grateful for the members of this community who have been instrumental in OpenAI's journey. Your expertise and engagement make the forum an essential part of how we achieve our mission. And that's why we want to continue opening doors to moments like this so you can hear directly from our leadership and be part of OpenAI's story at every milestone.
Now, please help me welcome OpenAI's Chief Strategy Officer, Jason Kwon and our Head of Mission Alignment, Joshua Achiam to share more about what this week's news means for the future of OpenAI.
Hi fellas, how are you? Good, how's it going? Welcome to the OpenAI Forum. And this is a lunchtime chat, so we're going to try and keep it really, really compact so that our members get the most out of this day with our executives. So let's just jump right in.
Jason, why is this week's announcement an important milestone for OpenAI's evolution? Yeah, so it's really the next phase of our structure that kind of coincides with changing the environment. And we talked about this back in December publicly, which is you can kind of track out different phases of how the company was set up.
along with the evolution of AI science and research, right? And in this phase that we're in right now, with the amount of capital required for building out data centers and infrastructure, it turns out you need a lot more compute than we originally thought. But you still want to do it in a mission-aligned, mission-driven way, and you also want to make sure that the benefits of the technology really are directed with an eye towards the public benefit. And so we continued part of the nonprofit structure and created a PBC, which is a purpose-driven vehicle for corporations to be aligned towards a social mission, balancing that against stakeholder interests.
And so you have these two corporate forms that are now kind of reconfigured in this new way that we've talked about over the last couple of days.
It also makes it easier and more efficient for the company to raise the capital that it needs to build out the infrastructure in order to continue the R&D that has taken us to where we are today. And so all the elements that, you know, conceptually we cared about from the very beginning all continue through in the structure, but on the practical realities of actually marshaling the capital, this one will actually probably, you know, help us go further. And so that's why it's been an important change.
And of course comes also with the reconfigured relationship with our principal partner throughout this whole journey, which is Microsoft. And we think that this setup also sets up both them and us for a new phase of the partnership where we are an even bigger customer of theirs, you know, when it comes to the compute build out. And then we are still sharing technology responsibly between the parties.
Thank you so much, Jason. This next one is for you, Josh. A lot of companies say they're mission-driven. What makes this model, specifically the combination of a nonprofit and PBC, different? And what will it enable us to do?
Thanks, Natalie. So first and foremost, OpenAI has always been a mission-driven company. It shows up in everything that we do, from the structure that we've tried to design to ensure this, to how employees conceive of what they work on on a day-to-day basis, and how we try to make sure that the tech actually helps people.
Not very many companies have this kind of unique structure that we have that involves a nonprofit and a PBC or a for-profit type of entity. One of the major things here for mission alignment is that previously, of course, the nonprofit governed the for-profit. This continues to be the case under this restructure. This is, I think, a really wonderful continuing feature of what we do. We wanted to put the mission front and center for everything.
And in particular, under this new configuration, one of the ways in which we've actively continued the mission alignment is through the Safety and Security Committee of the board of directors. This is the part that's responsible for making decisions related to how our technology impacts the world, how frontier AI has safety impacts in the world.
Those directors, when they're making those decisions about what gets released and what the safety standards have to be, they have the ability, and this is enshrined in the governance structure, to only consider the mission when making those decisions.
So the final authority covers not just how we get the tech into the world, but how we ensure that it's safe and beneficial for people. The nonprofit maintains full mission alignment in how it gets to make those decisions. So I think that that's a pretty exciting thing. I feel very good about it. Thank you so much, Josh.
Jason, how did feedback from state regulators inform and improve this structure? The main way, I think, is their feedback on the continuation of nonprofit governance, and then making sure that that nonprofit governance had sufficient independence and would still work for the purposes of ensuring the capital-raising and commercial viability of the for-profit, balancing all those considerations together. And so we talked about this back in, I think, May, which was, we took
into account their feedback in terms of continuation of the board of the nonprofit overseeing the for-profit and, you know, the for-profit really being under it. And then, you know, continuing sort of dialogues over, okay, what is the board structure going to look like? And then I think you saw the result the other day, which is, you know, we have the SSC concept that Josh was talking about, and then Zico is the nonprofit-only director who sits on the nonprofit board who looks after safety. Awesome, lovely.
Okay, maybe we'll dig in a little bit more into the OpenAI Foundation. So Josh, the OpenAI Foundation now holds equity valued around $130 billion. It's one of the best resourced philanthropic organizations in the world, as I mentioned in the intro. How will it operate, and how is it different from other philanthropic organizations? I think that's a terrific question. So
Yeah, first and foremost, the scale of it is something that's really easy to underappreciate. The largest nonprofits in the world have ballpark $100 billion in their endowment. OpenAI's nonprofit, the OpenAI Foundation, is starting today with $130 billion in assets.
And if you look at the trajectory of OpenAI over time, and the way that we've grown over time, and the way that we might hypothetically grow in the future, and this is not investment advice, but, you know, you start to see the possibility that this will be maybe by far the most resourced foundation in the world. It has a global mission, the intention is to help all of humanity. That is, I think, just a very, very exciting thing. It also poses challenges for figuring out how to operate it.
So today, you know, we're just starting this thing up, we don't know exactly how it's going to
operate yet. We're in the process of figuring that out. We know we have to do a lot of engagement with communities and experts to get to a place where we know how to marshal that extraordinary scale of resources in the best and most beneficial way possible. And we've started a number of things along the lines of those engagements.
So for one, we engaged with the NFP Commission. And I want to say in the event that any of you are watching, just a huge, heartfelt, deep thank you for all of the support in thinking through what the future could hold for the Foundation and for being a part of getting it there.
We have also engaged with nonprofits across California, across the country, on thinking through ways that they can use AI to improve what they do. And we're excited with the OpenAI Foundation to ultimately support the deployment of AI technology in
beneficial use cases and community-driven use cases. Events like the 1,000 Nonprofit Jam that we ran in many cities across the country, I think have helped us build relationships with people who are trying to use AI to do good and we expect to continue to learn from engagements like that.
We're also having discussions with a number of philanthropies that are marshalling large-scale resources and we're talking with them about some of the interesting opportunities that they see for large-scale benefit and trying to figure out how we contour the AI and the tools towards that.
But of course, this is very much in the early days because we're going to wind up having capabilities that nobody's ever had before. We are an organization that's driving towards the development of AGI, full artificial general intelligence, and we expect that
over the next few years as our research roadmap bears fruit and that comes online, we'll be able to use AGI in ways that people previously, you know, couldn't imagine doing. I'm very excited for the ways that it will help accelerate scientific and technological progress. So we expect to build an AI researcher and we think that that is going to help us find new cures for diseases and, you know, other ways of improving people's health and well-being.
And this is part of how we're conceptualizing the initial $25 billion commitment, that we might ultimately be able to use AI for those ends and then put the developed outcomes into the world in ways that are maximally helpful to people. So there's a lot of work to do to ultimately figure out how to deploy $130 billion in capital, and whatever that grows into over time.
This is the way in which we're approaching it. That's so exciting, Josh. And I know that you have a really awesome team. I know that you guys have been working on collaborating with philanthropists. You recently had a sort of jam, or convening of sorts, with faculty members. And I'm just so excited to hear about all of the work that your team has been doing. And I'm also really grateful that you've been weaving the forum members into it. And I want to give a shout-out to the forum members that are here today. This community of experts has been a group of consultants for us. They've evaluated our models and provided all sorts of awesome feedback along the way on this journey. So now you guys get to put a face to the name.
Josh's team leads a lot of this work that you guys have been participating in.
OK, moving on. This is for, I think, I'd like Jason and Josh both to answer this question, but we can start with Jason. So Jason, what do you hope the foundation will achieve in the next five years? What are you dreaming about?
So we've got these two pillars that we talked about: one is in bio and health, and the other is in AI resilience. And I think that I'd have different hopes for both of them. On the first one, my hope is that the efforts we have here really contribute to some kind of breakthrough over the next five years that really makes a difference in lots of people's lives, somewhere on the health or medicinal kind of dimension.
And I think on the AI resilience part, that it helps really kind of create an ecosystem around AI that
essentially just helps society, you know, as the technology develops, co-develop with the technology, understand the technology, feel safe with the technology. And one of the ways I kind of think about the really productive way for the nonprofit to participate in this whole journey is to kind of fill institutional gaps, you know.
If you look at where we are in terms of technology development, and the way that things are structured in our society, we just kind of need new ways for the public civic institutions and organizations like that to participate in this technological development. We probably need new types of institutions, ones that are focused on the technical
solutions that are going to help society, I think, adapt to this, make good use of it. And so I think a really great outcome here would be that ecosystem.
Thank you so much, Jason. And I know since the beginning of working at OpenAI, you've really, I'd say, lifted up this idea of building ecosystems around and embedded in OpenAI between us and external stakeholders. And it's one of the reasons you've always been a champion of community, so we could be a catalyst of these ecosystems.
So I love that answer. What about you, Josh? What are you really wishing that we will be able to achieve in the next five years?
Yeah, so the first thing that comes to mind is that on the five-year time horizon, it's very hard to predict how the technology will evolve. And things may wind up happening very, very fast. I come from an AI research background. And so I look at everything that's happening with just tremendous excitement, and I have seen it be the case in the history of AI research that progress goes faster than you expect. So in a stretch of less than 20 years from when people knew you couldn't train deep neural networks to do things, to today, when we're talking about AGI within a few years, that's a very rapid pace of development.
And since we expect that AI will possibly help achieve breakthroughs in other fields of science, we may see breakthroughs on a large number of longstanding grand challenges for science, for health, for longevity, and for other exciting things too. Maybe, say, in materials science, which is currently an obstacle to high-temperature superconductors or to space elevators. Maybe as the AI begins to drive scientific and technical progress, we'll see unexpected breakthroughs in these areas, and things may move
very fast over the course of the next five years. I'm very excited for that. There's something else that I think is really important.
And this I'm both excited for and approach with some trepidation, which is that there will need to be a lot of consensus building around how should AI show up in the world? And what should we be doing when we're in a position where we have very large scale capital to deploy on all kinds of problems? How should we make those choices?
So I'm excited over the next five years for the foundation to engage in a process of consensus building and outreach, and ultimately make sure that the things that we do are things that feel good in the heart to everybody. Because if we want to help all of humanity, there are really going to be a lot of people whose approval we have to earn and whose trust we have to earn.
So I'm excited for that. And I approach it with sort of the appropriate amount of trepidation.
And then maybe the last thing that feels just really exciting to me and that I'm very hopeful for is that as we enable these kinds of technological miracles, I want to see us find ways that help ultimately uplift everyone around the world. I would like to see us contribute to global development if possible. I think there's just a lot of infrastructural needs that we have to figure out how to fill if we want to make AI useful to everybody everywhere.
And I mean, that's something that I very much want. I want it to be the case that no matter where you live in the world, no matter what socioeconomic status, AI is able to be helpful to you. And in order to do that, we have to get AI to people. In order to get there, there's a need for global infrastructure, but you also can't miss the basic needs. So you can't just skip the data centers. You have to make sure that everyone's got clean water.
and electricity and food and roads. And so I'm hopeful that we'll ultimately be able to contribute productively to that.
That's really beautiful, Josh. And I was noticing, just in the past few weeks as we've been getting to know each other, because you've been contributing so much in the community, that you've been at OpenAI for eight years, and you started here as a research scientist. I actually didn't know that.
So you've seen the entire evolution. I mean, eight years is a lifetime in this industry. Yeah, that's right. That's good, yeah. Wow, yeah, that's really, really awesome.
Okay, fellas, I like this question and maybe we'll start with Josh then on the research science note. How does this structure help us continue to push the frontier of AI research safely and responsibly?
Yeah, well, I mean, there are two parts to that: one, how do we push the frontier, and two, how do
we do it safely and responsibly. On pushing the frontier, to Jason's point from earlier, this restructure enables us to access much more capital. And that's important for building compute because compute is the biggest bottleneck to AI research today. I routinely hear researchers tell me and tell everyone who will listen that they are compute bound, that they would be able to do so much more if they had more GPUs available to them. And so that's really an obstacle to scientific progress and also to safety progress.
I hear from the safety teams, from everybody really, but I also hear from the safety teams, oh, we wish we could run more experiments. And they are working miracles, and they'll be able to work more miracles when more compute comes online. So I think that's a very important piece of the picture, and also for how this ultimately ensures that we get to safety and responsibility. So this isn't a new unlock, but
I do think that the preservation of the role of the SSC, of the Safety and Security Committee, and the way that we put the mission first in the safety decisions that we make as an organization, enables us to continue doing the great work that we've been doing on safety and responsible deployment of AI.
That is so true, and I wish that everyone joining us today could see how we trade GPUs to make the research happen. If you saw our Slack channels, you would see how much we care about safety and about the testing, and how much people will give up so that that takes priority before we deliver a new model.
Jason, what are you thinking? I think that there's one thing that
is a level of detail that kind of amplifies something that Josh said about the SSC that maybe is not super obvious to lots of people. So with the SSC sitting up, you know, at the nonprofit, and having to make decisions about this stuff really from a mission perspective, it means that the directors, when they're making those decisions, they have to follow fiduciary duties, right? And their fiduciary duty, if it's at a nonprofit, means that on this particular topic, you have to make your decisions in accordance with whatever is going to further the mission, which is to ensure that AGI benefits all of humanity. And so that's mechanically how that actually makes a difference.
I think the other part of this is, people might be sitting there thinking, what do you mean access new capital? I see lots of news about fundraising activity in the current structure.
or the previously current structure. And I think that this is the kind of thing that takes some time to get used to when you join OpenAI, but the scale of the vision that Sam leads with is sometimes pretty jarring, because whatever you thought was ambitious just kind of undergoes a complete revision in terms of its definition.
So if you are thinking about the amount of capital we currently raised and deployed in terms of compute build-out and research, and we're saying that the structure is gonna actually help us do more of that, more here in these terms means actually substantially more. And so the thing that really, I think, makes a difference here too, is just whether you are driving a lot of this technological change or you're just one of the companies doing so.
The way that the frontier advances is through that, you know, fundamental resource, but then also the way that the safety standards are set is going to be determined by whoever is putting out the frontier capabilities. And so I think it's this combination, right? It's the combination of, you know, the mission being part of the driver of whatever safety and security is, but then the capital formation around it that's been enabled by this new structure is what allows these two things to work together.
I'm taking note, guys, that you two are the perfect pair for giving a balanced perspective. So we've got the chief strategy officer with a background in law, and the research scientist running mission alignment. I think this is the most balanced perspective we've ever been able to present to the community in a discussion.
I love the way this is going. So let's see. I'll start with Jason on this one.
AI is one of the most capital-intensive fields of study in the world, and you kind of started touching on this. Maybe we can just reiterate: in what ways does the new model unlock capital for advanced compute and long-term research? This is really from the standpoint of the people that are providing capital.
And so having been on that side of the business, not this company, but in my past, when you're an investor, you think about the various risks and returns. And what you principally want to underwrite if you're an investor, especially in technology, is you'll take technology risk. Maybe you'll take market risk.
But you don't really want to take legal risk. You don't want to take funky structure risk. You don't want to take bespoke thing I've never seen before and have to understand kind of risk when it comes to where your capital goes. And so this is something that we had to kind of go through every round of capital raising. And we were able to do it, but increasingly, it just became harder and harder in terms of the friction associated with it.
So it's much easier to show up and say, this is a corporate structure you've seen before, you've invested in before. PBCs, even though they are not the normal C corp, because they have this kind of newer concept that has to do with social responsibility, it's still a structure that's been around for over 10 years in Delaware. It's well understood. There are public companies that are PBCs. There are many more private companies that are PBCs. Investors have, you know, kind of put money into them before. So it just takes a whole part of diligence off the process and just streamlines everything. Definitely. Anything to add to that, Josh?
Yeah, I think that's something that has been underappreciated, exactly how weird our old structure was. And it was designed with just the most beautiful sentiment and idealism. And then in practice, when we got a few years into this completely bespoke, custom-designed thing, it turned out to be creakier than any of us had anticipated. And it just very quickly ran into, well, okay, this trade-off that we made for this thing actually created this unexpected issue on this time horizon. And what we have today is considerably simplified. And the way in which we did this transition was one where everyone got fair value for every part of this transition. And so I see this, to be kind of like a nerdy researcher about it, as living on the Pareto frontier of optimality for organizational structures. We moved along the Pareto frontier towards something that is at least equivalently good for helping us advance our mission in the sort of conceptual and idealistic strokes, but also in practice just makes it so much easier to do so many of the things that we have to do that it feels like a good trade to me. Thanks, Josh.
A couple more questions before we start taking audience questions.
So Jason, what do you hope people take away from this announcement?
That more of the news that they read about OpenAI can be about the research.
Yeah. Oh yeah.
Yes. No, it's, I mean, the truth is that when people are focused on things that aren't the research, they're missing out on the opportunity to have a discussion about something genuinely critical that will shape a lot of what happens to everybody over the next 10, 20 years.
The most important issues of this period in time are going to be about how we choose to use AI, what we choose to use it for, and what agreements we make together about that. And if we're talking constantly in the media about the OpenAI structure, and OpenAI does this deal with OpenAI, it's ridiculous. We're missing the point by a lot.
So very excited to be able to talk about the research and the impacts on people from AI and the potential for doing good with AI. Thank you for that.
And a few months back, we hosted Joaquin, who's the head of recruiting now; he used to be the head of preparedness. And one of the takeaways from the community, after interviewing Joaquin and after he played his guitar for all of us, was that the world is starting to see who works at OpenAI, and now we actually have this series with our recruiters. And people are surprised, because the emphasis, like Jason mentioned, isn't necessarily on the research, and isn't necessarily on the mission, and on how genuinely mission-driven we all are, and how that's why we're able to actually do so much, as Jason mentioned, because people feel really strongly and really passionately about this work.
So I am with you fellas, I do hope.
that we can start to make a shift now that this is all behind us. The corporate structure has been solidified. And we'll keep doing talks like this to reinforce that what we really care about is the research and the mission.
Last question before audience. We'll start with Josh. What excites you most about what this structure empowers us to do?
For me, I have to say what I'm most excited about the structure empowering us to do is actually use the resources of the nonprofit much sooner to try to accomplish good in the world.
So before, in the way the structure had been set up previously, it was going to be a long time before the nonprofit got to take very meaningful actions or have access to the large pool of capital that it would ultimately get. Now, it actually gets to have that capital up front. It gets to start
figuring out how to deploy it for good up front. And what this means is that instead of, you know, great things will happen years down the road, great things can start happening much sooner. And the beneficial, good types of things the nonprofit can do can evolve in parallel with the technology as the technology is driving impact in the world. So before it was going to have to be just long delayed from the impact, I think having it concurrent, simultaneous, side by side with the impact is just a really wonderful thing that we're empowered to do now. I feel very good about that.
Okay. Thanks guys.
Let's jump into some of these audience questions. So, you probably both know that we recently hosted GitLab at OpenAI. And one of their stakeholders asks: what are some of the ways you heard from nonprofits in the jam as to how they're using AI to further their mission, from a fund-seeking and impact perspective? We see so much potential. And that was Shane Wynn. You wanna take that one?
Yeah, absolutely. I heard such incredibly varied things. I heard educators talk about how they were using AI to streamline the processes of making educational materials and better support their classrooms.
I heard from someone, and this one, my memory is a little bit fuzzy, but I remember just thinking, wow, this is so different than what I would have expected. But someone who was figuring out how to use AI to help combat a problem where there was severe overfishing in an area where people had an economic dependence on fishing, using it to check the regulations and to figure out how to ultimately address some of these problems. And again, my memory is a little fuzzy. I don't know if I've got this one
exactly right. But I just remember thinking, wow, AI is able to, to help do this kind of research on something that a local community really depends on in an area of the world where, you know, in California, in the open AI office, you don't hear about that kind of issue very often. And it was really exciting and interesting to me to hear how helpful it was to people about that.
I also heard from folks who were trying to use AI to make their nonprofits more efficient and able to tackle more than what they could with just the people they had on staff. You know, nonprofits are chronically under-resourced; they often are trying to do much more, and much more important work, than you would ever expect could be done with the number of people they have on staff or the number of dollars they've got coming in. Seeing that AI could be a big amplifier for them and help them do a lot more was something that they were very excited about, and that I was very excited to see. Absolutely. Anything to add to that, Jason?
You know, one of the other things that came out of the nonprofit jam, and it's interesting as we think about doing more of these, and this is a little bit about the ecosystem point, is there was a resource posted by, I think it was somebody in New York, I'm blanking on the organization's name, but it was essentially a guide out there about using the latest AI tools, such as ChatGPT, for all of the activities that a nonprofit might engage in. Yes. Grant making, grant receiving, grant applications, administration, how to help write a mission statement, all that kind of stuff. And so I think one of the other great things about these kinds of communities is that the more you bring them together and introduce them to technology, the more they start cooking things up on their own. And that is really how you get a lot of impact, right? It's not just about everything that we do. It's about kind of triggering things and then letting people build.
Yes. And I think what you're referring to, Jason, was the cookbook. So we have cookbooks for developers, and then the OpenAI Academy launched the cookbook for nonprofits. And on this idea of ecosystem, what ended up happening was that the K-12 teachers in the forum and in the academy got wind of the cookbook for nonprofits, and a lot of those use cases mapped to pedagogy and to teacher communications. And I found myself constantly sharing that link with folks. And then we eventually published it, because those resources are totally scalable. So I love that answer.
Andrew Holtz, AI tech lead at EPM Systems, asks, what is OpenAI's vision for balancing its product mission with its research? How do you ensure that the immediate demands of shipping features don't crowd out the long-run research for achieving AGI?
That's a good one. You want to start this one, Josh?
Yeah, that's a great one. So, I think that the research staff thinks about this a lot, because the research scientists who are pushing the frontier of AI, they are most excited about the long-term. They are also very excited about products, because they see applications as one of the most important ways to test whether what they built is actually good, right? It's about whether it's useful in the real world.
But they noticed that there is this trade-off between the long-term foundational research investments and shipping products at a high cadence. And what has wound up happening over time is that there's been an evolution of the way in which the research effort is structured, so that some folks get to focus predominantly on releases and making something that's going to be extraordinarily helpful to people and applying techniques as we understand them in the best way possible to facilitate that.
And then there are some other folks who are shielded from those kinds of incentives and demands who are just able to think really, really hard about the long-term research, and they can get very esoteric and mathematical.
And those things don't have to drive a result immediately, but many of the most important breakthroughs that have happened over time at OpenAI have come from people who were shielded in this way, spending a year or two developing concepts until they reached a point of maturity where they enabled some large-scale leap in AI capabilities. And we've driven the frontier paradigms for AI training by ensuring that we had that capacity for long-term research investments. So people are definitely thinking about it. And the short answer is they're organizing their effort to make sure we've got the best of both worlds.
Thank you, Josh. Jason, do you want to contribute anything to that one?
Maybe the way I would articulate my point here is through an anecdote. It was maybe three or four months into my time at OpenAI, and I was having a little bit of trouble keeping up with what research was doing. It was a much smaller company then, like 140 people or something like that.
And I remember grabbing some time back then with our then VP of research, Bob McGrew, who, by the way, is just an amazing human being. And I sat down with him and I just said, hey, is there any way that you could give me some more insight into what's being planned and what's coming, and that way I can kind of anticipate things more and be better prepared? And he just smiled at me and said, it's research.
So I think that the point that I took away from that is just you have to operate the company on a path of discovery and surprise, and you just kind of have to embrace that. And that also sometimes means the thing that you're doing right now, three months later, it just may not be the thing anymore. That's because that's just where the science is taking you. And you sort of have to be comfortable with that.
And I totally get, from the outside, people see us and they're just like, I don't know what you're talking about.
You're just like, it seems like you guys are kind of zigzagging sometimes. And you're not necessarily getting that kind of continuity where you did this exact thing, and now it lines up to this next thing. But that's a very different motion; that's a product motion, a very engineering-driven motion.

And what we have is not that. It's the research motion, which is why sometimes it seems like we're surprising. It's actually that that is what the discovery has implied, and that is the direction we need to go.

And I think that is the most important thing: just not losing the spirit of that, and really trying to operate in that way. Because at the end of the day, that is what will produce the next generation of products that people can use and get a lot of utility out of. And it's actually the moment you step off that path that you end up in a local maximum.

That actually reminds me of something Kevin Weil once mentioned to us all when he was the chief product officer. For everyone here, he's now the VP of AI for Science. But he said: the process of planning is really important, but the plan is not important at all. That seems to underpin how we operate.
Okay. I think that does it for this one, Josh, unless you wanted to keep going on that question; I'm totally open.
Sorry, there was a teeny bit that I want to add, which is just a huge plus one to what Jason is saying about how we're driven by what the research implies, and that it's very bottom-up. So it basically always has to be the case that if a research scientist or an engineer, or some team of people, figures out some new capability, we can reorient the entire company and product surfaces around what that enabled and what it implies.

And one way you can look at a historical example of this: thinking models are weird. And the product surface area for that is weird. We went from, okay, you've got instantaneous chat, to, all right, now you have this option to go into this other model that's going to sort of annoyingly think for ten minutes or however long, and then it'll eventually pop out a super detailed answer.
It turns out that's very useful. And the AI improves considerably in capabilities, both from the training process for thinking and from the actual act of using test-time compute for thinking.
And this would not have been intuitive at all from a product perspective. I don't think anyone would say, yeah, you know what's great for chat? Latency. What if we added minutes of latency? No one would ever come up with that as a top-down product directive.
So, so yeah.
I think what Jason is saying is just completely true, and it's a real thing, and it's also part of what makes OpenAI a really great and exciting place to be.
Actually, this is a funny echo too. When we first had the thinking models, we were like, what's the product we're going to put around this? I remember Sam was just like, let's just have a "think harder" button. That's not exactly what we did, but it's not that different from what we ended up doing.
I remember when he first said that, people were just like, what? Because it's just a thing that you didn't expect in a product, and it's totally new, and we thought people would just react negatively to it.
We did end up doing something like that, because you can see it now in the models: you ask a question, and in some cases you can control how much it thinks, and people are fine to wait for it to think. But then there was also an unintended feature.
When we first came out with these models, we did less in terms of exposing the chain of thought, because we had a bunch of concerns about doing too much of that exposure. Then the DeepSeek models came out not that long after, and they just revealed their entire chain of thought.
The thing that I heard from lots of people was they found that fascinating. That was actually a product feature, because it's not something that people necessarily thought that they wanted. But as soon as it was shown to them, they're like, this is actually really cool. I've never seen a computer think in real time. It makes it feel a little bit more human in some sense because I can see how it got to the answer, and it's like a thought process.
I remember somebody telling me about a conversation he'd just had with a high-ranking government official, not in the United States but in another country. The official said that when they saw the chain of thought in the DeepSeek model, they were like, okay, now I understand this whole thing that people are talking about with scaling, and how the models are actually thinking and reasoning. And so this is the other thing too, right? The things you thought the product should look like are not necessarily going to be the things that really capture attention in terms of people wanting to use the product, or finding value in the product.
You know, or they're not necessarily going to be the things that you thought were useful before. And I think this says something also about the path to AGI. When people think about, hey, as it gets more capable, what are we as people going to do? Where are we going to find the value? Where are we going to add the value? And this is just another thing: we find things valuable all the time that we didn't think were valuable before. And I think that this is also a part of the process of discovery.

Thank you, Jason. Yeah, that was awesome. I'm glad we kept going, Josh, pulling that thread on that one. We have time for maybe two more questions. So let's jump into Christy Connor. She's the founder at Seven Bear Inc., and she asks: what's the staffing structure and hiring plan going to look like for the foundation, to execute the items outlined in the OpenAI Nonprofit Commission report and the new foundation's alignment and mission? You want to take the first stab, Jason?

Yeah, so, you know, this is all really going to start with our board. The board that sits over the nonprofit, they run the nonprofit.
They are going to make the decisions about how things get staffed, and, you know, it's day three, I think. So I think we're going to look for some really qualified candidates who have operational expertise in running large nonprofits, to help us understand the established best practices for how you do certain things. And that's going to combine, I think, with input from people at OpenAI as to the vision of the technology and where it's going. So you kind of want to combine best-practice operations with the novelty that comes from the science and research. Again, everything really just goes back to research at the end of the day.
Yeah, plus one to what Jason said. It's going to be a while to figure out exactly how this works, and the decisions are going to come from the board on this. One thing which is probably worth highlighting is that if you look at the MOU between OpenAI and, I think it was, the California Attorney General, one of the things mentioned in there is an understanding and an expectation that the OpenAI PBC will make resources available to the nonprofit as it needs them. So as the nonprofit stands up and figures out how to staff, it'll also be able to draw on personnel, IP, technology, et cetera, from the PBC, wherever that turns out to be helpful in accomplishing its mission. And so there'll be, let's say, a smooth process where even if the nonprofit doesn't have a complete staff on day one, it's going to be able to use resources that are in a more mature state.
Thanks guys.
Oh my God, there's so many questions. I'm having a hard time deciding on the last one.
Last one, so let's just go with this one. Juan Ortega, president of iHeard Foundation, Inc. Would you partner with smaller organizations to help with community projects?
We're already doing that, actually. If you look at the People First AI Fund, we're putting $50 million right now towards partnerships with small community organizations, and we are figuring out how to get the resources that they need to improve AI literacy, to solve problems in their communities, and to do things for community well-being.
Yes, awesome. Okay, so now we have time for one more then.
Yeah. Okay, oh, they're so good. This one's really hard. Daniel Green from Kansas City AI Collective asks: what are the most common misconceptions you have been hearing and seeing since this update was announced earlier this week?

Oh, that's a very good question. I am a little too online, and so most of what I see is Twitter chatter. And largely, for the folks I have seen who have been very critical of whatever we happen to do at any given time, including this transition, what I see most often is a misconception about our motivations and what we're really trying to do and why.
And I would say, having been here for eight-plus years and knowing so many of the people here who have driven the decisions, who have been a part of operationalizing them, who have played a role in building the technology: I feel that we are most often misunderstood because people think we just can't mean what we say. But we do. We are doing this to make sure that this technology benefits all of humanity. We want to help all of humanity. We want to do something good for people. That's why we show up to work. That's what we're excited about. And so, yeah, I think that's the biggest misconception, and it's the easiest one for critics to make: they think we're not sincere, but we are in fact totally sincere. Absolutely.
I agree with that. We're at time, guys, so I'm going to leave you with this. I'm just going to plant the seed of a question, and you can come back and answer it on your own, asynchronously, in the community. So maybe just take some time to think about it, but I think this would be helpful. I'm seeing this question a few times. Folks are asking, including the executive director of Humane Intelligence: what advice do you have for new orgs that are trying out the nonprofit/PBC combo, since they're embarking on this journey in their own companies? So take that one with you today. When you feel like answering, let me know, and I'll share it back with the community.
And I just want to say thank you so much for being here today. That was really beautiful. We've never done an event with the two of you together. I think it was a perfect balance, and I hope we get to do it again.

Not in the forum, at least; we've done a couple of things internally over time.

Yeah, he and I have done stuff together. So thank you so much, fellas, and I hope you have a wonderful rest of your week. And I hope that everyone here also really enjoyed this. I know, Jason and Josh, you jumped on this so quickly.
We published that blog and then we invited you to do this talk and you both said yes. Thank you so much. I hope you have a really wonderful rest of your day. Thanks, Natalie, and thanks to everyone who joined in today.
Yeah, thank you so much for having us. Thank you to everyone in the forum. Thank you so much for being involved.
Wow, that was fun. Oh, man, I hope everyone enjoyed that. I hope everyone has a better sense for what we're trying to achieve now at OpenAI.
I'm going to use this opportunity to tell you about a really awesome event we have coming up on Monday. It's with the CEO of Stack Overflow and our head of developer experience. That will be another lunchtime talk about the future of what it means to be a software engineer.
We also will be hosting UC Berkeley faculty member Greg Niemeyer, who is also a founder of the Berkeley Center for New Media, in November. We are going to be hosting our friends, the San Antonio Spurs. I see Charlie Kurian here in the audience tonight. Thank you for always being a support and for being awesome community members.
We're going to be talking with the Spurs about how they've been leveraging ChatGPT and Sora and OpenAI's other tools to engage their fans and render their operations more efficient, and lots more coming down the pipeline.
So thank you so much for joining us this afternoon, everyone, and we will see you on Monday.

