Event Replay: Using AI to Fast-Track Scientific Breakthroughs
Speakers

Brian Spears is the director of Artificial Intelligence (AI) efforts at Lawrence Livermore National Laboratory (LLNL). He is responsible for setting the vision for the development and deployment of AI methods for national security missions while driving LLNL excellence in AI for science. He is a principal architect of Cognitive Simulation, a family of AI methods that combine high-performance simulation and precision experiments to improve scientific prediction. He also leads the LLNL AI Innovation Incubator (AI3), which develops strong public-private partnerships on collaborative research projects to advance scientific AI in the national interest. Brian served as the Deputy for Inertial Confinement Fusion (ICF) Modeling, where he guided the scientific simulation half of the ICF program at the National Ignition Facility (NIF) through its historic achievement of nuclear fusion ignition. His team used novel AI methods to predict fusion ignition for the first time in history. In his personal research, he applies cognitive simulation techniques to stockpile stewardship missions, with emphasis on quantifying uncertainty in ICF experiments and developing a new generation of self-driving laboratory systems. He received the LLNL Mid-Career Recognition for career achievements in research. He is the recipient of two Secretary of Energy Achievement Awards, multiple National Nuclear Security Administration Defense Programs Awards of Excellence, the Hyperion HPC Innovation Award, and the HPCwire Editors' Choice Award for Best Use of High-Performance Computing in Energy. Brian completed his PhD at the University of California, Berkeley, where he was a National Defense Science and Engineering Graduate Fellow and studied topological methods for high-dimensional dynamical systems. He also holds a BS in mechanical engineering and a BA in liberal arts from the University of Texas at Austin. When not doing science, he can be found racing his bike or chauffeuring his two children to swim practice.

Kevin Weil is the VP of OpenAI for Science and was previously Chief Product Officer at OpenAI, where he led the development and application of cutting-edge AI research into products and services that empower consumers, developers, and businesses. With a wealth of experience in scaling technology products, Kevin brings a deep understanding of both consumer and enterprise needs in the AI space. Prior to joining OpenAI, he was Head of Product at Instagram, leading consumer and monetization efforts that contributed to the platform's global expansion and success. Kevin's experience also includes a pivotal role at Twitter, where he served as Senior Vice President of Product. He played a key part in shaping the platform's core consumer experience and advertising products, while also overseeing development for Vine and Periscope. During his tenure at Twitter, he led the creation of the company's advertising platform and the development of Fabric, a mobile development suite. Kevin holds a B.A. in Mathematics and Physics from Harvard University, graduating summa cum laude, and an M.S. in Physics from Stanford University. He is also a dedicated advocate for environmental conservation, serving on the board of The Nature Conservancy.
SUMMARY
In this Forum session, OpenAI’s VP of Science Kevin Weil and Brian Spears, Director of Lawrence Livermore National Laboratory’s AI Innovation Incubator (AI3), will explore how advanced AI systems are beginning to make direct, measurable contributions to scientific research.
The discussion will highlight the OpenAI–LLNL partnership and what it looks like when frontier reasoning models are embedded in real scientific workflows—from accelerating hypothesis generation and analyzing complex datasets to uncovering connections that were previously out of reach. Weil will share the vision behind OpenAI for Science, including the ambition to “compress 25 years of scientific progress into 5,” by giving researchers powerful new instruments for discovery. Spears will offer the lab-level perspective on how AI is already expanding the pace, scale, and ambition of work across fields like energy, materials science, and high-performance computing.
By bringing frontier AI into some of the nation’s most capable—and most secure—research institutions, OpenAI and the national labs are working together to build a more rapid, reliable, and resilient model for turning scientific insight into real-world impact.
TRANSCRIPT
[00:00:00] Speaker 1: So welcome, everybody! Happy Monday! So awesome to have a Forum event on a Monday. I'm Natalie Cone, the head of the OpenAI Forum. Welcome, and thank you for joining us here at our headquarters in San Francisco. I know some of you traveled to be with us in person today. Michael Wall, for instance. Can you raise your hand, Michael? Michael, thanks for coming all the way from Utah; we're grateful you're here. And a warm welcome as well to those joining us via live stream from around the world. The OpenAI Forum is a global community of experts committed to advancing OpenAI's mission: building artificial general intelligence that benefits everyone. Since launching in 2023, right after the release of ChatGPT, the Forum has grown into an ecosystem of expert collaborators focused on forging impactful partnerships, exchanging resources and ideas, and amplifying real-world AI use cases aimed at solving humanity's most challenging problems. Every year in the Forum, we host Terence Tao, Fields Medalist and mathematics faculty at UCLA, to hear his thoughts on our new model releases. After the release of o1 last year, OpenAI's first reasoning model, he shared a few declarations that really stuck with me. First, he said, we're entering a new era of discovery, in part because of this new accessibility to interdisciplinary collaborations, which Jasper and I spoke of last week. So I want you to think of the OpenAI Forum as a place that helps you forge those interdisciplinary collaborations, and I hope you find this gathering, and those in the future, supportive of that goal. Why are we here today? In the wake of the Genesis Mission executive order, reflecting a national commitment to accelerate scientific discovery using AI, we wanted to bring together our VP of OpenAI for Science, Kevin Weil, and Brian Spears, the Director of AI Innovation at Lawrence Livermore National Laboratory and Technical Director of the Genesis Mission, to discuss the partnership, its goals, and the future of science underpinned by AI. We'll begin with a demo from Brian, followed by a fireside chat between Brian and Kevin, and then we'll reserve a few minutes at the end for your questions. Please help me in welcoming Kevin and Brian to the stage.
[00:02:20] Speaker 2: All right. Thanks so much, everybody, for being here. It's amazing. I was talking to some people before who flew all the way across the country. Maybe there are some people who flew even farther than that. And thank you, of course, to Brian for coming and hanging out with us today.
[00:02:34] Speaker 3: Absolutely. Pleasure to be here. Thanks, Kevin.
[00:02:35] Speaker 2: So I thought I would start, actually, by telling a quick story about how Brian and I met. This was just about a year ago, a little more than a year ago now, I think, in DC. One of my colleagues said, oh, you've got to meet this guy, Brian. And we were on the sidelines of a conference. So I went to this conference and we sat down in a corner. I didn't have a lot of context, so I thought I was maybe going to be talking to him about OpenAI and ChatGPT and what it could do. And instead I sat down and just faced this whirlwind of: all right, you guys have built the greatest tool for science, I can tell you all these things, here's what it needs to do. And I was just like, oh my God, he was telling me things about the product that I was working on that I didn't even know. It was one of the most powerful meetings I have maybe ever had. And then I said, okay, you've got to come talk to OpenAI. I've got to get you in front of the team here. And so he came, and he took us through a conversation with ChatGPT where he was sort of like, here's the undergrad-level question, and then we keep going. And it absolutely blew my mind. It was so good that, even though this was a year ago, I thought maybe you could take us through the same thing.
[00:04:01] Speaker 3: I'm happy to do it for you, Kevin. And for everybody else. Yeah. All right. Well, thanks, everybody, for the chance to talk with you today. Let me walk you through my first conversation with the o1-level model. This recalls for me what I call the o1 moment, where I first realized that, oh my gosh, this reasoning thing is real, and it's going to do stuff for science that can't be done otherwise. So I'll just walk you through the epiphany that I had. As a scientist, my background is in inertial confinement fusion. All of that starts with shock waves propagating into things: a big, strong pressure pulse. Like, hey, describe the material response of a stainless steel bar with a one-megabar pressure on it. For the non-experts, one megabar is big, but it's not, like, enormous. So we expect some shock waves to form. Let's see what the model says. So it says, all right, you're definitely going to get a shock wave. This is exciting stuff. But there's going to be elastic and plastic deformation. There are going to be phase transformations. There are thermal effects.
[00:04:58] There are wave-wave interactions and reflections. You might fracture this thing. There's all this kind of cool stuff that can happen. That was sort of impressive; it's kind of what you expect from a Wikipedia article. So it has this encyclopedic knowledge of the physics phenomena that should happen at this level. So then I started asking it technical questions. This is what I described to Kevin as the telescoping nature of the problem. Let's make it a little bit harder: if I were asking an undergraduate this question, what would we actually do? So, assume a reasonable yield stress for this bar and do something with it. Let's try to plot the strain versus position. How much has this thing compressed as the wave has moved through it? After thinking, it comes back and says, yeah, let's do this. And this was the thing that first got my attention.
[00:05:41] It started to make some assumptions, because I had ill-posed the problem; there was not enough information. So it's like: right, why don't we take a density, an elastic modulus, a yield strength, a sound speed. And it starts writing down relations for shock propagation. That is something that models could not do before o1 came out. So that was eye-opening, but still not a great accelerator for science. I'm pretty impressed at this point. It goes through and does some calculations, including, at the time, spending a lot of time explaining how to use the quadratic equation, which was just pretty cool. But it got it right. That was good. This was at a time when folks were obsessed with the number of R's in strawberry, so the fact that it can do this is awesome. So it goes through and does this, and you can see the state of the model from a year ago, for those folks who are used to using it. Here's the plot of the axial strain. This is the shock wave. All it had access to was ASCII text, and it's trying super hard to give me a picture of what's going on. It's a little bit wrong, but it's trying hard. That was pretty cool.
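To give a flavor of the calculation being described, here is a minimal sketch of that back-of-the-envelope shock estimate, assuming a linear Us-up Hugoniot with illustrative stainless-steel parameters (the talk does not give the model's actual numbers; these are assumptions). The quadratic mentioned above is the one solved here for the particle velocity.

```python
import numpy as np

# Illustrative parameters (assumed, not from the talk): a linear
# Us-up Hugoniot for stainless steel, Us = c0 + s * up.
rho0 = 7900.0   # initial density, kg/m^3
c0   = 4580.0   # bulk sound speed, m/s
s    = 1.49     # Hugoniot slope, dimensionless
P    = 100e9    # applied pressure: 1 Mbar = 100 GPa

# Momentum conservation across the shock (material initially at rest):
#   P = rho0 * Us * up
# Substituting Us = c0 + s*up gives a quadratic in the particle velocity up:
#   rho0*s*up^2 + rho0*c0*up - P = 0
a, b, c = rho0 * s, rho0 * c0, -P
up = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)  # physical (positive) root
Us = c0 + s * up

# Mass conservation gives the compressive strain behind the shock:
#   1 - rho0/rho = up/Us
strain = up / Us
print(f"particle velocity {up:.0f} m/s, shock speed {Us:.0f} m/s")
print(f"compressive strain behind the shock ~ {strain:.2f}")
```

With these assumed numbers the estimate lands near a 24 percent compression, which is the kind of strain-versus-position magnitude the plot in the demo was conveying.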
[00:06:46] So it's kind of at undergraduate level. Now the next prompt is: all right, what would it take to generate a radiative precursor? That's a big step in complexity, because now we're not just squishing material, we're squishing it so hard that it's hot enough to radiate, and that radiation is getting trapped in the material ahead of it and changing the material state that the shock wave is going to propagate into. So this is what we would call multiphysics: there's mechanics going on, there's radiation transport. This is tough. And it's like, yeah, got it. Here we go. So it goes down and says, we've got to think about all those things I told you before, but we're going to have to add in a bunch of thermodynamics, and we're going to have to figure out what's going on with this. We're going to have to take into account a bunch of optical properties. And then it goes off and does some back-of-the-envelope approximate solutions, like I would expect a graduate student to be able to do. We've gone from Wikipedia to undergraduate to graduate student.
[00:07:36] By the way, I'm at a vendor partners meeting at this time, just using the model for my first sort of test drive that Kevin and company had given to me. And I am no longer paying attention to that meeting whatsoever. I couldn't tell you what happened in that if you paid me. So just to go a little further and get us back to our chat, the next thing that I asked was, all right, let's not mess around anymore. There's something that I know how to do as a national laboratory designer. I know how to think about these things in full 3D with high performance computing, multi-physics codes. Let's not do graduate student problems anymore. Let's go be pros. So I ask it, now consider 3D effects. Relax all the assumptions on planar shocks, which it made, by the way. Let the lateral area of the bar expand freely in the vacuum. How are you going to solve that?
[00:08:27] So it's like, all right, this has gotten complicated. Let me write down all the governing PDEs in their correct conservative form. That was amazing. Now I'm super impressed, and I have absolutely no idea what's going on in the meeting that I'm sitting in. I'm watching the future unfold in front of me. And it explains the solution approaches you could take: you could use some scaling laws, you could do some numerical analysis. Or, the thing that really, really blew me away is a line in here, if I can touch it, that says: you ought to go get the Hydra simulation code from Lawrence Livermore National Laboratory. Which is the radiation hydrodynamics code that took me three years to learn to use, that you can't get your hands on because it's export controlled, and that we use to drive fusion. So the model got from "hey look, here's Wikipedia" to "this is the tool the pros drive."
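For reference (and as an illustration, not a reproduction of the model's actual output), the conservative form being referred to, in its simplest setting, the compressible Euler equations with strength and radiation terms omitted, looks like this:

```latex
\begin{aligned}
&\partial_t \rho + \nabla\cdot(\rho\,\mathbf{u}) = 0
&&\text{(mass)}\\
&\partial_t(\rho\,\mathbf{u}) + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}) = 0
&&\text{(momentum)}\\
&\partial_t E + \nabla\cdot\big[(E + p)\,\mathbf{u}\big] = 0,
\qquad E = \rho e + \tfrac{1}{2}\rho\,|\mathbf{u}|^2
&&\text{(energy)}
\end{aligned}
```

The multiphysics versions the conversation describes add strength, radiation transport, and other packages on top of this conservative skeleton.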
[00:09:13] But that's where it stopped. It did not have the capability to go use those tools, to interact with code-based models, or to go do that. So there was a clear boundary of where we were a year ago. This was a shock. To everybody in the audience: if you've come into the conversation as AI frontier models have advanced, maybe this is not quite as shocking now as it was then. It should be, because it's still amazing. But that's not where we are today, so there are still really cool things for us to go explore.
[00:09:56] Speaker 2: And maybe to that point, this was a year ago, so I think this was on o1-preview. Since then we've gone o1, o3 (we're good at naming things), GPT-5, 5.1, now 5.2. So there have been many steps up. And you made an interesting point this morning when we were talking: you've also leveled up. Like any skill, you've gotten better at prompting and working with the models. So maybe give us a sense of what's the today equivalent of this.
[00:10:09] Speaker 3: Yeah, so Kevin's right. Two things have happened: I'm a much better user of the model, so if you handed me o1 today, this is not what my first chat with it would look like; and the model is so much better. So let me fast forward to everything that we've been working on together.
[00:10:26] Let me go on to an example that's in a paper that Kevin and some other scientists around the country worked on, trying to understand what uplift from AI really looks like in the science space. You'll see immediately my evolution, because the first thing I asked, remember, was: tell me about a pressure applied to a bar. So there's my new first prompt, right? It's this gigantic boomer prompt.
[00:10:49] But notice the thing that I'm doing and the thing I'm not doing. What you can now do is try to get a shared model of what's going on, in context, between me and the model. So we're thinking about a burn wave propagating. It's a spherical system. It's hot here, it's cold there. These are the boundary conditions. I'm just letting it know the system that we're talking about. I'm not at all telling it how to go figure this out or what to go do. We're going to have that conversation between me and the model.
[00:11:15] So let me tell this in two parts. The first part: the first answer is really bad. It's like, yeah, I did that. I set that up. Here's your beautiful result. You're all scientists or science-minded in the audience; it's not a very informative plot. You're looking for waves propagating from left and right. That's just noise. So we have a very long conversation where we go back and forth, and it's telling me, here's your result, super awesome. And it's not quite there yet.
[00:11:40] But it has this really great insight about what to do as soon as I tell it that it's not working. Hey, we're just seeing a constant response. I'm not finding an optimum in anything that I'm looking for in this problem. What's going on? And it thinks it over and it's like, oh, you're right. The burn wave you're looking for isn't actually propagating. We're going to have to turn up the volume on the hot spot that's burning the nuclear fuel you're trying to load.
[00:11:55] Okay, that's great. We go on for a long time; then it gets to one that's so strong that it's just exploding everything. Not burn wave propagation, just the whole thing going off at once, bam. Also not what we were looking for. Until we get to this really cool place where the model and I have gone together, through noise, constant responses, and no responses, to answering a question that I don't think anyone has answered in detail before.
[00:12:21] So there's a model of a burn wave propagating from hot nuclear fuel into cold fuel, and we have an answer for the optimal shape profile that lets the hot material, the heat from the fusion, propagate into the cold fuel. And that's what we're after.
[00:12:57] So I'm excited about this. I'm super excited as I'm doing this. And here's the thing that actually matters: a picture where we're finding an optimum ridge. This is the place you want to operate. Together with the model, I get to do some adaptive sampling, and we get to the plot that says, okay, here's the combination of slope and curvature for the shape of a density transition from hot stuff to cold stuff that will actually help you optimally couple and push this thing forward.
[00:13:19] Now, as the model told me at the top, this is a toy, but useful. "I'm going to use it on a lot of projects." It actually said that to me. I think my ego can tolerate "toy but useful." But it's where I would now go pick up that tool, the high-performance code that I mentioned to you before. I can now start with this as kind of an initial guess.
[00:13:39] If we were root finding, that's a close first guess. Now I can aim these very expensive high-performance computing codes pretty much at the ballpark we want to explore. This is completely different from "tell me about a wave" and "what would it take?" This is code running in the sandbox in the background, interacting in real time, helping me get from my idea to the tools I can use at national laboratory scale in a matter of a few hours, where it should have taken me months to get there. That's today. How cool is that?
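Here is a minimal sketch of the adaptive-sampling idea being described, with a hypothetical stand-in objective for the real coupling figure of merit. In the actual workflow that evaluation is a physics calculation; every name below is illustrative, not from the talk.

```python
import numpy as np

def coupling_score(slope, curvature):
    """Hypothetical stand-in for the real figure of merit: how well a
    density transition with this (slope, curvature) couples the burn
    wave forward. A real run would call a simulation here."""
    return -(slope - 1.2) ** 2 - 0.5 * (curvature + 0.3) ** 2

def adaptive_sample(bounds, rounds=4, grid=9):
    """Coarse-to-fine search: sample a grid, find the best point,
    shrink the search box around it, and repeat."""
    (s_lo, s_hi), (c_lo, c_hi) = bounds
    best = None
    for _ in range(rounds):
        S = np.linspace(s_lo, s_hi, grid)
        C = np.linspace(c_lo, c_hi, grid)
        scores = [(coupling_score(s, c), s, c) for s in S for c in C]
        best = max(scores)  # highest score wins
        _, s_star, c_star = best
        # Shrink the box by half, centered on the current best point.
        ds, dc = (s_hi - s_lo) / 4, (c_hi - c_lo) / 4
        s_lo, s_hi = s_star - ds, s_star + ds
        c_lo, c_hi = c_star - dc, c_star + dc
    return best

score, slope, curvature = adaptive_sample(((0.0, 3.0), (-1.0, 1.0)))
print(f"optimum near slope={slope:.3f}, curvature={curvature:.3f}")
```

The point of the pattern is the one made above: a cheap loop like this localizes the promising ridge, so the expensive national-laboratory codes only get pointed at the ballpark worth exploring.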
[00:14:05] Speaker 2: That was an awesome intro. And now I think it would be fun to get a little more into your work, because you have had this super interesting career as a physicist at Lawrence Livermore. Three years ago, you guys actually demonstrated fusion ignition.
[00:14:19] Speaker 3: We did. It was amazing.
Speaker 2: Maybe introduce your work a bit. What is fusion? Where are we in the span from "it barely starts to work" to our whole world being powered by fusion in the future?
[00:14:29] Speaker 3: That's a good story, and I feel a real sense of optimism about it. I think we have a lot of potential here, and if we can leverage it, we can be a lot more proactive. We are a powerful society.
[00:14:54] Fusion is the fusing of atoms in order to release energy. And if we do it in the right way, we can get more energy out than what we put in, because we convert some mass into energy. And if we can do that with fuels that are abundant on Earth, like deuterium, we've got a solution for an unlimited supply of fusion power, which can drive, I don't know, a lot of GPUs; it can drive everything else that we need to drive with electricity. I had the privilege of leading the modeling and simulation half of the inertial confinement fusion team at Lawrence Livermore National Laboratory on our quest to ignition. We have the world's largest laser. It's about the size of a football stadium: three football fields across the top of it, 10 stories high. That is the laser. When you go in the building, you're actually inside the laser. Kevin's been there. We use it to try to ignite a nuclear fuel pellet, and it's called the National Ignition Facility. And for 18 years of my career, we did not ignite. So there's a lesson there: don't put your goal in the name of your facility, because if you don't meet it, it's a bit of a problem. But on December 5, 2022, just over three years ago, we were able to get more energy out of that fuel than what we put in with the driving laser, which is the world's first proof, other than in nuclear weapons or in stars, that you can do controlled thermonuclear fusion and have a hope of powering a system in the future. Now, the National Ignition Facility is not meant to drive power; it's meant to show, for national security and for power, the science of what we can go do. But it works. We knew that it should, and now it actually does. And that has launched the basis for billions of dollars of investment in startup companies to go figure out alternative schemes to harness fusion. That means we've got an active and exciting way to pursue power for these things that demand so much power in our world: computers, everything else that we have in our lives.
[00:16:47] Speaker 2: Yeah, and there are actually a bunch of other attempts at it, right? The French have a big tokamak. There are lots of other startups, a bunch just getting going, some that are fairly mature. How do you put what Lawrence Livermore is trying to do in context with what's happening across the broader world in fusion? Compare and contrast the different approaches.
[00:17:10] Speaker 3: Sure. On the commercial side, folks are attempting every way you can possibly think of to get two atoms to fuse together and produce power. The big pieces are magnetic confinement fusion, where there are things like tokamaks: gigantic, well-engineered magnetic systems that squeeze plasmas to make them hot enough to do fusion. The other major lane is inertial confinement fusion, where you take a pellet and try to squeeze it so hard, through laser-driven implosion or some other driver, that it gets hot. Everybody is trying to do that. The best way to think about it is that the magnetic world has fantastic engineering solutions, and the actual physics piece is yet to be demonstrated. The inertial fusion world has demonstrated the physics piece, and now it's quote-unquote "just engineering," which is incredibly hard and challenging. So they both have strengths and weaknesses. As for the role of the public sector: what we've done is essentially bear a bunch of that risk, a pursuit in my career that actually grows out of six decades of trying to ignite a target with a laser, so that the private sector doesn't have to. Now that that guaranteed, absolute proof that it works has been handed to the private sector, we can have conversations about the engineering piece, about the various routes we can take: whether it should be inertial, whether it should be magnetic, whether we should do fission first before we do fusion. These are all real conversations, because the public sector has borne that risk for the long run.
[00:18:40] Speaker 2: So let's talk about AI for a bit and how it factors into your day-to-day. You have access to all of our models, of course, and all the public models. We're also very proud to have a partnership together, thanks to you and to a lot of great folks here at OpenAI as well, where we've installed one of our frontier models on Venado, a classified supercomputer that the national labs operate. So you get the best of both worlds, in some sense. But what are the workflows? Where does the model show up? Who's in the loop? How do you actually operationalize AI in your day-to-day work at the lab?
[00:19:19] Speaker 3: Yeah, for us, humans are always in the loop. We do a lot of things that are high consequence, and so there's no chance that humans are not in that loop. So we can start there. But what we want to do is de-bottleneck some of the places where humans are really slow. A typical workflow looks like this: we try to generate an idea using computational intelligence. Like, hey, I have a hypothesis; the problem that I just walked you through. What if we appropriately shaped the front of this thing? How would that look? Get to a point where it's, say, explorable. So in that demo I got to the place where I said we could go to HPC. What does that mean?
[00:19:52] If I were sitting here six months ago, what I would have done is go to a computational code, maybe a million-line, well-calibrated code. I would have set up all the input geometry, taken the hypothesis that I just generated, and tried to run those simulations on a supercomputer. That's not where we are now. Right now we have agent-driven workflows that we are bringing up to speed. The thing that actually sets up that simulation is a simulation-setup agent. The agent that decides which simulation to set up is a hypothesis-planning agent that's looking at the conversation we were having and saying, "Yes, that, and maybe these other things, and let's go run a thousand of them." And then there's a set of agents that execute that, a set of agents that can write analysis code for what comes out of those very complicated simulations, an analyzer that looks at the output from the analysis and post-processing code, and then a report that feeds back to me sitting at the prompt. So I can now launch that whole thing we were just looking at, the demo that I did, demo number two, and we've got agent-driven workflows that are going to close that loop and bring those answers back to us. That's just the beginning of where we're going, but our ambition today is very different from where it was before.
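To make the shape of that pipeline concrete, here is a minimal sketch under stated assumptions: every agent below is a hypothetical stub, and the planner, setup, execution, analysis, and reporting names are illustrative, not the lab's actual components.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    description: str
    parameters: dict

def plan_hypotheses(conversation: str) -> list:
    """Hypothesis-planning agent: turn the scientist's conversation
    into a batch of candidate simulations (stubbed here)."""
    return [Hypothesis(f"variant {i} of: {conversation}", {"knob": i})
            for i in range(3)]

def setup_simulation(h: Hypothesis) -> dict:
    """Simulation-setup agent: build an input deck for the code."""
    return {"deck": h.parameters}

def run_simulation(deck: dict) -> dict:
    """Execution agents: submit to HPC and collect raw output (stubbed)."""
    return {"raw": deck["deck"]["knob"] ** 2}

def analyze(output: dict) -> float:
    """Analysis agent: post-process raw output into a figure of merit."""
    return float(output["raw"])

def report(results: list) -> None:
    """Reporting step: feed ranked results back to the human at the prompt."""
    for h, score in sorted(results, key=lambda r: -r[1]):
        print(f"{score:6.1f}  {h.description}")

conversation = "shape the burn-wave front"
results = [(h, analyze(run_simulation(setup_simulation(h))))
           for h in plan_hypotheses(conversation)]
report(results)
```

The design point is the closed loop: the human supplies the hypothesis in conversation, and each stage hands structured output to the next until ranked answers come back to the prompt.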
[00:21:04] Just to be really clear, I am energized about being in that loop, because I get to do what I always wanted to do, which is think about the physics. What if I turn on this particular physics package, or turn off that one? Without AI acceleration in the loop and the agentic workflows, I have to code all of that up by hand. I have to manage simulations. I have to wait for their computational meshes to tangle, and then go in and untangle them by hand. And it's just super annoying. Now I have computational agents to do that for me, the stuff I never wanted to do, so all I do is think. That's the goal. That's what we're trying to get to. The human is always in the workflow thinking; it's never in the workflow laboring, doing the computational grunt work. That's for the computational systems to do.
Speaker 2: That's a great articulation of it. We talk about our mission at OpenAI for Science as accelerating science by accelerating scientists, allowing you to get so much more done than you could without AI.
[00:22:07] Speaker 2: When you look at the models today: you gave the example from a year ago that really impressed you, and a year ago you blew my mind when you showed it to me. What today really impresses you? And conversely, what are the biggest limitations you still run into?
[00:22:19] Speaker 3: The thing that really impresses me is the sophistication with which the model manipulates really complicated concepts, and does it in the background. I feel like when I have a conversation, I'm programming in physics, not programming in code. It's as if the model and I have set up a chessboard of physics pieces, and we decide which physics we're going to turn on and which physics we're going to turn off. I could never have done that in the past, because it would have been a few-hundred-line code refactor on the thing I'm working on every time I wanted to take out a package or put in a package, and I just wouldn't do it. The potential barrier is just too high. So the thing that's most impressive is how much the bar has been lowered to actually do something transformational and say, "Okay, this is the thing that I want to go do. I want to turn off alpha heating. I want to turn on alpha heating. I want to make it half as strong as it is in nature. What happens? What if I'm wrong about these numbers and they're not actually what the experiments said? What would that be like?" All of that is possible.
[00:23:20] Okay, but there are still things that are not great; I just had conversations with folks about this. The thing that models are not great at is introspection about results. So right now it's like this, you know, semi-Nobel laureate in your back pocket for executing and moving around these conceptual chess pieces on the board, but it's really terrible at evaluating the result it just got. I showed you that in the demo a little bit: the first thing it does is say, "Yeah, I made all those plots you wanted, here they are." And they're just noise, right? So that's a downside. The upside is, as soon as you point the model to the fact that it's noise, it's like, "Oh, you wanted me to think about that. That's great. Let's go do that!" And here are all these really great things we can do. So in the chat that I showed you, despite the frustration, there are lots of turns that are just me saying, "Yeah, go do that. Okay, go do that. Sure, go make it better. Yes, definitely please improve the thing." So that's a frustration. But I think there are also solutions, and it's really exciting to see what the next version is going to be like.
[00:24:19] Speaker 2: Your story reminds me: we were talking with a very well-known math professor, and he was saying his eye-opening moment with ChatGPT was a problem he had. It was unsolved, he expected it to be true, and he was very confident that, given some number of hours, he could show it to be true. He gave it to his postdoc and said, "This is part of this larger thing I'm working on. Can you go make sure that this is true?" And the postdoc was busy, or had a hard time with it, or whatever.
[00:24:50] So a week passed, he didn't have anything, and he was like, this isn't happening fast enough. He gave it to ChatGPT, and it came back with the answer and a proof. And it was like, oh wow, okay, this is a new way of doing math. But what he said was, it's not just that it was able to solve this one problem. It's that before, I'm just one person and I have, you know, two postdocs, so I can explore three directions at once, roughly. But if instead you live in a world where you can type a prompt into ChatGPT and let it think for an hour, you can explore 10 different directions, or 20 different directions, in the space of an afternoon. And, again, that's acceleration.
So I thought we could switch gears a little bit and talk about policy. The White House just recently announced an executive order around the Genesis Mission, which is essentially a national bet that AI can be a force multiplier for scientific progress. I imagine not everybody in the audience is familiar with this, so maybe you could just talk about what the Genesis Mission is and what its goals are.
[00:25:58] Speaker 3: Yeah, I'm super excited about it. As Natalie mentioned at the top, I wear a Lawrence Livermore National Lab hat, and I'm also the technical director for the Genesis Mission, so I've been working for some time to bring this into existence. What is it? Basically, it's the government's effort to build out a center of technological understanding for AI for science. It'll happen inside the Department of Energy. And the fundamental thing we're going to do is build out what we're calling the Platform, with a capital P. The Platform really has two pieces. On the left-hand side is computational intelligence: imagine a GPT-5.2-style model having hypotheses and driving those agentic workflows I described in answer to your question. But what it's driving is the eight decades of scientific capability built out from the Manhattan Project until now: high-performance computing, high-precision experimentation, high-consequence, gigantic production capabilities, all enclosed in self-reinforcing loops.
To make that a little more concrete: imagine the workflow we talked about, where there is not one hypothesis but millions of hypotheses, driven by computational intelligence, that get down-selected by simulations on high-performance computers to the places we really want to look; that then get pushed on to experimental facilities that further refine those hypotheses; that get driven into production facilities, where stuff is actually built and executed, maybe inspected on machines as they are completing parts. And all of that information is in constant currents coming back into the centralized intelligence piece to refine, learn, and improve. So what you have is your government dollars driving facilities that have been invested in for eight decades, but now wrapped in AI and computational intelligence, acting not just as the individually sophisticated things that they are, the light sources and the neutron sources, but altogether as one new instrument: the most exquisite, most complicated instrument that humans have ever built. That's what we call the Platform, and the Genesis Mission is trying to produce it.
[00:28:06] Speaker 2: Yeah. First of all, it's super exciting, and we're very ready and eager to work on this together. But when you look at the history of science, a lot of the big leaps in progress have come from new instruments: telescopes, where you start with Hubble and now we have James Webb; particle accelerators; gene sequencers; supercomputers. How do you think of AI in the pantheon of scientific instruments? How does it fit in?
[00:28:29] Speaker 3: So AI is the final wrapper that allows us to bring into existence a brand new instrument that hasn't existed before. There have been a few revolutions just in the last few years. It's hard for us as humans to absorb this, but look at what we're able to do in large-scale science just in the last couple of years. There is AI as computational intelligence. Only in the last decade have we had distributed sensing and high-speed networking that can get digital data off of these systems at very large speeds. We've invented exascale computing, where we can produce 10^18 floating-point operations per second as we simulate what's going on.
All of that is wrapped together to drive flagship facilities like the National Ignition Facility, which, for the first time in human history, produced more energy out of the fuel than we put in with the driving laser. This entire stack, once integrated in one computational intelligence wrapper, is a completely new instrument, because we can not only ask questions quantitatively faster; there is a quality in and of itself to having all of it in a closed, self-reinforcing loop, something that has not existed on the planet before. There has been no light source that could operate in that way or at that speed, no neutron source, no laser. Now we have the actual opportunity to accelerate not just one of them, but all of them together.
[00:29:48] So we can ask a question that says: what if I need to know the way this thing looks in a light source, the way it reacts when illuminated by neutrons, and the way it behaves when initiated by being driven by a laser? What does that look like in simulation, and actually in the real world? And what can I learn from iterated experiments on it? That's what we're trying to breathe into existence. That is a brand new kind of instrument.
[00:30:10] Speaker 2: Awesome. And maybe, because I know this is top of mind for you and for me and probably for a lot of folks in this room, can you talk about the national security implications of all of this? It's science, obviously, but it's also more than that.
[00:30:20] Speaker 3: Yeah, this is something that I'm really passionate about. There is a global race that we are in. There is really one capable global competitor in the AI race, and that is China. They've declared very clearly, and openly, that they intend to dominate the US in AI by 2030. We think the United States should win that race. We think our techno-economic competitiveness and our national security depend on these kinds of accelerated AI solutions. And when I say that, this is what I mean by winning: what we need is what I call innovative overmatch. The United States needs the ability to mint solutions and mitigate threats on ridiculously short time scales, in whatever technological direction we need to go, to outrun our adversaries, to maintain the Western rules-based order, and to establish our competitiveness and our security on the global stage. So what we're doing for scientific discovery also ensures our innovation advances and makes sure that if someone tries to hold the United States at threat, we have an answer to that threat and can mitigate it in very, very short times.
[00:31:29] Speaker 2: And by the way, I have three more questions, and then we'll go to Q&A, so start thinking about what you want to ask Brian. All right: think over the next, say, 10 years. We have our partnership between OpenAI and Lawrence Livermore. We have the Genesis Mission. You obviously have other frontier labs doing great work as well. What becomes possible over the next 10 years that just isn't feasible today? Whether it's speed, scale, or the kinds of questions we can take on: what bottlenecks do we just take for granted today in science that you think may be completely eliminated over the next decade?
[00:32:06] Speaker 3: Everything that I think we can see clearly in our headlights comes from speed, but there's a qualitative difference that emerges from that. So let me give a really concrete answer. I'm really passionate about advanced manufacturing and 3D printing: the ability to make things that are not just hobbyist stuff in your garage, but real parts for real, important systems. What we can do almost today is have an AI sentinel, an agent that runs on a 3D printer, gathering on-machine inspection data from a part as it's being built, that can watch and say, hey, you just printed a defect into that part. It can pass that back to an AI-driven surrogate and do multiphysics analysis to say: if I pass that shock wave I talked about earlier through this part, does that defect cause a problem? Is it a fatal defect? It can even do, maybe, real-time AI-accelerated redesign of the anti-defect that should be printed on the next pass, so that you now have a part that's locally off-nominal but globally nominal, and you don't have to stop, you don't have to start over, you can just keep going.
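A minimal sketch of that sentinel loop, with hypothetical stubs for on-machine inspection, the physics surrogate, and the redesign step; none of these names or thresholds come from the talk.

```python
from typing import Optional

def inspect_layer(layer: int) -> Optional[dict]:
    """On-machine inspection: return a defect record, or None.
    (Stubbed: pretend layer 7 shows a small void.)"""
    return {"layer": layer, "size_mm": 0.2} if layer == 7 else None

def defect_is_fatal(defect: dict) -> bool:
    """AI-driven multiphysics surrogate: would a shock through this
    part make the defect a problem? (Stubbed as a size threshold.)"""
    return defect["size_mm"] > 0.5

def redesign_next_pass(defect: dict) -> dict:
    """Real-time redesign: an 'anti-defect' correction for the next
    pass, so the part is locally off-nominal but globally nominal."""
    return {"compensate_at_layer": defect["layer"] + 1}

for layer in range(10):
    defect = inspect_layer(layer)
    if defect is None:
        continue  # clean layer, keep printing
    if defect_is_fatal(defect):
        print(f"layer {layer}: fatal defect, stop the build")
        break
    patch = redesign_next_pass(defect)
    print(f"layer {layer}: non-fatal defect, applying {patch}, keep going")
```

The structural point is that the expensive decision (stop, or keep going with a correction) is made inside the build loop, layer by layer, rather than after the part is finished.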
[00:33:05] That's the chance to accelerate the time to produce things, to shorten the time to discover things, and to drive these things in loops. That's just one example for us to imagine. What actually emerges is the capability to explore and validate our scientific ideas at a speed much closer to our capability to generate those ideas. So there's this qualitative phase transition. Where we are right now, without AI, is that I have lots of ideas about things that could happen in the world. Most of them are bad; that's how humans work. If you want to have a good idea, you have to have lots of them. So I can have lots of ideas, but I've got to gamble on which one I think is the best one. I've got some bias and intuition, and there are probably some great ideas that I'm leaving on the table.
[00:34:01] Qualitatively, the future we're talking about is the ability to have many, many, many ideas, to go explore all or most of them, to find out which one is actually the best, and to double down on the ones that really matter. So it's not just that we'll go faster; we're going to do the most important things we can do with our resources. And that self-reinforcement of being faster to execute, executing only the very good ideas, and using those good ideas to have better ideas is a self-improving, self-reinforcing system that just cannot be beaten. And this, by the way, is why, in a race to do this, we cannot lose. We absolutely have to go do it.
[00:34:20] And just to foot-stomp something that's really important: there is the run-up to artificial superintelligence, or artificial general intelligence, which is absolutely critical. But we can't let it be the case that if that happens right here today, and maybe it just happened on the sixth floor of this building, I don't know, nothing changes for the United States.
[00:34:46] We have to have AI-ready supercomputers and experimental apparatus and production capability, so that the moment that pops hot, the United States takes advantage of it and is the first to capitalize on it. So in the race that you guys are in, we need you and the other frontier laboratories to push very hard. But we on the public side, and the rest of the United States, have to be ready to catch that and do something transformational with it.
[00:35:12] Speaker 2: It's a little bit of: speed, or quantity, has a quality all of its own, all of a sudden.
[00:35:18] So now I think we're at this moment where the Overton window around AI and science is just beginning to change. People are just beginning to realize, at some scale, that AI really can influence and accelerate the pace of scientific discovery.
[00:35:35] What do you want to be able to say 12 months from now? Like, you and I are up here in front of the OpenAI Forum 12 months from now: what does success look like? What should we be able to say to be proud of the progress we've made on AI and science over the next 12 months?
[00:35:52] Speaker 3: Well, I'll put my Genesis Mission hat on: we ought to be able to say that we're making the transition from one new phase to another. The thing that I think will happen within that 12-month period you're talking about is a real step up in the automation of science: the AI-driven generation of hypotheses, the setup of tools, the execution of those things.
[00:36:12] We ought to be able to say that we've gone beyond that, to the AI extraction and capture of the output that comes from it, to self-reinforcing improvement. And there are some very concrete things that that means. It means we ought to have agentic systems with hundreds to thousands of agents running very complicated workflows. They ought to have sustained runtimes that are not days but approaching weeks. And they ought to be driving systems of sufficient complexity that we're not talking about a connection to a particular piece of apparatus, but to an entire facility or network of facilities.
[00:36:44] So 12 months from now, success should be progress on very complicated workflows, with very long runtimes, against systems of a complexity we would have thought impossible or computationally intractable, all wrapped up in the first systems that demonstrate it. And that will take the scientific Overton window you're talking about and slide it in whichever direction complexity and goodness lie on that scale.
[00:37:06] Speaker 3: Let's turn it vertical; it'll slide up.
[00:37:09] Speaker 2: Awesome, all right. My last questions, so get your questions ready; we'll do a little bit of rapid fire here before we go into Q&A.
[00:37:16] Speaker 3: All right.
[00:37:17] Speaker 2: Most common misconception about fusion?
[00:37:20] Speaker 3: The most common misconception is that it's fission; that's probably number one. The second is that it's a thing only for the future. With government will and sufficient funding from both the public and private sectors, fusion is a thing we can have in our lifetimes.
[00:37:37] Speaker 2: Love it. Outside of a computer, what's a tool that you couldn't do your job without?
[00:37:43] Speaker 3: This one I got in advance, and I thought about it. The first thing I was going to say is my cell phone, but I think that's a computer. It's actually a whiteboard. I cannot get by without the ability to draw. And by the way, if we could have an AI-connected whiteboard, so I could share my context and the model's context via whiteboard, I'd buy it today.
[00:38:02] Speaker 2: All right, I love it. What field, other than fusion and AI obviously, should people be watching closely?
[00:38:08] Speaker 3: I probably gave this one away in my previous remarks, but I think automation is everything. Everything from robots that do things that people do, to automation that does robotic chemistry in laboratories, to things that automate production capabilities. Anything that gives embodiment to the AI system and lets us have ideas that move mass and energy in the real world, that's the thing not to sleep on.
[00:38:35] Speaker 2: Most frequently used emoji?
[00:38:37] Speaker 3: Yeah, that's got to be the facepalm. It's so good for when I do dumb things, which is pretty frequently, and it's pretty helpful for getting out of situations where I want to explain that I just didn't get it.
[00:38:50] Speaker 2: Okay, so we're going to open up to Q&A now before I do. I want to just say a couple words of thanks first to Brian for being game to come down here with us and for just being such an incredible partner from the minute that you and I met all the way through to now. You've been spearheading this for the US for our national labs. It's such critical work. Thank you.
[00:39:13] Speaker 3: Thanks, Kevin. Thanks, everybody.
[00:39:18] Speaker 2: And I also just wanted to give a quick shout out to everybody on OpenAI's engineering, government, policy, partnerships, and all the other teams that have worked to make this partnership between OpenAI and Lawrence Livermore and the national labs possible. There's been a huge amount of work that's gone into it, and there's a lot more great work ahead, but it has been nights and weekends and blood, sweat, and tears from a lot of people. So thank you to the team.
[00:39:44] Speaker 1: Thank you. Awesome. We're going to start with a question from the virtual audience, and then we'll move to the in-person audience. This is from Dr. Christopher Stubbs, professor of physics and astronomy at Harvard: "Much, perhaps most, of science is incremental, and current tools are certainly accelerators for that. But what are the prospects for these models making a big conceptual leap, such as quantum mechanics or general relativity?"
[00:39:51] Speaker 3: You think that was for me?
[00:39:53] Speaker 1: Yeah.
[00:39:56] Speaker 3: I think we're still trying to understand how to use these models in that discovery mode. The question is astute. We are going to see acceleration of the things we think we know how to tackle. But I see in my own efforts that as I'm able to answer a question, I can ask bigger and bigger questions. And so I think there's some kind of transition that he's getting at: how do we make the jump to a brand new discovery? Where do we get something like general relativity or quantum mechanics? I think that's going to require a lot of very expert and skilled use by humans for a long time. Given the things I said before, those ideas will come from putting computational intelligence and humans together against our ability to squeeze nature, get it hot, accelerate it, make it go fast, and give us surprising things that help us and the models together ask those big questions, like: hey, do I need a new theory to explain this?
[00:40:44] If I go back to what I was saying before, there are a bunch of questions that in the past I would be afraid to ask because the lift to get there, if they were right, is heavy and if they're wrong, is a waste of too much lifetime. We can go after these big, transformational new theoretical things because we can go so much faster that we can attack the things that we wouldn't ordinarily. So I think there's actually an element of just being bold and going and trying to do things. You probably have to be a genius, and you probably also need computational intelligence, but you put those things together, the courage and the inherent talent of some humans and some computational amplification, and I think that does bring in years the capability to do transformational things. I also think it goes back to what you said, that even if you go look at the history of quantum mechanics or GR, a lot of that started with experimental results that highlighted the fact that current theories were not going to hold, were not actually the correct theories. And then you get people getting together and bouncing a bunch of ideas off of each other and being wrong a lot before you start to find some ideas that are right.
[00:42:08] And if you can accelerate the pace of bouncing ideas off of each other, between humans, between humans and AI models, et cetera, then I think you get to those answers faster. Our goal is certainly not to replace scientists with AI; it's to augment them and accelerate them with AI. And I think that does come in lots of small leaps that collectively make a big leap.
[00:42:30] Speaker 3: Yep, totally agree.
[00:42:32] Speaker 4: Hey, Josh Bloom, UC Berkeley. Thanks for the great conversation. You talked about programming in the language of physics rather than the other kinds of languages we're used to. I think we can do that now; many of us in the room can do it, because we're trained as physicists to be able to do that. Maybe you could talk a little bit about the future of education: we need people to be able to program in that language, but if they're not trained to do the things that many of us were trained to do in the trenches early on, it's going to get harder and harder to ask the more important questions.
[00:43:08] Speaker 3: Yeah, I think that's a really important and slightly difficult conversation. Since most of my work has been in a national laboratory, I'll reflect on training of a similar kind: what happens when we bring people into our space. There are a couple of things I learned to do as a professional user of high-performance computing supercomputers, which is a way of attacking physics problems. I had to learn to set up those codes by figuring out the right equation of state or closure model to put in; the right way to mesh a problem up so I resolve the things I need to and not the things I don't; and to get wrong answers and recognize them, so that I can recognize right answers when they come at me. That's the programming in physics, right, that we're both alluding to.
[00:43:50] I don't fully know the answer to whether I can get to the place where I can move the big pieces on the chessboard without having done the little things underneath. We don't necessarily need that to train the AI systems, but we might need it to train the human systems. So one wonders. We're not above teaching third and fourth graders how to do arithmetic, and we generally stop doing arithmetic by hand somewhere around algebra one or algebra two: yeah, you can do number stuff, but from there it's really about conceptually moving things around. You still have to have that foundation, though.
[00:44:20] So I think there are some kind of training wheels that come off as people learn: you have to solve the problems in the textbook, you have to do some numerical programming, you have to understand what's going on, so you can recognize that this answer smells really bad and that one is really good, and these are the directions we go. There's a notion in there that is all about building intuition. And intuition, I think, gets a bad rap, because it sounds like this magical thing where scientists just know the answer.
[00:44:42] It's not that. It's that you've been down this path, or one a lot like it, a whole bunch of times before, and statistically, when you went left at this fork, it was good, and when you went right, it was bad. So if you're going to pick one first, go left. And I think what we have to do is train people to have that intuition, which is not magic; it's having seen this path enough times that you've got a feeling for what's right and what's not. But also not to be so cocky as to think that when a computational intelligence suggests the other path, it must be the wrong thing to do. Then why are you using AI at all? Go do it on your own. So there's a sophistication and maturity there that I don't think we have an answer for as a community. It's going to come from academia, it's going to come from the public side, and it's going to come from private solutions.
[00:45:31] Speaker 5: I'm John Cumbers, a former NASA bioengineer, and now I run a network for synthetic biology startups and investors called SynBioBeta. You gave some amazing examples in physics. It feels like the last 100 years have been about physics; I think the next 100 years are going to be about biology. I'm curious what's happening at the national labs and the Genesis Mission around biology. I'm also curious about OpenAI's view on the next advances in biology and the interaction with AI.
[00:45:59] Speaker 3: I'll give a short answer so that Kevin's got time to answer. For Genesis, biology and healthcare will be a central thrust that we care about. We will have grand challenges and lighthouse challenges in those areas. If I temporarily put on my Lawrence Livermore National Lab hat, the thing I find most interesting in our AI work has been the acceleration of antibody production with AI-accelerated loops. It looks like using HPC to make antibodies for SARS-CoV-2, starting from SARS-CoV-1 antibodies, that are actually meant to bind 10 different pathogens that might evolve as we go forward. All of that was done in a two-to-three-week computational sprint, because the Department of Defense said to us: we're afraid we're going to send warfighters into a space where they're going to be hindered; how fast can you go? There was an answer. Some of the predicted antigens have since emerged in the wild for SARS-CoV-2, and the antibodies the team produced are actually effective against them and are in clinical trials. So that's just one spark.
[00:47:05] So I think you're right that the future is biology in some sense. As a physicist, I'm okay with that, because biology is physics. But it's not only the acceleration of the design; it's also: can you make it? The automated production, the synthesis, it's right there in your name. I think that is just super important, and it needs tools from the private sector to make it go too.
Speaker 2: I'll be brief, because I think we should maybe make this another forum event. We think about accelerating science, and there are few sciences more important to accelerate than biology in terms of the positive impact we can make in the world.
[00:47:32] One of the things that I am most excited about in this particular area: as a human researcher, it's really hard to be deep in many areas at once, and biology in particular is so broad. When you partner with an AI, though, the AI is pretty deep in just about every topic in the world. Having a collaborator that is tireless, infinitely patient, and extremely knowledgeable about a whole bunch of areas that you aren't is its own form of acceleration. It's a collaborator across any field you want, available whenever you want.
[00:48:13] And so teaching the model to understand how to use biological tools, and scientific tools more broadly, is a huge part of that. I was reminded of the earlier question about general relativity. Go back to Einstein developing general relativity: he got stuck for a while because he didn't know how to do Riemannian geometry, and he ended up having to work with Marcel Grossmann and others. That took some time; it was two humans who needed to get together, and somebody needed to make that connection. Imagine a world where he had an AI model that could say: for the things you're doing, you just need these techniques from this other field of math; here you are.
[00:49:06] And so, again, in terms of acceleration, having an AI model that knows so many things beyond what any individual human could possibly know, in fields they don't specialize in, is a massive form of acceleration, if we can get it right.
[00:49:13] Speaker 3: This question is for both of you, Kevin and Brian. Thank you so much. Your story resonates with me because one of the things that has been accelerated most for me as a musician is the quickness of being in community with really high-level scientists, physicists, and mathematicians; in the last year I've become friends with folks, and been in spaces with folks, like you. So how are you seeing it on your side? Are you hanging out with a bunch of poets and doing vibe poetry? Because we catch a lot of flack for vibe coding and whatnot as musicians. For you, what has it opened up, expanding into other domains and seeing how folks are really pushing their own?
[00:49:40] Speaker 1: Yeah, it broadens things quite a bit. I'll confess that I got two undergraduate degrees, one in engineering and one in liberal arts, so I'm already inclined to hang out in the artsy spaces; I'm split into pieces like that. But the tool opens up community, the way you describe it, for me in two ways. One is probably unexpected: I can ask it questions about policy. I'm a scientist. I have this role in this national government mission. But anybody who thinks they know how the government works, you're probably wrong; it's super complicated. So I can ask it things that are way outside the lane of science and engineering. I can ask it how I should approach something ethically: is this the right thing to go do or is it not? And it points me in the direction of resources where I can go deeper. I can do fun stuff with literature just because it amuses me. It doesn't always have to be work.
[00:50:50] But the other thing that I really feel on a daily basis, because it's my day job, is that science is so broad and so specialized these days that to do large-scale science is to have to bridge into completely different communities, sort of the way Kevin was describing. I feel much more confident about sitting down and reading a paper that I really don't have the tools to understand, because I've got someone to hold my hand so that I don't look like an idiot when I go talk to an actual mathematician. I didn't understand exactly what these geometric algebras were, and I wanted to go understand what was going on. Now I know what a bivector is in a way that I didn't fully before, and instead of getting through just the abstract of the paper, I can actually get through it. In fact, I can now have a conversation with a real mathematician without being embarrassed and without feeling inhibited. I can be bold and go do it.
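As an aside for readers meeting the term for the first time: what follows is standard geometric-algebra material, not something defined in the session. The geometric product of two vectors splits into a symmetric scalar part and an antisymmetric bivector part,

\[
ab = a \cdot b + a \wedge b, \qquad a \wedge b = \tfrac{1}{2}(ab - ba),
\]

where the bivector \(a \wedge b\) represents the oriented plane spanned by \(a\) and \(b\), with magnitude equal to the area of the parallelogram they define: it is to planes what an ordinary vector is to directed line segments.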
[00:51:40] So for me, the thing that I always feel, and this is maybe just my psychology, is that it reduces my anxiety about trying new things and increases the courage I can bring to forming some new relationship: to ask a person at a national laboratory, to knock on the door of somebody who's a world expert in the thing I'm not, and say, hey, I have a question that I now think is probably not dumb; can we start here and see what we can do? So it lowers a potential barrier, not only to doing things for myself, but to having community that doesn't exist without it.
[00:52:07] Speaker 2: Well, let me tell a really quick story. This was from a Fields Medal-winning mathematician. He was saying: over the course of my career, I've written a lot of papers, and in many of them I've known there were directions I could go down that I was very confident would bear fruit, but they were a little outside my area of expertise, and it just didn't feel like the most efficient thing; maybe somebody else would pick that fruit over time. Nobody has. There are still all these untrodden paths in my research, and I just didn't quite have the confidence to go after them, until now. Now, with these AI models that can do so much, I'm going to go back through all of my old papers, take those paths, and prove a bunch of things that I know I can prove, but so much faster. Anyway, it struck me as very similar to your story, and hearing that from a Fields medalist was powerful.
[00:53:08] Speaker 2: Hi, my name is Yashna. Thank you so much for the talk. In the back here, hi. I'm a researcher and student at UC Berkeley, and I wanted to ask what you think the biggest bottleneck is for the data needed to push science with AI. Obviously there are security concerns for different institutions, but it also takes a massive amount of time and resources to get experimental data. So in the future, do you see our efforts being put toward that, or more toward simulated data? I'd love to hear what you think.
[00:53:39] Speaker 3: Do you want to start, Kevin? You've got ideas about what the data needs are. I've got some ideas.
Speaker 2: Far be it from me to speak about the Genesis mission in front of you, but this is one of the most fundamental, driving parts of the Genesis mission. The national labs have done incredible scientific work over many decades. Some of it is classified, obviously, and will stay classified, but a lot is unclassified, and a lot of that data can be fed into frontier models to help them better understand science. Why not do that? Let's go do that. My general mental model of where a frontier model is: it's incredibly smart, it can learn just about anything, and it generalizes very well, but it doesn't know from first principles everything about the world that researchers had to establish experimentally over decades. You still have to teach it, and when you do, it learns, and that's where this data comes in. It's one of the most exciting things about the Genesis mission and the opportunity here.
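Mechanically, "feeding unclassified data into a model" usually means fine-tuning. A minimal sketch in the style of the Hugging Face transformers library follows; the model name ("gpt2" as a small stand-in, obviously not a frontier model), the file "unclassified_lab_records.jsonl", its "text" field, and the hyperparameters are all placeholder assumptions, not anything Genesis has specified:

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    # Load a small base model; GPT-2 has no pad token, so reuse EOS.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Hypothetical JSONL file of unclassified records, one {"text": ...} per line.
    dataset = load_dataset("json", data_files="unclassified_lab_records.jsonl")["train"]

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True,
                            remove_columns=dataset.column_names)

    # Standard causal language modeling: the data itself is the supervision.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
    args = TrainingArguments(output_dir="science-tuned",
                             num_train_epochs=1,
                             per_device_train_batch_size=4)
    Trainer(model=model, args=args, train_dataset=tokenized,
            data_collator=collator).train()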
[00:54:40] Speaker 1: Yeah, I totally resonate with that. The one thing I will say, from the perspective of somebody driving this, is that our Under Secretary for Science, Dario Gil, and our Administrator of the National Nuclear Security Administration, Brandon Williams, have said we're going to create the world's most exquisite data set for scientific AI. We're going to do that for the country. There are two major bottlenecks we're looking at. One is that data is usually collected badly. If you're out there doing professional science, you know the first thing that happens is somebody acquires the data, it goes into a spreadsheet on their laptop, and a particular PI knows what the cable calibrations and flat-field settings were for a camera; unless you have all of that, the data is not useful. So we've got to capture data in a way that is AI-native and digital-ready: the moment it's born, it has to be born into our AI environments and into our data sets. We've got to fix the problem of data being born in a form that's difficult to use, and we'll try to do that in the Genesis mission.
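One way to picture "born AI-ready" is a record that carries its calibration context from the moment of acquisition. A minimal sketch; the field names and values are invented for illustration, not an LLNL or Genesis schema:

    import json
    from dataclasses import dataclass, asdict, field
    from datetime import datetime, timezone

    @dataclass
    class Measurement:
        # The raw reading plus everything needed to interpret it later,
        # captured at the moment the data is born.
        value: float
        units: str
        instrument_id: str
        calibration: dict      # e.g. cable calibrations, flat-field settings
        acquired_by: str
        acquired_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    m = Measurement(
        value=3.7e14,
        units="neutrons",
        instrument_id="camera-04",
        calibration={"flat_field": "ff-2025-01-12", "cable_delay_ns": 4.2},
        acquired_by="pi_jdoe",
    )
    print(json.dumps(asdict(m), indent=2))  # machine-readable from birth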
[00:55:39] The second thing is that data is born with an intentionality that isn't quite right for training models. It's great for answering a hypothesis: you put in a response to an RFP and collected a small data set. I've done it. But these data sets are disconnected islands in parameter space. What a model might like is a uniform interpolation across that space, with weighting based on the physics we care about. Generating that kind of data is something we're imagining in Genesis, where we might take 10% of our facility time to generate data for building AI models of the way science works: not data for a particular hypothesis, or, as the Under Secretary says, the happenstance of who asked the question, but general data that is smeared over the space in a way that makes it useful. So we will try to attack exactly what you're talking about and show a way to build out these data capabilities.
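A standard generic technique for "smearing" experiments over a parameter space, rather than clustering them around individual hypotheses, is space-filling sampling such as a Latin hypercube. A minimal sketch with invented parameter ranges; this illustrates the general idea, not the Genesis plan:

    import numpy as np
    from scipy.stats import qmc

    # Two hypothetical facility parameters,
    # e.g. laser energy (MJ) and shell thickness (um).
    bounds_low, bounds_high = [1.0, 60.0], [2.2, 90.0]

    sampler = qmc.LatinHypercube(d=2, seed=0)
    unit_samples = sampler.random(n=32)  # space-filling points in [0, 1)^2
    designs = qmc.scale(unit_samples, bounds_low, bounds_high)  # physical units
    print(designs[:3])  # first three proposed experimental settings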
[00:56:24] Speaker 2: Hello, my name is Sanjana. I'm also a student and researcher at UC Berkeley.
Speaker 1: I was too, for the record. Berkeley, Berkeley, Berkeley. This was not planned. Sorry, Stanford, if you're nearby.
Speaker 2: Yeah, go Bears. My question is very similar to the one asked before, so let me build on it: what parts of the Genesis mission are focused on safety, on evaluating safety, and on how we evaluate fairness and bias within these frontier models? Especially when applied to science, this can get into some very tricky fields. I know there was a paper looking at how AI can generate bacteriophages, and that opens up, or could open up, a security risk. So I'm curious to hear your thoughts on that.
[00:57:16] Speaker 1: Lots of thoughts. This will have to be a quick answer, since it's our last one. I'll say safety and security are built into everything that we do. Let's start not even with AI. What we do for the Department of Energy is make lots of high-consequence decisions about weapons issues, nuclear issues, biological issues. All of those have safety implications. There is a set of nested, interlocking safeguards, policy and physical, cybersecurity and computational, that keep us from doing harmful things, and all of those are going to remain in place. We are going to have a Hippocratic approach: we will do no harm when we go do these things. We are also trying to engineer systems that accelerate us but still operate within those safeguards. So we are inherently trying to make safe systems.
[00:58:01] It turns out that my wife also works at Lawrence Livermore National Laboratory. Our friends and family live near where we work. We operate inside these systems, so we're quite interested in understanding the questions of bias, safety, and security that arise. Bias is also, intentionally, part of the scientific endeavor: we are deliberately biasing these models toward what we believe is true and away from what we believe is not. So we have to understand how to build in biases toward distributions we want to mimic and how to take out biases toward distributions we don't. In some sense, on the science side, we have an answer in the back of the book: we can go do an experiment, see where we were biased the wrong way and where we should have been biased, and start building capabilities with private partners to help control the way these models answer and the way they steer. So we are definitely thinking about safety and security. As I said at the very beginning, we're putting scientists back in control of these systems, helping them think about what to do. We are not automating these things and taking scientists completely out of the loop. One, most of us want to keep our jobs; that seems like a good thing. And two, humans are necessary for all the things we don't trust machines to do right now: to verify, to validate, to criticize, to understand what to do next.
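One generic statistical mechanism for building in or taking out bias toward a distribution is importance reweighting. A minimal sketch with toy Gaussians; this is a textbook technique offered for illustration, not a description of LLNL's actual methods:

    import numpy as np

    # Weight each sample by the ratio of the density we want to mimic
    # to the density the data was actually collected under.
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 2.0, size=10_000)  # collected data: wide Gaussian

    def p_target(x):  # distribution we trust (e.g. matches experiment)
        return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

    def p_data(x):    # distribution the data was collected under
        return np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2 * np.pi))

    w = p_target(x) / p_data(x)        # importance weights
    print(np.average(x**2, weights=w)) # ~1.0, the target's variance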
[00:59:13] Speaker 3: Awesome. Well, I just wanted to say thank you to every one of you here in person and to everybody online for joining us. And a huge thank you to Brian for coming in here today.
[00:59:21] Speaker 1: Oh, thanks Kevin. Thanks everybody.