OpenAI Forum

Memory’s Blueprint: How AI Is Uncovering and Rebuilding the Architecture of the Mind

Posted Jun 26, 2025 | Views 201
# AI Science
# Innovation
# Scientific Advancement
Anton Maximov, PhD
Professor and Chair, Department of Neuroscience @ Scripps Research Institute, Dorris Neuroscience Center

My laboratory at The Scripps Research Institute (TSRI) is dedicated to understanding how neurons in the mammalian brain form their synaptic networks, and how these networks are reorganized during memory encoding. We approach these questions using mouse genetics, deep sequencing, biochemistry, optical imaging, three-dimensional electron microscopy (3D-EM), electrophysiology, and behavioral studies. Additionally, we are developing new methods to access and manipulate specific neuron types in brains of model organisms with small molecules, as well as methods for AI-based analysis of brain structures in 3D-EM volumes. I have been studying neural circuits for over 20 years, as a postdoctoral fellow and as an independent NIH-funded investigator. I have successfully mentored many young scientists, several of whom now hold faculty and postdoc positions at other universities, serve as medical doctors, or work in biotech companies. My group is part of the vibrant and collaborative local neuroscience community, which includes investigators from TSRI, UCSD, and the Salk Institute.


SUMMARY

Dr. Anton Maximov delivered a compelling presentation on how artificial intelligence is revolutionizing neuroscience by enabling the analysis of complex brain structures at an unprecedented scale and speed. His work uses AI-driven tools—particularly convolutional neural networks integrated with 3D electron microscopy—to uncover the nanoscale architecture of long-term memory. This research, previously impossible with manual techniques, showcases how AI transforms the pace, precision, and possibility of scientific discovery. Dr. Maximov also reflected on how AI is streamlining everyday scientific tasks, democratizing hypothesis generation, and setting the stage for more dynamic, scalable, and biologically inspired AI systems. His talk illustrated a future where AI not only accelerates research but redefines how science is conceived, conducted, and shared.


TRANSCRIPT

Welcome to the OpenAI Forum. I'm Natalie Cone, your community architect.

We like to begin all of our sessions with a reminder of OpenAI's mission, which is to build artificial general intelligence that benefits everyone.

Today's talk explores how a groundbreaking new study from the Maximov Lab at Scripps Research Institute has revealed the nanoscale structural hallmarks of long-term memory, reshaping long-standing dogmas, and offering fresh hope for treating memory loss and cognitive decline.

This research exemplifies how AI is advancing scientific discovery and helping to solve the hardest challenges that humans face. And this example is only a sliver of the work being done in collaboration with AI to advance scientific discovery.

I think we all have a very hopeful future to look forward to, thanks to the amazing scientists in this field and the community around the world who are pioneering the use of artificial intelligence in their research.

Before I introduce our speaker today, I want to give a shout out to his lab, especially Marco Uytiepo, Anton's PhD candidate, who played a pivotal role in shaping this research.

Fun fact, we plug forum experts into all sorts of innovative initiatives, including model evaluations at OpenAI. We got to know Marco well during the project that led to the BioRisk evals in 2024 and subsequent research paper building an early warning system. He was an awesome team member and contributed significantly to that project. I'm glad I finally have the space to thank him for that now in the community. So thank you, Marco, and we look forward to seeing you today.

About our guest of honor and one of our very first expert community members to be invited into the forum, Anton Maximov. Anton Maximov leads a laboratory at the Scripps Research Institute (TSRI) focused on uncovering the mechanisms by which neurons in the mammalian brain form synaptic networks and how these networks are reorganized during memory encoding. His team addresses these questions through a multidisciplinary approach that includes mouse genetics, deep sequencing, biochemistry, optical imaging, three-dimensional electron microscopy, which we call 3D-EM, electrophysiology, and behavioral studies.

In addition to investigating neural circuitry, Dr. Maximov's group is developing innovative methods to selectively access and manipulate specific neuron types in model organisms using small molecules. The lab is also advancing AI-based techniques for analyzing brain structures within 3D-EM volumes. With over two decades of experience studying neural circuits, both as a postdoctoral fellow and as an independent NIH-funded investigator, Dr. Maximov has successfully mentored numerous young scientists. Many of his trainees have gone on to pursue careers in academia, medicine, and the biotechnology industry. His laboratory is an integral part of a vibrant and collaborative neuroscience community that spans TSRI, UC San Diego, and the Salk Institute. On a personal note, Anton is also so lovely to collaborate with. He's provided support for this community from the start, and there's no one I'd rather host than him. Please help me in welcoming Dr. Anton Maximov to our stage.

Thank you, Natalie, for this generous introduction. It's truly a pleasure to be a part of this terrific group. My talk today is divided into two parts. In the first part, I will tell you a little bit about how we combine different technologies, including artificial intelligence, to reverse engineer the biological neural networks in the brain.

And in the second part, if time permits, I'll briefly share some personal reflections on how AI has already transformed many areas of science and how ongoing and future developments may completely transform the process of scientific discovery in the future.

By way of introduction, I would like to remind you that despite remarkable advancements in AI, the human brain remains the most complex computer known to science. It contains almost 100 billion neurons, and each of these neurons forms several thousand synaptic connections.

And this intricate and dynamic network underlies our ability for context-dependent reasoning, creativity, and emotion, capabilities that artificial systems have yet to fully replicate. Brains are also very vulnerable. A number of neurological and neuropsychiatric disorders affect our cognition. Some of these disorders are attributed to changes in specific genes. Others result from a combination of multiple factors, both genetic and environmental.

And unfortunately, despite decades of research, we're still very far from being able to develop meaningful treatments for many of these disorders, including memory loss, which is associated with neurodegeneration, or even normal aging.

So our research is driven by the realization that this problem is, at least in part, attributable to the fact that we still don't really understand how the brain actually works. And needless to say, we're also quite curious about the basic principles of information processing in the brain. Lastly, we hope that what we do may ultimately help develop new, more capable artificial systems, and I'll touch on this idea a little later in my talk.

What we do in neuroscience nowadays is possible because of technological advancements in many areas, not just biology, but also fields such as physics, chemistry, and computer science.

We have tremendously powerful techniques that allow us to monitor the activity of single neurons or populations of neurons in the brain in real time, understand how those neurons are organized at the structural level, understand what kinds of molecules build neurons and synapses, and manipulate these molecules using genetic tools. And of course, over the past 100 years or so, imaging techniques have been absolutely instrumental for our ability to unravel the brain's wiring, understand the organization of individual synapses, and understand how neural circuits and individual synapses are changed as a result of sensory experience, are changed when we learn, and are affected by disease.

Imaging is an integral part of our research program.

And just to give you a better idea about what the available imaging techniques can do in the field of neuroscience, I will use this analogy. Imagine that you wanted to study the organization of a very complex forest, a forest comprised of different kinds of trees and other plants.

They come in different shapes. They may be located in specific areas. They have unique morphological features. They play different roles. And they are also directly connected with each other. Some imaging techniques, in particular those commonly used in clinical settings, only give us a very vague idea about what's going on, structurally and functionally. It's a bit like trying to study the landscape or geography of the moon using a simple telescope that you could buy on Amazon.

There are also techniques that allow us to zoom in, in particular techniques that we can only use in animal models and that are not directly applicable to human studies for ethical reasons. These techniques allow us to study individual neurons and individual synaptic connections in greater detail. But the problem is that, oftentimes, these approaches are biased.

In other words, imagine that we wanted to study pines, and we could find ways of identifying them in this complex forest. We could study them in great detail. But the problem is that, by using this approach, we oftentimes miss everything else. And needless to say, those details can also be quite important. As I will show you in a minute or so, the imaging approaches that we use overcome these limitations. They can be used in both animals and humans. And they allow us to reverse engineer, as we call it, neural circuits at very high resolution, and to do so in a completely unbiased manner.

Before I go on, I would like to say that this study is performed in collaboration with our terrific colleagues at UC San Diego. We collaborate with a wonderful group of microscopists, led by Dr. Mark Ellisman.

In my lab, we have a number of talented and passionate young folks who are very interested in neuroscience. They're also increasingly becoming interested in AI.

Most of the work I will share with you today, which will be available at www.ncbi.nlm.nih.gov, was spearheaded by Marco, as Natalie already mentioned earlier today. And we have a number of newer graduate students, postdocs, and technical personnel who are also playing an important role on this team.

So as I mentioned already, the type of imaging that we use extensively, not exclusively, but as one of the core techniques in the lab, is called three-dimensional electron microscopy. The idea is that you can generate a 3D volume, as we call it, of brain tissue, in which all resolvable structures can be recognized and analyzed quantitatively.

And although this technique is powerful, we also face a couple of challenges. The first challenge is that we produce a tremendous amount of data. The second challenge is that this data is also incredibly complex and heterogeneous. Just to exemplify this point, what's shown in the right corner of the slide is a small EM micrograph, as we call it, that depicts neuronal synapses. And as you can appreciate, the structures are highly heterogeneous.

So 3D-EM, as we call it, was introduced in the field of neuroscience quite a while ago, but only recently have we started to fully realize the potential of this technology. The reason is that, until recently, the analysis of this data was largely done manually. And as you can imagine, this process of manually tracing and annotating different structures in these complex image data sets is very tedious and also very inefficient.

Well, thankfully, we now have AI. And these tools have really transformed the field.

And just to give you a very brief idea about how we actually do it: we train convolutional neural networks to recognize specific cells and specific subcellular features in brain tissue.

And when we get to an acceptable level of precision, we can do two things. We can scale up the volumes of our imaging data sets, and we can also multiplex. What I mean by that is that we can start analyzing not just individual structures, such as a neuronal synapse or an organelle such as a mitochondrion, but all of them together in the same exact image sets.
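
To make this concrete, here is a minimal sketch of the kind of voxel-wise segmentation such a pipeline rests on. This is not the lab's actual model; the architecture, the class list (background, synapse, mitochondrion, membrane), and the data shapes are illustrative assumptions.

```python
# Minimal sketch: train a tiny 3D CNN to assign a class to every voxel of an
# EM sub-volume. Hypothetical classes and shapes; not the lab's pipeline.
import torch
import torch.nn as nn

class VoxelSegmenter3D(nn.Module):
    """Fully convolutional 3D network producing per-voxel class logits."""
    def __init__(self, in_channels=1, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, num_classes, kernel_size=1),  # 1x1x1 conv: logits
        )

    def forward(self, x):
        return self.net(x)

model = VoxelSegmenter3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for one manually annotated training patch:
# a grayscale EM sub-volume and its voxel-wise labels.
volume = torch.randn(1, 1, 32, 32, 32)         # (batch, channel, D, H, W)
labels = torch.randint(0, 4, (1, 32, 32, 32))  # one class id per voxel

optimizer.zero_grad()
logits = model(volume)                         # (1, num_classes, D, H, W)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```

Multiplexing, in this picture, simply means growing `num_classes` so that synapses, mitochondria, and other features are predicted jointly over the same volume rather than in separate passes.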

So the combination of 3D-EM and AI has already been very powerful, and a number of groups in the United States, Western Europe, and China have successfully used these two techniques to reverse engineer neural circuits in multiple model organisms. For example, the entire nervous system of C. elegans, which is a simple worm, has already been reconstructed. The brains of fruit flies have been reconstructed fully. And there are several studies that report large-scale reconstructions of brain tissue in the mouse.

More recently, there are also studies that perform similar analyses in the human. Obviously, we're not talking about the whole brain, just a small part, but nonetheless this is a very informative line of research. However, these studies have one limitation, namely that they usually rely on data sets acquired from a single biological specimen. What that means is that we can get a basic idea about how brains are wired and how individual synaptic connections are organized, but what these experiments don't tell us is how different these brains are. And obviously, this is a very important problem. I hope everybody in this audience would agree with me that every brain is a bit unique.

The approach that we use in my lab is a bit different. We recognize this problem, and therefore we try to understand not only how neural circuits are organized, but also how they are reorganized.

And to be more specific, we try to identify structural changes that are associated with encoding of new memories and storage of long-term memories. And as a related question, we're also interested in understanding how these mechanisms are affected as we get older.

So we are heavily invested in studying the structural changes in the brain that are associated with normal aging. And to a great extent, what we're doing in the lab is driven by a simple notion: there is a common principle that applies not only to neural circuits in the brain, but also to other systems, both biological and artificial. That is, long-term storage of information commonly requires some sort of change in the carrier structure. This principle applies to our DNA. It applies, to some extent, to computer hard drives. And there are a lot of reasons to believe that it also applies to biological neural networks.

So what we do in our lab, again, is strategically different from the typical connectomic studies that I very briefly told you about, in that we do not try to reconstruct as much tissue as possible. Rather, we focus on one area of the brain that we already know is important for memory storage: the hippocampus. This is a structure within the limbic system that is essential for our ability to learn, remember, and also navigate in space. And we try to understand, as I mentioned already, how the neural networks in this particular brain region are reorganized as we form a memory.

Now, to give you a little bit of context about the implications of what we find, I would like to give you some background on the current theories of learning. And this slide may be a bit nerdy, so I apologize for this nerdiness.

So in essence, the field of learning and memory for the past 40 years or so, probably even longer, has been heavily dominated by the pioneering work of the Canadian neurophysiologist Donald Hebb. He is well known as the founding father of what is called the Hebbian theory of learning. In essence, this theory postulates that learning arises from a selective increase in synaptic weight between neurons that co-activate at the same time.

And such an increase could be attributed to changes in their connectivity, so they can potentially form more connections, and also to changes in functional synapses.

And according to the theory, the neuronal ensembles that become engaged in the formation of a specific memory form a stable network. This network also becomes reactivated when we recall the memory later on, as depicted in this simple diagram.
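
As a concrete reference point, Hebb's rule is often written as a weight update proportional to the product of pre- and postsynaptic activity. Here is a toy sketch of that update; the network size, learning rate, and activity statistics are arbitrary choices of mine, not parameters from the talk.

```python
# Toy Hebbian learning: weights between co-active units grow.
# dw[i, j] = eta * x[i] * x[j]  (the classic rate-based form of Hebb's rule)
import numpy as np

rng = np.random.default_rng(0)
n = 8                    # number of units (arbitrary)
w = np.zeros((n, n))     # w[i, j]: weight between units i and j
eta = 0.1                # learning rate (arbitrary)

for _ in range(100):
    x = (rng.random(n) < 0.3).astype(float)  # sparse binary activity pattern
    w += eta * np.outer(x, x)                # co-active pairs strengthen
np.fill_diagonal(w, 0.0)                     # ignore self-connections
```

Units that repeatedly fire together accumulate the largest mutual weights, which is exactly the stable, reactivatable ensemble the diagram depicts.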

As I mentioned already, this theory has been quite dominant in the field. But recently, multiple groups, using functional readouts that allow you to monitor populations of neurons in real time, whether with optical imaging or with other approaches that measure neural activity, have started to detect something that doesn't quite fit this theory.

These studies suggest that what could happen, at least in some cases, is that the neural networks important for our ability to process sensory information and store memories are flexible, as depicted here. Neurons that are activated in learning do not form a stable substrate. Rather, there is a broader network, and the patterns of activation might not be overlapping.

Regardless of which model is correct, we currently still don't really understand what is going on; there is a big debate in my field. Another important point I wanted to highlight today, which is directly relevant to what I'm about to share, is that the way our brains encode information is sparse.

In other words, we do not use all of our neurons every time we learn, every time we perceive a sensory stimulus from the external world, or every time we express an emotion. In reality, only small subsets of neurons become engaged for every cognitive task.
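
A toy numerical illustration of this sparseness, with invented sizes: out of a large population, only a small fraction of the most strongly driven units is allowed to be active for a given input.

```python
# Toy sparse coding via k-winners-take-all; population size and k are invented.
import numpy as np

rng = np.random.default_rng(1)
drive = rng.random(1000)              # input drive to 1000 "neurons"
k = 20                                # only ~2% may fire for this stimulus
winners = np.argsort(drive)[-k:]      # indices of the k most-driven units
code = np.zeros_like(drive)
code[winners] = drive[winners]        # everyone else stays silent
```

Different stimuli recruit different small subsets, so the same population can represent many items with little overlap.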

We already have very powerful AI tools that allow us to recognize different structural features, and there is a lot of interest in the field in applying AI to identify, exclusively on the basis of their morphological characteristics, the cells that are engaged for different types of behaviors, different types of memories, if you will. But these tools are not quite there yet.

To overcome this challenge, we combine the imaging approaches that I described just a couple of minutes ago, and AI, with other tools. These tools, without going into great detail, allow us to label cells that were recruited for a specific learning event during strict time windows.

By labeling them, we can also identify them in our three-dimensional EM data sets. This is very useful because it gives us a very important frame of reference. Again, I don't want to go into the nitty-gritty details, because they're probably only of interest to experts in the field.

All I will tell you is that we combine genetic tools with chemistry to label, permanently, neurons that were activated only transiently. And this is what these reconstructions typically look like. This is an example of a reconstruction of a small connectome, as we call it, of neurons that were either irrelevant, shown here in gray and blue, or specifically engaged for a learning task.

And I must say that we do everything in the mouse, because these experiments are not quite feasible in humans for ethical reasons.

As a side note, I would like to share an anecdote. I first saw an early version of this video when I was traveling back home from the National Institutes of Health. Marco had sent it to me via a Dropbox link, so I downloaded it to my laptop, hopped on a plane, and started playing it. The person sitting next to me happened to be an artist. She kept glancing at my screen and eventually asked what I was looking at. And I said, well, these are reconstructions of neural circuits in the brain. And she said, these are really beautiful. Then she paused and said, but these are not real colors, right? And of course, she was correct. These are not real colors; these are colors that we assign artificially for the purpose of classifying different structures in the brain.

So the real biological networks may not look quite as beautiful. And if you pay close attention to this little video, where the green neurons represent neurons that are activated during memory encoding, you can probably start appreciating that these neurons do not exclusively form connections with each other. This is consistent with the idea of a flexible network: there is a broader network, and within this network, only subpopulations of neurons are activated during memory encoding, but they are not exclusively interconnected with each other. What is also nice about this approach is that we can zoom in and study individual synaptic connections with a great deal of precision. What's shown here is a high-resolution view of similar reconstructions of neurons that were either engaged during learning or were neighboring irrelevant neurons. As you can appreciate, they form a tremendous number of synaptic connections, and we can zoom in and study the architecture of these connections in great detail.

Before I share what we think is one of the most important implications of our recent work, I would like to highlight two dogmas. The first dogma is that a synapse is a site where vesicular organelles that we call neurotransmitter vesicles store neurotransmitters. These neurotransmitters are secreted from the so-called presynaptic site, which is typically formed by an axon, and as a result, the postsynaptic target neuron is activated. And the signaling is typically thought to be one-to-one.

I'm sure that most of you have seen images of real synapses. There are also tons of cartoons that depict the organization of synaptic connections in the brain. And what they have in common is that they all sort of resemble this image that I am displaying on the screen.

The second dogma is that the number of synaptic connections increases as we learn. What we find actually contradicts both of these notions. One of our key findings involves a very unusual structure that has not really been studied in great detail. I must say that we did not discover this structure; the existence of this very atypical synaptic connection has been known for quite a long time. But it has stayed under the radar of most people in this field, because, due to its atypical structural organization, it is not easy to identify using conventional imaging techniques.

We call these structures multi-synaptic boutons. What you see here are two examples, and as the name implies, these are types of synapses where a single site that secretes neurotransmitter can signal to multiple recipients. In many cases, these recipients, the postsynaptic sites, belong to completely different neurons. And here I'm sharing a bit of unpublished observations.

We have also started to realize that in many cases, these recipients are entirely different kinds of entities. In other words, a single presynaptic site can signal not only to different neurons, but to neurons that belong to different functional classes.

Let me play this video again. What we find is quite interesting and exciting, in our opinion. When neurons become activated during memory encoding, they do not produce a long-lasting increase in the total number of their synaptic connections. Rather, they increase the complexity of their target interactions. What I mean by that is that the number of conventional synapses, as I showed you just a few minutes ago, drops, while the number of these atypical multi-synaptic boutons increases. And what is also exciting is that this increase is accompanied by an increase in their structural complexity.

In other words, a single multi-synaptic bouton of a neuron that was involved in memory encoding starts signaling to more targets, recruiting more cells into the network, including cells that were not initially engaged in memory acquisition. In essence, what we find is a non-trivial expansion of the initial network, or engram as we call it, that does not involve changes in the total number of individual synaptic sites on either end, pre- or post-synaptic. We're quite excited about these observations because, quite frankly, they change the way we think about neuronal communication in principle. There are also very non-trivial implications for the cell biology of synapses, but I probably don't have a lot of time to talk about that, so I might as well skip it.
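
One way to picture this finding is as a change in a bipartite graph from boutons to postsynaptic neurons. The sketch below is my own illustration with invented identifiers and counts, not the lab's data: the total count of synaptic contacts stays roughly flat, while contacts concentrate into multi-synaptic boutons and new neurons are recruited.

```python
# Hedged illustration of engram expansion via multi-synaptic boutons.
# Mapping: bouton -> list of postsynaptic neurons (one entry per contact).
# All names and numbers are invented for illustration.

before = {  # conventional one-to-one boutons
    "b1": ["n1"], "b2": ["n2"], "b3": ["n3"],
    "b4": ["n4"], "b5": ["n5"], "b6": ["n6"],
}
after = {   # after encoding: fewer boutons, higher fan-out per bouton
    "b1": ["n1", "n7", "n8"],  # one release site, three distinct neurons
    "b2": ["n2", "n9"],
    "b3": ["n3"],
}

def stats(boutons):
    contacts = sum(len(targets) for targets in boutons.values())
    neurons = {n for targets in boutons.values() for n in targets}
    return contacts, neurons

c0, n0 = stats(before)
c1, n1 = stats(after)
print("contacts:", c0, "->", c1)            # 6 -> 6 (total stays flat)
print("newly recruited:", sorted(n1 - n0))  # ['n7', 'n8', 'n9']
```

The point of the toy is only the shape of the change: the network's reach grows by rewiring contacts into multi-synaptic boutons, not by adding synapses.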

But what I would also like to share with you is that these findings prompted us to think about what needs to be done in the future, and there are, in fact, a number of questions we would like to address. As a starting point, we need to understand how general this principle is. In other words, we still don't completely understand whether the rules that we learn in one part of the brain are also applicable to others. We don't know if these rules also apply to the encoding of different types of memories.

And we still don't completely understand what really distinguishes these individual multi-synaptic connections, these unusual connections, from typical synapses at the molecular level. This is a very important question, because it is reasonable to predict that these connections represent stable structural hallmarks of information storage.

It would be of great interest to manipulate them, and ultimately to develop strategies that would allow us to manipulate them pharmacologically, potentially treating memory loss associated with neurodegeneration or even normal aging. But again, as I mentioned already, for that to happen we still need to do a lot of work, and we need to really understand how these individual connections are organized.

And lastly, I'll share a bit of unpublished data consistent with the idea that these are very unusual structural hallmarks of long-term information storage. We recently started to observe that these networks change as we age: the connections become more abundant as the brain gets older. This suggests, indirectly, that the change is associated with the fact that we progressively accumulate more and more memories over the lifespan. Of course, this is just a speculation; we still need to prove it experimentally. But we are quite excited about this idea.

This discovery has also prompted us to think about the possibility of translating knowledge about the organization of biological neural networks into new algorithms. As far as I know, and I must say I'm not an expert in AI, so maybe my idea will be perceived as a bit naive, no principles of information processing similar to what we find have been implemented in AI: principles that involve these atypical multi-synaptic connections, which permit coordinated signaling from one neuronal site to multiple sites, signaling that is dynamically regulated by sensory experience.

And we think that this could be a very interesting avenue to explore, because ultimately it could open a roadmap toward dynamic, scalable architectures that better reflect biological systems.
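
Purely as a thought experiment, here is what a first step in that direction might look like: a layer whose units can open or close extra output "targets" via learned gates, loosely analogous to a bouton that recruits additional postsynaptic partners. This is my own speculative sketch, not an established method or anything from the talk; the class name and gating scheme are invented.

```python
# Speculative sketch: a linear layer with learned per-connection gates,
# loosely analogous to multi-synaptic boutons that dynamically recruit
# additional targets. Invented design, for illustration only.
import torch
import torch.nn as nn

class GatedFanOut(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        # Gates near 0 silence a connection; training can push them toward 1,
        # expanding a unit's effective fan-out without adding parameters.
        self.gate_logits = nn.Parameter(torch.zeros(n_out, n_in))

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits)  # in (0, 1) per connection
        return x @ (self.weight * gates).t()

layer = GatedFanOut(16, 32)
y = layer(torch.randn(4, 16))                    # -> shape (4, 32)
```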

Of course, we recognize that while we're deeply passionate about biological neural networks and increasingly rely on AI, we are not experts in AI. Much of what we do is also restricted by the resources we have in the lab, and our group is relatively small. So if any of you are interested in exploring these ideas and the possibilities for collaborative studies, we would be more than happy to explore those possibilities as well.

Lastly, since we still have time: as I mentioned already, in the second part of my talk I wanted to offer some personal reflections about how AI is already shaping science today, how it has impacted the way we do experiments and interpret data sets, how it affects the daily lives of many scientists, if not all of us, and also to share some ideas about what's going to happen in the future.

I think it's fair to say that the impact of AI has already been tremendous. The type of research that I shared with you briefly today would not be possible without algorithms that allow us to produce these data sets and make sense of them, because tracing the structures manually, no matter how many humans we involved, would probably take years.

There are many other examples that fall into this category: platforms that tremendously facilitate the process of drug discovery, platforms that tremendously facilitate the annotation of genomic information, and so on. There are also very successful examples of how AI allows us to uncover relationships in multidimensional data, including data that cannot really be comprehended by a human. There are patterns that we simply miss.

There are also models that assist, as I mentioned already, in the design of new drugs. And I'm only talking about the biological sciences. Of course, these examples extend to other avenues of basic science, including chemistry, physics, and math: the design of new proteins, the design of new materials.

And lastly, AI is just a terrific tool that allows us to accelerate the progress of scientific discovery on many different levels. For example, I personally use ChatGPT virtually on a daily basis. I think it's a terrific personal assistant. Just its ability to edit and proofread documents is tremendously helpful, because these tasks are very tedious, and what the folks at OpenAI are doing, I think, saves us a tremendous amount of time. These are just a few examples. But I think it's even more exciting to think about what's going to happen in the future.

And again, these are just some naive ideas; keep in mind that I am not an expert in AI, and some of you may not fully agree with these predictions. I would predict that, especially with the development of AGI, and I hope that many, if not all of you, share the impression that it's not a matter of if, but a matter of when AGI becomes a reality, the way we do science is going to change tremendously. It's going to change radically. One could predict that AI systems will actually lead the process of hypothesis generation. This is a problem that still affects, to a great extent, the way we do science: to go back to the type of research we do in my lab, we use AI tools at the early stages of discovery to make sense of the data sets, but ultimately we still have humans who decide which priorities and directions are important and which hypotheses we should pursue, et cetera.

In this prioritization, I think AI is going to make a huge difference, ultimately making science more efficient and more cost-effective, if you will.

One could also predict that AGI-level systems could integrate massive data sets that belong to different categories, such as raw data produced by different techniques, the available literature, et cetera.

AGI could also build new theoretical frameworks from first principles, potentially surpassing the human capacity for abstraction and intuition.

Again, this boils down to the remarks I offered at the beginning of this talk.

One could also predict that the process of scientific discovery is going to become more democratic, because, at least in theory, anyone who has access to AI tools can participate in this process. The data that we produce as a scientific community is becoming more and more available to the public, and it's becoming easier and easier to access. And the reality is that oftentimes we only scoop the icing off the top of this cake, so to speak: we produce data sets that still have so much information hidden in them.

Sometimes the reason why we don't extract that information is purely technical: we don't have the appropriate tools. And sometimes it is because our ideas are biased, and we simply don't look at specific features or specific pathways, because we don't think they're important, or because we don't even know about their existence.

One could also predict that advancements in AGI could drive a dramatic transition in how we perform experiments in principle. In many areas of biology, and neuroscience in particular, we are still heavily reliant on experimental science.

No matter what kind of ideas we come up with, with some exceptions maybe, we still need to test these ideas experimentally. And this process can be tedious.

This process is also very expensive. And sometimes our experiments fail for technical reasons, or because our ideas are not correct, or because the techniques we are using are not good enough to actually complete these experiments successfully.

And I predict that we will become more and more reliant on in silico discovery, if you will, where computational modeling is going to play a much bigger role, and confirming these models is going to be more efficient.

There are, of course, many ethical and other related issues that we should consider, aside from the fact that, in my opinion, we have to be careful about the quality of the information we get from AI systems. Obviously, this is a big problem. We know very well that there are folks who are very capable of solving this problem and who are aware of it, and I know there is a lot of development in this field. But still, as scientists, we have to be quite careful about not blindly trusting what we get back from AI.

But one could also imagine that the way we communicate science is going to change. Currently, this process is extremely slow. Just to give you an idea, the time from when a core discovery is made to the point when it is published can be years. This is because we need to add more experiments, and because the peer review process is relatively slow. Usually, papers get reviewed more than once; when investigators receive the initial reviews, they oftentimes have to go back and do more experiments. Then there is a delay before final publication. What it really means is that there is a very considerable delay between the point when a key discovery is made and when it is communicated to the rest of the world. And this is a problem that affects not only individual scientists or individual groups, but society as a whole.

Here is one example. Imagine that there is a tremendously important discovery that sheds new light on the mechanisms of cancer, potentially allowing us to come up with a new treatment. If there is a delay of a year or more between the point of that discovery and the point when it is published and becomes accessible to the scientific community, how many people who could have benefited from it will die in the meantime? This is just one example.

And ultimately, I think that advancements in AI, and in AGI in particular, may also shake up the system. They may change the way scientific institutions are organized and the way scientific journals operate; tenure systems and funding agencies are probably all going to be affected as well. Of course, it's very difficult to predict what exactly is going to happen.

But I like to be on the optimistic side. I think that all of our fears of new technologies are pretty normal; they always arise, and people always worry about the potential negative implications of every new technology. But I think that the future is very bright, and that the collaboration between humans and AI is going to be very powerful, with a very positive impact on the process of scientific discovery on many different levels. With that being said, I thank you all for your attention, and I would like to invite Natalie back to the stage.

If you want to join us for the live Q&A in the future, or if you want to attend our in-real-life events at the OpenAI Labs, please complete your profile, log in, and register for upcoming events.

On the horizon, we have our very first international networking event, where we'll highlight members from the OpenAI team and members of the forum from all over the world. Then we'll jump into a one-on-one matching segment of that networking session after we introduce you to some really cool members of the community. This one is super exciting.

In July, folks, we're going to be hosting Joaquin Quiñonero Candela, our new head of recruiting at OpenAI and formerly the head of Preparedness, for Careers at the Frontier: Hiring the Future at OpenAI. This one is going to be so amazing. We'll do it just like tonight, where we stream it to the public and then move into a live Q&A for members only.

We are also featuring once again the Collective Intelligence Project and we're going to be announcing the winners of their Global Dialogues Challenge.

For those of you who are new to the forum, the Collective Intelligence Project was the very first external stakeholder that we hosted in the forum, along with Lama Ahmad, an AI researcher at OpenAI.

And the Collective Intelligence Project works in collaboration with other organizations including OpenAI to develop ways and systems to incorporate broad public feedback and democratic inputs into AI systems.

So we really hope to see you all there. And if you'd like to check out the very first talk in the forum from two years ago, we're going to drop that in the chat now. If you're a guest, it was so wonderful to host you tonight. I hope this event makes us all feel hopeful for the future.

And if you're a member, I will see you in just a moment in the live Q&A.

Good night, everybody.
