OpenAI Forum

Event Replay: Discoveries Across Disciplines

# AI Education
# AI Pedagogy
# Edu Use Cases

Speakers

Natalie Cone
Forum Community Architect @ OpenAI

Natalie Cone launched and now manages OpenAI's interdisciplinary community, the Forum. The OpenAI Forum is designed to unite thoughtful contributors from a diverse array of backgrounds, skill sets, and domain expertise, enabling discourse at the intersection of AI and a wide range of academic, professional, and societal domains. Before joining OpenAI, Natalie managed and stewarded Scale's ML/AI community of practice, the AI Exchange. She has a background in the arts, with a degree in History of Art from UC Berkeley. She has served as Director of Operations and Programs and as a board member of the radical performing arts center CounterPulse, and led visitor experience at Yerba Buena Center for the Arts.

Katherine Elkins
Professor of Humanities and Comparative Literature @ Kenyon College

Katherine Elkins is a Professor of Humanities and Comparative Literature at Kenyon College who has spent the last decade proving that AI can transform research in the humanities and social sciences. Her work demonstrates how AI enables new forms of discovery, from analyzing the emotional architecture of hundreds of stories to modeling social networks in historical letters and films. Elkins' book The Shapes of Stories (Cambridge University Press, 2022) established methods now used globally to quantify what we've long theorized about how stories work. Most recently, her research has explored questions surrounding AI and literary translation, the AI-Fiction training paradox, and AI for modeling the shapes of fairy tales. As Principal Investigator for the US AI Safety Institute representing the Modern Language Association, Elkins brings humanities expertise to national AI policy while pioneering research methodologies that answer fundamental questions in our fields: What makes a story compelling? How do political narratives persuade? How do cultural meanings evolve over time? More importantly, AI democratizes cutting-edge research. Through Kenyon's AI Lab, undergraduates conduct original research that has been downloaded 85,000+ times from over 1,000 institutions worldwide. Students leverage AI to analyze everything from congressional AI legislation to emotion in Playboy covers, producing new knowledge that advances our disciplines.

Her international collaborations include developing AI methods alongside researchers with Colombia's Truth and Reconciliation Archive and contributing to UNESCO's AI and culture policy framework. Elkins will demonstrate how AI can act as a genuine research accelerator, enabling faculty and students to test longstanding theories in our fields and to answer questions that matter.

Leonardo Impett
Assistant Professor of Digital Humanities @ University of Cambridge

Leonardo Impett is Assistant Professor of Digital Humanities at the University of Cambridge, Bye-Fellow of Selwyn College, and Research Group Leader at the Max Planck Institute for Art History (Bibliotheca Hertziana), a German research institute in Rome, Italy. His research focuses on crossing AI approaches to art history with art-historical approaches to AI, especially how knowledge can flow both ways. At Cambridge, he has recently been responsible for setting up and directing the new MPhil and PhD programs in Digital Humanities, which sit under the Faculty of English. He is occasionally asked to write about himself in the third person.

Leonardo has a background in information engineering and machine learning, having worked or studied at the Cambridge Machine Learning Lab, the Cambridge Computer Lab’s Rainbow Group, and Microsoft Research. He was previously Assistant Professor of Computer Science at Durham University, in the AI and Human Systems group. His PhD (EPFL) focused on the use of computer vision for the ‘distant reading’ of the history of art. He has previously been a fellow of Villa I Tatti – the Harvard University Center for Italian Renaissance Studies, and a visiting professor at the CNRS (French National Centre for Scientific Research) on the theme of “Artificial Intelligence and Digital Humanities.”

His new research group at the Max Planck Institute, “Machine Visual Culture,” investigates the reciprocal relationship between artificial intelligence and visual culture, focusing on how AI systems both shape and are shaped by histories of seeing. Combining digital art history with critical AI studies, the group explores AI not only as a technology but also as a cultural phenomenon with important implications for the humanities. Spanning the Max Planck Institute and Cambridge, Leonardo’s distributed ‘lab’ includes more than ten full-time PhD students and postdocs.

Leonardo’s new book (with Fabian Offert, UCSB) on the history of the philosophy underneath AI systems, Vector Media, is forthcoming with the University of Minnesota Press. He has been a PI in projects on AI, copyright, and political cartoons (Australian Research Council); studying the visual culture of computer vision datasets (Volkswagen Foundation); and user interaction with AI in online exhibitions (UKRI). Alongside his research in digital art history, he frequently works with machine learning in arts and culture, including with the Royal Opera House, the Whitney Museum of American Art, and the Liverpool Biennial, where he developed the Biennial’s first AI-curated show in 2021, based on OpenAI’s CLIP and GPT models.

Marco Uytiepo
PhD student @ The Scripps Research Institute
Kevin Weil
VP, OpenAI for Science @ OpenAI

Kevin Weil is VP of OpenAI for Science and was previously Chief Product Officer at OpenAI, where he led the development and application of cutting-edge AI research into products and services that empower consumers, developers, and businesses. With a wealth of experience in scaling technology products, Kevin brings a deep understanding of both consumer and enterprise needs in the AI space. Prior to joining OpenAI, he was Head of Product at Instagram, leading consumer and monetization efforts that contributed to the platform's global expansion and success. Kevin's experience also includes a pivotal role at Twitter, where he served as Senior Vice President of Product. He played a key part in shaping the platform’s core consumer experience and advertising products, while also overseeing development for Vine and Periscope. During his tenure at Twitter, he led the creation of the company’s advertising platform and the development of Fabric, a mobile development suite. Kevin holds a B.A. in Mathematics and Physics from Harvard University, graduating summa cum laude, and an M.S. in Physics from Stanford University. He is also a dedicated advocate for environmental conservation, serving on the board of The Nature Conservancy.


SUMMARY

This OpenAI Forum session focused on how AI is accelerating research across disciplines—especially in the humanities and social sciences, where it’s enabling new ways to test theories, analyze culture, and train students in computational methods. The program framed AI as a powerful scientific tool that can compress long research timelines by handling tasks that may require deep, sustained reasoning and large-scale synthesis.

Katherine Elkins shared “applied humanities” projects that model emotional arcs in novels, compare how translations reshape narrative patterns, and surface meaningful peaks and shifts that often align with what close-reading tends to notice. She also showed how students are using AI to explore cultural datasets—ranging from storytelling structures and social media dynamics to bias investigations in image generation, legislative text mining, and network/knowledge-graph analysis.

Marco Uytiepo described how deep learning accelerates nanoscale brain-imaging analysis, turning months or years of manual reconstruction into days and helping researchers study circuit features linked to memory.

Leonardo Impett argued that modern computer vision models don’t just analyze images—they embody a “machine visual culture,” and researchers can use art-historical methods to study both visual media and the cultural lens of the algorithms themselves.

The event ended with a live Q&A where participants discussed responsible use with domain experts, creative uses of generative tools in storytelling, examples where AI changes research direction (not just speed), translation effects, long-term implications for analyzing AI-generated imagery, global archival preservation, and practical first steps for bringing AI methods into labs and classrooms.


TRANSCRIPT

[00:00:00] Speaker 1: Hi, everyone, welcome to the OpenAI Forum. I'm Natalie Cone, your community architect and the most enthusiastic steward of our beautiful community. Thank you all for joining us. When many faculty researchers think of AI-supported research, they think STEM. However, we're featuring two professors today who perform research and mentor their students to perform research in humanities and social sciences. We're going to learn how these creative and innovative faculty members are inventing new methodologies for performing and framing research, and in one use case, compressing years of discovery into months. We'll start with a recording from a live convening we held at OpenAI recently, but then we'll be joined by participating faculty for a live Q&A at the end. Today, Katherine Elkins, Professor of Humanities and Comparative Literature at Kenyon College, and Leonardo Impett, Assistant Professor of Digital Humanities at the University of Cambridge and research group leader at the Max Planck Institute for Art History, will join us after we watch their presentations for a live Q&A. Please check out their full bios and drop them a DM in the forum to learn more about their work. Without further delay, let's watch these presentations. See you soon.

[00:01:28] Speaker 2: Everyone, please help me welcome our VP of OpenAI for Science, Kevin Weil.

[00:01:32] Speaker 3: Hey, everybody. It's awesome to see so many educators and scientists in the room here because this is something that we are extremely excited about, the future of science. So I was Chief Product Officer here for the first, I don't know, year and change that I was here and just recently moved over and have started this new team, OpenAI for Science. The reason is that we think this is maybe one of the most impactful ways that the work we do is going to impact the world. It's amazing to build a product that's used by some huge fraction of the world on a daily basis. But I think it may be even more profound the way that AI changes the fields of mathematics, physics, biology, and material science. It may well be that the way that people truly feel artificial general intelligence is not because ChatGPT gets so much better. Of course, it will, but it may be that the way they feel that is because we have personalized medicine and we have fusion, and we understand the origins of our universe and we do that all in a much faster way.

[00:02:37] So the way that we're thinking about it is that AI can be the next big scientific tool. It can be maybe one of the most powerful scientific tools of all time. If we can put this in people's hands and give them a ton of compute because there are hard problems that you all want to solve that aren't going to be a two-second ChatGPT answer or even a 30-minute deep research question. These are going to be problems where the model may need to think for hours or days on end, but if we can do that and we can give people these tools and a lot of compute, then maybe we can do the next 25 years of scientific research in five years instead. And if we can do that, I think the world would be a better place.

[00:03:28] So, we're just getting this effort started, but GPT-5 has certainly crossed some kind of threshold in terms of the work that it can do. We're regularly seeing examples of GPT-5 doing novel scientific research. It's not yet proving, you know, millennium problems. It's not yet solving, like, the biggest unknown things. You can't say, go solve the Riemann hypothesis, GPT-5 is not there yet. But we do see very concretely that in the hands of scientists and expert researchers, it can solve, maybe not theorems, but lemmas. You know, it can do work that humans have never done before—maybe not yet quite work that humans could not do, but work that humans have not done. It can do novel things, and it can also do some really powerful, accelerated things like very deep literature search.

[00:04:14] I've seen it connect concepts from one field to another using completely different terminology, often in different languages. You know, oh, did you know that actually there's a related thing that was published in German in a journal in the 60s that the world has maybe forgotten, but is actually relevant to the thing that you're doing? That's also its own form of acceleration. So we're super excited about where the models are going.

[00:04:58] Speaker 1: I'm excited about where the models are going. I always tell people, remember, as good as GPT-5 and these models are today, the model that you're using today is the worst model that you will ever use for the rest of your life. And when you really sort of build that into the way you think, it's kind of profound. The models are just going to keep getting better. They're going to be better and better tools. We're excited to work with all of you, I hope, and put them into your hands, because as much as we're going to try and do, the surface area of the fields of science is huge. The only way that we're going to do this well is doing it in partnership with the academic community, the national labs community, with startups, and with enterprises. So we feel like we're just getting started, but we're incredibly excited. Working on science and accelerating science is mission-driven for us. So I'll stop there, but we'll have a lot more to say and hopefully a lot more work to do together with all of you. Thank you.

[00:06:06] Speaker 2: OK, so in this next segment we're going to be hearing from faculty and PhD researchers. Please help me in welcoming Katherine to the stage.

[00:06:16] Speaker 3: Hi, everybody. Thank you for having me. OK, so I'm going to blow through this talk pretty quickly, and just a little word here. I actually work across industries. I've worked with the UN and UNESCO on cultural heritage and AI. I have worked on a project with Public AI, which is working internationally on AI for public good. I have collaborated with MEDA, and right now I'm a PI for the US AI Safety Institute. I work a little bit in tech, just came back from a tech rally with Kevin O'Leary, Mr. Wonderful, and do some VC stuff, and have worked with Bloomberg. But I'm also an academic, and my day job is teaching. So I'm going to take you through a little bit of that. You might be interested to know I am traditionally trained as a comparative literature person. I'm known for my Proust scholarship and work on the modern novel, and I had a little bit of a transition in 2016.

What I'm most excited about is now we can actually take a lot of our theories, so social sciences, humanities, as many of you know, are very theory driven. What I'm most excited about is now we can actually begin to test those theories. Now unfortunately, some of those theories will be wrong. So not all of my colleagues are entirely excited about this, but this is a lot of the work that I do, what I call applied humanities and social sciences. I just want to do a shout-out to my collaborator, John Chun, because people ask me, how on earth did you start working in AI? In 2016, we created what we believe to be the first human-centered AI curriculum in the world. We were six months ahead of Stanford. That is really thanks to my collaborator, John Chun, who did start-ups in Silicon Valley. He retired to little old Gambier where Kenyon College is, and he became very concerned that we were not training enough voices to understand tech deeply to weigh in. If we didn't do that, we would be leaving these important questions to the engineers alone. So he came out of retirement in 2016. We began teaching an entire curriculum that blends programming and real technical skills with answering what we call the big questions. That's him there. And then Abby, who is in this photo in our cultural analytics class, after she graduated, went on to the National Gallery to do some network modeling and some knowledge graphs, because the National Gallery didn't have anybody to do that. Those are the kinds of students that we're training.

We're really focused on the big questions, the questions that matter. So some of our early work here was using AI to leverage the shapes of stories. I was not the first one. You may have seen an article in The Atlantic by Andy Reagan, who first talked about the shapes of stories. Then Matt Jockers, who's now at Apple, wrote The Bestseller Code, using AI to see if we could predict what was a bestseller. Some people had done a little bit of surfacing these shapes of stories, but there was no real method to it. I wrote this book in 2022 called The Shapes of Stories, where I really talk about using AI to leverage the shapes of stories and to look at what we call emotional arc. A lot of the work in that book was actually our students'. Although I'm talking about research, I do want to say that we take undergraduate students, and after only a few AI courses and programming courses, we train them to do a lot of this research, and they were very impactful in writing that book. A lot of what I'm going to show you is their work.

Here is a shape of a story. These are all these different models. We started working with many different models, because in the beginning we had to actually find the right model for the text.

[00:09:56] Speaker 1: There's no perfect model. You actually match the model to the text. This may look pretty noisy to you, but this is actually a pretty strong signal across all of these models. So how does this work? This is To the Lighthouse by Virginia Woolf. It's considered pretty much a plotless novel because nothing much happens, but guess what? It has an incredibly, yeah, it's really true, I'll say it, it turns out to have a roller coaster of an emotional arc. And the reason why we picked this is because we wanted to test: does a so-called plotless novel actually have a very strong emotional arc? And here you can see that in fact it does.

[00:10:37] So let me just tell you what you're looking at here. This is just line by line, sentence by sentence across the time of the novel. And just like we might look at a stock ticker, we can actually model the emotional arc. It's not actually emotion, it's sentiment. It's affect, positive, negative, and intensity. And this is how we did it originally with less sophisticated AI. So since then, and we started doing that in 2018, we have looked at all kinds of projects. Many of these are my students. This is Emily Wilson's Odyssey. And I had a student who really wanted to ask this question with Emily Wilson. Emily Wilson's translation of The Odyssey was seen as being this amazing kind of feminist, radically different translation. And so she wanted to ask if translation is different. Is the emotional arc different in translation?
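The procedure described here, scoring the text sentence by sentence and reading the sequence like a stock ticker, can be sketched roughly as follows. This is not the lab's actual pipeline (Elkins describes matching a trained sentiment model to each text); the tiny lexicon, the scores, and the example sentences below are invented stand-ins:

```python
from statistics import mean

# Toy sentiment lexicon -- a stand-in for a real sentiment model.
# These words and scores are invented for illustration.
LEXICON = {"love": 1.0, "happy": 0.8, "light": 0.3,
           "dark": -0.4, "fear": -0.8, "death": -1.0}

def sentence_score(sentence):
    """Average lexicon score of the words in one sentence (0 if none match)."""
    words = [w.strip(".,!?;").lower() for w in sentence.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return mean(hits) if hits else 0.0

def emotional_arc(sentences, window=1):
    """Score each sentence, then smooth with a moving average so the
    noisy per-sentence signal becomes a readable arc of peaks and valleys."""
    raw = [sentence_score(s) for s in sentences]
    arc = []
    for i in range(len(raw)):
        lo, hi = max(0, i - window), min(len(raw), i + window + 1)
        arc.append(mean(raw[lo:hi]))
    return arc

text = ["The light filled her with love.",
        "A happy morning.",
        "Then fear crept in.",
        "Dark thoughts of death."]
arc = emotional_arc(text)  # one smoothed value per sentence
```

Plotting `arc` against sentence position gives the rising and falling line discussed in the talk; a real project would swap the lexicon for a trained sentiment classifier and use a much wider smoothing window over thousands of sentences.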

[00:11:36] And so she started with this work and then we ended up looking at 18 different translations. Very interesting. You can look at Alexander Pope, who's writing in a very much older language in rhyme, and it actually has a very similar emotional arc to contemporary versions. So there's no kind of rhyme or reason to which ones are similar, but they are somewhat different. And it turns out that the biggest point of difference is this moment of what happens after Odysseus returns to his wife. Okay, because she's had all of these suitors and then they're murdered. And the real question is, how positive is the end of that story? Can you have a good marriage after all of that? Right, or is there kind of no going back? And that is where we actually tracked the biggest difference in translations.

[00:12:18] This is another student, Flannery Strain back in 2022, who wanted to ask, are we reading different Kafka stories when we read them in different languages? Because this theory and this book, Transforming Kafka by Patrick O'Neill, said yes. And in fact, we found that the stories are quite different depending on translation. And I later actually looked at translations of Proust and published research on that. We have looked at everything from political speeches. If you want to ask about Trump's speeches, I can talk about that. And how everything is either wonderful or terrible. And he has quite an emotional arc in his speeches, but we look also a lot at things like screenplays versus novels.

[00:13:03] So this is Erin's work, where she asked whether the screenplay version of Harry Potter and the Sorcerer's Stone was the same as the novel. And in some cases, they're actually pretty similar. What I want to tell you here, but that's actually kind of interesting, is these peaks and valleys, the moment where the emotion changes, tend to be the points that literary critics, when you go to English class and your professor picks out that passage and you wonder how did they pick that? Well, it turns out that the machine can surface most of those, and they tend to be these peaks and valleys. Again, my colleagues are not too thrilled necessarily that an AI can pick out these peaks and valleys, but that is the case.

[00:13:43] So in this case, same, but then Little Women, very, very different, the screenplay from the actual novel. The novel actually, yeah, very, very different. We even had a student, Alex, look at Shark Tank episodes. I had a fun time talking about this at the tech rally with Kevin O'Leary. So this W shape is actually considered what Matt Jockers calls the bestseller code. And we see that many, many stories that are downloaded the most, that are read over and over, have this kind of W shape. And we found that successful Shark Tank pitches tended to have this W shape as well.

[00:14:21] Henny Zong was very disturbed by watching her grandmother die and how poorly in the US we handle geriatric care, particularly at the end of life. And so she took three case studies, this is only one here, and tracked the emotional arc, including when the terminal diagnosis happens. And what she surfaced was she could actually find, in these valleys, the five stages of grief in every single memoir, although not necessarily always in the same order. So she used that to look at memoirs.

[00:14:54] Speaker 1: Before Elon Musk put a wrench in it, we used to be able to do social media. What was very interesting is we could take, instead of a single story, thousands of tweets and actually have the same kind of story. Only this is a story of everybody on Twitter tweeting at the same time about a particular topic. Here we were actually looking at the 2022 Senate recounts to see if we could predict how contested the election was. What we found was, you can see this larger box, more of a ringing. When the election was more contested, we actually saw more of that, like a rubber band being pulled and snapped in real time.

[00:15:39] This is another great example. My student who was from Sri Lanka and I looked at Twitter during this financial crisis. You can see that ultimately it ended very, very poorly in Sri Lanka. But again, just like in a story in a novel, we could take these peaks and valleys and see real events happening and track them. Again, these are many, many people, thousands of voices on Twitter instead of individual ones.

[00:16:02] Starting in 2019, we were lucky enough to have the beta privilege to look at GPT-2, and we had a lot of fun fine-tuning it. Fine-tuning is when we actually train it on a particular author, and you can see here all of the different kinds of work that we fine-tuned it on. We explored how long to fine-tune it and created metrics for deciding how well it did. We were very excited. I don't know if any of you follow Gwern; he started picking up all of our work, and then we knew we were on the map. We have been doing all of that fine-tuning for a long time.

[00:16:37] We also worked on how well it could write a screenplay. We got scooped by DeepMind, which published its paper just as we were finishing that, but we were trying to use the Save the Cat 15-beat structure. Then also we created DivaBot. I can talk a little bit about that later. That was in honor of Čapek's 100th anniversary; Čapek is the one who coined the word "robot." We really wanted to train GPT-3; this is GPT-3 with Lauren Katz, who's been on Curb Your Enthusiasm. Lauren is improvising with DivaBot, and unfortunately, it was during COVID, so she couldn't fly in from LA.

[00:17:12] Speaker 1: There she is in LA, and we were in Gambier, Ohio, and Lauren was improvising. I will tell you that I really loved the early GPT because it invited her to smoke weed. My favorite was when Lauren was improvising, "How do I stop a crying baby?" and guess what the answer is? "Wear a condom." I still want to know if that's in the training data; somebody's got to tell me that. I looked everywhere to see if that's a real joke; I could not find it.

[00:17:44] Early work. We can't really do this anymore because you guys have put on great, or OpenAI has put on great safeguards and guardrails. But early on, with DALL·E, we did a lot of investigation of bias. These are two different student projects. At the top, we have male executives and female executives. What do you know? And then, actually, we did hundreds, right? This is just a small sample to see what it looked like, but this is pretty representative of the difference. When I give talks, I always ask women, "What do you notice about the women?" Nobody wants them as a boss, right?

[00:18:27] So female bosses versus male, also more male diversity. I had a black student who's very interested in black women and hair, looking at white women and black women. Interestingly enough, the white women all seem to have straight hair, which is not really the case in real life, blowing in the wind, right? But also the difference in representation. He spent a lot of time on this project.

[00:18:51] Speaker 1: Now I'm going to talk about a few other projects. Here, we copied Stanford CS, which puts all of their posters on the internet, and so we put our posters on the internet too. Here is an example of "AI reads Playboy," but not for the articles. Jill really wanted to look at Cosmo, but it was all not accessible, and Playboy was accessible because it went under. I said, "Do you mind working on Playboy, 60 years of Playboy?" She said, "Absolutely, I would love to." She was looking at the faces, and we were doing emotion analysis of the face. What we found is that DeepFace actually surfaced a lot of negative emotion in these female models.

[00:19:33] Here, what I want to point out that I'm so excited about is that we've always trained our students to code. It's pretty sophisticated to do a lot of these kinds of projects. But what is really amazing now is that our students can jumpstart using AI. Fiona did code some, but this project was really made possible.

[00:19:52] Speaker 1: So this project was really made possible by leveraging AI to do really cool stuff. She used AI to scrape every single piece of legislation that was proposed about AI, dividing it between Republican and Democrat, and then doing some text mining. And this is a project that she did in a week, right? So you can see what we're now able to do. And what she found, actually using text mining, is that there's not nearly as much disagreement across the aisle in terms of the things that people talk about. Here are the five stages of grief, which I already mentioned, so I'll keep running. And then just this last fall, 2024, MAESI wanted to look at this trend in American history. I have a friend who's a historian who said we're not allowed to tell stories anymore in history. And if you're telling a story, everybody is suspicious of you. And so we wanted to see if, over time, narrative and storytelling has dropped out of history textbooks, and MAESI looked at this over time. So here's just a few. You can see this is the kind of research that I do, and it's all very easy to find, but also the student research. One last set of examples that I'm gonna talk about is social network modeling and knowledge graphs.
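The cross-party comparison just described, counting which terms each side's bills emphasize, can be sketched with nothing but the Python standard library. The bill snippets, stopword list, and scoring rule below are invented for illustration; the actual project scraped real legislation and used more sophisticated text mining:

```python
from collections import Counter
import re

STOPWORDS = {"and", "for", "of", "in", "the", "a", "an"}

# Invented toy corpora standing in for scraped bill texts,
# split by sponsoring party.
dem_bills = ["algorithmic accountability and bias audits for automated systems",
             "privacy protections for automated decision systems"]
gop_bills = ["innovation and competitiveness in artificial intelligence",
             "security review of artificial intelligence systems"]

def term_counts(docs):
    """Lowercased word frequencies across a list of documents, minus stopwords."""
    counts = Counter()
    for doc in docs:
        counts.update(w for w in re.findall(r"[a-z]+", doc.lower())
                      if w not in STOPWORDS)
    return counts

def distinctive_terms(corpus_a, corpus_b, n=3):
    """Terms whose relative frequency in corpus_a most exceeds corpus_b."""
    ca, cb = term_counts(corpus_a), term_counts(corpus_b)
    total_a, total_b = sum(ca.values()), sum(cb.values())
    gap = {w: ca[w] / total_a - cb.get(w, 0) / total_b for w in ca}
    return [w for w, _ in sorted(gap.items(), key=lambda kv: -kv[1])[:n]]

dem_terms = distinctive_terms(dem_bills, gop_bills)  # terms one side stresses
shared = set(term_counts(dem_bills)) & set(term_counts(gop_bills))
```

The size of `shared` relative to each side's distinctive vocabulary is one crude way to see the "not as much disagreement as you'd expect" result the student found.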

[00:21:08] Speaker 1: Again, Aram can code, but he was able to do this very sophisticated knowledge graph with only a few semesters of coding. And what was really interesting here is he looked at the Winthrop papers. So this is Puritan America, and he actually rewrote our understanding by showing how important women were in this network. Just another example, there's a larger poster, but I just wanted to show you what this looks like close up. Again, Jessica Dougherty can code some, but was actually able to look at Little Women in every single movie version to see if the social networks looked different. And in fact, we found that they did, and it's not necessarily just a question of progress. Some of the most recent Little Women adaptations are actually maybe a little bit less progressive than earlier ones. And what she was really looking for is: what are the main connections? You can see the connections by the thickness of the lines here, and some of the versions of the movie emphasize more the male-female relationships, some emphasize the sisters, some are very individualistic with the characters. So this kind of social network modeling is incredibly useful for understanding these differences.
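The counting behind such a character network can be sketched in a few lines, assuming you already have a list of which characters share each scene. The scene lists below are invented, not taken from any actual adaptation, and the real projects presumably used a graph library to draw the result:

```python
from collections import Counter
from itertools import combinations

# Invented scene casts standing in for an annotated screenplay.
scenes = [
    {"Jo", "Meg", "Beth", "Amy"},
    {"Jo", "Laurie"},
    {"Jo", "Laurie"},
    {"Meg", "John"},
    {"Jo", "Beth"},
]

def edge_weights(scenes):
    """Count how often each pair of characters shares a scene.
    The counts become edge thickness when the network is drawn."""
    edges = Counter()
    for cast in scenes:
        for pair in combinations(sorted(cast), 2):
            edges[pair] += 1
    return edges

edges = edge_weights(scenes)
strongest = edges.most_common(1)[0]  # (pair, shared-scene count)
```

Comparing the `edges` tables across adaptations, which pairs dominate and how the weight is distributed, is exactly the kind of difference the thickness-of-lines posters make visible.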

[00:22:19] Speaker 1: My favorite example is Love Island. I don't have a poster for it because it was a senior project. I doubt anyone in this room is gonna admit to watching Love Island. What, does anybody? Oh, yeah, hands up, okay, good. So I had a student; Love Island is this reality show where you're trying to find your mate and pair up, and she had a theory that people had changed the way that they were playing the game. That they were no longer really focused on matching up. They were focused on winning the game by establishing the most social networks to get voted as the best couple. And so we actually looked at each season to see if the networks were changing, and we found that they did. And we also then took the scripts of the men versus the women. And the men would say things like, "bro, I just wanna have fun." And the women were like, "I just wanna find someone who understands me." And so we also did text mining on the men versus women, which is really funny.

[00:23:16] Speaker 1: Okay, so Hannah Sussman was a sociology major who came to me senior year, and she's very interested in AI. And we taught her; she took two classes each semester. She's very interested in the relationships that people are actually developing with these models. So she got permission to use WildChat. Melanie Walsh, by the way, shout out. She's at the University of Washington and has also done some work; she's the one who inspired this. WildChat is an Allen AI project that offered free ChatGPT subscriptions in exchange for collecting all of those chats. It's an enormous dataset. She worked both semesters using text mining and trying to understand what people's actual relationships were: looking at the people who had ongoing, longer relationships, whether they developed over time, looking at the way that people actually use it and those kinds of relationships.

[00:24:12] Speaker 1: A few other things that we've done. I've been working with a UNESCO project on how we can continue to encourage people to produce culture in the age of AI, which is a real problem, and also on cultural preservation; I'll talk about that in a moment. I've also done some work in the medical field. I just came back from Cornell in Doha, but I'm also working with the NYU Langone health empathy project, using what we understand about emotion and storytelling to try to help practitioners actually be more empathic as they meet patients. And then this summer, our lab and several students worked with folks in Colombia and the...

[00:24:50] Speaker 1: In Colombia and the Universidad Javeriana on this truth and reconciliation archive, again using AI to surface patterns in that data. We've also done some work looking at how well generative AI can predict human behavior. Our lab got a grant from the Notre Dame-IBM Technology Ethics Lab, looking at youth and criminal recidivism. We used MAD (multi-agent debate), if anybody has tried that, to see if we could improve predictive power, and that paper should be coming out soon. Some other work that we did early on, this is actually from quite a while ago, was starting to do ethical audits of all of the different models in autonomous situations and ethically fraught scenarios, to see how they would perform.

[00:25:44] Speaker 1: As part of the US AI Safety Institute, you can see that they actually presented this work at the plenary. We represent the MLA; there are a bunch of people here from the MLA, which is very exciting. So that's 26,000 linguists and literature folks doing this kind of work for the US AI Safety Institute. And then this is just a piece, Bob Marzec is here, the editor of Modern Fiction Studies. It's not out yet, but it really uses a bilingual understanding of both how these models work and how fiction works to ask this question: why did so many companies take significant risks to train on fiction data?

[00:26:29] Speaker 1: Actually, OpenAI decided not to, but Anthropic and Meta did, as you may have seen from the lawsuits. They've trained on my work, and probably on the work of many people in this room as well, and I'm trying to think through why fiction might be so important to developing good models. I actually think the researchers developing these models may not know, but I have some ideas, and that's what I explore in this piece. And then, finally, Archival Intelligence is a project my lab is doing in concert with folks from Columbia, LSU, and the Berkeley School of Music. We're working with archives in New Orleans to try to use AI to restore very poorly digitized archives.

[00:27:37] Speaker 1: We're also working with video of jazz musicians, and our goal is to produce better, streamlined pipelines for museums, doing exactly the kind of work that I talked about that Abby did for the National Gallery, but for all kinds of smaller museums, and also to build better AI by bringing in Creole and Cajun and historical understanding. I can't talk fully about that right now because there'll be an announcement soon, but anyway, this is another really interesting project that we're doing. Just as one last mention, this paper just came out: I looked at the shapes of Cinderella, because Kurt Vonnegut, in his famous video, predicted that AI would eventually be able to find the shapes of stories, and he talks about the Cinderella story.

[00:28:43] Speaker 1: And so I actually looked at the Tang Dynasty Chinese version, the 17th-century French, and the 19th-century German. Here you can see Perrault's version from 1697. But what I want you to know about this is that at this point, instead of all of that sophisticated coding, you can actually use the models to score the sentiment, and you can go in and see how well they do. I was actually really surprised. These are all very well-resourced languages, but I was really surprised by how well it did with them. And then you can actually graph it using AI.

[00:29:06] Speaker 1: So even last year, my students had to be pretty good coders to do this, and now we can do a lot of the work using large language models. I will say, there are still all kinds of decisions to be made about how you chunk the text and how you smooth the scores; it's still pretty sophisticated, but you can focus on those higher-level questions and be less focused on the nitty-gritty of running the code. All the projects on the poster are downloadable. We now have 175 posted; we've done about 300. They're up on Digital Kenyon; this is the QR code. We are up to over 85,000 downloads by over 4,500 institutions.
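The chunk-and-smooth pipeline described here, the part that still involves real decisions, can be sketched as follows. The tiny lexicon-based scorer is a toy stand-in for the model-based sentiment scoring mentioned in the talk; only the chunking and smoothing steps are meant literally.

```python
# Minimal sketch of a "shape of a story" pipeline. In practice each chunk
# would be scored by a language model; here a made-up four-word lexicon
# stands in for that step.
LEXICON = {"joy": 1, "love": 1, "loss": -1, "grief": -1}  # toy stand-in

def chunk(words, size):
    # Split the text into fixed-size chunks; chunk size is one of the
    # judgment calls mentioned in the talk.
    return [words[i:i + size] for i in range(0, len(words), size)]

def score(chunk_words):
    return sum(LEXICON.get(w, 0) for w in chunk_words)

def smooth(scores, window=3):
    # Simple moving average; window size is another judgment call.
    out = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window // 2), min(len(scores), i + window // 2 + 1)
        out.append(sum(scores[lo:hi]) / (hi - lo))
    return out

words = "joy love loss grief loss joy joy love".split()
raw = [score(c) for c in chunk(words, 2)]   # per-chunk sentiment
arc = smooth(raw)                           # the story's "shape"
```

The smoothed `arc` is what gets graphed; the interesting research questions live in the choices of chunk size, smoothing window, and scorer.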

[00:29:38] Speaker 1: This is all student work after a class or two, which is very exciting. And we're actually listed in more countries than exist, because Macau and the Isle of Man are for some reason on this map. So you can go there, and we've had students have their work picked up by Forbes. Sometimes faculty write me and say, okay, well, I see the poster, where is the article? And I say, that was a sophomore who did that research in one week.

[00:29:48] Speaker 1: So there's much more research than we have time to write up. So if anybody has time and is curious, go into Digital Kenyon, and I hope that inspired you to see the kind of work that we can do now with our students. OK, thank you.

[00:30:03] Speaker 2: Thank you so much, Katherine. That was amazing. Next up is Marco Uytiepo.

[00:30:09] Speaker 3: Thank you all for being here today. I'm Marco. I'm a graduate student at Scripps Research in San Diego. Today, I'd like to share how our group is leveraging AI to understand how memories are stored in the brain. Our memories are more than just stored information. They carry the stories of our lives, the relationships that we form, and they really make up the foundation of who we are. This subject is actually really personal to me.

[00:30:39] As I was working on this research, my grandfather was diagnosed with dementia. Witnessing someone I'd always known as articulate, thoughtful, and funny start to forget his memories was really heartbreaking. Knowing that we still have no way to prevent it made it even harder. So this sense of helplessness made our work feel all the more urgent. I'm sure I'm not the only one who's had this experience. Many of us here, maybe most of us, have someone in our lives who's affected by memory loss.

[00:31:26] But part of the reason why it's so hard to cure is that the brain is incredibly complex. Even understanding how a single memory is stored is a huge challenge. That's really what our group is trying to address. To do that, we first need to confront the brain's overwhelming scale. Just to give you an idea, the human brain contains over 100 billion neurons, connected by what scientists estimate to be over a quadrillion connections, called synapses.

[00:32:03] So that's more than there are stars in the Milky Way. But these connections are not static. They're forming, changing, and disappearing as we go through life. Trying to understand how all of that works is quite difficult. But I think it also gives us a window into how complex, adaptive systems work in general. I think that's part of why the brain continues to inspire fields such as computing and AI.

[00:32:38] But inspiration only really gets us so far. To really get at memory at its root, we need to physically see these memories in the brain. One way that we can do that, and what our group is trying to do, is to zoom into the brain at a nanoscale level, a level where we can see individual connections. The imaging technique that we use is called 3D electron microscopy. It allows us to see highly detailed images of brain tissue.

[00:33:21] Just to give you an example of that, this is a 3D image, what we call a 3D volume or a 3D block, taken from a part of the brain responsible for memory. The block you're seeing here is a hundred thousand cubic microns, actually a hundred times smaller than a single grain of sand. Yet within this tiny volume are over a million connections and cellular structures.

[00:34:00] Trying to visualize this type of data, and to reconstruct and analyze it, is a massive challenge. This is exactly where AI has become essential for us. We built a deep learning-based platform that allows us to automatically annotate these structures in the 3D volume. How it works is we train a convolutional neural network to detect a specific feature of interest, and using that trained network, we then apply it across the entire dataset.
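The annotate-everywhere pattern just described, train a detector on one feature and then run it over the whole volume, can be sketched like this. A trivial threshold function stands in for the trained 3D convolutional network, and a tiny nested list stands in for the electron-microscopy volume; both are invented for illustration.

```python
# Toy 3x3x3 "image stack": each plane has one bright voxel at (1, 1).
volume = [[[0, 0, 1],
           [0, 9, 0],
           [0, 0, 0]] for _ in range(3)]

def detector(voxel):
    # Stand-in for CNN inference: flag bright voxels as "synapse-like".
    return voxel > 5

# Apply the detector at every location of the volume, collecting the
# coordinates of detected structures as the annotation.
annotations = [
    (z, y, x)
    for z, plane in enumerate(volume)
    for y, row in enumerate(plane)
    for x, voxel in enumerate(row)
    if detector(voxel)
]
```

The real pipeline replaces `detector` with a trained network and `volume` with terabyte-scale EM data, but the scan-and-collect structure is the same, which is why the speedup over manual tracing is so dramatic.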

[00:34:38] What you get is essentially a fully saturated reconstruction of a piece of brain tissue. Now, what you're looking at here is only a fraction of our total data set. It's about 1/12th of a single block.

[00:34:46] Speaker 3: About 1/12th of a single block. But if you were to do this manually, even with a whole team of people, it would probably take months or even years. With AI, we're able to do this within just a few days. Here is a reconstruction of a real neural circuit in the brain area that I told you about, and every single synapse and structure here was reconstructed with the help of AI.

With these massive improvements in analysis and these newly reconstructed datasets of the brain, we can then get to the root of the problem, asking the biological questions that we care about. How are these circuits organized in the brain? What changes with our everyday experiences? And, importantly, can we find structural signatures of stored memories? Now, I don't have enough time to share all of the findings that we had; I'd love to talk after if you're free. But I'd like to share at least one key finding that really stood out to us.

What you're seeing here is one of the synapse structures we saw in our dataset. It's a rare type of synapse called a multisynaptic bouton, or MSB for short. In the brain, most neurons are connected by just a single connection. What's unusual about these is that they allow a single neuron to connect with multiple targets simultaneously. In other words, a single neuron can influence a whole group of other neurons, adding complexity to how information flows through the circuit.

This is a rare type of synapse, and surprisingly not much is known about them. But in our experiments and analysis, these were the types of connections that appeared more often in the circuits involved with stored memories. And obviously, this is just the tip of the iceberg. If we can figure out which molecules and mechanisms actually regulate these types of synapses, we think it could open the door to developing new treatments for memory loss in aging, in dementia, and in other neurological diseases.

Obviously, the caveat here is that as we keep doing these analyses, we're going to keep generating ever more enormous datasets. And I think this really exemplifies how AI is becoming a partner in the neuroscience field.

So I'd just like to highlight and end with a few key points about how we think AI is really changing science. First, as you might have seen, AI is changing the pace at which we do science. Things that used to take a few months, for example tracing cells in an image or protein folding, can now take days or even hours. And patterns that we could not see before are starting to emerge.

It's also allowing us to deal with much more complex types of data sets. So things like multimodal data sets. You can think brain imaging, genetics, and behavior, and trying to figure out patterns in a single framework. So integrating these types of multimodal data is becoming more and more important. Multimodal data is going to allow us to ask questions at a different scale and complexity than ever before.

And lastly, I'd just like to speculate, but looking ahead, you can imagine that there could be a huge shift in how science is performed. And this could come in the form of testable hypotheses, theory building, and even reasoning from first principles. And that could really change how we perform science.

I'd just like to wrap up by thanking the community of people that are doing this work. As you can appreciate, it's very interdisciplinary and collaborative. I'm especially grateful to my mentors, Anton Maximov at Scripps Research and Mark Ellisman at UC San Diego. They're really guiding the field in terms of being at the intersection of neuroscience, imaging, and AI. I'm fortunate enough to work alongside very talented and passionate scientists, engineers, and staff who inspire me every day.

And as you can see, we're still in the early days of this journey, but AI is opening new paths for decoding the brain's complexity. I think we're making meaningful steps towards answering one of life's big questions: what makes a memory? And my hope is that these insights could help people like my grandfather, or...

[00:39:44] Speaker 3: Could help people like my grandfather, not just to understand how memories are made or lost, but to preserve them for our loved ones and future generations. Thank you for your time.

[00:39:59] Speaker 2: Thank you, Marco. Last but not least is Leo Impett.

[00:40:04] Speaker 3: I'm Leonardo, and I'm some sort of recovering computer scientist. I spent years in machine learning and computer vision, and about 10 years ago I was working at Microsoft Research. This is the classic TEDx backstory part. I was working at Microsoft Research in Cairo, actually, on a problem called neural image aesthetics: a convolutional network looks at an image and tells you how pretty it is, from 1 to 10. This is a pretty picture, this is an ugly picture. You'd be surprised how often these things are used in industry. And I thought, gosh, we really don't understand much about how complex images really are, how complex our relationship to visual culture really is. So 10 years later, I'm here as an art historian. Since then, I've been working on the relationship between computer vision and visual culture. This is one of my very early experiments: pose recognition to look at gesture in Renaissance art.

What people usually think of when they think of AI and art history is the history of paintings, of sculptures, of the sort of things you see in museums. And I want you to forget that perspective: don't think of AI to detect forgeries, or AI to see which painter copied which other painter. Think of art history not as the history of pictures, but as the history of seeing, as the history of the process of cultural seeing itself. The founder of the discipline, Heinrich Wölfflin, wrote that this was the fundamental task of art history. Now, obviously, no two people see the same way. Seeing is shaped by our culture, our identity, our body, our technology. And so if art history is the history of seeing, and computer vision is a new way of seeing, then studying computer vision is part of art history.

Now, this is what digital art history felt like a few years ago, in the late 2010s, in the age of ImageNet and YOLO and things like that. This, of course, is Las Meninas by Velázquez, a wonderful picture, but the gap between what computer vision could say about an image like this and what art historians could say about it was extremely wide. We worked, of course, on narrowing that gap, but progress was pedestrian. In 2021 came an enormous shift: multimodal foundation models, most famously CLIP, from this very company. As you know, it connects images and texts, and that link between image and text, which Hannes Bayeux has also written about, has always been central to art history; now it became computable. I want to show you a little experiment I did with Fabian Offert from UCSB, who's here in the audience, asking CLIP for images related to the word "power" from the collection of the Museum of Modern Art in New York. You'll not be surprised, but we were massively surprised back in 2021, that it gives us pictures of electrical power and also of political power.

So you can imagine, compared to the world of bounding boxes, we were massively impressed by its polysemy, its capacity for interpretation and visual ambiguity. And you can imagine that with multimodal foundation models, compared to object detection, the potential for machine learning as a tool in art history explodes. Now, there are enormous opportunities here, and I'll leave you to imagine them: looking at questions like violence or rhythm or sexuality or the body, through time, through space, through different sorts of artists or visual media. But it also brings an enormous new problem, which is: what sort of lens are we looking through? With the old sort of computer vision, when we talked about bias we generally meant differential rates of failure: a face detection algorithm works better on some sorts of people than on others, and the classic situation there was beards, of course.

Now we're working with predictions that are not simply right or wrong, but that always reveal a cultural worldview, so I think we have to go beyond talking about bias in the old sense. Bias is of course extremely important, but we have to start thinking more ambitiously than that, and talking about a machine visual culture. In the old sense, an unbiased model was simply a model that never makes mistakes; now there's no such thing as an unbiased model, no such thing as an uncultured model, because we're doing a sort of cultural computing here. So art historians are always going to have to play this double game: looking at the images they're studying, the collection they're looking at, but also thinking about the culture of the algorithm they're looking through.

Now, how do we do that? How do we understand the visual culture of a computer vision model? Fabian and I have just finished a book looking at the cultural history of neural architectures, which is extremely interesting. But of course, the first...

[00:44:42] Speaker 3: Which was extremely interesting. But of course, the first port of call would be to look at the training data, understanding the visual culture of a dataset. That means not just looking at what's in it, though of course that's important, what percentage of men, what percentage of women, and so on, but at its forms of depiction, its forms of representation, its ideologies, its iconographies, its ways of seeing, again thinking about this machine visual culture in a much more complex and ambitious way. Now, that's easy to say, but how do we do it? How do we understand the visual culture of a dataset of millions of images, of hundreds of millions or thousands of millions of images? Well, we have a discipline that does exactly that; it's what we've been trying to do for 10 years: digital art history, using computer vision to study art, to study the history of art. So we take the methods we've been developing to study art history, and we turn them around to look at artificial intelligence, to look at training data. This is the basic idea: using AI to study art history, using art history to study AI. And I think the two sides are really not that far apart when you think about it. I would argue there's even a sort of symbiosis here, where you cannot do one without the other.

[00:45:51] Now, we've used CLIP in all sorts of early projects. In 2021, we did a wonderful project with the Liverpool Biennial, the largest contemporary arts biennial in the UK, where we used CLIP as a sort of machine curator to reorganize their collection. Anyway, I knew I only had a few minutes here, so I'm going to show you an old but, I think, conceptually elegant project that I did, again with Fabian Offert, that shows our approach really well. We took 10,000 images from Google Street View from various cities, including Paris, as you see here; we encoded them with CLIP, and we built an interactive map, which is probably still online and probably still works. You can imagine you can look for all sorts of things in the urban landscape of Paris. What you're looking at now is a heat map of graffiti, of street art, and if I look into that particular red dot, I should expect to see some graffiti: I can open up the Street View image and look at the street art in Paris, and so on. It might be useful for urbanists, for art historians, for architectural historians.

[00:46:58] But what we were interested in doing is turning the algorithm and the dataset, if you like, inside out, to study itself. So what if we explore Parisness? This is a heat map of CLIP's vision of where in Paris looks like Paris. Now, of course, these are all photos of Paris. But we learned something about the urban history of Paris, something about center and periphery, and there's a whole complex question here of Baron Haussmann's expansions and so on. We also learned something about CLIP, about CLIP's idea of what Parisness looks like, which of course is tied to all the images that define what Paris looks like: images of Paris, images of Parisian-like cafes in other cities, and so on. These are some pictures of the most Parisian bit of Paris, the most Los Angeles bit of Paris on the bottom left, the most New York bit of Paris on the bottom right. But there's a serious point here, which is that we're looking at the visual culture of the urban landscape of Paris, the architectural history of Paris, but we're also looking at the algorithm itself. And it's not that easy to separate when we're learning about one or the other. What we're arguing is that we'll never be able to fully separate the two. We're always going to be looking through the algorithm and at it.
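Maps like the graffiti and "Parisness" heat maps rest on one basic operation: scoring every image against a text query in CLIP's shared embedding space. Here is a minimal sketch of that step, assuming precomputed embeddings; the 3-dimensional vectors and file names below are invented for illustration (real CLIP embeddings have hundreds of dimensions and come from the model itself).

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

text_query = [0.9, 0.1, 0.0]  # pretend text embedding for "Paris"
image_embeddings = {          # pretend embeddings of Street View images
    "street_view_001.jpg": [0.8, 0.2, 0.1],
    "street_view_002.jpg": [0.1, 0.9, 0.3],
    "street_view_003.jpg": [0.7, 0.1, 0.0],
}

# Each image's similarity to the query becomes its heat-map intensity;
# ranking the images gives "the most Parisian bit of Paris".
ranked = sorted(image_embeddings,
                key=lambda k: cosine(text_query, image_embeddings[k]),
                reverse=True)
```

Swapping the query text ("graffiti", "power", "Paris") while keeping the image embeddings fixed is what makes the same map probe either the city or the model's worldview.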

[00:48:13] Now, again, we've done much more complex work since then, which I didn't want to show today; we're working on the history of textiles, among other things. But I think this very first experiment of ours shows you the approach really clearly: studying the visual culture and studying the neural network don't just go together, they become the same thing. Anyway, this is the last slide, so we can go to coffee soon. This idea is really the foundation behind the new lab I've set up at the Max Planck Institute for Art History in Rome, a German national institute, but in Rome, Italy, and at Cambridge University, where we're mostly funded by the Gates Foundation. So it's deeply interdisciplinary: many, many people from art history, many people from computer science and engineering, but also deeply international. And the idea is this: we need new paradigms to understand the cultures, the visual cultures, of computer vision models, of multimodal foundation models. Not just what they show, but how they see, a much more complex discourse around machine visual culture. And of course those algorithms can, on the one hand, transform art history, as we've been looking forward to, but we can also turn them back on themselves to study the visual culture of AI. So: machines to study visual culture across both the history of art and artificial intelligence itself. I think I should stop there and we should probably get a coffee. Thank you very much.

[00:49:32] Speaker 2: Thank you so much, Leo. All right, hi, everyone. We are.

[00:49:40] Speaker 1: All right. Hi, everyone. We are back and we are live. So thank you so much for sticking around, and we hope you enjoyed hearing from all of our speakers today.

[00:49:50] I am Jane, I am a part of the global affairs team here at OpenAI. I am very very jazzed to be here with the OpenAI forum community and to join in on all these incredible conversations surrounding AI and research across disciplines.

[00:50:05] So let's dive right in. We've received so many great questions from the community, so I am very excited to kick off our live Q&A with Kate and Leonardo.

[00:50:15] So let's bring them up to the stage. Welcome, y'all. Good to see you both.

[00:50:20] Speaker 2: Thanks for having us. Yeah, this is great.

[00:50:23] Speaker 1: Let's go right into it. OK, Kate, this one is for you. This one is from Andre. How do you think we can overcome skepticism around using AI tools? And where do you personally draw the line between "vibe research," quote unquote, and using these tools to accelerate scientific work?

[00:50:34] Speaker 3: Yeah, that's a great question. I think right now we really need to make sure that we have domain experts in the room verifying everything that is coming out. So while AI can really help boost research, we already have a bit of an example in STEM, with a lot of great researchers really trying to push scientific research forward.

[00:51:07] And that is fantastic. I would say in the humanities and the social sciences right now, our best move is to actively engage our experts in evaluating all of our results.

[00:51:21] Speaker 1: Thank you so much for that. Leo, this question is for you, from Lex in our community. Lex says, "I work with artists who train young adults to use their stories to shift dominant narratives. What is the first AI tool you would introduce into a storytelling workshop seeking to shift perceptions?"

[00:51:29] Speaker 2: Yeah, interesting. There have been lots of interesting, kind of radical creative practices that use text-to-image generation and image-to-image generation. I think in the end there are multiple tools that do variations on the same thing.

[00:52:10] So: put a text in, get an image out, or put in some sort of video and get a video out, and so on. And the particular qualities of different tools are changing all the time, so rather than recommending one particular brand, as it were, I think I'd look at the work of particular artists who are engaging with creative AI in interesting and critical ways.

[00:52:40] I've just finished a two-day conference with Eryk Salvaggio, who works with us in Cambridge and is also known as Cybernetic Forests, a really exciting video artist who uses many of these tools and reflects on them in his work. So yeah, rather than a particular tool, I think I'd point you to that body of work; we can always reverse engineer the tools from the work when one finds an interesting way of working with the technology.

[00:53:08] I have to agree with that. I'm a photographer myself, and whenever people ask me what camera or what tools they should use, I always point them back to: use what you have, where you are, and learn the foundations and the concepts before narrowing down to the actual tools themselves. So I think the same principles apply here.

[00:53:34] Speaker 1: Great, thank you for sharing. Thanks. This question is probably for both of you, so Kate, let's start with you. This is from Svetlana: could you share one concrete example where AI significantly changed the direction of a research project, not just its speed?

[00:53:43] Speaker 3: Yeah, that's a really great question. I would say right now there's a really great synergy around this question of creativity, in particular in terms of fiction writing. So one of my favorite projects right now is benchmarking some of the different models to see which is the best creative writer and how well we can actually generate a bestseller.

[00:54:09] And in this case, I think it has really interesting things to say back to the field of modern fiction studies, which is why Modern Fiction Studies is actually having a special issue on this. We like to be very critical of the AI models, particularly fiction folks, saying they can't write fiction that humans are going to read. But I think we also have a lot to bring back into our field of fiction studies.

[00:54:38] Speaker 1: Into our field of fiction studies: understanding narrative, what's hard, what's easy, and those kinds of questions. Leo, I'll pass that back to you. And just to reiterate the question: could you share one concrete example of where AI significantly changed the direction of a research project, and not just its speed?

[00:54:58] Speaker 2: Thanks. Yeah, I completely take Kate's point there that generation and creativity are a really important part of that, in terms of cultural history. I'm struck by Johanna Drucker, a wonderful DH scholar, saying something like this back in 2012 or 2013: "Look, the digital is allowing us to do lots of things more quickly, but have we really changed the object of study in the way that, say, feminist art history or decolonial art history did?" And I'd argue that it's just starting to do that in the last two or three years. The field is actually turning back on itself, investigating the role of AI in everything from smartphone cameras to recommendation algorithms on social media, to art produced by AI, to advertising, and so on. But it's also changing the way we think of historical media forms. A PhD student is working on textiles; of course, we all know how the Jacquard loom punch cards were invented for textile production, not for computers, and later became a computational technique. So we're really rethinking the history of, in that case, 19th-century visual culture completely from the perspective of AI. So yes, it is making things quicker, of course, but there are important philosophical shifts happening at the same time.

[00:56:30] Speaker 1: Right. All right, we have a question from Jason. I'm going to flip this around: I'll start with Leo first and then we'll go to Kate. In your data, do cultural storytelling patterns of the target language noticeably reshape the narrative arc when a novel is translated? Oh, this one might be for Kate. Or are the original signals mostly preserved?

[00:56:47] Speaker 3: Sorry about that. That is a great question, and I think it depends a lot on the writer. So we do see writers like Kafka, that was one of our early projects, where there's already been a lot of theoretical work suggesting that Kafka is a different writer in different languages, and we do seem to be seeing that kind of effect. I would say it is somewhat writer-dependent and very complex in terms of that emotional component, for sure. But yeah, sometimes yes and sometimes no, which I realize is maybe not the answer you want to hear, but actually it's the more interesting answer, because it gives us a lot more to look at and to begin to ask why that is.

Speaker 1: Here at the forum, I think we always want interesting answers over the right one, or the seemingly right one, so thank you, Kate.

[00:57:43] Speaker 1: I have a few questions from Andre. Traditional research locks scientists at one level of analysis, e.g., molecular, and different scientific fields were invented for every other level, e.g., biology is a multi-level science. Who wants to take that one?

[00:58:02] Speaker 3: So, let me bring it over. I'm curious what Leo will say, but let me bring it over to a humanities and social sciences kind of question. I do think we are working on multiple levels. Just the other day, I had somebody ask me about explainability. So we're looking at explainability in terms of how we are modeling the shapes of stories. We're looking at feature extraction at particular points in those stories, but then we are also moving to a different level of abstraction to ask how stories manifest emotionally. What does that mean? Maybe they can persuade us for good but also for bad; maybe they can manipulate us. So we're working at multiple levels, and they're somewhat related and inform each other. But I think we're really interested in digging into the weeds and working on one level and then others, and some of us really like to move between the micro and the macro. That is very typical for literary studies in particular, where we take a small example and then try to extrapolate out. There's work at all levels. So if you are drawn to one level, do it, share it with us, and we will incorporate it into our work for sure.

[00:59:19] Speaker 2: Yeah, no, I completely agree. It's really interesting. In a way, one of the promises of computing in the humanities in general, which of course is something that's been happening for 50, 60 years, been happening in a big way for 10 years, and is exploding with natural language processing.

[00:59:36] Speaker 1: With natural language processing, with computer vision, with everything else we call AI. One of those promises was to scale up: to scale up in the size of the corpora, the collections that we look at, but also in the complexities that we can deal with. And in a sense, I don't want to overdraw the analogy between, yeah, you know, fundamentally everything is, whatever, quantum physics, and then on top of that you can build something like, you know, biophysics, and on top of that you can build biology, and so on and so on. So on the one hand, I don't want to kind of belabor the interdisciplinarity of that. I mean, biophysics is already interdisciplinary in an important way. And of course, we're working across disciplines in all sorts of ways like that. But the other thing that AI really brings is the ability to try to deal with the complexity of connecting up the scales without aggressively simplifying. So there's never really been, except in the most simplistic schoolbook ways, a large-scale art history in the way that there has been in, say, economic history. Now, that's because in economic history you can trace numbers across a thousand years. In art history, everything is so nuanced, so particular, so complicated that it's very difficult to do that. One of the promises of this kind of work is that we'll be able to deal not just with the scale but with the complexity, to be able to work across those scales. That's not too distant a metaphor from the kind of levels of science, but I think there's an analogy in there.

[01:01:03] Speaker 2: Great, thank you both. Leo, this next question is for you from Svetlana. Your work frames AI systems themselves as part of a visual culture. What do you think art historians should pay attention to when analyzing AI-generated images 10 to 20 years from now?

[01:01:23] Speaker 3: Oh, gosh, yeah. That's a... loaded question. That's a beautiful question. It's very, very, very difficult. I mean, in a way, there's a tension for art historians now, and for scholars of contemporary visual culture. I mean, cinema is a big one, of course, where on the one hand, you're sort of pulled toward looking at the particulars of every single new model. And particulars are important in art history, right? I mean, that's where you learn about anything. You have to get into the particulars, you have to get into the evidence of one specific model, of one specific image, of one specific trend. At the same time, things, of course, move so quickly that, for instance, now there's a big discussion of the aesthetics of latent spaces in a way that's, and this is the argument of a PhD student of mine, Ludwig Kirscherf, kind of anchored to GANs and therefore to the tradition of image generation from the late 2010s. And of course, latent spaces exist in different ways in different sorts of models, but it's not a great way to think about image tokenization, for instance, sorry if I'm getting too technical here. But there's this bunch of new art history that's just caught up with the models of ten years ago, and suddenly the architecture's changed. And so we've got all these nice, new conceptual metaphors and new discussions in art history, film studies, media theory, and it's already too late. So it's a real tension.

[01:03:20] Speaker 3: I realize I'm not really answering the question of what to look for in ten years, but I wonder whether one of the solutions for me has been to slow down and actually look at slightly older networks, look at exactly what Geoff Hinton was doing in the mid-90s when no one was paying attention, and try to build an art history that builds up from that as well as from the latest developments. Yeah, I feel like this whole conversation could actually be a forum event in itself, so I'm going to pass that to Caitlin and Natalie. I feel like that could be many different formats of discussion, so thank you for that question, Svetlana.

[01:03:35] Speaker 2: And this next one is for Kate from Alejandro. There are amazing legacy archives around the world, like the Archive of the Indies in Spain, that are not properly digitized. Are you planning to go global with your initiative?

[01:03:49] Speaker 4: Yes, we are. So we're building a prototype right now. We're looking at under-resourced languages, communities, and cultures, particularly the Black experience in New Orleans. But what we really want to do is build something that people can use all over the world. And one of the really interesting things is, we want to do this for communities. There are all kinds of really interesting questions about privacy and community data, but we're also building something that preserves the community's sense of how their data will be used. But, obviously, we also want to unlock all of these kinds of data so we can build models that understand more cultures and languages and our entire cultural history. So we're working on a prototype. We're going to make it open-source. We want to build it so that...

[01:04:34] Speaker 1: We want to build it so that archives all over the world without the resources can actually save a lot of our crumbling heritage, which is rapidly disappearing, vanishing. So yes, give us about six months and check back in. All right, Kate, we will hope to get an update from you at the forum. We can't wait to hear how that goes.

[01:05:01] Speaker 2: I want to be mindful of time, so I'm going to wrap with one last question for both of you, from Svetlana. For smaller institutions or researchers outside of the U.S. or the EU, what are the most realistic first steps to bring these AI methods into your labs or your classrooms? Leo, should we start with you?

[01:05:22] Speaker 1: Learn the theory. Not only is it free, it's amazingly well resourced online. I mean, it's much, much easier to go from zero to relatively well versed in machine learning with an internet connection and a lot of time than it is in art history. In a way, it's a much more open discipline. There are so many resources online. And I don't think the route into that is necessarily programming. I think the route into that is computer science, basically. And, you know, I'm not talking about taking four years out for a degree program, but even a little bit of the theory behind it can give you a much clearer view of exactly what's going on under the hood, and therefore a much better intuition, almost, for what interesting things you might be able to do next with the technology. And, you know, we are in an orchard of low-hanging fruit here. There are a million things left to do and not many people trying to do them. So that would be my recommendation.

[01:06:21] Speaker 2: I would second that. And I would also say, you know, my lab is quite small. There are two of us. But we work with collaborators all over the world to supercharge our research. And, you know, you start small. So start with one application. Start with one example. Try it out. Apply it. You know, we're always kind of doing the lateral thinking. So be inspired by one project. In fact, I've told Leo I teach his work too, so my students know all about what he's doing and are inspired by that. So take one piece and try it out on something that you know. One thing I would add is that both Leo and I obviously do research on AI and also use AI for more disciplinary research. And for both of us, I think it's super interconnected. It's really exciting and hard to pick one, and I'm constantly being torn between the two. I would guess that many of my colleagues would prefer to stick with disciplinary work to start. Research into AI is a little bit of an acquired taste, and it might take you a while to decide you really want to do that. So take these tools and use them for something in your discipline. And the last thing I would add, which Leo has also mentioned and which is similar to some of the work he's doing, is that originally, when people were using these tools, we were looking at massive scales, using huge corpora. But now we can actually use these tools to look at a much smaller scale. And that kind of approach feels much more similar to more traditional work in the field. So maybe start with one of those smaller approaches and see what it can do for you. And honestly, if we can do it in my lab, with limited resources, I think anybody can do it, all over the world. I invite everybody, and it's so exciting. And as Leo said, there's so much to do.

[01:08:13] Speaker 1: Wonderful. Thank you so much to you both for joining us live to answer our community's questions. If you all loved this session as much as I did, I highly recommend checking out the other amazing presentations from this event on demand at the forum. The team will be sharing links to those videos in the chat. And before we close out, I just want to make sure we plug the next forum event, which is happening next Monday, the 15th: Using AI to Fast-Track Scientific Breakthroughs. You can register for that at forum.openai.com. So thank you all again. Thank you again, Kate and Leo, for joining us, and we hope to see you at the next one.

[01:08:58] Speaker 2: Thanks so much.
