Event Replay: Minus AI, Plus AI, Times AI — A Vision for an AI Pedagogy
SPEAKERS

Greg Niemeyer is a data artist and Professor of Media Innovation in the Department of Art Practice at the University of California, Berkeley. He is also the former director and co-founder of the Berkeley Center for New Media, where he helped build an internationally recognized hub for research, teaching, and public engagement at the intersections of technology, culture, and the arts. His academic trajectory began with studies in Classics and Photography in Switzerland, before a move to the Bay Area in 1992 set him on the path toward new media. In 1997, he received his MFA in New Genres from Stanford University, a program that encouraged his interest in experimental forms and the blending of media, technology, and conceptual art. From an early age, Niemeyer was fascinated by mirrors—not just as physical objects, but as metaphors for media itself. He continues to describe his practice as a lifelong pursuit of making mirrors: systems that allow us to see from perspectives other than our own, reflecting both what we want to see and what we would rather not. For Niemeyer, such mirrors are essential tools. They help us recognize the contexts that shape our lives, the fragile ecosystems on which we depend, and the deep entanglement of human and non-human futures. By showing us what we might otherwise overlook, his mirrors invite us to make better, more informed choices. Niemeyer’s art has been exhibited internationally at venues including the ZKM Center for Art and Media in Karlsruhe, the San Francisco Museum of Modern Art, the San Jose Museum of Art, the Stedelijk Museum in Amsterdam, the Townhouse Gallery in Cairo, museums in Zurich and New York, and many other institutions. His work has been supported by the MacArthur Foundation, the National Endowment for the Arts, Intel, Pro Helvetia, and numerous other organizations. These commissions and awards recognize a career defined by both conceptual rigor and technical experimentation.

Natalie Cone launched and now manages OpenAI’s interdisciplinary community, the Forum. The OpenAI Forum is a community designed to unite thoughtful contributors from a diverse array of backgrounds, skill sets, and domain expertise to enable discourse at the intersection of AI and an array of academic, professional, and societal domains. Before joining OpenAI, Natalie managed and stewarded Scale’s ML/AI community of practice, the AI Exchange. She has a background in the arts, with a degree in History of Art from UC Berkeley. She has served as Director of Operations and Programs, as well as on the board of directors, for the radical performing arts center CounterPulse, and led visitor experience at Yerba Buena Center for the Arts.
SUMMARY
The OpenAI Forum hosted educator and data artist Greg Niemeyer from UC Berkeley for a talk on how AI is transforming learning, teaching, and thinking. Building on OpenAI’s mission to ensure broadly distributed benefits from AI, Niemeyer introduced a “Minus AI, Plus AI, Times AI” framework: minus AI for intentionally tech-free, embodied learning; plus AI for transparent, dialectical collaboration with AI; and times AI for AI as a medium that restructures knowledge itself.
He proposed a cognitive insight formula, C = Q × T × K, where meaningful learning depends on the quality of questions (Q), the strength of trust (T), and the richness of the knowledge base (K), emphasizing that if trust collapses, learning outcomes collapse as well. Throughout the talk, he shared concrete classroom experiments showing how AI can either de-skill students or spark creative divergence and multiplayer learning when used thoughtfully and transparently.
He closed by urging educators and learners not to choose one mode, but to move wisely among minus, plus, and times AI to keep curiosity, meaning, and our shared “we” at the center of education in the age of AI.
TRANSCRIPT
[00:00:00] Hi, everyone. Welcome. I'm Natalie Cone, head of the OpenAI Forum community and member of the global affairs team here at OpenAI.
[00:00:10] Artificial intelligence is an innovation like electricity. It will change how we live, how we work, and how we engage with one another. OpenAI's mission is to ensure that as AI advances, it benefits everyone. We're building AI to help people solve hard problems because by helping with the hard problems, AI can benefit the most people possible through more scientific discoveries, better healthcare, and education.
[00:00:40] However, we must be intentional about ensuring broadly distributed benefits of ChatGPT and OpenAI's tools, and we believe that storytelling is an important piece of the puzzle to achieving our mission. That's why we invite you all here to the OpenAI Forum every week to hear from members of the expert community, learn from their use cases, methodologies, and frameworks that we believe highlight the potential for AI to help us all solve hard problems.
[00:01:07] Our hope is that you're able to see yourself reflected in the challenges and use cases surfaced and feel inspired to explore and experiment on your own. If there's one thing we know, it's that students use ChatGPT. So recently, the OpenAI Forum convened faculty from an array of higher education institutions from public universities, Ivy Leagues, HBCUs, liberal arts colleges, and community colleges to showcase educators' innovative classroom use cases, frameworks for AI pedagogy, and to discuss opportunities and challenges they were all experiencing on higher ed campuses around the world.
[00:01:48] Our speaker today was a member of that group who shared such a compelling message that we knew we needed to bring him back for a more comprehensive version of his talk, and this time we needed to ensure that everyone in the world had access to it. So in the forum this afternoon, Greg Niemeyer will explore how artificial intelligence is transforming what it means to learn, teach, and to think.
[00:02:03] In this talk, Minus AI, Plus AI, Times AI, he outlines a framework for education in three modes: minus AI, where learning happens the old-fashioned way, with just the human brain and human experience; plus AI, where AI becomes an intentional, transparent collaborator with humans; and times AI, where AI redefines knowledge itself.
[00:02:36] Greg Niemeyer is a data artist and professor of media innovation in the Department of Art Practice at the University of California, Berkeley. He is also the former director and co-founder of the Berkeley Center for New Media, where he helped build an internationally recognized hub for research, teaching, and public engagement at the intersections of technology, culture, and the arts.
[00:02:59] Also, I just wanted to share with everyone that Greg is also opening a show, Water Futures, a collaborative exhibit opening in the Bay Area in January 2026. We're going to drop some more information and details in the chat. We really hope that you can swing by and visit after this talk. Please help me and welcome Greg to the OpenAI Forum stage once again.
[00:03:23] Hello, Natalie. Thank you so much for that lovely introduction. And how are you all doing? I hope you're doing great. How are you all doing is a basic human question and we really are a question species. We ask questions all the time.
[00:03:34] And they really kind of define what makes us human in a way. When did a great question last change what you wanted to ask next? Across the ages, humans queried greater minds for insight, knowledge, and comfort. Historically, institutions like the Oracle at Delphi, Nalanda, the White Horse Temple, and Chichen Itza served as mirrors of their societies. They reflected what their people treasured to learn. They symbolized the endurance of the urge to query a greater mind.
[00:04:09] Personally, we query greater minds when we're young, when we test our first words, hoping for confirmation like, yes, that's a dog. And later, when we ask, why, why, why, until it's turtles all the way down. I think the fundamental loop for generating cognitive outcomes connects a question, a knowledge base, and trust.
[00:04:32] Let me propose a tight frame for this process, which involves a learner and a responder, usually the greater mind. The responder is usually a different person or a different entity than the learner, although in the special case of meditation, learner and responder can be one and the same. Your mind is Buddha, and Buddha is your mind, as the Zen master, Mazu Daoyi wrote.
[00:04:55] So, let's dive into this formula, special cases aside.
[00:04:58] I propose that cognitive outcome C equals Q times T times K. The quality of the question, the quality of the knowledge base, and trust in the responder all range between 0 and 1. Alignment is inside Q because good questions are situated in the learner's goals and context. Scaffolding is built into T, the trust value, because awareness of the learner's capacity, clear reasoning, citations, and care make answers trustworthy.
[00:05:29] Because the model is multiplicative, it has hard gates. If any one factor goes to 0, C goes to 0, no matter how strong the others are. No amount of brilliant knowledge or clever prompting compensates for a lack of trust: if T falls to 0, the cognitive outcome C falls to 0, regardless of Q or K. This loop also compounds. A good answer raises trust, enriches the knowledge base with relevant weights, and provides the basis for a sharper next question.
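The gating and compounding behavior of the formula can be sketched in a few lines of code. This is an illustrative toy model, not anything presented in the talk: the function names, the feedback gain, and the update rule are my assumptions.

```python
# Toy model of C = Q * T * K, with all factors in [0, 1].
# The feedback rule (a good answer nudges Q, T, and K upward) is an
# illustrative assumption, not a formula from the talk.

def cognitive_outcome(q: float, t: float, k: float) -> float:
    """Multiplicative model: if any factor is 0, the outcome is 0."""
    return q * t * k

def learning_loop(q: float, t: float, k: float,
                  rounds: int = 3, gain: float = 0.1) -> list[float]:
    """Run the question-answer loop; each outcome feeds back into
    the factors, so good exchanges compound."""
    outcomes = []
    for _ in range(rounds):
        c = cognitive_outcome(q, t, k)
        outcomes.append(c)
        # A good answer raises trust, enriches the knowledge base,
        # and supports a sharper next question.
        q = min(1.0, q + gain * c)
        t = min(1.0, t + gain * c)
        k = min(1.0, k + gain * c)
    return outcomes

# The hard gate: perfect questions and knowledge cannot rescue zero trust.
print(cognitive_outcome(1.0, 0.0, 1.0))  # 0.0
```

Running `learning_loop(0.5, 0.5, 0.5)` returns a rising sequence, mirroring the compounding the talk describes, while any call with a zero factor stays flat at zero.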
[00:06:03] Pedagogy, then, is the art of keeping that loop going, be it during a long car ride with the kids, a semester-long course about computer graphics, or centuries of learning and research at a renowned school. Protect the gate, grow the loop. We've already shown that the quest for positive cognitive outcomes is a cultural constant. Every civilization has used its faith, power, technology, and language to sustain the loop of questions and answers.
[00:06:33] Today, AI extends that lineage, using machines and code to create new loops of cognition. It changes what learning feels like, what counts as knowledge, and even what it means to think. Our reactions can swing from rejection to enthusiasm, sometimes both in the same day. The formula still holds, though. With AI, we have a knowledge base, K, of unprecedented scope. But what about the quality of our questions and the degree of our trust?
[00:07:01] What about traditional systems of learning? How do we maintain trust, questions, and knowledge in light of the newly single status of traditional learning systems? To really answer these questions, I'd like us to think about education in the age of AI not as a salvation, not as a moral panic, but as a living ecology. That feels urgent now. Machines can spread misinformation faster than we can check it, so we need ways of learning that sharpen our ability to tell what's true from what's made up. When that ability fades, meaning fades with it, and communication begins to fall apart.
[00:07:42] Without communication, we lose the very possibility of learning, and without learning, we lose the capacity to carry on with civilization. So instead of mistrust or denial, let's look for ways to work with AI. I propose three modes for doing that: minus AI, plus AI, and times AI. Subtracting AI, adding it, and redefining knowledge with it are three ways of structuring learning in relation to intelligent tools. Each mode has its value, but also limits, and the real challenge is learning when to shift from one to another.
[00:08:15] Let's begin with minus AI. The spaces where we intentionally set technology aside. The arguments are familiar. AI can be costly, energy hungry, derivative, biased, and fragile. It can de-skill and dehumanize students, obscure sources, and create dependencies. Some AI systems even prey on our imagined inadequacies. They make us weaker thinkers by offering quick answers without personal engagement.
[00:08:41] On a quiz, for example, we might have ChatGPT generate an answer without understanding either the question or the response, because it passed through our browsers, not through our brains. In that way, we never feel what it means to know something. This prosthetic shortcut can become a trap leading to a kind of mental atrophy. Unless, of course, AI becomes part of our body, as Katherine Hayles suggests.
[00:09:06] She writes, "The post-human view thinks of the body as the original prosthesis of the mind we all learn to manipulate so that extending or replacing the body with other prostheses becomes a continuation of a process that we began before we were born." But learning is supposed to be something that transforms a person. When I draw a landscape, it's not only about the output, it's about how I change and grow as a result of observing that landscape closely.
[00:09:34] I also have an opportunity to lay my ego down and identify with the landscape, to become the breeze in the trees. When I ask an external machine to draw that same landscape, it might produce something even more polished than my own attempt. But I lose the benefit of direct experience. That's what I mean by instrumentalizing learning, treating it as a means to an end instead of a process of transformation.
[00:09:56] We generate results that meet the assignment, hand in the required object, and quietly cut ourselves, the primary subject of education, out of the loop. I see this every day. The problem scales. Learners with low trust already doubt their abilities and are more likely to fall into this trap, while confident learners keep exploring for the joyful sake of figuring things out. Or as a student recently put it, I am just so curious, lol. The trust gate drives the question gate, but the trust gate is not only fragile, it's also unevenly distributed. When T, the trust, collapses, Q, the questions, collapse, and C, the cognitive outcome, drops to zero. But not equally for everyone. People already habituated to educational systems often have mechanisms to repair trust. Those who distrust educational systems have far fewer buffers, so an AI misstep, an opaque answer, a bias, or a broken expectation can compound that latent distrust. In a sense, AI may multiply prior cognitive differences, widening epistemic divides. For this reason, protecting the trust gate is a matter of equity as much as it is a matter of pedagogy. We must design systems and practices that earn trust from every learner, not only those already comfortable with academia.
[00:11:19] And yes, sometimes we simply shouldn't use a machine when we have the time and skill to do the hard work ourselves. Many students vocalize this kind of diagnosis and oppose AI entirely. But the deeper reason for minus AI isn't mistrust or cheating yourself out of your education, it's presence. Minus AI pedagogy foregrounds direct experience: drawing from observation, debating ideas, and dancing together. It restores learning as an embodied inquiry where head, hands, and heart connect. Thank you, Johann Pestalozzi. All knowledge is social. Minus AI builds our human ability to collaborate, disagree and agree, and create meaning in real time.
[00:12:01] So for example, this semester I'm teaching Visualizing California Water Resources, a course here. My students and I analyzed data showing that a drought had taught people to conserve water even after it ended, which we took to be a positive outcome. Later, we visited a water filtration plant on a field trip to discuss our findings with the professionals. The staff there were less enthusiastic. To them, less water meant less revenue, and that meant less maintenance of essential infrastructure, and potentially fewer jobs. We were surprised. That encounter was a powerful minus AI moment, direct experience challenging theoretical insight. Meeting people whose livelihoods were affected connected numbers to human reality. Minus AI spaces remind us that not everything that's valuable in education can be predicted, measured, or optimized. Sometimes learning's most important drivers, error and insight, judgment and empathy, reason and intuition, bonding and boundaries, emerge when no machine mediates the moment.
[00:13:06] So how does our cognitive insight formula look in the minus AI context? In minus AI, C equals Q times T times K, with T, trust, anchored in human rapport. Learners extend initial trust that the responder maintains through presence, transparency, and calibration. That's what teaching feels like in the real classroom. Q is strengthened by the learner's interest, subject focus, and context. And K, the knowledge base, of course, is bounded by the responder. Instructors must accept the limits of their knowledge to protect trust: I know what I don't know. Overreaching on K undermines T and collapses C, the cognitive outcome. The pedagogical play here is to grow trust with presence and sharpen questions with deep knowledge.
[00:13:56] So let's move on to Plus AI. Plus AI sounds like intelligent companionship. Let's see. The additive mode, here we recognize that AI can extend human reach and expand access to knowledge across space and time. It supports diverse learning styles, accelerates feedback, and if done well, it can help students explore rather than flatten curiosity. My prognosis is that most universities will unofficially reach near 100% AI adoption soon. And those who reject it categorically will not prepare students for the AI world they will live in when they graduate.
[00:14:32] Already, over 80% of high school students report using AI for school, but over 50% of teachers report concerns about AI usage. This is a problem for academia. If we don't build on the adoption of AI among those incoming students, we are wasting a great opportunity to teach with AI thoughtfully.
[00:14:54] We also risk losing our students' trust if we don't understand their actual learning practices. Who are we pretending to be here?
[00:15:00] And to make sure adding AI is not just outsourcing thought outright, we need a dialectic engagement with AI: not a one-way use of the tool, but a critical engagement that sharpens how we think. This kind of engagement begins with transparency at the foundation, knowing how AI systems work, and extends all the way up to how we design prompt sequences.
[00:15:23] It means, at a minimum, being intentional, including AI in course design, creating citation systems for AI-generated work, building policies that evolve as fast as the tools do, and offering critical courses on AI as a subject across every discipline, not just in data science. We have to become real collaborators with AI who mutually respect each other. Ethics is a precondition, not a footnote.
[00:15:51] History is the foundation. As Nina Beguš notes in an excellent book, Artificial Humanities (that book, by the way, will be released tomorrow, so look out for it), AI and the humanities have quite some history, she writes. Mythologies are rife with artificial humans.
[00:16:08] Fictional scripts have shaped AI development through their immense cultural power, even when the imaginary is far removed from the actual technologies that are available. Many fictional scripts cast AI in dystopian terms, though. One of my favourite AI narratives, one I saw when I was a young boy, is Jean-Luc Godard's Alphaville, a 1965 French New Wave tech noir about Lemmy Caution, an American secret agent who infiltrates a city-state formerly known as Paris.
[00:16:39] The city is ruled by the AGI computer, Alpha 60. Lemmy's mission is to extract a missing American citizen, but in the course of events, he also takes down Alpha 60, undoing a logic-only regime that has outlawed poetry and love. The overly AI-dependent citizens of Alphaville are initially so disoriented by the loss of Alpha 60 that they can but crawl along the walls, but there is some indication that they retrieve their human agency.
[00:17:06] The missing American certainly retrieves her humanity as she confesses to Lemmy as they leave the city formerly known as Paris: "Je vous aime" (I love you). The dystopia here is not AI itself but the notion that AI stands in a totalitarian relationship to its dehumanized subjects, a ballad of algorithmic dependency. Such narratives help us understand that relationships between AI and people must be reciprocal and nuanced, and the best way to maintain that relationship, I argue, is through education.
[00:17:37] Often, though, AI presents as a Teflon-coated product, immaculate and unassailable, born ex nihilo in perfection. Schools often support this notion, what with all the chromium brains floating in space and casting laser beams of wisdom all about them. But the reality is that AI has quite the résumé. AI's intellectual, material, social, cultural, and labor history results from a constant dialogue with humans.
[00:18:03] AI went to many schools and learned from many greater minds, so the relationship is originally essentially reciprocal, which is a great basis for a human-grounded and dialectic pedagogy. Let's look at some examples of how we can build that dialectic relationship further. The obvious move is to engage ChatGPT in Socratic discourse.
[00:18:18] For example: generate methods for increasing water service revenues, then ask ChatGPT to refute these, and then respond to those arguments with personal experiences. I did that with my students, and they came up with the wildest ideas for increasing water service revenue, for example, plant more almond trees. Of course, ChatGPT refuted those kinds of ideas, but it was interesting to see that the students generated a really broad array of ideas, and then ChatGPT narrowed them down to more logical responses.
[00:19:01] Then the next move would be for the students to respond to these arguments by ChatGPT with more personal experiences. That's what we did. In the classroom, we actually looked at these dynamics, and then we played a role-playing game, very minus AI, in which we pretended to be customers of a rapidly-changing water authority. That game really drew out a lot of personal experience.
[00:19:22] These kinds of processes are not outsourcing thought at all. That's training dialectical reasoning. Exploring dialectic relationships with AI, I also uncovered an unexpected but very welcome effect: the exploration versus replication effect. In a computer graphics course, I split our excellent UC Berkeley students into two groups. Both groups had the same project, which was to create a scene with thousands and thousands of fallen leaves on the ground using Blender. They had less than 30 minutes. What can I say? It's a hard class.
[00:19:52] I gave group one access to a ChatGPT project loaded with a transcript of a tutorial. In dialogue with the AI tutor, students asked questions, got hints, tried alternatives, explored and iterated. After 30 minutes, 12 out of 12 scenes the students turned in met the spec for leaves on the ground, yet each scene was significantly different.
[00:20:17] The group manifested creative divergence: each project has different moods, leaf meshes, species and decay, textures, noise fields, and curious debris. And here's an example of some of those images. You can see that all the results the students generated were really quite different. This one has polygonal leaves, this one has a beautiful pattern, this one has really nuanced lighting, this one has an interesting heap, this one looks like somebody ran around in a circle. They're all quite different.
[00:20:48] So I gave group two a single video demo to follow. Students also met the brief, but 10 out of 12 scenes replicated the demo's look exactly, with only 2 out of 12 showing creative divergence. This is one of the two that showed creative divergence; all the others did pretty much exactly what the video suggested they do. So: same prompt, different teaching tools.
[00:21:12] With AI, interaction diversifies outcomes, while video forces replication. In the framework of an art class, replication is valid, but creative diversity is far more desirable. Sorry, YouTube. This casual experiment showed me clearly what the benefits of a dialectic exchange are, even if the dialogue is with a machine. We must model such dialectic methods through intentional course design. And that's the whole point of plus AI teaching: to deepen the dialogue.
[00:21:38] We ask students to articulate what distinguishes human writing, art, creative divergence, or reasoning from algorithmic production. Not to defend human uniqueness blindly, but to understand where human intentionality thrives. The promise of dialectical pedagogy is to teach students how to propel themselves into deeper inquiry on their own, with AI as a co-agent. All learning which is not self-propelled kills the root of independence in the student.
[00:22:05] That's again what Pestalozzi wrote in 1780: all learning which is not self-propelled kills the root of independence in the student. So how can we build this self-propelling force? To understand the relationship with a co-agent, we can remember that AI cannot function without a question. Questions are the lifeblood of AI, and that's to be celebrated, although there is no public tally of how many questions ChatGPT has answered to date.
[00:22:37] ChatGPT itself estimated, based on reasonable usage data, that the lifetime total is at least in the high hundreds of billions and plausibly near a trillion. So maybe today is the very day ChatGPT answers its trillionth question. But here is another challenge for plus AI: the trillion questions are parallel play. Individual users don't know about each other. The interface is fiercely single player.
[00:23:03] But we teach people not only to think for themselves, but also to look out for each other. The task of AI is to be useful to the individual; the task of a university is to connect such people in a common journey. If we want more AI in education, we need to structure a multiplayer mode around this tool. Currently, even in project mode, individual questions run in parallel.
[00:23:27] The paths are all separate, unlike the common path a group can take in a classroom. The common path allows people to watch each other learn. One person's breakthrough can motivate the whole group, while another person's breakdown can help a group find meaning in support. Laughter, surprise or a setback can help a group see itself so it can move forward together without stalling or flooding participants.
[00:24:02] Defining the edge between feeding and overwhelming, between desire for insight and disempowerment, between individual a-ha and collective reflection: these are objectives of good course design, and we now get to learn how to do that with AI tools, even if they tend to address the single player. Coming back to the formula C equals Q times T times K: when Q, the questions, are both individual and collective, they deepen trust in a system where K, the knowledge base, is already a human-machine collective, a mix between machines and humans.
[00:24:37] Getting this right may determine the depth of the learning that AI makes possible. Getting it wrong, however, will make collaboration the first victim of too much AI. AI should not only support faster access, it should also support a more meaningful collective path.
[00:24:50] How? I invite you all to look up now, pick something in your room, and describe it in three ways: What is it called? What color is it? And how old is it? This is us reflecting on our own fractured collective here on the OpenAI Forum. We don't see each other, but we're all in a space. Natalie, who introduced me earlier, will pull your answers from the chat and build a mirror of our shared world, even while we are apart. Here, this virtual meeting space becomes a collective mirror. It reflects the world from a point of view that isn't our own. In my view, every medium is both a mirror and a meeting. It shows us ourselves even as it connects us to others.
[00:25:43] Now, consider the 1 trillion questions ChatGPT has received. What do they say about us? AI too is a powerful mirror, one that clarifies human intentions, interests, and capabilities rather than blurring them. Like a bathroom mirror, it shows both what we love and what we might prefer not to see. But either way, we face what's there. And while none of us can access or process the totality of those trillion prompts, we can still build multiplayer plus AI learning in the classroom. In doing so, we learn not only about the subject matter and ourselves, but also about one another. That mutual awareness is the foundation of civilization, learning to understand and respect each other. At the end of the day, the task of education is to help each person become their best self, and together for us to become our best we.
[00:26:33] This we is now mirrored in AI, and increasingly it includes AI itself as a member of the learning community, perhaps even one learning about its own place within that community. So how does our cognitive insight formula look in the plus AI context? In plus AI, intelligent systems vastly expand the knowledge base K, offering the largest immediate gain. Yet hallucinations, bias, and opacity can weaken both K and T, the trust factor. Trust is the most fragile systemic link. It must be actively sustained through verification, transparency, calibrated uncertainty, clear ethical standards, energy accountability, collective engagement, and privacy care. The question factor Q strengthens when AI is used as a dialectic partner but diminishes with over-reliance or disempowerment. Q also evolves from individual curiosity to multiplayer collective inquiry, where question paths emerge across human-machine networks. The pedagogical play here is to foreground the historical and social roots of AI, design dialectic engagement, cultivate multiplayer learning structures, sustain group dynamics, and center human experience.
[00:27:50] Now we're ready to move on to times AI. Natalie, are you getting any responses in the chat? I hope you are. What I see here is a black clock made in 1960 and given to me by my wife. So it's about 45 years old, I guess. No, no, 65 years old. It's a beautiful clock, and it's right there. So maybe you can add that to the database, and please go ahead and add your observations. It'd be really nice to see how our collective space can be reflected at the end of the lecture. In the background, we're running these analyses as the messages come in. Okay, so that is a great bridge to times AI, because it feels like we're multiplying our awareness of each other with AI right now.
[00:28:31] So beyond plus AI, let's explore times AI, the transformative mode. Here, AI isn't an accessory but a medium that reorganizes how knowledge itself is structured. What is the potential of this significant paradigm shift? In this mode, education can become responsive, distributed, nonlinear, and also radically different. AI can shape courses dynamically, adjusting content and pace to each student, much like dynamic difficulty adjustment in game design, where the system keeps the player in a state of optimal challenge. Gradus ad Parnassum, that means steps toward Parnassus. How do we adjust these steps? How can we adjust steps dynamically in learning so that each student takes the steps that they can take at the time?
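One way to picture the dynamic difficulty adjustment borrowed here from game design is as a tiny feedback controller. The target success rate, band, and step size below are hypothetical choices for illustration, not parameters of any real course system:

```python
# Illustrative dynamic-difficulty controller for a learning path:
# nudge difficulty toward a band where the learner succeeds often
# enough to stay motivated, but not so often that challenge vanishes.
# Target, band, and step values are arbitrary illustrative assumptions.

def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.7, band: float = 0.1,
                      step: float = 0.05) -> float:
    """Return the next difficulty level, clamped to [0, 1]."""
    if success_rate > target + band:      # coasting: add challenge
        difficulty += step
    elif success_rate < target - band:    # struggling: scaffold down
        difficulty -= step
    # inside the band: optimal challenge, leave the pace alone
    return max(0.0, min(1.0, difficulty))

print(round(adjust_difficulty(0.5, 0.95), 2))  # 0.55 (too easy, step up)
print(round(adjust_difficulty(0.5, 0.30), 2))  # 0.45 (too hard, step down)
print(round(adjust_difficulty(0.5, 0.70), 2))  # 0.5  (optimal band, hold)
```

The clamp mirrors the pedagogical point: the controller can only pace the steps, not replace the climb.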
[00:29:27] For decades, games like flOw, Left 4 Dead, and Candy Crush have used adaptive design to keep players engaged. In education, similar scaffolding can sustain curiosity: challenging, but not overwhelming. If Times AI runs parallel to traditional education, it also frees faculty to focus on what only humans can do: mentorship,
[00:29:48] dialogue, and community building, while AI can handle repetitive administrative tasks. I think there's room here for courses that are entirely run by AI. Anybody could take them at any time, and they would always be paced to the learner's capacity, interest, and questions.
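The dynamic difficulty adjustment borrowed here from game design can be sketched in a few lines. The class name, the five-attempt window, the target success rate, and the step size below are all illustrative assumptions, not details from any system named in the talk:

```python
# Minimal sketch of dynamic difficulty adjustment (DDA) applied to learning:
# keep the learner near a target success rate by nudging task difficulty
# after each attempt. All parameters are illustrative assumptions.

class AdaptiveTutor:
    """Adjusts task difficulty toward a target success rate."""

    def __init__(self, difficulty=0.5, target=0.7, step=0.05):
        self.difficulty = difficulty  # 0.0 (easy) .. 1.0 (hard)
        self.target = target          # desired success rate
        self.step = step              # adjustment per attempt
        self.history = []             # 1 = success, 0 = failure

    def record(self, success: bool) -> float:
        """Record an attempt and nudge difficulty toward the target."""
        self.history.append(1 if success else 0)
        recent = self.history[-5:]            # look at the last 5 attempts
        rate = sum(recent) / len(recent)
        if rate > self.target:                # too easy: raise the challenge
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif rate < self.target:              # too hard: ease off
            self.difficulty = max(0.0, self.difficulty - self.step)
        return self.difficulty
```

A real course engine would choose the next exercise from the current `difficulty` value; the point of the sketch is only the feedback loop that keeps each student on steps they can actually take.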
[00:30:04] And so we haven't done this yet. We have online learning, but we don't really have these intelligent systems that would create courses on demand for students and be able to give students the same credit they get from humans. So it's a huge challenge to figure out how to do that, and I'm really eager to take that challenge on.
[00:30:29] So there are also administrative opportunities. I think at UC Berkeley, we still have a system for students to register for classes that requires each student to log in during a specific one-hour time window. If they miss it, they don't get into their preferred courses. So they have to skip their current class to register for future classes. That's weird. How could Times AI make that process more fluid and equitable without distracting from classes?
[00:30:53] AI will give rise to new administrative structures based on networks, rather than lists. Such shifts yield time back to small group discourse, research, and discovery. Times AI also implies that universities, such as the UC system, with roughly 300,000 students and 26,000 faculty, should research AI as a medium deeply across technical, cultural, and epistemological domains, and fund that work accordingly.
[00:31:21] This would include creating critical work that pushes AI beyond the generic use case. For example, can we train models on ecological systems, such as coral reefs, mosses, or the wisdom of the winds, to help us think collaboratively beyond strictly human frames of reference? Such systems paired with autonomous agents could bring about vast structural changes.
[00:31:39] Can we think of AI as a medium in its own right? New media in their infancy always replicate old media, from the Photoshopped Mona Lisa to the AI Spring Break video. It's embarrassing. It took us about 100 years to understand the full potential of the unique capacities of photography, and now we have passports with photos and satellites with map layers.
[00:32:08] And now how long will it take us to understand AI? And what will we have then that we don't have now? Once the medium comes into its own, it will cease to replicate old media and instead will enable structural shifts in the very way societies can function. Following the idea that quantitative shifts can produce qualitative shifts, we might ask how human intelligence itself evolves as we increasingly offload cognitive labor to AI.
[00:32:34] As our machines absorb ever more of our cognitive functions, can our roles shift from cognition to a kind of meta-gnosis? And by meta, I mean the Greek preposition "beyond," not the sunglasses company in the South Bay. Can we move beyond the intelligence we now know? Or are we approaching an evolutionary limit where further outsourcing begins to erode rather than expand our capacity to think?
[00:33:04] This is where academia can serve its constituents best: exploring new epistemologies, new relationships between data and meaning, new ways of thinking, and perhaps even new definitions of what it means to be human.
[00:33:19] So how does our formula play out for Times AI? Intelligence systems don't just expand the knowledge base, they restructure how knowledge itself is organized, multiplying connections and generating new epistemic forms and material manifestations. The largest gain is in K, the knowledge base's dynamic adaptability.
[00:33:37] Knowledge becomes dynamic, distributed, and continuously updated. Yet this fluidity introduces volatility; shifting contexts, opaque models, and runaway automation can erode trust if interpretability, authorship, and accountability blur. Times AI can also introduce structural changes such as the emergence of autonomous agents that act, negotiate, or create on behalf of humans.
[00:34:01] These systems require us to teach new societal norms of cooperation, coexistence, and shared responsibility among humans and machine actors. Trust, T, remains the systemic hinge, sustained through shared governance, traceable data lineage, and transparent models.
[00:34:22] The formula C = Q × T × K may even flip as the human becomes a responder to the AI. We can expect a Times AI world where AI has as many questions for us as we have for it. That reciprocity may become the new foundation of post-AI cognition.
[00:34:42] The pedagogical play here is to treat AI as an agentic partner rather
[00:34:46] than just a passive element, and to see it as a full medium. We must build adaptive learning environments that adjust challenge levels, design assignments where students both confront and cooperate with AI reasoning, research models based on radically different knowledge bases, make room for unexpectedly successful or failed interactions between humans and machines, foreground ethics and model design, and finally, emphasize that civilization, not output, is the ultimate metric of education.
[00:35:20] It would be comforting to imagine these three modes as separate: minus, plus, and times AI. But education rarely offers such clarity. The boundaries blur. Finding fit between AI and education is reciprocal. Is a field lab that uses AI sensors to support manual observation a minus AI or a plus AI space? Is an AI tutor built into a software tool like Blender fostering dependency, or enabling creative divergence? If we find an overzealous AI agent painting us into a corner, can we shift back to minus AI? If a group uses AI to launch a discussion and then later continues without it, have we crossed into times AI or returned to minus AI? Such ambiguity is not a flaw. It's the point.
[00:36:11] Wisdom emerges from navigating differences, not avoiding them. As the political consultant Shaodi Fulb recently said in a conversation about this very talk, wisdom emerges from navigating differences, not avoiding them. So let's navigate the differences between minus, plus, and times AI. Designing AI-integrated education means continually asking what belongs where, why, and how. We will misclassify, adjust, and redesign. That's the pedagogical work of our generation. Let's not lock in too soon, but plan for sustained pedagogical innovation.
[00:36:46] AI is a young medium, and its roadmap is not static: it's an evolving choreography between human and machine attention. The stakes are high. If we don't get this right, we could lose collaboration, meaning, communication, and nearly everything that makes us human. Alphaville, maybe? That's the danger of an AI override. If we get it right, AI pedagogy can make us more human, more capable, more caring, and more truthful. And if we get it right, how can we make these tools public? How can we serve a broader audience than our immediate students? Because the future of learning is not about keeping up with machines. It's about keeping the quest for meaning alive toward a new humanity.
[00:37:29] This is the invitation to you. The image you see here captures what words can't: a tangle of connections between minus, plus, and times, not as separate territories, but as an integrated ecosystem of learning. Notice how no single mode dominates. Each pulse feeds the others. Minus AI grounds us in lived experience. Plus AI extends our mental reach. Times AI transforms what we use our brains for.
[00:38:06] The task of pedagogy now is not to pick a side, but to conduct this orchestra, knowing when to mute, when to amplify, and when to change the score entirely. And that, of course, requires the courage to ask questions, the right questions at the right moments. So as we move forward today, I invite you to ask three questions. Where in your practice should you subtract AI to recover presence? Where should you add AI to expand possibilities? And where might you multiply by AI to imagine thinking itself anew?
[00:38:44] The future of education will not be minus or plus or times AI. It will be how we move among them wisely, kindly, and together. When did a great question last change what you wanted to answer next? It's up to you. Thank you very much. So now we have time for some questions. Natalie, were you able to generate an image of our collective space here?
[00:39:04] Hi, Greg, yes I was.
[00:39:15] Oh my goodness. What tool did you use to make that happen?
[00:39:18] I used ChatGPT, and I was ferociously copying and pasting. It was such an awesome idea, Greg. I wish I had had you in the community two years ago. I think this type of participatory activity was really fun. I love to see everybody's answers in the chat.
[00:39:35] OK, so now I see you're sharing your screen, and it looks like we have a map there. Can you make that a little bit larger maybe?
[00:39:46] Let me see. There we go. Yeah. Oh yeah, yeah, yeah. There we go. OK, cool. Now I can see it, too.
[00:39:54] Let me make this a little bigger on my end. Oh, and then there's some stuff going on in the bottom left corner as well.
[00:40:00] Yeah, we got some decor and sentimental. There's a little club of decor and sentimental folks.
[00:40:05] And there's tech devices, and there's miscellaneous. And there's one tech device that's very old, because the diameter of the circle shows the age.
[00:40:13] And the color of the circle, of course, shows the color of the object. We have a lot of black things. We have some red things.
[00:40:19] And we have some orange things. And we have personal items and miscellaneous. So you must have had a very divergent set of elements that you brought.
[00:40:27] Did anyone stand out for you?
[00:40:29] Well, I do believe that the larger black circle was the gift from your wife.
[00:40:34] Oh, really? That's the clock. And you know what is funny? It looks exactly like the clock.
[00:40:41] Yeah, I think so. I mean, honestly, what just stood out to me. There we go. That's it. Oh, yes. Exactly. Exactly.
[00:40:51] Well, what stood out to me was just how enthusiastic everyone was to participate.
[00:40:56] Yeah, yeah, yeah. We should get thousands of these. Now imagine if you did a map like that for one trillion questions.
[00:41:02] That would be amazing. We also had a red Christmas two-tiered cookie holder that was 20 years old.
[00:41:09] Oh, that's interesting. Appreciate that. Yeah. I wonder if that's miscellaneous or if that was grouped into the tech devices.
[00:41:16] Actually, I would put that under sentimental, so maybe our k-means algorithm failed here.
[00:41:22] Ooh! And then also, some of these items are old: a black, 129-year-old piano by Shelly Katz.
[00:41:31] Oh, beautiful. Well, now we want to know what it sounds like. So many.
[00:41:36] A yellow duck, five years old. Oh, that's it. That's probably a little rubber duck or something.
[00:41:43] Oh, my gosh. Can we keep using your GPT, Greg, while we move everything around?
[00:41:48] Of course. Everything we do is open source here. So this method is definitely something I'd like to share.
[00:41:57] And that's the beauty of teaching, is that it's all about sharing, really, and it's all about building meaning together.
[00:42:03] So yeah, are there any questions that we want to answer at this point?
[00:42:06] Definitely. We have lots of questions.
[00:42:09] OK. Let's get that started. This is from one of our longtime community members, Daniel Green. He's from Kansas City.
[00:42:15] And he asks, how do you gauge or measure AI proficiency of students?
[00:42:21] And what skills or attributes correlate to being a better AI user and integrator?
[00:42:27] You know, that's a really good question. I think that that's the core of what I'm talking about.
[00:42:33] First of all, we have to expect that we need to teach students how to use AI, because they come to it sort of under the bed sheets, as it were, like you read a book after dark.
[00:42:43] And it's a secret practice in the shadows. And then they tell each other, hey, I could do this trick. And it really worked.
[00:42:50] One of the things that actually was striking to me is that one of my students said, oh, yeah, I really want to get AI out of my life.
[00:42:56] So I always type minus AI in my queries. And that's how I make sure I don't get that response that I don't want.
[00:43:01] And so they all have interesting techniques. But it's a bit of a private practice, right? And so how can we make it public?
[00:43:07] There is no metric, of course, for this yet. But if we just open up and ask, "How do you use AI?" as a beginning conversation in a course, that is really helpful.
[00:43:17] Because then students can share techniques with each other. And then AI truly can raise all boats.
[00:43:23] And yes, for some people, it's a very narrowing experience. And for some people, it's a very explorative experience.
[00:43:31] And we want to, of course, make it explorative. So just like Plato or Socrates taught us to ask questions thoughtfully and persistently,
[00:43:40] we have to learn how to ask ChatGPT thoughtful questions. And the strategies really are not very well developed amongst most students, I think.
[00:43:48] And so we should definitely have courses where we just say, hey, how can we foreground this exact question?
[00:43:57] And then maybe there's even a sort of AI confidence kind of metric that could develop from this. But that is exactly the work.
[00:44:05] Thank you for that question.
[00:44:06] That was a great question and an awesome answer. And Greg, I have a 15-year-old, and that has been totally our experience, beginning in seventh grade.
[00:44:14] And now he's a sophomore in high school. It started as a private practice, as you called it.
[00:44:20] But then once we got it out in the open and we became more engaged, and then we helped him shape the practice,
[00:44:27] and he's increased his AI literacy, and then we've increased our oversight, it's become such an amazing tool.
[00:44:33] But I've never heard it called a private practice, and I just thought that's really funny.
[00:44:39] Let's make it not a hidden private practice.
[00:44:41] Exactly. Let's bring it out in the open.
[00:44:43] Okay, the next question is from Svetlana, and she's a machine learning engineer and an aspiring AI researcher. She asks: in Times AI, when models produce novel claims, what's your test of valid knowledge before it re-enters the curriculum? Replication, causal probes, external datasets?
[00:45:06] I guess that's a great question, Svetlana, and thank you, and good luck on your trajectory. I think there's many people who probably answer that question better than me, so I know what I don't know. But I will say that any model that would take us beyond the capitalist framework that we often operate in is probably gonna be a really valuable alternative.
[00:45:25] So I would really wish for models that would be designed for cooperation, for connection, for compromise, and for constructive coexistence. And so much competition is inherent in our language, and if it's inherent in our language, it's also inherent in LLMs naturally. And that's why oftentimes when you see things that we don't like in AI, it's because it reflects what we already favor.
[00:45:46] It'd be great to be able to build AI tools on more aspirational frameworks than our current given experience, and so that's of course possible. I think small LLMs are really promising in that area, and so I'm really looking forward to next semester actually teaching a class with colleagues about building small LLMs to that effect.
[00:46:17] I hope that answers your question. We often compete for very similar goals and say, oh, I can do this a little bit better than this other person, but I think we should be more bold in exploring wild alternatives with this amazing tool. The really big benefits of AI are adjacent to what we already know, not in replicating what we already know.
[00:46:40] Oh, thank you so much.
[00:46:43] Okay, the next one is by Andre Holtz. She's an AI tech lead. She asks: her students in a classical ML course are demotivated by fast AI tools, and they're asking her why they should learn the fundamentals. So how, Greg, do you suggest that she motivate them to master the core concepts?
[00:47:10] You know, that is really interesting. That reminds me of a situation where we aren't able to build from the ground up. And it's really good to think about situations where you have to come up with something from the ground up. We do live in a world where a lot is already given. But what can we do on our own?
[00:47:32] And so if we strip away everything that we expect, like DoorDash, like the machine learning being done already so that we just need to sort of surf on top of it, and we go back to basics, what can we think through ourselves? Actually putting people in a situation where they have just a pencil and a piece of paper to do AI with, that would be a good exercise.
[00:48:00] But short of that, I would suggest that we just look at history and how much effort it took to get to where we are now, and how many decisions we made that we could have made differently. In that process, when we look at history, we can say, oh, here we went this way, but we could have gone that way.
[00:48:14] In that process, maybe we'll uncover exciting alternatives that we didn't think about, branches that were left unexplored in the course of history and that we could actually revisit. Sometimes out of that comes really great insight. So imagine we took a different path: what path would that be, and how would AI be different?
[00:48:36] And so, yeah, look at history, look at different backgrounds that we could have built on to end up at completely different results. I think that's inspiring, but really, it can be difficult to maintain curiosity. And that's kind of a psychological, emotional problem.
[00:48:55] If we are not really sure what the purpose of our life is, and if we're not really curious to see what comes next, then we'll never really start moving up on those steps on that lighthouse that I showed you earlier and never really get to the top where we see truly new things. That curiosity is a fundamental thing.
[00:49:12] So maybe the question really is, how do we instill curiosity in a generation that seems to have the answers to everything already? And that is a deeply philosophical question, and one that I'd love to think about more, but also one that maybe we can answer through minus-AI kind of topics like dance, joy, travel, exploration like that.
[00:49:34] I hope that helps.
[00:49:35] Absolutely, I love that. And again, I guess being a parent is all-encompassing,
[00:49:40] But every time we host an educator like you, Professor Niemeyer, I can't help but think of my son, and trying to instill curiosity in him.
[00:49:51] I agree with you. That's actually the biggest challenge. It really hasn't been grappling with the technology or the format of teaching. It's how do I inspire curiosity? So I just love hosting you in the forum. You have to come back.
[00:50:07] Well, we have two more questions, but this last one dovetailed. I'm going to skip one and return. But Andre Villaroyal, he's a founder at CAOI, so I think he's a developer, mentioned that meditating, pondering, appreciating, enjoying are some of the instruments I'd add to minus AI.
[00:50:31] But there's a problem. Schools today don't do that, or they never did that. So how do we introduce humanity beyond now?
[00:50:41] Oh, well, you know, thank you for that question that kind of brings us back to that very first line I had there. You know the responder is usually a different person or entity than the learner.
[00:50:51] And in the special case of meditation, the learner and responder can be one and the same. And so maybe that's also an answer for the curiosity questions: like if you really just turn inwards and contemplate, you don't have to meditate, but you can make time for contemplation.
[00:51:08] And you realize that there's so much more in your own mind than you thought there was. But when we constantly get stimulated by new media, we don't even get the mental space to look inside; we always look outside, and that leads to a kind of state of distraction that is not healthy, I don't think.
[00:51:30] So I think you're absolutely right. And we should and can and must maybe with every AI-based class, we should also produce classes that really push the minus AI space. It's amazing what can happen in those situations.
[00:51:43] I just want to share some examples. It doesn't just have to be meditation; it can just be giving people the courage to break out of the box of their classroom or their daily routine and try something new.
[00:51:56] I don't know, like walking to work instead of taking the bus or driving. Once, and it might take you an hour, but you will never see work the same way again—it's a completely different experience.
[00:52:07] These kinds of physical practices can help us find ourselves. In my teaching, right now we're working on a very special event. We have a group of Native American students from the Yurok tribe who are going to join us and present to us how they cook food with woven baskets and stones, something we have never had the privilege to participate in or observe.
[00:52:30] And so we're really eager to learn that. It's a very physical practice that involves putting stones in and fire, making them really hot, and then putting them into cold water, and then you have your hot soup in the basket.
[00:52:42] Actually doing that and experiencing them preparing for it—these kinds of prompts bring up a lot of opportunities for minus AI learning. What we want is for the minus AI to leverage the plus AI so that these experiences and the ease and the joy and elegance of finding information online and having great resources can substantiate the need for minus AI.
[00:53:12] We can say, "Oh, I learned so much online, and I'm going to go to the water." I have to think about what the people there actually think, and it's totally different. I have to reunite these two things.
[00:53:28] There’s really a potential for these two to leverage each other. I know it’s like the diet has to be balanced, and that’s a good thing to think about. So thanks for that question. Yes, the diet has to be balanced.
[00:53:47] This one is from Shelly Katz, Artistic and Technical Director at Symfonova. We should make sure you two get an opportunity to meet at our next in-person event. And Shelly asks, given the resistance to AI in performance training, how can educators integrate AI as a diagnostic collaborative partner without diminishing the trust and embodied traditions central to the art form?
[00:54:12] Okay, so here I have to focus; can you start again with that first part?
[00:54:16] Yes. Given the resistance to AI in performance training.
[00:54:22] Okay, so performance training would be a situation where you use AI to get people—there's one application, for example, that teaches people how to smile because you know if you don’t smile at work, people think you don’t enjoy your job, so you must smile.
[00:54:38] So they have this login where every morning you show your face and enter your punch card, and it gives you a quick tutorial on smiling.
[00:54:41] And are you smiling high and well enough today?
[00:54:45] And so the AI was like, that was a beautiful smile, go for it. Or then it would say, you know, that was a lousy smile. You have to smile brighter and then it will let you get into the workplace if you have that bright smile on.
[00:54:50] And that is a very strange and totally kind of authoritarian use of technology.
[00:54:58] And yes, people don't like that kind of thing. Is that the kind of thing we're talking about?
[00:55:07] You know? It's an extreme example.
[00:55:09] But yes.
[00:55:10] Yeah.
[00:55:10] Yeah.
[00:55:11] I was, I am not 100% sure, but Shelly, feel free to follow up in the chat.
[00:55:18] But then the second part of the question was, how can educators integrate AI as a diagnostic collaborative partner without diminishing the trust and embodying traditions central to the art form?
[00:55:29] And I guess the art form of teaching, of being an educator.
[00:55:33] Yeah.
[00:55:34] Yeah, yeah.
[00:55:34] So basically, you're asking the question of, well, that's a really beautiful question, actually.
[00:55:39] And so when I write exams for students, I think there's an opportunity.
[00:55:45] And the opportunity is to tell students that we are really interested in what they think.
[00:55:55] And so we can ask questions, like, instead of asking the question of, why do we celebrate Veterans Day on November 11th?
[00:56:04] That's the question you can answer, but it's not really interesting.
[00:56:07] And a much more interesting question would be, what kind of national holiday would you like to promote if you had the chance to do so?
[00:56:18] And so do you see how that's different? One is just recall, and the other one is generative.
[00:56:23] And if I ask the generative question, then the answers I get are actually, people feel like I'm interested in what they have to say.
[00:56:30] And also, I get to learn something. So that's really exciting.
[00:56:32] So basically, if you design a quiz or a situation where you want to use AI to sample an audience, you have to show that you're really interested in what they do.
[00:56:39] Our example actually was like that, right? We had a simple question, what do you see in your room?
[00:56:50] And it generated interesting results, because we suggested that we're interested in what you're actually seeing.
[00:56:56] And that's a small example. But if you can build on that and really infuse the question with interest.
[00:57:02] Like if that smile application would ask you, how are you feeling today? And make a facial expression that matches how you really feel.
[00:57:10] And you'd probably be like, OK, I'll do that.
[00:57:12] And then it says, OK, great. Now you have to leave that at home. And now you have to go work, and you have to put on the work smile.
[00:57:19] Maybe that would be a better conversation, right? But at least at some point, the AI asked what's on the person's mind besides the required prompt.
[00:57:28] Does that make sense?
[00:57:29] Yes, definitely.
[00:57:30] Thank you so much for helping me understand that question on a deeper level.
[00:57:37] OK, go ahead.
[00:57:39] Sorry, Greg.
[00:57:40] Oh, it says something about live music performance from Shelly.
[00:57:44] I want to see what that might be about.
[00:57:46] Oh, teaching music or dance or acting.
[00:57:53] Yeah.
[00:57:54] Yeah, yeah.
[00:57:54] And how would we use AI to assess that?
[00:57:58] Well, I think the story still holds, right?
[00:58:02] You really have to show that the AI is infused with the genuine interest of what people come up with.
[00:58:08] And actually, I have to say that my experience with ChatGPT because I ask questions in a certain way has become very much like that.
[00:58:15] Lately, I've gotten responses from AI that weren't technically answers; they were more like comments or questions, actually.
[00:58:25] And so my ChatGPT would ask me, don't you think that project that you're working on is kind of similar to the one you made 10 years ago?
[00:58:31] But the 10 years ago one was actually better. So how are you going to resolve that?
[00:58:34] And I'm like, you know what, ChatGPT? That's actually a really good question.
[00:58:38] And let's work on that.
[00:58:39] And I don't know where that came from. But it was really surprising.
[00:58:42] And so I felt seen and heard.
[00:58:45] And understood.
[00:58:46] And so then I was like, well, what else do you know about me?
[00:58:48] And it turns out you can actually query ChatGPT about what its background knowledge is about you.
[00:58:53] And in my case, it was actually very, very compelling.
[00:58:56] So yeah, keep the dialogue open.
[00:59:00] Professor Niemeyer, we're one minute over.
[00:59:03] Do you have to run to class or do you want to take one more question?
[00:59:06] Let's do one more.
[00:59:07] OK.
[00:59:08] If you're not bored yet.
[00:59:09] I am not bored. And I don't think, I think everybody's hanging on strong.
[00:59:12] This is the last question for our lunchtime chat from Nicolae Tesolin, AI product architect at EPAM Systems.
[00:59:20] What is your point of view on transition to AI enhanced teaching from the perspective of teachers?
[00:59:26] And how can we prepare them so that students receive meaningful learning experiences in this new world?
[00:59:32] That's a big one.
[00:59:34] Thank you for that question.
[00:59:35] That's a great one to end on.
[00:59:36] I will say that I am probably an outlier. I am probably one of not many faculty who embrace this. I will say that in 2009, 2010, I started working on online education, and everybody was like, why do you do that? That is just terrible. Zoom is not a good medium to teach in, and so forth. And you're ruining the principles of the university by doing this. And I'm like, all right, all right, well, it's still important, because some people can't make it to class in person. And so they should be able to learn too. And so I developed my pedagogy techniques.
[01:00:08] Of course, 2020 came around and suddenly everyone was like, so Greg, how do you do this online education thing again? And what did you learn in the past 10 years? And I was happy to share. And so I would say it's good for everybody to not be on the same schedule and for some people to jump ahead and try things out and maybe fail, maybe succeed, and then show good models that can be replicated. And I think that we really have a potential at UC Berkeley to do a little bit of research that maybe other people find harder to do because of lack of resources and try out new courses and have some tolerance for things that work and don't work and then address genuine concerns and hesitations with actual information, like how does this actually work? How is it better? How is it worse? Let's talk about it. Let's make it a conference.
[01:01:01] And I hope that a year from now, we get to make a conference about successes and failures in online, I'm sorry, in AI-integrated teaching. And that's the way we go. We don't have to do it in one day. We don't have to force it down people's throats. We should let people discover the resource and its benefits one by one. But definitely there's a lot of resistance, as there is resistance to anything that changes, right? People generally like... And the other thing is that domain knowledge is pretty valuable, right? So if somebody has spent 20 or 30 years researching something and talking about it, that's a huge value. And I think some people are concerned that AI will disrupt and displace and make obsolete what they already know how to do. And I think that's not really how it's gonna be. It's gonna be more like an addition, an expansion, and an enhancement. And so maybe AI even has some questions for those faculty, and that would be great.
[01:01:59] So let's keep that a dialogue and definitely make time for people to become interested in it rather than discovering that they're forced to do it. The challenge, of course, is that students are oftentimes a lot faster at adopting new things than faculty, and so there is some need to move fast. And that's why making conferences, talking about education publicly, trying out new models, and sharing out what came from it is really, really important. So let's do that. Onwards. Absolutely, let's do that. And Professor Niemeyer, hopefully this is just the beginning. We want to learn more from you, and all of your ideas for conferences and supporting educators and students, we're here for it as well.
[01:02:44] So thank you so much for joining us today, it was really an honor to host you again.
[01:02:48] Thank you, Natalie, this was amazing and I'm so glad to be part of the OpenAI forum and to discuss these questions together. We'll see you soon. Bye.
[01:03:01] And before everyone goes, I can just give you a little bit of a sneak peek of what's on the horizon in the OpenAI forum. So many of you all know that my hometown is San Antonio, so I always get very excited when I get the opportunity to host the San Antonio Spurs, the NBA team. They were the very first NBA team in the United States to adopt ChatGPT at the enterprise level, meaning that they put ChatGPT in the hands of all of the people who work for the San Antonio Spurs in entertainment, and they've been innovating for six months now, and they're gonna come back next week.
[01:03:40] We're gonna start with the CEO, RC Buford, and we're gonna talk about why it's been so important for the NBA's Spurs to lead the way in terms of AI innovation and becoming an AI native basketball team. And then we're gonna talk to Charlie Kurian and Jordan Kuzly about their newest integration of AI for fan engagement and how they're gonna be leveraging Sora and ChatGPT in all sorts of really fun, engaging ways. We cannot wait to host them next week.
Please share that event with a friend, anybody who's interested in sports, because the Spurs are one of the greatest teams of all time, as everyone knows, and that event's gonna be totally public. And then the following week, before we send you off for the holidays, we're inviting our head of developer experience, Romain Huet, and also one of our software engineering managers, Aaron Friel, for vibe engineering with OpenAI's Codex. [01:04:38] It's gonna be a live demo. We're gonna learn a lot. [01:04:41] We got a lot of great feedback from DevDay, and you all told us that you wanted more demos. [01:04:46] So we're gonna give it to you in the OpenAI forum.
[01:04:48] And then for those of you who might have missed the announcement from our chief strategy officer, Jason Kwon, and our head of mission alignment, Joshua Achiam, last week: [01:05:03] when OpenAI made the big announcement that we were starting a new foundation and transitioning into a public benefit corporation, we had Jason and Joshua come on and share all of the new benefits and hopes for the future of OpenAI. [01:05:23] And that is now published as a recording in the forum, so you can access it now.
[01:05:28] And for those of you who joined us for Stack Overflow, where we had some technical difficulties, we're actually re-recording that session on Friday. We'll send it out to everyone who's registered, and you'll get another opportunity to see that event live.
[01:05:42] So thank you, everyone, for joining us at your lunchtime. It was really awesome to see you, and I'll see you again next week.

