Event Replay: AI & Pedagogy Sessions
Speakers


Moran Cerf is a professor of neuroscience and business (Columbia University) and the Alfred P. Sloan Professor of Screenwriting (AFI).
He holds a PhD in neuroscience (Caltech), an MA in Philosophy, and a BSc in Physics (Tel-Aviv University).
In his research, Prof. Cerf studies patients undergoing brain surgery with neural implants embedded deep inside their heads. His work addresses questions concerning consciousness, dreams, behavior change, and decision-making. Additionally, his work has paved the way for several key neuroscience disciplines, including advanced brain-machine interfaces, and is at the heart of numerous commercial applications.
Recently, he has been involved in implementing key lessons from his studies to help leaders improve critical decision-making protocols (e.g., helping the U.S. government revisit its nuclear launch protocol).
Previously, he spent a decade working in cybersecurity, as a hacker, and served as a board member of several public companies.
Cerf has published papers in academic journals (e.g., Nature, Science) and popular science outlets (Scientific American, Wired, New Scientist, …). He is a contributor to numerous media outlets (Forbes, The Atlantic, BBC, Bloomberg, Time, NYT, The New Yorker, …), his work has been covered by CNN, Fox, and Netflix's Explained, and it has been featured in TV shows such as Bull, Limitless, House, Mr. Robot, and more. His work was recently turned into an Off-Broadway show ("Sans Me").
He has published several books, including the recent graphic novel "Brain Imaging: An Illustrated Guide to the Future of Neuroscience" and "Return on AI," and has made much of his research accessible through public talks at TED, TEDx, DLD, and the World Economic Forum, garnering millions of views and a large following.
Prof. Cerf is a four-time US national storytelling champion (The Moth), a recipient of several awards and grants, including the President's Excellence Award, and was named one of the “40 leading professors under 40.”
Most importantly, he is left-handed.

Greg Niemeyer is a data artist and Professor of Media Innovation in the Department of Art Practice at the University of California, Berkeley. He is also the former director and co-founder of the Berkeley Center for New Media, where he helped build an internationally recognized hub for research, teaching, and public engagement at the intersections of technology, culture, and the arts. His academic trajectory began with studies in Classics and Photography in Switzerland, before a move to the Bay Area in 1992 set him on the path toward new media. In 1997, he received his MFA in New Genres from Stanford University, a program that encouraged his interest in experimental forms and the blending of media, technology, and conceptual art.

From an early age, Niemeyer was fascinated by mirrors—not just as physical objects, but as metaphors for media itself. He continues to describe his practice as a lifelong pursuit of making mirrors: systems that allow us to see from perspectives other than our own, reflecting both what we want to see and what we would rather not. For Niemeyer, such mirrors are essential tools. They help us recognize the contexts that shape our lives, the fragile ecosystems on which we depend, and the deep entanglement of human and non-human futures. By showing us what we might otherwise overlook, his mirrors invite us to make better, more informed choices.

Niemeyer’s art has been exhibited internationally at venues including the ZKM Center for Art and Media in Karlsruhe, the San Francisco Museum of Modern Art, the San Jose Museum of Art, the Stedelijk Museum in Amsterdam, the Townhouse Gallery in Cairo, museums in Zurich and New York, and many other institutions. His work has been supported by the MacArthur Foundation, the National Endowment for the Arts, Intel, Pro Helvetia, and numerous other organizations. These commissions and awards recognize a career defined by both conceptual rigor and technical experimentation.

I am an enthusiastic learning and development professional who focuses on teaching AI, ranging from Intro to AI to leading students in creating real-life AI solutions for small businesses. My experience has mainly been in academia, where I work at Miami Dade College as AI faculty, with a few months in banking, where I got a chance to learn about all things tech, especially updating a core banking system. Academia was not my first career choice, though I don't regret ending up in it, as I have come to enjoy pouring into students, mentoring and nurturing them to grow into loving AI and becoming skillful in it.

I have a Master's in Artificial Intelligence from the University of Bridgeport and a Bachelor's in Information Technology from Kenya, and I got into AI by accident. I was interested in cybersecurity, specifically in becoming an ethical hacker, as I admired how good they were at accessing computers and finding the vulnerabilities present, but my grad professor discouraged me from pursuing that path and suggested I learn about neural networks and data mining. It was hard at first, as these were new concepts. The day everything clicked was when I was watching Tesla's AI Day, which they streamed during the COVID period to showcase how they create their autonomous vehicles. That experience piqued my curiosity, and I started doing a deep dive into AI. Interestingly, it was around the same time that ChatGPT launched, so it became easier to apply the knowledge.

My favorite domain of AI is Natural Language Processing, because there is power in language. What we speak, and how we say it, can make or break a society, and implementing this in machines is one of the most interesting things to do, as we are teaching machines how to communicate with individuals. My favorite part of being in academia is seeing the glow on students' faces after a class or a semester when they feel confident in their skills and can apply their knowledge accordingly. I work with students from different disciplines on AI research; some of our projects include how Natural Language Processing can improve learning in the classroom and a comparative analysis of the performance of ChatGPT and DeepSeek. My main initiative right now is guiding students in creating AI solutions for small businesses in Miami, both to expose them to industry and to nurture their skills. This has resulted in students becoming confident in what they have learnt and kickstarting AI initiatives in small businesses.

Other than my academic and professional life, I enjoy traveling, singing, and spending time with family as a way of decompressing and bonding, just to refill my cup. I believe that one can't pour from an empty cup.

Tina Austin is a biomedical researcher turned AI educator who brings critical thinking to the adoption of AI in higher education. For the past decade she has been a lecturer at universities including UCLA, USC, CSU, and Caltech, teaching subjects ranging from regenerative medicine to communication. In 2025 she developed a graduate course on critical thinking with AI at USC and a novel computational biology course with AI ethics at UCLA, while also creating faculty development workshops and advising school districts on responsible AI policy. In 2022, following the release of ChatGPT, Tina created and organized a for-lecturers, by-lecturers AI initiative across the UC system to help faculty share adoption strategies from their campuses. Her syllabi were first circulated at UCLA as models for AI policy. Since then she has helped over 950 faculty integrate generative AI into their curricula using her innovative UnBlooms framework, a reimagining of Bloom’s Taxonomy designed to support AI integration through discipline-spanning pedagogy. Her approach applies the same rigorous methodology she once used to train pre-med students in deconstructing biomedical research, now adapted to evaluating AI applications across disciplines. A member of both the California AI Taskforce and the Los Angeles AI Taskforce at LARC, Tina is also an internationally recognized presenter, having spoken in Sydney, Australia, and at Oxford University in the UK on responsible AI adoption. She has received multiple teaching and innovation awards, was recognized at ASU+GSV 2025 as one of the Leading Women in AI, and was honored with the AI Innovator of the Year recognition. In 2024, Tina was awarded a UCLA-OpenAI EDU License through a competitive call for proposals and has since led faculty development workshops. Her LinkedIn community of 12k+ educators and researchers values her research “reality checks” that cut through AI hype; recent speaking engagements have even outdrawn major tech company sessions by 200+ attendees, underscoring the demand for her balanced, evidence-based perspective on AI in education.
SUMMARY
The session explored how AI is reshaping education, beginning with OpenAI’s Olivia Pavco-Giaccia outlining why teaching and learning sit at the core of the company’s mission and how student adoption—users under 24 now make up more than 40% of ChatGPT users worldwide—has accelerated AI’s integration into campuses, leading to large-scale deployments such as ChatGPT Edu across the CSU system and emerging research partnerships aimed at improving learning outcomes. She emphasized moving beyond fears of cheating to unlock personalized learning support, including early progress with Study Mode.
Columbia University’s Moran Cerf then connected AI to cutting-edge neuroscience and behavioral research, detailing how AI literacy is now essential for executives and students alike, and sharing striking findings from brain-interface studies that reveal how humans make decisions, process memories, and respond to engagement—insights he believes could transform pedagogy and human–AI collaboration.
Miami Dade College’s Beth Muturi followed with a pragmatic model for “humanizing AI,” showing how community-embedded capstone projects allow students to build real AI solutions for local organizations, expanding opportunity and practical skill development.
UC Berkeley’s Greg Niemeyer proposed a holistic AI pedagogy organized around three modes—minus AI, plus AI, and times AI—arguing that education must balance embodied human experience, critical engagement with AI, and transformational uses of intelligent systems to sustain meaning, collaboration, and truth in an AI-saturated era.
Finally, UCLA’s Tina Austin presented her “UnBlooms” framework, a recursive, discipline-flexible alternative to Bloom’s taxonomy that shifts assessment away from grading AI-generated output and toward evaluating students’ comparative reasoning, critique, and metacognitive understanding, offering a path for assignments that integrate, challenge, or intentionally exclude AI depending on pedagogical need.
TRANSCRIPT
[00:00:00] Speaker 1: Please help me welcome my colleague, Olivia Pavco-Giaccia, up to the stage.
[00:00:13] Speaker 2: Hello. I'm Olivia. I'm a leader on OpenAI's education team. Honored to be with you all today. Thank you so much for making the trek. My hope over the next seven or so minutes is to share with you a little bit about why education really matters to us here at OpenAI and then also give you some insight into how we're thinking about the future of teaching and learning.
[00:00:36] So let's start with a few basics. OpenAI began as a small research lab nearly a decade ago. Our mission then was the same as it is now: to create artificial general intelligence that benefits all of humanity. Now, you probably didn't know of the company back then. The real shift came in late 2022, when we launched a low-key research preview called ChatGPT. Thank you for the laugh in the front row there. It was intended as a simple, quiet release, but within days, it sparked a global conversation. A little under three years later, ChatGPT is used by more than 700 million weekly active users. And if this feels fast, that's because it is. The adoption of AI is unlike any other technological revolution that came before it, and students have been an essential part of that adoption story.
[00:01:36] So let's take a look at a few key stats. More than 40%. That's the percentage of current ChatGPT users who are under the age of 24. And learning is a top use case. That's true across every region and every demographic. At 700 million weekly active users, that means ChatGPT is the number one learning platform in the world, and it's part of why learning isn't just important to us; it's really core to our mission to benefit all of humanity.
[00:02:10] Students in the U.S. are not waiting for permission, right? We know this. They're already using AI. Previous waves of technology were often delivered to students top-down, so an institution or a professor would choose the tool, they would set a policy around it, and they would deploy it on campus. This is different, right? Students are evangelizing this tech. They're bringing it into the classroom, they're bringing it into their workplace, they're bringing it into their homes. In doing so, they are gaining a fluency that will be crucial for the future of work and learning. And if you're curious about what that means, I would encourage you to take a look at our "100 Students Chats" book. I think we have a few that are floating around here today. You can also scan this QR code and request some copies for yourself or for your campus. These books were created, curated, and edited by a group of student power users of ChatGPT, and they give you a little window into some of the myriad ways that students are using this, not just for studying, but also to advance their career and to optimize their life.
[00:03:14] So, thanks in large part to students, AI on campus is here to stay. But how should we think about it in the context of the existing educational ecosystem? When I joined OpenAI over a year ago, most universities and professors were still experimenting. They were asking smart questions, they were testing use cases, they were trying to make sense of the truth behind the hype. A year later, we've come a long way. Today, AI is being deployed at scale across educational systems, proving that AI is no longer just a novelty on campus; it's really core infrastructure.
[00:04:03] One of the clearest examples of this is our deployment with the CSU. I think we have a few CSU folks in the audience here today. I see a hand in the back. A few hands in the back. Excellent. Earlier this year, we launched a major deployment of ChatGPT Edu with CSU in the state of California. ChatGPT Edu is our enterprise version of ChatGPT for schools and universities. At CSU, they are deploying it to over 500,000 students, faculty, and staff. Just as a student today might start in college and get access to the library or to email, we envision a world in which every student joins and gets access to the latest AI models. We are also partnering with lighthouse universities to build what we call AI-native universities. At places like Harvard, Oxford, ASU, and Duke, we are reimagining how AI can impact things like teaching and learning.
[00:04:58] Teaching and learning, professional services, research. We're gonna talk about all of these in our breakout sessions later today. So I look forward to hearing from you and learning from you on these topics as well.
But there's a challenge, even at the most forward-looking institutions, that we cannot ignore. And that's the growing fear that AI is just a shortcut, just a way to cheat. And when that's the dominant narrative, it really erodes the trust between educators and students. But the data tells a more nuanced story. A recent meta-analysis of 51 studies in Nature found that ChatGPT improves learning performance, perception, and higher-order thinking when used well. Our job is to identify what using it well really means and to make sure that students and educators have the ability to do just that. These findings are promising, but still preliminary. That's part of the reason why we've announced research partnerships with places like Stanford, IIT Madras, and Estonia to really study how AI tools can deepen learning outcomes while also mitigating some of the risks. We look forward to collaborating with our ChatGPT Edu partners and also all of you here in this room to continue to test and iterate and refine these findings and also bring them into a real-world educational context.
The ultimate promise that we are chasing is ambitious. What if AI didn't just give students access to content? What if it gave them truly personalized learning support? I'm sure many of you are familiar with Benjamin Bloom's study from the 1980s, the idea that one-on-one tutoring could help students perform up to two standard deviations better than their peers. Personalized tutoring has been a gold standard for many educators since, but it's very difficult to implement at scale. With AI, we see the possibility here, right? We're seeing early signs in places like Nigeria and Australia that AI can actually provide individualized support, helping students come to class more prepared, ready to engage more deeply with the conversation that's happening in the classroom.
Our first pass at starting to realize this vision for effective personalized tutoring is Study Mode. It's a tool that allows students to ask questions and get personalized support. It was shaped directly by feedback from learning scientists and from educators, some of whom are in this room today. We look forward to continuing to build on this foundation. And we want to make ChatGPT increasingly valuable for educators and for students who spend time on the platform. Because at the end of the day, our goal is not just to make smarter models, better machines. Our goal ultimately is to expand human potential. We want to give students the support they need to thrive. We want to help educators focus on what matters most. We want to unlock new ways of thinking and creating and learning together.
So that's the opportunity in front of us. I look forward to discussing it with all of you and learning from all of you today. Thank you.
Speaker 1: Thank you, Olivia. Please help me welcome Moran to the stage. Thank you very much.
Speaker 2: [00:08:28] The trick to doing this is that I speak very fast. I gave a lot of talks in the last couple of years over Zoom after COVID, and back then I could blame things on buffering issues on the other side, but we're in person. I'm going to have to just do my best to speak slow and make sure that you all understand me.
I aim in the coming minutes to connect research to something about education, how we're using it, and tell you how we think AI is going to be helpful to a lot of the educators in the room. I'll do that by telling you about what I, as a professor, do with AI and what research we look at that hopefully will be relevant to some of you as you try to embark on journeys with AI or use it. When I got asked if I'm concerned about my students using AI to write their essays and assignments, I said: what do you think I use to grade them? So in that sense, we're in the same boat.
Let me start by just giving you a quick rundown of what we do at Columbia University, that's where I teach, when it comes to AI. This might be just a glimpse because I only have a lens into some of the things that Columbia does, but we essentially try to sit in all the columns that you can imagine. The thing I'm maybe doing most right now at Columbia is talking to colleagues of yours about how to use AI in the corporate world. Columbia has an executive education program. We now have two because, as you probably all know, AI moves so fast...
[00:09:56] If you took the class on Monday, by the end of the day, it might not be true anymore. We tell the students that we have a one-year warranty for the entire class, but we really have to update it every couple of weeks. So what we do is we have them take the class, say, from January 1st to January 10th, and then we give them access for one year to professors who do quarterly updates. Those are the two programs you see in the corner. And right now, I think we have probably most industries that you know of, and within those, many organizations. We have many of the big banks, many of the tech companies, some from Silicon Valley. Because we're in New York, we cater a lot to audiences that care about the business side rather than the tech side. So even if they're the tech companies, they come to learn how other companies are using the technologies.
We spend a lot of time with industries that are a little bit behind and trying to catch up, like manufacturing, where companies are doing less with AI and want to know how to do it. Overall, I think we try to speak to all of them. To me, the interesting thing is that we see a lot of senior people in companies that you'd think are leading in the world of AI who are somehow unable to catch up, and they feel uncomfortable asking their employees. So they prefer to go to a one-week program together with CEOs of other companies and learn about AI, because within their own companies they're not comfortable asking questions.
We have, of course, MBA programs. I teach partially at the business school, and in that sense, the students who are now going to be business leaders need to take AI classes on top of accounting, finance, and so on. We have a case library: we try to talk to companies like OpenAI, see what problems and challenges they have, and turn those into cases that everyone can learn from. We now have a series of cases that tell you stories about companies and the problems that they had. Recently, the most popular one is about the world of AI and energy. Many of you probably know that it's a looming challenge. But a lot of people in technology are very well-versed with, say, Python, very good with engineering and building hardware, and don't know much about energy. We try to bridge this by having cases on that.
And then there's the future of AI. We partnered with a therapist who's very good at understanding how people interact with technology, trying to see what the future of technology interfacing with humans will look like. Of course, speaking to what was mentioned just before me, the goal is to improve human performance: how we think, how we behave. So again and again, we come back to the people who make decisions and how you can help them, the people who solve problems and how we can help them. Not to make the technology better, but really to ask the question: what is the human who's using it going to make of it?
We write books, similar to what I told you about the class itself. We wrote a book that is now only six months old but already four editions in, because every two months we have to update the book entirely. We learned that it's actually possible to print a book every couple of weeks, again and again, with a new cover and some changes. And the companies that come to us say: now that you taught us what to do, help us also implement it by really staying with us. Academics don't always do that; they usually teach you and then drop you into the real world. We try to stay in touch with people and help them move forward.
So I gave you a sense of that, and this is also where I'm going to finish my talk. The ask I come with, on behalf of all of my colleagues, is that we need a lot more access to the real world. All we do in academia is try to imagine how people actually use these tools and work toward that. It would be great if you came to us and said: hey, in our company we use it this way; it would be great if you taught your students this particular case. So my ask is that you endorse us, so to speak, by giving us a question that you wish students a year from now would be able to help you with. I think this is my call.
[00:13:37] Speaker 2: Okay, now that I did that, I want to spend a few minutes talking about research. I'll tell you about three or four studies that we are doing that use AI or attempt to benefit AI. I'll also focus on one that I think is relevant for educators, which I know many of the people in this room are. Ultimately, what we're interested in is the interface between AI and humans.
Sorry about this picture, especially right after lunch, but this is what we do day to day. Maybe my claim to fame as a scientist is that I'm part of a small group of researchers who work with humans by opening their brains and putting electrodes and neural implants inside. Most neuroscientists either study animals, where they can open the brain and look inside, or they look at humans via imaging techniques like fMRI or EEG or CT machines. There's a handful of groups that actually open the brains of humans, put electrodes inside, and listen to the brain speaking its own language.
And I came here to look for volunteers because I'm running out of... No, it's a joke. The serious part is that we work with patients who have severe brain problems that require surgery. Treatment involves opening their brain and putting electrodes inside to understand the exact onset of the problem, which neurons are the cause of the problem.
[00:14:54] There are dozens of people every day who go through similar procedures. Some of them become candidates for research, as in, they go on day one to the surgeon who opens their brain and puts electrodes inside, and then they're asked to stay in the hospital for a period of sometimes two weeks. Sometimes they walk out with a neural implant in their brain for the rest of their lives. There are about 40,000 people in the US right now who have such a neural implant in their brain for clinical reasons. But if they stay in the hospital, we now have a human being with an open brain, electrodes inside connected to a machine, and they're awake and behaving. They can't really walk away, but they sit there for the next two weeks until whatever problem they have manifests itself. And then one can come and say: you're already here, do you mind me asking you questions about preferences, showing you pictures, seeing how your brain responds, and seeing if an AI could read your thoughts and use that to do things with you? This has been going on for quite some time now. I want to show you some of the things that we learned and give you a sense of how it works.
If you were to open someone's brain, put electrodes and microphones inside, and listen to the forest of cells, you would see something like this and hear a sound, something like a machine gun. Those sounds are one cell or another speaking. When neurons encode the world, they respond with rapid firing: the more rapid the firing, the more they care about what they're seeing. The setup we use looks something like this. Here's a human being. Their head is open. They have a bandage and wires coming out. They sit in front of a laptop and watch the screen. Whatever we show on the screen allows us to correlate what's on the screen with what happens in their brain: a variety of movies that keep changing based on the response in their brain. I'm going to show you an example from seminal work that gives you a sense of how it works.
What you're going to see are the movies that a patient was seeing. What you're going to hear, hopefully, is the sound of one cell in this woman's head that corresponds to something on the screen. Whenever something interesting to this cell is happening, the cell is going to fire; if not, it stays quiet. So this cell, if I did the job right, is one cell in this woman's brain, thank you, that corresponds to thinking about The Simpsons. Whenever this woman thought about The Simpsons, whenever it was on the screen, the cell lights up. In fact, in your brains right now, if you know The Simpsons, there are similar cells. All of you who know what The Simpsons is had multiple cells in your brain fire rapidly, telling your brain: this is The Simpsons.
To find those cells, you need technology that quickly looks at hundreds of things a person can see, picks up the right one, and immediately follows up by zooming in. So if you see the cell firing for this picture of The Simpsons, you can follow up by asking: is it The Simpsons, or yellow characters, or cartoon characters in general? I immediately have to pick a different picture from the internet and show it next, so I can start zooming in on whether it's maybe Homer Simpson versus Bart Simpson. All of that requires a real-time understanding of what's happening on the screen and what's happening in your brain, and quickly adjusting the reality.
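To make that closed-loop "zooming in" concrete, here is a minimal sketch of the search procedure as described, not the lab's actual code. Both `show_and_measure` (the electrode read-out) and `related_images` (fetching follow-up pictures) are hypothetical stand-ins:

```python
import random

def find_preferred_concept(show_and_measure, related_images, image_pool,
                           rounds=5, keep=3):
    """Closed-loop search for the concept a cell responds to.

    show_and_measure(image) -> firing rate (stand-in for the implant read-out).
    related_images(image)   -> variations to test next (same character,
                               same color scheme, same category, ...).
    """
    candidates = random.sample(image_pool, min(100, len(image_pool)))
    best = []
    for _ in range(rounds):
        if not candidates:
            break
        rates = {img: show_and_measure(img) for img in candidates}
        best = sorted(candidates, key=rates.get, reverse=True)[:keep]
        # Zoom in: test variations of the current best guesses to separate,
        # say, "The Simpsons" from "yellow characters" from "cartoons".
        candidates = [v for img in best for v in related_images(img)]
    return best[0]
```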
Now, the interesting thing about those cells is that they don't just respond to things that are visual; they respond to thoughts. When we came back to this woman later, we said: forget seeing things, just close your eyes and try to recall from memory the things you've seen before. You see that when she speaks, the cell fires. I'm again going to do it in a colorful way by playing the sound, but if I do it right, what you'll see is that the sound emerges before she actually responds, so we can see her memories coming to her before she actually knows about them.
So here is how it looks. Tick, tick, trrr, trrr. And this is what she's saying: Einstein. Hollywood sign. Those are all things you've seen before. Tick, tick, trrr. So this was essentially her memories coming to her. When we live life, we think we live in real time. Every word I say to you right now, I feel like it comes out in real time. We don't really think that the words are somehow created in our language models before they come out. In that sense, this is a unique experience where this woman sees a thought, expresses it, and I can tell what she's about to say seconds before. I can see the compute happening before she gets to experience it. If I stopped before, I would know what she's about to say before she says it.
Once you find those cells, you can do a lot of things. The easiest thing you can do is connect them to machines and have a person play computer games with their thoughts, or build all kinds of gadgets that decode thinking, so you don't have to speak; you just know what you're about to say, and you can immediately communicate that.
[00:19:52] Think of it like language models that are built not on your vocal expression, but on your thoughts. Or you can do fancy stuff that comes with the idea that I know your thoughts before you know them. Maybe the most remarkable experience I had the pleasure of being part of involved one patient, and I'm going to show you an illustration of this in a second, who was asked to look at a box in front of her eyes. The box has two buttons, one on the left, one on the right. And we tell her: whenever you feel like it, press one of the buttons, left or right. And just one thing: when you press the buttons, lights are going to turn on. These lights tell you that we're now saving data from your brain, so don't touch anything while the lights are on. That's the only requirement. When the lights are on for a second after you press, just wait; they're going to turn off, and then you can start again. But in reality, we find the cells in her brain that tell us that she's about to press a button, and I know that in two seconds she's going to want to press it. So what I do is turn the lights on just before she presses the button. So every time she presses a button, there's a buzzer in the room, and it says: eh, you just ruined the experiment. She says: I'm so sorry, doctor, it happened by itself. We say: never mind, just don't do it again. And again and again, whenever she wants to do something, it happens before. And to me, this is the moment we actually get to experience something we all know is true but never get to experience, which is that we live life in delay. Our brain has already predicted the next word I'm going to say, even at the slowest I can speak, but I don't know it. And these people, for the first time, get to experience it.
[00:21:38] I'm going to show you how it looks. Again, it's this situation. So here she is: she presses a button, the lights are already on, she gets upset, and then she tries to do it again. And in a way it's kind of magical to see, but it's not surprising. If we have cells in your brain that tell me what you're about to do a second in advance, all I have to do is turn the lights on before. It's not really surprising. To me, it's just a reflection of how it feels when you get to see these things in real time.
[00:21:42] So I promised you more than one thing here. The next one is, I think, most relevant to the people here interested in AI. One of the things we did in the last year started as a project in a class. I told my students to do this project, they did a remarkable job, and it turned into academic research that I now spend most of my time on, asking the question: how old is AI? If you take GPT-5 and try to treat it like a kid, how old would it be? And what can it do that is a very human thing? We basically did what we call Psychology 101: we took 101 years of psychology, all the studies you can imagine, everything you study if you take Psychology 101, from the Milgram studies to the conformity studies, and we tried to run them again and again with all the AI models we could think of, including having them play against each other, like game theory experiments with a prisoner's dilemma, to see where they fail, where they behave like humans, and really try to understand what they do well. The point is to see if AI can do the things a kid develops as they grow up.
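For a flavor of what "having the models play each other" could look like, here is a toy iterated prisoner's dilemma harness. The `tit_for_tat` player is a stand-in; in a study like the one described, each agent would instead prompt a language model and parse its move:

```python
# Payoffs: C = cooperate, D = defect; (points for A, points for B) per round.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    """Stand-in agent: cooperate first, then mirror the opponent's last move."""
    return their_moves[-1] if their_moves else "C"

def play(agent_a, agent_b, rounds=10):
    a_moves, b_moves, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = agent_a(a_moves, b_moves)   # each agent sees the full history
        b = agent_b(b_moves, a_moves)
        pa, pb = PAYOFFS[(a, b)]
        a_moves.append(a)
        b_moves.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))  # (30, 30): stable mutual cooperation
```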
[00:22:39] There's a concept in psychology called Theory of Mind that people develop sometime around the age of 5. It allows you to understand that other people have thoughts that are different from yours, that you exist in the real world but others don't necessarily know what you know. You can lie; you can be humorous, because you can lead people to a thought without actually saying it. I have a friend whose son is now at the age where this is just happening. One day the boy called the school and said, "Johnny cannot come to school today." And the teacher said, "Who is this?" He said, "I am my dad." To me, this is a moment where you can see the understanding starting: he knows that dads are the ones who authorize absences, but he doesn't yet quite know how to lie, how to pull it off. To me, that's the question about AI: is AI already at the level where it does that? And there are many people probing this in many ways.
[00:23:16] I'll just borrow one example from a colleague right here at Stanford, who basically showed that an AI language model isn't just auto-completing the next word; it really understands, somehow, concepts about the world. For example, geography. I flew here today from New York, and if I had a language model complete the sentence, you can all guess what it would be: I left New York six hours ago, and after six hours of flying west, landed in... You all know that the answer is San Francisco. But to do that, the AI had to somehow form an understanding of geography, so that the words New York and six hours flying west find their completion in San Francisco. If I said Italian chefs are known for their pasta, again, it's not just auto-completing the language; it has to understand something about culture. If I said something like: Ben hates oats; he is forced to eat some; he is upset, disgusted. You have to understand something about psychology. You have to understand disgusted, and forced, and oats. You have to build something beyond auto-complete to really start to do what we call psychology.
[00:24:28] Sandra loves Ben, so she feels sad, or empathetic. And you can go on with that, and basically get to the point where people can run classical experiments. This one is called Sally-Anne, where you tell little kids that one person hid a ball in one place and another person moved it to a different place, and you ask: where will the first person look for the ball? If the kid is very young, they think that whatever they know, everyone knows, so if they know the ball was moved somewhere, everyone knows that. But at some point they get to the other end and say: hey, I know the ball was moved, but the person who was not in the room does not know it, so they would look for it here.
[00:24:50] We now know which models of AI are able to pass this test. And in doing so, we basically start to understand what level they're at in being able to lie, and to cheat, and to be funny, not just by taking previous jokes and repeating them, but by really coming up with them. To me, that is an interesting question for everyone in this room who is trying to think about education, about using AI to enhance performance: where does it reach a level where it touches the fabric of what we call psychology? And the answer is that the models are getting closer to being adults. Every year, when we run the same tests, we see them passing more and more measures, or metrics, of psychology, which indicates to us that they're doing a good job. Maybe the most interesting result, from a paper that's not out yet but is already on arXiv so you can get a glance, is how good AI is at reading our behavior around dating. We think dating is a very human thing. We basically fed it data from speed dating: the videos, the chats, all the questions you can ask a person, and asked the AI questions like: are these two going to be a good match? Will she like him? Will he like her? And the fact that the AI performs really, really well on this task tells me it has crossed a little barrier we didn't have before: it's beyond just being a language model; it's somehow getting our psychology. And I'll note that you can look at this as positive or negative. Sociopaths are amazing at reading your psychology, predicting what you're going to do, and using it against you. So by virtue of getting there, we're not necessarily doing a great thing; we're opening a floodgate that we can now look at both ways.
[00:26:30] Because we're talking about education, I want to mention one study that speaks to how neuroscience can teach educators how to do a good job. We have been studying engagement for the last 10 years. As you heard from Natalie, I care a lot about storytelling and how you communicate things. I think that in the world we live in right now, this is a skill whose importance should be elevated. CEOs, presidents, leaders of companies, many of them spend a lot of time telling the story of the organization and communicating ideas, not by using a whip to tell people what to do, but by really getting everyone behind an idea. And it turns out that humans acquire this ability to figure out what's interesting very early in life, before they even have language. Babies who don't yet speak already know that if everyone looks this way, something interesting is happening and my brain should focus there as well. And if everyone nods their head, it's a good idea to do the same thing. This is something we almost come equipped with. And now the question is: when is it happening in the brain, and how do we capitalize on that?
[00:27:40] What we do right now is try to leverage one insight from neuroscience to help storytellers do a better job. The insight is that if brains look alike when something happens, whatever is happening is interesting. Let me say it again and explain. If I talk right now and you care about what I say, if I'm engaging to you, then if someone puts measurements on your brains, all your brains are going to look very similar, because I essentially took over your brains. If I'm doing a good job, it doesn't matter that some of you are old or young, hungry or full, angry or calm. Something about my content is so powerful that I claim your brains and direct them as if you're under my spell. If I'm not interesting, then one of you is thinking right now about the homework they have to do, or a job they have to finish, or an email they have to respond to, and if I look at your brains, they're going to be very different. So if I measure all your brains and get a score that says everyone is very similar, it means I'm doing a great job. If your brains are not similar, if the correlation is close to zero, I'm doing a bad job.
[00:28:49] So now you can actually learn how to train better, how to teach better, by looking at the brains of the students live and telling the teacher: hey, everyone looks the same, move on, they got it; or these people's brains look very different, so whatever you're saying didn't land with them. You can give people feedback in real time, or give them a tool that tells the teacher in real time: say this point differently, because no one got it; or everyone got this point, move on; or all the women got it but not the men, so you speak really well to this group and not that group; see if you can figure out what it is about the teaching that doesn't work.
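To make "the correlation is close to zero" concrete: the engagement score described here can be approximated as the mean pairwise correlation across listeners' recorded signals. A minimal sketch with synthetic stand-in data rather than real recordings:

```python
import numpy as np

def engagement_score(signals: np.ndarray) -> float:
    """Mean pairwise correlation across listeners.

    signals: shape (n_listeners, n_timepoints), one row per brain.
    Close to 1 when all brains move together (engaging content),
    close to 0 when each brain does its own thing.
    """
    corr = np.corrcoef(signals)               # (n, n) correlation matrix
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]   # drop self-correlations
    return float(off_diag.mean())

# Hypothetical demo: a shared stimulus-driven component plus individual noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)
engaged = shared + 0.3 * rng.standard_normal((20, 1000))   # aligned brains
distracted = rng.standard_normal((20, 1000))               # divergent brains
print(engagement_score(engaged))     # high, near 0.9
print(engagement_score(distracted))  # near 0.0
```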
[00:29:12] And I'm saying teaching, but it applies to many things. I want to show you an example from the world of movies. I spent a lot of time in Hollywood, and I think they are usually ahead of a lot of researchers, so spending time with them gives you a good clue to what's going to happen next. There's a saying in Hollywood: the difference between science fiction and science is timing. If you spend time there, you get a glimpse of the future. One of the things we did for Hollywood in return is we studied movies for them and tried to help them understand what works. Specifically, we worked with AMC Theatres, a big movie theater chain, trying to see which of the ads they put before the movies make people care. So I'm going to show you an ad now. This yellow line here can go from zero to the top of the screen, and the higher it is, the more the brains of the people in the room were aligned. If it's very low, everyone is thinking about different things and the engagement is very, very low.
[00:29:48] If it gets higher, it's better, meaning people in the room care more about what they see. And in fact, what you see here is pretty much nothing. For the 30 seconds of the ad, the line stays in the bottom quarter; nothing interesting is happening. It goes up and down, but very little. It seems like it wasn't that engaging. But then we take a technology, in this case an AI code that does a very simple job: look at all the hundreds of people who were in the room and try to cluster them into two groups, by looking at all the pairs of brains and seeing whose brains look very similar to each other and very different from group number two. And the AI by itself figured out that the difference is men versus women. It just says: most of these people are women, most of these are men. And if you now look at the brains of all the women watching the same content versus all the men, you'll see the effect. So I'm going to show you the same thing, but now you're going to see two lines. I'm not going to tell you which line is men and which is women, but you'll see that if you look at all the brains of women together and all men together, they're very different in some moments. In the beginning, it's kind of equally boring to all groups, and at some point, one of the two is going to care a lot more about what's going on.
[00:31:03] This is an amazing demonstration of the technology, right? We didn't tell the AI anything; we just said: here are lots of brains, you figure out how to cluster them, and just tell me what the label is. And it says: I think these are mostly men, these are mostly women. And now, as scientists or as teachers, we can ask: okay, what happened in this moment? Why did it work? And we say: oh, I guess men like this more, so maybe make sure that this happens more in the next ad; or maybe make sure that, as a teacher, I do different things for women and men. We now apply this across the board in many domains. Take doctor-patient interaction: many times doctors say, I told the patient how long to take the drugs, and the patient leaves and says, I don't know what I was told. The words were said, but somehow they didn't land. So let's see what we can do to make sure that your brain and the doctor's brain are aligned. We do that with big teams who have a short huddle and need to communicate a message, and the question is how to make sure that everyone heard the same thing so they do a good job.
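The clustering step described here can be sketched as unsupervised grouping over pairwise brain similarity, with any demographic label applied only afterwards by comparing the discovered groups to known attributes. A rough sketch, not the team's actual pipeline:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_audience(signals: np.ndarray, n_groups: int = 2) -> np.ndarray:
    """Group audience members by the similarity of their response traces.

    signals: (n_people, n_timepoints). The labels are learned purely from
    pairwise similarity; interpreting them (e.g., men vs. women) happens
    afterwards, by comparing the groups against known attributes.
    """
    corr = np.corrcoef(signals)        # pairwise similarity in [-1, 1]
    affinity = (corr + 1.0) / 2.0      # shift to [0, 1] for clustering
    model = SpectralClustering(n_clusters=n_groups,
                               affinity="precomputed", random_state=0)
    return model.fit_predict(affinity)
```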
[00:32:41] We look at fans and see how much they are aligned with their sports team. We look at musicians and their audiences and see that a performance works best when the musician's brain and many people's brains look the same. We look at politics; we look at ways to communicate ideas. And the one I care about most is educators. We try to see: can I find the teacher that's best for a given person? Right now, the education system is mostly broadcasting. You bring one professor, have them speak to 150 people, and you hope the students like someone who speaks fast, with this particular sense of humor. It works. But maybe there are 15 in the room who would benefit from a different professor, and they just signed up for this class. If we can look at all the brains and start matching them, we can actually change the teaching: rather than having all six-year-olds in the same classroom, we'll have all the six-year-olds who like this kind of teaching together, and all who like that kind. I think AI is really a good use of this method here: it can quickly tell you, you're a better student for this person, and you're a better teacher for that person.
[00:33:54] Okay, I'm gonna stop here. I want to tell you about all the other cool stuff, but I just listed a few of the things I'm doing right now, and I'm going to leave you hanging with a cliffhanger, because we have a full day together and we're going to have a chance to talk. I wanted you to know what's on the menu so you can ask me or some of my colleagues about what we're working on. The coolest thing we're working on right now, which I find interesting and thought you'd find relevant: we're looking at the ability to connect brains to machines, not just to change the inputs in real time so you get the most engaging experiences or understand things better, but really to do what we've seen in the movies: can we capture the memories that you have and put them on an external device? So you could take the memories of one person and either download them, or at least back them up, or transfer them.
[00:34:38] A good colleague of mine is doing this with mice and rats right now. He takes one rat and has her move through a maze so she learns how to navigate it, then brings in a new rat who's never been in this maze and essentially moves the memory from one to the other. With rats, the memory of a maze is a very easy memory to copy; we know exactly how it works in the brain. Then the new rat walks the maze. Every turn feels new to her, but she guesses the right way; her brain knows something she doesn't know that she knows, and when she gets to the end in one trial, you realize she has just acquired a memory. So we're looking at this. We're also looking at technology to enhance your sensory experience. Right now, we know the world through five senses, but the reality is that the brain is like a sponge: whatever signal you feed into it, it's going to make sense of. And we know that nature is full of other senses. Bats have echolocating ears, snakes have eyes that see the infrared spectrum, and birds know where north is because they have magnetometers in their brains. The human brain, if you feed it these signals, will make sense of them; it's amazing at taking signals and finding patterns in them if you present them the right way. So we're looking right now at ways to take the senses of one animal and plug them into another animal...
[00:34:46] ...and have the brain learn to make sense of that. The example would be: take the whiskers of a cat, which act like accelerometers, plug them into the brain of a ferret, feed this data into the ferret's brain for a few weeks, and have it learn a new way of knowing that it's standing straight. Finally, the thing I'm most interested in, and where AI right now plays the major role in our research, is decoding dreams. For the last several years we've been playing with this, and every time I talk about it, that's what people want to know most about, so I'll finish with it as a way to tease you to ask me more. For hundreds of years, millennia, people have been interested in dreams, but up to 10 years ago, no one had access to them. You could only get the story of a person who woke up and described what they think their dreams were. And what we know is that the story you tell when you wake up is typically flawed, if not totally made up. You don't really have access to your dreams; you just conjure a story when you wake up. The example I like most as a summary: in the 1920s, people reported their dreams in black and white, and as soon as TV and movies got color, everyone said, oh, our dreams are in color too. Clearly, they saw what the world had to offer as a movie and assumed their dreams looked like that. We now have more evidence for this. Finally, because we're in people's brains live, we can actually see their dreams, ask people what they think, and see what's going on. And here it's great because the dream happens so fast, and you have access to so many neurons, that you have to be really quick in figuring out the narrative: how do you make it into a story, and how do you make sense of all of that? The coolest thing we're trying to do right now, and I would not say it's mature enough to disclose much beyond the fact that it's close enough that we can at least imagine it together, is to write dreams. So you go to sleep and you say: I would love to go on a date with someone I could not date in the real world, and you have a date with them, and it feels real for the minutes that you dream. Or if there's someone you really miss and can't see in the real world, like a grandmother who died and you always wanted to talk to her, you can have her come to your dream and have an experience with you. It's your brain putting words in her mouth, but it still feels like it's her talking. Or all the things you cannot do, like travel to Mars or meet dinosaurs, you can do for a sliver of reality in your dream. And as Freud said, dreams are real while they last; can we say more of life? In that sense, I feel we're playing in a category where we would love your help, input, and use cases, because what I said at the beginning, I'm going to apply to myself as well: I'm not sure it's all a good thing. We're playing with things that are interesting, but we don't have enough time to stop and see where they're going to be taken and what they're going to be used for. And in that sense, I think it's a good moment to reflect with people who are new to this, who have not seen it before, and ask: hey, do you like it? How are you going to use it? What is it for? Is it a good idea? And if not, can you stop it?
So again, the call I began with, I'm going to end with: we really need the real world to be involved. Labs and academia are moving fast, and they're not necessarily the people you should entrust all this knowledge to. I think the real world is much better at that, and I hope that you're going to help us. Thank you very much.
[00:37:48] Speaker 1: Thank you, Moran. Next up is going to be Beth Muturi.
Speaker 2: I am Beth Muturi. I'm an AI faculty member at Miami Dade College, and my focus is mainly on teaching AI, from the intro courses to the point where students do their capstone projects. So I was given this task when I joined academia, or, as I normally tell my students, academia chose me, AI chose me, because what I wanted to be was an ethical, white-hat hacker. You know how we see them hack on TV? That was what I was aiming to do. But then it wasn't easy for me to get into cybersecurity, so AI it was. At that time, AI was still a buzzword for most of us, but I decided: why not, let me just flow with it and see how it turns out. It ended up being a crazy process, and when I went into academia, I asked: how can I make it easier for my students to learn? How can I make it more effective for them, so that when they come out of here, they're able to demonstrate something? That's how I came up with what we call humanizing AI in education. So what did I do with the capstone project class? Instead of having them do a normal project, where you come up with, you know, hypotheses or something created just for the classroom, I thought: let me connect with my local community in Miami and see what we can create for them in terms of AI solutions. How can we go about that? So I reached out to a couple of companies in our community, and most Miami businesses are small businesses. One of the key projects, which Alex and Natalie actually got to see, was for the Boy Scouts' Miami department. Have we heard of the Boy Scouts and Girl Scouts before? Yes. So we have one in Miami. When we approached them, they had no system in place for understanding how they could raise money, fundraise, and so on. And we were like: you know what, we can help you with something. And as we know in academia, it's a process, because you only have three months in a semester.
[00:39:44] The fall semester is normally the shortest, but I was like, let me try this challenge. One thing is, our students are enthusiastic, and I thought: let me help them through this process. So we approached them and we spoke to them. They had no idea about AI; they were also green. As a professor, you're educating the business—in this case, the Boy Scouts—and you're getting the students to build a solution for them. The main challenge was: how can we have a consistent way of fundraising? A way to know that, come next year, we have this money to help the Boy Scouts, a way of being able to help the community. One of their challenges was that, yes, they have these amazing people, but they don't have the resources for education or for growing and so on. So, how can you fundraise effectively? I said: we can use AI, predictive analytics, a system with a central point where, when fundraising is happening, it can all be automated. Now you can forecast, based on your donations and expenses, how to plan for the next year. That's where the humanizing piece came into place: people who do not have the power or even the money to create the amazing AI solutions we have seen so far. That's what Alex and Natalie are talking about: they were able to see what we're doing and what we have done so far. That was the first time we had our humanizing piece of AI in the classroom.
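The "forecast next year" idea can be as simple as a trend fit over past totals. Here is a minimal sketch with made-up numbers standing in for a real organization's books; an actual student project would fold in expenses, seasonality, and donor-level data:

```python
import numpy as np

# Hypothetical yearly fundraising totals (USD) for a small nonprofit.
years = np.array([2019, 2020, 2021, 2022, 2023, 2024])
raised = np.array([18_000, 12_500, 15_000, 17_500, 19_000, 21_000])

# Fit a linear trend to past totals and project the next year.
slope, intercept = np.polyfit(years, raised, deg=1)
print(f"Projected 2025 fundraising: ${slope * 2025 + intercept:,.0f}")
```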
Last semester, in the spring, we had a chance to do the same thing for other companies, for example, an art studio. They create really fine art, but their concern was how to be more effective: how to focus on the art piece rather than just on understanding clients' needs. So we worked on automating that process. We sat down with the students and brainstormed how we could help automate it. If you're interested in art, you can come and just use the system and input your needs; for example, if you like a certain color, you can specify that. The business owner's focus can then stay on perfecting the art piece. That's where the humanizing piece comes in: connecting with a community of people who cannot hire, let's say, an AI engineer for $100,000 a year. We were able to provide a solution through our students. For the students, the benefit is practicality: they come away with a better way to showcase their skills. When they go out to the job market, they can point to a capstone project that changed a business and increased its efficiency.
Some key feedback from the businesses is that everyone wants an easier and better way of doing things. Humanizing AI in the classroom, by bringing real solutions into it, helps the local community: people who do not have the chance to build these systems on their own, or the money to perfect them. For the students, it's a benefit; it's a skill they grow. Just this summer, one of our students was hired purely because of the AI capstone project. These are people doing their associate's program, not yet the bachelor's program. I still believe all these aspects can be improved. This spring, we had our first AI showcase in Miami. It brought a lot of attention to the city.
As you can see, that's our AI center right in Downtown Miami with my students. They were really impressed and happy to showcase their skills, and happy to get some interviews from companies. At the end of the day, it's always a win-win. Students who come to school, who may not have had a job but are coming to learn, leave with something to showcase, and our community benefits from it as well. At the end of the day, I term it humanizing AI in the classroom, and how do I do that? Through capstone projects. My presentation is short and sweet. If you want to follow me, these are my details. I look forward to chatting with you all afterwards. Thank you so much.
[00:43:46] Speaker 2: Thank you so much, Beth. Next up, Professor Greg Niemeyer.
[00:43:53] Speaker 3: How y'all doing? I am Greg Niemeyer, and I'm here to share my vision for an AI pedagogy. I think I have another slide here... There we go. As you can see, we're all going in different directions in these AI times. Don't we all sense that AI changes what learning feels like, what counts as knowledge, and even what it means to think? Our reactions range from rejection to blind enthusiasm, sometimes in the same day. I want to propose a vision for thinking about education in the age of artificial intelligence, not as institutional policy or a moral panic, but as a living ecology. In a world where machines facilitate misinformation faster than we can verify the truth, we need an education that strengthens our ability to tell truth from fiction.
[00:44:42] Because if we lose that ability, we lose meaning itself. And when meaning erodes, so does communication, and with it the very possibility of education. So to move past the ad hoc use of AI in academia, I propose we map three pedagogical modes: plus AI, minus AI, and times AI. Subtracting AI, adding it, and redefining knowledge with it: these are three ways of structuring learning in relation to intelligent tools. Each mode has its value, but also limits, and the real challenge lies in learning when to shift from one to the other. So let's begin with minus AI.
[00:45:19] Minus AI: the spaces where we intentionally set technology aside. The arguments are familiar. AI can be costly, energy-hungry, derivative, biased, and fragile. It can de-skill and dehumanize students, obscure sources, and create dependencies. And yes, sometimes we simply shouldn't use a machine when we have the time and skill to do the work ourselves. Some students resist AI and vocalize exactly this kind of diagnosis. But the deeper reason for minus AI isn't just fear; it is presence. Minus AI pedagogy foregrounds direct experience: drawing from observation, conducting experiments, doing fieldwork, building things, debating ideas, and dancing together. It restores learning as embodied inquiry where heads, hands, and hearts connect. Thank you, Mr. Pestalozzi.
[00:46:12] Minus spaces also remind us that not everything valuable can be measured and optimized. Sometimes learning is about the most important outcomes: error and insight, judgment and empathy, reason and intuition. These emerge only when no machine mediates the moment. Then comes plus AI, the additive mode. Here we recognize that AI can extend human research, expand access to knowledge and information across space and time, support diverse learning styles, and accelerate feedback; done well, it can help students focus, because AI can offer challenges rather than premature answers.
[00:46:47] My prognosis is that most universities will unofficially reach near-100% adoption soon, and those who reject it categorically will not prepare students for the AI worlds they will live in when they graduate. To make sure adding AI is not just outsourcing thought outright, we need dialectical engagement with AI: not one-way use of a tool, but a critical engagement that sharpens how we think. This kind of engagement begins with transparency, knowing how the system works, and extends all the way down to individual prompts. That's the real foundation of AI literacy.
[00:47:21] It means, at a minimum, being intentional: including AI in course design, creating citation systems for AI-generated work, building policies that evolve as fast as the tools do, and offering critical courses on AI as a subject across every discipline, not just data science. Ethics is a precondition, not a footnote. Intentional design lets us integrate AI reflectively. For example: generate an argument on your own, ask ChatGPT for three counterarguments, and then refute them. That's not outsourcing thought; that's dialectical reasoning. And that's the point of plus AI teaching: to keep the dialogue alive.
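The counterargument exercise is straightforward to operationalize. A minimal sketch using the OpenAI Python client; the model name, system prompt, and sample argument are assumptions for illustration, not part of the talk. It requires the openai package and an OPENAI_API_KEY in the environment.

```python
# Sketch of the "three counterarguments" exercise: the student writes the
# argument; the model supplies only objections for the student to refute.
from openai import OpenAI

client = OpenAI()

def three_counterarguments(student_argument: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Give exactly three numbered counterarguments to the "
                        "user's argument. Do not evaluate or rewrite the "
                        "argument itself."},
            {"role": "user", "content": student_argument},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(three_counterarguments(
        "Universities should require an in-person oral defense "
        "for every capstone project."
    ))
```

The constraint in the system prompt is the pedagogical point: the model challenges the student's thinking rather than replacing it.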
[00:48:00] We ask students to articulate what distinguishes human writing, art, and reasoning from algorithmic production, not to defend human uniqueness blindly, but to understand where intentionality thrives. Plus AI raises subtle risks as well. Curiosity can be sustained, but it can also be squashed when learners are given answers too quickly, when every question is instantly resolved. The slow burn of wonder can cease. The work of pedagogy is to teach students how to propel themselves into deeper inquiry on their own, even with AI.
[00:48:34] And here again I have to quote Pestalozzi, who said in the 18th century, "all learning which is not self-propelled kills the roots of independence in the student." The task of education is to teach people to think for themselves. The task of AI is to sustain the path of self-determined exploration. And the task of a university is to connect such explorers in a learning community, on a common journey, in a plus AI pedagogy. That means crafting community beyond the individual prompt; it also means pacing a group so they can move forward together without stalling or flooding any participant.
[00:49:08] Defining the edge between feeding and overwhelming, between desire and dis-emancipation, is the task of good course design. We now have to learn how to do that with malleable AI tools. Getting this right may determine the depth of learning that AI makes possible; getting it wrong will make collaboration one of the first victims of too much AI. Plus AI isn't just about faster access; it's about more meaningful collaborative seeking. Used this way, AI can become a mirror that clarifies human intentions and capabilities rather than blurring them.
[00:49:40] So let's move on to times AI, the transformative mode. Here AI isn't an accessory but a medium that reorganizes how knowledge itself is structured. What is the potential of this significant paradigm shift? In this mode, education can become adaptive, distributed, and non-linear. AI can shape courses dynamically, adjusting content and pace to each student, much like dynamic difficulty adjustment in game design, where the system keeps the player in a state of optimal challenge. For decades, games like Flow, Left 4 Dead, and even Candy Crush have used adaptive design to keep players engaged. In education, similar scaffolding can sustain curiosity: challenging, but not overwhelming. Times AI also implies that universities, such as the UC system with roughly 300,000 students and 26,000 faculty, should research AI deeply across technical, cultural, and epistemological domains, and fund that work accordingly. That would include creating critical artwork that pushes AI beyond the generic use case.
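For readers unfamiliar with dynamic difficulty adjustment from game design, a toy sketch of the mechanism: keep the learner near a target success rate by nudging the challenge level after each batch of exercises. The thresholds and the ten-point scale are invented for illustration; real adaptive-learning systems are far more sophisticated.

```python
# Toy dynamic difficulty adjustment: nudge the challenge level toward a
# target success rate. Thresholds are illustrative, not from any real system.
def adjust_difficulty(level: int, recent_results: list[bool],
                      target: float = 0.7, window: int = 5) -> int:
    """Return the next difficulty level (1..10) from recent pass/fail results."""
    recent = recent_results[-window:]
    if not recent:
        return level
    success_rate = sum(recent) / len(recent)
    if success_rate > target + 0.15:      # cruising: raise the challenge
        level += 1
    elif success_rate < target - 0.15:    # struggling: ease off
        level -= 1
    return max(1, min(10, level))

# A student passing everything gets harder material; one failing gets easier.
print(adjust_difficulty(4, [True, True, True, True, True]))      # -> 5
print(adjust_difficulty(4, [False, False, True, False, False]))  # -> 3
```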
[00:50:46] Can we train models on things that are not strictly human frames of reference? Can we think of coral reefs, mosses, the wisdom of the winds as the source and foundation for a model? Can we think of AI as a medium in its own right? New media, in their infancy, always replicate old media. An example of this is photography: it took us about 100 years to figure out what photography is good for. Of course, we didn't have AI to help us with that question, but now we do. Now we can think about what the true meaning of AI is. What is its true form? Real artists and visionaries find a medium's purest form through experimentation, and it's often not what it seems to be. Hannes Bajohr's novels, Linda Dounia Rebeiz's flowers, and Ryoji Ikeda's installations point in that direction.
[00:51:43] I just finished a show in Detroit myself, where I was looking at resource extraction: sand mining in Oaxaca. To make that point clear, I brought in predictive modeling from AI, but I also brought in a kilogram of sand and a kilogram of copper, so people could feel the weight of sand and the weight of copper, along with maps of where the extraction happens, which is often illegal and leaves landscapes in disarray. Seeing the image, feeling the weight, and thinking through the prediction: that is where heart and head come together. These are really important spaces where we reintegrate these human faculties.
[00:52:22] Following the idea that quantitative shifts can produce qualitative transformations, like when you boil water and it turns into steam, we might ask how human intelligence itself evolves as we increasingly offload cognitive labor to AI. As our machines absorb ever more of our cognitive functions, can our role shift from cognition to metagnosis? By that I mean meta as in the Greek preposition "beyond," not the tech company down the bay. Can we move beyond the intelligence we know now, or are we approaching an evolutionary limit where further outsourcing begins to erode rather than expand our capacity to think? This is where academia can reinvent itself and serve its constituents: exploring new epistemologies, new relations between data and meaning, new ways of thinking, and perhaps even new definitions of what it means to be human.
[00:53:17] What does it mean to be human today? It would be comforting to imagine these three modes, minus, plus, and times, as separate, but education rarely offers such clarity. In practice, the boundaries blur; finding the fit between AI and education is reciprocal. Is a field lab that uses AI sensors to support manual observation a minus AI or a plus AI space? Is an AI tutor built into a software tool like Blender fostering dependency or enabling imagination? If an overzealous AI tutor paints us into a corner where we no longer understand what we're doing, can we shift back to minus AI? If a group uses AI to launch a discussion and then continues without it, have we crossed into times AI or returned to minus AI?
[00:54:06] Such ambiguity, in my opinion, is not a flaw; it's the point. Designing education today means continually asking what belongs where, why, and how. We will misclassify. We will have to adjust and redesign. That's the pedagogical work of our generation. Beware of locking in too soon, as many institutions do. The map isn't static; it's an evolving choreography between human and machine attention. The stakes are high. If we don't get this right, we lose collaboration, we lose meaning, we lose nearly everything that makes us human.
[00:54:38] That's the danger of an AI override. If we get it right, however, AI pedagogy can make us more human, more capable, more caring, and more truthful. And if we get it right, how can we make these tools public so they benefit everybody? That's a huge challenge, because the future of learning is not just about keeping up with machines; it's about keeping the quest for meaning alive, toward a new kind of humanity. So I have an invitation for you. The image you see here captures what words can't: a tangle of connections between minus, plus, and times, not as separate territories, but as an integrated ecosystem of learning. Notice how no single mode dominates; each pulse feeds the others. Minus AI grounds us in lived experience. Plus AI extends our mental reach. Times AI transforms what we use our brains for. The task of pedagogy is not to pick a side, but to conduct this orchestra, knowing when to mute, when to amplify, and when to change the pace entirely. And that, of course, requires the courage to ask questions, the right questions at the right moments. So as we move forward today, I invite you to ask three questions. First, where in your practice should AI be subtracted to recover presence? Where should AI be added to expand possibilities? And finally, where might you multiply by AI to imagine thinking itself anew? The future of education will not be minus, or plus, or times. It will be how we move among these three wisely, kindly, and together. Thank you very much.
Speaker 2: Thank you so much, Greg. That was really beautiful.
[00:56:27] Speaker 2: Last but not least in our AI for Pedagogy segment is Tina Austin.
Speaker 1: First of all, it's an honor and a delight to be here with you all today. I'm going to start off with a show of hands: how many of us have tried to re-create assignments in the age of AI? Everyone? Mostly. How many of us are concerned with cheating with AI? Okay, that's a good number. I think Natalie mentioned that earlier. My goal in the next ten minutes or so is to talk briefly about this, about how we can potentially get past this issue, and about a framework where we can leverage AI's capabilities while critiquing its inaccuracies, and basically stop grading the output.
[00:57:24] I've been teaching for over a decade, but for the past five years specifically at UCLA. How many people here are from UCLA? Oh, wow, quite a few of us. And my focus is mostly in this domain; no, this is not a pointer, this domain. I was one of the first people at my university to try to implement AI in the classroom, and as a result, one of the first people to face a lot of resistance early on. I love my students, and we have great conversations and great debates in the classroom. But once generative AI came along, I noticed there was a way to actually make things easier and more useful for them. Post-AI, I was able not only to bring new experiments into my classes, but also to connect across three different disciplines, because I was now teaching new AI courses in different fields, and I got a lot of requests from faculty to run professional development and to share my syllabus. My syllabus was one of the first shared on campus, because a lot of people were looking for solutions and for ways to go beyond seeing this as a cheating tool. I ended up using a lot of custom GPTs. How many of us have built custom GPTs for our classrooms? Wow, everybody. Some of my work was shared on the OpenAI Academy, which I was really happy about, but today I'm not going to talk about custom GPTs. I want to talk about how, post-AI, rethinking pedagogy has become increasingly essential.
[00:59:10] So, how many of us are familiar with Bloom's taxonomy? All of us. We had a great talk earlier about Professor Bloom's work. For 70 years, educators have relied on Bloom's taxonomy, a framework that organizes learning into six hierarchical levels: students must first remember the facts, then understand them, then apply the knowledge, then analyze it, then evaluate it, and finally create something new.
[00:59:36] This made sense in 20th-century classrooms. But now AI comes along and creates your assignment for you. AI analyzes some of the work for you. Or maybe your student doesn't have to go through this linear process all the way up in order to create something. So some of my colleagues in the writing department suggested: how about we flip Bloom's taxonomy? Quite a few of us have seen this; there was work published on it before AI and then after AI. One of my colleagues thought this works really well in writing classes. My issue with it, however, is that I don't believe learning is linear. It's recursive. Sometimes we don't go up this ladder, and especially in STEM fields, this doesn't really apply.
[01:00:36] Based on the faculty workshops I ran and the experiments I piloted in my own classroom, I put together a proposal, and I called it the Autumn Bloom's framework. It identifies where AI can scaffold learning versus where human thinking is essential, and it's different for every discipline, so it's not as if everyone has to follow one formula. The goal here is problem solving. So what determines the starting point in this Bloom circle? The problem you're trying to solve in your classroom and the kind of thinking it demands. Unlike Bloom's pyramid, it has no start-at-the-bottom rule.
[01:01:25] So the question is: are students generating something? You can start with create. Are they fact-checking AI? You can start with evaluate. Are they building from AI's output? You can start with analyze. Are they encountering new content? Start with understand. The goal is not to create the output, but to create the comparison. As a result, I came up with a scale to get my students to reflect on AI output. We all know that AI constantly makes mistakes. The first level is to have students compare their work with what the AI got wrong. I had them watch AI podcasts, use some pre-made custom GPTs, and use different AI tools to see what mistakes the AI makes. That made them not only more confident in their own answers but also aware of how the AI does things.
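The start-anywhere rule reduces to a small lookup from task type to entry level. This sketch only restates the mapping listed in the talk; the function and its default are illustrative, not Tina's actual tool:

```python
# The "no start-at-the-bottom rule" as a lookup: what students are doing
# determines where they enter the cycle. Restates the talk's mapping only.
STARTING_POINTS = {
    "generating something":      "create",
    "fact-checking AI":          "evaluate",
    "building from AI's output": "analyze",
    "encountering new content":  "understand",
}

def starting_level(task: str) -> str:
    # Defaulting to "understand" is an assumption, not part of the framework.
    return STARTING_POINTS.get(task, "understand")

print(starting_level("fact-checking AI"))  # -> evaluate
```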
[01:02:29] Level two is where a student explains why the AI made that error. At level three, a student analyzes how the AI's reasoning differs from their domain expertise. Level four involves the student evaluating patterns in the AI's limitations. Finally, for the highest score, they can design a task that overcomes the AI's limitations, or create something new, basically solving the problem.
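Condensing the five levels into data makes them easy to drop into a grading sheet; the wording below is paraphrased from the talk, and the dictionary structure is an assumption about how one might encode it:

```python
# The five-level reflection scale as data, so it can drive a rubric or
# grading sheet. Level descriptions paraphrase the talk.
AI_REFLECTION_SCALE = {
    1: "Compare your work with what the AI got wrong.",
    2: "Explain why the AI made that error.",
    3: "Analyze how the AI's reasoning differs from domain expertise.",
    4: "Evaluate patterns in the AI's limitations.",
    5: "Design a task that overcomes those limits, or create something new.",
}

def describe(level: int) -> str:
    return f"Level {level}: {AI_REFLECTION_SCALE[level]}"

for lvl in sorted(AI_REFLECTION_SCALE):
    print(describe(lvl))
```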
[01:03:02] What does that look like in practice? With traditional Bloom's, if a professor wants students to learn enzyme structure and function, they would have them memorize the textbook, take an exam, and finally write an essay. This no longer works in the age of AI; this is why I think a lot of us are focused on cheating, because we're grading the output. Instead, you would focus on solving the problem. Say you have a history professor, a science professor, and a coding instructor. The history professor might start with analyze, then apply, evaluate, create, and go back to analyze. The science professor, teaching enzyme structure and function, might start with something AI has already generated and have students evaluate that.
[01:04:02] A coding instructor might use something previously created with AI, analyze it, and apply it. We've all heard of vibe coding, right? There are some issues with vibe coding, so you can have students really evaluate that. I tried that with my custom GPT; I had students (Eric knows what I'm talking about) evaluate those. This is different, and I'm really glad I'm following your presentation on times AI and minus AI. I've had students evaluate these tools in ways that are for AI, against AI, with AI, and sometimes around AI.
[01:04:34] What I mean by critically evaluate is that students generate an initial understanding and test it against AI, versus traditional Bloom's, where they're just looking at the content. The AI becomes a partner for thinking with and against.
[01:04:57] What was valued in traditional Bloom's was, for example, getting it right. Here, what's valued is understanding why you were wrong and what that reveals. Another advantage, I think, is that in traditional Bloom's, failure was something to hide, and that's why we have these concerns about cheating. Here, failure becomes data for learning. It's recursive, because learning is recursive. And that is roughly what my slides show right now.
[01:05:33] Again, this is another comparison. I don't have a ton of time to explain all the details; I have lots of examples for when this is a one- or two-hour talk or workshop, but here I'm just giving the general idea of the proposal. This is something I presented a month ago at Oxford, and we're collaborating on it with colleagues in the UK and Europe.
[01:05:56] Another aspect of it is assignments to skip AI with. If you've had reflection essays in the classroom, or debates (has anyone had debates in the classroom? Yeah), those are some examples of times when you may not use AI. And just to make it easier for educators, I built a custom bot for this Bloom's model.
[01:06:22] It will ask you: what discipline are you in? It's not perfect yet; I am collecting feedback on it. It's really just there to give you an idea of where to start evaluating your students and the AI output. How does this complement ChatGPT products like Study Mode or Pulse? You can, again, take the output and have your students evaluate it. And my students have told me that they feel a lot more confident now in their answers versus the AI's.
[01:06:57] So, in conclusion, I'm going to leave you with three things. First, learning in the AI era has to be recursive, not linear, so rethink learning. Second, bring student voice and personal experience into assignments, and reclaim human judgment, because I think that's what's missing. And finally, stop grading the output; grade students' critical evaluation instead.
[01:07:38] That will make them more confident learners. So if you're interested in collaborating or want to talk about this further, feel free to scan this or connect with me online. Thank you all.
Speaker 2: Thank you so much, Tina.

