OpenAI Forum

From Experiment to Institution-Wide: How AI is Changing European Higher Education

# Higher Education
# AI Adoption

SPEAKERS

Louis-David Benyayer
Associate Professor @ ESCP Business School

Louis-David Benyayer is Associate Professor at ESCP Business School (Paris campus) and the AI Initiatives Coordinator for the school. His work on AI includes teaching, researching, and scaling AI solutions for the school. He is the instructor of several AI courses and acts as scientific director for the Certificate AI for Business. His research focuses on the strategic implications of digital technologies, in particular big data and artificial intelligence. After using generative AI in his courses for a couple of years, he is now in charge of coordinating efforts to investigate if and how to scale the use of this technology across faculty, students, and staff. He earned his PhD from Dauphine University and graduated from ESCP Business School.

Jayna Devani
Higher Education Lead, EMEA @ OpenAI

Jayna leads Higher Education partnerships in the UK, Europe & the Middle East. She works primarily with universities, research labs, and education ministries to provide safe and equitable access to AI. Prior to joining OpenAI, Jayna held various roles in tech and finance in London. She read History at UCL.


SUMMARY

This Forum session, led by OpenAI's Jayna Devani and ESCP's Louis-David Benyayer, highlighted the transformative role of AI in education, particularly how institutions like ESCP are pioneering campus-wide adoption of generative AI (notably ChatGPT Edu). The session detailed experimental and scalable implementations of custom GPTs, reflections on responsible AI integration, and a broader philosophical shift toward preparing students for an AI-rich world. It underscored the value of change management, experimentation, and community-building in embedding AI tools meaningfully in academic and administrative settings.


TRANSCRIPT

Hello, Forum members, a very warm welcome. I'm Jayna, our International Education Leader at OpenAI, coming to you from the London office in Kings Cross today, and it is a real pleasure to be here and to be joined by Associate Professor at ESCP, Louis-David Benyayer, and the wider ESCP team.

Just to give you a quick overview of what's going to happen for the next hour and 15 minutes: I'll start with a little bit of an introduction, I'll give you a little overview of OpenAI for education, why that exists, why we have an education team, and the kind of schools we work with, and then I'll hand over to Louis-David himself to talk about some very practical ways to integrate AI into teaching and learning, lessons learned from the work we've done together so far, and then eventually scaling up AI campus-wide.

We'll then have a little bit of Q&A, so please don't be shy and start preparing a few questions that you might have, I'll be pulling them out in the chat and asking them to Louis-David, and then one of the best parts of this forum is really about bringing together this broader community and having a chance for everyone to meet each other, so there'll be a little bit at the end of some one-to-one matching.

So with that, I'll start with a little bit of an introduction, just so you get to know me. So I'm Jayna, I lead our work on the international team here with educational institutions, research institutes and policy makers. I've been at OpenAI since January last year, so just coming up to a year and a half, and when I joined, the team was very small, we didn't actually have an education team, and really since hearing that this was going to be one of the biggest focuses for OpenAI, I put my hand up and I thought really there's nothing more meaningful to be doing than working with students, staff, faculty, researchers, given how much AI has changed education, and it has been a really interesting ride.

So just to give you a very brief overview of what the education team is, what we're doing, I wanted just to share a quick graphic here, which is just to say that really we had to start building out an education team at OpenAI from the very beginning. The number one use case of ChatGPT today is learning, and the number one user group is students. So it just became very clear to us that any attempt we made to engage with this community, it would really be about building a lifelong learning platform, and really thinking very deeply about how AI is going to shape the next generation, how it's going to change people's critical thinking skills, creativity, motivation to learn, all these incredibly interesting topics.

And yeah, just to say out of the users that are half a billion and growing, the number one user group continues to be students, and so a lot of the time spent in this team, a lot of my time is spent talking to professors, learning from them, and talking to students.

The way that we look at ChatGPT and the benefits of AI tools is actually very broad, and we think of it in three buckets. The first is ChatGPT is a really powerful tutor. It can speak a student's language, it can understand their learning style, it can personalize itself, and this is the journey we're going on as we continue to build out ChatGPT Edu. It also is a very useful tool for a teacher in terms of saving time, getting rid of admin in day-to-day, and also creating really wonderful classroom experiences, which I'm excited that we'll see some of today.

The second piece is, as we look across the education sector, AI is a tool to make people more creative and more productive. Just looking across university finance teams, IT teams, and marketing teams, we've really seen some amazing, transformative ways that teams have been able to add a virtual data scientist assistant, save time, or produce better quality work.

The third, which really goes back to the mission of OpenAI, is the ability for AI to transform research. That is both helping automate some of the workflow of a researcher, some of the admin burden, but actually, much more excitingly, it's also about accelerating scientific innovation in research.

That leads me on to the three sort of broad goals and the vision here on the education team. When we saw the number one use case of ChatGPT was learning, we really decided to found this team and a specific education product to work directly with faculty, staff, and students. We launched ChatGPT Edu nearly a year ago, and our goals are to provide equitable access to intelligence. Really, what we're trying to do is get rid of this state that sometimes exists where some students and some professors are paying for a premium version of a tool and getting a very different experience to others. Our real goal here is to provide an equitable experience to every single person on a campus, at a school, at an institute.

The second big goal is really about scale. What we're looking to do as this team, and we're a very small team, is to find the best possible use cases and work with an early set of adopters and innovators, and share that widely across the community as much as possible. That's exactly what we're doing here today with the forum and having the chance to hear from Louis-David directly, but really this is one of the reasons that this team exists: to find those early adopters, work with them, build with them, and then share that work more broadly.

Lastly, going back to sort of why OpenAI exists as a company, it's about accelerating research innovation. We really recognize that these tools have a long way to go, and very often it's actually not really about the tool itself, but about what we can learn together building on it. One of our key pieces of work is not just partnering on tool access, but also partnering on research collaborations. We recently announced a NextGen AI research consortium where we're funding and providing support on all sorts of interesting areas from climate change to drug discovery to metacognition, but the hope is that these models get smarter, they become more personalized, and they become a very useful tool.

Going back to why this team exists, what really drives us, I think personally what's most motivating for me is just the opportunity every day to work from and learn from people who are much smarter than me and have a lot to teach me in the education team. That really brings me to ESCP and the role that they've played in building out this education team. ESCP was one of our very first partners, I think one of our first in Paris but also globally, that really made a bold move to embrace AI holistically across the entire education system. That was driven by a very determined and wonderful group of people. It really does take a kind of task force to set up something like this broad change management, and we really learned a lot from and are still learning from what ESCP did. So it's particularly meaningful and exciting to have one of the people in the room who created that, Louis-David, and with that I'll give him a brief introduction before handing it over.

Quick introduction to Louis-David: he is an associate professor at ESCP and he leads AI initiatives on the Paris campus. His work has appeared in numerous scientific journals, outlets like the LSE Business Review, and global platforms including the World Economic Forum. He was co-director of ESCP's top-ranked MSc in Big Data and Business Analytics in Paris from 2018 to 2023, and he currently co-leads the executive MSc on this topic in Dubai. Beyond academia, Louis-David has spent the last 20 years running his own strategy consultancy, working both with large companies and SMEs. He's also been involved in startups, successfully managed a business turnaround, and regularly leads executive training focused on digital transformation. Recently he's been researching how businesses adopt and use AI and how this shapes business strategies. He earned his PhD from Dauphine University and his master's from ESCP, and he's also a wonderful person and has been a real pleasure to work with so far.

So delighted now to turn the spotlight on to Louis-David.

Well, thank you very much, Jayna, for the invitation first, and for that very, very nice introduction. It's my pleasure to be here contributing to this community of like-minded people who want to explore the impacts of generative AI in teaching. I feel lucky to be a prof these days, because not only is the technology exciting, but it also urges us to think again about the impact we want to have on our students and how we can maximize that impact.

But it comes, of course, with a sense of responsibility, because we have all also faced the difficulties and problems associated with detrimental use of generative AI in the learning process. So, like many of you, I guess, I've explored the topic from many different angles, and today I'll share with you two different views.

One view is the view from the profs and how they used GPTs and developed their own bots for the learning process. That will be half a dozen use cases that I'll share in a few seconds. Then I'll share the backstage story, if we could call it that, because my role is of course not only to be a prof but also to coordinate these AI initiatives, which is a bit of a change management and digital transformation process.

So I'll start right away sharing my screen to present the two topics I would like to discuss with you today: the bots that we developed collectively, and the experiments that we engaged in a few months ago. Let's start with the bots.

The first one, and maybe you can see that the UX doesn't look like the familiar OpenAI ChatGPT interface, came from one of our colleagues on the Berlin campus, who developed a specific, fine-tuned bot for his class on academic writing skills. It was designed to give the students support when writing their bachelor thesis, leveraging a large language model but also fine-tuned with specific information about ESCP processes like dates, formats, and expectations. So that's of course one group of the bots that we are developing, or have developed: bots designed to support students as they are studying.

But we have another type of bot, and this one was developed in a strategy class by Chang Wah Hoon, one of our strategy professors. He wanted the students to use a bot for preparing their exam, but a bot that wouldn't answer the students' questions; instead, it would guide them towards learning and understanding the concepts they were supposed to master for the final exam.

So it took a bit of prompt engineering to design the bot with the right instructions. But in the end, it was possible for the students to be guided without being given the answer.
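As a rough illustration of the kind of prompt engineering described above, here is a minimal sketch of how a "guide, don't answer" tutor could be configured as a system message in a chat-completions style payload. The instruction wording, the function name, and the model name are all invented for this example; they are not ESCP's actual prompt or setup.

```python
# Hypothetical instructions for an exam-prep tutor that guides rather than
# answers. Wording is illustrative only, not the real ESCP prompt.
SOCRATIC_TUTOR_INSTRUCTIONS = """\
You are an exam-preparation tutor for a strategy course.
Never state the final answer to a question directly.
Instead:
1. Ask the student what they already know about the concept.
2. Offer a hint or a guiding question that moves them one step forward.
3. When the student proposes an answer, point out gaps without filling them.
4. Confirm only once the student has articulated the reasoning themselves.
"""

def build_tutor_request(student_message: str) -> dict:
    """Assemble a chat-completion style payload: the tutoring rules go in
    the system message, the student's question in the user turn."""
    return {
        "model": "gpt-4o",  # placeholder model name, an assumption
        "messages": [
            {"role": "system", "content": SOCRATIC_TUTOR_INSTRUCTIONS},
            {"role": "user", "content": student_message},
        ],
    }

request = build_tutor_request(
    "What is the answer to question 3 on Porter's five forces?"
)
```

The same effect is achieved in a custom GPT by pasting such rules into its instructions field; the point is that the "don't give the answer" behavior lives entirely in the system-level instructions, not in any code.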

Then we had another type of tutor in math classes, by our colleague Erwan Lamy on the Paris campus, which proved to be quite successful, because when the math tutor was challenged to answer, let's say, the test, it proved to be quite a good student. So once again, the idea was to help the students use a bot for preparing their exams, not to use it during their exams, but I'll come back to that later when speaking about how we adapted our practices to generative AI.

So this was the first group: tutors and bots for students learning by themselves, right? But we have other types of bots which are used in class for pedagogical activities. Lorena Blasco-Arcas, from our Madrid campus, is a marketing expert and professor, and she has designed several bots representing personas of typical clients. The class was tasked with producing a marketing presentation for a client, and the client was embodied by bots.

And there were half a dozen different bots representing different businesses of different sizes, different industries, etc. The students had to interact with the bots on several occasions while preparing their marketing presentation, which was a great way for Lorena to expose the students, in a safe place, to the discussions they would have with a client should they become marketing consultants. And that's probably one advantage of those bots when designed that way: having a way to expose the students to a reality that would be hard to bring to the classroom.

It's super hard to bring representatives of medium-sized companies located all over Europe into the classroom, because you don't find them very easily, and you don't convince them very easily to come to Madrid to interact with students a couple of times. But with those bots, it's now becoming possible to expose our students to the thoughts, reflections, and reactions of potential clients. Of course, there is a difference between a bot persona and a real client, but that's all the know-how of Lorena: framing the discussion so that the students understand the benefits but also the limitations of discussing a client demand with a bot.
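One way to picture the persona setup described above is as a small set of structured company profiles, each turned into the system instructions for one client bot. The companies, fields, and wording below are entirely invented for the example; they are not Lorena's actual personas.

```python
# Invented company profiles standing in for the half-dozen client personas.
CLIENT_PROFILES = [
    {"name": "Nordwind Logistics", "size": "medium", "industry": "freight",
     "country": "Germany", "pain_point": "low brand awareness outside DACH"},
    {"name": "Sole e Vento", "size": "small", "industry": "agritourism",
     "country": "Italy", "pain_point": "seasonal demand swings"},
]

def persona_prompt(profile: dict) -> str:
    """Turn a structured company profile into system instructions for a
    bot that role-plays that client in a student consultation."""
    return (
        f"You are a decision-maker at {profile['name']}, a {profile['size']} "
        f"{profile['industry']} company in {profile['country']}.\n"
        f"Your main concern is: {profile['pain_point']}.\n"
        "Stay in character. React to the students' marketing proposals as "
        "this client would: ask about budget, timeline, and expected "
        "impact, and push back when a recommendation ignores your "
        "constraints."
    )

prompts = [persona_prompt(p) for p in CLIENT_PROFILES]
```

Keeping the profiles as data and generating the instructions from a single template is one plausible way to get half a dozen differentiated but consistent client bots without writing each prompt from scratch.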

Then we also have other bots which are not designed directly for our students but, some of them, for our profs. Alara Tassioglou, a member of our information and operations management team here in Paris, has designed a very interesting GPT that has been copied over and over by many of her colleagues. It's basically an assistant for her course: she designed specific instructions, so you see that it's for the course MBD03, information system management, she added some specific documents and resources associated with the course, and she uses that assistant like she would use a teaching assistant.

For example, she shared some of her use cases. The course is organized in a series of sessions, so she went back after a session and said, okay, that session went well, but I felt that at that point the engagement was a bit lower; so, preparing for the next course, she asked the assistant to identify new ways of engaging the students. That's one thing many of you have probably tried. Another use case: she was grading the assignments for that class, and you know that when grading qualitative material it is not always easy to be consistent across the students you are grading. So one thing she did, after having graded and given feedback to the students, was to ask the assistant whether she had been consistent and whether it would recommend adjusting some of the grades and feedback.

So I want to be clear that she didn't use the bot to grade the students; she used it to make sure that her grading was right, or at least consistent. The use cases are super numerous, and in case you're curious, I'm pretty sure you can reach out to her, she'll be happy to share. That's been a very popular use case; many of us have used it and designed our own assistants for our courses.

Speaking about grading and assessing students, we went one step further with something that is more of a research experiment than, let's say, a real-life one. Our prof Yassine Rankik, also from the Paris campus, worked with a student to design a specific LLM-based tool for grading and giving feedback on essays and assignments. They developed a bit of software to do this, and of course did a lot of fine-tuning and prompt engineering to get the right approach. After that, they ran a test with about a hundred final exams: the exams were graded by Yassine in the traditional way, and the system also graded and gave feedback on each of the 100 copies. The research question was then: how consistent are the grades? I don't want to disclose what's going to be published in a few weeks or months, but let's say the experience was sufficiently interesting that we are starting to think about if and how we could use that kind of process more globally, or at least in some defined environments.

Then there is another example, a bit of a reflexive and meta approach, by our professor at the Madrid campus, Victor Lima. He is quite a tech guy, and he of course engaged with the technology pretty early and pretty intensively. He has developed some bots, assistants like the ones I described with Lorena or Alara, but not only does he use them for his own benefit; he also uses them as a way to have a reflexive discussion with the students, for example, having them use the bot to get some feedback and compare it with the feedback they receive from him. That triggers a very interesting conversation with the students about the bots, their usage, their limitations, and the ethics of using automated machines in a setting such as education.

But the other group, which I won't describe in detail here because there are too many of them, are the bots designed by our students. As we equipped a group of them with a ChatGPT Edu license, they were able to design their own bots, and you can guess that they're super creative in developing bots that serve their educational purposes, but also, you can guess, many of their personal goals. I need to say, so that it's clear for everyone, that we don't have access to our students' conversations.

We do know how many messages they send, but we have no way to access the conversations they have. So we developed those bots in the settings I described. There are many different use cases, but I of course focused here on the ones I felt would be interesting for the audience. We are piloting a lot of experiments; I'll get back to that in a minute.

So what did we learn with those bots? The first lesson, and I think it's a great way to approach the question, is that previously impossible things are now possible. I think that's a great way to approach the possible benefits of the technology in learning and education. The case of Lorena, effectively bringing SME leaders into class on several occasions, is a great example of that, but there are many other previously impossible things that are now possible.

The other thing is that it's super easy to start, thanks, of course, to how well the technology and the UX have been developed. But if you've been through that, you already know that because it's easy to start, you quickly end up with a lot of bots. And then the question is: okay, but what is that bot for? So we understood pretty early in the process that the first step, if we were to design bots, was to ask ourselves what the purpose of that bot is. Now we have a clear view of the different types of purposes, and of course of whether bots are required or interesting in serving each purpose.

Like many other topics, it's key to experiment and practice to see if the intentions turn into reality, because sometimes we feel that there's a possibility somewhere, but it's very important to work in short cycles and small-loop experiments.

Referring back to what I said about it being easy to start: too many bots, too little differentiation. If you end up with 10 different tutor bots, that's probably a problem for our students. So we need to manage that in an efficient way, so that we don't get diluted in the number of bots.

Surprisingly or not, we also had some feedback from students who said, okay, bots are nice and they're great because we can ask questions, but on some occasions we would prefer the traditional indexed documents with a table of contents, so we could orient ourselves. I'm referring here to the academic writing skills bot that I started with.
We had some students, and not only a few of them, who said, okay, but we're happy with the usual document that we can go through from page 1 to page 100, or navigate to the topics we want to see. And the thing I want to conclude on with that bot experiment is that it's super important to be transparent with our students about when and if we use bots, because we're asking them to be transparent about their AI use, so that's the minimum. It also opens a discussion about reflexivity, reflexivity from our students but also from our staff, because I didn't share much about this in the presentation, but there has also been a lot of effort from our staff to identify and define bots for administrative or support-service purposes.

Okay, so these are a few things I could share about what we learned from those bots. And before I move to the second part, I'll hand over to Jayna.

Wonderful, thank you so much, Louis-David. That was super practical, full of evidence, and full of very clear insight into how these experiments went. So thank you so much for sharing that. I guess a few things I wanted to call out before we move into the second part.

One is really this culture of experimentation that ESCP has. I think one of the biggest risks I often see talking to schools is that they are sort of doing nothing. That's really, I think, the biggest competitor I see to ChatGPT; it's not one of the other AI tools. There's often a sort of waiting for a textbook to come out or for another school to pilot something, but what ESCP has done, and what you've shared, is create this culture of experimentation that allows anyone, you mentioned students, you mentioned postdoc researchers, to build their own tools and to actually engage in the process of doing that. And it's good to see that GPTs were a big part of that. You have a lot of questions for the Q&A later about GPTs, so we'll get into those. But it just sounds like you've got this open tinkerer culture, which is wonderful, because it's really only through running multiple experiments, actually engaging with these tools and using them, that you can develop any kind of thesis and make the decisions that eventually need to be made.

And then the second piece, which really struck me, was how you've actually reflected on the experiments and measured and evaluated things. I think another common pitfall I see with schools rolling out AI tools, rolling out ChatGPT Edu at the sort of scale that you're doing it, is a lack of clear success metrics or evals. It's really important to measure the results of some of these. It's very difficult with AI being a general-purpose technology, an infrastructure, where the ROI and the things to measure are so unclear.
But you've really forged a path in clearly identifying success metrics, and in not being afraid to tie those to broader things that ESCP is looking at: learning outcomes, student outcomes. So I think what has made this feel successful so far is that it has been really co-designed and co-built together. With that, we want to move now to the next part before we open up to Q&A, but please keep the questions coming; we've had some really fantastic ones. To bring it back to you, Louis-David: when we talk about making a decision like ESCP did, and the kind of work you've been doing to get this out equitably to the campus, I think there's nothing better than the view of someone on the front line like yourself talking about how you've begun scaling this up. So with that, I will hand back to you.

Thank you very much. I think you described very well the approach that we had, which leverages openness, experiments, transparency, and reflexivity. So maybe I'll use a bit of time now to give you the backstage story, without being too detailed, I hope, focusing on the things that for now appear to me as helpful in the process.

So it started like a Dead Poets Society thing, right? We started a few years ago, well, it seems ages, but a couple of years ago, with Howard and Erwan, and we said, okay, we see that thing and we're using it, so let's meet and have a discussion. And then we said, okay, it would be great if we could share that discussion with our colleagues. So we asked the Dean, and he was nice enough to offer us a stage and do a bit of promotion for the event. We had a lot of colleagues joining, and we spent a one-hour webinar sharing what we had experienced using generative AI in teaching, which was not, of course, limited to bots. Let's say it started that way, I would say in 2022, early 2023. Then, of course, it scaled up as time went by, but it started with a very, very limited group of people, and we grew bigger as the leadership of the school trusted us with moving and growing bigger.

After those first discussions, we were convinced, and the leadership of the school was convinced, that it was not about choosing a tool, it was not about a technology; it was really about a change management process that the school was entering. And I think it's quite original that we started with that mindset of, okay, it's not about the technology, it's about the change management approach, with all that comes with change management. So then we said, okay, in that sense, what are our priorities? Of course, we care about our students and we want them to develop the right skills.
Some of the right skills are technology related, but others are not, and the fact that this technology is appearing reinforces the need to develop those other types of skills. Of course, we want them to use the technology as leverage, not as a substitute. Another objective is to prepare our students to decide if and how they use that technology in the workplace, because they're going to be in management decision positions, and we want to prepare them to make the decisions of using it or not, and of using it in one way or another.

Of course, it also encompassed adapting our pedagogy, how we teach and what we teach. We entered into revising our curriculum, providing specific courses, but also deciding if and how we use that technology for teaching. The examples I shared with the bots are part of that work stream. And of course, this is something we won't cover much today, but we also have a lot of effort dedicated to identifying the ways we are, and will be, using generative AI in the research process, which is another full body of knowledge and experiments, but that's not the topic for today.

The other thing we decided early in the process was to say, okay, we don't know. This is an emerging technology. No one has the playbook. We can learn from others, but we also need to discover by ourselves. So a few of us convinced the leadership team, saying, okay, you need to give us a bit of time with a group of people to set up serious experiments, so that we can learn about what could be done, what is useful to do, and what problems happen. They were convinced by that approach, and at the beginning of the academic year 2024-25, we had a group of 1,000 champions with access to ChatGPT Edu. A lot of them were students, some of them were faculty, but don't be misled by the percentage here: faculty are 12% of the champions, but that represents 85% of the faculty body that has been provided a ChatGPT Edu license.

The other 15% were not provided one because they didn't want a ChatGPT Edu license, so of course we didn't force them. We of course started with a first version of a generative AI in business course with some students, and then we started keeping that Excel spreadsheet with one line per experiment, piloting more than 100 experiments that had been identified by profs, by programs, and by staff. So we started with that in, let's say, fall 24.

We also said, okay, we need experiments, we need to test and see, but to do this we need people, and by people I mean both profs and staff, who will pioneer the use of the technology in their classes. So similarly to identifying student champions, we also identified prof champions. We made a call in October 24 and said, hey, we're starting this experiment, who wants to be part of that champions network? The role would be to learn, to get better, to test things, and also to be a bit of a reference for their colleagues in their departments or on their campus, because you know that ESCP has several campuses in Europe: London, Madrid, Turin, Berlin, Paris, Warsaw.

And we were positively surprised by the wave of yeses that came in response to that offer to be a champion, and of course that was effectively a great way to scale, or at least to learn at a higher scale, because you know that the use cases are so different, the students are so different. We have bachelor students, and we have up to executive PhD students. You can imagine that those two types of audience don't share the same profile or the same education level.

So we wanted to have a lot of profs exposed to a variety of contexts, so that we would have better, fine-grained information about the impact of the technology in various contexts. I should add that the use cases and challenges are different if you're teaching math or if you're teaching psychology, and this was also one of the objectives we had in gathering that group of champions.

So we had that group of champions, and of course we made the effort to provide them with a lot of resources, so that they could learn themselves but also share information among themselves. We had a hub where we would share things, we developed several courses, including one specifically on using Generative AI in teaching, which has been very popular, and we also had a place where people could share their examples. So providing resources was one thing.

We also did a lot of training, both in-house with some of our internal experts and by bringing in experts from Harvard and other universities that have been using the tool, and of course that also explains how we could scale the technology faster than if we hadn't. Then we had that list of experiments, which was a bottom-up list, right? We asked people: what would you like to try? What would you like to test?

This technology is available, but what do you think it could be useful for in your own scope? So there was bottom-up experiment identification and piloting, but at the coordination level we also had some use cases we wanted to test, so it was a mix of bottom-up identification and centralized identification, because some use cases happen at the school level and not at the program level.

To be honest, we also had a great time, and it was fun. We had the idea of building a community of practice, with physical gatherings where we would meet and share examples, but of course, as you understood, since we are a diverse, multi-campus university, we also had to have online settings, so we developed very regular brown-bag meetings where people would come and share their examples, or their doubts and concerns, online, and then get feedback from the group.

One other key success factor in that process is Anne-Laure Auger, whom you see here in the picture, who was appointed to coordinate and support the project of speeding up Generative AI, because you can guess that organizing such a volume of students, courses, and professors is a bit of an organizational challenge. We've been lucky both to have the budget for this and to have that particular person supporting us in the process.

Okay, so what's next for us? Maybe I'll use a few more minutes and then it will be time to answer questions. We're ending that experiment, meaning that we are now analyzing its results. We have collected a lot of data: data from the students, data from the professors, quantitative data about how frequently each of them uses ChatGPT and how intense the engagement is with the courses, but also qualitative data from the professors about how useful the technology is or isn't in their pedagogical settings.

So we are now currently analyzing all of this to prepare the next step, which will be if and how we scale the use of the technology within the school.

So we are preparing for next year. In particular, we have developed a course that will be made available and compulsory for the entire student body, named Generative AI in Business and in My Studies, where students will be trained, of course, on the technology and the business use cases, but also on how to use that technology in the right way for their learning activities.

This is something that will come in a few weeks; we're ramping that course up in June. Then we have a series of tasks around updating our guidelines and our materials, so that they are adapted to this new technology embedded in the pedagogy. But there is also something we will concentrate some effort on, which is always hard for an incumbent: identifying more radical uses of Generative AI and developing fully Generative AI-native initiatives.

And much research has shown, of course, that this task is super hard for institutions such as ours, but our objective is to find some of these. The most important step we are about to face, though, is a change of framing. The frame we had over the last few months was using Generative AI in teaching. So we scratched our heads and ran experiments on the hows: how do you develop a bot?

How can you use Generative AI to revise a syllabus, and so on? But we now need to move the frame and pose the question slightly differently: how do we teach in a world where Generative AI is abundant? That is a slightly different question, because it calls not only for identifying how to use it, but also for identifying when not to use it.

And we are going to be making those kinds of decisions, not only prohibiting, but deliberately designing instances where it will not be possible for the students to use it, because we have concluded that on some occasions it is more detrimental than beneficial.

And that's a hard conversation, because you've probably seen it in your own classes: some students no longer read the documents we give them; they upload them and ask for a summary. They no longer listen to what the professor says, and I'm saying some of them, not all of them, but they're prompting live to see if they can access the knowledge without, let's say, making the effort.

So these are hard discussions to have within the faculty, but also with the students, and we need to design settings and instances that take Generative AI into account as something that exists, without reducing our expectations of what we want the students to learn.

The last thing we have on our map is to be more intentional and precise about when we use the technology to serve educational objectives. We all have a syllabus with learning goals and learning objectives, and we want to be much more intentional in connecting the technology with those goals, so that we can be a bit more mature about if and how we use it. So we're not there yet. It's a fascinating journey that we are on, and it's a mix of excitement, because it's about changing things and doing things differently, but it also comes with a real responsibility, because we clearly see that depending on how we deal with this technology, the outcomes for our students could go either way. That's something we are facing very directly.

So that's about it for what I wanted to share today, and I'm happy to answer the questions that popped in, Jayna.

Wonderful. Thank you, Louis-David. That was brilliant. I think everyone appreciated being taken right behind the scenes and picturing it with their own eyes, seeing the community of practice, seeing Anne-Laure, and every school needs an Anne-Laure, as I say regularly. So thank you for sharing that. I think it's a really interesting note you left us on, and ChatGPT Edu is actually itself not one year old yet. So the OpenAI team is also eagerly awaiting the results of, you know, what it actually means once we've done a full year, and I think no school has yet gone through a full academic year of deploying this. So with the things you shared, I wholeheartedly agree: it's not just about switching on tools. It's about building champions, building community, being very self-sufficient, trying out multiple tools, not just one. And now, pushing the boundaries of more radical use, I'm particularly excited to see how the rest of your year unfolds.

So with that, we have quite a few interesting questions for you, and I hope we'll be able to get through a few. Beginning with this one, which I think is very relevant to some of the later lessons you shared, really on the topic of future-proofing: quite a few people have asked how you are thinking about the rapid pace of model innovation, the fact that OpenAI and other companies are putting out new models, products, and features every week, sometimes every few days. How have you thought about designing your syllabus? What governance practices do you feel will help you upgrade quickly? What's the iterative process?

Well, yeah, that's a challenge, right? Because on a daily basis there is something announced, and it's super hard, when the announcement is made, at least for me, to identify whether it will become a major shift or is just a bit of noise. So I don't think we have the perfect answer to that question. The way we are approaching it is to expose ourselves as much as we can.

So we are testing a lot of things. We're having a lot of conversations with startups, solution providers, and of course model designers, to get a sense of what it's all about. And we're trying our best to keep our pillars solid, to base our next moves on a few solid things, and then to keep up with what's happening and see if we need to adjust. So it's a bit of agility, right? And unfortunately, I can't say much more about solving that problem. It's a hard problem, because you guys don't make our life easy, right? Every week there is a group of announcements, and we need to take that into account. But to be fair, a lot of the announcements don't change the approach dramatically. They add things, but they don't change much. For example, your own products like Canvas or Projects are great, but they probably don't change the impact on education much. So this is how we deal with it: monitoring, and staying solid on our pillars.

Yeah, I think that's a great answer. I also have trouble keeping up with how much OpenAI releases, and I think agility, a change-management mindset, is how I've seen schools absorb and adapt to this best. And really, the role I see model providers playing here is giving that early line of sight into where the technology is going, so it doesn't feel so reactive. Hopefully, as you start writing guidelines and people continue to experiment, you have that kind of front-row seat. Wonderful. Well, a few other questions to hit you with, maybe one that feels a little personal: how do you see the role of professors evolving as you institutionalize the adoption of generative AI? What does it mean to be a teacher anymore?

Wow, just a small question. Small questions. Our community is engaged. Of course, that question comes up pretty early when you experiment with the technology: what's our role? What I feel is that it's a mix of keeping things as they are and changing things, which seems a bit paradoxical. As we observe students behaving and interacting with the technology, it's clear that whatever the technology, they still need mentorship, they still need confrontation, they still need to face knowledge, and they still need to create knowledge by themselves, according to their level of study. So a lot of things don't change much about the role. But there is a but, right? The but is that the technology is here. So we can't pretend it's not, and we can't continue acting as we were, because some of our practices are no longer possible. The way I see it, it forces us to go back to the roots of what a professor is and what a professor is meant for, and then to retro-engineer and see, okay, where does the technology fit to serve that purpose? And sometimes it's about making hard decisions. For example, I teach a digital transformation class, and in 75% of the classes I ask my students to perform activities with pen and paper. Of course, that's a hard take, right? Students engaged in digital transformation naturally want to engage with the technology. But I concluded that on some occasions it's better to have them thinking with pen and paper. So this is maybe the way I would put it: go back to the roots of what teaching is, see where the technology fits, and see where you should prohibit the technology because it wouldn't serve what you're aiming for with the students.

Yeah, I think it's great that you have time for that reflection and that you're actively practicing it, because there must be so many moments where you're confronted with that, not just with AI, but with all sorts of ways the world is changing around us, and the role of ESCP in that. It's one of the reasons I particularly love working with European institutions: seeing these centuries-old institutions that have unfolded slowly, thinking about what they originally stood for and what that means now.

Okay, I think we have time for one or two more questions. I had an interesting one around how ESCP partners with other universities. I think this is really about what we're doing here, building this community: OpenAI, ESCP, and many other faculty and researchers coming together. How does ESCP play a role connecting with tech companies and research institutes? What are some ways you're exchanging good ideas and best practices to advance the community as a whole?

Yeah, so of course, some conversations like this one contribute to this. A lot of our professors are involved in different professional networks where those conversations happen, and a lot of them are engaging with their colleagues and peers. We haven't yet organized something to spearhead a community-driven approach in Europe; that's something we contemplate, but it would add to many other things, so for the moment we haven't committed to it. And of course, we have a lot of conversations with solution providers and fellow schools in Europe, but also in the US. We're starting to have conversations with Asia, but it's still one-on-one, right? It's not there yet, so there's still room. And if some people in the room have ideas for a broader conversation, for making everyone, let's say, more mature on the topic, we'll be happy to contribute.

Wonderful. Open invitation then. I like it. Okay, very last question, and then I'll invite you to give some very last reflections before people can meet each other. For this one, don't feel the need to be polite just because I'm here. A lot of people asked questions about ChatGPT Edu, licenses, custom GPTs, and how you access the plan. But I thought the most interesting one for you was this: in this journey, and it sounds like there have been several milestones along the way that led you to these decisions, what were the specific ways that OpenAI was able to support the change management?

Well, in many ways, and I should say that was also part of the reason why we decided to have this conversation together, because we felt it was more than providing a license; your contribution could be composed of many other things, right?

One thing we benefited from was very tangible resources. At that time, a year ago, there was less content produced on the ChatGPT education newsletter, but you were nice enough to give us videos and tutorials for using the technology, so that's one thing. Then there's experience: because you were talking with other schools and universities, we leveraged your experience. And we had Pierre-Edouard Leib, a customer success manager, who joined several of our meetings with students and professors to demo. So we got very explicit support in terms of content, but also in terms of mentoring, coaching, and information, and that's been great, of course.

Okay, you spared me the areas for growth there, but I'm glad to hear that. I think that it's felt very much two-way. We've learned a lot from ESCP, and I'm really looking forward to seeing where the next phase leads us.

So we are at the hour. Louis-David, do you want to share any final words or reflections for the audience before I lead us into our one-to-one matching?

No, well, I have one sentence that comes to mind for preparing next year: going beyond the technology. Which doesn't mean setting the technology aside or not using it; it means going beyond it, right? The technology is here, but the problems, the questions, the issues are not really technology-related. So this is maybe what I could conclude with: it's important to understand the technology, but the real job is beyond the technology.

Wonderful. Thank you for sharing that last thought. Thank you so much for being with us and sharing your reflections, your practical insights, and your experience. Thank you, merci, grazie, to the whole very international, diverse audience that has joined us today, from all the countries you've joined us from.

And we hope this has been interesting. Look out for many more sessions focused on education. We are doing and building a lot here. It's really just begun. With that, I wanted to share that at this point, we'd love you to meet other attendees through what we call one-to-one matching.

So you'll see a match tab on the left, and when you're ready, you can click join match. That will pair you with someone at random for 10 minutes, after you've configured your camera and microphone. And each of you has to approve the match, kind of like a dating app. So we'll leave you to do that.

Please do continue to join future forum events. There'll be more to share there soon. Thank you to Natalie, Caitlin, Noah, and the whole team for helping us put this on. And with that, have a wonderful rest of your day. Thank you.
