Teaching with AI: Faculty Stories from Maryland Writing & Engineering
SPEAKERS


Pam Orel is a teacher, writer, and former editor with a background in online, blended, and in-person course formats. In the past year, she rebuilt her Business Writing course to better align with university AI goals and students' growing needs.
This included a chatbot that offers more convenient access to course information and announcements for students. She is an advisor to a local student group and also serves on the English department's Working Group on AI. She has mentored roughly a dozen faculty members at the university. Her research interests include teaching with technology, affordability in higher education and supporting students with differing learning styles.
She has served in several leadership roles, including as a panelist for the Office of Student Conduct, where she has sat on over 100 panels hearing cases related to academic integrity over more than a decade. Prior to joining Maryland, Pam held teaching and administrative roles at a New Jersey Big Ten institution. She joined Maryland as a lecturer in 2012, was named Senior Lecturer (2016-2021), and was awarded her current Principal Lecturer rank in 2021.

Michel Cukier is a professor of mechanical engineering at the University of Maryland and also heads up the ACES program, the Advanced Cybersecurity Experience for Students. Before joining the University of Maryland in 2001, Michel spent time as a researcher at the University of Illinois at Urbana-Champaign. His work focuses on dependability and security in computing, and he has published over 90 papers in top journals and conferences. Michel earned his PhD in computer science from the National Polytechnic Institute of Toulouse in France,
and he brings some serious depth to both teaching and research in the cybersecurity world.
SUMMARY
The OpenAI Forum session featured two University of Maryland educators—Michel Cukier and Pam Orel—sharing real-world examples of integrating AI, especially ChatGPT, into their teaching. The conversation showcased practical applications, highlighted student engagement and learning improvements, and pointed to a larger vision of AI-enabled personalized education. OpenAI’s Kirk Gulezian framed the discussion within a broader mission to enhance, not replace, education, aligned with democratizing AI benefits and strengthening national competitiveness through innovation in education.
TRANSCRIPT
Hey, everyone. Thanks so much for joining tonight's session and hope you're all having a great Wednesday. I'm calling in from OpenAI's New York City office right now, but I know we have a presence around the world tonight, so we're super grateful for your time and enthusiasm for the event. We'll get it kicked off since we're a few minutes after the hour here. Pull up my slides. Awesome, let's dive in. My name is Kirk Gulezian. I'm part of the education team here at OpenAI, and I'm so excited to welcome you to this very special event of the OpenAI Forum.
Tonight's session is all about celebrating the real, creative ways that educators are bringing AI into the classroom. We hope that you'll walk away from this feeling energized, curious, and hopefully with a few ideas that you'll want to try out for yourself. We've invited two incredible professors from the University of Maryland to share how they're actually using ChatGPT in their teaching. These are short TED-style talks, and they'll be packed with reflections on what's worked, what's been surprising, and what they've learned along the way. Tonight will be minimal fluff, just the real experiences.
At OpenAI, and we mean this truly, we believe tools like ChatGPT are here not to replace the magic of teaching, but to support it. This is about freeing up time for more connection, deeper conversations, and more active learning for your students. It's about helping educators do more of what they love, with AI supporting them along the way.
Before we really get going, I just want to give you a little context for tonight's conversation. At OpenAI, we launched ChatGPT EDU. It's a version of ChatGPT that's specifically built for universities and other educational institutions with the goal of helping students, faculty, staff work a little smarter, write a little bit better, and just get more done in their day to day. ChatGPT EDU includes access to our most advanced models, and it also comes with some pretty powerful tools like coding, data analysis, file uploads, the ability to understand and reason through images. All of this lives inside a really secure institution-managed workspace, and so schools can really roll it out with the right oversight in place.
But more than anything, what's been most exciting, especially for someone on the education team, is just seeing how educators are really starting to use this as a foundation to shape what AI can and should look like in education, which brings us to our speakers tonight.
First up, I'm excited to introduce Michel Cukier. He's a professor of mechanical engineering at the University of Maryland and also heads up the ACES program. That's their Advanced Cybersecurity Experience for Students. Before joining the University of Maryland back in 2001, Michel spent time as a researcher at the University of Illinois at Urbana-Champaign. His work focuses on dependability and security in computing, and he's published over 90 papers in top journals and conferences. Michel earned his PhD in computer science from the National Polytechnic Institute of Toulouse in France,
and he brings some serious depth to both teaching and research in the cybersecurity world.
And I'd love to introduce you to Pam Orel, our second speaker. Pam is a teacher, a writer, a former editor with years of experience across online, blended, and in-person classrooms. Over the past year specifically, she's done a full overhaul of her business writing course to better align with the university's AI goals and what her students actually need, including building a chatbot that helps them easily access course info and announcements. She's also super involved outside the classroom, advising a student group, serving on the English department's working group on AI, and mentoring over a dozen faculty as they explore AI in their own teaching. Pam is particularly interested in inclusive learning, affordability, and finding ways to use tech to genuinely improve education. Pam has played a leadership role on more than 100 academic integrity panels, and she has Big Ten roots, starting out in New Jersey and now at the University of Maryland, where she's been a lecturer since 2012, and now holds the title of principal lecturer.
So for tonight's event, each professor will present for roughly 15 minutes (we won't be holding up a clock), sharing their unique approach to integrating AI in their classrooms: what's inspired them, what they've learned, and what's actually working for their students. After the talks, we'll transition to a live Q&A session, where you'll have a chance to connect directly with our speakers.
So as you listen, feel free to jot down questions that come to mind or share them in the chat, whether it's about the tools they're using, how students are responding to them, how they address academic integrity, or even just how to bring AI into your own course design. One quick note: only registered Forum members will be able to join the Q&A after the presentations today. This will be a chance to ask about real-world use cases, course planning, and broader questions around AI policy and pedagogy. We're really looking forward to a thoughtful and engaging discussion.
And so I'll now pass it over to Michel Cukier. Thank you.
Hello, everyone. My name is Michel Cukier, and I'd like to spend a few minutes giving you some background on how we applied AI to one of the very big undergraduate statistics courses at the University of Maryland. Just to introduce who I am: I'm a faculty member in mechanical engineering. I teach several courses there, especially one on statistics. At the same time, my research is on cybersecurity, and I'm the director of the first honors program in cybersecurity. The picture that you see there is of the first cohort of students in 2012.
To give you a quick overview of the University of Maryland, we are really inside the Beltway, which means we are very close to DC. So there's a metro stop next to the university. We are really strategically located where you can see on this slide, a lot of the federal agencies and labs all around the University of Maryland.
To give you a brief overview of the Clark School of Engineering: at the graduate level, we are ranked ninth among public schools, and at the undergraduate level, we are ranked twelfth. Research expenditures are about $141 million; each faculty member in the Clark School brings in around $760,000. Because of our location, we are the top recipient of NIST research funding. And, less well known, again because of the location and how close NSA is, we have three labs that are, depending on how you describe it, on campus or across the road.
We have several CAREER awards and amazing researchers on campus. Under the leadership at that point of Dean Pines, who is now President Pines, many student and faculty organizations got involved in competitions, and some decided to start companies and startups. So there is a very strong entrepreneurial approach in engineering.
So this is the outline for tonight's talk. I want to give you an overview of the statistics class where we applied ChatGPT, how we applied it, where we are, what we are planning to do, and where we hope to be in a few years.
So as you know, it's usually very complicated to teach statistics and make it an exciting class. This is a required three-credit course in mechanical engineering. Most of the classes taught in mechanical engineering are really hands-on and project-based, so this is the first one where students face something more abstract, with no direct project. That's why we were so interested in including AI. It's around 170 to 240 students per semester with at least two sections, and the topics covered are the typical ones: probability, discrete and continuous distributions, confidence and prediction intervals, hypothesis testing, ANOVA, regression, and design of experiments.
The class is based on a lot of assignments, plus two midterms: basically four weeks of lectures, a midterm, four more weeks of lectures, a second midterm, and then a final exam, with weekly homework assignments, some on paper and some using PrairieLearn. I'll go over what PrairieLearn is in a few slides.
In the fall of 2024, we started using an AI agent that we trained on the syllabus and the presentation slides, but also on all the recordings. The class is streamed at several locations, so we have tech support, and all the classes are recorded; the transcripts are then ingested and used by the agent to answer students' questions. The first feedback we had was that, even though this is an engineering class, some students were hesitant about using the agent. To help motivate them, in fall 2024 we gave a quiz just about the agent, which made it mandatory for students to actually use it. Then this semester, we decided to go one step further and have students practice for the midterm. As homework, we told them to create exam questions from set templates, have those questions solved by ChatGPT, then identify the incorrect parts of the solutions and upload that as the assignment.
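The setup Michel describes, ingesting the syllabus, slides, and lecture transcripts so an agent can answer course questions, is broadly a retrieval-augmented pattern. Here is a minimal, illustrative sketch in Python; the chunk size, function names, and simple word-overlap scoring are assumptions for demonstration, not UMD's actual implementation (a production agent would typically use embedding-based retrieval):

```python
# Minimal retrieval sketch: split lecture transcripts into chunks,
# then retrieve the most relevant chunks for a student question.
# The word-overlap score below is a stand-in for real embeddings.

def chunk_transcript(text: str, chunk_size: int = 50) -> list[str]:
    """Split a transcript into chunks of roughly `chunk_size` words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]
```

The retrieved chunks would then be passed to the model as grounding context alongside the student's question, so answers stay anchored to what was actually said in lecture.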
Some of the things you see here, and these are just quotes from some of the students, show that they communicate with the agent as if they were talking to someone, one of their buddies. This is really interesting: you catch some very familiar language, with no sense of how to frame the question to optimize the answer, how to get a feedback loop going, or how to ask for additional information. These are just some examples of what students submitted. What we also did, over several weeks, from week 6 to week 14 in fall 2024, was track how many students asked how many questions. What you see in yellow is how many students did not ask any question, and you'll see a significant increase in week 14. That's just because week 14 was Thanksgiving, so you have to discount that week. The interesting thing is that there is no huge adoption trend; it just varies. What you also need to understand is that this doesn't mean students are not using AI. They might have been using some other AI tool for which we have no record.
But one thing to see is that the University of Maryland recorded how many queries were submitted per class, and the statistics class is number one at the University of Maryland among all classes, with over 16,000 queries. That's just for fall 2024 and spring 2025, with, again, between 170 and 240 students per semester. I'm also teaching the fourth-ranked one, HACS 100, which is about UNIX and ethics in the cybersecurity honors program, and there we have over 6,000 queries.
One of the things we used ChatGPT for was to assess what kind of answers we would get on some of the assignments. As Kirk explained, we have access to different models, so we uploaded assignment questions to different models and tried to judge the quality of the answer depending on the model. One thing that was really interesting: we make it very clear, for example, that we want students to be able to distinguish between a prediction interval and a confidence interval, and this is something we found ChatGPT really struggles with, often answering a prediction interval question as if it were a confidence interval. So one of our big takeaways is that we really need to carefully review the solutions before we green-light them and share them with students.
Another thing we have been doing is using a tool called PrairieLearn. PrairieLearn is an online assessment and learning system where the idea is for students to really master the content. You create a question, usually in an engineering context, where you might randomize some values or randomize part of the question, and you allow the student a certain number of attempts at it, usually around three.
So students have three attempts at the assignment with the numbers we provided, and if a student doesn't get it correct, then another version of the assignment, with changes to the question or to some of the variable values, is provided, and they again have three attempts to solve that problem.
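The mechanic described above, randomized parameter values plus a fixed number of attempts and a fresh variant after a miss, can be sketched roughly as follows. This is a toy stand-in, not PrairieLearn's actual API; real PrairieLearn questions pair an HTML template with a server-side parameter generator, and the function names here are illustrative:

```python
import random

def generate_variant(seed=None):
    """Generate one randomized variant of a simple probability question."""
    rng = random.Random(seed)
    p = round(rng.uniform(0.1, 0.9), 2)   # randomized parameter value
    n = rng.randint(2, 5)                 # randomized trial count
    return {
        "prompt": (f"Independent trials succeed with probability {p}. "
                   f"What is the probability that all {n} succeed?"),
        "answer": round(p ** n, 4),
    }

def grade_attempts(variant, answers, tol=1e-3):
    """Check up to three attempts against a single variant's answer.
    If all three miss, the caller issues a fresh variant and repeats."""
    return any(abs(a - variant["answer"]) < tol for a in answers[:3])
```

Seeding the generator makes a variant reproducible for grading, while unseeded calls produce the fresh numbers a student sees after exhausting their attempts.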
What we would like to do with ChatGPT is, first, help create new assignments. For example, we have a library of questions about conditional probabilities, so it would be great to have ChatGPT create a new assignment, and from that new assignment then help generate the code. We did a few attempts at this: we explained how PrairieLearn code is structured, what the files are, what the rules are, and so on.
The goal is to get that automated, with the randomization handled automatically as well, and then afterwards to have some validation to ensure that the code is correct and the PrairieLearn assignment is correctly programmed. This is a heavy lift: when we create new PrairieLearn assignments, TAs can spend much of the semester building them from one week to the next. Support from AI would really allow us to increase the bank of questions, which could then be shared with the many institutions using PrairieLearn, exposing students to many more examples and assignments in statistics.
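The validation step mentioned here, checking that generated question code is actually correct before it enters the question bank, could start with something as simple as cross-checking the coded answer against an independent Monte Carlo simulation. A sketch under assumptions (the function names and the deliberately simple conditional-probability example are illustrative, not the team's actual pipeline):

```python
import random

def coded_answer(p_a, p_b_given_a):
    """The answer as computed by the generated question code: P(A and B)."""
    return p_a * p_b_given_a

def simulated_answer(p_a, p_b_given_a, trials=200_000, seed=1):
    """Independent Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.random() < p_a and rng.random() < p_b_given_a)
    return hits / trials

def validate(p_a, p_b_given_a, tol=0.01):
    """Flag a generated question when code and simulation disagree."""
    return abs(coded_answer(p_a, p_b_given_a)
               - simulated_answer(p_a, p_b_given_a)) < tol
```

The point of the design is redundancy: the simulation knows nothing about the formula in the generated code, so agreement between the two is evidence the code is right, and disagreement flags the question for human review.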
And here's the long-term vision we have. We discovered, even in these large classes, that many students have ADS accommodations. This is probably due to COVID; post-COVID, about 10% of students need ADS accommodations. What we would like to do is figure out how AI can help us create a learning experience specific to each student.
Some students might tell us what level of abstraction they would like to see: whether they like theory or don't want anything theoretical, how much text they want compared to images or even sound, how many examples they want, whether they want to understand by building examples one by one, step by step, or whether they want something more conceptual. With all of that, hopefully soon we'll be able to use ChatGPT to create a learning environment and a learning experience specific to each student, based on that student's needs. So thanks for listening.
I now would like to hand it over to Pam. Good evening, everyone. Thank you so much for joining me. My name is Pam Orel. I'm a principal lecturer at the University of Maryland here in College Park, and I am a writing teacher, which is an interesting vantage point on AI. It's not an environment where writing faculty have traditionally had the leadership role (the STEM community has), but I'm certainly seeing more and more interest in other departments.
So let's get started with a little bit about me; just grab the next slide real quick. I teach technical writing, I teach business writing, and I have also taught scientific writing in the past. My research interests are teaching with technology and diverse learning styles, which ties into Michel's presentation on customizing for the needs of different learners in the community. And I also, as I said, have some committee service and some other things that I'm interested in doing.
I also teach in a program within the English department. It includes academic writing, a popular course that many of our first-year students take. There's also a tutoring program within the writing program, and there's professional writing, which is sort of my home away from home. The professional writing program offers 19 different courses and has an academic minor in professional writing.
The various professional writing courses each focus on different areas, but all are built on a practical mix of assignments, original research, and, in general, the kind of writing and communication students will need when they transition into the workforce, or, if they're already working, when they need to sharpen their skills.
So that's a little bit about the program. One thing I was excited about for this space: this is a 300-level course. Mine is online and asynchronous, but it's offered in all different formats. Again, it's practical. Our particular business course is sort of a refresher on memos, proposals, and letters. For some people it's not totally a refresher; some do need a skills refresh there. But the main goal of a course like this is logic and persuasion, and it's helpful to many students because they will often have taken writing courses in different disciplines, with different frames of view and different worldviews, and they need a more business-focused approach to messages, to memos, to communication inside and outside the workplace, and to working with clients.
My AI journey began in late 2022, early 2023, when the tool was first announced, and it was a journey of hesitation. I think we all went through this journey, though maybe we don't honor it, or notice it, or always come back to it. But our journey is important, even though it tends to start in that very uncertain place: is this the right thing? Am I in the right place? Students were concerned.
I started hearing concerns from students who felt that the rules varied a lot, and this is probably true in every community, for those of you in higher ed. There were AI-focused courses and courses where AI was not allowed, and there was potential for accusations relating to overuse of the technology, or to using it when you were directed not to. Those accusations can create real harm, and at some schools they actually wind up on your transcript, at least for a period of time.
As time went on, faculty resources evolved, and that has been a tremendous help; this is true in many schools. Faculty had to learn the tools before students could really get the resources they needed. I am part of that journey, but I'm also part of a community on that same journey. So I'll speak a little about the journey from the student perspective, and then transition to the journey with colleagues.
Again, it started with wait-and-see, and then it continued with wait-and-see. We're looking at the growth here, and I talk a lot to my students; I ask them a lot of questions about these things, and there was a lot of confusion.
I had an AI policy, but I didn't have a lot of assignment-level guidance. Teaching tools were out there; I started taking some of those classes and workshops, and I made a minor, somewhat half-hearted attempt at an AI-enabled assignment early in the spring. It was one assignment, and it had to do with letter writing, which is often a task where AI has some strengths. But by the end of last spring, I realized that the course I had just didn't work anymore. It wasn't focused deeply enough on the tools themselves, it wasn't engaging students at the real pain points at the individual assignment level, and it was time to redesign it.
So I involved our teaching center, which, as I said, is a huge component of success at any school, certainly at ours. A huge part of that transition was their involvement: mine to get it started, theirs to provide wise guidance. By last fall, the fall of 2024, all of a sudden it was ready to launch, and I said, well, this is wonderful, we'll get it started. Everything seemed so wonderful, and I wasn't done yet.
Through a simple conversation with a person in an AI learning community I belonged to at the university, I found out that chatbots were being made available. They were getting a lot of interest in some departments and less in others, and I said, gee, can I have one? I expected very little to come of it, because everything is a list, everything is priorities in a big organization. But lo and behold, the IT team got back to me right away, and I had a chatbot, which I packed and loaded with assignments, readings, and other things, and got it started in October of last semester.
It wasn't ready the first day of school because it wasn't offered to me until mid-September, but it did launch. As I was packing it, getting it ready, and interacting with it, because I wanted to make sure it wasn't messed up, one of the things I kept noticing was that it really did a good job of explaining things. Now, you'd think, AI, that's a wonderful thing, AI should be explaining things. But as someone who has worked with people with diverse learning styles, I understood it was doing something students were afraid to do in class: ask a lot of questions, say, hey, I don't understand this, could you try it again?
So in the spring it launched all over again, basically the same or a similar course, and that's where we are right now. And I found from that initial start, again, we're talking two semesters, that faculty and students may share a lot of the same concerns around AI tools. Do they work? Do they give you good information? Do they give you bias? Do they mishandle your private information? That's not a small thing in a business setting, because business information has to be handled with a level of privacy and integrity. What are the ethical implications? What about environmental and energy challenges? How is this going to affect my job when I move forward in career X? All of those questions come to my attention not only from students, but also from colleagues teaching in the same writing program.
For me, the instructor-side challenges were timeframes. The redesign takes time; no matter how you do it, it takes time. It means you have to learn the tools so that you have a comfort level to share with your students. I often emphasize this when I talk to colleagues: your comfort level is going to become your students' comfort level. Where AI has an advantage, those assignments have to be reconfigured, and in some cases just updated, to be better suited to the need.
Probably the simplest example I can share is "compare Smith's version of this to Jones's version of that." An AI tool does comparisons pretty well. But when you recast it, Smith says this, Jones says that, where would you stand? Where do you see the value, or do you see value at all? That forces the student to say, "I really think..." Because AI tools don't do that as well as students will do it themselves.
Although many of my students will now send their material to an AI tool just for routine checks of various kinds. It is a learning curve, and you're going to be on it for a while. I always tell my faculty friends: I knew AI as of last Friday, but now it's Wednesday and I probably don't know that much anymore. It changes that quickly. And there are lots of resources; a bewildering number of tools, technologies, and opportunities are out there. Again, your challenge is to figure out which ones work for you.
My new approach has a couple of key components that make it stand out a little. There's information on appropriate AI use at different entry points: there's a policy, but there's also assignment-level guidance, explaining why a given AI tool is going to help you or not, and the policy itself is more detailed. When I talk to students, I correct the assumption that AI is just a quick fix. Typically, it's not.
The basic changes: the former course was a little short on assignment-level guidance and on how to handle overuse situations. The new one is more detailed about overuse, about what happens if you used the tool when you weren't supposed to, or if it wasn't a good choice and it made a mess. There is advice on each assignment: try this, don't try that; if you mess up with this, do that. It really encourages a dialogue.
And I think for those of you in instructional roles, that's really the one critical piece of this: you want a dialogue with the students when things go right and when they go wrong. We're still evolving. My overuse rate, which I loosely define as the rate at which AI use misses the mark entirely, probably because the tool wasn't a good fit for the purpose, I had estimated at about 40%. The actual number for the fall semester was about 5%. Student peer reviews changed, too. They now focus more on logic, logistics, stakeholders, and variables. I no longer see peer reviews that talk about sentence structure and typos, and when I do see problems, I tell the student: did you check this? This letter still has a placeholder field in it; you should put your real email in there, or some facsimile of one.
A virtual assistant can show a student whether a prompt was followed. It can also read out summaries: it can summarize my class notes, it can summarize a lot of things, and that helps my students who have different learning styles. We have an interdisciplinary institute at the University of Maryland; a number of people from different units in the humanities, the social sciences, and the STEM community are on it, including a few people from the English department.
As for AI faculty learning communities: when I first started, they were mostly STEM faculty. Now they include multiple disciplines as well as staff, which is a wonderful feeling. Staff in our community often look to tools like this to expedite processes and save time. And colleagues, when you mentor them, may feel excited, or overlooked, or unheard, or uncomfortable, or unfamiliar. All of those are important feelings, but they're not fun feelings to have.
When I talk to colleagues in a supportive or mentoring role: instructors or TAs may rely on AI detectors, which are in general unreliable, or not approved in many cases; each institution is different. Those false positives can lead to unfair accusations against students. On academic integrity: I've served on a particular panel that handles academic integrity cases, but the process is specific to the university involved, and it really should be followed consistently. Be particularly wary of situations where you follow a process, and the university follows a process, but there is a divergence at some point, because universities get very sensitive about those things. Almost all of them do.
I always say: listen to your own concerns, listen to your own heart, listen to colleagues' concerns. I encourage people to be open about the time it takes to enforce any restrictions or bans, and when you find those things happening anyway, it often puts people at a real crossroads. For many of us, the staged, slow approach, where you change one or two assignments or open up the dialogue for one or two discussions, produces really productive times, because you're learning from your community, from the people you're mentoring, and from your students. That gives you a level of familiarity that helps you grow and become a better teacher, in addition to helping the course grow.
Thank you so much for listening. I'm going to turn it back over to Kirk and we'll get on to the next phase. I deeply appreciate your time.
Awesome. Thank you so much, Pam and Michel, for sharing your time, your expertise, and your candid reflections with us tonight. It is not easy to be at the front of the curve when it comes to educational technology, especially with AI, and it's clear that both of you are helping to chart that path with real thoughtfulness and a lot of care. What we heard tonight is a reminder of something very important: we are at the very beginning of understanding how AI can support teaching and learning. This moment is new, it's still evolving, and its impact will be shaped by educators, by the conversations you're having with your students, with your colleagues, and with each other. That's what tonight was about. We feel this is not about replacing teaching, it's about enhancing it: freeing up time for feedback, for exploration, for meaningful interaction; helping students think more critically, write more clearly, as Pam talked about, and engage more deeply, as Michel talked about. We believe that with the right mindset, AI can be a really powerful ally in the mission of education. Thank you everyone so much for joining tonight. We really appreciate the time.