
Future of AI - From Korea to the World

# South Korea
# AI Education
# AI Economics
# AI Research

Speakers

Jason Kwon
Chief Strategy Officer @ OpenAI

Jason Kwon is the Chief Strategy Officer at OpenAI, overseeing policy, legal, and social impact research teams. Previously, he was the General Counsel of Y Combinator Continuity, Assistant General Counsel of Khosla Ventures and an associate attorney at Goodwin Procter. He was a software engineer and product manager before practicing law. Jason has a JD from UC Berkeley Law and a BA from Georgetown University.

Joanne Jang
GM @ OpenAI Labs
Hyeonwoo Noh
Member of Technical Staff @ OpenAI
Wonbae Park
GTM SE @ OpenAI
Ronnie Chatterji
Chief Economist @ OpenAI

Aaron “Ronnie” Chatterji, Ph.D., is OpenAI’s first Chief Economist. He is also the Mark Burgess & Lisa Benson-Burgess Distinguished Professor at Duke University, working at the intersection of academia, policy, and business. He served in the Biden Administration as White House CHIPS coordinator and Acting Deputy Director of the National Economic Council, shaping industrial policy, manufacturing, and supply chains. Before that, he was Chief Economist at the Department of Commerce and a Senior Economist at the White House Council of Economic Advisers. He is on leave as a Research Associate at the National Bureau of Economic Research and previously taught at Harvard Business School. Earlier in his career, he worked at Goldman Sachs and was a term member of the Council on Foreign Relations. Chatterji holds a Ph.D. from UC Berkeley and a B.A. in Economics from Cornell University.

Raghav Gupta
Head of Education, India & Asia Pacific @ OpenAI
Eunsoo Lee
Asst. Professor & Director, Center for AI Digital Humanities @ Seoul National University (SNU)

Eunsoo Lee is an Assistant Professor of Philosophy at Seoul National University (SNU), where he also serves as the Director of the Center for AI Digital Humanities at the SNU AI Research Institute. He holds a B.S. in Mathematics and an M.A. in Classics from SNU, followed by a Ph.D. in Classics from Stanford University. His research into the historical development of scientific knowledge informs his work on shaping the future of humanistic inquiry and innovating higher education for the AI era through his 'Meta-Humanities' lab. He also serves on advisory committees for Korea's Ministry of Science and ICT and Ministry of Trade, Industry and Energy.

Gahgene Gweon
Professor @ Seoul National University

Gahgene Gweon is a professor at Seoul National University. She directs the Cognitive Computing Lab (http://cclab.snu.ac.kr), where her research focuses on understanding human cognitive processes and developing intelligent systems that can support human cognition, learning, and behavior. To achieve this goal, she integrates advanced AI technologies such as natural language processing, deep learning, and data-driven modeling to better understand and augment human cognition and behavior. Professor Gweon received her Ph.D. in Human–Computer Interaction from Carnegie Mellon University and a dual bachelor’s degree in Computer Science and Economics from the University of California, Berkeley. Before joining SNU, she served as an Assistant Professor at KAIST. Her interdisciplinary work has been published in leading venues across HCI, learning analytics, and educational technology, advancing human-centered approaches to AI-enhanced learning and communication.

Gunhee Kim
Professor @ Seoul National University

Gunhee Kim has been a professor in the Department of Computer Science and Engineering and the School of Transdisciplinary Innovations at Seoul National University since 2015. He was a postdoctoral researcher at Disney Research for one and a half years. He received his PhD in 2013 under the supervision of Eric P. Xing in the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD in 2009, he earned a master's degree at the Robotics Institute, CMU. His research interests are solving computer vision, audio, and natural language problems that emerge from big multimodal data shared online. He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award, the 2015 Naver New Faculty Award, the Best Full Paper Runner-up at ACM VRST 2019, an Outstanding Paper Award at EMNLP 2023, and a SAC Award at NAACL 2025.

Sungroh Yoon
Professor of ECE and AI @ Seoul National University

Sungroh Yoon is a Professor of Electrical & Computer Engineering and Artificial Intelligence at Seoul National University (SNU), where he served as Associate Dean of Student Affairs at the College of Engineering from 2019 to 2021. He is leading the SNU Data Science and AI Laboratory (DSAIL). His recent research areas include generative AI, agentic AI, physical AI, and artificial general intelligence (AGI). Prof. Yoon received his B.S. degree from SNU and his M.S. and Ph.D. degrees from Stanford University, USA. Prior to joining SNU, he was an Assistant Professor at Korea University and also held a Senior Engineer position at Intel Corporation, USA. From 2020 to 2022, he served as Chairperson of the Presidential Committee on the Fourth Industrial Revolution. Professor Yoon is a Member of the National Academy of Engineering of Korea.


SUMMARY

The Seoul National University–OpenAI Joint Symposium underscored Korea’s ambition to become a global AI leader, with SNU and OpenAI presenting their partnership as a milestone for responsible AI development. Speakers from government, academia, and OpenAI emphasized AI’s transformative potential for education, economic growth, and societal progress, while also highlighting the importance of democratic values, infrastructure investment, and equitable access to AI’s benefits.

Timestamps
33:32 - Hyeonwoo Noh
43:07 - Joanne Jang
52:32 - Wonbae Park (Demo 1)
1:02:09 - Prof. Sungroh Yoon
1:19:12 - Prof. Gunhee Kim
1:32:55 - Ronnie Chatterji
1:47:17 - Raghav Gupta
1:56:02 - Wonbae Park (Demo 2)
2:05:40 - Prof. Gahgene Gweon
2:19:46 - Prof. Eunsoo Lee


TRANSCRIPT

Good morning, ladies and gentlemen. My name is Taegyun Kim. We are now about to start, so please be seated and then ready to go.

Once again, my name is Taegyun Kim. I'm the Vice President of International Affairs at Seoul National University.

I'm deeply honored to serve as your emcee this morning for the SNU–OpenAI Joint Symposium. We gather today in the midst of an increasingly competitive race toward artificial general intelligence.

As Asia's leading institution, Seoul National University has chosen to step forward with confidence and the determination to lead from the front.

So with OpenAI, as you know, it's one of the world's leading companies in artificial intelligence, we are going to be embarking on a new journey of innovation that will reshape the future of education, research, and society as a whole.

I truly believe that today is a historic occasion. This symposium marks OpenAI's first collaboration with an academic institution in South Korea.

Earlier this morning, we concluded the signing of a Memorandum of Understanding (MOU) with OpenAI, a gesture signifying that we will collaborate to deepen and build our partnership.

All right. I have also been given a mission from OpenAI: to introduce the OpenAI Forum. I was given two sentences to read, since I am not qualified to explain the Forum myself.

So let me read it. The OpenAI Forum is a vibrant global community of experts dedicated to advancing OpenAI's mission of ensuring that artificial general intelligence benefits everyone. It is a place for dialogue, collaboration, and contribution, where members can share insights, shape the development of OpenAI technology, and work alongside us in imagining a positive future with AGI.

We invite you to join the Forum and stay connected with upcoming events and opportunities from OpenAI. The website is forum.openai.com; if you are interested, please visit and join.

That completes my mission for the OpenAI side. Now let me share a few housekeeping announcements before we start today's event.

First, we kindly ask you to set your mobile devices to silent mode. Second, this opening session will conclude by 10:15, and the two sessions that follow will run from 10:15 to 12:35, approximately two hours and twenty minutes.

Please remain seated until the end of today's symposium. Third, for your safety, please note there are three exits: at the front, the middle, and the back of the hall. Kindly keep them in mind in case of an emergency.

And the last one: a water dispenser is available by the middle exit, so please help yourself if you would like some water.

And two more things. First, at the conclusion of this event, a sandwich lunchbox will be provided, which is another incentive to remain seated until the end of the symposium.

And the very last, very important thing: five participants from the audience will be selected by raffle to attend the OpenAI special evening event today. That is one more reason to stay until the very end of this symposium. All right, without further ado, I will begin today's opening session.

So to open our symposium, it is my privilege to invite Dr. Honglim Ryu, President of Seoul National University, to deliver his welcome remarks. Please join me in a warm round of applause.

Hello, thank you. I’ll keep my greetings brief and echo the earlier remarks. I believe this occasion can be a historic inflection point for the Republic of Korea.

The President has publicly committed to elevating Korea into an AI top-three nation and is moving swiftly. The National Assembly has already passed a Basic Act on AI—the world’s second of its kind. Korea is a powerhouse in both manufacturing and ICT. We are strong in semiconductors and digital, and above all we have an abundance of ChatGPT-savvy users. So OpenAI has made a very good choice. In fact, you went to Singapore and Japan first, but you should have come to Korea before anywhere else.

Korea’s AI development is at a point where the proverb “beads must be strung to become a treasure” applies—we must thread the scattered beads into one. I hope today becomes the strong thread that strings them together. At gatherings like this we often repeat similar messages, but there’s something I want to say to the young people in the back. In every country and every era, there have always been “the young ones.”

Even in the grim years under Japanese rule, we remember Baekbeom Kim Gu, but alongside him were young people who gave their lives for independence. In the 1960s, young soldiers dedicated themselves to overcoming poverty, and their determination helped build today’s Korea. People like myself—along with Floor Leader Young-do Choi, Representative Tae-ho Jeong, and Floor Leader Hyun Kim—were also among “the young ones” who resisted military dictatorship in the 1980s. Roles change by era, and now it’s your turn to become the next generation of ‘young ones.’

I also want to offer an apology. I believe the foundation of democracy is self-interest and egocentrism—and its boundless expansion. The most talented among you possess immense self-regard that, at any time, can transform into altruism, coupled with the drive to lead Korea with your brilliant minds. Yet we, the older generation, have pushed you into endless competition, leaving your great potential too often latent. For that, I am truly sorry.

Going forward, the government and the National Assembly will do our utmost so that you—the young ones of this great transition—can lead the Republic of Korea. Korea’s future rests on your shoulders. Thank you.

Hello, I’m Representative Jeong Tae-ho. Representative Min-kyu Park and I both serve constituencies in this area, so of course we had to attend. Thank you for inviting me to give congratulatory remarks.

Congratulations to Seoul National University and OpenAI on this partnership. AI has made my life a bit easier. Politicians are always asked to deliver greetings or congratulatory speeches, which is hard. These days, even if I don’t fully know the event details, I can go to ChatGPT and say, “Please draft a congratulatory message in the voice of a National Assembly member,” and it generates one right away. That’s how convenient AI has become—even for politicians.

As Chair Min-hee Choi mentioned, the current administration aims to make Korea one of the world’s top three AI nations. We believe this is truly the path for Korea’s future. That determination became concrete yesterday. We established the National Growth Fund: I originally designed it at 100 trillion KRW as chair of an economic–industrial subcommittee, but yesterday the President announced it would be expanded to 150 trillion. The focus will be investment centered on AI.

The day before, as floor manager on the Strategy and Finance Committee, I helped reform the system for science and technology R&D. Previously, the Ministry of Economy and Finance set budgets via preliminary feasibility studies; going forward, science and technology R&D can proceed without such intervention, even on fundamental topics.

Although R&D budgets were once cut substantially, next year’s budget has been raised back to 35 trillion KRW. We’ve prepared both institutional frameworks and substantial funding for the AI era. What remains is for industry and academia to research boldly and bring results to life. The National Assembly will provide strong support. I hope today’s SNU–OpenAI seminar opens limitless possibilities. Thank you.

Hello, I’m Hyun Kim, floor manager of the Science, ICT, Broadcasting and Communications Committee from the Democratic Party. It’s a great honor to be here with the SNU President and OpenAI Chief Strategy Officer Jason Kwon.

In the 1970s we built the Gyeongbu Expressway; in the 1990s, the Internet super-highway. Those were the foundations for Korea’s leap forward. In this administration, through an AI super-highway, I believe Korea will once again stand tall.

As many have said, Korea’s future is right here today. I’ll work hard in the National Assembly so that SNU can excel in AI—surpassing even KAIST. I also believe Representatives Tae-ho Jeong and Min-kyu Park, who represent this area, will provide their utmost support. Since the start of the 22nd National Assembly, there have been about 200 AI-related forums. One of the key figures—now serving as the President’s Senior Secretary for AI—is Ha Jeong-woo, who tirelessly served as an “AI evangelist” in the Assembly as well as in industry. Floor Leader Young-do Choi has also played a major role.

We will keep our heads together and fully support Korea in becoming a leading AI nation. Thank you.

I’m Hyung-du Choi, floor manager of the Science, ICT, Broadcasting and Communications Committee. I previously served briefly as the opposition floor manager as well. In U.S. congressional terms, that’s the “ranking member.” In Korea, we might say Chair Min-hee Choi is the Chair, and I’m the Vice Chair.

The collaboration between Seoul National University and OpenAI—also pledged in the presidential platform—is a critical starting point for making Korea an AI G3 powerhouse. As we mark the 80th anniversary of Liberation, Korea has achieved a miracle: sixth in national power, ahead of Japan in per-capita income, and now at a G7-level standing. But in AI, we must be number one—that’s how we truly become a top-three AI nation.

The Assembly is working hard. We recently expanded AI budgets significantly. To join the global top three, we need capabilities not only in LLMs and sovereign AI, but also in physical AI—processing unstructured physical data to build foundation models. We have increased related budgets accordingly. The government has also introduced exemptions and support measures. Over the coming years, about 1 trillion KRW will be invested in physical AI, with over 10% allocated to research teams at Seoul National University. Please, a round of applause!

Together with OpenAI, we in the Assembly will do our utmost until the day Korea leads in both physical AI and LLMs. Thank you.

Hello. I entered Seoul National University in 1993 as an economics major and later served as a BK research assistant professor—so I’m one of SNU’s own.

I sincerely congratulate my proud alma mater, Seoul National University, and OpenAI, which is leading the global AI revolution, on this new beginning. Korea’s future will be determined by three grand pillars: AI, space, and bio. If AI innovation expands the ecosystems of bio and space and strengthens basic science and technology, Korea can leap forward again.

Just as the Kim Dae-jung administration helped Korea overcome the 1997 financial crisis through investments in the Internet backbone and dialogues with global scholars and innovators—paving the way for digital transformation—this administration can overcome structural economic challenges through AI innovation and spark a new national leap.

It is meaningful that, at such a crucial time, the government and the Assembly—leaders across the aisle—are here together to celebrate the new start of SNU and OpenAI. Once again, congratulations. The government and the Assembly will do our best so that SNU’s students, professors, and researchers can boldly take on the future.

Everyone, let’s take on the challenge. Thank you. I would like to extend my deepest gratitude to President Ryu Honglim, our colleagues at SNU, leaders for the National Assembly and the government, and partners from our industry for being here today.

I would especially like to recognize Chairperson Choi Min-hee of the Science, ICT, Broadcasting and Communications Committee of the National Assembly, Executive Secretaries Kim Hyun and Choi Hyung-doo of the committee, and Jung Tae-ho and Park Min-kyu, distinguished National Assembly members from Gwanak District.

From the government, we're grateful to AI Policy Director Gong Jin-ho for joining us, representing the Ministry of Science and ICT. Their leadership is shaping Korea's vision for the AI future, and their presence here today underscores the significance of this moment for SNU, OpenAI, and for the nation as a whole.

As I mentioned earlier, we opened our office yesterday to much fanfare. This represents not only an important step for OpenAI, but the first step in a deeper commitment to Korea. Our vision is clear: OpenAI Korea seeks to be a trusted partner in Korea's AI transformation, working closely with government, academia, and industry to empower this nation's ambitious journey into the AI era.

Today marks a meaningful milestone. By coming together, SNU and OpenAI are not only signing an agreement, but we are forging a partnership built on a shared vision to advance Korean AI globally and responsibly, and to ensure these advances serve the greater benefit of society.

OpenAI has always believed that our mission is best achieved through collaboration. Korea has a vibrant research community, a forward-looking policy environment, and one of the world's most dynamic technology ecosystems. Partnering with SNU allows us to combine global expertise with Korea's academic leadership, ensuring that AI innovation translates into meaningful impact here in Korea and beyond.

AI is no longer a distant promise. It's already reshaping education, research, and industry, but with this potential comes great responsibility. That is why this partnership emphasizes not just innovation, but also safety, ethics, and human-centered design.

Together, we will explore how AI can empower students, strengthen research, and create new opportunities for society, while ensuring these technologies remain aligned with human values.

At OpenAI, we sincerely hope that through our collaboration with Seoul National University, Korea can realize its ambition to become one of the world's top three AI powers.

This collaboration is not only about technology, it is about people.

We believe this is the beginning of a journey where SNU's exceptional students will gain opportunities to thrive on the global stage, and where our joint efforts can help lay the cornerstone for Korea's continued leadership in AI.

In the sessions that follow, you'll hear from brilliant minds from both OpenAI and SNU on innovation in AI and its impact on macroeconomics and education.

These conversations will lay the foundation for long-term collaboration.

So once again, I thank our partners at SNU for their vision and commitment, and I thank the leaders of the National Assembly and the government for their support. We look forward to building a future where AI serves as a powerful engine of knowledge, progress, and shared prosperity in Korea and around the world, just as the title of this symposium envisions. Thank you.

Thank you, Jason, for sharing OpenAI's vision and commitment for this collaboration.

Okay, next up for the opening session, we are deeply honored to be joined by five distinguished members of the National Assembly. Their presence reflects the importance of this partnership at the national level.

Firstly, I welcome Min-Hee Choi, member of the National Assembly. Now, on to the first of our programs, Innovation in AI: OpenAI has prepared a demo, and our senior professors will present their materials.

For the first and second sessions, the MC role will pass from me to our Deputy Vice President for International Affairs, Professor Boyol Kim.

So I will pass the mic to Professor Kim. Thank you.

Okay, good morning again. Nice to meet you. My name is Boyol Kim. I'm the Deputy Vice President for International Affairs of Seoul National University.

Now we're going to begin Session One, Innovation in AI, where leading voices from OpenAI and SNU will share their insights on the latest advances in artificial intelligence.

So the first session consists of four panelists.

Our first presentation is by Hyeonwoo, Asia Research Lead at OpenAI. Please welcome Hyeonwoo for the first presentation. Thank you.

Hello, my name is Hyeonwoo. And I'm working on research on agents, and I'll present about AI agents using computers. Let me start with motivation.

When we collaborate with our human coworkers, we often assume that they have access to a computer. It is very natural to imagine that they have access to the internet, so they can reach real-time information on the web, and that they have access to the same web-based applications we use for our work, like email, calendar, cloud drive, an office suite, and many others.

At the same time, they must have access to the software needed to make progress on the work: if you have files with a certain file extension, they should be able to open and edit them with that software, so that we can make progress together by editing those files in our collaboration.

In collaboration, sharing context is very important. We share context not only through messages, but also through links and files. We can share context through links because we assume that if we share a link to a web page or web application, our collaborator can open it in their browser and interact with those web-based files and applications to make progress on the work. And because they have access to the software, we can also share context through files. Finally, there are certain assumptions about access control: there is access control that allows our collaborator to safely access private information, and there are permissions we can grant so they can edit shared files.

Based on this observation, if we want to evolve ChatGPT to become more like a human collaborator, it seems obvious that we would want to give ChatGPT something like a computer, so that we can collaborate with it in the same natural way as with our human collaborators.

OpenAI has been making progress in this direction, and we released ChatGPT plugins in 2023. We provided a few basic tools to ChatGPT so that it could imitate things people do with their computers. For example, we provided a text browser so the model could access real-time information on the web. We provided a Python interpreter so it could run basic Python programs for simple calculation and data science. And we provided retrieval tools and third-party plugins to give users a way to let the model access their private information through APIs.
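This tool setup can be pictured as a simple dispatch layer between the model and its tools. The sketch below is purely illustrative: the tool names and handlers are hypothetical stand-ins, not OpenAI's actual plugin interface.

```python
# Hypothetical sketch of a tool-dispatch layer: the model emits a tool
# name plus an argument, and the runtime routes the call to a handler.
# Tool names and handlers here are illustrative only.

def python_tool(code: str) -> str:
    """Run a small Python snippet and return the value bound to `result`."""
    scope: dict = {}
    exec(code, scope)  # fine for a sketch; a real agent would sandbox this
    return str(scope.get("result"))

def text_browser_tool(url: str) -> str:
    """Stand-in for a text browser: would fetch a page and strip it to text."""
    return f"[text content of {url}]"

TOOLS = {"python": python_tool, "browser": text_browser_tool}

def dispatch(tool_name: str, argument: str) -> str:
    """Route a model-issued tool call to the matching handler."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)
```

For instance, a model-issued call like `dispatch("python", "result = 6 * 7")` returns `"42"`, while an unknown tool name falls through to an error string rather than crashing the loop.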

We also noticed things we could improve further. First, the model could become much better at utilizing all these tools to accomplish more complex tasks. And second, the set of tools in that initial release was quite limited compared to what humans can do with their computers and software.

Since then, we have made good progress. First, progress on reinforcement learning with reasoning allows our model to take very long, reliable sequences of actions to accomplish very complex tasks. Second, our models now have multimodal capability: they can see the screen, generate text, and take actions, so they can use graphical-user-interface-based software, which is much richer and more capable than the set of tools mentioned above.

And instead of just a Python interpreter, the model can now access a full terminal interface to a full computer. Based on all these advances, we released ChatGPT Agent in July, and these are the tools we provide to the agent.

First, we still have the text browser, so the model can seek information on the web. We also provide a graphical-user-interface browser: basically a browser built for humans. We give this browser to the model, and the model interacts with it by looking at the screen and taking actions via keyboard and mouse. This allows the model to access any information on any web app, interact with any dynamic web page, and even use complicated web-based software.

A user can log in through the agent's browser to provide credentials, so that the model can access certain websites and applications. We also provide a terminal interface for the model to control the computer through text.

The model can use this interface to write programs, edit files, and run programs on the computer. This interface is connected to a computer with a file system, and that file system is shared with the browsers and APIs.

The model still has access to APIs, and it can use them to call certain web services and to access private information, like email, calendar, GitHub, and many others.

The model can also use APIs to call other AI-based tools to accomplish tasks, like an image generation model to create an image for use in PowerPoint slides, for example. Here are a few things I want to highlight.

First, the model can now utilize a full graphical user interface, so in theory it should be able to use any software and computer interface built for humans.

At the same time, rather than relying only on GUI-based software and tools, the model can still utilize the text browser and APIs. By providing these tools tailored for agents, the model can be more efficient in utilizing its superhuman perception capability, like reading a large chunk of text and comprehending it very quickly.

Second, we provide a terminal, and not only for programming. For example, agents can use the terminal with headless office-suite software and Python libraries to create spreadsheet files and PowerPoint slides that a human collaborator can open and edit in their own office suite.

Third, all these tools share state, especially the file system. By sharing the file system, the agent can do things like download a batch of images with the browser and create slides through the terminal interface, or write front-end web pages through coding and open them in the browser for debugging. By sharing state, these tools become more like different interfaces to, or software on, a single computer.
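The shared-state idea can be sketched in a few lines: two stand-in tools, a "browser" that saves downloaded files and a "terminal" that runs shell commands, operate on one shared working directory. Everything here is a hypothetical illustration (it assumes a POSIX `ls`), not the agent's actual implementation.

```python
import pathlib
import subprocess
import tempfile

# One working directory shared by every tool, standing in for the
# agent's file system.
workdir = pathlib.Path(tempfile.mkdtemp())

def browser_download(name: str, content: bytes) -> pathlib.Path:
    """Stand-in for the GUI browser saving a downloaded file."""
    path = workdir / name
    path.write_bytes(content)
    return path

def terminal(command: list[str]) -> str:
    """Stand-in for the terminal tool, run inside the shared workdir."""
    result = subprocess.run(command, cwd=workdir, capture_output=True, text=True)
    return result.stdout

# The browser "downloads" a file; the terminal sees the same file,
# because both tools share one file system.
browser_download("notes.txt", b"downloaded by the browser tool")
listing = terminal(["ls"])  # assumes a POSIX environment
```

The sharing works in the other direction too: a file the terminal creates (say, a generated slide deck) is immediately visible to the browser tool.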

What I'm trying to say with this slide is that by combining all of this, we are essentially giving the agent a computer, designed for the agent to use with maximal efficiency. With this richer set of tools and advances in reinforcement learning with reasoning, we have made good progress toward making this agent very useful.

In our internal benchmark on complex, economically valuable knowledge-work tasks, ChatGPT Agent's win-and-tie rate against human experts is approaching 50%. And if you look at the x-axis, the estimated time for a human to complete these tasks is often more than 10 hours. This means the model is starting to be capable of solving very complex and time-consuming tasks.

There are certain areas where the model is already significantly outperforming humans. For example, consider our company's very complicated research tasks, which often take more than 10 hours. In this benchmark, when humans spend up to two hours per task, human performance is about 25%, while ChatGPT Agent achieves about 68%. It is also very strong on data science benchmarks, where it already significantly outperforms human performance.

At the same time, there are areas where the model needs further improvement to be really useful. One example is spreadsheet-based tasks. On our spreadsheet benchmark, ChatGPT Agent makes significant progress over previous baselines but is still behind humans. On our internal investment-banking modeling task, also a spreadsheet-based task with more challenging financial-domain problems, the model scores around 41%. And WebArena is a task where the model has to use web-based software, like maintaining the number of products on the back end of an online shopping mall.

On this task we are making consistent progress, but we are still behind human capability. Putting all of this together, what I'm trying to share in this presentation is that ChatGPT is evolving to help with work that needs a computer. We are giving the ChatGPT agent access to its own computer, designed to help tackle the tasks humans do with their computers, and designed to be maximally efficient for agents to use. With progress on reinforcement learning with reasoning, agents are becoming more and more capable at solving these tasks and can tackle more and more domains. That is everything I prepared. Thank you for listening.

Thank you very much, Hyeonwoo, for sharing those perspectives on more capable AI agents. Now we are pleased to invite our second presenter, Joanne Jang, head of model behavior and policy at OpenAI, who will give an overview of model behavior. Please welcome Joanne Jang.

understand them, and maybe instead of mail-order catalogs we can shop instantly by telephone, and businesses can maybe meet over picture phones, and we'll need more translators to keep up.

How did it actually play out 60 years later? When it comes to relationships, everyone is now connected all the time through informal texts, KakaoTalk, reactions. People meet on dating apps, swiping left and right to find their loved ones, at least more so in America. And there are online communities that, for a lot of people, are their primary sources of belonging, support, and identity.

As far as knowledge goes, we got Wikipedia, Reddit, Twitter community notes, where global knowledge is continually crowdsourced, debated, and updated by everyone in real time.

I don't know if you all remember, but we used to talk about UGC, user-generated content. Now we just take that as the norm: users, of course, generate content.

And now we have decentralized content distribution that gives new meaning to trust and consensus. And then, of course, culture. There are now personalization algorithms providing endless entertainment tailored to what you like.

I don't really know how we would even explain meme culture to someone in the 1960s. And anyone can now become a global celebrity from their bedroom.

And when it comes to the economy, online shopping is a thing, but there are also algorithm-driven gig economies like Baedal Minjok, Uber, DoorDash, and KakaoTaxi, and micro-entrepreneurship as well.

So all this is to say: we're gonna get a lot of things wrong when it comes to revolutionary technology and its impact on the future. A lot of what shapes today's culture and norms was nearly unthinkable back then; it was outside what we call the Overton window.

And we consistently underestimate how technology can reshape society in that way. So we're gonna be wrong, and we'll just accept that. Our challenge, then, is to think through in what ways we could be wrong and what that means for what we're building right now.

So I'm gonna throw out three ideas that are just entering the Overton window or currently outside it, ordered from sooner to later. They will be extremely uncomfortable, because we don't find them acceptable right now. They're gonna sound weird and uncomfortable and gross, maybe even dystopian.

I need you to know that these are predictions, not normative statements. I'm not saying this is what should happen; these are things that might happen, so please don't cancel me. Maybe as an overarching prediction: I think a lot of these predictions are rooted in the belief that AI will change what it means to be human and how we seek meaning. It will radically change how humans fulfill core human needs; specifically, in the top three levels of Maslow's hierarchy, I think we're gonna see dramatic shifts. And in general, AI is just gonna break our notions of what it means to be human and what makes us special.

So, prediction one. We talk about the attention economy; I think it will shift from attention to attachment as the scarce resource. This prediction is an easy one, something we've all been thinking a lot about in terms of people outsourcing their thinking. Today, we struggle to imagine that people could sincerely love or choose lifelong partnerships with AI. But for kids growing up now, or babies who haven't been born yet, who will be native to a world where AI has always been around, maybe that won't be weird.

Companies optimizing for attention may now turn to optimizing for attachment. And there are some downstream predictions I can make from that.

For instance, I think relationship norms will change in general, since everyone now has access to something that can validate them. And people might, yeah, start having relationships with AI.

And it may not just be chatbots; maybe there will be AI everywhere, from your fridge to your car. And maybe robots won't just be humanoid robots, but cat-shaped ones, preferably cute.

Prediction two: entertainment, and specifically gaming, I think will replace work. People will have so much free time, I think, because we're automating a lot of what we call toil, work that is not very meaningful for humans to do. And human identity has largely revolved around work, from hunting and gathering onward. Professional accomplishment and productivity are just so core to people's identity right now. But what would a post-work world mean? I think a lot of people will need to redefine themselves outside traditional employment contexts.

That said, I think what we now consider not-work, just play or gaming, may actually be considered real work in the future, and that's how things will evolve. Going back to my point about what we would have guessed in the 1960s: until just last week, I was designing ChatGPT's personality. I don't know how I would explain that to someone in the 1960s; they would have guessed it was a fake job.

So I think work will change, and maybe we'll even have gaming and virtual societies where people can craft identities. People might do Excel spreadsheets just for fun, the same way we farm in games for fun right now, like in FarmVille. Maybe we'll fill out spreadsheets or make decks for fun, and entirely new careers will arise, from virtual architects to experience curators, blurring the lines between creativity, play, work, and jobs. Yeah, jobs that, again, feel fake to us.

Social status, achievement, and recognition will also increasingly be derived from these new settings, and there will be new forms of celebrity and influence.

Third prediction.

Okay, this one is probably the weirdest. I think identity is going to get very weird. I have been at OpenAI for four years now, so OpenAI as a company has four years' worth of everything I've written in a work context: every deck I've made, every message, every experiment I've run. And there is a world in which you could now fine-tune on that data to create a Joanne bot. What would that mean? Would intellectual property then become identity property? Would they even need to employ me, or would OpenAI not waste GPUs on replicating me and replicate the smarter researchers instead? And then upstream, AI will digitally replicate things, and if there's a bot that can roughly say what I would say, what does that mean? That will be very weird to me. People might then license parts of themselves: their voices, their likeness, but also their personalities and ways of thinking.

Some people might even have to decide whether they want to upload instances of their digital clones, and maybe they'll have a stance on whether they want to be mortalists or immortalists. People might be able to blend multiple people's identities as well, where I would have this person's humor and that person's brains. So it's all going to be very weird.

So what does it mean for all of us? I think the Overton window will move with or without us, but we all happen to be at the forefront, thinking about what this means for society. That's a lot of weight to carry, but if I may suggest a call to action: think deeply about what kind of AI and what kinds of interactions you want to see. Everyone here can actually make a difference by pushing for change or building things.

And for me, that means exploring how we design AI to empower people: how it can help people find purpose and meaning, and self-actualize.

So yeah, I'm excited to hear everyone's thoughts later. Thank you.

Thank you, Joanne, for your insightful and thought-provoking presentation. Now we are pleased to feature an agent demo presented by Wonbae Park, solutions engineer at OpenAI.

Hello, everyone. I'm Wonbae Park, a solutions engineer at OpenAI. Today, I'd like to show you how AI is evolving from simply explaining tasks to actually executing them. So let me start with a question.

What is the most repeated or time-consuming task in your own work or daily life? What is that?

OK, I expected that no one would have an answer ready. For me, it might be endlessly searching for a cheap flight, or booking a tennis court. As some of you have already figured out, finding a tennis court is really tough here. Or even writing the same type of report over and over again. So as you watch today's demo, keep your own example in mind.

So I will show you the difference between a normal chat and an agent. Then we will build a practical agent together, called CareerBuddy.

I'm going to switch the screen here and type a very simple prompt: could you purchase a flight from Incheon to Narita? Boom, there.

So I guess you might be anticipating the result that's coming.

So, as you see here, the response is really helpful, right? But notice something: from here, I still have to do the actual work myself. This is why I call it explaining, not executing.

So I'm going to copy the same prompt and paste it in agent mode, so you can see what the difference looks like. Boom, there. The agent actually sets up a browser as soon as it gets enough information from the human.

For example, just consider asking a friend: could you purchase a flight from Incheon to Narita? What would the response be? OK, just as I expected. Your friend would ask: wait, when do you want to go? Round trip or one way? How many people are going? Those kinds of simple questions come up first.

So I'm going to answer that here. As you see, the agent says it would like to help, but asks: could you let me know the travel dates, and whether it's one way or round trip?

The agent is actually figuring out how to work on your behalf. In this case, it definitely needs some essential information, because it wants to figure out what comes next. So I typed: economy, any time in August. And then it brings up the browser. I'm going to rewind it once again for you, just so you can see what it looks like.

So it just opens the browser, just like a human, and clicks around on your behalf once the agent has the information it needs from you.

So now you see the difference. Agents don't just provide instructions; they actually take the actions. If they need more detail, they ask for it. You saw it here, right? Then they use that information to complete the task. Agents can even run scheduled jobs as well.

You might be thinking that you'd want to check for a cheap flight every single day, because one will come up sooner or later. Well, you can schedule the agent to search for a cheap flight every day.

So let's try something more practical: building a job-search agent. I'm going to show you CareerBuddy.

For this demo, meet CareerBuddy, your personal job-search assistant. Think about the hardest part of job searching. What is it?

OK. For me, it was not only finding the positions I fit, but also checking every single day so I wouldn't miss a new opportunity on the website. We have no idea when a new opportunity will come up, and sometimes, after all that effort, there was still no good opportunity.

CareerBuddy solves this by organizing your strengths, collecting opportunities, and helping you apply in a much smarter way.

So let me try using it. This is your CareerBuddy, and I prompt it: please read my resume file, which I already uploaded to Google Drive, and using that information, collect matching openings, score each one for me, and give three reasons for the match. I'm going to try this one.

To keep it short, I'm jumping to a pre-run example. If you kick it off, this agent takes about ten minutes: it pops up the browser and goes everywhere to find jobs for you.

Actually, before opening the browser, the agent first tries to figure out what the resume means. Let me show you the resume. This is what I uploaded: you can see when I graduated, what my skills are, what my experience is, and which projects I've worked on. So the agent actually understands what you are aiming for. Then it goes back to the browser and searches for you without your attention. You can just close your laptop and do whatever you do; the job gets done.

After that, the agent showed me the matching scores: a research internship at KICE. Sorry about that, I didn't mean it. No, it's no joke.

So the research internship came up as a perfect match for my circumstances that day. It can also build a spreadsheet for you, stacking up the results day by day, so you can use that information in your own spreadsheet as well. Back to the screen. Today, we saw how agent mode goes beyond chat: chat explains, but agents execute. By automating repeated or time-consuming work, agents free you to focus on higher-value tasks, whether that's research, teaching, or creative projects.

For SNU students and faculty, this means automating routine work: less time on data cleaning or formatting, faster prototyping that turns ideas into drafts instantly, and accelerated learning and research, with more time for deep thinking and discovery.

As we close, I invite you to think once again, what is the most repeated or time-consuming job in your life?

is, as far as I remember, "The Future of AI: From Korea to the World," or something like that. In line with that, I will talk about the following concept: where AI innovation meets reality.

So for AI innovation to truly meet the world, we need a lot of effort and even courage, I think. Today, I'd like to talk about the role Korea can play in this journey and also the direction we should be heading in.

The advancement of AI is a global trend, as you all know. At the same time, local characteristics can significantly influence AI innovation. OpenAI, for example, is building foundation models that are used all over the world.

Even this morning, I used ChatGPT repeatedly to ask some questions. I believe Korea, too, has a unique contribution to make to the advancement of AI by leveraging its own strengths.

Korea is not just a consumer of AI but an enabler and an innovator, I think. Korea is a full-stack powerhouse in many fields.

First, we are global leaders in IT and memory chips. We are also very strong in manufacturing and robotics.

As this chart shows, we are number one in terms of robot density. Also, we are collaborating with the U.S. in terms of shipbuilding.

We are also very strong in the healthcare sector.

Recently, a friend of mine visited Korea, and she was amazed that she could get a full-body checkup in just two hours. But many Koreans would think: it took as long as two hours? That's too long. We do many things in ppalli-ppalli (hurry-hurry) mode.

And on the cultural side, Korea is also making a huge mark. As K-culture becomes global, I think we might be entering a golden age for Korean culture.

These strengths position Korea uniquely in the global AI ecosystem. But beyond technology and industry, Korea is also making bold policy choices about how AI should be developed.

One of the most prominent of these choices is the push for so-called sovereign AI, which is an initiative that is both exciting and controversial. The term sovereign AI can mean different things to different people, I think. Anyway, let's look at the opportunities first.

Sovereign AI offers things like strategic independence and cultural alignment, and it secures data sovereignty. Because of these advantages, the Korean government recently selected five teams to develop our own national AI, so-called KAI. This is a two-year project to create top-class foundation models.

As you know, Korea exports a lot of tanks and missiles. Some people have even suggested that we should also export AI for military intelligence analysis, perhaps combined with tanks and missiles, so we could create a kind of Korean version of Palantir. If it goes well, we could not only build our own sovereign AI but also find a way to export it.

On the other hand, there are controversies as well. First, it could be a duplication of global efforts; many people say it would be better to just use existing foundation models. Second, some critics argue that creating our own foundation model is like trying to build a new operating system. There have been many attempts to create an alternative to Microsoft Windows in the past, but most have not been very successful.

Third, as this image shows, there is the so-called scaling law. Simply put, performance is often proportional to the resources you put in, and the resources being invested in the KAI project are, to say the least, a bit smaller than those in the US and China.
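As a rough illustration of what a scaling law implies for a compute gap, here is a toy power-law sketch in Python. The constant and exponent are made-up illustrative values, not any published fit; the point is only the qualitative shape the talk describes: loss falls smoothly as a power of compute, so a 100x compute disadvantage produces a bounded, predictable loss gap rather than a 100x one.

```python
# Toy sketch of a compute scaling law (power-law form).
# c0 and alpha are illustrative placeholders, not a real fitted curve.

def loss(compute: float, c0: float = 1.0, alpha: float = 0.05) -> float:
    """Power-law loss: L(C) = (c0 / C) ** alpha."""
    return (c0 / compute) ** alpha

# A lab with 100x less compute does not end up 100x worse:
big = loss(1e6)
small = loss(1e4)   # 100x less compute
print(f"loss ratio (small/big): {small / big:.3f}")  # ~1.259
```

The flip side of the same curve, of course, is that closing even that modest gap requires multiplying compute, not adding to it.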

In this situation, building a competitive model sounds like a real challenge. We are trying hard, but it's a challenge, I guess.

As a result, it is still not clear which side will turn out to be correct. But one thing is very clear: in the current situation, where the US and China are leading the development of large-scale AI models, it is crucial for us to find differentiated fields and directions, so that we can maintain technological leadership and create our own survival strategy. From this perspective, another prominent initiative Korea is pursuing is the push for so-called physical AI, or more broadly, AI that truly meets the real world.

If sovereign AI is about who controls knowledge, then physical AI is about how intelligence is embodied in the world. These national directions show Korea's distinctive priorities. Just like sovereign AI, the definition of physical AI can be a bit fluid.

I think the concept of physical AI includes not just humanoids and robots but also other related fields: for example, PIN, world models, embodied AI, and vision-language-action (VLA) models. Whatever you call it, I believe that true innovation happens when AI models are applied to real-world domains. Let me give you an example. Manufacturing is very important to Korea, and for AI to be applied to manufacturing, there are a few things we need. First, dexterity. The "manu" in manufacturing literally means hand. This picture shows a reimagined human body in which the size of each part is proportional to the amount of brain power used to control it. As you can see, the hands are drawn the largest; our hands use up to 30% of that brain power.

But what is the current status of engineered dexterity? We still don't have robots with the high level of adaptability and dexterity needed to perform complex surgery or play the guitar with flair like a human. According to mechanical engineers, it will take some time to achieve high-dexterity actuation like that of a human hand.

Second, the tacit knowledge of skilled workers. Michael Polanyi said, "We know more than we can tell." This image shows the layers of knowledge. The tip of the iceberg is explicit knowledge, which answers the question "what?" Below that is implicit knowledge, the "how," which is often called know-how.

In manufacturing, know-how is obviously important, but even more crucial is tacit knowledge, which answers the question "why?" To apply AI to manufacturing, it is essential to extract the tacit knowledge of skilled workers and turn it into AI. Korea is rapidly becoming an ultra-aged society, and many skilled manufacturing workers are retiring, so the extraction and AI-fication of this tacit knowledge is a critical need. My lab is conducting related research and actively collaborating with various companies, and some of the results are already being used on actual production floors, helping to increase productivity and reduce costs.

Another important area where AI can understand the real world and drive innovation is science. Last year, the WEF listed AI for scientific discovery as number one on its list of top 10 emerging technologies. My lab is also actively engaged in scientific discovery using AI. For example, we've created a method for designing genetic scissors using AI, one of the first such methods in the world at the time. We also developed a method that makes the prediction of material properties more than a thousand times faster than existing approaches, all using AI.

The two speakers from OpenAI gave an impressive talk and demo on agents in the previous session. AI and AI agents are really driving a lot of innovation, I think. In particular, the proactive nature of agentic AI is crucial, moving beyond the reactive nature of traditional AI.

Many companies and researchers in Korea and all over the world are using agentic AI to accelerate the cycle of innovation. I think this is a great example of AI creating a real impact. But when AI meets the real world, in science, industry, or society, ethical and regulatory issues become important.

My lab conducts research on speech and image synthesis. I don't have enough GPUs, so I cannot do much video synthesis yet. We've produced some really impressive results and published a number of papers at top-tier conferences, but many times we couldn't release the technology immediately because of ethical issues.

AI can also sometimes produce biased results. For example, if you ask an AI to generate an image for the prompt "doctor," it often generates more male images than female images.
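A simple way to make this kind of bias concrete is to generate many images for a neutral prompt, classify an attribute in each one, and measure the imbalance. In the sketch below, the image-generation and attribute-classification steps are hypothetical and stubbed with a hard-coded label list; only the audit arithmetic is real.

```python
from collections import Counter

def audit_prompt(labels: list[str]) -> float:
    """Gap between the most and least frequent attribute value,
    as a fraction of all samples (0.0 means perfectly balanced)."""
    counts = Counter(labels)
    return (max(counts.values()) - min(counts.values())) / sum(counts.values())

# Stub: pretend an image generator produced 10 images for "doctor"
# and an attribute classifier labeled each one.
labels = ["male"] * 8 + ["female"] * 2
print(audit_prompt(labels))  # 0.6 -> heavily skewed toward one attribute
```

Running the same audit across many prompts ("doctor," "nurse," "CEO," and so on) is one common way such skews are reported.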

Technologies are also being developed to overcome these issues: explainable AI (XAI), reliable AI, trustworthy AI, fairness in AI, and so on. My lab is also actively researching techniques that remove bias and preserve privacy while maintaining performance.

We've also developed a method to integrate XAI with fMRI data to offer insights into the language selectivity of the brain. Actually, there are many more excellent AI researchers here at SNU. In particular, the SNU AI Institute has become a central hub for AI research.

And the IPAI, or so-called AI Graduate School, is dedicated to training many graduate students in AI.

I heard that more than 6,000 users at SNU are already using the paid version of ChatGPT. I'd love to know what SNU members are using it for, so I look forward to hearing about SNU's AI education in the next session.

I'd like to wrap up my talk now. I've talked about AI innovation, Korea's role, and our potential. I believe that AI innovation truly happens when research meets reality, and responsibility guides impact.

Before I end, I have one more thing to add. Of course, I asked ChatGPT for ideas on what I should talk about today, and it gave me some suggestions. Interestingly, as you can see, the suggested title was "From Seoul to Silicon Valley, From Silicon Valley to Seoul." This phrase sounded familiar, at least to me. When Seoul hosted the Olympics in 1988, the slogan was "The World to Seoul, Seoul to the World." I'm sorry this image is in Korean; I couldn't find an English version. Maybe it was a domestic slogan, I guess.

So Seoul is my hometown, and Silicon Valley actually feels like a second home to me. Why? I lived there for nearly 10 years as a graduate student and then as an engineer.

This is a picture taken on the day I graduated with my PhD. At that time, people at my school actually jokingly called PhD graduates in my field losers. Do you know why? Probably because they could have made a fortune by joining a startup instead of getting a degree, right?

I also had a lot of opportunities, actually, a lot of opportunities to join startups when I was a student. But I just got the degree. I don't know why; I should have dropped out. Anyway, during the graduation ceremony, I had all these thoughts running through my head.

Then the commencement speaker delivered a dagger to my heart. We all remember him and what he said, right? Stay hungry, stay foolish.

It was June in California. It was hot, and I was exhausted, of course, so I didn't really listen to what he was saying, not knowing how famous it would become. I even thought he said, "Stay foolish, so you stay hungry," not the other way around.

Hearing it, I thought even Steve Jobs was making fun of me for not joining a startup and for getting a PhD. I thought he was calling me foolish and telling me to stay hungry. Jobs is no longer with us, but we have what? AI. So I used the power of AI to create a video.

"Professor Yoon is still foolish. Korea is still hungry." It's true, really true. I'm still foolish, and Korea is still hungry for new growth engines. How can we satisfy that hunger? We have our young graduate students. It's you.

Now I'm really going to conclude my talk. Another story Steve Jobs told that day was about connecting the dots. I believe this symposium is a starting point for us to connect and grow together, just as ChatGPT suggested connecting Seoul and Silicon Valley. We are creating the dots now, and I hope they will somehow connect in the near future. And in the spirit of that connection, I'll end with a quote from another person we all know. Even if you know him, you're not that old; don't worry. This one is real, by the way, not a deepfake. "We go together. That's what we're about." Thank you very much.

Thank you, Yoon. What an impressive presentation. Thank you very much.

Now, it is my pleasure to introduce Professor Kim Gun-hee, who joins us from the Graduate School of Computer Science and Engineering at Seoul National University. Please welcome Professor Kim.

Hello. First of all, thank you for inviting me. It is truly an honor to present our research at this symposium. My presentation will be a little different from the earlier ones: in this talk, I would like to introduce my own research, so it will be a bit more technical. The work I want to present is called Behaviour SD, which we presented at NAACL this year, and it is about understanding language through speech.

Let's begin with the purpose of this research. Many of you may have experienced large language models mainly through text dialogue. But language is not only text-based. Language carries not only the semantic meaning of words but also paralinguistic features. In the example I'll show, the meaning of the dialogue depends not just on the text but on how it is spoken. For instance, when a young woman becomes angry while waiting, her speech may clearly carry that anger: her voice may have a higher pitch and a faster tempo. The person speaking to her, on the other hand, might try to calm her down, speaking with a lower pitch and a slower pace. Speech can also contain cues like crying or laughter. So spoken language carries this kind of richness.

Another difference from text is that spoken dialogue is not strictly turn-based in the way text conversation is. In text, one person speaks, then another replies. But in speech, people often overlap, interrupt, or give short backchannel responses like "uh-huh" or "right," and the speaker needs to adjust to that in real time. Our goal in this research is to build language models that can not only understand but also produce speech with these features. We focus on four speech behaviors:

Verbosity – how long or short utterances are. Sometimes people speak in long sentences, sometimes in short ones. Models need to control this.

Filler words – sounds like “um” or “uh” that appear in natural conversation.

Backchannels – short responses that show you’re listening, such as “yeah” or “I see.”

Interruptions – cutting in while someone else is speaking, which shifts the flow of dialogue.

To study these, our first task was to build a dataset, which we call Behaviour SD. It contains over 1 million dialogues, more than 2,000 hours of full-duplex audio. Each sample includes labels for the four behaviors, with three levels each, plus transcripts, timing, and emotional cues such as laughter. Because collecting such a large dataset with human annotators was infeasible, we generated much of it using LLMs and TTS models to simulate realistic conversations. We started with large text-based dialogue datasets such as SocialConv and PyperLOD (released at EMNLP 2023), then converted them into speech with labeled behaviors.

For backchannels in particular, we developed a special pipeline. We trained algorithms to predict when a backchannel is likely, based on content, pitch, and speech rate. For example, if the text indicates agreement, the model might generate a backchannel like "uh-huh" at the right timing. Finally, we converted these annotated transcripts into audio files using conditional TTS models, ensuring that speaker identity and naturalness were preserved, with synchronized timing between the main speech and the interruptions or backchannels. Compared to previous datasets, Behaviour SD is much larger and richer, containing over 100,000 human-validated samples, and it is available for download.

We then trained language models on this dataset. The training pipeline tokenizes audio into discrete units using models like HuBERT, then represents the two speakers' audio channels along with special tokens for silence, overlap, and turn-taking. Our base model is built on LLaMA 3.2 (1B parameters). We compared it with larger models such as GPT-4o and LLaMA-3 7B. Even though our model is much smaller, after training on Behaviour SD it showed better performance at producing natural conversational behaviors. Human evaluations also rated it highly. You can try the demo on our website.
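The serialization idea described above, two speakers' discrete unit streams flattened into one training sequence with special tokens, can be sketched as follows. The token names and the frame-by-frame interleaving scheme here are illustrative guesses of mine, not the paper's actual vocabulary; the real pipeline uses HuBERT units over full-duplex audio.

```python
# Toy sketch: flatten two speakers' frame-aligned unit streams into one
# sequence, with a silence token marking frames where a channel is quiet.
SIL = "<sil>"

def interleave(ch_a: list[str], ch_b: list[str]) -> list[str]:
    """Emit [A-token, B-token] per frame, padding the shorter
    channel with silence tokens so the two stay frame-aligned."""
    n = max(len(ch_a), len(ch_b))
    pad = lambda ch: ch + [SIL] * (n - len(ch))
    seq = []
    for a, b in zip(pad(ch_a), pad(ch_b)):
        seq += [f"A:{a}", f"B:{b}"]
    return seq

# Speaker B backchannels ("uh-huh") while A is still talking,
# so frames 2-3 contain an overlap of both channels:
main = ["u12", "u7", "u7", "u31"]   # A's HuBERT-style unit IDs
back = [SIL, "u88", "u88"]          # B is silent, then backchannels
print(interleave(main, back))
```

A single decoder trained on such sequences sees silence, overlap, and turn-taking as ordinary next-token structure, which is what lets it learn behaviors like backchanneling at the right moments.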
The model focuses on making language more human-like by reproducing behaviors such as interruptions, fillers, and backchannels. In the future, we aim to develop real-time voice-agent models based on this work. That concludes my presentation. Thank you very much for listening to this technical talk.

Please welcome Dr. Chatterji.

Good morning, everybody.
First of all, it's great to be back at Seoul National University.
As an economist, I've come here many times to this great university.
I want to say two things.
One is that what's impressive about SNU is not just the depth of the fantastic faculty, and you saw two of your faculty here who are doing amazing research.
It's also the breadth of all the subjects that can be studied here.
I have more familiarity as an economist with your Department of Economics and your business school, and really, Seoul National University is a leader across all fields.

And so it's a real honor not just to be the chief economist of OpenAI here, but also as a professor and researcher to join you.
And as a professor and researcher, I had many students from Korea, many of my best students, from Seoul National. What I'm hoping to do in this job is to connect more with you, especially the younger students and researchers, to work on joint projects.

So at the end of my presentation, I'll provide a contact for my team. And if you're interested in how AI is affecting the economy, if you're interested in conducting research or learning more about those topics, I'd really encourage you to reach out to me and do that.

And when it comes to your faculty, there are so many faculty here who are looking at the impact of AI on business. Korea is home, obviously, to some of the world's leading companies. And if there's interest in those topics from the business school, whether it's venture capital, innovation, entrepreneurship, please feel free to reach out to my team. We'd be glad to work with you and meet with you.

So let me tell you a little bit about how I think AI is going to transform the economy. This is the big question, right? As an economist working at an AI lab, the big question I get is not about reinforcement learning or agents.

It's: are you gonna have a job, right? What is it gonna do to the economy? And there are lots of different numbers when you think about this. These are some of the leading economists in the world, and the numbers on this screen are all percentage predictions of the impact on the annual growth rate of the economy.

And what you'll notice here is there's a wide range from 0.06% all the way to 18% about how AI is gonna affect the economy. And I know here in Korea, you're having these same discussions, making massive investments in AI infrastructure, in human capital, in your innovative capabilities.

And we're all wondering what's gonna be the return on this investment for the growth of your country. And if you look at the last 50 years, Korea is a success story in terms of economic growth. When I did my PhD in economics, people asked me what is the economic development case for the rest of the world, and often it was Korea that was given as the example. And just as Seoul National, starting in 1946, grew, the Korean economy grew as well. And one of the reasons your GDP grew so fast over the last 50 years is because of your investments in innovation, semiconductors primarily, and now looking towards AI.

So the question is, which one of these futures is yours? Which one of these futures is going to be what happens in Korea and around the world? If you think about this, if you study the history of technology, if you look back at the dawn of the semiconductor, often there was a lot of divergence about the impact of the semiconductor, the computer chip on the economy. We didn't know when we invented the chip in the late 1950s how it would be used. And in many ways, we're at the beginning of the AI revolution.

Sam Altman, our CEO, calls it the early innings of AI, and that's what I think we're seeing as well. And so it's not surprising that there's divergence in all these numbers. Economists take a while to figure these things out. If you think about the capabilities of models, you saw some of this with this amazing research on speech: the capabilities of the models are changing very fast, which makes it very difficult for social scientists to study them. This is why some of our predictions differ. And finally, there's the theory of the case: how AI is going to change the economy.

For many people, when they think about AI, they think about ChatGPT. And when they think about ChatGPT, they think about a chatbot. But it's not clear to me that the way we'll be using AI in five years will be primarily through a chatbot interface. There are so many different ways, including agentic models, as was shown by my colleague, in which we could be using AI. And we haven't even talked about the use of the API to build programs and products that we haven't seen before. If you think about five years from now and the impact on the economy, and you're only thinking about chatbots, you're gonna be closer to the low numbers on this side. But if you think about all the things that AI, agentic capabilities and otherwise, can do for the economy through scientific innovation, through defense applications, through government services enhancement and efficiencies, then we're talking about much stronger and more accelerated economic growth.

Jobs is the other big question. Lots of questions about how many jobs we're gonna create and how many jobs will be disrupted by AI. And just like when it comes to growth, economists have many different views about how AI will affect jobs.

Many people ask me, I have three children, what should your kids do when they go to college? They're very young now, but one day they'll go to college. None of us has a crystal ball. None of us can predict whether computer science is a better major than economics. I think economics is a better major than computer science, but that's just me. None of us can predict which subject is gonna be in demand at different times, and so it creates a lot of anxiety among students, like many who are here, about what to study.

And as you think about how AI is gonna create opportunities, I want to remind you of one thing about the economy, which is that most of the jobs we have today, we didn't have names for 100 years ago. Even 50 years ago, if somebody told you, like my colleague Joanne, that they would be designing the personality of an AI model, or your child told you they wanted to be an influencer, or someone told you about K-pop, none of these things were jobs, none of these things were categories.

And so when you think about the opportunities in the global economy, AI will create some things that we don't even have words for that we don't understand. And this is the opportunity right now with AI.

Now, I lead the economic research team at OpenAI, and I wanna tell you a few things that we found about AI that we think make this technology revolution different: closer to the internet, electricity, and the semiconductor than to other technological revolutions.

First, look at the speed of adoption. This chart is basically showing you how long it took for different products to get to 100 million active users. ChatGPT took two months to get to 100 million users. We've never seen growth like this in a consumer product, and this kind of adoption is different from what we've seen in the past with technology.

When Sam tweeted about this, he said the ChatGPT launch 26 months earlier was one of the craziest viral launches he'd ever seen. But when we launched image generation in 2025, we added 1 million users in an hour, and many of them were in Korea, right? What's interesting here...

is that this technology continues to be adopted at such a fast rate; we really haven't seen, as economists, anything like this before. That's one thing that makes it different. Second is capabilities. Many of you here are coming from a technical background. You know that in machine learning, we use benchmarks to measure the performance of models. These benchmarks range from speech recognition and handwriting recognition all the way down to nuanced language interpretation; we grade models in a variety of different ways in terms of their performance, and we saturate those benchmarks when we achieve certain levels of performance. We are saturating those benchmarks much more quickly than before. What used to take two decades is now taking just one or two years. Think about that for a second.

So not only are we adopting technology, but at the same time, it's getting better and better faster and faster. This is without precedent, if you think about the economics of technology. Cost. You might be saying, what does it cost? Well, if you look at the cost per one million tokens,

it was $33 for GPT-4; for GPT-5 mini, it's $0.23. So if I told you a story about rapid adoption, and about expanding capabilities, but I also told you that at the same time hundreds of millions of people were adopting the technology, the capabilities were getting much, much better, and it was also getting 99% less expensive, that's a story of a technology revolution we really haven't seen for a long time. That's what makes us optimistic at OpenAI: we're really living in a moment where things are going to be very different.
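The roughly 99% figure follows directly from the two quoted per-million-token prices; a quick check (figures as quoted in the talk):

```python
# Cost per one million tokens, as quoted in the talk
gpt4_cost = 33.00   # USD per 1M tokens, GPT-4
mini_cost = 0.23    # USD per 1M tokens, GPT-5 mini

drop = 1 - mini_cost / gpt4_cost  # fractional price reduction
print(f"{drop:.1%}")              # about a 99% drop
```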

When you think about some trends, things I'm watching, I want to highlight two things to talk to you about, and then I want to show you some stats from Korea, and then I'll end my presentation.

First, where are the data centers and the infrastructure being built around the world? This is very relevant, obviously, for Korea. If you look at the amount of money, nearly $200 billion in 2024, it's really being spent by the top five hyperscalers. And if you look at compute capacity, the infrastructure that powers AI training and inference, more than 90% of it is owned by US or Chinese companies. This is a really important issue to think about when building AI infrastructure.

The second thing is enterprise AI, and Korea obviously leads here: a large percentage of Korean workers are using generative AI at work, about 50% specifically for work purposes, and many Korean companies, almost 40%, have now integrated AI into their operations.

Enterprise AI, companies using AI to drive top-line and bottom-line growth, is the key to the AI revolution. And the more Korean companies lead the way in that area, and we hope to be your partner in it, the better it is for Korea.

These are two things to watch: where the infrastructure is built and who can access it, and what happens with the economics of enterprise AI over the next year.

Now a couple of things on the state of play in Korea. You are in the top five among all Asia-Pacific countries in terms of ChatGPT users, and you're in the top five in API developers.

So my team has the privilege of analyzing a lot of the data on AI, a lot of the data on ChatGPT. And I can tell you, in terms of Korea, you are a leader in the region.

Secondly, if you think about growth, your users of ChatGPT have 4x-ed, quadrupled, over the past year. This is an amazing level of growth. I think many of them are here at Seoul National, but you're probably seeing this kind of growth all around the country. And over 50% of users are young people under the age of 35.

You're growing fast, you have a lot of young users who are using it for the kinds of applications we'll talk about here, and you're top five in the region.

A couple of other things: how are Koreans using ChatGPT? The number one use case is writing and critiquing text. So it's very common for someone to put text into ChatGPT and not necessarily ask it to write something new, but to edit it, make it better, or maybe translate it. About 15% of all messages are writing and critiquing text. This is in Korea specifically, from our team's analysis.

14% of the messages, though, are about how-to advice. This is practical knowledge, and it's what makes us different from a traditional web search. It's not just about writing or getting information; it's also about how to do things, which is really interesting. And eventually, agentic capabilities will do those things for you, as you saw with the demo.

9% are actually for translation, which I think is very interesting, and this is higher in Korea than in most other countries. 9% of people are translating, let's say from Korean to English and back, to communicate with people across the world. This bodes well for your ability to connect the dots, as the professor said, around the world. Now, a couple of things I think are notable. Here's the number of AI patents per 100,000 inhabitants: South Korea is at the top. This is why we are here. The combination of innovation and AI, your leading companies, your entrepreneurial spirit, as well as leading universities like Seoul National, is the reason. And so as you're thinking about where innovation is going to be, South Korea is going to be a locus for innovation when it comes to patents.

When you think about talent, this is talent concentration by geography. South Korea is very high on the list; you can see it between the Netherlands and Lithuania in terms of the percentage of people who have AI backgrounds, but it is not leading the world here. This is an important area. If you think about the story of Korea's economic growth in the post-war era, it is about investing in human capital, education in particular. And this is why Seoul National's efforts to invest in AI and to develop an AI institute are so important, because this number is going to go up in the future. Finally, global investment in AI by geographic area: the United States is by far the largest.

This is the area, you see South Korea between Israel and India. This is the area where your recent legislation and the investments in AI are really going to matter. For Korea to keep pace with the world, the investments in AI, both in infrastructure and talent, are going to have to increase.

Now a couple things just to mention, I think energy and infrastructure are the key ingredients, the linchpins for how South Korea is going to win in AI. If you want to be top three in the world, the investments need to come there. You need an AI-ready workforce, so people who are familiar with AI, who are trained, who can use it at work, and you need an education system to produce those kind of workers.

This is the recipe that helped Korea in the past, and it's going to be the recipe that helps it in the future as well. If you're interested in working with our team and would like to collaborate on projects around this, student or faculty, please send my team a note at econresearch@openai.com. We're so excited to be here, and the opportunity to work with Seoul National is such an amazing privilege. Thank you again, and I look forward to talking to all of you during the breaks. Ronnie Chatterji, Chief Economist.

Thank you.

Thank you very much, Dr. Chatterji, for your energetic presentation.

Next we have Raghav Gupta, Head of Education for India and Asia Pacific at OpenAI, who will speak on AI-powered education. Please welcome Mr. Gupta.

Hello, it's wonderful to be here. I'm thrilled to share a little bit with you about why education is so important to us at OpenAI, and what we're doing to help shape a positive future for AI in education around the world as well as here in Korea. We heard from Ronnie right before this that it's one of the areas that are important as part of the recipe for getting countries ready for the future with AI, and education plays a massive role in that. Now, you saw a little bit of this as well: when ChatGPT launched back in late 2022, it grew very rapidly, and like Ronnie said, it became one of the fastest consumer product adoptions around the world. There are 700 million people who come to ChatGPT every week to use the product, and AI is being used in many, many different ways.

I'd love to show you a little bit of how this is being used in education and what specifically is happening here in Korea as well. I mentioned why education is important, and here's something very significant: globally, almost 80% of all ChatGPT users are under the age of 35. It's a little bit lower here in Korea, about 50%. What that means is that many of the folks using ChatGPT are either in K-12 schools, in higher education, or in their early or mid-career.

And we heard that almost 20% of these folks are in colleges in Korea. They're in the age group of 18 to 24. And what do they use ChatGPT for? And this is the global view. The top use case is learning. And we see this for every region, for every demographic around the world.

It's a little bit different here in Korea; you'll see writing, how-to advice, and mentoring, which were some of the topics we heard from Ronnie as well. But learning is the single biggest use case around the world.

And I mentioned this earlier that with 700 million people coming to ChatGPT every week, ChatGPT, in a sense, is one of the largest learning platforms in the world as well.

And that is why learning and working with students has already become so important for us at OpenAI. Now, with this come many opportunities to advance what AI can deliver in education.

There are certain challenges as well. On the opportunity side, we see the potential to become a personal tutor for every student, a tool that can help democratize education and improve learning outcomes.

Years ago, as you'll see on the left-hand side, a study by Benjamin Bloom found that students who received one-on-one tutoring outperformed almost all students taught through other forms of education. Now, this study has become very famous around the world, but what is also true is that only a very small percentage of students have the opportunity to access the one-on-one tutoring that this study speaks about.

So we've been thinking about, is there a way that AI can help with one-on-one tutoring as well?

And then there are studies like this one on the right in Nigeria and elsewhere, which talk about the potential benefits of AI in education. Now, that being said, there are challenges as well, and it's important to be aware of them. It's important to, you know, work towards solving for them as well.

And these challenges are very real, right? As AI adoption grows, there are increasing fears that AI can undermine education. We do know that true learning takes effort. You know, it takes friction, it takes time to grapple with new ideas, actively engaging with material. And we are very clear that if students just use AI as a shortcut, they won't learn, right? That's very, very clear.

So what we've been doing in terms of addressing some of this is we've seen that when students use AI in a specific way, when they are guided by educators, AI can significantly enhance learning performance. You know, this is where the core challenge and the opportunity lies.

as well. The difference this time around with AI, though, is that in past waves of ed tech, adoption was often top-down. I used to work with a company called Coursera. During COVID, it went all around the world, but the difference was that it was top-down: the government encouraged it, universities adopted the platform, and then finally it came to students.

The difference this time around is that AI adoption is grassroots-up. Students have already adopted it, and many times they are ahead of faculty members, teachers, and parents in terms of adoption of this technology.

And as an education ecosystem, we have the responsibility and urgency to dive in and guide students to focus more on the opportunity side and avoid some of those challenges that I spoke about as well.

So how do we work towards getting this right? How do we build a system where AI can significantly help progress?

our work in education. And we think there are three components here. First, building learning products based on feedback from students and faculty members. Second, working with educators and institutions to shape how AI is used on campuses and by learners. And lastly, advancing how AI is used on campuses for good, for research and for progress. I'll touch on each of these fairly briefly.

From a product standpoint, the image is coming out a little bit blurred, but essentially it's a campus. Earlier this year, our team from various offices was in different parts of Asia. We visited many universities to get feedback on what people would want to see in the product from a learning standpoint, and we heard similar themes across countries. Students said they wanted ChatGPT to act like a tutor, not just like a shortcut machine.

Policymakers wanted tools that can strengthen learning outcomes, and parents said they wanted solutions that are safe, affordable, and available in local languages. And finally, teachers told us that copy-paste is a big issue they are concerned about and wanted us to work towards addressing it.

One direct result is what we call study mode. This was launched, I think, about three weeks ago. And my colleague, Wonbae, is going to show us what study mode is and how it looks as well.

So what study mode does is instead of simply providing you with answers, it guides learners to engage more deeply with the material and to truly understand it. And this is one example of some of the work that we are doing by getting feedback in and then building the product towards shaping what AI can do in education.

We are also investing in working with educators and institutions on adoption of AI on campus. There is a version.

of ChatGPT that we call ChatGPT Edu. It is a version customized for educational institutions. We are actively exploring how we can bring ChatGPT Edu to SNU as well and make it available to faculty and to students.

And this is a step towards building what we call an AI native university. It is where AI is embedded into the entire campus. It's a part of the core infrastructure, almost like the internet is, but with obviously a lot more capabilities. It allows students to access OpenAI's most advanced AI features and it also empowers educators to use AI and guide students to use the tools most responsibly.

And then lastly, like I said, we can only maximize the potential of AI in education if we commit to encouraging students and faculty to think about how they can use these tools for good. And one such program is what we call NextGenAI.

announced earlier this year with leading institutions from around the world, like Harvard, MIT, and Oxford. And these are institutions who are dedicating funds and resources to use AI to accelerate research breakthroughs. And they use OpenAI tools to expand research in domains like education, in public service, and other socially beneficial applications. And we're very hopeful that SNU will come on board and be a part of this consortium, as well, in the not-so-distant future.

And lastly, if I summarize, ultimately, our journey today is not about education as an end in itself. We are very clear that we see a world where AI tools can be used to advance research and learning to benefit all of humanity. And we believe that, in the end, what this will do is help us truly expand human potential. And I'm excited to work with all of you to see how we might achieve this.

I’m here again at SNU. Thank you all for your time and attention. Thank you. Hello, I’m back. Yes, it’s me again. Honestly, I wish I could do this in Korean. So, what is the hardest part of learning in school? The hardest part is not just getting the answer, but asking ourselves: do I really understand it well enough to explain it in my own words? Have you all tried that? No? I see the students sitting in the back. AI tools like ChatGPT can give the answer instantly, but turning those answers into our own knowledge is very different, I believe. The AI should act as a supportive tutor that adapts to your level, asks guiding questions, and checks your understanding step by step. Let me jump over to the demo session here.

Let's try comparing the regular chat versus study mode.

So first, I'm going to turn study mode off and type a prompt.

Let's try solving this equation: 3x + 5 = 11.

There we go. This answer is correct and polite, but it's only an explanation. As I told you, for many students like me, this is not enough to build lasting understanding.
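For reference, the algebra behind this example is just two steps, which can be sketched in a few lines (the function name is my own):

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c the way a tutor would walk through it."""
    rhs = c - b        # step 1: subtract b from both sides (3x = 6)
    return rhs / a     # step 2: divide both sides by a      (x = 2)

solve_linear(3, 5, 11)  # -> 2.0
```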

So now let's switch to study mode and see what happens with the same prompt. It's kind of hidden, so you press the more button, then Study and learn, and there it is.

See that? It's cool, right? Notice the difference: study mode behaves like a patient tutor. It asks me questions, checks whether I understand, and gives feedback if I make a mistake.

For example, let me answer wrong on purpose: minus four. If you did the same thing with a human tutor, they might be upset. But study mode stays very polite and gives you a second chance, asking why you think that. So study mode works like a patient tutor: it asks me questions, checks whether I understand, and gives feedback when I make a mistake.

It can even adjust its tone. This demo is from a student's perspective, but anyone can use it: parents, faculty, teachers. If you tell it your role, say that you're a parent, it changes its tone accordingly. This makes the learning process more personalized and effective, for sure.

So, let's jump over to the second demo here.

Okay, I'm going to try uploading a physics problem.

I'll try to solve it, with the Study and learn mode definitely turned on, and I'm going to show you how good it is. The problem looks like this. You guys know it, right? It's a pretty straightforward question.

But suppose a student really wants to learn how to approach this kind of problem. Study mode lays it out step by step, and these steps are really critical for students who want to understand everything from scratch to the end.

It takes a moment, but the AI actually analyzes the problem and explains it in great detail like this. As I followed along, I realized a visual explanation would make it even clearer, because the problem is shown in just two dimensions. So I wanted to test it as a physical one.

So I asked: could you create a simple GIF showing two balls falling? I wanted to show a clearer view.

After a few moments, it turns into something like this; I'll show you the one I prepared earlier. If you download it, the GIF file looks like this, so you can easily see the difference between the two falling balls over time. Compared with a fixed 2D image, this is much easier to understand.
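The physics the GIF visualizes is plain free fall; here is a minimal sketch of the underlying computation (my own illustrative code, not what ChatGPT generated). Height after t seconds is y0 − ½gt², independent of mass, so both balls stay level with each other at every instant.

```python
G = 9.8  # gravitational acceleration, m/s^2

def height(y0, t, g=G):
    """Height of a dropped ball after t seconds, clamped at the ground."""
    return max(y0 - 0.5 * g * t * t, 0.0)

# Two balls of different mass dropped from 20 m are at the same height
# at every instant: mass does not appear in the formula.
for t in (0.0, 1.0, 2.0):
    print(t, height(20.0, t))
```

Swapping in a smaller `g` (say Mars gravity, about 3.7 m/s²) is exactly the kind of parameter an interactive simulator would let students play with.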

And also, if you're a teacher and want to make a simulator for that problem, you can just ask: please write the code for an interactive simulator that runs in a single HTML file. It takes a few moments, and it instantly creates the code for it. And there it is.

Like this, you can adjust the values here and play with it. If you're a teacher, this makes it easy for your students to understand. Or if you went to Mars, things would be different from Earth. You can use prototypes like this to make things much clearer for students. Cool, right? Please tell me it's cool.

Study mode helps you think, reason, and reconstruct your knowledge.

For students and faculty, this brings real value.

For students, it can turn a difficult concept or a paper's idea into visual explanations, notes, and practice questions.

For professors, it can instantly transform lecture notes into teaching materials or visualizations, like you already saw.

For researchers, it can prototype ideas faster and reduce wasted cycles.

For learners, it helps avoid gaps and misunderstandings by building the correct foundation from the start.

What I showed was a simple physics example, but the same approach applies in many contexts.

So let me end with a question: what is the most common use case? The most repeated topic is learning, and learning again. With study mode, you can see how to learn faster and more deeply. Thank you. Thank you very much.

Once again, thank you, Wonbae, for that amazing and passionate demonstration. Now, let’s return to Seoul National University to hear an academic perspective on AI and learning. Please welcome Professor Gahgene Gweon from the SNU Institute of Artificial Intelligence and Information.

Professor, welcome. Good morning. I'm very glad to be here with you today. Before I begin, let me ask you a question. Think back to your school days. Did you ever want your teacher to slow down, or maybe speed up, or explain things in a different way?

Well, what I'd like to talk about today is to invite you to reimagine education with me. Not education where AI is a threat or a gimmick, but where AI is a partner. A partner in both teaching and learning.

Middle school and high school times are times of enormous curiosity, but also enormous variation. Some kids learn quickly, others can struggle. And year after year, the gap widens, which is sad.

Picture a seventh-grade math classroom where today's lesson is fraction division. One student may doodle because the work feels too easy for him. Another stares blankly at the board because it's already too hard. Now, is this because one child is smart and the other is not? I would argue no. It's because our system assumes that every student learns the same way at the same pace. But what if, just what if, we had a system that could flex with each student, recognizing their differences and adjusting to them in real time?

Now, that's the promise of AI. The core idea of AI education is simple but powerful, as you can see at the bottom of the screen here: it's acknowledging learner variability. Some kids grasp a concept by reading. Others may need to see a picture or a video, or even do an experiment with their hands, to embrace the concept. Now, AI allows us to capture these differences through data. It can notice how a student solves a problem, where they hesitate, where they take hints, and what kinds of mistakes they repeat.

So instead of just checking whether the answer is correct or incorrect, AI can now ask follow-up questions: Why do you think that? What would happen here if we just changed this number?

Now, here's an example. Imagine a student memorizing the formula for the area of a circle. It's fine, but what if they don't know when or how to use it? Now, AI, a good AI tutor, could step in offering analogies or scaffolding. So the AI tutor could say, think of this circle as a pizza.

How much space does the crust cover?

The best AI tutors like this don't just give out answers. They teach the students how to think.

Now, does that mean our teachers are becoming less important? Absolutely not. If anything, their role becomes more critical.

When AI handles the routine drills, teachers have more time for what really matters: coaching learning strategies, fostering student creativity, and building curiosity and critical thinking.

AI doesn't replace great teachers. It makes the great teachers shine.

So think of it this way: if AI is a personal trainer for each student, then the teacher is the coach who inspires the whole team.

We are already seeing some glimpse of this future. Actually, we just saw it three minutes ago. When GPT-3 came out a few years ago, a student could ask it to solve a math problem. It would generate a neat step-by-step solution, like you see on the left. It's useful, of course, but it's like giving away the answer in the back of your book, right?

So fast-forward to today. Systems like ChatGPT now have a study mode, which was released three weeks ago, and Khan Academy's Khanmigo, which is also built on OpenAI's technology, offers something like a study mode as well.

It acts less like a solution machine and more like a tutor. When a student types, "I don't get this problem," then instead of just stating the answer or correcting the student, it nudges them forward: What's the first step you might try? Can you break the problem down into a smaller one?

And after students use this new version of the tutor, they say things like: "I'm not embarrassed to ask a stupid question." "It's okay if I get it wrong." "It feels like I have a friend studying with me." And this kind of momentum is actually growing.

Analysts predict that by the end of this year, 2025, nearly half of all major learning platforms will have an AI-driven personalization feature.

Okay, but technology alone is not enough. We need a framework, a roadmap. Today I suggest four steps that should guide the interaction between an AI tutor and a student: adaptive questioning, guided problem solving, non-invasive assessment, and meaningful feedback.

And each of these isn't just a technical solution; each one has deep roots in learning science.

For the first step, adaptive questioning: when AI asks a question, it shouldn't be random. It should build on what the student already knows, what they've struggled with, and even their preferred learning style.

This is powerful because the type of question sets the tone for a student's motivation and curiosity. A good question can pull a student in; a poorly chosen one can shut them down.

For example, in this work we showed that the way an AI tutor presents a question can influence both student motivation and learning outcomes. When the AI tutor gives a student a problem in a constructive mode versus an active mode, the students learn differently; as you can see from the constructive-learning bullet point there, students learned more in the constructive mode, which is consistent with learning-sciences theory. This kind of work illustrates how adaptive, engaging questions can make AI tutoring more effective.

Now, during the problem-solving stage, I encourage a Socratic problem-solving mode. The Socratic method is thousands of years old, and AI is actually rediscovering it. Instead of giving answers, the AI asks: Why? What if? Could you try another way?

Throughout this process, the AI tutor should adjust its hints in real time so the learner stays in that sweet spot of challenge: not too easy, not too hard. That's where growth happens. That's where learning happens.
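The real-time hint adjustment described here can be sketched in a few lines. This is a minimal illustration of the "sweet spot" idea, not OpenAI's actual study-mode logic; the hint texts, the escalation rule, and the 60-second threshold are all illustrative assumptions.

```python
# Illustrative hint ladder: gentle nudges first, more revealing help later.
# The texts and thresholds below are assumptions for this sketch.
HINT_LADDER = [
    "What's the first step you might try?",
    "Can you break the problem into a smaller one?",
    "Here's an analogy: think of the circle as a pizza.",
    "Let's walk through the first step together.",
]

def next_hint(failed_attempts: int, seconds_stuck: float) -> str:
    """Pick a hint level from how often and how long the student has struggled."""
    level = failed_attempts + (1 if seconds_stuck > 60 else 0)
    # Never jump straight past the most revealing hint.
    level = min(level, len(HINT_LADDER) - 1)
    return HINT_LADDER[level]
```

A fresh attempt gets the gentlest question; repeated failures or long pauses escalate gradually, so the struggle that drives learning is reduced but never removed.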

The research I'm citing here showed how adaptive scripting in an online collaborative environment helped learners stay engaged and productive.

Here are some of the prompts the AI tutor could give to keep students engaged.

For example, the last prompt, in box number four, says: "It seems like you're moving on before understanding your errors. Please spend more time reviewing this page."

Prompts like this can bring a student's engagement back. And in the same way, when AI tutors adjust their prompts and guide students dynamically through their problem-solving processes, they can support student growth without removing the struggle that's essential to learning.

Traditionally, assessment means tests: stressful, high-stakes, and often disconnected from actual learning.

Because of the stress, a student might make some mistakes, even if they know the concepts. But now, with AI technology, assessment can be woven into the learning process itself.

For example, it can observe what the student does: their actions and their language.

And here is an example from our lab, showing this kind of non-invasive assessment.

So the student is solving a math problem using a game-based system, and the system records the student's finger traces. If the student moves the first block and then the third block, the system records all of that. Using these traces, the system can gauge whether the student is just randomly guessing or making meaningful movements that suggest learning.

Now, in this research, we showed that behavioral data can accurately reveal understanding, turning assessment from a stressful exam into a natural part of learning. In this work, we used geometric features of students' drag-and-drop actions to understand their learning; it shows how subtle behavioral data can reveal deep insights, making assessment invisible, continuous, and supportive.
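The kind of geometric trace features described here can be illustrated with a short sketch. The feature names and the guessing threshold below are my own assumptions for illustration, not the lab's actual model: the idea is simply that a purposeful drag moves almost straight toward its target, while a wandering trace does not.

```python
import math

def trace_features(points):
    """points: list of (x, y) finger positions for one drag-and-drop move."""
    path_len = sum(
        math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)
    )
    direct = math.dist(points[0], points[-1])
    # Directness ratio: 1.0 means a perfectly straight drag to the target;
    # a low ratio means the finger wandered relative to the net displacement.
    directness = direct / path_len if path_len else 1.0
    return {"path_length": path_len, "directness": directness}

def looks_like_guessing(points, threshold=0.5):
    """Flag a move whose trace wanders too much (illustrative threshold)."""
    return trace_features(points)["directness"] < threshold
```

Features like these could then feed a classifier; the point is that the assessment signal comes from ordinary interaction data, with no test being administered.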

Finally, feedback. Just telling the student "correct" or "incorrect" is not enough; the student needs to know why. Imagine finishing a math problem and the AI says, "Yep, you got it right, let's move on," versus, "Yes, that's right, but why did that method work? Here's how it connects to other ideas." AI models like EPTX demonstrate how a system can not only solve problems but also explain its reasoning. Say we are solving a math word problem like this one. Traditionally, the AI model would just spit out the answer. Instead, AI models should be able to provide explanations.

Given a problem, it should be able to tell what each number and each variable in the problem means. If the AI model can do that, then the system doesn't just generate answers; it produces explanations that help learners connect steps to concepts. This is the kind of feedback that turns answers into understanding.

And this shift from answer-giving AI to explaining AI opens new possibilities for truly meaningful feedback.

So where does this leave us? AI-powered personalized learning is real, and it's improving quickly.

But the bigger question isn't whether we can build this technology; it's what we want education to become. Two shifts seem especially important. The first is going from test prep to skill building: with AI handling the routine drills, we can move beyond high-stakes exams to continuous skill growth. The second is going from memorization to creativity: if AI takes care of repetition, then teachers can use class time for debate, collaboration, and imagination. In other words, less cramming, more creating. AI will continue to advance, I'm sure of that much, but how we use it in education is up to us.

Students don't need just simple answers. They need the encouragement to ask questions, the freedom to make mistakes and move on, and the chance to learn at their own pace. If we embrace AI thoughtfully, we won't just change the technology of the classroom; we can change the culture of learning itself. And I want to end with this quote. After watching Professor Yoon's lecture, I thought maybe I should squeeze in a quick joke too, but I couldn't think of one in thirty seconds. So I'll end with what is hopefully a powerful quote instead: AI doesn't replace teachers; it amplifies their ability to personalize learning. Thank you.

Thank you very much for your talk.

Last, to conclude session two, we'd like to invite Professor Eunsoo Lee of the Department of Philosophy. Please welcome him.

Professor Lee: So I'm Eunsoo Lee, the director of the Digital Humanities Center at SNU, and I'm very honored to deliver the final presentation of today's symposium. Let's begin with two powerful marketing logos. Some of you might have seen this first one somewhere in a music school in North America: the All-Steinway School. It's not without its critics, but it is undeniably one of the most successful branding campaigns in the piano industry.

You're also familiar with another logo: "We Proudly Serve Starbucks," right? This leads me to a thought experiment. Imagine seeing logos like "All-OpenAI School," or "We Proudly Serve GPT-5.0." Do you find this plausible?

Well, I wouldn't say it's impossible. But for now, my answer is not yet. Why is that?

Because the unspoken reality on our campuses today is much closer to "We secretly use GPT-5.0."

The title of my talk is "The Impact of AI on Education: Navigating the Hype and the Fear." But my goal today is not simply to rehash the many anxieties we all share.

Instead, given that we are here to explore a potential SNU-OpenAI collaboration, I want to use this time to propose a concrete vision for how we can work together to build a more resilient and meaningful future for education.

As a classicist trained in Silicon Valley and an intellectual historian, I have always been fascinated by the eureka moment.

You all know this, right? The aha moment. History shows us a beautiful tradition in which great minds gave their peers the time and opportunity to discover new knowledge for themselves.

We see this in ancient Greece: when Archimedes challenged his fellow mathematicians across the ancient Mediterranean world to solve new problems, he sparked a collaborative pursuit of knowledge.

And we see it again in the 17th century, when Johann Bernoulli threw out the brachistochrone curve problem to his fellow mathematicians in early modern Europe. He was not just showing off his genius; he wanted to ignite the intellects of Europe.

This illustrates a fundamental truth: the joy and struggle of discovery are indispensable to true learning. When ChatGPT arrived, my initial reaction was one of profound caution. Here was a tool that could deliver answers instantly, potentially short-circuiting that essential process of discovery. That's a problem.

The pace of change has been so rapid, as you all know, that we haven't had enough time for deep collective reflection on it. We still don't have a clear answer to questions like: will AI make us stupid?

You all did a wonderful job; everybody was perplexed, right, by your fast pace. But I believe we must still hold firm to one principle: authentic learning happens not when an answer is given, but when it is earned

through a process of inquiry and discovery. In this context, the development of features like study mode by OpenAI and other leading labs is a very welcome step. It shows a clear recognition of the problem. These tools are a direct response to a growing concern about the long-term cognitive debt students might accumulate when they rely passively on AI.

Some of you may have seen the news: the MIT Media Lab recently published its first brain-scan study on this, right? The worry is that we are training students to become excellent prompters while letting their muscles of critical thinking and creativity atrophy.

Study mode itself is a smart design. Since the OpenAI team just gave a full presentation on it, I won't go into further detail.

It acts as an interactive tutor, as he showed, guiding students with questions rather than simply handing over the answers. However, its greatest strength is also its greatest weakness: the temptation to exit study mode and get a direct answer in the regular mode is just one click away. It becomes a constant battle of willpower, to use or not to use, and we cannot expect every student to win that battle every time.

The reality is this: AI providers will continue to build more responsible, more heuristic tools, and yet, as this report from Anthropic shows (my apologies to my friends here for citing a competitor's report, no offense intended), students will continue to oscillate between collaborative use and direct use. They oscillate again and again, right? So we need a structural solution, not one that relies solely on individual virtue.

This brings us to an even more concerning phenomenon: the AI humanizer. Even as students use AI, they are acutely aware of the "AI smell." Have you ever recognized that smell of AI? What is it? How would you describe it? It's that very polished, very well-organized generated text. We use it, but at the same time we feel a certain reluctance toward it, right?

This has led to an absurd technological arms race, where students use one AI to write a paper and another AI to humanize it. It's a very silly situation: like applying a function and then applying its inverse, back and forth, a tremendous amount of effort just to create an illusion of authenticity. But what is this humanizing process?

In truth, it's often just about injecting deliberate imperfections, like misplacing some punctuation, altering syntax, or adding noise, in order to evade detection algorithms.

It's not about making the text more authentic or more humane; it's about making it less detectable. So it doesn't deserve the name "AI humanizer." It's not a humanizer at all.

This is a tragedy, because AI holds immense creative potential. Some of you may know the Coca-Cola "Create Real Magic" campaign, in which people were able to make creative images using AI tools. I also heard this from a friend at Harvard: when the new semester opened, faculty members recommended that students not use AI tools, reminding them of their high, expensive tuition. You are paying $80,000 USD; if you use AI tools to write your paper, how can I give a proper response to it?

So remember how much you are paying for Harvard. Well, at SNU we have a much lower tuition, but that doesn't mean students may freely use AI in their writing. Our hesitation to fully embrace AI in education stems from the fear that the unique and diverse voices of our students will be diminished.

The numerous copyright and training-data lawsuits (look at the map here, still unresolved) deepen this concern. We are becoming trapped in a cycle of deception and detection, deception and detection, like an ouroboros eating its own tail. We are consuming our own authenticity.

However, what we as educators and teachers want to see in the classroom is not monotonous, perfect impersonality. That is not what we want from a paper. What we want is more like imperfect beauty, touched by one artisan, by one student.

You know, this one is a dal hangari, a Korean moon jar; this other one is mass-produced in a factory. Sometimes AI-generated text feels to me like those mass productions, but what we want may look imperfect, yet it is genuine and individual, each student's very own.

This is the problem space where I believe a powerful collaboration between SNU and OpenAI can make a real difference. I'll focus my proposal on the two foundational pillars of academia: reading and writing. First, reading. In an age where AI can summarize hundreds of pages in a minute, what will be the future of reading? Sometimes I am quite perplexed about how to make students actually read the articles.

At my lab, we believe the answer is to make reading a social and visible act. That's why we developed Semicolon. You know the punctuation mark, the semicolon? It combines a period and a comma: your sentence has ended, but I want to add more of my thoughts onto the text. We want to link our thoughts together, as many as possible.

It shares a philosophy with platforms developed in the U.S. that some of you may know, Perusall and Hypothesis, which came out of Harvard and MIT, but I'm more inclined to analyze the thinking data inside the platform. Here you can see it in action; sorry, these are real captured images from my own class. Students leave their messages and notes in the margin, and they just participate.

All these bubbles are individual thoughts from the students taking the class, and, as mentioned, they can communicate in the margins. What I really wanted to show was a video, but I've reduced it to a screen capture here. If you click the statistics button at the bottom, we use the OpenAI API to analyze all these bubble thoughts and produce clusters, themes, and basic statistics, so that students can see what's going on in the classroom before they enter the physical classroom. We then have students accumulate the most thought-provoking and insightful notes in their own My Box and My Bookmark, so they can review them throughout their university career. They attend university for four years or more, but our thoughts in the classroom evaporate; they are ephemeral.
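The clustering step described here, turning the margin-note "bubbles" into themes, might look roughly like the following. The real Semicolon system reportedly uses the OpenAI API; as a self-contained stand-in, this hedged sketch groups notes by simple word overlap (Jaccard similarity) rather than API embeddings, and the function names and threshold are assumptions.

```python
def _words(note: str) -> set:
    """Lowercased bag of words for one margin note."""
    return set(note.lower().split())

def cluster_notes(notes, min_overlap=0.25):
    """Greedily group annotations whose word sets overlap enough.

    Each cluster is represented by its first note; a new note joins the
    first cluster whose representative it resembles, else starts its own.
    """
    clusters = []  # each cluster: list of note strings
    for note in notes:
        for cluster in clusters:
            rep, w = _words(cluster[0]), _words(note)
            jaccard = len(rep & w) / len(rep | w)
            if jaccard >= min_overlap:
                cluster.append(note)
                break
        else:
            clusters.append([note])
    return clusters
```

In a production version, one would swap the word-overlap similarity for semantic embeddings and label each cluster with a short theme summary before showing it to the class.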

You may not remember what you said in your freshman year. But at SNU, we want to gather all this thought data generated in the classroom, and we want to use it to fine-tune the foundation models we work with.

And sometimes we also think about masking sentences, because students browse so fast. Sometimes I just want to give them suspense, as Professor Kwon showed us today. When solving mathematics, we need time to think; but when reading a text, students don't think much about what comes in the next sentence. That's why we put a kind of masking tape over the text while reading. This is one of the attempts we are making around reading.

Okay, now let's move on to writing. For writing, we face an even more existential challenge. The traditional college essay, the cornerstone of university education for a century, is in crisis, as you know. The old paradigm of product-centric assessment is broken, I would say: we can no longer believe that students wrote an essay by themselves. So at SNU we are integrating a GPT detector, GPT Killer, into our learning management system. Maybe this semester will be chaos. It is chaos.

There will be debates between students and teachers.

Students will argue: "Oh, no, this is an essay I wrote by myself. I didn't use any AI tools."

But when GPT Killer says there is a chance that more than 60% of an essay was written by AI, there will be very tricky disputes between the two sides.

So my core proposal is this.

I want to recommend and suggest that SNU and OpenAI collaborate to design and pilot a so-called human-AI co-authoring studio.

This is not just a new tool. It's a new ecosystem for writing assessment.

Imagine a writing environment that logs the entire creative writing journey.

We would shift our focus to new metrics of skill, because teachers should be able to see every step students took while writing a paper together with AI.

For example, consider prompt sophistication. How well does the student articulate their needs to the AI? Are they asking deep questions, or simply saying, "Write me an essay"?

Number two, criticality of responses. With a log of everything, we can evaluate this as well, and this is crucial: we would assess how the student evaluates, rejects, and refines the AI's suggestions. Is the student the master of the tool, or its servant?

And number three, strategic planning: the platform would require students to create their own outlines and initial arguments before engaging the AI, allowing us to assess their foundational thinking. We could even develop a metric to quantify the amount of original text and substantive revision contributed directly by the student.
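That last metric, quantifying the student's original contribution, could be prototyped with a simple text-overlap measure. This is a hedged sketch under the simplifying assumption that "student contribution" means character-level overlap between the final essay and the student's own pre-AI draft; the function name is hypothetical, not part of any existing system.

```python
import difflib

def student_contribution(student_draft: str, final_essay: str) -> float:
    """Estimate the fraction (0..1) of the final essay that matches
    the student's own draft rather than AI-suggested text."""
    matcher = difflib.SequenceMatcher(None, student_draft, final_essay)
    # Sum the sizes of all matching blocks between draft and final text.
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(final_essay) if final_essay else 1.0
```

A real co-authoring studio would refine this with the full keystroke and prompt log, since raw character overlap cannot distinguish substantive revision from light paraphrase.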

Well, the problem is that we are not receiving the logs of what students do in OpenAI's ChatGPT. If that were possible, we could give students a proper evaluation, because an AI detox is not possible for paper writing; we cannot monitor students for weeks on end.

For a written exam, a proctor can monitor students for three hours; but writing a term paper takes several weeks, so we cannot exclude the use of AI in paper writing. That's why I believe an AI detox is impossible. Instead, we need to think about how to create the co-authoring studio and make the transition from product-centric to process-centric assessment.

Okay, to conclude, my vision for AI in education is one that protects and cultivates students' agency. Thank you.

I think, I read, thus I write. To do this, we must redefine each act we place high value on. Reading must be an active process: it must be Interlinear analysis; students must Redefine the concepts; they must Engage with the text; they must Appreciate what the author is saying; and sometimes they must be allowed to Deconstruct the text. Collect all the initial letters and you get an acronym: I READ. I want to recover this atmosphere, at SNU at least. In an era where "it" reads much more efficiently (it reads, right?), our mission is to rediscover the unique, irreplaceable value of "I read."

When it is "I" who reads, there is meaning and value in meeting in the classroom. If we all just use AI tools, why would we even need to meet and discuss, right? So recovering agency, recovering the act of "I read," is crucial for the future of the university and higher education. Let's move beyond the fear and the secret usage. Don't say "I secretly use GPT-5.0." I expect that someday we can put up a logo: "All-OpenAI School," right? So let's work together to build a new, more transparent, and more meaningful future for education. Thanks for listening.

Thank you very much, Professor Lee, for your relevant critique and constructive suggestions. Now let me invite Professor Kim Tae-gyun, Vice President for International Affairs of Seoul National University, back to the stage to deliver the concluding remarks.

I'm back.

I hope everyone has enjoyed today's symposium. Speaking for myself, I have been tremendously enlightened by the presentations from the OpenAI team as well as the SNU faculty.

Let me wrap up today's symposium by mentioning three keywords instead of concluding remarks.

First, a good partnership between the two institutions. I think this is good evidence, a gateway to inviting further Korean universities into this kind of collaboration.

It is also a platform for AI, truly ushering in a new chapter and a new era of innovation that will reshape our future education, research, and society. The second keyword I adopted from Professor Yoon's presentation: we are still hungry, so we are ready to go further with this partnership, with OpenAI and the other leaders in the AI industry. And the last one: social impact. Beyond technical cooperation between the two institutions, we have to tackle many different crises at the national and global levels. As a social scientist, I think of climate change, gender inequality, labor issues, and the many other social problems we must address by using AI, through a real partnership between the two institutions.

Okay, so I will close today's symposium here. I look forward very much to the further collaboration and innovations that lie ahead. Thank you to all the distinguished guests, the OpenAI team, and all the participants, especially the students.

This is not the end; actually, we have one more thing: the raffle pick. As I mentioned at the beginning of today, we will select five participants, and if you are selected, you will receive an invitation to tonight's exclusive OpenAI evening event in Seoul.

Thank you very much indeed for your participation and enjoy the rest of the day. Thank you.
