Forum Hits of 2025
OpenAI & the San Antonio Spurs Empower Parent Communities in Texas with AI Literacy

Yochi Dreazen · Apr 1st, 2026

# AI Adoption
# AI Literacy
# AI Safety

Chris Nicholson · Mar 10th, 2026
Renowned mathematician Terence Tao and OpenAI Chief Research Officer Mark Chen headlined a day of talks, panels, and public discussion. They were joined by OpenAI's VP of Science Kevin Weil and such luminaries as Caltech mathematician Sergei Gukov, UCSB physicist Nathaniel Craig, Stanford's Eva Silverstein, Lance Dixon of SLAC National Accelerator Laboratory, UCLA's Zvi Bern, Wahid Bhimji of Lawrence Berkeley National Laboratory and NERSC, University of Wisconsin physicist Kyle Cranmer, and OpenAI's Alex Lupsasca and James Donovan.
# STEM
# AI Mathematics
# AI Physics


Julie Cordua & Chelsea Carlson · Feb 11th, 2026
The content in this presentation covers subjects that may be distressing, including technology-facilitated child sexual abuse and exploitation. The presentation contains no actual or depicted child sexual abuse material. Attendees are encouraged to practice self-care and wellness as needed.
At this OpenAI Forum event, Natalie Cone hosted a conversation with Julie Cordua, CEO of Thorn, and Chelsea Carlson, who leads child safety efforts across OpenAI’s products, focused on protecting children in digital spaces. They described how online harms have evolved over the last decade, including increased grooming, sextortion, and the rise of synthetic or AI-generated child sexual abuse material.
Julie explained Thorn’s approach across research with youth, technical innovation, and building tools that help platforms and law enforcement detect abuse, triage cases, and find victims faster. Both speakers emphasized there is no single fix, and that meaningful progress requires safety-by-design, clean training data, strong guardrails, scalable detection, and clear pathways for reporting and response.
They also underscored the mental toll on investigators and moderators and discussed how AI can reduce unnecessary exposure by grouping, prioritizing, and filtering sensitive material without replacing human judgment. During Q&A, they highlighted the importance of real-time, multimodal, and contextual detection, and shared practical guidance for parents centered on engagement, literacy, and keeping open lines of communication with kids.
The session closed with a call for deeper collaboration among nonprofits, tech companies, and governments to improve capacity, transparency, and cross-border coordination to keep children safer online.
# AI Safety
# Ethical AI
# OpenAI Presentation

Chris Nicholson · Feb 7th, 2026
# ChatGPT Tips
# Future of Work
# Upskilling

Chris Nicholson · Nov 21st, 2025
# AI Sports
# AI Adoption
# Community

Yochi Dreazen · Nov 14th, 2025
# AI and Creativity
# AI Education
# AI Pedagogy
# Higher Education


Greg Niemeyer & Natalie Cone · Nov 14th, 2025
The OpenAI Forum hosted educator and data artist Greg Niemeyer from UC Berkeley for a talk on how AI is transforming learning, teaching, and thinking. Building on OpenAI’s mission to ensure broadly distributed benefits from AI, Niemeyer introduced a “Minus AI, Plus AI, Times AI” framework: minus AI for intentionally tech-free, embodied learning; plus AI for transparent, dialectical collaboration with AI; and times AI for AI as a medium that restructures knowledge itself.
He proposed a cognitive insight formula, C = Q × T × K, where meaningful learning depends on the quality of questions (Q), the strength of trust (T), and the richness of the knowledge base (K), emphasizing that if trust collapses, learning outcomes collapse as well. Throughout the talk, he shared concrete classroom experiments showing how AI can either de-skill students or spark creative divergence and multiplayer learning when used thoughtfully and transparently.
He closed by urging educators and learners not to choose one mode, but to move wisely among minus, plus, and times AI to keep curiosity, meaning, and our shared “we” at the center of education in the age of AI.
# AI Education
# AI Pedagogy
# AI Adoption

Lukasz Kaiser · Oct 8th, 2025
Łukasz Kaiser’s OpenAI Forum talk, “Learning Powerful Models: From Transformers to Reasoners and Beyond,” offered a research-focused but deeply values-aligned reflection on how AI is evolving from data-hungry systems toward reasoning models that learn more efficiently and safely. His framing emphasized safety, learnability, and human-like reasoning, and he consistently underscored that making AI both more learnable from less data and more computationally powerful helps keep progress beneficial, efficient, and accessible to all, rather than concentrated among a few actors.
# OpenAI Presentation
# AI Research

Jack Stubbs · Oct 1st, 2025
Jack Stubbs, from the Intelligence and Investigations team, described how his group disrupts organized scam networks while also empowering the public to use ChatGPT as a personal safety tool. He emphasized that most scammers are not inventing new methods but using AI to scale old tricks more efficiently, and that OpenAI has disrupted operations in Cambodia, Myanmar, and Nigeria. Stubbs underscored both the human and financial toll of scams, citing $12 billion in reported U.S. losses last year and even teen suicides linked to sextortion. His team uses a “ping–zing–sting” framework to map scam patterns and has found AI involved at every stage. Importantly, he highlighted that millions of people already use ChatGPT to identify scams, with three times more scam-detection interactions than scammer interactions, and noted growing demand for free, accessible AI safety tools. Stubbs closed by stressing transparency through public reports, partnerships with groups like AARP, and collaboration across tech and civil society to ensure AI strengthens safety, security, and shared benefits for all.
# AI Safety
# Security

Casey Cuny · Sep 26th, 2025
The conversation, featuring educator and California Teacher of the Year Casey Cuny, showcased how ChatGPT is transforming education. Cuny emphasized AI literacy as a moral imperative, framing AI as an opportunity. He spoke about democratizing AI benefits, boosting productivity, expanding educational access, and reinforcing the importance of ethical, human-centered use. The dialogue also highlighted teacher adoption, policy considerations, and workforce readiness, areas where democratic AI values, infrastructure investment, and shared benefits intersect.
# AI Education
# ChatGPT Tips
# Edu Use Cases
At OpenAI, we’re building safe AGI that benefits all of humanity. We look for people who are inspired by this mission and ready to tackle big challenges.
# Career
# Future of Work
# OpenAI Presentation

