Featured
56:51
# AI Adoption
# AI Sports
# Sora
Event Replay: Game Time With the San Antonio Spurs: Engaging Fans with ChatGPT & Sora



Charlie Kurian, Jordan Kolosey & Johnny Rodriguez
58:23
Advancing Diagnostic Medicine: The Role of AI in Future Healthcare
Shahram Yazdani, Sebastian Salazar, Mericien Venson, MD, PhD & 1 more speaker
All Content
A personal essay from Chris Lehane outlining a “Fair Chance Agenda” for 2026: expand access and literacy, protect kids, invest in energy + data infrastructure, and ensure AI-driven growth is broadly shared.
# OpenAI Leadership



Nate Gross, James Hairston & Kate Rouch · Jan 16th, 2026
In this Forum session, OpenAI leaders Kate Rouch and Dr. Nate Gross joined James Hairston to discuss the launch of ChatGPT Health and OpenAI for Healthcare. They explored how AI can responsibly support patients, clinicians, and health systems by offering personalized, secure, and contextual assistance. Nate emphasized that over 40 million people ask ChatGPT health-related questions daily, highlighting both the demand and responsibility for thoughtful design.
Kate shared her personal experience navigating a breast cancer diagnosis, explaining how ChatGPT empowered her to better understand clinical literature, prepare for specialist visits, and communicate with her family. The speakers detailed how ChatGPT Health integrates with medical records, wearables, and health apps to provide a more holistic, patient-centered experience.
For clinicians, OpenAI’s tools help reduce administrative burden, align care with institutional policies, and improve documentation. Together, they underscored the importance of trust, transparency, and collaboration in building AI tools that enhance—not replace—human judgment in healthcare.
# Healthcare
# OpenAI Leadership
# ChatGPT for Health
# OpenAI for Healthcare
One year after DeepSeek-R1 reshaped the AI race, OpenAI assesses where the U.S. still leads, how China is deploying at scale, and what will decide global AI leadership in 2026.
# Policy Research
# Security
# Scaling AI
# Innovation
Millions use ChatGPT to make sense of healthcare—comparing plans, costs, and coverage—as AI helps bridge access gaps where the system falls short.
# Healthcare
# Everyday Applications
# Socially Beneficial Use Cases
# AI Adoption
# Responsible AI
AI infrastructure can power communities, not strain them. OpenAI’s Wisconsin Stargate site pairs clean energy and local jobs with new insights on solopreneurs using ChatGPT.
# Infrastructure as Destiny
# Responsible AI
# Socially Beneficial Use Cases
# Future of Work
# AI Adoption
# OpenAI Leadership

Chris Nicholson · Dec 16th, 2025
# AI Science
# OpenAI Leadership
# Scientific Advancement


Brian Spears & Kevin Weil · Dec 16th, 2025
In this Forum session, OpenAI’s VP of Science Kevin Weil and Brian Spears, Director of Lawrence Livermore National Laboratory’s AI Innovation Incubator (AI3), will explore how advanced AI systems are beginning to make direct, measurable contributions to scientific research.
The discussion will highlight the OpenAI–LLNL partnership and what it looks like when frontier reasoning models are embedded in real scientific workflows—from accelerating hypothesis generation and analyzing complex datasets to uncovering connections that were previously out of reach. Weil will share the vision behind OpenAI for Science, including the ambition to “compress 25 years of scientific progress into 5” by giving researchers powerful new instruments for discovery. Spears will offer the lab-level perspective on how AI is already expanding the pace, scale, and ambition of work across fields like energy, materials science, and high-performance computing.
By bringing frontier AI into some of the nation’s most capable—and most secure—research institutions, OpenAI and the national labs are working together to build a more rapid, reliable, and resilient model for turning scientific insight into real-world impact.
# AI Science
# Infrastructure as Destiny
# OpenAI Leadership



Natalie Cone, Katherine Elkins, Leonardo Impett & 2 more speakers · Dec 11th, 2025
This OpenAI Forum session focused on how AI is accelerating research across disciplines—especially in the humanities and social sciences, where it’s enabling new ways to test theories, analyze culture, and train students in computational methods. The program framed AI as a powerful scientific tool that can compress long research timelines by handling tasks that may require deep, sustained reasoning and large-scale synthesis.
Katherine Elkins shared “applied humanities” projects that model emotional arcs in novels, compare how translations reshape narrative patterns, and surface meaningful peaks and shifts that often align with what close-reading tends to notice. She also showed how students are using AI to explore cultural datasets—ranging from storytelling structures and social media dynamics to bias investigations in image generation, legislative text mining, and network/knowledge-graph analysis.
Marco Uytiepo described how deep learning accelerates nanoscale brain-imaging analysis, turning months or years of manual reconstruction into days and helping researchers study circuit features linked to memory.
Leonardo Impett argued that modern computer vision models don’t just analyze images—they embody a “machine visual culture,” and researchers can use art-historical methods to study both visual media and the cultural lens of the algorithms themselves.
The event ended with a live Q&A where participants discussed responsible use with domain experts, creative uses of generative tools in storytelling, examples where AI changes research direction (not just speed), translation effects, long-term implications for analyzing AI-generated imagery, global archival preservation, and practical first steps for bringing AI methods into labs and classrooms.
# AI Education
# AI Pedagogy
# Edu Use Cases
OpenAI unveils grid-flexible data centers that cut peak load and lower costs, while enterprise AI adoption grows globally and boosts workers’ productivity.
# Infrastructure as Destiny
# Scaling AI
# AI Economics
# Innovation
# Future of Work



Olivia Pavco-Giaccia, Moran Cerf, Greg Niemeyer & 2 more speakers · Dec 9th, 2025
The session explored how AI is reshaping education. OpenAI’s Olivia Pavco-Giaccia opened by outlining why teaching and learning sit at the core of the company’s mission and how student adoption—students now make up more than 40% of ChatGPT users worldwide—has accelerated AI’s integration into campuses, leading to large-scale deployments such as ChatGPT EDU across the CSU system and to emerging research partnerships aimed at improving learning outcomes. She emphasized moving beyond fears of cheating to unlock personalized learning support, including early progress with Study Mode.
Columbia University’s Moran Cerf then connected AI to cutting-edge neuroscience and behavioral research, detailing how AI literacy is now essential for executives and students alike, and sharing striking findings from brain-interface studies that reveal how humans make decisions, process memories, and respond to engagement—insights he believes could transform pedagogy and human–AI collaboration.
Miami Dade College’s Beth Muturi followed with a pragmatic model for “humanizing AI,” showing how community-embedded capstone projects allow students to build real AI solutions for local organizations, expanding opportunity and practical skill development.
UC Berkeley’s Greg Niemeyer proposed a holistic AI pedagogy organized around three modes—minus AI, plus AI, and times AI—arguing that education must balance embodied human experience, critical engagement with AI, and transformational uses of intelligent systems to sustain meaning, collaboration, and truth in an AI-saturated era.
Finally, UCLA’s Tina Austin presented her “Autumn Blooms” framework, a recursive, discipline-flexible alternative to Bloom’s taxonomy that shifts assessment away from grading AI-generated output and toward evaluating students’ comparative reasoning, critique, and metacognitive understanding, offering a path for assignments that integrate, challenge, or intentionally exclude AI depending on pedagogical need.
# AI Pedagogy
# AI Education
# Edu Use Cases

