OpenAI Forum

# STEM
# Innovation
# Higher Education
# o1 reasoning model

The Future of Math with o1 Reasoning

During the virtual event on December 3rd, Prof. Terence Tao and OpenAI's Mark Chen and James Donovan engaged in a deep discussion on the intersection of AI and mathematics. They explored how AI models, particularly new reasoning models, could enhance traditional mathematical problem-solving and potentially transform mathematical research. The speakers discussed the integration of AI into various scientific fields, emphasizing AI's role in accelerating discovery and innovation. Key topics included the challenges of AI in understanding and contributing to complex mathematical proofs, the evolving nature of mathematical research with AI integration, and the future of collaboration between AI and human mathematicians. The conversation highlighted both the potential and the current limitations of AI in advancing mathematical sciences.
Terence Tao, Mark Chen & James Donovan · Mar 13th, 2025
David Autor & Tyna Eloundou · Mar 12th, 2025
About the Talk: Much of the value of labor in industrialized economies derives from the scarcity of expertise rather than from the scarcity of labor per se. In economic parlance, expertise denotes a specific body of knowledge or competency required for accomplishing a particular objective. Human expertise commands a market premium to the degree that it is, first, necessary for accomplishing valuable objectives, and second, scarce, meaning not possessed by most people. Will AI increase the value of expertise by broadening its relevance and applicability? Or will it instead commodify expertise and undermine pay, even if jobs are not lost on net? Autor will present a simple framework for interpreting the relationship between technological change and expertise across three different technological revolutions. He will argue that, due to AI's malleability and broad applicability, its labor market consequences will depend fundamentally on how firms, governments, NGOs, and universities (among others) invest to develop its capabilities and shape its applications.
# Higher Education
# Future of Work
# AI Literacy
# Career
# Social Science
Chi-kwan (CK) Chan · Mar 12th, 2025
Using the imaging of black holes as a case study, this talk highlights the key requirements for AI to make meaningful contributions to astrophysical research. Dr. Chan introduces several pioneering projects that are integrating AI into astrophysics, covering aspects such as instrumentation, simulations, data processing, and causal inference. He also discusses an innovative project aimed at enabling AI to gain scientific insights independently.
# STEM
# AI Research
# Higher Education
Siya Raj Purohit, Jake Cook & Natalie Cone · Mar 12th, 2025
The OpenAI Forum hosted an engaging session titled "Harvard’s AI-Enhanced Classroom: Revolutionizing Learning with Custom GPTs", featuring Jake Cook from Harvard Business School and Siya Raj Purohit from OpenAI. Siya provided insights into AI-native universities, showcasing how ChatGPT transforms education at individual, team, and institutional levels. Jake presented real-world applications of AI in his classroom, emphasizing creative experimentation, personalized learning, and the power of play to unlock students' potential. The event highlighted innovative uses of AI, including custom GPTs for dynamic teaching and adaptive learning, fostering both efficiency and critical thinking. The discussion underscored AI's role in reshaping education and left participants with actionable insights and strategies for adoption.
# Higher Education
# Future of Work
# AI Adoption
Iavor Bojinov, Siya Raj Purohit & Natalie Cone · Jan 16th, 2025
The OpenAI Forum's first event of 2025 brought together Assistant Professor Iavor Bojinov from Harvard Business School (HBS) and Siya Raj Purohit from OpenAI's Education Team for a compelling discussion on the integration of artificial intelligence into education and business learning. Hosted by Natalie Cone, the forum highlighted groundbreaking efforts to establish AI-native universities and innovative strategies for incorporating AI into teaching and learning.

Siya Raj Purohit introduced the audience to OpenAI for Education's initiatives, particularly the launch of ChatGPT Edu in 2024. This enterprise-grade platform offers secure and collaborative tools for educators, students, and institutions to deepen engagement with knowledge. ChatGPT Edu fosters network effects on campuses by enabling professors to create custom solutions for their classes and allowing students to engage with content in an interactive, conversational manner. Siya also shared OpenAI's broader vision for AI-native universities, where students experience AI touchpoints throughout their academic journey, from orientation to career services and beyond. These innovations are designed to empower students to achieve their goals with greater efficiency and creativity.

Professor Iavor Bojinov shared the journey of integrating AI into the MBA curriculum at HBS. He detailed the development of the school's first AI-native course, "Data Science and AI for Leaders," which emphasizes foundational data science, AI, machine learning, and managerial decision-making. This course reflects a significant shift in business education, incorporating tools like ChatGPT, custom tutor bots, and advanced analytics bots to support interactive and practical learning. By redesigning the course as an AI-native experience, HBS has replaced traditional coding lessons with a focus on prompting, enabling students to engage deeply with data analysis while still addressing managerial challenges related to AI integration.

During his presentation, Professor Bojinov highlighted key lessons learned from his pioneering efforts. Training, he emphasized, is essential for students and faculty to adopt AI tools effectively. He also noted the importance of aligning AI applications with the curriculum to prevent confusion and ensure meaningful learning outcomes. Exposing students to a variety of AI platforms prepares them for real-world scenarios where multiple tools are used in tandem, and understanding the strengths and limitations of these tools is vital for success.

One of the standout features of the event was Professor Bojinov's hands-on teaching exercise, the "AI-First Snack Company." This activity enables students to explore AI's capabilities in market research, product development, and marketing strategies, providing an interactive way to learn prompting techniques and understand the creative potential of generative AI. The exercise also underscores AI's limitations, offering a balanced perspective on its applications in problem-solving and innovation.

The session concluded with announcements of upcoming events, including a panel discussion with OpenAI's new VP of Education, Leah Belsky, and other academic leaders, as well as a session on the integration of AI in research, led by Dr. Rojansky from Stanford University. The OpenAI Forum continues to prioritize meaningful collaboration and invites members to engage actively through its global chapters and referral network.
This event set the tone for 2025, providing valuable insights into the transformative potential of AI in education and fostering collaboration among academics and professionals. The discussions and initiatives presented underscored the pivotal role AI is poised to play in shaping the future of learning and professional development.
# Higher Education
# AI Adoption
# Data Science
Lois Newman led another session in the exciting ChatGPT Enterprise Learning Lab series. During the session, participants gained valuable insights into deploying ChatGPT widely across their organizations, along with best practices for driving user adoption. Whether attendees were just beginning with ChatGPT or looking to scale existing initiatives, the session provided actionable strategies for ensuring success. Designed to guide users through the ins and outs of GPT technology, the series offered a comprehensive overview of essential topics. The agenda covered:
1. AI Strategy
2. Change Management
3. Understanding ChatGPT Users
4. Developing Use Cases
5. Adoption Initiatives
# AI Literacy
# Technical Support & Enablement
# AI Adoption
Ahmed El-Kishky, Hongyu Ren & Giambattista (Gb) Parascandolo · Oct 4th, 2024
Natalie Cone introduced three key contributors to OpenAI's o1 model, Ahmed El-Kishky, Hongyu Ren, and Giambattista (Gb) Parascandolo, who discussed the development and reasoning capabilities of the o1 model. The speakers shared insights on how o1 uses reinforcement learning to develop its reasoning skills, including breaking down complex problems into sub-steps, correcting errors, and employing a structured thought process similar to how humans solve problems. The discussion emphasized the significance of reasoning in AI, highlighting o1's ability to handle complex tasks, including high-level math problems and code generation. Ahmed presented o1's advanced capabilities, showing its superior performance on benchmarks like AIME and Codeforces and demonstrating how reasoning allows the model to explore different approaches before reaching a solution. Hongyu introduced o1-mini, a smaller, more cost-efficient version optimized for STEM tasks that maintains strong performance on general inquiries. The event also included a Q&A session, where the speakers addressed questions on the nuances of reasoning, o1's applicability in creative domains, and its potential impact on AGI development. Overall, the discussion showcased o1 as a pioneering advancement in reasoning-focused AI, with significant implications for the future of large language models.
# AI Research
# o1 reasoning model
Lama Ahmad · Mar 8th, 2024
In a recent talk at OpenAI, Lama Ahmad shared insights into OpenAI's Red Teaming efforts, which play a critical role in ensuring the safety and reliability of AI systems. Hosted by Natalie Cone, OpenAI Forum's Community Manager, the session opened with an opportunity for audience members to participate in cybersecurity initiatives at OpenAI. The primary focus of the event was red teaming AI systems: a process for identifying risks and vulnerabilities in models to improve their robustness.

Red teaming, as Ahmad explained, is derived from cybersecurity practices, but has evolved to fit the AI industry's needs. At its core, it's a structured process for probing AI systems to identify harmful outputs, infrastructural threats, and other risks that could emerge during normal or adversarial use. Red teaming not only tests systems under potential misuse, but also evaluates normal user interactions to identify unintentional failures or undesirable outcomes, such as inaccurate outputs. Ahmad, who leads OpenAI's external assessments of AI system impacts, emphasized that these efforts are vital to building safer, more reliable systems.

Ahmad provided a detailed history of how OpenAI's red teaming efforts have grown in tandem with its product development. She described how, during her tenure at OpenAI, the launch of systems like DALL-E 3 and ChatGPT greatly expanded the accessibility of AI tools to the public, making red teaming more important than ever. The accessibility of these tools, she noted, increases their impact across various domains, both positively and negatively, making it critical to assess the risks AI might pose to different groups of users.

Ahmad outlined several key lessons learned from red teaming at OpenAI. First, red teaming is a "full stack policy challenge," requiring coordination across different teams and areas of expertise. It is not a one-time process, but must be continually integrated into the AI development lifecycle. Additionally, diverse perspectives are essential for understanding potential failure modes. Ahmad noted that OpenAI relies on internal teams, external experts, and automated systems to probe for risks. Automated red teaming, where models are used to generate test cases, is increasingly useful, but human experts remain crucial for understanding nuanced risks that automated methods might miss.

Ahmad also highlighted specific examples from red teaming, such as the discovery of visual synonyms, where users can bypass content restrictions by using alternative terms. She pointed out how features like DALL-E's inpainting tool, which allows users to edit parts of images, pose unique challenges that require both qualitative and quantitative risk assessments. Red teaming's findings often lead to model-level mitigations, system-level safeguards like keyword blocklists, or even policy development to ensure safe and ethical use of AI systems.

During the Q&A session, attendees raised questions about the challenges of red teaming in industries like life sciences and healthcare, where sensitive topics could lead to overly cautious models. Ahmad emphasized that red teaming is a measurement tool meant to track risks over time and is not designed to provide definitive solutions. Other audience members inquired about the risks of misinformation in AI systems, especially around elections. Ahmad assured participants that OpenAI is actively working to address these concerns, with red teaming efforts focused on areas like misinformation and bias.
In conclusion, Ahmad stressed that as AI systems become more complex, red teaming will continue to evolve, combining human evaluations with automated testing to scale risk assessments. OpenAI’s iterative deployment model, she said, allows the company to learn from real-world use cases, ensuring that its systems are continuously improved. Although automated evaluations are valuable, human involvement remains crucial for addressing novel risks and building safer, more reliable AI systems.
# Expert AI Training
# AI Literacy
# AI Research