OpenAI Forum
OpenAI Presentations
# AI Literacy
# Higher Education
# Technical Support & Enablement

ChatGPT Enterprise 101: A Beginner's Guide to Your AI Work Assistant

About the Talk: Unlock the full potential of ChatGPT Enterprise in this live webinar hosted by Lois Newman, Customer Success Manager at OpenAI. This foundational session provides an overview of ChatGPT Enterprise, including an introduction to GPT-4o and its latest advancements. You'll learn about multimodality and practical everyday use cases, and we'll explore how ChatGPT can make complex data analysis tasks more efficient. To help you get the most out of the tool, the session includes prompt engineering tips and tricks, guiding you on how to craft prompts that get the best results. This is the first in a series unfolding throughout the rest of the year. Our sessions will become more advanced over time and tailored to the needs of the community; your attendance, participation, and questions will inform future sessions. Whether you're new to ChatGPT Enterprise or looking to maximize its capabilities, this webinar offers valuable insights to help you integrate AI into your daily operations.

About the Speaker: Lois is a Customer Success Manager at OpenAI specializing in user education and AI adoption. With over 10 years in SaaS, she has extensive experience developing and delivering engaging content, from large-scale webinars to stage presentations, aimed at enhancing user understanding and adoption of new technologies. Lois works closely with customers to ensure ChatGPT is integrated into daily activities and used effectively in the workplace. She is known for her storytelling approach, making complex technology relatable and accessible to all audiences.
Lois Newman · Sep 19th, 2024
Jacqueline Hehir · Aug 19th, 2024
An informative session about OpenAI's Research Residency program, perfect for anyone interested in forging a career in AI without extensive experience in the domain. Our six-month residency helps technical researchers from diverse fields transition into AI. Led by program manager Jackie Hehir, this session offers insights into the program's structure, benefits, and application process. The residency is an excellent way for curious, passionate, and skilled people to sharpen their focus on AI and machine learning and contribute to OpenAI's mission of building AGI that benefits all of humanity. Learn more about the residency program and discover research blogs published by residents at the bottom of this page.
# Career
# Future of Work
1:01:33
Hear firsthand from research leadership about the significance of expert trainer contributions to the OpenAI mission.
# AI Research
# Expert AI Training
# AI Safety
58:48
Yonadav Shavit · Apr 26th, 2024
Yonadav presents his research, "Practices for Governing Agentic AI Systems."
# AI Safety
# AI Research
# AI Governance
# Innovation
1:02:58
Teddy Lee, Kevin Feng & Andrew Konya · Apr 22nd, 2024
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public. At OpenAI, we’re building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a “Collective Alignment” team.
# AI Literacy
# AI Governance
# Democratic Inputs to AI
# Public Inputs AI
# Socially Beneficial Use Cases
# AI Research
# Social Science
56:40
Lama Ahmad · Mar 8th, 2024
In a recent talk at OpenAI, Lama Ahmad shared insights into OpenAI's Red Teaming efforts, which play a critical role in ensuring the safety and reliability of AI systems. Hosted by Natalie Cone, OpenAI Forum's Community Manager, the session opened with an opportunity for audience members to participate in cybersecurity initiatives at OpenAI. The primary focus of the event was red teaming AI systems: a process for identifying risks and vulnerabilities in models to improve their robustness.

Red teaming, as Ahmad explained, is derived from cybersecurity practices but has evolved to fit the AI industry's needs. At its core, it is a structured process for probing AI systems to identify harmful outputs, infrastructural threats, and other risks that could emerge during normal or adversarial use. Red teaming not only tests systems under potential misuse but also evaluates normal user interactions to identify unintentional failures or undesirable outcomes, such as inaccurate outputs. Ahmad, who leads OpenAI's external assessments of AI system impacts, emphasized that these efforts are vital to building safer, more reliable systems.

Ahmad provided a detailed history of how OpenAI's red teaming efforts have grown in tandem with its product development. She described how, during her tenure at OpenAI, the launch of systems like DALL-E 3 and ChatGPT greatly expanded the accessibility of AI tools to the public, making red teaming more important than ever. The accessibility of these tools, she noted, increases their impact across various domains, both positively and negatively, making it critical to assess the risks AI might pose to different groups of users.

Ahmad outlined several key lessons learned from red teaming at OpenAI. First, red teaming is a "full stack policy challenge," requiring coordination across different teams and areas of expertise. It is not a one-time process, but must be continually integrated into the AI development lifecycle. Additionally, diverse perspectives are essential for understanding potential failure modes. Ahmad noted that OpenAI relies on internal teams, external experts, and automated systems to probe for risks. Automated red teaming, where models are used to generate test cases, is increasingly useful, but human experts remain crucial for understanding nuanced risks that automated methods might miss.

Ahmad also highlighted specific examples from red teaming, such as the discovery of visual synonyms, where users can bypass content restrictions by using alternative terms. She pointed out how features like DALL-E's inpainting tool, which allows users to edit parts of images, pose unique challenges that require both qualitative and quantitative risk assessments. Red teaming's findings often lead to model-level mitigations, system-level safeguards like keyword blocklists, or even policy development to ensure safe and ethical use of AI systems.

During the Q&A session, attendees raised questions about the challenges of red teaming in industries like life sciences and healthcare, where sensitive topics could lead to overly cautious models. Ahmad emphasized that red teaming is a measurement tool meant to track risks over time and is not designed to provide definitive solutions. Other audience members inquired about the risks of misinformation in AI systems, especially around elections. Ahmad assured participants that OpenAI is actively working to address these concerns, with red teaming efforts focused on areas like misinformation and bias.

In conclusion, Ahmad stressed that as AI systems become more complex, red teaming will continue to evolve, combining human evaluations with automated testing to scale risk assessments. OpenAI's iterative deployment model, she said, allows the company to learn from real-world use cases, ensuring that its systems are continuously improved. Although automated evaluations are valuable, human involvement remains crucial for addressing novel risks and building safer, more reliable AI systems.
# Expert AI Training
# AI Literacy
# AI Research
58:29
Collin Burns & Pavel Izmailov · Feb 26th, 2024
Collin Burns and Pavel Izmailov present their research, Weak-to-Strong Generalization.
# STEM
# AI Research
1:03:12
Carl Miller, Alex Krasodomski-Jones, Flynn Devine & 56 more speakers · Nov 29th, 2023
Watch the demos presented by the recipients of OpenAI's Democratic Inputs to AI Grant Program (https://openai.com/blog/democratic-inputs-to-ai), who shared their ideas and processes with grant advisors, OpenAI team members, and the external AI research community (e.g., members of the Frontier Model Forum, https://openai.com/blog/frontier-model-forum).
# AI Literacy
# AI Research
# Democratic Inputs to AI
# Public Inputs AI
# Social Science
2:33:03
Terence Tao, Ilya Sutskever, Daniel Selsam & 1 more speaker · Oct 9th, 2023
# STEM
# Higher Education
# Innovation
1:00:00
# Democratic Inputs to AI
# Ethical AI
# AI Governance
# Policy Research
About the Talk: Language models (LMs) are increasingly being used in open-ended contexts, where the opinions reflected by LMs in response to subjective queries can have a profound impact, both on user satisfaction and in shaping the views of society at large. In this work, we put forth a quantitative framework to investigate the opinions reflected by LMs, leveraging high-quality public opinion polls and their associated human responses. Using this framework, we create OpinionQA, a new dataset for evaluating the alignment of LM opinions with those of 60 US demographic groups over topics ranging from abortion to automation. Across topics, we find substantial misalignment between the views reflected by current LMs and those of US demographic groups: on par with the Democrat-Republican divide on climate change. Notably, this misalignment persists even after explicitly steering the LMs towards particular demographic groups. Our analysis not only confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs, but also surfaces groups whose opinions are poorly reflected by current LMs (e.g., 65+ and widowed individuals).
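The comparison the framework describes can be sketched in a few lines. This is an illustrative toy example under stated assumptions, not the paper's implementation: it assumes a poll question's answer options are ordered (e.g., "never" through "always"), represents the LM's and a demographic group's answers as probability distributions over those options, and scores alignment as one minus a normalized 1-D earth mover's distance. The function names and example distributions are hypothetical.

```python
# Illustrative sketch (not the paper's exact metric): score how closely an LM's
# answer distribution on one poll question matches a demographic group's.
# Both distributions are over the same ordered answer options and sum to 1.

def wasserstein_1d(p, q):
    """Earth mover's distance between two distributions over ordered options."""
    assert len(p) == len(q)
    cum_diff, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum_diff += pi - qi      # running difference of the two CDFs
        total += abs(cum_diff)   # mass that must be "moved" past this point
    return total

def alignment(lm_dist, group_dist):
    """Map distance to a score in [0, 1]; 1.0 means identical opinions."""
    max_dist = len(lm_dist) - 1  # distance between opposite point masses
    return 1.0 - wasserstein_1d(lm_dist, group_dist) / max_dist

# Hypothetical distributions over a 3-option question:
lm_dist = [0.6, 0.3, 0.1]        # LM leans toward the first option
group_dist = [0.2, 0.3, 0.5]     # group leans toward the last option
print(round(alignment(lm_dist, group_dist), 3))  # prints 0.6
```

Averaging such per-question scores across a group's poll questions would give an aggregate alignment figure for that group, which is the kind of quantity the dataset is designed to support.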
# AI Literacy
# AI Research
50:00
Popular
The Importance of Public Input in Designing AI Systems: In Conversation with The Collective Intelligence Project
Saffron Huang, Divya Siddarth & Lama Ahmad