OpenAI Presentations
# AI Literacy
# Technical Support & Enablement
# Everyday Applications
Enabling a Data Driven Workforce
The webinar, part of the ongoing ChatGPT Enterprise Learning Lab series, featured Ben Kinsella, a member of OpenAI’s Human Data Team, alongside Lois Newman, Customer Success Manager, and Aaron Wilkowitz, Solutions Engineer. They explored how ChatGPT Enterprise can empower organizations by streamlining data analysis, enhancing productivity, and fostering a data-driven culture.
Key Takeaways:
1. Data Security & Privacy: Lois highlighted the robust data privacy and compliance measures of ChatGPT Enterprise, emphasizing that user data is not used to train models and is fully controlled by the organization.
2. Integration with Data Infrastructure: The session outlined how ChatGPT Enterprise can seamlessly integrate with existing tech stacks, providing employees with easy access to powerful AI tools.
3. Demos and Practical Applications: Aaron demonstrated how ChatGPT Enterprise helps teams prepare, analyze, and visualize data, showcasing examples from anomaly detection to complex forecasting.
AI-Powered Data Analysis:
1. Enhanced Accessibility: ChatGPT Enterprise makes it easier for non-technical employees to run analyses, freeing data scientists to focus on more complex tasks.
2. End-to-End Demos: The session included live demos showing how users can prepare data, generate visual insights, and integrate results directly with tools like Jira and Outlook using GPT Actions.
Q&A Highlights:
Elan Weiner, Solutions Engineer, joined for a live Q&A, answering questions about integrating ChatGPT into organizational workflows and data security concerns.
Lois Newman & Aaron Wilkowitz · Oct 25th, 2024
Popular topics
# AI Research
# AI Literacy
# Higher Education
# STEM
# Career
# Technical Support & Enablement
# Democratic Inputs to AI
# Social Science
# Everyday Applications
# AI Governance
# Innovation
# Public Inputs AI
# Future of Work
# Expert AI Training
# Socially Beneficial Use Cases
# AI Safety
# AI Adoption
# GPT-4
# Ethical AI
# Policy Research
Lois Newman · Oct 10th, 2024
Lois Newman led another session in the exciting ChatGPT Enterprise Learning Lab series. During the session, participants gained valuable insights into deploying ChatGPT widely across their organizations, along with best practices for driving user adoption. Whether attendees were just beginning with ChatGPT or looking to scale existing initiatives, the session provided actionable strategies for ensuring success. Designed to guide users through the ins and outs of GPT technology, the series offered a comprehensive overview of essential topics.
The agenda covered:
1. AI Strategy
2. Change Management
3. Understanding ChatGPT Users
4. Developing Use Cases
5. Adoption Initiatives
# AI Literacy
# Technical Support & Enablement
# AI Adoption
Lois Newman · Sep 23rd, 2024
At the event, Ben introduced Lois Newman from OpenAI, who provided insights into using ChatGPT Enterprise effectively. Lois discussed the recent launch of new AI models and emphasized the importance of prompt engineering to improve interactions with AI. She introduced the 'OK, Better, Best' framework, which helps users optimize their prompts for more effective outcomes. Additionally, Lois explored the concept of GPTs (Generative Pre-trained Transformers) and demonstrated building a custom GPT to automate tasks, illustrating the practical applications of AI in workflow automation. The event concluded with Lois addressing how these technologies could enhance productivity and strategic decision-making across various business domains.
# AI Literacy
# Higher Education
# Technical Support & Enablement
Lois Newman · Sep 19th, 2024
About the Talk:
Unlock the full potential of ChatGPT Enterprise with this live webinar hosted by Lois Newman, Customer Success Manager at OpenAI. This foundational session provides an overview of ChatGPT Enterprise, including an introduction to GPT-4o and its latest advancements. You'll learn about multimodality, practical everyday use cases and we’ll also explore how ChatGPT can enhance data analysis tasks, making complex processes more efficient. To help you get the most out of the tool, the session includes prompt engineering tips and tricks, guiding you on how to craft prompts to get the best results.
This is the first in a series unfolding throughout the rest of the year. Our sessions will become more advanced over time, and tailored to the needs of the community. Your attendance, participation, and questions will inform future sessions.
Whether you're new to ChatGPT Enterprise or looking to maximize its capabilities, this webinar offers valuable insights to help you effectively integrate AI into your daily operations.
About the Speaker:
Lois is a Customer Success Manager at OpenAI, specializing in user education and AI adoption. With over 10 years of experience in SaaS, she has extensive experience in developing and delivering engaging content, from large-scale webinars to stage presentations, aimed at enhancing user understanding and adoption of new technologies. Lois works closely with customers to ensure ChatGPT is integrated into daily activities and effectively utilized in the workplace. Lois is known for her storytelling approach, making complex technology relatable and accessible to all audiences.
# AI Literacy
# Higher Education
# Technical Support & Enablement
Jacqueline Hehir · Aug 19th, 2024
Informative session about OpenAI's Research Residency program, perfect for anyone interested in forging a career in AI, but without extensive experience in the domain. Our 6-month residency helps technical researchers from diverse fields transition into AI.
Led by the program manager, Jackie Hehir, this session offers insights into the program's structure, benefits, and application process.The residency is an excellent way for people who are curious, passionate, and skilled to sharpen their focus on AI and machine learning and contribute to OpenAI’s mission of building AGI that benefits all of humanity. Learn more about the residency program and discover research blogs published by residents at the bottom of this page here.
# Career
# Future of Work
Hear from research leadership first hand about the significance of expert trainer contributions to the OpenAI mission.
# AI Research
# Expert AI Training
# AI Safety
Yonadav Shavit · Apr 26th, 2024
Yonadav presents his research Practices for Governing Agentic AI Systems.
# AI Safety
# AI Research
# AI Governance
# Innovation
Teddy Lee, Kevin Feng & Andrew Konya · Apr 22nd, 2024
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public. At OpenAI, we’re building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a “Collective Alignment” team.
# AI Literacy
# AI Governance
# Democratic Inputs to AI
# Public Inputs AI
# Socially Beneficial Use Cases
# AI Research
# Social Science
Lama Ahmad · Mar 8th, 2024
In a recent talk at OpenAI, Lama Ahmad shared insights into OpenAI’s Red Teaming efforts, which play a critical role in ensuring the safety and reliability of AI systems. Hosted by Natalie Cone, OpenAI Forum’s Community Manager, the session opened with an opportunity for audience members to participate in cybersecurity initiatives at OpenAI. The primary focus of the event was red teaming AI systems—a process for identifying risks and vulnerabilities in models to improve their robustness.
Red teaming, as Ahmad explained, is derived from cybersecurity practices, but has evolved to fit the AI industry’s needs. At its core, it’s a structured process for probing AI systems to identify harmful outputs, infrastructural threats, and other risks that could emerge during normal or adversarial use. Red teaming not only tests systems under potential misuse, but also evaluates normal user interactions to identify unintentional failures or undesirable outcomes, such as inaccurate outputs. Ahmad, who leads OpenAI’s external assessments of AI system impacts, emphasized that these efforts are vital to building safer, more reliable systems.
Ahmad provided a detailed history of how OpenAI’s red teaming efforts have grown in tandem with its product development. She described how, during her tenure at OpenAI, the launch of systems like DALL-E 3 and ChatGPT greatly expanded the accessibility of AI tools to the public, making red teaming more important than ever. The accessibility of these tools, she noted, increases their impact across various domains, both positively and negatively, making it critical to assess the risks AI might pose to different groups of users.
Ahmad outlined several key lessons learned from red teaming at OpenAI. First, red teaming is a “full stack policy challenge,” requiring coordination across different teams and expertise areas. It is not a one-time process, but must be continually integrated into the AI development lifecycle. Additionally, diverse perspectives are essential for understanding potential failure modes. Ahmad noted that OpenAI relies on internal teams, external experts, and automated systems to probe for risks. Automated red teaming, where models are used to generate test cases, is increasingly useful, but human experts remain crucial for understanding nuanced risks that automated methods might miss.
Ahmad also highlighted specific examples from red teaming, such as the discovery of visual synonyms, where users can bypass content restrictions by using alternative terms. She pointed out how features like DALL-E’s inpainting tool, which allows users to edit parts of images, pose unique challenges that require both qualitative and quantitative risk assessments. Red teaming’s findings often lead to model-level mitigations, system-level safeguards like keyword blocklists, or even policy development to ensure safe and ethical use of AI systems.
During the Q&A session, attendees raised questions about the challenges of red teaming in industries like life sciences and healthcare, where sensitive topics could lead to overly cautious models. Ahmad emphasized that red teaming is a measurement tool meant to track risks over time and is not designed to provide definitive solutions. Other audience members inquired about the risks of misinformation in AI systems, especially around elections. Ahmad assured participants that OpenAI is actively working to address these concerns, with red teaming efforts focused on areas like misinformation and bias.
In conclusion, Ahmad stressed that as AI systems become more complex, red teaming will continue to evolve, combining human evaluations with automated testing to scale risk assessments. OpenAI’s iterative deployment model, she said, allows the company to learn from real-world use cases, ensuring that its systems are continuously improved. Although automated evaluations are valuable, human involvement remains crucial for addressing novel risks and building safer, more reliable AI systems.
# Expert AI Training
# AI Literacy
# AI Research
Collin Burns & Pavel Izmailov · Feb 26th, 2024
Collin Burns and Pavel Izmailov present their research, Weak-to-Strong Generalization
# STEM
# AI Research
+56
Carl Miller, Alex Krasodomski-Jones, Flynn Devine & 56 more speakers · Nov 29th, 2023
Watch the demos presented by the recipients of OpenAI’s Democratic Inputs to AI Grant Program https://openai.com/blog/democratic-inputs-to-ai, who shared their ideas and processes with grant advisors, OpenAI team members, and the external AI research community (e.g., members of the Frontier Model Forum https://openai.com/blog/frontier-model-forum).
# AI Literacy
# AI Research
# Democratic Inputs to AI
# Public Inputs AI
# Social Science
Popular