OpenAI Forum
+00:00 GMT
AI in Science & Research
# Social Science
# Higher Education
# Socially Beneficial Use Cases

AI Ethics in Action: UC Berkeley’s Data Science for Social Justice Workshop

The Data Science for Social Justice Workshop (DSSJ), organized in partnership between UC Berkeley’s Graduate Division and D-Lab, is an 8-week program aiming to provide an introduction to data science for graduate students, grounded in critical approaches of data feminism, data activism, ethics, and critical race theory. Attendees receive training in natural language processing and leverage their skills to conduct discourse analysis on social media data in an interdisciplinary project. This workshop, about to conclude its third year, has trained over 75 graduate students across 20 disciplines. These students form a community of interdisciplinary scholar-activists who uphold a values-driven approach to data science and machine learning. In this event, Claudia von Vacano, Ph.D., Executive Director of D-Lab, introduces the Data Science for Social Justice Workshop, highlighting its goals, structure, and outcomes. Then, three students who have participated in the workshop – with diverse and rich personal and academic backgrounds – present lightning talks on their experience with DSSJ, highlighting their personal journeys, the projects they worked on, and what they gained from the workshop. The event will conclude with a Q&A and discussion on how workshops like DSSJ present novel opportunities to train a generation of interdisciplinary, diverse data-driven scientists who center values and social justice at the forefront of their work.
Claudia von Vacano
Claudia von Vacano · Aug 30th, 2024
Popular topics
# AI Research
# AI Literacy
# Higher Education
# STEM
# Career
# Democratic Inputs to AI
# Innovation
# Social Science
# Everyday Applications
# Technical Support & Enablement
# AI Governance
# Public Inputs AI
# Expert AI Training
# Socially Beneficial Use Cases
# Future of Work
# AI Safety
# Security
# Ethical AI
# Policy Research
# Life Science
All
Jacqueline Hehir
Jacqueline Hehir · Aug 19th, 2024
Informative session about OpenAI's Research Residency program, perfect for anyone interested in forging a career in AI, but without extensive experience in the domain. Our 6-month residency helps technical researchers from diverse fields transition into AI. Led by the program manager, Jackie Hehir, this session offers insights into the program's structure, benefits, and application process.The residency is an excellent way for people who are curious, passionate, and skilled to sharpen their focus on AI and machine learning and contribute to OpenAI’s mission of building AGI that benefits all of humanity. Learn more about the residency program and discover research blogs published by residents at the bottom of this page here.
# Career
# Future of Work
1:01:33
Chi-kwan (CK) Chan
Chi-kwan (CK) Chan · Jul 22nd, 2024
Using the imaging of black holes as a case study, this talk highlights the key requirements for AI to make meaningful contributions to astrophysical research. Dr. Chan introduces several pioneering projects that are integrating AI into astrophysics, covering aspects such as instrumentation, simulations, data processing, and causal inference. He also discusses an innovative project aimed at enabling AI to gain scientific insights independently.
# STEM
# AI Research
# Higher Education
57:07
Hear from research leadership first hand about the significance of expert trainer contributions to the OpenAI mission.
# AI Research
# Expert AI Training
# AI Safety
58:48
Nathan Chappell
Dupé Ajayi
Jody Britten
+5
Nathan Chappell, Dupé Ajayi, Jody Britten & 5 more speakers · Jun 24th, 2024
The session featured several nonprofit organizations that utilize AI to drive social impact, emphasizing their long-standing involvement with the community. The discussion was facilitated by Nathan Chappell, a notable figure in AI fundraising, and included insights from a diverse group of panelists: Dupe Ajayi, Jodi Britton, Allison Fine, Anne Murphy, Gayle Roberts, Scott Rosenkrans, and Woodrow Rosenbaum. Each speaker shared their experiences and perspectives on integrating AI into their operations, illustrating AI's transformative potential in various sectors. The event highlighted the importance of AI in amplifying the efficiency and reach of nonprofit initiatives, suggesting a significant role for AI in addressing global challenges. The conversation also touched on the ethical considerations and the need for responsible AI use, ensuring that technological advancements align with human values and contribute positively to society. This gathering not only served as a platform for sharing knowledge and experiences but also fostered networking among community members with similar interests in AI applications. The dialogue underscored the critical role of AI in future developments across fields, advocating for continued exploration and adoption of AI technologies to enhance organizational impact and effectiveness.
# Socially Beneficial Use Cases
# Non Profit
1:25:50
Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.  Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an "adversarially robust" classifier, but where every input has an adversarial. Can backdoors be mitigated even if not detectable? Shafi will discuss a few approaches toward mitigation. This talk is based largely on collaborations with Kim, Shafer, Neekon, Vaikuntanathan, and Zamir.
# STEM
# Security
# Higher Education
1:03:48
Yonadav Shavit
Yonadav Shavit · Apr 26th, 2024
Yonadav presents his research Practices for Governing Agentic AI Systems.
# AI Safety
# AI Research
# AI Governance
# Innovation
1:02:58
Teddy Lee
Kevin Feng
Andrew Konya
Teddy Lee, Kevin Feng & Andrew Konya · Apr 22nd, 2024
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public. At OpenAI, we’re building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a “Collective Alignment” team.
# AI Literacy
# AI Governance
# Democratic Inputs to AI
# Public Inputs AI
# Socially Beneficial Use Cases
# AI Research
# Social Science
56:40
Lama Ahmad
Lama Ahmad · Mar 8th, 2024
In a recent talk at OpenAI, Lama Ahmad shared insights into OpenAI’s Red Teaming efforts, which play a critical role in ensuring the safety and reliability of AI systems. Hosted by Natalie Cone, OpenAI Forum’s Community Manager, the session opened with an opportunity for audience members to participate in cybersecurity initiatives at OpenAI. The primary focus of the event was red teaming AI systems—a process for identifying risks and vulnerabilities in models to improve their robustness. Red teaming, as Ahmad explained, is derived from cybersecurity practices, but has evolved to fit the AI industry’s needs. At its core, it’s a structured process for probing AI systems to identify harmful outputs, infrastructural threats, and other risks that could emerge during normal or adversarial use. Red teaming not only tests systems under potential misuse, but also evaluates normal user interactions to identify unintentional failures or undesirable outcomes, such as inaccurate outputs. Ahmad, who leads OpenAI’s external assessments of AI system impacts, emphasized that these efforts are vital to building safer, more reliable systems. Ahmad provided a detailed history of how OpenAI’s red teaming efforts have grown in tandem with its product development. She described how, during her tenure at OpenAI, the launch of systems like DALL-E 3 and ChatGPT greatly expanded the accessibility of AI tools to the public, making red teaming more important than ever. The accessibility of these tools, she noted, increases their impact across various domains, both positively and negatively, making it critical to assess the risks AI might pose to different groups of users. Ahmad outlined several key lessons learned from red teaming at OpenAI. First, red teaming is a “full stack policy challenge,” requiring coordination across different teams and expertise areas. It is not a one-time process, but must be continually integrated into the AI development lifecycle. Additionally, diverse perspectives are essential for understanding potential failure modes. Ahmad noted that OpenAI relies on internal teams, external experts, and automated systems to probe for risks. Automated red teaming, where models are used to generate test cases, is increasingly useful, but human experts remain crucial for understanding nuanced risks that automated methods might miss. Ahmad also highlighted specific examples from red teaming, such as the discovery of visual synonyms, where users can bypass content restrictions by using alternative terms. She pointed out how features like DALL-E’s inpainting tool, which allows users to edit parts of images, pose unique challenges that require both qualitative and quantitative risk assessments. Red teaming’s findings often lead to model-level mitigations, system-level safeguards like keyword blocklists, or even policy development to ensure safe and ethical use of AI systems. During the Q&A session, attendees raised questions about the challenges of red teaming in industries like life sciences and healthcare, where sensitive topics could lead to overly cautious models. Ahmad emphasized that red teaming is a measurement tool meant to track risks over time and is not designed to provide definitive solutions. Other audience members inquired about the risks of misinformation in AI systems, especially around elections. Ahmad assured participants that OpenAI is actively working to address these concerns, with red teaming efforts focused on areas like misinformation and bias. In conclusion, Ahmad stressed that as AI systems become more complex, red teaming will continue to evolve, combining human evaluations with automated testing to scale risk assessments. OpenAI’s iterative deployment model, she said, allows the company to learn from real-world use cases, ensuring that its systems are continuously improved. Although automated evaluations are valuable, human involvement remains crucial for addressing novel risks and building safer, more reliable AI systems.
# Expert AI Training
# AI Literacy
# AI Research
58:29
Collin Burns
Pavel Izmailov
Collin Burns & Pavel Izmailov · Feb 26th, 2024
Collin Burns and Pavel Izmailov present their research, Weak-to-Strong Generalization
# STEM
# AI Research
1:03:12
In our conversation, we explored the fundamental principles of organization and function of biological neural networks. Anton Maximov provided an overview of imaging studies that have revealed the remarkable diversity of neurons in the brain and the complexity of their connections. His presentation began with the pioneering work of Santiago Ramón y Cajal and extended to contemporary research that integrates advanced imaging technologies with artificial intelligence. He discussed discoveries from his laboratory at Scripps, unveiling surprising new mechanisms by which neural circuits in the brain are reorganized during memory encoding. His presentation was engaging, with vibrant videos and images to showcase his findings.
# Life Science
# Higher Education
# AI Research
# Healthcare
1:00:00
Popular
Scientific Discovery with AI: Unlocking the Secrets of the Universe Key Requirements and Pioneering Projects Highlighting AI’s Contribution to Astrophysical Research
Chi-kwan (CK) Chan