OpenAI Forum
Featured
57:07

Scientific Discovery with AI: Unlocking the Secrets of the Universe - Key Requirements and Pioneering Projects Highlighting AI's Contribution to Astrophysical Research

Chi-kwan (CK) Chan

58:29

Red Teaming AI Systems

Lama Ahmad

All Content

Popular topics
# Democratic Inputs to AI
# Ethical AI
# AI Governance
# Policy Research
Nathan Chappell, Dupé Ajayi, Jody Britten & 5 more speakers · Jun 24th, 2024
The session featured several nonprofit organizations that use AI to drive social impact, highlighting their long-standing involvement with the community. Nathan Chappell, a notable figure in AI fundraising, facilitated the discussion, which included insights from a diverse group of panelists: Dupé Ajayi, Jody Britten, Allison Fine, Anne Murphy, Gayle Roberts, Scott Rosenkrans, and Woodrow Rosenbaum. Each speaker shared their experiences and perspectives on integrating AI into their operations, illustrating AI's transformative potential across sectors. The event highlighted AI's role in amplifying the efficiency and reach of nonprofit initiatives, suggesting a significant part for AI to play in addressing global challenges. The conversation also touched on ethical considerations and the need for responsible AI use, ensuring that technological advances align with human values and contribute positively to society. The gathering served both as a platform for sharing knowledge and experiences and as an opportunity for networking among community members with shared interests in AI applications.
1:25:50
Shafi Goldwasser · May 3rd, 2024
Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality the learner maintains a mechanism for changing the classification of any input with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an "adversarially robust" classifier, but in which every input has an adversarial example. Can backdoors be mitigated even if they cannot be detected? Shafi will discuss a few approaches toward mitigation. This talk is based largely on collaborations with Kim, Shafer, Neekon, Vaikuntanathan, and Zamir.
1:03:48
Yonadav Shavit · Apr 26th, 2024
Yonadav presents his research, Practices for Governing Agentic AI Systems.
1:02:58
Teddy Lee, Kevin Feng & Andrew Konya · Apr 22nd, 2024
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public. At OpenAI, we’re building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a “Collective Alignment” team.
56:40
Collin Burns & Pavel Izmailov · Feb 26th, 2024
Collin Burns and Pavel Izmailov present their research, Weak-to-Strong Generalization.
1:03:12
Anton Maximov
In our conversation, we explored the fundamental principles of the organization and function of biological neural networks. Anton Maximov provided an overview of imaging studies that have revealed the remarkable diversity of neurons in the brain and the complexity of their connections. His presentation began with the pioneering work of Santiago Ramón y Cajal and extended to contemporary research that integrates advanced imaging technologies with artificial intelligence. He discussed discoveries from his laboratory at Scripps, unveiling surprising new mechanisms by which neural circuits in the brain are reorganized during memory encoding. His presentation was engaging, with vibrant videos and images showcasing his findings.
1:00:00
Daniel Miessler & Joel Parish · Jan 30th, 2024
In this talk, Miessler shared his philosophy for integrating AI into all facets of life. He highlighted a framework he built for leveraging custom prompts as APIs and demonstrated several specific use cases that he hopes will resonate with OpenAI Forum members and translate across disciplines and professional domains.
1:00:00
Sam Altman & David Kirtley · Nov 29th, 2023
Earlier this year, Sam Altman, CEO and Co-Founder of OpenAI, and David Kirtley, CEO and Founder of Helion, convened at the OpenAI office with a small group of OpenAI Forum members to discuss the future of energy. This is the recording of their discussion.
52:27
Carl Miller, Alex Krasodomski-Jones, Flynn Devine & 56 more speakers · Nov 29th, 2023
Watch the demos presented by the recipients of OpenAI’s Democratic Inputs to AI Grant Program https://openai.com/blog/democratic-inputs-to-ai, who shared their ideas and processes with grant advisors, OpenAI team members, and the external AI research community (e.g., members of the Frontier Model Forum https://openai.com/blog/frontier-model-forum).
2:33:03
Miles Brundage & Alex Blania · Nov 13th, 2023
About the Talk: Worldcoin is creating a new identity and financial network that distinguishes humans from AI. This recording captured the final in-person OpenAI Forum event of 2023, featuring Alex Blania, the CEO and co-founder of Tools for Humanity and Worldcoin, and Miles Brundage, Head of Policy Research at OpenAI. Attendees learned about the Worldcoin systems designed to generate proof of personhood and about democratic access to and governance of these systems in order to fairly distribute their benefits. We even featured the Worldcoin Orbs on site! About the Speakers: Alex Blania is the CEO and Co-Founder of Tools for Humanity, the technology company building tools for the Worldcoin project. He is a Co-Founder of the Worldcoin protocol with Sam Altman and Max Novendstern. Alex is responsible for leading strategy, development, and execution of the technology and tools enabling Worldcoin to ensure everyone benefits fairly from the opportunities AI presents. Alex holds degrees in Industrial Engineering and Physics from the University of Erlangen-Nuremberg in Germany and studied physics at Caltech before dedicating his full time and attention to supporting Worldcoin. Miles Brundage is a researcher and research manager, passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before joining OpenAI, he was a Research Fellow at the University of Oxford's Future of Humanity Institute. He is currently an Affiliate of the Centre for the Governance of AI and a member of the AI Task Force at the Center for a New American Security. From 2018 through 2022, he served as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology from Arizona State University in 2019. Prior to graduate school, he worked at the Advanced Research Projects Agency - Energy (ARPA-E).
His academic research has been supported by the National Science Foundation, the Bipartisan Policy Center, and the Future of Life Institute.
53:31
Popular
Shaping artificial intelligence through collective intelligence
Lucy Andresen