About the Talk:
As AI becomes more advanced and widely used, it is essential to involve the public in deciding how AI should behave, so that our models better align with the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams out of nearly 1,000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes transparent enough to be trusted by the public. At OpenAI, we’re building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a “Collective Alignment” team.
About the Speakers:
Teddy Lee is a Product Manager at OpenAI on the Collective Alignment team, which focuses on developing processes and platforms for collecting democratic inputs to steer AI. Previously, Teddy was a founding member of OpenAI’s Human Data team, which focuses on improving OpenAI’s models with human feedback, and he helped develop content moderation tooling in the OpenAI API. He has also held roles at Scale AI, Google, and McKinsey. He serves as the President of the MIT Club of Northern California, the alumni club for 14,000+ Northern California-based MIT alumni, and is a member of the MIT Alumni Association Board of Directors. Teddy holds a BS in Electrical Engineering and an MS in Management Science & Engineering from Stanford, and an MBA from MIT Sloan.
Kevin Feng is a 3rd-year Ph.D. student in Human Centered Design & Engineering at the University of Washington. His research lies at the intersection of social computing and interactive machine learning—specifically, he develops interactive tools and processes to improve the adaptability of large-scale, AI-powered sociotechnical systems. His work has appeared in numerous premier academic venues in human-computer interaction including CHI, CSCW, and FAccT, and has been featured by outlets including UW News and the Montréal AI Ethics Institute. He is the recipient of a 2022 UW Herbold Fellowship. He holds a BSE in Computer Science, with minors in visual arts and technology & society, from Princeton University.
Andrew Konya is a founder and Chief Scientist at Remesh, where he works on deliberative alignment for AI and institutions.