About the Talk:
This session is designed to welcome a wide range of participants, from those already involved in research projects to active members of the AI Trainers community seeking project matches, as well as individuals curious about AI training opportunities. It will mirror the structure of the In-Person AI Trainers Community Mixer, an event dedicated to members actively supporting OpenAI research. Our virtual program will include recorded talks from research leadership, Q&A with expert OpenAI AI Trainers, and interactive 1:1 networking opportunities. This setup ensures that all attendees, regardless of their geographical location, have access to the valuable insights and networking opportunities provided at the in-person event.
About the Speakers:
Mia Glaese, Head of Human Data
Mia is the Head of Human Data at OpenAI. The Human Data team creates custom data solutions that drive groundbreaking research. Its work enhances and evaluates OpenAI's flagship models and products, such as ChatGPT, GPT-4, and Sora, and contributes to safety initiatives through collaboration with the Preparedness and Safety Systems teams.
Mia herself is a researcher focused on advancing AI capabilities in a way that inherently aligns with human values and ethical standards. She works on the joint optimization of human data and training algorithms, integrating human judgment to refine AI behaviors. By developing methods that incorporate human feedback into AI training processes, she has contributed to making AI systems more effective in real-world applications.
Previously, Mia worked at Google DeepMind on pretraining, evaluations, factuality, and RLHF for large models. She has authored numerous publications on the ethical and technical challenges of AI, particularly in aligning AI with human values, mitigating harmful outputs in language models, and developing robust multimodal models. Her work also emphasizes understanding and addressing the social and ethical risks posed by AI technologies, contributing to safer and more responsible AI systems.
Lilian Weng, Head of Safety Systems
Lilian is the Head of Safety Systems at OpenAI, where she leads a group of engineers and researchers who work on the end-to-end safety stack for deploying frontier models, ranging from aligning model behavior with safety policies during training to inference-time monitoring and mitigations.
Previously, she built and led Applied Research at OpenAI, applying powerful language models to real-world problems. In her early days at OpenAI, Lilian contributed to the Robotics team, tackling complex robotic manipulation tasks such as solving a Rubik's Cube with a single robot hand.
With a wide range of research interests, she shares her insights on diverse topics in deep learning through her blog, which is popular in the ML community: https://lilianweng.github.io/
Evan Mays, Member of Technical Staff, Preparedness
Spencer Papay, Technical Program Manager, Human Data
Declan Grabb, Forum Member and AI Trainer
Samar Abedrabbo, Forum Member and AI Trainer
Naim Barnett, Forum Member and AI Trainer