AI, Art, Culture & Society
# AI and Creativity
# AI Art
# Sora
# Cultural Production
Sora Alpha Artists: Preserving the Past and Shaping the Future
At the OpenAI Forum event, artists and technologists Will Selviz and Manuel Sainsily explored how AI can empower creatives, democratize storytelling, and preserve cultural heritage. The discussion centered on their short film, Protopica, which was created using OpenAI’s generative video model, Sora, and was selected for Sora Selects in New York. Their work demonstrates how AI can be used as a creative tool, not a replacement, allowing for innovative storytelling while ensuring cultural narratives remain authentic and accessible.
Selviz and Sainsily highlighted the urgent need for cultural preservation, especially for communities facing displacement, language erosion, and the loss of traditional artifacts. They noted that a language disappears every two weeks, raising concerns about the preservation of oral traditions and indigenous knowledge. AI offers new ways to document and share these histories.
For instance, Sainsily, who hails from Guadeloupe, emphasized that his native Creole language is spoken by fewer than half a million people and has limited digital representation. He demonstrated how AI can help preserve underrepresented languages by training models on poetry, literature, and oral history. He also experimented with Guadeloupean Creole voiceovers in Protopica to showcase the potential of AI-driven language preservation.
Selviz, a Venezuelan-born digital artist, spoke about using AI-powered 3D scanning, spatial computing, and generative models to archive artifacts, from traditional Venezuelan masks to Kuwaiti weaving patterns. AI allows communities to digitally document their culture without relying solely on institutions, making preservation more accessible and democratized.


Will Selviz & Manuel Sainsily · Feb 20th, 2025


Kate Rouch & David Droga · Feb 12th, 2025
Natalie Cone, community architect of the OpenAI Forum, opened the webinar by recounting the Forum's inception in 2023, when it was created to bring together experts across disciplines to ensure artificial general intelligence (AGI) benefits all of humanity. The Forum has since enabled rich dialogues among Nobel laureates, Turing Award winners, and over 10,000 other experts, shaping the future of OpenAI's technologies. The session featured David Droga, CEO of Droga5 (part of Accenture Song), and Kate Rouch, Chief Marketing Officer at OpenAI, discussing the creative process behind OpenAI's inaugural television advertisement, which premiered during the Super Bowl.
Kate emphasized AI as the transformative technology of our era, recounting her journey from leading global marketing at Coinbase, where she oversaw a viral Super Bowl ad, to OpenAI, where she focuses on making AI accessible and inspiring. Her extensive experience at Meta, where she contributed to its massive user growth, underpins her expertise in global brand building. David reflected on his career, from his early days in advertising in Sydney to founding Droga5, known for its creative and technologically savvy campaigns. He discussed his belief that creativity must be amplified by technology, a belief evident in how Droga5's campaigns have combined creative boldness with technological innovation.
A significant part of the conversation focused on the synergy between creativity and AI, with both speakers endorsing the power of AI to enhance, not replace, human creativity. They discussed how AI tools could democratize and expand creative processes, making them more inclusive. The dialogue also touched on maintaining authenticity in advertising, especially in high-stakes environments like the Super Bowl, advocating for ads that resonate with the brand’s core values rather than conforming to typical advertising clichés.
The webinar concluded with insights into OpenAI's future initiatives, emphasizing integrity and authenticity in communications and innovation. Natalie Cone wrapped up the session by thanking the speakers and reflecting on a discussion that illuminated not only OpenAI's advertising strategy but also the broader implications of AI for creative industries. The session was part of OpenAI's broader effort to engage the public and experts in meaningful conversations about the future of AI and its societal impact, showcasing how strategic, thoughtful advertising can communicate the transformative potential of AI.

Claudia von Vacano · Aug 30th, 2024
The Data Science for Social Justice Workshop (DSSJ), organized as a partnership between UC Berkeley's Graduate Division and D-Lab, is an eight-week program that introduces graduate students to data science grounded in critical approaches drawn from data feminism, data activism, ethics, and critical race theory. Attendees receive training in natural language processing and apply their skills to discourse analysis of social media data in an interdisciplinary project. The workshop, about to conclude its third year, has trained over 75 graduate students across 20 disciplines, who now form a community of interdisciplinary scholar-activists committed to a values-driven approach to data science and machine learning.
In this event, Claudia von Vacano, Ph.D., Executive Director of D-Lab, introduces the Data Science for Social Justice Workshop, highlighting its goals, structure, and outcomes. Three students who participated in the workshop – with diverse and rich personal and academic backgrounds – then present lightning talks on their experience with DSSJ, covering their personal journeys, the projects they worked on, and what they gained from the workshop. The event concludes with a Q&A and a discussion of how workshops like DSSJ offer new opportunities to train a generation of interdisciplinary, diverse, data-driven scientists who put values and social justice at the forefront of their work.
# Social Science
# Higher Education
# Socially Beneficial Use Cases



Nathan Chappell, Dupé Ajayi, Jody Britten & 5 more speakers · Jun 24th, 2024
The session featured several nonprofit organizations that use AI to drive social impact, emphasizing their long-standing involvement with the community. The discussion was facilitated by Nathan Chappell, a notable figure in AI fundraising, and included insights from a diverse group of panelists: Dupé Ajayi, Jody Britten, Allison Fine, Anne Murphy, Gayle Roberts, Scott Rosenkrans, and Woodrow Rosenbaum.
Each speaker shared their experiences and perspectives on integrating AI into their operations, illustrating AI's transformative potential in various sectors. The event highlighted the importance of AI in amplifying the efficiency and reach of nonprofit initiatives, suggesting a significant role for AI in addressing global challenges. The conversation also touched on the ethical considerations and the need for responsible AI use, ensuring that technological advancements align with human values and contribute positively to society.
This gathering not only served as a platform for sharing knowledge and experiences but also fostered networking among community members with similar interests in AI applications. The dialogue underscored the critical role of AI in future developments across fields, advocating for continued exploration and adoption of AI technologies to enhance organizational impact and effectiveness.
# Socially Beneficial Use Cases
# Non Profit



Teddy Lee, Kevin Feng & Andrew Konya · Apr 22nd, 2024
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave, so that our models better align with the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams, selected from nearly 1,000 applicants, to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes transparent enough to be trusted by the public. At OpenAI, we're building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a "Collective Alignment" team.
# AI Literacy
# AI Governance
# Democratic Inputs to AI
# Public Inputs AI
# Socially Beneficial Use Cases
# AI Research
# Social Science


Daniel Miessler & Joel Parish · Jan 30th, 2024
In this talk, Miessler shared his philosophy for integrating AI into all facets of life. He presented a framework for leveraging custom prompts as APIs and demonstrated several specific use cases that he hopes will resonate with OpenAI Forum members and translate across disciplines and professional domains.
# Career
# Everyday Applications
# Technical Support & Enablement


Sam Altman & David Kirtley · Nov 29th, 2023
Earlier this year, Sam Altman, CEO and Co-Founder of OpenAI, and David Kirtley, CEO and Founder of Helion, convened at the OpenAI office with a small group of OpenAI Forum members to discuss the future of energy. This is the recording of their discussion.
# Innovation
# STEM



Carl Miller, Alex Krasodomski-Jones, Flynn Devine & 56 more speakers · Nov 29th, 2023
Watch the demos presented by the recipients of OpenAI's Democratic Inputs to AI grant program (https://openai.com/blog/democratic-inputs-to-ai), who shared their ideas and processes with grant advisors, OpenAI team members, and the external AI research community (e.g., members of the Frontier Model Forum, https://openai.com/blog/frontier-model-forum).
# AI Literacy
# AI Research
# Democratic Inputs to AI
# Public Inputs AI
# Social Science


Miles Brundage & Alex Blania · Nov 13th, 2023
About the Talk:
Worldcoin is creating a new identity and financial network that distinguishes humans from AI. This recording captured the final in-person OpenAI Forum event of 2023, featuring Alex Blania, CEO and co-founder of Tools for Humanity and Worldcoin, and Miles Brundage, Head of Policy Research at OpenAI. Attendees learned about the Worldcoin systems designed to generate proof of personhood, and how democratic access to and governance of these systems can help distribute their benefits fairly. We even featured the Worldcoin Orbs on site!
About the Speakers:
Alex Blania is the CEO and Co-Founder of Tools for Humanity, the technology company building tools for the Worldcoin project. He is a Co-Founder of the Worldcoin protocol with Sam Altman and Max Novendstern. Alex is responsible for leading strategy, development, and execution of the technology and tools enabling Worldcoin to ensure everyone benefits fairly from the opportunities AI presents. Alex holds degrees in Industrial Engineering and Physics from the University of Erlangen-Nuremberg in Germany and studied physics at Caltech before dedicating his full time and attention to supporting Worldcoin.
Miles Brundage is a researcher and research manager, passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before joining OpenAI, he was a Research Fellow at the University of Oxford's Future of Humanity Institute. He is currently an Affiliate of the Centre for the Governance of AI and a member of the AI Task Force at the Center for a New American Security. From 2018 through 2022, he served as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology at Arizona State University in 2019. Prior to graduate school, he worked at the Advanced Research Projects Agency - Energy (ARPA-E). His academic research has been supported by the National Science Foundation, the Bipartisan Policy Center, and the Future of Life Institute.

Dr. Ahmed Elgammal · Oct 18th, 2023
About the Talk: The use of AI in art making is as old as AI itself, and the ways artists integrate AI into their creative process have evolved alongside advances in AI models and their capabilities. What value do artists find in using AI as part of their process? What is the role of the artist, and what is the role of AI, in that process? How is that changing now that generative AI is dominated by text prompting as the interface? And what have we gained, and what have we lost, as generative AI has moved out of the uncanny valley toward utility with the introduction of large language models as part of image generation? In this talk I will present my viewpoint on these questions, along with feedback from conversations with many artists about how they have integrated AI into their practice.
# Innovation
# Cultural Production
# Higher Education
# AI Research


David Autor & Tyna Eloundou · Aug 22nd, 2023
About the Talk: Much of the value of labor in industrialized economies derives from the scarcity of expertise rather than from the scarcity of labor per se. In economic parlance, expertise denotes a specific body of knowledge or competency required for accomplishing a particular objective. Human expertise commands a market premium to the degree that it is, first, necessary for accomplishing valuable objectives and, second, scarce, meaning not possessed by most people. Will AI increase the value of expertise by broadening its relevance and applicability? Or will it instead commodify expertise and undermine pay, even if jobs are not lost on net? Autor will present a simple framework for interpreting the relationship between technological change and expertise across three technological revolutions. He will argue that, given AI's malleability and broad applicability, its labor market consequences will depend fundamentally on how firms, governments, NGOs, and universities (among others) invest to develop its capabilities and shape its applications.
# Higher Education
# Future of Work
# AI Literacy
# Career
# Social Science