OpenAI Forum

Event Replay: Democratizing AI: Insights from the Global Dialogues Challenge

Posted Aug 01, 2025
# Democratic Inputs to AI
# Ethical AI
# Public Inputs AI

SPEAKERS

Natalie Cone
Forum Community @ OpenAI

Natalie Cone launched and now manages OpenAI’s interdisciplinary community, the Forum. The OpenAI Forum is a community designed to unite thoughtful contributors from a diverse array of backgrounds, skill sets, and domain expertise to enable discourse at the intersection of AI and a wide range of academic, professional, and societal domains. Before joining OpenAI, Natalie managed and stewarded Scale’s ML/AI community of practice, the AI Exchange. She has a background in the arts, with a degree in History of Art from UC Berkeley, and has served as Director of Operations and Programs, as well as on the board of directors, for the radical performing arts center CounterPulse, and led visitor experience at Yerba Buena Center for the Arts.

Zoe Hitzig
Econ Research @ OpenAI

Zoë Hitzig is currently a Research Scientist at OpenAI. Prior to joining OpenAI, she was a Junior Fellow at the Harvard Society of Fellows, and completed a PhD in economics from Harvard, where her research centered on privacy and transparency in market design. Outside her research, she is also a writer––she is the author of two books of poetry and has published essays and criticism in a range of venues.

Audrey Tang
1st Minister of Digital Affairs @ Ministry of Foreign Affairs, Taiwan

Audrey Tang is Taiwan’s first Digital Minister (2016–2024) and the world’s first nonbinary cabinet minister. A self-educated technologist, Tang left formal schooling at 14 and rose to prominence in the open source community through contributions to Haskell and Perl. They later co-founded g0v, a global civic tech movement, and played a key role in Taiwan’s 2014 Sunflower Movement. As Digital Minister, Tang championed participatory democracy platforms like vTaiwan and Join, fostering civic innovation through initiatives such as the Presidential Hackathon. Their leadership was also pivotal in Taiwan’s acclaimed COVID-19 response and in protecting the 2024 elections from foreign cyber interference.

Nabiha Syed
Executive Director @ Mozilla

Nabiha Syed is Executive Director of Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. Formerly, Nabiha was the CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Under her leadership, The Markup’s unique approach was referenced by Congress 21 times, inspired dozens of class action lawsuits, won a national Murrow Award and a Loeb Award, and was recognized as “Most Innovative” by Fast Company in 2022. Prior to The Markup, Nabiha was a highly acclaimed media lawyer with a legal career spanning private practice and the New York Times First Amendment Fellowship. She led BuzzFeed’s libel and newsgathering matters, including the successful defense of several high-profile libel lawsuits. Nabiha sits on the boards of the Scott Trust, the $1B+ British company that owns The Guardian newspaper, the New York Civil Liberties Union, the Reporters Committee for the Freedom of the Press, and the New Press. She also serves as an advisor to ex/ante, the first venture fund dedicated to agentic tech, and she is a current member of The World Economic Forum’s AI Governance Alliance. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.

Tyna Eloundou
Member of Technical Staff @ OpenAI

Tyna Eloundou is a researcher at OpenAI, whose most recent work includes leading safety evaluations, economic impact evaluations, and the democratic inputs to AI grant program. Prior to OpenAI, she worked as an associate economist at the Federal Reserve Bank of Chicago and programmer at the RAND Corporation.

Divya Siddarth
Co-Founder & Co-Director @ Collective Intelligence Project

Divya Siddarth is the Co-Founder and Co-Director of the Collective Intelligence Project. She is a political economist and social technologist at Microsoft’s Office of the CTO, a research director at Metagov and the RadicalXChange Foundation, a research associate at the Ethics in AI Institute at Oxford, and a visiting fellow at the Ostrom Workshop. Her work has been featured in Stanford HAI, Oxford, Mozilla, the Harvard Safra Center, WIRED, Noema Magazine, the World Economic Forum, and Frontiers in Blockchain.

Faisal Lalani
Head of Global Partnerships @ The Collective Intelligence Project

Faisal M. Lalani is a global community organizer with a background in building international coalitions, advising policymakers, and preserving human rights and democracy. He has worked all over the world — including in Nepal, South Africa, India, the UK, Sri Lanka, and the US — and has expertise in digital rights, education reform, public health, climate and energy transitions, clinical psychology, foreign policy, and social movements.

Joal Stein
Communications and Strategy Lead @ The Collective Intelligence Project

SUMMARY

The event was hosted by Natalie Cone, head of the OpenAI Forum, who introduced the evening's collaboration between OpenAI and the Collective Intelligence Project (CIP). CIP, led by executive director Divya Siddarth, is dedicated to steering transformative AI technologies toward democratic outcomes, emphasizing both democratic governance of AI and leveraging AI to enhance collective intelligence. The event featured a panel moderated by OpenAI researcher Tyna Eloundou, with judges Zoë Hitzig (OpenAI Research Scientist), Audrey Tang (Taiwan's former Digital Minister), and Nabiha Syed (Executive Director, Mozilla Foundation) discussing the outcomes and significance of CIP's Global Dialogues Challenge. Faisal Lalani, Head of Global Partnerships at CIP, announced the winners, highlighting their innovative projects promoting democratic AI engagement and cultural inclusivity. Joal Stein, communications and strategy lead at CIP, was also recognized for his role in organizing the event. The overall winner, Saranjan Vigram, presented a creative approach designed to build cross-cultural understanding through a detective game aimed at educating younger generations about AI.


TRANSCRIPT

I'm Natalie Cone, head of the OpenAI Forum. Our mission at OpenAI is to build AGI that benefits everyone, and we typically feature conversations on the OpenAI Forum that highlight how our technology is helping people solve hard problems.

But tonight, we're focusing on an organization other than ourselves, but with whom OpenAI has collaborated, the Collective Intelligence Project, an R&D lab focused on building democratic governance models for transformative technologies, particularly AI.

They do a lot as well, and you're going to learn more about it, but they develop tools and processes that enable meaningful collective public input on how these technologies evolve.

They were our very first guests in the OpenAI Forum two years ago, and collaborators on OpenAI's Democratic Inputs to AI initiative a couple of years ago as well.

I'm honored that they've decided to join us again this evening. We're gonna meet several people tonight, so here's a roadmap of our agenda and our guests.

We've invited the judges of Collective Intelligence Project's Global Dialogues Challenge for a discussion led by OpenAI researcher, Tyna Eloundou.

Tyna's most recent work includes leading safety evaluations, economic impact evaluations, and the Democratic Inputs to AI grant program. Prior to OpenAI, she worked as an associate economist at the Federal Reserve Bank of Chicago and programmer at the Rand Corporation.

The Global Dialogues Challenge judges include Zoë Hitzig, a research scientist at OpenAI. Prior to joining OpenAI, she was a junior fellow at the Harvard Society of Fellows and completed a PhD in economics from Harvard, where her research centered on privacy and transparency in market design.

And I said I'd keep these bios short, but I really love this part, so I'm gonna share it, Zoë. Outside of her research, she's also a writer. She's the author of two books of poetry and has published essays and criticism in a range of venues.

Also on the judges panel is Audrey Tang, Taiwan's first digital minister, from 2016 to 2024, and the world's first nonbinary cabinet minister. A self-educated technologist, Tang left formal schooling at 14 and rose to prominence in the open-source community through contributions to Haskell and Perl. They later co-founded g0v, a global civic tech movement, and played a key role in Taiwan's 2014 Sunflower Movement.

As digital minister, Tang championed participatory democracy platforms like vTaiwan and Join, fostering civic innovation through initiatives such as the Presidential Hackathon. Their leadership was also pivotal in Taiwan's acclaimed COVID-19 response and in protecting the 2024 elections from foreign cyber interference.

Nabiha Syed is the Executive Director of the Mozilla Foundation, the global non-profit advancing trustworthy AI and a more open, equitable internet. She previously served as CEO of The Markup, where her leadership drove national recognition, multiple awards, and citations in Congress for its public interest technology reporting. Earlier in her career, Nabiha was a prominent media lawyer, serving as a New York Times First Amendment fellow and leading high-profile libel and newsgathering matters at BuzzFeed.

Faisal Lalani will be revealing the winners of the challenge. He's a global community organizer with a background in building international coalitions, advising policymakers, and preserving human rights and democracy. He's worked all over the world, including in Nepal, South Africa, India, the UK, Sri Lanka, and the US, and has expertise in digital rights, education reform, public health, climate and energy transitions, clinical psychology, foreign policy, and social movements. And Faisal is also a member of the Collective Intelligence Project team.

I also want to give a shout-out to Joal Stein, who's the communications and strategy lead at the Collective Intelligence Project, and one of the key reasons we're all here tonight.

But before we dive into the Global Dialogues Challenge, I'd like to introduce Divya Siddarth, the executive director and co-founder of the Collective Intelligence Project. Previously, she was a political economist and social technologist in Microsoft's Office of the CTO and AI and democracy lead at the UK's AI Safety Institute. She's held positions at the Ethics in AI Institute at Oxford, the Ostrom Workshop, and the Harvard Safra Center. She graduated from Stanford with a BS in computational decision analysis in 2018.

Please help me in welcoming Divya to the stage.

Hi, everyone. It's really wonderful to be here. As Natalie mentioned, I was honored to be, along with my co-founder Saffron and Lama, who's at OpenAI, one of the first guests on the OpenAI Forum. So it's really delightful to be back.

So I'm Divya Siddarth. I'm the founder and executive director of the Collective Intelligence Project. And we are steering transformative technology towards democratic outcomes for all of humanity.

Do we want to put the slides up? We're happy to have worked with a range of AI labs and companies.

Actually, in addition to being, I think, the first guests on the Forum, OpenAI was our first partner in running democratic input processes into model development. Tyna was involved in that project, which was wonderful. And since then, we've worked with companies around the world to build democratic inputs directly into models, as well as with governments, like in the UK, in Taiwan, in India, on building evaluations and processes towards democratic AI.

This means two things for us. So we look at the intersection of artificial intelligence and collective intelligence in two ways. One is that we want there to be democratic steering of AI. We believe that models are going to have a transformative impact on society, and there should be democratic approaches to how they are built, how they are governed, and how they are evaluated, so that they can be used to the full benefit of humanity. But we also think about it the other way: we think about building better democracy and better collective intelligence through artificial intelligence.

So we also work on ways to improve our democracy with AI-enabled processes, and run collective intelligence projects that combine inputs from around the world to govern large, world-scale problems.

So we think about this feedback loop between artificial intelligence and collective intelligence. And we're here today, of course, to meet the winners of the Global Dialogues Challenge and talk about the Global Dialogues on AI, so I'll give you a quick rundown on what that is.

Our Global Dialogues project runs every eight weeks, with participants from around the world. We have thousands of participants from almost 70 countries who come together and deliberate on how they are impacted by AI and the kind of futures that they want to see from AI. We think this is important because this technology is a global phenomenon. And we want to have an understanding of not only what people around the world want to see, but how they're already interacting with AI and how they are already being impacted as this technology diffuses.

So we have these incredible data sets from multiple Global Dialogues runs: hundreds of thousands of responses, hundreds of thousands of votes. And of course, we are the Collective Intelligence Project, so we believe in using collective intelligence to understand things. That means that in addition to all the work we're doing on this data for frontier model evaluation and policy, we wanted to open it up to the world and try to better understand how people might use this data for their own purposes, to tell their own stories, and to create their own impact. So that's why we're here today, as part of our work on bringing together artificial intelligence, collective intelligence, and democracy. There's more on the Collective Intelligence Project at cip.org, and we're really delighted to share our work with you today.
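To make the "open it up to the world" idea concrete, here is a minimal, purely illustrative sketch of exploring such an export in Python. The file name and column names are assumptions for illustration, not the real schema; check the actual data published at cip.org.

```python
# Hypothetical exploration of a Global Dialogues export.
# File name and column names are illustrative assumptions, not the real schema.
import pandas as pd

# Assume one row per response: country code, question id, free text, and a vote.
df = pd.read_csv("global_dialogues_responses.csv")

# Share of respondents per country who voted "agree" on each question.
agreement = (
    df.assign(agree=df["vote"].eq("agree"))
      .groupby(["country", "question_id"])["agree"]
      .mean()
      .sort_values(ascending=False)
)
print(agreement.head(10))
```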

And Faisal will be announcing the winners later, which we're all very excited for. But for now, I'd love to welcome Tyna from OpenAI and the judges to the stage.

Hi, everybody. Hi, everyone. Hi. I think we are all here. Hi, everyone.

I'm Tyna. Thank you so much, Divya, for my introduction and also for the introduction to CIP.

Thank you, judges, for being here. Nabiha, Zoë, Audrey, I'm thrilled to be able to facilitate this discussion with all of you.

I've prepared a few questions ahead of time, but I would love for this to feel like a conversation. So if you feel the need, please feel free to build off each other's points.

And does anybody have a question before we get started? All right, wonderful.

All right. Just to warm us up, I would love to start with a big picture question for all of you as a way of introducing us to your perspective about this challenge.

So I would love to ask all of you individually, what drew you to participate in the Global Dialogues Challenge and what excites you about the work the Collective Intelligence Project is doing at the frontier of AI development?

I can jump in first. I am just so honored to be a part of this project.

I'll just say, first of all, I love CIP and everything that they do. And it's just an honor to be judging this with two people who I hold in such high regard, Nabiha and Audrey.

And one of the things that really excited me about this project was not just that CIP is doing the incredible work of collecting global dialogues, collecting really this kind of novel data that we don't get from other sources, but really applying their ethos, as Divya said, the ethos of collective intelligence, to this data that they've collected. And from my perspective, working as a researcher thinking about the social impacts of AI, things are moving really fast, and anything that we can do to sort of get our hands on better data and a better understanding of how the world is understanding AI is just amazing. And then to be able to put this out to the community and solicit not just more data visualizations, but also stories and more creative attempts to use this dataset.

So I'm just super excited about all sorts of aspects of this project.

I'm happy to jump in afterwards.

I am so used to being a lawyer, it's very exciting to be a judge, particularly with other judges who are so incredible. And Collective Intelligence Project, from the first time I met Divya, I was like, this is asking the right set of questions to make sure that a revolutionary technology is actually inclusive in the deepest way possible, not in a superficial, like, hey, we want everyone to use this, but actually be deeply participatory.

I've been at Mozilla for a minute now, and at Mozilla, we're defiantly optimistic that the future of technology can be good, even if it doesn't seem that way every day. But for that to happen, we're going to need actually everyone to roll up their sleeves and participate. And what was so exciting about this data set is realizing that participation isn't a homogenous thing. Everyone's starting in a different place, with a different cultural context, with a different set of needs, a different set of anxieties. And if we ignore that, we miss that it's actually a strength: in something that's moving so quickly, we have many vantage points, and to borrow a phrase from open source, with many eyes, all bugs are shallow. If we can really bring in all these different perspectives, we have a shot at building the good technology that we know we deserve.

And so to have a tangible example, not just lip service of "the world should participate and it should be good," but to say: we've done the work, we have a data set, let's try to understand, and then build from there. It's just an extraordinary act, and it's extraordinarily fun to be able to judge here. So, excited to be here.

Great. Well, I got into Global Dialogues because I announced Global Dialogues at the AI Action Summit in Paris. And I've been working with CIP for more than two years now.

And I'm really happy to see what we have seen in Taiwan, which is that people may appear polarized on divisive matters, such as what to do about deepfakes and so on. And if you poll people individually, they're going to be kind of extremes, NIMBYs and YIMBYs.

But if you put people in rooms of 10 and have an AI-facilitated conversation, then everybody is like, maybe, okay, maybe in my backyard, if you do this, if I do this, and so on, and they become much more depolarized and actually listen to one another.

Of course, facilitated conversations like this are known to produce very high-quality results, but we could not actually scale them to thousands or tens of thousands of people at the same time.

And Global Dialogues is the attempt to take what's worked in Taiwan for like 450 people in the same hour, and scale it so that it's like 5,000 people across the world in the same hour.

And I think that is methodologically a very good breakthrough, so that people can actually see: instead of just aligning AI to one single principle, one single CEO, or one single user, sometimes in a sycophantic, flattering way, how can we actually align it to communities, to facilitate understanding between them? And I'm very happy to see many winners actually using the datasets to do community tuning and empathy building and civic care building, but I won't spoil the winners now.

Beautiful, fantastic, thank you.

I have a follow-up question to Zoë's point about being a researcher who might be excited about actually digging into the data. I'm curious, as you reviewed some of these projects, did you find anything in particular that stood out to you? Were there any themes or trends that caught your attention, whether it's about what was highlighted in the data itself or the presentation of it?

I'm happy to jump in.

There were two paradoxa, also I learned recently that's the plural of paradox. That's not what I would have guessed. The Latin dictionary is helpful. There were two that were called out in the data that I thought were really fascinating. One was the emotional AI paradox and the other was the innovation anxiety paradox and I'll just describe what I thought was surprising about each.

The emotional AI paradox, fascinating, right? A little over half of the people surveyed said they're really worried about creating intimate connections with AI, but also half of the people are already using AI for emotional support. So there's this anxiety rooted in proximity and seeing how appealing it is.

And I think I was surprised by how pervasive that paradox was, because I had had a working hypothesis that in societies that were more individualistic, like much of the Anglosphere, we would see more of that, and in more collective, communitarian societies we would not see that, because why would you talk to AI about your feelings? You've got your family living next door.

And so to see that paradox actually more broadly pervasive was really fascinating to me; I wouldn't have expected that. And what that means for downstream design principles, how do we think about navigating that paradox, fascinating to think about how that then is metabolized in culturally resonant and significant ways. So that was one that was interesting.

The second one, which was the innovation anxiety paradox, the idea that the more innovation capacity you have doesn't necessarily mean optimism. It sometimes means that you have a better texture on the complexity, and the challenges, and the need for public debate. That was one that also, in different countries, was kind of unexpected.

And I will pass the mic after just observing one more thing. There was just one data point that I found fascinating: the economic anxiety gap between Canada and the United States, both North American, was tremendous. Canada, very, very anxious. And the US, which is where I spend the bulk of my time, not as anxious as the discourse might have suggested, which maybe says something about the waters that I swim in. And so it was also really fascinating to see that your assumptions about this don't necessarily bear out in the data. So both paradoxes are fascinating to me.

Yeah, one of the things, oh, sorry, you go ahead, Audrey.

Sure, yeah, I want to highlight a gap, maybe not a paradox per se: all over the world, people trusted the chatbots, the AI models, much more than the companies that made them. There's this consistent 30% or so gap. So in a place where they trust the AI models, say, 50% of the time, they will trust the companies making those models like 20%.

And in places with even higher trust, it will grow, but there's just this gap that's very consistent. Which is kind of a paradox if you think about it, because supposedly the companies are there to keep the AI models on a leash, right, through model specs, through constitutions, and things like that, so that they don't go haywire. But from the receiving end, the models are so responsive and provide real-time feedback. So actually, it's that cybernetic feedback: if I express something, the model at least tries to pay attention, right, to my actual needs, whereas the company doesn't yet, anyway, provide this kind of high-frequency feedback. And I think that is partly what's caused this disconnect in the trust levels.

Yeah, I found the trust levels fascinating too, Audrey. And one of the other comparison points in the data is comparing chatbots to elected representatives. And the chatbots are, and I think this was in multiple waves of the GDC, consistently more trusted than elected representatives. And I don't remember significant geographic variation there. So I thought that was fascinating.

I also was really interested in these questions around consciousness: some really non-trivial percentage, I think it was over 30% of respondents, responded yes to a question along the lines of, have you ever felt an AI seemed conscious? And I think that's really interesting. It's a conversation that, even just a few years ago, was really not a mainstream conversation at all. And to think that 35% of people are responding yes to that question is something to contend with. I don't know what I think about it, and we don't have to talk about it exactly, but it's pretty interesting.

Amazing. I want to dig in a little bit on the first point that, Nabiha, you brought up, at the risk of putting you on the spot.

So yeah, when ChatGPT was initially released, I went to Cameroon, and exactly that emotional AI paradox was the first response that people had: really the question of the social tapestry and what it was going to do to influence relationships of interdependence and mutual help.

So from your background, you have spanned content policy across multiple cultures. What do you think is the biggest challenge in creating shared AI governance?

When you observe these sorts of pervasive or near-universal paradoxes, does it make you more or less hopeful about our ability to bridge the gaps across cultures?

Oh, what a juicy, big question. I think there are areas of commonality that emerge here that give me optimism. For example, the people who are concerned about climate, right? Consistently throughout this, we saw people who want to believe that there is a way that we can have the advantages of this technology while not destroying the physical needs that we have as embodied creatures, for sunlight and water and a planet to live on. And that was pretty cross-cutting. So I think there are going to be areas in which it will be easier for us to find entry points of saying, okay, as embodied creatures, in flesh and blood, there are things that we need. How do we think about that governance?

And there'll be other areas, and I think content policy, as a person who's done a lot of free speech policy in her life, there are other areas where the chasms between cultural contexts will be too large to bridge. And in fact, the right thing is to create the conditions for people to govern it in a way that is authentic for their own sovereignty and for their own needs. And so we've got to figure out that balance of what are the things where the collective really does need to come together to decide, and what are the areas in which we're not trying to create one answer. I would not advocate, and I won't speak for everybody, but I will not advocate for a homogenizing, totalizing, oatmeal global culture. I want there to be the ability for people to design, and build, and govern for themselves.

I think the interesting work here is to make sure that we're surfacing the diversity of opinions so we at least have visibility into what those differences are. And then we're having a sort of educated, elevated discussion with knowledge about what that is. And I think we see that with these kinds of projects. It really creates the framework for that.

I would love to follow up on this thread with this question of who actually owns the agency to make these changes, whether it's to enable agency at the local level or to sort of consolidate the learnings from these sorts of data.

So Audrey, do you have any opinion on how smaller countries and local communities can maintain their agency in this landscape that seems to be global and local at the same time?

Yeah, definitely. So in Taiwan, for example, we use the same method as Global Dialogues. We ask people to chime in on how they want AI to behave in a specific social setting, in a scenario, online. And we collected people's conversations and actually made some principles out of them. And these principles, nowadays called model specifications, are then used to fine-tune open-weight models so that they fit our local culture. And it's actually a nested thing, right? Even within Taiwan, where we all agree, for example, that the public sector should set an example by deploying AI in a trustworthy way, when we hold in-person sessions in Taipei City and the southern Tainan City, they have very different takes on how the cultural nuances should be expressed.

And in Taiwan, we have 16 indigenous nations with 42 language variations. And so they also have very different epistemic norms, norms of knowing. And so they would then need the agency to take the sovereign AI model at the country level and re-express it, maybe fine-tune it even further, maybe just with a very long system prompt, so that it works at the cultural level and so on. So I don't think this is just a matter of a country fine-tuning a frontier lab model. I think this is a nested thing where each and every community can use democratic AI to build their own community models, which, by the way, is also a CIP initiative.
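One lightweight reading of "maybe just with a very long system prompt" is composing norm layers, from a country-level spec down to a local community addendum, into a single system prompt. Here is a minimal sketch of that nesting; all layer text and the conflict rule are hypothetical illustrations, not any real spec.

```python
# A minimal sketch of nested, community-level norms expressed as one long
# system prompt. All layer text here is hypothetical illustration.

def compose_system_prompt(layers: list[str]) -> str:
    """Join spec layers from broadest (country) to narrowest (community);
    later layers refine or override earlier ones."""
    numbered = [f"[Layer {i}] {text}" for i, text in enumerate(layers, start=1)]
    numbered.append("When layers conflict, prefer the highest-numbered layer.")
    return "\n".join(numbered)

prompt = compose_system_prompt([
    "National spec: be trustworthy and transparent in public-sector use.",
    "Taipei City addendum: prefer examples drawn from urban civic services.",
    "Community addendum: follow local epistemic norms around shared knowledge.",
])
print(prompt)
```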

Wow, I love that angle. Does anybody have another perspective they'd like to share?

Well, yeah, I would just add that one of the great things about this contest is, again, just seeing these datasets in action, and showing how people can kind of take control and build data sets like this, and create their own data sets to fine-tune existing models.

I mean, again, as Audrey said, that's not the end story. There are really deep questions here, but I think the kind of approach of letting a thousand flowers bloom or whatever it might be is very inspiring.

And I would just add, from the perspective of working at OpenAI, we have this massive document called the model spec, which is worth reading. It's a set of principles that OpenAI products attempt to adhere to. And, you know, the line is always open to give feedback on what it's missing. And I think our perspective is generally that it's an ongoing work in progress and not something that could ever be set in stone.

Well, speaking of some of these terms, like fine-tuning, model spec, these principles: I think they probably live at this liminal layer of artifacts that are produced out of these data sets that we collect, right, and which themselves are downstream of latent beliefs that people hold, beliefs that may change and may not be easy to elicit in the formats that we're used to.

Zoë, again, I would love to hear a little bit more about what you've learned about making these sorts of decisions at the model spec level, about turning things into principles.

Audrey, also, if you'd love to chime in, I would love to hear from you. How can we make these sorts of decisions understandable and actionable from these same diverse global voices to close the loop, in a sense?

I'd love to ask you that, Tyna. You're the expert on that work.

Audrey, do you want to go first?

The first good step is: don't guard modifications to the model spec, right? So I'm very happy to have contributed to OpenAI donating the model spec to the public domain under Creative Commons Zero (CC0). That is very important, because each person in every community actually has slightly different expectations for AI assistants.

Even the same person, in a family setting, in a medical setting, in a professional setting, and so on, has wildly different norms around what an ethical AI, when it enters the conversation, should be attentive to: what kind of needs, what kind of responsibility the AI agent should take, and so on and so forth. So there is really no personalized AI. There are really only kind of community-level expectations for AI norms.

And as Tyna said, it is very dynamic. And so one of the best ways to evaluate that dynamism is just to make slight tweaks to the model spec. And starting from o1, I believe, and now certainly GPT-5, they have deliberative alignment. That is to say, it's not just at fine-tuning time; rather, as the model thinks, it can check the modified model spec and think differently. So if you change the model spec locally, and then you run a deliberative or reasoning AI in that conversation, it will actually read the air, kind of check the room: is the spec different for this particular room, should I behave differently, and so on. Which is exactly what a person entering a foreign place as a foreigner would do: we would check the local norms.

And so I think keeping the model spec evolvable, and also adding crowdsourced evaluations so that people can see which kinds of model spec tweaks actually work better for a particular community in particular scenarios, works very well, too.
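One very simple way to picture such crowdsourced evaluation: show participants outputs generated under two spec variants and tally which variant wins per scenario. A toy sketch follows; the scenario names, variant labels, and votes are all made up for illustration.

```python
# Toy sketch of crowdsourced evaluation of model-spec tweaks: count which
# spec variant participants preferred, per scenario. Vote data is made up.
from collections import Counter

# Each vote records the scenario shown and the spec variant the participant preferred.
votes = [
    ("medical_scenario", "spec_a"),
    ("medical_scenario", "spec_b"),
    ("medical_scenario", "spec_a"),
    ("family_scenario", "spec_b"),
    ("family_scenario", "spec_b"),
]

# Win counts per (scenario, variant): a tweak can win in one setting and lose in another.
tallies = Counter(votes)
for (scenario, variant), n in sorted(tallies.items()):
    print(f"{scenario}: {variant} preferred {n} time(s)")
```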

And the crowdsourced evaluation is also a CIP project, at weval.org.

I can maybe speak briefly about the model spec and the learnings that we've had.

Audrey, I'm sure you know the first version of the model spec was much shorter than the second version. I think it went from 10 to 60 pages in one iteration. And the reason for that is that thinking at a high level is, I think, the easy thing to do. But very quickly, when you're in the messy scenarios and cases, the local context that somebody is encountering versus somebody else, you have to make hard trade-offs. And I think these are really where cultural and normative differences shine, right? And how do you, as a platform, as OpenAI, take advantage of all of the information that you can use to make this as good as it can be for everybody? It's always this difficult trade-off, not too dissimilar from the sort of trade-off that governance at scale, that democracy, tends to manage. How do you make most people well-off without disadvantaging minorities?

And on the other hand, you have people who have a lot more context about their local needs, their local norms. Figuring out a mechanism to communicate that to whoever is the owner of the platform or the artifact, and making sure that these things are in constant conversation, is, I think, the main exercise here. And so we're still working on figuring out how to do that in a way that stays maintainable. One thing we've learned is that demonstrating at the principle level is helpful, and demonstrating at the example level is helpful. And in many ways, we're excited to see crowdsourced contributions come in, so that we actually get better calibrated to what people really understand about our intent, and we can understand the principles they're working from. Sorry, okay, I'm going to get off my soapbox, but I would love to give Nabiha the chance to chime in if she would like to.

All right. Great. So it sounds like Global Dialogues is an ongoing effort, and it's already had a really meaningful impact through the data sets it's created. One of the questions I'm curious about, for the entire panel, anybody: beyond one-time consultations like this challenge, how can we create ongoing feedback loops, related to what I was speaking about, between communities and AI researchers or AI research companies?

Well, since I passed on the last question, I'm happy to go here. I think, at minimum, there are three categories of interventions that are important. The first is that you need meaningful space to share externalities. It's not enough to just have a space for people to say, well, I'm worried about the environment or mental health or economic impact. There have to be meaningful spaces where people can actually provide more detail about what that looks like, in a way that actually gets to the meaningful dynamics that are there.

The second, and I think it helps inform the first, is you do need education. It need not be formal education, although we do a lot of that work at Mozilla through our Responsible Computing Challenge, funding education. I think we live in a time where educational institutions are not always accessible to everyone and are undergoing dramatic disruption even where they have been quite dominant. So think about how we educate people to be able to participate: not everyone knows what fine-tuning means, not everyone knows how to read a model spec. And so thinking about the sort of equipping education that allows people to participate, not just at a technical level, but with the agency and the confidence to participate, is really important. Because a dynamic that we see often, and it's certainly been the case in some of Mozilla's engagement with civil society, is people saying, well, I'm not an expert and I don't really know, so I can't share.

And so you see the inhibiting aspect of approaching things too technically. And again, educating people that they are experts in something, they're experts in their own lives, and that this makes it worthwhile for them to, again, surface those externalities, is really important. And the last is that education isn't just "read this over here," but actually giving people across contexts the resources to experiment, which includes having other data sets that allow them to understand their own world, a diversity of data sets, access to compute, access to really the work of iterating and participating in this realm. I think that is also important, because what we want is meaningful agency and participation and confidence in that way.

And so I think those are three very high-level buckets that are necessary to have diversity of participation.

Yeah, thank you for that, Nabiha. I really like how you laid it out. I would add that one of the most powerful ways of creating an ongoing communication channel is creating new tools for communication.

So again, I'm fangirling the GDC challenge a little bit here, but I do think that the inclusion of storytelling, and of a category focused on creative uses of this data, is exactly how we can try to promote more feedback loops, because, as we're saying, not everyone feels like they're an expert.

One way to get their participation is to try to convince them that they're an expert about something, which is their own lives, but also to show rather than tell how their voice can be powerful, and show rather than tell the genuine questions that AI researchers are grappling with.

I think that when I talk to people who aren't super steeped in the AI world, they sort of assume that AI progress is just marching along, that they have absolutely no tools for participating in it, and that they wouldn't even know what to say if they were approached. What has inspired me about a lot of the submissions for this challenge is that they're really trying to show rather than tell how normal people, a wide range of people, can make a difference, and also to show rather than tell how AI researchers are genuinely looking for the answers to these questions. It really goes both ways. It's not just shouting into the void on either side.

Yeah, I'd like to bring up, I think in the original, the first CIP and OpenAI conversations, where we asked random people, like literally random people, what their priorities are when it comes to OpenAI models and interactions, and asked some of those people to share with OpenAI researchers.

And there was one sharing where somebody said: okay, I asked ChatGPT to remember something about me because this is important to me, and ChatGPT made a promise. And then the next day, I opened a new ChatGPT session, and it had forgotten, so it had broken its promise, and it feels kind of hurtful to have the bot break its promise.

Of course, later on, ChatGPT would introduce memory and therefore stop breaking that particular promise. But months or years passed between that feedback and the new product implementation.

I think what a frontier lab can do, essentially, is something like a group study mode. Instead of just a study mode that makes complex ideas approachable, fine-tuning or alignment or whatever, like the limitations of AI technologies, the study mode can explain those to people, but it can also connect people to other people who are feeling similar puzzlement or similar pain or similar anxieties or fear.

As Sam Altman just said a couple of days ago about AI motives: once connected together, people become much more empowered, because they can actually share their recipes on how to make things better.

So they're not just protesting against something; they could be demonstrating by building new interaction models, or, you know, vibe coding some newer interaction model that doesn't suffer from the previous limitations.

And I think all this is much better than simply clicking "like" on the model's replies and tuning the model to get the short-term reward of getting people to like it more, because that would probably only tune the model to be more sycophantic, more flattering.

Well, I find that future very inspiring.

And on that same thread: in the introduction, when we were talking about CIP, we talked about it as developing tools and processes. Audrey, Nabiha, Zoë, I would be curious: this project itself seems kind of like a collective intelligence experiment, right? How do you see approaches like this, whether on the tool side, the tool development and elicitation processes you've developed, or the process side, just building the muscle to ask people what they think and figure out how to incorporate it into technological development, practically helping to democratize AI development more broadly?

Well, I think of it like a group selfie, right? So every two months, you take a group selfie of the entire world, across all the different jurisdictions. But, okay, a group selfie is taken. What next, right? Do we airdrop that to our friends and families? Do we share it on social media? That's partly what the Global Dialogues Challenge is about. Because if we can have a kind of printing press to make the group selfie a relevant part of people's lives, then just like people check the day's weather for their city, we can also check the weather of what people forecast as coming, a rainstorm or whatever, when it comes to AI's integration into society.

Or people can also collectively say: okay, now that we're seeing this emerging threat of malicious AI swarms before elections or whatever, let's do something together. Because without this common urgency, it's probably ranking, I don't know, number 10, number 20, in what people view as important on the priority agenda. And so the local policymakers would not have a search engine, would not have the index of: okay, this is what my jurisdiction, my constituents, already think is acceptable to make as policy, to mitigate some of those harms or to tap into those opportunities.

So policymakers are essentially operating blind, without a GPS, without a weather forecasting system. And part of the GDC is to build that feedback loop, so that it's not just taking a group selfie, but making sure that group selfie reaches policymakers and frontier labs and everybody else.

I'm so taken by the group selfie analogy. And I think what feels very important about this next era of GDC is how you get more people in the group selfie. And one, for people to know about it, and two, to build, I think, a really important feedback loop, which is: it matters to be in the group selfie, and someone is going to use it for something. And that can only grow. This is an incredible snapshot already. It's a crowded selfie. But we can have and invite more people in.

And even when I was sharing it with people, when it was first being publicized, there's something really interesting that I think has happened in the last 25 years in technology. At the dawn of the internet, there was this idea of: we're all going to create our technological future. We're all doing it. We all have our little under-construction signs and our own websites, and this is happening.

And somewhere along the way, it became that you aren't really a creator of your digital destiny. You're a consumer of it. You get access to it if you can buy it. You can access it if you happen to work in one of these places if you have a particular set of skills.

And the quiet radicalism of the GDC is actually saying: no, it's not a thing that you buy. You actually can create it, and it's through these kinds of participatory processes that you're going to be able to. That is a direct assault on the sort of inevitability of "this is happening and there's nothing you can do about it." Really, it's speaking to a core, fundamental emotional belief that people do have agency here. And so I think getting people into the selfie, continuing to build that, and being ambassadors for that is very important. And then to identify and tell the story of the proof points: policymakers looked at your selfie and it made them understand that this was the Overton window, what was possible, or that this was a concern they hadn't prioritized. Telling those stories and creating that feedback loop, I think, will only get more people in the selfie.

Yeah, absolutely. I love so much of what both of you just said about the group selfie, and the way that we can try to get people to push against this inevitability logic. I think that sometimes people talk about technology in a way that's scary.

And what's really sad is that technology, in its oldest idea, its oldest form, was about making something new, making something possible that wasn't possible before. One of the things that's so tragic when I hear people being pessimistic about technology is that they've kind of forgotten that. It's the opposite of what it often feels like, because corporate actors have certain incentives, and so on, and people start to feel technology as if it's bearing down on them, as if it makes fewer things possible than were possible before.

And I also see that radicalism in the Global Dialogues project. I would also just say, speaking kind of personally: I was not working for OpenAI for most of my life; I only started at OpenAI about a year ago. And I came in being a little bit of a technology skeptic and, to be quite honest, a little bit skeptical about the ability to influence things at these big companies; I had this idea that there's this really strong agenda that you definitely can't influence. But I really do believe, from what I've seen in the whole tech world right now, which is vast, not limited to the US, with many global actors doing many different things under many different circumstances, that everyone really does want to figure this out together. And it's been incredible to see that some random people on Twitter can just sort of say things, and if it really resonates with communities, then it becomes something that resonates inside the builder sphere and inside the space of people thinking about policy and building the technologies themselves.

So speaking a little bit personally there, but I just really want to underscore the idea that both sides are really sort of open here, and that there are many willing receivers of those big group selfies.

Wow. On that incredibly poetic note, I think this brings us to the close of our session. I would like to thank the judges for this incredibly fruitful conversation. And now I would like to hand the stage to Faisal, who will introduce the most exciting piece yet of this hour: the winners of the Global Dialogues Challenge. Thank you.

That was such a fascinating panel. But yes, I am so, so excited for this next part. Let me share my screen really quick.

So, yes, hi, everyone. I'm Faisal, the head of global partnerships at CIP. I cannot tell you how thrilled I am that so, so many people signed up for the challenge, and from all over the world, too.

We've had people submit films, games, stories, benchmarks, even full-on research papers. It was a challenge in and of itself to pick just a few winners. Thanks to our esteemed judges, whom you've just heard from, and much, much deliberation, we first selected a group of finalists, and I'd like to give them a big shout-out before we announce the overall winners.

Apologies in advance if I mispronounce any names.

Our first honorable mention is a fantastic paper written by Arian Goenka and Alice Benoit. It's titled AI Threat Meta-Perception Benchmark. Their goal was to understand how closely frontier AI models are aligned with human concerns.

Next, we have the team of Amy Tang, Evelyn Tsui, Rishi Gupta, Layla Yokoyama, and Maddy Change. With the goal of finding compatibility in perspectives around AI, seeing what brings us together around AI instead of what sets us apart, they built this platform called Sync.

Next is Meghana Kochalakota and their excellent, in-depth paper on moral foundations theory. Meghana offers a robust framework for bridging political divides through shared values, all rooted in the Global Dialogues data.

This is Mira, an AI assistant developed by Mark Shutera that turns synthesized values derived from the Global Dialogues data into actionable guardrails that guide LLMs. A really cool idea all around.

Next is a dynamic framework crafted by Subobam Judar, Aditya Karan, and Leif Hancox-Li, called Simulating Diverse User Preferences on AI Interactions. They intelligently propose interacting with AI agents that simulate the diverse populations showcased in the Global Dialogues datasets, to understand cultural variations and explore global insights. I kind of wish there was, like, Jeopardy music as I'm saying this.

Next is Ben, who created the AI Imagination Quiz, which is a way to see what your imagination profile is and how it compares to people all over the world. Understanding these perspectives can be difficult, but Ben's made it really easy to see who thinks what, in a very engaging way.

Atharva Joshi and Atharva Vasudev from Civis created the Global Dialogues in AI Trust dashboard. It's extremely interactive, it's very beautiful, and it provides data that can be very actionable for stakeholders.

And finally, Sabiha Chowdhury, who wrote an excellent research proposal called Encoding Consent: Rethinking AI Development through the Lens of Gender, Language, and Power in the Global South.

Every single one of these honorable mentions was incredible in its own right and could very well have won the whole thing, but we had to select a few.

You can find all of these honorable mentions on our website, cip.org, which is linked in the event details and in the chat, and I highly encourage you to interact with all of them. They are all fantastic.

So without further ado, let's introduce our big four winners. First, I will start with the storytelling category. This winner was selected because they presented a very grounded narrative, offered an accessible and engaging flow, and effectively guided viewers through their findings.

The winner for this is Kashish Khara, who made a project called The World Wants to Feel AI.

Kashish explores what AI might look like if it were built not just to inform or automate, but to accompany. Their project offers a framework to understand Global Dialogues reflections through beautiful artwork, touching quotes, and thoughtful design.

As they put it, rather than treating the global dialogue responses as mere data points, I listen to them as stories—stories of loneliness and ritual, of distraction and desire, of technology that feels both helpful and hollow.

Congratulations, Kashish. You're the winner of the storytelling category.

Next, we'll reveal the winner for creativity. This winner offered a completely unique entry full of novel ideas and fresh perspectives. And the winner for this one is Megha Goel for their project, If Words Were Enough.

I know Zoë in particular would love this one. If Words Were Enough is a poetic research tool that explores how AI handles language shaped by memory, emotion, and cultural context. It's built on the Global Dialogues data, and it invites users to co-create poems with AI using untranslatable words and phrases often misunderstood by models. Each five-line poem becomes a trace of what matters: part benchmark, part archive, part miniature policy, revealing what AI systems hear and what they still miss. This project asks not just how AI understands us, but what it erases in the process. Congratulations, Megha.

Next, we have applicability. The winner for this one created something that was particularly useful for AI labs, policymakers, or AI practitioners. Their project was deemed not only meaningful but actionable. And the winner is Ganesh Gopalakrishna, for creating the AI Social Contract. Ganesh created a stunning interactive tool called the Atlas of AI Sentiment. By transforming thousands of global voices into clear, quantifiable data, this work reveals what people think about AI and how it's tied deeply to their daily lives, their economic reality, their trusted institutions, and their level of development. It's a powerful argument for why a one-size-fits-all approach to AI governance will always fail, and it provides a concrete blueprint for a more context-aware and equitable AI future.

Congratulations, Ganesh.

Again, you can find these three category winners on our website as well; we have them all linked.

But, finally, the moment you have all been waiting for, the overall winner. Woo-hoo! I don't know who did that, but I'm really glad to hear it. This project demonstrated everything that we were looking for when we launched this challenge. It tells a powerful story. It is endlessly creative, and it's applicable to a wide range of actors.

The overall winner for CIP's Global Dialogues Challenge is Saranjan Vigram for his project, AI Cultural Intelligence Agency.

Just so we can put a face to the name: if Saranjan is in the virtual house, please raise your virtual hand so we can spotlight you after I tell everyone a little bit about your project. So this project... Sorry.

-Oh, Faisal, go ahead. Keep going. Sorry to interrupt you. But he's already ready to come on stage. So as soon as you're done describing it, we'll bring him on.

-Great. I'm super excited to share your project, Saranjan. So what Saranjan has built is a project that transforms global AI dialogue data into a detective game, where young minds discover these differences not as conflicts to resolve, but as wisdom to integrate. Children don't just learn what people think about AI; they learn why cultures dream and nightmare so differently about our technological future. As Saranjan puts it: we're at a crossroads. We can continue teaching AI through technical tutorials that create brilliant engineers who build systems for people just like themselves.

Or we can nurture cultural detectives who understand that the same technology lands differently depending on the cultural soil it grows in. The children playing detective today become the leaders building inclusive AI tomorrow.

This isn't about making technology education fun. It's about preventing a future where brilliant minds create systems that inadvertently harm the communities they never learned to understand.

Today's kids will decide whether AI helps or hurts real people.

They'll design the systems, write the policies, make the choices that affect billions of lives. Let's make sure they're ready to make it work for everyone, and not just people who look and think like them. And with that, I'm so happy to introduce Saranjan. I'm assuming he's spotlight-ready.

Hey, welcome. There you go. This is very humbling and very overwhelming at the same time. Thank you so much. Yay.

I just want to encourage everyone that's here. We have a lot of policymakers in the audience, Saranjan, faculty members, PhDs from all over the world, and also some technologists. So, Saranjan is in the community now. If this project is exciting to you, you can DM him in the community, you know, meet up for a coffee date.

We hope to make this project really visible, Saranjan, because it's absolutely beautiful. And maybe you might find some collaborators to help scale it. Thank you so much. Really appreciate it. Looking forward to continuing this journey.

I just heard Zoë writes poetry, and I write poetry as well, so there's already one connection. So I'm pretty sure there are many more here.

No, you really deserve it. We loved your project, and we are so, so happy you won.

Please feel free to connect, yes, with Saranjan, but also with all of our winners. Again, at cip.org you can go and see all the winners and their submissions. We'll have their contact information available as well.

But congratulations once again to everybody. We literally couldn't do this challenge without you, and we are so, so grateful to all the judges, to OpenAI for hosting this. It's wonderful to see you all here tonight.

Congratulations to the Global Dialogues Challenge winners, and I'll see some of you again tomorrow. But if not, we'll see you in August.

Good night, everybody.
