
Shaping artificial intelligence through collective intelligence

Tags: Democratic Inputs to AI · Ethical AI · AI Governance · Policy Research

Using collective intelligence to shape AI

September 29, 2023
Lucy Andresen

In this article you will learn:

  1. Why AI governance is both urgent and challenging
  2. How the Collective Intelligence Project aims to shape AI development for the collective good
  3. Why existing governance models are failing to keep pace with AI
  4. How OpenAI is partnering with CIP to pilot public input to AI through ‘alignment assemblies’
  5. How AI can streamline collective intelligence processes

Last year, the White House released the Blueprint for an AI Bill of Rights. This document, although not currently enforceable, signifies a crucial step towards regulating the technology that is already transforming our lives at a rapid pace.

One of its core principles is the right of the public to have input into the design, use, and deployment of AI systems. The argument is that given the significant and far-reaching impacts of AI, this technology should be developed in consultation with the people it will affect. This sounds good in theory, but the practical challenges involved in eliciting and aggregating this input, not to mention translating it into actual outcomes, are considerable.


“We need a lot more innovation to guide this fast-moving, transformative technology.”

OpenAI has decided to tackle this by funding experiments into innovative democratic processes to govern AI behavior. To kick off the OpenAI Forum, Divya Siddarth and Saffron Huang from the Collective Intelligence Project (CIP) joined us to discuss their vision for shaping AI development through public input. The conversation was facilitated by Lama Ahmad, an OpenAI Policy Researcher with a passion for bringing diverse perspectives to bear on questions around the social impacts of AI and the trajectory of system development.

Divya and Saffron founded CIP earlier this year with a mission to "direct technological development towards the collective good." To do so, they are focusing on integrating public input into AI systems. According to them, the key to aligning AI with the values of humanity is collective intelligence, a term they use to encapsulate "decision-making technologies, processes, and institutions that expand capacity to address shared concerns."


There are incredible risks; there are incredible opportunities. How do we bring a sense of collective power to this space?

The duo is full of verve and vision. CIP presents a rallying cry to move fast, not to break but to shape things, and to take this pivotal moment in human history as an opportunity to create revolutionary democratic processes that better serve the people. Rather than stewing in existential dread at all the ways AI can go wrong and already has, CIP encourages us to take the elephant by the tusks, if you will, and steer it towards a place of collective flourishing.

There's a good reason for the urgency. As Saffron cautions, "technology is accelerating much faster than our democratic structures can handle, and it's affecting people really quickly and really deeply." CIP seeks to address this by expanding the capacity for groups to define and accomplish collective goals, thus creating a new governance model for transformative technology.


Our existing collective intelligence systems aren't up to the task.

Current democratic processes are unable to keep up with the accelerating pace of technological development and have failed to address public concerns ranging from regulation of social media platforms to coordination on climate risks. Add to this concerns about effective representation, and it's clear that a better model for collective decision-making is necessary when it comes to AI governance.

Leaving this up to market mechanisms is not a viable option either, as profit-maximization is a poor proxy for human values, and one that could result in disastrous outcomes. On the other hand, it doesn't seem feasible or advisable to stymie technological development indefinitely due to a lack of both consensus and the means to reach one.


How do we put collective intelligence over AI into practice?

It's not yet clear what an effective collective intelligence model for transformative technology will look like. Although they have identified some essential design questions (such as which decision is being informed, whom to ask, how to phrase the question, and which tools and methods to use to reach the target public), Divya and Saffron believe that the model may vary depending on the problem to be solved. And the best way to find out what works? Run a bunch of pilots trialing different approaches.

Through collaboration with OpenAI, CIP is piloting alignment assemblies. This term is intended to capture both the goal, aligning AI with collective values, and the process, assembling a diverse array of participants to discuss and deliberate on their needs, preferences, hopes, and fears regarding this emerging technology.

The initial pilot, run in June, focused on risks and evaluations for AI systems, using wikisurvey tools to produce a prioritized list of risks reflecting the key concerns of the US public. The results will in turn inform model evaluations and release criteria, along with broader standards-setting processes and regulations.
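
To make the mechanics concrete, here is a minimal sketch of how wikisurvey-style pairwise votes ("which of these two risks concerns you more?") might be aggregated into a ranked list. The Laplace-smoothed win-rate scoring is an illustrative assumption, not necessarily the method the pilot's tools actually used.

```python
from collections import defaultdict

def rank_risks(votes):
    """votes is a list of (winner, loser) pairs, one per pairwise comparison."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    # Laplace smoothing keeps rarely shown items away from the
    # extreme ends of the ranking.
    scores = {item: (wins[item] + 1) / (appearances[item] + 2)
              for item in appearances}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

votes = [
    ("misinformation", "job displacement"),
    ("misinformation", "biased outputs"),
    ("biased outputs", "job displacement"),
]
print(rank_risks(votes))
# [('misinformation', 0.75), ('biased outputs', 0.5), ('job displacement', 0.25)]
```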


Balancing public and expert input is a complicated design question.

Divya and Saffron emphasize the importance of balancing public and expert input, revealing the thoughtfulness that goes into designing effective collective intelligence processes. "You can't go to folks and say, 'What kinds of evaluations would you design for models?'" explains Divya. Saffron agrees, adding that while "the public are worried about specific things, and I think that should be accounted for," experts "have a lot of context and they know what's plausible to evaluate on."

Although evaluations aren't the most accessible concept when it comes to gathering public input, Lama explains that they are "actionable and important for AI companies" and an essential step to "understand the capabilities and limitations of the models." It's important to first understand the technology in order to make legitimate decisions about how to regulate it.

Divya reports that the initial responses have been promising, with participants engaging in the discussion in a nuanced and thoughtful way. The next challenge is to figure out how to aggregate this collective intelligence data and ensure it translates to actual outcomes.


We're excited about the ways that the technology can help.

Both CIP and OpenAI are interested in exploring how AI could aid collective intelligence processes. For instance, language models may be able to assist with facilitation, steering group discussions towards consensus. They could also help aggregate large volumes of qualitative, natural-language data, allowing nuanced opinions to be synthesized into solutions that work at scale.
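
One way this could look in practice: cluster free-text responses into themes, then have a language model write a neutral synthesis of each theme. The sketch below is illustrative only, with TF-IDF and k-means standing in for a neural embedding model and `llm_summarize` as a hypothetical placeholder for an actual model call; it is not CIP's or OpenAI's pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def llm_summarize(texts):
    # Hypothetical stand-in: in practice this would prompt a language
    # model to write a neutral one-paragraph synthesis of the texts.
    return " / ".join(t[:60] for t in texts[:3])

def synthesize_themes(responses, n_themes=3):
    # Embed responses (TF-IDF as a lightweight proxy for a neural
    # embedding model) and cluster them into themes.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
    labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(vectors)
    # Summarize each cluster so nuance survives the aggregation.
    return [
        llm_summarize([r for r, lbl in zip(responses, labels) if lbl == k])
        for k in range(n_themes)
    ]

responses = [
    "I worry AI chatbots will spread misinformation during elections.",
    "Deepfakes and fake news are my biggest concern.",
    "AI could automate my job away.",
    "Automation may displace workers faster than new jobs appear.",
    "Models can be biased against minority groups.",
    "I'm concerned about discriminatory outputs.",
]
for theme in synthesize_themes(responses):
    print("-", theme)
```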

This trade-off between nuance and scale is a classic issue for collective intelligence, and one AI may be able to mitigate by enabling more efficient collection and analysis of public input. In turn, Saffron hopes to "directly feed people's values into the technology and iterate on that," improving model alignment through reinforcement learning from collective intelligence feedback. In essence, CIP envisages a model where collective intelligence helps shape AI while AI helps innovate collective intelligence processes.
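
As a thought experiment on what "reinforcement learning from collective intelligence feedback" might consume, the sketch below packages assembly outcomes as preference records of the kind used in RLHF-style training. The `Preference` schema and the agreement threshold are hypothetical assumptions for illustration, not an actual OpenAI or CIP format.

```python
from dataclasses import dataclass

@dataclass
class Preference:
    prompt: str       # scenario shown to assembly participants
    chosen: str       # model behavior participants preferred
    rejected: str     # model behavior participants rejected
    agreement: float  # fraction of participants who agreed

def to_training_records(prefs, min_agreement=0.7):
    # Keep only broadly agreed preferences, so a reward model learns
    # from points of consensus rather than contested values.
    return [
        {"prompt": p.prompt, "chosen": p.chosen, "rejected": p.rejected}
        for p in prefs
        if p.agreement >= min_agreement
    ]
```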

We're looking forward to hearing more about the findings of these pilots in the coming months. In the meantime, you can read about OpenAI's Democratic Inputs to AI grants here, and find out more about the Collective Intelligence Project in its white paper. You can also watch the full recording of the event here.
