# AI, Art, Culture & Society
# Social Science
# Higher Education
# Socially Beneficial Use Cases
AI Ethics in Action: UC Berkeley’s Data Science for Social Justice Workshop
The Data Science for Social Justice Workshop (DSSJ), organized in partnership between UC Berkeley’s Graduate Division and D-Lab, is an 8-week program that introduces graduate students to data science, grounded in critical approaches drawn from data feminism, data activism, ethics, and critical race theory. Attendees receive training in natural language processing and apply their skills to conduct discourse analysis on social media data in an interdisciplinary project. The workshop, about to conclude its third year, has trained over 75 graduate students across 20 disciplines. These students form a community of interdisciplinary scholar-activists who uphold a values-driven approach to data science and machine learning.
In this event, Claudia von Vacano, Ph.D., Executive Director of D-Lab, introduces the Data Science for Social Justice Workshop, highlighting its goals, structure, and outcomes. Then, three students who participated in the workshop, each with rich and diverse personal and academic backgrounds, present lightning talks on their experience with DSSJ, highlighting their personal journeys, the projects they worked on, and what they gained from the workshop. The event concludes with a Q&A and a discussion of how workshops like DSSJ present novel opportunities to train a generation of interdisciplinary, diverse, data-driven scientists who place values and social justice at the forefront of their work.
Claudia von Vacano · Aug 30th, 2024
Popular topics
# AI Research
# Higher Education
# AI Literacy
# STEM
# Innovation
# Career
# Everyday Applications
# Technical Support & Enablement
# AI Governance
# GPT-4
# Democratic Inputs to AI
# Socially Beneficial Use Cases
# Social Science
# Expert AI Training
# AI Safety
# Ethical AI
# Policy Research
# Public Inputs AI
# Future of Work
# Non Profit
Nathan Chappell, Dupé Ajayi, Jody Britten & 5 more speakers · Jun 24th, 2024
The session featured several nonprofit organizations that use AI to drive social impact, emphasizing their long-standing involvement with the community. The discussion was facilitated by Nathan Chappell, a notable figure in AI fundraising, and included insights from a diverse group of panelists: Dupé Ajayi, Jody Britten, Allison Fine, Anne Murphy, Gayle Roberts, Scott Rosenkrans, and Woodrow Rosenbaum.
Each speaker shared their experiences and perspectives on integrating AI into their operations, illustrating AI's transformative potential in various sectors. The event highlighted the importance of AI in amplifying the efficiency and reach of nonprofit initiatives, suggesting a significant role for AI in addressing global challenges. The conversation also touched on the ethical considerations and the need for responsible AI use, ensuring that technological advancements align with human values and contribute positively to society.
This gathering not only served as a platform for sharing knowledge and experiences but also fostered networking among community members with similar interests in AI applications. The dialogue underscored the critical role of AI in future developments across fields, advocating for continued exploration and adoption of AI technologies to enhance organizational impact and effectiveness.
# Socially Beneficial Use Cases
# Non Profit
Teddy Lee, Kevin Feng & Andrew Konya · Apr 22nd, 2024
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity. Last May, we announced the Democratic Inputs to AI grant program. We partnered with 10 teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public. At OpenAI, we’re building on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we have formed a “Collective Alignment” team.
# AI Literacy
# AI Governance
# Democratic Inputs to AI
# Public Inputs AI
# Socially Beneficial Use Cases
# AI Research
# Social Science
Daniel Miessler & Joel Parish · Jan 30th, 2024
In this talk, Miessler shared his philosophy for integrating AI into all facets of life. He highlighted a framework built for leveraging custom prompts as APIs and demonstrated several specific use cases that he hopes will resonate with OpenAI Forum members and translate across disciplines and professional domains.
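For readers unfamiliar with the "custom prompts as APIs" idea, the sketch below illustrates the general pattern: a reusable, task-specific prompt wrapped in an ordinary function so it can be called like any other API. This is a minimal illustration only, not the speaker's actual framework; the function name, prompt text, and model choice are assumptions for the example.

```python
# Hypothetical sketch: a fixed prompt exposed as a callable "API".
# Uses the OpenAI Python SDK (v1+); assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a security analyst. Given raw text, return the top three "
    "risks as a numbered list, one sentence each."
)

def summarize_risks(text: str, model: str = "gpt-4o-mini") -> str:
    """Call a task-specific prompt as if it were a function/API."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_risks("We store user passwords in plaintext and ship logs offsite."))
```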
# Career
# Everyday Applications
# Technical Support & Enablement
Sam Altman & David Kirtley · Nov 29th, 2023
Earlier this year, Sam Altman, CEO and Co-Founder of OpenAI, and David Kirtley, CEO and Founder of Helion, convened at the OpenAI office with a small group of OpenAI Forum members to discuss the future of energy. This is the recording of their discussion.
# Innovation
# STEM
Carl Miller, Alex Krasodomski-Jones, Flynn Devine & 56 more speakers · Nov 29th, 2023
Watch the demos presented by the recipients of OpenAI’s Democratic Inputs to AI Grant Program (https://openai.com/blog/democratic-inputs-to-ai), who shared their ideas and processes with grant advisors, OpenAI team members, and the external AI research community (e.g., members of the Frontier Model Forum, https://openai.com/blog/frontier-model-forum).
# AI Literacy
# AI Research
# Democratic Inputs to AI
# Public Inputs AI
# Social Science
Miles Brundage & Alex Blania · Nov 13th, 2023
About the Talk:
Worldcoin is creating a new identity and financial network that distinguishes humans from AI. This recording captures the final in-person OpenAI Forum event of 2023, featuring Alex Blania, CEO and Co-Founder of Tools for Humanity and Worldcoin, and Miles Brundage, Head of Policy Research at OpenAI. Attendees learned about the Worldcoin systems designed to generate proof of personhood, and about efforts to ensure democratic access to and governance of these systems so that their benefits are distributed fairly. We even featured the Worldcoin Orbs on site!
About the Speakers:
Alex Blania is the CEO and Co-Founder of Tools for Humanity, the technology company building tools for the Worldcoin project. He is a Co-Founder of the Worldcoin protocol with Sam Altman and Max Novendstern. Alex is responsible for leading strategy, development, and execution of the technology and tools enabling Worldcoin to ensure everyone benefits fairly from the opportunities AI presents. Alex holds degrees in Industrial Engineering and Physics from the University of Erlangen-Nuremberg in Germany and studied physics at Caltech before dedicating his full time and attention to supporting Worldcoin.
Miles Brundage is a researcher and research manager, passionate about the responsible governance of artificial intelligence. In 2018, he joined OpenAI, where he began as a Research Scientist and recently became Head of Policy Research. Before joining OpenAI, he was a Research Fellow at the University of Oxford's Future of Humanity Institute. He is currently an Affiliate of the Centre for the Governance of AI and a member of the AI Task Force at the Center for a New American Security. From 2018 through 2022, he served as a member of Axon's AI and Policing Technology Ethics Board. He completed a PhD in Human and Social Dimensions of Science and Technology from Arizona State University in 2019. Prior to graduate school, he worked at the Advanced Research Projects Agency - Energy (ARPA-E). His academic research has been supported by the National Science Foundation, the Bipartisan Policy Center, and the Future of Life Institute.
Dr. Ahmed Elgammal · Oct 18th, 2023
About the Talk: The use of AI in art making is as old as AI itself. The ways artists integrate AI into their creative process have evolved with advances in AI models and their capabilities. What value do artists find in using AI as part of their process? What is the role of the artist, and what is the role of AI, in that process? How is this changing now that generative AI is dominated by text prompting as the primary interface? What have we gained, and what have we lost, as generative AI has moved out of the uncanny valley toward utility with the introduction of large language models as part of image generation? In this talk, I will present my viewpoint on these questions, along with feedback from conversations with many artists about how they have integrated AI into their process.
# Innovation
# Cultural Production
# Higher Education
# AI Research
David Autor & Tyna Eloundou · Aug 22nd, 2023
About the Talk: Much of the value of labor in industrialized economies derives from the scarcity of expertise rather than from the scarcity of labor per se. In economic parlance, expertise denotes a specific body of knowledge or competency required for accomplishing a particular objective. Human expertise commands a market premium to the degree that it is, first, necessary for accomplishing valuable objectives and, second, scarce, meaning not possessed by most people. Will AI increase the value of expertise by broadening its relevance and applicability? Or will it instead commodify expertise and undermine pay, even if jobs are not lost on net? Autor will present a simple framework for interpreting the relationship between technological change and expertise across three different technological revolutions. He will argue that, due to AI’s malleability and broad applicability, its labor market consequences will depend fundamentally on how firms, governments, NGOs, and universities (among others) invest to develop its capabilities and shape its applications.
# Higher Education
# Future of Work
# AI Literacy
# Career
# Social Science
Saffron Huang, Divya Siddarth & Lama Ahmad · Jul 14th, 2023
About the Talk:
AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI systems behave should be shaped by diverse perspectives reflecting the public interest. Join Lama Ahmad (Policy Researcher at OpenAI) and Saffron Huang and Divya Siddarth (Co-Directors of the Collective Intelligence Project) in conversation to reflect on why public input matters for designing AI systems, and how these methods might be operationalized in practice.
The Collective Intelligence Project White Paper: The Collective Intelligence Project (CIP) is an incubator for new governance models for transformative technology. CIP will focus on the research and development of collective intelligence capabilities: decision-making technologies, processes, and institutions that expand a group’s capacity to construct and cooperate towards shared goals. We will apply these capabilities to transformative technology: technological advances with a high likelihood of significantly altering our society.
Read More About the OpenAI Grant, Democratic Inputs to AI
# Democratic Inputs to AI
# Public Inputs AI
# AI Literacy
# Socially Beneficial Use Cases
# Social Science