OpenAI Forum

Event Replay: Scams in the Age of AI

Posted Oct 01, 2025 | Views 313
# AI Safety
# Security

SPEAKER

Jack Stubbs
Member of the Intelligence and Investigations Staff @ OpenAI

Jack Stubbs is a member of OpenAI’s Intelligence & Investigations team, where he leads efforts to detect and disrupt AI-enabled scams and fraud. Previously, he served as Chief Intelligence Officer at Graphika, where he built and led a team of open-source intelligence analysts focused on identifying and mitigating online security threats. Jack began his career as a Reuters correspondent, reporting from Russia and Ukraine before moving to cover cybersecurity.


SUMMARY

Jack Stubbs, from the Intelligence and Investigations team, described how his group disrupts organized scam networks while also empowering the public to use ChatGPT as a personal safety tool. He emphasized that most scammers are not inventing new methods but using AI to scale old tricks more efficiently, and that OpenAI has disrupted operations in Cambodia, Myanmar, and Nigeria. Stubbs underscored both the human and financial toll of scams, citing $12 billion in reported U.S. losses last year and even teen suicides linked to sextortion. His team uses a “ping–zing–sting” framework to map scam patterns and has found AI involved at every stage. Importantly, he highlighted that millions of people already use ChatGPT to identify scams, with three times more scam-detection interactions than scammer interactions, and noted growing demand for free, accessible AI safety tools. Stubbs closed by stressing transparency through public reports, partnerships with groups like AARP, and collaboration across tech and civil society to ensure AI strengthens safety, security, and shared benefits for all.


TRANSCRIPT

Good morning. If you're joining us from the west coast of the United States, it is very early, so thank you for joining us. Welcome, everyone. I'm Natalie Cone, head of the OpenAI Forum, the expert community hosting our conversation this morning.

In the forum, we spotlight discussions that reveal how AI is helping people tackle hard problems, and we share cutting-edge research that deepens our understanding of this unprecedented technology's societal impact. AI is an innovation on the scale of electricity. It's transforming how we live, work, and connect.

OpenAI's mission is to ensure that as AI advances, it benefits everyone. We build AI to help people solve the toughest challenges because solving hard problems creates the greatest benefits, driving scientific discoveries, improving healthcare and education, and boosting productivity across the world.

Today, we're excited to host a conversation with Jack Stubbs from OpenAI's Intelligence and Investigations team.

Jack Stubbs leads efforts to detect and disrupt AI-enabled scams and fraud. Previously, he served as Chief Intelligence Officer at Graphika, where he built and led a team of open-source intelligence analysts focused on identifying and mitigating online security threats.

Jack began his career as a Reuters correspondent, reporting from Russia and Ukraine before moving to cover cybersecurity. We're excited to have him share more about how the team works to detect scam operations and ways ChatGPT is used to help spot scams.

Please help me welcome Jack to the stage, joining us all the way from London.

Morning, Jack.

Hi, good afternoon, Natalie.

So good to see you.

You too. Thanks for having me here today.

My pleasure. And thank you for making time in your schedule.

You're joining us from such a long way away. But we love hosting these morning events because they really cast a broad net across the different time zones that can join us. So thank you so much for being here.

Yeah, absolutely.

Okay, before we dive into your presentation, which is why we're all here, I also love to get to know our guests on a little more personal level. So if you don't mind, I have a few questions for you.

Fire away. I'll do my best.

So Jack, you have had an interesting career and background. You were a journalist before transitioning to the threat intelligence work. Tell us about that transition and what inspired you to change your career track.

Thank you. "Interesting" is definitely one word for it. But yeah, I spent a long time as a foreign correspondent and investigative journalist and then moved into what people refer to as the open-source intelligence community, so using publicly available information to do similar work, which I think boils down to identifying hard questions and then trying to find the answers to those questions.

I think you asked about inspiration. I'm not sure if this counts as inspiration for changing career trajectory, because it kind of happened afterwards, but one of the things I have definitely realized doing this kind of intelligence work, particularly at companies like OpenAI and some other places I've been, is that the opportunity for impact is so readily available. Modestly, as a reporter, I was able to do a couple of stories that really cut through and changed the world a little bit. We exposed corrupt Russian oligarchs and they were sanctioned, or we exposed cybersecurity flaws and the law was changed. But doing this work, particularly at OpenAI, we get that kind of impact every day, particularly on our team. Every day we're basically making life harder for people who are clearly acting in bad faith to try and use these technologies to do objectively harmful things. So that's definitely a large part of what I enjoy about the job, and I know that's the same for my colleagues as well.

That's fascinating, Jack, and I think I'm going to have to do some searching of your background to excavate those articles and essays because they sound really exciting and we're very lucky to have you here at OpenAI. So now you are here at OpenAI. Can you tell us a bit about the intelligence and investigations team? What does the team focus on and how does it fit into OpenAI's broader mission?

Absolutely. And I'll talk about this in a bit more detail as well. But as you said, I'm a threat investigator and part of our Intelligence and Investigations team. And our job within OpenAI is to detect, understand, and disrupt bad actors that are attempting to use our models in ways that are objectively harmful. So we're working on things like child safety, online scams obviously, and national security issues. And in terms of how that fits into the bigger picture, there's a longer conversation to have here than we've got time for today, but OpenAI's mission is AGI for the benefit of all humanity, and safety and security are such a foundational and critical part of that.

So I'll speak a bit more in a few moments about what that means for I2, how we interpret it, and how we try and execute on that.

Okay, perfect. Then we definitely have to point our compass in the direction of having you back, because this morning session, while everyone's drinking their coffee, isn't enough to unpack it all. And you are clearly a really fascinating person, Jack. So please consider this just the first of many times we host you in the forum.

And on that note, we are looking forward to hearing so much more about your work and I will turn it over to you now. Thank you very much.

Okay, so thank you, Natalie. And yeah, good morning, everyone, or good afternoon, depending on where you are in the world. As you've just heard, I've spent most of the last decade rooting around in strange places on the internet, essentially trying to better understand how different types of malicious actors are leveraging the new technologies and capabilities available to them to attempt to conduct harmful activity.

Today I'm going to be talking to you about definitely one of the most interesting and, I think, honestly one of the most important aspects of this work, which is online scams, and specifically how online scams and scammers are attempting to use AI.

We've got about 20-25 minutes together now, and in that time my plan is to share some details of what we've been seeing at OpenAI in terms of how scammers are attempting to misuse our products; talk a bit about what that means for ordinary people on the internet (you know, folks like you, your friends, and your family) and how they encounter these types of threats in the wild; and then also share more detail about how we're working at OpenAI to detect, disrupt, and specifically to deter these actors.

Before we get into more of that detail, I actually want to pull a bit more, as we were just discussing, on that thread we started with Natalie about how this work fits into the bigger picture at OpenAI.

So in my role as a threat investigator, I'm a member of OpenAI's Intelligence and Investigations team, and our team's role is to detect, investigate, and disrupt the abuse of our models. We focus on some of the most critical safety and security areas: things like child safety, covert influence operations, national security threats, violent activities, and of course what we're talking about today, online scams.

And the examples that I'm going to be sharing with you all today are also reflected in our public threat reports. We put these out multiple times a year, and they're really in-depth, detailed case studies of the types of activity that we've been able to detect and disrupt. Our hope is that by doing this, by providing these detailed and applied examples of just how bad actors are attempting to misuse our models, we can support an informed public conversation about what we all recognize are really very important issues.

In terms of how that then fits into the bigger picture of OpenAI, our mission as a company is to ensure that artificial general intelligence benefits all of humanity. And for us on the Intelligence and Investigations team, that broadly means two things. The first is preventing harmful uses of AI: as I said, detecting and disrupting those bad actors who unfortunately will always exist, and who will always attempt to misuse the capabilities and technologies available to them to cause harm.

Then the second piece of it is realizing the opportunities to also use AI as a kind of positive force in this area, a force for harm prevention. And a really direct example of this that we'll be able to talk about in more detail today is, for example, how AI tools like ChatGPT can also empower people to detect and avoid scams and help keep themselves safe online.

I think this dual approach, this kind of dual-pronged interpretation and execution of the mission, is actually really evident in the scams work we've been doing on the Intelligence and Investigations team. So on one hand, we are, every day, detecting and banning bad actors who are attempting to use our products to deceive and defraud people online. And it's probably also worth mentioning here that these aren't just one-off fraudsters. In the last few months, for example, we've disrupted large-scale organised criminal networks operating out of geographies like Cambodia, Myanmar, and Nigeria. This is the level of persistence and sophistication of the actors we're dealing with in this space. But at the same time, what we're seeing is that millions of real people are also using ChatGPT to check suspicious text messages, ask whether they should trust emails reportedly coming from their banks, or even describe a strange phone call they've received, and they're doing this to check whether something is a scam and to basically help keep themselves safe. And that's what you can see here on the screen.

I think this is a really nice example of the full arc in action. On the left-hand side is an AI-generated scam text message, which actually came from a Cambodia-origin scam operation and was sent to an OpenAI investigator earlier this year. The OpenAI investigator received this message to their phone.

And on the right-hand side, what you can see is a screenshot of the response that we got from ChatGPT when we shared that same text message and asked, is this a scam? And you can see ChatGPT there correctly identifying a number of the flags indicating this message is likely suspicious and not trustworthy, and lower down, the response also gives some really clear and good advice on how to protect yourself and the next actions you should take.

So this to me is a really powerful and at times poignant reminder that AI needs to be part of the solution, not just part of the problem. And I personally strongly believe that in the long run, using these technologies to provide everyone with an accessible, easy-to-use, reliable tool that they can have in their pocket and pull out whenever they need to check whether something is a scam will do far more to prevent harm than any number of scammers that we can detect and ban from our products.
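The check shown on the slide can also be scripted by developers rather than typed into the ChatGPT app. Below is a minimal sketch using the OpenAI Python SDK; the model name, system prompt, and sample message are illustrative assumptions, not an official recipe from the talk.

```python
# Minimal sketch: asking an OpenAI model whether a message looks like a scam.
# Assumes the `openai` Python SDK (v1+) is installed and OPENAI_API_KEY is set.
# Model name and prompt wording are illustrative, not an official recipe.
from openai import OpenAI

client = OpenAI()

suspicious_text = (
    "Congrats! You've been selected for a part-time role paying $400/day "
    "for liking social media posts. Reply YES to claim your spot."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any current chat model would do here
    messages=[
        {
            "role": "system",
            "content": (
                "You help people spot scams. Point out any red flags "
                "and suggest safe next steps."
            ),
        },
        {"role": "user", "content": f"Is this a scam?\n\n{suspicious_text}"},
    ],
)

print(response.choices[0].message.content)
```

In the app, the equivalent is simply pasting the message, or a screenshot of it, into a chat and asking "Is this a scam?", which is exactly the interaction shown on the slide.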

Before we go on, I know from doing this kind of work for a while, and from speaking on this topic, that it's always important to pause, particularly at the beginning, and acknowledge both the human and the financial cost of what we're dealing with here. I'm sure some of you have heard these numbers before or seen similar things, but when you see them written down, it does really underscore the impact of this type of abuse and the very real human suffering associated with online scams.

In the US alone, so just in the US, last year consumers reported losing over $12 billion to scams. And to be clear, those are reported losses, not an extrapolated estimate of how much people are losing: this is people reporting money they've actually lost. It's also a 25% increase from the previous year, according to the FTC.

After hearing these numbers so many times, I've felt they can become kind of abstracted, not to the point of meaninglessness, but the impact of just how big that number is can be hard to comprehend. So if you break it down a little bit, what it means is that in investment scams alone, the average victim in 2024 lost $9,000. And beyond the financial losses, there's a human cost here as well, which is frequently devastating: between 2021 and 2023, the FBI reported at least 20 teenage suicides linked specifically to sextortion scams. So these aren't just abstract problems. It's a very real threat, it's a very real abuse type, and it has a very real impact on real people's lives every day.

So in terms of what that means for what we're seeing: how are these scammers operating, and how do they work with AI? I think the main thing to call out here, and it's something we'll discuss in more detail over the next 15-20 minutes or so, is that when people hear "AI scams," in my experience they often think of deepfakes or synthetic voices: for example, getting a video call from someone who looks and sounds exactly like your boss, and this person, reportedly your boss, is telling you to immediately make a large financial transfer to an offshore bank account.

These things do happen and we've spoken publicly about the isolated cases where we've detected and disrupted this type of abuse. But the reality is that the vast majority of scam activity we see is more kind of prosaic. And I think in some ways, as a result, more worrying.

What we're actually seeing is that most often the way scammers use AI is essentially to scale old tricks: writing messages faster, translating into more languages, automating really repetitive parts of their scam workflow. The majority of the scam activity we've disrupted, from what we can see, is more about fitting AI into an existing scam playbook rather than, say, creating new playbooks built around AI.

I've got two recent examples to walk you through here to illustrate this point, and these are recent disruptions from the last couple of months. In one case, we actually had two separate scam operations, one in Cambodia and one in Nigeria, but they were both running very similar playbooks where they were setting up fake investment firms.

So in these scams, the actors are using really slick, well-designed, high-production-quality websites. They're creating online ads and operating inauthentic personas on social media, and they're using these to lure victims into private group chats.

And then once the target or the victim is in the group chat, they're pressured to invest money into a bogus trading platform, which is obviously controlled by the scammers.

And across both of these scams in two different geographies, what we saw was the scammers primarily using our models to generate and translate correspondence with their targets, so those messages to and from the potential victims, to conduct basic web research, things like researching how much, say, a teacher in a certain country might make as an annual salary, and to create content for their websites and their social media accounts.

In another case, which I think is a really nice contrasting example, we disrupted a scam center almost certainly originating in Myanmar, and this center was using AI not just for the scams themselves but also to conduct the day-to-day business operations of the center. This included organizing rotas, drafting internal announcements, assigning desk and dormitory allocations, and even managing their financial accounts.

We also saw some workers in the scam center actually asking the model what the criminal penalties are for people who get caught conducting this type of activity. So in both those cases, we're not seeing a new type of scam, or scammers changing what they're doing to incorporate AI. In fact, they're doing more of what they've always been doing, but they're able to use AI to do it cheaper, quicker, and more efficiently.

What those case studies also show, and it's a really core characteristic of the type of activity we're seeing, is that scammer activity on our models often falls into a kind of gray zone: prompts and generations that could indicate either legitimate business activities or could, when viewed in context, be part of fraudulent schemes. So things like messages to a client, creating a website, or doing financial accounts could be part of a completely legitimate business operation, or, when you investigate further, could turn out to be part of a scam network.

And what this means is that to detect and disrupt scams effectively, without also disrupting the work of our everyday, completely legitimate users, we need a more nuanced, informed approach: one that focuses on patterns of behavior rather than isolated model interactions and individual messages to or from ChatGPT.

To make sense of those behavior patterns, we've been using a pretty simple framework, which you can see here on the screen and which we've been referring to internally as the ping, the zing, and the sting. What we've seen is that regardless of their origins, and regardless of the precise tactics, the scam-related activity we've disrupted consistently follows this three-stage trajectory, this three-stage pattern.

The first stage is the ping, which is essentially the cold outreach: the text, the call, the email where the scammer initially makes contact with a target or potential victim.

We then move into the zing, which is typically an extended period of social engineering. This is where the scammer is conversing with the target and building trust. They're often trying to create a sense of panic or a sense of excitement to push people into making decisions quickly, without really thinking about the potential consequences of what they're doing.

And then the third part is the sting. This is where a scammer attempts to extract, and often successfully extracts, money or material possessions or sensitive personal data from a victim. Understanding these different stages of the scam has helped us to see where AI is being slotted in and how it's being used by the scammers, to make assessments of the impact it's having, and that in turn has informed how we tailor our own detection and disruption in order to have maximum effect.

What this looks like in terms of AI is that we're seeing AI being used at every stage of the scam. So in the ping, we're seeing scammers use AI to generate and translate bulk messages, for example, or build websites. In the zing, we're seeing that they sometimes ask the model for persuasion tactics, or have the model help them create the fake personas they operate on social media. And in the sting, we're seeing them use it to, as I said, conduct basic research on targets, maybe set up fake investment platforms, or even get advice on how to conduct a bank transfer or transfer large amounts of cryptocurrency.
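To make that patterns-over-messages idea concrete, here is a toy sketch of trajectory-based flagging. Everything in it is invented for illustration: the stage keywords and the escalation rule are hypothetical, and this is emphatically not OpenAI's actual detection logic, which the talk does not describe in technical detail.

```python
# Toy sketch of trajectory-based detection: no single interaction is flagged on
# its own; an account is only surfaced for human review when its activity spans
# the full ping -> zing -> sting arc. Stage indicators and the escalation rule
# are invented for illustration and are NOT OpenAI's real detection logic.

# Hypothetical stage indicators; each is weak evidence on its own.
STAGE_SIGNALS = {
    "ping": ["bulk sms", "translate this outreach", "cold email template"],
    "zing": ["build trust", "fake testimonial", "persuasion tactics"],
    "sting": ["handling fee", "crypto transfer", "unlock earnings"],
}

def stages_hit(interactions: list[str]) -> set[str]:
    """Return the set of scam stages an account's interactions touch."""
    hit = set()
    for text in interactions:
        lowered = text.lower()
        for stage, signals in STAGE_SIGNALS.items():
            if any(signal in lowered for signal in signals):
                hit.add(stage)
    return hit

def needs_review(interactions: list[str]) -> bool:
    # Escalate only when the full three-stage trajectory is present.
    return stages_hit(interactions) == {"ping", "zing", "sting"}

# Each message alone could pass for ordinary business activity (the gray zone);
# it's the combination across one account's history that stands out.
account_history = [
    "Translate this outreach message into Portuguese",
    "Write a fake testimonial from a happy investor",
    "Draft a message asking the client to pay a handling fee in crypto",
]
print(needs_review(account_history))  # True: the trajectory, not any one message
```

The design choice mirrors the point above: individually innocuous interactions stay untouched, and only the full ping-zing-sting arc across an account's history justifies a closer look.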

The takeaway here is that the use of AI in scams is all about efficiency and scale. It's about doing more of what they have been doing, but doing it faster and cheaper. To give you all an applied example, we can look at a scam that we disrupted earlier this year; the details were included in our June threat report, and you can see some pages of that here on the screen. This is a scam network that we call Operation Wrong Number. It was a scam center, very likely based in Cambodia, and it revolved around fake job ads promising really large amounts of money for very simple, low-effort online tasks. Once the victims were hooked in by the scammers, they were then pressured into paying deposits or fees, sometimes bogus tax payments, in order to access their purported earnings.

The main use of AI in this scam, as we'll see, was basically to generate and translate large amounts of text message content: being able to create loads of different text messages in multiple languages near instantaneously. In the ping stage of this scam, scammers used AI to generate SMS messages, as I was saying, and they were then able to send those out to a large number of potential victims.

And these messages, they offered easy money for minimal work, so things like getting paid just for liking or sharing social media posts. Our assessment is this was designed to grab people's attention and pull them in with promises of earning potential or easy money that seemed too good to pass up.

Once people engaged with the scammers, if they replied to one of those messages, the actors would then move into the zing, that extended period of social engineering. Here they used AI to generate more messages and translate them into different languages, but this time the content was aimed more at motivating the targets and building trust with them.

So sometimes these messages would include, say, fake testimonials from fictitious people who had purportedly been part of the scheme and made lots of money. Or the scammers would create and share fake screenshots showing a target's reported earnings.

We also on occasion saw the scammers make small upfront payments to essentially legitimize themselves with the target, and then encourage people to recruit their friends and families to be part of this purported high-earning scheme.

The last part of this is the sting. So here, again, scammers are using AI to generate and translate different text messages. And here the messages are telling the victims that they either need to pay deposits or handling fees, sometimes tax charges, to access their purported earnings, or sometimes to pay a fee to unlock a more lucrative work stream, to kind of level up in the earning pyramid.

Sometimes this would be a few hundred dollars, occasionally significantly more, with the money transferred via cryptocurrency. But once the money was sent, and once the scammers, we believe, had made the assessment that they had got as much as they could out of this individual, they would simply disappear. So that's an overview of how scammers are attempting to incorporate AI into their operations and workflow.

The immediate next question, at least in my mind, and I know one that comes up a lot, is: well, what can we do to stop them, and how can we fight back? In our work at OpenAI, I would say there are multiple different strands to this.

The first and most obvious, and some of what we've just been talking about, is that we can detect and disrupt this activity where we find it on our products and using our models. And to do this, we can also use our models ourselves, combining that with human expertise and really targeted, deep-dive investigation to identify and ban these networks from our products.

Our upcoming public threat report, our next report, goes out in a couple of weeks and is going to be publicly available. It goes into a lot of detail about the types of scam operations that we've taken down in recent months: as I said earlier, coordinated and organized criminal networks coming out of geographies like Cambodia, Myanmar, and Nigeria, as well as a number of the other threat areas and abuse types that we've been dealing with on the Intelligence and Investigations team.

The second part of this is how we've seen ChatGPT itself actually being used as a safety tool. So in our investigations of different scam networks, while we're investigating that violative activity, what we've also seen at the same time is people using ChatGPT millions of times a month to detect and avoid scams.

So for example, taking a screenshot of a suspicious text message or sharing a copy of a suspicious email, and then simply asking ChatGPT, does this look like a scam? And in fact, in every operation that we've disrupted in the last few months, we've also seen people independently using ChatGPT to identify that same scam and get advice on what to do and how to keep themselves safe online. And that to me, as I said earlier, is hugely encouraging, personally very exciting. If we can provide this accessible and easy-to-use consumer safety tool, that will do so much more to combat scams than any number of scammers that we could find and ban from our products.

A really compelling data point around this is some analysis that we've done over the last couple of weeks. Based on the current numbers, we estimate that there are currently up to three times as many scam-detection interactions with ChatGPT as there are interactions with scammers. So in more accessible terms, what that means is that for every attempt by a scammer to misuse ChatGPT, at the same time three people are using it to protect themselves from scams.

We think this presents a real opportunity, and not just an opportunity for OpenAI, but for all of us as a community: an opportunity to empower people to detect and avoid the scams they encounter in the wild and help keep themselves and their friends and relatives safe. Some of the numbers that speak to that come from a survey conducted last week, and that survey showed that two-thirds of Americans say they're concerned about being targeted by or falling victim to an online scam. And almost the same number say that if one was available, they would be happy to use a free, accessible, easy-to-use AI tool to spot scams.

But only 31% of those people had thought about using an AI tool like ChatGPT to do exactly that.

And the message we want to get out there is that this tool already exists, it's free, it's ready to use, and it's something that can very easily and very effectively help people keep themselves and their friends and relatives safe online, and really help us push back against the tide of scams.

And then the last part of this, and typically I would say the most important, is that when it comes to combating scams, we really understand the importance of sharing information. So we publish public-facing threat reports like those that I've mentioned today, we partner with consumer safety groups to educate vulnerable communities, and we collaborate with different tech companies and anti-scam organisations to disrupt networks across the internet.

You know, we know from doing this work that no single company, no single organization can meaningfully tackle this alone. One example of that, just from some recent work that we've been doing, is a training which we released last week in partnership with AARP, the American Association of Retired Persons. This is a training geared toward helping senior citizens use ChatGPT as a tool to detect and avoid scams. It's available now, live on the OpenAI Academy, and I'd welcome everyone here to check it out and share it with someone who might find it useful.

So just to sum up before we go back to Natalie: our vision in this space is really quite simple. The vision here is to identify and disrupt scammers while also empowering consumers with the tools they need to stay safe. And I think all of us, from our different vantage points, have different roles to play, and there are different ways that we can contribute. The first, I would say, is that we all have a responsibility, a responsibility to ourselves, in fact, to stay cautious online.

And one of the ways we can do that is by using tools like ChatGPT, essentially as a second pair of eyes, to double-check suspicious messages and help keep ourselves safe.

The second piece of this is to support and advocate for responsible information sharing. As I was saying earlier, we know that no single tech company, no single organisation can effectively combat this alone. We need to share information so that we can disrupt and combat these scams across the internet.

And then the last part of this, I would say, is being part of the conversation. So sharing your ideas and your feedback, and alerting folks like us and others to the opportunities that you all see to better leverage this technology to help keep people safe from online scams.

Thank you so much for your time today. It's been an absolute pleasure to talk to you all and share a bit about what we're doing in this space. As I said earlier, if you'd like to learn more about what we do on the OpenAI Intelligence and Investigations team, about scams, and about the other types of abuse we're dealing with, please check out our public reports. We've got one going out in a couple of weeks, and they're really a great resource for getting some hands-on detail about the type of work we're doing. And yeah, I'd like to welcome Natalie back to the stage.

Jack, thank you so much for that. Every single day in the forum we learn all sorts of new ways to use ChatGPT, and honestly, trying to identify threats and scams was not a use case I had even thought about. So thank you so much, Jack.

So on the horizon, folks, we have lots of events in October, and a couple of them are even in person. So I'm just going to share a few of those with you right now. If you check your inbox, you probably got our newsletter this morning. We're going to continue our detective series on October 7th with Disrupting Malicious Uses of AI. We're going to be hosting Jim Sciutto from CNN and our very own Ben Nimmo from OpenAI.

And then later in the month, actually the very next day, October 8th, we are going to be hosting Lukasz Kaiser for Learning Powerful Models: From Transformers to Reasoners and Beyond. This one is definitely for our STEM folks, our engineers, our startup founders, but he will also be presenting in a way that is digestible and understandable for all of us. And there's no one else in the world I would rather hear this information from than Lukasz, so please join us.

And then on October 20th, at our research lab in San Francisco, we are going to be hosting the inaugural summit for our Higher Education Guild in the OpenAI Forum. We have reserved many seats for forum members, but capacity is limited, so if you received the newsletter, open it up and request your invite now. I hope to see you there.

And then last but not least, the week after our Higher Education Guild summit, the forum will be traveling to Washington, DC for the very first time. We're going to be collaborating with AI for Libraries, Archives, and Museums, which includes the Library of Congress, the National Gallery of Art, and the Smithsonian. We're also going to be joined by our very good friends in the forum, Ask Mona from Paris, to bring the Thomas Jefferson Library to life with AI. We're going to be hosting a series of presentations, a panel discussion, and then a live interaction with Thomas Jefferson's library. So we hope to see you there. There are 400 seats in the auditorium, and we have reserved at least 100 of them for OpenAI Forum members. So please request your invite; I hope to see you in Washington, D.C. That one's going to be super fun. And thank you all for joining us.

There were 300 people here this morning. I really wasn't expecting that for an early morning OpenAI Forum event. I hope you guys enjoyed it. I hope the content was useful, and I will see you next week.
