OpenAI Forum

Event Replay: Disrupting Malicious Uses of AI

Posted Oct 07, 2025
# AI security
# Intelligence and Investigations Team
# Threat Intelligence Report

SPEAKERS

Ben Nimmo
Principal Investigator @ OpenAI

Ben Nimmo is Principal Investigator on OpenAI’s Intelligence and Investigations team. He was a co-founder of the Atlantic Council’s Digital Forensic Research Lab (DFRLab), and later served as Graphika’s first head of investigations, and as global lead of threat intelligence at Meta. He has helped to expose foreign election interference in the United States, United Kingdom and France; documented troll operations in Asia, Africa, Europe and the Americas; and been declared dead by an army of Twitter bots. A graduate of Cambridge University, he speaks French, German, Russian, and Latvian, among other languages.

Jim Sciutto
Anchor & Chief National Security Analyst @ CNN

Anchor and Chief National Security Analyst for CNN, covering the Defense Department, State Department, intelligence agencies as well as foreign affairs with frequent assignments in the field from Ukraine to the Mideast to Taiwan. I have written four books on international affairs including most recently the NYT bestseller and NYT 2024 notable book “The Return of Great Powers.” My work has a special emphasis on great power competition among the U.S., Russia and China as well as middle powers Iran and North Korea.


SUMMARY

The OpenAI Forum focused on the intersection of AI, security, and information integrity. Ben Nimmo (OpenAI’s Principal Investigator) and Jim Sciutto (CNN) discussed how AI is transforming threat intelligence, including efforts to detect and disrupt covert influence operations by authoritarian actors. The conversation underscored how OpenAI is defending democratic information ecosystems, advancing global safety, and empowering users to protect themselves from scams through AI tools.


TRANSCRIPT

Hello, everyone, and welcome to today's OpenAI Forum. We're so glad you could join us for what promises to be a fascinating conversation about how artificial intelligence intersects with global security, information integrity, and online safety.

I want to mention that we aren't going to host a Q&A for this session, but this won't be the norm going forward; it's simply how logistics and speaker availability worked out for this one event.

At OpenAI, we operate a dedicated intelligence and investigations team of analysts, investigators, and security engineers who use a mix of OpenAI technology and traditional investigative methods to identify, track, and disrupt malicious uses of our models. Their work helps us understand emerging threats, coordinate with our safety and legal teams, and share insights with industry partners and the public. This helps build stronger defenses, not just for OpenAI, but for the broader digital ecosystem.

Today, we're sharing more about that work. Our discussion will explore findings from OpenAI's latest threat intelligence report, which examines the evolving tactics of online covert influence operations and coordinated attempts to misuse AI.

We'll talk about what our investigators are seeing across the global threat landscape, and how these insights can help shape a safer online environment.

We're honored to be joined by two outstanding speakers.

First, Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team. Ben has spent his career uncovering covert influence campaigns and exposing organized disinformation networks around the world. Before joining OpenAI, he co-founded the Atlantic Council's Digital Forensic Research Lab, led investigations at Graphika, and served as global lead for threat intelligence at Meta. His groundbreaking work has illuminated election interference and troll operations across continents and helped define the modern field of digital forensics.

And guiding today's discussion, we're joined by Jim Sciutto, CNN's Chief National Security Analyst and anchor of The Brief with Jim Sciutto. Jim brings decades of experience reporting from conflict zones and covering intelligence, foreign policy, and the shifting dynamics of global power. He's also the author of several acclaimed books, including The Return of Great Powers, The Shadow War, and The Madman Theory. Together, Ben and Jim will explore how AI is reshaping both the threat landscape and the defense strategies that protect us all.

Please join me in welcoming Jim Sciutto and Ben Nimmo.

Thanks very much to OpenAI. I'm really happy and privileged to be taking part in this. Thank you to Ben for allowing me to pepper you with questions over the next 30 minutes, and thanks so much to the community for joining us here today. I'm looking forward to the conversation.

Absolutely. Thank you, Jim, and thanks to the whole Forum and the whole community for making this happen.

Ben, I want to begin with your background, because you've been studying influence operations from a number of angles for so many years, whether at NATO, as a journalist like myself, elsewhere in the tech world, and now at OpenAI. Can you tell us how you found your way to where you are today, and how that background has helped you in your current role at OpenAI?

Absolutely, and it's funny that you should talk about the tech world, because my background is about as non-tech as you can imagine. When I was at college, I was a linguist, and I specialized in medieval languages and literature. I was a travel writer, I was a journalist. I worked in Eastern Europe for a long time. And that's where I started coming face-to-face with covert influence operations. Thinking back to the early 2000s in the Baltic States, there was a lot of Russian activity there. And so I started getting interested, as an observer, as a journalist, as a citizen, in how that activity worked and what it meant. And increasingly, particularly from about 2014 onwards, more and more of the talk was about influence and deception and influence operations using social media. And I was a medievalist. I had no idea how this stuff worked. But it was clearly important, and so I wanted to understand. So from about, I guess, 2015 onwards, I was trying to learn: how does this technology fit together?

Influence operations are always people on one end trying to influence people on the other end. They have been throughout history. You can go back 3,500 years, you would still find the same patterns. But you have this box, which is called technology in the middle. And I had no clue how it worked. And so I tried to work out from first principles, for example, what is a Twitter bot? Twitter bots were very big back in 2015. I was trying to work out what they were, but I didn't have any of the technical background to be able to explain the technology.

And so I had to learn, and I had to learn to put it into simple language that I could understand: how does this technology fit in between the operators and the targets?

And I realized that actually most of society is more like me than like somebody who works in the tech sector. They don't understand how the algorithms work either. We don't understand how all this stuff fits together, but we have to live with it. We have to learn how to identify it.

And so I started writing analytical pieces, which were basically saying, for people who have no tech background like me, here's how you can spot a fake account. Here's how you can spot a bot. Here's how you can spot an AI-generated profile picture. And because I've had to do it from the front end as a user, I was able to start explaining to other people, here's how this all fits together.

But technology moves fast. There are always new platforms coming up. And so I started out as an open-source investigator.

I moved on, I worked at Meta for three years, but then more and more there was the drumbeat about AI. And so my drive has always been to try and understand and explain how the different pieces fit together. How does that tech fit between the humans on each end? And AI was clearly the big coming piece of tech that had the potential to revolutionize everything. And I wanted to understand, I wanted to know: how is this going to work? And OpenAI were looking for a threat investigator, and I was lucky enough to get the job.

So for the last year and a half, I've been trying to understand and explain where does AI fit in? Where does it fit into that bigger picture of the kind of work that I've been studying for well over a decade now and the kind of influence operations that human beings have been running as long as human beings have been a thing.

First of all, here's to humanities majors. I was a Chinese history major and somehow ended up in this space myself, and it's interesting as well because my first real encounter with it was during the 2016 campaign with Russian influence operations in the U.S. election.

You like to say that AI is a tool for malicious actors online to get faster at things they were already doing. In other words, it acts as a force multiplier, largely, as opposed to moving them into new spaces that they didn't occupy before. Is that fixed, right? I mean, in your time studying this, are they pushing outside of those usual spaces over time, or are they really still staying in phishing and influence operations via social media, et cetera, and using AI largely, to date, as a way to make those tools better?

The way I would think about it is that what we've been seeing for the last year and a half, since I've been at OpenAI, is evolution rather than revolution. There was a fear when ChatGPT came out, and then again when things like DALL·E were launched, that this would somehow completely revolutionize the influence operation space, and there was a fear that you'd see a whole kind of different beast emerging. What we've actually seen, in operation after operation, is that threat actors take the workflow and they fit the AI into it. They're not taking the AI and then bending the workflow around it.

For example, you'll get an operation which is still running fake accounts on social media and is still running fake email addresses so they can get those accounts set up, and then, instead of manually typing very badly written tweets and social media posts, they're asking ChatGPT to do it. The great majority of the activity we see still fits in that bucket of: here's an existing operation which has now heard about AI and is trying to work out how to leverage it.

And it is interesting that if you look at the disruptions that we've done of threat actors who are using OpenAI's models, a lot of them were pre-existing influence operations which are known to the community, which had been running, let's say, websites or social media accounts for years, but now started using ChatGPT as well. They're really kind of fitting it in.

That said, and I think we're still in a stage of evolution rather than revolution, but AI evolves really quickly. It's a commonplace, if you work at OpenAI, how fast things move here. And so that means that for the threat actors, the tools are evolving really quickly, and that's one of the things we're seeing that we call out in our latest threat report. If you think back to early 2024, the operations that we were looking at were basically single-mode. They were using ChatGPT to generate text. It might have been debugging code, it might have been generating a tweet, but basically, it's always working with text.

By the second half of last year, we were disrupting operations that were multimodal. So they'd be generating text and image. And there was a particular Russian operation that we call Stop News, a very, very prolific generator of AI images. And at that point, I suspect that a lot of the folks who are listening are thinking, aha, deepfakes.

But actually, what we were seeing was that these actors were not generating images that portrayed an event that never happened. They weren't generating a photo of one politician pushing another politician off a roof or something. They were asking for bright, 1930s art deco style posters; they were asking for cartoons satirizing different characters. It's not the kind of content that anybody would take for a representation of the real thing, but it was really just trying to get people to look. You know, if you think about when you're doomscrolling Twitter, a really bright image will make you stop and read, and it felt like that was the goal, just doing this multimodal generation.

But what we're seeing now, what we've been disrupting for the last few months, it's not just multimodal. It's multi-model.

So that same operation, Stop News, that a year ago was using ChatGPT to generate texts and images, recently tried to come back to our platform. We caught them; we banned them again. But what they were asking the model to do was to generate scripts and video prompts for fairly classic propaganda videos about how wonderful Russia is and how terrible Europe is, that kind of stuff. But then, by every test that we've been able to apply, they're not using any of our tools to generate the videos. They're going somewhere else. We don't have hard evidence of which tool they're using, but they're hopping around between different AI models.

They use ChatGPT for one bit, and they use somebody else's model for something else. We've even had a case where we disrupted an influence operation in the middle of last year, and we reported on it in October, which was running a whole load of social media accounts across different platforms. We banned them from using our models.

I think they tried to come back a couple of times and we stopped them again. They gave up using us, and all their social media assets went silent for about 10 weeks. Then they started tweeting again, and the content was no longer being generated by our models. But what happened a few months later was that Anthropic published a threat report exposing the same operation, which they had now caught and banned. So that's a case where it looks like an operator has started off using us; we've caught them and banned them. They've gone to Anthropic; Anthropic have caught them and banned them. Where they've gone now, we don't have eyes on that, right? But that's the latest evolution in this space, this sort of multi-model activity.

Yeah, it's interesting. And just in the domestic environment, when you see some of these AI-generated videos that make their rounds in the political debate, oftentimes they're not meant to fool folks into believing they're actually real, but to grab eyeballs, right, and to create conversation.

A consistent theme throughout is how authoritarian states are using AI, again, as a force multiplier for existing malicious activities. One that stood out to me in the report was the targeting of Uyghurs with travel monitoring, which is disturbing on so many levels, right, because it's a targeted group. They've been tracking them for years, but now they're using AI to do so.

So I'm curious how OpenAI stops them, you know, stops a malicious activity like this, and do you find, particularly with China, that the skills are advancing to a point where it's hard to stay ahead of them, right?

Like, a statistic always sticks in my mind. Chris Wray, when he was at the FBI, always talked about how China outnumbered the FBI in cyberspace 50 to 1. If they had one FBI guy trying to track these operations, there were 50 somewhere in China.

So how do you stop these things? And is it hard to stay ahead of them?

I mean, a couple of thoughts there.

One of the interesting things is that in the case you're referencing, what we detected and banned was ChatGPT activity that was likely linked to a Chinese government entity. So it's somebody who the evidence points to being in China, working for a Chinese government entity, asking ChatGPT for help with stuff.

But the help they're actually asking for is: help me build a work proposal for a very large-scale tool which would monitor Uyghur travel. And I believe that what they were coming up with was a project plan for monitoring different kinds of bookings, airplane bookings and train bookings, and then matching them against lists of people they were interested in.

But the ChatGPT usage was purely drawing up this documentation. It was using the model to help write a work plan, kind of a pitch. And then there's no evidence, I mean, we've been looking, there's no evidence at all that they were using any of OpenAI's models to actually implement that plan. So it's this strange space where it looks like somebody in China is thinking, okay, I need to write this work pitch. Maybe I'm in a rush. Maybe I'm not very confident in my writing skills. I'm going to get an AI model to do it, but I'm not going to use that same AI model to then implement what I'm writing up.

And so for us, it gives us this very rare snapshot into, here's what somebody appears to be planning to do with AI using a different model somewhere down the line. Because of where we sit, we don't have evidence whether they ever developed that tool. We don't have evidence of whether it was ever deployed. But we can see that somebody was using ChatGPT to try and think about how you would scale this thing up.

Now that's really interesting for a couple of reasons. One is that it does give you just a snapshot of the direction of travel. Somebody who is working on a project plan for a large-scale monitoring device, presumably at the end of their journey, what they want to have is a large-scale monitoring device. But also, if they're only drawing up the project plans, they haven't got it yet. And so it's this really interesting snapshot in time where you're seeing a threat actor who is asking for help with a very small subset of their overall activity.

But their aim, likely, is to put it somewhere else. And it's one of the interesting things of working in the AI space: because you get that model hopping, sometimes somebody shines a flashlight for a moment onto a set of activity.

But more broadly, what we've seen with Chinese influence operations is that there's always been a tendency that they've been quite large and noisy. I was lucky enough to detect and discover a very large-scale Chinese influence operation back in September 2019, which I called Spamouflage, because they were posting spam to use as camouflage. That's where the name comes from. Next time I name an operation, I'll give it a name that people can spell.

But Spamouflage was always focused on very large-scale networks of social media accounts, and the social media platforms would find them and take them down. I was at Meta when we did a big takedown of Spamouflage activity, and I forget the exact number of accounts that we banned in one hit, but it was definitely in the hundreds, and I think it was in the low thousands.

Paradoxically, what we've actually seen in the more recent activity using our models is actually smaller networks. It's almost counterintuitive, but I wonder if there's a sense, and this is purely me hypothesizing, but Spamouflage has always been notorious for being very big, very noisy, very clumsy, and never actually convincing real people. Also notorious for very bad use of language. So I treasure the memory of a Spamouflage video whose title was, The Water of American Democracy Can No Longer Support the Boat of Presidential Rule. It's beautiful poetry, makes you weep, but it's also not really what somebody in the middle is actually going to be posting.

So definitely in that stage, there was a very large-scale element to it. What we've been disrupting more recently is much smaller operations, which seem to be putting a little bit more effort into a far smaller set of accounts. So it's almost that scale effect you talked about working in reverse, and it makes me wonder, are the operators thinking: you know what, we run a thousand accounts and they all get taken down at the same time; maybe we need to just keep one or two going, which won't get taken down so often? We can only hypothesize, but there is a strange and somewhat counterintuitive trend in the Chinese operations we've seen recently: they have tended to be smaller scale.

You know, the human element there, and therefore human error, as a way for you to spot these tools or this kind of malicious activity. One that comes to mind, going back to 2016, was that Russian operators couldn't get English perfect. So when they wanted to dive into the take-a-knee protests here in the NFL, they hashtagged it "take the knee" as opposed to "take a knee," because it's always hard to get articles right when you're learning a foreign language, as you know, given you speak half a dozen languages. And you've brought up before, and this comes up in the report a little bit, em dashes as a signal of folks overusing AI. But then they figure it out, and they eliminate the em dashes. But then they have these fragments of sentences, because they didn't correct the punctuation afterwards. Are you finding, though, that the human users of AI are getting better at hiding those habitual mistakes, whether grammatical, or even the way GAN-generated images can be spotted for certain commonalities? Are they getting better at hiding those?

The way I think about it is that there are some giveaway signals which are now widely known, and what the threat actors are trying to do is avoid those giveaways. I mean, GAN images are a really good example. GANs, generative adversarial networks, were a really popular technology circa 2018, 2019. And there were publicly available websites out there where you could click and it would AI-generate a profile picture for you. It only ever did people's faces, but you'd refresh the website and you'd get another picture, and you could build up a stock of 500 fake faces. And then you could put them on your social media accounts.

I was lucky enough to be running the first big investigation into a scaled network that was using this technology. And for about the first week, as open-source investigators, we were a little alarmed, because one of the ways you could always identify a fake social media account before then was you'd right-click on it, you'd search it with Google, and you'd find where it really came from. You know, you'd find, oh, it's stolen from an Instagrammer or stolen from an influencer. And that no longer held true. But after about a week of all of us intensively looking at all these operations, we realized that we could recognize these GAN-generated profile pictures even if they were the size of a little fingernail, because they're so distinctive in the layout. And we realized that, technically, the eyeballs always align. No matter which way the face is turned, if you superimpose five of these images on top of each other, the eyeballs always line up. And so very quickly, that team was able to spot these things a mile off.
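The eye-alignment tell lends itself to automation. Here is a minimal sketch of the idea, not the actual tool Ben's team built: `detect_eye_centers` is a hypothetical stand-in for any facial-landmark library, and the tolerance threshold is an illustrative guess.

```python
# Minimal sketch of the "eyeballs always align" heuristic for GAN-generated
# profile pictures. Not production tooling: detect_eye_centers() is a
# hypothetical stand-in for a real facial-landmark detector that returns
# (x, y) pixel coordinates for the left and right eye centers.

import statistics

def detect_eye_centers(image_path):
    """Hypothetical landmark detector: returns ((lx, ly), (rx, ry))."""
    raise NotImplementedError("plug in a real landmark library here")

def eyes_suspiciously_aligned(image_paths, tolerance_px=3.0):
    """Flag a batch of same-sized avatars whose eye centers barely move.

    Real photos show wide variance in where the eyes sit in the frame;
    GAN-era avatar generators put them in almost the same spot every
    time, so a tiny spread across many images is a red flag.
    """
    lefts, rights = [], []
    for path in image_paths:
        left, right = detect_eye_centers(path)
        lefts.append(left)
        rights.append(right)

    def spread(points):
        xs, ys = zip(*points)
        return max(statistics.pstdev(xs), statistics.pstdev(ys))

    return spread(lefts) < tolerance_px and spread(rights) < tolerance_px
```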

And then one of my colleagues actually built a scaled tool that could identify a GAN profile picture. And so for the threat actors, there would have been this sweet spot for, I guess, a few months, where they were thinking, hey, we've got a way to hide so that nobody can catch us. And suddenly, we catch on to how to identify it, and what should be a good way to hide turns out to be a really, really good way to get caught. And still to this day, the members of that team can tell you a GAN face from the other side of a room, because they're so distinctive. In the same way, whenever you're trying to hide a signal, you're creating another one by mistake. So we have seen threat actors who will remove the em dashes. It looks like they're doing it manually, because there's been enough internet chatter now that AI produces em dashes a lot.

But if you don't replace that with another punctuation mark, as you said, you get these really garbled sentences, which are themselves an indication that something is not right here. So you've pushed one signal down, but you've elevated another one instead. And so it's really this game of adaptation and evolution.

But for the threat actors, the harder they try, and the more detailed their attempts to hide one thing, it's almost like they're putting a bigger fingerprint somewhere else.

And as an investigator, once you realize, okay, this is the thing I look for now, then that gives you another detection vector. And so it's this – it's really an evolutionary process at high speed where we're thinking, okay, what are we going to find next?

And there are signals that you can no longer use. It used to be, for a few months, I guess at the end of 2023, that the phrase "as an AI language model" was a wonderful way of finding fake accounts on social media. Then the internet caught on to it. It became a meme. You had songs about it. You can't use that signal anymore. But by then, we'd found other signals.
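Signals like these are simple enough to scan for. The sketch below is a toy illustration of that kind of phrase-and-punctuation check; the patterns are made up for the example, not any platform's real detection rules, and a single hit is weak evidence at best.

```python
# Toy illustration of text "tells" like the ones discussed above: boilerplate
# AI phrases, plus the garbled fragments left behind when em dashes are
# stripped without fixing the punctuation. Illustrative patterns only.

import re

TELLTALE_PHRASES = [
    r"\bas an AI language model\b",
    r"\bI cannot fulfill (this|that) request\b",
]

# Crude proxy for punctuation scrubbing: doubled spaces or doubled commas
# where a deleted em dash once joined two clauses.
DANGLING_FRAGMENT = re.compile(r"\w+ {2,}\w+|,\s*,")

def suspicion_signals(post: str) -> list[str]:
    """Return the names of any weak AI-generation signals found in a post."""
    signals = []
    for pattern in TELLTALE_PHRASES:
        if re.search(pattern, post, flags=re.IGNORECASE):
            signals.append(f"telltale phrase: {pattern}")
    if DANGLING_FRAGMENT.search(post):
        signals.append("garbled punctuation (possible em dash scrubbing)")
    return signals

print(suspicion_signals("As an AI language model, I cannot browse the web."))
```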

And so the trick in so much of our work, when we talk about disrupting the threat actors: part of that is, like, banning them from using our models. Part of that is sharing information, shining a light on them, exposing them as much as we can, so that other people in other places can understand them and expose them, too. Part of that is also learning the lessons. Okay, we've seen them doing this new thing. How do we make sure that all our detection teams are now calibrated to look for that? How do we make sure that we're training our models to take that into account as well? And if we're thinking about that learning as we go, then we're always reinforcing the defense.

Yeah. I was thinking, listening to you, that there are things that you have in common, I think, with high school English teachers, right? You know, looking for the tells of folks using these tools.

I want to get to scams, but before I do: you highlight so many PRC-driven ops here, targeting the Uyghurs, the nine-dash line relating to the South China Sea and information campaigns around that, but it would be a mistake to discount Russia, North Korea, and Iran, given they're so active in this space as well. Can you talk briefly about that? Then I want to get on to how AI is being used to help spot scams as well.

Sure. We've disrupted quite a lot of China-origin activity over the last, I guess, nine months to a year. But we've also had threat actors from many other countries as well.

In the threat report that we're launching now, there's a Russian origin operation which we call Stop News. It's following very clearly known and clearly understood Russian geopolitical priorities. It's criticizing Ukraine. It's criticizing the Western supporters of Ukraine. There's a lot of time spent on criticizing the role of the French government in Africa and praising the role of Russia in Africa. So it's very much geopolitically aligned with Kremlin activity.

What it tends to do is run a few websites that pose as news outlets, and it will use the model to generate articles for those. It runs social media accounts which then claim to be associated with those news outlets. Recently they've started doing video content as well.

As far as we've been able to tell, none of this is getting much pickup, much breakout, on social media. I think, off the top of my head, their main X account has on the order of a couple of hundred followers. It's not huge. And when you look, most of their tweets have single-digit engagement at best.

But it's a good example of one of those operations where almost everything it's doing tracks with known Russian government priorities. So you look at it and think, that's probably a Russian government-run operation. But actually there's one portion of their activity using ChatGPT which isn't that. It's not geopolitics; it's generating basically promotional material for small companies in northwestern Russia. And recently they were generating promotional material for unlicensed online gambling. Last year there was one, I think, which was around some kind of building activity in northwestern Russia.

And so when you look at that activity, you think, well, that would be a weird thing for a government-run influence operation to be doing. But if you think about, maybe that's a PR firm which does some PR and some influence ops on the side, and they're sort of mixing standard PR and dark PR, suddenly that pattern makes sense.

And so one of the really interesting things when you're looking at Russian influence operations particularly is that it's always a mistake to jump to the conclusion that because it works in the interests of a certain country, it's run by the government of that country. What we see here is an operation where, to the best of our ability to assess it, this resembles commercial activity. There's some kind of PR firm which has one client who is paying them to do geopolitical interference, and then a different client who's probably running some unlicensed online gambling and wants to give that a bit of a social media boost as well. And so you always need to be careful about that.

We've also taken down quite a lot of Iranian influence activity over the last nine months to a year, and a lot of that has been what you would expect from an Iranian influence operation: anti-American, anti-Western, pro-Iran, pro the Iranian sort of allies network in the Middle East. And the strange thing there is that these operations often have a component targeting Scottish independence.

I live in Scotland, so I tend to be interested by that. And it's a strange characteristic that I've seen in Iranian operations more than anything else, or more than ops from any other country. They keep coming back to this point about Scottish independence. And they tend to be arguing a bit on both sides of the spectrum; there's probably a bit more weight pro-independence than anti. Are they trying to stir the pot? Are they trying to support one side or the other? It's not always entirely clear. But it's a very curious wrinkle of Iranian operations that I don't think I've seen in the context of operations from other countries. And on the Iranian side, it dates back to at least 2012. There were Meta takedowns from 2012 where you very briefly had Iranian ops targeting Scotland.

Let's talk, if we can, about just straight old-fashioned scams, and how they take advantage of ChatGPT.

You make the point in the report that ChatGPT has helped people spot millions of scams per month, and that, I think it's three to one at times, it's been used to spot scams as opposed to being weaponized, in effect. Tell us how you see that. And also, I'm curious about who's using ChatGPT to spot scams, because part of me wonders whether some of the most likely targets for these scams, if you're thinking of your grandparents, right, might not even know that they can use this. How could you get the word out to them that this is a tool to help protect themselves?

That's absolutely right. What we've been seeing is, on a scale of millions of times a month, people have been asking ChatGPT words to the effect of, I just got this message, or I just got this call, or I just got this cold outreach. Is it a scam?

One of the fascinating things with these scam networks is they seem to be working off very long lists of known phone numbers, and then they'll just send cold SMSs to everybody on the list. A couple of the people on those lists are OpenAI investigators. And so periodically, some of our investigative team will get what is clearly a scam text message. But what you do as a phone user is either you copy the text or you screenshot it, and you go to ChatGPT, you either paste in the text or drop in the screenshot, and say, I just got sent this, is it a scam? You can include as much of the message as you can, so that includes, like, is it coming from a strange phone number in Indonesia or the Philippines or somewhere. And then the model will actually go through: here's why this looks like a scam. You know, one that we got recently was, I believe, purporting to offer work with TikTok for 300 pounds a day for two hours of work, just by clicking on likes. And it was coming from, I think, a phone number somewhere in the Asia-Pacific region, which was mass-sent to about 19 different people with UK phone numbers.

And the model goes through and says: TikTok aren't going to be sending out cold-call SMSs to hire people. They're not going to be using this many emojis in the text. They're not going to be sending it to 19 people at the same time saying, hey, we really like your resume, you can work for us. It will actually enumerate: here are all the indicators that tell you this is a scam. You know, by definition, people who are doing this are people who have access to ChatGPT. We're looking at a scale of millions of prompts a month. We're not looking at who's doing it; what we see is the traffic coming in. But something we are trying to do, as far as we can, is to build partnerships with civil society organizations. Because we're aware it's not generally the young, trendy, tech-savvy ones who are the targets of the scammers.

I mean, we did a scams takedown, I think at the end of last year, where there seemed to be a disproportionate focus by the scammers on commenting on golf posts by men over the age of 50 who played golf and worked in the medical professions. And you can kind of imagine a scammer who doesn't really know the society that well: if you ask them what a wealthy person looks like, they might well land on, well, they're a doctor and they play golf, surely there must be vast amounts of disposable income.

But what we're doing is trying to build partnerships with civil society organizations who reach out to all kinds of demographics, from the youngest to the oldest, and work out how we can show them, in the simplest way, in the clearest way: if you get a text message which says you just got a traffic fine, before you click the link, here's how you screenshot it, here's how you ask ChatGPT, is this a scam? And we try and make this as simple as possible, because so few people have the time in the day to really, really study this stuff.
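The same "is this a scam?" check can also be scripted. Here is a minimal sketch using the OpenAI Python SDK, assuming an `OPENAI_API_KEY` is set in the environment; the model name and prompt wording are illustrative choices, not anything prescribed in the conversation.

```python
# Minimal sketch of scripting the "is this a scam?" check with the OpenAI
# Python SDK. The model name and prompt are illustrative, not prescribed.
# Assumes OPENAI_API_KEY is set in the environment.

from openai import OpenAI

client = OpenAI()

def scam_check(message_text: str, sender: str) -> str:
    """Ask the model to enumerate scam indicators in a cold message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You help users judge whether an unsolicited message is "
                    "a scam. List the specific indicators and give a verdict."
                ),
            },
            {
                "role": "user",
                "content": f"I just got this message from {sender}. "
                           f"Is it a scam?\n\n{message_text}",
            },
        ],
    )
    return response.choices[0].message.content

print(scam_check(
    "We loved your resume! Earn £300/day liking TikTok videos. Reply YES!",
    "an unknown overseas number",
))
```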

But for example, with scams, so many of the scams we look at have a common structure, and you can think about it as the ping, the zing, and the sting. So the ping is they will cold-reach out to you. It might be a social media message, often it's an SMS, it might be a message on WhatsApp or Signal or one of the messaging apps, but it's this cold outreach which says, hey, I've got this wonderful opportunity for you, or I've got this really scary news, you know, unpaid parking fines, whatever. That's the ping.

The zing is then they try and trigger your emotion. It might be excitement: wow, I can earn 500 pounds an hour for, like, liking one TikTok video. Or it might be fear: oh, they're going to fine me thousands if I don't pay this fine straight away. They're trying to trigger so much emotion that your brain is zinged, and it's just not thinking rationally anymore.

And if you can recognize what's the ping and what's the zing, then the sting is when they actually ask you for money, or they ask you for your data, right? You click through to the website, and it says, pay your $500 fine here, give me your credit card details, or input all the personal information you have. That's the sting. They're trying to get something out of you.

And just something as simple as that, the ping, the zing, and the sting. People remember it, and they stop and think, oh, this text message I got, isn't that the ping? Isn't that how this works?
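The mnemonic maps naturally onto a checklist. Purely as a toy illustration, here is a sketch that labels which of the three stages a message exhibits; the keyword lists are invented for the example and are no substitute for judgment, or for asking ChatGPT.

```python
# Toy labeler for the "ping, zing, sting" scam structure described above.
# The keyword lists are invented for illustration; real scams vary widely,
# and no keyword match substitutes for judgment.

PING_CUES = ["we loved your resume", "congratulations", "final notice",
             "you have been selected"]          # unsolicited cold outreach
ZING_CUES = ["act now", "within 24 hours", "earn", "fine", "urgent",
             "account suspended"]               # emotional pressure
STING_CUES = ["click the link", "credit card", "pay here", "verify your",
              "send payment"]                   # the ask: money or data

def scam_stages(message: str) -> list[str]:
    """Return which scam stages a message appears to exhibit."""
    text = message.lower()
    stages = []
    if any(cue in text for cue in PING_CUES):
        stages.append("ping: cold outreach")
    if any(cue in text for cue in ZING_CUES):
        stages.append("zing: emotional trigger")
    if any(cue in text for cue in STING_CUES):
        stages.append("sting: request for money or data")
    return stages

print(scam_stages(
    "Congratulations! Earn £300/day. Act now and click the link to verify "
    "your details."
))
```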

And what we're trying to do is work with as many civil society organizations as we can, to try and make sure that people out there understand how this stuff works, but also how they can use the tooling to not be scammed.

That's such an essential element. I mean, you mentioned activity in the Baltics going back to the 2000s. I remember being in Estonia in 2007. You know, this is a country that is constantly attacked. And one thing they do is educate their population, digital hygiene they call it, so that they are, in effect, the front line, because people have got to get smart, right?

Ben Nimmo, fascinating conversation. I enjoyed it. I appreciate you taking the time, and I appreciate everybody who's joined us today, listening to us dig in.

Absolutely. Thank you, Jim. And thanks to everyone for being here from the Forum.
