
Event Replay: Using AI to Protect Children Online: In Conversation with Thorn

Posted Feb 11, 2026
# AI Safety
# Ethical AI
# OpenAI Presentation

Speakers

Julie Cordua
CEO @ Thorn

Julie Cordua is CEO of Thorn, a nonprofit that builds technology to defend children from sexual abuse.

Since Julie joined the organization over a decade ago, Thorn has become a pivotal force in creating safer online environments for children, developing tools and solutions for digital platforms to help them prevent and address child sexual abuse and exploitation, and implementing solutions for law enforcement that have led to the identification and rescue of thousands of child victims on a global scale.

Thorn is a leader in tackling emerging threats such as the rise of financial sextortion, the misuse of generative AI for creating and distributing child sexual abuse material and facilitating online grooming, and other modern and evolving threats against children in the digital age.

Julie has fostered Thorn's collaborative, ecosystem-wide approach, partnering with tech companies, law enforcement, parents, youth, policymakers, and other entities to develop innovative solutions to these challenges.

Before leading Thorn, Julie served as VP of Marketing/Communications at (RED) and helped establish the brand as one of the most successful cause marketing initiatives in history, working with a team to deliver more than $160 million to fight AIDS in Africa. Prior to joining (RED), she spent nearly a decade in the wireless industry. Julie holds a B.A. in Communications from UCLA and an M.B.A. from Northwestern’s Kellogg School of Management.

Chelsea Carlson
Lead Child Safety Investigator @ OpenAI

Chelsea Carlson leads Child Safety Investigations at OpenAI, directing global efforts to prevent and detect child sexual exploitation across OpenAI's products. She oversees the development of advanced detection pipelines and threat investigations programs, while partnering with governments, NGOs, and industry peers to strengthen international safety standards. Chelsea regularly briefs policymakers, law enforcement, and technology leaders globally on emerging risks and solutions at the intersection of AI and child protection.


SUMMARY

The content in this presentation covers subjects that may be distressing, including technology-facilitated child sexual abuse and exploitation. No actual or depicted child sexual abuse material is contained. Attendees are encouraged to practice self-care/wellness as needed.

At this OpenAI Forum event, Natalie Cone hosted a conversation with Julie Cordua, CEO of Thorn, and Chelsea Carlson, who leads child safety efforts across OpenAI’s products, focused on protecting children in digital spaces. They described how online harms have evolved over the last decade, including increased grooming, sextortion, and the rise of synthetic or AI-generated child sexual abuse material.

Julie explained Thorn’s approach across research with youth, technical innovation, and building tools that help platforms and law enforcement detect abuse, triage cases, and find victims faster. Both speakers emphasized there is no single fix, and that meaningful progress requires safety-by-design, clean training data, strong guardrails, scalable detection, and clear pathways for reporting and response.

They also underscored the mental toll on investigators and moderators and discussed how AI can reduce unnecessary exposure by grouping, prioritizing, and filtering sensitive material without replacing human judgment. During Q&A, they highlighted the importance of real-time, multimodal, and contextual detection, and shared practical guidance for parents centered on engagement, literacy, and keeping open lines of communication with kids.

The session closed with a call for deeper collaboration among nonprofits, tech companies, and governments to improve capacity, transparency, and cross-border coordination to keep children safer online.


TRANSCRIPT

[00:00:00] Hi, everyone. [00:00:12] Today's talk comes with a standard content warning. The content in this presentation covers subjects that may be distressing, including technology-facilitated child sexual abuse and exploitation. No actual or depicted child sexual abuse material is contained. But attendees are encouraged to practice self-care and wellness as needed. Let's get started.

[00:00:33] Welcome to the OpenAI Forum. I'm Natalie Cone, your community architect, and as always, it's such a pleasure to be here learning alongside you all virtually. The OpenAI Forum exists to bring together serious thinkers across disciplines to explore how AI can be applied responsibly and how collaboration can turn potential into real-world benefit.

[00:00:57] Today we have the privilege of learning from two leaders who are shaping how AI is used in one of the most consequential arenas of our time, protecting children in digital spaces. We'll learn more about Thorn's mission and how the organization operates at the intersection of technology, child safety, and law enforcement. We'll explore how tools like AI alongside thoughtful platform design and human expertise each play a critical role in reducing harm and strengthening protections for children online.

[00:01:27] This morning, we're welcoming Julie Cordua, the CEO of Thorn, a non-profit that builds technology to defend children from sexual abuse online, and Chelsea Carlson, who leads global efforts to prevent and detect child sexual exploitation across OpenAI's products. Given how long and deeply Thorn and OpenAI have been collaborating to protect children online, I thought it would be a good idea to bring them to our community and teach us about the real-world implications of their work and collaboration. Please help me welcome Julie and Chelsea to the OpenAI forum.

[00:02:03] Thank you so much, Natalie, and also thank you so much, Julie, for taking the time to speak with us. It's always really great chatting with you. So for those who don't know about Thorn, could you start by walking us through Thorn's mission and the role your team plays in protecting children online? What problems are you solving and why is your work so critical right now?

[00:02:27] Yeah. Hi, Chelsea. Thanks for having me, and thanks to you and the team for bringing this topic to your community. It's one that I know is often hard to talk about, and so we don't talk about it as much as we should, so I'm grateful for this opportunity.

[00:02:41] So at Thorn, we work to change the way the world responds to child sexual abuse in the digital age. We've been working on this for 15 years, and so you asked why is this so important today. When we started working 15 years ago, the role of technology in child sexual abuse was very, very different. It was quite small, and you could maybe see child sexual abuse being documented and shared online, and today that world has completely changed.

[00:03:16] You have crimes against children that often start online, that start with just technology, and you still have all the crimes that start in the physical world where technology plays a role. So what we're dealing with is much more expansive, and that means that technology needs to be a part of the solution as well, and that's what we do.

[00:03:35] We focus on three things in particular. The first is we do a lot of research with kids to understand their experiences online and the role technology plays in exploitation directly from children. The second is we focus a lot on technical innovation, so understanding how technology is used in exploitation, but also how technology can be a part of the solution.

[00:04:06] And then the third area is we actually build and scale products for law enforcement to help them run their investigations and find child victims faster and remove them from harm, and for tech platforms to detect and remove content at scale and to help them build safer online environments.

[00:04:28] And as children's lives have been moving increasingly online, what kind of harms are you seeing most often in your work today, and how has the scale or the nature of that harm changed as technology has been evolving?

[00:04:41] Yeah, it's changed a lot. I've been working in this field for 15 years and what we saw 10 years ago was that if you were a child and you did not live near or with an abuser physically, you would not be abused.

[00:05:00] And the role technology played then, we were mainly dealing with the documentation, images and videos documenting the physical assault of a child, that were shared online. And today, we're seeing a rise in grooming and sextortion. So a child does not have to live near or with an abuser. They're online. They can be groomed and then extorted for sexual images.

[00:05:30] We're seeing a rise in synthetically generated child sexual abuse material. And that isn't just material depicting a child who doesn't exist in real life, created in that way; it's also real children being morphed into child sexual abuse material. So these may be children who've never been physically abused, but now there is child sexual abuse material of them. And we're also seeing a rise in the combination of child sexual abuse with sadistic online gangs.

[00:06:08] So children who are being groomed for the sole purpose of just harm and terror. And so it is much more expansive than we were dealing with 10 years ago. And I think that's where technology has to play a role because it's so massive that this is not something that can be managed just by human eyes and humans alone.

[00:06:38] So Thorn sits at the intersection of technology, child safety, and law enforcement. What are the biggest gaps today between the harms that we know exist and our ability to detect, prevent, and respond to them?

[00:06:55] Yeah, I mean, I said we build for both law enforcement and industry. Well, let me start with industry actually. So with industry, there are a lot of different types of platforms, right? You have where we were five years ago, which was social media and gaming, and now we have generative AI platforms.

[00:07:26] So anywhere that there is user generated content or generative AI content, you have potential abuse. And that is where you have billions and billions of files passing through a network at any given time. Some portion of those are child sexual abuse. And the difficulty is finding it. It's like finding a needle in a haystack.

[00:07:50] And so detection, I wouldn't say, I think you asked like, what's missing. I wouldn't say that this is missing. I think there are good tools for this today. The problem is catching up with how harms are evolving. Like I talked about one of the more recent ones is sadistic online gangs.

[00:08:08] And so finding that is more difficult because it's an emerging harm, but we are building tools to find known content, which you find through hash matching and then find generative or new content, which you find through AI classification. And implementing those types of detection tools at scale is one of the most important things that we do at Thorn.

[00:08:34] And that is needed for these types of platforms because you have to look or you have to be able to operate at that scale to find those needles. And then when you look at law enforcement, in the United States, if companies detect child sexual abuse material, they're required by law to report it to the National Center for Missing and Exploited Children.

[00:09:00] And the National Center then reports that out to law enforcement. But when law enforcement receives that content, they open an investigation and then probably serve a search warrant. They're seizing hard drives, phones that have terabytes of data on them. And the question that they're trying to find is, is there a child being harmed right now?

[00:09:18] And how do I get to the information on these hard drives that will help me remove that child from harm as quickly as possible? To lay human eyes on every piece of data in terabytes of hard drive would take years. And so the job to be done for law enforcement is triage.

[00:09:36] How do you go through all of that data as quickly as possible and find the information you need to get to a child? And that's where technology and AI classification comes in as well for law enforcement. That leads, I think, well into my next question.

[00:09:56] Which is, you know, looking at tools like AI alongside policy, platform design, and human expertise, [00:10:05] where do you see the greatest opportunities to meaningfully reduce child sexual exploitation and abuse online?

[00:10:12] Yeah, I think when I started in this field, I wished that there was a single kind of silver bullet, because it is incredibly difficult work and you just wish, like, if we just did this, we would stop this kind of abuse. [00:10:31] But what I've come to realize, and I think is really important, is that this requires multiple layers of approach. [00:10:40] You have to have, ideally, safety by design. So that's how tech companies are building their platforms. [00:10:48] So when I look at where AI is going, how are you training your models? [00:10:54] Something that's really important is if anyone is going out and collecting a bunch of data to train a model, you will collect child sexual abuse material. [00:11:01] There is child sexual abuse material all over the Internet. So the most important thing is clean your training data to start. [00:11:12] And then as you train your models, how are you training your models? Once you deploy them, how are you putting guardrails in place to detect child sexual abuse and remove it at scale? [00:11:22] How are you creating pathways for disrupting harmful networks? How are you supporting survivors or those who've been harmed? [00:11:32] And how are you implementing good governance frameworks within companies? [00:11:38] But also, how are parents and children and communities being educated about how to stay safe online?

[00:11:49] One of the most important things we can do for our kids is to create a safe environment so that if something happens, they can ask for help. [00:12:02] And I think sometimes we think that if we just create safer online environments and we do certain things, that that will solve all of it. [00:12:11] We're working in an adversarial world. There are people who are trying to harm children. [00:12:18] And so we have to create all of those safe online environments, yes, and talk to our kids so that if they encounter something, they know that they can go to a parent, a loved one, a teacher, a community member and say, hey, something bad happened, I need help. [00:12:38] And so it's really a combination of all of those things that need to be in place to kind of create this digital safety net for our kids.

[00:12:48] Yeah, definitely. In that same vein, you all work very closely with platforms and with law enforcement and also with families. [00:12:59] What kinds of interventions actually make a difference in real-world outcomes for children, and are there any that tend to fall short?

[00:13:07] Yeah, I mean, I think when we look at platforms, you know, one, I talk a lot about safety by design. That's important: thinking very critically about how online environments are built to prevent harm in the first place. [00:13:28] Second is detection at scale. So detecting harmful images and videos, but also text. [00:13:39] We talk about grooming happening online. This is a place where you can detect. [00:13:45] Thorn has text detection as well to get signals for trust and safety teams when grooming is happening, to flag those types of things and intervene quickly in those environments. [00:13:58] I think education for parents, again, and children so that they can kind of self-regulate. That's a place where I feel like, and why I'm grateful for this conversation, [00:14:13] we need more digital literacy in schools. We need more conversation with our children about these types of harms so they can know it when they see it and they know what to do. [00:14:28] And parents need education on how to create those safe environments for kids.

[00:14:33] On the law enforcement front, I think unfortunately we're still kind of understaffed in this area. [00:14:41] And especially when we look globally, this is not a US crime. These crimes cross borders, especially the virtual-only crimes.

[00:14:54] We have got to build up the capacity of our law enforcement and the technical training and tools.

[00:15:00] We at Thorn, we're a non-profit, so it gives us the ability to donate our technology in countries where they don't have the capacity for this. But these are crimes that again, cross borders and we have got to build up that technical capacity globally to investigate these crimes.

[00:15:24] And you've mentioned running into certain limitations. What specific use cases are hardest to support with the current tools given the sensitivity, scale, and urgency of your work?

[00:15:38] Yeah, I mean, I think one of the hardest things in this field is that there are kind of two types of detection technologies. One is hash matching. So that's saying, if I've seen an image or a video of child sexual abuse before, you create a hash, which is like a string of numbers, and then you provide that to companies and they can find that hash on their platform. So you're now finding known content.
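As a rough, minimal sketch of the hash-matching idea (illustrative only: production systems typically rely on perceptual hashes, such as PhotoDNA-style hashes, and shared industry hash lists rather than the exact-match digest shown here, and the hash value below is a placeholder):

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of previously verified abuse material, as would be
# supplied by a hash-sharing program (placeholder value only, for illustration).
KNOWN_HASHES: set[str] = {"0" * 64}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_content(path: Path) -> bool:
    """True if this file's digest appears in the known-content hash list."""
    return file_digest(path) in KNOWN_HASHES
```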

[00:16:13] And then there are AI machine learning classifiers that we build for image, video, and text. But we're talking about illegal content here. So it is very difficult to train classifiers to detect this type of content because, I mean, I'm talking to a group of folks who probably know a lot more about AI than I do, but you need labeled data, often, to train these types of classifiers.

[00:16:40] And when you're doing that, especially in the initial training, that requires human eyes. And one, only a very small portion of people in the world can look at this content to label it. And two, that takes a very heavy mental toll on individuals. I mean, we are asking people to look at some of the worst that humanity has to offer in order to train these tools.

[00:17:13] So it is happening right now, and we have trained AI classifiers for image, video, and text detection, but that still remains some of the most difficult work in this field, and also some of the most critical, for two reasons. AI classifiers detect never-before-seen content. And even before generative tools came out and really scaled in the last few years, classification was incredibly important, because if you're running hash matching on a social media platform or someplace where there's just images and videos, hashes will help you find content you've already found before. That's very important, because those are children who have been harmed, and hopefully they've been removed from harm and they're recovering.

[00:18:03] And you wanna take their content down, because that's horrific; you don't want anyone seeing the worst days of their life again. But a classifier will find new content, and, we don't know this for certain, but that can be a child who is being abused right now. And those classifiers have become even more important in the world of generative AI, because if you have a model that creates images or videos, you should be running a classifier on the output of that model to detect new child sexual abuse material before it ever hits the internet.

[00:18:39] And so this classification training, it's very difficult to do, but also critically important in this work.
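A minimal sketch of that output-side guardrail, with `classify_csam` as a hypothetical stand-in for a trained classifier (the function, threshold, and review hook are assumptions for illustration, not any specific production system):

```python
from typing import Callable, Optional

# Hypothetical classifier: returns a risk score in [0, 1] for generated media.
ClassifierFn = Callable[[bytes], float]

def generate_with_guardrail(
    prompt: str,
    generate: Callable[[str], bytes],
    classify_csam: ClassifierFn,
    block_threshold: float = 0.5,
) -> Optional[bytes]:
    """Run the model, then screen the generated output before it is released."""
    media = generate(prompt)        # model inference
    score = classify_csam(media)    # classify the output, not just the prompt
    if score >= block_threshold:
        # Withhold the output; route to trust & safety review and any
        # required reporting instead of returning it to the user.
        return None
    return media
```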

[00:18:50] And one other thing I would add to this is that for law enforcement, actually for content moderators and law enforcement, I talked about the mental toll of training these models. The mental toll of doing this work, the investigations and content moderation within companies but also in law enforcement, is also incredibly difficult.

[00:19:16] And so, ideally, you can build tools, which we're doing, to not just say this is a piece of child sexual abuse material, but to further define what is happening in the images so that you can triage and prioritize your case or group information together without having to look at it. There is no silver bullet. The AI is not going to solve these crimes or solve these cases for anyone, but it can help get the information into groups.

[00:19:52] So that when someone has to watch a video or look at images, [00:19:55] they're not having to do the manual labor of organizing or prioritizing. [00:19:58] That can be done with AI classification. Again, very, very difficult, but very, very important in this type of work.
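As a rough sketch of that triage step, hypothetical per-file classifier scores can be used to filter and order seized material so reviewers reach the highest-priority files first (the field names and threshold below are illustrative assumptions, not Thorn's actual pipeline):

```python
from dataclasses import dataclass, field

@dataclass
class FileSignal:
    path: str
    csam_score: float                                # hypothetical classifier confidence, 0..1
    labels: list[str] = field(default_factory=list)  # finer-grained tags from classification

def triage(files: list[FileSignal], review_threshold: float = 0.8) -> list[FileSignal]:
    """Keep only files above the review threshold, highest score first,
    so a reviewer is not manually sorting terabytes of data."""
    queue = [f for f in files if f.csam_score >= review_threshold]
    return sorted(queue, key=lambda f: f.csam_score, reverse=True)
```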

[00:20:10] Thank you. I think you covered a lot of what my next question was going to get at. I'll ask it in case you have anything else to add, but then we can also move on to the next question if that's the case.

[00:20:24] Thorn has built sophisticated detection technology, but thinking across the broader ecosystem of content moderators, law enforcement, and investigators, where do you see AI having the greatest potential to protect the humans doing this work and speed up these protections?

[00:20:39] Yeah. I talked about it a little bit. I think for industry, the jobs to be done are finding a needle in a haystack. So that's just detection in general. If I think about maybe who is on this webinar, those are folks who are building AI platforms.

[00:21:04] There's detection in, again, your training data and how you're building the guardrails in the model, that has to happen, but then there's also the generative process. So being able to detect at upload and detect at output.

[00:21:21] But then once there is detection, there's a whole triage process for the content moderators, and that's a bit of what I was talking about: it's not just, this is a harmful image or video, but how do I prioritize my queue?

[00:21:37] I think that job to be done for law enforcement is also critically important. We have a tool for law enforcement called Thorn Detect that does that, integrated into the top forensic platforms in the world.

[00:21:54] Again, they seize millions and millions of files through search warrants, and it reduces their time to get to the most important information quickly. One, it speeds up investigations, but two, what you just said is, this is a mentally taxing job for anyone who's on the front lines, and often you have folks who cycle in and out of this work, or you have people who make it their life's work.

[00:22:28] But if you have people who choose to make this their life's work, the question is, how do you protect their mental health and allow them to do this work, trying to reduce some of the burden that will lead to burnout, which is having to continually rewatch the same abuse video every time.

[00:22:51] That's unnecessary with technology. AI can help stack those, can help blur, can help reduce sound, and there's a lot we can do to enable front-line workers who are willing to do this work to do their jobs well and reduce harm.

[00:23:11] Thank you. If we zoom out, Thorn was founded in response to a possible market failure where protecting children online wasn't adequately prioritized or resourced. Do you think any failures like that still exist today?

[00:23:28] Yeah. I think that's an important question. We're a nonprofit, and when we started building, our first five years were mostly focused on research and what you would consider normal nonprofit work. We started building technology 10 years ago.

[00:23:47] When we started, we really thought we were going to prototype and prove that technology could play a role in protecting kids. We thought, we'll prototype it, and we'll spin it out and someone else will take it and scale it.

[00:24:02] It became apparent really quickly that anytime we would think of spinning something out, folks would say, well, what other use case does this have? Because there's not really a market to sell child protection software.

[00:24:18] We're like, no, but we want it to exist. There are children who deserve for this technology to exist. That's when we decided that we were going to build and scale this technology within a nonprofit.

[00:24:32] I still do think that the market, if you will, isn't there yet for this to live outside of a nonprofit. Being a nonprofit allows us to focus on one thing:

[00:24:50] how do you build the best technology to protect children, and not have to think about, what if this was pointed at a different crime type or a different kind of harm vector? Because you work in trust and safety, and you have for a while, you know there are a lot of different kinds of harms online. And people are looking at spam, terrorism, hate speech, other types of things. But thinking just about how do we stop child sexual abuse is a difficult, mentally taxing niche focus. But it's an incredibly important one, because when we think about the volume of children that are potentially harmed every day, there was a WeProtect Global Alliance study that found more than half of all children in the world, by the time they reach 18, have had some form of harmful online sexual interaction. And I think our own research says one in 17 kids know of someone who has had a deepfake nude made of them. That's one or two kids in every U.S. classroom. And so the scale of this deserves its own focus.

[00:26:09] And I think the model that we've created, where philanthropic support funds our research and our technical innovation, and then we charge tech companies for software, is a good hybrid model for sustainability and allows for this very niche and very important focus. Thank you.

[00:26:34] I think we have time for maybe one more question. So maybe we'll end on this: we've made real progress, but where do you think deeper collaboration between non-profits, companies, and governments is still needed to close the remaining gaps?

[00:26:53] Yeah, I mean, I mentioned this earlier: I wish there was a silver bullet, but there's not, and addressing this harm truly is a collaboration between government, tech, and non-profits. I think speeding up the information flow matters, because crimes against children don't just happen on open web services; there are dark web harms that are completely outside the scope of that as well, and oftentimes you'll see things happening on the dark web that we're hearing may move into the open web. So how do we create more visibility into what types of harms are coming? How do we share or get access to data, again, to train classifiers?

[00:27:54] How do we create incentives for companies to improve safety by design, to improve transparency in their child protection policies and approaches? I think at the end of the day, if we design and create safe environments for children, it creates safe environments for all of society. And so the more every part of this ecosystem, government, law enforcement, industry, NGOs, works together hand in hand to do its part, the more a rising tide lifts all boats. Great.

[00:28:37] Again, thank you so much for your time and obviously the amazing work that Thorn does. We really appreciate you coming on and chatting with us all. I think now it's time to bring back Natalie to help facilitate our Q&A portion.

[00:28:52] Chelsea.

[00:28:54] Okay, I'm hoping my WiFi works. I'm so sorry for the annoyance, team. We have so many questions. Julie, Chelsea, thank you so much for being here in the forum. I had no idea how explosive this chat was gonna be. It's just so engaging. So Julie and Chelsea, the questions we don't get to, I'll follow up with you both and then get back to the community to make sure we can address all of their questions here.

[00:29:20] So let's start with Lana Romanoff. She's a machine learning engineer, and she asks: which capability will most transform child safety online, grooming prediction, multimodal detection, or real-time threat intelligence?

[00:29:35] Hmm. Chelsea, you might have an opinion on this too, but I think real-time threat intelligence is interesting.

[00:29:48] Detection after the fact is important, but detecting harm, especially grooming, in real time would, I think, be a leap forward.

[00:30:01] Chelsea, I don't know if you have a...

[00:30:03] Yeah, similarly, I think multifaceted, multimodal, and highly contextual detection is really important. As an investigator, a lot of my work is putting together a lot of signals, and maybe those signals individually don't add up to something concrete that we can report to the National Center, but the whole of the picture can. So increased capability for AI systems to help find those pieces of the puzzle, put them together, and put them in front of a human analyst who can make the final decision on what's reportable or not matters, especially while we're still in a space where even bad actors are being experimental with these kinds of technologies. A single bad prompt or a single bad image generation might be sufficient for policy enforcement, but it might not be sufficient for what's needed to actually safeguard a child.

[00:31:06] Thank you, ladies, awesome answers. This next question is from one of our community leaders, Daniel Green, and he asks, what surprised you most about how modern AI models are helping your work, Julie?

[00:31:20] Surprised? I don't know if this surprised me, but what I'm excited by is the potential: one, how quickly we can build AI classifiers for detection when we have access to data, and two, how granular we can get. So, not just, you know, is it potentially CSAM, but what else can you tell me about this image so that, again, I can reduce eyes on it, whether images or videos or text. So I don't know that that surprised me, but it has been very helpful in our products.

[00:32:09] Thank you, Julie. Did you wanna add anything to that one, Chelsea?

[00:32:12] Maybe. I think even using LLM capabilities, their human-like decision-making capabilities, alongside human investigators has been really helpful in terms of looking at a lot of data and pulling out the important data. Being able to instruct the model about what's important to me and get me to those pieces of information faster has sped up investigations in ways that a standard text classifier would maybe catch, but be more brittle to. Things like position of trust or other escalating elements of a case can be brought out more quickly, even without my initial knowledge of their presence.

[00:33:07] Yeah, definitely.

[00:33:08] Okay, the next question comes from a long-time member of the forum as well, the Chair of UCLA Pediatrics, Dr. Yazdani, who says: thank you for addressing this important issue. Do you plan on working with the AAP to update their online safety guidance for children to include AI-mediated and AI-generated content?

[00:33:26] Hmm, well, my alma mater, so glad to hear UCLA in there. I will take that back to my research team. Our research team publishes a lot of research, all available on our website, and we see our role as informing others in the ecosystem. I think there are so many people and organizations around the world that work on child sexual abuse, and not all of them, to this question, incorporate the technology element. And what we see today is that a large part of child sexual abuse has some amount of technology involved in it, whether the child was groomed, or it was published after the fact, or it is synthetically generated. So I think there may be an opportunity there and it's perhaps worth the conversation. Dr. Yazdani is a very forward-thinking individual. I like it. I'm right up the road, again, my alma mater, so happy to chat anytime.

[00:34:25] Oh, I will introduce you two over email.

[00:34:29] This next question comes from Gabriel Lars Sabadin. He is a founder and a developer, and he says, I'm building open tools for child online safety to reduce barriers for companies that want to protect kids. How can independent developers best collaborate with organizations to make safety easier to achieve?

[00:34:46] Julie: Make safety easier to adopt. [00:34:49] Chelsea, do you have a take on that?

[00:34:54] Chelsea: I'm not sure about my take. So we have developers that use our API, and our free safety offering for that, the Moderation API. [00:35:06] But it's always extremely helpful when developers are much more forward-thinking. A lot of times, and rightfully so, child safety isn't the first thing they think whatever they're building is going to be abused for. [00:35:21] So having more of that circulated within the developer community is always really good to hear. [00:35:29] Often people are surprised: they build something they think is really cool, they don't predict the child exploitation use case, and then they need places to go. [00:35:41] In terms of collaborating with organizations like NGOs and us, I'm not sure what avenues we have, but I imagine being part of the child protection industry circle would be very helpful to push those conversations.
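For developers looking for a starting point, here is a minimal sketch of screening text with OpenAI's Moderation API via the official Python SDK (the model name and response fields reflect the current SDK and may change; error handling is omitted):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_text(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = response.results[0]
    if result.flagged:
        # Inspect per-category results (e.g. content involving minors)
        # to decide whether to block, escalate, or report.
        print(result.categories)
    return result.flagged
```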

[00:36:03] Julie: Yeah, I would just second that. I think I've had the opportunity to speak at a few college classes of folks who are in engineering, which is exciting, because 10 years ago you would never have this topic discussed. [00:36:20] And I think, to Chelsea's point, the more that entrepreneurs and builders and folks understand that perpetrators will try to use every technology to abuse children, every single one, the better. We don't want to think about that as innovators, but it's going to happen. [00:36:38] So the more we can think ahead, and be in the developer community or the entrepreneurial community thinking, I want to build this great thing, but I also want to think about child safety, the better. [00:36:49] So I'm not sure I have a specific answer of how to collaborate in this space, except to keep doing what you're doing, thinking about this from the very beginning, and building community amongst developers.

[00:37:02] Natalie: Thank you, ladies. [00:37:03] And this next question, Julie, this is what we were talking about when we met at the off-site. [00:37:09] So, Andrew Holtz asks, what are the guidelines you'd recommend to parents to safely introduce AI to children?

[00:37:16] Julie: Hmm. I'm not sure I'm a specific expert on how to introduce AI to children. There's actually a woman, I think she's at OpenAI, who wrote a children's book on this, on how to introduce AI to kids. [00:37:34] I'll find that after this. But I think with any technology, it is really important, and this is where it's hard, because not all parents know about technology, to approach it as a learning opportunity, an open-dialogue opportunity, and to make the right judgment call in your own family.

[00:37:59] So when, at what age, to introduce it, I'm not going to prescribe, because every child in every home is different. But when you do, introduce it as: technology is a tool. It is not a companion. [00:38:18] It is not the be-all, end-all of anything. It is a tool, and how do we use that tool to achieve what end? [00:38:23] And then how do we approach it with a learning mindset, and also the guardrails and the open conversation of, if bad things happen, it's okay, come talk to me. [00:38:36] So that's not a direct answer, because I'm not an expert on how to introduce technologies, but I think some basic principles are important: open lines of communication, curiosity, and using any type of technology or AI as a tool in your life to achieve something else that you want to achieve.

[00:39:01] Natalie: Beautiful answer, Julie. And as a mother to a 15-year-old, my son, without me even really realizing it, had access to AI by around seventh grade. It was pretty prolific in his circles. [00:39:10] They were using the Grammarly AI to plagiarize. And that was his experience in seventh grade, and it was so obvious that it wasn't his writing. [00:39:22] And that was really the signal to me: okay, I need to be very literate in this tool. I need to figure out how I can leverage this tool to support him in his academics and set some guidelines. [00:39:35] And I think, Julie, what you and I talked about was really just being engaged.

[00:39:44] And if our kids are going to be using the tools, we have to kind of be watching them. We have to understand the tools that they're using. And I can honestly say that AI has helped me help my son in his academics so much. But it was a real learning curve. And we do it at the dining room table. So, yeah, that was a beautiful answer.

[00:39:59] Yeah, I think this is one of the most important things we can do. And I think, Chelsea, you asked me where some of the gaps are earlier. I think this is an area that is incredibly difficult. I have three kids as well. Parents are overwhelmed, like especially if we're working parents. And not every parent works for a tech company or lives in Silicon Valley like us.

[00:40:30] And if we think about the rest of the world, like asking a parent to sit down and become immersed in this so that they can have an open dialogue with their kids is very difficult. And so we have lots of people on this call. If anyone has that brilliant idea of how to bridge this gap and help kids and parents facilitate the safe adoption of this technology, I think that's a great area for investment and growth.

[00:40:54] Absolutely. We're so open-minded. To incorporate, Julie, your perspective, Chelsea, we're drumming up some ideas for how to have this kind of... You know, have this dream of parent-teacher conferences with ChatGPT. And we could just share stories about what has worked for us. But yeah, that is a big one. And I totally, totally agree, Julie, that it's so hard.

[00:41:19] And even harder for other people, depending on what your job looks like. Like how often you're away from home, what your access history has historically been. So no judgments there. But let's form a parent group and share our ideas together.

[00:41:37] Okay, do we have time for one more question? Let me go to the very beginning. Annie Kwon, a director at Microsoft, who has also been a member of this forum for a long time, asks, how are you partnering with law enforcement to bring justice to perpetrators?

[00:42:00] So our tools for law enforcement are used in, I think, over 40 countries today. A large portion of child sexual abuse material investigations touch our software at some point. And I think that that is our role.

[00:42:20] At Thorn, we do not work the cases. We serve those on the front lines. That's what we say. We serve folks like Chelsea and content moderators who have to do this job day in and day out. And we serve law enforcement all around the world.

[00:42:41] The way that we can help children get justice, and this is a hard thing to think about, is that this content is, like I said, like a needle in a haystack. And if you don't provide the tools to find the content and get to these children quickly, it's kind of just lost in the ether.

[00:43:06] So if I think about what our role is, it is finding the content and then helping law enforcement run their investigations so they can find these children, remove them from harm, and serve justice for these children. And there was a case in Texas. One agency used our Thorn Detect software. In one month, they served 300 search warrants, arrested 200 perpetrators, and removed 100 children from harm.

[00:43:38] And that is replicated across the thousands of law enforcement agencies in 40 countries that use our software to do this work. And how we serve justice is to make sure that these kids are not lost, that we can get to the information we need to remove them from harm, and put the perpetrators somewhere where they cannot harm children again.

[00:44:00] Thank you so much, Julie. Chelsea, did you have anything to add to that last question?

[00:44:04] Yeah, for our part, being on child safety investigations, we use the technology that Thorn provides and the industry standards, like hash matching and classifiers that detect child sexual abuse material. We also use text-based detection systems, and we filter and triage to try to dig up the worst of the worst.

[00:44:30] Some things are just policy violations, but some things require a referral to the National Center for Missing and Exploited Children. And as much as we can use our technology to speed up those investigations, we do.

[00:44:42] But we have human investigators with child safety expertise put together what we hope is a very cohesive, understandable report to law enforcement on the possible risk to a child or children in the real world, with as much information as we can provide that would be helpful for them to identify a victim or identify a possible perpetrator, and then we send those reports out to NCMEC along with some descriptions of what our model capabilities are.

[00:45:28] I feel like our reports might look materially different from those of other, traditional tech companies, because we have generative products and also single users. So we do have an investigative, deep-dive component where we try to do what I spoke about earlier, which is combining all of the elements of the case to put together the story that a child might be at risk.

[00:45:52] Well, Julie, Chelsea, thank you so much for joining us here in the OpenAI forum. Clearly, there's a lot of curiosity and appetite for more conversations on this topic. So thank you so much for your time and your expertise. And I hope this is just the beginning of our conversations. Thank you so much for joining us.

[00:46:13] Thank you for making the time. Thank you both.

[00:46:19] Well, that was an awesome conversation. Thank you all for joining us, and thank you for bearing with me through the Wi-Fi challenges. Don't know what's going on today. Maybe Mercury's in retrograde.

[00:46:29] The event horizon in the OpenAI Forum looks a little bit like this. We are going to be welcoming OpenAI's chief futurist, Joshua Achiam, for What is AGI and What is Next? On March 4th, we'll be streaming Accelerating Math and Theoretical Physics with AI, a fireside chat with Terence Tao and Mark Chen hosted by OpenAI and UCLA's Institute for Pure and Applied Mathematics. And then on March 14th, I am super excited to welcome you all to my hometown of Austin, Texas, where we'll be at South by Southwest with the CEO of the San Antonio Spurs, RC Buford, and OpenAI's Chief Policy Officer, Chris Lehane, as well as our VP of Operations, and more, to discuss community impact and human-centered adoption of AI technology.

[00:47:20] I hope to see you all soon in the OpenAI Forum, whether virtually or in person, and as usual, if you have any questions, just reach out to Caitlin or me in the OpenAI Forum. Have a wonderful day, everyone, and thanks for joining us.
