
Event Replay: Using AI to Navigate Health: A Conversation with Kate Rouch, Nate Gross, and James Hairston

Posted Jan 16, 2026 | Views 21
# Healthcare
# OpenAI Leadership
# ChatGPT for Health
# OpenAI for Healthcare

Speakers

Nate Gross
VP of Health @ OpenAI

Dr. Nate Gross is the VP of Health at OpenAI. He previously co-founded Doximity and Rock Health. He graduated from the Emory University School of Medicine with an MD, Harvard Business School with an MBA, and Claremont McKenna College with a BA in Government.

James Hairston
Director of Innovation Policy @ OpenAI

James Hairston is OpenAI's Global Head of Innovation Policy, responsible for health, devices, and robotics. James previously led Meta's virtual and augmented reality policy team for a decade and was Policy Advisor to the Head of the US Small Business Administration. James serves on the board of the International African American Museum in Charleston, South Carolina, and is a proud native of Jersey City, New Jersey.

Kate Rouch
Chief Marketing Officer @ OpenAI

Kate Rouch is the Chief Marketing Officer at OpenAI, where she is focused on building a global brand that makes AI feel accessible, inspiring, and useful. She believes AI is the most transformative technology of our time and is passionate about helping people understand and engage with its potential.

Previously, she was CMO at Coinbase, leading global marketing as crypto went mainstream. She spearheaded high-profile campaigns, including Coinbase’s viral bouncing QR code Super Bowl ad, one of the most talked-about in history.

Before that, she spent over a decade at Meta, helping scale the company from 500M to nearly 3B users and leading marketing for Instagram, WhatsApp, Messenger, and Facebook.

Her work has earned multiple creative awards, including the Direct Grand Prix at Cannes. She and her husband are avid skiers and equestrians, living in the Sierra Nevada mountains with their two kids and four horses.


SUMMARY

In this Forum session, OpenAI leaders Kate Rouch and Dr. Nate Gross joined James Hairston to discuss the launch of ChatGPT Health and OpenAI for Healthcare. They explored how AI can responsibly support patients, clinicians, and health systems by offering personalized, secure, and contextual assistance. Nate emphasized that over 40 million people ask ChatGPT health-related questions daily, highlighting both the demand and responsibility for thoughtful design. Kate shared her personal experience navigating a breast cancer diagnosis, explaining how ChatGPT empowered her to better understand clinical literature, prepare for specialist visits, and communicate with her family. The speakers detailed how ChatGPT Health integrates with medical records, wearables, and health apps to provide a more holistic, patient-centered experience. For clinicians, OpenAI’s tools help reduce administrative burden, align care with institutional policies, and improve documentation. Together, they underscored the importance of trust, transparency, and collaboration in building AI tools that enhance—not replace—human judgment in healthcare.


TRANSCRIPT

[00:00:00] Speaker 1 (Natalie): Hi, everyone. Welcome to the OpenAI Forum. I'm Natalie, and I lead our interdisciplinary community of experts. Today's conversation is especially timely. Health is one of the most common and most meaningful ways people already use ChatGPT to make sense of information when it matters most. This event centers on the launch of ChatGPT Health and what it signals about how AI can responsibly support people and the healthcare system. Because health is uniquely high stakes, trust, safety, privacy, and close collaboration with clinicians have been foundational to how this work was built. We're joined today by members of the OpenAI team who are leading this effort: James Hairston, Global Head of Innovation Policy, Nate Gross, VP of Health, and Kate Rouch, Chief Marketing Officer. Kate also brings a very personal perspective that helps shape how this product shows up for people in real moments of need. We'll begin with a fireside chat, followed by an audience Q&A with James. I'll return at the end to share what's coming up next in the forum, including an invitation to join us in person at OpenAI later this month. For now, please help me in welcoming James, Kate, and Nate to the OpenAI Forum stage.

[00:01:27] Speaker 2 (James): Thank you so much, Natalie, and welcome Kate and Nate. It's a real pleasure to be here with the OpenAI Forum community. We've got an exciting conversation today. Why don't we go ahead and dive right in? We'll talk for a few minutes here and then I think open it up to Q&A. To get started, last week was a really big launch week in health for OpenAI. Nate, I'll maybe turn to you to start us out. Could you just help ground the audience in the news and explain what ChatGPT Health and OpenAI for Healthcare are, how they're different from the rest of ChatGPT, and the strategy behind these launches?

[00:02:04] Speaker 3 (Nate): Sure. Thanks, James. Hello, everybody. We had an exciting week at OpenAI. Happy to walk you through it. For context, a lot of healthcare happens outside of the visit. Questions before and after appointments, results that arrive much later, all the paperwork and logistics around care. People are already using ChatGPT to prepare, translate, summarize, and sense-check what they're seeing in those gaps. In fact, of the 800 million people who use ChatGPT each week, one in four submits a health- or healthcare-related prompt each week. That comes out to about 40 million people per day. So this is an incredible opportunity, but it's also a responsibility to treat these questions with the care and nuance that they deserve. And that's why we launched ChatGPT Health. It's a dedicated space that has additional privacy and security protections as well as connections into medical records and health apps, so that answers can be grounded in each person's individual context. Then in parallel, we launched OpenAI for Healthcare. This is our set of secure AI products for healthcare organizations, designed to support clinical, research, and administrative work while supporting things like HIPAA compliance requirements. It includes ChatGPT for Healthcare, an enterprise-grade secure workspace for clinical and operational teams, as well as the OpenAI API for developers who want to build inside or for the healthcare system. These products were developed in close collaboration with physicians, and we're really grateful to our early partners for helping shape our development playbook.

[00:03:43] Speaker 2 (James): That's awesome. Kate, part of the core mission of OpenAI is using AI to help people solve hard problems. Can you explain how these health products align with that goal from your point of view?

[00:03:56] Speaker 4 (Kate): Yeah, absolutely. I mean, just as Natalie said, I'm really going to take off my hat as an OpenAI executive in this forum and speak to you as a patient. I've been very public that I went through a breast cancer diagnosis and treatment journey in the past year. I really leaned on ChatGPT to help me do a lot of the things that Nate was describing, as a patient. I had incredible doctors, but they were across many, many different specialties. There was a lot of very complex, nuanced clinical literature and clinical decision-making that I really benefited from having someone explain to me at my level of understanding. So I was better equipped to use that very, very precious time with my specialist doctors to make sure I was really...

[00:04:58] Speaker 4 (Kate): To make sure I was really asking them the most important questions and that I was as informed as possible going into those discussions. Making sure that I was getting insurance claims approved, and getting some help rewriting insurance claims when they got denied so that they did get approved. Everything from navigating, frankly, a complex academic hospital system as a patient to doing things like, hey, how can I talk to my three-year-old about cancer in an age-appropriate way? What are books that I can reference? People deal with diagnoses and health situations in, obviously, a complex set of areas of their lives, and having a resource that is always on, that can help equip you and provide more agency as you go through health journeys, I just personally found to be so extraordinarily helpful. I think it really does bring the mission to life in such a powerful and tangible way.

[00:06:11] Speaker 2 (James): Absolutely, absolutely. And, you know, Nate, as you talked about the vision for some of this product, one of the things we see is that health information is scattered across a lot of different portals and apps and locations. And Kate explained many of the ways that people come to these products as consumers. Can you say a bit about how ChatGPT Health changes the experience for people today?

[00:06:36] Speaker 3 (Nate): Sure. The healthcare system really is fragmented. It's siloed, and some of that is a side effect of how care delivery, business models, insurance, and the regulatory environment evolved. And some of it is a trade-off for all of the specialization that benefits society: having advanced care means having different experts in different places. But at the end of the day, these silos create knowledge gaps and care challenges as people move through their lives or pinball through the system to manage a really tough condition. Engaging with health is always better if you have a more holistic view. So with AI, we can start to break down those silos and help patients and doctors see the full picture of a patient in a way that no part of the fragmented system is really built to bring together today. With ChatGPT Health, a user can choose to sync their medical records that are spread across systems that today do not talk to each other. They can blend that with biosensor data from their wearables, which most doctors don't have the time or interfaces to easily review. And then they can connect into personal health apps like Peloton and Function Health. So unlike a search engine, which has amnesia every time you talk to it and doesn't know if you're 25 or 75 years old, ChatGPT queries are grounded in each user's personal context, to whatever degree they choose to share and bring in that information. This allows ChatGPT to offer really relevant, personalized support, such as when you're preparing for doctor appointments, or simple things like looking for guidance on a meal plan or exercise routine, or even a vacation itinerary that happens to fit your health needs. So we think ChatGPT Health is another step toward turning the product into a personal super assistant that can support you with the information and the tools to achieve your goals across any part of your life. That said, it's still really early and, for now, waitlist only, but we're excited to get feedback and see where it goes.

[00:08:39] Speaker 2 (James): Now, I think this is a really helpful overview of the consumer side, but Nate, we also just launched OpenAI for Healthcare. Do you want to take a second to explain how that differs from ChatGPT Health and who it's designed for?

[00:08:52] Speaker 3 (Nate): Sure, sure. So yes, OpenAI for Healthcare covers our products that give healthcare organizations a secure, trusted space for AI use across clinical, administrative, and research work. There are two things within this. The first is ChatGPT for Healthcare, a secure workspace that clinicians, administrators, and researchers can really take advantage of and actually now use in their day jobs, whereas they couldn't when they only had the personal product. We included eight launch partners across a real balance of care settings: children's hospitals like Boston Children's, academic centers like UCSF, specialty hospitals like Memorial Sloan Kettering Cancer Center, and large care delivery systems like HCA. The second is the OpenAI API configured for healthcare. This is a platform to help empower the ecosystem of developers building tools and products with our latest models. Today, it already powers much of the healthcare AI ecosystem across thousands of organizations. Many have configured it for HIPAA-compliant use, like Abridge, Ambience, and EliseAI.

[00:09:56] Speaker 2 (James): We actually work now with over one million business customers at OpenAI, and part of that means getting it right for each individual industry that we serve. Of course, that includes meeting the regulatory and governance needs of the industry, HIPAA alignment, and also focusing on the user and getting it right for the experts in those particular fields, making sure that our latest GPT-5 models are truly built for healthcare and evaluated through things like physician-led testing. And I'm curious, I mean, you're a doctor yourself, so maybe a two-parter. What are some of the ways clinicians may be using this product? And then, as you've talked to hospitals and health systems, what are the biggest bottlenecks that they're trying to solve for that AI can help with?

[00:10:44] Speaker 3 (Nate): Well, the good news is that because the products were developed in close collaboration with physicians and clinicians, we're able to get a glimpse into a few of the practical ways they're using them. One, when they're working up a case or caring for a patient, they can ask for an evidence-backed synthesis to get an answer that's grounded in real medical sources rather than just the Wild West of the internet: peer-reviewed studies, public health guidance, clinical guidelines. Importantly, these include citations, so you can easily verify the title, publication date, and journal, click through and read it, and conduct a sanity check on the evidence. That way AI becomes more of a partner that you can really dig into, trust, or go deeper with as you're working on a differential, a workup, or a treatment. Second, they can keep that output aligned with how their organization actually practices. We practice care differently in different areas of the country, and so with integrations into enterprise tools like, say, Microsoft SharePoint, responses can actually reflect that hospital's or institution's approved policies, care pathways, and how they treat particular conditions, so that teams start to follow the same playbook and level up their care to the gold standard. And then third, they can use it to speed up the documentation and communication work that piles up all around care when they could otherwise be practicing at the top of their license. These are things like drafting discharge summaries, patient instructions, clinical letters, referral letters, and prior authorization support. In practice, this means that on a given day, a clinician can pull together the medical evidence and institutional guidance, tailor it to the patient right in front of them, and produce all the appropriate clinical and administrative documents they have to get done, as well as adapting patient materials so they are easy to read and translated for each individual patient, so that patients can follow along with their care plan better. The goal here is less time lost to admin work, better adherence to the best standards, and a smoother patient experience, while keeping clinicians firmly in charge of the decisions.

[00:12:53] Speaker 3 (Nate): As for how that maps to hospital bottlenecks and their own administrative goals: these hospitals are facing a do-more-with-less squeeze every day. Higher acuity, staffing gaps, lots of documentation, inbox work. The biggest bottleneck for hospitals is time. Clinicians don't have enough of it; none of the team members at a hospital have enough of it to read everything, explain things well, and keep up with the fire hose. We think AI can help here by taking a first pass at the heavy lifting: synthesizing knowledge, drafting, translating complex information. The other challenge hospitals are facing is fragmentation. Care is increasingly spread across specialists, across settings as these systems grow, across IT that does not talk to one another, and the patient ends up having to be the integration layer of the entire system. But AI is uniquely good at pulling these signals together, whether it's a note, an imaging report, labs, problems with beds, or wearables, and starting to surface a coherent story, so the questions and the next best steps can really be brought together.

[00:14:07] Speaker 3 (Nate): And then the final point is that the adoption was already happening. Hospital leaders, pharmaceutical leaders, they know their clinicians and their researchers are already using ChatGPT during the day because the need is real, and pretending otherwise just creates administrative and IT risk. So the opportunity here is to bring that usage into the enterprise in a secure, auditable, transparent way that aligns with institutional policies. That's why we're excited to do this: so we can cut the admin work, help the institution, and keep the clinicians in charge.

[00:14:46] Speaker 2 (James): Maybe turning back to some helpful real-world examples from your personal journey. Kate, you've spoken about navigating your own health, and maybe it's helpful to hear a bit more about that.

[00:14:54] Speaker 2 (James): I hope it's helpful to hear a bit more about how these tools helped you feel more informed or less overwhelmed, and how your personal health journey shaped the way you evaluated this product or pushed our teams internally.

[00:14:08] Speaker 4 (Kate): Yeah, absolutely. I mean, as Nate was describing, there's such a wide range of places where I turned to ChatGPT: sanity-checking information, helping me prioritize the most important questions to ask various specialists, things like really understanding, through some of the more advanced features, the clinical trial landscape, reading clinical literature with an eye toward the very specific type of breast cancer that I had, and understanding, in a given paper, what the sample size was and whether it was statistically significant for not just breast cancer but the specific subtype I had. Those are on the more advanced end, but then everything else from understanding, hey, is X, Y, Z a symptom of a treatment that I may be on? What do people do to mitigate these types of symptoms? Again, this was always information that I would pressure-test and check with my clinicians, but in a complex diagnosis, given the time crunch and the dynamics that Nate is describing, working across many, many specialists inside and outside a hospital setting, whether that's nutrition, acupuncture, et cetera, there's just a lot going on, and having that through line was absolutely essential to me. Fortunately, I was also being treated at UCSF, which happens to physically be on the same street as OpenAI, and all of my doctors were very supportive and interested in how I was using the tools. That made it really helpful because we could have a very honest, open dialogue about what I was finding and what their opinion was of various things that were surfacing, and it really just made me feel way more informed and way more confident that I understood the decisions I was making along the way.

[00:17:29] Speaker 4 (Kate): And then in terms of influencing the product, the interesting thing is that so many employees at OpenAI have similar experiences. Nate was talking about 40 million people a day turning to ChatGPT to help navigate health and wellness questions. Certainly some of it is a very serious diagnosis like mine; a lot of it is, I want to lose 10 pounds at the new year. But nevertheless, so many people have that point of view and that empathy, whether it's as a patient or as a caregiver for a parent or a child. So I think there's just so much of that empathy built into the business, frankly, because we're all turning to the products in that way.

Speaker 2 (James): And we released a data report when we launched these products that talks more about some of the usage statistics we've seen. You mentioned how important it was having your medical team both nearby and in the know on a lot of the key data. One of the things we've seen is just how important this is for expanding access, and how much people who are far away from hospitals and clinics are using this for support. I think it's something like seven out of nine chats we see using these tools come outside of clinician hours. So it's just such an important support for people navigating this.

[00:19:03] Speaker 2 (James): Nate, I'll come back to you. Health is truly such a high-stakes and unique context, where questions are often incomplete and the cost of being wrong really matters. How did that reality shape the way ChatGPT Health was approached?

[00:19:15] Speaker 3 (Nate): Good question. So first, we took accuracy and safety seriously all the way down to the model level. We worked with over 250 physicians to improve how our models handle health questions: things like when to ask a follow-up question, how to express uncertainty, and when it is important to point someone toward professional care rather than trying to just answer everything directly. Second, we invested heavily in evaluations. We have efforts like HealthBench, which allow us to score responses not just on factual accuracy...

[00:19:52] Speaker 3 (Nate): Not just on factual accuracy, but on whether they included a critical consideration, and we penalized answers that might miss a really important risk to watch out for. This gives us a very realistic signal of how models perform across real healthcare workflows, rather than the previous multiple-choice, examination-style evaluations. And then third, we held ourselves to external benchmarks. Our models now outperform industry experts across all the health occupations in GDPval, and ChatGPT for Healthcare went through over nine rounds of physician red teaming across the five months before launch. The goal wasn't perfection here, though. It was to build systems that would always behave thoughtfully, conservatively, and transparently in these high-stakes settings.

[00:20:40] Speaker 2 (James): And I think we've got time for maybe one more question before we go to Q&A. So maybe I'll turn to both of you: what do you think success would look like a year from now, not just in terms of features, but in how people feel using both tools? Maybe Nate, I'll turn it to you first, and then Kate, we'd love to hear from you as well.

[00:21:00] Speaker 3 (Nate): Sure. A year from now, for consumers, I hope success feels like feeling less alone and less overwhelmed, feeling like you have a trusted place to make sense of things, prepare for conversations with clinicians, and understand what's happening in your own care without feeling judged or talked down to. For clinicians, I think success feels more like relief: less time lost to administrative work, more confidence that you're working from the right evidence and shared standards, a little less burnout, maybe more time to see your kids. And then finally, for our enterprise partners, I think success is trust: knowing that the AI is being used in a way that's secure, auditable, and aligned with clinical judgment.

[00:21:45] Speaker 4 (Kate): Yeah, I think from my perspective, it's really helping people make sense of the information coming to them in a health and wellness setting. Whether that's people who have been using their Apple Watch for five years and have a bunch of data on their sleep and walking, and we can actually help empower them to become healthier in their day-to-day life in line with their goals, or whether it's people who are dealing with complex, serious diseases and, as Nate described, might have records in a variety of different places, really helping empower people to take advantage of insights from the information they have and turn those into better outcomes for themselves. That's really what I would see as success.

[00:22:45] Speaker 2 (James): Thank you both for that. And I think now we're going to turn to the fantastic community we have here. I've got a few questions teed up in Q&A, so I'll get started with this first one from Daniel Green from Kansas City at the AI Collective. Daniel asks: what is one small experiment you would like people in this audience to run with ChatGPT Health and report back on? Either of you can take it away first.

[00:23:12] Speaker 3 (Nate): Well, I would encourage everyone to try to make it practical. So if you have synced with an institution that has some recent lab data, then maybe the next time you're at a restaurant, try some of the multimodal capabilities of ChatGPT. Snap a picture of the restaurant menu and say, hey, what kind of things should I look out for here? What kind of things should I maybe emphasize more in my order tonight?

[00:23:42] Speaker 4 (Kate): Yeah, I mean, one tip I'd have is: if there's an area in your health and wellness that you're feeling stuck on, or you're just not really sure how to start, put that into ChatGPT and ask how it would approach or break down that problem. I think you'd be interested in how it answers those questions and in what you could unstick for yourself by doing that.

[00:24:15] Speaker 2 (James): And I think we've got another one here from Daniel. This one is for you, Kate. When people are anxious or overwhelmed about their health, what role do you think AI can play emotionally, not just informationally? And where should the line be drawn?

[00:24:30] Speaker 4 (Kate): You know, I think this is such a personal question, and what overwhelm means in different contexts can span such a huge range. As Nate said, we really have trained this product to focus on physical health. That's really where...

[00:24:50] Speaker 4 (Kate): That's really where we're focused, and we point people to support outside the product, frankly, for emotional support through difficult health situations. I do think that so much of the stress of a diagnosis and navigating a health setting can be in the admin, in things like insurance claims getting denied or not understanding your benefits. There's just so much anxiety and stress that comes from the sheer information environment and the onslaught of information in these settings. So I think even just orienting to the product for things like, help me understand what's going on, help me prepare for these visits, help me understand my benefits, that is the core of reducing the pressure and stress on patients as they navigate these settings, versus some kind of therapy or other support.

[00:26:05] Speaker 2 (James): And here's another one for both of you. What has surprised you most about how people are already using ChatGPT for health, compared to what you expected going into this work?

[00:26:18] Speaker 3 (Nate): Well, it's still early days in adoption. But I think the first sign of success was probably just the sheer size of the waitlist. A lot of people want to try this out. We knew a lot of folks were already doing the upload, download, and copy-paste, and so we're treating it as a big responsibility to get that transfer of data into secure spaces as quickly as feasible as we roll this out. But I think there's a reflection as people go through this process where they realize, "Oh, wow, I really did have my labs located over here and my health data over here that could actually be useful, and I've never even had the chance to think of them in the same context before." And so the merger and the crossover, and the connectivity into some of our application platforms, like, how do I take this and turn it into an interesting workout plan that might be sensitive to recovery from a recent injury or illness, creates these delightful crossover moments that we've never really been able to achieve in the technology world or the healthcare world before.

[00:27:37] Speaker 4 (Kate): Yeah, I'll just build on that, and I think this is something that you said, James, as well: the specificity that you can approach these questions with, and the ability of the AI to remember and understand your evolving situation over time, really unlocks use cases that we've just never seen before in the history of technology or medicine. It allows, you know, like the advice, for example, I was getting as a mother of a three-year-old and a six-year-old, who had this very specific type of diagnosis and a very specific kind of job, with an understanding of the context of the actual hospital I was at and the point of view of the department I might have been in. Just a level of specificity and context that unlocks a fundamentally different set of answers and education and empowerment for the patient, in a way that something like a search query or a single conversation with a clinician simply can't do.

[00:29:00] Speaker 2 (James): Absolutely, absolutely. And I think, again, that context, and really being able to meet people at so many of those junctures in their journey, is truly one of the big unlocks of what is possible here. I guess I'll jump to our next question. This is from Dr. Sharam Yazdani, who is faculty at the UCLA Pediatric School of Medicine. Dr. Yazdani asks: are there any plans to directly integrate health AI into EMR systems such as Epic so it functions seamlessly as both a workflow and cognitive assistant for MDs, NPs, and RNs in inpatient and outpatient settings? So Nate, maybe I'll turn it to you first.

[00:29:43] Speaker 3 (Nate): Sure. Really good question. So of course, we started with some medical record integrations on ChatGPT Health.

[00:29:48] Speaker 3 (Nate): That's the consumer side of the product, but similar technology is available for the professional side of the product. What was important to the team as we developed this product was to make sure that our deployment was truly aligned with the workflows that our end users, doctors, clinicians, et cetera, would find valuable. There's a long history of the technology industry and the healthcare industry being like oil and water, and of solutions coming to bear that aren't necessarily a perfect fit for workflows. A big part of working with our early launch partners and some of the hospitals that are signing on post-launch is to sit down together and decide when are the moments where an electronic medical record integration would be most valuable to you within an interface such as ChatGPT or one of the apps that we can connect into. That could be something like being able to talk about your patient panel on your drive to work, or it could be pulling together a patient who's been referred in from a faraway place and has a lot of disparate data points, maybe synthesizing and interrogating that information in an interesting way. So this is now, I think, a challenge of deployment rather than of underlying technology, and it's one that we believe should be designed by and through our partners, which is why we're excited to be working on this kind of technology now.

[00:31:20] Speaker 2 (James): Here's one that either of you should jump in on. This comes from Andy Olivo, who's a forum leader: how can hospitals integrate ChatGPT Health with their existing processes and patients? Nate, I think this one's for you.

[00:31:35] Speaker 3 (Nate): Good question. With ChatGPT Health today, we have been focused primarily on helping patients with the best possible context for those really serious moments when they come to ChatGPT. But at the end of the day, it does come back to involvement and partnership with the existing healthcare ecosystem. We're excited to think about, over the coming year, the ways that a patient who is choosing to follow guideline-recommended care, or who has an action to take or a follow-up question, might be able to have a really meaningful, delightful, and potentially reinvented, warmer experience interfacing with the systems that already serve them today. So if folks have interest in thinking through what those experiences could look like, we welcome you to get in touch with us.

[00:32:33] Speaker 2 (James): Our next question comes from Joshua Roberts, Editor-in-Chief of the Journal for AI and Medicine: how does OpenAI balance industry, academia, and research on LLMs in healthcare? Are there benchmarks, white papers, or research outputs? Maybe Nate?

[00:32:52] Speaker 3 (Nate): Sure, yeah. So benchmarks are a huge part of what we do at OpenAI. A lot of people think it's just about the generation, but it's also about the listening and the pressure testing, and making sure that if you change something over here, it doesn't affect something over there. A good example: we benchmark ourselves explicitly on safety and effectiveness, not just correctness, because that's a very important nuance when you're interacting with someone in the healthcare system. This means evaluating how models behave around edge cases, rare conditions, early symptoms, and mental health. There are cases where a technically correct answer is still not a sufficient answer to give, and physicians have helped us encode those instincts into the training, the evaluation, and the red teaming of the products, so that the model behavior better reflects how a thoughtful clinician would reason, but also creates the opportunities for those important handoffs when they're needed. There are exact corollaries to that sort of process in areas like foundational science supporting health.

[00:34:09] Speaker 4 (Kate): Yeah, the only thing I would add there, and Nate is definitely the expert here as well, is that we also work with frontier scientists and research scientists. Just as one personal example, my oncologist at UCSF runs the early clinical trials program there for the breast cancer oncology department, and he is working with our OpenAI for Science team. So in addition to patients, clinicians, and hospitals, we do have, as Nate mentioned, an active set of partnerships with researchers themselves.

[00:34:46] Speaker 2 (James): Our next question comes from Anna Haney Withrow, Director of the Institute of Innovative and Emerging Technologies at Florida Southwestern State College. Anna asks: if you treat this release as a signal, what do you think it tells us about the future in 10 years? So, either of you.

[00:34:57] Speaker 2: Go for it, Nate.

[00:35:10] Speaker 3 (Nate): Well, one thing that I'm particularly excited about is that medicine, while imperfect, has always been a set of building blocks. There's no way any individual person or expert working or navigating in the healthcare system today can do it all. It's dependent on trust. It's dependent on institutions and processes. The things that have been built in ChatGPT Health come from decades of work by bipartisan government groups on patients' rights of access to their data, standards for data exchange, and all sorts of different underlying technologies that went into the AI and the content that could help translate it. We hope that this will be yet another building block. I think that's one of the reasons why we're excited about the API components of OpenAI for Healthcare. There are so, so many challenges to solve in healthcare. There's no way we can say mission accomplished in one year, probably let alone a decade. And so empowering as many people as we can with the additional building blocks that we're releasing now and will continue to release is what makes me most excited about the future.

[00:36:33] Speaker 2: Yeah, I would just add, I think it's hard to really wrap our heads around the pace of change and how quickly this technology is moving. You know, ChatGPT itself launched only three years ago. So if we think about a decade, of course we always want to be grounded and not fall into wishful thinking, but I personally do think there is truly the potential for transformative outcomes for millions of people around the world as this technology continues to improve and get deployed in these settings.

[00:37:16] Speaker 2 (James): Another one here. Kate, you've talked about your personal experience shaping how this product was built. Was there a specific moment or insight that changed how you thought about what ChatGPT needed to do, or not do, in health?

[00:37:30] Speaker 4 (Kate): So, you know, again, the product was not built from my personal experience alone. I think the product team does such an amazing job making sure that the voices of people who are using this technology, whether they're patients or people who are just looking to improve their health and wellness goals, are included. Certainly mine was one voice, but there are many, many voices from patients as well as, as Nate mentioned, almost 300 doctors and clinicians. From my personal perspective, the real thing we are trying to build is support for people, something that can empower them to have better outcomes, but not something that provides diagnoses or is any kind of substitute for clinicians themselves. So it's really about getting that kind of support right, peeling away the unnecessary burden of things like insurance if you're a patient, navigating bureaucracy, et cetera. That's the "do, do not" that we're really focused on getting right.

[00:38:57] Speaker 2 (James): I think that's a great place to wrap. I'm going to invite our colleague, Natalie, back on to help close us out here today.

Speaker 1 (Natalie): Thank you so much, James, Kate, and Nate. Kate, we're all so grateful to have you back. I have a feeling we'll be hosting you all again really soon. This is obviously a subject the community cares deeply about. I'm really sorry we couldn't get to all the questions today, but there will be a follow-up on the horizon. I also want to thank all of the clinicians and healthcare researchers who showed up here today, especially Sharam Yazdani, Jodi Picasso, Jessica Jimenez, and Ray Loenzani. We couldn't do this work without your input and engagement. So thank you so much for being here today.

[00:39:44] Speaker 1 (Natalie): We're hosting our VP of OpenAI for Science, Kevin Weil, and UC Berkeley Professor and Project CETI Linguistics Director, Gašper Beguš, for an in-person event at our San Francisco headquarters. It's actually a hybrid event, so you can join us in person or virtually on January 29th for "Understanding Animals: How AI Helps Scientists Interpret Language Across Species." Beguš and fellow researchers have used AI to discover that sperm whales produce vowel-like sounds. I can't wait to share this with you. You can request an invite on the registration page, or you can join virtually.

[00:40:19] Speaker 1 (Natalie): We also have our people team joining us again this season for "Get Hired Using ChatGPT." We'll be hosting it in early February with the head of our recruiting program, Selena Ma. She's going to teach us how to use ChatGPT to find relevant jobs, craft a resume, and practice for interviews. This will be especially relevant to our forum members who are seasoned in their lines of work but trying to figure out a way to break into AI. It's also going to be relevant for new grads who are applying to jobs for the very first time. So I welcome you all.

[00:40:52] Speaker 1 (Natalie): Last but not least, we'll be hosting "Using AI to Protect Children Online" with the CEO of Thorn during Safer Internet Week. Thorn is a nonprofit organization that transforms the way children are protected from sexual abuse and exploitation in the digital age. We've been in deep partnership with them from the very beginning at OpenAI, so I'm really excited and looking forward to hosting that event. Thank you all for joining us today. I can't wait to see you in a couple of weeks. I hope you have a wonderful rest of your week.
