Lowering Barriers to AI Adoption
Conor Grennan is Chief AI Architect at NYU Stern School of Business, where he builds generative AI fluency across the institution, including MBAs, faculty and administration. He is also CEO and Founder of AI Mindset, an AI consulting company that trains professionals, leaders and organizations on a new and effective framework for generative AI. He has worked with teams across industry, including at McKinsey, NASA, PwC, ServiceNow, JP Morgan, DaVita Healthcare, Pfizer, NYU Langone, Providence Health, American Securities, and more.
Conor is also the instructor for his digital course, Generative AI for Professionals: A Strategic Framework to Give You an Edge.
Conor is also a NY Times and #1 international bestselling author, published in 15 languages. He is a keynote speaker, TEDx speaker, and executive coach. His AI insights have been featured in Entrepreneur, Business Insider, Vox and other media. He co-hosts the AI Applied Podcast (top 20 in Business News) and is a Top Voice in Public Speaking on LinkedIn, where he has over 70k followers who track his generative AI insights.
Outside of his work in AI, Conor has been named a Huffington Post Game Changer of the year for his humanitarian work with trafficked children in Nepal, and in 2014 was given the Unsung Hero of Compassion Award by His Holiness the Dalai Lama. He has a BA from the University of Virginia and an MBA from NYU Stern, and lives in Connecticut with his family.
Matt is the first appointed Chief Artificial Intelligence Officer of a private equity-backed life sciences consulting firm, Inizio, where he leads enterprise transformation, product development, customer engagement, staff upskilling, innovation, research, publication and partnership efforts both internally across this 10,000+ person company, and in collaboration with many leading health and biopharmaceutical, biotech, medtech, healthtech and neurotech companies. Matt is the Co-Chair of the International Society for Medical Publication Professionals Artificial Intelligence Task Force, a Founding Board Member of the Foundation for Artificial Intelligence and Health, and CEO of LLMental, a venture studio for AI-powered mental wellness.
Conor provided a practical framework for rethinking how we use large language models like ChatGPT, emphasizing that the key to effective implementation is about changing behavior rather than just learning new technology. He presented a three-part framework that aligns with how we actually work: Learn, Execute, and Strategize. This approach promised to elevate generative AI from an intermittently-used tool to a transformative force that could dramatically improve decision-making, efficiency, and strategic planning. Conor spoke on topics such as "Not Your Typical Tech Upgrade: Why Our Brains Resist AI," "Breaking Bad Habits: Shifting Away from Google-Centric Thinking," "The AI Mindset: Embracing a New Paradigm for Interaction," "Hands-on Transformation: Live Demo of AI in Action," and "Beyond Text: Exploring AI's Expanding Capabilities."
Hi everyone, happy Wednesday. Welcome to the OpenAI Forum. I'm Natalie Cone, your OpenAI Forum architect and community program manager, and I'd like to start all of our talks by reminding us of OpenAI's mission. OpenAI's mission is to ensure that artificial general intelligence, AGI, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.
Tonight we're here to learn from Conor Grennan. I invited Conor here tonight because I had the personal experience that, despite being very familiar with machine learning, having built a machine learning community of practice at Scale AI, learning so much from the engineers there, and running the expert AI trainer program here at OpenAI, where we managed very special reinforcement learning from human feedback data operations projects, I still did not feel confident adopting AI in my professional programmatic workflows until very recently. If I had trouble adopting AI, which by the way has massively improved my life and is enabling higher productivity, freeing up space for the work that we really want to be doing, then I'm sure many of our forum community members, who are experts in an array of domains but not AI, also experience similar barriers to adoption.
Conor's talk this evening will be the first in a series of educational programming, ranging from OpenAI tutorials to technical support office hours, and then what I hope will also be inspiring showcases composed of exciting use cases from an array of fields. Given we are first and foremost a research lab, we'll always make room for academic and industry researchers. But if you register for one of our enablement or technical success sessions, we're also surveying the community to understand what it is that you need support with.
While tonight we're focused on identifying and lowering the barriers to entry, remember this is step one. If this talk is a little too basic for you, doesn't quite resonate, we will be publishing many more educational opportunities on the horizon from now until the end of the year, and they will become increasingly more advanced as we go. But tonight we want to start where some of us are, which is enthusiastic, optimistic, curious, busy, and in need of support, but less technically inclined or experienced, and explore the very first steps of making AI work for us.
The person we've tapped to help us explore this challenge is Conor Grennan. Conor Grennan is one of our inaugural community members. His compass has been pointed in the direction of empowering non-technologists to benefit from ChatGPT from the very beginning. He serves as chief AI architect at NYU Stern School of Business, enhancing generative AI fluency among MBAs, faculty, and administration, and is the CEO and founder of AI Mindset, a company providing AI training frameworks for professionals across various industries. His professional experience includes collaboration with high profile organizations like McKinsey, NASA, and Pfizer. He also teaches a digital course entitled Generative AI for Professionals: A Strategic Framework to Give You an Edge, which, by the way, I'm enrolled in.
Apart from his professional achievements in AI, Conor is recognized for his humanitarian efforts, having been honored by the Huffington Post and the Dalai Lama for his work with trafficked children in Nepal. He's a bestselling author, notable public speaker, and hosts the AI Applied Podcast, maintaining a significant following on LinkedIn, where he shares insights into generative AI. Please help me welcome Conor Grennan.
That is a great welcome. Thank you, Natalie. It's so nice to be here again. I'm so excited to have you, Conor. You're always so generous and warm and kind, and it's always very fun to collaborate with you. Same, I feel the exact same. Well, let's dive into it after that very warm welcome. Thank you, Natalie. Do I have the green light to get going? I'm giving the mic to you, Mr. Grennan, is what I meant to say a moment ago. Thanks, Natalie. Yeah, this has been a fun collaboration with everybody over there. You guys obviously have an amazing team, and it's been super fun. Since last year, we've been working together on all this stuff. I am going to be talking, exactly as Natalie said, about adoption of generative AI. If you're on this call, you obviously are somewhere on this spectrum, but here's the really cool thing. As Natalie mentioned, I work across NYU Stern, obviously, but I also really work across industry, doing workshops and trainings in pretty much every industry I can think of.
The cool thing about that is that I've talked to, at this point, thousands of individuals. The really fun thing is that no matter where you are on this journey, I think this is going to be a little bit of a different approach to it. That's the key. It's not just about where you are, but it's also critically, where is your team? Where are the people you work with? Sometimes we fall into this trap of thinking like, well, I just get this, and this is awesome, and why isn't everybody else around me getting it and doing it? That's also what I want to talk about today, about really taking a totally different approach to this, and why I think so much in the way of generative AI teaching is maybe on the wrong track. I've learned a ton from OpenAI, so I really appreciate their partnership on this.
By the way, at the end of this talk, I'm going to talk for a little bit, but then I'm going to bring in my friend, who some of you probably know, Matt Lewis, who is Chief AI Officer over at Inizio Medical. I just thought, I don't want you guys to get bored of me or anything like that, and Matt is just one of the leaders in this space. I always love talking to him, always love to learn a lot from him, and he and I have a very similar approach. I thought that would be fun before we jump into Q&A, so you'll get to hear from Matt as well.
Well, let me just start here. I am going to go ahead and share my screen with you all. Thanks to Ben in the background who's helping to run all this, and Caitlin who's doing an amazing job. Okay. Do you guys have that? I assume that somebody will scream if you don't. Oh, no. I see it too.
Okay. So let's start here. This is just the title slide. Who cares? Practical strategies, blah, blah, blah. That's not the point. The point is that what I have learned over tons of research around this and everything else is that we tend to go about learning and practicing this wrong, and I'm just going to use ChatGPT and large language models as what I mean by generative AI. I'm not going to go broad on this. But I want to get to just a very quick few points and paradigms right off the bat. Because most of the time, the way that companies are treating this and the way that we're treating this is sort of like some kind of digital transformation, like in the old school digital transformation way where you get a new technology, and then everybody moves over to that technology, and you burn down the old technology so nobody can be on that technology. And you have lessons for people and mentors for people. Think of, I don't know, Salesforce or something like that. It doesn't matter. But that's how people are thinking about this technology. It's how I thought about this technology forever. And I love how Natalie framed it. It's like she understands all this, but how do we actually get it so that we're using this like 30 times a day? And even if you are, how do we get it so that others are? Because I'm sure that some of you have run into this problem where you're like, I'm using it all the time, but my team just isn't using it. So I want to try to give you a paradigm to think about. And this is what I do. And this is why I created this whole other digital course as well, to really kind of hammer home this point.
So here's where we go wrong first. First of all, we kind of consider it like this digital transformation when in fact, it's not. It's actually all about change management and leadership. There's a lot of data behind that that I can show you. I'm not going to, because I want to cram in a little more and make this a little more fast paced, but that's what I do with companies a lot. So let me just jump into the second slide here. A lot of things that we learn, especially around tech... and let me say right off the bat, I am not a tech person. I have no tech background. I have an academic background. So when I started learning this right after ChatGPT came out, I kept expecting, oh, I better take machine learning classes and things like that. And it turned out I didn't need any of those classes to do well with ChatGPT. It was very, very strange. And yet we're so accustomed to, oh my gosh, I'm going to learn this new thing, so I better start just learning. And when we think about that, we think about learning curves, right? The things that have learning curves are numerous. We think about if we were learning any kind of CRM system or any kind of new software, if we're learning Excel, or if we're learning French or calculus, right, as you kind of see here. We have this learning curve that we spend weeks, months, years, whatever, learning, and then you go off and practice that, right? That's how learning curves work. However, there are certain things that do not have learning curves. So to give you an example, think about weight loss or, you know, getting into shape, something like that, right? There's no learning curve to that. We already have the information, right? We eat less, we exercise, and generally speaking, we can lose weight. So what's the problem here? Well, the problem is that it's not about learning it. It's about changing our behavior.
So you can imagine there's a few things that it's not about learning, right? Like if you buy a treadmill, it's not about learning anything. And that's what I want to focus on right off the bat. ChatGPT is much more like a treadmill than learning Excel or something like that. And what I mean by that is this.
If we wanted to cure heart disease in America, there's an idea that we could put treadmills in every home in America and that would cure heart disease. But you know, and I know that that wouldn't work, right? So why would it not work? Well, it wouldn't work because when I get up, maybe not you guys, you guys are probably all in amazing shape. But when I get on my treadmill, currently gathering dust in my basement, when I get on that treadmill, I get off right away, right? Why? Because my brain doesn't prioritize what the treadmill gives me, right? My brain prioritizes things like quick rewards and it prioritizes conserving energy, things like that.
And things like sitting on the couch and watching Netflix and eating Doritos, that's what my brain loves. I don't get off the treadmill and think, oh, I gotta read the manual for this treadmill. It's not about that. It's about a change in behavior. That's ChatGPT. It's a change in behavior.
So what does that change in behavior look like? Because a change in behavior is actually a pretty huge hill to climb usually. But that's what's so interesting about ChatGPT. Usually a change in behavior comes with the pain and suffering of trying to train your brain to like yoga, which is insane to me, right? But with this, when you start using it, you start to just gain momentum and you want to use it.
So what does this look like in practice? So what I would sort of suggest is that we really need this new paradigm for generative AI. We need to get our heads completely out of how we used to think about digital transformation. What does technology look like? All that kind of stuff. And we need to think about this very, very differently.
So here's how I kind of recommend doing it. So first of all, I think we need to understand that our brain is betraying us, right? Our brain kind of gives us these bad paradigms in terms of using ChatGPT. So what do I mean by that? Well, I have talked about this before, but it's one of my favorite issues to talk about here. This is a slide that you may have seen in the past, but this slide shows the quickest apps to a million users.
And this is a slide where, in the beginning of ChatGPT, everybody's like, "Oh, look, ChatGPT down at the bottom only took three days to get to a million users, where everything else took longer." Why is this important? It's not important because ChatGPT took three days. Who cares? The important thing and the interesting thing to me is everything else on this slide.
And so usually I do this kind of exercise if we were a live audience. I like doing it where you say, "So talk to me, what do you think Netflix replaced, right? Or what did Airbnb replace? Or what did Dropbox replace?" You know that Netflix replaced Blockbuster or whatever. You know that Spotify replaced your CD collection, whatever it is. But then you ask the question, "Okay, so what does ChatGPT replace?" And the truth is that you can get a lot of bad answers from that, but there's not really any good answer, because ChatGPT doesn't really replace anything. It doesn't really replace one thing.
So that's more than just this kind of interesting intellectual exercise. It's interesting to me anyway. It's interesting because the way that we pick up new technologies, the way that we learn things, the way our brain learns things is essentially often by replacing one thing with another.
So in other words, like if you were back in the day when people just walked around with candles and then all of a sudden somebody gave you like a flashlight, you know, you wouldn't go back to the candle, right? You'd start using the flashlight because it was easy and it was more powerful. Your brain always knows what something replaces. And if your brain doesn't know, then it's going to come to a conclusion and it's going to set up camp in your brain and just say, "This is what it replaces." And it's going to be subconscious and not tell you.
So I say all that for a very specific reason. Also, one other analogy on this is that my dad, old Irishman, when I gave him his first iPhone, I didn't tell him about all the apps. I kind of figured, "Oh, he's going to figure this out." But he didn't figure it out. And then I'm like, "I wonder if he's actually figuring this out." And then he called me like two weeks later, all excited about his new iPhone. And it turned out that he was excited because he discovered the flashlight.
And the thing about that is that his brain was saying like, "I know what this is. It's a phone." It didn't occur to him that this wasn't replacing his phone. This was replacing his ability to do personal banking and order an Uber and watch Netflix and everything else. You see what I'm saying?
So your brain is replacing things. And if your brain, trying to use ChatGPT, is replacing the wrong thing, that has consequences. Now that's a little bit of a 30,000 foot view. I'm going to talk to you about exactly what that means and what's happening in our brains and why adoption is so hard. So we have to examine actually how we think and how our brain works.
By the way, this is when I get over to Matt Lewis a little later. This is why I want to talk to Matt about this because this is how he thinks too. We've kind of come to the same sort of conclusions on this, but he's much smarter than me.
So, okay, how do we think, how does the brain work? Well, the brain works in a lot of different ways, obviously, but the brain does a few things very, very well, right? The brain does pattern recognition really well. It does prediction really well. It does automation really well, things like that.
So let's just focus for a second on automation. How does that work? Why are we even talking about it? Well, your brain has all these neural pathways, and what neural pathways do is sort of muscle memory, right? And usually that's a good thing, right? Because you can go about life without thinking. Like if you're about to pick up a mug, you don't have to think, how do I pick up this mug? Your brain just knows, right? And it automates all that in order to free up your prefrontal cortex to think.
Same if you're like driving, you're probably driving in the rain, all that kind of stuff, and you can think about a million things because your body already knows, your body has automated that, but it can be bad. Because if you have been commuting to work the same way for 20 years and keep making that left out of your driveway, and then, you know, you change jobs and you just automatically make the left out of your driveway or something like that.
So your brain can kind of steer you wrong if something's automated. Another example, again, I promise I'm getting some of this, I'm about to hit the key thing that I discovered. If you were to talk to, like if you see a baby sitting there, right? You're going to talk to the baby like a baby, why? Because your brain already knows, it sees a baby, it knows, you don't have to think, how do I talk to this thing, right? You don't accidentally talk to it like a college professor or something like that, right?
So why is that important? Well, it's important because your brain uses those visual cues to sort of determine and to kind of call up, you know, in that neural pathway, what's the automated response to whatever it's seeing. So why is that so important? This took me so long to figure out, I'm going to show you something, you're going to laugh because it's so simple, but here it is, okay?
When your brain sees a ChatGPT window, it looks at this. And again, if we were together, we could do this more interactively, but what's actually happening is that your brain looks at this and your brain is not seeing ChatGPT. Your brain is seeing the same thing that your brain has been looking at since the dawn of internet time, right?
Your brain looks at this and it sees this: it looks at ChatGPT and it sees Google. Now, I don't care how much of a super user you are. I am a super user, and Matt, who's gonna be joining us, is a super user, but here's the thing: no matter how much of a super user you are, your brain has spent 20 years looking at a search bar. And when it sees ChatGPT, even if you use it every day like this, there's a temptation to use ChatGPT like Google.
And the bigger problem is actually that it kind of acts like Google. And so what I mean by that is that we go into ChatGPT and we kind of treat it like a search engine. And again, it kind of acts like a search engine.
So to take that baby analogy again, imagine if you went to another planet and all the babies were actually like superhuman MIT professors. After probably a couple of days, your brain will be like, "Holy cow, I can't talk to these like babies, right?"
I mean, but at first you might have some trouble, right? Because your brain's like, "No, they're babies." So this is critical, because when we are treating ChatGPT like Google, it's pretty much the opposite of the way that you should be using ChatGPT. It kind of feels like, well, it's no harm, but there is a harm, because ChatGPT is not a good Google. Google's a good Google. ChatGPT by definition hallucinates. That's how it works.
So why is this so critical? Well, it's so critical because if we don't shine a light on this bad paradigm about what we're doing wrong, if we don't shine a light on our bad habit... it's like if I'm thinking, "Oh my gosh, I'm gaining all this weight," and then my wife points out, "Well, we do keep bags and bags of chips open on the counter, right?" It's like, "Oh, right." You have to shine a light on these bad paradigms.
And here's why it's so dangerous in this case. It's so dangerous because when you are using ChatGPT like Google, you're using it in a command-and-response kind of way, right? You're putting in an input and you're expecting to get out an output, and then you walk away.
When in fact, you should be treating ChatGPT, and you've heard this a million times, like a human, like having a conversation. So how do we do that? We're shining a spotlight on the bad problem, but how do we do that? The problem isn't just identifying bad behavior. That's kind of the easy part. Well, not the easy part.
This took me 18 months and I know, yes, laugh at me all you want, but it took me a long time. But if we are going to change that, we have to think about, well, okay, that's the problem. How do we change that behavior?
So a couple of paradigms that I want to offer up here just quickly, cause I kind of want to get into us all talking together here. We have to retrain our brains, and it's not about the tech, and it's not about waiting for the better model of ChatGPT to come out or anything like that. It's about working
on us, not on the tool. So here's how I like to think about it. I'll give you a couple ways that I like to think about it.
First of all, I took this from my course that Natalie very kindly asked to take, but here's the thing, right? I call it the Costa Rica paradigm. Imagine you are planning a trip to Costa Rica with your family, let's say, and in one area, at this one desk, you see a desktop or a laptop or whatever, and it's open to Google, right? You go over, type "top 10 things to do for a family in Costa Rica," and it's going to give you something great, because the algorithm will push it to the top, and it's going to be super great. All these great ideas like horseback riding and beaches and restaurants, all that kind of stuff, right? So that's one thing you can do. You can do that, but imagine at this other table is not Google, right? That's the Google desk that's sitting right there, but imagine at the other desk is the head of the Costa Rican tourism board. He's just sitting there. He's like, come talk to me. What do you need to know? I know everything about Costa Rica. And you go and sit down with him. He's like, I've blocked off a full hour to chat, right? Imagine that now you're sitting and talking to the head of the Costa Rican tourism board, and now he's saying, well, what kind of things do you like? What is your daughter like? What is your son like? What kind of balance do you want? Does your wife prefer to stay near the beach? How close are these restaurants together? Listen, you only have this much of the day, this time of year. That's how you'd have that conversation, right?
So in this analogy here, Google, the laptop, that represents Google. That's how Google works. ChatGPT is the head of the Costa Rican tourism board. I mean, you can tell it it is the head of the Costa Rican tourism board, and it will talk to you like the head of the Costa Rican tourism board. That's how you have to do it. Why is this hard? It's hard because it doesn't look like the head of the Costa Rican tourism board. It looks like Google. That is unbelievably hard, but you have to find a way to give yourself visual cues. And again, I do this in more depth, but I want to just give you a little bit of a sense of how these paradigms can work to shift your own thinking on that. Here's another paradigm that I think is really important here.
Why are we even talking about generative AI and large language models now? I mean, AI has been around forever. Well, the reason we're talking about it now is that AI, even though it's already existed, has always existed in this very tight user interface. So what I mean by that is that if you have autocorrect on your phone, and you do, that's AI, right? But nobody is getting a word corrected and thinking, oh my God, science fiction has landed, here we are in the future. It doesn't feel like that, because it's invisible. It's just doing what you know it can do. But it's using AI. It's invisible, and you want it to be invisible. It's sort of like an umpire in baseball: you want it to be invisible. It's not the point of anything.
So what's happening now with generative AI, and especially ChatGPT? ChatGPT is essentially ripping the user interface off AI. And it's saying, do anything with this, try, you know, go anywhere with this. And you're like, well, like what? And it's like, anything at all. Do you remember when ChatGPT started to have conversation starters? It's because of this reason. It's because the brain doesn't do well when it doesn't have a template. Imagine for a second, two ninth grade classes. One class is told, hey, write a one page paper or two page paper on anything. And you're like, like what? Like anything, anything at all. The other ninth grade class is told, write a two page paper on three people you met this weekend and what you did after and how your weekend was impacted by meeting up with those people. Number one should be easier. It should be easier because you can do anything. Number two is actually easier, because your brain craves templates. It's why I am absolutely allergic to the idea that we teach ChatGPT only through use cases. Now, what I'm not going to do tonight, because I won't have time and I wanted to get to an interactive discussion: I have this AI Mindset framework, and I used to teach ChatGPT and generative AI and large language models from easy use cases to advanced use cases, until I realized, what's an easy use case and what's an advanced use case? It doesn't work like that.
So now this framework is learn, execute, strategize. And it's all about, you know, how do we make decisions better? How do we communicate better? And just thinking about how we use this to augment everything we do. But what I don't do is say, hey, you're in sales, let's talk about sales use cases, because your brain wants those templates. And it's going to hinder you, because you're going to think that ChatGPT is something you pull off the shelf when you need it. And it's not; it is something that you have to use fluently and fluidly throughout the day on everything you're doing. That's why I say wait. Now, of course, when I do demos, I have to use use cases. But that's why I don't start there, because everybody does it that way. And I think it's a big mistake, big mistake, because, again, it's really limiting how people see ChatGPT. They think about it as a tool like Salesforce: oh, well, I need to sort of figure out all my customers, I'm going to use this tool, or I need to put together a budget, I'm going to use this tool. This is not that. It's very, very different, even though our brain craves that.
So the last paradigm here that I want to get to is how do we start thinking about this differently? And by the way, this is, again, just a very quick 30,000 foot view. But I do want to get into a conversation with all of you and with Matt as well, about how you see adoption. But this is why I believe adoption is so slow. Before I get into this paradigm, let me say this. Adoption statistics are all over the place. Have you guys noticed that? Like when you see the Microsoft LinkedIn study, it's like, oh, 70% or whatever. And then you see others, like McKinsey, saying, no, it's only 5%. And I've talked to thousands of people, and I'll be curious to see what Matt says on this as well, but I find adoption rates are very, very low at the enterprise level. And the reason is precisely what we're talking about: people don't know how to tackle this, they don't know how to use it. And even though people say they're using it, they're kind of using it like, well, I use it to write emails. In the same way, you might say, hey, who here uses Excel, and everybody raises their hand, but who here uses Excel to run these, you know, giant models to do multibillion dollar valuations of companies? Only a few people, right? So just because people have used it, it doesn't mean they're using it to its full potential. They're using it as an email writer, things like that. It's so much more than that. But we have to change our paradigm.
So last paradigm, I'll say on this. And I want to start getting into conversation, because I think that's the more interesting part here. Imagine you have a time machine. You go back to whenever the light bulb was created, and you come out and you have the light bulb, and you're like, hey, people of, you know, this time, I have brought you the light bulb. And they'd be like, oh, my gosh, this is amazing, right? And they will instantly know how to make their lives better with the light bulb, right? It's like no more open flames in my daughter's bedroom, and our factories can stay open later, and our streets will be safer, and a million reasons, a million things you can do with the light bulb, right? You get back in your time machine, come back to 2024. Then imagine you go back a month later, and you're like, hey, I brought you something even more powerful. And they're like, well, what could be more powerful than the light bulb? And you're like, I brought you electricity. Don't worry about the electricity and light bulb relationship. And they'd be like, that's even more powerful, you say? And you're like, oh, my gosh, so much more powerful. And they'd be like, that sounds amazing. How do you use it? And you're standing there, and you're like, well, that's a good question. Kind of broad, there's a lot you can do with it. They're like, well, like what? And you're like, well, it can heat your house. And they're like, well, we don't need to heat our house. Okay, I guess it can cool your house.
The point is that you could stand in front of 500 people who have never experienced electricity, and the worst way to describe electricity is to try to give use cases. Because depending on who you're talking to, if you're talking to coal miners from Bangladesh and you're describing all the ways that podcasters can use electricity, you could go through 500 examples and they'll walk out thinking, eh, not that great, right? You can't teach electricity through use cases. You have to ask: do you need it now? With electricity, you just run into a problem and you use it. You walk into a dark room: electricity. You need to open your garage door: electricity. You need to call somebody: electricity. You need to fly from New York to Kansas: electricity. We just use it in the flow of our work. Nobody wakes up and says, how am I going to use electricity today? What are the use cases for electricity today? You just know, because we're used to it. But if you were to try to explain which parts of your life use electricity, it's an impossibility.
That's generative AI. Generative AI can do an extraordinarily broad range of things. If you think in terms of use cases, it would be like talking about electricity in use cases. You just have to think: what are the things I'm doing, and can generative AI, and especially large language models like ChatGPT, help with this task? Can it help with this one? And then you go home: can it help with this task? Can it help with this? I used it 40 times today. I used it for cooking, and picking up my daughter, and planning a trip with my wife. It doesn't matter. What matters is that you have to understand what you do. That's why I stay away from prompt engineering. That's why I say, don't worry about prompting. It's not as important as being able to just talk to it. So with that, I'll just lay this out, and then I want to hand over to Matt. Essentially, this is the three-part framework I was talking about. And again, I do demos around all this stuff, and they're very different kinds of demos. It's about how we actually get to the answers we want, how we actually use this to communicate, all that kind of stuff. It's not about use cases, although you do have to use use cases, obviously. But that's what's interesting about it.
That's me. Ben, if you want to take that down, Matt and I can jump into it, which I think will be better than just hearing from me over and over. Matt Lewis, welcome. How are you?
I'm well, Conor. Thank you so much for having me, and thanks to the OpenAI team for the invite. It's a pleasure to be here. So thanks so much.
Matt, let me read your bio real quick. Sorry, but I just want people to know who you are, though I'm sure a lot of people do; you're huge on LinkedIn. I'm just going to read this quick blurb so you know Matt's background. He's a life sciences commercialization expert with 75 successful drug, device, and digital therapeutic launches to date, focused on areas of high unmet need including rare disease, oncology, and neuropsychiatric illnesses. I love that. And the really cool thing about Matt is that he's the first Chief AI Officer of a private equity-backed consulting firm, Inizio Medical. That's how he and I connected. He's obviously all over the place, an internationally recognized voice. But now, interestingly, and I want to get into this a little bit too, Matt is also the Founder and CEO of a huge passion project of his that he's been talking about for a long time behind the scenes, which is LLMental, a venture studio focused on leveraging augmented intelligence to improve mental wellness for those in healthcare and life sciences. I mention all that, Matt, because you and I share so much. I learn from Matt all the time; he and I are talking all the time. I was once doing a presentation, Matt was in the audience, and I made Matt answer all the questions. Sorry about that, Matt. But what I love talking to you about, generally speaking, is that you and I share the same passion for this. You've talked to every big company I can possibly think of about this; you're huge in the space. But what I love about your approach is that it's very human-centered. What you and I talk about all the time, and you articulate this really well, is how this has so much more to do with the human element. You almost advise people: don't worry about the tech, don't worry about the latest model. What's going on with you? Who are you? How are you bringing yourself to this?
Can you talk a little bit about that? Because I think that's really beneficial.
Yeah, first of all, I completely agree that you and I are simpatico and on the same page, and I'm so happy we found each other and continue to collaborate. It's such a wonderful synergy, seeing your disposition in the space and what we're building, trying to really speed time to innovation in healthcare and life sciences. So first of all, thank you for your friendship. I've been in advanced analytics for probably about 15 years, doing machine learning and deep learning and AI for all that time, and generative really for the last three and a half years or so. And there are these oft-quoted statistics about how many AI projects in large enterprises fail, whether it's 70%, 80%, 90%. People talk about this all the time, and they're just like, oh, well, whatever, they're going to fail, so we'll seed the ground with lots of pilots, something will germinate, something will take root, and the rest of them, it is what it is. But I believe, and my experience has shown, that a lot of what fails is not because of the software, not because of the model, not because of the technology.
It's almost always because of human factors, whether it's the leader of the business not getting behind an effort and really supporting it. That was a legacy consideration: you'd go in and do an advanced analytics project, and the chief medical officer or chief transformation officer or CEO would support it financially, but you'd never see them, and their teams got the impression that this really didn't matter to the business. So it would be supported on paper, but no one actually put the time in to get it done, so it would wither on the vine and die. Now that's been replaced by something else: it's really clear and apparent when leaders within a business are actually using generative, professionally and personally. I don't know how many people here watched the Oprah special last week; Sam Altman was interviewed on it, and a number of other visionaries in the field were on it. But it's really clear, I think, to all of us when someone is using generative. It shows. You can tell it in their voice, you can tell it in what they actually speak about. They speak in really specific frustrations about what the models could and should do, or what they hope they will do in the near future. If they're not using generative enough, it's these vague generalities about projects and ideas and purpose and all the rest. And honestly, it's BS, because if they haven't been in generative often enough to be frustrated, and also to have a sense of awe and wonder, they don't have these human reactions. They're not in it often enough.
Yeah, I love that. I think our friend Ethan Mollick says you have to spend three sleepless nights with this until you really understand it, and I get that too. I totally agree, and I love that characterization, that you can hear it in people's voice, just like you can tell whether someone is AI-ready by their writing and everything else. You and I go in the same circles: I work with a ton of senior leadership teams, and you work with a ton of leadership teams in your role. Tell me if you see this a different way, but I mostly see adoption as very slow, and even people who are using it don't tend to be super users. The lack of adoption, from my standpoint, has often been driven by the fact that CEOs and senior leadership tend to think, hey, that person knows how to do it, why don't we just get it, or let's just get Microsoft Copilot licenses for everybody, or ChatGPT licenses for everybody. And in fact, that's not going to spread it. How have you seen the problem with adoption?
Yeah, I completely agree with everything you said here, and in most other places as well. I think people start with having a hammer and then looking for nails: oh, we have AI, it's here, so we need to go find some nails. I can't tell you how many conferences I speak at where this happens; it happened at one this morning. Someone says, well, give me a use case. And it's like, you're in life sciences; there are a million use cases. Are you able to get all the data from your clinical trials to the patients suffering from every illness that you treat today? If the answer is yes, that's great, but most people can't do that. So there are literally a multiplicity of use cases that generative can solve for, but I think people start from the wrong place. They try to check the box and say, oh, we did this, we did that. Most of the adoption currently happening, both in healthcare on the clinical side and in life sciences, is really what I might call incremental innovation, where people are still clinging to the old ways. They want to keep doing all the old things we've done our entire careers; I've been in the space for 27 years.
They don't really want to change. They want to keep doing the same templates, the same deliverables, the same processes, the same systems. They just want to sprinkle a little AI dust on top of what they're doing, or AI-ify the old stuff and make it 10 or 20 percent better, if you will. And as a result, there's a lot of resistance from funders, investors, partners, people on the ground, et cetera, to making it happen: one, because they're not really solving real problems, and two, because when they actually try to implement it, they give voice to the idea that there's a human element to it, but I haven't seen, and I don't know what your experience is, that leadership and teams really invest in the idea of human factors as a real entity. I'll just give you a quick example. There's some research that came out within the last day or two. There was a study in Harvard Business Review, I think it came out today, suggesting that when you use generative as an entrepreneur, if your business is already flourishing and you use generative to provide strategic counsel to a founder, it can amplify the business prospects significantly. But if your business is floundering, that is, you're not doing well, and you're going to it for advice,
it can actually sink you deeper into the hole. It's a really interesting paper, because it suggests that how you show up to the platform has a real bearing on the results you get. Most people would look at that and say, if I remember back to the days of Lotus Notes, or now Outlook or Gmail, you can't come to an email platform and get a different outcome based on the state of your business. That's unheard of. And then there's another paper that came out about writing quality, similar to the BCG study last year with Wharton. It showed that when highly trained, expert communicators like consultants use generative, it significantly speeds time to completion and improves quality by 30 to 40 percent. But when students in higher education, in colleges and universities, use it, it actually decreases their comprehension and the strength of their arguments. So while they complete papers faster, the quality of the writing suffers. It really has a lot to do with the human in the loop, and people give voice to that almost like lip service. Almost no one, in my experience, really spends proper time on the human and their lived experience.
Yeah, that's the thing, right? I see this all the time too. People ask, how will it drive revenue in our company? And they're only talking about, how do we put this into our product? But I keep saying: you already had all these humans who were responsible for driving revenue, and you can increase their productivity and reduce their non-value-add tasks and everything else. So what I try to do, and Matt, you and I talk about this all the time, is not just tell people that there's a problem here. This is also why I try to use these paradigms. You have to be thinking about it in this way, to make it more accessible.
And this is why I was psyched to bring you on here specifically, because I wanted to hear about it. Full disclosure, you and I are going to catch up in the next few days about this, but I thought this could be a little sneak peek.
And in our last couple of minutes before we open it up to Q&A: your passion, I think, is around mental illness and the human, and how people actually, as you say, show up as their authentic selves. Do you have ideas about how you share that and promote it or inspire it? Have you seen people doing it that you've learned from? How do you even think about that?
Yeah, it's a great question. You're right, this really has been my passion for a long time. About a year ago, I posted something on LinkedIn, which I know you commented on, about those of us that are deep in it, that are really super users of generative.
And I think we have OpenAI to thank for that, honestly. You get this time back in your life, and I really do treat ChatGPT and the like as a time machine. When I speak, when I do keynotes, I always say that ChatGPT is like a time machine, like the DeLorean. You can't go back to 1958, but you get, for me, 8, 12, 16 hours of time back in your life.
I used that time last year, with three physicians, the head of the Zook Epidemiology Program, and a colleague, to stand up a new foundation, the Foundation for Artificial Intelligence and Health.
And, to use a Peter Hinssen term, we're imagining the day after tomorrow: in 2031, when everyone in the US and all English-speaking countries has adopted generative AI, what is that world like for patients? What is it like for doctors? What is it like for your mom, your sister, your brother, your cousin, your uncle, your neighbor?
Is it a world that we engineered by design, or is it just going to be handed to us? We want to catalyze conversations around what that world should look like, and also upskill the workforce in the healthcare professions to really engineer that future.
So we've been leading that charge for the last year. And then I've been advising startups on the side, as I have for the last four years, most of whom have juxtaposed generative into their operations to do things like clinical trial readouts and clinical practice guidelines and the rest.
And it just got me thinking: if we reimagined the clinical and life sciences space from an AI-native, first-principles perspective, what would the world look like in 2027 if all the things we're used to never existed?
What if we just started from scratch with an AI-native disposition, specifically focusing on human factors, lived experience, and mental wellness? That's what LLMental is designed to solve for. What is the problem? It's augmented mental wellness.
I love that, because it shows where we're actually going to make a huge difference. Even when people say, oh, it's not helpful: it is going to be hugely helpful.
I want to jump in. Natalie, I don't know if you can hear me, but I would love to see if we can get a conversation going with the rest of the folks.
Matt and I will be in the next session, wherever we're jumping over to, as well. Thank you so much, Conor. Thank you, Matt. It's really awesome to hear the perspective of a couple of non-technologists who have been adopting AI ever since ChatGPT was deployed and offered publicly.
And I really do hope that some of what Conor presented will help those of us who have experienced some barriers to adoption potentially feel more confident approaching it now.
So that will be the very last of our entry-level sessions on lowering the barriers to adoption of ChatGPT Enterprise at OpenAI, or even finding solutions for leveraging our APIs.
We're going to be moving into some more advanced techniques and webinars, including tomorrow, Thursday, September 19th at 5 p.m. PT.
We'll be hosting a virtual event hosted by one of our customer success managers at OpenAI. She's going to teach us ChatGPT Enterprise 101, a guide to your AI work assistant.
Later next week, we're actually going to host technical office hours, led by the head of our solutions architects at OpenAI, Colin Jarvis.
That will be this Friday from noon to 12:45. And we're hosting it super early, hopefully to accommodate some of the parents' schedules and also those of you who are joining us from Europe. We'd love to see you there.
If you register for those office hours by tomorrow, Caitlin and I will send out a survey to learn a little more about what you're trying to accomplish in the office hours.
And then Colin is going to tailor his technical success office hours to support some of the overarching themes.
And then last but not least, just want to remind everybody that Wednesday, September 25th, we will be hosting a virtual networking event, Bridging AI Across Industries. Please join us for that. Our community ambassador, Ben, will be hosting that.
And we are working to reschedule our last research event. The presentation has been moved because of the launch of Strawberry, and we couldn't tell you that at the time we announced we were having to move the presentation.
But as you can imagine, our research scientists were super hard at work and they asked us if we could push the research presentation back. So that's what we're going to do.
Of course, it is our mission here to accommodate the research scientists at OpenAI. So if you guys would like to continue the conversation with Matt and Conor, we're going to move into the live Q&A room.
To navigate to the live Q&A meeting room, you can join via the link that you're seeing in the event right now. You can just click into that notification, or you can go into the agenda item labeled Q&A, and we will all meet you there very shortly.
See you soon.