
Deploying ChatGPT at Scale: Best Practices for Adoption

Posted Oct 10, 2024 | Views 1.9K
# AI Literacy
# Technical Support & Enablement
# AI Adoption
Lois Newman
Customer Success Manager @ OpenAI

Lois is a Customer Success Manager at OpenAI, specializing in user education and AI adoption. With over 10 years in SaaS, she has extensive experience developing and delivering engaging content, from large-scale webinars to stage presentations, aimed at enhancing user understanding and adoption of new technologies. Lois works closely with customers to ensure ChatGPT is integrated into daily activities and effectively utilized in the workplace. Lois is known for her storytelling approach, making complex technology relatable and accessible to all audiences.

SUMMARY

Lois Newman led another session in the exciting ChatGPT Enterprise Learning Lab series. During the session, participants gained valuable insights into deploying ChatGPT widely across their organizations, along with best practices for driving user adoption. Whether attendees were just beginning with ChatGPT or looking to scale existing initiatives, the session provided actionable strategies for ensuring success. Designed to guide users through the ins and outs of GPT technology, the series offered a comprehensive overview of essential topics.

The agenda covered:

  1. AI Strategy
  2. Change Management
  3. Understanding ChatGPT Users
  4. Developing Use Cases
  5. Adoption Initiatives
TRANSCRIPT

Hey, everyone! Well, first of all, welcome to everyone. I'm so excited to see everyone here. I was actually just looking at the chat before this and noticed a few familiar faces, so really excited that many of you are here. I see Emily, I see Kevin, and also if this is your first time attending, welcome to the forum. This is definitely where a lot of the magic happens.

Also, tonight we get to be paired with Lois Newman, my colleague from OpenAI, to talk about all things ChatGPT. Many of you know me. I'm Ben, the OpenAI Forum ambassador, and I'm also on the human data team. Many of you also know my colleague Natalie Cone from OpenAI. She is the OG of the OpenAI Forum, the forum's architect, and a change maker. Tonight she's giving a talk to Global Affairs on the power of community. I'm dying to hear how that event goes, but until then I'm hosting tonight's event, so thank you, everyone, for coming.

We always like to start our talks by reminding us all of OpenAI's mission. OpenAI's mission is to ensure that artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. I mentioned this before, but the OpenAI Forum puts on many different events. Just last week the forum team did one with researchers, and we have many more coming up in October and November, a series of events both technical and non-technical. But tonight is a very special night. This is the second time we have hosted Lois Newman from the customer success and go-to-market teams at OpenAI, and there is no one better to learn from; she actually ran the 101 and 102 series.

Lois is a customer success manager at OpenAI specializing in user education and AI adoption. She has over 10 years of experience in SaaS and has delivered everything from large-scale webinars to stage presentations. She has tons of experience in making sure that we are able to understand and adopt these new technologies. I know I'm not the only one to say this: although I'm a power user of ChatGPT, I think all of us walked away from her 102 session with at least 10 new things that we learned. Lois works closely with customers to ensure ChatGPT is integrated not just into one or two daily activities but across the board, and that it's actually being utilized in the workplace. I'm blown away by her ability to make complex technology relatable and to distill complex topics into simple terms. As I mentioned, this is an extension of the 101 and 102 sessions, which are in the forum; you can find them if you navigate over to the content section.

Without further ado, I would like to welcome Lois to give her presentation. Welcome, Lois.

Hi, Ben. Great to be back. I think this is the second time now, so I'm really excited to be here talking to the forum. I'm going to kick off my presentation, so we'll pull up some slides. We're switching gears today. In the 101 and the 102, I was really focused on teaching you how to use ChatGPT. Today, I want to zoom out. Another thing that I do all day, every day in my role is work with organizations and businesses to help them deploy ChatGPT at scale. User education and the webinar program that I built are one part of this, but this is another huge topic out there right now. A lot of organizations and businesses are really scrambling to pull together a strategy and to understand how to get AI into the hands of their workforce. I'm going to click through to the agenda and highlight what we're going to cover today.

I'm going to briefly touch on AI strategy, so how as businesses you can be thinking about this. I'm going to talk about successful deployments. I think that piece is really interesting. We have actually launched over 700 customers now in the past year, so we have lots of insights there, which are fascinating. I'm going to move on to talk about adoption initiatives, so how businesses and organizations can ensure that users continue to use AI tools like ChatGPT. Then one of my favorite pieces, I'm going to talk more about understanding ChatGPT users and the type of journey they go through. The reason why I'm going to do that is because it is really helpful for developing use cases.

This session is definitely more geared towards business leaders and those that are thinking about deploying at scale, but hopefully if you're not a business leader right now, you're a student, you're from any other domain, hopefully there's some interesting topics in here as well. I'm going to kick off and talk about AI strategy. We really are at a critical moment in the adoption of AI capabilities, so the decisions that businesses and organizations make right now will shape the company for years, and that is because of these three core components.

Right now, employees are expecting AI tools to be part of their work package. Employees want access to tools like ChatGPT to help them with their work. They're also using AI in their personal lives, and it feels like a non-negotiable that they should be able to use AI at work. Giving tools like ChatGPT to employees also really helps attract some of the best talent. That's pillar one. Pillar two is more around strategy and the fact that most companies in the future will be AI by default. Companies are thinking about embedding AI into really every piece of the org to heighten their ability to ship more quickly and to grow. Finally, one of the reasons we're at this critical moment is that model intelligence is improving exponentially, and model intelligence is going to change everything over the coming years. The companies that are working with the newest model classes are much more likely to excel in innovation, cost optimization, and productivity.

That is why it is really important that businesses and organizations have a robust AI strategy and are deeply thinking about this. One thing I find fascinating, and one thing that OpenAI has noticed, is that really only 30% of companies truly have a long-term AI strategy in place. But what's even more fascinating about that is that 85% of company leaders do think that AI is a top priority, and 80% of employees want to learn more about AI, they want to use AI more, and they're already using AI in their personal lives. Businesses and organizations are really trying to scramble right now to understand what is our AI strategy, how do we actually build that, and then how do we go and execute on it.

I want to distill this down to three really simple strategies. At OpenAI, we believe there are three ways to implement AI into your organization. Each of these is its own strategy. Strategy one is building out an AI-enabled workforce. What do I mean by that? I mean that every knowledge worker has access to a tool like ChatGPT to empower their workday, to help them do tasks more quickly, and to alleviate some of the manual overhead so that they can be more creative, be more strategic, and actually do the work that they want to do.

Number two is about using AI to automate operations. So let's take this one level higher, let's think about teams and organizations. There are specific processes and operations that occur outside of the individual, and so this strategy or this piece is more about how do we lean on AI to automate some of these processes to reduce that overhead. So again that would be a separate strategy to building out that workforce. And then finally what we notice is there is a third and final piece, really critical, it's about infusing AI into your products and services. Now I want to be quite clear that you don't have to have all three strategies within your business or organization, and quite often what we find is starting out with one will lead you to explore and create the strategies in the other pillars.

The customers that we work with most, especially those that have purchased ChatGPT Enterprise, are really focused on employee empowerment and that first strategy. They are dishing out ChatGPT to their workforce to really improve employee productivity. Number three, and I think this infographic explains it better, is about actually creating AI-infused products and services that interact with your consumers; that type of capability would be built using something like the OpenAI API. So number three is not really about ChatGPT or interacting with ChatGPT, it's about building smart products and services that leverage OpenAI models.

What I really love about this view here is it highlights that you can use OpenAI products to support each of those strategies, and that's really how we think about these deployments now.

We think about ChatGPT and how it can empower the business, but we're also thinking about how our customers can use the API to start to automate and build things.

And again, I just want to make this really simple and distill it down into three points. If you have any questions about this, please add them to the chat; I'll touch on them in the Q&A.
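To make that third strategy a little more concrete, here is a minimal sketch of what calling an OpenAI model from inside a product might look like, using the official OpenAI Python library. The model name, the helper function, and the example ticket are illustrative assumptions, not anything prescribed in the talk.

```python
# Minimal sketch of strategy three: embedding an OpenAI model inside a
# product feature. Assumes the `openai` Python package is installed and
# OPENAI_API_KEY is set in the environment. All names below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_support_ticket(ticket_text: str) -> str:
    """Hypothetical product feature: summarize a customer support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; choose whatever fits your use case
        messages=[
            {"role": "system", "content": "Summarize the support ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_support_ticket(
        "Customer reports that exporting a report to CSV fails for files over 50 MB."
    ))
```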

Okay. I want to focus a little bit on that first strategy. That is about getting ChatGPT into the hands of users, and ChatGPT is really the fastest way to see these results and to empower the workforce. So that is strategy one: how do we get these licenses out? How do we deploy at scale?

And the reason why ChatGPT is so powerful is, as you would have seen in 101 and 102, that ChatGPT has specific capabilities that allow an individual to complete something in their day to day and speed them up. ChatGPT also has access to GPTs, and GPTs are the things that can start to automate some of those operational processes as well. So really, the first step in all of this is getting broad access to AI tools out to your workforce.

We're going to move on from strategy, but very closely linked. I want to talk about the AI maturity model. So what does it look like to be an AI first or AI forward business that is mature in its thinking around AI? So slightly different to strategy, but they are interlinked. And again, really love this infographic.

Essentially what this is saying is that as you stack or layer these pillars over time, your organization becomes more mature and is thinking about AI across the entire business. I'm just going to step you through this one by one. We talk a lot about ChatGPT, getting it into the hands of individuals, and that's that first underlying pillar.

So that pillar there is about individual productivity, maximizing effort, working more quickly. And then what you see is that as you layer on these pillars, you start to optimize areas like team productivity, organizational impact. You can start to customize that impact, and that's by actually building AI products and services using the API.

And then really advanced and really mature businesses are starting to customize and train their own models to do specific things. So what I'm showing here is that you can layer these on over time to become more mature. But again, not every business is there right now; in fact, very few are. And so the core message really is about picking one or two of those strategies and deeply thinking about how you get going and how you deploy at scale.

I'm going to move on to talk about successful deployments and what makes a successful deployment. I've deployed about 35 instances of ChatGPT, so I've worked with 35 different customers. I've learned a lot. I've seen a lot. And I feel very privileged to have been at the forefront of how AI is being implemented right now.

One thing I love is that the customer success team here at OpenAI, we did an analysis of all of our deployments. We looked at how customers were scaling this, and we noticed some common themes. So what I've done is I pulled these themes together.

The first thing I want to talk to you about is the four key strategies for a successful deployment. If you are thinking about broadly deploying AI right now, and it doesn't have to be ChatGPT, these are the four things that you will need to think deeply about to make sure that deployment is successful.

And I don't think these are particularly groundbreaking, but I'm going to go through them and talk to them anyway. The first one, as with any large-scale deployment, whether that's SaaS or any technical product: it's really important that the executives are behind that move or behind that purchase and that they sponsor it. They want it to be successful, they understand why it is being brought into the business, and they are really driving that strategy. So that has been number one.

And you can see there that this was a combination of data that we collected from 250 deployments. The next one, this is really important. Having a strong project team and admin team come together to work out how to get licenses into the hands of users is probably as important as that executive sponsorship.

The project team are really the ones writing the communication to users. They're actually configuring the settings in ChatGPT. They're communicating to users about the licenses and about the training that's available. This team is core to a successful deployment.

The other thing, and I think this is quite unique to AI, successful deployments need vocal champions. Champions are those individuals in the business that are really advocating for AI. They want to see AI move forward. They want to see AI in the hands of everyone.

One thing I love about working with champions and seeing champions at different organizations and businesses is that they don't have to be senior leaders. In fact, some of the best champions that I've worked with have been at all levels of the business. They can be power users. They can be AI advocates. They really are interested in AI, and they want to see it work at their company.

Vocal champions are going to be core to success. Then, of course, no surprises here. As with any deployment, any technology, making sure that training is easily accessible to every individual that has a license is so critical. That's why here at OpenAI, we have the webinar program. We have 101. We have 102. We have building workshops.

This technology is very new. A user interacts with ChatGPT and sees a blank slate, and it can take them a while to understand how to use it. Those are the four more obvious strategies. I have another slide for you. These are more of my own personal observations, so not necessarily what the whole team has seen; this is more about me and my experience.

I've called this slide four less obvious strategies for successful deployments. I just spoke about executive sponsorship. In my opinion, that should go one step further. Some of the most successful customers that I've worked with have execs using ChatGPT and actually building GPTs.

It's almost like the executives are on the journey with the team and building at the same time. I've also heard about some amazing GPTs that executives have built that are saving them time. They're able to do more of that strategic work and less of the work that they don't want to do. That was something that I noticed very quickly after starting this role.

The other thing that I am really passionate about, and I strongly believe, is that the most successful deployments have leaders who are actively encouraging experimentation. There is this culture of research and development and testing and iterating.

Leaders are getting users to really roll up their sleeves and test ChatGPT. I was talking to a customer and some users recently, and a user said that they really started to advance when they tried to actually break ChatGPT. I really love that sentiment.

I think if you are testing ChatGPT enough and you're trying to break it, you're doing the right thing. That's critical. The other thing that I personally have noticed is that HR and learning and development are super important and should be part of that project team.

HR and L&D are really important functions within a business, especially when there is cultural change. AI is a huge cultural change. We shouldn't just be thinking about this deployment as a technical product that just gets assigned to everyone. There is a real shift happening in the way that we are working, and so HR is critical to this cultural change.

HR is also able to deal with any fear that arises around AI. I'll just call it out. There is fear right now, especially for employees, so good communication, good training, and all of that being directed and led by HR is a really great thing to do. Then the last one there, I think this one's quite fun. A lot of the most successful customers, they see the value in just using chat alone. They see how powerful that is.

They don't come to our sessions ready to build GPTs. They come and learn about chat and interacting with chat first, before they dive into the complexity of GPTs. Honestly, in my opinion, there is so much value in individuals just interacting with ChatGPT without using a GPT.

I hope that has been insightful. You've got some insights from the team and then some of my own insights which I've just layered on top.

Okay, I'm sorry, I've missed a slide here.

It's a very important message, so sorry about that. Again, this is not groundbreaking stuff, but it's really important to reiterate that the first four weeks of any deployment, and specifically a ChatGPT deployment, really define long-term success. Again, the customers that I've been working with, the businesses and organizations that we see already getting a return on investment, are the ones that have been really thoughtful about the activities they do in the first four weeks. And I'm actually going to take that one step further. We have noticed that ChatGPT users who find value in ChatGPT within their first four weeks are significantly more likely to become weekly users. Those who log into ChatGPT and don't find much value in the first four weeks are typically the ones that drop off and never come back to it. And so the first four weeks are not only critical for deployments, they are critical for users. That's why I'm going to talk a little bit more about adoption strategies and how we make sure that users get value within those first four weeks.

So moving on to adoption initiatives. What I've done for you here is I've tried to make this as simple as possible. I have one slide, and I have 10 suggestions. We've worked as a team here to distill down the most important initiatives you can do within the first 30 days of your deployment, and I am going to walk through those now. The first one, as with any type of deployment and implementation, is about setting clear objectives and metrics. Honestly, when I work with customers, one of the first things I say is: if we were to meet again in six months' time, how will we know that ChatGPT has been successful? I really don't think these need to be super intense KPIs, but you do need to think about what you are striving for. What are your goals? What does success look like? And those goals should align with your OKRs and your higher business objectives. So that's number one.

Number two: it's really important that leadership are communicating top down and across the whole business about the importance of using AI and its impact. I've worked with a number of customers on what that communication looks like. It's about being very open. It's also about being informative and saying, hey, we've taken on ChatGPT, we've done that for this reason, and we think it's going to be really helpful for you; here is an opportunity for you to jump into this intranet site that we've created and find out more. So it's that type of communication, and not just written communication: talking about this at an all hands, in team meetings, and really letting the communication trickle down is very important.

I've said this on a previous slide, and I really stand by this, but engaging with HR and L&D is going to be really critical for not only the communications piece, but also for that widespread training, making sure that training gets into the hands of users. I haven't spent much time on change management. Some of these are actually change management strategies. But again, change management, it's really important you have dedicated Slack channels and ways to communicate with users so that users and parts of the business can provide feedback and you're seeking feedback regularly.

Next on the list there is some kind of resource library. That can be as simple as a spreadsheet. That can be a Slack channel. That can be your intranet page. It can be a Google Doc. You don't have to have anything fancy here, but users need to know where they can go for specific things: resources, FAQ guides, user guides, best practices. Internally at OpenAI, we are charging really hard to try and ship this and get it out, but the onus is also on each business and organization to make sure that there are resources tailored to that business and organization. And again, that's really important. If a user gets stuck or doesn't find value, they need a safety valve; they need to be able to go back to a resource library to self-help. Next thing, champion program. You saw that in the previous slide; it's one of the four key strategies for a successful deployment. A champion program is really about identifying those advocates within a business, bringing them together, being really clear on what their role is, and ensuring that they're supporting users. The champion program, again, is really, really critical.

Next one is workshops and hackathons. So in that first 30 days, once everyone has been given a license, there should be some kind of interactive activity happening to help users explore and experiment. We have found that GPT hackathons specifically have been most successful. I've been part of a few of those. It's really fascinating to get entire pockets of the business together, thinking about workflows, building GPTs live, and actually solving problems. When you do things like that, that is what generates the excitement. And that's where users actually start to see the value and start to see the types of problems that AI can solve.

Just a couple of extras: 8, 9, and 10. So 8, reporting. Of course, if you're deploying this at scale, your stakeholders are going to want to understand how it's progressing, who's using it, and how often they're using it. Alongside that, running some user focus groups, checking in with some of your users, and understanding how they're feeling about it is a really great feedback loop. Number 9, documenting use cases. I'm going to talk about use cases in a second, but it's really important to gather and understand the types of use cases that your users within specific departments have discovered. And number 10, of course, functional sharing. I've seen some things done really well, like effective lunch and learns, and I've seen some champions do live demos at all hands. Any way that you can resurface what's happening in the business back to your users to highlight the power of AI is going to support adoption.

So hopefully, those steps 1 to 10 are really tangible ways that you can go back into your businesses and organizations and maybe rethink your approach.

OK, last piece of today's session. Probably spend five or so minutes here. As with anything, adoption tactics are great, but this, at the end of the day, is about people. It's about people understanding the technology, finding it useful, and adopting it. So it's all well and good to go out and implement those 10 strategies. But I want to tell you what I know about users. So I'm going to talk a little bit about a typical ChatGPT user journey, and then I'm going to teach you how you actually start to think about building departmental use cases.

OK, this might be a little bit small on the screen. I have spent a lot of time with users, and I have spent a lot of time analyzing and studying my own user journey in ChatGPT. And I think this, for me, is what really lights my fire. I find this type of work absolutely fascinating. So in short, what I'm trying to show you here is that the user journey in ChatGPT is not linear. In fact, it's quite up and down. Users will often initially jump into ChatGPT. They will chat away, and they'll think, oh, this is kind of cool. It's like talking to a human. There's this initial buzz that they have. Then what I've noticed is there's a slight dip or a drop because they actually think that ChatGPT is like Google Search. And so suddenly, they're thinking, this is a bit confusing. I'm pretty sure Google Search can answer these questions. Why would I use this tool? Then what happens is maybe one of your colleagues or your friend comes to you and says, oh, I've been using ChatGPT in this specific way, and I'm getting a lot of value. And so then you start to adopt maybe how others are using it, and you kind of jump back up, and you see some value.

So you can see here that there are definitely some ups and downs. But the key message on this slide is: A, no user journey is linear. You don't just jump in on day one and suddenly get value. ChatGPT is also a very personal experience. Different users in different roles are using it for different things; no two individuals are using it in an identical way. You'll notice that I've labeled the first section of this user journey as prescribed use cases. I always think about this as the part of the journey where a user is being told how to use it. So I'm just going to share my story from when I joined OpenAI. It was my first week. I had used ChatGPT, probably not enough, but I understood what it was and how it worked. And then my manager came to me and said, hey, Lois, check out these 10 useful CSM use cases, this is how the team are using ChatGPT, I just want to inspire you and get you started. That was really helpful for me.

What that did was allow me to look at what the team was doing, and it gave me that initial inspiration. But that only got me so far, because, again, someone dictating to me how I should use it is not the most valuable.

The most value that I got through my learning journey was when I moved from taking what other people were telling me to do, and when I moved into experimentation. This was the part where, similar to that other user, I just decided I was gonna try and break it. I was gonna jump into ChatGPT, and any time I got asked a task, I was gonna test it and see if ChatGPT could do it.

And so what I have found is that it is a combination of telling your users what to do, and a combination of inspiring and encouraging experimentation that leads to really advanced usage in ChatGPT. I'm going to take you through to the next slide, because I've created another infographic for you all.

So really to summarize what I've just said, learning how to use ChatGPT is a combination of user experimentation and exploration and prescribed use cases. So businesses and organizations should be picking three to five flagship use cases per department. They should be providing these use cases or showing users these use cases, and then they should be using the adoption strategy or those initiatives that I mentioned earlier to encourage experimentation and exploration. And that is what is going to help with a large scale deployment.

So I've included some text there, but ultimately, the way that I've seen it from my experience is that a user starts with prescribed use cases and being told how to use it. They test these and might find some value in them, but then they get encouraged to explore further, and finally they reach the point where they've personalized ChatGPT to themselves and their role.

And I think that, no, that is not everything. I was about to wrap up then. So sorry, from this view here, I can only see one slide at a time. So following on from that, I want to talk about a method for developing use cases.

So at this point, you understand about your users and you understand about adoption strategies. I have a lot of customers come to me and say, Lois, how do we even think about developing a use case? What does that even mean? So I've created this infographic here, which highlights how to do that. And I'll go back to the other slide in a second.

But if we take that model I've just shown you where I've identified that learning how to use ChatGPT is a combination of experimentation and prescribed use cases, the focus for businesses and organizations should be really dialing into the prescribed use cases per department.

And what I've done for you here is I've broken out how to do that. And I've used my own team as an example. So what we did is we sat down as a team and we broke out our workflows. We looked at what we did in any given day or week. And we kind of looked at what was involved in that process.

For example, in the go-to-market team, we deliver hundreds of client calls a week. And so what we did is we looked at that activity and we literally broke it out. We literally said, what is involved in that? And we mapped all of the manual processes.

And then what we did once we'd mapped that out is we then asked ourselves the question, where can ChatGPT be used in this workflow to optimize some of the manual processes? And once we'd figured that out, that really became a use case for that specific thing that we do in our team.

And once we had captured that, it went into our customer success use case repository. If you are a leader right now, I would really encourage you to do this activity. Sit your teams down, look at what you're doing in any given day and week, really break it out, and then figure out whether AI can optimize some of these processes. I'm cognizant I did just miss a slide, so I do want to go back. I think what I was trying to say on that slide is that there is a difference between ChatGPT use cases and API use cases.

So ChatGPT is really about internal productivity, team productivity, creating things like custom GPTs. How can a user use this tool to help them with their work? API use cases are definitely different. It's more about a use case for an end user application or fully automated workflows. So I did just want to stress that there is a difference there and that infographic here, this applies to creating ChatGPT use case libraries.

Okay, that comes to the end of today's session. So I'm gonna wrap up there. And I think we are going to move on to Q&A. So yeah, I hope that that was helpful and I'm looking forward to answering all of your questions.

I would say more than helpful. I learned at least 20 things, and I'm not being dramatic, especially around personalization. Looking at the chat, there are a lot of questions, so it's going to be exciting for everyone to dig in and have some really good, dynamic conversations.

So I will wrap things up here. Before we head over to the Q&A session, we always like to end with a few closing remarks. For the months of October and November, we have some amazing things lined up, and I want to call out three pieces.

The first is October 18th: we have office hours. I did my first set of office hours last week, and I see some people in the chat who joined me for those. It's super informal, a round table event. If you want to just meet the team and chat with other forum members, you can come on in and say hi for as little or as long as you'd like. You don't have to sign up; when you go to your forum page, you can simply log in. But if you do want it on your calendar, my colleague Caitlin is going to drop a link in the chat, so you can put your details in there and it'll show up in your calendar and make things really easy. So we have that on October 18th.

We also have two events coming up. The first is October 22nd, with OpenAI's go-to-market education lead, Sia Raj. If you are interested in education and AI, I would say don't miss this one. Sia is going to be speaking with Wharton Business School faculty about how AI can accelerate learning.

And then, speaking of our favorite person, on October 24th Lois will be coming back with her colleague Elan, and they'll be discussing a data-driven workforce. There's a lot in the pipeline as well, including November 13th, topic TBD. That's why I would say, please come to me, Natalie, or my colleague Caitlin with your questions. We're designing these events to be catered to your needs, so please reach out if you have any requests or questions, or just simply say hi. Anyway, the reason you all came here is to now go to the Q&A session.

For the Q&A, you'll see the link, I think on the left, in your tab. We will be joining the live Q&A meeting room, and once we're there, we can ask questions, have a conversation, and I look forward to seeing everyone on the other side. So I will see everyone soon.

I, with the support of my colleague Caitlin, wrote down a few questions that were in the chat, so we can begin by asking some of those. And then if you have a question, there's a little hand-raise feature: you can raise your hand and we can call on you.

So the first one. Fozia, since I was just chatting with you: you had asked what metrics we can use to measure how much improvement employees experience by being AI-enabled. Lois, I would love some color on that, on how you see success when you roll some of these programs out and what you see with your projects.

Yeah, that is a hot topic. Everyone is trying to figure that out. I would say that it really comes down to what you determine success to look like within ChatGPT, for example. Are you expecting to see a reduction in hours spent on certain workflows? Are you expecting to see specific workflows automated and optimized? So it's all going to tie up into those high-level goals. But some of the best metrics I've seen have been where companies and businesses have really spent the time to understand what is involved in the process pre-AI. Which teams are involved, who's involved, how long does that take, what kind of software is involved? And they quantify that, whether it's hours or costs; they understand the cost to the business of continuing in that manual format. And then, once they've been using ChatGPT for 90 days and they've really proved out the use case and it's working, that's when they'll go back in and measure: are we seeing a reduction in time or a reduction in cost? I could probably provide more materials and more calculations for that, but I think at a high level, that's probably the best answer.

Great question. Thanks so much for that answer.

Just really quick follow up there. Is there like a tipping point that sort of determines whether they want to move forward? Like 20% savings in cost or 20% savings in hours of productivity? Just curious.

Yeah, so we've run surveys and we have benchmarks now, and the tipping point is a reduction of two to five hours per employee. That has come through either tracking the workflow or through self-reporting by employees.
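As a back-of-the-envelope illustration of the pre-AI versus post-AI comparison described above, here is a small sketch of the arithmetic; every figure in it (headcount, hours, hourly cost) is a made-up assumption for illustration, not an OpenAI benchmark.

```python
# Illustrative ROI arithmetic for an AI deployment: compare time spent on a
# workflow before and after ChatGPT, then convert the saving to cost.
# All figures below are made-up assumptions, not OpenAI benchmarks.

employees = 200                 # employees running this workflow
hours_before_per_week = 6.0     # measured before the deployment
hours_after_per_week = 3.5      # measured ~90 days after the deployment
loaded_hourly_cost = 75.0       # fully loaded cost per employee hour (USD)

hours_saved_per_employee = hours_before_per_week - hours_after_per_week
weekly_hours_saved = hours_saved_per_employee * employees
annual_cost_saved = weekly_hours_saved * loaded_hourly_cost * 52

print(f"Hours saved per employee per week: {hours_saved_per_employee:.1f}")
print(f"Total hours saved per week: {weekly_hours_saved:.0f}")
print(f"Estimated annual cost saving: ${annual_cost_saved:,.0f}")
```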

Cool, thank you.

Great. Thanks for that question. I see Anastasia here. Anastasia is a solutions architect at Databricks, and her question is: what tools or strategies are you using to monitor the performance of ChatGPT post-deployment? I don't want to go too far into the technical details, but maybe some color or some use cases that you've seen in the past might be helpful.

That one I think I might have to come back to you on. I would say that some of our SEs and ADs would probably know the answer to that. Let's put a pin in that one and I will definitely get back to you. It's going to be different per customer, but I can definitely give you a list.

I think this next one is from Sainath. Hopefully I'm pronouncing your name correctly. If you're in the room, he's a managing partner at Human Sense Labs.

You got it.

Amazing. All right. Actually, if you want to go ahead and ask your question, you could probably say it better than I could.

Perfect. Thank you. I believe I had a two-part question. The core question here is that organizations tend to start their generative AI journey with a lot of excitement and a pilot or two, and then at some point they quickly hit orthodoxies in terms of mental models, organization models, operating models, et cetera. In your experience working with many clients, what approaches have you taken to help clients re-imagine and rethink the art of the possible, to develop entirely new business models and operating models, and not be beholden to the 19th or 20th century legacy that they're carrying into the 21st century?

Yeah, really great question. And I do see that. I do see deployments where interest takes a nosedive, people stop using it, and they go back to their old ways of working and manual processes. So again, it really comes down to that first four weeks and first 30 days and inspiring and training the organization. And then what I've found to be really impactful is repeating some of those activities throughout the entire year. So you don't just focus on the first 30 days, do this big-bang approach, and make it all shiny and new. You also rerun hackathons, you rerun lunch and learns, you rerun the training sessions, you update your online material, you have another executive talk at an all hands. It's really about the rinse and repeat to make it a habit, and also to make it fun and exciting again for employees. So it's a combination of doubling down on the first 30 days, and then ensuring that every 30 days after that there is some kind of activity to continue to inspire.

Awesome. I love the idea of fun and excitement. That keeps us human, that keeps us engaged. Thank you.

Definitely. And just one thing on making it fun: some of the best activities I've seen have been show-and-tell activities. I think most people here could agree that jumping into an intranet page with a bunch of text telling you how to use something is way less exciting than going to a quick 30-minute demo with a leader from marketing who's showing you how they do campaign analysis with ChatGPT. So it's all of the fun things that I think make a big difference.

That's awesome. If I may, just three seconds: I was speaking with my fifth-grade son's friends the other day, and I asked them, what are you Googling? They were at a laptop, and the kid says to me, Googling? We're the ChatGPT generation. We're not Googlers.

That's amazing.
Are they calling it chat? Because I've heard that Gen Z are calling it chat now. It's just chat.

Oh, interesting. Yeah, I'll keep my ears open for that. Thank you. They're amazing. Let me see. I see a hand from Mohsen, one of my favorite forum members, also on the human data team. I see you had your hand raised; do you want to ask your question?

Yeah, thank you so much, man. So nice to meet you here, and also Katie and other friends here, and thanks so much; it was very informative and knowledgeable. If you don't mind, could you talk a little bit about startups at any stage? What are your thoughts? These deployments often call for a lot of computation resources that, at that stage, they don't have, especially in a context where, say, Google has its Google Cloud infrastructure offering plus Gemini and others. I'm personally much more inclined toward GPT for the accuracy and precision, especially for us working in healthcare, but I would love to hear your thoughts.

Yeah, definitely. I think it's a great point, and listen, we don't just work with these massive 5,000-person companies. We also work with small businesses, SMB customers who have a hundred employees. And so what we try to do is make a reduced version of what I showed you today. We really try to understand, first of all, what are your business objectives; again, what does success look like in 30 days? A lot of smaller businesses don't have access to learning and development resources that are going to create assets. They don't have access to an HR individual who's going to tackle AI from a cultural perspective. So what we actually do is work with each customer to understand what a deployment looks like, what they can commit to, and what we can bring to the table to make a more bespoke, reduced plan. Let me think of an example. I probably can't use their name, but I was working with a really small startup in San Francisco, and they said to me, Lois, thanks for coming with this project tracker, but it's literally me and one other person deploying to like 70 people, so what are you going to do? And so we created a really lightweight comms piece that went out to employees, we gave them a lot of digital materials that already exist from OpenAI, and we sent everyone to the OpenAI webinar program so that they themselves didn't have to develop the training. We made the survey really lightweight. We just kept everything really simple. So all of the material that I've shown you today can also be tailored to much smaller businesses so that they see value quickly.

Sure. Thanks so much.

Yeah, thanks so much, Mohsen. I see a question from Sadie in the chat. Sadie, do you want to go off mute and ask your question?

Yes. Hi. Thank you, Lois. Great presentation; all of this has been so helpful. I was also ready to mention in the chat that we're getting ready to do a full AI enablement for small to medium businesses in a city here in California, so I was loving all the comments you were sharing and what you shared in your presentation. But yeah, I had two questions.
One, I love the idea of experimentation, but what we've found is that a lot of users maybe haven't been in an environment where they've been allowed to experiment, or don't even have a mindset of what experimenting looks like. It's very much a hacking mindset, so if you're coming from software development, maybe you're used to hacking, but working with marketers or financial analysts, they're not in that mindset of experimentation. Do you have any tips for how to get people to just test things and be okay with breaking things and truly experimenting?

Yeah, definitely. So first of all, I think it does come down to what the leadership are comfortable with their people doing. Lots of businesses right now have their own AI policies that will say, listen, you cannot use ChatGPT in this scenario, or you can't upload these documents. I'm a big believer in messy, scrappy experimentation, but none of that should supersede what the company is telling its employees to do. It then also comes back down to leadership, champions, and departments thinking about the small ways, in any given week, that they can inspire or encourage that experimentation. So first of all, is the company or the leadership actually on board with experimentation? If they are, great; then how can they distribute it across the org, per department? What I've seen done really well is having a champion per department, and that champion has a weekly goal to do some kind of activity, whether it's checking in with users on Slack and saying, hey, I'd recommend you experiment with this prompt for this use case, it's been really helpful, or another champion who sits in the data analysis team doing 15-minute office hours: does anyone have any questions, does anyone want to see me experiment? So it's a couple of things, and it is quite hard to do, but that champion program is going to be really important, and leadership agreeing to experimentation is going to be critical as well.

Yeah, that's great. We try to find our champions first, so we have people go through a survey, essentially a growth mindset survey, because we've found there's a correlation between growth mindset and being a champion of this. Just sharing that for others who may be looking for that. And then my last question was in regards to the Enterprise version. I think you have to have 150 seats right now, and we're working with clients that are between 50 and 100. Will Enterprise become available for those types of clients as well?

Yeah, that's a really great question. I think we are looking at our segmentation, our seat count, and subscriptions. Send me a message in the forum; I'll go talk to one of the sales leaders and get you a breakdown of how we're thinking about that and what's coming. I do think that at some point we will support fewer than 150 seats; I think we're just working through that. So I will get back to you on that, and maybe I can connect you as well if that's helpful.

Awesome. Thank you. This has been really great.

Awesome. Thanks. I know I'm mindful of time and we can take a few more questions, but I see a few hands raised. Christian, would you like to ask your question?

Oh, I think you may be on mute; we can't hear you.

Okay, can you hear me now?

Perfect, yeah.

Okay. This has been a really great presentation. Thank you, I learned a lot.

A question that somebody asked me was: knowing that ChatGPT can hallucinate, what would you say you can do so that a human being doesn't make a really poor decision, maybe even a dangerous one, based on incorrect or hallucinated information that ChatGPT puts in front of them?

Yeah, absolutely. There's a couple of things involved in this. So yes, ChatGPT can hallucinate and it can make mistakes, so we need to educate users to expect that, and we need to educate them to be the human in the loop. When I'm delivering education, I always say that you need to check ChatGPT's work, and you need to make sure that you've thoroughly checked it before you take anything from ChatGPT and send it out into the world.

The other piece on that is that companies should really, in their strategy, be thinking about scenarios like this, where it is not appropriate for employees to use AI. There are some instances where you shouldn't be uploading data or running workflows because they're too risky and need to be managed in a certain way. So it's also about thinking about what's appropriate for ChatGPT, what's appropriate for AI, and which processes we shouldn't touch because we need a 100% guarantee around situations that could be life-changing or life-altering.

Thank you.

Great. Well, we can take one more question.

Z, I'm not sure if you're still around, but I see that your hand is raised. Z Wahid.

Oh, there we go. Oh, we can't hear yet. I think you're on mute. If not, we can move to another question and then come back to you. I'm not sure.

Can I jump in really quickly?

Yes, please.

Yeah. Hey Z, try clicking the little gear icon next to the microphone at the bottom of your screen and reconfiguring your microphone. Maybe you're set up on your headphones or something. But in the meantime, we can call on someone else.

Yeah, we can call on someone else. I don't know what I would do without you, Caitlin. Thank you.

Sam, are you still online? Sam Erkner?

I'm trying. Can you guys hear me?

Yes. How are you doing, Sam?

Great. Great. Nice to see you again, Ben. It's been a week.

Likewise.

Well, so after Z's question, I actually have two parts. One, I just remembered the situation: we also work with small teams. I invest heavily in custom GPTs and always push them to people in my workshops, but some of the companies I'm working with have the free version of GPT-4 through Copilot. And although I know ChatGPT is so much more beneficial once they have more features, I don't know how to defend that when I'm working with these types of businesses. Is there a sales document or a quick version of how you differentiate? That's the first part of my question.

And any update on custom GPTs? That's my actual real question. The market is not really growing, there are not many new features being added, and I'm really trying to reach someone from OpenAI on that topic. Lois, you are the closest I can get to your team right now.

Yeah, absolutely. On the first part of that question, about differentiation: I will talk to some of my sales colleagues. I don't think we'll be able to share that slide deck or that material, but I can probably give you some bullet points just to help arm you for those conversations. I know that we come across these competitive conversations all the time, and people are always asking, well, why would we go with ChatGPT if we've got Copilot? So I'll get back to you on that. Could you just shoot me a direct message in the forum, and I'll get back to you.

On the second part of the question, GPTs. Great question. I'll be candid: development there has stalled for a little while. But we are focused on GPTs. In fact, we're focused on the whole user experience, on ChatGPT for work, and on what it means to automate processes with GPTs. As with anything at OpenAI, I can't communicate any public timelines, but given how we're thinking about things and where we're going, I'm sure at some point GPTs will get a much needed refresh.

Thanks.

Thanks for the update. I will message you on the forum or another platform. Thanks.

Yeah, thank you.

And then let's try Z one more time. Were you able to get your microphone connected? It looks like it's not working; unfortunately, I don't know why it's not connecting. But if you have any questions, feel free to add them in the chat or DM me directly, and I can relay them to Lois and her team; we'd be happy to do so. Being mindful of the time, I know it's going past nine here in New York, other folks are in Europe, and other folks in Asia are having their morning coffee. So thank you so much for attending tonight's event. This was incredible.

And not just to the audience, but also to Lois: we really appreciate your time and your expertise in coming to the forum.

We have you again on October 24th, and then I think another time in November.

So we have-

You're sick of me.

Exactly. I don't think that's possible. Internally, and I'm not being dramatic, we have a Lois Newman fan club. We love your work. So thank you so much.

So thank you, everyone, for joining. Caitlin's dropping some lovely links in the chat. Until then, I will see you all at the next event.

Amazing. Thanks everyone. Really appreciate your time. Take care.

