OpenAI Forum

Integrating AI Into Life and Work

Posted Jan 30, 2024 | Views 3.3K
Daniel Miessler
Founder @ Unsupervised Learning

Daniel Miessler is the founder of Unsupervised Learning, a company focused on building products that help companies, organizations, and people identify, articulate, and execute on their purpose in the world. Daniel has over 20 years of experience in Cybersecurity, and has spent the last several years focused on applying AI to business and human problems. Daniel has held senior positions at Apple, Robinhood, IOActive, HPE, and many other companies, as well as consulted for or been embedded in hundreds of others in the Fortune 500 and Global 1000.

Joel Parish
Security @ OpenAI

Joel Parish works on security applications of large language models at OpenAI. Prior to joining OpenAI, Joel spent ten years at Apple working on the red team, on the blue team, and in security engineering for projects like Apple Pay, Apple ID, and iCloud Keychain. In his free time, Joel helps track missile proliferation using open source satellite imagery.


In this talk, Miessler shared his philosophy for integrating AI into all facets of life. He highlighted a framework built for leveraging custom prompts as APIs and demonstrated several specific use cases that he hopes will resonate with OpenAI Forum members and translate across disciplines and professional domains.


I'm Natalie Cone, your OpenAI Forum Community Manager. I like to start all of our talks by reminding us of OpenAI's mission, which is to ensure that Artificial General Intelligence, AGI, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.

Given the Forum is an interdisciplinary community, and we come from an array of domains and aren't all technologists, I get a lot of feedback from community members that it would be useful to have experts speak about practical applications of AI in their lives. So when I reached out to Daniel about presenting in our community, I thought it was rad that he wanted to present on AI integrations he'd been working on. And of course, given his background as an information security expert, he will also touch tonight on how AI is changing corporate information security. Our talk tonight, Integrating AI into Life and Work, is presented by our very special guest, Daniel Miessler, and facilitated by OpenAI's very own Joel Parish. Daniel is the founder of Unsupervised Learning, a company focused on building products that help organizations, companies, and people identify, articulate, and execute on their purpose in the world. Daniel has over 20 years of experience in cybersecurity, and has spent the last several years focused on applying AI to business and human problems. Daniel has held senior positions at Apple, Robinhood, IOActive, HPE, and many other companies, as well as consulted for, or been embedded in, hundreds of others in the Fortune 500 and Global 1000.

Welcome, Daniel. Thanks for joining us.

Thanks for having me.

Joel Parish works on security applications of large language models at OpenAI. Prior to joining OpenAI, Joel spent ten years at Apple working on the red team, on the blue team, and in security engineering for projects like Apple Pay, Apple ID, and iCloud Keychain. In his free time, now this is awesome, guys. In his free time, Joel helps track missile proliferation using open source satellite imagery. And if you missed it at the beginning of our talk, Joel got into that during the pandemic. So maybe you can DM him later about that super cool pastime project of his.

Welcome to the OpenAI Forum, Daniel and Joel. And I will hand the mic over to you gentlemen now.

Awesome. Thanks, Natalie.

So, Daniel, I remember, gosh, I don't remember the exact month, but almost a year and a half ago you got added to the GPT-4 alpha and, like all of us, it kind of blew your mind, and you started thinking, hey, how can I start using this? And I've had the privilege of seeing you use more and more of it in all sorts of aspects of your life. So, really excited to hear what you have to say and show us tonight.

Awesome. All right, I'm going to jump into it.

Okay, can we see a screen? Yes, we can see it, Daniel.

Okay, perfect. All right. Yeah, absolutely, thanks for having me. What I want to do is some show and tell, essentially, about how I've been using AI over the past year. The main things I want to talk about are some ideas I had that got me thinking, some problems I saw that I wanted to address, some infrastructure I built to address those specific issues, and then a project I've just started that's kind of an encapsulation of all of that from last year.

So, first a few realizations. So, the first idea that I got pretty quickly, and I think maybe the whole internet did and definitely everyone listening today, is that prompts seemed like the primary unit for AI. It was just like the currency. So, everything I was thinking about was like in terms of a prompt. Every problem I thought of, I was thinking like, how can I make a prompt to address this?

And the second was that my collection of prompts quickly got big. I was super excited for a month or two, and then they just started accumulating. Then I had multiple websites to go to to use them, and then there were mobile apps and voice interfaces, and it quickly got out of control.

So, what that got me thinking, and this must have been around March or so, was that I basically didn't have a problem of AI not being able to do things. I had the problem of how to integrate everything it can do into my actual life. So my takeaway was that we didn't have a capabilities problem, we had an integration problem. And that made me want to back up and think about what I'm actually trying to accomplish. This is kind of what I try to do in life: I'm trying to self-optimize. So, what am I reading, what am I listening to, what am I watching, what am I talking about with friends? All these different inputs around me have the opportunity to teach me something, or spawn an idea, or I can have an idea by myself, or whatever. I want to take that idea, update my model of the world, turn it into an algorithm, and then take action. That's the frame I backed out to.

I further broke that down into the actual challenges I'm dealing with. Maybe it's calendar related, maybe it's social-integration related, or I don't have time to take notes, or I can't find good enough content anymore, or I don't like the content I'm seeing. I started breaking all these individual problems into discrete components. Then, at the component level, I broke those into even smaller pieces and turned them into a knowledge or workflow pipeline. From there, I can apply AI, and specifically a particular prompt, to that particular part of the problem. So it basically became: I have a goal, I break it into components, I turn those components into a pipeline I can apply AI to, and then I use that pipeline. And when new stuff happens, models update or prompt techniques update or whatever, I just improve that piece and it improves the whole pipeline.

So, that was kind of the problem and the approach. And this is what I ended up building starting around February or March or so.

So, I spend a lot of time in the terminal and I know most consumers don't do that. So, it's kind of an interface problem for me, but it's where I'm most happy.

So, I started building everything in terms of a CLI interface. So, it was essentially a command line interface, a local client on my system would go up to my own server infrastructure, which happens to be Flask, and it would house all of my different prompts. And then it would take the input plus the prompt content and send that to OpenAI. And then that would come back to the client.
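The flow he describes, a local CLI client talking to a personal Flask server that holds the prompts and forwards to OpenAI, can be sketched minimally. The pattern names, storage, and payload shape below are assumptions for illustration, not his actual server code:

```python
# Minimal sketch of the server-side step: combine a stored prompt with the
# piped-in input to form a chat request. PATTERNS and the payload shape are
# illustrative assumptions; the real server loads prompts from files.

PATTERNS = {
    "extract_wisdom": "Extract a summary, the most interesting ideas, "
                      "quotes, habits, and references from the input.",
    "sa": "Rewrite the input as an essay in my voice.",
}

def build_model_request(pattern_name: str, user_input: str) -> dict:
    """Assemble the messages payload the server would forward to OpenAI."""
    system_prompt = PATTERNS[pattern_name]
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    }

req = build_model_request("sa", "Notes on the book Generations...")
print(req["messages"][0]["role"])  # system
```

The client's only job in this shape is to ship stdin to the server and print what comes back, which is what makes the commands composable in a pipe.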

So, what I started doing was just going to all those various components and I started building individual solutions for each one of those. There's a whole bunch here. I probably have 70 plus, although this is probably 90 on the screen, something like that. But I'm going to step through a number of these. And if we have time at the end, I can actually do some live demos. It depends what Joel wants to do and what the audience wants to do. But I'm going to step through a few of these.

So, one of my favorite use cases, one of the workflows, starts with a problem like: okay, I have an idea, and I've captured the idea. Maybe that's a voice capture, which I then use Whisper to get into text, or however I get it into text; oftentimes I'm just dictating into Apple Notes. So, this is an Apple Note spawned by the fact that I'm reading this book called Generations right now. I wrote this basically lying in bed, and there it is: a piece of text.

Well, what I now do is copy that piece of text, and I have p aliased to pbpaste. So, I'm basically piping it into a local command called sa, and that calls my /sa endpoint on the server side. What that does is output, 30 seconds later or whatever, an essay in my voice. It's not going to be horribly surprising to anyone that you could do something like this, but it's quite powerful if you have it tuned to the way you like to write and the way you like to communicate ideas.

What's even crazier is that I can now stack that; I can chain it together with other things. So, I have another one called thread, which makes a Twitter thread, or X thread. And essentially it gives me prompts, also...

calling another API, actually. It gives me prompts for what the image should be for that particular part of the thread. It tells me what links to use, and it can actually fill in links for me if I have enough context there. I've used this twice, and one of the threads it produced, after I followed its instructions, got around 280,000 views on Twitter. So it was pretty effective at building the right type of thing. Basically the idea is: you have an idea, you write an essay about it, and you put it on social media. That's the pipeline, and this is the tech that allows it to happen.

This is another one I put together, one of the early ones back in maybe March or so. I can take the content of any article and send it to this one called analyze incident. Again, this is going up to my own server, and Joel actually helped, I think, with some of the formatting here, which is a really difficult sort of thing to do; it's getting easier now. So: attack type, target name, target size, business impact, business impact explanation. Lots of different pieces here that are actually components of this particular incident. You could actually build a database out of this. I haven't gotten around to doing that, but it's really powerful to be able to extract data like this. And this isn't even a well-formatted source; it's just a blog post. Analyze paper is a more recent one I put together. It looks at rather opaque academic papers (opaque to most people, anyway) and breaks them down into clear findings, details about the study, and some pretty interesting analysis of the quality of the paper; GPT-4, of course, helps me come up with those criteria. And then I've got a thing at the bottom that's fairly loose, not super great yet, but it's a rough estimate of how replicable the paper might be.

This one is my absolute favorite. If you don't get anything else from this talk, this is the one you probably want to use. One of my favorite things is deep content: YouTube videos, long-form podcasts, two, two and a half, three, four hours, whatever. What I'm able to do now is, I've got a little helper called PT that pulls a transcript from YouTube and pipes it through an API I have called extract wisdom. What that does is break down, first of all, a summary, which is table stakes at this point, but then it pulls out all the interesting ideas, the most interesting quotes, the habits of the speakers, and anything they reference from an art standpoint: books, poems, whatever they mention. I've found so much this way. Man's Search for Meaning, I actually found as a book I hadn't read yet, and that sent me down a rabbit hole. Now I can find what all these really smart people are talking about and where they got their inspiration, all from parsing this content. And the thing I love about it is that it's like I took notes manually; that's what I tried to emulate here. If I spent an hour, hour and a half on a four-hour video, it would probably be like five hours of work to parse the thing. And this comes back in 30, 40 seconds. I've gotten so much value from this.
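A rough sketch of the extract-wisdom step he describes. The section headings mirror the outputs he lists (summary, ideas, quotes, habits, references); the instruction wording is invented for illustration, since the real pattern lives in his fabric repo:

```python
# Sections taken from the outputs described in the talk; the exact prompt
# text below is a paraphrase, not the real fabric pattern.
SECTIONS = ["SUMMARY", "IDEAS", "QUOTES", "HABITS", "REFERENCES"]

def build_extract_wisdom_prompt(transcript: str) -> str:
    """Build the instruction text sent along with a video transcript."""
    lines = [
        "You are extracting surprising, insightful ideas from content.",
        "Output the following sections:",
    ]
    lines += [f"- {s}" for s in SECTIONS]
    lines.append("")
    lines.append("INPUT TRANSCRIPT:")
    lines.append(transcript)
    return "\n".join(lines)

prompt = build_extract_wisdom_prompt("...four hours of podcast transcript...")
```

In his setup, the transcript itself comes from a YouTube-transcript helper and the assembled prompt goes to GPT-4; only the assembly step is shown here.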

Okay, so before ChatGPT came out, before GPT-4 came out, I literally would have paid tens of thousands of dollars for this. It is so incredibly valuable. The pipeline is basically: see something you like, capture it for later in a format you can share. We've got a lot of these, so I'm trying to move pretty quick. The next idea is another of my workflows. If I see anything inside of a browser, I can hit Option-Command-X, which fires an Automator script that goes up to Zapier, and Zapier captures the content and sends it to my API, and I end up with a Google Doc with a summary of that thing. I happen to run a newsletter, so this is fantastic for creating summaries, producing links, all sorts of things. I can basically automatically parse anything I see. And what's cool is that if I do it from inside of Feedly, because Feedly is my main RSS reader, clicking the read-later button does it, or swiping on the phone while I'm mobile does the exact same thing. It all goes up to my custom API, gets parsed, and then gets sent to that Google Doc. You can basically do anything with this. Here I have my own posts going to my own API, which then sends them on to GPT-4. But you can rig this up, as I'm sure everyone here knows, to do pretty much anything. So these are all the different ways I have to capture things: a keyboard shortcut, a swipe, and I also have Apple Shortcuts set up with voice, so I can actually call my own APIs from my mobile device.

A couple of other cool integrations: using headless browsers to pull content, which I can then send to OpenAI, or to my API first and then to OpenAI. This one is really, really cool; I love this thing. Similar to sending the transcript of content to extract wisdom, I can also send it to label and rate, and it gives me a threshold score for whether or not I should go watch the whole thing, because you can only watch so many four-hour podcasts in a week. I'm actually building a thing called threshold that will email me when something scores over a certain amount, and I'll know I have to go watch those videos. Oh, that was a video. Surprising.
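The label-and-rate step implies parsing a numeric score out of the model's reply and comparing it to a threshold. The `RATING:` output format below is an assumption; the real pattern may format its score differently:

```python
import re

def should_watch(model_output: str, threshold: int = 80) -> bool:
    """Return True if the rating in the model's reply clears the threshold.

    Assumes the pattern emits a line like "RATING: 91"; that format is a
    guess for illustration.
    """
    match = re.search(r"RATING:\s*(\d+)", model_output)
    if not match:
        return False  # no parsable score: don't flag it for watching
    return int(match.group(1)) >= threshold

print(should_watch("LABEL: AI and meaning\nRATING: 91"))  # True
print(should_watch("RATING: 40"))                         # False
```

The threshold emailer he mentions would sit right after a check like this, firing only on `True`.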

All right, here's another one. Friday nights, including tomorrow, we've got a big D&D game locally that I go to with some friends from high school, and that microphone in the bottom left records the whole session. I send that up to Whisper, and then it extracts out what happened in the session. No big deal, pretty standard, but here's how I set it up: in the eighties, they had this thing, "previously on" whatever, previously on Knight Rider, and it would give this synopsis as a setup for the next show. So I've actually done that. Then I sent it to ElevenLabs and did it in my voice, and the GM was very happy with me; I got some extra stuff for that. A good example of how you can enrich a regular, tangible, normal-life thing with the stuff that you all have built. Oh good, it does have audio. Okay. Yeah, so that was the deep-fake voice. Okay, this one is relatively new, and this is unbelievably powerful. I'm sure everyone here has signed up for a million newsletters, probably a lot of AI ones, or maybe not. I'm subscribed to a whole bunch of them, and what I do now is unsubscribe with my regular address and resubscribe through Feedly. It drops into Feedly, I send it over somewhere to get parsed, it goes through my whole workflow, and I get this super clean extraction of the content. And again, I can pipe that anywhere: pipe it to make a blog post, pipe it over to write a tweet thread, whatever, because I'm just piping these commands together. Voice integration is pretty standard fare; I'm actually using the OpenAI functionality for that one.

This one is cool, but pretty standard. Okay. So what I've done now is take all of that work from the last year, all those custom APIs, and create a project around it on GitHub called fabric.

And the idea is a bit ambitious, but I'm going to go ahead and shoot for it. It's basically an open-source framework for augmenting humans using AI. What I'm trying to do is turn all of these discrete problems into components, turn them into prompts, put them on GitHub, and have people improve them. The project has patterns, infrastructure, and standalone clients, and I'm actively working, like today, on a unified client, which is going to be super fun; I'm actually getting some great help from Jonathan Dunn on that right now.

So this is the project. It's just GitHub slash fabric, or my GitHub username slash fabric. And here are the patterns that I've uploaded so far. The theme is material: fabric. The idea is you put multiple fabrics together and you have a pattern, that sort of thing.

The prompts themselves are called patterns, and I've already uploaded tons of them, and actually made a whole bunch of new ones just in the last week. What's cool, and what I'm so excited about, is how garbage my prompts are going to be shown to have been once they're actually exposed to the public.

People will be like, hey, that's not the best prompting technique, you should try this. And if we could pair this with a testing framework, I'm really excited about that; it's a whole other talk. But I would be super excited to have multiple versions of similar prompts that are supposed to be doing very similar things, with a testing framework grinding against them and producing objective ratings. That would be super cool. But this is the structure that I'm using.

I really love Markdown for this. I've switched over to using Markdown for most of my prompts. I love how it looks, how easy it is to read, and how easy it is to edit, especially for a public project. I also think, and maybe someone here can correct me if I'm wrong, that LLMs kind of like it too, because it's fairly clear. The directory structure for working with the project is pretty clear as well: you have the project, then the patterns directory, and then the main pattern right under that. If you expand on that to come up with your own version, you add your username to the path and then use versioning. Joel actually came up with that structure, which I think is brilliant, and I think it's really going to help the project along.

We also have the server, and it's published too. I basically took all the stuff I did with the prompts, plus the server I had built, and open-sourced it last week. So it's up there. It's an HTTP-only server; it doesn't have auth, and obviously no TLS certs, so you've got to set all that up yourself. But you can basically set this up in five minutes and it will work, assuming you have a key, of course.

The other thing that's really cool is that the endpoints themselves, this is extract wisdom here, point dynamically to the fabric pattern. That way, when updates are made to the fabric project and you're pointing at that particular location, you get the benefits immediately, within both the client and the server.

I also published a quick little Flask web version, because often people don't want to use a command line. And the next thing I'm really excited about is the standalone client. This is the independent client code: each one of those client scripts you saw on the command line is an independent little Python script.

And this is the actual full code of one of those scripts. It's pretty gross, so look away if you're a coder, but it gets the job done: it calls the endpoint and gets the content back. Of course we're going to be making lots of improvements; this is just the dirty version that got it done last year.

What I'm really excited about is this: a standalone client, so you don't have to manage all these individual little Python scripts. The idea is getting a standalone client, probably Python, right into brew, so you just do brew install fabric, and then you can curl a website, send it to fabric with -p for pattern, extract wisdom, and it goes and does its stuff.
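The `curl … | fabric -p extract_wisdom` usage he sketches implies a client that reads piped stdin and takes a pattern flag. A minimal argparse sketch; the flag names and default server URL are assumptions, not the shipped fabric CLI:

```python
import argparse

def parse_args(argv: list[str]) -> argparse.Namespace:
    """Parse fabric-style CLI flags (names assumed for illustration)."""
    parser = argparse.ArgumentParser(prog="fabric")
    parser.add_argument("-p", "--pattern", required=True,
                        help="pattern name, e.g. extract_wisdom")
    parser.add_argument("--server", default="http://localhost:5000",
                        help="pattern server; could also run fully client-side")
    return parser.parse_args(argv)

args = parse_args(["-p", "extract_wisdom"])
print(args.pattern)  # extract_wisdom
# A real client would then read sys.stdin (the curl'd page), POST it to
# f"{args.server}/{args.pattern}", and print the model's response.
```

Reading from stdin is what makes the tool composable: any command that emits text becomes an input source for any pattern.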

And we're looking at options for pointing to your own server, or having it all happen in the client without even having a server, because you could do it all from the client as well.

And that is essentially what I had to share today. Um, and I'm happy to take questions.

Yeah. So, Dan, I'm super curious: of all of these problems that you've tried to go and automate, have there been tasks that you started to write a prompt for and just couldn't get working?

I wish I had some examples for you. The main obstacle I've run into, and the main thing I wish I had along the way, you might actually have this already and I just haven't used it yet.

So just let me know if that's the case. But if I could have a RAG manager that lives somewhere, a RAG container for a specific type of data, where I could put in documents, edit documents, remove them, whatever, but with a RAG ID or something that I had full control over. Then when I submit a prompt, which comes along with the input I'm sending from the client, so it's input plus the prompt plus the RAG ID, that combination of all three would bring back all this context. That would be super powerful, because a big part of the game, as everyone knows, has been how to get around context limits: doing the RAG thing, vector databases, all that sort of stuff.

Joel, you want to go with one audience question in between? Yeah. Let's, let's have a audience question right now. Okay. John, had your hand up for a while. Welcome, John. Nice to see you again.

Hey, thank you so much. So, uh, thank you so much for presenting and I can't wait to check out the repo later and see, um, what you've written in prompts and what, what's easily discoverable there.

I wonder how your approach to development has changed since November 6th, when OpenAI launched Assistants and GPTs, which in my experience has really changed the fundamental structure of what tools are available and how you can interact with the API.

Most notably in that you can do RAG, and you can do a kind of piping by swapping assistants in on a thread. Have you changed?

Yeah, it's a conscious difference. I'm loosely aware that Assistants, the server-side version, is kind of what I was basically asking for. Is that what you're saying? I mean, I think it is. I think it does fundamentally change how you interact with the API and the tools that you have to build.

And as a solo developer building UI and applications for people, it's almost impossible to keep up with what ChatGPT can do now out of the box. So building UI, building voice, building highlighting in your messages, building all of those things, plus the opportunity to use actions: to me it felt like, one, a gift that I didn't have to build all these things myself, but also a really huge shift in thinking around what I had to do and how to build this stuff.

So, yeah. It sounds like, as I said before, the functionality might already be there and I just need to go mess with it. Well, I did have a couple of points. You were talking about how you can test the efficacy of prompts. There's a site called PromptHub that you can sign up for, and what it enables you to do is test responses to various prompts in parallel. You can see that and share with teams so they can test those prompts and you can get real feedback. It has version control, so you can write a prompt and then go back, like a GitHub repo; you can restore your changes and that sort of thing. So that helps. And I loved your point earlier about Markdown becoming kind of the central format. It does, yeah. It's an issue in developing UI, because a lot of packages out of the box just don't handle it super well, but I'm working in a purely Python environment. And because so much of the output is coming out of GPT-4 in Markdown, or it's very easy to prompt so that it is formatted in Markdown, I think it's also a really great way to just learn a few simple tools, or use a simple Markdown editor, and have your prompts carry over. And there is some evidence, in my experience, that using a heading or bullet points in your prompt can draw the focus and attention of the model and get better results. So I've come completely over to Markdown, along with some other things, for prompt authoring. I felt like it was more effective, but you know how that goes with feelings. So that's why I wanted that objective testing framework. But yeah, thanks for the thoughts.

Thank you so much, John. And just so everybody knows, while we get to know each other, John has been a member of OpenAI's creative community for a very long time. He's been working with Natalie Summers. He's a creative technologist. So thank you for joining us tonight, John. Next, let's go back to Joel for another question and then we'll have Radhika chime in.

So, Daniel, you've been helping companies out with security for a long time, advisory services and consulting. What are some tasks that you think people doing security work day to day in corporate security departments would benefit from automating? Or what have you seen work, or not work, in your own work with regard to security specifically?

Yeah, the most powerful use case I've seen so far, and I've helped a couple of groups with this, is the routing of security assessments. Within a security assessment group or a security team, you're basically the blocker between all these applications from the business and engineering and production. So the question is: okay, we've got four people who do these assessments, and they tend to be junior to mid level. The wave is going by and they're just grabbing a couple of fish as it goes. Sometimes it's really hard to say: this is the cutoff; if it has this type of data, that's all I'm going to look at. They shave that little portion off, and it's like, okay, it's 37 assessments, we're going to do 37 this month, or whatever. What I love about this technology is that you could actually look at all of them. You could potentially take context like: what do the documents for the project look like? What do the business proposals look like? Throw them all into a RAG container, give the assessment router access to it, and now the assessment router has the granularity of, say, 37 different assessment types it could route to. Hey, you know what, for this one we're only going to do a cursory two-minute survey, all the way up to the point where, instead of the highest level being a single red-team assessment, there might be three levels of red-team assessment. One is one red-teamer for one day, which is kind of heavy. But at the highest level, maybe that's an external team and an internal team for three weeks. So: more granularity and more prescriptive guidance for a testing team.
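The routing idea he describes, mapping project context to one of many assessment tiers, can be sketched as a simple rules layer. In his version a model with access to the project's RAG container would make the call; the tiers and criteria below are purely illustrative:

```python
def route_assessment(data_sensitivity: str, internet_facing: bool) -> str:
    """Pick an assessment tier from coarse project attributes.

    Stand-in for the model-driven router described in the talk;
    the thresholds and tier names are invented for illustration.
    """
    if data_sensitivity == "high" and internet_facing:
        return "red-team: external + internal, three weeks"
    if data_sensitivity == "high":
        return "red-team: one tester, one day"
    if internet_facing:
        return "standard assessment"
    return "two-minute survey"

print(route_assessment("high", True))
# red-team: external + internal, three weeks
```

The point of the LLM version is that the inputs need not be clean fields like these: it can read the project's design docs and business proposals directly and still land on a tier.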

Awesome. Thanks guys. I would love to hear from Radhika Dalal. Radhika comes to us as a biophysics PhD candidate from UCSF. Welcome Radhika. I think you're new to the forum. Nice to see you.

Yeah, this is my first time. Sorry, I'm not able to turn on my video right now, but I hope you can hear me. I don't know the ins and outs of training AI models, but I was curious about what you mentioned about your Extract Wisdom command and how you're able to pick out unique ideas from a video. Without going into the technical details, how are you able to identify an idea as being unique?

Yeah, great question. So I did a little bit of priming in the prompt. I could pull it up, but I don't need to. I did a little priming at the top that said, I'm interested in things like the pursuit of meaning in life, the intersection between humans and technology, the positive impact of AI. So I seeded it with a few things that I was interested in. And then I simply told it, I can't remember the exact wording, something like: the most surprising and the most interesting ideas. And from there, all the hard work was done by GPT-4. Combining what I primed it with and the content itself, it figured out what the most interesting things were. That's the beauty of the model.
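The priming structure he describes might look something like the following sketch. The wording is illustrative, assembled from the interests he lists in this answer; his actual Extract Wisdom prompt differs.

```python
# Sketch of a primed extraction prompt: interests first, then the
# "most surprising and most interesting" instruction, then the content.

INTERESTS = [
    "the pursuit of meaning in life",
    "the intersection between humans and technology",
    "the positive impact of AI",
]

def build_extract_wisdom_prompt(transcript: str) -> str:
    """Assemble a prompt that seeds the model with interests before
    asking it to pull out the most surprising and interesting ideas."""
    primer = "I am interested in things like:\n" + "\n".join(
        f"- {interest}" for interest in INTERESTS
    )
    task = ("From the content below, extract the most surprising and "
            "most interesting ideas as a bulleted list.")
    return f"{primer}\n\n{task}\n\nCONTENT:\n{transcript}"
```

The priming does the targeting; the model does the hard work of deciding what counts as interesting relative to those interests.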

I see, okay. So it's kind of looking for how well it matches what you've already primed the model on, rather than, I don't know, some sort of distribution thing, where an idea is interesting because it falls outside some null distribution of ideas that are not as interesting to you.

Oh, yeah. So I did say surprising, and that was a powerful keyword, I think. Someone might be able to tell me otherwise, but I'm using the word surprising because of Claude Shannon and the idea that information is the non-repeated parts. I'm trying to extract the interesting ideas from the content, because if they're just citing, whatever, the 80/20 rule or something like that, I'm hoping it would filter those out and only keep the stuff that's kind of novel. And it's really, really good at it. I actually did a manual note-taking pass, which took forever, and then I compared it against Extract Wisdom to see how comparable they are. And it's pretty close.

Okay, cool. Thank you so much, Radhika. It's really great to have you in the community. Okay, Joel, do you mind if we do one more audience member and then we'll head back to you? Go for it. Okay, I'm going to call on Greg Costello, because Greg, you showed up a couple of weeks ago and I didn't get to your question, so I'd love to hear from you tonight.

First of all, Daniel, this has been an amazing presentation. I've been to hopefully all of them, or most of them. And we always have these [inaudible] amounts of data.

Then the next part was sort of synthesizing the data and getting answers out of it. And we are now at that point.

You can look at this and, just based on the work you've done and what we're seeing out there, generating content from other countries and synthesizing it from our perspective is now a thing. That was very hard before.

But I wonder if you're tackling and thinking about the next thing, which is: how do I absorb the important information? Do you see prompts being able to help you get to what's important? I do a lot of scientific research, and in my science there might be a thousand publications a week on, say, a certain type of cancer, so finding the important stuff becomes so valuable. I'm wondering if you're doing the same thing, but for your personal life.

Yeah, I love your question. It is definitely what I think about. If you think about that slide I have with all the different things on it, with reading at the top, the third level down was adjust the algorithm. Here's one way I think about this, because I've been thinking about it for a long time. You go to a conference. You're like, oh, I think I got something out of this conference. Two weeks later you can't remember a single talk. You can't remember a single good point that anyone made. You're like, I've been going to these conferences for 10 years. Did I actually learn anything?

So I had this idea a long time ago called algorithmic learning, which is essentially this: Joel and I are security testers, right? Here's the way that I want to test a web app. Okay, here are the 197 steps. Now, what I love the idea of, and I started doing this manually a long time ago, is that when I watch a talk, I have one question.

Am I updating my list of steps, my algorithm for testing, as a result of watching this talk? If so, I've just added value. So what I'm super excited to do, and I'm going to do this inside of Fabric, is I want to publish the algorithm. Then I want to parse the Extract Wisdom output and have it make a recommendation for a modification to the algorithm.

That's amazing. That's a great idea.
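The algorithmic-learning loop he describes, a published list of steps updated by recommendations extracted from each talk, could be sketched like this. The helper is hypothetical; in his workflow the recommended steps would come from running the talk through Extract Wisdom, which is stubbed in here as a plain list.

```python
# Hypothetical sketch of "algorithmic learning": the testing
# methodology is an explicit checklist, and recommendations mined
# from a talk are merged in if they aren't already covered.

def update_algorithm(steps: list[str], recommended: list[str]) -> list[str]:
    """Append recommended steps not already in the checklist,
    preserving order and ignoring case-only duplicates."""
    known = {s.lower() for s in steps}
    merged = list(steps)
    for step in recommended:
        if step.lower() not in known:
            merged.append(step)
            known.add(step.lower())
    return merged
```

The payoff is the one-question test he mentions: a talk added value exactly when the checklist came out longer (or changed) after watching it.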

Yeah. I'm sorry, I'm so excited. I'm definitely going to download Fabric. Maybe I can contribute to it; we could work on some of this together. Absolutely. I think that what you're doing with Fabric definitely applies.

Awesome. Thank you so much, Greg. And we'll post this for on-demand, so you can grab it, put it through Whisper, and distill it for the main points. We'll have Daniel back again soon.

So Joel has graciously decided that we'll keep going with the audience questions, because they've kind of piled up right now. We'll start with Bram Adams, and then Spencer Bentley next. Spencer's in London, and he woke up at the crack of dawn to join us.

So Bram, if you don't mind unmuting yourself, we'd love to hear from you.

Hi. Yeah, Daniel, great presentation. I really appreciate the direction that you took with Fabric.

My question is kind of two parts. The first part: as someone who's also created a number of these, I tend to go with Alfred scripts and run my GPT prompts through them. Some of them I end up forgetting about, right? I'll create them and think, oh, this is super useful, and then never revisit them again. But there are others that I'm really drawn to over and over. So I'd like to know which of those have appeared for you; it seems Extract Wisdom is one of them. And then secondly, have you changed the level at which you think about problems now, either higher or lower, because of these technologies?

Yeah, great question. I do really worry about that. Naturally, I think about what I can write for AI to remind me of those things, because that's all I think about right now. I feel like the reason HomePod and Alexa aren't taking off is that people don't remember what they can say to them. It's not intuitive enough, so they think they have to remember a list, and they're like, I don't know it, so I'm not going to talk to it. I don't really have that problem right now, because I feel like I'm always in touch with my problems, in the sense of: I wish I had more content at the top of the funnel, I wish I could filter faster, I wish I could incorporate that into my algorithms faster.

But still, I do sometimes go to my AC, my alias for it, look at all my different prompts, and be like, oh crap, I forgot I wrote that; I should use that. So it still does happen.

The second part. Tell me the second part again. I'm sorry.

I was wondering if it changed the level at which you think about problems. Like, oh, I would usually approach it this way before having these tools, but now that I have them, I approach it from a higher or lower level.

So the big thing that changed my thinking about that was actually a Karpathy talk. He was talking about the data pipeline for Tesla's full self-driving. He was like, these are all discrete problems that we have, and it's this giant pipeline that the data moves through. Wherever we have a problem, we say, okay, that was bad, and they track it all the way back to the little component. They're like, it was here, and then they take their team and go focus on that one. That's what made me think: wait a minute, let me just do this for all of life.

Awesome. Thank you. I appreciate it.

Okay, Spencer Bentley, thank you for your patience. You're still on mute, Spencer. There's a microphone icon at the bottom center of your screen. There you go.

Testing, one, two, three. Have you got that?

Yep. Oh good. Thanks for the presentation. Really in my wheelhouse.

The question: your command-line tool at the minute takes a piece of text as the source of what you're interested in. On the command line, you basically give it a URL and it parses that; that's how it gets the input.

I was wondering if you'd consider different ways of getting the user's interest into the AI. I was thinking of a screen grab: you define an area of the window and pass that to GPT-4 with vision, which would then look at it and treat it as the thing you're interested in, which would kick off that whole wisdom process. I wonder if that has crossed your mind.

It has. I haven't messed with the vision stuff yet, but you're absolutely right. It's like I was saying earlier: everything I'm thinking now is, okay, let's build all these APIs into Fabric so they're available to everyone. But the second part of the problem is the integration problem, right? Because I'm not always on the command line. Ultimately I want to be looking at something and just indicate: parse that.

So I think merging with the vision API somehow. But then you would still have to do the targeting, like you said: I want this part, or whatever. But yes, 100 percent. It's all about what the on-ramps are to get it into the API.

Basically right.
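The command-line on-ramp being discussed, source text piped in and wrapped in a chosen pattern, can be sketched as follows. The pattern name and wording here are illustrative, not Fabric's actual interface; in a real CLI the source text would arrive on stdin, e.g. from `curl -s URL`.

```python
# Sketch of a pattern-wrapping on-ramp: the text is the source of
# interest, and the selected pattern supplies the instruction that
# precedes it. Pattern contents are invented for illustration.

PATTERNS = {
    "extract_wisdom": ("Extract the most surprising and interesting "
                       "ideas from the content below:"),
}

def wrap(pattern: str, source_text: str) -> str:
    """Prepend the named pattern's instruction to the source text,
    producing the full prompt to send to the model."""
    if pattern not in PATTERNS:
        raise KeyError(f"unknown pattern: {pattern}")
    return f"{PATTERNS[pattern]}\n\n{source_text}"
```

A vision on-ramp would slot in at the same place: instead of fetched page text, the `source_text` would be the model's description of a selected screen region.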

Thank you so much, Spencer. Great to meet you tonight. Joel, back to you, sir.

Yeah, so I'm shifting topics a little bit, Daniel, because I'd be remiss not to ask you about this while you're here. How do these types of capabilities play out for a person who has integrated all of these prompts and Fabric into their work?

For someone who wants to cause harm, say an offensive hacker, versus someone who wants to do good, someone like us doing penetration testing and red teaming: how do you think that will balance out in the long term?

Yeah, it's an interesting question. I think attackers always move faster, and AI is moving so fast that it's running at a speed more similar to the attacker's. The other advantage attackers have is they can just say, oh, I could use this for spear phishing, let's spin this up. That's a weekend project; Monday morning they're launching campaigns. As opposed to an org, the blue side, especially at a big corporation: they have risk and caution to think about. They're like, hold on, this stuff is crazy. Are we really going to bring this in here and hook it up to our production systems?

So first of all, they have a natural caution about AI, and they haven't even figured it out or learned it yet at the leadership level, I would say. Second, they have to build the systems and then test the systems. Their experimentation and their iterations are so much slower that I think they're just going to be behind the attack side for quite a while. That being said, and I think you and I might have talked about this before, and this might have been your point: that might flip at some point. Two to three years, who knows, two to five years. At some point it's going to flip, because the blue side has more context, and the blue side might have more ability to leverage that context to move even faster. But I think it's going to take a while for that flip to happen.

Yeah, definitely agree with that. I think all of the things that are hard in corporate information security are things that are kind of boring: asset management, documentation reading, compliance, vendor due diligence, managing supplier risk. The good news is that LLMs are really good at that stuff. They're really good at reading lots of things, synthesizing them, reading a policy, taking some input, making a decision. So yeah, I am hopeful. And an opportunity to plug: at OpenAI we have the Cybersecurity Grant Program, acknowledging this defender-attacker asymmetry in adoption curves, and we're funding projects that increase adoption of the API for defensive use cases. I'll post a link in the chat.

Yeah, thanks for sharing that, Joel. Another thing I can share that's coming down the pipeline, and all of the community members will have access to this: our Preparedness team will be hosting a capture-the-flag challenge as a means of probing frontier models for risks as well. So there are lots of cool opportunities to support this type of work and help ensure the models are safe, within this community.

Cool. All right. Thanks, Joel. We have one audience question. So we'll go with Fouzia Ahmad. We'd love to hear from you.

Hi, my name is Fouzia. Thanks, Natalie. Thank you, Daniel, for an amazing presentation. It's fascinating to see how you're able to integrate all that stuff into your life. I want to piggyback off of Radhika's question. I was personally very fascinated with the whole Extract Wisdom thing, and I think you called it Label and Rate. I'm really curious: how did you specify the threshold for whether a podcast is worth a human's time to watch or not? I would totally pay for something like that. And on a lighter note, a second question: now that you've managed to do all these integrations, what do you find yourself doing with some of the time you may have gotten back in your life?

Oh, good question. Building more AI stuff; that's unfortunately what I'm doing with it. No, I'm just so excited about it. What I'm really excited about now is: okay, I've got all this power from one-shot prompts, what happens when I start sending it to agents? There's so much potential here, plus external calls, although most of the value I'm getting is from just asking the model's opinion. I just published one a couple of days ago called Analyze Claims. You send in opinion pieces, and I was like, 2024 is a perfect time for this. You send in political pieces on controversial topics, and it beats up the argument: it goes and finds supporting evidence and data, it finds contradictory arguments, and then it gives a rating and actually labels it. I submitted a couple of my own essays and it beat them up, and I was so happy to see it.

So I'm slowly transitioning towards the human problems. That's what I'm doing with Fabric now: let's think about the human problems and how we can build some of these patterns to get around them. There's one in particular that's really gotten to me in the last couple of days. Something happened in Georgia with people losing Medicare. What the government basically did, and I'm not 100% sure about this, it was hearsay, but I think they said: hey, if you go fill out these 19 forms in this one particular order, you can stay enrolled. And the people who need it most are, first of all, working multiple jobs; they don't have time to fill this stuff out. A lot of the language is designed to be opaque on purpose. So what I want to do is let them submit the request, and an agent goes off, helps them fill all the stuff out, and helps them stay enrolled in something like this.

So I guess to answer your question: I'm trying to transition to larger-scale human problems that can be addressed with it, rather than very discrete tasks.
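An Analyze Claims-style prompt could be structured like the following sketch. The wording is hypothetical; Fabric's actual pattern differs. The shape is what matters: for each claim, argue both sides, then attach a rating and a label.

```python
# Sketch of a claims-analysis prompt template. Rating scale and
# labels are invented for illustration.

ANALYZE_CLAIMS_TEMPLATE = """\
For each claim in the piece below:
1. State the claim in one sentence.
2. List supporting evidence and data for it.
3. List contradictory evidence and counterarguments.
4. Give a rating from A (well-supported) to F (unsupported) and a
   one-word label, e.g. "solid" or "speculative".

PIECE:
{piece}
"""

def build_analyze_claims_prompt(piece: str) -> str:
    """Wrap an opinion piece in the claims-analysis instructions."""
    return ANALYZE_CLAIMS_TEMPLATE.format(piece=piece)
```

The symmetric structure, evidence for and against every claim, is what lets the model "beat up" an argument rather than just summarize it.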

That is one of the most useful, humane use cases I've heard, Daniel. I hope that you are able to execute on that; let us know and we'll spread the word and boost the visibility. Fouzia, so good to see you again. Thank you for joining us. We have room for one more question. Joel, did you have one more in your repertoire this evening? I can ask one if there's not one from the audience. I think the audience is tapped out. Go for it.

Yeah. So Daniel, a long time ago in your book, The Real Internet of Things, you talked about this personal daemon, this kind of agent, before we were calling them agents, that would do things for you. You mentioned filling out forms. What are some other things you wish a personal AI assistant could do for you that you see as something that could really change your life?

Yeah, that's a big question. The thing I'm most excited about there is providing the goals of the person. You basically have that RAG container; it's essentially your life. But most importantly, you have a mission statement in there. You have goals in there, things like: spend more time with my family, make sure I'm reaching out to my friends, all these very human things. That's the other thing I want to stress about this: people see all this AI stuff and they think it's tech. No, it's tech being applied to human problems, right? So you basically say who you want to be, the human you want to be when you grow up, and you put all that in this RAG container. And then the agent, or the digital assistant or whatever it is, essentially monitors that, and it looks at what you're doing during the day. So it's like, hey, I want to eat a lot more greens, I want to make sure I'm talking to my friends. And suddenly it talks in your ear: hey, it's midnight, you're not getting any sleep. You ate four hamburgers today, and you haven't reached out to anybody. Would you like me to call your friend Joel? And you're like, yeah, go ahead. So it's advocacy in the direction of where you're trying to go as a human, a sort of slow steering in that direction.
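The personal-daemon loop he sketches, goals in a store, the day's events checked against them, nudges surfaced, reduces to something like this. Everything here is illustrative: a real version would pull goals from the RAG container and use a model, not booleans, to judge whether each goal was met.

```python
# Hypothetical personal-daemon sketch: goals live in a store, and
# any goal the day's events did not satisfy produces a nudge.

GOALS = {
    "sleep": "be in bed before midnight",
    "diet": "eat more greens",
    "friends": "reach out to a friend",
}

def nudge(events_met: dict[str, bool]) -> list[str]:
    """Return one reminder for every goal not marked as met today."""
    return [f"Reminder: {desc}"
            for key, desc in GOALS.items()
            if not events_met.get(key, False)]
```

The design point is the direction of the loop: the assistant advocates toward goals the person stated themselves, rather than optimizing something on its own.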

That was beautiful, Daniel. Thank you. Good question, Joel.

Okay, it's 8:05, guys. Daniel, I honestly didn't expect what you delivered tonight. Thank you so much; those were really beautiful use cases. I would love to get to know you better as well. Joel, Daniel, bring me into your D&D group; I would love to be there.

Just a few announcements before we take off for the evening. A couple of weeks ago we launched the very first member referral campaign in the community. We started this community just last summer; it was invite-only, through diversity, equity, and inclusion partnerships with universities on the West Coast, and then OpenAI referrals. But now we're asking the community that's here to please share the application, circulate it among the other wonderful, smart, engaged, thoughtful people in your community who can share their human use cases with us and teach us a little something. We would really appreciate that.

Also, next week, I don't know if you guys are science fiction fans, but one of my favorite authors, Emily St. John Mandel, who wrote Sea of Tranquility, and I don't know if any of you have also read Station Eleven, which is on HBO now as a limited series and is really fascinating, will be here with us next Thursday. That's a little out of the ordinary, but I hope that if you're into science fiction, you'll join us and come with some cool questions. The next couple of weeks after that, we're going to dig into biology: we're going to host Anton Maximoff of the Scripps Research Institute for Deciphering the Complexity of Biological Neural Networks. And then we'll end February with the OpenAI team presenting their research, Weak-to-Strong Generalization, with OpenAI's Superalignment team. We also have a blog that was just published today, if you haven't checked it out yet: New Embedding Models and API Updates. Our developer community manager, Logan, asked me to share that with you.

So, lots going on. Daniel, Joel, thank you so much for being here. It was really, really an honor to host you both.

Thanks for having me.

Thanks, Natalie.

Do not hesitate to reach out to me if you want to come back and present for us; I would love to host you. Awesome. Thank you. All right, happy Thursday, everybody. Hope to see you soon.
