OpenAI Forum

Music is Math: An Interdisciplinary Exploration of Sound, Science, and Creativity

Posted Feb 28, 2025 | Views 425

SPEAKERS

Christine McLeavey
Audio Researcher @ OpenAI

Christine McLeavey manages the Exploratory Audio Research team at OpenAI. The team works on current and future versions of ChatGPT's advanced voice mode, and they also created the well-loved open source models Whisper and CLIP. She previously worked on music generation research, creating MuseNet and collaborating on Jukebox. In 2018, she participated in OpenAI's Scholars and Fellows programs as she transitioned careers from professional classical pianist to researcher. She holds a master's degree in piano from Juilliard and a bachelor's in physics, graduating as valedictorian of Princeton University.

Ivan Linn
Founder/CEO @ Wavv

Ivan Linn is CEO of the generative AI music platform Wavv and the creator of the world's first large music language model, Musica. He is known for his work on the music production of video games in the Final Fantasy and Kingdom Hearts series. He is also the Music Director and Chief Conductor of the Assassin's Creed Symphony World Tour, winning a Grammy in 2023. Linn is a member of the Recording Academy (National Academy of Recording Arts and Sciences) and ASCAP. Wavv is transforming music creation with its advanced generative AI technology. Musica, the first large language model designed for music, democratizes the process of music creation, making it accessible to everyone, regardless of expertise. This technology is set to exponentially increase music content, positioning Wavv as the largest owner and provider of music assets globally.

Carol Reiley
AI Robotics Entrepreneur and Co-Founder @ DeepMusic.ai

A serial entrepreneur, scientist, and engineer with over 20 years of academic and industry experience in artificial intelligence and robotics (nickname: Mother of Robots). Strong leader and innovator who focuses on building teams, building trust, and building products. Strives for efficiency, high impact, and getting things done. Previously co-founded, was President of, and served on the board of drive.ai, a self-driving car startup out of the Stanford AI Lab that was acquired by Apple. Built an 8-person company to 200+ employees over 4 years, raising over $77M in funding. Co-founded multiple startups and launched several robotics products in highly regulated industries. Creative collaborator of the SF Symphony, and board member/advisor of several AI companies. Also active in diversity outreach events that advocate for understanding bias in AI. Work profiled in The New York Times, Harper's Bazaar, Wired, The Atlantic, and more. In 2018 was on the Top Women Founders in Tech and AI lists from Forbes, Inc., and Quartz. Educated at Johns Hopkins University and Santa Clara University, specializing in haptics and computer vision/AI. Taught two new short courses as an instructor at JHU. Published over a dozen papers at top conferences and inventor on more than eight patents. Research interests are in the development of intelligent robotic systems that can aid humans in performing skillful tasks more effectively. Application areas include surgery, industrial robotics, remote exploration, creative expression, and education.

Greg Schmidt
Director, Solar System Exploration Research @ NASA

Greg Schmidt is the Director of the Solar System Exploration Research Virtual Institute at NASA Ames Research Center. He's entrepreneurial and has a great deal of experience starting successful NASA projects, programs, and endeavors. Greg has a passion for developing partnerships with commercial, academic, and international organizations in order to further the broad goals of space exploration.

Kim Old
Chief Commercial Officer @ EMOTIV

Kim Old is the Chief Commercial Officer at EMOTIV, where she leads efforts to integrate neurotechnology into everyday life and drives innovation in the brain-computer interface space. Over the past decade, she has built a career at the intersection of neuroscience, technology, and impactful innovation, focusing on translating cutting-edge research into practical, meaningful solutions.

As a leader in the neurotechnology industry, she is dedicated to advancing ethical, accessible tools that address cognitive health, education, and mental wellness. At EMOTIV, she spearheads initiatives that empower individuals and organizations to harness neurotechnology for a deeper understanding of brain health and human potential.

Kim is passionate about fostering collaborations and driving advancements that leverage AI and neuroscience to transform lives. She looks forward to connecting with like-minded innovators to shape the future of this exciting field.

In Sun Jang
First Violinist @ San Francisco Symphony

In Sun Jang is a First Violinist at the San Francisco Symphony. A top prize winner at the International Henryk Szeryng Violin Competition, she has appeared as a soloist with the New World Symphony, Puchon Philharmonic, Nanpa Festival Orchestra, and Santa Cruz Symphony. She has collaborated closely with many of the world's leading musical artists such as MTT, Esa-Pekka Salonen, Zakir Hussain, Kanye West, and Sting, and her numerous engagements as a chamber musician have taken her to renowned venues in Asia and America including Carnegie Hall, Miyazaki Prefectural Arts Center, and Seoul Art Center. As a member of the San Francisco Symphony, she has toured frequently to Europe and Asia's most celebrated concert halls, and has recorded extensively with the orchestra on multiple Grammy Award winning albums. A dedicated educator, she is the first violin coach for the San Francisco Symphony Youth Orchestra.

A native of Seoul, Korea, she began studying violin and piano at age four. She graduated from Seoul National University, the Juilliard School, and the New England Conservatory, and earned a certificate in composing and producing electronic music from Berklee College of Music.

Daniel Stewart
Music Director @ Santa Cruz Symphony

Daniel Stewart is a conductor, violist, composer, and educator, currently in his 10th season as Music Director of the Santa Cruz Symphony, where his leadership and reputation have earned international acclaim and attracted world-class collaborators such as Yuja Wang. He has conducted many leading orchestras including the Metropolitan Opera, Boston Symphony, San Francisco Symphony, Los Angeles Philharmonic, Houston Symphony, St. Louis Symphony, Hessischer Rundfunk and Oper Frankfurt, Boston Ballet, and the New World Symphony. As a viola soloist, principal violist, and chamber musician, he has performed in over 40 countries on many of the world's great stages including Carnegie Hall, the Musikverein, Het Concertgebouw, Théâtre des Champs-Élysées, the Mariinsky, Teatro Colón, Sydney Opera House, and the Proms at the Royal Albert Hall. Past positions as an educator include four years of co-directing the Metropolitan Opera's Young Artist Development Program, coaching the opera departments and orchestras of the Juilliard School, Curtis Institute, and Aspen Music Festival, and a recently completed five-year tenure as Music Director of the San Francisco Symphony Youth Orchestra. As a guest lecturer he has given presentations at schools including Stanford, the SF Conservatory of Music, and UC Santa Cruz. He holds degrees from the Curtis Institute of Music and the Indiana University School of Music, and was awarded the Aspen Music Festival's Conducting Prize. For more details and examples, including his compositions and arrangements, please visit danielstewartmusic.com.

Jess Chang
Technical Program Manager @ OpenAI

Jess Chang is a Technical Program Manager on the Security team at OpenAI, where she focuses on defense and intelligence initiatives. Prior to OpenAI, she held senior roles in technical program management and behavioral security engineering at Vanta, Robinhood, and Dropbox, with a focus on building security and trust & safety programs. Jess has presented talks for global security conferences and industry organizations, peer companies, and federally-funded research centers. In her dual career as a professional violist, she is the founder of Chamber Music by the Bay, which brings interactive concerts to thousands of young audience members throughout the San Francisco Bay Area each year. In addition, her work as a teaching artist and chamber musician has led to festival appearances and concert residencies with Festival Mozaic, Tanglewood, Taos, Verbier, Aspen, Sound Impact, Savannah Music Festival, The Banff Centre and the Glenn Gould School at the Royal Conservatory of Music in Toronto. She holds degrees from Yale, The Juilliard School, and the Curtis Institute of Music.

Natalie Cone
Forum Community @ OpenAI

Natalie Cone launched and now manages OpenAI's interdisciplinary community, the Forum. The OpenAI Forum is a community designed to unite thoughtful contributors from a diverse array of backgrounds, skill sets, and domain expertise to enable discourse on the intersection of AI and a range of academic, professional, and societal domains. Before joining OpenAI, Natalie managed and stewarded Scale's ML/AI community of practice, the AI Exchange. She has a background in the arts, with a degree in History of Art from UC Berkeley, and has served as Director of Operations and Programs, as well as on the board of directors, for the radical performing arts center CounterPulse, and led visitor experience at Yerba Buena Center for the Arts.


SUMMARY

At an event hosted by Natalie Cone, community architect of the OpenAI Forum, attendees celebrated the Forum's growth from 200 to 10,000 members through an evening of musical performances and a panel discussion highlighting the symbiosis of AI, music, and creativity.

The evening opened with "Ode to the Sun and Moon," a collaboration between Wavv, an AI music startup led by Ivan Linn, and NASA. This performance transformed astronomical data from the 2024 solar eclipse into a mesmerizing musical piece, performed by Ivan Linn and Jess Chang from OpenAI, setting a high bar for the fusion of art and science. Another standout was Richard Reed Parry's "Music for Heart and Breath," produced by Carol Reiley, in which the music was conducted by Reiley's own heartbeat, illustrating the profound integration of human biological signals with musical artistry. This performance featured the talented Daniel Stewart and In Sun Jang, enhancing the composition with their skilled interpretations. Adding a layer of interactive neurotechnology to the evening, Tony Jebara, VP of Engineering and Head of AI/ML at Spotify, wore an EMOTIV headset that displayed his brainwave responses in real time. Emphasizing this integration, Kim Old from EMOTIV elaborated on advancements in neurotechnology, particularly its application in enhancing musical performances through brain-computer interfaces.

The "Music is Math" panel discussion further delved into the interconnectedness of musical creativity and mathematical principles. The panel included insights from Ivan Linn and Carol Reiley, who shared how AI is transforming creative industries, and Greg Schmidt from NASA, who spoke on the potential of technology and space exploration to inspire future musical innovations. The panel was adeptly facilitated by Christine McLeavy from OpenAI’s Exploratory Audio Research Team, who steered the conversation towards exploring how AI is not just complementing but also revolutionizing traditional musical composition, thereby underscoring the forum's commitment to nurturing a community where technology and creativity converge to push the boundaries of what art and science can achieve together.


TRANSCRIPT

Hey, everyone. Wow, this is the perfect room. You never know who's going to show up, but you all showed up. It's so beautiful to see you. Oh my gosh, I'm really happy to see you.

Some people flew in all the way from New York. Ahmed, Dr. Elgammal, yay. He's one of the first members of the OpenAI Forum, a pioneer of AI art and a computer scientist from Rutgers, and he flew all the way in to see us, yay.

Anton Maximov from Scripps Research also traveled quite a way to be here, not as far as New York, but still a ways. And oh my gosh, this is just a beautiful crowd.

Oh, Claudia from UC Berkeley, one of our first members as well. Wow. They say it's hard to build a real community, but it didn't feel that hard to us. I mean, here we are. Lex, one of our first members as well, wonderful.

So I'm Natalie Cone, community architect. I run the OpenAI Forum. We launched the Forum in August of 2023. We started with 200 members, and we now have 10,000 members, all from your referrals and invites.

And we launched the community in 2023 to bring experts, people across disciplines and domains, to help us evaluate our models. Our members have collaborated with safety systems on evals to make sure that the models we release in the world are as safe as we can possibly make them.

And there's John Herliman. Hello. Nice to see you. You've all had a tangible impact on shaping the future of AI.

One thing that happened after two years of being in existence as a community, and we were hosting you all and telling your stories and sharing with one another how you were leveraging AI in your practices, was that all of a sudden we were coming upon 70 amazing expert talks.

And we decided it was time to share this with the world. So tonight is the very first time we'll be live streaming an event. Why I chose to live stream a musical performance, and I don't really have a background in doing that, don't ask me. This is the first time we're actually operating like a theater at OpenAI. We usually do talks. So please wish us luck tonight, because the world is watching, and this is the first time trying this format.

But I do think that it's going to go well. And mostly it's because of you guys. We're really just telling your story, and I've just been lifting you up and giving you a platform to talk about all the amazing things that you're doing in the world.

One thing I just want to add, it's my personal flavor, but the reason I love this job, and I think the reason that I'm good at it, is because I really just like to bring us together to experience joy. And getting together, sharing our stories, listening to music, learning from each other, we all leave feeling very inspired.

And I don't know about you guys, but many of you showed up really early, so you must have been excited. This must be a joyful place to commune. So I hope that it's working. I hope that we're creating spaces for us to experience joy while we're learning about AI. Thank you so much for being in the room tonight.

Without further ado, we have two really amazing performances this evening, and a panel discussion. The first performance is Ode to the Sun and Moon. Ode to the Sun and Moon is a collaboration between an AI music startup, Wavv, founded by Ivan Linn, who's here tonight, and he'll be playing the piano.

And it's a collaboration between NASA and Wavv. Greg Schmidt is here tonight. He's the Director of Solar System Exploration Research at NASA's Ames Research Center. NASA and Wavv collaborated on a musical, immersive experience for people to watch the 2024 solar eclipse. Wavv came into play because they prompted an AI model with astronomical data from the 2024 solar eclipse and created this beautiful soundtrack.
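The transcript doesn't describe Wavv's actual pipeline for turning eclipse data into music. Purely as an illustrative sketch of the general idea of data sonification, here is a minimal example that maps a hypothetical solar-brightness time series onto a musical scale; the data values, scale, and mapping are assumptions for illustration, not Wavv's or NASA's method.

```python
import numpy as np

# Hypothetical brightness values (1.0 = full sun, 0.0 = totality), one per time step.
brightness = np.concatenate([
    np.linspace(1.0, 0.0, 30),   # partial phase into totality
    np.zeros(4),                 # totality
    np.linspace(0.0, 1.0, 30),   # back out to full sun
])

# Map each brightness value onto a pentatonic scale so the line is always consonant.
PENTATONIC_MIDI = [60, 62, 65, 67, 69, 72, 74, 77, 79, 81]  # C pentatonic, two octaves

def brightness_to_midi(b: float) -> int:
    """Darker sky -> lower pitch; full sun -> top of the scale."""
    idx = int(round(b * (len(PENTATONIC_MIDI) - 1)))
    return PENTATONIC_MIDI[idx]

melody = [brightness_to_midi(b) for b in brightness]
print(melody[:10])  # first ten notes of the sketch
```

A real system would render these pitches to audio or MIDI, and Wavv, as described above, apparently feeds the astronomical data to a generative model rather than using a fixed mapping like this.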

Tonight, what you're going to see is Ivan, the founder of Wavv, on the piano, and our very own OpenAI staff member, Jess Chang. So this is really exciting.

Ivan Linn is CEO of the generative AI music platform Wavv and the creator of the large music language model Musica. Jess Chang is a Technical Program Manager on the Security team at OpenAI. In her dual career as a professional violist, she's the founder of Chamber Music by the Bay, which brings interactive concerts to thousands of young audience members throughout the San Francisco Bay Area each year.

Please help me in welcoming Ivan and Jess to the stage. Thank you.

[Musical performance]
So that's what I'm talking about.

We're just here to experience joy and get inspired. There's nothing hard about that. Thank you so much. Let's give it up one more time for Ivan and Jess.

So our next performance will be Music for Heart and Breath by Richard Reed Parry of the indie rock band Arcade Fire, produced by Carol Reiley. And since we have Carol Reiley here tonight, I'm gonna just let her tell you about why she decided to produce this work. And we're also gonna hear from Kim Old, the Chief Commercial Officer of EMOTIV.

And if you look behind the stage where our technical producers are, we have our friend Tony hooked up to an EMOTIV device, and during this next performance you're gonna see Tony's brain on music. So that's pretty rad.

So our producer for this next performance is Carol Reiley. She's a serial entrepreneur, a scientist, and an engineer with over 20 years of academic and industry experience in artificial intelligence and robotics. Her nickname, friends, is Mother of Robots. And when I was first introduced to her, it was by Christine McLeavey, and she actually asked me, do you know the Mother of Robots? I thought, no, but that's amazing. I have to know the Mother of Robots. So now I know the Mother of Robots and I feel so honored. Previously, Carol co-founded, was president of, and served on the board of Drive.ai, a self-driving car startup out of the Stanford AI Lab that was acquired by Apple. Reiley has co-founded multiple startups and launched several robotics products in highly regulated industries. She's a creative collaborator of the San Francisco Symphony and a board advisor of several AI companies. Her work has been profiled in The New York Times, Harper's Bazaar, Wired, The Atlantic, and more. In 2018, Carol was on the Top Women Founders in Tech and AI lists from Forbes, Inc., and Quartz. Reiley was educated at Johns Hopkins University and Santa Clara University, specializing in haptics and computer vision/AI. She's published over a dozen papers at top conferences and is the inventor on more than eight patents. Her research interests include the development of intelligent robotic systems that can aid humans in performing skillful tasks more effectively. Application areas include surgery, industrial robotics, remote exploration, creative expression, and education.

Kim Old, who we'll also be hearing from in a moment, is the Chief Commercial Officer at EMOTIV, where she leads efforts to integrate neurotechnology into everyday life and drives innovation in the brain-computer interface space. Over the past decade, she's built a career at the intersection of neuroscience, technology, and impactful innovation, focusing on translating cutting-edge research into practical, meaningful solutions. At EMOTIV, she spearheads initiatives that empower individuals and organizations to harness neurotechnology for a deeper understanding of brain health and human potential. Kim is passionate about fostering collaborations and driving advancements that leverage AI and neuroscience to transform lives.

Please help me in welcoming Kim Old and Carol Reiley to the stage. Thank you all for coming today.

You know, we're so honored. This is a very special piece. It's written by Richard Reed Parry, and I thought it was really cool to mix classical music with a rock band. He comes from Arcade Fire, and a fun fact I thought this audience that likes AI might appreciate is that they produced the soundtrack for the film Her. But he had written his first solo album in the classical space, one that was really honing in on passive human sensors. So this is the first of its kind, where he was wondering, could you actually be conducted by a heartbeat? And tonight you'll be listening to my heartbeat as I am part of this trio on stage, so they're being conducted by a heartbeat, the breath, and also the brain. So Kim's gonna highlight and talk about the amazing technology of EMOTIV.

I just want to take a moment to thank Carol and the OpenAI Forum for including us in this incredible performance. And thank you, Tony, for volunteering to be our brain in response to music. So he's wearing an EEG headset. We're measuring electroencephalography, measuring the electrical patterns emitted when the neurons in your brain fire, and we're able to translate that in terms of raw EEG. There's a lot of use, obviously, in the medical space, and EMOTIV over the last decade has opened up the technology to allow people to access it beyond the medical field. So we're seeing lots of innovation at the intersection of art and music, like today. You're going to be able to see, right now in real time, Tony's response to the music. Each color represents a different frequency band: alpha, theta, beta, gamma. These are linked to different cognitive states, and so we'll actually be able to see his brain's response and identify different cognitive states, from creativity, relaxation, and meditation to active concentration.

So thank you, Tony.
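As a rough illustration of the kind of processing Kim describes (splitting raw EEG into alpha, theta, beta, and gamma band powers), here is a minimal, hypothetical sketch. The sampling rate, band edges, and synthetic signal are assumptions for demonstration; this is not EMOTIV's SDK or actual pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz; real headsets vary
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
    """Estimate power in each frequency band from one channel of raw EEG."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))
    return powers

# Demo with synthetic data: 10 s of noise plus a strong 10 Hz (alpha) oscillation.
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(signal))  # the alpha band should dominate
```

In a real-time display like the one shown behind the stage, a computation like this would presumably run on a short sliding window of samples, with each band's power mapped to a color.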

So the musicians performing this next piece, Music for Heart and Breath by Richard Reed Parry, are quite accomplished. I'm gonna do my best to get through these amazing bios.

So Daniel Stewart is a conductor, violist, composer, and educator, currently in his 10th season as music director of the Santa Cruz Symphony, where his leadership and reputation have earned international acclaim and attracted world-class collaborators such as Yuja Wang. He's conducted many leading orchestras, including the Metropolitan Opera, Boston Symphony, San Francisco Symphony, Los Angeles Philharmonic, Houston Symphony, St. Louis Symphony, Boston Ballet, and the New World Symphony, and many, many more. As a viola soloist, principal violist, and chamber musician, he's performed in over 40 countries on many of the world's great stages, including Carnegie Hall, Teatro Colón, Sydney Opera House, the Proms at the Royal Albert Hall, and many, many more.

Past positions as an educator include four years of co-directing the Metropolitan Opera's Young Artist Development Program, coaching the opera departments and orchestras of the Juilliard School, Curtis Institute, and Aspen Music Festival, and a recently completed five-year tenure as music director of the San Francisco Symphony Youth Orchestra. He holds degrees from the Curtis Institute of Music and the Indiana University School of Music, and was awarded the Aspen Music Festival's Conducting Prize.

By the way, friends, you all got a program when you walked in. If you scan the QR code, I know this is a lot to take in, but you can be reminded of everybody's bios there, and Danny has a website you can visit, where you can see a lot more than what I'm sharing right now. I also want to share that Danny and In Sun, who I'm about to bring to the stage, are so amazingly kind and warm and humble. It has been such a pleasure to collaborate with them. You would never know how absolutely amazing and accomplished they are, because they've treated us all so amazingly well as collaborators.

So In Sun Jang is a First Violinist at the San Francisco Symphony. A top prize winner at the International Henryk Szeryng Violin Competition, she's appeared as a soloist with the New World Symphony, Nanpa Festival Orchestra, and Santa Cruz Symphony. She's collaborated closely with many of the world's leading musical artists, such as MTT, Esa-Pekka Salonen, Zakir Hussain, and Sting. And her numerous engagements as a chamber musician have taken her to renowned venues in Asia and America, including Carnegie Hall, Miyazaki Prefectural Arts Center, and Seoul Art Center.

As a member of the San Francisco Symphony, she's toured frequently to Europe and Asia's most celebrated concert halls, and has recorded extensively with the orchestra on multiple Grammy award-winning albums. A dedicated educator, she's the first violin coach for the San Francisco Symphony Youth Orchestra. A native of Seoul, Korea, she began studying violin and piano at age four. She graduated from Seoul National University, the Juilliard School, and the New England Conservatory, and earned a certificate in composing and producing electronic music from Berklee College of Music.

So after this, we're actually going to have a rave dance party, and In Sun's going to also do some EDM for us. Please help me welcome to the stage In Sun and Danny.

Okay, friends, don't go anywhere. We're going to do a very quick reset of the stage so we can prepare for the panel discussion. Just a few moments. As soon as we get the chairs set up, we're going to be playing Greg Schmidt's video, so you can see and learn a little bit more about that very first piece that you saw, the immersive watching experience of the 2024 solar eclipse.

It's like no other experience we have: to watch the sun get blotted out by the moon and see that pearly white corona, the pink prominences, and the beauty that nature has to offer. We are here in Mazatlan to be the very first organization to stream the eclipse for nasa.gov. Being able to impact people's lives, quite honestly, to me, is every bit as important as, if not more important than, anything else we do, particularly the children, who are just so excited, and hopefully that will inspire some of them into a career in science.

Did you see the touchable moon rock? Yeah, we touched it. It's so cool. It felt interesting touching something not from our planet. Are there going to be any astronauts? Today? Yeah. Maybe. It's number six and it's still cool. Go NASA! Go NASA! Go NASA! Vamos NASA! Vamos NASA! Vamos NASA! Vamos NASA! Vamos NASA! Vamos NASA!

I mean, I can hardly imagine a view being better than the one we have right now. Big thank you to the Solar System Exploration Research Virtual Institute, or SSERVI, for providing the telescope views from Mazatlan. Just so amazing to see the reaction from the people. A fantastic diamond ring with prominences that were just gorgeous; you could see them with the naked eye. Being able to sit there and try not to cry. It's a powerful feeling that washes over you, and you can't help but get emotional from it.

I just want to thank everyone in Mazatlan for welcoming us with open arms. We have had so much fun sharing our love for planetary science and the eclipse. Go NASA! Go NASA!

Before I forget, Ellen, Ellen Oh, she leads Interdisciplinary Arts at Stanford. We have to introduce Greg and Joe Minerfa from NASA to Ellen, because I could see a future collaboration there. So next up is our panel. You've already been introduced to Carol Reiley. You've already been introduced to Ivan Linn. Now you're going to meet Greg Schmidt, who you just saw in that video. Greg Schmidt is the Director of the Solar System Exploration Research Virtual Institute at NASA Ames Research Center. He's entrepreneurial and has a great deal of experience starting successful NASA projects, programs, and endeavors. Greg has a passion for developing partnerships with commercial, academic, and international organizations in order to further the broad goals of space exploration. That is a very humble bio, Greg, because I know you've been at NASA for 40 years.

So the panel facilitator will be Christine McLeavey, who manages the Exploratory Audio Research team at OpenAI. Yes, please, we can get a woo-hoo for Christine. So much of what's happening tonight started with her: when we first discussed launching this Interdisciplinary Experts Program, Christine McLeavey was the biggest champion. She introduced me to so many amazing musicians, artists, producers, computer scientists, Carol included. And now, after producing this event, I know why: because she was first a classical pianist.

So Christine McLeavey manages the Exploratory Audio Research team at OpenAI. The team works on current and future versions of ChatGPT's advanced voice mode. They also created the well-loved open source models Whisper and CLIP. She previously worked on music generation research, creating MuseNet and collaborating on Jukebox. In 2018, she participated in OpenAI's Scholars and Fellows programs as she transitioned careers from professional classical pianist to researcher. She holds a master's degree in piano from Juilliard and a bachelor's in physics, graduating as valedictorian of Princeton University.

Please help me in welcoming Christine, Greg, Ivan, and Carol to the stage. This one's for you, Greg. This one's for you, Carol. Thank you so much for the intro and for this whole event. This is so exciting, and it's really an honor to be on the panel and to get to talk more with each of you. I feel like tonight's topic is really near and dear to my heart. I've always tried to describe myself as half math person and half music person, and for a long time, that made no sense to most people I would talk to. Something I've really loved about being here at OpenAI is watching, over the last couple of years, as this line between music and math becomes more and more tangible, and less this sort of abstract thing that we all kind of felt but couldn't explain. Now it's models we can actually play with, concerts we can listen to. It's been really inspiring.

So I would love to just start out by asking each of you, what does it mean, this idea of music is math? And if you can kind of give us a little of like how it's been personally in your own life, the two fields.

A few months ago, I met Greg and we had some crazy ideas. The best kind. Exactly. And I'm still quite new to the Bay Area, around two years in, but I've started to really experience amazing and magical moments here and there. So our original idea was that, well, we sent the very first album, from Jared Leto, to space, right? And now music is up there. But has there been any project that's like an outer-space live concert: sending astronauts with music training, and musicians with space training, up there, and maybe they form a band and perform up there in low Earth orbit and start to broadcast music and a concert performance back down to Earth? And we were like, yeah, we should do this. And then when I visited Greg at NASA, the first thing wasn't even about science or space. The first thing was, Ivan, I love Rachmaninoff, I love Chopin. And we started talking about music, and whenever we meet, our conversations turn into three-hour-long ones, and we had lunch, we had a little beer. And at the end of the day, I still remember that first conversation: we had a two-hour-and-55-minute-long meeting, and the last five minutes were, oh, Ivan, we're gonna do an eclipse broadcast in like two weeks. Do you think we can add music to it so that the emotion is enhanced? And I was like, I probably need a month. And Greg was like, well, don't you see that music is math? Don't you work on this little AI thing? Why don't you ask your AI to create music? And I was like, okay, I will try.

Then we came back, and we found the empowerment of technology, math, and science. This project couldn't have happened in time otherwise, and that's the beauty there, because music is probably the most advanced form of math. And we respect that and we honor that. And that's why we had this little music performance where Jess from OpenAI and I were performing with AI-generated music that's pretty much based on a scientific prompt and also the processed timecode provided by you guys. And I also feel like I've found my own place here, because when I say to people that music is math... when I was in Boston, when I was in New York, no one believed that. People were like, no, music is inspiration, you should treat it that way. But I was like, no, music is data. Music is math. So I really appreciate this conversation. Thank you.

So I have a little bit of a story to share, but first I wanna give a little shout-out to a couple of my staff members who are here. Joe Manafra, who made the introduction in the first place. And we call Joe our SSERVI ambassador. So Joe, could you raise your hand? Yeah, thank you. And then I wanna give a huge shout-out to Ashkan Najad. So Ashkan, would you mind standing up? And Ashkan is the one that put these videos together, including that last one. He has a lot of talents, and he and I have been having some really cool conversations about how to take AI forward in the kind of work that we do at NASA.

But I wanna take us back to the year 1947. And no, I wasn't born then. But my father was. He had just finished his bachelor's degree at Marquette University and he was looking for a job. And he first tried this little company that was pretty new at the time, only a few years old, called Hewlett-Packard, which a few of you might have heard of. This is pre-Silicon Valley; that was pretty much it. Well, it turns out they weren't hiring at the time, and so he took a look at the Ames Aeronautical Laboratory, where I work. It's now called Ames Research Center. And they were hiring, and so he started there. Fast forward a few years, to relate this to music: he was quite an accomplished musician himself. He played a lot of swing-era stuff and he had a band called Smitty's Swingsters. And he actually told me once that he made more money from his weekend gigs with that band than he did working at what is now NASA Ames. But to me, I think that his life really is illustrative of how music and math are so intimately intertwined. Because if you fast forward a few years to right around the time I was born, 1959, he was asked by the brand-new space agency, which had just been formed the previous year, to take a look at the problem of how to navigate to the moon. And that was not known; all the theories at the time wouldn't work, until he read a paper by a guy named Rudy Kalman and invited him to NASA Ames. And he thought, aha, this is it, but we need to do a little twist. And out of that came something called Schmidt-Kalman filtering, and that was used to navigate the Apollo spacecraft to the moon. And it has been used in literally thousands, hundreds of thousands of applications since. And so for me personally, I grew up listening to him play music, and I also grew up knowing of the work that he did at NASA. And so to me, of course music is math. Of course they go together. Can any of you imagine living a life without music? But math is the same way. So for me, I guess that's why this has such a deep personal resonance. Thank you.
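For readers curious about the algorithm Greg mentions, here is a minimal, generic one-dimensional Kalman filter sketch that tracks position and velocity from noisy position measurements. The motion model, noise values, and data are illustrative assumptions; this is not the Schmidt-Kalman formulation or the actual Apollo navigation code.

```python
import numpy as np

# State: [position, velocity]; we observe only noisy position.
dt = 1.0
F = np.array([[1, dt], [0, 1]])      # constant-velocity motion model (assumed)
H = np.array([[1.0, 0.0]])           # we measure position only
Q = 1e-3 * np.eye(2)                 # process noise (assumed)
R = np.array([[0.25]])               # measurement noise (assumed)

x = np.zeros((2, 1))                 # initial state estimate
P = np.eye(2)                        # initial uncertainty

def kalman_step(x, P, z):
    # Predict forward one step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Noisy observations of an object moving at 1 unit per step.
rng = np.random.default_rng(0)
for t in range(20):
    z = np.array([[t * 1.0 + rng.normal(scale=0.5)]])
    x, P = kalman_step(x, P, z)
print(x.ravel())  # estimated [position, velocity], should approach roughly [19, 1]
```

Roughly speaking, the Schmidt-Kalman ("consider") variant Greg refers to extends this idea by accounting for the uncertainty of extra nuisance parameters without estimating them directly, which keeps the filter small and tractable.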

So I don't have a long story, but I feel like math and music are really representative: they're generally seen as the left and the right sides of the brain. But I feel like they're so intertwined, especially dealing with AI. One of the first applications of AI was music generation. I feel like there have always been these parallel paths between, specifically, AI and mathematics and music. So there's been this logical side and this creative side, and I feel like we're finally at the point where this big explosion is happening, and it's accelerating, and people can utilize their mathematical skills in new creative ways. So I'm extremely excited for that. I grew up, like many of the people in the room here, also playing a musical instrument. Many people play; they're multi-instrumentalists. And I do think that there's something very strong with the rhythm, the logic, the data, the patterns especially, that feed into each other. The strongest mathematicians I've met are also very strongly musically inclined, and at some point in their lives had wanted to go professional. So I've worked in many different applications of AI and robotics, and music especially has been one that hits a nerve with so many people, and creativity. I feel like it challenges humanity in a lot of different ways, and it's been the area of AI that I've worked on that's been the most complex and interesting to peel back layer by layer. And I definitely think those two parts of the brain work together, and that's why I was so excited for this piece and that you guys can see and visualize both sides of the brain.

I'm glad you found the performances compelling earlier today. It's interesting how there's a balance between the human aspects and the data in both. How do audiences react to this balance in performance pieces, especially in a world where some pieces are data-driven while others are more human-driven? As you compose projects, how do you consider this intricate balance?

I started exploring this after my self-driving car venture, seeking disruptive creativity with a deep human connection, in 2018. Artists embody that humanity. I partnered with violin soloist Hilary Hahn to enhance artists' creativity using AI. We've undertaken various projects in the past six to seven years to explore new forms of art.

I reached out to Christine and used her tool Jukebox to compose music for the San Francisco Symphony. That collaboration led to a Grammy-nominated album.

When it comes to music history, there's a clear linear pattern of development.
Technology now empowers creators, moving from traditional methods to digital platforms. AI presents new possibilities, blurring the lines between human creativity and technological advancement.

Musical composition involves structuring music with melody, rhythm, and progression, but it's essential to remember that music is more than just sound. Western music theory has shaped music for centuries. AI opens up endless possibilities for composition, such as in the EDM genre.

The potential of music AI is vast, with the ability to discover new music genres and push boundaries. Models like ChatGPT, DALL·E, and Sora are just the beginning of this exciting journey.

It's intriguing to explore the impact of math and science on music. I'm curious about the other side: how sharing NASA data and ideas intersects with musical creativity. You already work to share the science, educate people, share this with the world. What makes you bring music into this? How do you see audiences, or, I guess, not audiences, how do you see people come to it differently?

It's a wonderful question, and that's one thing that, in my tenure as director of our institute, I've really tried to enhance, because you saw from the video that we showed how important outreach is to us, you know, to people of all ages. But there's a lot of different ways of reaching people. You can give them talks, you can show them things, you can give demonstrations, but I really feel strongly that music connects with a different part of us, of who we are as people. And so, for instance, one of my research teams at the institute, which is based in Maryland, they do a number of field trips, analog trips, and they went to Lava Beds National Monument up in Northern California, and the principal investigator of that group said, Greg, what if I were to bring a musician up with me, you know, and get his impressions of the field work, and create a composition around that? And I said, Nick, that's a fabulous idea. I love it, go for it, let's just do it. And so he did exactly that, and then that musician, at our next annual meeting that following July, played it, and it evoked some of the feelings that we all had being out in the field, looking at these terrains that are remarkably similar to things that we might see on the moon. So I think of it as one more tool, and a very, very powerful one, to share our message.

That's great. I'm curious, Carol, I feel like you have a very interesting set of projects that you pick, and I'm curious, as you look through, how do you get inspiration? Like, what sort of tech do you look at and think, ah, I want to build a piece around that? Or what future things do you feel like you'd be excited about? Yeah, I feel like it wasn't a charted career. You know, there are certain moments in life, certain insights that you get, that set you on a path somewhere different, and it's great to leave yourself open. One piece of advice I always give young up-and-comers is, the job that you want may not exist right now, so look for a need. When I was growing up, I wanted to be a doctor, and I learned about the pacemaker when I was a hospital volunteer and thought, wow, that's so interesting. A doctor works tirelessly to help one person at a time, while an engineer can impact millions of lives if you can invent something. And, you know, I worked in surgical robotics for a long time and thought I was gonna stay forever to work on something that saves lives, and then at the Stanford AI Lab, when they were working on self-driving cars, they let me know that the largest cause of death for young adults was car accidents. And so I feel like it's these little things that push you in a direction if you stay open. So I think my mission in life is to figure out how to save lives and have large impact. So I work in these types of highly regulated spaces, but, you know, nothing has been as interesting and fulfilling as this intersection of creativity and AI. So I think this is a really big chapter we're going through that is so exciting this time in life.

Well, thank you. I feel like we think often about how these AI models are learning from human data, from, let's say, AI models learning from music or from text, anything like that. I'm curious about the opposite side. I know when I was working on MuseNet ages ago, one of the things that really struck me was, like, I play piano, but I am a terrible, terrible composer, and I wish I were a composer. And one of the really cool things I felt like I learned from the model was that it would just spit out ideas, tons and tons of ideas, and it didn't matter if they were good or bad or anything like that. Once in a while, there was a really cool, really good idea in there. And I feel like in a way it changed my own perspective on music. And the same thing: I would sort of have it start with a Chopin piece, but then ask it to continue. And like, I've played the same Chopin piece for, I don't even want to think how many years, but it always goes the same way. But to the model, it could go any way. And so I felt like I learned from interacting with the models. And I'm curious if any of you have had this experience where you were surprised or you felt like you learned something from interacting with the model.

Oh, okay. So when I wrote music, prior to Wavv, I worked on projects such as Final Fantasy and Assassin's Creed, and I still remember pretty much all the musical pieces that I wrote. It wasn't about me composing; it was about me discovering when there's inspiration, and my inspiration comes from sound. So I think everyone is different, right? They have different ways to be inspired, but for me, it's always, when I take a shower, the sound of the water could actually get me inspired, and for some reason I have a melody line there. So I guess that was my language model at the time, which is the shower. But if you look into the structure of music, such as a motif, a chord progression, a rhythm pattern, these are some of the initial elements that let you add different layers onto it. So it becomes the initial shape of music. Then you go from there: you start to put in different instruments, different bass lines and different bars. So eventually four bars become eight bars, become 16 bars. So we've been hearing a lot of feedback that, well, this can be a really creative tool that helps you come up with even more creative music components. But I'm also very curious how Greg and Carol think about that, because when we were at NASA, we were jamming on Chopin and we started to improvise. And this human-level improvisation actually came from quite an organized, structured place, where we know the music theory. So we have a framework, we start to create music, and even jazz, we just talked about jazz, even jazz has a framework: if you follow the chord progression, things are there. So these are like different models, we can see this as a model, right? So if now we're using software to create music, maybe going forward, are we using models to create music? Are we using modules to create music? Like, this is something that's really new.
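To make Ivan's layering description concrete, here is a toy sketch of building up structure from a four-bar chord progression, doubling it to eight and then sixteen bars, and adding a bass layer. The chord spellings and note names are illustrative assumptions, not a tool or workflow mentioned by the panel.

```python
# Toy representation of the layering idea: start from a chord progression,
# then repeat it (4 bars -> 8 -> 16) and add another layer on top.
CHORDS = {
    "C":  ["C4", "E4", "G4"],
    "Am": ["A3", "C4", "E4"],
    "F":  ["F3", "A3", "C4"],
    "G":  ["G3", "B3", "D4"],
}

progression = ["C", "Am", "F", "G"]          # one chord per bar: 4 bars

def double(bars):
    """Repeat a phrase to double its length (4 -> 8 -> 16 bars)."""
    return bars + bars

eight_bars = double(progression)
sixteen_bars = double(eight_bars)

# Add a second layer: a simple bass line, one root note per bar.
bass_line = [CHORDS[c][0] for c in sixteen_bars]

print(len(sixteen_bars), "bars")
print(bass_line[:8])
```

A real tool would of course work with actual note events and audio rather than strings, but the doubling-and-layering shape is the point of the illustration.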

Interesting. I mean, one of the things that comes up for me personally: I was classically trained on piano, as Ivan knows, early on, not to his level, mind you. And then I gave up too early. My parents said that I would regret stopping, and guess what? I regretted stopping. But then, late in high school, I took it up again, but in a different way. And I mentioned my father being an accomplished musician, but what I didn't mention is that he could not read a single note on a sheet of music. And so he would improvise everything and play by ear, and to me, you know, being classically trained, that was a mystery. I didn't know how he did it, but he taught me the fundamentals, and I learned how to connect the structure of music with an emotional structure, in a way. So for me, that has helped in a wide variety of genres, as I played bluegrass, for instance. Anyway, yeah, I think the language models have been really interesting, to see how people interact with them.

I think what I've observed is that novices find it really fun to create new pieces of music and play. But when we put this challenge out to serious composers, like Pulitzer Prize-winning composers and Grammy-nominated artists, to really play with these AI tools, they really struggled. A lot of them were saying it was like working with a really bad intern, and they didn't have enough control. It was interesting to see how they approached it, breaking it down into experiments and pieces.

One model alternated every other note and interpolated them. Another had them fill in certain places. Much of the feedback was that the interfaces of the models were lacking in control and intuitiveness. Several composers noted they were most interested in the failure cases, ones that didn't sound good, as they wanted to take those ideas and expand them. This was how more expert individuals were using the music generation tools compared to novices.

I'm really curious to explore where it can go and what sounds most interesting. What are the controls needed to make this a sticky tool? Right now, the novelty is what's driving it, but there doesn't seem to be much desire to keep coming back to it and play with it. I think there's a huge opportunity once that takes off.

Our models from time to time make mistakes, but what interests us is that these mistakes can lead to even better, more human music. Some dissonant chords that come out from these mistakes might be considered bad from a music theory perspective, but they can sound amazing. It's fascinating to see how machines make mistakes in music, creating something more beautiful.

Sure, yeah, thanks. I guess we're getting the sign to open this up now for everyone to join in with questions.

Hi, Ron Rivest, MIT. The title of this event is 'Music is Math', and thinking about the two of them, what distinguishes them for me, trained as a mathematician, is the emotional content. Music has emotional content, while math does not. I was wondering if you could comment on that idea.

I could take a stab at it. For me, not being as mathematically trained as you are, and more of a physicist, I think there's beauty in mathematical structures. There's also something very unique about music, not just for people, but even across species. This would be an interesting area of research to explore.

I kind of see the direction of creativity potentially going in two directions. One is where I can write three words and have a symphony created effortlessly, and the other is where I become a super auteur controlling every little piece. How do you see that evolving?

I think there's a democratization of creativity with these tools, leading to a large excess of content. The fight for attention will be fierce. I'm interested in how individuals can tap into their creative expression and build things they might not have the skills for. Exploring the intersection of humans with AI technology in real-time composition and live performance is intriguing.

Even just listening to a couple different pieces, I'd be able to say, like, oh, I really like this one, or I don't like this one, and kind of hone in on it that way. And so I think it's really exciting: even if we're only in this world where it's, like, three words going to a piece, you still need that human aspect of, like, the person has chosen, ah, this is the one that spoke to me and that I want to keep and put out there. And I think that, in a way, that opens it up to people who don't have the experience or the training to go through it in fine detail, and they can still create in that way. And also, music's the most fun when you're a part of it. Like, there's a reason why musicians love to get together. They love to read music. They love to jam music.

I was so surprised to learn that there are so many musicians in this room, in this building. For instance, I met Preston a few days ago, and he was introduced to me as a human data person, but he apparently plays French horn. And he also introduced me to the chamber music group here. So I guess there's a reason why we all get together and create music, whether it's music reading or jamming, because that's where the time becomes precious. And we all got together tonight to listen to music. So after that question, I actually saw Dr. Ahmed Elgammal, who came all the way from New York, shaking his head. So I'd actually love to hear what he has to say. And then Anton has a question. But I just want to hear what was on your mind when you were shaking your head.

No, I mean, I always hear about AI and creativity, and I am in this area. But when I hear the word democratization coming up in the context of AI, I don't really like that. Because, yes, I agree that AI, like photography in the past, helps everybody to express themselves more. With any technology, more people can express themselves creatively. But that doesn't make them artists. I think many in the room would agree with me that having art as a profession is very different from any human expressing themselves creatively. So this distinction is very, very important, especially in these discussions. I know that every startup in that space always uses the words democratizing music or democratizing art. I don't know what to do about that, but I'm not really comfortable when I hear it in that context. That's why. Thank you.

First of all, thank you very much. It's been a wonderful evening. Thank you, Natalie. I would like to start my question with a disclaimer: I think my philosophy is very similar to yours, so what I'm about to say is not an attempt to challenge the panel. With that being said, as you probably know quite well, many artists are very much against the idea of using AI to produce any content that reflects creativity. If, for whatever reason, your job was to convince them to change their position, what would you say to them? What's the question? The challenge? Well, what I was trying to say is that many artists are frightened, right, by the introduction of these tools, and they think that it's completely inappropriate to actually produce content with the use of AI. How would you convince them that it's actually okay, that they shouldn't be frightened or dismayed?

Well, first of all, I come across this quote again and again: when people asked Picasso, what is art? His answer was, what is not art, right? That's the first layer. The second layer is that, as you remember, I actually started my classical music training very late, when I was 12, and it took me 20 years to become a proper musician, confident enough to actually get on the stage to perform, confident enough to write music, confident enough to jam with my friends. But the thing is that when it comes to this entire process, music is a multidimensional language, where it's not just about playing one instrument. It's understanding music theory, understanding music history, getting to know the background of the composer, training your technique, and really getting to know everything. And especially when it comes to piano, it's not just about fingers, right? It's about the entire body coordinating: how to use the pedal, how to actually coordinate your pedal use with your fingers, how we're going to control different sounds. And on this very stage, because the room is drier, are we going to use a little more pedal? Are we going to use a little more touch? So all of these are really traditionally trained skills in, for instance, classical piano playing. But when I moved to San Francisco, one thing that I really noticed is that, while in Germany and Boston I was required to go through all the practical training, here, people just want to jam, right? I want to play drums. I want to play guitar. I want to sing a little. And it's okay that you're not proficiently trained. But then here comes the question. Not everyone needs to be a professional musician, but we share the love of how we embrace music. So it's like: now we have a coffee machine, so we can make a cup of coffee in under two minutes. We have a rice cooker; we can make rice in under 30 minutes. We have phones, so we don't need to wait six weeks for our friends on the other side of the globe to write back to us. So I guess what we're seeing here is actually the productivity, and also how and the way this new technology is being used. Because there's a huge difference between collecting a data set and trying to train on different digital assets or music pieces from existing artists, versus creating new combinations. So I don't think I have a good answer to that, because it's very early, but it seems some different approaches are coming in. Because, thanks to the different models that were introduced like two years ago, we sort of know how to analyze sound, right? But now there's another approach, working with music notation and making music like playing with Legos. So can we actually own a C major? Hans Zimmer writes in D minor. Does he own D minor? No, but Hans Zimmer's music in D minor now actually represents the cinematic epic. So there's quite a bit of cultural and humanity consideration there when it comes to human-driven music composition versus machine-driven. And I do think in the next two to three years, we will have a better answer.

Actually, I kind of want to answer too. If anything, I would trust artists on this, right? Like, I think artists are super creative, very interesting, eager-to-try-new-things kind of people. And if they don't want to try our tools yet, maybe our tools are just not there and we need to build cooler stuff, right? And it's like, I think, yeah, let's convince them and let's come up with cool ideas, but I don't really want to try to talk someone into doing it. I want to just create stuff that's compelling enough that people want to try it on their own. And I think that there are a lot of artists who might not be the pioneers coming out, but we've seen other artists step into utilizing AI and try to create new business models, or actually actively try to contribute and create models around their voice to rev-share. So I do think there's some experimentation from some bold artists. I think we're going to be in an experimentation phase, but I do think the artists should speak,

and we want to hear from them. We're gonna take one more, but we are way over time. Your hand's been up for a long time. Oh, Preston!

Okay, two more, but guys, let's keep it short, okay? Concise. There you go, Sean. Thank you so much. I'll make it quick. I wanted to take a philosophical turn on the conversation. I'm Sean, I'm a second year undergrad at Stanford. I'm a conductor, but I'm also studying AI, and I try to incorporate computer vision, multimodality into my performances with AI. Just a quick rapid fire, if you could give one piece of advice to a young mind who's trying to straddle the dual career, the dual path, what would it be? What are you thinking, what are you optimizing for, and what are you marinating in the next 10 years?

Wow, wow. Well, Sean is an incredible musician. He also comes from a very established musical family. I know Mr. Tan won an Academy Award for Crouching Tiger, Hidden Dragon with Ang Lee many years ago, and I'm curious how your father sees it, now that you are also part of music AI, when he probably wrote music in a very traditional, hardcore way. I was watching the latest interview with Masayoshi Son a few days ago, and his grandchildren asked him the same question: what's the next 10 years, what's the future? I was quite inspired by his answer. It's fine that there are challenges; we're getting used to it. This is the generation that is soon going to live with AI, breathe with AI. That might not be comfortable to hear, but these are similar questions to how we adopted the internet, how we adopted the personal computer. This is not the first time we've gone through something this influential. And I'm optimistic; I think we'll be fine.

My advice is: I believe in you, man, go create it. Just do it, do it. I would say, talk to a lot of people, try to find your tribe, go as fast as you can, and pivot. Hear the feedback and build. I think you can do it. And he found his way through it, right? He made it; he must be doing something right.

Preston, last but not least. Okay, so my question: I've been thinking a bit about the continuum of creativity when comparing music to math. There's a societal reputation that musicians are the super creative ones, but in my opinion math also involves an extreme degree of creativity. I would love to know how you all think about how math versus music is taught, and how we can come to a unified vision that both have an equal share of creativity, and, I guess, of formulaicness in a certain way.

I'll take a first stab at that. I love that question, because you're right: musicians, of course, are going to be creative; that's how it's always been taught. But math is so often taught as very formulaic, and science is often taught the same way. When you actually get out there and become a scientist or a mathematician, it's anything but formulaic. Creativity is something I exercise every day at NASA. I try to figure out new solutions, figure out, in collaboration with my friend Ashkahn over there, how we're going to use AI in solar system research, and we've found some really cool ideas. So you've put your finger on a really big problem: the teaching of these subjects needs to change. Yes, you need to learn some basic skills, and that's the same in math and music, but kids, young people such as yourself and the last two questioners, need to see that every field is open to creativity, not just music. Thank you.

I think humans are innately creative, right? I feel like that's the thing that drives us, and that might be different from anything else. So when I started DeepMusic, I didn't want an engineer at the front of it; I felt musicians should be at the front. Coding, for instance, can be really creative, but it's no fun when you're learning the basics, the ABCs. Once you've mastered the fundamentals, it starts to become more and more freeing and fun and creative, and then you can start building. I think you have to reach a certain level of expertise in anything before you can start having fun and playing within the parameters, but you need to learn some rules. So while we talked today about music specifically, I think in math, in AI, this whole unlocking of creativity applies to any application. That's my two cents.

Well, very briefly, to add to that: without math, music probably wouldn't exist in many ways. Later, Nelly is going to talk a little more about that. In terms of creativity, we're going to have a bit of a post-social later, and we've put together our AI-generated dance music with a Sora artist. It's actually a good use case where you can see the creativity, because we were so surprised by how well it matched. Those are not the traditional ways of creating music and creating visuals, but they match. So possibilities are everywhere, and we need to unleash our brains to pursue something even more powerful. We're talking about going to outer space, right? So sometimes we're thinking in that creative way as well. When you open your mind, creative things are everywhere around you. Thank you so much, folks. That was really beautiful.

All right, folks, we can say goodbye to our panel, but, sorry, I'm kind of in your way, so let's see. We have dessert. Dessert is set up. We have about 15, 20 minutes for the dance party that Ivan was talking about. I did wanna just say thank you so much, especially for the folks that traveled. Let's see, who did I not recognize already? Oh, Dr. Ronald Rivest, thank you so much. I know you're visiting your grandchildren, but you're usually on the East Coast, so that is so awesome.

to have you here tonight. Thank you so much to Anton for traveling. Thank you so much to Dr. El-Gamal. It's really beautiful to have you all here. I hope you at least connect with one or two people, get their phone number, connect on LinkedIn. There are so many cool interdisciplinary collaborations on the horizon after tonight, as long as you connect.

And last but not least, what you're gonna see on the stage is a Sora Alpha artist. So it's an artist that had really early access to Sora before we released it. His name's Tahir, and he's a pioneering AI artist who merges technology with creativity to produce innovative and thought-provoking visuals. His work explores the intersection of artificial intelligence and human expression, pushing the boundaries of digital art.

And I connected him with Ivan just for fun, and of course they produced something rapid-fire, so we decided to throw it on the television tonight. Thank you so much, everybody.

Christine, thank you so much for facilitating the panel. Ivan, thank you for being here. Greg, I think we'll be friends. It's so good to know you. And Carol, what an honor. Thank you. Thank you. That was fun. It was a fun chat.
