The Q&AI Podcast

Harmonizing Innovation with AI: Redefining Music Creation and Education

Juniper Networks

In this episode of The Q&AI Podcast, host Bob Friday welcomes Nagarjun Srinivasan, co-founder of Lune and former Juniper engineer, to explore how AI is transforming the music industry. From winning an Innovation Award at South by Southwest to developing AI-powered learning tools, Nagarjun shares insights on how technology is reshaping music education, creation, and performance.

They discuss AI’s role in music transcription, gamification for skill building, the ethical challenges of AI-generated music, and the future of AI-assisted content creation. Tune in to discover how AI is making music more accessible and interactive for both beginners and professionals. 

 -----

Key points covered:  

Winning at South by Southwest: How Lune started as an experimental prototype and went on to win an Innovation Award for Human-Computer Interaction 

AI for music learning: AI’s role in transcribing music, generating real-time feedback, and making learning more engaging through gamification  

Gamification and muscle memory: How AI-powered games help musicians develop dexterity and improve faster 

AI for content creators: How AI is streamlining music production, mixing, and mastering, making professional-quality content more accessible 

The copyright debate: The ethical and legal implications of AI-generated music, including lawsuits against companies using unlicensed music for training 

AI vs human musicians: Will AI ever replace human musicians, or will it always be a tool for enhancing creativity? 

The role of hardware in AI music: Why specialized hardware and data collection are crucial for advancing AI-driven music experiences 

 -----

Where to find Nagarjun Srinivasan?   

LinkedIn - https://www.linkedin.com/in/nagarjun-srinivasan/  

Where to find Bob Friday?

LinkedIn - https://www.linkedin.com/in/bobfriday/  

-----

To stay updated on the latest episodes of The Q&AI Podcast and other exciting content, subscribe to our podcast on the following channels:

Apple Podcasts - https://podcasts.apple.com/us/podcast/the-q-ai-podcast/id1774055892

Spotify - https://open.spotify.com/show/0S1A318OkkstWZROYOn3dU?si=5d2347e0696640c2

YouTube - https://www.youtube.com/playlist?list=PLGvolzhkU_gTogP5IBMfwZ7glLp_Tqp-C

We hope you enjoyed this episode!

Bob: Hello, Bob Friday here and welcome to another episode of Q&AI. In today's episode we are going to be talking about AI and music, and today we are joined by Nag, a former Juniper engineer who is now a co-founder of a music startup called Lune. Nag, welcome. I thought maybe we'd start the discussion here: I hear you guys won a South by Southwest award last year. Maybe tell us a little bit about what the award was all about?


Nagarjun: Thank you for having me on the show, Bob. So, for those of you who don't know, South by Southwest is this music, arts, and tech festival that happens every year around March in Austin. And here's an interesting story about South by Southwest: it was in 2023 when we had a prototype of Lune and we just decided to crash the South by Southwest event with our product on a box. That's where we actually got our first customer, and it was at that point, after we got the first customer, that we decided to start and incorporate the company.

Then, in 2024, one of the representatives at South by Southwest reached out to us and asked if we wanted to participate in the innovation awards. So we did, and it was a really interesting experience because we were competing with 55 other companies or startups, but we were the only unfunded startup at that point in time.

So, it was the four of us carrying huge equipment and setting up the stage while other teams had workers. It was a very surreal experience when we won the Innovation Award for Human-Computer Interaction. To give you a little background, what we won the award for was creating this device and the interaction you have with it when you play music.

For example, we showcased experiences where we would visualize the notes on your instrument on a screen. We had this game called Gravity Balls, where a completely new user of a guitar or a ukulele or a piano can plug their instrument into Lune and, as they explore the fretboard or the keys, learn what notes are present on their instrument. It was really interesting to see small kids being introduced to music for the first time actually make interesting connections.

There was, in fact, one kid who played with it for a little bit, and I asked him, what did you learn from the Lune? And he was like, oh, it looks like the notes repeat themselves after the 12th fret. And, oh, these are the strings of the guitar. That was really interesting for us, because what we learned from there was that if you provide someone the appropriate tools and visualizations to explore with, they naturally learn and get interested in that particular subject.

  

Bob: So, this was the inspiration for Lune. This is how you got into the whole venture, because, as I said, you worked for me, you worked for Mist, you were a location engineer. And maybe the other thing, before we jump into AI for music: you jumped off the cliff from being an engineer at a big company to a founder of a music startup. I think my words of wisdom were don't do it. And you have a child on the way now, so maybe for the audience, what was going through your mind when you decided to jump off the cliff into this new adventure?


Nagarjun: Well, I think one thing you probably know about me is that I am famously known for making passionate decisions like this. If you remember, even back in the day when I was graduating from my master's, I joined Mist just because I loved the environment. I loved being in that startup ecosystem, I loved the fact that there were no structures and rules and everyone was into making something. And around the COVID time is when I got back into music a little bit.

I started experimenting with digital music. I started experimenting with production, processors, effects, plugins, started my YouTube channel, and even published some music online. That's when my co-founder Shrey, who was also working for another startup at that time and learning music on the side, would get together with me for jam sessions, just like the jam sessions we have done.

And from that point is when we realized that there's really an experiential problem with playing music. If you think about the regular person's music journey, you start with one guitar. You try to play a couple of songs. You reach a plateau. Then you realize, okay, maybe I need a better amp, maybe I need better pedals. You get them. You reach another plateau. Then you probably have to learn more music, maybe scales and things like that, before you are at a comfortable level to even perform.


Bob: So, I mean, I can relate, because I started learning the guitar when I was 60. I would tell you guys, this is something you want to learn when you're a kid, not when you're 60 years old. But it was great to hear that you guys are going to build me this AI instructor that's going to help me learn to play guitar. Now, I think we talked about AI for learning and AI for content creation. I'm not sure you're actually building something for me yet, but maybe start with AI for learning. How do you see AI helping the next generation of musicians start the journey?


Nagarjun: Yeah. So, when you think about music learning and how AI plays a role in it, we can break it down into several different verticals. If you think about how you learn music with your music tutor, the first part is that the tutor has to, in the background, know what song you want to play and break down that song for you. He has to tell you these are the chords played in the song, this is the down strum, this is the up strum.

So, the first part of learning how to play a song or a musical piece involves understanding what that musical piece is. This is the transcription problem of music, which we have been trying to solve for a very long time, and now there are base models available for doing things like chord detection, beat detection, and pitch detection. So an AI, for example, can help you in the following way: you can feed it a song and it can return to you exactly what chords to play at what time and exactly what beats are present at what time. It could automatically tell you the key of the song and the BPM of the song.
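For listeners who want a concrete feel for what Nag describes, here is a minimal sketch of beat and chord estimation using the open-source librosa library. The chord step is a deliberately naive template match against major triads, an illustration only, not the approach used by Lune or any product mentioned in the episode, and the input file name is a placeholder.

```python
# Hypothetical sketch: estimate beats, tempo, and a naive per-beat chord
# guess from an audio file using librosa. "song.wav" is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("song.wav")

# Beat detection: estimated tempo (BPM) and the frame index of each beat.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Chroma features: energy in each of the 12 pitch classes over time.
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)

# Naive chord guess per beat: match averaged chroma against major-triad
# templates (root, major third, fifth) for each of the 12 possible roots.
names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
templates = np.zeros((12, 12))
for root in range(12):
    for interval in (0, 4, 7):
        templates[root, (root + interval) % 12] = 1.0

beat_chroma = librosa.util.sync(chroma, beat_frames)  # average chroma per beat
for t, frame in zip(beat_times, beat_chroma.T):
    root = int(np.argmax(templates @ frame))
    print(f"{t:6.2f}s  {names[root]} major (estimated)")

print("Estimated tempo:", tempo, "BPM")
```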


Bob: Well, I would say, when I was learning to play music, it took me a long time to get the muscle memory. So, when you talk about the award at South by Southwest and gamification, I thought that was brilliant, because it reminded me of Guitar Hero. Do you think AI is really going to help solve that problem? Because part of the problem with learning is just getting that muscle memory down and learning the basics.


Nagarjun: Yeah, it's a good point that you brought up the gamification, because I think gamification is directly related to muscle memory. One of the insights we had was when I went to play FIFA on an Xbox. I'm used to playing it on a PlayStation, but I was able to pick it up quickly. That's when I realized that when you are playing a game, you naturally gain dexterity with the instrument you are using to play that game.

Now, my co-founder Shrey comes from a game development background. He's been a director of technology at Unity, so he's really good at creating these games. We started by creating a small Subway Surfers type of game where different notes make the player go in different directions. The motivation behind that was, if I can just take your mind off playing music for a second and make you focus on some other goal, like playing this game, you naturally gain dexterity in your fingers while playing the game.

Now, how can AI help here? AI could bring infinite generation of such sequences for you to play. As long as you say, okay, these are my practice scales, I want to practice the pentatonic scale, I want to practice the major scale, AI could take that scale, extract a subset of notes from it, and create the gameplay and the notes to hit for you.
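As a rough illustration of that idea, here is a tiny, hypothetical sketch (not Lune's actual game logic) of turning a chosen practice scale into a stream of target notes and lanes for a runner-style game; the scale names and lane mapping are made up for the example.

```python
# Hypothetical sketch (not Lune's actual game logic): turn a chosen practice
# scale into a stream of target notes and lanes for a runner-style game.
import random

SCALES = {
    "A minor pentatonic": ["A", "C", "D", "E", "G"],
    "C major": ["C", "D", "E", "F", "G", "A", "B"],
}
LANES = ["left", "center", "right"]

def practice_phrase(scale_name: str, length: int = 8):
    """Return (note, lane) pairs drawn from the requested scale."""
    notes = SCALES[scale_name]
    phrase = []
    for _ in range(length):
        note = random.choice(notes)
        # The lane (game direction) follows the position of the note in the scale.
        lane = LANES[notes.index(note) % len(LANES)]
        phrase.append((note, lane))
    return phrase

if __name__ == "__main__":
    for note, lane in practice_phrase("A minor pentatonic"):
        print(f"play {note:<2} -> move {lane}")
```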


Bob: My neuroscience friends told me when I was learning to play, and I think it's true, that you practice a little bit, you go to sleep, and that sleep process actually reinforces the muscle memory. So I think gamification will help that piece of the puzzle. Now, the other interesting topic, maybe for the audience: when you look at the startups out there for the learning process, are there any you'd recommend the audience take a look at if they're just starting their music journey?


Nagarjun: Yeah. So, back to my point about transcription. There are a couple of startups today putting transcription models out there. Two notable ones that I have used in the past are Songster and Moises. Songster allows you to upload your music, a YouTube link for example, and it can generate the sheet music right out of the audio; that is a machine learning model they use to convert audio to MIDI.

And Moises has a whole suite of different models, so you can upload your audio file and find the BPM of your song, find the chords of your song, and even do things like speed up or slow down your song without affecting the pitch.
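That last trick, changing tempo without changing pitch, can be reproduced in a few lines with librosa's phase-vocoder time stretching. This is a generic sketch with placeholder file names, not the specific method any of the products mentioned here use.

```python
# Sketch of tempo change without pitch change using librosa's time_stretch
# (phase vocoder). File names are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav")

# rate < 1.0 slows the audio down; unlike naive resampling, pitch is preserved.
slower = librosa.effects.time_stretch(y, rate=0.75)
sf.write("song_slower.wav", slower, sr)
print(f"Original: {len(y) / sr:.1f}s, stretched: {len(slower) / sr:.1f}s")
```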

So, there are a lot of startups that have solved the transcription problem of understanding what music you are giving them. But then there is the next part of learning, the actual real-time part, where your music tutor either sends you a video or sits with you while you follow along. Currently there are not really any startups working on the visualization side of transcription. Right now, all transcriptions are visualized as very basic sheet music, and that is where we are focusing some of our efforts.


Bob: Yeah, because if I talk to most of my friends who are trying to learn music, it seems like it's a combination of YouTube videos and friends pointing them in the right direction. And when you look at the cost of a music instructor, it's usually $100 an hour or something. Do you see us getting to a point where we have a combination of AI instructors plus real instructors helping this journey?


Nagarjun: Yes, and I see it in two ways. One, I see the music instructor using AI to ease their workflows: using AI to string together better songs for the students, using AI to break down songs better for the students. That is one. But two, at Lune we believe that any art form should be approached with the sequence of play, learn, and create, while today we approach most art forms with learn, play, and create. So the idea is to give you this playground of tools that you can use to understand, or at least dabble in, music. And I feel a big chunk of a person's initial journey can be replaced with these AI tools and experiences until they're at a point where they really need a human to take them to the next level.


Bob: Yeah. Now, for me personally, it looks like the journey starts with practicing, and I've been stuck in that five-year practicing phase. You go from that to open mic, to band, and then finally to "I want to create content for YouTube" or something. And I think that's the other thing you're focused on now: once you get to this other level, how is AI going to help the professional? When you're at the stage of trying to create content to publish, it seems like AI has a role here too.


Nagarjun: Yeah, we think of this as a bridging-the-gap type of role. My personal use case was that when I started playing a little bit of music and producing it, I realized that in order to just produce a video, and you've seen this through your podcast journey too, it's not just about the content. You have to do the post-production; you need to know how to capture the audio and the video and sync them. There are so many extra things to learn, and I feel this is where AI can help content creators. For example, in one of the last compositions I did, I just made the raw composition and sent it to an AI that mixed all my tracks together; it adjusted the volume of my bass, my guitars, and everything to make it sound good.
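To make the mixing idea concrete, here is a toy level-balancing sketch, purely illustrative and not any product's actual algorithm: it measures each track's loudness and brings them to a common target before summing. Real AI mixing and mastering services do far more (EQ, compression, stereo placement, and so on).

```python
# Toy illustration (not any product's actual algorithm): balance track levels
# by bringing each track to a common RMS loudness before summing them.
import numpy as np

def rms(track: np.ndarray) -> float:
    """Root-mean-square level of a mono track."""
    return float(np.sqrt(np.mean(track ** 2)))

def auto_balance(tracks: dict, target_rms: float = 0.1) -> np.ndarray:
    """Scale every track to the target RMS, then sum into one mix."""
    length = max(len(t) for t in tracks.values())
    mix = np.zeros(length)
    for name, track in tracks.items():
        gain = target_rms / (rms(track) + 1e-12)
        print(f"{name}: gain {gain:.2f}x")
        mix[: len(track)] += gain * track
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix  # avoid clipping

if __name__ == "__main__":
    sr = 44_100
    t = np.linspace(0, 1, sr, endpoint=False)
    tracks = {
        "bass": 0.8 * np.sin(2 * np.pi * 82.4 * t),      # loud low E
        "guitar": 0.05 * np.sin(2 * np.pi * 329.6 * t),  # quiet high E
    }
    mix = auto_balance(tracks)
    print("Mix peak:", round(float(np.max(np.abs(mix))), 3))
```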

Bob: Now, how's that AI? Because in networking, I always tell people AI is really just the next step in automation. If I look at music, we've had synthesizers and pedals and all types of tools for adjusting the sound of our music. Where does the AI really come in? Why is AI any different than the pedals I have at home? 

 

Nagarjun: That's a really good question, Bob, and I think, before we touch on that question, it comes back down first to the architecture. You talked about synthesizers; in the past, people used to have a really big machine called a Eurorack where they plugged cables together in order to create sounds. Now we are able to create sounds using a CPU and a GPU.

You're right that we're probably on a subtle line as to where the difference lies between producing sound with a human and producing sound with an AI. But an AI can do things much faster and can experiment with way more parameters, much quicker than a human can. So, the real question to ask is whether the AI is on par with the human in these use cases. When it comes to creating sound and mixing and mastering, the question is whether an AI is on par with a sound engineer. When it comes to creating music, the question is whether AI is on par with the musician.


Bob: Well, that may lead into another topic. In acting, a year or two ago, it was all about whether AI is going to replace actors. When you look at music as an art form, is AI ever going to be on par with humans and actually start to replace human musicians? Are we getting to that point?


Nagarjun: That's a tough question, Bob, and I don't know if I can give an accurate read on this, but in 2022, when these models started coming up, I did not think AI music was really good. But if you look at the recent generations from companies like Suno and Udio, or even a couple of open-source models that are out there now, a notable one being YuE, the music has become surprisingly good. It's still not on par with a human, and there are also some technical challenges: most of these AIs produce only mono audio, and they cannot produce studio-quality audio. But if we have come this far this soon, I don't know how to extrapolate that into the future.


Bob: Yeah, and I think we were talking a little before the show here about lawsuits and we know music is copyrighted for 17, 20 years. Is that copyright protection actually protecting that music from being used to train the models, or is that an issue right now?  


Nagarjun: That is a big issue right now. And let me go back to one of your famous sayings: good AI is like making good wine. You need good grapes to make good wine, and you need good data to make good AI. So obviously, the good data you want to train a music model with is the music that is already out there. But if you have been following since last year, there's a lawsuit that Suno and Udio are facing because they did admit that they used unlicensed music for training their models. Now they claim this is inspiration or fair use, but that's what the ongoing lawsuit is about: is it really fair use if your model is using the work of an artist to train itself?


Bob: And what do you think?  


Nagarjun: I thought it was inspiration until one of these models spit out something verbatim. If it spits out something verbatim, that really makes you think at a philosophical level: okay, what did this model learn? Did it learn it, or did it take inspiration?


Bob: Do you think copyright protection will protect that music from being used for publishing, or even for training?


Nagarjun: I think the game is more about the lawsuits after you break the copyright.  


Bob: Now, the other thing I'm known for saying is that when I started Mist, I built an access point because I wanted to make sure I could get the data I needed to answer the question of why you're having a poor user experience. Is this true in music? Where does the data come from? Do I need specialized hardware? How do I get the data for all this great AI that you're doing?


Nagarjun: Yeah. So, if you think about Mist, the access point existed because a generic access point could not do what you needed in order to drive the experiences. In the same way, if you look at a lot of music equipment today, most of it runs on analog circuits or FPGAs. By moving that processing to a CPU, we are able to train models on each of these different pedals. We take an actual analog pedal, find its impulse response using an impulse response matching algorithm, and that can be converted into a software module that goes into our product. So, in essence, what the specialized hardware, or more so the change in architecture we have brought in, helps you do is create a software module for almost every physical product that exists out there in music and put it into this one product.
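For the curious, here is a minimal, linear sketch of the impulse response idea Nag describes: recover a device's impulse response from a test signal and its recorded output by frequency-domain deconvolution, then apply it to new audio by convolution. Real pedals are often non-linear, so this only captures the linear part, and the signals and names here are illustrative assumptions, not Lune's actual algorithm.

```python
# Linear sketch of impulse response capture and reuse: recover a device's
# impulse response from a known test signal and its recorded output via
# frequency-domain deconvolution, then apply it to new audio by convolution.
# Real pedals are often non-linear; this captures only the linear part.
import numpy as np
from scipy.signal import fftconvolve

def estimate_impulse_response(test_signal, recorded_output, eps=1e-8):
    """Regularized deconvolution: H(f) = Y(f) * conj(X(f)) / (|X(f)|^2 + eps)."""
    n = len(test_signal) + len(recorded_output) - 1
    X = np.fft.rfft(test_signal, n)
    Y = np.fft.rfft(recorded_output, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)

def apply_pedal_model(dry_audio, impulse_response):
    """Run dry audio through the captured response (linear convolution)."""
    return fftconvolve(dry_audio, impulse_response, mode="full")[: len(dry_audio)]

if __name__ == "__main__":
    sr = 48_000
    test = np.random.randn(sr)                        # 1 s of noise as the probe
    fake_pedal_ir = np.array([1.0, 0.5, 0.25, 0.1])   # stand-in for a real pedal
    recorded = fftconvolve(test, fake_pedal_ir)       # what a mic/DI would capture
    ir = estimate_impulse_response(test, recorded)
    wet = apply_pedal_model(np.random.randn(sr), ir)
    print("Recovered IR (first 4 taps):", np.round(ir[:4], 3))
```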


Bob: Well, Nag, I have to say, most of my engineering friends go off and do other engineering startups. I think you're the only one who jumped off the cliff to do a music startup. I want to thank you for joining us today, and I want to thank the audience for joining us. For other episodes of Q&AI, please check us out on Spotify.


Nagarjun: Thanks a lot, Bob. 
