
Google DeepMind’s Music AI tools: Transforming the future of music creation

November 26, 2023
Music AI tools. Source: Google DeepMind

Key takeaways

  • Lyria, Google DeepMind’s new music generation model, and its Music AI tools unleash a new era in music creation.
  • Google DeepMind leads with a responsible, ethical approach to AI’s role in music innovation.
  • Music AI tools are redefining music’s boundaries, creating a playground of unlimited musical possibilities.

Google DeepMind’s launch of its new ‘Music AI tools’ might be setting a new standard in music creation.

Earlier this year, DeepMind showed us MusicLM, an impressive AI model that creates songs from text prompts. But it does more: it can turn a simple whistle or hum into different kinds of music, like an electronic synth line, lush strings, or a guitar solo.

However, DeepMind chose not to release MusicLM to the general public. Their concerns? Potential misappropriation of creative content and the risk of misrepresentation. Instead, they made it available to a select group of researchers and developers in the musical AI field.

This careful choice by Google DeepMind shows they are serious about safely using new music tech. They want to find new ways to make music with AI, but in a way that is right and fair. It’s about creating a future for music that is exciting and responsible.

Just months later, having figured out how to use these technologies responsibly, the company introduced Lyria, its ‘most advanced AI music generation model’ so far.

Lyria powers a set of tools created in collaboration with YouTube and known as ‘AI experiments’.

What’s interesting is that this time, the tools are reaching the public: both everyday users and music professionals can explore and use them.

On one hand, Google DeepMind launched Dream Track, a feature in YouTube Shorts. You write a prompt, pick a voice from one of the nine artists who agreed to let Google use their voices, and create an AI-generated music track for your video. It’s a fun way to make YouTube more engaging for users.

On the other hand, and here is where things get interesting for music creators, a few lucky ones got access to Lyria’s ‘Music AI tools’. These tools are being designed alongside artists, songwriters, and producers to help with their creative processes. 

Partnering with the industry for responsible development

The idea started at YouTube’s Music AI Incubator, where YouTube and its music industry partners come together to work out how to build these tools responsibly while keeping pace with fast-moving advances in the field.

Lyria’s ‘Music AI tools’ will be available to everyone at a later stage. You will be able to sing or hum a tune to make a saxophone line, change beatboxing into drum sounds, or turn music played on instruments into singing. Now, how amazing is that?!

These Music AI tools can be released to the public now thanks to SynthID, a system that embeds a watermark in AI-generated music to show it was made by a machine. Every piece of music generated by the Lyria model will carry this watermark.
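Google hasn’t published how SynthID’s audio watermarking actually works, but the general idea of hiding an identifier inside the audio samples themselves can be sketched with a deliberately simplified, hypothetical least-significant-bit scheme (real systems are far more robust to editing and compression):

```python
# Toy illustration of audio watermarking: embed an identifier in PCM
# samples imperceptibly, then detect it later. This is NOT SynthID's
# algorithm (which is unpublished) -- just a least-significant-bit sketch.

WATERMARK = "AI"  # hypothetical identifier to embed

def embed(samples, tag=WATERMARK):
    """Hide each bit of `tag` in the least significant bit of a sample."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def detect(samples, length=len(WATERMARK)):
    """Read the LSBs back and reassemble the embedded tag."""
    bits = [s & 1 for s in samples[: length * 8]]
    chars = [int("".join(map(str, bits[i:i + 8])), 2)
             for i in range(0, len(bits), 8)]
    return bytes(chars).decode(errors="replace")

pcm = [1000, -2000, 3000, 5, -7, 42, 9999, -1,
       0, 12, 34, 56, 78, 90, 11, 22]  # 16 fake audio samples
marked = embed(pcm)
print(detect(marked))  # prints AI
```

Flipping only the lowest bit changes each sample by at most one step, far below audibility for 16-bit audio, yet the identifier can be read back by anyone who knows the scheme.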

This system could also become the norm for preventing copyright violations in AI-generated music, a topic that governments are closely examining as well.

Making the impossible happen with music AI tools

If you make music or produce songs, you’ve probably dreamed of having a tool this powerful. Countless hours are spent trying to craft the perfect song, transforming your thoughts into actual melodies. And consider the number of incredible songs that remained unheard because their creators couldn’t fully realize them.

Just recently, The Beatles surprised us with a new release, ‘Now and Then.’ After John Lennon’s passing and three decades of failed attempts, they finally managed to faithfully restore Lennon’s vocals from a demo tape using machine learning and other music production tools.

This release would not have been possible without machine learning, which is a branch of artificial intelligence (AI) and computer science. It uses data and algorithms to mimic how humans learn, getting better and more accurate over time. 
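To make “learning from data” concrete, here is a minimal sketch of the general principle (not the actual technique used on the Lennon tapes, which hasn’t been detailed publicly): a one-parameter model fit by gradient descent, whose guess improves with every pass over the data.

```python
# A minimal illustration of learning from data: a one-parameter model
# y = w * x fit by gradient descent. The error shrinks as the model
# revisits the data -- the same principle, at toy scale, that powers
# the large models used for audio restoration.

data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # inputs and targets (true w is 2)

w = 0.0    # start with a bad guess
lr = 0.05  # learning rate: how far to move each step

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w downhill

print(round(w, 3))  # prints 2.0
```

After 200 passes the parameter has converged from 0 to the true value of 2: the model has “learned” the relationship purely from examples.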

So it isn’t magic: it requires a lot of work and a very talented human being behind it.

There are still concerns

Even though many famous musicians and producers are excitedly exploring music AI tools and their potential to change music creation and interaction, there are still quite a few negative reactions.

People seem less worried about using AI for restoration or processing, but they’re more cautious about using it to create a voice from scratch. Paul McCartney shared with the BBC, “It’s kind of scary but exciting because it’s the future. We’ll just have to see where that leads.” And Ringo Starr expressed frustration about “terrible rumors that it’s not John, it’s AI, whatever nonsense people said.”

Others think “creative work shouldn’t be this easy.” But what about all the other tools and technology we already use for artistic creation? They’ve made it easier for us to turn our creative ideas into reality, too.

Like any new technology, it’s normal to have concerns and fears. As humans, we often fear what we don’t know. But just as electronic music didn’t replace musicians, AI tools won’t replace them or creativity either. On the contrary, they will expand it, opening up amazing new possibilities and making it easier to bring musical ideas to life.

You can’t blame the computer

The key to understanding how music AI tools like Lyria will change music creation is to look at past tools and their impact.

Home-recording tools didn’t turn every hobbyist into a Grammy winner. They made creating music more accessible to everyone. And synthesizers didn’t replace acoustic instruments. Instead, they created new genres and gave musicians and producers new and exciting ways to make music.

Home recording tools have democratized music creation and production for decades. PHOTO CREDIT: Canva.com

The belief that machines can magically make someone a great artist is outdated. You need imagination, skills, and years of experience. Plus, you need tools to transform all that into a successful piece of music.

Re-imagining Björk’s famous quote for this situation: “I find it so amazing when people tell me that AI music has no soul. You can’t blame the computer. If there’s no soul in the music, it’s because nobody put it there.” It couldn’t be put more clearly.

A peek at the future of music creation

Music AI tools are here to stay, whether we like it or not, and they’re going to have a big impact on how we make music in the future.

One clear effect will be boosting creativity and making music more accessible. AI will make it easier to turn ideas into real songs, letting more people create music and helping current artists and producers do even more. This could make music creation even more open to everyone, much like home-recording tools have done.

Another big effect will be streamlining production workflows. Music AI tools will act as powerful assistants, improving the creative process and offering helpful insights and support to musicians, producers, and audio engineers.

And new horizons in music composition will open up, pushing the boundaries and creating new, innovative sounds and genres that we can’t even imagine today.

Music AI tools will be the new normal

Google isn’t the only player exploring how AI can enhance musical creativity. Meta has also stepped in with MusicGen, their open-source AI-powered music generator, launched earlier this year. Plus, hundreds of start-ups are developing cool AI tools, diving into the endless possibilities this technology offers.

Music AI tools and models like Lyria are transforming the music scene. They’re turning music creation into a creative playground, breaking down old barriers, and revealing new sound worlds. The evolution of music creation and production is happening now and it shows no signs of stopping. 

Sabrina Bonini

Sabrina Bonini is a content specialist, writer, and educator focused on Web3 and entrepreneurship. She started her career as an audio engineer and musician, and has been passionate about the intersection of music and cutting-edge technology since then.
