AI music generation - yay or nay?

I’ve been a musician for over ten years, playing guitar, keyboards, drums, harmonica, and more. Along the way, I’ve experimented with numerous DAWs, including FL Studio, GarageBand, and Ableton. The amount of time and effort I’ve invested in learning to mix, master, and produce high-quality music has been substantial.

Recently, I came across Suno, and I am genuinely amazed at how quickly it can generate music at a quality level I would struggle to reach even with plenty of time to spare. Its efficiency is astounding: it produces music that is not only listenable but often rivals professional work.

To illustrate this, I wanted to write a song for this blog post. I gave Suno the prompt “Upbeat song about hosting a website on Azure Blob Storage”, and within moments it had generated a complete song. This experience perfectly encapsulates both the impressive capabilities of AI and the evolving landscape of music creation.

Initially, I was going to prompt about AWS S3, but I went with Azure Blob Storage instead because Suno kept singing AWS as ‘ohs’ instead of ‘a-w-s’. ¯\_(ツ)_/¯

The potential of AI in music generation is undeniable. It’s a game-changer for musicians and producers, enabling us to create and explore new musical landscapes with unprecedented ease and efficiency.

When you don’t use a custom prompt, Suno tends to stick to a standard song format: verse, verse, chorus, verse, bridge, chorus, with each part being four lines of similar length. While this structure works, it doesn’t reflect the diversity and creativity often found in real-world music. To make a song feel more “original”, adding elements like a choir in brackets or splitting phrases into smaller parts or shorter lines can make a significant difference. These small tweaks can transform a song from sounding generic to having a unique and engaging character.
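
To give a concrete picture, a custom lyrics prompt might look something like the sketch below. The bracketed tags and the lines themselves are just an illustration of the idea, not the exact syntax or lyrics from my song:

```
[Verse]
Push the files, watch them fly
Static pages in the sky
[Chorus]
(choir: upload, upload)
Host it on the blob tonight
No servers left to babysit
```

Splitting a phrase across uneven lines, or dropping in a bracketed choir like that, is usually enough to nudge the arrangement away from the default four-lines-per-section feel.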

Even when the structure is strictly specified in the lyrics, at some point the song kind of takes the steering wheel and mixes a bit of this and that from them. So even a carefully structured lyric sheet isn’t set in stone. This flexibility allows the generated music to feel more organic and less predictable, adding to the overall appeal of AI-generated compositions.

One downside for me, however, is the inability to get separate tracks (stems) for the generated music. If I were a music producer, I would value the ability to correct mistakes, tweak elements, and put my personal touch on the final product. While Suno’s output is impressively polished, having access to individual tracks would elevate the experience, allowing for finer control and customization. However, there are other AIs that can split a finished mix into stems, so if I really wanted, I suppose I could make surgical changes to these generated songs.
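
If I ever do want stems, the workflow would probably look something like the sketch below, using Demucs, an open-source source-separation model, as one example of such a tool. The file name and output path are assumptions for illustration only:

```python
# Rough sketch: split a generated mix into stems with Demucs so that
# individual parts can be tweaked in a DAW afterwards.
# Assumes `pip install demucs` has put the demucs CLI on PATH,
# and that the song was exported as suno_song.mp3 (hypothetical name).
import subprocess

subprocess.run(
    ["demucs", "--mp3", "suno_song.mp3"],  # --mp3 writes mp3 stems instead of wav
    check=True,
)
# Stems (vocals, drums, bass, other) end up under ./separated/<model>/suno_song/
```

It’s not the same as getting the real multitrack session out of Suno, and separation artifacts do creep in, but it’s enough for surgical edits like muting a part or re-recording a vocal.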

This rapid advancement also feels a bit strange. On one hand, it’s incredibly cool to have music generated so quickly based on minimal input. On the other hand, it raises questions about the future role of musicians. As AI continues to evolve, the need for human musicians may diminish, as AI can handle everything from composition to production.

As a musician, it’s a bittersweet realization. The tools and possibilities AI offers are extraordinary, yet they also hint at a future where our traditional roles and skills might need to adapt significantly.

Imagine a few years from now, with AI integrated into games. You walk into a place in an RPG, and suddenly, music starts playing with lyrics sung by a bard, describing how you just fought a guard behind the tavern. This level of dynamic, responsive music creation is exciting but also somewhat unsettling. It suggests a future where the unique touch of human musicianship might be less valued or required.

A friend of mine argued that AI-generated music is soulless and formulaic. In Polish, he used the idiom “na jedno kopyto”, which roughly means “all cut from the same cloth”. While I understand this perspective, I don’t necessarily agree. AI-generated music can indeed follow certain patterns and structures, but it also has the potential to bring fresh ideas and combinations that might not be immediately apparent to human composers. The flexibility and creativity embedded within AI can complement human artistry, offering new tools and expanding our musical horizons rather than diminishing them. After all, aren’t we all, in some form, neural networks ourselves?