Artificial intelligence is transforming our music, starting with Kanye West
Kanye West has never sung Hey There Delilah, yet a cover of the song in his voice is online. It was made using AI, and it’s emblematic of how the technology is rocking the music industry, writes Elena Siniscalco
Since he went on an antisemitic spiral, Kanye West’s music has carried a bitter aftertaste. But the rapper turned shoe designer has had his voice co-opted by the masses, singing much-loved favourites such as Don’t Stop Me Now by Queen or Hey There Delilah by the Plain White T’s. All of these covers were made by artificial intelligence, trained on Kanye West’s extensive back catalogue to create a convincing replica of his voice.
AI-generated music has been shaking up the industry for years now, with a surge of younger and independent artists experimenting with it during lockdowns, when access to studios and professional equipment was almost impossible. Some went in very unconventional directions: singer Holly Herndon created an AI clone of herself, Holly+, that can sing any song in her voice, in any language.
The issue was thrust into the spotlight again last week, when Universal Music Group asked Spotify and Apple Music to remove music made by AI. The label argued that to produce new music, AI borrows from existing content, breaching artists’ copyright.
AI-generated music can mean pretty much anything. It can be covers of existing songs; beats that artists use as starting points for their own productions, as with BandLab SongStarter; or music generated from a written description, through software like Google’s MusicLM. The researchers behind MusicLM suggest prompts like “a calming violin melody backed by a distorted guitar riff”, and the model can even create music to “match” paintings.
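To make the text-to-music idea concrete, here is a minimal sketch of how such a model is driven. MusicLM itself has no public API, so this assumes Meta’s open MusicGen model (via the Hugging Face transformers library) as a stand-in; the prompt is the one the MusicLM researchers suggest, and the output filename is illustrative.

```python
# A hedged sketch: generate a short clip from a text prompt using MusicGen,
# an open text-to-music model that works the same way MusicLM does.
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# The written description becomes the conditioning input.
inputs = processor(
    text=["a calming violin melody backed by a distorted guitar riff"],
    padding=True,
    return_tensors="pt",
)

# ~256 audio tokens is roughly five seconds at MusicGen's 50 Hz frame rate.
audio = model.generate(**inputs, max_new_tokens=256)

# Write the result to disk; MusicGen outputs mono audio at 32 kHz.
rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("clip.wav", rate=rate, data=audio[0, 0].numpy())
```

A few lines of text in, a few seconds of original-sounding audio out: that ease is exactly what makes the copyright questions below so pressing.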
In some of these cases, it’s clear what the AI is drawing on, or even copying, and the potential for a copyright breach is easy to see. In others, it’s much harder to establish what material the AI was fed. The music industry is scratching its head trying to figure out who should draw the line, and where to draw it.
Legislation is not there yet, and the two sides are hard to reconcile. On the one hand you find the big labels and the artists. In this year’s report from the International Federation of the Phonographic Industry, the organisation representing the recorded music industry worldwide, many industry figures lament that AI systems are trained on vast quantities of copyrighted content and only then develop their “own” IP. They argue the artists who produced that material should be compensated.
On the other hand, AI advocates in the tech industry argue they’re doing nothing wrong. In the US, they’ve been relying on the “fair use” exception in copyright law: as long as the work is transformative and doesn’t harm the market for the original song, there’s no problem, they say.
Most countries only protect the copyright of works made by humans; not the UK. Here, computer-generated works can be protected, and the author is taken to be the person who made the arrangements necessary for the work’s creation. In this light, you could argue the UK is a friendlier country for tech bros experimenting with music. Yet there’s still potential for infringement under the Copyright, Designs and Patents Act if the AI reproduces an entire song, or a substantial part of one, without a licence or a legal exception.
Things are moving fast inside and outside the world of music. Getty Images recently filed a lawsuit against Stability AI here in the UK, accusing it of copying and processing millions of its images without a licence. Stability AI is behind the image generator Stable Diffusion, and Getty said that without its pictures the tool wouldn’t have been possible. The outcome of the lawsuit could set an interesting precedent for the music world too.
Yet for now there are many questions and few answers. Where does responsibility lie? Should governments step in and create new copyright legislation tailored to these cases? And how do you regulate something that, like music, travels across borders, when legislation is made nationally?
Some want a third party, possibly overseen by a government body, to act as a “licensing house”. It would agree fair market terms for the use of existing music and become a repository of licensed material that AI businesses would know to go to when training their models. Industry insiders also say there’s talk inside the UK government of a potential “digital watermark” that would help track the content used by AI.
It will be hard to strike a balance. Play with software like MusicLM for even five minutes and you’re struck by the technology’s potential. But given how intrinsically music is linked with human emotion and experience, it’s hard to imagine a world where we all choose AI-made trance beats over the latest single from our favourite songwriter.
Music might be safe for a while, but the battle shaping up between labels and AI startups is unlikely to be a particularly friendly one.