The Impact of AI on Music Creation

School of Athens Newsletter 247. Written by Dom Hodges, Head Of Music & Sound at Yoto

Hi, it's Dom here.

"Music gives a soul to the universe, wings to the mind, flight to the imagination and life to everything." - Plato

Music has always been central to human culture. Long before LPs, microphones or synths, people made music using their voices, clapping hands, and simple instruments. It was communal, emotional, and passed down by ear. Music told stories, marked rituals, and brought people together. It’s one of the oldest, most natural forms of expression.

Technology has long disrupted music, shifting how and why we create and relate to it. AI is just the latest chapter in a saga of transformation, one that includes game-changers we now take for granted.

It began with the phonograph in the late 19th century, which allowed sound to be captured and replayed. Music could finally exist outside the moment it was performed. Before that, performances were fleeting; their magic lived in the now.

The 20th century brought rapid change: radio spread music globally; the electric guitar redefined genres; multitrack recording let artists layer and experiment. Sony’s Walkman and later Apple’s iPod made music portable. Then came synthesisers, drum machines, and samplers, democratising music-making and fuelling the rise of electronic music.

The early 2000s brought Napster and P2P sharing, which shook the music industry to its core and eventually led to the licensed streaming platforms we rely on today. With each shift, tech opened new doors and sparked new challenges. For every artist who resisted, others jumped in enthusiastically.

Now, AI is beginning to shape music creation in striking ways. AI systems can compose, perform, and analyse music. Tools like Google’s Magenta and OpenAI’s Jukebox generate original compositions based on prompts. Platforms like Suno, Udio, Amper Music, and Soundraw let users create custom soundtracks without traditional skills. These tools learn patterns of melody, harmony, and rhythm from large datasets. Some artists use AI as a creative partner; others use it to generate stems or variations quickly.
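To make "learning musical patterns from data" a little more concrete, here is a deliberately tiny sketch of my own (the real platforms use large neural networks trained on audio, not anything like this): a first-order Markov chain that counts note-to-note transitions in a melody, then generates a new melody by walking those learned transitions.

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Count which note tends to follow which in the training melody."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: note never led anywhere in training
            break
        melody.append(rng.choice(options))
    return melody

# "Training data": the opening of "Twinkle, Twinkle, Little Star"
tune = ["C", "C", "G", "G", "A", "A", "G", "F", "F", "E", "E", "D", "D", "C"]
model = train_markov(tune)
new_tune = generate(model, "C", 8)
print(new_tune)  # a new melody built only from transitions heard in the tune
```

Every step in the generated melody is one the model "heard" in the training tune, which is the toy version of the consent question above: the output is shaped entirely by what went into the training set.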

The results can be surprising. AI has completed unfinished works by famous composers, mimicked specific styles, and even sparked new genres. AI-powered mastering, songwriting assistants, and virtual singers now exist. The pace of change is astonishing, and the tools are becoming more intuitive.

But with innovation come big questions. One major issue is training data. Many AI models are trained on huge collections of existing music, and if copyrighted works are included, that raises concerns about consent and fair use. Were artists asked before their work was used to train a model? How should they be compensated? Debates around ethical training and copyright are ongoing, with figures like Ed Newton-Rex and the Fairly Trained campaign leading the charge in the UK.

There’s also the matter of originality and ownership. If an AI creates a song, who owns it? If it sounds like a famous artist, is it homage or infringement? These are murky areas where the law is still catching up.

Despite this, I don’t see AI as a threat to creativity. Like Fender Stratocasters or Ableton Live before it, AI is just a new (and powerful) tool. It can help artists move faster, explore new territory, and overcome blocks. It can open up music-making to people who otherwise wouldn’t have access. But as with any powerful tool, it should be used mindfully.

We need transparency. Platforms should disclose when and how AI is used, and audiences deserve to know when a track is machine-generated. Ideally, we’d have systems for fair credit and compensation, not to reject new tech, but to use it responsibly.

AI won’t replace human creativity; it will reshape it. Music evolves with the tools available. The heartbeat of music remains human: emotion, storytelling, connection. AI can support that, as long as we stay grounded in intention.

Decades ago, pioneers like Brian Eno introduced the idea of functional music: soundscapes designed not for radio play, but to enhance environments, aid focus, or encourage calm. These pieces weren’t about traditional storytelling, but utility and atmosphere. AI-generated music fits this niche well. Algorithms can now create endless ambient tracks tailored to mood, time of day, or even biometrics, producing personalised soundscapes on demand.
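As a toy illustration of mood-tailored generation (again my own sketch, with made-up mood labels and scales, not how any real platform works): map a mood to a musical scale, then sample notes from it. Calling the function repeatedly gives you an endless, never-repeating ambient stream.

```python
import random

# Hypothetical mood-to-scale mapping: pentatonic scales avoid harsh clashes
SCALES = {
    "calm": ["C", "D", "E", "G", "A"],   # C major pentatonic
    "focus": ["A", "C", "D", "E", "G"],  # A minor pentatonic
}

def ambient_bar(mood, length=8, seed=None):
    """Generate one bar of notes drawn from the mood's scale."""
    rng = random.Random(seed)
    scale = SCALES[mood]
    return [rng.choice(scale) for _ in range(length)]

bar = ambient_bar("calm", seed=1)
print(bar)  # eight notes, all from the C major pentatonic scale
```

Real systems layer timbre, tempo, and dynamics on top, but the core idea is the same: parameters about the listener's context steer the generative choices.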

What excites me is a future where generative sound exists alongside traditional, human-crafted songs. AI can excel at bespoke, utilitarian audio for everyday life, while human-made music continues to tell stories, express emotion, and carry cultural weight. This isn’t a battle between man and machine; it’s a rich coexistence.

Happy listening!

Dom Hodges,
Head Of Music & Sound at Yoto

Further reading:

A good rundown of the work that Fairly Trained does:

Some ever-wise words from Brian Eno:

Lateral uses of AI with the genius that is Imogen Heap:
