Anyone who listens to commercial radio nowadays has probably formed the impression that a lot of pop music sounds very similar. It’s easy to dismiss this complaint as a gripe of the old and the cynical, but science actually bears it out: pop music has been fairly homogeneous throughout its history and is becoming ever more so.
In one 2014 study, researchers in the US and Austria analysed more than 500,000 albums across 15 genres and 374 sub-genres, comparing the complexity of each genre over time with its sales. Almost always, as genres increase in popularity, they also become more generic.
In itself, this does not mean much – since genres and sub-genres are always emerging. It may be considered a truism that a genre becomes accepted once its rules are defined – and once the genre is established, deviation will result in a new genre or sub-genre. For instance, funk emerged as a new genre out of soul and R&B, with a far stronger emphasis on rhythmic groove and the bass.
Another study, in 2012, measured the evolution of Western popular music using a huge archive known as the Million Song Dataset, which contains vast amounts of low-level data about the audio and musical content of each song. The researchers found that between 1955 and 2010, songs had become louder and less varied in their musical structure.
These are trends – but the perception among many listeners is that this homogenisation of music has taken a big leap forward in recent years. And there are a couple of important technological developments that have made this happen.
The loudness war
Dynamic range compression is the (usually automated) continual adjustment of the levels of an audio signal, primarily intended to reduce the variations in loudness. Its overuse has led to a “loudness war”. The musician who wants a loud recording, the record producer who wants a wall of sound, the engineers dealing with changing loudness levels during recording, the mastering engineers who prepare content for broadcast and the broadcasters competing for listeners have all acted as soldiers in this loudness war.
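To make the idea concrete, here is a toy Python sketch of static dynamic range compression: samples above a loudness threshold are scaled down by a ratio, shrinking the gap between loud and quiet passages. This is illustrative only – real compressors also smooth the gain changes over time with attack and release settings, and all names and parameters here are hypothetical, not any product’s API.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Apply simple static compression to samples in the range [-1, 1].

    Above threshold_db, the output level grows at only 1/ratio of the
    input rate, so loud material is pulled down towards the threshold.
    """
    out = []
    for x in samples:
        # Instantaneous sample level in decibels (floor avoids log of zero)
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        if level_db > threshold_db:
            # Loudness above the threshold is divided by the ratio
            compressed_db = threshold_db + (level_db - threshold_db) / ratio
            gain = 10 ** ((compressed_db - level_db) / 20)
            out.append(x * gain)
        else:
            # Quiet material passes through unchanged
            out.append(x)
    return out

# A loud sample (0.9) is pulled down; a quiet one (0.05) is untouched
print(compress([0.9, 0.05]))
```

Raising the overall gain after compression like this is what makes a track sound uniformly loud – the essence of the loudness war.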
But the loudness war may have already peaked. Audiologists have become concerned that the prolonged loudness of new albums might cause hearing damage and musicians have highlighted the sound quality issue. An annual Dynamic Range Day has been organised to raise awareness, and the non-profit organisation Turn Me Up! was created to promote recordings with more dynamic range. Standards organisations have provided recommendations for how loudness and loudness range can be measured in broadcast content, as well as recommending appropriate ranges for both. Together, these developments have gone a long way towards establishing a truce in the loudness war.
But there’s another technology trend that shows no signs of slowing down. Auto-Tune, which a surprising number of today’s record producers use to correct the pitch of their singers, actually originated as a byproduct of the mining industry.
From 1976 through to 1989, Andy Hildebrand worked for the oil industry, interpreting seismic data. By sending sound waves into the ground, he could detect the reflections and map potential drill sites – in effect, using sound waves to find oil underground. Hildebrand, popularly known as “Dr Andy”, studied music composition at Rice University in Houston, Texas and used his knowledge in both areas to develop audio processing tools – the most famous of which was Auto-Tune.
At a dinner party, a guest challenged him to invent a tool that would help her sing in tune. Based on the phase vocoder, which covers a range of mathematical methods to manipulate the frequency representation of signals, Hildebrand devised techniques to analyse and process audio in musically relevant ways. Hildebrand’s company, Antares Audio Technologies, released Auto-Tune in late 1996.
Auto-Tune was intended to correct or disguise off-key vocals. It moves the pitch of a note to the nearest true semitone (the nearest musical interval in traditional octave-based Western tonal music), thus allowing the vocal parts to be tuned.
The original Auto-Tune had a speed parameter, which could be set between 0 and 400 milliseconds, that determined how quickly a note moved to the target pitch. Engineers soon realised that very fast settings could be used as an effect to distort vocals, making the voice leap from note to note while staying perfectly and unnaturally in tune all the while. It also gives the voice an artificial, synthesiser-like sound that can be appealing or irritating depending on your personal taste.
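Auto-Tune’s actual algorithm is proprietary, but the two ideas described above – snapping a detected pitch to the nearest equal-tempered semitone, and a speed setting that controls how quickly the output glides to that target – can be sketched in a few lines of Python. Every name and parameter here is hypothetical, for illustration only.

```python
import math

A4 = 440.0  # reference pitch in Hz (assuming standard concert pitch)

def nearest_semitone(freq_hz):
    """Snap a frequency to the nearest 12-tone equal-tempered semitone."""
    semis = round(12 * math.log2(freq_hz / A4))  # whole semitones from A4
    return A4 * 2 ** (semis / 12)

def retune(detected_hz, current_hz, frame_ms=1.0, speed_ms=20.0):
    """Glide the output pitch a fraction of the way to the snapped target
    each processing frame. A speed near 0 ms snaps instantly (the abrupt
    'Cher effect'); a longer speed gives gentle, barely audible correction.
    """
    target = nearest_semitone(detected_hz)
    step = min(frame_ms / max(speed_ms, frame_ms), 1.0)
    return current_hz + (target - current_hz) * step

# A flat A4 (430 Hz) snaps to 440 Hz; a sharp note near C5 snaps to ~523.25 Hz
print(nearest_semitone(430.0))
print(nearest_semitone(530.0))
```

With `speed_ms=0` the corrected pitch jumps straight to the target, which is the setting that produces the robotic stepping between notes; higher values let the voice slide naturally while still landing in tune.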
This unusual effect was the trademark sound of Cher’s December 1998 hit song, Believe, which was the first commercial recording to intentionally feature the audible side-effects of Auto-Tune.
As with many audio effects, engineers and performers found a creative use for Auto-Tune quite different from the intended one. As Hildebrand said: “I never figured anyone in their right mind would want to do that.” Yet Auto-Tune and competing pitch correction technologies, such as Celemony’s Melodyne, are now widely applied – in amateur and professional recordings, and across many genres – for both intended and unusual, artistic uses.
It became so prevalent, in fact, that these days it is expected almost universally on commercial pop music recordings. Critics say that it is a major reason why so many recordings sound the same nowadays (though the loudness war and overproduction in general are also big factors). And some young listeners who have grown up on auto-tuned music assume a singer lacks talent if they hear an unprocessed vocal track.
It has been lampooned in music and television and on social media, and Time magazine called it one of the “50 Worst Inventions”. But if anything, both its subtle, corrective use and its overt, creative use continue to grow. So if you can’t tell your Chris Brown from your Kanye West, it may be down to Dr Andy.