It's not your father's pop music ...

http://www.nytimes.com/2012/09/16/opinion/sunday/the-computer-as-music-critic.html?hp

Interesting stuff, if one is interested in the psycho-phenomena of music appreciation (OK, I did just make that phrase up). I’ve bolded/colored the bits that caught my eye the most. Over the past 55 years:

  1. Music is getting “simpler”
  2. (Not that interesting, since it’s been well documented, but) music is getting louder.
  3. Something called “Zipf’s Law” can depressingly predict how often the chord patterns, phrases and changes in a pop song will recur (I wonder if “standing out” by breaking Zipf’s Law is what makes a #1 hit vs. a #2?).

Hope you enjoy reading this as much as I did!

Gray Matter
The Computer as Music Critic
Mimi Leung
By JOAN SERRÀ and JOSEP LLUÍS ARCOS
Published: September 15, 2012

THANKS to advances in computing power, we can analyze music in radically new and different ways. Computers are still far from grasping some of the deep and often unexpected nuances that release our most intimate emotions. However, by processing vast amounts of raw data and performing unprecedented large-scale analyses beyond the reach of teams of human experts, they can provide valuable insight into some of the most basic aspects of musical discourse, including the evolution of popular music over the years. Has there been an evolution? Can we measure it? And if so, what do we observe?

In a recent article published in the journal Scientific Reports, we and our co-authors used computers to analyze 464,411 Western popular music recordings released between 1955 and 2010, including pop, rock, hip-hop, folk and funk. We first looked for static patterns characterizing the generic use of primary musical elements like pitch, timbre and loudness. We then measured a number of general trends for these elements over the years.

Common practice in the growing field of music information processing starts by cutting an audio signal into short slices — in our case the musical beat, which is the most relevant and recognizable temporal unit in music (the beat roughly corresponds to the periodic, sometimes unconscious foot-tapping of music listeners).

For each slice, our computers represented basic musical information with a series of numbers. For pitch, we computed the relative intensity of the notes present in every beat slice, thus accounting for the basic harmony, melody and chords. For timbre, what some call the “color” of a note, we measured the general waveform characteristics of each slice, thus accounting for the basic sonority of a given beat and the combinations of instruments and effects. And for loudness, we calculated the energy of each slice, accounting for sound volume or perceived intensity.
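The slice-by-slice bookkeeping described above can be sketched in a few lines. This is not the authors' actual pipeline; `slice_energy`, the toy signal and the beat boundaries below are all invented for illustration. It only shows the general idea of reducing each beat slice to a number, here using mean energy as a loudness proxy:

```python
import numpy as np

def slice_energy(signal, beat_boundaries):
    """Loudness proxy: mean energy of the samples in each beat slice.

    `signal` is a 1-D array of audio samples; `beat_boundaries` are the
    sample indices where each beat starts (as a beat tracker might report).
    """
    energies = []
    for start, end in zip(beat_boundaries[:-1], beat_boundaries[1:]):
        beat_slice = signal[start:end]
        energies.append(float(np.mean(beat_slice ** 2)))
    return energies

# Toy example: a quiet slice followed by a louder one.
signal = np.concatenate([0.1 * np.ones(4), 0.5 * np.ones(4)])
print(slice_energy(signal, [0, 4, 8]))  # energies rise with the volume
```

Pitch and timbre descriptors would be computed per slice in the same way, just with different numbers (note intensities and waveform characteristics instead of energy).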

We then constructed a music “vocabulary”: we assigned code words to slice-based numbers to generate a “text” that could represent the popular musical discourse of a given year or age. Doing so allowed us to discover static patterns by counting how many different code words appeared in a given year, how often they were used and which were the most common successions of code words at a given point in time.
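A minimal sketch of that counting step, assuming the audio has already been reduced to a small alphabet of code words (the chord-like labels here are made up; the study's real code words come from the quantized audio descriptors):

```python
from collections import Counter

# Hypothetical per-beat pitch code words for one year's "text".
codewords_1960 = ["C", "G", "C", "Am", "F", "C", "G", "C"]

counts = Counter(codewords_1960)
vocabulary_size = len(counts)           # how many distinct code words appeared
most_common = counts.most_common(1)[0]  # the dominant code word and its count

print(vocabulary_size, most_common)
```

Running the same counts for each year from 1955 to 2010 is what lets frequencies and rankings be compared across time.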

Interestingly, in creating a musical “vocabulary,” we found a well-known phenomenon common in written texts and many other domains: Zipf’s law, which predicts that the most frequent word in a text will appear twice as often as the next most frequent word, three times as often as the third most frequent, and so on. The same thing, we found, goes for music.

If we suppose that the most common note combination is used 100 times, the second most common combination will be used 50 times and the third 33 times. Importantly, we found that Zipf’s law held for each year’s vocabulary, from 1955 to 2010, with almost exactly the same “usage ordering” of code words every year. That suggests a general, static rule, one shared with linguistic texts and many other natural and artificial phenomena.
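The 100/50/33 example follows directly from the rank-frequency rule, where the combination of rank r appears about 1/r as often as the top one. A tiny sketch (the function name is mine, not the authors'):

```python
def zipf_counts(top_count, n_ranks):
    """Zipf's law: the item of rank r appears roughly top_count / r times."""
    return [round(top_count / r) for r in range(1, n_ranks + 1)]

print(zipf_counts(100, 3))  # -> [100, 50, 33], matching the article's example
```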

Beyond these static patterns, we also found three significant trends over time. Again using pitch code words, we counted the different transitions between note combinations and found that this number decreased over the decades. Our analysis also indicated that pop music’s variety of timbre has been decreasing since the 1960s, meaning that artists and composers tend to stick to the same sound qualities — in other words, instruments playing the same notes sound more similar than they once did. Finally, we found that recording levels had consistently increased since 1955, confirming a so-called race toward louder music.
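Counting "transitions between note combinations" amounts to counting bigrams of code words. A sketch with two invented "years" of pitch code words, where the second reuses far fewer distinct successions (the data is fabricated; only the counting technique is real):

```python
from collections import Counter

def transition_counts(codewords):
    """Count successions (bigrams) of code words within one year's 'text'."""
    return Counter(zip(codewords, codewords[1:]))

# Two made-up years: the later one cycles through fewer distinct transitions.
year_a = ["C", "F", "G", "Am", "C", "Dm", "G", "C"]
year_b = ["C", "G", "C", "G", "C", "G", "C", "G"]

print(len(transition_counts(year_a)), len(transition_counts(year_b)))
```

A downward trend in that number over the decades is what the article reports as music getting "simpler."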

DID your parents tell you that today’s music is getting poorer and too loud? Well, maybe they were right. But we will offer a different hypothesis: what if it is all about economy of resources? If today’s music still satisfies listeners the same way pop music did 50 years ago, then maybe its creators are simply better at crafting pleasing songs.

If music is a form of information and musicians are using fewer “words” to convey their message, maybe they’re getting more efficient.

Far from being in decline, perhaps pop music is on the verge of a golden age. Critics may disagree, and the qualitative debate may never be resolved. But the data, gleaned from massive music collections and computers, objective and detailed as they are, might just say otherwise.

Joan Serrà and Josep Lluís Arcos are researchers at IIIA-CSIC, the Artificial Intelligence Research Institute of the Spanish National Research Council.
A version of this op-ed appeared in print on September 16, 2012, on page SR12 of the New York edition with the headline: The Computer as Music Critic.

The early age of the electric guitar (and the “small” orchestra) can be equated with the early age of the piano and the growth of the large orchestra. At both times this led to a burgeoning of technique and creative composition, and then a steady decline in the freneticism of the art, leading to “tribute” concerts in the former case and tribute bands in the latter.
In fact there is a finite limit to the amount of creation, as the basic building blocks of the two ears (edit: I’ll keep ears but I meant eras) :laughing: are similar when orchestration and sound texture are taken into account, since these are bounded by local and temporary fashion.

In between times both the ordinary musician and the geniuses will be engaged as cover-band musicians.
And teenagers thinking that old jokes are their very own.

I look forward to those pointing out my pretentiousness. :mrgreen:

And teenagers thinking that old jokes are their very own

Nicely put C,

When my daughter was three years old I thought I had invented a game
with her called ‘Arms Up!’ Then I would tickle her.

How wrong, uninformed and ignorant can a person be.
That was/is me.

{’-’}

Nice article, thanks for sharing :slight_smile:

Another statistical model interpreted as qualitative when the data is merely quantitative.

Appeal to popularity fallacy, kind of? Is this where trillions of flies and their eating habits come in?

Exactly, McDonalds sells the most hamburgers every single year. In every taste test ever done, they are consistently last or near last. One is a quantitative analysis, one is a qualitative analysis. The vast majority of the time the two have little or no correlation. Using quantitative analysis to derive a qualitative value is like hitting your finger with a hammer to see if you like chocolate ice cream. Unfortunately, it’s how most of our polling results are represented these days.

If music is a form of information and musicians are using fewer “words” to convey their message, maybe they’re getting more efficient.

Far from being in decline, perhaps pop music is on the verge of a golden age.

It’s interesting to me that this quote from that article is nearly a pure manifestation and archetype of decadence.

I’m still wondering if they measured loudness in VUs, using Bob Katz’ specs, or R128. :stuck_out_tongue: