It’s all about that vibe. Anyone who has ever compiled a mix-tape, or a Spotify playlist for that matter, knows that compilations succeed when they carry a certain emotional quality across their songs.
That’s why the music data specialists at Gracenote have long been classifying the world’s music by moods and emotions. Only, Gracenote’s team hasn’t actually listened to each and every one of the 100 million individual song recordings in its database. Instead, it has taught computers to detect emotions, using machine listening and artificial intelligence (AI) to figure out whether a song is dreamy, sultry, or just plain sad.
“Machine learning is a real strategic edge for us,” said Gracenote’s GM of music Brian Hamilton during a recent interview.
Gracenote began its work on what it calls sonic mood classification about 10 years ago. Over time, that work has evolved, as more traditional algorithms were switched out for cutting-edge neural networks. And quietly, it has become one of the best examples of the music industry’s increasing reliance on artificial intelligence.
How computers learn that Gaga’s “Lovegame” is a “sexy stomper”
First things first: AI doesn’t know how you feel. “We don’t know which effect a musical work will have on an individual listener,” said Gracenote’s VP of research Markus Cremer during an interview with Variety. Instead, the company is trying to identify the intention of the musician as a kind of inherent emotional quality. In other words: It wants to teach computers which songs are truly sad, not which songs may make you feel blue because of some heartbreak in your teenage years.
Still, teaching computers to identify emotions in music is a bit like therapy: First, you name your feelings. Gracenote’s music team initially developed a taxonomy of more than 100 vibes and moods, and has since expanded that list to more than 400 such emotional qualities.
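The basic idea of label-then-classify can be illustrated with a toy sketch. Gracenote has not disclosed its actual model (the article notes it uses neural networks), so the snippet below stands in with a deliberately simple nearest-centroid classifier; the mood labels, feature names (tempo, energy, brightness), and numeric values are all invented for illustration:

```python
import math

# Hypothetical hand-labeled seed tracks, standing in for Gracenote's
# human-curated training data. Each track is a made-up feature vector
# (tempo, energy, brightness), each value normalized to 0..1.
SEED_TRACKS = {
    "dreamy":  [(0.30, 0.25, 0.60), (0.35, 0.20, 0.55)],
    "sultry":  [(0.45, 0.50, 0.30), (0.50, 0.55, 0.35)],
    "sad":     [(0.25, 0.15, 0.20), (0.20, 0.10, 0.25)],
    "stomper": [(0.85, 0.90, 0.70), (0.80, 0.95, 0.65)],
}

def centroid(vectors):
    """Average each feature dimension across the labeled examples."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {mood: centroid(vs) for mood, vs in SEED_TRACKS.items()}

def classify(features):
    """Return the mood whose centroid lies nearest in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda mood: dist(features, CENTROIDS[mood]))

# A high-tempo, high-energy track lands nearest the "stomper" centroid.
print(classify((0.9, 0.85, 0.7)))  # → stomper
```

A production system would extract hundreds of features directly from audio and feed them into a trained neural network, but the pipeline shape is the same: humans name the moods and label examples, then the machine generalizes from those labels.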