The end of original

David Jenkins
6 min read · Apr 10, 2019

As I walked to work this morning listening to a brand new album recommendation auto-playing from Spotify, I noted how remarkably similar it sounded to another album I’d been listening to recently. It wasn’t bad per se; it was familiar, but it most certainly wasn’t anything new in any musical sense.

Then it occurred to me to wonder whether Spotify was using algorithms to determine what I might want to listen to, perhaps through deep audio analysis of the waveforms within the track itself. A brief investigation confirmed they have an API to support such analysis (e.g. rhythm, pitch and timbre). So it should come as no real surprise if they were feeding this analysis into common predictive models to determine what I would like to hear, based on what it is that I like in a track. We know these techniques are in widespread use today, either as content-based or community-based recommendation systems, or a combination of the two.
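
For the curious, here is a minimal sketch of what pulling that analysis looks like through the spotipy Python client; the credentials and track ID below are placeholders, and the exact fields returned may vary.

```python
# A minimal sketch of fetching Spotify's audio features and audio analysis
# via the spotipy client. Credentials and the track ID are placeholders.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

track_id = "YOUR_TRACK_ID"

# Summary per-track features: tempo, energy, valence and so on.
features = sp.audio_features(tracks=[track_id])[0]
print(features["tempo"], features["energy"], features["valence"])

# Detailed analysis: per-segment pitch and timbre vectors, bars, beats.
analysis = sp.audio_analysis(track_id)
segment = analysis["segments"][0]
print(segment["pitches"])  # 12-dimensional chroma (pitch) vector
print(segment["timbre"])   # 12-dimensional timbre vector
```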

For the sake of the discussion, let’s put these models under ‘machine learning’. Under the banner of machine learning (or artificial intelligence) we have a number of categories of predictive model: classification and regression, supervised and unsupervised learning, and reinforcement learning. Also included here are neural networks of varying complexity.

Of these techniques, the most widely used today are the first three, all of them based on decades-old statistical models.

The main problem with these techniques when it comes to recommendation engines is that my music listening tastes simply don’t work that way. Having spent considerable time in practically every genre of popular music, I find it’s predominantly a random friend recommendation, a change in life perspective, an emotional connection with a particular lyric, a chance hearing on the radio or some other tangential influence that takes me off the beaten path to find something completely new: something unique that essentially sets my musical tastes onto a new exploratory trajectory.

Almost never has it been the result of a non-human recommendation engine, whether graph-based, supervised or any of the myriad other techniques. At best, you can think of the basic unsupervised classification and regression algorithms as the machine learning equivalent of the record store owner filing Herbie Hancock under ‘Acid Jazz’. Not helpful.
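
To make that analogy concrete, here is a toy sketch using scikit-learn’s k-means clustering; the feature values are invented for illustration. Every track, however idiosyncratic, gets forced into the nearest of a fixed number of bins.

```python
# Toy illustration of "filing Herbie Hancock under 'Acid Jazz'": k-means
# forces every track into the nearest of k predefined bins, with no label
# for "something new". The feature values below are invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [tempo (normalised), energy] for a handful of tracks.
tracks = np.array([
    [0.45, 0.30],  # mellow jazz ballad
    [0.50, 0.35],  # another mellow jazz track
    [0.85, 0.90],  # dance track
    [0.80, 0.95],  # another dance track
    [0.62, 0.70],  # genre-bending outlier (our "Herbie Hancock")
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tracks)
print(kmeans.labels_)  # the outlier is filed in whichever bin it is nearest to
```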

Now let’s take the concept of machine learning recommendation engines a step further and assume it is happening at scale, across a larger population of listeners, essentially within a closed community. Music that tends to sound similar and appeal to a broader group of listeners will get extra reinforcement and therefore additional discovery. Musicians will be rewarded with greater exposure through adherence to these algorithms (much like search engine optimisation), and essentially a self-reinforcing feedback loop is created, one based on historical data and recursive in nature.
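
A back-of-the-envelope simulation shows how quickly such a loop concentrates exposure; all the numbers here are arbitrary. If each round of recommendation is weighted by past play counts, early leaders compound while the outlier stays starved of discovery.

```python
# Toy simulation of a popularity feedback loop: each round a listener is
# recommended a track with probability proportional to its past play count,
# so early leads compound. All numbers are arbitrary.
import random

random.seed(0)
plays = [10, 9, 8, 1]  # initial play counts for four tracks

for _ in range(10_000):
    pick = random.choices(range(len(plays)), weights=plays)[0]
    plays[pick] += 1  # the play itself feeds back into future recommendations

print(plays)
# The outlier's share stays pinned near its starting 1/28 of all plays,
# while the three near-identical leaders absorb almost all the discovery.
```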

The widespread use of machine learning recommendation engines would do nothing but further facilitate this, as most of the popular algorithms are indeed recursive in operation. The availability of virtually unlimited compute cycles, relative simplicity and plentiful data to crunch have made these techniques far more prevalent than, say, reinforcement learning, where we start to see some real usefulness in operation. We can get to that later. Aside from this, the statistical techniques in question, even with some supervised learning, are quite simply past behaviour being used as a basic predictor of future behaviour through repetitive training.

Now let’s assume a machine learning algorithm (e.g. Flow Machines) is generating the music which is being recommended by another machine learning algorithm. The only loser here is originality. Namely yours.

Some of the inherent weaknesses of machine learning techniques are:

  • Reliance on the depth and quality of training data
  • The amount of available data in an ingestible format to support a diverse view
  • An unknown amount of practice required to achieve real-world results
  • Ascertaining a line of best fit through seemingly dispersed data
  • The requirement for structured practice and a clear set of rules
  • Having to know the desired outputs for given inputs in order to determine the rules.

These inherent weaknesses essentially mean that either outliers with little relationship to your current behaviours will likely be ignored, or a decision-tree or random-forest technique will make the case that this is a seldom-chosen branch and therefore seek to steer you down a more common path.
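
Here is a toy sketch of that second failure mode with scikit-learn’s decision tree; the data and labels are synthetic. With even mild pruning, the tree refuses to carve out a branch for the one rare preference and routes it down the common path.

```python
# Toy illustration: with min_samples_leaf pruning, a decision tree cannot
# justify a leaf for a single rare preference, so it predicts the majority
# outcome for that region instead. Data and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Feature: [tempo]; label: 1 = listened, 0 = skipped.
X = np.array([[0.20], [0.25], [0.30],          # slow tracks: listened
              [0.60], [0.65], [0.70], [0.80],  # fast tracks: skipped
              [0.72]])                         # one fast track: loved (the outlier)
y = np.array([1, 1, 1, 0, 0, 0, 0, 1])

tree = DecisionTreeClassifier(min_samples_leaf=3, random_state=0).fit(X, y)
print(tree.predict([[0.72]]))  # [0]: the seldom-chosen branch is pruned away,
                               # steering the listener back to the common path
```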

In the case of reinforcement learning, this is the technology we see driving truly autonomous behaviour such as drones and automated vehicles. Reinforcement learning is generally better at operating in the real world, being less dependent on historical data to provide context. In our scenario, the objective is music that I will actually listen to, and my actions affect the state and reward of the agent. These single- and multi-agent techniques go back as far as NASA’s intelligent launch agent systems of the 1980s and are largely outside the rest of this discussion. The area is evolving rapidly, and the techniques are not in widespread use today.
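
In recommendation terms, the simplest reinforcement-learning framing is a multi-armed bandit: the reward is whether I actually listen, and a slice of the recommendation budget is reserved for exploring tracks the agent knows nothing about. A minimal epsilon-greedy sketch, with a simulated listener and invented probabilities:

```python
# Minimal epsilon-greedy bandit sketch of RL-flavoured recommendation: the
# reward is "did the listener actually listen?", and epsilon reserves a
# fraction of recommendations for pure exploration. The listener here is
# simulated with invented probabilities.
import random

random.seed(1)
listen_prob = [0.30, 0.32, 0.31, 0.70]  # track 3: off the beaten path, but loved
n = len(listen_prob)
q = [0.0] * n      # estimated listen rate per track
count = [0] * n    # times each track has been recommended
epsilon = 0.1      # exploration budget

for _ in range(5_000):
    if random.random() < epsilon:
        a = random.randrange(n)                # explore: try anything
    else:
        a = max(range(n), key=lambda i: q[i])  # exploit: current best guess
    reward = 1.0 if random.random() < listen_prob[a] else 0.0
    count[a] += 1
    q[a] += (reward - q[a]) / count[a]         # incremental mean update

print([round(v, 2) for v in q])  # estimates drift toward listen_prob
print(count)                     # most plays end up on the loved outlier
```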

If we exclude reinforcement learning, the limitations of machine learning go way beyond this hypothetical music streaming analogy. You can apply the same basic principles to almost any area where machine learning is being used and make the same case. For automating repetitive tasks, making sense of a large dataset and handling the mundane, machine learning is excellent. For taking you off the beaten track into unknown terrain and figuring out what to do, it is next to useless. And if there isn’t a break-glass mechanism allowing a human to intervene when it comes to something life-critical, it is downright scary.

Ultimately, until reinforcement learning or other as-yet-unknown techniques come into play that are truly autonomous in their operation and able to perceive and act on real-world problems with limited historical context, we should proceed with caution wherever statistical machine learning algorithms are being used.

Human beings, with their vast ability to turn disparate unstructured data into meaning and to compound that ability through conscious and unconscious communication, are going to have the upper hand for a long time and are indispensable in the mix. It also helps that there are over seven billion of them available today, a point of human economics so frequently underplayed.

Until then, let the computers play AlphaGo, automate some mundane things and at best augment your creativity, but don’t expect them to break the mould or expand your choices. Manual intervention as curator of the real world for the machines is indispensable; letting them create the rules from their myopic, non-contextual and limited view of the universe is probably not wise at this point, definitely not original and most probably regressive.

Additional Reading

[1] “Does Deep Learning Actually Learn?”, Michael K. Spencer

[2] “How Spotify Saved the Music Industry (But Not Necessarily Musicians)”, Stephen J. Dubner

[3] “Multi-Agent Systems: Theory, Approaches and NASA Applications”, Michael G. Hinchey and Emil Vassev

[4] “Deliberate Practice and Performance in Music, Games, Sports, Education and Professions: A Meta-Analysis”, Macnamara, Hambrick and Oswald, Psychological Science, 2014
