Algorithmic culture and machine learning

What’s an algorithm?

An algorithm is a logical decision-making sequence. Tarleton Gillespie explains that for computer scientists, ‘algorithm refers specifically to the logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome.’

On media platforms like Facebook, Instagram, Netflix and Spotify, content-recommendation algorithms are the programmed decision-making processes that assemble and organise flows of content. The News Feed algorithm on Facebook, for example, selects and orders the stories in your feed.

‘Algorithm’ is often used to refer to a complex machine learning process, rather than a specific or singular formula. Algorithms learn. They are not stable sequences; rather, they constantly remodel themselves based on feedback. Initially this is accomplished by generating a training ‘model’ from a corpus of existing data which has in some way been certified, either by the designers or by past user practices. The model is the formalisation of a problem and its goal, articulated in computational terms. So algorithms are developed in concert with specific data sets – they are ‘trained’ on pre-established judgments and then ‘learn’ to turn those judgments into a functional interaction of variables, steps, and indicators.

Algorithms are then ‘tuned’ and ‘applied’. Improving an algorithm is rarely about redesigning it. Rather, designers “tune” an array of parameters and thresholds, each of which represents a tiny assessment or distinction. So for example, in a search, this might mean the weight given to a word based on where it appears in a webpage, or assigned when two words appear in proximity, or given to words that are categorically equivalent to the query term. These thresholds can be dialled up or down in the algorithm's calculation of which webpage has a score high enough to warrant ranking it among the results returned to the user.
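
The tuning process described above can be sketched in code. The weights, threshold, and scoring rules below are entirely hypothetical – a toy illustration, not any real search engine's ranking:

```python
# Toy ranking function with tunable parameters (hypothetical values).
# Each weight represents one of the tiny assessments described above.

WEIGHTS = {
    "in_title": 3.0,    # word appears in the page title
    "in_body": 1.0,     # word appears in the body text
    "proximity": 1.5,   # two query words appear near each other
}
RANK_THRESHOLD = 2.5    # minimum score to warrant ranking among the results

def score(page, query_terms):
    s = 0.0
    for term in query_terms:
        if term in page["title"]:
            s += WEIGHTS["in_title"]
        if term in page["body"]:
            s += WEIGHTS["in_body"]
    # crude stand-in for 'proximity': both query words occur in the body
    if len(query_terms) == 2 and all(t in page["body"] for t in query_terms):
        s += WEIGHTS["proximity"]
    return s

pages = [
    {"url": "a.com", "title": "cat photos", "body": "photos of cats"},
    {"url": "b.com", "title": "dogs", "body": "cat photos and more"},
]
query = ["cat", "photos"]
results = [p["url"] for p in pages if score(p, query) >= RANK_THRESHOLD]
```

‘Tuning’ here would mean dialling the values in WEIGHTS or RANK_THRESHOLD up or down and observing how the returned results change.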

What is algorithmic culture?

Tarleton Gillespie suggests that from a social and cultural point of view our concern with the 'algorithmic' is a critical engagement with the 'insertion of procedure into human knowledge and social experience.’

Algorithmic culture is the historical process through which computational processes are used to organise human culture. Ted Striphas argues that ‘over the last 30 years or so, human beings have been delegating the work of culture – the sorting, classifying and hierarchizing of people, places, objects and ideas – increasingly to computational processes.’ His definition reminds us that cultures are, in some fundamental ways, systems of judgment and decision making. Cultural values, preferences and tastes are all systems of judging ideas, objects, practices and performances: as good or bad, cool or uncool, pleasant or disgusting, and so on. Striphas’ point, then, is that over the past generation we have been building computational machines that can simulate these forms of judgment. This is remarkable in part because we have long understood culture, and its systems of judgment, as confined to the human experience.

Striphas defines algorithmic culture as ‘the use of computational processes to sort, classify, and hierarchise people, places, objects, and ideas, and also the habits of thought, conduct and expression that arise in relationship to those processes.’ It is important to catch the dynamic relationship Striphas is referring to here. He is pointing out that algorithmic culture involves both machines learning to make decisions about culture and humans learning to address those machines. Think of Netflix. Netflix engineers create algorithms that can learn to simulate human judgments about films and television, to predict which human users will like which films. This is one part of an algorithmic culture. The other important part, Striphas argues, is the process through which humans begin to address those algorithms. So, for instance, if film and television writers and producers know that Netflix uses algorithms to decide whether an idea for a film or television show will be popular, they will begin to ‘imagine’ the kinds of film and television they write in relation to how an algorithm might judge them. Using the example of the Netflix recommendation algorithm, Striphas argues that customised recommendations produce ‘more customer data which in turn produce more sophisticated recommendations, and so on, resulting – theoretically – in a closed commercial loop in which culture conforms to, more than it confronts, its users’.

Striphas helpfully places algorithmic culture in a longer history of using culture as a mechanism for control. He suggests that algorithmic culture 'rehabilitates' some of the ideas of the British cultural critic, Matthew Arnold, who wrote Culture and Anarchy in 1869. Arnold argued that in the face of increasing democratisation and power being given over to ordinary people in the nineteenth century, the ruling elites had to devise ways to maintain cultural dominance. Arnold argued this should be done by investing in institutions, such as schools, that would ‘train’ or ‘educate’ ordinary people into acceptable forms of culture. Later, public broadcasters, such as the BBC, also took up this role. Arnold defines culture as ‘a principle of authority to counteract the tendency to anarchy which seems to be threatening us’. By principle of authority he means that a selective tradition of norms, values, ideas, tastes and ways of life can be deployed to shape a society.

Striphas argues that this idea of using culture as an authoritative principle is the one 'that is chiefly operative in and around algorithmic culture'. Today, algorithms are used to 'order' culture, to drive out 'anarchy'. Media platforms like Facebook, Google, Netflix and Amazon present their algorithmically-generated feeds of content and recommendations as a direct expression of the popular will. But, in fact, the platforms are the new 'apostles' of culture. They play a powerful role in deciding 'the best that has been thought and said'.

Algorithmic culture is the creation of a 'new elite', powerful platforms that make the decisions which order public culture, but who do not disclose what is 'under the hood' of their decision-making processes. The public never knows how decisions are made, but we can assume they ultimately serve the strategic commercial interests of the platforms, rather than the public good. The platforms might claim to reflect the ‘popular will’, but that’s not a defensible claim when their whole decision making infrastructure is proprietary and not open to public scrutiny or accountability. Striphas argues that ‘what is at stake in algorithmic culture is the gradual abandonment of culture’s publicness and thus the emergence of a new breed of elite culture purporting to be its opposite.’

What is machine learning?

An algorithmic culture is one in which humans delegate the work of culture to machines. To understand how this culture works we need to know a bit about how machines make decisions. From there, we can begin to think critically about the differences between human and machine judgment, and what the consequences of machine judgment might be for our cultural world.

Machine learning is a complex and rapidly developing field of computer science. It is the process of developing algorithms that process data, learn from it, and make decisions or predictions. These algorithms are ‘trained’ and tested on particular data sets, and then used to classify, organise, and make decisions about new data.

Stephanie Yee and Tony Chu from r2d3 created this visual introduction to a classic machine learning approach. My suggestion is to work through this introduction.

A typical machine learning task is ‘classification’: sorting data and making distinctions between categories. For instance, you might train a machine to ‘classify’ houses as being in one city or another.

Classifications are made by making judgments about a range of dimensions in the data (these might be called features, predictors, or variables). For instance, the dimensions used to classify a home might be its price, its elevation above sea level, or its size.

In a typical approach to machine learning, a decision-making model is created and ‘trained’ using ‘training data’. After the model is built, it is ‘tested’ with previously unseen ‘test data’.
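
This train-then-test workflow can be sketched with a toy example, loosely modelled on the r2d3 introduction's task of classifying homes by city. The data and the ‘learning’ rule here are made up for illustration:

```python
# Toy classifier: 'train' an elevation threshold on labelled homes,
# then 'test' it on previously unseen data. (Illustrative data only.)

train = [  # (elevation in metres, city)
    (73, "SF"), (55, "SF"), (61, "SF"),
    (2, "NY"), (8, "NY"), (5, "NY"),
]
test = [(68, "SF"), (3, "NY"), (59, "SF")]  # unseen 'test data'

def mean(values):
    return sum(values) / len(values)

# 'Training': place the decision boundary midway between the class averages.
sf_mean = mean([e for e, city in train if city == "SF"])
ny_mean = mean([e for e, city in train if city == "NY"])
threshold = (sf_mean + ny_mean) / 2

def classify(elevation):
    return "SF" if elevation > threshold else "NY"

# 'Testing': accuracy on data the model has never seen.
accuracy = sum(classify(e) == city for e, city in test) / len(test)
```

Real systems learn far more elaborate models than a single threshold, but the pattern – fit a rule to training data, then measure it against held-out test data – is the same.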

There are two basic approaches: supervised and unsupervised.

  • Supervised approaches: the machine is given labelled examples to learn from, and is told which answers are right and wrong. Typically used for classification.
  • Unsupervised approaches: no examples are given; the machine generates its own features. Good for pattern identification, since the machine may see patterns humans do not.
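
A minimal sketch of both approaches, using made-up one-dimensional data. The supervised half learns a boundary from labelled examples; the unsupervised half (a simple 1-D k-means) groups unlabelled data without being told what the groups mean:

```python
# Supervised: labelled examples tell the machine which answers are right.
labelled = [(1.0, "short"), (1.2, "short"), (9.8, "long"), (10.3, "long")]
boundary = sum(x for x, _ in labelled) / len(labelled)  # crude learned rule

def predict(x):
    return "long" if x > boundary else "short"

# Unsupervised: no labels; the machine groups the data itself (1-D k-means).
data = [1.0, 1.2, 9.8, 10.3, 1.1, 10.0]
c1, c2 = min(data), max(data)  # initial cluster centres
for _ in range(10):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
# g1 and g2 are the clusters the machine found on its own; a human
# would still have to decide what those groupings mean.
```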

For a useful explainer on machine learning approaches, and examples of the types of problems machine learning tackles, check out this introduction by Jeroen Moons.
 

What is deep learning?

In a classic machine learning approach to ‘classification’ humans first create a labelled data set and articulate a decision-making process that they ‘teach’ the machine to replicate. This approach works well where there is an available ‘labelled’ data set and where humans can describe the decision-making sequence in a logical way.

These classic approaches are not useful, however, for making sense of extremely large, natural, and unlabelled data sets, or where the decision-making sequence is not easy to articulate. Deep learning approaches are used for exactly these situations: classification and pattern recognition where specifying the features in advance is difficult. The dominance of deep learning in recent years has been driven by an enormous increase in available data and computer processing power.

Think of the example of recognising handwriting. If all the letters of an alphabet are in the one typeface, then it is easy to specify the decision-making sequence. See the letter ‘A’ below.

The letter is divided up into 16 pixels. From there, a simple decision-making sequence can be articulated that would distinguish ‘A’ from all the other letters of the alphabet: if pixels 2, 3, 6, 7, 9, 12, 13 and 16 are highlighted, then it is an ‘A’.
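
That decision-making sequence is simple enough to write out directly as code (the pixel numbering follows the example above):

```python
# Rule-based letter recognition on a 4x4 (16-pixel) grid.
A_PIXELS = {2, 3, 6, 7, 9, 12, 13, 16}

def is_letter_a(highlighted):
    # It is an 'A' exactly when these pixels, and only these, are on.
    return set(highlighted) == A_PIXELS
```

This rule works only because every ‘A’ in the typeface is identical; it fails the moment the shape varies at all.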

But, imagine that instead of an A in this set typeface, you instead want a machine to recognise human handwriting. Each human writes the letter ‘A’ a bit differently, and most humans write it differently every time – depending on where in the word it is, how fast they are writing, if they are writing in lower case, upper case or cursive script.

[Image: examples of the handwritten letter ‘A’]

While a human can accurately recognise a handwritten ‘A’ when they see it, they could not articulate a reliable decision-making procedure that explains how they do it. Think of it like this: you can recognise an ‘A’, but you cannot explain exactly how your brain does it.

This is where ‘deep learning’ or ‘deep neural networks’ come in. Deep neural networks are a machine learning approach that does not require humans to specify the decision-making logic in advance. The basic idea is to find as many examples as possible (like millions of examples of human handwriting, or images, or recordings of songs) and give them to the network. The network looks over these numerous examples to discover the latent features within the data that can be used to group them into predefined categories. A large neural network can have many thousands of tuneable components (weights, connections, neurons).

In 2012, Google publicised the development of a neural network that had ‘basically invented the concept of a cat’.

Google explained:

Today’s machine learning technology takes significant work to adapt to new uses. For example, say we’re trying to build a system that can distinguish between pictures of cars and motorcycles. In the standard machine learning approach, we first have to collect tens of thousands of pictures that have already been labeled as “car” or “motorcycle”—what we call labeled data—to train the system. But labeling takes a lot of work, and there’s comparatively little labeled data out there. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. These algorithms work by building artificial neural networks, which loosely simulate neuronal (i.e., the brain’s) learning processes. Neural networks are very computationally costly, so to date, most networks used in machine learning have used only 1 to 10 million connections. But we suspected that by training much larger networks, we might achieve significantly better accuracy. So we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.

A critically important aspect of a deep learning approach is that the human user cannot know how the network configured its decision-making process. The human can only see the ‘input’ and ‘output’ layers. The Google engineers cannot explain how their network ‘learnt’ what a cat was; they can only see that the network outputs this ‘concept’.

Watch the two videos below for an explanation of neural networks.

In this first video Daniel Angus explains the basic ‘unit’ of a neural network: the perceptron.


A neural network, then, is made up of many layers of connected perceptrons – in large networks, millions or even billions of connections. The network ‘trains’ by adjusting the weightings on those connections, reacting to feedback on its outputs.
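
A single perceptron, and the feedback-driven adjustment of weightings Angus describes, can be sketched in a few lines. This toy example trains one perceptron to reproduce the logical AND function:

```python
# A perceptron: weighted inputs plus a bias, passed through a step function.
def perceptron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Training examples: the logical AND function.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):  # repeatedly loop over the examples
    for inputs, target in examples:
        error = target - perceptron(inputs, weights, bias)
        # feedback: nudge each weighting in proportion to the error
        weights = [w + rate * error * i for w, i in zip(weights, inputs)]
        bias += rate * error
```

A deep network chains many thousands of these units together, but the core mechanism – adjusting weightings in response to errors in the output – is the same.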

In this second video Daniel Angus explains how the neural network learns to classify data, identify patterns and make predictions using the examples of cups and songs in a playlist.

Here are some more examples of deep neural networks.

This deep neural network has learnt to write like a human.

This one has learnt to create images of birds based on written descriptions.

This neural network has learnt to take features from one image and incorporate them in another.

In each of these examples the network is accomplishing tasks that a human can do with their own brain, but could not specify as a step-by-step decision-making sequence.

Finally, let’s relate these deep learning approaches back to the specific media platforms we use every day.

In 2014 Sander Dieleman wrote a blog post about a deep learning approach he had developed at Spotify.

Dieleman’s experiment aimed to respond to one of the limitations of Spotify’s collaborative filtering approach. You can find out more about Spotify’s recommendation algorithms in this piece from The Verge.

In short, a collaborative filtering approach uses data from users’ listening habits and ratings to recommend songs. So, if User A and User B like many artists in common, this approach predicts that User A might like some of the songs User B likes that User A has not heard yet. One limitation of this approach is the ‘cold start problem’: how do you classify songs that no human has heard or rated yet? A collaborative approach needs many users to listen to a song before it can begin to determine patterns and make predictions about who might like it. Dieleman was inspired by deep neural networks that had learnt to identify features in photos; he thought a deep learning approach might be used to identify features in the songs themselves, without using any of the metadata attached to them (like artist name, genre, tags, or ratings). His prediction was that, over time, a deep neural network might learn to identify more and more fine-grained features in songs. Go check out his blog post; as you scroll down, you will see he offers examples.
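
The collaborative filtering idea, and its cold start problem, can be sketched with a toy user–song matrix. The users, songs and similarity measure here are made up for illustration; this is not Spotify's actual system:

```python
# Toy collaborative filtering: recommend songs a user hasn't heard,
# weighted by how similar other users' listening histories are.

listens = {
    "user_a": {"song1", "song2", "song3"},
    "user_b": {"song1", "song2", "song4"},
    "user_c": {"song5"},
}

def similarity(u, v):
    # Jaccard similarity: overlap between two listening histories.
    return len(listens[u] & listens[v]) / len(listens[u] | listens[v])

def recommend(user):
    scores = {}
    for other in listens:
        if other == user:
            continue
        for song in listens[other] - listens[user]:
            scores[song] = scores.get(song, 0) + similarity(user, other)
    return sorted(scores, key=scores.get, reverse=True)

# user_b overlaps heavily with user_a, so user_b's unheard songs rank high.
# A brand-new song appears in nobody's listening history, so recommend()
# can never surface it -- the cold start problem.
```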

At first the network can identify some basic features. For instance, it creates a filter that identifies ‘ambient’ songs. When you, as a human, play those ambient songs, you can immediately perceive the ‘common’ feature the network has picked out: ambient music is slow and dreamy. But, remember, it would be incredibly difficult to describe in advance exactly what the features of ambient music are.

As the network continues to learn, it can create more finely tuned filters. It begins to identify particular harmonies and chords, and eventually it can distinguish particular genres. Importantly, it groups songs together under a ‘filter’. It is up to the human to then label this filter with a description that makes sense. So, when the network produces ‘filter 37’, it is the human who labels that as ‘Chinese pop’. The network doesn’t know it is Chinese pop, just that it has identified shared features among those songs.

What makes this deep learning example useful to think about is this: Dieleman has created a machine that can classify music in ways that make sense to a human, but without drawing on any human-made labels or instructions to get started. The machine can accurately simulate and predict human musical tastes (like genres) just by analysing the sonic structure of songs. This is its version of ‘listening’ to music. It can learn to classify songs in the same way a human would, but through an entirely non-human machine process that is unintelligible to a human.

Nicholas Carah, Daniel Angus and Deborah Thomas