What is a platform?

What’s a platform?

I type ‘platform’ into Google. Ask a platform what a platform is. Google suggests a nearby bar, a train station, a Wikipedia entry. Let me try the Oxford Dictionary. The term platform emerges in the early 1500s. In basic terms it is a surface or area on which something may stand. That something might be a person making a speech or it might be a weapon like a cannon. There we go, to begin, a platform is infrastructural. A platform stands under something: a person, a weapon, software. A platform is something upon which other things happen. A stage upon which performances happen, hardware upon which software runs, a launch pad upon which a rocket is launched into outer space.

By the mid-1500s, the term platform also comes to mean something that enables other things to happen. It refers not just to a physical stage, but can also mean a plan or a scheme. To establish a platform was to create the basis for taking some action in the world. For instance, a collection of individuals might gather together and establish a political platform: a set of ideas and a plan for executing them. By the late twentieth century a platform referred to a computer system architecture, a type of machine and operating system, upon which software applications are run. So, a platform is infrastructure. It is something upon which things happen. Platforms facilitate and enable: public speech, rocket launches, software applications, political agreements.

Platforms are also governed by technical and social rules. Think of a public stage. It is governed by technical rules. The platform can only extend as far as its capacity to amplify speech. The reach of the platform is limited to those who can hear the speaker. It is governed by social rules. Agreements form about who is allowed to take the stage to speak, how long they can speak for, what they can speak about, and how people in the audience should act.

The past decade has seen the rise of ‘platform’ companies that are transforming the relationship between media and culture. The market shorthand for these platforms is the FANGs: Facebook, Amazon, Netflix and Google. Think of the list of social institutions and practices that have been irrevocably changed, and in some cases, destroyed by the emergence of the FANGs: journalism, television, advertising, shopping, finding your way around the city, politics, elections, dating, gambling and fitness. For a start.

Alongside the behemoths are an array of platforms that each in their own way are the site of major cultural disruption and innovation. Twitter is remaking the speed and quality of public speech. Instagram is reinventing photography, and along with it how we portray and imagine our lives and bodies. Snapchat is collapsing the boundary between the public and intimate. And, along with it, inventing an immersive augmented reality where we see our bodies and world overlaid with digital simulations. Tinder is changing the rituals of sex, love and dating. Fitbit is remodelling how we understand our bodies.

What do these corporations make?

The simple answer is that they engineer platforms that orchestrate the interplay between the calculative power of digital computing and the lived experience of humans. If the media institutions of the twentieth century were highly efficient factories for producing content, the FANGs make platforms. Of course, some of them, like Amazon and Netflix, also produce content, but their value proposition and their disruption comes from the platform.


The major platforms are a central part of a larger culture of media engineering. By media engineering, I mean the industrial process of configuring and linking together digital devices, sensors, interfaces, algorithms, and databases. Importantly, media engineering is an experimental technocultural process of shaping the interplay between digital computers and the creative nature of cultural life. What do I mean by the interplay between the calculative power of digital devices and the open-ended nature of lived experience?

This is the sound of a bio-reactive concert sponsored by Pepsi at the techno-culture festival SxSW. That’s right, a bio-reactive concert. What does that mean? It’s a concert where everyone in the audience is wearing a wristband that senses information about their body, and that bio-data is used to augment the concert experience in real time.

Carey Dunne, writing in Fast Company, explains:

At South by Southwest this year–at the Pepsi Bioreactive Concert, deejayed by A-Trak–event attendees donned Lightwave’s sensor-equipped wristbands, which measured their body temperature and audio and motion levels. This data was transmitted wirelessly to Lightwave’s system, which created interactive visuals that represent audience members as pixels, and which also triggered confetti and smoke machines and unlocked boozy prizes. Now, Lightwave has released an elaborate visualization of the party’s alcohol and dubstep-altered biodata, arranged in a song-by-song timeline of the concert. When A-Trak says “Show your energy,” the crowd delivers, with temperatures spiking. The moment the beat drops on Skrillex’s NRG, you see the biological effects of a crowd going wild. The hotter and sweatier they got, the more rewards they’d unlock.

This bio-reactive branded dance party is media engineering in action. We have living humans: making culture, enjoying themselves, affecting one another. And, we have material technologies that are sensing, calculating and augmenting that human experience. Those technologies are a combination of sensors, databases, algorithms, interfaces, screens, and speakers that together constitute a media platform. In this case: people dancing, wearing digital wristbands that can sense and transmit information about motion, audio and temperature; a DJ standing on a stage in a specially designed tent with decks and a PA. The sound goes out through the speakers. The speakers stimulate the bodies of the attendees. They move, they sweat, they scream and clap. The wristband senses their bodily expressions. That information is conveyed back to a database. Algorithms process the information. The information is visualised on an interface. The dancers can see their collective temperature and excitement, they can see the ‘scores’ of individual dancers. Algorithms decide to ‘unlock’ features for the crowd like confetti and free drinks.
In Pepsi’s bio-reactive concert we have a condensed version of the larger logic of media platforms.
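To make that loop concrete, here is a minimal sketch in code of the sense-store-process-actuate cycle just described. Everything in it is hypothetical – the field names, the weighting, the threshold – since Lightwave’s actual system is not public; it is meant only to show the shape of the logic.

```python
# A toy sketch of a bio-reactive feedback loop: sense, store, process, actuate.
# All names and numbers are hypothetical; Lightwave's real system is not public.
from dataclasses import dataclass
from statistics import mean

@dataclass
class WristbandReading:
    wearer_id: str
    temperature: float  # degrees Celsius
    motion: float       # accelerometer activity, arbitrary units
    audio: float        # loudness around the wearer

def crowd_energy(readings: list[WristbandReading]) -> float:
    """Collapse thousands of bodily signals into one 'energy' score."""
    return mean(r.temperature * 0.4 + r.motion * 0.4 + r.audio * 0.2
                for r in readings)

ENERGY_THRESHOLD = 30.0  # assumed tuning constant

def process_tick(readings, database, effects):
    database.append(readings)        # store: biodata accumulates
    energy = crowd_energy(readings)  # process: algorithmic scoring
    if energy > ENERGY_THRESHOLD:    # actuate: modulate the crowd
        effects.fire_confetti()
        effects.unlock_prize("free drink")
    return energy                    # fed back to the live visualisation
```

The point is the circuit: bodies produce data, algorithms score it, and the score flows back to stimulate the bodies that produced it.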

Media platforms like Facebook, Google, Instagram and Snapchat are all – in various ways – bio-reactive. They sense our living bodies, process information about them, react to them, stimulate them, and learn to modulate and control them. So then, in the present moment, what is a media platform? A platform is a computational infrastructure that shapes social acts. An infrastructure that senses, processes information about, and attempts to shape lived experience and living bodies. In The Culture of Connectivity, Jose van Dijck argues that social media are socio-technical. What does that mean? ‘Online sociality has increasingly become a coproduction of humans and machines’.

In the Pepsi dance tent at SxSW the kind of ‘sociality’ produced – that is, the shared sense of enjoyment and spectacle – is a co-production of humans dancing and DJing and machines sensing and augmenting the experience. Jose van Dijck calls this co-production of humans and machines ‘connectivity’. Media platforms engineer connectivity. According to her, we live in an age of ‘platform sociality’: a social situation where platforms shape social life. Earlier versions of the web were premised on a concept of networked sociality. Many individuals talking to each other on a relatively level playing field. The codes and protocols that governed interaction were relatively neutral, transparent and open to negotiation. This was possible, in part, because of the relatively small scale of early forms of online culture: a bulletin board, an email list, a chat room. The platform sociality of social media programs what users can do, how participation and content are ranked, judged and made visible. This way of thinking about media platforms prompts us to think not just about how they give us the capacity to speak and be heard, to express ourselves, but rather how they configure, engineer, and program social life. And, critically, whose interests drive that process.

Connectivity and connectedness are different. Connectedness is an interaction between users that generates shared ways of life, whereas connectivity is the use of data and calculative tools that program these social connections in ways that control them for commercial and political purposes. That is to say, connectedness builds community, connectivity makes money. This is also why I tend to say media platform rather than social media. The term social media suggests that these media are defined by the social participation they facilitate. The term media platforms shifts our focus in a productive direction: it puts the emphasis on the political economic project of engineering platform architecture.

 

Typewriters and self-trackers

This is Carl Schmitt, writing in 1918 about a fictional civilisation, the Buribunks, where every person has a personal typewriter.

Every Buribunk, regardless of sex, is obligated to keep a diary on every second of his or her life. These diaries are handed over on a daily basis and collated by district. A screening is done according to both a subject and a personal index. Then, while rigidly enforcing copyright for each individual entry, all entries of an erotic, demonic, satiric, political, and so on nature are subsumed accordingly, and writers are catalogued by district. Thanks to a precise system, the organisation of these entries in a card catalogue allows for immediate identification of relevant persons and their circumstances. …the diaries are presented in monthly reports to the chief of the Buribunk Department, who can in this manner continuously supervise the psychological evolution of his province and report to a central agency.

The Buribunks use the typewriter to reflect on themselves, and the information they record is archived in a database, where it is analysed by officials. They use the information to monitor the thoughts of individuals and the mood of particular regions, and as a kind of market research to create entertainment and culture that reflects the interests of Buribunk citizens. This story just flattens me.
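Schmitt’s card catalogue is, in effect, an indexed database with scheduled reports. As a playful sketch – with invented field names and entries, since Schmitt of course imagined index cards rather than code – the Buribunk pipeline might look like this:

```python
# A toy rendering of the Buribunk system: diaries in, indexes and reports out.
# All names and fields are invented for illustration.
from collections import defaultdict

entries = [
    {"author": "B-1", "district": 3, "subject": "erotic",    "text": "..."},
    {"author": "B-2", "district": 3, "subject": "political", "text": "..."},
]

by_subject = defaultdict(list)  # the 'subject index'
by_author = defaultdict(list)   # the 'personal index'
for e in entries:
    by_subject[e["subject"]].append(e)
    by_author[e["author"]].append(e)

def monthly_report(district: int) -> dict:
    """What the chief of the Buribunk Department reads: a district's mood."""
    counts = defaultdict(int)
    for e in entries:
        if e["district"] == district:
            counts[e["subject"]] += 1
    return dict(counts)

print(monthly_report(3))  # e.g. {'erotic': 1, 'political': 1}
```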

Here is Schmitt in 1918, looking at the typewriter. He sees a device that standardises written script, which enables vast amounts of information to be created, stored and processed. Schmitt sees in the typewriter the beginning of a civilisation where everyday life is extensively recorded. In 1918, Schmitt sees not just the smartphone, the wearable, the social media platform but also the kind of personhood and society that would go along with it. Here’s a critical point in his story: Buribunks are very liberal, they can write whatever they like in their diary. They can even write about how they hate being made to write a diary. But, they cannot not write in the diary. So, you can say whatever you like, but you cannot say nothing. You must make your thoughts, movements, moods and feelings visible to a larger technocultural system. Schmitt here envisions a mode of social control that doesn’t depend on limiting the specific ideas people express, but rather works by making their ideas visible so that they can be worked on and modulated.

I find this aspect of Buribunkdom startling, not because Schmitt is the only one to articulate a mode of control like this. Of course other critical thinkers in the twentieth century have too: Foucault, Deleuze, and Zizek to name some. I find it interesting because here in 1918, we have someone seeing personal media devices operating to manage the processes of representation and reconnaissance. Media technology is understood here as an instrument both for symbolic communication and for data collection. So, here we are one hundred years later and we are the Buribunks. We use our smartphones every day to record reams of information about our lived experience: our expressions, preferences, conversations, movements, mood, sleep patterns and so on. This information is catalogued in enormous commercial and state databases. The information is used to shape the culture that we are immersed in. And, importantly, this system works by granting us individual freedom to express ourselves, and places relatively few limits on what we can say. But, this system does demand our participation. Participation is a forced choice. Very few of us successfully navigate everyday life without leaving behind data about our movements, preferences, habits and so on.

Schmitt imagined a large government bureaucracy where information would be stored on index cards. It was a kind of vast analogue database. Of course, instead of this, we have a complex network of digital databases owned by major platforms: Facebook, Google, Amazon and Netflix. And, these databases function as enormous market research engines that capture and process information which is used to shape our cultural world. What Schmitt saw in the typewriter has congealed in the smartphone, the critical device in a culture organised around the project of the self: the work of reflecting on and expressing the self as a basic task of everyday life. And, super importantly, these tasks are shaped by the tools we use to accomplish them.

Here is a famous line from Nietzsche about his typewriter, which he experimented with in the late nineteenth century: ‘Our writing tools are also working on our thoughts’. What did he mean? As we use media technologies to reflect on and express ourselves, we become entangled with them. They shape the way we think, act and express ourselves. They shape the way we imagine the possibilities of expression, and we might say that in our own minds we begin to think like typewriters, or films, or smartphones. We think using their grammar, rhythms and conventions. So, with the typewriter and the smartphone, we might say that these devices ‘work on us’ in the sense that they facilitate a process through which we ‘monitor’ and record data about ourselves.

OK, so I’ve suggested here that in Schmitt’s early twentieth century we can see the pre-history of the smartphone. Well, Kate Crawford and her colleagues actually offer us a study of this history. They trace the genealogy of the devices and practices we have used to weigh ourselves, from the 19th century through to present-day self-trackers like FitBits. Think about how FitBit talks to us in its advertisements. The FitBit is presented as a radically new technology offering precise information about the ‘real’ state of our bodies. This knowledge will be useful to us, it will make us fitter, happier, more desirable and more productive.

What Crawford and co. remind us is that this set of claims is not all that new. Devices that ‘work on’ or shape our thoughts and feelings about our bodies have been around a long time.
Weight scales are one example. From the 19th century onwards both the cultural uses and technical capacities of weight scales have changed. In cultural terms, weight scales shifted from the doctor’s office, to the street, to the home. They gradually changed from a specialist medical device used only by doctors, to public entertainment, to a private everyday domestic discipline.

So, here’s a run through of Crawford’s narrative. Doctors began monitoring and recording patients’ weight toward the latter end of the 19th century, but this was not routine until the 20th century. In 1885, the public ‘penny scale’ was invented in Germany, and then appeared in the US in grocery and drug stores. Modelled after the grandfather clock, with a large dial, the scale invited the customer to step on the weighing plate and place a penny in the slot.

Some penny scales rang a bell when the weight was displayed, while others played popular songs like ‘The Anvil Chorus’ or ‘Oh Promise Me’. The machines would also dispense offerings to lure people into weighing themselves in public, such as pictures of movie stars, horoscopes, and gum and candy. Built-in games such as Guess-Your-Weight would return your penny if you accurately placed the pointer at your weight before measurement. However, the extraction of money in exchange for data was the prime aim of the manufacturers; ‘It’s like tapping a gold mine’, claimed the Mills Novelty Company brochure in 1932.

The domestic weight scale first appeared in 1913. A smaller, more affordable device for the home, it allowed self-measurement in private, offsetting the embarrassment of publicly recording one’s weight with attendant noises and songs. The original weight scale is an analogue or technical form of media – our body weight makes an impression on a mechanism that is calibrated to record it on the scale. As a media device it collects and presents information to us, but it is also important to consider how it is configured in broader social and identity-making processes. There is a gendered history of these devices.

Public weight scales were initially marketed to men, but in the 1920s women started to be encouraged to diet. Weight scales were presented to women as a private bathroom device to monitor their bodies, thus becoming a tool to ‘know’ and ‘manage’ ourselves. Here’s Crawford’s account of this:

Tracking one’s weight via the bathroom scale was not only about weight management - as early as the 1890s it assumed a form of self-knowledge. This continues today where value and self-worth can be attached to the number of pounds weighed.

Crawford refers to a study where a participant in an eating disorders group was asked how she feels if she does not weigh herself; ‘I don’t feel any way until I know the number on the scale. The numbers tell me how to feel’. That’s basically Nietzsche’s claim about the typewriter – the device is working on my thoughts. The numbers tell me how I feel. Similar claims are made around self-tracking devices. There are accounts of self-tracking and internalized surveillance taken to an extreme by people suffering from eating disorders.

So, the history of the weight scale reminds us that tracking devices are agents in shifting the process of knowing and controlling bodies, both individually and collectively, as they normalize and sometimes antagonize human bodies. The FitBit turns the body’s movement into digital data: daily steps, distance travelled, calories burned, sleep quality, and so on. This is then fed into a ‘finely tuned algorithm’ that looks for ‘motion patterns’. There are two things at work in this sequence from the personal weight scale to the FitBit. One, a moral epistemology: knowing one’s weight and body habits can lead to an improved, possibly ideal self and life. And, two, an economic imperative: penny scales were significant money-making enterprises and there was a strong profit motive in encouraging people to weigh themselves ‘often’. With the penny scale the exchange of money for data is clear: spend a penny, receive a datum. But the collection of data is also private, going no further unless the user willingly shares it with others. This is less clear with trackable devices. The user can reflect on their own data, but that data will always be shared with the device maker and a range of unknown parties. What is then done with that data is not transparent and is ultimately at the discretion of the company. Consumer data are mediated by a smartphone app or an online interface, and the user never sees how their data is aggregated, analysed, sold, or repurposed, nor do they get to make active decisions about how that data is used.

As the tagline for an advertisement for the wearable Microsoft Band states, ‘this device can know me better than I know myself, and can help me be a better human.’ So then, Crawford argues, ‘the wearable and the weight scale offer the promise of agency through mediated self-knowledge, within rhetorics of normative control and becoming one’s best self.’ On one hand, the ability to ‘know more through data’ can be experienced as pleasurable and powerful, a promise that is evident in this advertisement for the Microsoft Band.

OK, and on and on it goes. Ugh, corporate brand vomit. But, also, here’s the basic claim Microsoft are making: buy this device, it will work on you! It will change you. What wearables like the FitBit achieve that the personal weight scale could not is the real-time aggregation of data about all bodies, and the feeding back of this information to each user via customised screens. Again here, Schmitt’s Buribunks had paper index cards and human-scale analysis of expressions. The FitBit is real-time biological analysis of millions of bodies. Here’s Crawford:
Statistical comparisons between bodies are necessarily contingent on a set of data points. Users get a personalized report, yet the system around them is designed for mass collection and analysis, so the user becomes ‘a body amidst other tracked bodies’. So ‘the user only gets to see their individual behaviour compared to a norm, a speck in the larger sea of data.’

Drawing on the work of Julie E Cohen, Crawford argues that this functions as a ‘bio-political public domain’… designed to ‘assimilate individual data profiles within larger patterns and nudge individual choices and preferences in directions that align with those patterns.’ So while there is a strong rhetoric of participation and inclusion, there is a ‘near-complete lack of transparency regarding algorithms, outputs and uses of personal information’. And, this is the crucial point. Mark Andrejevic calls this the ‘big data divide’: the difference between individuals who record their data, and the corporations who collect and process that data.
The lesson then is to think about the evolution of media devices for collecting, storing, processing, and disseminating information over a hundred-year period, as well as the individual and social facets of digital media.

The FitBit and similar tracking devices, which collect data about us and present it back to us as customised and individualised media content, become part of a much larger social system of control in several ways. The data that we give and view at an individual level is logged in databases that operate at population level. These devices are implicated in a cultural process based on self-monitoring and self-improvement. They work on our thoughts. And, importantly, these devices normalise data-driven participation and computation in our everyday lives. They become a foundational model for how we do our lives, bodies and identities.
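A crude sketch can capture the asymmetry Crawford and Andrejevic describe: the user sees one number against a norm, while the platform computes over every tracked body at once. Nothing below is Fitbit’s actual API or algorithm; the names and numbers are invented for illustration.

```python
# Illustrative only: the individual report vs. population-scale analysis.
from statistics import mean

# Platform-side store: every tracked body at once (toy data).
population_steps = {"u1": 4200, "u2": 11000, "u3": 7600, "u4": 9400}

def personal_report(user_id: str) -> str:
    """What the user sees: their speck against the norm."""
    norm = mean(population_steps.values())
    steps = population_steps[user_id]
    return f"You took {steps} steps; the average user took {norm:.0f}."

def platform_view() -> dict:
    """What the platform sees: the whole sea of data,
    open to aggregation, analysis, sale, or repurposing."""
    return population_steps

print(personal_report("u1"))  # You took 4200 steps; the average user took 8050.
```

The divide is structural: `personal_report` is the only window the user ever gets, while `platform_view` is the product.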

 

Cybernetics

On the night of October 15, 1940, the German air force sent 236 bombers to London. ‘British defences were dismal’. They ‘managed to destroy only two planes’. London, the heart of the British Empire, was under siege.

In Rise of the Machines Thomas Rid explains this moment’s historical significance. ‘For the first time in history, one state had taken an entire military campaign to the skies to break another state’s will to resist’. Survival for Britain, and the Allies, would depend on their ability to engineer a way of shooting those German bombers out of the sky.

This problem triggered a ‘veritable explosion of scientific and industrial research’ which would result in ‘new thinking machines’ capable of making ‘autonomous decisions of life and death’.

Rid puts it this way:

Engineers often used duck shooting to explain the challenge of anticipating the position of a target. The experienced hunter sees the flying duck, his eyes send the visual information through nerves to the brain, the hunter’s brain computes the appropriate position for the rifle, and his arms adjust the rifle’s position, even ‘leading’ the target by predicting the duck’s flight path. The split-second process ends with a trigger pull. The shooter’s movements mimic an engineered system: the hunter is network, computer, and actuator in one. Replace the bird with a faraway and fast enemy aircraft and the hunter with an antiaircraft battery, and doing the work of eyes, brain, and arms becomes a major engineering challenge.

This challenge involved configuring the interplay between human and machine (to allow, for instance, a human operator to move an enormous weapon precisely at speed); engineering radar that could detect a plane before a human eye could see it; creating a network that would relay information from a radar to a computational device; building a computer that could predict the path of an enemy plane through the sky and predict where to fire; and constructing the apparatus that could transfer the prediction into the operation of a weapon in real time.

What was going on here? The creation, by a large network of scientists and engineers within a military-industrial system, of machines that could sense, learn, calculate and predict. Machines that could exert control over the material world via ongoing cycles of feedback and learning. In June 1944, the Germans launched V-1 rockets across the English Channel toward London. The V-1 was ‘a terrifying new German weapon: an entire unmanned aircraft that would dive into its target, not simply drop a bomb’. The first cruise missile.

At this moment, Rid explains

A shrewd, high-tech system lay in wait on the other side of the English Channel, ready to intercept the robot intruders. As the low-flying buzz bombs cruised over the rough waves of the Atlantic coast, invisible and much faster pulses of microwaves gently touched each drone’s skin, 1707 times per second. These microwaves set in motion a complex feedback loop that would rip many of the approaching unmanned flying vehicles out of the sky.

The Allies had engineered a ‘cybernetic’ system. A combination of technological devices that could sense, calculate, predict and execute decisions. These devices included primitive digital computers.

Following the war, the mathematician Norbert Wiener was a key figure in popularising the idea of ‘cybernetics’. There are three critical concepts in ‘cybernetics’: feedback, learning and control. Cybernetics comes from a Greek word which means ‘to steer’. It articulates a process of exercising control by learning from feedback. A key feature of humans is that we can learn and adjust by using our senses and decision-making capacities. Cybernetics was the effort to construct ‘intelligent machines’ that could also learn. Wiener would often imply that he was central to solving the ‘prediction’ problem during the war.

It is true that Wiener was one of many scientists funded to undertake experiments, and Wiener did propose a mathematical model for predicting the path of an enemy aircraft. He did not, however, ‘solve’ the prediction problem; his model didn’t work. The lesson here is that complex technological systems are the result of a network of actors. There is rarely any one individual genius who ‘invents’ them. Jennifer Light makes this point emphatically in her study ‘When Computers Were Women’, explaining that while two male engineers are often credited with automating ballistics computations during the war, critical to the effort were ‘nearly 200 young women’ who worked as human ‘computers’, performing ballistics calculations. The first computers were hundreds of female mathematicians.

In 1948, Wiener coined and popularised the term ‘cybernetics’ as the science of ‘control and communication in the animal and the machine’. In short, cybernetics views the state of any system – biological, informational, economic, and political – in terms of the regulation of information. A cybernetic device can sense or collect information, and be programmed to respond to that information. In the case of wartime anti-aircraft defence, a radar detects movement, it tracks an enemy plane across the sky. Information is relayed to a primitive computer, which calculates aircraft trajectory. This calculation is passed on to an anti-aircraft weapon, which fires at the enemy aircraft.
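The loop can almost be written as pseudocode: sense, estimate, predict, act, then observe the error and correct. Here is a minimal sketch of the ‘lead the target’ calculation, assuming straight-line flight at constant velocity – the wartime predictors were of course electromechanical, not software.

```python
# A minimal cybernetic loop: sense (radar), predict (computer), act (gun).
# Assumes straight-line flight at constant velocity; purely illustrative.

def predict_intercept(p1, p2, dt, shell_time):
    """Estimate velocity from two radar fixes, then lead the target."""
    vx = (p2[0] - p1[0]) / dt
    vy = (p2[1] - p1[1]) / dt
    # Aim where the aircraft will be when the shell arrives, not where it is:
    return (p2[0] + vx * shell_time, p2[1] + vy * shell_time)

# Two radar observations one second apart, and a 3-second shell flight time:
aim_point = predict_intercept(p1=(1000, 2000), p2=(1100, 2000),
                              dt=1.0, shell_time=3.0)
print(aim_point)  # (1400.0, 2000.0): fire ahead of the plane

# Feedback closes the loop: the burst is observed, the miss distance is
# measured, and the next prediction is corrected. Control through feedback.
```

The duck hunter in Rid’s analogy performs exactly this calculation in nerve and muscle; the cybernetic system distributes it across radar, computer and gun.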

Wiener is a significant figure in the story of cybernetics because he articulated how these computational technologies would reshape industry, society and culture. In his 1950 book, The Human Use of Human Beings: Cybernetics and Society, Wiener made an important historical move by placing ‘cybernetics’ at the heart of what he called the second industrial revolution.

The first industrial revolution brought about new forms of energy, such as steam and electricity, created by machines. Harnessing these energy sources enabled the production of goods on a scale far beyond what humans on their own could make. Wiener claims that in the first industrial revolution the machine acted as an ‘alternative to human muscle’. For example, one of the first applications of the steam engine was to pump water out of mines, a job that had previously been done by humans and animals. Many changes resulted from replacing human muscle with machines – factories emerged, urban labour forces created mass cities, and the demand for raw materials stimulated the growth of plantations and mines in the colonies, and hence rail and shipping networks for transportation.

For Wiener, machines ‘stimulated’ the development of an entire industrial and social system. In the second industrial revolution a new kind of machine emerged – the computer – which extended the machine into the realm of communication. This is how he put it: ‘for a large range of computation work, [machines] have shown themselves much faster and more accurate than the human computer’. Wiener thought that computers would eventually communicate with and modulate a range of instruments. These instruments would act as ‘sense organs’. They would feed information back to the computer, so that it could make decisions and learn about its environment. Computers in factories would be programmed to generate and collect data to give feedback on production processes.

Thomas Rid makes the point that ‘Wiener didn’t change what the military did, the military changed what Wiener did’. What does that mean? He means that Wiener’s peripheral involvement in wartime efforts to create machines that could sense, calculate, predict and execute decisions led him to perceive the development of a new kind of society. A society organised around devices and systems that were cybernetic – able to control their environment through processes of feedback and learning. Able to collect, store and process information in ways that were once confined to the human.

Wiener and the other mid-century engineers, scientists and thinkers involved in the development of cybernetics imagined how media technologies would usher in complicated relationships between human forms of sense-making and decision-making and the capacity of computational devices to simulate, augment and even exceed those human capacities.

Media are symbolic, technical, digital

Media are technologies that organise human life and experience. They symbolically represent reality and they also collect information about reality.

How did they come to do this? First up, we often think of digital media as ‘new’. We register this most clearly in the advertising and corporate rhetoric of technology companies. Go and trace the history of Apple advertisements and product launches from their Macintosh personal computer in 1984 through to their iPod and iPhone launches. Listen to Mark Zuckerberg from Facebook when he tells us about the artificial intelligence he built, named Jarvis, that runs his house. Or, Facebook engineers when they tell us that they want to build a brain machine interface that will enable us to type from our brain. Or, Jeff Bezos from Amazon when he tells us that his AI Alexa will run our homes by listening in to our conversations.

Over and over the digital media industries present their technologies within a narrative of straightforward, linear progress. The next technology we build will be better than the last. And, implied in that sense of better, is what we might call a ‘technological imaginary’.

If we build all the cool gadgets, all the human problems will go away!

Here, I think of John Durham-Peters in Speaking into the Air: ‘’Communication’ whatever it might mean, is not a matter of improved wiring or freer self-disclosure but involves a permanent kink in the human condition. That we can never communicate like angels is a tragic fact, but also a blessed one.’

The kinks of human experience cannot be solved with technologies. And, new technologies are not ‘better’ than the last ones. As in, they don’t automatically make for a ‘better’ human experience. One way we can think about media technologies, then, is how they emerge out of the experimental effort of humans to exercise power in the world. This is not a straightforward process.

That means we should listen carefully to Apple, Facebook and Google when they tell us what they are experimenting with, and where they think they are headed – not because this enables us to ‘narrate’ the development of technology, but because it offers us a way of thinking carefully about the kind of human experience they are imagining and creating.

With that in mind, let’s turn back to Kittler, who takes this ‘genealogical’ approach to a history of media technology. Genealogy is a method inspired by Nietzsche and Foucault, a way of doing history that pays attention to how material technologies emerge as part of historically-conditioned discourses, social formations and modes of power.

Kittler identifies three historically significant media systems.

Symbolic

The first is a symbolic media system. In this system, writing, physical speech sounds, or musical tones are transposed by a human into visual symbols, which are then re-translated by users into a sound, word, or idea in the mind. Think the alphabet, musical notation, or paintings and drawings. These systems work because the human users create and follow rules. Alphabets and musical notation have technical specifications that the users have to follow if they are going to work. For example, the alphabet is a media system, with visible symbols and rules for how sounds in speech are to be captured, stored, and processed. This symbolic media system dominated until about 1900 and allowed for the development of new forms of social, cultural, and political life.

Technical

A second system, technical media, emerged during the 19th century and into the 20th century. While writing transposed physical sensations into symbols, technical media could capture physical sensations directly as impressions on a medium. The difference between symbolic and technical is crucial here. With a symbolic system the physical sensation – sound or light – has to pass through the human body to be transposed into a symbol. The human ear hears a word, and transposes it into letters from an alphabet. With a technical system that physical sensation is recorded directly as an impression in another medium, without the human body having to turn it into a symbol. Photographs, which capture light, and phonographs, which capture sound, are the key technologies. Both emerge in the 19th century, and become mass technologies in the early 20th century. Photography is a process where film records a physical impression of light on a medium. Phonography records the physical impression of sound on a record or tape. Those impressions can then be ‘converted’ back via the medium into an accurate representation of the original image or sound.

The first phase of the age of technical media was the capacity to ‘capture’ and ‘store’ images and sound, while the second phase was the transmission of those images and sounds over distance, via radio and later, television. This system is analogue. Think about a vinyl LP where the physical grooves in the record are ‘impressions’ of a sound that are read and converted into an audio signal you can hear via the technical device of the record player. Analogue devices, such as record players and tapes, read the media by scanning the physical data off the device.

Digital

By the end of the twentieth century, the age of technical media gave way to our present epoch – the age of digital media. Rather than record data as a ‘physical’ trace, a digital system converts all data into a numerical system. The really important point Kittler makes here is that the digital system collapses all ‘senses’ into one medium. This enables media to calculate, process, and simulate. In the mid-1980s, Kittler predicts that sooner or later we will be hooked into an information channel that can be used for any medium. Movies, music, phone calls, and mail will reach households via fibre optic cables. Once any medium can be translated into 1s and 0s, and passed through the one infrastructure of digital computers and networks, the capacity of media to experiment with reality dramatically explodes in scale.

With digital media the physical properties of the input data are converted into numbers. Media processes are coded into the symbolic realm of mathematics, which can then be ‘immediately subjected to the mathematical processes of addition, subtraction, multiplication, and division through algorithms contained within software.’
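A toy example makes the point: once sound is sampled into numbers, editing the medium really is just arithmetic. The sample rate and function names below are invented for illustration.

```python
# Digitisation in miniature: a physical wave becomes a list of numbers,
# and working on the medium becomes arithmetic on those numbers.
import math

SAMPLE_RATE = 8000  # samples per second (assumed, for illustration)

def sample_tone(freq_hz: float, seconds: float) -> list[float]:
    """Capture a pure tone as a sequence of numerical samples."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]

tone = sample_tone(440.0, 0.01)   # an 'A' note, now just numbers
quieter = [s / 2 for s in tone]   # division = volume control
mixed = [a + b for a, b in        # addition = mixing two sounds
         zip(tone, sample_tone(660.0, 0.01))]

print(tone[:3])  # the 'sound' is numerals, open to any algorithm
```

The same numbers could just as easily have come from an image or a text file; that collapse into one numerical medium is Kittler’s point.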

Think of the present moment.

Our bodies are permanently tethered to, and integrated with, digital devices like smartphones. These devices convert human experience into data. They store, they calculate, they predict as much as they represent. Our imagination is entangled with the data-driven, algorithmic flows of images, sounds and texts streaming via their screens. The genealogy of this kind of human experience can be traced back, at least, to the mid-nineteenth century.

In the mid-1800s the technologies for storing reality emerge. The phonograph stores sound, the photograph stores light, the typewriter standardises the storage of alphabet, numbers and code. From the early 1900s, the technologies for electronic transmission of sound, light and code over distance emerge in the form of telegraph, radio and television. In the mid-1900s, around the schema of the typewriter, the capacity of media to calculate and predict emerges. For Kittler, Turing’s mathematical definition of computability, and the codebreaking machine he built during World War II, mark the moment where media become first and foremost calculative devices for intervening in reality.

 

Media calculate

Imagine this.

Wars have been fought for as long as humans have sought to get together in groups and occupy territory. To mark out and defend a space, and within that space to construct a kind of technocultural habitat. An atmosphere in which to live.

How to represent that territory though?

At first, territory is marked and defined in lived practices. The understanding of where territory begins and ends is carried in the living bodies, brains and practices of people. Features in a landscape are known to inhabitants. Over time, humans develop symbolic methods for drawing territory. Think of a map. A map represents territory. It draws in the features, it marks borders.

In warfare, from ancient times through to the early twentieth century, the commander of an army sees the territory they are invading or defending as it appears to their own two eyes, with their feet on the ground, or as it is drawn on a handmade map. Imagine the moment, then, when territory is first seen, like a bird, out of an airplane.

In his Optical Media lectures the German media theorist Friedrich Kittler takes up this moment. I’m riffing here a bit on his account. In 1914 the French and Germans are engaged in trench warfare on the Western front. Trenches are dug into the ground, partly so that the opposing army cannot see the enemy lines clearly.

Imagine though that you can see the trenches like a bird, fly over them and see their exact formation in the landscape. This happens for the first time in 1914. Aircraft are used to undertake reconnaissance of enemy lines. In August 1914, the French led a successful counter-strike on the Germans using photographic records made by reconnaissance aircraft. Coupling an aircraft with a camera enabled armies to view territory from the sky, to disclose invisible soldiers, camouflaged artillery positions, and unnoticed rearward connections to the enemy.

The Germans urgently needed to respond. What emerged was an experimental interplay between aircraft engineers, photographers and cinematographers. We often think of cinema’s role in the two World Wars as that of a propaganda machine. Cinema was used to induce in mass populations fear of the enemy and support for the national war effort. But, here at the same time, we can see that the camera was always also an instrument of reconnaissance, surveillance and data collection.

Kittler explains to us that in 1916, one of the founding directors of the German film industry, Oskar Messter, who had been charged by the government with filming propaganda newsreels on the war front, ‘constructed and patented his target practice device for the detection of deviations by means of photographic records.’ Simply, he mounted a camera in the machine gun turret of a plane, and used a clock mechanism to make the camera take an automatic sequence of photos of the ground below. The planes would fly the same route day after day, taking the same sequence of photos, in order to produce detailed surveys of changes in enemy lines. The fact that the camera was mounted in the gun turret of the plane is a crucial detail.
Kittler writes, ‘Messter’s ingenious construction… could only be improved by combining shooting and filming, serial death, and serial photography, into a single act’.

What does he mean?

Well, of course, in warfare Kittler is pre-empting autonomous weapons like drones. Weapons that can ‘see’ the target and then shoot it. But, beyond the specific illustration of warfare, there is a fundamental conceptual point being made here about what media are. The camera mounted in the plane is a device that collects and stores information. The plane goes up, collects footage, comes back down. A photographer develops the film in the camera. Army commanders view the photographs, compare them to previous reconnaissance. They make a plan on how to attack enemy lines.

That’s a relatively convoluted process. What if the camera in the plane were linked to some kind of device that could ‘read’ the image in real time and then shoot? That is, what if the media device didn’t just collect and store information, but could also process that information and execute an action? Think here of the line of technological development that stretches from these first camera-equipped planes in 1914 to the autonomous drones used by the US in warfare today. This process of development is what began to unfold during World War I.

Another German filmmaker drafted into the war effort, Guido Seeber, constructed a machine gun sight for fighter planes, which was combined with a small film camera that shot frames whenever the gun fired. Filming and flying coincide. World War I produced ‘a new kind of film director’. A film director whose visual perception had been ‘technologically altered’. That is, once you’ve seen landscape, territory, human habitat from the bird’s eye view, you never forget it; you imagine human territory differently.

The bird’s eye shot we are familiar with as viewers of film and television was created in the reconnaissance flights above German and French lines in 1914. Kittler explains that the ‘experimental and entertainment films made with a camera that was’ now mobile and airborne ‘converted the perceptual world of World War I’ – its reconnaissance vision – into ‘mass entertainment’. Kittler shows us the technical role that cameras and cinema played in warfare. There is widespread awareness of the use of cinema as war propaganda during the twentieth century, but less attention to its use as a reconnaissance tool. Media technologies, like film, develop not just out of cultural or artistic interest but as part of the technical requirements of other industries and activities.

As an aside, this historical description of the use of planes for reconnaissance in World War I reminds me of Jesse LeCavalier’s account of Walmart in The Rule of Logistics. LeCavalier explains that Walmart founder Sam Walton would use a plane to fly around the outskirts of regional towns and cities to scout for Walmart locations. He was looking for the patterns of urban expansion, in order to find land ahead of time for future Walmart stores. This was from the 1970s. So, you can see here that the logic of using aircraft and cameras for surveillance extends beyond military uses. By the 1980s, Walmart became one of the first retailers to invest in their own satellites, which they could use to manage their distribution network of stores, trucks and warehouses, but also to scout for new locations and to track urban expansion – in the way that we might do now on Google Maps.

For Kittler, war is a critical incubator of new media technologies. The relationship between media as promotional or entertainment technology, and as reconnaissance technologies, is a dynamic one. Kittler quips that ‘all media are a misuse of military equipment’. By which he means that many aspects of our everyday media culture, are products of the military-industrial complex. The ‘perspective’ created in the reconnaissance flights of World War I inform the cinematic narratives and images on our screens. He describes cinemas as ‘churches of state propaganda’ that praised ‘war technology and electrification’.

This argument is a familiar one. Think how many Hollywood blockbusters celebrate violence, war and military dominance. How many of our cinematic experiences place us in the perspective of the omnipotent soldier or fighter pilot unleashing firepower upon the enemy. Perhaps my favourite example Kittler offers of the ‘misuse of military equipment’ is the strobe light in discos, concerts and clubs. The strobe light mimics the flashing light of machine gun fire, which was used to distract and disorient the enemy. And, for Kittler, one way to make sense of the dark, pulsating, strobing club is that it is a simulation of the fantasies and pleasures of warfare. Soldiers and clubbers alike mangled on amphetamines.

OK. So, where are we headed with this?

Kittler shows us how a media technology – the camera, the cinema, the strobe light – can be placed in a longer history. Media technologies are used for both promotion and reconnaissance. Promotion and persuasion via symbolic narratives and sensory stimulation: when we sit in the cinema and watch films, the world is represented to us; when the strobe light pulses in the club, our body is aroused. But, media technologies are also always invented, experimented with and used as technologies for data collection, storage and processing.

The big point, and this really matters, is that media are calculative and symbolic technologies. Too often, much of our attention focuses on their symbolic capacities. Think of how we often talk about Facebook, Instagram or Snapchat. Our accounts of them focus on how they enable new forms of participatory expression. But, they are also technologies of calculation. They collect, store and process data. And, I’d argue, if we follow the investment of resources and the logic of their business models, they are much more driven by calculative than symbolic control. The cinema of the twentieth century was central to symbolic modes of control: the creation of narratives that inform, promote and persuade, that represent the world to populations and make certain ways of life appear desirable.

If we look at a platform like Facebook or Google, well – they seem much more fundamentally organised around the logic of calculation. Facebook and Google don’t make symbolic narratives; they build media technologies that collect, store and process data. That’s why Kittler’s account of the technical data collection, storage and processing capacity of media from the 19th century matters so much. It enables us to ‘revisit’ the media technologies and cultures of the twentieth century and recognise that they were never just symbolic.

So, what do media technologies and platforms do? Well, they have symbolic and calculative functions. They create symbols like images, sounds, and narratives that convey the meanings out of which shared ways of life are constructed. And they calculate by collecting, storing, and processing information.

 

Media experiment

In 1878 the photographer Eadweard Muybridge ran an experiment to settle a bet made by Leland Stanford, the founder of Stanford University.

The bet was this: when a horse is at full gallop, is it ever completely airborne, with no part of it touching the ground?

Muybridge set up 12 cameras along a race track on Stanford’s estate. As the horse galloped past, it tripped wires attached to the cameras, triggering a sequence of photos.

Once developed, the sequence of images showed clearly that at the top of its stride all four legs were tucked beneath the horse.

Here’s the thing though, the experiment set off something much bigger than settling Stanford’s bet.

Let me explain.

Muybridge was one of a number of people experimenting with photography as a new technology for capturing and storing light in the 19th century.

Technologies like the camera and phonograph dramatically changed how humans represented reality.

Prior to photography, an image could only be captured by a human who drew or painted. Think about it like this. Light made an impression, through the eyeball onto the optic nerve, where it was somehow turned into an image in the brain and then converted, via the hand, into a painting. Photography transferred this previously human process to a machine. Light passed through a lens and made an impression on another medium: a metal plate or film.

This enabled reality, in the form of light, to be stored in a medium without having to first pass through the living human body.

This is an incredible period in the relations between humans and their ‘reality-producing’ technologies.

A symbolic media system began to give way to a technical one.

By symbolic I mean that reality has to be transposed into a socially-constructed symbol – letters and words, musical notes, a handmade drawing.

The human hears a song. They cannot capture the sound directly. They instead need to discern the sounds with their ear and then write them onto paper using musical notation, which they can pass on to another human who, if they can read the symbolic code – in this case, the musical score – can play back the sound. Same goes for writing and reading. Someone talks to me, I discern the words and transpose them into letters on a page, which someone else can read back.

Technical media is different to this. With technical media like the photograph or the phonograph, one medium – light or sound – makes an impression on another – vinyl, wax, a metal plate, film.

Muybridge’s experiment is a critical part of this experimental social process of developing techniques for capturing and storing reality because he figures out a way to capture and store moving reality, something more akin to ‘living’ or ‘live’ images.

This had been a huge problem. Humans knew that when they looked at the world it was both colourful and full of movement, and yet devices like early cameras could only capture a still black and white image. The big question was whether humans could create devices that captured moving images that looked more like the images we saw in our own heads.

So, here’s Muybridge, looking at his twelve images of the horse and realising that not only had they settled the bet, they could also be played back past the eye in a steady flow to give the appearance of the horse actually moving.

Muybridge kept experimenting, and the following year – in 1879 – created a device called, elegantly, a zoopraxiscope, which was critical in the creation of cinema.

The zoopraxiscope was a small wheel that had a sequence of images printed around the outer edge. When spun the images appeared, to the human eye, to move.

This device inspired Edison and Dickson’s kinetoscope, the first commercial form of moving image film.

Why tell this story now?

Well, it is one of those critical moments during the late 19th and early 20th century where humans developed ways of storing light and sound, and in a sense, storing impressions of reality outside the living body.

So, it alerts us to something important about media cultures and technologies.

Media are technocultural processes through which humans store, process, augment, and play with reality.

Muybridge was experimenting with techniques for representing reality in ways that went beyond storing it in the human mind and senses.

But, that’s not all. Listen to this.

If Muybridge’s was one of the great experiments for developing media devices that represented reality, he returns in 2017 with a cameo in one of the contemporary efforts to create forms of bio-technical media that experiment with lived experience, and life, itself.

In 2017, Harvard scientists encoded a moving image gif of Muybridge’s horse experiment into the DNA of a living cell, where, as The New York Times explains,

it can be retrieved at will and multiplied indefinitely as the host divides and grows. The advance, reported on Wednesday in the journal Nature by researchers at Harvard Medical School, is the latest and perhaps most astonishing example of the genome’s potential as a vast storage device.

The scientists involved in the experiment think that it

may be possible one day to do something even stranger: to program bacteria to snuggle up to cells in the human body and to record what they are doing, in essence making a “movie” of each cell’s life. When something goes wrong, when a person gets ill, doctors might extract the bacteria and play back the record.

Or, outside the human body, we might create living bacteria or organisms that monitor the environment, or record how the brain works.

One of the geneticists involved in the project at Harvard says, ‘What we’re trying to develop is a molecular recorder that can sit inside living cells and collect data over time’.

I’ll be honest, I don’t really get it. As in, I don’t really get the science – the bit where the image is transposed into information that can be stored in a living cell. But, to be really crude about it, it follows – I think – the logic of the digital. Once all information can be collapsed into 1s and 0s, then the contents of any medium can be stored in another medium. The contents of film can be stored in bacteria.
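To make the crude version concrete: with four bases, each nucleotide can carry two bits, so any binary file can be transcribed into a DNA-style string and back. This is only a toy illustration of that digital logic – the Harvard team’s actual CRISPR-based encoding of the Muybridge gif was far more involved.

```python
# Toy illustration of the digital logic only: two bits per base.
# Not the Harvard researchers' actual encoding scheme.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Transcribe any binary data into a string of bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from the bases."""
    bits = "".join(BASE_TO_BITS[b] for b in dna)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

dna = encode(b"horse")
print(dna)                    # CGGACGTTCTAGCTATCGCC
assert decode(dna) == b"horse"
```

Once the gif is 1s and 0s, the cell is just another storage medium.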

OK, but apart from its fantastic strangeness, this experiment is one of many taking place in the early 21st century that are transforming what we understand media to be.

If Muybridge’s was one of a series of 19th and 20th century experiments in capturing lived experience, then the Harvard scientists who put his film in the DNA of a living cell are part of early 21st century experiments with developing technologies that engineer and experiment with lived experience.

If, in the 19th and 20th century media represented reality, in the 21st century media experiment with reality.

Storing information in DNA is very experimental, but I’d argue we should see it as part of the development of media technologies in two important ways.

The first is conceptual: media are devices for capturing, storing and processing information.

The second is more industrial: the major platforms like Facebook, Google and Amazon, and techno-capitalists like Elon Musk, are all investing in these kinds of technologies.

This is Regina Dugan, a developer at Facebook, talking at F8 in 2017.

Think about that: here’s a Facebook developer saying ‘let’s start with your brain’. Facebook are calling this a brain-machine interface project. What’s important here is not what Facebook can do now, but what they are trying to do. They are trying to reduce the ‘friction’ between your living biological body and the calculative capacities of their media technologies.

Platforms like Facebook and Google have been imagining stuff like this for years. In 2004, one of Google’s co-founders Larry Page told Wired magazine that ‘eventually you’ll have the implant where if you think about a fact, it will just tell you the answer’.

When Elon Musk launched Neuralink he told the media that ‘over time I think we will see a closer merger of biological intelligence and digital intelligence’.

Gizmodo reported in 2015 that

A group of chemists and engineers who work with nanotechnology published a paper this month in Nature Nanotechnology about an ultra-fine mesh that can merge into the brain to create what appears to be a seamless interface between machine and biological circuitry. Called "mesh electronics", the device is so thin and supple that it can be injected with a needle — they have already tested it on mice, who survived the implantation and are thriving. The researchers describe their device as "syringe-injectable electronics", and say it has a number of uses, including monitoring brain activity, delivering treatment for degenerative disorders like Parkinson's, and even enhancing brain capabilities.

Neural lace, wetware, brain-machine interfaces. Whatever we call it, we can see the impulse here: if the effort in the 19th and 20th centuries was to store reality outside the living body, in the 21st century the impulse is to incorporate the living body into media technology itself. To engineer life itself, and to incorporate lived experience within the technical, calculative, logistical infrastructure of media platforms.

When Donna Haraway wrote her Cyborg Manifesto in the 1980s it, super importantly, contained a dialectical impulse. Horror at the effort of technologists to fundamentally transform the human body and experience, and at the incorporation of that effort within the political economy of global capitalism and empire. But, also, fascination with the way these visions opened up new ways of imagining what it might mean to be human. The human was no longer, if it ever was, just a living body. The human is entangled, integrated with its machines.

So, here we are. In the first part of the 21st century at least one of our tasks is to think about media platforms’ experiments with reality, lived experience and living bodies.

To think about what these experiments mean for living cultures and societies. To think about media platforms like Facebook, Google, Amazon, Netflix, Instagram, Snapchat, Tinder and what they do in our world we need to go back – at least to the early twentieth century – to think about the effort to create media as logistical technologies that collect, store and process data about the human experience.

Here’s John Durham Peters on how we might think about media in this way, from his essay 'Infrastructuralism'.

Media are infrastructures that regulate traffic between nature and culture. They play logistical roles in providing order and containing chaos. […] Once communication is understood not only as sending signals – which is certainly an essential function – but as altering existence, media cease to be only studios and stations, messages and channels, and become infrastructures, habitats, and forms of life. Media are not only important for scholars and citizens who care about news and entertainment, education and public opinion, art and ideology, but for everyone who breathes, stands on two feet, or navigates the ocean of memory. Media are our environments, our infrastructures of being, the habitats and materials through which we act and are.

To continue along John Durham Peters’ line of thinking, here’s an excerpt from his book The Marvelous Clouds: ancient media like ‘registers, indexes, census, calendars, catalogues have always been in the business of recording, transmitting, and processing culture […] or organising time, space and power’.

The symbolic understanding of media as audio-visual ‘entertainment machines’, which undergirds most accounts of advertising and society, is something of an historical exception: ‘digital media return us to the norm of media as data-processing devices’.

We spent much of the twentieth century thinking about how media represent reality; we must also pay attention to the historical process through which media experiment with reality.

 

 

I thought the world would automatically be a better place

Saturday morning on Twitter.

‘Donald Trump gets to go on his first big boy Trip’ – Slate magazine.
‘If Mueller Indicts Kushner, there goes Mideast peace. And we were so close’ – Dan Gillmor retweeting Robert Mann.
‘What’ – Asher Wolf liked Petra Starke.
‘Brilliant’ – Nikki Bradley quotes Salvador Hernandez tweet of CNN video with comment ‘If he took a dump on your desk you would defend it’ with a poo emoji and ‘@andersoncooper what?’
‘Even Harvard can’t ignore it. The partisan press is out to destroy President Trump’ – Brent Bozell.
The New Yorker retweets ‘What does Donald Trump mean when he says his body is a battery? @alanburdick investigates!’
Brian Stelter retweets ‘First on CNN: Former FBI Director James Comey now believes Donald Trump was trying to influence him, a source says’
Jon Kudelka posts a cartoon, two bankers smoking cigars, one says to other ‘Do you ever get the feeling we would have been better off just copping a royal commission’.  
‘Jesus just fuck already’ Marieke Hardy retweeting Ashley Feinberg. A screenshot of the New York Times telling followers to follow The Washington Post.

Those are the first stories from one pull at the home feed.

This is news, but not a newspaper.

It’s more like a poker machine. Every time you spin the wheels: Trump, Trump, Trump.

I have no idea how many times I’ve already bounced the Twitter feed this morning. Wandering around the house, the garden, making coffee, putting the washing on. How many tweets have I seen today already? Twitter would know. My guess is several hundred, at least.

The news cycle of Saturday morning used to be slow. Different to the rest of the week. I’d often walk to the newsagent and buy a newspaper. The newsagent was a mate, we’d chat for a bit. I’d come home. Make a coffee. Sit down and read it. Maybe a couple of hours spent with it over the course of the day.

Chat about stories with housemates. Stick a good cartoon on the fridge.

It strikes me how similar these rituals were in their rhythm and mood to that great story Walter Lippmann tells at the start of his 1922 book Public Opinion. There’s an island where a ship delivers the newspapers once every six weeks. The islanders would gather on the dock to get ‘the news’ from home. One morning, as they wait, they are unaware the ship is bringing news of the outbreak of war in Europe, a month earlier.

It seems so splendid now. Imagine how different the news would sound if it only told you what was important from the last month, rather than the last minute.

I spin the Twitter feed again.

First three.

Nadine von Cohen. All caps. ‘Fuck yeah there should be beds every few streets so we can take naps when we go for a walk or run for the homeless I guess’.
JR Hennessy quoting a Jason Kuznicki tweet. Kuznicki wrote ‘I see today that ‘Haha, Trump likes well done steak with ketchup’ is making the rounds again. So here’s a tweets storm about that. Sadly’. Hennessy responds ‘This is possibly the stupidest thread I have ever read. I’m not even mad I’m just impressed’.
Duska Sulicich ‘What happens when Trump eventually leaves office? How will he be stopped from blurting national secrets in a fit of bigly boasting? Seriously.’

In her book Addiction by Design, Natasha Dow Schüll describes how poker machines work. The machines collect data about the patterns of players. The data trains the algorithms that run the machine; they learn to dispense just the right amount of incremental wins and free spins to keep the player engaged on the machine for as long as possible, while ensuring that, more often than not, the player loses money. Within a few spins, a machine has worked out what kind of player you are and will calibrate the game play accordingly.

The algorithmic design of the poker machine is a good analogy for understanding the logic of the Twitter home feed, the Facebook News Feed, the Instagram home feed. The algorithm learns how to produce a feed of content that keeps you bouncing the top of that feed. Spinning the reels over and over.
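To make the analogy concrete, here’s a minimal sketch of that calibration loop. It is invented for illustration, not Twitter’s or Facebook’s actual code: posts are scored by a user’s past engagement, and every click feeds back into the scores that shape the next spin of the feed.

```python
# An illustrative sketch (not any platform's real code) of the
# poker-machine logic of a feed: rank candidate posts by predicted
# engagement, learned from past behaviour, so each 'spin' of the feed
# is calibrated to keep the user pulling again.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str

def predicted_engagement(history: dict, post: Post) -> float:
    """Score a post by how often this user engaged with its topic before.

    history maps topics to counts of past clicks, likes and replies.
    Real platforms use far richer signals; the feedback loop is the point.
    """
    total = sum(history.values()) or 1
    return history.get(post.topic, 0) / total

def rank_feed(history: dict, candidates: list) -> list:
    """One 'spin': order the feed by what this user is likeliest to engage with."""
    return sorted(candidates, key=lambda p: predicted_engagement(history, p), reverse=True)

def record_engagement(history: dict, post: Post) -> None:
    """Each click tightens the loop for the next spin."""
    history[post.topic] = history.get(post.topic, 0) + 1
```

Within a few spins, the history has worked out what kind of player you are.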

Elizabeth Wissinger describes this as a profound shift in the kind of attention our culture fosters. From the cinematic gaze, to the televisual glance, to the digital blink. When we bounce our social media feeds our attention is spliced into glances and blinks, moments scanning a non-narrative, hyper and repetitive sequence of stimuli.

Bit by bit over the past few years, this is what my consumption of news has become. Scanning stories, glancing and blinking, at a constantly updating, potentially endless, feed. It seems to work against the possibility of reflection or narrative. It has no long, slow narratives.

The voices are different too.

In the old fat Saturday newspaper the voices were reflective, contemplative, critical, sharp, witty, thoughtful – as often as they were incendiary, biased, and hot-headed.

The tweeter though is always a pundit, laying out a hot-take, a wisecrack, a conspiratorial explanation of what’s really happening, an emotively charged spew about someone who is plainly an idiot.

The pundit is a persona that emerges from the ‘meta-coverage’ or horse race journalism of cable news. This is a kind of journalism that is more concerned with describing how the game of politics works than with explaining and interrogating the ideas of the political class. The voice of the tweeter seems less interested in reflecting on the world than in rapidly decomposing other people’s views of the world.

News on Twitter is fast and corrosive. In the past couple of years I have stopped buying newspapers, stopped watching broadcast television altogether, and pretty much stopped listening to radio. My news consumption is Twitter and podcasts. I’m not alone.

We are rapidly dismantling a culture of ‘news’ that came to dominate everyday life in the democratic mass societies of the twentieth century.

One thing that ‘news’ did during the twentieth century was play a role in shaping mass identities and shared ways of life that millions of people identified with. This made democratic societies possible. For all its shortcomings, this system of mass media representation created a kind of fabric that held enormous groups of people together.

Twitter is startling because it is a media platform where we do not participate in creating a shared narrative. Twitter is instead a machine for dismantling the possibility that shared narratives might emerge. This weekend The New York Times wrote a kind of ridiculous profile of Evan Williams, a co-founder of Twitter:

Evan Williams is the guy who opened up Pandora’s box. Until he came along, people had few places to go with their overflowing emotions and wild opinions, other than writing a letter to the newspaper or haranguing the neighbors.
Mr. Williams — a Twitter founder, a co-creator of Blogger — set everyone free, providing tools to address the world. In the history of communications technology, it was a development with echoes of Gutenberg.

Gutenberg!

It is surprising to see a paper as sober as The New York Times compare Twitter to the printing press. That said, Twitter does have something in common with the printing press: each produced a massive explosion in the volume and qualities of public speech. And with that, a fundamental shift in how we represent our world.

In the wild, chaotic, repetitive, snarky, violent feeds of Twitter I more and more feel like I’m looking at the programmatic destruction of a system of representation. Representation becomes less a social process for making sense of our shared experience, and more a system to be dissected and debunked.

This was not how participatory media was meant to turn out; it was meant to foster a new age of representation. One which, by allowing the expression of more diverse voices, would enable a better reflection of our lived experience.

Williams offered The New York Times a frank admission about the withering of that dream:

“I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,” Mr. Williams says. “I was wrong about that.”

Journalism is always a public form of speech that is produced historically, conditioned by politics, culture and technology.

News will persist. Our challenge is to figure out how to create a culture of news that doesn’t just stimulate our reactions, and cultivate our disbelief, but rather fosters our capacity to understand each other.

I deleted Twitter over summer. I’ve had it back on my phone for two weeks, and I think I’m going to delete it again. Its mood is dark, frantic and unsettling.

Ten years after its invention it might have been easy to see what the printing press would disrupt, but perhaps less clear what it would facilitate and enable. We live in a similar moment. It is clear what media platforms like Twitter dissolve, and it is also clear that the participatory culture they promised will not be the one they create.

The present moment calls for us to tackle the platforms that program public culture: Twitter, Facebook and Google. To address them now as public institutions that we each have a stake in because they are the architecture that will either thwart or enable our capacity to give an account of our lives, to understand the lives of others and imagine our futures.

 

My last status update

It is five years since I posted my last status update to Facebook.

It was posted by a robot.

At the Splendour in the Grass music festival in 2013 the festival wristbands had RFID tags in them.

You could link your tag to your Facebook. Then, as you wandered about the festival going to see bands play, you could ‘swipe’ your wrist at sensor stations to record your movements around the site. The festival said that if you did this, after the festival they would send you a customised playlist of every band you saw.

This is algorithmic culture. A culture where machines are trained to make judgments about our tastes and experiences.

I couldn’t resist. I linked the wristband to my Facebook. Every time I swiped at a sensor point, it would automatically post a status update in my name on my Facebook timeline.

The first one was ‘So glad you’re back this year The Rubens #returntosplendour’, the next one was ‘Getting my second dose of You Am I #neverenough’ and on it went, merrily posting this remarkably lame promotional pap to my timeline.

My friends made hay. This sounded nothing like the wry snark I usually traded in.

The Rubens was a dead giveaway. The uncharitable things I’ve been known to say about suburban dad rock bands over the years! The machine was pretending to be me, but failing hopelessly.

I am drawn to these moments where the interplay between the rich texture of human experience and the computational machinery of media break down. They prompt us to think about not just what media technology can do today, but what technologists are imagining media might do tomorrow.

This moment at Splendour is one of those moments where the open-ended performance of cultural life collides with the computational capacities of media technologies. The system could only send out the pre-loaded responses, and they sounded inauthentic coming out of the mouths of festival-goers. No one speaks like a music publicist.
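Hypothetically, the system behind the wristbands could have been as simple as the sketch below; every name and function in it is invented for illustration. The point is structural: the message is keyed to the sensor station, not to the person, which is why everyone’s posts sounded like a music publicist.

```python
# A hypothetical sketch of the festival's RFID auto-posting. All names
# and the posting call are invented; only the structure is the point:
# one pre-written promotional line per station, posted verbatim.

PRELOADED_POSTS = {
    "stage_a_rubens": "So glad you're back this year The Rubens #returntosplendour",
    "stage_b_youami": "Getting my second dose of You Am I #neverenough",
}

def handle_swipe(tag_id: str, station_id: str, linked_accounts: dict) -> str:
    """Look up the wristband's linked account and post the station's message."""
    account = linked_accounts[tag_id]      # wristband -> Facebook account
    message = PRELOADED_POSTS[station_id]  # the same line for every swiper
    # post_to_timeline(account, message)   # hypothetical API call
    return f"{account}: {message}"

print(handle_swipe("tag-0042", "stage_a_rubens", {"tag-0042": "my.profile"}))
```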

Imagine if, when I swiped at the sensor, an algorithm scanned my Spotify playlist to figure out whether I actually like this band, then my Facebook messages to learn the tone of my voice, and then created a post that simulated my actual voice, together with live video from the performance, in real time?

Splendour is a cultural space rich with human judgments, feelings and expressions. And so, this moment makes me ask: can machines be trained to understand, predict and make these judgments?  

Let me do a little detour.

There is an episode of the British television series Black Mirror called Be Right Back.

Spoiler Alert. If you haven’t seen it and don’t want it spoiled, stop here and go watch it.

Be Right Back is about a young couple: Martha and Ash. Ash dies in a car accident.
Martha, grieving, seeks out a connection with him. This is how we grieve: old photos, old letters, clothes, places you visited together, songs you listened to.

A friend suggests Martha log into a service where an artificial intelligence simulates the voice of your dead loved one by learning their style of expression from their emails, messages, social media posts, voice mails and other media they created while they were alive.

Martha does it.

She becomes entangled in these conversations.

Immersed in them.

A new product becomes available, a robot that can be designed based on photographs and videos of Ash while he was alive.  

Martha orders it. It arrives. She activates it. The robot Ash is uncanny. His looks, his voice. The robot is much better in bed than Ash was. But, he’s spooky. How to put it? The robot has no human presence.

Lying in bed beside Martha, robot Ash is unnervingly still. It cannot ‘spend time’ with another human.

Things get complicated.

Martha: get out, you are not enough of him…
Robot: did I ever hit you?
Martha: of course not, but you might have done.
Robot: I can insult you, there is tons of invective in the archive, I like speaking my mind, I could throw some of that at you.

 

The robot can manipulate how Martha feels, but it can’t understand her feelings or feel for itself. The robot falls short of being able to communicate.

Here again, just like Splendour, we have another moment where machines fail to simulate human experience. And yet, we still enjoy their company. We still play with them.

The philosopher of communication John Durham Peters (1999: 30) writes

Communication, in the deeper sense of establishing ways to share one’s hours meaningfully with others, is sooner a matter of faith and risk than of technique and method. Why others do not use words as I do or do not feel or see the world as I do is a problem not just in adjusting the transmission and reception of messages, but in orchestrating collective being, in making space in the world for each other. Whatever ‘communication’ might mean, it is more fundamental a political and ethical problem than a semantic one.

This is what Martha has encountered. The robot has no capacity to experience faith, politics and ethics as anything other than data to be processed.

Machines can displace forms of human knowing and doing in the world, but they do that by formatting communication as a series of logical procedures, calculations and predictions.

Martha takes robot Ash to the edge of a cliff they used to visit when Ash was alive. She asks him to jump off.

Martha: Jump
Robot: What over there? I never express suicidal thoughts or self harm.
Martha: Yeah well you aren’t you are you?
Robot: That’s another difficult one.
Martha: You’re just a few ripples of you, there is no history to you, you’re just a performance of stuff that he performed without thinking and it’s not enough.
Robot: C’mon I aim to please.
Martha: Aim to jump, just do it.
Robot: OK if you are absolutely sure.
Martha: See Ash would’ve been scared, he wouldn’t have just leapt off he would’ve been crying.

 

Ash doesn’t jump. Martha takes him home and puts him in the attic.

The writer Charlie Brooker is asking us to imagine what happens as we knit our lived experience together with intelligent machines.

Machines fail to make the textured judgments about human experience that we ourselves can make. And yet, we find ourselves in increasingly complicated relationships with them.

Splendour, a year later. Evening is falling and I stand watching the Skywhale, a hot air balloon whale with breasts for wings, being inflated. Festival goers, many affected by alcohol and drugs, are having visceral reactions. I join in with the hundreds of others taking images and videos, translating our experience into images that get circulated mostly through Snapchat and Instagram.

Where the RFID tag tried to tightly program the translation of festival moments into media content, I’m struck here by the continuous translation of the festival into images. Hundreds per hour are uploaded to the public festival hashtags.

We express our cultural experiences in flows of images. Those images double as data. This is what a participatory media system is like. We address machines as much as we address each other. We are training machines – bit by bit – to make judgments about culture, and ourselves to incorporate intelligent machines within our human experience.

 

Configuring an audience

Let’s go into Facebook’s advertising model and set up a fake campaign, as a way of thinking through how Facebook package and sell their audience to advertisers.

The advertising model shows us how the audience do two kinds of labour.

First, by allowing the platform to monitor them they do the work of being watched, of providing the data that enables advertisers to target them.

Second, by using the platform and viewing advertiser’s posts in their news feeds they do the work of watching, of paying attention to advertiser’s messages.

Let’s see how Facebook use the data generated about audiences to sell refined moments of audience attention to advertisers. To do that we’re going to have a look at Facebook’s advertising creation tools. The advertising tool illustrates how Facebook packages audiences that can be endlessly customised and configured based on audience preferences.

Here, you can see how your individual archive of data becomes useful when put into combination with the hundreds of millions of other personal archives Facebook holds. OK, so go back to that menu up the top of the page, the drop-down arrow at the far right of the blue bar. Drop it down, you’ll see the ‘Create Adverts’ link, hit that.

The first thing you’ll see is a screen asking you to specify your campaign objective.

Choose one and then set up an account.

You need to enter some basic info: make it Australian currency and time zone. And then hit ‘Continue’.

OK now we are through to the part of the model where Facebook uses its data to help you configure an audience for your advertisement.

There are three important parts of this page. First up, see on the right-hand side the ‘audience definition’ panel. It starts off by defaulting to all users in the United States. You can see that’s 212 million users, and the little meter tells you that’s a very broad audience definition. Then on the left-hand side you can enter details about your audience to refine it. Once you have refined your audience, you scroll down to the budget section to find out how much it costs to reach that audience.

OK, so first, let’s define the audience. Start off by making the audience a version of yourself: see how many people like you Facebook can reach, and what it would cost.

A screenshot from Facebook's advertising interface. Here I have narrowed the audience to people who live in Brisbane. On the right we can see this audience is 1.9 million people, and estimated daily reach would be 1500 to 5300 users.

First I enter ‘Australia’ and it says 14 million. I change that to ‘Brisbane’ and immediately it reduces from 14 million to 1.5 million people who share that characteristic with me. The audience definition meter now sits in the middle, and the map shows the geographical positioning of this audience.

Then I’ll change the age range to a five year window around my age. Y’know – late 20s to mid 30s. You do the same for your details as we go through.

OK, once I’ve filtered for age the reach is 220,000. If I make it men only, it splits again down to 110,000.

A screenshot from Facebook's advertising interface. Here I have added age restrictions to the audience. On the right we can see that Facebook is telling us it can reach 330,000 29 to 35 year olds in Brisbane.

Where it gets interesting is with detailed targeting and connections. Detailed targeting is where we start to add interests. OK, if I add ‘craft beer’ it narrows right down to less than 1,000 people. If I change it to ‘beer’ though, it comes in at 29,000 people. If I add beer and coffee it goes to 41,000 people. So, that’s Facebook telling me that there are 41,000 men about my age in Brisbane who like either beer or coffee.

You’ll see below that I can narrow it further by clicking the ‘narrow audience’ link. Here, rather than searching ‘beer’ or ‘coffee’, I’m going to delete coffee from the first box and add it to the second. So the equation becomes: must like ‘beer’ and ‘coffee’. I do that and it comes out at an audience of 22,000. Then, if we click ‘Budget and Schedule’ on the left, we can see what Facebook would charge us to reach this audience on both Facebook and Instagram.
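Under the hood, this is basic set logic. Here’s a toy sketch with a handful of invented users standing in for Facebook’s hundreds of millions of preference records: interests in the same box are OR-ed together (a union), while the ‘narrow audience’ box AND-s them (an intersection).

```python
# Toy illustration of interest targeting as set logic. The user sets are
# invented; the OR/AND behaviour mirrors the 'narrow audience' control.

beer = {"al", "ben", "chris", "dave", "ed"}  # users with a 'beer' preference
coffee = {"ben", "chris", "frank", "grace"}  # users with a 'coffee' preference

either = beer | coffee  # one box: likes beer OR coffee -> broader audience
both = beer & coffee    # second box: must like beer AND coffee -> narrower

print(len(either), len(both))  # 7 vs 2: AND always cuts the audience down
```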

A screenshot from Facebook's advertising interface. Here I have added 'men' and 'craft beer' as a specific targeting category. This narrows the reach down to 3800 users.

Let’s change the budget to $50 a day. If I spent that much, Facebook estimate I could reach between 1,300 and 3,500 people per day on Facebook and 3,900 to 5,100 on Instagram. What’s happening here is a live auction. Facebook works out how much ‘space’ it has on the platform and sells it to the highest bidder. The more refined the audience, the more you might pay, because you are competing for a smaller slice of attention.

The more competition there is for that audience, the more you might pay. So, if you are the only advertiser who wants to speak to older men, you might pay less than for a demographic where there is high competition to speak to them. Let’s fiddle with it a little. If I change the demographics from men in their early 30s to all men and women between 20 and 50 who like beer and coffee, you’ll see the audience size jumps back up to 220,000.

Then, scroll back down to the budget. Facebook is now telling me that for $50 a day I could reach up to 12,000 people on Facebook and 65,000 people on Instagram. OK, to look at the dynamics of the ‘live auction’ that takes place, change ‘daily budget’ to ‘lifetime budget’ and set the period for one month starting today. It tells me that for $350 I could reach 1,600 to 4,200 people per day on Facebook over the next month.

But, say I’m a big player and I really want to speak to this audience. I could tell Facebook I will pay more, effectively outbidding the other advertisers in the market. So, say I bump my budget way up – to $10,000 for the month. Facebook will now sell me 14,000 to 36,000 slots per day. OK, but now let’s go higher. Say I have a basically endless amount of cash to throw at this campaign. For $100,000 Facebook would give me up to 78,000 people per day across the month. It ‘tops out’ at about half the audience. Facebook won’t sell a single advertiser more than that, most likely because it would affect engagement with the platform by swamping feeds with repetitive content.
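As a rough sketch of that dynamic, consider the toy function below. The cost constant and the shape of the curve are invented, and Facebook’s real auction is far more complex and not public; it just illustrates the pattern described above: reach climbs with spend, then tops out at roughly half the audience.

```python
# Invented illustration of budget-to-reach with a cap. Not Facebook's
# actual auction: just spend / unit-cost, capped at half the audience.

def estimated_daily_reach(daily_budget: float, audience_size: int,
                          cost_per_person: float = 0.015) -> int:
    """Estimate people reached per day for a given daily budget."""
    uncapped = daily_budget / cost_per_person  # naive spend -> reach
    cap = audience_size // 2                   # never more than ~half the audience
    return int(min(uncapped, cap))

audience = 220_000
for budget in (50, 350, 2_000):
    print(budget, estimated_daily_reach(budget, audience))
# reach climbs with budget until it tops out at ~110,000 per day
```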

 

The bigger the audience, the more ‘slots’ of advertising space you are in the auction for, but also the less refined your audience is, so your advertisements might be more hit and miss. Another factor to keep in mind is the ‘balance’ Facebook have to strike between advertising and audience engagement. If you logged onto Facebook and saw only ads, or too many ads, in your news feed, and not enough posts from your friends, you’d probably stop using Facebook. And, if you and many others like you stopped using Facebook, it would have no audience to sell to advertisers.

So, Facebook runs data analysis to strike the right balance: enough ads in your news feed to generate revenue, but not so many that you stop engaging with the platform. It needs you to keep coming back to be able to keep selling slots in your feed to advertisers. The audience product that Facebook are packaging and selling is quite different to the traditional packaging of audiences by commercial media organisations. I’ll illustrate this using Frankie magazine’s media kit.

The ‘media kit’ is the traditional way that advertising-funded media like magazines, television and newspapers have ‘promoted’ their audience to advertisers. In this case, Frankie magazine has a media kit that ‘packages’ their audience to explain to advertisers who they will reach and be able to influence if they purchase advertising space in the magazine. The first and most obvious difference between the Frankie media kit and Facebook’s advertising tool is that you cannot ‘configure’ the Frankie audience based on specific characteristics.

This is arguably Facebook’s advantage, offering a configurable audience. Facebook can also provide very accurate data on who views or clicked on the ad. Frankie cannot. But, perhaps Frankie’s advantage is the durable and ongoing relationships they establish by forming a specific audience that regularly reads the magazine.

Let’s look through their kit though; you’ll see that they ‘describe’ their audience using many of the variables we saw in the Facebook model. The first page tells us you can reach 340,000+ readers per issue. The third page tells you the demographic variables: the audience is 70% female, has a median income of $75,000, and is between 20 and 35 years old.

94% visit websites they read about in Frankie, 89% have read it for more than a year, 96% consider themselves to be creatives, 89% feel inspired after reading Frankie, 70% have purchased something after seeing it in Frankie. The Frankie audience as a whole spends 23 million dollars online. This is based on survey data from 2015. Frankie has less capability to let you configure your audience; for instance, you cannot say to Frankie – OK great, but I only want my ad to be seen by your male readers, not your female ones. Facebook can let you do that.

While the Facebook model we see here does not provide information about other web use or purchasing habits, Facebook do provide this functionality to large clients, especially by getting clients to link their customer databases to Facebook’s data sets to evaluate how Facebook ads impact purchasing. Frankie, for their part, go on to explain how they take great care to ensure that your ad fits the right ‘vibe’ of the magazine. So, there’s a real attention to creative control between brands and the magazine.

Facebook also pay attention to this, but not with human judgments about the quality of ads. Rather, Facebook use data on click-throughs to gauge how readers respond to ads, and to ensure they get the right balance between the quality and quantity of ads and user engagement. OK, then on page six of the Frankie media kit we can see what they charge for ads: $7,200 for a full-page ad, $14,400 for a double, $11,150 for a back cover.

OK, now let’s go back to that Facebook advertising model. Go to Facebook’s ‘Create Ads’, set your location to Australia, and set the demographic variables to match Frankie’s. Gender: female. Age: 20 to 35. Facebook is giving us a reach of 3 million, but we want to narrow that some. We know Frankie readers are creative, into fashion, craft, design. So I narrow the details to women between 20 and 35 who like both fashion and interior design. Then I change the ‘daily budget’ to ‘lifetime budget’, put in $7,200 – what a Frankie ad costs – and make the schedule one month.

The estimated daily reach is up to 43,000 on Facebook and 38,000 on Instagram. So, conservatively, 25,000 a day. Or 750,000 in a month. That’s more than twice as many as Frankie. And I could continue refining this audience.

If you buy advertising you don’t want ‘wasted eyeballs’ – as it’s called in the industry – that is you don’t want to pay to put your advertisement in front of people who are not in your target market. This is Facebook’s big competitive advantage. It claims it can put your ads in front of the exact people you want to talk to, when and where you want to talk to them. But, as we see when we read Frankie’s media kit, there are other judgments that matter too, like the relationship a media organisation establishes with its readership and how it leverages that.

For our purposes though, we can think about how two different media organisations package and sell their audiences to advertisers. In this respect, we might see our activity on Facebook as ‘productive’ – we do the work of registering our lives as data that enables ads to be targeted at us, and then we do the work of viewing and clicking on those ads, and sometimes liking and sharing them too. The engine room of commercial media businesses is creating, packaging and selling audiences to advertisers. Media platforms depend on the creative and continuous work that audiences do in expressing and registering data about themselves as part of their everyday lives.

 

Audience labour

There is a famous question that Dallas Smythe, a Canadian political economist of communication, asked: What do media make?

He was thinking particularly of commercial, advertiser-funded media like television. What does a commercial television station make? The first answer that might spring to mind is ‘television programs’ – news, dramas, reality TV. OK, sure, people go to work at TV stations and make this content.

But, Smythe suggests, think again. The ‘shows’ you see on the TV screen are not really the product. He says the real answer to the question ‘what do media make?’ is ‘audiences’. The ‘product’ being bought and sold is not television shows, it is audience attention: your attention.

The money that flows into commercial media organisations like television, newspapers, and now social media platforms comes from advertisers. Those advertisers are purchasing the attention, or in the industry terminology ‘eyeballs’, of audience members. To be profitable a commercial media business needs to produce the kinds of audience attention that advertisers want to buy. Media content like commercial television is free, social media platforms like Facebook and Instagram are free. What is being sold is your attention as a user. So the industry phrase goes: ‘if you are not paying for it, you are the product’.

Smythe suggested then that if audience attention was being bought and sold, then audience members were ‘labourers’ in the media system. He suggested that in watching television, or now scrolling through the feeds of social media, we are doing the work of watching. The work of watching involves the act of paying attention: most importantly, watching advertisements.
When audiences watch advertisements they gain knowledge about, form desires for, and learn to classify brands, products and services. Audience members learn how to incorporate brands and products into their identities and lifestyles. And, they go and buy them.

When we buy the products we see advertised in the media, we are effectively funding the media. When you buy a can of Coke, part of the money you pay flows back into the media system in the form of advertising revenue, which in turn funds the media content you consume and the platforms that you use. In the era of social media platforms, audiences also undertake the work of being watched. This work takes two forms: user generated content and user generated data.

Users generate content when they translate their lives into media content that others consume. When audiences participate on reality and lifestyle TV, upload photos to social media, comment on news stories and so on they undertake the productive activity of both producing and circulating media content. Their lives and social world become an integral part of the content they watch.

Users generate data where they submit to forms of monitoring and surveillance. As audiences watch, scroll, and tap on social media platforms they produce data about their interests, preferences, practices, moods and movements. This data is used to produce and sell a much more refined audience commodity to advertisers. Audiences also generate content by judging and promoting content, products, services and experiences.

With the broadcast media of the twentieth century, like television, advertisers could only buy large audiences that met broad demographic criteria. This meant they paid for a lot of wasted eyeballs. On social media platforms, advertisers can purchase the specific audience members they want, including in the specific times and places they want to reach them. If you have a Facebook account, then the whole time you have been using it Facebook has been translating all your actions into data that it uses to create a set of ‘preferences’ it attaches to you.

Think about these preferences like this. These are the work of being watched. You have always had interests and preferences, but when you start expressing them on Facebook they acquire a value. You are doing the work of converting your preferences into data that Facebook can use to sell your attention to advertisers. The preferences that Facebook creates for you are then used to shape the kind of promoted posts or advertisements you are shown in your News Feeds.

Your Facebook Ad Preferences

Facebook has a function that shows you how ads are delivered to you based on your preferences. Buzzfeed has a useful explainer on how to find them called ‘Here’s how to find out what Facebook thinks you like’.

Let’s go through it. Go back to the menu of options, in the top right of the browser or bottom left of the app. Click settings. On the app you need to hit ‘account settings’.

Then, select ‘adverts’. You’ll see up the top there is a section called ‘your interests’. This contains all the ‘preferences’ Facebook has assigned to you, and on which it allows advertisers to target you. You’ll get a list of stuff that Facebook thinks you like, and tells advertisers that you like.

A screenshot from Facebook's Advert preferences portal.

I have 65 food and drink interests, 43 lifestyle and culture, 46 news and entertainment, and so on. Have a look at yours and see if you can figure out why Facebook associates each interest with you.

A screenshot from Facebook's Advert preferences page showing the range of 'Food & Drink' preferences Facebook has assigned me.

If I drop down my 65 food and drink interests what do I see?

Some of them are general: beer, wine, coffee. Others are specific: Scotch whisky, and one particular whisky – Laphroaig.

If you hover your mouse or hold your finger over the preference Facebook will tell you why they associate this preference with you. If I hover over the coffee it says ‘you have this preference because we think that it may be relevant based on what you do on Facebook or pages you’ve liked or clicked’.

That’s true. I like some of my local cafes on Facebook. But it is also incredibly vague. And in that sense, basically misleading.

Firstly, there would be specific pages or clicks that made Facebook make this judgment – why not tell me?

Secondly, I guess there would also be a relative strength value assigned to this preference. So, I’m sure Facebook would determine how much I liked coffee compared to others based on how many likes, clicks, or times I mention coffee in posts or messages.
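Purely as a guess at what such a record might look like – Facebook publishes none of this, so every field, weight and number below is invented – a preference could be a topic plus weighted evidence:

```python
# A purely hypothetical sketch of an inferred ad preference record.
# Facebook does not publish this structure; fields and weights invented.

from dataclasses import dataclass, field

@dataclass
class InferredPreference:
    topic: str
    evidence: dict = field(default_factory=dict)  # signal name -> count

    @property
    def strength(self) -> float:
        """Weight the signals: a page like presumably says more than a click."""
        weights = {"page_like": 3.0, "ad_click": 2.0, "mention": 1.0}
        return sum(weights.get(signal, 0.5) * n for signal, n in self.evidence.items())

coffee = InferredPreference("coffee", {"page_like": 2, "ad_click": 5, "mention": 12})
print(coffee.strength)  # 2*3 + 5*2 + 12*1 = 28.0: how strongly 'coffee' is me
```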

It is right, I do like coffee. But Facebook are not really telling me how they know it is right.

You’ll see there is a little ‘x’ in the corner of each interest on the browser, or three dots in the bottom right on the app, where you can remove the interest. Basically, tell Facebook they are wrong – this is not a preference of yours. This is kind of amusing. Facebook present this as giving users ‘control’ over the ads they see. And the public argument Facebook, other platforms and advertisers make about targeted advertising is that it’s better for consumers because you see ads you want to see, ads that are relevant to you.

But think about that. It would be a very peculiar kind of user who takes the time to edit their Facebook ad preferences to make sure that they see the most relevant ads possible on the platform. You’ll see that Facebook explain on this page that if you remove all your preferences it does not mean you’ll see fewer advertisements, it means you’ll see less relevant advertisements.

Actually, that's got me curious.

So, I’m gonna go through and remove all my preferences except alcohol and coffee and see what happens. It is an irritating thing to do because you have to unclick every preference individually, for me about 400 preferences. As I go through them – some are kinda peculiar. Facebook says they think I’m a ‘late technology adopter’ based on what I do on Facebook, and they also know that I’m a Windows user. OK, I am. But it prompts the question, how would that piece of information be used by an advertiser?

Others say that I’m interested in ‘family based households’ and ‘fermentation based food processing’. I think that means I like yoghurt, pro-biotics and beer. Huh?

So, will this mean I get heaps of alcohol and coffee ads? Random ads? That Facebook will generate new preferences for me?

OK, so I deleted a bunch of them. I thought I could undo them. You can’t. Just so you know. So, I’ve gone from about 500 preferences to about 50 – and they are all related to coffee and alcohol.

A screenshot from Facebook's Advert preferences page. This one shows the information Facebook gives me about why they assigned 'beer' to me as an advertising preference, together with an example on the kinds of ads I would've seen in my News Feed based on this preference.

A year later...

I checked back a year later and, unsurprisingly, Facebook had just gone about reassembling all my preferences. The lesson here: if you keep using Facebook, it keeps learning about you. There is no way to use Facebook without doing the work of being watched.

Facebook’s wager is that we want targeted ads. They annoy us less. Their data tells them that’s right, too. And, if I look at the ads I’ve clicked on in my archive data, well, it is true, I do click on the ads – something like one ad every couple of days that I find relevant. Again though, this is kind of a pre-school version of what’s going on, in that this kind of ad targeting is based simply on your specifically expressed preferences. It doesn’t take account of more subtle contextual factors like where you are, or what a friend is doing, or places you go.

Notice too that the kind of information these preferences are based on – pages you clicked, ads you clicked – is not included in the Facebook archive that you can download. You might begin to notice now how your ad preferences inform the way your news feed is shaped. I realise that I see a lot of promoted posts relating to coffee on a Saturday morning. Nearly every Saturday I see a post like this from my local café. They are doing their targeting, I’d say. The dude who runs it looks like he’s pretty on it: that wouldn’t surprise me.

The platform is telling us it has data about our activities but it doesn’t make that available to us to download. This little experiment tells us something about the work of being watched. Our everyday use of Facebook might be enjoyable to us, but it also doubles as work in that it creates valuable data that enables Facebook’s advertising model to function. This is the work of being watched, of allowing the platform to turn our everyday activity into data.

 

Brands and media

During the twentieth century branding and advertising became central to shaping our ideas of what the good life is and how it can be realised through our consumption of commercial products and experiences. Branding and advertising are a fundamental part of our media culture. In advertiser-funded media - like newspapers, magazines, television and social media - brands are arguably the critical component. These media businesses depend on creating value for brands, without funds flowing from advertisers the business would collapse.

By following brands, and how they change over time, we can understand how media are used to exercise power and to organise cultural life. To begin, my provocation is that during the twentieth century brands operated predominantly in an ideological fashion. They purchased advertisements in mass media like newspapers, radio and television. Those advertisements made claims about the qualities of products or the people who consumed them. If consumers found those claims persuasive or appealing they would go and buy the products.

By the late twentieth century however, brands had become woven into our lives and identities in more complicated ways. This process kicked off in the 1960s with the creative revolution. Advertising creatives like Bill Bernbach created a style of ‘anti-advertising’ that played on the public’s mistrust of advertising. He created advertisements that critiqued the mass society and poked fun at advertising itself. By making this move, advertising was no longer caught up in trying to protect the sincerity of its claims. Instead, advertising acknowledged that savvy consumers had grown cynical about the claims brands made, and responded by weaving brands into our everyday lives, making less emphatic claims about the qualities of products and instead making claims about the hip and savvy people who consumed them.

Advertisers began to devise ways to let consumers play a role in attaching whatever meanings they liked to brands. In the past decade, social media platforms have engineered a system of media that works entirely on the terms set by brands. Platforms like Facebook, Instagram and increasingly Snapchat depend on attracting money from brands to make a profit. And so, they are invested in engineering a platform for participatory and data-driven promotion that brands want to pay for.  Branding on social media is participatory in the sense that we weave brands into the depiction of our everyday lives. And, branding on social media is data-driven in the sense that we submit to surveillance of our expressions, movements, preferences and relationships. This data is incorporated into the algorithmic judgments of brand culture.

Brands are not just logos, they are social processes. By this we mean that to understand how brands work we need to pay attention not just to texts and their meanings, but also to the social interactions between people of which those texts are a part. From the mid-20th century brands and advertisements began to transition beyond simply informational claims about a product, to more deliberately position products within desirable ways of life. Advertisements did not just teach consumers about product attributes, they also taught consumers how to incorporate products within their lifestyles.

Here’s a Folgers coffee advertisement from the 1950s.

Folgers coffee ads are all over YouTube because they are infamous examples of the sexism of mid-century advertising. In this one, a young housewife seeks advice on how to please her husband, who is dissatisfied with the coffee she is making at home. Papa Eddie suggests Folgers because it is never bitter. Here, the advertisement makes a definitive claim about the qualities of a product: the coffee is natural, not bitter, grown in the mountains. Advertising and branding started out making claims about the specific attributes of products, and they still do this of course. But, that isn’t the whole story.

What we see develop over time is that brands make claims not just about product qualities, but also how the product is positioned in a way of life. So, in this Folgers ad the coffee is positioned within an idealised suburban life – complete with its antiquated gender norms. The claim being made is not just that Folgers coffee tastes good, but that you the housewife should buy Folgers coffee because it would please your husband. That consuming this kind of coffee makes you a ‘good’ housewife, something it assumes women want to be.

The advertisement doesn’t just say something about the quality of the coffee, it makes a claim about the kind of person who buys this coffee. It addresses women as if what they desire is to be a good housewife. This advertisement would of course not work today because it does not fit with today’s cultural norms and values. A brand could not successfully address women in this way. This is how brands work as social processes, they reflect the cultural norms of the specific cultural setting they are operating within. Brands have a long history of engaging with and reflecting gender norms. Here’s a famous Australian example.

In VB’s Hard Earned Thirst advertisements real men – men who do manual labour, engage exclusively in homosocial pastimes, play football, drive utes – drink VB. The ad says, more or less, who wouldn’t want to be one of these real men? This ad is from the 1970s. Let’s note an important difference with the Folgers coffee ad from the 1950s. This VB ad says nothing about the product itself, it only talks about the kind of man who consumes the product. The ad positions the product within a way of life and its identities. This ad represents what the consumption of a product means and says about the consumer. Throughout the twentieth century brands increasingly put themselves at the centre of our cultural experiences.

Since the turn of the millennium brands have also increasingly established themselves as platforms for ethical and political action. They offer an opportunity or tool for acting as an ethical consumer. Here’s an example from Singapore.

A mobile phone company launches an app that helps vision-impaired people navigate the world. They take a photo with their phone, share it to an app, and micro-volunteers write a description of what’s in the image. The brand offers a tool to help people be ‘ethical’ or ‘good’ people. The brand offers tools for you to ‘act out’ your values and ethics. The brand becomes part of how you see yourself as a good person in the world. The brand helps you make your values tangible actions.

We see this everywhere. When we buy Starbucks coffee we are told it is not just a cup of coffee but a ‘coffee ethics’ – the coffee is fair trade, sustainable, contributes toward development projects and so on. When we buy Toms shoes, a pair of shoes is sent to the developing world.

In a café I was in this morning I could buy the toilet paper in the bathroom, with the profits going to sanitation projects in the developing world. In each of these cases I don’t just buy the product and its utilitarian attributes: the coffee, the shoes, the toilet paper. I am also buying a certain ethics, an opportunity to feel good about myself, to convey my values to the world, and in a small way to share the good life with others. When we purchase products we are often making decisions not just about the specific attributes or uses of the product, but also about what purchasing that product says about us: our taste, politics, ethics and values.

The problem is though that this can feel futile. We somehow feel responsible in our individual consumption choices for larger political, social and market structures. Here’s a provocative illustration of this dilemma from the satire of hipster life in Portland, Oregon – Portlandia – the famous chicken scene.

So, in the scene we have two hip ethical consumers deliberating about whether to eat an organic chicken. It is satire. But, it contains a kernel of truth. In these moments when we feel our individual consumption decisions have larger ramifications, we somehow feel a sense of absurd responsibility. The decision to eat a chicken isn’t just a decision about feeling hungry or what you like to eat, it is also a decision about your ethics and values, and your role in perpetuating a system of industrial farming. Here’s the thing though. Individual moments of consumption are probably not where larger market dynamics get changed. Better or more ethical farming could only emerge through collective action: policy change, industry accountability, regulatory frameworks and so on.

Ethical consumption is arguably, then, more a way for brands to present themselves as in sync with our values, or to offer us symbolic resources to convey our values, than a politically effective way of orchestrating change in the world. This discussion alerts us, though, to the various ways in which consumption is hard work. Not just because it conveys our ethics and values, but also because of the ways in which our consumption decisions convey our sense of taste. Often, this is part of a kind of loop between our consumption practices and our social media use. When we go to cool venues or fancy restaurants, or buy new clothes, they’ll often appear conspicuously or subtly in our social media profiles. They say something about our taste. We incorporate brands into our self-narratives. Our consumption choices can say a lot about us.

In the Netflix comedy series Master of None by Aziz Ansari there’s a wonderful scene where he obsessively searches Google and Yelp to try and find the best taco in New York. He spends so long looking for the best taco truck that by the time he finds it, it has shut down. It’s cutting because it’s something many of us spend a lot of time doing – trying to figure out the best place to go.

These are the judgments of making ‘tasteful’ consumption decisions. Furthermore, our media devices, apps and platforms are central to the work of searching, evaluating, locating the best options – and once we’ve made our choice, promoting our good judgement to our peer networks in the form of images and updates. We pull out the phone and research the best place to go for a drink with friends, once there we use the phone to take photos that tell everyone what a great decision we made.

If, during the past generation, brands came to rely more and more on our capacity to incorporate brands within our lives, then social media provides the tools to dramatically intensify these practices. On social media, brands rely less on telling us what to think and more on providing us with the resources we need to include brands within the streams of images, videos, comments and likes that we create.

Brands teach us something profound about how our current media system operates. This is a media system organised around the logics of participation and surveillance.

Participation. The continuous translation of our lived experience into images, comments and ratings. We do the work of creating narratives about consumer culture that our peers see. This enables an incredibly powerful form of branding to emerge, one where brands can operate in highly reflexive and customised ways. A brand can come to mean many different things, depending on the cultural context and social network within which it is being made meaningful by consumers.

Surveillance. Brands can operate in a more participatory way because social media platforms facilitate the translation of lived experience into streams of data. This data enables brands to make predictions about and respond to consumers in increasingly customised ways. On social media platforms brands don’t rely on our sincere belief in their claims so much as on our continuous, participatory incorporation of them into our lives, and our willingness to be monitored.

Facebook is an engineering project

Facebook launched as a public company in 2012; immediately after the listing, the stock price sank.

Facebook had a big problem, and the market wasn’t convinced Facebook could solve it.

Users were starting to ‘go mobile’. To access Facebook predominantly through their smartphones.

That seems strange now, but remember: for the first half-decade or so, Facebook was a social networking site that people used almost exclusively via a web browser.

This shift to mobile was a big problem because Facebook had no tools for generating revenue from mobile users. Its revenue came from web-based advertising. Zuckerberg and Facebook put together a team whose job was to take the platform mobile. They had to figure out how to make Facebook a ‘mobile-first’ company in a profitable way. Until about 2013 many observers were not sure Facebook could make this transition. There was the possibility that some other natively mobile platform might rise up in its place.

That’s not what happened: Facebook have clearly transitioned to mobile successfully. How did they do it?

Let’s chart this story, not just because it tells us some important things about the becoming-mobile of media, but also because it illustrates how media platforms operate as engineering projects. They are never static or stable; they are always in the process of imagining and inventing the next version of their infrastructure.

This makes them radically different from the media institutions of the twentieth century. For all the ways that television came to saturate everyday life in mass societies, as a technological form it changed very little. It remained, more or less, a box sitting in the home that you turned on or off, selected from a number of channels, and viewed professionally-programmed content.

In the ten to fifteen years that media platforms like Facebook have been with us they have re-engineered their infrastructure in significant ways multiple times.

The first important development in this story about Facebook engineering itself as an algorithmic and mobile platform arguably happens in 2006 when Facebook launched the News Feed.

Try to imagine Facebook without a News Feed. Maybe you were never even a user before the feed. Back then, it was more like MySpace, or an online dating site, in that individual user profiles were the organising point of the network. When you logged on you were placed at your personal profile page, and went from user profile to profile. There was no feed that aggregated recent content produced by everyone in your network. You only knew if a friend had posted some new photos if you went and checked their actual profile.

Facebook realised this was a problem because they couldn’t manage user engagement with the site. This also made serving advertising content difficult. MySpace ran into this issue too, where advertising began to clutter profile pages or appear as interstitials between page loading – disrupting the user experience. This problem killed off MySpace. Facebook thought if they could develop a feed of content they curated, and then refine it over time, they could learn to serve users content that would keep them engaged with the platform more often and for longer periods.

Rather than us navigating our way randomly around the network, Facebook would use the News Feed to predict and program our engagement with it. Users were initially not happy about the new feed. The News Feed has been at the heart of the Facebook experience for ten years now. That means it has ten years of data about users it can use to shape what they see.

The News Feed gave Facebook greater control over user engagement, which was crucial, and it would also prove to be critical to solving the problem with their mobile advertising revenue.

I wonder how many hours of scrolling in the feed it has on me. Facebook says it captures, on average, 50 minutes of our attention each day. Over ten years, that’s more than 3000 hours in which Facebook has monitored our habits, preferences and interests.
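As a quick back-of-the-envelope check of that figure (my own arithmetic, sketched in Python):

```python
# Rough arithmetic: 50 minutes of attention per day, sustained for ten years.
minutes_per_day = 50
days_in_decade = 365 * 10

total_minutes = minutes_per_day * days_in_decade  # 182,500 minutes
total_hours = total_minutes / 60                  # ~3,041 hours

print(round(total_hours))  # roughly 3000 hours
```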

The News Feed began to increase engagement with the platform, but Facebook’s value was still limited by the fact that they had very low click through rates on the advertisements that appeared on the right hand side of the browser interface. While Facebook had the capacity to customise the targeting of ads to individual users based on data it collected about them, the value of this was decreasing because users never clicked on the ads.

Once it became a public company, Facebook's advertising model came under significant scrutiny.

'Here are Facebook's 7 biggest advertising problems' reported Business Insider.

The Atlantic wrote 'Suddenly, Facebook's Advertising Problem is a Problem'.

Advertisers didn’t really value these ads as a media property because they couldn’t offer strong user engagement. And the problem was made worse because these ads were not visible on the mobile app at all. At the start of 2009, Facebook had 20 million users on their mobile platform. By 2010, they had 100 million mobile users. There was rapid growth in mobile use with the penetration of smartphones and large mobile data plans. Early versions of the mobile app had fewer features than the desktop version, and the app was notoriously slow to load and scroll. Still, users kept choosing the mobile app over the desktop site for convenience, and Facebook was forced to catch up.

By 2011, 430 million users were mobile, making up 50% of daily engagement with the platform. So, before Facebook even launched as a public company, mobile had become a critical strategic issue. They needed to figure out how to make the mobile app work seamlessly and how to get most of their revenue from it.

When Facebook went public in 2012 its user base was 60% mobile but they could make no revenue from any of this mobile engagement. Think about that: in 2012 Facebook could not generate a single dollar from 60% of their user base. Facebook was also under threat from emerging mobile-first platforms like Instagram. When they bought Instagram for $1 billion in 2012, they were making a strategic play. Instagram had been siphoning off users from Facebook, and they needed them back. But Instagram brought another version of the mobile problem: it had no advertising model and therefore made no money.

Both Facebook and its new acquisition Instagram had to work out how to generate value from their mobile apps. The answer was to develop a native advertising model, something that had never been done at scale before. A native model weaves paid advertising into all other content on the platform. Rather than have the ads appear separately on the side, they would flow through the News Feed or Home feed along with everything else. Facebook had to figure out a way to integrate advertisers’ content into the everyday flow of the News Feed.

In 2012 Facebook launched promoted posts in the News Feed, and over the past several years these have become the backbone of Facebook’s revenue. Today more than 90% of Facebook advertising revenue comes from native content integrated into the News Feed. Creating the News Feed was the first step in ‘natively’ integrating advertiser content into the platform.

The News Feed uses algorithms and data analysis to determine the right ‘balance’ between selling the attention of users to advertisers and maintaining their ongoing engagement with the platform. Too much irrelevant paid content and you’ll stop using Facebook. Not enough and the platform doesn’t make enough revenue. Facebook finds the right balance for each user via complex data analytics.
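To make the idea concrete, here is a toy sketch of that balancing act in Python. Everything in it is invented for illustration – the function, the scores, the ad interval – the real News Feed weighs thousands of signals per person and is not public:

```python
# Toy illustration only: interleave promoted posts into an organic feed.
# Too many ads and users disengage; too few and the platform earns nothing.

def build_feed(organic_stories, promoted_posts, ad_interval=10):
    organic = sorted(organic_stories, key=lambda s: s["relevance"], reverse=True)
    ads = sorted(promoted_posts, key=lambda s: s["bid"] * s["relevance"], reverse=True)

    feed = []
    for position, story in enumerate(organic, start=1):
        feed.append(story)
        if ads and position % ad_interval == 0:  # one ad per ten organic stories
            feed.append(ads.pop(0))
    return feed
```

In practice the ‘interval’ isn’t fixed: the platform tunes the paid share for each user via the data analytics described above.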

Once this native model was working revenue started to flow from the mobile user base and Facebook’s share price started to recover. In solving this major strategic problem Facebook also dramatically reshaped the whole media system: it invented a model of native advertising that worked at scale. By 2013, Facebook’s user base was 80% mobile. Users were using Facebook and Instagram more often in more places. This offered the platforms more points of data and opportunities for engagement.

The platforms could ‘auction’ an expanding array of moments from our everyday life to advertisers, increasingly based not just on who we are, but also on where we are, who we are with and what we are doing. The becoming-mobile of media is associated with an intensification of how much of our daily lives can be made visible to media platforms. By 2013 the business press started to acknowledge that Facebook had solved their mobile problem, and had therefore established themselves as a durable platform in the media landscape. By 2014, 50% of Facebook's revenue came from mobile. By 2016, Facebook was 90% mobile and 85% of its revenue came via promoted posts in the mobile app.

Facebook’s re-engineering of its platform does not stop. The platform has begun to experiment with integrating shopping into Pages, Profiles and News Feeds. At the 2017 F8 developer conference Mark Zuckerberg presented a vision of what Facebook looks like after the smartphone.

You look into your bedroom and see what your bed would look like with a new bedspread you are thinking of buying.

You walk past a bar and see in the window a video and review from a friend when they went there.

You have a packet of muesli on the kitchen bench while you are having breakfast; you see a person leap out of the package, stand on your kitchen bench and stage a demonstration of the farm where your muesli is made.

Facebook are imagining forms of consumption that are even more natively woven into our experience.

 

Media goes mobile

The launch of the first Apple iPhone will stand the test of time as a significant historical event. It marks the transition from the television to the smartphone as the organising device of the media system. During the product launch, Apple and Google staged a demonstration of the Google Maps app. Jobs searched for the nearest Starbucks and the app showed it to him on the map, together with the shortest route. This seems banal now, a part of everyday life in the city. But in 2007 it was truly staggering; you can see and hear how amazed the audience is.

I worked for a phone company in the early noughties that was trying to build one of these ‘internet phones’. To many in the company it sounded like science fiction. Apple surprised the telecommunications world, but perhaps it isn’t so surprising that it was a computer company rather than a telecommunications company that invented the ‘killer device’. The telecommunications industry had been trying to invent an ‘internet phone’ for a decade, but its devices somehow always ended up looking too much like phones.

Social media’s popularity depended on the invention of the smartphone: for the social-ness of media to intensify, the media infrastructure had to be embedded within our everyday lives and practices. The smartphone was a critical piece of infrastructure in building the 'culture of connectivity' of media platforms. Media platforms signal a dramatic change in the business model of the culture industry. The cultural and media industries are still primarily in the business of producing content and audience attention that they sell to advertisers. But the flows of attention and associated revenue have shifted dramatically.

For much of the twentieth century, ‘rivers of gold’ in the form of advertising revenue flowed into media institutions like newspapers and television stations. These rivers of gold funded ‘quality’ content: investigative journalism and local television drama, to name just two examples. Those rivers of gold no longer flow through these mass media institutions; instead they flow into media platforms like Google and Facebook. These media platforms do not invest their profits in quality content, however, but rather in media engineering projects – augmented and virtual reality technologies, logistical extensions like driverless cars, and machine learning. As they do, the role that media plays in the organisation of everyday life extends beyond simply shaping webs of meaning.

Media are no longer just a tool for managing the social process of representation, they also act as infrastructure for organising the logistics of everyday life.

Media platforms

Facebook’s motto ‘move fast and break things’ captures the ‘disruptive’ spirit of Silicon Valley technologists. In the past decade our public culture and media system has been dramatically disrupted by the emergence of major media platforms. Investors call them the FANGs. Facebook, Amazon, Netflix, and Google.

Alex Hern and Nick Fletcher wrote about the FANGs in The Guardian in April 2017:

From Standard Oil at the turn of the 20th century to IBM and General Motors in the 1970s and General Electric in the 1990s, the US has always produced behemoth corporations that bestride the world. But this is the era of the Fangs, the “big four” of technology, and they are currently growing at breakneck speed.
Facebook, Amazon, Netflix and Google have been roaring away since the turn of this year. Their share prices have climbed so far, so fast, that together they are now worth an extraordinary $250bn more than just four months ago.
To put that sum into perspective, compare it to the value of all the gold mined in the world in a year, which is worth about $115bn. Or look at it another way - $250bn is about the same as the annual GDP of countries such as Venezuela, Pakistan and Ireland.
Together the four firms are now valued on Wall Street at more than $1.5tn, about the same as the Russian economy.

Think of the list of social institutions and practices that have been irrevocably changed, and in some cases, destroyed by the emergence of the FANGs: journalism, television, advertising, shopping, finding your way around the city, politics, elections, dating, and fitness. For a start.

Alongside the behemoths are an array of platforms that each in their own way are the site of major cultural disruption and innovation. Twitter is remaking the speed and quality of public speech. Instagram is reinventing photography, and along with it how we portray and imagine our lives and bodies. Snapchat is collapsing the boundary between the public and intimate. And, along with it, inventing an immersive augmented reality where we see our bodies and world overlaid with digital simulations. Tinder is changing the rituals of sex, love and dating. Fitbit is remodelling how we understand our bodies.

What do these corporations make?

The simple answer is that they engineer platforms.

If the media institutions of the twentieth century were highly efficient factories for producing content, the FANGs make platforms. Of course, some of them, like Amazon and Netflix also produce content, but their value proposition and their disruption comes from the platform.

So, what is a platform?

A platform is a computational infrastructure that shapes social acts.

Jose van Dijck explains that platforms are:

Computational and architectural concepts, but can also be understood figuratively, in a sociocultural and a political sense, as political stages and performative infrastructures.
A platform is a mediator rather than an intermediary: it shapes the performance of social acts instead of merely facilitating them.
Technologically speaking platforms are the providers of software, (sometimes) hardware, and services that help code social activities into a computational architecture; they process (meta)data through algorithms and formatted protocols before presenting their interpreted logic in the form of user-friendly interfaces with default settings that reflect the platform owner’s strategic choices.

In this definition van Dijck sets out five technical components of a media platform: data, algorithms, protocols, interfaces and defaults.

These components offer a schema for analysing how platforms work and conceptualising how they orchestrate the interplay between humans and computational processes.

Data are any information converted to a digital form for computer processing. The categories of data expand. Numbers, text, images, sounds, movements.

Metadata is data about data. Metadata are ‘structured information’ that describes, explains and locates ‘information resources…’ to make it ‘easier to retrieve, use, or manage’ data.

When you like or tag an image on Instagram you are adding metadata to it.

When you upload an image the platform attaches metadata to it: a time stamp, location coordinates, faces recognised in the image.
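As a hypothetical illustration, that attached metadata might look something like this as a record (the field names here are my invention, not Instagram's actual schema):

```python
# Hypothetical metadata record attached to one uploaded image.
image_metadata = {
    "uploaded_at": "2017-08-14T19:32:05+10:00",      # time stamp
    "location": {"lat": -27.4698, "lon": 153.0251},  # location coordinates
    "faces_recognised": ["user_1042", "user_8871"],  # faces matched to profiles
    "likes": 23,                      # metadata added by other users
    "tags": ["brisbane", "friends"],
}
```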

An algorithm is a programmed decision-making sequence. A list of ‘well-defined instructions’ or ‘a step-by-step directive’ for undertaking a procedure.

Facebook’s News Feed algorithm is a decision-making sequence drawing on thousands of data-points to decide what content to put at the top of your feed.

Algorithms learn. They are not stable sequences, but rather constantly remodel based on feedback they get about the effectiveness of their previous predictions and decisions.
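A toy sketch of that feedback loop, with a single weight standing in for the far more elaborate machine learning platforms actually use:

```python
# Toy feedback loop: nudge a signal's weight towards what the user did.
def update_weight(weight, predicted_click, actually_clicked, learning_rate=0.05):
    outcome = 1.0 if actually_clicked else 0.0
    error = outcome - predicted_click        # how wrong was the prediction?
    return weight + learning_rate * error    # remodel for next time

w = 0.5
w = update_weight(w, predicted_click=0.8, actually_clicked=False)  # weight falls
w = update_weight(w, predicted_click=0.3, actually_clicked=True)   # weight rises
```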

Protocols are the platform rules. Users must obey the protocols to use a platform.
The first protocol of Facebook, Instagram, Snapchat, and Twitter is that to use it you must create a user profile.

Interfaces are the touchpoint between a user and the computational infrastructure. When your finger touches down on the screen of your smartphone to like a piece of content, scroll through a feed, or flick away a potential date, your lived experience is interfacing with the platform.

Interfaces contain buttons, scroll bars, and icons. Interfaces emerge, and are customised, based on the decisions of algorithms.

Defaults are the pre-loaded settings of a media platform, designed to nudge user behaviour in certain directions.

Your Instagram profile defaults to public unless you set it to private. Defaults make platforms user-friendly by cutting down the decisions a user needs to make, but they also establish preferred ways of using the platform that align with the commercial strategies of platform owners and investors.
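In code terms, a default is just a pre-filled setting the user can, but mostly won't, override. A minimal sketch (the settings here are invented for illustration):

```python
# Illustrative only: a new profile ships with owner-preferred settings.
DEFAULT_SETTINGS = {
    "profile_visibility": "public",  # nudges users towards visibility
    "personalised_ads": True,        # aligns with the business model
    "location_tagging": True,
}

def create_profile(overrides=None):
    settings = dict(DEFAULT_SETTINGS)
    settings.update(overrides or {})  # users *can* change these; most never do
    return settings

profile = create_profile({"profile_visibility": "private"})
```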

Media engineering

Where the media institutions of the twentieth century mostly employed professionals who produced content, the media platforms of the present employ a range of people whose job is to engineer connectivity between platforms and human users. They work to construct new applications of data, to expand the capacity of algorithms to make contextual predictions about users, and to design interfaces that more seamlessly capture and direct human attention.

Platforms expand the range of things that media technologies ‘do’ in our society. They no longer just convey symbolic meaning, they increasingly function as the logistical infrastructure of everyday life. In the past decade media platforms have made media mobile and logistical. Via the smartphone our bodies are continuously tethered to media platforms.

Media are logistical in the sense that they organise flows of information, bodies and ideas.
Google Maps locates us in space and then offers us directions or suggestions depending on what we are searching for. Google intends to go beyond maps to providing the driverless car for us to sit in. Think about that: a media company may bring the most disruptive innovation to transport since the invention of the car. Tinder or Grindr organises potential hook-ups and dates based on proximity. The Fitbit monitors and visualises our bodily rhythms like movement or heartbeat. Amazon is a global logistics network organising the production and consumption of nearly all the material objects in our homes. Amazon imagines its platform integrated into our homes, listening and sensing, predicting what we need and delivering it in real time.

At present, our point of engagement with media platforms is mostly via a touchscreen.
This won’t be the case for long. Google, Facebook and Amazon are all major investors in augmented reality and artificial intelligence engineering projects that will transform the interface between humans and media.

Facebook’s Chief Operating Officer Sheryl Sandberg explained at Recode in 2016 that while the current business plan is focussed on monetisation and optimisation of the existing platform, their ten-year strategic plan is focussed on ‘core technology investments’ that will transform the platform infrastructure.

 

Mark Zuckerberg echoes Silicon Valley consensus when he says it is ‘pretty clear’ that soon we will have glasses or contact lenses that augment our view of reality.

Amazon’s Jeff Bezos imagines we will soon live in homes surrounded by ‘voice first’ devices that listen and respond to our conversations.

Asked about where the Amazon platform is headed next, Bezos pointed to artificial intelligence and machine learning and said it is

quite hard to state how big of an impact it is going to have on society over the next twenty years… new and better algorithms, vastly superior computing power and the ability to harness huge amounts of training data. Those three things are coming together to solve some huge problems.

Media platforms are media engineering projects.

These projects are approaching perhaps the end of their first wave: weaving themselves into everyday lives, attaching themselves to our bodies, building the infrastructural underlay of everyday relationships, communication and market activity.

They are now pressing into their next stage driven by machine learning, artificial intelligence and augmented reality. Their disruption of media is just beginning.

 

Cohabitation with data-processing machines

I deleted the Facebook app from my phone in November last year. It was like breaking up with a machine. I was pulling at the News Feed a lot. It was habitual, autonomic even. I would find my finger bouncing the top of the feed, my gaze in a trance, even when I was in the middle of doing something: cooking dinner, walking to my car, looking for something at the supermarket.

This intriguing relationship between a human and a non-human computational procedure is now at the heart of everyday media use. Ted Striphas (2015) calls this algorithmic culture. We now live in societies where we delegate a range of judgments about our culture to machines.

I don’t think I deleted Facebook because I was ‘addicted’. This was not a moment of moral panic about my lack of self-control. I deleted it because it was too immersive, too programmed, too affirmative.

What happens in that moment when you open your app or pull at the top of your News Feed?
Facebook engineers explain that 'News Feed ranks each possible story (from more to less important) by looking at thousands of factors relative to each person'.

Facebook designed the algorithm to do this because

there is now more content being made than there is time to absorb it. On average there are 1500 stories that could appear in a person’s News Feed each time they log onto Facebook. Rather than showing people all possible content, News Feed is designed to show each person on Facebook the content that’s most relevant to them. Of the 1500+ stories a person might see whenever they log onto Facebook, News Feed displays approximately 300.

The News Feed is driven by a content-recommendation algorithm that learns how to satisfy each individual user, to keep them engaged with Facebook more often and for longer periods of time.
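In rough sketch form, the selection those engineers describe looks something like this. This is my own simplified illustration, not Facebook's code; the 'thousands of factors' are reduced here to a handful of weighted signals:

```python
# Sketch: score every candidate story against one user's data,
# then keep only the top slice for display.
def rank_news_feed(candidate_stories, user_signals, limit=300):
    def score(story):
        # stands in for 'thousands of factors relative to each person'
        return sum(user_signals.get(factor, 0.0) * value
                   for factor, value in story["factors"].items())

    ranked = sorted(candidate_stories, key=score, reverse=True)
    return ranked[:limit]  # roughly 300 of the 1500+ possible stories
```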

Designing the algorithm is complicated. Computer scientists and engineers build machine learning systems that process the expanding array of data Facebook collects about users.
Psychologists, anthropologists and user-experience researchers observe human users to understand how they use Facebook in real life.

One of those anthropologists, Jane Justice-Leibrock, describes an exercise she undertakes, asking Facebook users to ‘reverse engineer’ their News Feeds.

I gave each participant a stack of recent stories from their feed, printed out on paper, and asked them to pick out the ones that interested them and discard the rest. Next I asked them to sort the remaining, interesting stories by putting them into piles separated by what they liked about each.

She took her insights back to Facebook and engineers tested them against their machine learning models and incorporated them into the algorithm design.

Designing the News Feed is a process of harmonising the interplay between human and machine decision making.

The machine learns to predict what the human wants by observing them. I deleted Facebook shortly after Trump was elected because the News Feed kept immersing me in a loop of outrage, anxiety and disbelief. On the one hand it had learned my preferences correctly. I had been clicking lots of Trump stories. On the other hand, it didn’t know how to make a judgment about what I was thinking. I didn’t want to read any more stories that simply affirmed and recycled progressive outrage. There was no point to it.

The News Feed algorithm, though, could not break out of this loop.

This relationship between human and algorithm is critical to Facebook, Instagram, Twitter, Tinder, Netflix, Amazon, Google. Reading news, watching television, shopping for clothes, looking for dates. We do the work of coding our lived experience into the databases of media platforms; that data feeds the algorithmic machines that then shape our experiences.

There are two kinds of information we create: data and metadata.

The data is all of the content you ever create on the platform: status updates, chat logs, photos. Metadata is data about data. A recording of your engagements with a media platform: what you view, like, share, comment on, which groups you join, what pages you search, which friends you ‘stalk’, what kinds of content you ‘pause’ on as you scroll down your news feed and so on.

Facebook let you download a portion of the data they store about you.

Go to Facebook; along the blue bar at the top, find the drop-down arrow at the far right.

Click on settings.

You’ll see at the bottom of your general account settings there’s a link titled ‘download a copy of your Facebook data’.  

Facebook give you a list of what’s included in the archive you can download: your about me section, the ads you’ve clicked on over a limited period, ad topics associated with your profile, history of your chats, photos you’ve uploaded, personal information you’ve entered.

Hit the ‘start my archive’ button.

Now wait, sometimes a while, for Facebook to send an email with a link to download your archive in a zip file.

It arrives. I open it.
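If you want to poke around the archive programmatically rather than by clicking, here is a minimal sketch (assuming you saved the download as facebook-archive.zip; the exact files inside vary over time):

```python
import zipfile

# List everything Facebook included in the archive.
with zipfile.ZipFile("facebook-archive.zip") as archive:
    for name in archive.namelist():
        size = archive.getinfo(name).file_size
        print(f"{name}  ({size} bytes)")
```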

My first impression is that a lot of stuff is missing.

Take ‘ads clicked on’ and ‘ad preferences’. Facebook will only give you the past two months or so of the ads you clicked on. With ad preferences, they will only give you vague categories like your political views.

My archive records what ‘preferences’ Facebook assigned me, but not how those preferences were generated. It tells me what ads I clicked on, but not what data was used to target those ads to me.

That said, there are some revealing observations.

Go to ‘ads you clicked on’ and have a look. Like most users, I thought ‘I don’t click on the ads’.

But when I look, there are a number of things I clicked on that I didn’t realise were ads.

There is a Buzzfeed story about food safety. Turns out that was a promoted story. The Queensland government paid Buzzfeed to write and promote the story into the Facebook news feeds of Queenslanders.

Nearly all the ‘ads I clicked on’ are not ‘ads’ in the traditional sense; they are promoted stories in my News Feed like this Buzzfeed one. Some are oblique. A story about the best cafes in my area: it turns out one of those cafes was pushing the story as a promoted post into the feeds of people who like their page.

For the most part though, this archive of ‘personal information’ is not very useful. This information is only valuable to Facebook because they can put it to work within their platform ecosystem. Take the facial recognition data for instance. Go look for that in your archive. You’ll see that Facebook have assigned your face a unique number, which is basically a biometric identifier for you. OK, so they’ll tell you the number – but what’s the point?

You don’t have access to the software that can compute that number and use it to find your face on Facebook. To really be transparent about biometrics Facebook would tell you not just the unique biometric number for your face, but perhaps how many times your face has been recognised or classified on Facebook. Or, they would provide access to a tool that lets you generate an archive of every image Facebook thinks you are in (or has assigned you to), regardless of whether you tagged yourself or not.

All the data in this archive – your preferences, interests, your age, your relationships – only becomes valuable when Facebook puts it into a database that enables them to make judgments about you in relation to the 1 billion other users of the platform.

The value Facebook generates from your activity comes from them associating your data with the data of other users. It is the scale of the social networks that Facebook can analyse that enables them to generate value. One archive on its own has hardly any value.

I have a box under my house full of stuff from when I was a teenager. Invitations to parties, actually printed on paper. Notes and letters. Playlists. Sets of photographs from parties, and festivals and road trips. Concert tickets. Street press. Gig posters. Beer coasters. I don’t know what to do with it. My Facebook archive contains about ten years of personal information; so does the box under my house. The difference between them though is that the box under my house is connected to nothing but my own living memory.

In The Culture of Connectivity Jose van Dijck (2013) suggests that social media platforms are socio-technical architecture. By socio-technical she means that 'online sociality has increasingly become a coproduction of humans and machines'.

The Facebook platform is a constantly evolving assembly of living human experience and the data-processing power of computational machines. In this computational architecture a personal archive acquires a value it never previously had. Data about each individual becomes the basis of orchestrating the relationships between humans and the media platform: shaping their experience in granular ways, often driven by the commercial imperatives of platforms and the advertisers that fund them.

 

Write like you speak: writing for audio

The key principle of writing for audio is: write like you speak.

This post sets out some principles for writing for audio.

An audio script should be sharply written, engaging and thought-provoking. An educated member of the general public should be able to understand it and find it interesting.

For this post, I’m talking specifically about a task where you turn a walkthrough and vignette into a provocative first-person analysis of your engagement with media.

A script should prompt the listener to think critically about contemporary media technologies, their uses, and impacts on society, culture or politics. In this case, your script should evocatively describe and critically reflect on your media use and draw on key debates in the field of media, communication and cultural studies.

Below I set out some principles of writing for audio together with some effective examples from radio programs and podcasts that offer provocative critical reflections on our contemporary media culture.

Principles of writing for audio

In this section I set out some principles and tips for writing for audio. These suggestions are drawn from the National Public Radio training website; I've provided links to the sources below.

Basic components of an audio script

A script for audio, radio or podcasts contains two basic elements: ‘tracks’ and ‘acts’.

Tracks (short for voice tracks) are the script read by the narrator. It is perfectly fine for your script to contain only tracks. A script that contains just the voice of the narrator is a ‘voicer’.

Acts (short for actuality) are the voices in a story that are not the narrator’s. These can include interviews, scripted re-enactments or dialogue, and found footage like news reports and video. They are also called ‘grabs’ or ‘sound bites’.

Key principles and tips

1. Write like you speak. Use your own voice. Use your own vernacular. This is the flow you know.

2. Start emphatically. Jump into the narrative. Something is happening. Pose a question. Create mystery.

3. Keep your sentences short. Audio requires more straight-forward sentences because listeners do not have the benefit of re-reading or reading slowly. Speak directly. Repeat key words to give emphasis. Only use phrases and expressions that you would use in normal conversation.

4. Structure the writing with two or three key points or narrative elements. Limit general description, and focus on precise and evocative illustrative details. Only you know what was left out. You might write a part so you ‘get it’, but your listeners don’t need to hear it; what they need is the key moment, the telling illustration, the specific critical insight.

5. Be a helpful narrator. Explain where you are going. Sketch a map. Set out why it is interesting. Spell out how things are connected.

6. More action than description. Verbs are better than adjectives. Edit closely at the sentence level.

7. Transitions between ‘tracks’ and ‘acts’ need to flow. Think about how an act picks up and extends your thought. Don’t have acts that simply repeat what the narrator has already said.

8. Speak definitively. Read your script aloud. This helps identify difficult phrasing, get a sense of tone and pace, and identify where to breathe. You might cut a sentence into a fragment or use an em-dash or ellipses in order to create space for a breath.

9. Paragraphs are typically short, even just one sentence long.

10. Signal emphasis. Denote which word in a sentence the speaker should emphasise. This helps a reader 'hear' what it sounds like as they read it. The easiest way to signal emphasis is to place a word in capitals or italics.

Examples: scripted audio that reflects on our lived experience with media

Here are some examples of researched, scripted podcasts that engage with our experience of media. Each of these episodes offers examples of writers using audio effectively to describe, reflect on and critically analyse media and cultural life.

Ira Glass' 'Finding the Self in the Selfie' (573 Status Update), This American Life, 27 November 2015

Listen to the prologue and first act of this episode. This features Ira Glass narrating and talking with Julia, Jane and Ella about selfies and Instagram. Glass evocatively describes the practice of taking and commenting on selfies. He makes this ordinary ritual seem strange and intriguing. He then sets about reflecting on how these practices work and what they mean. You can listen here or read the transcript here.

David Rakoff's '29' (328 What I learned from television), This American Life, 16 March 2007

David Rakoff reads a piece about an 'experiment' he undertook: watching 29 hours of television in one week. That's how much television the average American watches. Unlike the example above, this piece is entirely a first person narration where Rakoff reads some quotes from friends and family. The tone is snarky and funny, but the reflection is insightful. Rakoff carefully illustrates and then reflects on the forms of cynical and snarky enjoyment we get from watching 'trash'. You can listen here or read the transcript here.

Malcolm Gladwell's 'The Satire Paradox', Revisionist History 

Malcolm Gladwell carefully illustrates and critically examines how satire became part of our political culture. The writing demonstrates both evocative examples and sharp insights into the limits of satire. You can listen here.

Karina Longworth's 'Madonna: From Sean Penn to Warren Beatty', You Must Remember This

Longworth's long-running podcast You Must Remember This is a great example of scripted audio in the genre of creative non-fiction. Longworth takes the history of Hollywood and scripts it as creative narratives. You can listen here.

Reflect, describe and critique: walkthroughs and vignettes

Cohabitation with media

The habitats we live in are made up of complicated relations between humans and technologies. Digital and media technologies are woven into our homes and public spaces, tethered to our bodies, and entangled with our imaginations.

Once we might have ‘studied’ media mostly by examining its symbolic content, or by examining how symbols are produced and consumed.

Now we need to also account for media as socio-technical infrastructure. That is, as infrastructure made up of humans and material media technologies that jointly construct the cultural, economic, social and political systems within which we live.

Drawing inspiration from critical media and cultural researchers I sketch out two techniques here – walkthroughs and vignettes – for purposefully reflecting on our cohabitation with media.

What’s a walkthrough?

A walkthrough is a method used by researchers and technology designers who examine how people use media in their everyday life.

A walkthrough is useful for carefully documenting the interplay between the design of media technologies and the way people use them. This is a dynamic relationship.

On the one hand, human users play an active and creative role in using media in our lives.

On the other, media technologies are purposefully designed to structure how humans make use of them.

For research purposes, a walkthrough is a systematic analysis of how a media technology is designed to work and how it is used within everyday life.

Walkthroughs are most often used to examine interactive digital media technologies like apps or social media platforms. That said, the set of questions and concerns below work just as well for examining any media technology from books and newspapers, to cinema and television, to wearables and augmented reality. The principles of examining carefully the interplay between humans and technology remain the same.

Here are some of the things a walkthrough does:

1. Explains how something works.

A walkthrough is a step-by-step explanation of how a media technology works.

In a vernacular sense, we see walkthroughs all the time. For instance, if you ever wanted to block a number on your phone, or change your privacy settings on Facebook, or know how to create a particular effect with make-up, you probably jumped on Google or YouTube and searched ‘how to…’. These ‘instructional’ videos on how to do things are an everyday example of a ‘walkthrough’.

2. Critically examines the social, political and economic setting of media use.

A walkthrough examines how a media technology is shaped by commercial, social, cultural, political and legal conditions.

This can mean paying attention to aspects such as the commercial business model of a media technology and examining how it shapes its design, or examining the privacy policy of a media technology and thinking about how that institutionalises relations between users and platforms.

Questions you might ask:

  • What are the commercial arrangements that sustain this product, service, technology, app or platform?
  • What contractual arrangements does the media technology create with users?
  • Do the users of this media technology undertake labour?
  • Do the users of this media technology create content or data?
  • If so, what value does that content or data have?
  • How does its advertising model work?

To answer these questions your walkthrough might lead you to investigate the privacy settings, privacy policy, and advertising interface of the platform or media technology.

3. Critically explores the choices and affordances a media technology offers users.

A walkthrough examines how media technologies structure the range of choices users have.

This involves carefully exploring the options given to users as they go about engaging with a media technology. This might involve examining the interface, protocols, defaults and algorithms of a technology.

For example, when I log on to Facebook I cannot choose what content I see; the News Feed algorithm makes that choice for me. Or, when you sign up to a dating app you might be given a limited set of categories for describing your gender and sexual preferences.

Questions you might ask:

  • What options do the users of this technology have to describe and express themselves?
  • What kinds of participation does this media technology facilitate?
  • What options or choices does the interface give users?
  • Do users need to create a personal profile to use this media technology? If so, what choices do they have in constructing the profile?
  • What ‘rules’ govern the use of this media technology? What are users able and not able to do with it?
  • Is data recorded about users?
  • Are algorithms used to shape the experience or engagement with this media technology?
  • What kinds of data are collected?
  • What public, political or cultural consequences does this data-processing have?

4. Critically analyses the symbolic qualities and practices unfolding in and around a media technology.

A walkthrough can pay attention to the symbolic landscape of a media technology.

This might involve a semiotic analysis of the design or interface of the media technology itself. For instance, until recently Facebook only enabled users to ‘like’ content, now they allow a limited range of ‘reactions’. It might also involve a critical analysis of the kinds of symbolic content that flow through a platform. For instance, we might pay attention to the kinds of narratives and identities that are represented on Netflix.

Questions you might ask:

  • What icons, images or colours are used to describe and facilitate use of the technology?
  • What kinds of lives, identities and bodies are visible via this technology?
  • What are the dominant and non-dominant forms of expression taking place via this technology?
  • Who is creating, sharing and consuming representations?
  • What is being represented? Your own life? The lives of others? Bodies? Brands? Cultural experience? Tastes? Political viewpoints?
  • How are these ideas being represented?
  • How are users engaging with them?
  • What controls are placed on the forms of symbolic expression allowed by this technology?

5. Critically documents and reflects on the use of a media technology in an everyday setting.

A walkthrough documents how a media technology is actually used as part of everyday life. Researchers often recruit informants who help them make sense of their media use. But, you can also do a walkthrough ‘solo’, by carefully documenting your own use.

Questions you might ask:

  • Where am I when using this media technology?
  • What time is it?
  • Who am I with?
  • What am I doing?
  • Am I using this technology to search for information, organise something, purchase something, express myself, post images of my body, monitor my mood or body?
  • Do I create ‘hacks’ or ‘workarounds’ to get around the rules of the platform? For instance, having two Instagram profiles, one private and one public; using a fake name on Facebook.
  • What actions do I take to manage my visibility or privacy?
  • How do I use this technology as part of my self-expression or relationships with others?
  • What are my feelings and mood?
  • Am I scrolling, glancing, and tapping?
  • Am I using two screens at once?
  • Am I relaxed, bored, anxious, just passing time, quiet and reflective, thoughtful, agitated?
  • What are the touchpoints between my body and the media technology? For instance, if I put headphones on, does this technology create a kind of immersive and private experience?

 

This is not a definitive list of questions or concerns; the important principle is that walkthroughs encourage us to think carefully about the dynamic relationships between media and human users.

In particular, the walkthrough helps us think about the relationship between our creative uses of media and the way they are purposefully designed. We come up with all sorts of creative uses of media as part of living our lives and expressing ourselves. At the same time, media technologies are designed to encourage certain practices and achieve certain strategic goals like increasing user engagement, generating profit, or advancing political objectives.

Walkthroughs also help us think about the active engagement of users with media: we aren’t just passive consumers of symbolic content, we are actively involved in the process of incorporating media into our everyday practices.

Sometimes users do things with media that designers didn’t intend; sometimes media constrain users or reinforce existing power relationships or rituals of communication.

Walkthroughs help us think about how power relationships work in participatory digital media cultures.

They focus our attention on the ‘socio-technical architecture’ of media. That is, the relationships between the ‘technical’ design of media technologies and their ‘social use’ by humans.

More on walkthroughs

The description of a walkthrough I’ve sketched here draws inspiration from the methods of a range of researchers who critically explore the interplay between media and cultural life.

If you want to read more about some of these methods, here are a few leads.

Livingstone, S. (2008). Taking risky opportunities in youthful content creation: teenagers' use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10(3), 393-411.

Livingstone’s (2008) method involves sitting down with young informants who explain in detail their use of social networking sites as part of their practices of identity formation and self-expression.

Light, B., Burgess, J., & Duguay, S. (2016). The walkthrough method: An approach to the study of apps. New Media & Society.

Light, Burgess and Duguay’s (2016) method involves carefully documenting smartphone apps drawing on approaches from Human Computer Interaction, Science and Technology Studies and Cultural Studies.

Robards, B., & Lincoln, S. (2017). Uncovering longitudinal life narratives: scrolling back on Facebook. Qualitative Research.
 
Robards and Lincoln’s (2017) method involves sitting down with informants who ‘scroll back’ through their Facebook timelines as a way of reflecting on both their life narratives and the affordances of the platform.

Caliandro, A. (2017). Digital Methods for Ethnography: Analytical Concepts for Ethnographers Exploring Social Media Environments. Journal of Contemporary Ethnography.

Caliandro (2017) offers the principles of 'follow the medium' and 'follow the natives', by which he means that to understand digital media we must both 'observe and describe' how media technologies work and observe and describe the practices of users and how users give meaning to their practices.

Walkthrough questions

In this walkthrough exercise, begin by selecting a ‘moment’ of engagement with media that you want to critically examine and reflect upon.

Take this moment and ask the following questions as a way of describing it.

Question 1: What day did this moment occur?
Question 2: What time did this moment occur?
Question 3: What media platform or channel were you using?
Question 4: What kind of media device were you using?
Question 5: Where were you when using the media technology?
Question 6: Were you consuming media content?
Question 7: Were you producing media content?
Question 8: Were you adding ‘information’ to media content produced by someone else by sharing, commenting or liking it?
Question 9: Was data being recorded about you?
Question 10: Are algorithms being used to shape your experience or engagement with this media technology?
Question 11: Do you need to create a user profile to use this media technology?
Question 12: What is the business model of this media technology?
Question 13: Are other users of this media technology visible to you?
Question 14: Do you interact with other users?
Question 15: Portray this moment in a short vignette.
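If you are documenting several moments, it can help to keep your answers in a consistent structure. Here is a minimal sketch of one way to do that in Python; the field names are mine, mapped loosely to the questions above:

```python
from dataclasses import dataclass

@dataclass
class MediaMoment:
    day: str
    time: str
    platform: str
    device: str
    location: str
    consuming: bool = False
    producing: bool = False
    adding_information: bool = False  # sharing, commenting, liking
    data_recorded: bool = True
    algorithmic: bool = True
    notes: str = ""                   # seed material for the vignette

moment = MediaMoment(day="Friday", time="8pm", platform="Netflix",
                     device="smart TV", location="couch at home",
                     consuming=True, notes="paused to check the football")
```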

Writing vignettes

Vignettes capture and express not just events and actions, but also their character and feeling.

Vignettes are used by researchers to document, reflect upon, and analyse everyday practices and relationships.

A ‘vignette’ is a brief evocative description and illustration of this moment of media engagement. A vignette combines descriptive detail with analytic commentary and critical reflection.

Your vignette only needs to be 100-200 words. Writing the vignette is preparation for drafting the script.

Your vignette should provide a step-by-step account of your actions with the media technology from the moment you first ‘pay attention’ or ‘engage’ with it to the moment you ‘disengage’.

The vignette should respond to these two questions:

  • What are you doing, thinking and feeling?
  • How is the media technology shaping your actions and experience?

You can also take up any of the questions in the explanation of walkthroughs above.

Below is an example of a vignette.

Watching Netflix: an example vignette

Friday night. 8pm. On the couch waiting for Netflix to load. We are going to watch season 2 of Love. We start. I want ice-cream. We pause it. I flick to the football to see what’s happening, then back to Netflix. Half the time I’m scrolling through Instagram or Twitter. Nicola is looking at Facebook and ASOS. She finds a good suit for me. When I look on my phone it costs twice what it costs on her phone. Netflix starts buffering.
A decade ago. Friday night. 8pm. I live in a share house that has no ‘tuned’ TV, just a screen and a DVD player. Nothing doing on a Friday night, no money. We get beers and rent The Wire on DVD. You rent it one disc at a time, about three episodes per disc. Watch one disc and then scramble back to Network Video before it shuts at midnight to get the next one. As we are renting it, a group from a share house down the road start cursing us. We are renting the disc they need. We wander back down the street and invite them to ours to watch it. We sit out the front smoking and drinking.
The ritual hasn’t changed. Both Friday nights are organised around the television screen, the home and the people you live with and love. And yet, a decade ago there was only one screen in the room; that screen wasn’t connected to a communication network. It didn’t stream and it didn’t watch. Network Video was finite, Netflix is endless. The DVD collected no information about me, the Netflix interface does. I started watching The Wire because a friend gave me a burned copy, and then I saw it on the ‘top picks’ list of one of the video store staff. I started watching Love because the Netflix algorithm predicted I ‘might like it’. ASOS offered Nicola a better deal on that suit because she flicks and scrolls and buys on the app more often. A decade ago the shops shut at 9pm on a Friday night and they didn’t know who I was.

 

Flexible production

Every so often someone on my Facebook shares a story about abandoned buildings, deserted factories, or broken-down shopping malls.

Michael Day’s photographs of Chernobyl and the abandoned town of Pripyat.

Christian Richter’s photographs of abandoned buildings in Europe.

Seph Lawless’ photographs of abandoned malls in America.

Zach Fein, Yves Marchand, and Romain Meffre’s images of abandoned industrial buildings in Detroit.

These images tap a fascination with the collapse of a prior world, a society organised around mass industrial factories. They evoke a nostalgia for a kind of city that has disappeared.

In the case of Detroit, the abandoned factories don’t just represent the collapse of the automobile industry; they mark the dissolution of a whole way of life that went along with that economic formation.

Factories, hotels, skyscrapers and homes have lain abandoned throughout the city since the collapse of car manufacturing there over the past generation.

These empty buildings signify the disappearance of working class families, communities and neighbourhoods.

In these abandoned buildings we see an eerie portrait of what happens when an industrial economy disappears and no new economic activity arises in those same spaces to replace it.

The networked information economy has created new winners and losers.

The cultural geographer David Harvey calls these changes in how capitalism organises itself ‘flexible accumulation’ and ‘uneven development’.

By this he means that some countries, regions, and even parts of cities have developed rapidly, while other parts have not.

Manufacturing jobs have left the highly developed Western economies and are now located in emerging economies like Southern China, where investors can find the right mix of affordable yet well-trained labour, functioning transport and energy infrastructure, and stable government.

The factories are now in the rapidly expanding industrial cities of Southern China.

Other parts of the world might have surplus or cheap labour, but not the stable infrastructure to support industry.

In the West we increasingly find ourselves living and working in post-industrial societies. The inner-city neighbourhoods in my home city of Brisbane are like the inner-city neighbourhoods of many Australian, American, UK and European cities: while the old industrial buildings remain, they no longer house factories. Instead, they are full of loft apartments, art galleries, fashion stores, bars, cafes, co-working spaces, gyms, restaurants, pilates and yoga studios. They are dominated by the service, leisure, knowledge and lifestyle economies.

Meanwhile, cities like New York, Los Angeles, London, Frankfurt, Singapore, and Sydney become densely wired. These are the places where internet and telecommunication infrastructure converge. They are the nodal points from which the flows of information that coordinate global production and consumption emanate.

If you are studying for a media or communication degree, you are basically training to become part of this info-rich knowledge-worker class.

Knowledge industry jobs seem ideal. Google image search ‘google office’ or ‘facebook office’ and you’ll see images of bright bean bags, gourmet food bars, and green campuses.

These are the new factories emblematic of the info-tech economies. Factories for the production of ideas.

You’ll no doubt do a very different kind of work to that which your parents or grandparents did.

Your grandparents probably had a very material job. My pop did: he was a furniture polisher. There are hardly any furniture makers left in Australia anymore.

Where most people a century ago had a job making something material, now most of us are immaterial labourers: we provide services, produce ideas and manage networks.

We make ideas, creative solutions, analytics, code, social networks, events, feelings and so on.

Doing this work relies less on our physical prowess and more on our cultural capital: the ability to have the right ideas, tastes, communication skills, ways of speaking and dressing to both envision and communicate the good life to others.

The emergence of an information rich global economy is very much a story of uneven development: new winners and new losers.

Check out the iPhone factories run by corporations like Foxconn in China. Compare these images to photographs of children from New York slums in the early twentieth century. Both are easy to find on Google images.

The children in New York sweatshops made clothes, shoes, and household items.

There are no kids working in organised sweatshops in New York today, but if you search for images of the dorms that iPhone assembly workers in China live in, you’ll see new kinds of labour exploitation.

The iPhone factory might not be as dirty, but it is deleterious to the health and wellbeing of workers in other ways. For more of this story, check out the BBC documentary 'Apple's Broken Promises' on iPhone production, which came out in 2014.

David Harvey explains that flexible accumulation reorganises rather than ‘ends’ industrial modes of production. Industrial factories, with their low-paid workforces, are shifted to the ‘periphery’ of the network, where they can be managed from afar by highly skilled managers in global cities.

Harvey notes that these workers are ‘a highly privileged, and to some degree empowered, stratum within the labour force’. They become powerful because ‘capitalism depends more and more on mobilising the powers of intellectual labour’.

Global network capitalism is flexible enough to manage many alternative forms of labour. When Harvey argues that production is shifted to the ‘periphery’ of the network, he is talking about the shift of industrial factories from developed countries to developing countries, but he is also talking about the emergence of flexible and exploitative forms of work within developed countries themselves like casual labour, subcontractors, and other informal labour practices that are part of industries like media, cultural and fashion production.

To conclude this brief sketch of the emergence of a global and networked mode of economic and cultural production, let’s make three claims about the changes that have re-ordered society and its media system over the past generation.

Firstly, a system of mass consumption that catered to homogenous cultural identities has given way to a mode of consumption – and cultural life – organised around multiple niche identities. We are less consumers of identities distributed by the mass media, and more active workers in fashioning and presenting ourselves as individuals who fit into the entrepreneurial and flexible culture of global capitalism.

Secondly, mass centralised forms of production have been replaced by responsive, just-in-time and networked modes of production. Enterprises from manufacturers to media are organised in highly flexible data-driven networks that respond to multiple and rapidly changing markets.

Thirdly, disciplinary and repressive modes of control are augmented with responsive and reflexive modes of control. A networked social system can manage populations via monitoring, affirmation and rewards. The culture industry doesn’t only shape mass populations with homogenous ideologies; it increasingly offers a data-driven infrastructure that is entangled with our bodies via the smartphone.

The global information economy is characterised by new modes of production, consumption and control. In global network capitalism culture is central to making and maintaining power relationships.

 

Networked capitalism

At the heart of a global information economy is an organising idea or principle: the network.

Networks are both a technical and a social achievement.

By technical achievement I mean the ability to build telecommunication networks, the internet and digital computation that enables the collection, storage, transmission and processing of vast amounts of data across time and space.

By social achievement I mean the way that the network becomes an idea for organising society, workplaces and cultural life.

Take the example of Amazon’s warehouses.

Apart from the enormous scale, look at the automated machines that sort and organise the books and household items that Amazon sells.

An individual consumer clicks on a website; the click travels through an information network and triggers a robot to fetch the requested book in the warehouse, where it is scanned, packaged, labelled and dispatched to the customer.

This is a new kind of factory, one that can respond automatically and simultaneously to the requests of individual consumers.
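To see the shape of this new kind of factory, here is a minimal sketch in Python of the kind of event-driven pipeline being described. Every name in it is hypothetical; it illustrates the pattern of one click driving a chain of automated steps, not Amazon’s actual software.

```python
# A toy model of an order-driven fulfilment pipeline.
# All names are invented; this illustrates the pattern, not Amazon's systems.

from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    item: str
    shelf: str  # where the robot should look

def robot_fetch(shelf: str, item: str) -> str:
    return f"{item} (from {shelf})"

def scan(item: str) -> str:
    return f"scanned:{item}"

def pack_and_label(item: str, customer: str) -> str:
    return f"parcel[{item}] -> {customer}"

def dispatch(package: str) -> str:
    return f"dispatched {package}"

def handle_click(order: Order) -> str:
    """One click on the website drives the whole chain of events."""
    picked = robot_fetch(order.shelf, order.item)
    scanned = scan(picked)
    package = pack_and_label(scanned, order.customer)
    return dispatch(package)

print(handle_click(Order("a customer", "a book", "aisle 7")))
```

The point of the sketch is that no human manager sits between the consumer’s request and the factory’s response: the information network is the chain of command.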

Amazon prides itself on being a highly flexible and data-driven organisation.

Its founder Jeff Bezos insists that the corporation will not adopt bureaucratic processes, but rather will remain a lean and flexible data-processing machine. Amazon is organised as a networked series of teams, each team responsible for producing outcomes that it can demonstrate with evidence from data analysis.  

We as consumers experience the results of data mining and analysis in the form of customer service. In conversation with Walt Mossberg at the 2016 Code Conference, Jeff Bezos describes the company’s use of both flexible data processing and networked manufacturing and delivery systems:

Mossberg: Google knows a lot about you, Facebook knows a lot about you, you know a lot about what books people want to read, what things people want to buy.  I don’t know, maybe you know some other things. Want to tell us domains in which you know…
Bezos:  [laughs] I want to talk to you Walt, in particular.
Mossberg:  What about privacy?
Bezos: One of the reasons we always want to greet you by name on Amazon is so that as soon as you come to the site you see ‘Welcome back Jeff Bezos.’ You know you’re not anonymous on our site. And you know that in a way that would never be as clearly articulated by a set of terms and conditions. Because we’re greeting you by name, we’re showing you your past purchases. So to the degree to which you can arrange to have transparency combined with an explanation for what the consumer benefit is, that’s sort of the commercial piece. And then you get into the tension between privacy and national security, and that’s what you see. We’re very like-minded with Apple on this point, we filed an amicus brief on their behalf. But I believe that this is an issue of our age, we as a citizen-run democracy are going to have to deal with that.

In this interview, Bezos points to the persistent tension between data processing systems used as an extension of customer service and the privacy and security concerns of citizens in a networked society. 

He also points to the casual way we encounter evidence of data mining, processing and analysis. When he describes customers being welcomed by name on Amazon’s website, he sees this as a form of transparency, one that communicates Amazon’s data use more clearly than a set of terms and conditions.

But what he also alludes to in this clip is that when we come across these clues of how our information is gathered and used on media platforms (being greeted by name, seeing our past purchases) we are providing our consent. We consent to data processing systems when we continue to use platforms, apps and services, and as consumers we are not exactly being duped. We are aware that our data is collected, and more and more we view data collection as a kind of admission ticket to participation in a larger networked society.

Amazon is a huge and complicated operation, but because it is structured around an information and data-processing network, individuals, teams and management can run it efficiently.

Amazon management don’t attempt to control the whole business; instead they set the parameters and systems of rewards within which teams and employees compete with each other.

This makes for a ruthless, yet highly innovative and efficient workplace culture.

In 2015, the New York Times published an extensive investigative report on the Amazon workplace.

Jodi Kantor and David Streitfeld reported that:

To prod employees, Amazon has a powerful lever: more data than any retail operation in history. Its perpetual flow of real-time, ultradetailed metrics allows the company to measure nearly everything its customers do: what they put in their shopping carts, but do not buy; when readers reach the ‘abandon point’ in a Kindle book; and what they will stream based on previous purchases. It can also tell when engineers are not building pages that load quickly enough, or when a vendor manager does not have enough gardening gloves in stock.
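To make one of these metrics concrete, here is a rough sketch of how an ‘abandon point’ might be computed from reading telemetry. Everything in it (the data, the book length, the median definition) is invented for illustration; the report does not describe Amazon’s actual method.

```python
# A sketch of computing an 'abandon point' from reading telemetry.
# Data, book length and the median definition are all invented.

BOOK_LENGTH = 300  # pages in a hypothetical Kindle book

# furthest page reached by each reader
furthest_page = {"r1": 42, "r2": 45, "r3": 300, "r4": 41, "r5": 44}

# readers who never reached the end are 'abandoners'
stopping_pages = sorted(p for p in furthest_page.values() if p < BOOK_LENGTH)

# take the median stopping page as the book's abandon point
abandon_point = stopping_pages[len(stopping_pages) // 2]

print(abandon_point)  # -> 44
```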

Amazon employees are also encouraged to monitor and produce data about each other using an Anytime Feedback Tool. This feedback informs annual rankings of team members, and those at the bottom of the rankings each year are eliminated.

In a networked, data-driven organisation like Amazon, employees are not so much told what to do from above as given incentives to outperform colleagues and other teams, and to provide the data that demonstrates it. Only the best survive.
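The Anytime Feedback mechanism is, at bottom, a rank-and-cut algorithm. Here is a minimal sketch of what that logic looks like; the scores and the cut-off are invented, the point is the mechanism: rank on peer-reported data, cut the bottom.

```python
# A toy version of rank-and-cut performance management.
# Scores and the 20% cut-off are invented for illustration.

feedback_scores = {
    "alice": 8.7, "bob": 6.1, "carol": 9.2,
    "dan": 5.4, "erin": 7.8,
}

def rank_and_cut(scores: dict, cut_fraction: float = 0.2):
    """Rank employees by score, then eliminate the bottom fraction."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = ranked[: len(ranked) - int(len(ranked) * cut_fraction)]
    cut = ranked[len(keep):]
    return keep, cut

keep, cut = rank_and_cut(feedback_scores)
print(keep)  # ['carol', 'alice', 'erin', 'bob']
print(cut)   # ['dan'] -- bottom of the rankings, eliminated
```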

This account of Amazon might be unsettling in terms of employee wellbeing and corporate culture, but that’s not the point I want to emphasise here.

The point that matters to us is that you can only imagine, create and maintain a corporation like Amazon if you have networked information technology: the capacity to constantly monitor every aspect of the operation, produce detailed data about it, and determine which elements are functioning and which are not in real time.

Also important to note here is the way that networked information-driven organisations combine the best, or worst, depending on your perspective, of managerialist and networked approaches.

Just like Henry Ford's factory a century ago, Amazon monitors ‘warehouse employees with sophisticated electronic systems to ensure they are packing enough boxes per hour’.

While in its Seattle headquarters Amazon:

[Uses a] self-reinforcing set of management, data and psychological tools to spur its tens of thousands of white-collar employees to do more and more. ‘The company is running a continual performance improvement algorithm on its staff,’ said Amy Michaels, a former Kindle marketer.

There are three fundamental differences between the industrial managerial mode of production and the networked mode of production we see in a brutal form at Amazon. But, in many important ways, Amazon is no different to other information- and network-driven corporations.

Firstly, where the managerialist set up hierarchies, issued commands, disseminated ideas and information, the global networker processes, networks, coordinates and controls flows of ideas and data.

Secondly, where the managerialist directs and commands, the networker facilitates and steers.

Thirdly, where the managerialist controlled particular activities, the networker sets parameters or boundaries within which they encourage and exploit innovation and creativity.

If the assembly line was a symbol of the production methods of industrial society, post-industrial societies are characterised by clean, computer-driven factories, where robots can adapt and retool to make different goods simply by being reprogrammed.

If the industrial factory was an engine for mass production – the same good produced over and over again – the post-industrial, computer-driven flexible factory is an engine for mass customisation: individually tailored goods and services that respond to the changing demands of individual consumers and niche markets.
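The difference can be put schematically in code: the industrial line hard-codes one product, while the flexible line takes a specification and ‘retools’ simply by being reprogrammed. This is an illustrative sketch, not a model of any real factory.

```python
# Mass production vs mass customisation, schematically.

def industrial_line() -> str:
    # one fixed product, over and over
    return "black Model T"

def flexible_line(spec: dict) -> str:
    # the 'retooling' is just a change of parameters
    return f"{spec['colour']} {spec['model']} with {spec['extras']}"

print(industrial_line())  # the same good, endlessly
print(flexible_line({"colour": "red", "model": "hatchback", "extras": "sunroof"}))
print(flexible_line({"colour": "blue", "model": "ute", "extras": "tow bar"}))
```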

Managing the good life

Where did corporations like Apple, Google, Amazon, Netflix and Facebook come from?

They are now among the largest companies on earth, collectively worth about 1.5 trillion dollars. That’s bigger than the entire Australian economy.

In a recent talk, Julianne Schultz described the disruption of these corporations to our economy and culture:

What makes this different to the great corporations that sold energy, transport and consumer goods throughout the 20th century is that culture and the art of making meaning are at the heart of these new corporations. As a result, the FANGs [Facebook, Apple, Amazon, Netflix and Google] are shaping the ways we live, the information we have access to, the stories we treasure. The old technology companies sold machines and systems, software and business solutions. The new mega-profitable firms make their billions by capturing and creating meaning and belonging: from news, video, music, and the information that is the sinew of every day life—directions, health, banking, and shopping. Theirs is an enterprise that is as much cultural as it is technological.

How did they come to dominate everyday life, the global economy, and political processes in the way they have?

These corporations are distinctive because they are networked, responsive, data-driven and participatory.

In order to understand the development of these corporations, and the way they have reorganised our media system, we have to take a few steps back.

Let’s go back to the managerial capitalism of the twentieth century.

Managerialism and the assembly line

The first half of the twentieth century was dominated by mass production.

Figures like Ford and Taylor ran factories that attempted to control the action of every individual in highly managed assembly lines. Power was exerted by commanders from above. While these factories were highly efficient, they were also inflexible. Once an assembly line was set up to produce one kind of thing – like a car – that was what it produced. It was difficult to retool the factory to produce anything else.

Henry Ford famously quipped, ‘any customer can have a car painted any colour that he likes, so long as it is black’.

By reducing the features and choice on the cars he produced, he could dramatically increase the scale of production.

The factories of the mass society were efficient at churning out standardised goods, but they could not easily adapt to change or trust workers to solve problems as they arose.

This mode of industrial production changed dramatically from the 1970s with the emergence of flexible just-in-time production, computer-aided design and robotics.

The upshot was that where the industrial-era factory produced standardised goods for mass markets, today’s factories produce customised goods, responding rapidly to multiple, changing niche markets.

The rigid and highly planned industrial economies of the mass society ran into trouble because they got too big to manage.

Power relationships change when those in power run into some kind of trouble and new groups challenge them with new ideas and new ways of doing things.

In the case of mass industrial production, decision makers at the centre of large corporations and governments were no longer able to get and process accurate and timely information about the machines they managed. This meant they were unable to make the right decisions at the right time and became less efficient.

New organisations and technologies emerged that were able to manage complex processes much more efficiently. The mass industrialised economies were remade as a highly flexible, information-driven, global network.

So, in what follows, I’ll chart a very basic map of what happened.

 

Managing the mass society: Keynesianism and state socialism

As societies around the world industrialised throughout the nineteenth and twentieth centuries a distinctive cycle emerged.

Capitalist markets had frequent booms and busts.

Periods of massive investment and growth would push production beyond demand, and an inevitable crash or correction would follow.

The problem this generated for the elites managing a society – the political and economic leaders – was that bust periods were prone to social and political upheaval.

If people are out of work with no resources, no prospects and no access to the good life, they could start to violently resist the current arrangements of power.

For political and economic elites this was a legitimate threat. They had to keep mass populations, especially in cities, more or less happy, or they risked their cities and societies becoming chaotic and unmanageable.

One key ‘fix’ to this problem of booms and busts in capitalist societies is what we refer to today as Keynesianism.

Keynesianism is named after John Maynard Keynes, a British economist and share trader who advocated a larger role for the state in managing the economy. Keynesianism promoted the creation of a large ‘socialist’ or ‘welfare’ state.

This idea gained traction during the Great Depression of the 1930s. As a response to mass unemployment and looming civil unrest, governments began to play a more direct role in managing society.

President Roosevelt’s New Deal in the US is a key historical example. The New Deal involved the government investing in large-scale infrastructure projects – like building dams – and other public works to directly employ people. If volatile markets could not provide jobs, the state would step in and provide them.

In the UK, the Labour government adopted similar policies in what became known as the ‘post-war consensus’, creating manufacturing demand and providing services like national health care until unemployment stabilised.

In New Zealand, Keynesian responses to the economic depression took the form of hardship reduction, providing increased social services and incentives in agriculture and forestry.

The Australian government did not adopt Keynesian policies in the interwar years. It viewed the economic recession as both the fault and the responsibility of the foreign governments on which Australia was economically dependent. This was a terrible mistake. The unemployment rate rose to 32% and many families lost their land and homes. As a form of correction, the Australian government adopted Keynesian policies later, after WWII, to make up for its failure to do so in the interwar years and to assure its citizens a high quality of life. For more on this, see Donald Markwell and Tim Battin on Keynesianism in Australia.

All Keynesian economic policies were ultimately concerned with stabilising citizens’ quality of life through different forms of government intervention: preventing social unrest by promoting the idea of an attainable ‘good life’.

This highly managed economy worked well into the 1970s. Most developed liberal-democratic societies had a sustained boom of growth from the 1940s to the 1970s.

In a country like Australia this was a kind of Golden Age: there were more jobs than people, and most had access to the ‘Australian dream’ of a house in the suburbs on a quarter-acre block, a motorised lawn mower, and a muscle car in the driveway.

What else could you want?

For a wonderful portrayal of Australia throughout this period check out George Megalogenis’ documentary series Making Australia Great.

More recently, you might remember when Prime Minister Kevin Rudd gave everyone $900 to spend at the height of the Global Financial Crisis.

This move by Rudd was classic Keynesianism: the state steps in when the market shrinks to prevent a dramatic downturn.

What emerges is a managerial form of capitalism. The state plays a large and active role in ‘balancing’ power relationships in society, seeing its role as making social and economic institutions stable over time and ensuring that a basic standard of living is available at all times to most citizens.

Keynesianism involves the state playing this active role in flattening out booms and busts via a pattern of taxation and investment. In periods of high growth the state taxes industrialists; this flattens the peak but saves money that the state can spend in times of recession to prevent a full-scale bust.
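The mechanism is simple enough to capture in a toy model. In this sketch the numbers are invented and exist purely to show the shape of the policy: the state taxes part of each boom into a reserve, and spends the reserve in each bust.

```python
# A toy model of counter-cyclical smoothing. The output figures and the
# tax rate are invented, purely to show the shape of Keynesian policy.

raw_output = [100, 120, 140, 90, 70, 110, 130, 80]  # hypothetical boom/bust cycle

def smooth(output, baseline=100, rate=0.5):
    reserve = 0.0
    smoothed = []
    for year in output:
        if year > baseline:            # boom: tax part of the surplus
            levy = rate * (year - baseline)
            reserve += levy
            smoothed.append(year - levy)
        else:                          # bust: spend the reserve on demand
            spend = min(reserve, baseline - year)
            reserve -= spend
            smoothed.append(year + spend)
    return smoothed

print(smooth(raw_output))
# -> [100, 110.0, 120.0, 100.0, 90.0, 105.0, 115.0, 100.0]
```

Run it and the peaks are shaved and the troughs filled: the cycle flattens.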

The 20th century was also characterised by another highly centralised managerial economic and social system: state socialism.

Keynesianism has often drawn comparison to forms of socialism, particularly in the United States. State socialism, especially in Eastern Europe, was on the rise alongside theories of Keynesian economics in the early 20th century.

State socialism in the Soviet Union, China and elsewhere is similar to managed capitalist economies in some important ways. The state places itself at the centre of social and economic life and controls the production and consumption process.

The state socialist model is of course much more centralised in its mode of decision making. It also promotes its own version of the good life by promising citizens a utopian future in exchange for their participation in state systems.

However, the highly managed economies of both the capitalist and socialist states ran into a similar structural problem: they became too big and difficult to manage from a central commanding point.

Like large factories, in order to make complex decisions about when and how to intervene in the production and consumption of goods, they needed more and more information and instruments, for which they built larger and larger bureaucracies.

But again, it became impossible to get the right information to the right people, to make the right decisions, and then to implement those decisions at the right time.

In state socialism the bureaucracy became so big that it broke down, much as it did in industrial capitalism. It was unable to provide citizens and workers with the basics of everyday life, let alone the good life.

At the same time this crisis engulfed state socialism, it must be remembered that the capitalist democracies of the West faced similar problems – and no one was sure at the time whether the socialist or capitalist system would collapse first.

In the West, economies were so inefficient they were beset by ‘stagflation’, where prices rose even as demand fell, a situation economists had considered theoretically impossible.

For the West this was most evocatively illustrated by the oil shocks of the 1970s, when the US state was unable to control the flow of petrol into the economy. During the crisis, petrol stations had long queues of cars waiting for limited supplies of fuel.

In the Soviet Union especially, shortages of basic goods and services caused social unrest.

In the 1980s, the Soviet Union’s leader Mikhail Gorbachev initiated a series of reforms, most notably Perestroika and Glasnost, which aimed to open up both the economy and the flow of information by removing forms of state management.

He aimed to make the socialist states more like the West by adopting aspects of the open market system.

But the reforms came too late. State institutions did not have the capacity to manage the integration of these changes into already-existing state socialism. The Soviet socialist state collapsed, part of a ripple of crises that brought down state socialism throughout Eastern Europe.

A flexible, networked mode of production emerges

Meanwhile, in the 1980s in the West, leaders like President Reagan in the US, Margaret Thatcher in the UK, and the Hawke and Keating governments in Australia undertook a series of reforms that are broadly characterised as neoliberal.

For the sake of simplifying, the neoliberal move goes something like this: ‘OK if managerialists ran society with a huge bureaucratic apparatus that now does not work, let’s get rid of it’.

In the US, UK, Australia and elsewhere we see a massive retreat of the welfare state during this period.

Leaders like Reagan, Thatcher, and in Australia Hawke, Keating and Howard sell public infrastructure, privatise public assets, and outsource welfare, health, education and transport services to private companies.

A range of institutions that were once owned by the state become commercial, privately owned and managed by corporations.

In Australia over the last generation, employment and disability services, training and education services, railways, roads, banks, airlines, health insurers, telecommunications and hospitals are just some of the elements of the state that have been sold to private providers and are now market institutions.

In a neoliberal society individuals need to take care of themselves. A consequence of the state’s retreat from employment, healthcare, education and other public utilities is that its safety net is no longer as comprehensive as it once was.

The reforms instigated in the West to make the state’s role in the economy smaller and more efficient helped, but the West also got lucky in two important respects.

Firstly, the collapse of state socialism opened up new markets for Western expansion: more countries consuming its products than ever before.

Secondly, the emergence of information technologies and computerisation enabled the West to seize the opportunity by building a new mode of economic production and management.

If the collapse of the socialist states opened up new regions for the West to expand into, information technology provided the means for managing a new global – rather than national – mode of economic production and consumption.

And, here is where digital technology and the transition from a mass to a networked economy intersect.

It is hard to imagine this now – you have to think counterfactually – but picture a period in history in which elites recognise that both capitalism and communism are in deep structural crisis, and no one is sure which system will collapse first. There was an element of luck for the West in the socialist system collapsing, because it opened up vast opportunities for the expansion of a new kind of economy: technology- and information-driven; fast, flexible and networked.