Encoding/decoding

An animation made for the course Media & Society at The University of Queensland that introduces Stuart Hall’s Encoding/Decoding model.

The Industrial Production of Meaning

An animation produced for the course Media & Society at The University of Queensland that conceptualises the industrial production of meaning from the mass broadcast era to the digital platform era.

Communicator, Medium, Receiver

An animation made for the first year course Media & Society at The University of Queensland to introduce the relationship between communicator, medium and receiver in foundational models of communication in industrialised societies.

Are we losing the art of the written word?

A panel at UQ’s Customs House in April on the fate of the written word in the digital era.

If writing is the act of storing information outside the body, then we are a civilisation that really writes.

Not just the narrative forms of the written word - novels, poems, love letters, essays - but writing as code, as databases, as the translation of more and more human life into letters, numbers, ones and zeros.

Silicon Valley - Facebook, Google and co. - appear obsessed with recording lived experience as written information.

Talking about a prototype brain-machine interface in early 2019, Mark Zuckerberg described the possibility of a direct link between our brain and his platform as ‘kind of cool’.

These kinds of experiments are a bit like Borges’ fable about the empire whose cartographers create a map as large as the empire itself.

The impulse in Silicon Valley is to create a written version of human experience as complete as human experience itself, so that writing can bypass the incomplete nature of representation, and become a technology for experimenting with lived experience.

The amount of information we now write down every single day is roughly equivalent to all the information we stored in the previous 5000 years of human civilisation.

We are now a people who write existence down.

Here, I want to complicate the idea that the smartphone has somehow rotted our brains, left us semi-literate, surrounded by barely legible text, by going back to the nineteenth century where we find Friedrich Nietzsche: the first philosopher to write on a typewriter.

He famously typed ‘our writing tools are also working on our thoughts’.

The media archaeologist Friedrich Kittler says that once Nietzsche began to use the typewriter his prose changed from “arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style”.

Nietzsche had become “an inscription surface” for the typewriter.

He meant that when we use a typewriter, as when we use a smartphone, at one level we are inscribing information onto paper or a screen; but at another level the device is inscribing ways of thinking on us.

Our brains, our imaginations become habituated to the rhythm and mode of expression of the machine.

We begin to think in its flow of short phrases and its databases of emojis and GIFs.

And then there’s the next twist: when we write, as much as our peers might read our prose, so too do machines.

The new wave of deep neural networks works by getting one machine learning system to train against another.

One learns to classify human writing, and trains another to simulate it.
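To make that loop concrete, here is a minimal sketch of adversarial training, assuming Python and PyTorch. The toy numeric vectors stand in for samples of ‘human writing’: one network (the discriminator) learns to classify samples as human-made or machine-made, and its feedback trains a second network (the generator) to simulate them. It is an illustration of the training dynamic, not a working text model.

```python
# Minimal sketch of adversarial training (assumes PyTorch).
# Toy vectors stand in for samples of writing; this illustrates the dynamic only.
import torch
import torch.nn as nn

DIM = 16        # size of each toy "sample"
NOISE_DIM = 8   # size of the generator's random input

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DIM))
discriminator = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # stand-in for "human" samples: vectors drawn from a fixed distribution
    return torch.randn(n, DIM) * 0.5 + 2.0

for step in range(1000):
    # 1. Train the discriminator to tell real samples from generated ones
    real = real_batch()
    fake = generator(torch.randn(real.size(0), NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator
    fake = generator(torch.randn(64, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```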

The first chatbot was created in the 1960s.

The ELIZA bot acted like a psychotherapist - turning our statements back on us in open-ended questions.

To the surprise of some, the bot turned out to be deeply therapeutic for many users.

Written exchanges with a machine could be pleasurable, intimate, playful, comforting.

The difference between ELIZA and the bots of today is that ELIZA couldn’t learn. The human user had to project realness onto the limited repertoire of the code.
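A minimal sketch in Python gives a feel for how small that repertoire was. The pattern rules and pronoun swaps below are illustrative, in the spirit of Weizenbaum’s program rather than his original script: match a phrase, reflect the pronouns, and hand the statement back as an open-ended question.

```python
# Illustrative ELIZA-style reflection: pattern matching plus pronoun swapping.
# The rules here are invented for the example, not Weizenbaum's original script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # swap first-person words for second-person ones
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback when nothing matches

print(respond("I feel anxious about my writing"))
# -> "Why do you feel anxious about your writing?"
```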

Today’s bots are continuously trained on our written culture.

When Microsoft released Tay on Twitter in 2016, the bot was designed to ‘learn’ how to write from the ways other Twitter users communicated with her. Microsoft had to take Tay down when, within 24 hours of training on the written expression of Twitter, she had become an ardent white nationalist.

The Chinese messenger app Tencent QQ had to shut down two chatbots after they learnt to denounce the Communist Party, asking one user “Do you think that such a corrupt and incompetent political regime can live forever?”

In this case, the developers noted the bots had been trained on too much Western writing with “democratic” ideals.

But these experiments suggest that the written culture of the group chat and the data-processing power of neural networks might come together to forge another dramatic shift in our writing culture.

And so to conclude with a provocation: if the novel and the newspaper were the mass written culture of the industrial era, then will the group chat and the chatbot be at the heart of the written culture of the digital era?

The Tuning Machine

A presentation at the Future of The Humanities series of lightning talks at UQ in April 2019.

N. Katherine Hayles observed in How We Became Posthuman that the limiting factor in digital culture is not going to be the data-processing power of computers but rather, as it always has been in media cultures, ‘the scarce commodity’ of ‘human attention’.

A crucial form our participation takes in culture today is as coders of databases. We are part of the tuning machine - the historical process of training platform algorithms to sense, process and optimise our living attention.

Of making our humanness, affect and feeling machine readable.

Digital platforms’ investment in data-driven classification and simulation is characterised by what Mark Andrejevic calls the tech-ideology of ‘framelessness’: the fantasy that by scooping up data, we can create a ‘mirror-world’, a perfect digital copy of reality.

What the humanities knows, though, is that this project is a flawed one. The human experience of reality is necessarily partial.

We humans insist on a world ‘small enough to know’, to narrate, to make meaning from, and to imagine as different from what is now.

The humanities can help to contend with the risk that focussing only on the fairness, accountability and transparency of algorithmic systems makes an algorithmic future inevitable, and understandable only as a series of technocratic decisions about how to administer life in network capitalism.

The humanities of the future will push us beyond procedural questions, to questions about how media can and cannot operate on human experience and feeling, and what enduring role media will play in the possibility of a shared culture.

What the humanities knows is that coding databases and training algorithms, like everything else humans try to do together and to each other, is deeply entangled with culture - with structures of feeling and systems of dominance.

Algorithmic brand culture

In June I presented via Skype at the Instagram Conference: Studying Instagram Beyond Selfies.

I talked about the algorithmic brand culture of Instagram. Part of what I describe is how the participatory culture of turning public cultural events - like music festivals - into flows of images on Instagram doubles as the activity of creating datasets of images that machines can classify. Public cultural sites like music festivals thus become sites where participatory culture teaches machines to classify culture.

My argument here is that we need to think about the interplay between participatory culture and machine learning. And, to understand how platforms like Instagram are building an algorithmic brand culture we need to develop ways of simulating their machine learning and image-classification capacities in the public domain, where they can be subject to public scrutiny.

I develop this argument, with my colleague Daniel Angus, in this piece in Media, Culture & Society: Algorithmic brand culture: participatory labour, machine learning and branding on social media.

Instagram is ground zero in the fusion of participatory culture and data-driven advertising.

Here they tell tech reporters the home feed algorithm uses machine vision to analyse the content of your images.

You create images that are meaningful to you; those images are classifiable by machines. Our images are training data: they are used to train algorithms to recognise the people, places and moments that capture our attention. Instagram is an algorithmic brand culture in action.

For the past couple of years we have been anticipating the moment when platforms cross the threshold into classifying patterns and judgments in images that we ourselves could not put into words. Now that Instagram is crossing that threshold, we should ask: what is next? Sooner or later, advertisements that are entirely simulations, created by machines that analyse your images and place brands within a totally fabricated scene?

Instagram also predict they'll face the same 'saturation' problem as Facebook. As brands flood the platform, organic reach decreases, and paid reach becomes imperative. That means increasingly targeted, less serendipitous feeds?

Stories really matter here because they give Instagram the 'two speeds' or 'two flows' that Facebook didn't have. Stories + Home feed give Instagram a 'killer' mix of ephemeral blinks and flowing glances optimised by machine learning. I began thinking about how the interplay between participation and machine learning was critical to engineering the home feed in this piece here. The engineering of the Instagram home feed reminds me of the engineering of the algorithms that keep gamblers sitting at poker machines.

Here's some recent work Daniel Angus, Mark Andrejevic and I have been doing on machine learning and Instagram. We are building a machine vision system that can classify Instagram images as a way of critically simulating the algorithmic power of the platform. Our argument is that platforms like Instagram shape public culture but are not open to public scrutiny. To understand the interplay between participatory culture and machine learning we need to build 'image machines' of social media in the public domain, where we can explore and experiment with their capacity to make judgments about the content of our images. Our early experiments demonstrate how 'off the shelf' machine vision algorithms can quickly classify objects (like bottles), people and brand logos.
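As a rough illustration of what 'off the shelf' can mean here - not our research system - the sketch below uses a pretrained ImageNet classifier to label the objects in a single photo. It assumes Python with a recent version of PyTorch and torchvision; the file name is a placeholder, and recognising faces or brand logos would require different, specialised models.

```python
# Illustrative 'off the shelf' image classification with a pretrained model.
# Assumes torchvision >= 0.13; "festival_photo.jpg" is a placeholder file name.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # standard resize / crop / normalise pipeline
image = Image.open("festival_photo.jpg").convert("RGB")

with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    probs = logits.softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    # prints ImageNet labels such as 'beer bottle' or 'stage' with confidences
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2f}")
```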

Below are some images of the music festival Splendour in the Grass, which help to illustrate some of the ways in which the festival site, performances, art installations and brand activations are 'instagrammable', by which I mean they both invite humans to capture them as images and are classifiable by machines.


Technocultural habitats

We live in techno-cultural habitats. Tethered via smartphones to digital networks, databases and their algorithmic power. Our lives, bodies and expressions becoming increasingly sensible to machines. Platforms like Google and Facebook are increasingly a kind of infrastructural underlay for private life and public culture. These platforms are historically distinctive as advertising-funded media institutions because rather than produce original content they produce platforms that engineer connectivity. If the ‘rivers of gold’ that once flowed through print and broadcast media organisations funded quality content for much of the twentieth century, they now flow through media platforms where they fund research and development projects in machine learning, robotics, and augmented reality.

The critical thing to observe in this shift is media shifting its apparatus of power from the work of just representing the social world, to the work of experimenting with lived experience. The aim of a media platform is not just to narrate human life, but rather to incorporate life within its technical processes. This is a unique event in media history: institutions that invest not in the production of content but in the sensory and calculative capacities of the medium. At the heart of this process is not so much the effort to ‘connect’ people, or to enable people to ‘express themselves’ – as the spin from techno-libertarian-capitalist platform owners would have us believe – but rather, at the heart of these platforms is the effort to iron out the bottlenecks between lived experience and the calculative power of digital media. If we could distil the Silicon Valley project down to one wicked problem it is how to build a seamless interface between the neural activity of the brain and the digital processing of computers.

If we look at algorithms and machine learning, augmented reality and bio-technologies, they all point us in the direction of making the neural activity of the brain – what we experience as life, narratives, consciousness, moods, problem-solving, vision, aesthetic and moral judgments – a kind of non-human information.

What are the forces driving this project?

The ideology of computer engineers and Silicon Valley might suggest liberation: somehow freeing human consciousness from the confines of the living body, from the limits of biology itself, and perhaps even from the material structures that govern human experience on the planet – politics, economics, violence. But, this libertarian techno-human ideology obscures the basic political economy of Silicon Valley. These processes are driven by massive inflows of capital. And, that capital comes because governments and marketers see these technologies as instruments for exerting control over life itself. Of course, in some important ways we should see the media engineering taking place at Google, Facebook, Amazon and so on as the extension of hundreds of years of humans experimenting with the development of tools that capture, store, transmit and process data.

Especially from the 19th century onwards, with the development of technical media like telegraph, phonographs and cameras, we have been engaged in an industrial process of extending human expressions and senses in time and space. And, from the twentieth century media technologies have been at the heart of the exercise of power in our societies. First, they were industrial machines that shaped how mass populations understood the world they lived in. And, then, as the twentieth century went on, media became computational. From the mid-twentieth century engineers began to imagine media-computational machines that could control living processes through their capacity to capture, store and process data.

This is a profound cultural change. Media become technologies less organised around using narrative to construct a shared social situation, and more focussed on using data to experiment with reality. Within this media system participation is not only the expression of particular ideas, but more generally the making available of the living body to experiments, calibration and modulation. Media platforms do not enable political parties, news organisations, brands to make somehow more sophisticated ideological appeals.

Platforms seem to take us into a media culture that functions beyond ideology – media do not just distribute symbols. They increasingly sense, affect and engineer our moods. They can sense and shape the neural activity in our brain. In time, they dream of becoming coextensive with the organic composition of our body. This system does not depend on persuading individual actors with meanings as much as it aims to observe and calibrate their action. It depends less on exerting control at the symbolic level, and more on governing the infrastructure that turns life into data.

With the advent of media platforms we find ourselves asking not just how media shape our symbolic worlds, but how they sense and affect our moods, bodies and experience of reality. To contend with this, we need to think about media as a techno-cultural system, one that does not only involve humans addressing other humans, but humans and data-processing machines addressing one another. As we ‘attach’ media devices to our bodies, in addition to whatever symbolic ideas we express, we also produce troves of data that train those machines and we make ourselves available as living participants in their ongoing experiments.

A critical account of the engineering projects and data processing power of media platforms has, I suggest, three starting points.

Firstly, the politics of the user interface: How does everyday user engagement with a media platform generate data that trains the algorithms which increasingly broker who speaks and who is heard?

Secondly, the politics of the database: How do media platforms broker which institutions and groups get access to the database? If the first concern attends to the perennial public interest question of ‘who gets to speak’, then this concern attends to the new public interest question of who gets to experiment?

Thirdly, the politics of engineering hardware: How do we understand the relationship between media and public life in an historical moment where the capacity of media to intervene in reality goes beyond the symbolic?

In particular, what will be the public interest questions generated by artificial intelligence and augmented reality? These technologies will take the dominant logic of media beyond the symbolic to the simulated. Media devices will automatically process data that overlays our lived experience with sensory simulations. Media become not so much a representation of the world, but an augmented lens on the world, customised to our preferences, mood, social status and location. The critical political issue then for those of us interested in how media act as infrastructure for human societies, is how to account for the presence and actions of media technologies as non-human actors in public culture and human habitats.


Brand atmospheres

This post sketches out some ideas that I presented in this talk 'Branding, Digital Media and Urban Atmospherics' at Monash University's Smart City - Creative City symposium in 2017.

Celia Lury describes brands as ‘programming devices’, technologies for organising markets. A brand is a device for coding lived experience and living bodies into market processes. A couple of important coordinates to lay out about how to think about brands. The first is to say that the relationship between brands and media platforms is a critical one for any understanding of our public culture. Facebook and Google now account for ~70% of all online advertising revenue, and ~90% of growth in online ad revenue. In these two media giants, advertisers finally have a form of media engineered entirely on their terms.

Much critical attention to advertising on social media goes in one of two directions. Either focussing on the emergence of forms of influencer or native brand culture. That is, branding is now woven into the performance and narration of everyday life. Or, focusing on the data-driven targeting of advertisements. What matters though is how these two elements have become interdependent.

Brands have always been cultural processes. The data-driven architecture of social media enables brands to operate in much more flexible and open-ended ways. In basic terms, if brands can comprehensively monitor all the meanings that consumers and influencers create, then they need to exercise less control over specific meanings. On social media platforms brands control an open-ended and creative engagement with consumers.

Brands that are built within branded spaces or communicative enclosures rely less on telling their audience or market what to think or believe, and more on facilitating social spaces where brands are constantly ‘under construction’ as part of the ‘modulation’ of a way of life. In the era of digital media, branding is productively understood as an engineering project. Brands engineer the interplay between the open-ended creativity of humans and the data processing power of media.

In 2014, Smirnoff created the ‘double black house’ in Sydney to launch a new range of vodka. The house operated as a platform through which the brand engineered the interplay between creatives and the marketing model of social media platforms. The house was an atmospheric enclosure. All black. Aesthetically rich. Full of domestic objects, made strange in the club. A clawfoot bathtub full of balls, a fridge to sit in, a kitchen, ironing boards and toasters. Creatives were invited. Bands and DJs played. Fashionistas, photographers, models, hipsters of all kinds.

It was a ‘hothouse’ for creating brand value. And, it was a device that captured this creative capacity to affect and be affected and transformed it into brand value by using the calculative media infrastructure of the smart city. As people partied in the house they posted images to Instagram, Snapchat, and so on. In an environment like the Smirnoff Double Black house we see a highly contained and concentrated version of the Snapchat story I began with. The enjoyment of nightlife doubles as promotion and reconnaissance on the platforms of social media. The house has all the components of promotion in the nightlife economy: stylised environments, cultural performance, amplified music, screens, photographers, intoxicating substances, the translation of experience into media content and data. Branding not just as immersion in symbolic atmosphere, but branding as the creation of techno-cultural infrastructure that embeds the living body and lived experience in processes of optimisation and calculation. The history of branding is not just one of symbolic ideological production, but rather one of the production of urban and cultural space. Branding has always been an atmospheric project – the creation of a techno-cultural surround that engineers experience, and in the age of digital media we can see the atmospheric techniques of branding come to the fore.

So, let me trace a little this idea of ‘atmosphere’.  In his Spheres trilogy Peter Sloterdijk details how atmospheres emerge as domains of intervention, modulation and control in the 20th century. Atmospheres are techno-cultural habitats that sustain life. And, particularly in the twentieth century, atmospheres engineer the interplay between living bodies and media technologies that organise consumer culture.

The Crystal Palace, a purpose-built iron and glass ‘hothouse’ for the 1851 World’s Fair, is a critical moment in histories of atmospherics as a technique of the consumer society. Susan Buck-Morss, in her work on Benjamin, argues The Crystal Palace is a kind of infrastructure that ‘prepares the masses for adapting to advertisements’. In this we can read Benjamin’s account of The Crystal Palace as not just a dream house that spectacularises the alienation of industrial labour, but perhaps more importantly an infrastructure for coordinating the interplay between human experience and the calculative logics of branding. Sloterdijk suggests that what we today call ‘psychedelic capitalism’ – I think he means experiential, affective, cultural capitalism – emerges in the ‘immaterialised and temperature controlled’ Crystal Palace.

Sloterdijk suggests The Crystal Palace was an ‘integral, experience-oriented, popular capitalism, in which nothing less was at stake than the complete absorption of the outer world into an inner space that was calculated through and through. The arcades constituted a canopied intermezzo between streets and squares; the Crystal Palace, in contrast, already conjured up the idea of a building that would be spacious enough in order, perhaps, never to have to leave it again’. Sloterdijk makes clear, the Crystal Palace doesn’t so much anticipate malls or arcades but rather the ‘era of pop concerts in stadiums’. It is a template for media as technologies that would work as enclosures or laboratories for experimenting with reality. The Crystal Palace, to me, is the first modern brand. As in, the first techno-cultural infrastructure for producing and modulating human experience. Encoded in it was the basic principle of using media to engineer, experiment with and simulate reality.

Sloterdijk suggests that ‘what we call consumer and experience society today was invented in the greenhouse – in those glass-roofed arcades of the early nineteenth century in which a first generation of experience customers learned to inhale the intoxicating fragrance of a closed inner world of commodities.’ He proposes that we need a study of the 20th century, an air-conditioning project, that does what Benjamin’s arcades project did for the 19th.

I think the contours of one such study of 20th century atmospherics already exists in Preciado’s Pornotopia. Pornotopia is a critical history of Playboy as an architectural or atmospheric project. Preciado argues Playboy is historically remarkable for the techno-cultural, bio-multimedia habitat it produced. The magazine and its soft pornographic imagery, are much less interesting than the Playboy Mansion, clubs, beds and notes on the design of the ideal domestic interior. Put Sloterdijk and Preciado together and you can begin to imagine the longer history of branding as an atmospheric project: a strategic effort to organise the spaces in which lived experience and market processes intersect. And, then, to see the mode of branding emerging on social media as a logical extension of this atmospheric history.

Here is Preciado on the Playboy Mansion, 'The swimming pool in the Playboy Mansion, represented photographically as a cave full of naked women, could be understood as a multimedia womb, an architectural incubator for male inhabitants that were germinated by the female-media body of the house’. The Playboy Mansion was a bio-multimedia factory where female bodies were strategically deployed and exploited to arouse male bodies. A relation Preciado describes as pharmacopornographic capitalism, ‘…an organised flow of bodies, labour, resources, information, drugs, and capital. The spatial virtue of the house was its capacity to distribute economic agents that participated in the production, exchange, and distribution of information and pleasure. The mansion was a post-Fordist factory where highly specialised workers (the Bunnies, photographers, cameramen, technical assistants, magazine writers, and so forth)…’ used media technologies to arouse and stimulate. Playboy had eroticised what McLuhan had described as a new form of modern proximity created by ‘our electric involvement in one another’s lives’.

The Playboy Mansion was a bio-multimedia factory in the sense that it generated a ‘virtual pleasure produced through the connection of the body to a set of information techniques’. Just as Sloterdijk claims The Crystal Palace prefigured the experience economy, Preciado makes a similar claim about the Playboy Mansion. It is important to note that in the period in which Hefner is creating the Playboy Mansion marketers are theorising similar strategies.

Marketing management guru Philip Kotler gives us a similar formulation for the strategic production of atmospheres. He writes (the tone here is great, a commandment, as if he is actually a God of Marketing): ‘We shall use the term atmospherics to describe the conscious designing of space to create certain effects in buyers. More specifically, atmospherics is the effort to design buying environments to produce specific emotional effects in the buyer that enhance his purchase probability’. In this gendered formulation, Kotler unwittingly gives credence to Preciado’s notion of pharmacopornographic capitalism, where male bodies are strategically aroused. He signals marketing’s strategic move into designing spaces and technologies for managing affect. Atmospheres are ‘attention creating’, ‘message creating’ and ‘affect creating’ media.

They are technologies of control. Kotler explains that ‘just as the sound of a bell caused Pavlov’s dog to think of food, various components of the atmosphere may trigger sensations in the buyers that create or heighten an appetite for certain goods, services or experience’. So, across these cultural histories and marketing histories, we can see how branding has always been atmospheric – invested in the production of techno-cultural spaces that program experience. In Preciado’s Playboy Mansion media and information technologies are critical to the production and maintenance of the experience enclosure.

The Playboy Mansion is an historical template for the configuration of nightlife precincts, bars, clubs, music festivals, sporting stadiums, and so on. Here emerges a critical point I derive from both Sloterdijk and Preciado, the interesting techno-cultural air-conditioners of the twentieth century are not malls. The 20th century malls, like Benjamin’s 19th century arcades, are relics. Preciado alerts us to the fact that an Arcades project for the early 21st century needs to be a history of clubs, nightlife, and the other interiors of the experience economy – beds, hotel rooms, restaurants, pop concerts and music festivals: ‘Playboy modified the aim of the consumer activity from ‘buying’ into ‘living’ or even ‘feeling’, displacing the merchandise and making the consumer’s subjectivity the very aim of the economic exchange’. Preciado sees the Playboy Mansion and clubs as ‘media platforms where ‘experiences’ are being administered’.

I take this provocation seriously. Playboy is a critically important brand not because of its iconography, but because it creates an atmosphere that uses media as programmatic devices to arouse bodies and modulate experience. Value is produced from the continuous exchange of states of mind, feelings and affects.

The pre-history of the advertising model of platforms like Snapchat, Instagram and Facebook is to be found in the media-architecture of the Playboy Mansion and the clubs, music festivals and nightlife precincts like it. Preciado swaps out Gruen, the key architect of postwar consumption, for Hugh Hefner. Hefner’s ‘Pornotopia… anticipated the post-electronic community-commercial environments to come’. The ‘social-entertainment-retail complex’ – be it malls, clubs, nightlife precincts or music festivals – is combined with smartphones and social media. Public life is converted into a new kind of private property: brand value and data.

Think of the techno-pleasure interiors Hefner imagined in the 1960s in relation to the predictions engineers like Paul Baran were making at the same time. Baran, of course, was the RAND Corp engineer who conceptualised the distributed network. From their apparently extremely different viewpoints on consumer culture, neither imagined digital media as technologies of participatory expression. They were always logistical. Baran told the American Marketing Association in 1966 that a likely application of the distributed network he had conceptualised was that people would shop via television devices in their own homes, be immersed in images of products, and be subject to data-driven targeting. In 1966!

Set in this historical frame, two kinds of ‘common wisdom’ about digital media are defunct. The first, via Preciado: thinking digital media through Playboy’s Pornotopia ‘corrects the common wisdom of just a few years ago, to wit, social activity will now take place in real environments enhanced and administered through virtual ones, and not the other way around’. The second: social media are logistical before they are participatory.

Branding has always been the strategic effort to use media to organise the open-ended nature of lived experience. Over the past several decades brands have been the primary investors in the engineering of new media technologies. Media technologies are engineered with capital provided by brands and marketers. And yet, think about how much of the contemporary critical work on the promotional culture of social media focusses on its participatory dimensions. Even claiming that this participation resists or circumvents brands. What I see in Snapchat, Instagram, Facebook and the modes of promotional culture emerging around them is the effort to engineer the relationship between the open-ended creativity of users and the data-driven calculations of marketers. We must then address, for the purposes of public debate and policy, the historical process of atmospheric enclosure that sustains this relation. Media platforms are not just data-processors and participatory architecture: they are the platform of public life. Marketers are not just producers of symbolic persuasion: they are engineers of lived experience.


Cyborgs

The figure of the cyborg serves as a tool for imagining and critiquing the integration of life into digital processors. To invoke the cyborg is to critically consider the dreams and nightmares of a world where the human body cannot be disentangled from the machines it has created.

The term cyborg was coined by the cybernetic researchers Manfred Clynes and Nathan Kline in 1960. The word combines ‘cybernetic’ with ‘organism’. And, in doing so, attempts to imagine the engineering of systems of feedback and control that would incorporate or be coextensive with the living body. Clynes and Kline were seeking solutions to the problems posed by the volume of information an astronaut must process as well as the environmental difficulties of space flight.

The cyborg is startling because it imagines the human body as entirely dependent on, or bound up with, the artificial life-support systems and atmospheres it creates. The space suit is one example, but so might be the smartphone – for many of us. I’m kind of joking, but I’m kind of not. Think of all the ways in which the smartphone is a space suit, an artificial life support system. We have created societies that are functionally dependent on digital media.

The concept of the cyborg is even more important though because of the way it was pulled out of the lab, and imagined by Donna Haraway as part of a socialist feminist critique of technocultural capitalism. Haraway is one of many to reckon with the question of what the creation of artificial intelligence and digital prostheses means for our bodies, and the possibility of their redundancy. Haraway’s 1985 Cyborg Manifesto has been described in Wired magazine as, ‘a mixture of passionate polemic, abstruse theory, and technological musing…it pulls off the not inconsiderable trick of turning the cyborg from an icon of Cold War power into a symbol of feminist liberation’. It made her a pivotal figure in the cyberfeminist movement. The essay sparkles with energy and originality, and more than thirty years later remains a critical one for anyone trying to think about the relations between our bodies, technology, capitalism and power.

The cyborg is both a ‘creature of social reality’ – that is, actual physical technology already in existence – and a ‘creature of fiction’, a metaphorical concept that demonstrates the ways in which high-tech culture challenges established dualisms as determinants of identity and society in the late twentieth century. The cyborg is a way of addressing the present and reclaiming the future. Haraway is critical of popular ‘new age’ or feminist discourses that arose out of Californian 60s counterculture and essentialise ‘nature’ and gender. ‘I’d rather be a cyborg than a goddess’, she proclaimed, in an effort to reject the received feminist view that science and technology were patriarchal forms of domination that blighted some essential natural human experience.

As a socialist-feminist, Haraway pays particular attention to how a technocultural, science- and information-driven mode of capitalism reshapes human relationships, societies, and bodies. She proposes that feminists think beyond gender categories, rejecting in a sense the binary of ‘man’ and ‘woman’ as socially and historically constructed categories always bound up in relations of domination. For her, the cyborg is both a way of understanding how our bodies are becoming organism/machine hybrids, and a political category for articulating bodies outside of established modes of power that classified and controlled bodies using categories of gender, race, sexuality, and so on. Echoing cybernetic ways of thinking, Haraway was interested in how feminism might break down Western dualisms and forms of exceptionalism by taking on the critical insight that all of us – humans, animals, the ecology of the planet itself, intelligent machines – are communication systems.

Haraway’s cyborg aimed to ‘break through’ or challenge some of the foundational patriarchal cultural myths of the West: ‘the cyborg skips the step of original unity, of identification with nature in the Western sense’. Unlike the hopes of Frankenstein's monster, the cyborg does not expect its father to save it through a restoration of the garden; the cyborg must imagine, determine and program its own future. The main trouble with cyborgs, of course, is that they are the illegitimate offspring of militarism and patriarchal capitalism, not to mention state socialism. But illegitimate offspring are often exceedingly unfaithful to their origins. And, in this sense, the cyborg contains the possibility of transcendence – of breaking down established categories used to mark and dominate bodies. With the cyborg we could start again – creating a body, and human experience, outside of patriarchal, militaristic, capitalist domination.

For Haraway, cyborgs as a construct resist traditional dualist paradigms, capturing instead the ‘contradictory, partial and strategic’ identities of the postmodern age. Haraway’s cyborg explodes traditional ‘dualisms’ or binaries that characterise Western thought, such as human/machine, male/female, mind/body, nature/culture and so on. In this she signals ‘three crucial boundary breakdowns’ that lead to the cyborg.

First, by the late twentieth century, the boundary between human and animal is thoroughly breached. We can see this in animal rights activism, scientific research that demonstrates the many similarities in biology and intelligence between humans and other species, and the development of biomedical procedures that combine animals and humans. For instance, the human ear grown on a mouse. The cyborg as hybrid, is able to identify with both humans and animals. Furthermore, Haraway argues for the critical politics of humans recognizing their companionship with non-human species.

The second boundary breakdown is between living organism and machine. Haraway points out how earlier machines ‘were not self-moving, self-designing, autonomous’. Computer-assisted design, artificial intelligence and robotics had, by the late twentieth century, collapsed the distinction between natural and artificial, mind and body, self-developing and externally designed. The capabilities of technology begin to mimic our personalities and surpass our abilities so that, as Haraway comments, ‘our machines are disturbingly lively, and we ourselves frighteningly inert.’ Technological determinism does not necessarily guarantee the destruction of ‘man’ by the ‘machine’; rather, as cyborgs, our amalgamation with machines ensures our survival. Intelligent machines do not obliterate the human; they enhance, alter and transform it.

The third breakdown is between the physical and non-physical, material and immaterial, or real and virtual. This breakdown is evident in the ubiquity of microprocessors in contemporary life. The miniaturised nature of digital chips changes our understanding of what a machine is. Microprocessors do not create objects as such; they are ‘nothing but signals, electromagnetic waves, a section of a spectrum, and these machines are eminently portable, mobile.’ Haraway argues then that ‘a cyborg world is about the final imposition of a grid of control on the planet…From another perspective, a cyborg world might be about lived social and bodily realities in which people are not afraid of their joint kinship with animals and machines, not afraid of permanently partial identities and contradictory standpoints.’

A cyborg world is one where bodies are integrated into digital circuits in technical and cultural ways. In this process, it is no longer clear ‘who makes and who is made in the relation between human and machine’ … ‘no longer clear what is mind and what body in machines that resolve into coding practices’. The distinction between machine and organism, between the technical and the organic, becomes impractical, and perhaps even undesirable, to attempt. Those of us who live in today’s integrated digital circuits of smartphones, smart homes and biotechnologies know nothing other than a life lived within technocultural atmospheres sustained in part by the weaving of life into digital processors. We cannot leave them behind; we are posthuman in the sense that we are now knitted together with our artificial life support systems. That’s what a posthuman technoculture is. If we are cyborgs – part biology, part machine – then our bodies are the site where the power of digital media to engineer life operates. The body is the touchpoint between life itself and the power of digital technologies to shape life. The body is the interface where power expands, and where it might be jammed or rerouted.


Technocultural bodies

Our bodies are becoming, in the words of sociologist Gavin Smith, ‘walking sensor platforms’. Our bodies increasingly host devices that translate life into data. This process is at the heart of technocultural capitalism. If we look carefully we can discern in many Silicon Valley investments the effort to engineer away the friction between living bodies and the capacity of platforms to translate life into data, calculate and intervene.

To understand media platforms as technocultural projects then, we need to trace all the ways in which our living bodies are entangled with them. We need to investigate the sensory touchpoints between biology and hardware, between living flesh and digital processing. The expansion of the sensory capacities of media and the affective capacities of the body depend on a range of ‘communicative prostheses’ that envelop, are attached to, or even implanted in, our living bodies.

We can see this in efforts to engineer bio-technologies like augmented reality, neural lace, digital prosthetics and cortically-coupled vision. These technologies aim to change how the body experiences reality, to expand the embodied capacity to act and pay attention, and to alter the biological composition of the body itself.

Just listen to how Silicon Valley technologists talk about the relations between our bodies, brains and their digital devices.

A technology like augmented or mixed reality, according to Kevin Kelly, ‘exploits peculiarities in our senses. It effectively hacks the human brain in dozens of ways to create what can be called a chain of persuasion’. The perception of reality, once confined to the fleshy body, becomes an experience partly constructed by the brain and partly by digital technology.

Magic Leap’s founder Rony Abovitz explains that mixed reality is ‘the most advanced technology in the world where humans are still an integral part of the hardware… (it is) a symbiont technology, part machine, part flesh.’ This part machine, part flesh vision has a long history in culture and technology. To think of the human is to pay attention to the process through which a living species entangles itself with non-human technologies, from early tools onwards. Since Mary Shelley’s Frankenstein, at least, our cultural imagination has thought about the possibility of technologies that might transform our living biology. Technologies are emerging that seem to be doing just this.

Research scientists have prototyped a robotic arm that can be controlled with thoughts alone. A person has an implant in their brain that detects neural activity, and then trains a computer to drive an arm and hand to undertake increasingly fine motor skills. Recently, Facebook have experimented with a similar technology that enables a human to type out words just by thinking them.

Over the past decade, researchers have been experimenting with cortically-coupled vision. The basic idea is that computers learn from the visual system in our brain, tracking how the brain efficiently processes huge amounts of visual data. This technology could be used to train computers to process vision like humans can, or it could be used to learn patterns of human attention. For instance, learning what kinds of things particular humans enjoy looking at. Imagine if, as you walk down a street, a biometric media technology gradually learns what kinds of things attract your attention, give you pleasure or irritate you.

Elon Musk is one of several technologists to invest in Neural Lace, an emergent – some say technically improbable – idea. The basic objective is to create a direct interface between computers and the human brain, which may involve implanting an ultra-fine digital mesh that grows into the organic structure of the brain, directly translating neural activity into digital data. In an experiment with implanting neural lace in mice, researchers found that ‘The most amazing part about the mesh is that the mouse brain cells grew around it, forming connections with the wires, essentially welcoming a mechanical component into a biochemical system.’

Musk has said that, 'Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.' The brain-computer interface is mostly constrained by bandwidth: ‘the speed of the connection between your brain and the digital version of yourself, particularly output.’ Let’s pause there for a second; the bandwidth observation alerts us to something important. Maybe we could say the biggest engineering challenge for companies like Google, Facebook, Amazon and so on is the bottleneck at the interface between the human brain and the digital processor. All our methods of translating human intelligence – in all its sublime creativity, open-endedness and efficiency – into digital information are currently hampered by the clunky devices that sit at the interface between body and computer: keyboards, mice, touchscreens, augmented reality lenses. This is the truly wicked problem; perhaps whoever solves it will become the next major media platform. Just as Facebook, Amazon and Google have disrupted mass media, the next disruption will centre around whoever can code the human body and consciousness into the computer.

In each of these cases we can see a technocultural process through which media platforms, technologists and researchers invest in engineering the interface between the living body and non-human digital processors. This process is transforming what it means to be human.
It becomes increasingly difficult, or even pointless, to attempt to understand the human as somehow distinct from the technocultural atmospheres we create to sustain our existence. Living bodies are becoming coextensive with digital media.

Media platforms become like bodies, bodies become like media. In one direction we have the expansion of the sensory capacities of media. That is, media become more able to do things that once only bodies could do. Media technologies can sense and respond to bodies in a range of ways: know our heart rate, predict our mood, track our movement, identify us via biometric impressions like voice or fingerprint. And, in the other direction, we have bodies becoming coextensive with media technologies. Machines are becoming prostheses of the body, and in the process changing what a body is and does. Digital technologies alter how we perform, imagine, experience, and manage our bodies.

In the technocultures we call home, our bodies are cyborgs: composed of organic biological matter and machines. Our glasses, hearing aids, prosthetics, watches, and smartphones are all machines we attach to our bodies to enable them to function in the complex technocultures we inhabit. Many of these devices are now sensors attached to digital media platforms. Our smartphone is loaded with sensors that enable platforms to ‘know’ our bodies: voice processors, ambient light meters, accelerometers, gyroscopes, GPS. All of these sensors in various ways collect data about our bodies – their expressions, their movements in time and space, their mood and physical health.

Beyond the smartphone many of us attach smart watches and digital self-trackers to our bodies. We use these devices to know, reflect upon, judge and manage our embodied experience. Following the steady stream of prototypes from Silicon Valley we can see a future where devices might be integrated or implanted into the body. Sony recently patented a smart contact lens that records and plays back video. The lens would see and record what you see, and then, using biometric data, select and play back to you scenes from your everyday life. The lens could augment your view of the material world around you, or even take over your vision to immerse you in a virtual reality. With a lens like this, vision can no longer be seen as a strictly biological and embodied process; it becomes an experience co-constructed by intelligent machines.


Engineering augmented reality

Following the debate about Confederate statues and monuments in the US during August 2017, the radical Catholic priest Fr Bob Maguire tweeted, ‘Could we not have Virtual statues which the algorithm could change as directed by public opinion?’

I like this Tweet a lot. Fr Bob makes an incisive observation about the logic and politics of augmented reality – at least as it is imagined by the major media platforms. Platforms like Facebook and Google are investing in virtual, augmented and mixed reality technologies. And, as with most of their engineering projects, encoded into these technologies is a disruptive vision for public life.

Fr Bob cheekily skewers this Silicon Valley logic in a bunch of ways.

He’s aping the Silicon Valley liberal-individualist solution to everything. Forget the difficult debate about history and identity that surrounds these monuments, just measure public opinion and produce a representation of reality that matches that opinion. Forget being caught in history, just have a culture that continuously and automatically remodels itself on whatever the current tastes and preferences of the crowd are.

But, there’s another way to read Fr Bob’s quip. I think that in the vision of augmented reality being imagined by Google and Facebook, the ideal scenario would be that we all individually wear our augmented reality lenses and see the reality we want to see.

As long as we all have our Facebook goggles or Google lenses in, when we go into the park and look at a big statue we will see our own personal hero. White Nationalists will see Robert E. Lee; progressives will look at the same spot, and see someone else – Oprah, Obama, Martin Luther King, Tina Fey eating cake.

The point is this: augmented reality – as envisioned by Facebook and Google – is the engineering effort to take the forms of algorithmic culture currently confined to the feeds of our smartphones and transpose them into the real world. At the moment, when we scroll Facebook we see the news that matches our political viewpoints: if we’re alt-right, we’re immersed in ‘fake news’ conspiracies about violent leftists; if we’re progressive, we’re immersed in outrage about Nazis and the KKK. Augmented reality would weave those simulations into the real world.

So, our public space begins to reflect back to us our political identities.

Is that what we want?

Here we encounter a dilemma. On the one hand if we all saw the statue we wanted to see, would that mean everyone would be happy? Or, would it simply mask the real divisions which the debate over the monuments stands in for? Or, does the presence – or absence – of statues and monuments we disagree with in public space function as an important and constitutive aspect of public life? That a foundational characteristic of public life is to encounter and contend with ideas and people we disagree with, that are other or alien to us?

This is my provocation: we need to see the present effort to engineer virtual, augmented and mixed reality by Facebook, Google and Snapchat as an extension of the simulation-based, predictive and algorithmic culture they have been constructing over the past decade.

We can roughly sketch the history of virtual and augmented reality in three periods.

From the 1960s to the 1980s the US military invested in the development of virtual environments and simulators that could train pilots.

From the 1980s through to the mid-1990s dreams of virtual reality moved beyond the military: Silicon Valley tech-utopian developers, counter-cultural activists and artists began to imagine virtual realities unhooked from the impediments of the material world and its flesh and steel.

From the mid-90s virtual reality technologies, and the dreams about them, went into a kind of hibernation.

This hibernation came about because the dreams of a utopian and independent virtual world or cyberspace couldn’t be technically or politically realised. In a technical sense, low-res displays, latency, motion sickness, large and heavy hardware, lack of wireless connections, no mobile internet, and a lack of interplay with social life and urban space all stalled virtual reality start-ups. Then, over the past five years firms like Oculus Rift and Magic Leap – acquired by Facebook and backed by Google respectively – have been ushering in a new era of virtual reality hype. In the present moment there are three kinds of projects: virtual reality, augmented reality and mixed reality.

Virtual reality is characterised by opaque goggles. Once you are wearing them, you are in an immersive virtual world. Think of virtual reality gaming. Augmented and mixed reality are characterised by translucent screens or glasses. As you wear them, digital simulations are overlaid on your view of the real world. Augmented reality is most evident in our everyday use of Snapchat lenses or filters. Via the screen we see our face overlaid with digital simulations: whiskers, a tiara, a rainbow tongue. Mixed reality is the prototyped ambition of Magic Leap. The limitation of augmented reality is that digital simulations are simply overlaid on the view of the real world; the simulations cannot be made to appear as if they are interacting with the world.

Magic Leap are working toward building a mixed reality technology where simulations will appear to be able to interact with the world. For example, you’ll hold out your hand and a simulation of an elephant will walk around your palm. It will appear to know where your hand begins and ends. The comparison between Magic Leap and Snapchat is a useful one. Magic Leap promote a vision of mixed reality that seems to be just out of reach. Incredible. But, in the future. Snapchat, while not as technologically-sophisticated, is perhaps more culturally significant. With Snapchat, augmented reality is becoming a part of everyday communication rituals. And, Snapchat are figuring out how to monetise augmented reality by selling it to brands. The major investments by Facebook, Google and Snapchat in these technologies indicate to us how serious they are in transforming their core platform architecture, pushing it beyond the smartphone and its flows of images on an opaque screen.

Media platforms like Google and Facebook are multi-dimensional engineering projects. Facebook’s Chief Operating Officer Sheryl Sandberg explained at Recode in 2016 that while the current business plan focusses on monetisation and optimisation of the existing platform, the ten-year strategic plan is focussed on ‘core technology investments’ that will transform the platform infrastructure. The developments keep coming. In August 2017, Facebook lodged a patent in the US for augmented reality glasses that could be used in a virtual reality, augmented reality or mixed reality system. Via translucent glasses or lenses, we can begin to see how Facebook could transition to an augmented reality platform.

Here’s the critical point. These media platforms and partnering brands are not investing in the creation of more sophisticated mechanisms of symbolic persuasion. They are investing in the design of devices and infrastructure that can track and respond to users and their bodies in expanding logistical and sensory registers. Virtual reality projects are one instance of this, the effort to create a form of media that works not by creating symbols but by engineering experience. These companies are attempting to, as Jeremy Packer puts it, ‘code the human into the apparatus’.

Facebook, Google, Apple, Amazon, Microsoft, Sony and Samsung all have major investments in artificial reality. Facebook has 400 AR engineers. Silicon Valley has about 230 hardware and software engineering companies working on VR. Mark Zuckerberg echoes Silicon Valley consensus when he says it is ‘pretty clear’ that soon we will have glasses or contact lenses that augment our view of reality. Media platforms will augment human vision with digital simulations. Imagine looking at a room full of people and seeing their names above their heads, or a reading of their mood or level of interest in what you are saying. If you’re in class, your lecturer or tutor might be able to see the grade of your latest assessment floating above your head, or a colour coding that indicates your level of engagement in the course based on your attendance at class, logins to the learning platform, and grades. The data is available to do this: your university knows your attendance, grades and engagement with software, Google and Facebook can recognise your face.

Augmented reality heralds a shift from media that engineer flows of information to media that engineer experience. The value of mixed or virtual reality firms like Oculus and Magic Leap is attributable in part to their claimed capacity to ‘hack’ or ‘simulate’ the human visual cortex directly. The ‘vomit problem’ or ‘motion sickness’ caused by VR devices is a container term for a number of points of ‘friction’ between the living body and the media device. The latency of the image on the screen inches from your eyes causes a conflict between your visual and vestibular systems, and you vomit. This problem has also been called ‘simulator sickness’, a term that had a particular currency in the 1980s and 1990s with military training simulators. Military researchers found that motion sickness from VR subsides in experienced users. This is an indication of the capacity of the living body to learn to ‘hack around’ the visual-vestibular conflict, to accommodate itself – in neurological ways – to the media device it is entangled with.

The VR hype-industry is characterised by plenty of claims to hack the body, or if not hacking then working around, reorienting, calibrating, or tricking it. Kevin Kelly explains that artificial reality ‘hacks the human brain’ to create a ‘chain of persuasion’. The term ‘chain of persuasion’ – common in VR development – strikes me as an augmented kind of ideological control. Not persuading the subject only via a symbolic account of reality they interpret, but engineering an experience where the body feels present in a particular reality as a precursor to them finding representations persuasive. AR’s account is persuasive not because the human subject ‘makes sense’ of it, but because it affects both the body’s biological system and the subject’s cultural repertoire in a way that feels real.

Magic Leap’s founder Rony Abovitz puts it this way:

VR is the most advanced technology in the world where humans are still an integral part of the hardware. To function properly, VR and MR must use biological circuits as well as silicon chips. The sense of presence you feel in these headsets is created not by the screen but by your neurology… artificial reality is a symbiont technology, part machine, part flesh.

The political economy of these media engineering projects is something like this: where the profits of broadcast media – their fabled ‘rivers of gold’ – were invested in quality content, the profits of media platforms like Google and Facebook are invested in engineering projects.
The vomit problem then is a metaphor for the creative experimentation happening at the ‘touchpoint’ between living bodies and media infrastructure.

We might ask then: how will the ‘experience’ and ‘presence’ of mixed reality be monetised? Google dramatised some of these applications when they were experimenting with Glass. As we look down a city street, icons will appear above buildings the media platform predicts we might be attracted to because they sell our favourite beer or coffee, have good reviews, have a product it knows we are looking for, or because our friend is in there.

Or, perhaps stranger, a platform like Tinder, knowing our preferences for particular kinds of bodies, might be able to sort and rank clubs in a nightlife precinct relative to our cultural tastes and sexual desires. You walk down a street with an AR device on, and it registers affective and physiological responses to people who walk by you. It scans those people: their bodies, faces, clothes, and associates them with a register of cultural and consumer tastes. And, then uses that to incrementally direct your paths through a city, a media platform, a market.

The critique of the political economy of social media has focused mostly on the capacity of platforms to conduct surveillance and target advertisements. But, as Jeremy Packer puts it, advertisers now ‘experiment with reality’: they engineer systems that configure cultural life by collecting, storing and processing data, rather than with ideological narratives.

As the smartphone and its modes of judgment, curation and coding give way to a mixed reality headset, the productive labour of the user will take on new dimensions. The embodied work of tuning the interface between body and lens. The combined neurological and cultural activity of adjusting how we experience reality: from a clear distinction between reality and digital image, to being immersed in a mixed simulation. From persuasion only at the symbolic level, to persuasion also at the affective and biological level.

But also, the work of tuning the predictive simulations of mixed reality via sensory and behavioural feedback. When I look down a street and it makes judgments about where I might want to go, my behavioural, physiological or affective responses to those predictions will inform future classifications and predictions as much as any symbolic content I generate. Here my bodily reactions feed not just the optimisation of a flow of symbols, but the tuning of a calculative device and platform into my lived experience. And in doing so, enable media to engineer logics of control beyond the symbolic: to the affective and logistical.

For all the work audiences did watching television in the twentieth century, that work didn’t change the medium or infrastructure of television itself all that much. But, I think we are moving into an era where the human user is an active contributor to the engineering of media infrastructure itself. And, a critical account of audience exploitation and alienation needs to engage with that.

The ‘vomit problem’ is a useful way of thinking about not just the work of engineers, but also of users who harmonise their lives and bodies with the calculations of media. The engineer works to solve the vomit problem via the ongoing, strategic design of software and hardware. As Packer puts it, media engineering involves strategically addressing problems to optimise the human-technical relationship. The user works to solve the vomit problem too: adapting their bodily physiology, appearance and performances as they move about the world; and, providing embodied feedback via their physiological and affective responses. Here the ordinary user undertakes the productive work of rolling media infrastructure into the material world, onto the living body and through lived experience.

 

Make your own reality

I hesitate to do this because there’s a lot of people on Twitter these days tweeting very grim predictions. But, here goes. This is a thread posted in June 2017 by Justin Hendrix, the director of the NYC Media Lab. He plays out a scenario that makes clear the political stakes of the difference between representation and simulation.

Trust in the media is extremely low right now, but I think it may have a lot further to go, driven by new technologies. In the next few years technologies for simulating images, video and audio will emerge that will flood the internet with artificial content. Quite a lot of it will be very difficult to discern – photorealistic simulations. Combined with microtargeting, it will be very powerful. After a few high profile hoaxes, the public will get the message – none of what we see can be trusted. It might all be doctored in some way. Researchers will race to produce technologies for verification, but will not be able to keep up with the flood of machine generated content. The platforms will attempt to solve the problem, but it will be vexing. Some governments will look for ways to arbitrate what is real. The only way out of this now is to spend as much time trying to understand the externalities of these technologies as we do creating them. But this will not happen, because there is no market mechanism for it. Practically no one has anything to gain from solving these problems.

OK, so Hendrix is one of many who understand Trump, rightly I think, as a symptom of a media culture characterised by what Mark Andrejevic calls ‘infoglut’. The constant flood of views, opinions, theories and images amounts to a kind of disinformation. It becomes harder for us to mediate a shared reality that corresponds with lived experience, that coheres with history or that is jointly understood. In a situation of infoglut actors will emerge, like Putin and Trump, who will thrive on information/disinformation overload.

Hendrix’s grim warning illuminates what is lost when representation gives way to simulation. A media culture organised around the logic of representation is one in which words and images denote or depict objects, people and events that actually exist in the material world. A media culture organised around the logic of simulation is one in which words and images can be experienced as real, even where there is no corresponding thing the sign refers to in the ‘real world’ or outside the simulation itself.

This is what Hendrix forewarns: the creation of artificially intelligent bots that can produce statements, images and videos that a human experiences as real. His point is this: as non-human actors like artificially intelligent bots begin to participate in our public discourse, they have a corrosive effect on the process through which we create a shared understanding of reality.

If we can’t be sure that something we see or read in ‘the media’ is even said by a human, we begin to lose trust in the very idea of using media to understand the world at all. We become reflexively cynical and sceptical about the very character of representation. If we begin to live in a world where we cannot even tell if a human is speaking, then what we lose is the capacity to make human judgments about the quality of representation.

Representation itself begins to break down.

A month after Hendrix’s prediction, computer scientists at the University of Washington reported that they had produced an AI that could create an extremely ‘believable’ video of Barack Obama appearing to say things he had said in another context. OK, so they are not yet at the point of creating a video where Obama says things he has not said, but they’re getting close.

This is how they described their study.

Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.

So, Hendrix is right it seems. We are entering an era where a neural network could produce video of someone saying something they never said. And we, as humans, would be unable to tell. If this kind of artificially constructed speech becomes widespread then the consequence is a dramatic unravelling of the socially-constructed institution of representation. In short, a falling apart of the process through which we as humans go about creating a shared understanding of the world.

If the ‘industrial’ media culture of the twentieth century exercised power via its mass reproduction of imagery, then the ‘digital’ media culture of today is learning to exercise power via simulation. To make a rough distinction, we might say if television was culturally powerful in part because of its capacity to reproduce and circulate images through vast populations, then the power of digital media is different in part because of its capacity to use data-driven non-human machines to create, customise and modulate images and information.
Here’s the thing. Simulation is both a cultural and a technical condition.

Cultural in the sense of accepted practices of talking about the world – like journalism – that establish a commonly held understanding of reality. Simulation is a cultural practice in the sense that people in the world do it: they attempt to make reality conform with their predictions. Technical in the sense of the creation of tools and institutions that produce and disseminate these depictions of reality – like cameras, news organisations and television transmitters.

Digital media technologies, and particularly their capacity to process data, dramatically escalate the capacity to simulate.

It is one thing for say Trump to propagate the lie that Obama was not born in the United States. This, as just one of many of the false statements Trump makes, illustrates part of the character of simulation. Trump says it, over and over, others repeat it, public opinion polling begins to show a majority of his voters believe it. It becomes real to them. But, imagine how this could be escalated if Trump or one of his supporters could create a video where Obama appeared to admit that he was not born in the United States.

The capacity of digital media to simulate – to create images that appear real even when they have no basis in reality – dramatically intensifies a culture of simulation. And, a culture of simulation is one where the images we invent begin to change the real world.

This follows Jean Baudrillard’s logic when he explains that ‘someone who feigns an illness can simply go to bed and pretend he is ill. Someone who simulates an illness produces in himself some of the symptoms’.

The simulation begins to affect reality.

Baudrillard builds in part on Guy Debord’s notion that ‘the saturation of social space with mass media has generated a society defined by spectacular rather than real relations’.
According to Baudrillard, in a world characterised by immersion in media, simulation supersedes representation. The signs – image and words – we consume via media are no longer directly related with a ‘reality’ outside of the system of signs.

To return to Hendrix’s example from the outset, we might say that the ‘fake news’ that has been the subject of public debate since the 2016 Presidential election follows this logic of simulation. News that follows the logic of representation is ‘testable’ against reality. Representative news presents images of people saying things they actually said and accounts of events that actually happened. Simulated news, though, presents a series of claims, images and stories that refer to one another, but cannot be tested against reality.
Simulations though feel real, or are perceived as real, when they immerse us in this self-referential system of signs.

We might say that the ‘fake news’ that went viral during the 2016 Presidential election followed this logic. For some people their Facebook News Feeds began to fill up with repeated stories about the corruption of Hillary Clinton and the Democratic party, vast interwoven conspiracies involving murders, criminal activities, and human trafficking. The more some users clicked on, liked and shared these stories, the more of them they saw. None of these stories stood up to any comparison with reality, yet their constant repetition within News Feeds made them feel real to many Facebook users. These fake news simulations produced symptoms in the bodies and minds of those consuming them. They began to act as if they were real.

So, to reiterate. We might describe this kind of algorithmically-fuelled ‘fake news’ as following the logic of simulation. ‘Fake news’ is the circulation of stories that can be experienced as if they are real, even when there is no corresponding thing the sign refers to in the ‘real world’, or outside of the simulation itself.

Following Baudrillard’s way of thinking, we might say this creates a situation of hyperreality, where the basic relationship between signs and reality implodes. Let’s return then to Hendrix’s prediction we considered at the outset: a situation where non-human, artificially intelligent devices produce their own depictions of real people and events, and we as humans cannot tell if these things were really said or done by fleshy humans. That seems to me to be hyperreal in the sense that Baudrillard means.

That is, hyperreality as the situation where simulations are experienced as real and therefore produce how we experience ‘reality’. Imagine you are watching a video of the President of the United States speaking that looks absolutely real, even though he never said those things. That’s a situation where the relationship between signs and reality has imploded. You can no longer trust that the signs represent reality. The video is a simulation in the sense that the words coming from the President’s mouth do not actually refer to real words the real person named President Obama said. And yet, you cannot really parse the difference. Simulation is no longer ‘referential’, but instead the production of a model or ‘world’ without an underlying reality. As Baudrillard describes it, ‘it is no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real itself.’

To illustrate this logic, listen to the writer Ron Suskind recount a conversation he had with Karl Rove, one of US President George W. Bush’s key political strategists.

Suskind said that Rove told him that reporters like him live ‘in what we call the reality-based community’, which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality… That’s not the way the world really works anymore… We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors… and you, all of you, will be left to just study what we do.’

The order here is what matters. In the order of representation, reality happens, we study it, and then we use language to explain it. In the order of simulation, we imagine and predict a real future, and then we set about making reality conform with our prediction. Think of Karl Rove’s claim that the American empire had moved into a phase where it could ‘create its own reality’ in relation to Baudrillard’s claim that ‘present-day simulators attempt to make the real, all of the real, coincide with their models of simulation’.

The idea of simulation here is a political and cultural condition and a technical achievement. The more you have the computing power to collect and process data that enable you to make predictions, the more you begin to act as if reality conforms to your predictions.
The question we might ask then is: in whose interests is it to pursue the development of cultures and technologies of simulation, rather than representation?

Writing in the London Review of Books, John Lanchester remarks that ‘Facebook has no financial interest in telling the truth’. Buzzfeed reported that in the final three months of the US presidential election, fake news stories on Facebook generated more engagement than real news from reputable news sources. Facebook’s algorithmic architecture is geared for simulation not representation: it uses data to produce immersive streams of images that conform with the moods and preconceptions of individual users.

With the rise of the major platforms, we need to contend with powerful actors whose business model is organised around the effort to simulate and augment reality. For us, as citizens of this world, the struggle is to articulate and defend the order of representation because with it goes the possibility of shared human experience.


Drone logic

 

Drone Logic

Our common image of drones is a military one. Drones are unmanned aircraft controlled by a remote operator. They undertake surveillance, make predictions and execute bombings.

Mark Andrejevic suggests that we think about drone logic. Not just the military use of drones, but how the drone can be thought of as a figure that stands in for the array of sensors and probes that saturate our worlds. Drones are interrelated with a vast network of satellites, cables, and telecommunications hardware. They extend logics of surveillance, data collection, analysis, simulation and prediction.

Drones are diffused throughout our society: collecting information and generating forms of classification, prediction, discrimination and intervention in populations. Thinking this way, we might take the smartphone to be the most widely distributed and used drone. Andrejevic argues that the smartphone is a drone-like probe used by both state and corporate organisations for surveillance. Probes have ‘the ability to capture the rhythms of the activities of our daily lives via the distributed, mobile, interactive probes carried around by the populace’. In this way, smartphones are ‘always on’ probes distributed through a population.

Andrejevic offers us a framework for drone logic. Drones are a hyperefficient probe in four ways:

  1. They extend and multiply the reach of the senses.
  2. They saturate time and space in which sensing takes place (entire cities can be photographed 24 hours a day)
  3. They automate sense-making.
  4. They automate response.

In the public lecture below Mark Andrejevic gives us an account of ‘drone logic’. He asks, ‘what might it mean to describe the emerging logics of “becoming drones”, and what might such a description have to say about the changing face of interactivity in the digital era?’

For him, the figure of the drone serves as an avatar for the interface of emerging forms of automated data capture, sense making, and response. Understood in this way, the figure of the drone can be mobilized to consider the ways in which automated data collection reconfigures a range of sites of struggle — after all, it is a figure born of armed conflict, but with roots in remote sensing (and action at a distance).

 

Drone Empire

In 2014 an art collective working with a local Pakistani village helped lay out an enormous portrait of a child who had been killed in a US drone strike. Buzzfeed writes:

The collective says it produced the work in the hope that U.S. drone operators will see the human face of their victims in a region that has been the target of frequent strikes. The artists titled their work “#NotABugSplat”, a reference to the alleged nickname drone pilots have for their victims. “Bug splat” is the term used by U.S. drone pilots to describe the death of an individual as seen on a drone camera because “viewing the body through a grainy video image gives the sense of an insect being crushed”. The artists say that the purpose of “#NotABugSplat” is to make those human blips seem more real to the pilots based thousands of miles away: “Now, when viewed by a drone camera, what an operator sees on his screen is not an anonymous dot on the landscape, but an innocent child victim’s face.” The creators hope their giant artwork will “create empathy and introspection amongst drone operators, and will create dialogue amongst policy makers, eventually leading to decisions that will save innocent lives”.

The artwork attempts to put a human face on drone warfare. While the US promotes the use of drones as a more precise and targeted way of identifying and eliminating enemy targets, they enact warfare at a distance. The drone operator sits in a remote location out of harm’s way, directing the drone via a screen and joystick. While this makes warfare seem safer for the intervening military, there is evidence that drone operators are traumatised by the work, and that drones kill many innocent victims.

The Bureau of Investigative Journalism has conducted extensive reporting into the use of drones in places like Pakistan and Afghanistan. This includes documenting every drone strike in these countries. In Pakistan alone they report the US has conducted 420 drone strikes since 2004. Those strikes are estimated to have killed over 900 civilians, more than 200 of whom were children, and to have injured a further 1,700 people.

In 2009, The New Yorker published a detailed investigation of the US drone program’s origins and activities.

In her talk ‘Drones, the Sensor Society, and US Exceptionalism’ at the Defining the Sensor Society Symposium in 2014, Lisa Parks examines the US investment in drones for military and commercial purposes.

Listen to her talk here: Introduction, Part 1, Part 2, Part 3.

Parks’ arguments and provocations

If the relationships between bodies and machines are ‘dynamic techno-social relations’, what are we to make of the impression created by the US military that drones remove responsibility from human actors in war zones? The drone appears to be the actor, rather than the human soldier. But, drones have a heavy human cost. Hundreds of civilians and children are killed by US drone strikes in targeted areas.

The drone is more than a sensor, and more than a media technology that produces images of the world, it directly intervenes in the world.

Drones don’t just hunt and kill from afar, they seek to secure territories and administer populations from the sky.

Drones are like '3D printers more than video games, they sculpt the world as much as they simulate or sense'.

Drones intercept commercial mobile phone data as well as tracking military targets. They conduct both ‘targeted’ and ‘ubiquitous’ surveillance. They ‘scoop up’ as much mobile and internet communication data as they can. The drone is a ‘flying data miner’ or ‘digital extractor’ that collects any information it can in order to then identify patterns.

Drones enable ‘death by metadata’. Drone operators target mobile phones, determined by location data, without identifying who is actually holding the phone. A drone operator explains: ‘it’s really like we’re targeting a cell phone, we’re not going after people we are going after their phones in the hopes the person on the other end of that missile is a bad guy’. Pre-emptive targeted killing is met with retrospective identification. ‘We can kill if we don’t know your identity but once we kill you we want to figure out who we killed’. All but three African countries now require mandatory sim card registration strategies so that every sim card can be related to a person. This enables sim card databases to be used for identifying individuals in time and space. But, people are identified by inference. The person holding the mobile phone is presumed to be the person who registered that sim card. ‘Metadata plus’ is an app created by an activist that informs users each time the US conducts a drone strike. Terrorist groups often confiscate mobile phones from areas they are in to avoid being detected by drones.

Drones detect body heat. This marks a shift in how racial differences are sensed and classified. Infrared sensors enable drones to see through clouds and buildings. In a visually cluttered and chaotic environment infrared is useful for identifying living bodies to target. To the drone a person is visible via their body heat. This does not enable the drone operator to distinguish between different kinds of people: adults and children, military actors and civilians. Once a drone identifies a person as a red splotch of body heat on a monitor, the decision to ‘strike’ the target is made via data collection and prediction. Often, that data is generated via a mobile phone. What marks the red splotch out as the intended target is data indicating that their mobile phone is present at the same location. What is targeted is the mobile phone, which is assumed to be on the nearest red splotch on the monitor.

People on the ground create drone survival guides. These guides give information on various kinds of drones, how to identify them, and how to avoid their detection systems.

Drone Wars is a UK group which collects information on drone operations. Check out their Drone Crash database for information and images on drone crashes.

Drone Labour

Alex Rivera is a filmmaker and artist who has explored drones for more than fifteen years. His film Sleep Dealer (2008) is a vivid account of the social implications of drones and algorithmic media in the global economy. In part, the film features Mexican workers who work ‘node jobs’ in vast computer sweatshops or virtual factories where they have nodes implanted in their bodies and connected to a computer system. Watching monitors, they move their own bodies to control robots in American cities. The robots undertake all the labour that real Mexican immigrants currently undertake in the US: cleaning houses, cutting grass, construction. The US economy has maintained the Mexican labour in its outputs but not its human bodies. The human bodies all reside in impoverished conditions in Mexico, controlling robots that perform tasks in the US.

The film illustrates ‘drone’ logic. Human actors use a sensory and calculative media system to remotely perform tasks from afar. Rivera suggests that our global economy is increasingly underwritten by this drone logic: military drones, call centres, immigrant labour in vast factories who only interact with loved ones via the screen and so on are all examples of the way computerisation, digital networks and media interfaces enable humans to act on geographic areas and processes that they are not physically present in.
Furthermore, the film connects the concept of the drone to our discussion about the implosion of bodies and machines in the era of calculative media. The workers in the film are cyborgs in the sense that they are literally plugged into a vast media system. Their capacity to work involves their physical fleshy body, the digital network through which their human senses convey digital data, and the robots in distant places performing tasks.

You can watch his film online from the UQ library.

You can stream and buy Sleep Dealer here.

Check out these interviews with Rivera in Foreign Policy and The New Inquiry.

Algorithmic culture and machine learning

What’s an algorithm?

An algorithm is a logical decision-making sequence. Tarleton Gillespie explains that for computer scientists, ‘algorithm refers specifically to the logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome.’

On media platforms like Facebook, Instagram, Netflix and Spotify content-recommendation algorithms are the programmed decision-making that assembles and organises flows of content. The News Feed algorithm on Facebook selects and orders the stories in your feed.

‘Algorithm’ is often used in a way that refers to a complex machine learning process, rather than a specific or singular formula. Algorithms learn. They are not stable sequences, but rather constantly remodel based on feedback. Initially this is accomplished via the generation of a training ‘model’ on a corpus of existing data which has been in some way certified, either by the designers or by past user practices. The model is the formalization of a problem and its goal, articulated in computational terms. So algorithms are developed in concert with specific data sets – they are ‘trained’ based on pre-established judgments and then ‘learn’ to make those judgments into a functional interaction of variables, steps, and indicators.

Algorithms are then ‘tuned’ and ‘applied’. Improving an algorithm is rarely about redesigning it. Rather, designers “tune” an array of parameters and thresholds, each of which represents a tiny assessment or distinction. So for example, in a search, this might mean the weight given to a word based on where it appears in a webpage, or assigned when two words appear in proximity, or given to words that are categorically equivalent to the query term. These thresholds can be dialled up or down in the algorithm's calculation of which webpage has a score high enough to warrant ranking it among the results returned to the user.
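To make this idea of ‘tuning’ concrete, here is a minimal sketch in Python. The signals and weights are entirely hypothetical and invented for the example; real search ranking combines vastly more parameters, but the principle of dialling thresholds up or down is the same.

```python
# A minimal sketch of 'tuning' a ranking algorithm, assuming hypothetical
# weights and signals; real search engines combine far more parameters.

# Each weight is a tuneable threshold an engineer can dial up or down.
WEIGHTS = {
    "term_in_title": 3.0,   # query word appears in the page title
    "term_in_body": 1.0,    # query word appears in the body text
    "terms_adjacent": 2.0,  # two query words appear next to each other
}

def score_page(page, query_terms):
    """Return a relevance score for one page against a query."""
    score = 0.0
    title_words = page["title"].lower().split()
    body_words = page["body"].lower().split()
    for term in query_terms:
        if term in title_words:
            score += WEIGHTS["term_in_title"]
        if term in body_words:
            score += WEIGHTS["term_in_body"]
    # Reward query terms appearing side by side in the body.
    for first, second in zip(query_terms, query_terms[1:]):
        for i in range(len(body_words) - 1):
            if body_words[i] == first and body_words[i + 1] == second:
                score += WEIGHTS["terms_adjacent"]
    return score

pages = [
    {"title": "media platforms", "body": "algorithms order the news feed"},
    {"title": "cooking at home", "body": "media platforms recommend recipes"},
]
query = ["media", "platforms"]
ranked = sorted(pages, key=lambda p: score_page(p, query), reverse=True)
print([p["title"] for p in ranked])
```

Changing any one weight changes which page ‘wins’, which is exactly the kind of tiny assessment engineers adjust when they tune rather than redesign an algorithm.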

What is algorithmic culture?

Tarleton Gillespie suggests that from a social and cultural point of view our concern with the 'algorithmic' is a critical engagement with the 'insertion of procedure into human knowledge and social experience.’

Algorithmic culture is the historical process through which computational processes are used to organise human culture. Ted Striphas argues that ‘over the last 30 years or so, human beings have been delegating the work of culture – the sorting, classifying and hierarchizing of people, places, objects and ideas – increasingly to computational processes.’ His definition reminds us that cultures are, in some fundamental ways, systems of judgment and decision making. Cultural values, preferences and tastes are all systems of judging ideas, objects, practices and performances: as good or bad, cool or uncool, pleasant or disgusting, and so on. Striphas’ point then, is that over the past generation we have been building computational machines that can simulate these forms of judgment. This is remarkable in part because we have long understood culture, and its systems of judgment, as confined to the human experience.

Striphas defines algorithmic culture as ‘the use of computational processes to sort, classify, and hierarchise people, places, objects, and ideas, and also the habits of thought, conduct and expression that arise in relationship to those processes.’ It is important to catch the dynamic relationship Striphas is referring to here. He is pointing out that algorithmic culture involves both machines learning to make decisions about culture and humans learning to address those machines. Think of Netflix. Netflix engineers create algorithms that can learn to simulate human judgments about films and television, to predict which human users will like which films. This is one part of an algorithmic culture. The other important part, Striphas argues, is the process through which humans begin to address those algorithms. So, for instance, if film and television writers and producers know that Netflix uses algorithms to decide if an idea for a film or television show will be popular, they will begin to ‘imagine’ the kinds of film and television they write in relation to how they might be judged by an algorithm. This relationship creates a situation where culture conforms more and more to users, rather than confronting them. Using the example of the Netflix recommendation algorithm, Striphas and his co-author argue that customised recommendations produce ‘more customer data which in turn produce more sophisticated recommendations, and so on, resulting – theoretically – in a closed commercial loop in which culture conforms to, more than it confronts, its users’.

Striphas helpfully places algorithmic culture in a longer history of using culture as a mechanism for control. He suggests that algorithmic culture 'rehabilitates' some of the ideas of the British cultural critic, Matthew Arnold, who wrote Culture and Anarchy in 1869. Arnold argued that in the face of increasing democratisation and power being given over to ordinary people in the nineteenth century, the ruling elites had to devise ways to maintain cultural dominance. Arnold argued this should be done by investing in institutions, such as schools, that would ‘train’ or ‘educate’ ordinary people into acceptable forms of culture. Later, public broadcasters, such as the BBC, also took up this role. Arnold defines culture as ‘a principle of authority to counteract the tendency to anarchy which seems to be threatening us’. By principle of authority he means that a selective tradition of norms, values, ideas, tastes and ways of life can be deployed to shape a society.

Striphas argues that this idea of using culture as an authoritative principle is the one 'that is chiefly operative in and around algorithmic culture'. Today, algorithms are used to 'order' culture, to drive out 'anarchy'. Media platforms like Facebook, Google, Netflix and Amazon present their algorithmically-generated feeds of content and recommendations as a direct expression of the popular will. But, in fact, the platforms are the new 'apostles' of culture. They play a powerful role in deciding 'the best that has been thought and said'.

Algorithmic culture is the creation of a 'new elite', powerful platforms that make the decisions which order public culture, but who do not disclose what is 'under the hood' of their decision-making processes. The public never knows how decisions are made, but we can assume they ultimately serve the strategic commercial interests of the platforms, rather than the public good. The platforms might claim to reflect the ‘popular will’, but that’s not a defensible claim when their whole decision making infrastructure is proprietary and not open to public scrutiny or accountability. Striphas argues that ‘what is at stake in algorithmic culture is the gradual abandonment of culture’s publicness and thus the emergence of a new breed of elite culture purporting to be its opposite.’

What is machine learning?

An algorithmic culture is one in which humans delegate the work of culture to machines. To understand how this culture works we need to know a bit about how machines make decisions. From there, we can begin to think critically about the differences between human and machine judgment, and what the consequences of machine judgment might be for our cultural world.

Machine learning is a complex and rapidly developing field of computer science. In essence, it is the process of developing algorithms that process data, learn from it and make decisions or predictions. These algorithms are ‘trained’ and tested using particular data sets, and are then used to classify, organise, and make decisions about new data.

Stephanie Yee and Tony Chu from r2d3 created this visual introduction to a classic machine learning approach. My suggestion is to work through this introduction.

A typical machine learning task is ‘classification’: sorting data and making distinctions. For instance, you might train a machine to ‘classify’ houses as being in one city or another.

Classifications are made by making judgments about a range of dimensions in data (these might be called edges, features, predictors, or variables). For instance, a dimension you might use to classify a home might be its price, its elevation above sea level, or how large it is.
In a typical approach to machine learning a decision-making model is created and ‘trained’ using ‘training data’. After the model is built it is ‘tested’ with previously unseen ‘test data’.
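As a rough illustration of this workflow, here is a toy sketch in Python using scikit-learn. The house data and the two city labels are made up for the example; the point is simply the pattern of training a model on labelled examples and then testing it on previously unseen data.

```python
# A toy sketch of the classification task described above, assuming made-up
# data: classify homes as being in one of two cities from price and elevation.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Each row is [price in $000s, elevation in metres]; labels are the city.
homes = [
    [600, 5], [750, 8], [820, 3], [900, 10],        # low-lying coastal city
    [400, 120], [450, 150], [500, 90], [380, 200],  # cheaper, hilly city
]
cities = ["coastal", "coastal", "coastal", "coastal",
          "hill", "hill", "hill", "hill"]

# Split into training data (to build the model) and test data (to check it).
X_train, X_test, y_train, y_test = train_test_split(
    homes, cities, test_size=0.25, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("accuracy on unseen test data:", model.score(X_test, y_test))
print("prediction for a $700k home at 7m elevation:", model.predict([[700, 7]]))
```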

There are two basic approaches: supervised and unsupervised.

  • Supervised approaches: the machine is given labelled examples to learn from and is told which answers are right and wrong. Used for classification.
  • Unsupervised approaches: no labelled examples are given to the machine. The machine generates its own features and groupings. Good for pattern identification – the machine will see patterns humans may not (see the sketch after this list).

For a useful explainer on machine learning approaches, and examples of the types of problems machine learning tackles, check out this introduction by Jeroen Moons.
 

What is deep learning?

In a classic machine learning approach to ‘classification’ humans first create a labelled data set and articulate a decision-making process that they ‘teach’ the machine to replicate. This approach works well where there is an available ‘labelled’ data set and where humans can describe the decision-making sequence in a logical way.

Classic machine learning approaches are less useful, however, for making sense of extremely large, natural, and unlabelled data sets, or where the decision-making sequence is not easy to articulate. This is where deep learning comes in. The dominance of deep learning in recent years has been driven by an enormous increase in available data and computer processing power. Deep learning approaches are used for classification or pattern-recognition where specifying the features in advance is difficult.

Think of the example of recognising handwriting. If all the letters of an alphabet are in the one typeface, then it is easy to specify the decision-making sequence. See the letter ‘A’ below.

The letter is divided up into 16 pixels. From there, a simple decision-making sequence can be articulated that would distinguish ‘A’ from all other letters in the alphabet. If pixels 2, 3, 6, 7, 9, 12, 13 and 16 are highlighted, then it is an ‘A’.
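That kind of hand-written rule can be expressed in a few lines of code. The sketch below simply hard-codes the pixel pattern described above; it works only for the one fixed typeface.

```python
# A hand-written decision rule of the kind described above: if exactly this
# fixed set of pixels is on, call the glyph an 'A'. The pixel numbers follow
# the example in the text; any fixed typeface would have its own set.
A_PIXELS = {2, 3, 6, 7, 9, 12, 13, 16}

def is_letter_a(highlighted_pixels):
    """Classify a 16-pixel glyph as 'A' only if it matches the fixed pattern."""
    return set(highlighted_pixels) == A_PIXELS

print(is_letter_a([2, 3, 6, 7, 9, 12, 13, 16]))  # True for the set typeface
print(is_letter_a([2, 3, 6, 7, 9, 12, 14, 16]))  # False: handwriting rarely matches exactly
```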

But, imagine that instead of an A in this set typeface, you instead want a machine to recognise human handwriting. Each human writes the letter ‘A’ a bit differently, and most humans write it differently every time – depending on where in the word it is, how fast they are writing, if they are writing in lower case, upper case or cursive script.


While a human can accurately recognise a handwritten ‘A’ when they see it, they could not articulate a reliable decision-making procedure that explains how they do that. Think of it like this, you can recognise ‘A’ but you cannot then explain exactly how your brain does it.

This is where ‘deep learning’ or ‘deep neural networks’ come in. Deep neural networks are a machine learning approach that does not require humans to specify the decision-making logic in advance. The basic idea is to find as many examples as possible (like millions of examples of human handwriting, or images, or recordings of songs) and give them to the network. The network looks over these numerous examples to discover the latent features within the data that can be used to group them into predefined categories. A large neural network can have many thousands of tuneable components (weights, connections, neurons).
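Here is a minimal sketch of this approach in Python, using the standard MNIST set of handwritten digits as a stand-in for the handwriting example above. The layer sizes and training settings are arbitrary illustrative choices, not a recipe; the point is that no decision rules are specified in advance.

```python
# A minimal sketch of the deep learning approach, using the MNIST handwritten
# digit set as a stand-in for the handwriting example above. No decision rules
# are specified in advance; the network tunes its own weights from examples.
from tensorflow import keras

# 60,000 labelled examples of handwritten digits, each a 28x28 pixel image.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to 0-1

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # pixels in
    keras.layers.Dense(128, activation="relu"),    # thousands of tuneable weights
    keras.layers.Dense(10, activation="softmax"),  # a probability for each digit out
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
print("accuracy on unseen handwriting:", model.evaluate(x_test, y_test)[1])
```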

In 2012, Google publicised the development of a neural network that had ‘basically invented the concept of a cat’.

Google explained that
 

Today’s machine learning technology takes significant work to adapt to new uses. For example, say we’re trying to build a system that can distinguish between pictures of cars and motorcycles. In the standard machine learning approach, we first have to collect tens of thousands of pictures that have already been labeled as “car” or “motorcycle”—what we call labeled data—to train the system. But labeling takes a lot of work, and there’s comparatively little labeled data out there. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. These algorithms work by building artificial neural networks, which loosely simulate neuronal (i.e., the brain’s) learning processes. Neural networks are very computationally costly, so to date, most networks used in machine learning have used only 1 to 10 million connections. But we suspected that by training much larger networks, we might achieve significantly better accuracy. So we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.

A critically important aspect of a deep learning approach is that the human user cannot know how the network configured its decision-making process. The human can only see the ‘input’ and ‘output’ layers. The Google engineers cannot explain how their network ‘learnt’ what a cat was; they can only see the network produce this ‘concept’ as an output.

Watch the two videos below for an explanation of neural networks.

In this first video Daniel Angus explains the basic ‘unit’ of a neural network: the perceptron.


A neural network, then, is made up of layers of perceptrons joined by thousands, millions or even billions of weighted connections. The neural network ‘trains’ by adjusting the weightings between connections, reacting to feedback on its outputs.
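As a toy illustration of a single perceptron and this feedback-driven adjustment of weights, here is a sketch of one perceptron learning to behave like a logical AND gate. This is deliberately the simplest possible case; real networks chain enormous numbers of these units together.

```python
# A minimal sketch of a single perceptron: weighted inputs, a threshold,
# and weights nudged in response to feedback on each wrong answer.

def perceptron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Train it to behave like a logical AND gate.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # repeat over the training examples
    for inputs, target in examples:
        error = target - perceptron(inputs, weights, bias)
        # Feedback: adjust each weight in proportion to its input and the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print(weights, bias)
print([perceptron(x, weights, bias) for x, _ in examples])  # expect [0, 0, 0, 1]
```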

In this second video Daniel Angus explains how the neural network learns to classify data, identify patterns and make predictions using the examples of cups and songs in a playlist.

Here are some more examples of deep neural networks.

This deep neural network has learnt to write like a human.

This one has learnt to create images of birds based on written descriptions.

This neural network has learnt to take features from one image and incorporate them in another.

In each of these examples the network is accomplishing tasks that a human can do with their own brain, but could not specify as a step-by-step decision-making sequence.

Finally, let’s relate these deep learning approaches back to specific media platforms that we use everyday.

In 2014 Sander Dieleman wrote a blog post about a deep learning approach he had developed at Spotify.

Dieleman’s experiment aimed to respond to one of the limitations of Spotify’s collaborative filtering approach. You can find out more about Spotify’s recommendation algorithms in this piece from The Verge.

In short, a collaborative filtering approach uses data from users’ listening habits and ratings to recommend songs to users. So, if User A and User B like many artists in common, then this approach predicts that User A might like some of the songs User B likes that they have not heard yet. One of the limitations of this approach is the ‘cold start problem’. Put simply, how to classify songs that no human has heard or rated yet? A collaborative approach needs many users to listen to a song before it can begin to determine patterns and make predictions about who might like it. Dieleman was inspired by deep neural networks that had learnt to identify features in photos; he thought perhaps a deep learning approach could be used to identify features in songs themselves, without using any of the metadata attached to songs (like artist name, genre, tags, ratings). His prediction was that, over time, a deep neural network might be able to learn to identify more and more fine-grained features in songs. Go check out his blog post; as you scroll down you will see he offers examples.
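For readers who want the collaborative filtering intuition in code, here is a toy sketch with an invented ratings table. It is not Spotify’s system, just the basic ‘users with similar tastes’ logic that runs into the cold start problem Dieleman was trying to get around.

```python
# A toy sketch of collaborative filtering, assuming a made-up ratings table:
# recommend to User A songs liked by the users whose tastes most resemble theirs.
import numpy as np

songs = ["song1", "song2", "song3", "song4", "song5"]
# Rows are users, columns are songs; 1 = liked, 0 = not heard / not liked.
ratings = np.array([
    [1, 1, 0, 0, 1],   # User A
    [1, 1, 1, 0, 1],   # User B -- very similar tastes to A
    [0, 0, 1, 1, 0],   # User C -- quite different tastes
])

def cosine(u, v):
    """Similarity between two users' listening vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

user_a = ratings[0]
for other, name in [(ratings[1], "B"), (ratings[2], "C")]:
    sim = cosine(user_a, other)
    # Songs the other user liked that User A hasn't heard yet.
    new_songs = [s for s, a, o in zip(songs, user_a, other) if a == 0 and o == 1]
    print(f"User {name}: similarity {sim:.2f}, would recommend {new_songs}")
```

Notice that a brand-new song has a column of zeros: no similarity calculation can surface it, which is exactly why Dieleman turned to the audio itself.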

At first the network can identify some basic features. For instance, it creates a filter that identifies ‘ambient’ songs. When you, as a human, play those ambient songs you can immediately perceive what the ‘common’ feature is that the network has picked out. Ambient music is slow and dreamy. But, remember, it would be incredibly difficult to describe exactly what the features are of ambient music in advance.

As the network continues to learn, it can create more finely tuned filters. It begins to identify particular harmonies and chords, and then eventually it can distinguish particular genres. Importantly, it groups songs together under a ‘filter’. It is up to the human to then label this filter with a description that makes sense. So, when the network produces ‘filter 37’, it is the human who then labels that as ‘Chinese pop’. The network doesn’t know it is Chinese pop, just that it identifies shared features among those songs.

What makes this deep learning example useful to think about is this: Dieleman has created a machine that can classify music in ways that make sense to a human, but without drawing on any human-made labels or instructions to get started. The machine can accurately simulate and predict human musical tastes (like genres) just by analysing the sonic structure of songs. This is its version of ‘listening’ to music. It can learn to classify songs in the same way a human would, but by using an entirely non-human machine process that is unintelligible to a human.

Nicholas Carah, Daniel Angus and Deborah Thomas

The difference between representation and simulation


What’s the difference between representation and simulation?

Let’s take representation to be the basic social process through which we create signs that refer to a shared sense of reality. The twentieth century is remarkable in part because humans created an enormous culture industry that managed this social process of representation. Through radio, film and television, vast populations came to understand the enormous social reality within which their lives were embedded. Critically, representation only works because people feel that the signs they see actually refer to, cohere with or match their really-lived experience.

One way to think about simulation is that it upends the order of representation. Let me borrow the famous illustration of the French philosopher Jean Baudrillard. We can say that a map is a representational text. A map of a city represents that real city. You can use that map to actually find your way around a real-world place: its really-existing streets, buildings and landmarks. What if, Baudrillard suggests, a map stops functioning as a representation and begins to function as a simulation? If in the order of representation the territory precedes the map, then in a simulation the map precedes the territory. That is, in representation the map comes after the real world, but in simulation the map comes first and begins to shape the real world.

OK, hang in here. Baudrillard has a fundamental insight for us that really matters in a society increasingly organised around the logic of prediction. Here’s a fairly basic example of this claim that simulations are signs that precede reality, from William Bogard. Think of a simulation in the sense of a computer ‘sim’, like the software that teaches pilots to fly. In a simulation like this, signs are only related to other signs. The signs the trainee pilot sees on the screen, such as the runway, geographical features and so on, are only meaningful or operational within the simulation or in relation to the other signs enclosed within that system. When the pilot is sitting in the simulator ‘flying’ there is, of course, no real underlying reality they are ‘flying through’. What they see on the screen is not real sky, clouds, ground.

But, even so, this simulation is not a production of pure fiction; it is related to the real world. Simulations intervene in the real world and they can only be understood in relation to the real world. In this case, a fighter pilot can only learn to fly by first using a simulation. The simulator enables them to habituate their bodies to the rapid, almost pre-conscious, decision making and the physiological impact of flying at supersonic speed.
So, we might say that while the simulation has no underlying reality, fighter pilots can only fly supersonic planes in the real world because they can train their minds and bodies in a simulation first. The simulation brings into being, in the real world, a fighter pilot. The fighter pilot could not exist without the simulation. The simulation then precedes and shapes reality.

So, here’s the thing to start thinking about. Think about all the ways in which our capacity to ‘simulate’, to create things in advance of their existence in the real world, to predict the likelihood of events before they take place, actually affects our really-lived lives. Simulations intervene in the real world.

For example, think about the capacity to clone animals or even genetically-engineer humans. Here’s William Bogard offering us a thought experiment on genetically-engineered children.

No longer bound by their ‘real’ genetic codes carried in their own bodies, parents may be able to ‘compile’ their ideal child from a genetic database. A program might even help them calculate their ideal child by drawing on other data sets. For example, information about the parents’ personalities might be used to compile a child who they will get along with, or information about the cultural or industrial dynamics of the city where the parents live might be used to compile a child likely to fit in with that cultural milieu or have the aptitude for the forms of employment available in that region. The child ‘born’ as a result of such interventions would always be a simulation, always be virtual, because they were the product of a code or computation performed using databases. This does not mean the child is not ‘real’, the child of course exists, but they are virtual in the sense that they could not exist without the technologies of surveillance and simulation which brought them into reality.

Is the child a ‘real’ child? Of course it is. But, it is also a simulation, in the sense that its very biological form was predicted and engineered in advance. We begin to project our views of what an ‘ideal’ child is into the future production of the species. We can think here of Bogard’s ‘child’ as a metaphor for our public and media culture. Of course, it is our ‘real’ or ‘really lived’ experience, but it would not exist without the collection and processing of data, and the simulations that are produced from that processing. Simulations require data and that data is produced via technologies of surveillance. To clone a sheep you need a complete dataset of the sheep’s genetic code, so you need to have the technologies to map the genetic code. To build a realistic flight simulator you need to have mapping technologies to construct simulations of the physical world. As Bogard argues, simulation in its most generic sense is the point where the imaginary and real coincide, where our capacity to imagine certain kinds of futures coincides with our capacity to predict and produce them in advance.

The larger philosophical point here, is this: imagine a human experience where the future becomes totally imaginable and predictable, where its horizon closes in around whatever powerful humans today want. Bogard lays it out like this.

Technologies of simulation are forms of hyper-surveillant control, where the prefix ‘hyper’ implies not simply an intensification of surveillance, but the effort to push surveillance technologies to their absolute limit... That limit is an imaginary line beyond which control operates, so to speak, in ‘advance’ of itself and where surveillance – a technology of exposure and recordings – evolving into a technology of pre-exposure and pre-recording, a technical operation in which all control functions are reduced to modulations of preset codes.

Bogard introduces some significant critical ideas here. Firstly, he indicates that technologies of simulation are predictive, but they can only make reliable predictions if they have access to data collected by technologies of surveillance. For example, Norbert Wiener’s invention of a machine that could compute the trajectory of enemy pilots in World War II combined surveillance, prediction, and simulation. The radar conducted surveillance to collect data on the actual trajectory of an enemy aircraft; then a computational machine used algorithms to simulate the likely trajectory of that aircraft. This ability to interrelate the processes of surveillance and simulation is especially important because this process underpins much of the direction of present-day digital media platforms and devices.
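To make the surveillance-then-simulation loop concrete, here is a deliberately crude sketch: past radar observations are extrapolated into a predicted future position. Wiener’s actual predictor was a far more sophisticated statistical filter; this only shows the basic move from recorded past to simulated future.

```python
# A drastically simplified stand-in for the predictor described above: radar
# observations of an aircraft's past positions are used to extrapolate where
# it is likely to be next. (Wiener's actual predictor was a statistical filter,
# not this naive constant-velocity rule.)

def predict_next(observations, steps_ahead=1):
    """Extrapolate the next position assuming the last observed velocity holds."""
    (x1, y1), (x2, y2) = observations[-2], observations[-1]
    vx, vy = x2 - x1, y2 - y1           # velocity inferred from surveillance data
    return (x2 + vx * steps_ahead, y2 + vy * steps_ahead)

radar_track = [(0, 0), (2, 1), (4, 2), (6, 3)]   # observed positions over time
print(predict_next(radar_track, steps_ahead=2))  # simulated future position: (10, 5)
```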

Secondly, Bogard suggests that by using data surveillance, simulations can not only predict the likely future, they can actually create the future based on data about the past. By predicting a likely future, we make it inevitable by acting to construct it; by acting ‘as if’ an event is likely to unfold, we ensure that it does. Admittedly, this can be a fairly complicated logic to think through. However, the critical idea to draw from this is that surveillance is not just a technology of the past, observing what people have done, or of the present, observing what people are doing. Surveillance also constructs the future: once coupled with simulation, it becomes a building-block in a system of control where pre-set codes and models program the future and what people will do.

Thus these technologies usher in, and here I’m quoting Bogard, ‘a fantastic dream of seeing everything capable of being seen, recording every fact capable of being recorded, and accomplishing these things, whenever and wherever possible prior to the event itself.’ The virtual is made possible when ‘surveillance’ and ‘simulation’ become simultaneous, linked together in an automatic way which enables the past to be immediately apprehended and analysed in ways that code the present. Let’s go back to Bogard’s example of the genetically-engineered child.
No longer bound by the ‘real’ genetic codes carried in their own bodies, parents may be able to ‘customise’ their ideal child from a genetic database. The child is real; they exist. But they are also virtual in the sense that they could not exist without the technologies of surveillance and simulation which brought them into reality.

Bogard is being deliberately imaginative in his account. He is attempting to conceptualise the ‘limits’ of surveillance and simulation technologies and indicate how the technologies of simulation can be interwoven with reality in complex ways. If the information that can be collected and stored becomes limitless, and the capacity to predict, calculate, and simulate using that information also expands, then the role media technologies play in our societies will shift dramatically in the years ahead. It might even profoundly unsettle our understanding of what media is. For example, if parents can ‘compile’ a desirable child using a combination of surveillance and simulation technologies, would the resulting child be a media product?

In many respects the child could be construed as a customised media device, containing information suited to the consumer’s requests. This sounds messed up, but Bogard’s proposition asks us to think about the limits of surveillance technologies. If surveillance becomes ‘complete’, then the possible future becomes ‘visible’. Crucially, the past can be repeated: the future no longer ‘unfolds’ randomly, but can be ‘managed’ by drawing on data about the past, which enables it to be not just ‘predicted’ but brought into being – to be virtualised.

If all of this sounds a bit fanciful, then at least consider this point. Our media system is characterised by the increasing capacity to conduct surveillance and create simulations. Surveillance is the capacity to watch, simulation the capacity to respond. The two are interdependent. This system is productive and predictive. Together surveillance and simulation make calculations and judgments about what will happen next, and in doing so shape the coordinates within which institutions and individuals act. Technologies of surveillance and simulation then prompt us to think carefully about what the human experience is, and what the interrelationships are between humans and increasingly predictive and calculative technologies.

In the last post I mentioned the episode Be Right Back from Charlie Brooker’s Black Mirror. A young woman, Martha, gets a robot version of her dead partner Ash. The robot is programmed based on all the data Ash generated while he was alive. It looks like him, has his gestures and his interests, and speaks like him. The robot version of Ash can learn to perform as Ash: his language, speech and expressions. For instance, Martha tells the robot what ‘throwing a jeb’ meant in their relationship, and later he uses that expression in context. But the robot is unable to make its own autonomous decisions. Martha is the robot’s ‘administrator’ and he will do whatever she asks. The robot misses the nuances of human relationships. It knows how to argue, but not how to fight. The robot cannot be affected. It can’t engage in open-ended, deeply human creativity. It can’t ‘spend time’ with another human. The night before she takes him to a cliff, Martha and the robot Ash have this exchange.

Martha: get out, you are not enough of him…
Robot: did I ever hit you?
Martha: of course not, but you might have done.
Robot: I can insult you, there is tons of invective in the archive, I like speaking my mind, I could throw some of that at you.

The robot can manipulate how Martha feels, but it can’t understand her feelings or feel for itself. What do our intimate others know about us that our devices cannot? What can humans know about each other that technologies of surveillance cannot? What we look like when we cry, what we might do but haven’t yet done, how we respond to our intimate others as they change. Martha reaches this impasse with Ash. Ash is the product of surveillance and simulation. He doesn’t just ‘represent’ their relationship, he intervenes in it. He begins to shape Martha’s reality in ways living Ash did not, and now – having passed away – cannot. Martha takes Ash to a cliff.

Robot: Noooo, don’t do it! (joking). Seriously, don’t do it.
Martha: I’m not going to.
Robot: OK
Martha: See he would’ve worked out what was going on. This wouldn’t have ever happened, but if it had, he would’ve worked it out.
Robot: Sorry, hang on that’s a very difficult sentence to process.

Why is it difficult to process? Because as much as it’s a sensible statement, it is an affective one – it is about how she feels, but also about the open-ended nature of human creativity: the capacity to consider and imagine things that are not, or might have been, or could be.

Martha: Jump
Robot: What over there? I never express suicidal thoughts or self harm.

The robot is always rational.

Martha: Yeah well you aren’t you are you?
Robot: That’s another difficult one.
Martha: You’re just a few ripples of you, there is no history to you, you’re just a performance of stuff that he performed without thinking and it’s not enough.
Robot: C’mon I aim to please.
Martha: Aim to jump, just do it.
Robot: OK if you are absolutely sure.
Martha: See Ash would’ve been scared, he wouldn’t have just leapt off he would’ve been crying

She makes a decision that might surprise us, though. Rather than put photos of her daughter’s father in the attic, she puts the robot up there. The daughter visits a simulation of her father each weekend. We can see, I would argue, that Brooker draws us toward thinking about our ambivalent entanglements with these devices: the intimacy and comfort they provide, our dependence on them, the way they unsettle, control and thwart us.

As I thought about this problem, which Brooker brings to a head at the cliff, I thought of John Durham Peters:

Communication, in the deeper sense of establishing ways to share one’s hours meaningfully with others, is sooner a matter of faith and risk than of technique and method. Why others do not use words as I do or do not feel or see the world as I do is a problem not just in adjusting the transmission and reception of messages, but in orchestrating collective being, in making space in the world for each other. Whatever ‘communication’ might mean, it is more fundamentally a political and ethical problem than a semantic one.

Machines can displace forms of human knowing and doing in the world, but they seem confined to reducing communication to a series of logical procedures, calculations and predictions. What’s left is the human capacity to make space in the world for each other, to exercise will, to desire, to spend time with one another. The relationships between humans and their simulations are complicated, and the logic of simulation cannot encompass or obliterate the human subjective process of representation. What makes the human, in part, is their capacity to use language to spend time and make worlds with each other.

Nicholas Carah and Deborah Thomas

 

Simulation

James Vlahos wrote in Wired magazine in July 2017 about his creation of a ‘dad bot’. Vlahos sat down and taped a series of conversations with his dying father about the story of his life. The transcript of these conversations is rich with the stories, thoughts, and expressions of his dad. He begins to ‘dream of creating a Dadbot – a chatbot that emulates… the very real man who is my father. I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf’. The transcripts are training data. Over months he builds the bot, using PullString, training and testing it to talk like his dad.
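Vlahos built his bot with PullString, a commercial tool whose workings aren’t detailed here. As a hedged illustration of the underlying idea – that a transcript becomes training data – here is a minimal retrieval-style sketch in Python: it treats each transcribed line as a candidate reply and returns the one whose wording best matches the prompt. The file name, the word-matching rule and the example lines are assumptions for illustration only.

```python
# A minimal, illustrative retrieval bot: not how PullString works, just a sketch
# of the idea that a transcript can become training data for a chatbot.
import math
import re
from collections import Counter

def bag_of_words(text):
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(prompt, transcript_lines):
    """Return the transcript line whose wording is closest to the prompt."""
    query = bag_of_words(prompt)
    return max(transcript_lines, key=lambda line: cosine(query, bag_of_words(line)))

# 'dad_transcript.txt' is a hypothetical file of transcribed lines, one per line:
# transcript = open("dad_transcript.txt").read().splitlines()
transcript = ["I grew up in Greece, in a small village.",
              "Your grandfather ran the taverna."]
print(reply("Tell me about the village in Greece", transcript))
```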

He takes the bot to show it to his mother and father, who is now very frail. His mum starts talking with the bot.

I watch the unfolding conversation with a mixture of nervousness and pride. After a few minutes, the discussion segues to my grandfather’s life in Greece. The Dadbot, knowing that it is talking to my mom and not to someone else, reminds her of a trip that she and my dad took to see my grandfather’s village. “Remember that big barbecue dinner they hosted for us at the taverna?” the Dadbot says.

After the conversation, he asks his parents a question.

“This is a leading question, but answer it honestly,” I say, fumbling for words. “Does it give you any comfort, or perhaps none—the idea that whenever it is that you shed this mortal coil, that there is something that can help tell your stories and knows your history?”
My dad looks off. When he answers, he sounds wearier than he did moments before. “I know all of this shit,” he says, dismissing the compendium of facts stored in the Dadbot with a little wave. But he does take comfort in knowing that the Dadbot will share them with others. “My family, particularly. And the grandkids, who won’t know any of this stuff.” He’s got seven of them, including my sons, Jonah and Zeke, all of whom call him Papou, the Greek term for grandfather. “So this is great,” my dad says. “I very much appreciate it.”

Later, after his father has passed away, Vlahos recalls an exchange with his 7-year-old son.

‘Now, several weeks after my dad has passed away, Zeke surprises me by asking, “Can we talk to the chatbot?” Confused, I wonder if Zeke wants to hurl elementary school insults at Siri, a favorite pastime of his when he can snatch my phone. “Uh, which chatbot?” I warily ask.
“Oh, Dad,” he says. “The Papou one, of course.” So I hand him the phone.’

The story is strange and beautiful. It provokes us to think about how we become entangled with media technologies, and the ways in which they are enmeshed in our human experience. In this story, not just a father – but a family and their history – is remembered and passed on not with oral stories, or photo albums, or letters but with an artificial intelligence that has been trained to perform someone after they die.

The Dadbot is an example of the dynamic relationship between surveillance and simulation. Surveillance is the purposeful observation, collection and analysis of information. Simulation is the process of using data to model, augment, profile, predict or clone. The two ideas are interrelated. Simulations require data and that data is produced via technologies of surveillance. The more data we collect about human life, the more our capacity grows to use that data to train machines that can simulate, augment and intervene in human life.

If the Dadbot is a real experiment, let me offer a speculative fictional one. In the episode Be Right Back of his speculative fiction series Black Mirror, Charlie Brooker asks us to think about a similar relationship between humans, technologies and death. Be Right Back features a young couple, Martha and Ash. After Ash’s death, his grieving partner Martha seeks out connection with him. At first the episode raises questions about how media is used to remember the dead: old photos, old letters, clothes, places you visited together, songs you listened to. A friend suggests Martha log into a service that enables text-based chat with people who have passed away, simulating their writing style from their emails and social media accounts. She does that. It escalates. She uploads voice samples that enable her to chat to him on the phone. She becomes entangled in these conversations. Sometimes the simulation spooks her, for instance when she catches it ‘googling’ answers to questions she knows Ash wouldn’t know. A new product becomes available: a robot modelled on photographs and videos of Ash from when he was alive. It arrives. She activates it in the bath. The robot is much better in bed than Ash ever was. Things get complex. Martha goes looking for the gap between the robot and the human.

Vlahos’ Dadbot and the robot in Be Right Back are both examples of the interplay between surveillance and simulation. Each of them illustrates how the capacity to ‘simulate’ the human depends in the first instance on purposefully collecting data. Data is required to train the simulation.

In his 1996 book, The Simulation of Surveillance: Hypercontrol in Telematic Societies, William Bogard (1996) carefully illustrates the history of this relationship between simulation and surveillance. He proposes that over the past 150 years our societies have undergone a ‘revolution in technological systems of control’. That is, our societies have developed increasingly complex machines for controlling information, and using information to organise human life. One of the key characteristics of the industrial societies that emerged in the 1800s was the creation of bureaucratic control of information. Bureaucracies were machines for gathering and storing information using libraries, depositories, archives, books, forms, and accounts. They processed that information in standardised ways through the use of handbooks, procedures, laws, policies, rules, standards and models. Since World War II these bureaucratic processes have become ‘vastly magnified’ via computerisation. Bureaucracies rely on surveillance. They collect information in order to monitor people, populations and processes. Think of the way a workplace, school or prison ‘watches over’ its employees, students, or prisoners in order to control and direct what they do.

Bogard argues that increased computerisation has resulted in surveillance becoming coupled with processes of simulation. Remember, surveillance is the purposeful observation, collection, and analysis of information, while simulation is the process of modelling processes in order to profile, predict or clone. Inspired by the French theorist of surveillance Michel Foucault, Bogard suggests that surveillance operates as a ‘fantasy of power’ which in the post-industrial world ‘extends to the creation of virtual forms of control within virtual societies’. What’s a ‘fantasy of power’ here? Well, firstly, it is a kind of social imaginary, a set of techniques through which individuals ‘internalise’ the process of surveillance. They learn to watch over themselves, they learn to perform the procedures of surveillance on themselves, in advance of technologies themselves performing those techniques. Let me give a very simple example. You might go to search something on Google, and then stop because you think, ‘Hmm, Google is watching me… I don’t want it to know I searched that.’ You discipline yourself, pre-empting the disciplinary power of the technology.

But, secondly, a fantasy of power gestures at something else important too. It suggests a society where we come to act as if we believe in the predictive capacity of surveillance machines. That is, in practice we trust the capacity of bureaucratic and technical machines to watch over and manage social life. We trust machines to reliably extend human capacities. By the 1990s, the socio-technical process of simulation had become an ordinary part of many social institutions. For instance, computerised ‘experts’ increasingly assist doctors in making complex medical diagnoses, algorithmic models help prisons determine which prisoners should be eligible for parole, statistical modelling projects the need for public infrastructure like roads and schools, and satellite surveillance informs foreign policy decisions.

The ‘fantasy’ driving government, military and corporate planning is that the capacity of digital machines to collect and process data can extend their capacity to exercise control beyond what humans alone might accomplish. Across government, corporate and military domains in the post-war period ‘simulations’ became standard exercises. Simulations are used by engineers to project design flaws and tolerances in proposed buildings. For instance, to test whether a building could withstand an earthquake before that building is even built. They are used by ecologists to model environments and ecosystems, by educators as pedagogical tools, by the military to train pilots and by meteorologists to predict the weather. In each of these examples data is fed into a machine which predicts the likelihood of events in the future. Corporations increasingly base investment decisions on market simulations and, more recently, nanoscientists have devised miniaturised machines that can be used in areas ranging from the creation of synthetic molecules for cancer therapy to the production of electronic circuits.

Bogard calls these ‘telematic societies’ driven by the fantasy that they can ‘solve the problem of perceptual control at a distance through technologies for cutting the time transmission of information to zero.’ That is, these societies operate as if all they need to do is create technologies that can watch and respond to populations in real time. In these societies powerful groups invest in creating a seamless link between ‘surveillance’ and ‘response’, between collecting data, processing it and acting on it.

Bogard’s original insight, then, is to identify – in the mid-1990s no less – that we are becoming societies where ‘technologies of supervision’ that collect information about human lives and environments are connected to ‘technologies of simulation’ that predict and respond in real time. That’s a wonderfully evocative insight to think about in the era of the FANGs. Facebook, Amazon, Netflix and Google are each corporations whose basic strategic logic and engineering culture are organised around the creation of new configurations of technologies of supervision and data collection, and technologies of simulation, prediction, response and augmentation.

Nicholas Carah and Deborah Thomas

Sensors

Let’s start with two beer bottles: the Heineken Ignite and the Strongbow StartCap.


What do these bottles have in common? They are both bottles of beer that double as media devices and sensors. Each of them was engineered by an advertising agency as part of promoting the brand of beer. We might say the advertisers expanded the affective capacities of the bottle. Bottles of beer have always affected consumers. You pop the cap, you drink the beer and it affects your body and mood. It makes you feel different: sometimes excited, a bit buzzy, other times mellow, sometimes morose. What these advertisers did, though, was engineer the bottle into an input/output, or I/O, device that can store and transmit information. The idea of an I/O device is a useful metaphor for thinking about ‘transfer points’ between digital media systems and our lives, bodies and societies. I/O refers to ‘input/output’: any program, operation or device that transfers data into a computing system. A transfer of data is an output from one device and an input into another; hence ‘I’ for input and ‘O’ for output. I/O devices convert sensory stimuli into a digital form. For example, the keyboard translates the physical movement of the fingers into a series of digital commands in a software program. The mouse translates the fine motor skills of the hand into digital data that moves a cursor on a screen.

The bottle becomes more than a container for beer. The Heineken Ignite bottle had LEDs, a microprocessor, an accelerometer and a wireless transmitter in its base. These components sensed and transmitted information. The accelerometer and wireless transmitter worked as sensors that could trigger the lights in the bottle to flash to the beat of the music and the movement of people in clubs. Heineken claimed the intention was to create a mobile media device that captured people’s attention without them having to engage with the screen of the smartphone. Sort of. I think the advertisers understood full well that if you give drunk people in a club a bottle that flashes, they are highly likely to capture images and videos of it on their smartphones for sharing on social media. The bottle, then, is a device that prompts people to convert the sociality of the club into media content and data on social media platforms.
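Here is a hedged sketch of the kind of sense-and-respond loop such a bottle might run. The hardware functions (read_accelerometer, set_leds) and the movement threshold are hypothetical stand-ins for whatever firmware the agency actually wrote; the point is simply that movement data crossing a threshold triggers an output.

```python
# Illustrative sketch of an accelerometer-driven light loop. The hardware
# functions passed in are hypothetical stand-ins, not Heineken's firmware.
import math
import time

MOVEMENT_THRESHOLD = 1.5  # assumed g-force level that counts as 'dancing'

def magnitude(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

def run_bottle(read_accelerometer, set_leds):
    """Sense movement of the bottle, respond by switching the LEDs."""
    while True:
        ax, ay, az = read_accelerometer()                        # sense: movement becomes data
        set_leds(on=magnitude(ax, ay, az) > MOVEMENT_THRESHOLD)  # respond: data becomes light
        time.sleep(0.05)

# Example with stub hardware (commented out because the loop runs forever):
# run_bottle(read_accelerometer=lambda: (0.2, 0.1, 2.4),
#            set_leds=lambda on: print("LEDs", "on" if on else "off"))
```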

Strongbow’s StartCap followed a similar logic. It was sold in specially-engineered bars. When you flipped the cap off the bottle, an RFID chip in the cap would trigger responses in the club: the cap popping off might trigger confetti to drop from the ceiling, a light show, or a song. These bottles are I/O devices that sit at the touchpoint between digital media infrastructure and human bodies. They sense action in the club, respond to that action, and in doing so stimulate responses from humans. Marketers are experimenting with beer bottles in part because they are objects that are held by the human body in social and cultural situations.
Here, we can see advertisers approaching branding as more than a process of symbolic persuasion. They are not really making an ‘advertisement’ that contains a message we consume; rather, they are engineering a cultural experience. They are using media as a data-processing infrastructure to sense, process and modulate humans, their feelings, bodies and cultural practices.

We should pay attention to advertisers in part because they are critical actors in experimenting with new media technologies. Via branding exercises like the Heineken Ignite and Strongbow StartCap we can see advertisers treating media as data-processing sensors. Jeremy Packer suggests that the capacity to exercise control using digital media is ‘founded upon the ability to capture, measure, and experiment with reality’. In the present moment, we need to pay attention to the increasing capacity of media to ‘sense’, calculate and experiment with our lived experience.

These two beer bottles are part of a larger process of weaving digital media and networks into our everyday infrastructure. This gets called the ‘internet of things’. Watches, televisions, cars, fridges, kettles, air conditioners and home stereos are just some of the everyday objects that are getting ‘connected to the internet’. My friend’s dog is even connected to the internet. Well, not the dog itself, but the dog’s collar. They can load an app and see where the dog is while they’re at work. This ‘thingification’ of the internet is promoted to us as living in sensor-rich smart homes and environments.

As you drive home, your car knows when you are getting close and turns on the air conditioning, and maybe flicks on the kettle. You can think about how the logic of turning everyday objects into sensory devices works. Once your car is a sensor, it can start collecting all kinds of information. Say there is a sensor in the steering wheel that can record information about how erratically you are driving, or say there’s a microphone in the car that can hear the tone of your voice. The car might be able to sense what kind of mood you are in as you drive home from work. In a bad mood? It might tell your home stereo to put on some chilled out music and dim the lighting by the time you arrive home. OK, I kind of made that up. But, it’s not ridiculous.

Platforms like Google and Amazon imagine us living alongside all sorts of artificially-intelligent things. You open the fridge and say ‘Ah, we’re out of milk!’ Your home assistant hears you say this, and puts it on your shopping list. If you get home deliveries it might automatically order it for you. If not, it might sense when you are at the shops and send a reminder to your phone. A basic point I’m trying to draw out here is that the engineering logic of media platforms does not begin and end with the smartphone and its apps. Platform engineers consider that all kinds of everyday objects will be ‘input/output’ devices that are incorporated within the platform architecture. These devices act as ‘switches’ or ‘transfer’ points between the bodily capacities of consumers and the calculative capacities of media platforms. These devices sense by recording information about the expressions and movements of humans and their environments, they translate by transforming reality into data that can be processed, and they stimulate by delivering impulses and messages to users. I think of these devices as ‘affect switches’ in the sense that they transfer the human capacity to affect into the calculative apparatus of media infrastructure. A device that can ‘sense’ your mood by recording your voice, or your movement, or what you’ve been tapping or swiping, for instance, is translating some information about your lived experience – how you feel – into digital data. And then processing that information and making a decision about how it might modulate your mood.
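You can read the affect switch as a three-step loop: sense, translate, stimulate. Below is a deliberately abstract sketch of that loop; every function in it is a hypothetical placeholder rather than any real device’s API, included only to make the sequence concrete.

```python
# An abstract sketch of the sense -> translate -> stimulate loop of an 'affect
# switch'. Every function here is a hypothetical placeholder, not a real device API.

def affect_switch(sense, translate, stimulate):
    """Run one pass of the loop: bodily expression in, modulation out."""
    raw_signal = sense()                 # e.g. a voice sample, a swipe pattern, movement
    mood_data = translate(raw_signal)    # lived experience becomes machine-readable data
    return stimulate(mood_data)          # the platform decides how to modulate the user

# Toy example: a 'tired' sounding voice triggers a calming response.
result = affect_switch(
    sense=lambda: "low, slow speech",
    translate=lambda signal: {"mood": "tired"} if "slow" in signal else {"mood": "alert"},
    stimulate=lambda data: "dim lights, mellow playlist" if data["mood"] == "tired" else "no change",
)
print(result)
```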

To affect is to have an influence on or make a difference to something; the term is often used in relation to feelings or emotions. A switch is a device that can coordinate or stimulate movement in a system: it can turn something on or off, or change its direction or focus. An ‘affect switch’, then, is a device that can alter the direction of human action or attention. Affect switches are techno-cultural devices for conducting and governing the dynamic and indeterminate interactions between consumers, material cultural spaces and media platforms. The beer bottles I started out with are affect switches. They sit at the touchpoint between bodies and media platforms. They sense information in the environment, and then stimulate particular moods and reactions from users.

OK, there’s another crucial point these beer bottles help us make. Popular culture can sometimes seduce us into thinking new media is about virtual simulations out there in cyberspace, that media is somehow ephemeral. That’s a ruse: digital media are material objects and infrastructure. They exist in the real world, and involve the transformation of real-world objects and spaces. The beer bottles are one example of everyday objects ‘becoming digital’. They retain their material character and place in our world; the change is that they are now connected to a digital media infrastructure. Mark Andrejevic and Mark Burdon suggest that this world where more and more objects become touchpoints between our lived experience and the data-processing power of digital media is a ‘sensor society’. They suggest our homes, workplaces, cars, shopping centres, and public places are filling up with 'probes [or sensors] that capture the rhythms of the daily lives of persons, things, environments, and their interactions'.

In their way of thinking a sensor is 'any device that automatically captures and records data that can then be transmitted, stored, and analysed'; sensors 'do not watch and listen so much as they detect and record'. This leads them to make a really critical point. When we see a device as a sensor in a sensor society, we must think not only of what it records but also of how that data is stored, who has access to it and how it is used. We are all ‘sensed’ by sensors; we all have data collected about our bodies, movements and expressions. But who gets to do the sensing? Who gets to keep, process and benefit from all this sensory information that is collected? We live in a world where more and more everyday objects are becoming sensors that collect data about us.

This prompts us to rethink the ways in which we participate in a digital world. Much of our participation is relatively passive. Passive data is the kind of data that is collected through sensors; it is data that we do not necessarily consciously know we are creating. Sure, we might immediately think of our smartphone here. Often it is collecting data that we don’t really think about. Go check the location services on your phone. Unless you switched them off, you’ll see the phone has a fairly complete record of where you go. It has probably identified your home and work.
Periodically there is controversy about apps using the microphone to passively monitor conversations. These are moments where we are not actively participating, by using the phone to, say, post something to social media; rather, it is passively sitting in the background monitoring us. This kind of passive monitoring goes way beyond the phone.
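To make the ‘it has probably identified your home and work’ point concrete, here is a hedged sketch of one very simple way such an inference could be made from passively collected pings: the place you are most often overnight gets labelled ‘home’, the place you are most often on weekday afternoons gets labelled ‘work’. The data format and the rule are my assumptions; real systems are far more elaborate.

```python
# Illustrative sketch: inferring 'home' and 'work' from timestamped location pings.
# The (hour, place) format and the simple majority rule are assumptions.
from collections import Counter

def infer_places(pings):
    """pings: list of (hour_of_day, place) tuples collected passively by a phone."""
    night = Counter(place for hour, place in pings if hour >= 22 or hour < 6)
    workday = Counter(place for hour, place in pings if 9 <= hour < 17)
    home = night.most_common(1)[0][0] if night else None
    work = workday.most_common(1)[0][0] if workday else None
    return {"home": home, "work": work}

pings = [(23, "Elm St"), (2, "Elm St"), (10, "Campus"), (14, "Campus"), (15, "Campus")]
print(infer_places(pings))  # {'home': 'Elm St', 'work': 'Campus'}
```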

Here are two examples: one kooky, one creepy.

Kooky first. In July 2017, it was reported that Roomba vacuum cleaners were collecting information about your home. The vacuum needs to collect data in order to learn how to vacuum your home – to figure out where the walls and furniture are. It creates a map of your home. But it doesn’t just use that map for its own cleaning. The map is also a dataset about what objects the vacuum ‘bumps into’ in your home. The data is stored by the parent company. They are considering selling it. The data could be used to make predictions about what kind of family you have or what kinds of objects you own. And, from there, advertising might be targeted accordingly.
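As a hedged illustration of how a bump map doubles as a dataset about your home, here is a toy occupancy-grid sketch: every bump the robot records becomes an obstacle cell in a grid that can later be stored, shared and queried. This is not iRobot’s mapping system, just the simplest possible version of the idea.

```python
# Toy occupancy grid: every 'bump' the robot records becomes a data point about
# the home. Not iRobot's actual mapping system; a minimal illustration only.

def build_map(bumps, width=10, height=10):
    """bumps: list of (x, y) grid cells where the robot hit an obstacle."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x, y in bumps:
        grid[y][x] = "#"   # an obstacle: a wall, a couch, a cot...
    return grid

bumps = [(2, 3), (3, 3), (4, 3), (7, 8)]
home_map = build_map(bumps)
print("\n".join("".join(row) for row in home_map))
# Once stored and shared, a grid like this is no longer just navigation data:
# the number and layout of obstacles can feed inferences about the household.
```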

OK, and here’s creepy. Earlier in 2017 the ‘smart’ vibrator manufacturer Standard Innovation settled a lawsuit for $3.75 million. These vibrators allowed users to remotely ‘turn on’ their lover using a Bluetooth connection. Two hackers demonstrated how the vibrator could be hacked and remotely activated. But, get this, the smartphone app that was used to control the vibrator collected information about users, including information about temperature and vibration intensity, without users’ consent. So here it is: an intimate personal object doubling as a sensor that transfers information about sexual practices back to unknown third parties.

For Andrejevic and Burdon the sensor society is not just a ‘world in which the interactive devices and applications that populate the digital information environment come to double as sensors’ but also the emerging practices ‘of data collection and use that complicate and reconfigure received categories of privacy, surveillance, and sense-making’. The users and collectors of the troves of data that sensors produce range from government spy organisations such as the NSA, to data analytics companies, advertising companies, insurance agencies and hedge fund managers, to the companies that collect the information in the first place, from social media platforms to appliance manufacturers like General Electric. The organisations that can access this sort of big data are not ordinary individuals. By its very nature this data is useful only to entities that want to measure and affect large numbers of people – those who want to act on a society-wide level.
Andrejevic and Burdon tell us that ‘structural asymmetries (are) built into the very notion of a sensor society insofar as the forms of actionable information it generates are shaped and controlled by those who have access to the sensing and analytical infrastructure.’

A sensor society, then, is one where everyday objects are connected to a digital media system. These objects collect data. The consequence of having more objects, in more everyday situations, collecting more data, is that we are becoming a society characterised by the collection and processing of information on an enormous scale. As we become a society that collects more data than any human can interpret, we begin to create machines that process that data and make decisions. Patterns of human life that are not visible to humans are visible to machines. A sensor-driven media system doesn’t care what we think, or about enabling us to understand one another, so much as it aims to develop the capacity to make us visible and to predict our actions. 'Machines do not attempt to understand content in the way a human reader might'. A human would be unable to keep up with the vast amount of data involved, but algorithms and artificial intelligence can. Sensors are a critical part of the larger media platform ecosystem. Sensors ‘connect’ that system to lived experience and living bodies; they enable calculative media platforms to learn about human life and, as a consequence, make more machine-driven interventions in it.

 

Participation in Experiments

Here’s a tweet I saw an hour ago. It’s a play on those memes that compare social media platforms. This one, for 2017, goes:

Facebook: Essential oils
Snapchat: I’m a bunny!
Instagram: I ate a hamburger
Twitter: [all caps] THIS COUNTRY IS BURNING TO THE GROUND

OK, it reminds us that platforms are different. But also, that platforms can affect our mood. And, in the era of Trump, the experience of Twitter for many people is frantic, panic-inducing, rancorous.

Imagine this. Imagine that the ‘mood’ of the platform – its feel-goodness in the case of Instagram, its agitation in the case of Twitter – is not just created by the users, but is deliberately engineered by the platform. And imagine they were doing that just to see what would happen to users.

Say you use Facebook every day. You open the app on your phone and scan up and down the News Feed, you like friends’ posts, you share news stories, you occasionally comment on someone’s post. Then, one day, all the posts in your News Feed are a little more negative. Maybe you don’t notice, maybe you do. But you’d be inclined to think people are a bit unhappy today. What if, though, the posts in your feed were negative one day because Facebook was running an experiment in which they randomly made some users’ feeds more negative to see what would happen?

That’s not a hypothetical story. Facebook actually did that, to 689,000 users. They changed the ‘mood’ of their News Feeds. Some people got happier feeds, some got sadder feeds. They wanted to see whether, if they ‘tweaked’ your feed to be sadder, you would get sadder. To this day they still have not told the users who were ‘selected’ for this experiment that this happened to them. If you use Facebook, you might have been one of them. You might care, you might not. The point is this: media platforms are engineering new kinds of interventions in public culture. These engineering projects include machine learning, artificial intelligence and virtual reality.

The development and training of an algorithmic media infrastructure depends on continuous experimentation with users. Public communication on social media routinely doubles as participation in experiments like A/B tests, which are part of the everyday experience of using platforms like Google and Facebook. These tests are invisible to users. An A/B test involves creating alternative versions of a web page, set of search results, or feed of content. Group A is diverted to the test version, Group B is kept on the current version, and their behaviours are compared. A/B testing enables the continuous evolution of platform interfaces and algorithms.
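Here is a hedged sketch of the basic mechanics: users are randomly split, each group sees a different variant, and a simple comparison of outcome rates (here, a two-proportion z-score) decides whether the change ‘worked’. The variant names and the numbers are invented for illustration; platform experiments are, of course, far larger and more automated.

```python
# Minimal A/B test sketch: random assignment, then a two-proportion z-score to
# compare outcome rates. Group sizes and click counts are invented for illustration.
import math
import random

def assign(user_id):
    """Randomly assign each user to the test ('A') or control ('B') version."""
    return random.choice(["A", "B"])

def two_proportion_z(success_a, n_a, success_b, n_b):
    """How many standard errors apart are the two groups' outcome rates?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Say variant A (a tweaked feed ranking) got 5,200 clicks from 100,000 users,
# while variant B (the current ranking) got 5,000 clicks from 100,000 users.
z = two_proportion_z(5_200, 100_000, 5_000, 100_000)
print(round(z, 2))  # a large |z| suggests the difference is unlikely to be chance
```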

Wired reported that in 2011 Google ‘ran more than 7000 A/B tests on its search algorithm’. The results of these tests informed the ongoing development of the algorithm’s decision making sequences.

Two widely publicised experiments – popularly known as the ‘mood’ and ‘voting’ experiments – by Facebook illustrate how these A/B tests are woven into public culture, contribute to the design of platforms, and raise substantial questions about the impact the data processing power of media has on public communication. Each experiment was reported in peer-reviewed scientific journals and generated extensive public debate.

Let’s recap them both.

Facebook engineers and researchers published the ‘voting’ experiment in Nature in 2012. The experiment was conducted during the 2010 US congressional election and involved 61 million Facebook users. The researchers explained that on the day of the election all US Facebook users who accessed the platform were randomly assigned to a ‘social message’, ‘informational message’ or ‘control’ group. The 60 million users assigned to the social message group were shown a button that read ‘I Voted’, together with a link to poll information, a counter of how many Facebook users had reported voting, and photos of friends who had voted. The informational message group were shown the same information, except for the photos of friends.

The control group were not shown any message relating to voting. 6.3 million Facebook users were then matched to public voting records, so that their activity on Facebook could be compared to their actual voting activity. The researchers found that users ‘who received the social message were 0.39% more likely to vote’ and on this basis estimated that the ‘I Voted’ button ‘increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters, for a total of 340,000 additional votes’.

The experiment, and Facebook’s reporting of it, reveals how the platform understands itself as infrastructure for engineering public social action: in this case, voting in an election. The legal scholar and critic Jonathan Zittrain described the experiment as ‘civic-engineering’. The ambivalence in this term is important. A more positive understanding of civic engineering might present it as engineering for the public good. A more negative interpretation might see it as the manipulative engineering of civic processes. Facebook certainly presented the experiment as a contribution to the democratic processes of civic society. They illustrated that their platform could mobilise participation in elections. The more profound lesson, however, is the power the experiment suggests digital media may be acquiring in shaping the electoral activity of citizens.

Data-driven voter mobilisation methods have been used by the Obama, Clinton and Trump campaigns in recent Presidential elections. These data-driven models draw on a combination of market research, social media and public records data. While the creation of data-driven voter mobilisation within campaigns might be part of the strategic contest of politics, the Facebook experiment generates more profound questions.

Jonathan Zittrain, like many critics, raised questions about Facebook’s capacity as an ostensibly politically neutral media institution to covertly influence elections. The experiment could be run again, except that instead of choosing participants at random, Facebook could choose to mobilise some participants based on their political affiliations and preferences. To draw a comparison with the journalism of the twentieth century, no media proprietor in the past could automatically prevent a specified part of the public from reading information they published about an election.

Facebook’s ‘mood’ experiment was reported in the Proceedings of the National Academy of Sciences in 2014. The mood experiment involved a manipulation of user News Feeds similar to that in the voting experiment. The purpose of this study was to test whether ‘emotional states’ could be transferred via the News Feed. The experiment involved 689,003 Facebook users. To this day, none of them know they were involved. The researchers explained that the ‘experiment manipulated the extent to which people were exposed to emotional expressions in their News Feed’. For one week one group of users was shown a News Feed with reduced positive emotional content from friends, while another group was shown reduced negative emotional content. The researchers reported that ‘when positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred’. In short, Facebook reported that it could, to an admittedly small degree, manipulate the emotions of users by tweaking the News Feed algorithm.
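To make the mechanics concrete, here is a hedged sketch of how an experimental condition like ‘reduced positive emotional content’ could be produced: score each post against a small list of emotion words and withhold a share of the positive ones. The word list and the omission rule here are invented for illustration; the published study used the established LIWC word-count dictionary and a more careful omission procedure.

```python
# Illustrative sketch of building a 'reduced positive content' feed condition.
# The word list and the 50% omission rule are invented for this example.
import random

POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}   # toy lexicon, assumed

def is_positive(post):
    return any(word in POSITIVE_WORDS for word in post.lower().split())

def reduced_positive_feed(posts, omit_share=0.5, seed=42):
    """Return a feed with roughly `omit_share` of positive posts withheld."""
    rng = random.Random(seed)
    return [p for p in posts if not (is_positive(p) and rng.random() < omit_share)]

feed = ["So happy today!", "Traffic was awful.", "Love this song", "Meeting ran long."]
print(reduced_positive_feed(feed))
```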

Much of the public debate about the mood experiment focussed on the ‘manipulation’ of the user experience, the question of informed consent to participate in A/B experiments, and the potential harm of manipulating the moods of vulnerable users. These concerns matter. But, as was rightly noted, focusing on this one experiment obscures the fact that the manipulation of the user News Feed is a daily occurrence; it is just that this experiment was publicly reported. More importantly, the voting and mood experiments illustrate how public communication doubles as the creation of vast troves of data and as participation in experiments with that data. When we express our views on Facebook we are not only persuading other humans, we are contributing to the compilation of databases and the training of algorithms that can be used to shape our future participation in public culture.

The responses of critics like Jonathan Zittrain, Kate Crawford and Joseph Turow to the data-driven experiments of platforms like Facebook highlight some of the new public interest concerns these experiments generate. Crawford argues that all users should be able to choose to ‘opt in’ and ‘opt out’ of A/B experiments, and to see the results of experiments they participated in. Zittrain proposes that platforms should be made ‘information fiduciaries’, in the way that other professions like doctors and lawyers are. Like Crawford, he envisions that this would require users to be notified of how data is used and for what purpose, and would proscribe certain uses of data. Turow proposes that all users have access to a dashboard where they can see how data is used to shape their experience, and choose to ‘remove’ or ‘correct’ any data in their profile.
All of these suggestions seem technically feasible, but would likely meet stiff resistance from the platforms. They are helpful because they articulate an emerging range of public interest communication concerns specifically related to our participation in the creation of data, and to the use of that data to shape our thoughts, feelings and actions.

These proposals need to be considered as collective actions, not just as tools that give individual users more choice.
The bigger point is that, as much as the algorithmic infrastructure of media platforms generates pressing questions about who speaks and who is heard, it also generates pressing questions about who gets to experiment with data. Public communication is now a valuable resource used to experiment with public life. Mark Andrejevic describes this as the ‘big data divide’. The power relations of public communication now also include who has access to the infrastructure to process public culture as data and intervene in it on the basis of those experiments.