Technocultural habitats

We live in techno-cultural habitats, tethered via smartphones to digital networks, databases and their algorithmic power. Our lives, bodies and expressions are becoming increasingly sensible to machines. Platforms like Google and Facebook are increasingly a kind of infrastructural underlay for private life and public culture. These platforms are historically distinctive as advertising-funded media institutions because, rather than produce original content, they produce platforms that engineer connectivity. If the ‘rivers of gold’ that once flowed through print and broadcast media organisations funded quality content for much of the twentieth century, they now flow through media platforms, where they fund research and development projects in machine learning, robotics, and augmented reality.

The critical thing to observe in this shift is media moving their apparatus of power from the work of representing the social world to the work of experimenting with lived experience. The aim of a media platform is not just to narrate human life, but to incorporate life within its technical processes. This is a unique event in media history: institutions that invest not in the production of content but in the sensory and calculative capacities of the medium. At the heart of this process is not so much the effort to ‘connect’ people, or to enable people to ‘express themselves’ – as the spin from techno-libertarian-capitalist platform owners would have us believe – but rather the effort to iron out the bottlenecks between lived experience and the calculative power of digital media. If we could distil the Silicon Valley project down to one wicked problem, it would be how to build a seamless interface between the neural activity of the brain and the digital processing of computers.

If we look at algorithms and machine learning, augmented reality and bio-technologies, they all point us in the direction of making the neural activity of the brain – what we experience as life, narratives, consciousness, moods, problem-solving, vision, aesthetic and moral judgments – a kind of non-human information.

What are the forces driving this project?

The ideology of computer engineers and Silicon Valley might suggest liberation: somehow liberating human consciousness from the confines of the living body, from the limits of biology itself, and perhaps even from the material structures that govern human experience on the planet – politics, economics, violence. But, this libertarian techno-human ideology obscures the basic political economy of Silicon Valley. These processes are driven by massive inflows of capital. And, that capital comes because governments and marketers see these technologies as instruments for exerting control over life itself. Of course, in some important ways we should see the media engineering taking place at Google, Facebook, Amazon and so on as the extension of hundreds of years of humans experimenting with the development of tools that capture, store, transmit and process data.

Especially from the 19th century onwards, with the development of technical media like telegraph, phonographs and cameras, we have been engaged in an industrial process of extending human expressions and senses in time and space. And, from the twentieth century media technologies have been at the heart of the exercise of power in our societies. First, they were industrial machines that shaped how mass populations understood the world they lived in. And, then, as the twentieth century went on, media became computational. From the mid-twentieth century engineers began to imagine media-computational machines that could control living processes through their capacity to capture, store and process data.

This is a profound cultural change. Media become technologies less organised around using narrative to construct a shared social situation, and more focussed on using data to experiment with reality. Within this media system participation is not only the expression of particular ideas, but more generally the making available of the living body to experiments, calibration and modulation. Media platforms do not simply enable political parties, news organisations and brands to make more sophisticated ideological appeals.

Platforms seem to take us into a media culture that functions beyond ideology – media do not just distribute symbols. They increasingly sense, affect and engineer our moods. They can sense and shape the neural activity in our brains. In time, they dream of becoming coextensive with the organic composition of our bodies. This system does not depend on persuading individual actors with meanings as much as it aims to observe and calibrate their action. It depends less on exerting control at the symbolic level, and more on governing the infrastructure that turns life into data.

With the advent of media platforms we find ourselves asking not just how media shape our symbolic worlds, but how they sense and affect our moods, bodies and experience of reality. To contend with this, we need to think about media as a techno-cultural system, one that does not only involve humans addressing other humans, but humans and data-processing machines addressing one another. As we ‘attach’ media devices to our bodies, in addition to whatever symbolic ideas we express, we also produce troves of data that train those machines, and we make ourselves available as living participants in their ongoing experiments.

A critical account of the engineering projects and data processing power of media platforms has, I suggest, three starting points.

Firstly, the politics of the user interface: How does everyday user engagement with a media platform generate data that trains the algorithms which increasingly broker who speaks and who is heard?

Secondly, the politics of the database: How do media platforms broker which institutions and groups get access to the database? If the first concern attends to the perennial public interest question of ‘who gets to speak’, then this concern attends to the new public interest question of who gets to experiment?

Thirdly, the politics of engineering hardware: How do we understand the relationship between media and public life in an historical moment where the capacity of media to intervene in reality goes beyond the symbolic?

In particular, what will be the public interest questions generated by artificial intelligence and augmented reality? These technologies will take the dominant logic of media beyond the symbolic to the simulated. Media devices will automatically process data that overlays our lived experience with sensory simulations. Media become not so much a representation of the world as an augmented lens on the world, customised to our preferences, mood, social status and location. The critical political issue, then, for those of us interested in how media act as infrastructure for human societies is how to account for the presence and actions of media technologies as non-human actors in public culture and human habitats.

 

Platform power

The great hope attached to networked forms of digital media was that they would make for better democratic participation, and with that bring more freedom. The past two decades seem historically significant because of the opening up of technologies that expand public expression. The utopian view of digital culture was really dominant in the first decade of this century. It was a kind of orthodoxy. Digital media were often taken in popular, political and academic debate to be somehow inherently democratic. In a basic way, it is true. Smartphones and social media do allow ordinary people to express themselves in new ways. Message boards, then blogs, then social media sites were seen as an infrastructure of democratic expression and participation.

This seemed to reach a kind of historical high-point in the late noughties. The Obama campaign of 2008 propagated the mythology that digital media were not just inherently democratic, but even disposed toward progressive political causes and ideologies. The professed corporate ideology of firms like Google and Facebook leant progressive, and conservative forces appeared ill-at-ease in the emerging culture of social media. Elsewhere around the world, political events like the Twitter revolution in Iran in 2009, and the series of political upheavals that followed in Tunisia, Egypt and elsewhere during the Arab Spring added further to the mythology of digital media as somehow a hard-coded democratic technology. In this popular mythology, digital media platforms seemed to somehow give expression to democratic voices.

Almost a decade on, this all seems kind of quaint. In the present moment, we find ourselves having to reckon with the profound changes digital platforms have brought to our public culture. While they represent the greatest democratisation of public speech in history, they simultaneously mark the greatest commercialisation of public culture in history, and the epochal reformatting of public culture as data that machines can process. Participation in digital networks opens us up to experiments with our cultures, minds, moods and bodies.
Jodi Dean describes this configuration as communicative capitalism: the offer of expanding opportunities to participate in public culture, the construction of these opportunities as empowering, and yet their capture within digital enclosures that profit from them. Here’s how Dean puts it.

On the one hand, networked communication technologies materialize the values heralded as central to democracy. Democratic ideals of access, inclusion, discussion, and participation are realised in and through expressions and intensifications of global telecommunication networks. On the other hand, the speed, simultaneity, and interconnectivity of electronic communications produce massive distortions and concentrations of wealth as communicative exchanges and their technological preconditions become commodified and capitalised.

Digital media, lo and behold, didn’t turn out to be the solution to age-old questions about how humans can organise their shared lives together.

Have you heard of Anonymous?

An online network that sporadically self-organises and coordinates forms of activism like leaking classified materials, hacking corporate and government websites, doxing people by exposing their personal information and publicly shaming sex offenders.

They accompany their actions with videos like this one.

Anonymous are a kind of mirror image of the hopeful, utopian and optimistic account of digital media that emerged, and dominated public life, from the mid-1990s through the first decade of this century. Anonymous are not formed around agreed upon values and tactics. They have no leader. They have no formal organisational structure. They morph and change. They engage in activities we might recognise as having a social justice spirit, while also engaging in deeply offensive and criminal actions. They seem in part to encompass the hopeful, progressive, populist, democratising spirit often attributed to digital culture, mixed together with its capacity to generate offensive pranks, chaos and mayhem. And so, via them, we can see a kind of alternative history of digital media and the ways in which it might empower and oppress, construct and corrode.

The cultural anthropologist Gabriella Coleman, author of Hacker, Hoaxer, Whistleblower, Spy, the most detailed portrait and critical examination of Anonymous, wrote this in a recent online debate about internet activism and trolls. ‘Anonymous was particularly potent… for two reasons: insisting on a politics of collectivism… and their direct action, aka, hacking and leaking levelled against sleazy firms and government.’ She goes on to explain that Anonymous supported movements such as Occupy and the Arab Spring. They were behind the hacking of governments, police departments and sleazy security and intelligence firms in the US, Canada, Italy and all over Latin America. And, they undertook ‘operations against police brutality, homelessness, rape culture, Nazis, and for the environment’.

Anonymous are difficult to describe. And, that’s part of the point I’m trying to develop here about how power works on digital media platforms. Anonymous first attracted widespread attention with an action against the Church of Scientology in 2008. The Church of Scientology tried to sue websites hosting a leaked video of Tom Cruise giving an hysterical and hilarious pep-talk to members of the church. Anonymous reacted to this suppression of free speech. Their actions ranged from pranks, like faxing all-black documents that used up the ink in fax machines in Scientology offices, to Distributed Denial of Service attacks on Scientology websites. In 2011, Anonymous became involved in the Arab Spring. They provided publicity, information and computer support for activists in North Africa as they rebelled against their governments.

Over the following years Anonymous doxed accused rapists, most famously in Steubenville, Ohio in 2012. They carried out various actions in support of Wikileaks, Julian Assange, Edward Snowden and Chelsea Manning. They shut down various MIT websites in 2013 in memory of Aaron Swartz after he died. They hacked various government websites in acts of protest, and they also hacked the websites of political and religious organisations such as ISIS and the Westboro Baptist Church. And, they have been known to seek out and attack child pornography websites. Gabriella Coleman says ‘Anonymous is like an antialgorithm: hard to predict and difficult to control’. The term ‘anti-algorithm’ gets at something important. In the age of media platforms, public life seems increasingly subject to forms of algorithmic control, where data processing engineers and shapes public life in the interests of the powerful. In opposition to this, Anonymous disrupt these systems of algorithmic control by hacking them and creating chaos.

Anonymous here offer a kind of counter-narrative of digital media. They illustrate that there was always something much more complicated happening than the simplistic idea that digital media were creating smoother, more efficient, more empowering forms of democratic public life. Anonymous seem to be symptomatic of a range of competing impulses within digital culture. Let me point to two critical ones.

Firstly, in Anonymous we see arguably the key contradiction of digital media for those interested in a functioning progressive democracy: a commitment to social justice, collectivism and direct action; but also the decomposition of structures that give public life order, build solidarity and make shared understandings of the world possible. On this point, I think we can compare Anonymous with Facebook. As in, Facebook claim to be a great facilitator of public culture, but it seems more and more apparent that during the 2016 US Presidential election they profited from selling thousands of dark social ads to a shadowy Russian firm with ties to the Kremlin. Facebook have admitted the ads ‘appeared to focus on amplifying divisive social and political messages across the ideological spectrum’, including race, immigration and gun rights. We might say the Russian operatives were, like Anonymous, doing it for the lulz. Being trolls. Sowing mayhem. The critical point, though, is this: Facebook profited from it.

And, secondly, Anonymous demonstrated that politics in the digital era would be as much about who controlled the infrastructure through which data are collected, stored, processed and distributed as it would be about who got to speak and who got heard. When Anonymous did things like dox powerful people and corporations, or build botnets that crashed corporate and government websites, they were taking aim at the infrastructure that corporations and governments were using to engineer and control public culture.

There are other activists in the first decade of this century who illustrate this point too. I think particularly of Aaron Swartz. Check out the great documentary about his life, The Internet’s Own Boy. Swartz was a programming wunderkind, involved from a young age in open-source and public software movements. He did a lot of things, but one of the important ones was using his ability to code to make political moves – sometimes constructive, sometimes disruptive.

Swartz was not nearly as radical as Anonymous. His views and activism fit much closer to well-established democratic traditions. But, Swartz and Anonymous share something important in common: they saw digital media platforms as data-processing infrastructure, as well as platforms for public speech. This had two consequences for their activism. First, they undertook direct actions that sought to use, exploit and disrupt the infrastructure itself. And second, they understood that power struggles in the digital era would be about who had access to, and control over, the infrastructure itself.

Powerful actors understand these points too. If we look back at the Obama campaign of 2008, it seems historically important not just because of the hopeful grassroots rhetoric, but because Obama – and particularly his strategist David Axelrod – understood that digital media could be used to undertake a highly orchestrated form of data-driven campaigning. This would enable the participatory energies and commitments of a large grassroots network to be controlled and directed in new ways. And, it would enable campaigns to target voters in precise and highly specified ways. Obama and Axelrod understood from the get-go that digital media were data-driven logistical infrastructure as much as they were networks of participatory expression. The thread I’m drawing out here is that the early narratives of frictionless, utopian democratic online culture, of course, haven’t been borne out.

And, yes, of course, that shouldn’t be a surprise. Since the late-1990s Jodi Dean has been describing digital media as symptomatic of ‘communicative capitalism’, a formation that ‘values the fast circulation of everything’. Talking to Doug Henwood on Behind the News in 2017, Dean explained that ‘on social media this makes people write in ways that are going to get hits, shares, likes, forwards, little hearts…’. Online life now is largely formatted by a small number of major commercial media platforms. These platforms are engineered to generate valuable forms of attention for advertisers. Powerful actors like major political parties, brands and governments see these platforms as logistical infrastructure for monitoring and steering populations. But this relationship is an increasingly complicated one.

Platforms like Facebook and Google need to manage relationships with the brands, political parties and governments that fund and regulate them. This leads us to what seems to be one of the instructive lessons of the 2016 Presidential campaign. One way to look at the criticism of Facebook for propagating fake news and abetting Russian disinformation is that powerful actors – like establishment media and political parties – have decided Facebook is disrupting the democratic processes that used to work in their interests. Facebook is creating forms of political communication that operate outside of established norms. In 2008 the popular mythology around Obama was that Facebook was a tool for grassroots mobilisation; in 2016 the rhetoric was that Facebook was a tool of misinformation and manipulation.

There are two critical issues. Firstly, platforms have engineered a kind of public communication where people can be immersed in an endless loop of information that confirms already held views and feelings. And, secondly, platforms are creating forms of public speech that are not open to appropriate scrutiny. In September 2017, The Washington Post reported that special counsel Robert Mueller – tasked with investigating Russian interference in the Presidential election – had obtained information from Facebook about Russian operatives buying ‘dark social’ ads. CNN reported that ‘Facebook informed Congress last week that it had identified 3,000 ads that ran between June 2015 and May 2017 that were linked to fake accounts. Those accounts, in turn, were linked to the pro-Kremlin troll farm known as the Internet Research Agency.’

Dark posts are promoted posts only visible to those who have been targeted. They are not open to public scrutiny. This is a problem in the case of political advertising because it falls outside of established norms, and often regulations, that require adequate disclosure of paid political messages by media organisations. Unlike broadcast media, platforms offer data-driven, targeted and covert access to individuals at massive scale. Digital media platforms are participatory, but that doesn’t mean they are inherently democratic. What we are participating in is making our lived experience available to data-processing, experiments and manipulation.

The capacity of platforms to shape public life has been driven, over the past decade, by the strategic effort to create evermore refined slices of audience attention and engagement for sale to advertisers – be they corporate brands, political campaigns, or foreign operatives.
The kinds of public speech and advertising they have created in the past decade are not just mostly unregulated. They are also conceptually different. The platforms have created a kind of speech that doubles as data that trains machines to experiment with and shape public life.
It seems that one consequence of the 2016 Presidential election will be a kind of reckoning with platforms as public infrastructure.

Here’s Jim Rutenberg writing in The New York Times.

This much should be clear: arguments that sites like Facebook are merely open ‘platforms’ – and not ‘media companies’ that make editorial judgments about activity in the digital worlds they created – fall woefully flat when it comes to meddling in our democracy. The platforms have become incredibly powerful in a short amount of time. With great power has come great profit, which they are only too happy to embrace; the great responsibility part, not always so much.

We might come to see them as socio-technical infrastructure. We build and use these platforms, and they shape our lives together. The questions that now press upon us are not ‘are digital media platforms democratic?’ but rather ‘how should we make these platforms democratic?’ and ‘how might we move democracy beyond the platform architecture?’

Digital media platforms seem most constructive when they make a community present to itself, to use Jodi Dean’s formulation. And, this seems most important for communities that are otherwise marginalised in public space, dispersed, or vulnerable. But, confined only to online forms of affirmation, these communities create soothing and commercially lucrative formations of attention that circulate wholly within the platform. The big question is whether that fosters or disperses meaningful forms of public life beyond the platform.

So, the challenge is two-fold. We must now take up the difficult work of reforming the platforms. They must be contained by our democratic and public culture, not the other way around. And, to do that, we must avoid mistaking the forms of self-soothing affirmation the platforms foster for the work of building public culture and shared lives together.

 

Brand atmospheres

Celia Lury describes brands as ‘programming devices’, technologies for organising markets. A brand is a device for coding lived experience and living bodies into market processes. There are a couple of important coordinates to lay out about how to think about brands. The first is that the relationship between brands and media platforms is a critical one for any understanding of our public culture. Facebook and Google now account for ~70% of all online advertising revenue, and ~90% of growth in online ad revenue. In these two media giants, advertisers finally have a form of media engineered entirely on their terms.

Much critical attention to advertising on social media goes in one of two directions: either focussing on the emergence of forms of influencer or native brand culture – that is, the weaving of branding into the performance and narration of everyday life – or focussing on the data-driven targeting of advertisements. What matters, though, is how these two elements have become interdependent.

Brands have always been cultural processes. The data-driven architecture of social media enables brands to operate in much more flexible and open-ended ways. In basic terms, if brands can comprehensively monitor all the meanings that consumers and influencers create, then they need to exercise less control over specific meanings. On social media platforms brands control an open-ended and creative engagement with consumers.

Brands that are built within branded spaces or communicative enclosures rely less on telling their audience or market what to think or believe, and more on facilitating social spaces where brands are constantly ‘under construction’ as part of the ‘modulation’ of a way of life. In the era of digital media, branding is productively understood as an engineering project. Brands engineer the interplay between the open-ended creativity of humans and the data processing power of media.

In 2014, Smirnoff created the ‘double black house’ in Sydney to launch a new range of vodka. The house operated as a platform through which the brand engineered the interplay between creatives and the marketing model of social media platforms. The house was an atmospheric enclosure. All black. Aesthetically rich. Full of domestic objects, made strange in the club. A clawfoot bathtub full of balls, a fridge to sit in, a kitchen, ironing boards and toasters. Creatives were invited. Bands and DJs played. Fashionistas, photographers, models, hipsters of all kinds.

It was a ‘hothouse’ for creating brand value. And, it was a device that captured this creative capacity to affect and be affected and transformed it into brand value by using the calculative media infrastructure of the smart city. As people partied in the house they posted images to Instagram, Snapchat, and so on. In an environment like the Smirnoff Double Black house we see a highly contained and concentrated version of the Snapchat story I began with. The enjoyment of nightlife doubles as promotion and reconnaissance on the platforms of social media. The house has all the components of promotion in the nightlife economy: stylised environments, cultural performance, amplified music, screens, photographers, intoxicating substances, the translation of experience into media content and data. Branding not just as immersion in symbolic atmosphere, but branding as the creation of techno-cultural infrastructure that embeds the living body and lived experience in processes of optimisation and calculation. The history of branding is not just one of symbolic ideological production, but rather one of the production of urban and cultural space. Branding has always been an atmospheric project – the creation of a techno-cultural surround that engineers experience – and in the age of digital media we can see the atmospheric techniques of branding come to the fore.

So, let me trace a little this idea of ‘atmosphere’.  In his Spheres trilogy Peter Sloterdijk details how atmospheres emerge as domains of intervention, modulation and control in the 20th century. Atmospheres are techno-cultural habitats that sustain life. And, particularly in the twentieth century, atmospheres engineer the interplay between living bodies and media technologies that organise consumer culture.

The Crystal Palace, a purpose-built cast iron and glass ‘hothouse’ for the 1851 World’s Fair, is a critical moment in histories of atmospherics as a technique of the consumer society. Susan Buck-Morss, in her work on Benjamin, argues The Crystal Palace is a kind of infrastructure that ‘prepares the masses for adapting to advertisements’. In this we can read Benjamin’s account of The Crystal Palace as not just a dream house that spectacularises the alienation of industrial labour, but perhaps more importantly an infrastructure for coordinating the interplay between human experience and the calculative logics of branding. Sloterdijk suggests that what we today call ‘psychedelic capitalism’ – I think he means experiential, affective, cultural capitalism – emerges in the ‘immaterialised and temperature controlled’ Crystal Palace.

Sloterdijk suggests The Crystal Palace was an ‘integral, experience-oriented, popular capitalism, in which nothing less was at stake than the complete absorption of the outer world into an inner space that was calculated through and through. The arcades constituted a canopied intermezzo between streets and squares; the Crystal Palace, in contrast, already conjured up the idea of a building that would be spacious enough in order, perhaps, never to have to leave it again’. As Sloterdijk makes clear, the Crystal Palace doesn’t so much anticipate malls or arcades but rather the ‘era of pop concerts in stadiums’. It is a template for media as technologies that would work as enclosures or laboratories for experimenting with reality. The Crystal Palace, to me, is the first modern brand. As in, the first techno-cultural infrastructure for producing and modulating human experience. Encoded in it was the basic principle of using media to engineer, experiment with and simulate reality.

Sloterdijk suggests that ‘what we call consumer and experience society today was invented in the greenhouse – in those glass-roofed arcades of the early nineteenth century in which a first generation of experience customers learned to inhale the intoxicating fragrance of a closed inner world of commodities.’ He proposes that we need a study of the 20th century, an air-conditioning project, that does what Benjamin’s arcades project did for the 19th.

I think the contours of one such study of 20th century atmospherics already exist in Preciado’s Pornotopia. Pornotopia is a critical history of Playboy as an architectural or atmospheric project. Preciado argues Playboy is historically remarkable for the techno-cultural, bio-multimedia habitat it produced. The magazine and its soft pornographic imagery are much less interesting than the Playboy Mansion, clubs, beds and notes on the design of the ideal domestic interior. Put Sloterdijk and Preciado together and you can begin to imagine the longer history of branding as an atmospheric project: a strategic effort to organise the spaces in which lived experience and market processes intersect. And, then, to see the mode of branding emerging on social media as a logical extension of this atmospheric history.

Here is Preciado on the Playboy Mansion, 'The swimming pool in the Playboy Mansion, represented photographically as a cave full of naked women, could be understood as a multimedia womb, an architectural incubator for male inhabitants that were germinated by the female-media body of the house’. The Playboy Mansion was a bio-multimedia factory where female bodies were strategically deployed and exploited to arouse male bodies. A relation Preciado describes as pharmacopornographic capitalism, ‘…an organised flow of bodies, labour, resources, information, drugs, and capital. The spatial virtue of the house was its capacity to distribute economic agents that participated in the production, exchange, and distribution of information and pleasure. The mansion was a post-Fordist factory where highly specialised workers (the Bunnies, photographers, cameramen, technical assistants, magazine writers, and so forth)…’ used media technologies to arouse and stimulate. Playboy had eroticised what McLuhan had described as a new form of modern proximity created by ‘our electric involvement in one another’s lives’.

The Playboy Mansion was a bio-multimedia factory in the sense that it generated ‘virtual pleasure produced through the connection of the body to a set of information techniques’. Just as Sloterdijk claims The Crystal Palace prefigured the experience economy, Preciado makes a similar claim about the Playboy Mansion. It is important to note that in the period in which Hefner was creating the Playboy Mansion, marketers were theorising similar strategies.

Marketing management guru Philip Kotler gives us a similar formulation for the strategic production of atmospheres. He writes – and the tone here is great, a commandment, as if he were actually a God of Marketing – ‘We shall use the term atmospherics to describe the conscious designing of space to create certain effects in buyers. More specifically, atmospherics is the effort to design buying environments to produce specific emotional effects in the buyer that enhance his purchase probability’. In this gendered formulation, Kotler unwittingly gives credence to Preciado’s notion of pharmacopornographic capitalism, where male bodies are strategically aroused. He signals marketing’s strategic move into designing spaces and technologies for managing affect. Atmospheres are ‘attention creating’, ‘message creating’ and ‘affect creating’ media.

They are technologies of control. Kotler explains that ‘just as the sound of a bell caused Pavlov’s dog to think of food, various components of the atmosphere may trigger sensations in the buyers that create or heighten an appetite for certain goods, services or experience’. So, across these cultural histories and marketing histories, we can see how branding has always been atmospheric – invested in the production of techno-cultural spaces that program experience. In Preciado’s Playboy Mansion media and information technologies are critical to the production and maintenance of the experience enclosure.

The Playboy Mansion is an historical template for the configuration of nightlife precincts, bars, clubs, music festivals, sporting stadiums, and so on. Here emerges a critical point I derive from both Sloterdijk and Preciado: the interesting techno-cultural air-conditioners of the twentieth century are not malls. The 20th century malls, like Benjamin’s 19th century arcades, are relics. Preciado alerts us to the fact that an Arcades project for the early 21st century needs to be a history of clubs, nightlife, and the other interiors of the experience economy – beds, hotel rooms, restaurants, pop concerts and music festivals: ‘Playboy modified the aim of the consumer activity from ‘buying’ into ‘living’ or even ‘feeling’, displacing the merchandise and making the consumer’s subjectivity the very aim of the economic exchange’. Preciado sees the Playboy Mansion and clubs as ‘media platforms where ‘experiences’ are being administered’.

I take this provocation seriously. Playboy is a critically important brand not because of its iconography, but because it creates an atmosphere that uses media as programmatic devices to arouse bodies and modulate experience. Value is produced from the continuous exchange of states of mind, feelings and affects.

The pre-history of the advertising model of platforms like Snapchat, Instagram and Facebook is to be found in the media-architecture of the Playboy Mansion and the clubs, music festivals and nightlife precincts like it. Preciado passes over Gruen, positioning Hugh Hefner instead as the key architect of postwar consumption. Hefner’s ‘Pornotopia… anticipated the post-electronic community-commercial environments to come’. The ‘social-entertainment-retail complex’ – be it malls, clubs, nightlife precincts or music festivals – is combined with smartphones and social media. Public life is converted into a new kind of private property: brand value and data.

Think of the techno-pleasure interiors Hefner imagined in the 1960s in relation to the predictions engineers like Paul Baran were making at the same time. Baran was, of course, the RAND Corporation engineer who conceptualised the distributed network. From their apparently very different viewpoints on consumer culture, neither imagined digital media as technologies of participatory expression. They were always logistical. Baran told the American Marketing Association in 1966 that a likely application of the distributed network he had conceptualised was that people would shop via television devices in their own homes, be immersed in images of products, and be subject to data-driven targeting. In 1966!

Set in this historical frame, two kinds of ‘common wisdom’ about digital media are defunct. The first, via Preciado: thinking digital media through Playboy’s Pornotopia ‘corrects the common wisdom of just a few years ago, to wit, social activity will now take place in real environments enhanced and administered through virtual ones, and not the other way around’. The second: social media are logistical before they are participatory.

Branding has always been the strategic effort to use media to organise the open-ended nature of lived experience. Over the past several decades brands have been the primary investors in the engineering of new media technologies. Media technologies are engineered with capital provided by brands and marketers. And yet, think about how much of the contemporary critical work on the promotional culture of social media focusses on its participatory dimensions, even claiming that this participation resists or circumvents brands. What I see in Snapchat, Instagram, Facebook and the modes of promotional culture emerging around them is the effort to engineer the relationship between the open-ended creativity of users and the data-driven calculations of marketers. We must then address, for the purposes of public debate and policy, the historical process of atmospheric enclosure that sustains this relation. Media platforms are not just data-processors and participatory architecture: they are the platform of public life. Marketers are not just producers of symbolic persuasion: they are engineers of lived experience.

 

 

Cyborgs

The figure of the cyborg serves as a tool for imagining and critiquing the integration of life into digital processors. To invoke the cyborg is to critically consider the dreams and nightmares of a world where the human body cannot be disentangled from the machines it has created.

The term cyborg was coined by the cybernetic researchers Manfred Clynes and Nathan Kline in 1960. The word combines ‘cybernetic’ with ‘organism’. And, in doing so, attempts to imagine the engineering of systems of feedback and control that would incorporate or be coextensive with the living body. Clynes and Kline were seeking solutions to the problems posed by the volume of information an astronaut must process as well as the environmental difficulties of space flight.

The cyborg is startling because it imagines the human body as entirely dependent on, or bound up with, the artificial life-support systems and atmospheres it creates. The space suit is one example, but so might be the smartphone – for many of us. I’m kind of joking, but I’m kind of not. Think of all the ways in which the smartphone is a space suit, an artificial life support system. We have created societies that are functionally dependent on digital media.

The concept of the cyborg is even more important though because of the way it was pulled out of the lab, and imagined by Donna Haraway as part of a socialist feminist critique of technocultural capitalism. Haraway is one of many to reckon with the question of what the creation of artificial intelligence and digital prostheses means for our bodies, and the possibility of their redundancy. Haraway’s 1985 Cyborg Manifesto has been described in Wired magazine as, ‘a mixture of passionate polemic, abstruse theory, and technological musing…it pulls off the not inconsiderable trick of turning the cyborg from an icon of Cold War power into a symbol of feminist liberation’. It made her a pivotal figure in the cyberfeminist movement. The essay sparkles with energy and originality, and more than thirty years later remains a critical one for anyone trying to think about the relations between our bodies, technology, capitalism and power.

The cyborg is both a ‘creature of social reality’ – that is, actual physical technology already in existence – and a ‘creature of fiction’, a metaphorical concept that demonstrates the ways in which high-tech culture challenges entrenched dualisms as determinants of identity and society in the late twentieth century. The cyborg is a way of addressing the present and reclaiming the future. Haraway is critical of popular ‘new age’ or feminist discourses that arose out of Californian 60s counterculture that essentialise ‘nature’ and gender. ‘I’d rather be a cyborg than a goddess’, she proclaimed, in an effort to reject the received feminist view that science and technology were patriarchal forms of domination that blighted some essential natural human experience.

As a socialist-feminist, Haraway pays particular attention to how a technocultural, science and information driven mode of capitalism reshapes human relationships, societies, and bodies. She proposes that feminists think beyond gender categories, rejecting in a sense the binary of ‘man’ and ‘woman’ as socially and historically constructed categories always bound up in relations of domination. For her, the cyborg is both a way of understanding how our bodies are becoming organism/machine hybrids, and a political category for articulating bodies outside of established modes of power that classified and controlled bodies using categories of gender, race, sexuality, and so on. Echoing cybernetic ways of thinking, she was interested in how feminism might break down Western dualisms and forms of exceptionalism by taking on the critical insight that all of us – humans, animals, the ecology of the planet itself, intelligent machines – are communication systems.

Haraway’s cyborg aimed to ‘break through’ or challenge some of the foundational patriarchal cultural myths of the West: ‘the cyborg skips the step of original unity, of identification with nature in the Western sense’. Unlike the hopes of Frankenstein’s monster, the cyborg does not expect its father to save it through a restoration of the garden; the cyborg must imagine, determine and program its own future. The main trouble with cyborgs, of course, is that they are the illegitimate offspring of militarism and patriarchal capitalism, not to mention state socialism. But illegitimate offspring are often exceedingly unfaithful to their origins. And, in this sense, the cyborg contains the possibility of transcendence – of breaking down established categories used to mark and dominate bodies. With the cyborg we could start again – creating a body, and human experience, outside of patriarchal, militaristic, capitalist domination.

For Haraway, cyborgs as a construct resist traditional dualist paradigms, capturing instead the ‘contradictory, partial and strategic’ identities of the postmodern age. Haraway’s cyborg explodes traditional ‘dualisms’ or binaries that characterise Western thought, such as human/machine, male/female, mind/body, nature/culture and so on. In this she signals ‘three crucial boundary breakdowns’ that lead to the cyborg.

First, by the late twentieth century, the boundary between human and animal is thoroughly breached. We can see this in animal rights activism, scientific research that demonstrates the many similarities in biology and intelligence between humans and other species, and the development of biomedical procedures that combine animals and humans – for instance, the human ear grown on a mouse. The cyborg, as hybrid, is able to identify with both humans and animals. Furthermore, Haraway argues for the critical politics of humans recognising their companionship with non-human species.

The second boundary breakdown is between living organism and machine. Haraway points out how earlier machines ‘were not self-moving, self-designing, autonomous’. Computer-assisted design, artificial intelligence and robotics had, by the late twentieth century, collapsed the distinction between natural and artificial, mind and body, self-developing and externally designed. The capabilities of technology begin to mimic our personalities and surpass our abilities so that, as Haraway comments, ‘our machines are disturbingly lively, and we ourselves frighteningly inert.’ Technological determinism does not necessarily guarantee the destruction of ‘man’ by the ‘machine’; rather, as cyborgs, our amalgamation with machines ensures our survival. Intelligent machines do not obliterate the human; they enhance, alter and transform it.

The third breakdown is between the physical and non-physical, material and immaterial, or real and virtual. This breakdown is evident in the ubiquity of microprocessors in contemporary life. The miniaturised nature of digital chips changes our understanding of what a machine is. Microprocessors do not create objects as such; they are ‘nothing but signals, electromagnetic waves, a section of a spectrum, and these machines are eminently portable, mobile.’ Haraway argues then that ‘a cyborg world is about the final imposition of a grid of control on the planet… From another perspective, a cyborg world might be about lived social and bodily realities in which people are not afraid of their joint kinship with animals and machines, not afraid of permanently partial identities and contradictory standpoints.’

A cyborg world is one where bodies are integrated into digital circuits in technical and cultural ways. In this process, it is no longer clear ‘who makes and who is made in the relation between human and machine’, no longer clear ‘what is mind and what body in machines that resolve into coding practices’. The distinction between machine and organism, between the technical and the organic, becomes impractical, and perhaps even undesirable, to maintain. Those of us who live in today’s integrated digital circuits of smartphones, smart homes and biotechnologies know nothing other than a life lived within technocultural atmospheres sustained in part by the weaving of life into digital processors. We cannot leave them behind; we are posthuman in the sense that we are now knitted together with our artificial life support systems. That’s what a posthuman technoculture is. If we are cyborgs – part biology, part machine – then our bodies are the site where the power of digital media to engineer life operates. The body is the touchpoint between life itself and the power of digital technologies to shape life. The body is the interface where power expands, and where it might be jammed or rerouted.

 

Technocultural bodies

Our bodies are becoming, in the words of sociologist Gavin Smith, ‘walking sensor platforms’. Our bodies increasingly host devices that translate life into data. This process is at the heart of technocultural capitalism. If we look carefully we can discern in many Silicon Valley investments the effort to engineer away the friction between living bodies and the capacity of platforms to translate life into data, calculate and intervene.

To understand media platforms as technocultural projects then, we need to trace all the ways in which our living bodies are entangled with them. We need to investigate the sensory touchpoints between biology and hardware, between living flesh and digital processing. The expansion of the sensory capacities of media and the affective capacities of the body depend on a range of ‘communicative prostheses’ that envelop, are attached to, or even implanted in, our living bodies.

We can see this in efforts to engineer bio-technologies like augmented reality, neural lace, digital prosthetics and cortically-coupled vision. These technologies aim to change how the body experiences reality, to expand the embodied capacity to act and pay attention, and to alter the biological composition of the body itself.

Just listen to how Silicon Valley technologists talk about the relations between our bodies, brains and their digital devices.

A technology like augmented or mixed reality, according to Kevin Kelly, ‘exploits peculiarities in our senses. It effectively hacks the human brain in dozens of ways to create what can be called a chain of persuasion’. The perception of reality, once confined to the fleshy body, becomes an experience partly constructed by the brain and partly by digital technology.

Magic Leap’s founder Rony Abovitz explains that mixed reality is ‘the most advanced technology in the world where humans are still an integral part of the hardware… (it is) a symbiont technology, part machine, part flesh.’ This part machine, part flesh vision has a long history in culture and technology. To think of the human is to pay attention to the process through which a living species entangles itself with non-human technologies, from early tools onwards. Since Mary Shelley’s Frankenstein, at least, our cultural imagination has thought about the possibility of technologies that might transform our living biology. Technologies are emerging that seem to be doing just this.

Research scientists have prototyped a robotic arm that can be controlled with thoughts alone. A person has an implant in their brain that detects neural activity, and then trains a computer to drive an arm and hand to undertake increasingly fine motor skills. Recently, Facebook have experimented with a similar technology that enables a human to type out words just by thinking them.

Over the past decade, researchers have been experimenting with cortically-coupled vision. The basic idea is that computers learn from the visual system in our brain, tracking how the brain efficiently processes huge amounts of visual data. This technology could be used to train computers to process vision like humans can, or it could be used to learn patterns of human attention. For instance, learning what kinds of things particular humans enjoy looking at. Imagine if, as you walk down a street, a biometric media technology gradually learns what kinds of things attract your attention, give you pleasure or irritate you.

Elon Musk is one of several technologists to invest in neural lace, an emergent – some say technically improbable – idea. The basic objective is to create a direct interface between computers and the human brain, which may involve implanting an ultra-fine digital mesh that grows into the organic structure of the brain, directly translating neural activity into digital data. In an experiment implanting neural lace in mice, researchers found that ‘The most amazing part about the mesh is that the mouse brain cells grew around it, forming connections with the wires, essentially welcoming a mechanical component into a biochemical system.’

Musk has said that, ‘Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.’ The brain-computer interface is mostly constrained by bandwidth: ‘the speed of the connection between your brain and the digital version of yourself, particularly output.’ Let’s pause there for a second, because the bandwidth observation alerts us to something important. Maybe we could say the biggest engineering challenge for companies like Google, Facebook, Amazon and so on is the bottleneck at the interface between the human brain and the digital processor. All our methods of translating human intelligence – in all its sublime creativity, open-endedness and efficiency – into digital information are currently hampered by the clunky devices that sit at the interface between body and computer: keyboards, mice, touchscreens, augmented reality lenses. This is the truly wicked problem; perhaps whoever solves it will become the next major media platform. Just as Facebook, Amazon and Google have disrupted mass media, the next disruption will centre around whoever can code the human body and consciousness into the computer.

In each of these cases we can see a technocultural process through which media platforms, technologists and researchers invest in engineering the interface between the living body and non-human digital processors. This process is transforming what it means to be human.
It becomes increasingly difficult, or even pointless, to attempt to understand the human as somehow distinct from the technocultural atmospheres we create to sustain our existence. Living bodies are becoming coextensive with digital media.

Media platforms become like bodies, bodies become like media. In one direction we have the expansion of the sensory capacities of media. That is, media become more able to do things that once only bodies could do. Media technologies can sense and respond to bodies in a range of ways: know our heart rate, predict our mood, track our movement, identify us via biometric impressions like voice or fingerprint. And, in the other direction, we have bodies becoming coextensive with media technologies. Machines are becoming prostheses of the body, and in the process changing what a body is and does. Digital technologies alter how we perform, imagine, experience, and manage our bodies.

In the technocultures we call home, our bodies are cyborgs: composed of organic biological matter and machines. Our glasses, hearing aids, prosthetics, watches, and smartphones are all machines we attach to our bodies to enable them to function in the complex technocultures we inhabit. Many of these devices are now sensors attached to digital media platforms. Our smartphone is loaded with sensors that enable platforms to ‘know’ our bodies: voice processors, ambient light meters, accelerometers, gyroscopes, GPS. All of these sensors in various ways collect data about our bodies – their expressions, their movements in time and space, their mood and physical health.

Beyond the smartphone, many of us attach smart watches and digital self-trackers to our bodies. We use these devices to know, reflect upon, judge and manage our embodied experience. Following the steady stream of prototypes from Silicon Valley we can see a future where devices might be integrated or implanted into the body. Sony recently patented a smart contact lens that records and plays back video. The lens would see and record what you see, and then, using biometric data, select and play back to you scenes from your everyday life. The lens could augment your view of the material world around you, or even take over your vision to immerse you in a virtual reality. With a lens like this, vision can no longer be seen as a strictly biological and embodied process; it becomes an experience co-constructed by intelligent machines.

 

Engineering augmented reality

Following the debate about Confederate statues and monuments in the US during August 2017, the radical Catholic priest Fr Bob Maguire tweeted, ‘Could we not have Virtual statues which the algorithm could change as directed by public opinion?’

I like this Tweet a lot. Fr Bob makes an incisive observation about the logic and politics of augmented reality – at least as it is imagined by the major media platforms. Platforms like Facebook and Google are investing in virtual, augmented and mixed reality technologies. And, as with most of their engineering projects, encoded into these technologies is a disruptive vision for public life.

Fr Bob cheekily skewers this Silicon Valley logic in a bunch of ways.

He’s aping the Silicon Valley liberal-individualist solution to everything. Forget the difficult debate about history and identity that surrounds these monuments, just measure public opinion and produce a representation of reality that matches that opinion. Forget being caught in history, just have a culture that continuously and automatically remodels itself on whatever the current tastes and preferences of the crowd are.

But, there’s another way to read Fr Bob’s quip. I think that, in the vision of augmented reality being imagined by Google and Facebook, the ideal scenario would be that we all individually wear our augmented reality lenses and see the reality we want to see.

As long as we all have our Facebook goggles or Google lenses in, when we go into the park and look at a big statue we will see our own personal hero. White Nationalists will see Robert E. Lee; progressives will look at the same spot and see someone else – Oprah, Obama, Martin Luther King, Tina Fey eating cake.

The point is this: augmented reality – as envisioned by Facebook and Google – is the engineering effort to take the forms of algorithmic culture currently confined to the feeds of our smartphones and transpose them into the real world. At the moment, when we scroll Facebook we see the news that matches our political viewpoints: if we’re alt-right, we’re immersed in ‘fake news’ conspiracies about violent leftists; if we’re progressive, we’re immersed in outrage about Nazis and the KKK. Augmented reality would weave those simulations into the real world.

So, our public space begins to reflect back to us our political identities.

Is that what we want?

Here we encounter a dilemma. On the one hand if we all saw the statue we wanted to see, would that mean everyone would be happy? Or, would it simply mask the real divisions which the debate over the monuments stands in for? Or, does the presence – or absence – of statues and monuments we disagree with in public space function as an important and constitutive aspect of public life? That a foundational characteristic of public life is to encounter and contend with ideas and people we disagree with, that are other or alien to us?

This is my provocation: we need to see the present effort to engineer virtual, augmented and mixed reality by Facebook, Google and Snapchat as an extension of the simulation-based, predictive and algorithmic culture they have been constructing over the past decade.

We can roughly sketch the history of virtual and augmented reality as having three periods.

From the 1960s to the 1980s the US military invested in the development of virtual environments and simulators that could train pilots.

From the 1980s through to the mid-1990s, dreams of virtual reality moved beyond the military: Silicon Valley tech-utopian developers, counter-cultural activists and artists began to imagine virtual realities unhooked from the impediments of the material world and its flesh and steel.
From the mid-90s virtual reality technologies, and the dreams about them, went into a kind of hibernation.

This hibernation came about because the dreams of a utopian and independent virtual world or cyberspace couldn’t be technically or politically realised. In a technical sense, low-res displays, latency, motion sickness, large and heavy hardware, lack of wireless connections, no mobile internet, and a lack of interplay with social life and urban space all stalled virtual reality start-ups. Then, over the past five years firms like Oculus (acquired by Facebook) and Magic Leap (backed by major investment from Google) have been ushering in a new era of virtual reality hype. In the present moment there are three kinds of projects: virtual reality, augmented reality and mixed reality.

Virtual reality is characterised by opaque goggles. Once you are wearing them, you are in an immersive virtual world. Think of virtual reality gaming. Augmented and mixed reality are characterised by translucent screens or glasses. As you wear them digital simulations are overlaid on your view of the real world. Augmented reality is most evident in our everyday use of Snapchat lenses or filters. Via the screen we see our face overlaid with digital simulations: whiskers, a tiara, a rainbow tongue. Mixed reality is the prototyped ambition of the Google-backed Magic Leap. The limitation of augmented reality is that digital simulations are simply overlaid on the vision of the real world; the simulations cannot be made to appear as though they are interacting with the world.

Magic Leap are working toward building a mixed reality technology where simulations will appear to be able to interact with the world. For example, you’ll hold out your hand and a simulation of an elephant will walk around your palm. It will appear to know where your hand begins and ends. The comparison between Magic Leap and Snapchat is a useful one. Magic Leap promote a vision of mixed reality that seems to be just out of reach. Incredible. But, in the future. Snapchat, while not as technologically sophisticated, is perhaps more culturally significant. With Snapchat, augmented reality is becoming a part of everyday communication rituals. And, Snapchat are figuring out how to monetise augmented reality by selling it to brands. The major investments by Facebook, Google and Snapchat in these technologies indicate how serious they are about transforming their core platform architecture, pushing it beyond the smartphone and its flows of images on an opaque screen.

Media platforms like Google and Facebook are multi-dimensional engineering projects. Facebook’s Chief Operating Officer Sheryl Sandberg explained at Recode in 2016 that while the current business plan focusses on monetisation and optimisation of the existing platform, the ten-year strategic plan is focussed on ‘core technology investments’ that will transform the platform infrastructure. The developments keep coming. In August 2017, Facebook lodged a patent in the US for augmented reality glasses that could be used in a virtual reality, augmented reality or mixed reality system. Via translucent glasses or lenses, we can begin to see how Facebook could transition to an augmented reality platform.

Here’s the critical point. These media platforms and partnering brands are not investing in the creation of more sophisticated mechanisms of symbolic persuasion. They are investing in the design of devices and infrastructure that can track and respond to users and their bodies in expanding logistical and sensory registers. Virtual reality projects are one instance of this, the effort to create a form of media that works not by creating symbols but by engineering experience. These companies are attempting to, as Jeremy Packer puts it, ‘code the human into the apparatus’.

Facebook, Google, Apple, Amazon, Microsoft, Sony and Samsung all have major investments in artificial reality. Facebook has 400 AR engineers. Silicon Valley has about 230 hardware and software engineering companies working on VR. Mark Zuckerberg echoes Silicon Valley consensus when he says it is ‘pretty clear’ that soon we will have glasses or contact lenses that augment our view of reality. Media platforms will augment human vision with digital simulations. Imagine looking at a room full of people and seeing their names above their heads, or a reading of their mood or level of interest in what you are saying. If you’re in class, your lecturer or tutor might be able to see the grade of your latest assessment floating above your head, or a colour coding that indicates your level of engagement in the course based on your attendance at class, logins to the learning platform, and grades. The data is available to do this: your university knows your attendance, grades and engagement with software, Google and Facebook can recognise your face.

Augmented reality heralds a shift from media that engineer flows of information to media that engineer experience. The value of mixed or virtual reality firms like Oculus and Magic Leap is attributable in part to their claimed capacity to ‘hack’ or ‘simulate’ the human visual cortex directly. The ‘vomit problem’ or ‘motion sickness’ caused by VR devices is a container term for a number of points of ‘friction’ between the living body and the media device. The latency of the image on the screen inches from your eyes causes a conflict between your visual and vestibular systems, and you vomit. This problem has also been called ‘simulator sickness’, a term that had a particular currency in the 1980s and 1990s with military training simulators. Military researchers found that motion sickness from VR subsides in experienced users. This is an indication of the capacity of the living body to learn to ‘hack around’ the visual-vestibular conflict, to accommodate itself – in neurological ways – to the media device it is entangled with.

The VR hype-industry is characterised by plenty of claims to hack the body, or if not hacking then working around, reorienting, calibrating, or tricking it. Kevin Kelly explains that artificial reality ‘hacks the human brain’ to create a ‘chain of persuasion’. The term ‘chain of persuasion’ – common in VR development – strikes me as an augmented kind of ideological control. Not persuading the subject only via a symbolic account of reality they interpret, but engineering an experience where the body feels present in a particular reality as a precursor to them finding representations persuasive. AR’s account is persuasive not because the human subject ‘makes sense’ of it, but because it affects both the body’s biological system and the subject’s cultural repertoire in a way that feels real.

Magic Leap’s founder Rony Abovitz puts it this way:

VR is the most advanced technology in the world where humans are still an integral part of the hardware. To function properly, VR and MR must use biological circuits as well as silicon chips. The sense of presence you feel in these headsets is created not by the screen but by your neurology… artificial reality is a symbiont technology, part machine, part flesh.

The political economy of these media engineering projects is something like this: where the profits of broadcast media – their fabled ‘rivers of gold’ – were invested in quality content, the profits of media platforms like Google and Facebook are invested in engineering projects.
The vomit problem then is a metaphor for the creative experimentation happening at the ‘touchpoint’ between living bodies and media infrastructure.

We might ask then: how will the ‘experience’ and ‘presence’ of mixed reality be monetised? Google dramatised some of these applications when they were experimenting with Glass. As we look down a city street, icons will appear above the buildings the media platform predicts we might be attracted to – because they sell our favourite beer or coffee, have good reviews, have a product it knows we are looking for, or because a friend is in there.

Or, perhaps stranger, a platform like Tinder, knowing our preferences for particular kinds of bodies, might be able to sort and rank clubs in a nightlife precinct relative to our cultural tastes and sexual desires. You walk down a street with an AR device on, and it registers affective and physiological responses to people who walk by you. It scans those people – their bodies, faces, clothes – and associates them with a register of cultural and consumer tastes. And then it uses that to incrementally direct your paths through a city, a media platform, a market.

The critique of the political economy of social media has focused mostly on the capacity of platforms to conduct surveillance and target advertisements. But, as Jeremy Packer puts it, advertisers now ‘experiment with reality’: they engineer systems that configure cultural life by collecting, storing and processing data rather than by crafting ideological narratives.

As the smartphone and its modes of judgment, curation and coding give way to a mixed reality headset, the productive labour of the user will take on new dimensions. The embodied work of tuning the interface between body and lens. The combined neurological and cultural activity of adjusting how we experience reality: from a clear distinction between reality and digital image, to being immersed in a mixed simulation. From persuasion only at the symbolic level, to persuasion also at the affective and biological level.

But also, the work of tuning the predictive simulations of mixed reality via sensory and behavioural feedback. When I look down a street and it makes judgments about where I might want to go, my behavioural, physiological or affective responses to those predictions will inform future classifications and predictions as much as any symbolic content I generate. Here my bodily reactions feed not just the optimisation of a flow of symbols, but the tuning of a calculative device and platform into my lived experience. And in doing so, enable media to engineer logics of control beyond the symbolic: to the affective and logistical.

For all the work audiences did watching television in the twentieth century, that work didn’t change the medium or infrastructure of television itself all that much. But, I think we are moving into an era where the human user is an active contributor to the engineering of media infrastructure itself. And, a critical account of audience exploitation and alienation needs to engage with that.

The ‘vomit problem’ is a useful way of thinking about not just the work of engineers, but also of users who harmonise their lives and bodies with the calculations of media. The engineer works to solve the vomit problem via the ongoing, strategic design of software and hardware. As Packer puts it, media engineering involves strategically addressing problems to optimise the human-technical relationship. The user works to solve the vomit problem too: adapting their bodily physiology, appearance and performances as they move about the world; and, providing embodied feedback via their physiological and affective responses. Here the ordinary user undertakes the productive work of rolling media infrastructure into the material world, onto the living body and through lived experience.

 

Make your own reality

I hesitate to do this because there’s a lot of people on Twitter these days tweeting very grim predictions. But, here goes. This is a thread posted by Justin Hendrix, the director of the NYC Media Lab in June 2017. He plays out a scenario here that makes clear the political stakes of the difference between representation and simulation.

Trust in the media is extremely low right now, but I think it may have a lot further to go, driven by new technologies. In the next few years technologies for simulating images, video and audio will emerge that will flood the internet with artificial content. Quite a lot of it will be very difficult to discern- photorealistic simulations. Combined with microtargeting, it will be very powerful. After a few high profile hoaxes, the public will get the message- none of what we see can be trusted. It might all be doctored in some way. Researchers will race to produce technologies for verification, but will not be able to keep up with the flood of machine generated content. The platforms will attempt to solve the problem, but it will be vexing. Some governments will look for ways to arbitrate what is real. The only way out of this now is to spend as much time trying to understand the externalities of these technologies as we do creating them. But this will not happen, because there is no market mechanism for it. Practically no one has anything to gain from solving these problems.

OK, so Hendrix is one of many who understand Trump, rightly I think, as a symptom of a media culture characterised by what Mark Andrejevic calls ‘infoglut’. The constant flood of views, opinions, theories and images amounts to a kind of disinformation. It becomes harder for us to mediate a shared reality that corresponds with lived experience, that coheres with history or that is jointly understood. In a situation of infoglut actors will emerge, like Putin and Trump, who will thrive on information/disinformation overload.

What Hendrix’s grim warning illuminates is what is lost when representation gives way to simulation. A media culture organised around the logic of representation is one in which words and images denote or depict objects, people and events that actually exist in the material world. A media culture organised around the logic of simulation is one in which words and images can be experienced as real, even where there is no corresponding thing the sign refers to in the ‘real world’ or outside the simulation itself.

This is what Hendrix forewarns: the creation of artificially intelligent bots that can produce statements, images and videos that a human experiences as real. His point is this: as non-human actors like artificially intelligent bots begin to participate in our public discourse, they have a corrosive effect on the process through which we create a shared understanding of reality.

If we can’t be sure that something we see or read in ‘the media’ is even said by a human, we begin to lose trust in the very idea of using media to understand the world at all. We become reflexively cynical and sceptical about the very character of representation. If we begin to live in a world where we cannot even tell if a human is speaking, then what we lose is the capacity to make human judgments about the quality of representation.

Representation itself begins to break down.

A month after Hendrix’s prediction, computer scientists at the University of Washington reported that they had produced an AI that could create an extremely ‘believable’ video of Barack Obama appearing to say things he had said in another context. OK, so they are not yet at the point of creating a video where Obama says things he has not said, but they’re getting close.

This is how they described their study.

Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.
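To get a feel for the kind of architecture being described, here is a minimal, hypothetical sketch of a recurrent network that maps a sequence of audio features to a sequence of mouth-shape parameters. The layer sizes, feature dimensions and fake data are invented for illustration; this is not the Washington team’s actual model or code.

```python
# A hypothetical sketch of the described mapping: a recurrent network takes a
# sequence of per-frame audio features and outputs a mouth shape per frame.
# All sizes and data here are invented; assumes PyTorch is installed.

import torch
import torch.nn as nn

AUDIO_FEATURE_DIM = 28   # assumed size of the per-frame audio feature vector
MOUTH_SHAPE_DIM = 18     # assumed number of parameters describing the mouth

class AudioToMouth(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(AUDIO_FEATURE_DIM, 64, batch_first=True)
        self.out = nn.Linear(64, MOUTH_SHAPE_DIM)

    def forward(self, audio_frames):
        hidden, _ = self.rnn(audio_frames)   # one hidden state per audio frame
        return self.out(hidden)              # one mouth shape per audio frame

model = AudioToMouth()
fake_audio = torch.randn(1, 100, AUDIO_FEATURE_DIM)   # 100 frames of fake features
mouth_shapes = model(fake_audio)
print(mouth_shapes.shape)                              # torch.Size([1, 100, 18])
```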

So, Hendrix is right it seems. We are entering an era where a neural network could produce video of someone saying something they never said. And we, as humans, would be unable to tell. If this kind of artificially constructed speech becomes widespread then the consequence is a dramatic unravelling of the socially-constructed institution of representation. In short, a falling apart of the process through which we as humans go about creating shared understanding of the world.

If the ‘industrial’ media culture of the twentieth century exercised power via its mass reproduction of imagery, then the ‘digital’ media culture of today is learning to exercise power via simulation. To make a rough distinction, we might say if television was culturally powerful in part because of its capacity to reproduce and circulate images through vast populations, then the power of digital media is different in part because of its capacity to use data-driven non-human machines to create, customise and modulate images and information.
Here’s the thing. Simulation is both a cultural and a technical condition.

Cultural in the sense of accepted practices of talking about the world – like journalism – that establish a commonly held understanding of reality; simulation is also a cultural practice in that people attempt to make reality conform with their predictions. Technical in the sense of the creation of tools and institutions that produce and disseminate these depictions of reality – like cameras, news organisations and television transmitters.

Digital media technologies, and particularly their capacity to process data, dramatically escalate the capacity to simulate.

It is one thing for, say, Trump to propagate the lie that Obama was not born in the United States. This, as just one of many of the false statements Trump makes, illustrates part of the character of simulation. Trump says it, over and over; others repeat it; public opinion polling begins to show a majority of his voters believe it. It becomes real to them. But, imagine how this could be escalated if Trump or one of his supporters could create a video where Obama appeared to admit that he was not born in the United States.

The capacity of digital media to simulate – to create images that appear real even when they have no basis in reality – dramatically intensifies a culture of simulation. And, a culture of simulation is one where the images we invent begin to change the real world.

This follows Jean Baudrillard’s logic when he explains that ‘someone who feigns an illness can simply go to bed and pretend he is ill. Someone who simulates an illness produces in himself some of the symptoms’.

The simulation begins to affect reality.

Baudrillard builds in part on Guy Debord’s notion of how, ‘the saturation of social space with mass media has generated a society defined by spectacular rather than real relations’.
According to Baudrillard, in a world characterised by immersion in media, simulation supersedes representation. The signs – image and words – we consume via media are no longer directly related with a ‘reality’ outside of the system of signs.

To return to Hendrix’s example from the outset, we might say that the ‘fake news’ that has been the subject of public debate since the 2016 Presidential election follows this logic of simulation. News that follows the logic of representation is ‘testable’ against reality. Representative news presents images of people saying things they actually said and accounts of events that actually happened. Simulated news, though, presents a series of claims, images and stories that refer to one another, but cannot be tested against reality.
Simulations though feel real, or are perceived as real, when they immerse us in this self-referential system of signs.

We might say that the ‘fake news’ that went viral during the 2016 Presidential election followed this logic. For some people their Facebook News Feeds began to fill up with repeated stories about the corruption of Hillary Clinton and the Democratic party, vast interwoven conspiracies involving murders, criminal activities, and human trafficking. The more some users clicked on, liked and shared these stories, the more of them they saw. None of these stories stood up to any comparison with reality, yet their constant repetition within News Feeds made them feel real to many Facebook users. These fake news simulations produced symptoms in the bodies and minds of those consuming them. They began to act as if they were real.

So, to reiterate. We might describe this kind of algorithmically-fuelled ‘fake news’ as following the logic of simulation. ‘Fake news’ is the circulation of stories that can be experienced as if they are real, even when there is no corresponding thing the sign refers to in the ‘real world’, or outside of the simulation itself.

Following Baudrillard’s way of thinking, we might say this creates a situation of hyperreality, where the basic relationship between signs and reality implodes. Let’s return then to Hendrix’s prediction we considered at the outset: a situation where non-human, artificially intelligent devices produce their own depictions of real people and events, and we as humans cannot tell if these things were really said or done by fleshy humans. That seems to me to be hyperreal in the sense that Baudrillard means.

That is, hyperreality is the situation where simulations are experienced as real and therefore produce how we experience ‘reality’. Imagine you are watching a video of the President of the United States speaking that looks absolutely real, even though he never said those things. That’s a situation where the relationship between signs and reality has imploded. You can no longer trust that the signs represent reality. The video is a simulation in the sense that the words coming from the President’s mouth do not actually refer to real words the real person named President Obama said. And yet, you cannot really parse the difference. Simulation is no longer ‘referential’, but instead the production of a model or ‘world’ without an underlying reality. As Baudrillard describes it, ‘it is no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real itself.’

To illustrate this logic, listen to the writer Ron Suskind recount a conversation he had with Karl Rove, one of US President George W. Bush’s key political strategists.

Suskind said that Rove told him that reporters like him live ‘in what we call the reality-based community’, which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality… That’s not the way the world really works anymore… We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors… and you, all of you, will be left to just study what we do.’

The order here is what matters. In the order of representation, reality happens, we study it, and then we use language to explain it. In the order of simulation, we imagine and predict a real future, and then we set about making reality conform with our prediction. Think of Karl Rove’s claim that the American empire had moved into a phase where it could ‘create its own reality’ in relation to Baudrillard’s claim that ‘present-day simulators attempt to make the real, all of the real, coincide with their models of simulation’.

The idea of simulation here is a political and cultural condition and a technical achievement. The more you have the computing power to collect and process data that enables you to make predictions, the more you begin to act as if reality conforms to your predictions.
The question we might ask then is: in whose interests is it to pursue the development of cultures and technologies of simulation, rather than representation?

Writing in the London Review of Books John Lanchester remarks that ‘Facebook has no financial interest in telling the truth’. Buzzfeed reported that in the final three months of the US presidential election, fake news stories on Facebook generated more engagement than real news from reputable news sources. Facebook’s algorithmic architecture is geared for simulation not representation: it uses data to produce immersive streams of images that conform with the moods and preconceptions of individual users.

With the rise of the major platforms, we need to contend with powerful actors whose business model is organised around the effort to simulate and augment reality. For us, as citizens of this world, the struggle is to articulate and defend the order of representation because with it goes the possibility of shared human experience.

 

 

 

Drone logic

Drone Logic

Our common image of drones is a military one. Drones are unmanned aircraft controlled by a remote operator. They undertake surveillance, make predictions and execute bombings.

Mark Andrejevic suggests that we think about drone logic. Not just the military use of drones, but how the drone can be thought of as a figure that stands in for the array of sensors and probes that saturate our worlds. Drones are interrelated with a vast network of satellites, cables, and telecommunications hardware. They extend logics of surveillance, data collection, analysis, simulation and prediction.

Drones are diffused throughout our society: collecting information and generating forms of classification, prediction, discrimination and intervention in populations. Thinking this way, we might take the smartphone to be the most widely distributed and used drone. Andrejevic argues that the smartphone is a drone-like probe used by both state and corporate organisations for surveillance. Probes have ‘the ability to capture the rhythms of the activities of our daily lives via the distributed, mobile, interactive probes carried around by the populace’. In this way, smartphones are ‘always-on’ probes distributed through a population.

Andrejevic offers us a framework for drone logic. Drones are a hyperefficient probe in four ways:

  1. They extend and multiply the reach of the senses.
  2. They saturate time and space in which sensing takes place (entire cities can be photographed 24 hours a day)
  3. They automate sense-making.
  4. They automate response.

In the public lecture below Mark Andrejevic gives us an account of ‘drone logic’. He asks, ‘what might it mean to describe the emerging logics of “becoming drones”, and what might such a description have to say about the changing face of interactivity in the digital era?’

For him, the figure of the drone serves as an avatar for the interface of emerging forms of automated data capture, sense-making, and response. Understood in this way, the figure of the drone can be mobilised to consider the ways in which automated data collection reconfigures a range of sites of struggle – after all, it is a figure born of armed conflict, but with roots in remote sensing (and action at a distance).

 

Drone Empire

In 2014 an art collective working with a local Pakistani village helped lay out an enormous portrait of a child who had been killed in a US drone strike. Buzzfeed writes:

The collective says it produced the work in the hope that U.S. drone operators will see the human face of their victims in a region that has been the target of frequent strikes. The artists titled their work “#NotABugSplat”, a reference to the alleged nickname drone pilots have for their victims. “Bug splat” is the term used by U.S. drone pilots to describe the death of an individual as seen on a drone camera because “viewing the body through a grainy video image gives the sense of an insect being crushed”. The artists say that the purpose of “#NotABugSplat” is to make those human blips seem more real to the pilots based thousands of miles away: “Now, when viewed by a drone camera, what an operator sees on his screen is not an anonymous dot on the landscape, but an innocent child victim’s face.” The creators hope their giant artwork will “create empathy and introspection amongst drone operators, and will create dialogue amongst policy makers, eventually leading to decisions that will save innocent lives.

The artwork attempts to put a human face on drone warfare. While the US promotes the use of drones as a more precise and targeted way of identifying and eliminating enemy targets, they enact warfare at a distance. The drone operator sits in a remote location out of harm’s way, directing the drone via a screen and joystick. This makes warfare seem safer for the intervening military – although there is evidence that drone operators are traumatised by the work – but there is also evidence that drones kill many innocent victims.

The Bureau of Investigative Journalism has conducted extensive reporting into the use of drones in places like Pakistan and Afghanistan. This includes documenting every drone strike in these countries. In Pakistan alone they report the US has conducted 420 drone strikes since 2004. Those strikes are estimated to have killed over 900 civilians, more than 200 of whom were children, and to have injured a further 1,700 people.

In 2009, The New Yorker published a detailed investigation of the US drone program’s origins and activities.

In her talk ‘Drones, the Sensor Society, and US Exceptionalism’ at the Defining the Sensor Society Symposium in 2014, Lisa Parks examines the US investment in drones for military and commercial purposes.

Listen to her talk here: Introduction, Part 1, Part 2, Part 3.

Parks’ arguments and provocations

If the relationships between bodies and machines are ‘dynamic techno-social relations’, what are we to make of the impression created by the US military that drones remove responsibility from human actors in war zones? The drone appears to be the actor, rather than the human soldier. But, drones have a heavy human cost. Hundreds of civilians and children are killed by US drone strikes in targeted areas.

The drone is more than a sensor, and more than a media technology that produces images of the world, it directly intervenes in the world.

Drones don’t just hunt and kill from afar, they seek to secure territories and administer populations from the sky.

Drones are like '3D printers more than video games, they sculpt the world as much as they simulate or sense'.

Drones intercept commercial mobile phone data as well as tracking military targets. They conduct both ‘targeted’ and ‘ubiquitous’ surveillance. They ‘scoop up’ as much mobile and internet communication data as they can. The drone is a ‘flying data miner’ or ‘digital extractor’ that collects any information it can in order to then identify patterns.

Drones enable ‘death by metadata’. Drone operators target mobile phones, determined by location data, without identifying who is actually holding the phone. A drone operator explains: ‘it’s really like we’re targeting a cell phone, we’re not going after people we are going after their phones in the hopes the person on the other end of that missile is a bad guy’. Pre-emptive targeted killing is met with retrospective identification: ‘We can kill if we don’t know your identity but once we kill you we want to figure out who we killed’. All but three African countries now require mandatory SIM card registration so that every SIM card can be related to a person. This enables SIM card databases to be used for identifying individuals in time and space. But, people are identified by inference. The person holding the mobile phone is presumed to be the person who registered that SIM card. ‘Metadata plus’ is an app created by an activist that informs users each time the US conducts a drone strike. Terrorist groups often confiscate mobile phones from areas they are in to avoid being detected by drones.

Drones detect body heat. This marks a shift in how racial differences are sensed and classified. Infrared sensors enable drones to see through clouds and buildings. In a visually cluttered and chaotic environment infrared is useful for identifying living bodies to target. To the drone a person is visible via their body heat. This does not enable the drone operator to distinguish between different kinds of people: adults and children, military actors and civilians. Once a drone identifies a person as a red splotch of body heat on a monitor, the decision to ‘strike’ the target is made via data collection and prediction. Often, that data is generated via a mobile phone. What marks the red splotch out as the intended target is data indicating that their mobile phone is present at the same location. What is targeted is the mobile phone, which is assumed to be on the nearest red splotch on the monitor.

People on the ground create drone survival guides. These guides give information on various kinds of drones, how to identify them, and how to avoid their detection systems.

Drone Wars is a UK group which collects information on drone operations. Check out their Drone Crash database for information and images on drone crashes.

Drone Labour

Alex Rivera is a filmmaker and artist who has explored drones for more than fifteen years. His film Sleep Dealer (2008) is a vivid account of the social implications of drones and algorithmic media in the global economy. In part, the film features Mexican workers who work ‘node jobs’ in vast computer sweatshops or virtual factories where they have nodes implanted in their bodies and connected to a computer system. Watching monitors, they move their own bodies to control robots in American cities. The robots undertake all the labour that real Mexican immigrants currently undertake in the US: cleaning houses, cutting grass, construction. The US economy has kept the output of Mexican labour but not its human bodies. The human bodies all reside in impoverished conditions in Mexico, controlling robots who perform tasks in the US.

The film illustrates ‘drone’ logic. Human actors use a sensory and calculative media system to remotely perform tasks from afar. Rivera suggests that our global economy is increasingly underwritten by this drone logic: military drones, call centres, immigrant labour in vast factories who only interact with loved ones via the screen and so on are all examples of the way computerisation, digital networks and media interfaces enable humans to act on geographic areas and processes that they are not physically present in.
Furthermore, the film connects the concept of the drone to our discussion about the implosion of bodies and machines in the era of calculative media. The workers in the film are cyborgs in the sense that they are literally plugged into a vast media system. Their capacity to work involves their physical fleshy body, the digital network through which their human senses convey digital data, and robots in distant places performing tasks.

You can watch his film online from the UQ library.

You can stream and buy Sleep Dealer here.

Check out these interviews with Rivera in Foreign Policy and The New Inquiry.

Algorithmic culture and machine learning

What’s an algorithm?

An algorithm is a logical decision-making sequence. Tarleton Gillespie explains that for computer scientists, ‘algorithm refers specifically to the logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome.’

On media platforms like Facebook, Instagram, Netflix and Spotify content-recommendation algorithms are the programmed decision-making that assembles and organises flows of content. The News Feed algorithm on Facebook selects and orders the stories in your feed.
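To make Gillespie’s definition concrete, here is a minimal sketch of a content-recommendation algorithm in that computer-science sense: a logical series of steps that organises a body of data (posts) to achieve a desired outcome (an ordered feed). The posts, signals and weights are invented for illustration; this is not Facebook’s actual News Feed formula.

```python
# A minimal, hypothetical feed-ranking algorithm: score each post on a few
# signals, then order the feed by score. The weights are invented for
# illustration; real platforms tune thousands of such parameters.

posts = [
    {"id": 1, "likes": 120, "comments": 4,  "hours_old": 2,  "from_close_friend": False},
    {"id": 2, "likes": 8,   "comments": 15, "hours_old": 1,  "from_close_friend": True},
    {"id": 3, "likes": 300, "comments": 40, "hours_old": 30, "from_close_friend": False},
]

def score(post):
    # Each term is a tiny judgment: engagement counts for something,
    # recency counts for something, relationships count for something.
    engagement = post["likes"] + 3 * post["comments"]
    recency = 1 / (1 + post["hours_old"])           # newer posts score higher
    friendship = 2.0 if post["from_close_friend"] else 1.0
    return engagement * recency * friendship

feed = sorted(posts, key=score, reverse=True)       # the organised flow of content
for post in feed:
    print(post["id"], round(score(post), 1))
```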

Algorithm is often used in a way that refers to a complex machine learning process, rather than a specific or singular formula. Algorithms learn. They are not stable sequences, but rather constantly remodel based on feedback. Initially this is accomplished via the generation of a training ‘model’ on a corpus of existing data which has been in some way certified, either by the designers or by past user practices. The model is the formalization of a problem and its goal, articulated in computational terms. So algorithms are developed in concert with specific data sets - they are ‘trained’ based on pre-established judgments and then ‘learn’ to make those judgments into a functional interaction of variables, steps, and indicators.

Algorithms are then ‘tuned’ and ‘applied’. Improving an algorithm is rarely about redesigning it. Rather, designers “tune” an array of parameters and thresholds, each of which represents a tiny assessment or distinction. So for example, in a search, this might mean the weight given to a word based on where it appears in a webpage, or assigned when two words appear in proximity, or given to words that are categorically equivalent to the query term. These thresholds can be dialled up or down in the algorithm's calculation of which webpage has a score high enough to warrant ranking it among the results returned to the user.
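The ‘tuning’ Gillespie describes can also be sketched in code. In this hypothetical search-scoring function, the structure of the algorithm stays fixed while the designer dials the weights and threshold up or down; the features and numbers are assumptions, not any real search engine’s parameters.

```python
# Hypothetical relevance scoring for a search query: the algorithm's structure
# stays fixed, while designers 'tune' the weights and thresholds.

WEIGHTS = {
    "term_in_title": 3.0,       # query word appears in the page title
    "term_in_body": 1.0,        # query word appears in the body text
    "terms_in_proximity": 2.0,  # all query words appear in the same sentence
}
RANK_THRESHOLD = 2.5            # minimum score to appear in the results

def relevance(page, query_terms):
    score = 0.0
    title, body = page["title"].lower(), page["body"].lower()
    for term in query_terms:
        if term in title:
            score += WEIGHTS["term_in_title"]
        if term in body:
            score += WEIGHTS["term_in_body"]
    # crude proximity check: all query terms appear together in one sentence
    if any(all(t in sentence for t in query_terms) for sentence in body.split(".")):
        score += WEIGHTS["terms_in_proximity"]
    return score

pages = [
    {"title": "Algorithmic culture", "body": "Algorithms sort and rank culture."},
    {"title": "Cooking pasta", "body": "Boil water. Add salt and pasta."},
]
query = ["algorithms", "culture"]
results = [p["title"] for p in pages if relevance(p, query) >= RANK_THRESHOLD]
print(results)
```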

What is algorithmic culture?

Tarleton Gillespie suggests that from a social and cultural point of view our concern with the 'algorithmic' is a critical engagement with the 'insertion of procedure into human knowledge and social experience.’

Algorithmic culture is the historical process through which computational processes are used to organise human culture. Ted Striphas argues that ‘over the last 30 years or so, human beings have been delegating the work of culture – the sorting, classifying and hierarchizing of people, places, objects and ideas – increasingly to computational processes.’ His definition reminds us that cultures are, in some fundamental ways, systems of judgment and decision making. Cultural values, preferences and tastes are all systems of judging ideas, objects, practices and performances: as good or bad, cool or uncool, pleasant or disgusting, and so on. Striphas’ point then, is that over the past generation we have been building computational machines that can simulate these forms of judgment. This is remarkable in part because we have long understood culture, and its systems of judgment, as confined to the human experience.

Striphas defines algorithmic culture as ‘the use of computational processes to sort, classify, and hierarchise people, places, objects, and ideas, and also the habits of thought, conduct and expression that arise in relationship to those processes.’ It is important to catch the dynamic relationship Striphas is referring to here. He is pointing out that algorithmic culture involves both machines learning to make decisions about culture and humans learning to address those machines. Think of Netflix. Netflix engineers create algorithms that can learn to simulate human judgments about films and television, to predict which human users will like which films. This is one part of an algorithmic culture. The other important part, Striphas argues, is the process through which humans begin to address those algorithms. So, for instance, if film and television writers and producers know that Netflix uses algorithms to decide if an idea for a film or television show will be popular, they will begin to ‘imagine’ the kinds of film and television they write in relation to how they might be judged by an algorithm. This relationship creates a situation where culture conforms more and more to users, rather than confronting them. Using the example of the Netflix recommendation algorithm, they argue that customised recommendations produce, ‘more customer data which in turn produce more sophisticated recommendations, and so on, resulting – theoretically – in a closed commercial loop in which culture conforms to, more than it confronts, its users’.

Striphas helpfully places algorithmic culture in a longer history of using culture as a mechanism for control. He suggests that algorithmic culture 'rehabilitates' some of the ideas of the British cultural critic, Matthew Arnold, who wrote Culture and Anarchy in 1869. Arnold argued that in the face of increasing democratisation and power being given over to ordinary people in the nineteenth century, the ruling elites had to devise ways to maintain cultural dominance. Arnold argued this should be done by investing in institutions, such as schools, that would ‘train’ or ‘educate’ ordinary people into acceptable forms of culture. Later, public broadcasters, such as the BBC, also took up this role. Arnold defines culture as ‘a principle of authority to counteract the tendency to anarchy which seems to be threatening us’. By principle of authority he means that a selective tradition of norms, values, ideas, tastes and ways of life can be deployed to shape a society.

Striphas argues that this idea of using culture as an authoritative principle is the one ‘that is chiefly operative in and around algorithmic culture’. Today, algorithms are used to ‘order’ culture, to drive out ‘anarchy’. Media platforms like Facebook, Google, Netflix and Amazon present their algorithmically-generated feeds of content and recommendations as a direct expression of the popular will. But, in fact, the platforms are the new ‘apostles’ of culture. They play a powerful role in deciding ‘the best that has been thought and said’.

Algorithmic culture is the creation of a 'new elite', powerful platforms that make the decisions which order public culture, but who do not disclose what is 'under the hood' of their decision-making processes. The public never knows how decisions are made, but we can assume they ultimately serve the strategic commercial interests of the platforms, rather than the public good. The platforms might claim to reflect the ‘popular will’, but that’s not a defensible claim when their whole decision making infrastructure is proprietary and not open to public scrutiny or accountability. Striphas argues that ‘what is at stake in algorithmic culture is the gradual abandonment of culture’s publicness and thus the emergence of a new breed of elite culture purporting to be its opposite.’

What is machine learning?

An algorithmic culture is one in which humans delegate the work of culture to machines. To understand how this culture works we need to know a bit about how machines make decisions. From there, we can begin to think critically about the differences between human and machine judgment, and what the consequences of machine judgment might be for our cultural world.

Machine learning is a complex and rapidly developing field of computer science. Machine learning is the process of developing algorithms that process data, learn from it and make decisions or predictions. These algorithms are ‘trained’ and tested using particular data sets, and are then used to classify, organise, and make decisions about new data.

Stephanie Yee and Tony Chu from r2d3 created this visual introduction to a classic machine learning approach. My suggestion is to work through this introduction.

A typical machine learning task is ‘classification’: sorting data and making distinctions. For instance, you might train a machine to ‘classify’ houses as being in one city or another.

Classifications are made by making judgments about a range of dimensions in data (these might be called edges, features, predictors, or variables). For instance, a dimension you might use to classify a home might be its price, its elevation above sea level, or how large it is.
In a typical approach to machine learning a decision-making model is created and ‘trained’ using ‘training data’. After the model is built it is ‘tested’ with previously unseen ‘test data’.
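Here is a minimal sketch of that workflow, assuming the scikit-learn library and echoing the r2d3 example of classifying houses by city: the model is ‘trained’ on labelled examples and then ‘tested’ on data it has not seen. The houses and their dimensions are invented toy data.

```python
# A minimal classification workflow, assuming scikit-learn. Each house is
# described by two dimensions (features): price in $1000s and elevation in
# metres. The data is invented toy data.

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

houses = [[1200, 70], [900, 55], [1500, 80], [450, 5],
          [500, 10], [620, 8], [1100, 60], [480, 3]]
cities = ["SF", "SF", "SF", "NY", "NY", "NY", "SF", "NY"]   # the labels

# Split into training data (to build the model) and test data (to evaluate it).
X_train, X_test, y_train, y_test = train_test_split(
    houses, cities, test_size=0.25, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)      # 'training'
print("accuracy on unseen test data:", model.score(X_test, y_test))
print("prediction for [price=1000, elevation=65]:", model.predict([[1000, 65]]))
```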

There are two basic approaches: supervised and unsupervised (a brief sketch of each follows the list below).

  • Supervised approaches: give examples for the machine to learn from. Tell the machine which are right and wrong. Used for classification.
  • Unsupervised approaches: no examples given to the machine. The machine generates its own features. Good for pattern identification. The machine will see patterns humans may not.
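Here is the sketch referred to above: the same toy data handled both ways, assuming scikit-learn. In the supervised case we supply the ‘right answers’; in the unsupervised case the machine finds its own groupings.

```python
# Supervised vs unsupervised learning, assuming scikit-learn. Toy data invented.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

points = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]

# Supervised: we give the machine labelled examples (the 'right answers').
labels = ["A", "A", "A", "B", "B", "B"]
classifier = KNeighborsClassifier(n_neighbors=3).fit(points, labels)
print(classifier.predict([[2, 2], [9, 9]]))        # -> ['A' 'B']

# Unsupervised: no labels are given; the machine finds its own groupings.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(clusters)                                     # e.g. [0 0 0 1 1 1]
```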

For a useful explainer to machine learning approaches, and examples of the types of problems machine learning tackles, check out this introduction by Jeroen Moons.
 

What is deep learning?

In a classic machine learning approach to ‘classification’ humans first create a labelled data set and articulate a decision-making process that they ‘teach’ the machine to replicate. This approach works well where there is an available ‘labelled’ data set and where humans can describe the decision-making sequence in a logical way.

The dominance of deep learning in recent years is driven by an enormous increase in available data and computer processing power. Deep learning approaches are used for classification or pattern recognition where specifying the features in advance is difficult. Classic approaches, by contrast, are not useful for making sense of extremely large, natural, and unlabelled data sets, or where the decision-making sequence is not easy to articulate.

Think of the example of recognising handwriting. If all the letters of an alphabet are in the one typeface, then it is easy to specify the decision-making sequence. See this letter ‘A’ below.

The letter is divided up into 16 pixels. From there, a simple decision-making sequence can be articulated that would distinguish ‘A’ from all other letters in the alphabet: if pixels 2, 3, 6, 7, 9, 12, 13 and 16 are highlighted then it is an ‘A’.
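That kind of rule is easy to write down as code. A minimal sketch, using the pixel numbers given above:

```python
# A hand-written decision rule of the kind described above: a fixed-typeface
# letter can be recognised by checking exactly which pixels are filled in.
# The specific pixel numbers follow the 16-pixel example in the text.

A_PIXELS = {2, 3, 6, 7, 9, 12, 13, 16}

def is_letter_a(filled_pixels):
    """Return True if the set of filled pixel numbers matches the rule for 'A'."""
    return set(filled_pixels) == A_PIXELS

print(is_letter_a([2, 3, 6, 7, 9, 12, 13, 16]))   # True
print(is_letter_a([1, 2, 3, 4]))                   # False
```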

But, imagine that instead of an A in this set typeface, you instead want a machine to recognise human handwriting. Each human writes the letter ‘A’ a bit differently, and most humans write it differently every time – depending on where in the word it is, how fast they are writing, if they are writing in lower case, upper case or cursive script.

[Image: handwritten examples of the letter ‘A’]

 

While a human can accurately recognise a handwritten ‘A’ when they see it, they could not articulate a reliable decision-making procedure that explains how they do that. Think of it like this: you can recognise ‘A’ but you cannot then explain exactly how your brain does it.

This is where ‘deep learning’ or ‘deep neural networks’ come in. Deep neural networks are a machine learning approach that does not require humans to specify the decision-making logic in advance. The basic idea is to find as many examples as possible (like millions of examples of human handwriting, or images, or recordings of songs) and give them to the network. The network looks over these numerous examples to discover the latent features within the data that can be used to group them into predefined categories. A large neural network can have many thousands of tuneable components (weights, connections, neurons).
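As a contrast with the hand-written rule above, here is a minimal sketch of the learning-from-examples approach, using scikit-learn’s small built-in dataset of handwritten digits and a simple neural network. Nobody specifies the decision rules; the network tunes its internal weights by looking at labelled examples. Real deep networks are vastly larger, and this toy network is only loosely ‘deep’.

```python
# In contrast to the hand-written rule above, no decision sequence is specified
# in advance: a small neural network adjusts thousands of internal weights by
# looking at labelled examples of handwritten digits. Assumes scikit-learn.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                              # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

network = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
network.fit(X_train, y_train)                       # 'learning' = tuning the weights

print("accuracy on unseen handwriting:", network.score(X_test, y_test))
# We can inspect the learned weights, but they do not 'explain' the decision
# in any way a human could articulate as a rule.
print("learned weight parameters (excluding biases):",
      sum(w.size for w in network.coefs_))
```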

In 2012, Google publicised the development of a neural network that had ‘basically invented the concept of a cat’.

Google explained:

Today’s machine learning technology takes significant work to adapt to new uses. For example, say we’re trying to build a system that can distinguish between pictures of cars and motorcycles. In the standard machine learning approach, we first have to collect tens of thousands of pictures that have already been labeled as “car” or “motorcycle”—what we call labeled data—to train the system. But labeling takes a lot of work, and there’s comparatively little labeled data out there. Fortunately, recent research on self-taught learning (PDF) and deep learning suggests we might be able to rely instead on unlabeled data—such as random images fetched off the web or out of YouTube videos. These algorithms work by building artificial neural networks, which loosely simulate neuronal (i.e., the brain’s) learning processes. Neural networks are very computationally costly, so to date, most networks used in machine learning have used only 1 to 10 million connections. But we suspected that by training much larger networks, we might achieve significantly better accuracy. So we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.

A critically important aspect of a deep learning approach is that the human user cannot know how the network configured its decision-making process. The human can only see the ‘input’ and ‘output’ layers. The Google engineers cannot explain how their network ‘learnt’ what a cat was; they can only see the network output this ‘concept’.

Watch the two videos below for an explanation of neural networks.

In this first video Daniel Angus explains the basic ‘unit’ of a neural network: the perceptron.


A neural network then is made up of billions of connections between perceptrons. The neural network ‘trains’ by adjusting the weightings between connections, reacting to feedback on its outputs.
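A perceptron is simple enough to sketch in a few lines of Python. It weights its inputs, sums them, and ‘learns’ by nudging those weights whenever its output is wrong. The training data here is the classic AND-gate toy example, not anything from the videos.

```python
# A minimal perceptron: the basic 'unit' of a neural network. It weights its
# inputs, sums them, and 'learns' by adjusting the weights in response to
# feedback on its outputs. Trained here on the classic AND-gate toy example.

def perceptron_train(examples, epochs=20, learning_rate=1.0):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # forward pass: weighted sum of inputs, then a simple threshold
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            # feedback: nudge each weight in proportion to the error
            error = target - output
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]   # AND gate
weights, bias = perceptron_train(examples)

for inputs, target in examples:
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(f"{inputs} -> {output} (expected {target})")
```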

In this second video Daniel Angus explains how the neural network learns to classify data, identify patterns and make predictions using the examples of cups and songs in a playlist.

Here are some more examples of deep neural networks.

This deep neural network has learnt to write like a human.

This one has learnt to create images of birds based on written descriptions.

This neural network has learnt to take features from one image and incorporate them in another.

In each of these examples the network is accomplishing tasks that a human can do with their own brain, but could not specify as a step-by-step decision-making sequence.

Finally, let’s relate these deep learning approaches back to specific media platforms that we use everyday.

In 2014 Sander Dieleman wrote a blog post about a deep learning approach he had developed at Spotify.

Dieleman’s experiment aimed to respond to one of the limitations of Spotify’s collaborative filtering approach. You can find out more about Spotify’s recommendation algorithms in this piece from The Verge.

In short, a collaborative filtering approach uses data from users’ listening habits and ratings to recommend songs to users. So, if User A and User B like many artists in common, then this approach predicts that User A might like some of the songs User B likes that they have not heard yet. One of the limitations of this approach is the ‘cold start problem’. Put simply, how do you classify songs that no human has heard or rated yet? A collaborative approach needs many users to listen to a song before it can begin to determine patterns and make predictions about who might like it. Dieleman was inspired by deep neural networks that had learnt to identify features in photos; he thought perhaps a deep learning approach could be used to identify features in songs themselves without using any of the metadata attached to songs (like artist name, genre, tags, ratings). His prediction was that, over time, a deep neural network might be able to learn to identify more and more fine-grained features in songs. Go check out his blog post; as you scroll down you will see he offers examples.
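Collaborative filtering, and its cold start problem, can be sketched with a tiny invented listening matrix. Users with similar histories are used to score songs the target user has not heard; a brand new song, with no listening history at all, can never be recommended this way.

```python
# A minimal sketch of collaborative filtering, with invented listening data.
# Users with similar listening histories are used to predict what a user might
# like next. Note the 'cold start' problem: a brand new song has no listening
# data at all, so this approach has nothing to work with.

import numpy as np

songs = ["song_a", "song_b", "song_c", "song_d", "brand_new_song"]
# Rows are users, columns are songs; 1 = listened/liked, 0 = not.
listens = np.array([
    [1, 1, 1, 0, 0],   # user 0 (our target user)
    [1, 1, 0, 1, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
])

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target_user = 0
similarities = [cosine(listens[target_user], listens[other])
                for other in range(len(listens))]

# Score each unheard song by how much similar users liked it.
scores = {
    song: sum(similarities[u] * listens[u, i]
              for u in range(len(listens)) if u != target_user)
    for i, song in enumerate(songs) if listens[target_user, i] == 0
}
print(scores)   # 'brand_new_song' scores zero: no one has listened to it yet
```

Dieleman’s deep learning approach sidesteps this limitation by analysing the audio itself.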

At first the network can identify some basic features. For instance, it creates a filter that identifies ‘ambient’ songs. When you, as a human, play those ambient songs you can immediately perceive what the ‘common’ feature is that the network has picked out. Ambient music is slow and dreamy. But, remember, it would be incredibly difficult to describe exactly what the features are of ambient music in advance.

As the network continues to learn, it can create more finely tuned filters. It begins to identify particular harmonies and chords, and then eventually it can distinguish particular genres. Importantly, it groups songs together under a ‘filter’. It is up to the human to then label this filter with a description that makes sense. So, when the network produces ‘filter 37’, it is the human who then labels that as ‘Chinese pop’. The network doesn’t know it is Chinese pop, just that it identifies shared features among those songs.

What makes this deep learning example useful to think about is this: Dieleman has created a machine that can classify music in ways that make sense to a human, but without drawing on any human-made labels or instructions to get started. The machine can accurately simulate and predict human musical tastes (like genres) just by analysing the sonic structure of songs. This is its version of ‘listening’ to music. It can learn to classify songs in the same way a human would, but by using an entirely non-human machine process that is unintelligible to a human.
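As a toy illustration of the same idea – grouping songs purely from the audio signal, with no metadata – here is a sketch that synthesises a few ‘songs’, computes simple spectral features from each, and lets a clustering algorithm group them. This is not Dieleman’s convolutional network; it just miniaturises the principle of content-based classification.

```python
# A toy illustration of content-based analysis: grouping 'songs' purely from
# the audio signal itself, with no metadata. Not Dieleman's actual network.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
SAMPLE_RATE = 8000

def synth_song(base_freq):
    """Two seconds of a synthetic 'song': a tone plus a little noise."""
    t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
    return np.sin(2 * np.pi * base_freq * t) + 0.01 * rng.standard_normal(t.size)

# Six songs: three low, droning tracks and three bright, high-pitched ones.
songs = [synth_song(f) for f in (110, 130, 150, 1200, 1400, 1600)]

def audio_features(signal):
    """Summarise the sonic structure: where the energy sits in the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / SAMPLE_RATE)
    peak_freq = float(freqs[np.argmax(spectrum)])                  # dominant pitch
    centroid = float((freqs * spectrum).sum() / spectrum.sum())    # 'brightness'
    return [peak_freq, centroid]

features = [audio_features(s) for s in songs]
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)   # the low songs and the high songs fall into separate groups
```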

Nicholas Carah, Daniel Angus and Deborah Thomas

The difference between representation and simulation


What’s the difference between representation and simulation?

Let’s take representation to be the basic social process through which we create signs that refer to a shared sense of reality. The twentieth century is remarkable in part because humans created an enormous culture industry that managed this social process of representation. Through radio, film and television, vast populations came to understand the enormous social reality within which their lives were embedded. Critically, representation only works because people feel that the signs they see actually refer to, cohere with or match their really-lived experience.

One way to think about simulation is that it upends the order of representation. Let me borrow the famous illustration of the French philosopher Jean Baudrillard. We can say that a map is a representational text. A map of a city represents that real city. You can use that map to find your way around a real-world place: its really-existing streets, buildings and landmarks. What if, Baudrillard suggests, a map stops functioning as a representation and begins to function as a simulation? If in the order of representation the territory precedes the map, then in a simulation the map precedes the territory. That is, in representation the map comes after the real world, but in simulation the map comes first and begins to shape the real world.

OK, hang in here. Baudrillard has a fundamental insight for us, that really matters in a society increasingly organised around the logic of prediction. Here’s a fairly basic example of this claim that simulations are signs that precede reality, from William Bogard. Think of a simulation in the sense of a computer ‘sim’ like the software that teaches pilots to fly. In a simulation like this, signs are only related to other signs. The signs, such as the runway, geographical features and so on, the trainee pilot sees on the screen are only meaningful or operational within the simulation or in relation to the other signs enclosed within that system. When the pilot is sitting in the simulator ‘flying’ there is, of course, no real underlying reality they are ‘flying through’. What they see out the screen is not real sky, clouds, ground.

But, even so, this simulation is not a production of pure fiction; it is related to the real world. Simulations intervene in the real world and can only be understood in relation to it. In this case, a fighter pilot can only learn to fly by first using a simulation. The simulator enables them to habituate their bodies to the rapid, almost pre-conscious, decision making and the physiological impact of flying at supersonic speed.
So, we might say that while the simulation has no underlying reality, fighter pilots can only fly supersonic planes in the real world because they can train their minds and bodies in a simulation first. The simulation brings into being, in the real world, a fighter pilot. The fighter pilot could not exist without the simulation. The simulation then precedes and shapes reality.

So, here’s the thing to start thinking about. Think about all the ways in which our capacity to ‘simulate’ – to create things in advance of their existence in the real world, to predict the likelihood of events before they take place – actually affects our really-lived lives. Simulations intervene in the real world.

For example, think about the capacity to clone animals or even genetically-engineer humans. Here’s William Bogard offering us a thought experiment on genetically-engineered children.

No longer bound by their ‘real’ genetic codes carried in their own bodies, parents may be able to ‘compile’ their ideal child from a genetic database. A program might even help them calculate their ideal child by drawing on other data sets. For example, information about the parents’ personalities might be used to compile a child who they will get along with, or information about the cultural or industrial dynamics of the city where the parents live might be used to compile a child likely to fit in with that cultural milieu or have the aptitude for the forms of employment available in that region. The child ‘born’ as a result of such interventions would always be a simulation, always be virtual, because they were the product of a code or computation performed using databases. This does not mean the child is not ‘real’, the child of course exists, but they are virtual in the sense that they could not exist without the technologies of surveillance and simulation which brought them into reality.

Is the child a ‘real’ child? Of course it is. But it is also a simulation, in the sense that its very biological form was predicted and engineered in advance. We begin to project our views of what an ‘ideal’ child is into the future production of the species. We can think here of Bogard’s ‘child’ as a metaphor for our public and media culture. Of course, it is our ‘real’ or ‘really lived’ experience, but it would not exist without the collection and processing of data, and the simulations that are produced from that processing. Simulations require data and that data is produced via technologies of surveillance. To clone a sheep you need a complete dataset of the sheep’s genetic code, so you need the technologies to map the genetic code. To build a realistic flight simulator you need mapping technologies to construct simulations of the physical world. As Bogard argues, simulation, in its most generic sense, is the point where the imaginary and real coincide, where our capacity to imagine certain kinds of futures coincides with our capacity to predict and produce them in advance.

The larger philosophical point here is this: imagine a human experience where the future becomes totally imaginable and predictable, where its horizon closes in around whatever powerful humans today want. Bogard lays it out like this.

Technologies of simulation are forms of hyper-surveillant control, where the prefix ‘hyper’ implies not simply an intensification of surveillance, but the effort to push surveillance technologies to their absolute limit... That limit is an imaginary line beyond which control operates, so to speak, in ‘advance’ of itself and where surveillance – a technology of exposure and recordings – evolving into a technology of pre-exposure and pre-recording, a technical operation in which all control functions are reduced to modulations of preset codes.

Bogard introduces some significant critical ideas here. Firstly, he indicates that technologies of simulation are predictive, but they can only make reliable predictions if they have access to data collected by technologies of surveillance. For example, Norbert Wiener’s invention of a machine that could compute the trajectory of enemy aircraft in World War II combined surveillance, prediction and simulation. The radar conducted surveillance to collect data on the actual trajectory of an enemy aircraft, then a computational machine used algorithms to simulate the likely trajectory of that aircraft. This ability to interrelate the processes of surveillance and simulation is especially important because it underpins much of the direction of present-day digital media platforms and devices.
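Here is a minimal sketch of that surveillance-then-simulation loop. It is not Wiener’s actual predictor, which was far more sophisticated; it just fits a constant-velocity model to a few invented radar observations and extrapolates the aircraft’s likely position forward in time.

```python
# Surveillance: a handful of radar observations. Simulation: extrapolating the track.
# All numbers here are invented for illustration.
import numpy as np

times = np.array([0.0, 1.0, 2.0, 3.0])          # seconds
positions = np.array([10.0, 10.9, 11.8, 12.7])  # km along the flight path

# Fit a simple constant-velocity model to the observed track...
velocity, start = np.polyfit(times, positions, deg=1)

# ...and simulate the likely position a few seconds into the future.
t_future = 5.0
predicted = start + velocity * t_future
print(f"Predicted position at t={t_future}s: {predicted:.1f} km")
```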

Secondly, Bogard suggests that by using data from surveillance, simulations can not only predict the likely future, they can actually create the future based on data about the past. By predicting a likely future, we make it inevitable by acting to construct it; by acting ‘as if’ an event is likely to unfold, we ensure that it does. Admittedly, this can be a fairly complicated logic to think through. However, the critical idea to draw from this is that surveillance is not just a technology of the past, observing what people have done, or of the present, observing what people are doing. Surveillance also constructs the future: once coupled with simulation, it becomes a building block in a system of control where pre-set codes and models program the future and what people will do.

Thus these technologies usher in, and here I’m quoting Bogard, ‘a fantastic dream of seeing everything capable of being seen, recording every fact capable of being recorded, and accomplishing these things, whenever and wherever possible prior to the event itself.’ The virtual is made possible when ‘surveillance’ and ‘simulation’ become simultaneous, linked together in an automatic way which enables the past to be immediately apprehended and analysed in ways that code the present. Let’s go back to Bogard’s example of the genetically-engineered child.
No longer bound by their ‘real’ genetic codes carried in their own bodies, parents may be able to ‘customise’ their ideal child from a genetic database. The child is real, they exist. But they are also virtual in the sense that they could not exist without the technologies of surveillance and simulation which brought them into reality.

Bogard is being deliberately imaginative in his account. He is attempting to conceptualise the ‘limits’ of surveillance and simulation technologies and to indicate how the technologies of simulation can be interwoven with reality in complex ways. If the information that can be collected and stored becomes limitless, and the capacity to predict, calculate and simulate using that information also expands, then the role media technologies play in our societies will shift dramatically in the years ahead. It might even profoundly unsettle our understanding of what media is. For example, if parents can ‘compile’ a desirable child using a combination of surveillance and simulation technologies, would the resulting child be a media product?

In many respects the child could be construed as a customised media device, containing information suited to the consumers’ requests. This sounds messed up, but in Bogard’s proposition we need to think about the limits of surveillance technologies. If surveillance becomes ‘complete’ then the possible future becomes ‘visible’. Crucially, the future can repeat the past because it no longer ‘unfolds’ randomly; it can be ‘managed’ by drawing on data about the past, which enables it to be not just ‘predicted’ but brought into being – to be virtualised.

If all of this sounds a bit fanciful, then at least consider this point. Our media system is characterised by the increasing capacity to conduct surveillance and create simulations. Surveillance is the capacity to watch, simulation the capacity to respond. The two are interdependent. This system is productive and predictive. Together surveillance and simulation make calculations and judgments about what will happen next, and in doing so shape the coordinates within which institutions and individuals act. Technologies of surveillance and simulation then prompt us to think carefully about what the human experience is, and what the interrelationships are between humans and increasingly predictive and calculative technologies.

In the last post I mentioned the episode Be Right Back from Charlie Brooker’s Black Mirror. A young woman, Martha, gets a robot version of her dead partner Ash. The robot is programmed on all the data Ash generated while he was alive. It looks like him, has his gestures and his interests, and speaks like him. Martha’s robot version of Ash can learn to perform as Ash: his language, speech and expressions. For instance, Martha tells the robot what ‘throwing a jeb’ meant in their relationship, and later he uses that expression in context. But the robot is unable to make its own autonomous decisions. Martha is the robot’s ‘administrator’ and he will do whatever she asks. The robot is missing the nuances of human relationships. It knows how to argue, but not how to fight. The robot cannot be affected. It can’t engage in open-ended, deeply human creativity. It can’t ‘spend time’ with another human. The night before she takes him to a cliff, Martha and Ash the robot have this exchange.

Martha: get out, you are not enough of him…
Robot: did I ever hit you?
Martha: of course not, but you might have done.
Robot: I can insult you, there is tons of invective in the archive, I like speaking my mind, I could throw some of that at you.

The robot can manipulate how Martha feels, but it can’t understand her feelings or feel for itself. What do our intimate others know about us that our devices cannot? What can humans know about each other that technologies of surveillance cannot? What we look like when we cry, what we might do but haven’t yet done, how we respond to our intimate others as they change. Martha reaches this impasse with Ash. Ash is the product of surveillance and simulation. He doesn’t just ‘represent’ their relationship, he intervenes in it. He begins to shape Martha’s reality in ways living Ash did not, and now – having passed away – cannot. Martha takes Ash to a cliff.

Robot: Noooo, don’t do it! (joking). Seriously, don’t do it.
Martha: I’m not going to.
Robot: OK
Martha: See he would’ve worked out what was going on. This wouldn’t have ever happened, but if it had, he would’ve worked it out.
Robot: Sorry, hang on that’s a very difficult sentence to process.

Why is it difficult to process? Because as much as it’s a sensible statement, it is an affective one. It is about how she feels, but also about the open-ended nature of human creativity: to consider and imagine things that are not, or might have been, or could be.

Martha: Jump
Robot: What over there? I never express suicidal thoughts or self harm.

The robot is always rational.

Martha: Yeah, well, you aren’t you, are you?
Robot: That’s another difficult one.
Martha: You’re just a few ripples of you, there is no history to you, you’re just a performance of stuff that he performed without thinking and it’s not enough.
Robot: C’mon I aim to please.
Martha: Aim to jump, just do it.
Robot: OK if you are absolutely sure.
Martha: See, Ash would’ve been scared. He wouldn’t have just leapt off, he would’ve been crying.

The robot can manipulate how Martha feels, but it can’t understand her feelings or feel for itself. She makes a decision that might surprise us though. Rather than put photos of her daughter’s father in the attic, she puts the robot up there. The daughter visits a simulation of her father each weekend. We can see, I would argue, that Brooker draws us toward thinking about our ambivalent entanglements with these devices: the intimacy and comfort they provide, our dependence on them, the way they unsettle, control and thwart us.

As I thought about this problem, which Brooker brings to a head at the cliff, I thought of John Durham Peters:

Communication, in the deeper sense of establishing ways to share one’s hours meaningfully with others, is sooner a matter of faith and risk than of technique and method. Why others do not use words as I do or do not feel or see the world as I do is a problem not just in adjusting the transmission and reception of messages, but in orchestrating collective being, in making space in the world for each other. Whatever ‘communication’ might mean, it is more fundamentally a political and ethical problem than a semantic one.

Machines can displace forms of human knowing and doing in the world, but they seem confined to reducing communication to a series of logical procedures, calculations and predictions. What’s left is the human capacity to make space in the world for each other, to exercise will, to desire, to spend time with one another. The relationships between humans and their simulations are complicated, and the logic of simulation cannot encompass or obliterate the human subjective process of representation. What makes the human, in part, is their capacity to use language to spend time and make worlds with each other.

Nicholas Carah and Deborah Thomas

 

Simulation

James Vlahos wrote in Wired magazine in July 2017 about his creation of a ‘Dadbot’. Vlahos sat down and taped a series of conversations with his dying father about the story of his life. The transcript of these conversations is rich with the stories, thoughts and expressions of his dad. He begins to ‘dream of creating a Dadbot – a chatbot that emulates… the very real man who is my father. I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf’. The transcripts are training data. Over months he builds the bot, using the chatbot authoring tool PullString, training and testing it to talk like his dad.
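To give a sense of how a transcript can function as training data, here is a minimal sketch. It is not the PullString bot Vlahos actually built; it simply retrieves the recorded reply whose original prompt is most similar to a new question, and the transcript lines are invented placeholders.

```python
# A toy retrieval 'Dadbot': answer new questions with the closest recorded reply.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each pair: (something said to dad, what dad actually said in reply) - invented examples.
transcript = [
    ("tell me about the village", "Ah, the village. We walked up the hill to the taverna..."),
    ("how did you meet mum", "We met at a dance. She laughed at my terrible jokes..."),
]

prompts = [p for p, _ in transcript]
vectoriser = TfidfVectorizer().fit(prompts)

def dadbot_reply(question: str) -> str:
    # Retrieve the recorded reply whose original prompt is most similar to the question.
    scores = cosine_similarity(vectoriser.transform([question]),
                               vectoriser.transform(prompts))[0]
    return transcript[int(scores.argmax())][1]

print(dadbot_reply("what was the village like?"))
```

The design choice matters: a retrieval bot can only ever speak in the recorded words, which is part of why the transcripts themselves become so precious.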

He takes the bot to show his mother and his father, who is by now very frail. His mum starts talking with the bot.

I watch the unfolding conversation with a mixture of nervousness and pride. After a few minutes, the discussion segues to my grandfather’s life in Greece. The Dadbot, knowing that it is talking to my mom and not to someone else, reminds her of a trip that she and my dad took to see my grandfather’s village. “Remember that big barbecue dinner they hosted for us at the taverna?” the Dadbot says.

After the conversation, he asks his parents a question.

“This is a leading question, but answer it honestly,” I say, fumbling for words. “Does it give you any comfort, or perhaps none—the idea that whenever it is that you shed this mortal coil, that there is something that can help tell your stories and knows your history?”
My dad looks off. When he answers, he sounds wearier than he did moments before. “I know all of this shit,” he says, dismissing the compendium of facts stored in the Dadbot with a little wave. But he does take comfort in knowing that the Dadbot will share them with others. “My family, particularly. And the grandkids, who won’t know any of this stuff.” He’s got seven of them, including my sons, Jonah and Zeke, all of whom call him Papou, the Greek term for grandfather. “So this is great,” my dad says. “I very much appreciate it.”

Later, after his father has passed away, Vlahos recalls an exchange with his 7-year-old son.

‘Now, several weeks after my dad has passed away, Zeke surprises me by asking, “Can we talk to the chatbot?” Confused, I wonder if Zeke wants to hurl elementary school insults at Siri, a favorite pastime of his when he can snatch my phone. “Uh, which chatbot?” I warily ask.
“Oh, Dad,” he says. “The Papou one, of course.” So I hand him the phone.’

The story is strange and beautiful. It provokes us to think about how we become entangled with media technologies, and the ways in which they are enmeshed in our human experience. In this story, not just a father – but a family and their history – is remembered and passed on not with oral stories, or photo albums, or letters but with an artificial intelligence that has been trained to perform someone after they die.

The Dadbot is an example of the dynamic relationship between surveillance and simulation. Surveillance is the purposeful observation, collection and analysis of information. Simulation is the process of using data to model, augment, profile, predict or clone. The two ideas are interrelated. Simulations require data, and that data is produced via technologies of surveillance. The more data we collect about human life, the more our capacity grows to use that data to train machines that can simulate, augment and intervene in human life.

If the Dadbot is a real experiment, let me offer a speculative fictional one. In the episode Be Right Back of his speculative fiction series Black Mirror, Charlie Brooker asks us to think about a similar relationship between humans, technologies and death. Be Right Back features a young couple: Martha and Ash. After Ash’s death, his grieving partner Martha seeks out connection with him. At first the episode raises questions about how media is used to remember the dead. Old photos, old letters, clothes, places you visited together, songs you listened to. A friend suggests Martha log into a service that enables text-based chat with people who have passed away, simulating their writing style from their emails and social media accounts. She does that. It escalates. She uploads voice samples that enable her to chat to him on the phone. She becomes entangled in these conversations. Sometimes the simulation spooks her, for instance when she catches it ‘googling’ answers to questions she knows Ash wouldn’t know. A new product becomes available: a robot whose appearance draws on photographs and videos of Ash while he was alive. It arrives. She activates it in the bath. The robot is much better in bed than Ash ever was. Things get complex. Martha goes looking for the gap between the robot and the human.

Vlahos’ Dadbot and the robot in Be Right Back are both examples of the interplay between surveillance and simulation. Each of them illustrates how the capacity to ‘simulate’ the human depends in the first instance on purposefully collecting data. Data is required to train the simulation.

In his 1996 book, The Simulation of Surveillance: Hypercontrol in Telematic Societies, William Bogard carefully illustrates the history of this relationship between simulation and surveillance. He proposes that over the past 150 years our societies have undergone a ‘revolution in technological systems of control’. That is, our societies have developed increasingly complex machines for controlling information, and for using information to organise human life. One of the key characteristics of the industrial societies that emerged in the 1800s was the creation of bureaucratic control of information. Bureaucracies were machines for gathering and storing information using libraries, depositories, archives, books, forms and accounts. They processed that information in standardised ways through the use of handbooks, procedures, laws, policies, rules, standards and models. Since World War II these bureaucratic processes have become ‘vastly magnified’ via computerisation. Bureaucracies rely on surveillance. They collect information in order to monitor people, populations and processes. Think of the way a workplace, school or prison ‘watches over’ its employees, students or prisoners in order to control and direct what they do.

Bogard argues that increased computerisation has resulted in surveillance becoming coupled with processes of simulation. Remember, surveillance is the purposeful observation, collection and analysis of information, while simulation is the process of modelling processes in order to profile, predict or clone. Inspired by the French theorist of surveillance Michel Foucault, Bogard suggests that surveillance operates as a ‘fantasy of power’ which in the post-industrial world ‘extends to the creation of virtual forms of control within virtual societies’. What’s a ‘fantasy of power’ here? Well, firstly, it is a kind of social imaginary, a set of techniques through which individuals ‘internalise’ the process of surveillance. They learn to watch over themselves, they learn to perform the procedures of surveillance on themselves, in advance of the technologies themselves performing those techniques. Let me give a very simple example. You might go to search something on Google, and then stop because you think: ‘Hmm, Google is watching me. I don’t want it to know I searched that.’ You discipline yourself, pre-empting the disciplinary power of the technology.

But, secondly, a fantasy of power gestures at something else important too. It suggests a society where we come to act as if we believe in the predictive capacity of surveillance machines. That is, in practice we trust the capacity of bureaucratic and technical machines to watch over and manage social life. We trust machines to reliably extend human capacities. By the 1990s, the socio-technical process of simulation had become an ordinary part of many social institutions. For instance, computerised ‘experts’ increasingly assist doctors in making complex medical diagnoses, algorithmic models help prisons determine which prisoners should be eligible for parole, statistical modelling projects the need for public infrastructure like roads and schools, and satellite surveillance informs foreign policy decisions.

The ‘fantasy’ driving government, military and corporate planning is that the capacity of digital machines to collect and process data can extend their capacity to exercise control beyond what humans alone might accomplish. Across government, corporate and military domains in the post-war period ‘simulations’ became standard exercises. Simulations are used by engineers to project design flaws and tolerances in proposed buildings, for instance to test whether a building could withstand an earthquake before that building is even built. They are used by ecologists to model environments and ecosystems, by educators as pedagogical tools, by the military to train pilots and by meteorologists to predict the weather. In each of these examples data is fed into a machine which predicts the likelihood of events in the future. Corporations increasingly base investment decisions on market simulations and, more recently, nanoscientists have devised miniaturised machines that can be used in areas as diverse as the creation of synthetic molecules for cancer therapy and the production of electronic circuits.
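The generic pattern is worth seeing in miniature: data about the world is fed into a model, the model is run forward many times, and a likelihood is read off. Here is a toy sketch in that spirit; the ‘building under earthquake load’ numbers are invented for illustration, not real engineering values.

```python
# A toy Monte Carlo simulation: estimate the likelihood of a future event
# by running an (invented) model forward many times.
import random
random.seed(0)

def building_survives(ground_acceleration: float) -> bool:
    # Toy structural model with an assumed design tolerance.
    design_tolerance = 0.8
    return ground_acceleration < design_tolerance

trials = 100_000
failures = 0
for _ in range(trials):
    # Draw a plausible earthquake from an invented distribution of past events.
    quake = random.gauss(0.4, 0.2)
    if not building_survives(quake):
        failures += 1

print(f"Estimated probability of failure: {failures / trials:.1%}")
```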

Bogard calls these ‘telematic societies’ driven by the fantasy that they can ‘solve the problem of perceptual control at a distance through technologies for cutting the time transmission of information to zero.’ That is, these societies operate as if all they need to do is create technologies that can watch and respond to populations in real time. In these societies powerful groups invest in creating a seamless link between ‘surveillance’ and ‘response’, between collecting data, processing it and acting on it.

Bogard’s original insight, then, is to identify – in the mid 1990s no less – that we are becoming societies where ‘technologies of supervision’ that collect information about human lives and environments are connected to ‘technologies of simulation’ that predict and respond in real time. That’s a wonderfully evocative insight to think about in the era of the FANGs. Facebook, Amazon, Netflix and Google are each corporations whose basic strategic logic and engineering culture are organised around the creation of new configurations of technologies of supervision and data collection, and technologies of simulation, prediction, response and augmentation.

Nicholas Carah and Deborah Thomas

Sensors

Let’s start with two beer bottles: the Heineken Ignite and the Strongbow StartCap.


What do these bottles share in common? They are both bottles of beer that double as media devices and sensors. Each of them was engineered by an advertising agency as part of promoting the brand of beer. We might say the advertisers expanded the affective capacities of the bottle. Bottles of beer have always affected consumers. You pop the cap, you drink the beer and it affects your body and mood. It makes you feel different. Sometimes excited, a bit buzzy, other times mellow, sometimes morose. What these advertisers did, though, was engineer the bottle into an input/output, or I/O, device that can store and transmit information. The idea of an I/O device is a useful metaphor for thinking about ‘transfer points’ between digital media systems and our lives, bodies and societies. I/O refers to ‘input/output’: any program, operation or device that transfers data into a computing system. A transfer of data is an output from one device and an input into another, hence ‘I’ for input and ‘O’ for output. I/O devices convert sensory stimuli into a digital form. For example, the keyboard translates the physical movement of the fingers into a series of digital commands in a software program. The mouse translates the fine motor skills of the hand into digital data that moves a cursor on a screen.

The bottle becomes more than a container for beer. The Heineken Ignite bottle had in its base LEDs, a microprocessor, an accelerometer and a wireless transmitter. These devices sensed and transmitted information. The accelerometer and wireless transmitter worked as sensors that could stimulate the lights in the bottle to flash to the beat of the music and the movement of people in clubs. Heineken claimed the intention was to create a mobile media device that captured people’s attention without them having to engage with the screen of the smartphone. Sort of. I think the advertisers understood full well that if you give drunk people in a club a bottle that flashes, they are highly likely to capture images and videos of it on their smartphones for sharing on social media. The bottle, then, is a device that prompts people to convert the sociality of the club into media content and data on social media platforms.
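As a way of picturing the bottle as an I/O device, here is a minimal sketch of the sensing loop: movement is read in, translated into a decision, and the LEDs respond. The sensor values, threshold and behaviour are invented stand-ins; Heineken’s actual firmware is not publicly documented.

```python
# A toy input/output loop: sense movement, translate it into data, stimulate a response.
import random
import time

def read_accelerometer() -> float:
    # Stand-in for the real sensor: returns movement intensity.
    return random.uniform(0.0, 2.0)

def flash_leds(brightness: float) -> None:
    # Stand-in for driving the LEDs; here we just print.
    print(f"LEDs at {brightness:.0%}")

DANCE_THRESHOLD = 1.2  # invented value

for _ in range(5):
    movement = read_accelerometer()           # input: the body's movement, sensed
    if movement > DANCE_THRESHOLD:            # translate: movement becomes a decision
        flash_leds(min(movement / 2.0, 1.0))  # output: stimulate the crowd
    time.sleep(0.1)
```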

Strongbow’s StartCap followed a similar logic. It was sold in specially-engineered bars. When you flipped the cap off the bottle, an RFID chip in the cap would trigger responses in the club. For instance, the cap might pop off and that might trigger confetti to drop from the ceiling, or a light show to happen, or a song to play. These bottles are I/O devices that sit at the touchpoint between digital media infrastructure and human bodies. They sense action in the club, respond to that action, and in doing so stimulate responses from humans. Marketers are experimenting with beer bottles in part because they are an object that is held by the human body in social and cultural situations.
Here, we can see advertisers approaching branding as not only a process of symbolic persuasion. They are not really making an ‘advertisement’ that contains a message we consume; rather, they are engineering a cultural experience. They are using media as a data-processing infrastructure to sense, process and modulate humans, their feelings, bodies and cultural practices.

We should pay attention to advertisers in part because they are critical actors in experimenting with new media technologies. Via branding exercises like the Heineken Ignite and Strongbow StartCap we can see advertisers treating media as data-processing sensors. Jeremy Packer suggests that the capacity to exercise control using digital media is ‘founded upon the ability to capture, measure, and experiment with reality’. In the present moment, we need to pay attention to the increasing capacity of media to ‘sense’, calculate and experiment with our lived experience.

These two beer bottles are part of a larger process of weaving digital media and networks into our everyday infrastructure. This gets called the ‘internet of things’. Watches, televisions, cars, fridges, kettles, air conditioners and home stereos are just some of the everyday objects that are getting ‘connected to the internet’. My friend’s dog is even connected to the internet. Well, not the dog itself, but the dog’s collar. They can load an app and see where the dog is while they’re at work. This ‘thingification’ of the internet is promoted to us as living in sensor-rich smart homes and environments.

As you drive home, your car knows when you are getting close and turns on the air conditioning, and maybe flicks on the kettle. You can think about how the logic of turning everyday objects into sensory devices works. Once your car is a sensor, it can start collecting all kinds of information. Say there is a sensor in the steering wheel that can record information about how erratically you are driving, or say there’s a microphone in the car that can hear the tone of your voice. The car might be able to sense what kind of mood you are in as you drive home from work. In a bad mood? It might tell your home stereo to put on some chilled-out music and dim the lighting by the time you arrive home. OK, I kind of made that up. But it’s not ridiculous.

Platforms like Google and Amazon imagine us living alongside all sorts of artificially-intelligent things. You open the fridge and say ‘Ah, we’re out of milk!’ Your home assistant hears you say this, and puts it on your shopping list. If you get home deliveries it might automatically order it for you. If not, it might sense when you are at the shops and send a reminder to your phone. A basic point I’m trying to draw out here is that the engineering logic of media platforms does not begin and end with the smartphone and its apps. Platform engineers consider that all kinds of everyday objects will be ‘input/output’ devices that are incorporated within the platform architecture. These devices act as ‘switches’ or ‘transfer points’ between the bodily capacities of consumers and the calculative capacities of media platforms. These devices sense by recording information about the expressions and movements of humans and their environments, they translate by transforming reality into data that can be processed, and they stimulate by delivering impulses and messages to users. I think of these devices as ‘affect switches’ in the sense that they transfer the human capacity to affect into the calculative apparatus of media infrastructure. A device that can ‘sense’ your mood by recording your voice, or your movement, or what you’ve been tapping or swiping, for instance, is translating some information about your lived experience – how you feel – into digital data. And then it is processing that information and making a decision about how it might modulate your mood.

To affect is to have an influence on or make a difference to something; this is often meant particularly in relation to feelings or emotions. A switch is a device that can coordinate or stimulate movement in a system; it can turn something on or off, or change its direction or focus. An ‘affect switch’, then, is a device that can alter the direction of human action or attention. Affect switches are techno-cultural devices for conducting and governing the dynamic and indeterminate interactions between consumers, material cultural spaces and media platforms. The beer bottles I started out with are affect switches. They sit at the touchpoint between bodies and media platforms. They sense information in the environment, and then stimulate particular moods and reactions from users.
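Here is a toy sketch of an ‘affect switch’ in this sense: a device senses something about the body, translates it into data, and decides how to modulate mood. The sensing function, thresholds and responses are invented for illustration, not any real product’s logic.

```python
# A toy affect switch: sense -> translate -> stimulate.
import random

def sense_voice_energy() -> float:
    # Stand-in for a microphone feature (e.g. loudness or pitch variability).
    return random.uniform(0.0, 1.0)

def choose_modulation(energy: float) -> str:
    # Translate the sensed signal into a decision about the environment.
    if energy > 0.7:
        return "play something calming, dim the lights"
    if energy < 0.3:
        return "play something upbeat, brighten the lights"
    return "leave things as they are"

reading = sense_voice_energy()                                        # sense
print(f"voice energy={reading:.2f} -> {choose_modulation(reading)}") # stimulate
```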

OK, there’s another crucial point these beer bottles help us make. Popular culture can sometimes seduce us into thinking new media is about virtual simulations out there in cyberspace, that media is somehow ephemeral. That’s a ruse: digital media are material objects and infrastructure. They exist in the real world, and involve the transformation of real-world objects and spaces. The beer bottles are one example of everyday objects ‘becoming digital’. They retain their material character and place in our world; the change is that they are now connected to a digital media infrastructure. Mark Andrejevic and Mark Burdon suggest that this world, where more and more objects become touchpoints between our lived experience and the data-processing power of digital media, is a ‘sensor society’. They suggest our homes, workplaces, cars, shopping centres and public places are filling up with 'probes [or sensors] that capture the rhythms of the daily lives of persons, things, environments, and their interactions'.

In their way of thinking, a sensor is 'any device that automatically captures and records data that can then be transmitted, stored, and analysed'; sensors 'do not watch and listen so much as they detect and record'. This leads them to make a really critical point. When we see a device as a sensor in a sensor society we must think not only of what it records but also of how that data is stored, who has access to it and how it is used. We are all ‘sensed’ by sensors; we all have data collected about our bodies, movements and expressions. But who gets to do the sensing? Who gets to keep, process and benefit from all this sensory information that is collected? We live in a world where more and more everyday objects are becoming sensors that collect data about us.

This prompts us to rethink the ways in which we participate in a digital world. Much of our participation is relatively passive. Passive data is the kind of data that is collected through sensors; it is data that we do not necessarily consciously know we are creating. Sure, we might immediately think of our smartphone here. Often it is collecting data that we don’t really think about. Go check the location services on your phone. Unless you switched it off, you’ll see it has a fairly complete record of where you go. It has probably identified your home and work.
Periodically there is controversy about apps using the microphone to passively monitor your conversations. These are moments where we are not actively participating by using the phone to, say, post something to social media; rather, it is passively sitting in the background monitoring us. This kind of passive monitoring goes way beyond the phone.

Here are two examples, one kooky, one creepy.

Kooky first. In July 2017, it was reported that Roomba vacuum cleaners were collecting information about your home. The vacuum needs to collect data in order to learn how to vacuum your home – to figure out where walls and furniture are. It creates a map of your home. But it doesn’t just use that map for its own cleaning. That map is also a data set about what objects it ‘bumps into’ in your home. The data is stored by the parent company, which is considering selling it. The data could be used to make predictions about what kind of family you have or what kinds of objects you own. And, from there, advertising might be targeted accordingly.

OK, and here’s creepy. Earlier in 2017 the ‘smart’ vibrator manufacturer Standard Innovation settled a lawsuit for $3.75 million. These vibrators allowed users to remotely turn on their lover using a Bluetooth connection. Two hackers demonstrated how the vibrator could be hacked and remotely activated. But, get this, the smartphone app that was used to control the vibrator collected information about users, including information about temperature and vibration intensity, without users’ consent. So, here it is: an intimate personal object doubling as a sensor that transfers information about sexual practices back to unknown third parties.

For Andrejevic and Burdon the sensor society is not just a ‘world in which the interactive devices and applications that populate the digital information environment come to double as sensors’ but also the emerging practices ‘of data collection and use that complicate and reconfigure received categories of privacy, surveillance, and sense-making’. The users and collectors of the troves of data that sensors collect range from government spy organisations such as the NSA, to data analytics companies, advertising companies, insurance agencies, hedge fund managers and the companies that collect the information in the first place, ranging from social media platforms to appliance manufacturers like General Electric. The organisations that can access this sort of big data are not ordinary individuals. By its very nature this data is useful only to entities that want to measure and affect large numbers of people – those who want to act on a society-wide level.
Andrejevic and Burdon tell us that ‘structural asymmetries (are) built into the very notion of a sensor society insofar as the forms of actionable information it generates are shaped and controlled by those who have access to the sensing and analytical infrastructure.’

A sensor society, then, is one where everyday objects are connected to a digital media system. These objects collect data. The consequence of having more objects, in more everyday situations, collecting more data, is that we are becoming a society characterised by the collection and processing of information on an enormous scale. As we become a society that collects more data than any humans can interpret, we begin to create machines that process that data and make decisions. Patterns of human life that are not visible to humans are visible to machines. A sensor-driven media system doesn’t care about what we think, or about enabling us to understand one another, as much as it aims to develop the capacity to make us visible and to predict our actions. 'Machines do not attempt to understand content in the way a human reader might.' A human would be unable to keep up with the vast amount of data involved, but algorithms and artificial intelligence can. Sensors are a critical part of the larger media platform eco-system. Sensors ‘connect’ that system to lived experience and living bodies; they enable calculative media platforms to learn about human life and, as a consequence, make more machine-driven interventions in it.

 

Participation in Experiments

Here’s a Tweet I saw an hour ago. It’s a play on those memes that compare social media platforms. This one, for 2017, goes:

Facebook: Essential oils
Snapchat: I’m a bunny!
Instagram: I ate a hamburger
Twitter: [all caps] THIS COUNTRY IS BURNING TO THE GROUND

OK, it reminds us that platforms are different. But also, that platforms can affect our mood. And, in the era of Trump, the experience of Twitter for many people is frantic, panic-inducing, rancorous.

Imagine this. Imagine that the ‘mood’ of the platform (its feel-goodness in the case of Instagram, its agitation in the case of Twitter) is not just created by the users, but is deliberately engineered by the platform. And imagine they were doing that just to see what would happen to users.

Say you use Facebook every day. You open the app on your phone and scan up and down the News Feed, you like friends’ posts, you share news stories, you occasionally comment on someone’s post. Then, one day, all the posts in your News Feed are a little more negative. Maybe you don’t notice, maybe you do. But you’d be inclined to think people are a bit unhappy today. What if, though, the posts in your feed were negative one day because Facebook was running an experiment in which they randomly made some users’ feeds negative to see what would happen?

That’s not a hypothetical story. Facebook actually did that, to 689,003 users, in an experiment reported in 2014. They changed the ‘mood’ of their News Feeds. Some people got happier feeds, some got sadder feeds. They wanted to see whether, if they ‘tweaked’ your feed to be sadder, you would get sad. To this day they still have not told the users who were ‘selected’ for this experiment that this happened to them. If you use Facebook, you might have been one of them. You might care, you might not. The point is this: media platforms are engineering new kinds of interventions in public culture. These engineering projects include machine learning, artificial intelligence and virtual reality.

The development and training of an algorithmic media infrastructure depends on continuous experimentation with users. Public communication on social media routinely doubles as participation in experiments like A/B tests, which are part of the everyday experience of using platforms like Google and Facebook. These tests are invisible to users. An A/B test involves creating alternative versions of a web page, set of search results, or feed of content. Group A is diverted to the test version, Group B is kept on the current version, and their behaviours are compared. A/B testing enables the continuous evolution of platform interfaces and algorithms.
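Here is a minimal sketch of that pattern: users are randomly assigned to a control version (A) or a test version (B), and their behaviour is compared. The click-through numbers are simulated; real platforms run this logic against live traffic and far richer metrics.

```python
# A toy A/B test: random assignment, then compare behaviour across groups.
import random
random.seed(1)

def simulate_user(version: str) -> bool:
    # Stand-in for real behaviour: version 'B' is assumed to nudge clicks up slightly.
    return random.random() < (0.11 if version == "B" else 0.10)

results = {"A": [], "B": []}
for user_id in range(100_000):
    version = random.choice(["A", "B"])   # random assignment to control or test
    results[version].append(simulate_user(version))

for version, clicks in results.items():
    print(f"Version {version}: {sum(clicks) / len(clicks):.2%} click-through")
```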

Wired reported that in 2011 Google ‘ran more than 7000 A/B tests on its search algorithm’. The results of these tests informed the ongoing development of the algorithm’s decision making sequences.

Two widely publicised experiments – popularly known as the ‘mood’ and ‘voting’ experiments – by Facebook illustrate how these A/B tests are woven into public culture, contribute to the design of platforms, and raise substantial questions about the impact the data processing power of media has on public communication. Each experiment was reported in peer-reviewed scientific journals and generated extensive public debate.

Let’s recap them both.

Facebook engineers and researchers published the ‘voting’ experiment in Nature in 2012. The experiment was conducted during the 2010 US congressional election and involved 61 million Facebook users. The researchers explained that on the day of the US congressional elections all US Facebook users who accessed the platform were randomly assigned to a ‘social message’, ‘informational message’ or ‘control’ group. The 60 million users assigned to the social message group were shown a button that read ‘I Voted’, together with a link to poll information, a counter of how many Facebook users had reported voting, and photos of friends who had voted. The informational message group were shown the same information, except for the photos of friends.

The control group were not shown any message relating to voting. 6.3 million Facebook users were then matched to public voting records, so that their activity on Facebook could be compared to their actual voting activity. The researchers found that users ‘who received the social message were 0.39% more likely to vote’ and on this basis estimated that the ‘I Voted’ button ‘increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters, for a total of 340,000 additional votes’.

The experiment, and Facebook’s reporting of it, reveals how the platform understands itself as infrastructure for engineering public social action: in this case, voting in an election. The legal scholar and critic Jonathan Zittrain described the experiment as ‘civic-engineering’. The ambivalence in this term is important. A more positive understanding of civic engineering might present it as engineering for the public good. A more negative interpretation might see it as manipulative engineering of civic processes. Facebook certainly presented the experiment as a contribution to the democratic processes of civic society. They illustrated that their platform could mobilise participation in elections. The more profound lesson, however, is the power the experiment illustrates digital media may be acquiring in shaping the electoral activity of citizens.

Data-driven voter mobilisation methods have been used by the Obama, Clinton and Trump campaigns in recent Presidential elections. These data-driven models draw on a combination of market research, social media and public records data. While the creation of data-driven voter mobilisation within campaigns might be part of the strategic contest of politics, the Facebook experiment generates more profound questions.

Jonathan Zittrain, like many critics, raised questions about the capacity of Facebook, as an ostensibly politically neutral media institution, to covertly influence elections. The experiment could be run again, except without choosing participants at random; instead, Facebook could choose to mobilise some participants based on their political affiliations and preferences. To draw a comparison with the journalism of the twentieth century, no media proprietor in the past could automatically prevent a specified part of the public from reading information they published about an election.

Facebook’s ‘mood’ experiment was reported in the Proceedings of the National Academy of Sciences in 2014. The mood experiment involved manipulation of users’ News Feeds similar to the voting experiment. The purpose of this study was to test whether ‘emotional states’ could be transferred via the News Feed. The experiment involved 689,003 Facebook users. To this day, none of them know they were involved in the experiment. The researchers explained that the ‘experiment manipulated the extent to which people were exposed to emotional expressions in their News Feed’. For one week one group of users was shown a News Feed with reduced positive emotional content from friends, while another group was shown reduced negative emotional content. The researchers reported that ‘when positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred’. In short, Facebook reported that they could, to an admittedly small degree, manipulate the emotions of users by tweaking the News Feed algorithm.

Much of the public debate about the mood experiment focussed on the ‘manipulation’ of the user experience, the question of informed consent to participate in A/B experiments, and the potential harm of manipulating the moods of vulnerable users. These concerns matter. But, as was rightly noted, focusing on this one experiment obscures the fact that the manipulation of the user News Feed is a daily occurrence; it is just that this experiment was publicly reported. More importantly, the voting and mood experiments illustrate how public communication doubles as the creation of vast troves of data and as participation in experiments with that data. When we express our views on Facebook we do not only persuade other humans, we also contribute to the compilation of databases and the training of algorithms that can be used to shape our future participation in public culture.

The responses of critics like Jonathan Zittrain, Kate Crawford and Joseph Turow to the data-driven experiments of platforms like Facebook highlight some of the new public interest concerns these experiments generate. Crawford argues that all users should be able to choose to ‘opt in’ and ‘opt out’ of A/B experiments, and to see the results of experiments they participated in. Zittrain proposes that platforms should be made ‘information fiduciaries’, in the way that other professions like doctors and lawyers are. Like Crawford, he envisions that this would require users to be notified of how data is used and for what purpose, and would proscribe certain uses of data. Turow proposes that all users have access to a dashboard where they can see how data is used to shape their experience, and choose to ‘remove’ or ‘correct’ any data in their profile.
All these suggestions seem technically feasible, but would likely meet stiff resistance from the platforms. They are helpful because they help to articulate an emerging range of public interest communication concerns specifically related to our participation in the creation of data, and the use of that data to shape our thoughts, feelings and actions.

These proposals need to be considered as collective actions, not just as tools that give individual users more choice.
The bigger question is that, as much as the algorithmic infrastructure of media platforms generates pressing questions about who speaks and who is heard, it also generates pressing questions about who gets to experiment with data. Public communication is now a valuable resource used to experiment with public life. Mark Andrejevic describes this as the ‘big data divide’. The power relations of public communication now also include who has access to the infrastructure to process public culture as data and to intervene in it on the basis of those experiments.

 

Conceptualising media platforms: from a culture of connectivity to a platform society

Let’s get conceptual. I’m going to go through Jose van Dijck’s conceptual framework for social media platforms from her book The Culture of Connectivity. Van Dijck offers us one of the most useful tools for thinking about media platforms as socio-technical engineering projects.

Van Dijck is a leading public intellectual in debates about the culture of social media platforms and the impact they have on public life. In addition to reading her book, check out van Dijck's public lectures, many of which are available online. I've posted two below. The first is a public lecture at the London School of Economics, 'From a Culture of Connectivity to a Platform Society', and the other is a keynote address, 'The Platform Society', to the Association of Internet Researchers in Berlin. Both these lectures are from 2016, and they extend and develop the way of thinking about media platforms that van Dijck sets out in her book in 2013.

From a Culture of Connectivity to a Platform Society

The Platform Society

Jose van Dijck's framework for social media platforms

This is Jose van Dijck’s definition of a platform:

The providers of software, (sometimes) hardware, and services that help code social activities into a computational architecture; they process (meta)data through algorithms and formatted protocols before presenting their interpreted logic in the form of user-friendly interfaces with default settings that reflect the platform owner’s strategic choices.

Van Dijck argues that platforms are not only computational but must also be understood culturally and politically; they can be seen as a metaphor for 'political stages and performative infrastructures'. Platforms should not be seen as mere facilitators. As well as hosting human interactions they also shape the form of those interactions. Social media platforms are defined by their use of a number of components: data, metadata, algorithms, protocols, interfaces and defaults. These generic components offer a useful schema for identifying the significant elements of social media platforms and conceptualising the interplay between these elements at the technical level and with users at the socio-technical level.

Data, metadata, algorithms, protocols, interfaces, defaults. Let’s go through each of these.

Data. At its simplest level, data is simply information coded for use in computer-based communications. It is bits and bytes, pixels, code and so on. Data can be seen as the base material that a social media platform needs in order for it to work. It is the material that is provided to the platform by the user. It is made up of both the information that the user knows they are providing and the information that the platform can collect. It is profile information like name and date of birth, as well as anything else uploaded by a user, such as pictures and video. It is also, eventually, information created by the user when they undertake connectivity – who they connect with and how they describe those connections.

Metadata is data about data. It is, in one sense, information that can be used to manage other pieces of information. Examples include tags provided by users that include keywords. It is also information about where and when the data was created. So the information about when a person posted to Facebook, and from what location, is metadata. Again, metadata can be collected by platforms and treated as further valuable data.

Algorithm. An algorithm is 'a finite list of well-defined instructions for calculating a function, a step-by-step directive for processing or automatic reasoning that orders the machine to produce a certain output from a given input'. In other words, it is the mathematics or code by which the vast amount of data and metadata provided to and created by social networking platforms is organised. The algorithm makes raw data into something that can be used. For example, the way that information is ordered on an individual’s Facebook News Feed is calculated by an algorithm that takes into account data about the individual’s tastes, their connections, when and how they access Facebook, what other individuals in their network are looking at, and how what Facebook knows about them fits with the commercial interests of Facebook’s advertisers. Or think of the Netflix algorithm that attempts to anticipate and recommend films and TV a user will like based on their past preferences. (A toy sketch of this kind of ranking calculation follows these component definitions.)

Protocol. Protocols are the rules of the social media platform. However, they are not a set of laws that people may choose to obey; rather, they are the way a social media platform is set up so that, in order to use it, a user must follow the protocols. For example, in order for anything to happen on Facebook one must set up a profile. In order to gain any benefit from Facebook one must ‘friend’ people and like pages. In undertaking these activities the user generates data that can be utilised by Facebook. Not all protocols are compulsory, but they are strongly suggested. Protocols on other platforms might include following people on Twitter or creating wish lists on Amazon or Netflix. Protocols govern how users can use platforms; they “guide users through its preferred pathways; they impose a hegemonic logic onto a mediated social practice”.

Interface. There are two kinds of interfaces in social media platforms: the internal and the external. The external interface is the one seen by users – basically just what the platform looks like. So Facebook has scrollable columns and various types of featured information, Netflix sorts film and TV into somewhat idiosyncratic genres, and so on. These interfaces are designed to improve connectivity. The internal interface is seen only by the platform owner. Through it, the visibility and availability of aspects of the external interface are controlled.

Defaults. Defaults are settings automatically assigned to a software application to channel user behaviour in a certain way. Defaults are not just technical but also ideological manoeuvrings: if changing a default takes effort, users are more likely to conform to the site’s decision architecture. A notorious default is Facebook’s setting to distribute a message to everyone, rather than to friends only. These platform elements are designed to construct or imagine a particular kind of user and to direct them in particular ways. Decisions about how to imagine and direct users are often shaped by the commercial imperatives of platform owners.
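Here is the toy ranking sketch promised above, in the spirit of van Dijck’s description of the algorithm component: a handful of invented features (affinity, recency, advertiser value) are weighted and combined to order a feed. Real platform algorithms are proprietary and vastly more complex; everything here is illustrative.

```python
# A toy 'feed ranking' calculation: weight data and metadata about each post.
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often the user interacts with this author (0-1)
    hours_old: float         # recency of the post
    advertiser_value: float  # how much the post serves commercial interests (0-1)

def score(post: Post) -> float:
    # Invented weights: a blend of personal relevance, freshness and commercial value.
    recency = 1.0 / (1.0 + post.hours_old)
    return 0.5 * post.author_affinity + 0.3 * recency + 0.2 * post.advertiser_value

feed = [Post(0.9, 5.0, 0.1), Post(0.2, 0.5, 0.8), Post(0.6, 24.0, 0.0)]
for post in sorted(feed, key=score, reverse=True):
    print(round(score(post), 3), post)
```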

Within the microsystem of a particular platform, the imperative toward generating value leads to the construction of a particular kind of user – often one who is engaged frequently and extensively, is expressive and integrates the platform into their everyday routines. The objective of platform designers is to ‘create’ this kind of user. The platform designer aims to orchestrate the interplay between the human user and the machinery of the platform. Within the larger social media eco-system, relationships form between platforms as they aim to make their databases ‘mutually profitable’. For example, Facebook and Spotify integrate their databases in various mutually beneficial ways. Over time, this interdependence leads to a certain level of platform interoperability as data formats and protocols become standardised. Van Dijck’s account prompts us to consider how social media platforms ‘engineer’ sociality. Connectivity is the process of ‘doing’ this engineering.

Van Dijck argues that 'social media are inevitably automated systems that engineer and manipulate connections.' Social media identify what people want by using algorithms to establish how people interact. Given this, the word ‘social’ can be seen to mean both connectedness, which is how humans interact with one another, and connectivity – which, as I have established, is how humans interact with one another in a manner facilitated by the logic of machines such as platforms.

Connectedness and connectivity are publicly treated as the same thing by the creators of social media platforms because it serves their economic interests. In fact they focus on connectedness and try to distract users from the connectivity. They try to hide the fact that social media platforms make money by engineering connectivity, that is, by developing platforms that can sense, process and modulate the interplay between humans and humans, and humans and machines.

A final word from van Dijck:

Zuckerberg deploys a sort of newspeak when claiming that technology merely enables or facilitates social activity; however, ‘making the web social’ in reality means ‘making sociality technical’.
 

What is a platform?

What’s a platform?

I type ‘platform’ into Google. Ask a platform what a platform is. Google suggests a nearby bar, a train station, a Wikipedia entry. Let me try the Oxford Dictionary. The term platform emerges in the early 1500s. In basic terms it is a surface or area on which something may stand. That something might be a person making a speech or it might be a weapon like a cannon. There we go, to begin, a platform is infrastructural. A platform stands under something: a person, a weapon, software. A platform is something upon which other things happen. A stage upon which performances happen, hardware upon which software runs, a launch pad upon which a rocket is launched into outer space.

By the mid-1500s, the term platform also comes to mean something that enables other things to happen. It refers not just to a physical stage, but can also mean a plan or a scheme. To establish a platform was to create the basis for taking some action in the world. For instance, a collection of individuals might gather together and establish a political platform: a set of ideas and a plan for executing them. By the late twentieth century a platform referred to a computer system architecture, a type of machine and operating system, upon which software applications are run. So, a platform is infrastructure. It is something upon which things happen. Platforms facilitate and enable: public speech, rocket launches, software applications, political agreements.

Platforms are also governed by technical and social rules. Think of a public stage. It is governed by technical rules. The platform can only extend as far as its capacity to amplify speech. The reach of the platform is limited to those who can hear the speaker. It is governed by social rules. Agreements form about who is allowed to take the stage to speak, how long they can speak for, what they can speak about, and how people in the audience should act.

The past decade has seen the rise of ‘platform’ companies that are transforming the relationship between media and culture. The market shorthand for these platforms is the FANGs: Facebook, Amazon, Netflix and Google. Think of the list of social institutions and practices that have been irrevocably changed, and in some cases, destroyed by the emergence of the FANGs: journalism, television, advertising, shopping, finding your way around the city, politics, elections, dating, gambling and fitness. For a start.

Alongside the behemoths are an array of platforms that each in their own way are the site of major cultural disruption and innovation. Twitter is remaking the speed and quality of public speech. Instagram is reinventing photography, and along with it how we portray and imagine our lives and bodies. Snapchat is collapsing the boundary between the public and intimate. And, along with it, inventing an immersive augmented reality where we see our bodies and world overlaid with digital simulations. Tinder is changing the rituals of sex, love and dating. Fitbit is remodelling how we understand our bodies.

What do these corporations make?

The simple answer is that they engineer platforms that orchestrate the interplay between the calculative power of digital computing and the lived experience of humans. If the media institutions of the twentieth century were highly efficient factories for producing content, the FANGs make platforms. Of course, some of them, like Amazon and Netflix also produce content, but their value proposition and their disruption comes from the platform.


The major platforms are a central part of a larger culture of media engineering. By media engineering, I mean the industrial process of configuring and linking together digital devices, sensors, interfaces, algorithms, and databases. Importantly, media engineering is an experimental technocultural process of shaping the interplay between digital computers and the creative nature of cultural life. What do I mean by the interplay between the calculative power of digital devices and the open-ended nature of lived experience?

This is the sound of a bio-reactive concert sponsored by Pepsi at the techno-culture festival SxSW. That’s right, a bio-reactive concert. What does that mean? It’s a concert where everyone in the audience is wearing a wristband that senses information about their body, and that bio-data is used to augment the concert experience in real time.

Carey Dunne, writing in Fast Company, explains:

At South by Southwest this year–at the Pepsi Bioreactive Concert, deejayed by A-Trak–event attendees donned Lightwave’s sensor-equipped wristbands, which measured their body temperature and audio and motion levels. This data was transmitted wirelessly to Lightwave’s system, which created interactive visuals that represent audience members as pixels, and which also triggered confetti and smoke machines and unlocked boozy prizes. Now, Lightwave has released an elaborate visualization of the party’s alcohol and dubstep-altered biodata, arranged in a song-by-song timeline of the concert. When A-Trak says “Show your energy,” the crowd delivers, with temperatures spiking. The moment the beat drops on Skrillex’s NRG, you see the biological effects of a crowd going wild. The hotter and sweatier they got, the more rewards they’d unlock.

This bio-reactive branded dance party is media engineering in action. We have living humans: making culture, enjoying themselves, affecting one another. And, we have material technologies that are sensing, calculating and augmenting that human experience. Those technologies are a combination of sensors, databases, algorithms, interfaces, screens, and speakers that together constitute a media platform. In this case: people dancing while wearing a digital wristband that can sense and transmit information about motion, audio and temperature; a DJ standing on a stage in a specially designed tent with decks and a PA. The sound goes out through the speakers. The speakers stimulate the bodies of the attendees. They move, they sweat, they scream and clap. The wristband senses their bodily expressions. That information is conveyed back to a database. Algorithms process the information. The information is visualised on an interface. The dancers can see their collective temperature and excitement, they can see the ‘scores’ of individual dancers. Algorithms decide to ‘unlock’ features for the crowd like confetti and free drinks.
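To make that loop concrete, here is a minimal sketch in Python of a sense, process and react cycle of this kind. The wristband readings, the thresholds and the ‘rewards’ are all invented for illustration; this is the shape of the feedback loop, not Lightwave’s actual system.

```python
# A toy sense -> process -> react loop, loosely modelled on the bio-reactive
# concert described above. All numbers and triggers are illustrative.
import random
import statistics

def read_wristbands(n_attendees: int) -> list[dict]:
    """Simulate one polling cycle of body data from the crowd's wristbands."""
    return [
        {
            "temperature": random.uniform(36.0, 39.0),  # degrees Celsius
            "motion": random.uniform(0.0, 1.0),          # normalised motion level
        }
        for _ in range(n_attendees)
    ]

def process(readings: list[dict]) -> float:
    """Collapse individual bodies into a single crowd 'energy' score (0 to 1)."""
    heat = statistics.mean(r["temperature"] for r in readings)
    movement = statistics.mean(r["motion"] for r in readings)
    return (heat - 36.0) / 3.0 * 0.5 + movement * 0.5

def react(energy: float) -> None:
    """Modulate the environment in response to the crowd's bodies."""
    if energy > 0.8:
        print("trigger: confetti cannons + unlock free drinks")
    elif energy > 0.5:
        print("trigger: smoke machines")
    else:
        print("display: 'show your energy' prompt on screens")

# Three passes of the loop: sense, calculate, stimulate, repeat.
for _ in range(3):
    react(process(read_wristbands(n_attendees=500)))
```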
In Pepsi’s bio-reactive concert we have a condensed version of the larger logic of media platforms.

Media platforms like Facebook, Google, Instagram and Snapchat are all – in various ways – bio-reactive. They sense our living bodies, process information about them, react to them, stimulate them, and learn to modulate and control them. So then, in the present moment, what is a media platform? A platform is a computational infrastructure that shapes social acts. An infrastructure that senses, processes information about, and attempts to shape lived experience and living bodies. In The Culture of Connectivity Jose van Dijck argues that social media are socio-technical. What does that mean? ‘Online sociality has increasingly become a coproduction of humans and machines’.

In the Pepsi dance tent at SxSW the kind of ‘sociality’ produced, that is the shared sense of enjoyment and spectacle, is a co-production of humans dancing and DJing and machines sensing and augmenting the experience. Jose van Dijck calls this co-production of humans and machines ‘connectivity’. Media platforms engineer connectivity. According to her, we live in an age of ‘platform sociality’. A social situation where platforms shape social life. Earlier versions of the web were premised on a concept of networked sociality. Many individuals talking to each other on a relatively level playing field. The codes and protocols that governed interaction were relatively neutral, transparent and open to negotiation. This was possible, in part, because of the relatively small scale of early forms of online culture: a bulletin board, an email list, a chat room. The platform sociality of social media programs what users can do, how participation and content are ranked, judged and made visible. This way of thinking about media platforms prompts us to think not just about how they give us the capacity to speak and be heard, to express ourselves, but rather how they configure, engineer, and program social life. And, critically, whose interests drive that process.

Connectivity and connectedness are different. Connectedness is an interaction between users that generates shared ways of life, whereas connectivity is the use of data and calculative tools that program these social connections in ways that control them for commercial and political purposes. That is to say, connectedness builds community; connectivity makes money. This is also why I tend to say media platform rather than social media. The term social media suggests that these media are defined by the social participation they facilitate. The term media platforms shifts our focus in a productive direction: it puts the emphasis on the political-economic project of engineering platform architecture.

 

Typewriters and self-trackers

This is Carl Schmitt writing in 1918 about a fictional civilisation, the Buribunks. Every person has a personal typewriter.

Every Buribunk, regardless of sex, is obligated to keep a diary on every second of his or her life. These diaries are handed over on a daily basis and collated by district. A screening is done according to both a subject and a personal index. Then, while rigidly enforcing copyright for each individual entry, all entries of an erotic, demonic, satiric, political, and so on nature are subsumed accordingly, and writers are catalogued by district. Thanks to a precise system, the organisation of these entries in a card catalogue allows for immediate identification of relevant persons and their circumstances. …the diaries are presented in monthly reports to the chief of the Buribunk Department, who can in this manner continuously supervise the psychological evolution of his province and report to a central agency.

The Buribunks use the typewriter to reflect on themselves; the information they record is archived in a database, where it is analysed by officials. The officials use the information to monitor the thoughts of individuals and the mood of particular regions, and as a kind of market research to create entertainment and culture that reflects the interests of Buribunk citizens. This story just flattens me.

Here is Schmitt in 1918, looking at the typewriter. He sees a device that standardises written script, which enables vast amounts of information to be created, stored and processed. Schmitt sees in the typewriter the beginning of a civilisation where everyday life is extensively recorded. In 1918, Schmitt sees not just the smartphone, the wearable, the social media platform but also the kind of personhood and society that would go along with it. Here’s a critical point in his story: Buribunks are very liberal, they can write whatever they like in their diary. They can even write about how they hate being made to write a diary. But, they cannot not write in the diary. So, you can say whatever you like, but you cannot say nothing. You must make your thoughts, movements, moods and feelings visible to a larger technocultural system. Schmitt here envisions a mode of social control that doesn’t depend on limiting the specific ideas people express, but rather works by making their ideas visible so that they can be worked on and modulated.

I find this aspect of Buribunkdom startling, not because Schmitt is the only one to articulate a mode of control like this. Of course other critical thinkers in the twentieth century have too: Foucault, Deleuze, and Zizek to name some. I find it interesting because here, in 1918, we have someone seeing personal media devices operating to manage the processes of representation and reconnaissance. Media technology is understood here as an instrument for both symbolic communication and data collection. So, here we are one hundred years later and we are the Buribunks. We use our smartphones every day to record reams of information about our lived experience: our expressions, preferences, conversations, movements, mood, sleep patterns and so on. This information is catalogued in enormous commercial and state databases. The information is used to shape the culture that we are immersed in. And, importantly, this system works by granting us individual freedom to express ourselves, and places relatively few limits on what we can say. But, this system does demand our participation. Participation is a forced choice. Very few of us successfully navigate everyday life without leaving behind data about our movements, preferences, habits and so on.

Schmitt imagined a large government bureaucracy where information would be stored on index cards. It was a kind of vast analogue database. Of course, instead of this, we have a complex network of digital databases owned by major platforms: Facebook, Google, Amazon and Netflix. And, these databases function as enormous market research engines that capture and process information which is used to shape our cultural world. What Schmitt saw in the typewriter has congealed in the smartphone, the critical device in a culture organised around the project of the self: the work of reflecting on and expressing the self as a basic task in everyday life. And, super importantly, these tasks are shaped by the tools we use to accomplish them.

Here is a famous line from Nietzsche about his typewriter, which he experimented with in the late nineteenth century: ‘Our writing tools are also working on our thoughts’. What did he mean? As we use media technologies to reflect on and express ourselves, we become entangled with them. They shape the way we think, act and express ourselves. They shape the way we imagine the possibilities of expression, and we might say that in our own minds we begin to think like typewriters, or films, or smartphones. We think using their grammar, rhythms and conventions. So, with the typewriter and the smartphone, we might say that these devices ‘work on us’ in the sense that they facilitate a process through which we ‘monitor’ and record data about ourselves.

OK, so I’ve suggested here that in Schmitt’s early twentieth century we can see the pre-history of the smartphone. Well, Kate Crawford and her colleagues actually offer us a study of this history. They trace the genealogy of the devices and practices we have used to weigh ourselves, from the 19th century through to present-day self-trackers like FitBits. Think about how FitBit talks to us in its advertisements. The FitBit is presented as a radically new technology offering precise information about the ‘real’ state of our bodies. This knowledge will be useful to us, it will make us fitter, happier, more desirable and more productive.

What Crawford and co. remind us is that this set of claims is not all that new. Devices that ‘work on’ or shape our thoughts and feelings about our bodies have been around a long time.
Weight scales are one example. From the 19th century onwards both the cultural uses and technical capacities of weight scales have changed. In cultural terms, weight scales shifted from the doctor’s office, to the street, to the home. They gradually changed from a specialist medical device used only by doctors, to public entertainment, to a private everyday domestic discipline.

So, here’s a run-through of Crawford’s narrative. Doctors began monitoring and recording patients’ weight toward the end of the 19th century, but this was not routine until the 20th century. In 1885, the public ‘penny scale’ was invented in Germany, and it then appeared in the US in grocery and drug stores. Modelled after the grandfather clock, with a large dial, the scale invited the customer to step on the weighing plate and place a penny in the slot.

Some penny scales rang a bell when the weight was displayed, while others played popular songs like ‘The Anvil Chorus’ or ‘Oh Promise Me’. The machines would also dispense offerings to lure people into weighing themselves in public, such as pictures of movie stars, horoscopes, and gum and candy. Built-in games such as Guess-Your-Weight would return your penny if you accurately placed the pointer at your weight before measurement. However, the extraction of money in exchange for data was the prime aim of the manufacturers; ‘It’s like tapping a gold mine’, claimed the Mills Novelty Company brochure in 1932.

The domestic weight scale first appeared in 1913. A smaller, more affordable device for the home, it allowed self-measurement in private to offset the embarrassment of publicly recording one’s weight, with its attendant noises and songs. The original weight scale is an analogue or technical form of media - our body weight makes an impression on a mechanism that is calibrated to record it on the scale. As a media device it collects and presents information to us but it is also important to consider how it is configured in broader social and identity-making processes. There is a gendered history of these devices.

Public weight scales were initially marketed to men, but in the 1920s women started to be encouraged to diet. Weight scales were presented to women as a private bathroom device to monitor their bodies, thus becoming a tool to ‘know’ and ‘manage’ the self. Here’s Crawford’s account of this:

Tracking one’s weight via the bathroom scale was not only about weight management - as early as the 1890s it assumed a form of self-knowledge. This continues today where value and self-worth can be attached to the number of pounds weighed.

Crawford refers to a study where a participant in an eating disorders group was asked how she feels if she does not weigh herself; ‘I don’t feel any way until I know the number on the scale. The numbers tell me how to feel’. That’s basically Nietzsche’s claim about the typewriter – the device is working on my thoughts. The numbers tell me how I feel. Similar claims are made around self-tracking devices. There are accounts of self-tracking and internalized surveillance taken to an extreme by people suffering from eating disorders.

So, the history of the weight scale reminds us that tracking devices are agents in shifting the process of knowing and controlling bodies, both individually and collectively, as they normalize and sometimes antagonize human bodies. The Fitbit turns the body’s movement into digital data: daily steps, distance travelled, calories burned, sleep quality, and so on. This is then fed into a ‘finely tuned algorithm’ that looks for ‘motion patterns’. There are two things at work in this sequence from the personal weight scale to the FitBit. One, a moral epistemology: knowing one’s weight and body habits can lead to an improved, possibly ideal self and life. And, two, an economic imperative: penny scales were significant money-making enterprises and there was a strong profit motive in encouraging people to weigh themselves ‘often’. With the penny scale the exchange of money for data is clear: spend a penny, receive a datum. But the collection of data is also private, going no further unless the user willingly shares it with others. This is far less clear with tracking devices. The user can reflect on their own data but that data will always be shared with the device maker and a range of unknown parties. What is then done with that data is not transparent and is ultimately at the discretion of the company. Consumer data are mediated by a smartphone app or an online interface and the user never sees how their data is aggregated, analysed, sold, or repurposed, nor do they get to make active decisions about how that data is used.
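As a small illustration of what a ‘motion pattern’ algorithm might look like in its simplest form, here is a toy step counter in Python. The threshold, the accelerometer trace and the method are all invented for the sake of the example; Fitbit’s actual algorithm is proprietary and far more elaborate.

```python
# A toy 'motion pattern' detector: count a step each time the magnitude of
# wrist acceleration rises above a threshold. Purely illustrative.
import math

def count_steps(accel_samples, threshold=1.2):
    """Count a step each time acceleration magnitude crosses above the threshold."""
    steps = 0
    above = False
    for (x, y, z) in accel_samples:
        magnitude = math.sqrt(x * x + y * y + z * z)  # in g, roughly 1.0 at rest
        if magnitude > threshold and not above:
            steps += 1
            above = True
        elif magnitude <= threshold:
            above = False
    return steps

# A short, made-up burst of wrist accelerometer readings (x, y, z in g).
walk = [(0.1, 0.2, 1.0), (0.3, 0.4, 1.3), (0.1, 0.1, 0.9),
        (0.2, 0.5, 1.4), (0.0, 0.1, 1.0), (0.3, 0.6, 1.3)]
print(count_steps(walk))  # -> 3 'steps' detected in this toy trace
```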

As the tagline for an advertisement for the wearable Microsoft Band states, ‘this device can know me better than I know myself, and can help me be a better human.’ So then, Crawford argues, ‘the wearable and the weight scale offer the promise of agency through mediated self-knowledge, within rhetorics of normative control and becoming one’s best self.’ On one hand, the ability to ‘know more through data’ can be experienced as pleasurable and powerful, a promise that is evident in this advertisement for the Microsoft Band.

OK, and on and on it goes. Ugh, corporate brand vomit. But, also, here’s the basic claim Microsoft is making: buy this device, it will work on you! It will change you. What wearables like the FitBit achieve that the personal weight scale could not is the real-time aggregation of data about all bodies, and the feeding back of this information to each user via customised screens. Again, Schmitt’s Buribunks had paper index cards and human-scale analysis of expressions. The FitBit is real-time biological analysis of millions of bodies. Here’s Crawford:
Statistical comparisons between bodies are necessarily contingent on a set of data points. Users get a personalized report, yet the system around them is designed for mass collection and analysis, so the user becomes ‘a body amidst other tracked bodies’. So ‘the user only gets to see their individual behaviour compared to a norm, a speck in the larger sea of data.’

Drawing on the work of Julie E Cohen, Crawford argues that this functions as a ‘bio-political public domain’… designed to ‘assimilate individual data profiles within larger patterns and nudge individual choices and preferences in directions that align with those patterns.’ So, ‘while there is a strong rhetoric of participation and inclusion’, there is a ‘near-complete lack of transparency regarding algorithms, outputs and uses of personal information’. And, this is the crucial point. Mark Andrejevic calls this the ‘big data divide’: the difference between the individuals who record their data and the corporations who collect and process it.
The lesson, then, is to think about the evolution of media devices for collecting, storing, processing, and disseminating information over a hundred-year period, as well as about the individual and social facets of digital media.

The FitBit and similar tracking devices that collect data about us and present that back to us as customised and individualised media content, become part of a much larger social system of control in several ways. The data that we give and view at an individual level is logged in databases that operate at population level. These devices are implicated in a cultural process based on self-monitoring and self-improvement. They work on our thoughts. And, importantly, these devices normalise data-driven participation and computation in our everyday lives. They become a foundational model for how we do our lives, bodies and identities.

 

Cybernetics

On the night of October 15, 1940, the German air force sent 236 bombers to London. ‘British defences were dismal’. They ‘managed to destroy only two planes’. London, the heart of the British Empire, was under siege.

In Rise of the Machines Thomas Rid explains this moment’s historical significance. ‘For the first time in history, one state had taken an entire military campaign to the skies to break another state’s will to resist’. Survival for Britain, and the Allies, would depend on their ability to engineer a way of shooting those German bombers out of the sky.

This problem triggered a ‘veritable explosion of scientific and industrial research’ which would result in ‘new thinking machines’ capable of making ‘autonomous decisions of life and death’.

Rid puts it this way:

Engineers often used duck shooting to explain the challenge of anticipating the position of a target. The experienced hunter sees the flying duck, his eyes send the visual information through nerves to the brain, the hunter’s brain computes the appropriate position for the rifle, and his arms adjust the rifle’s position, even ‘leading’ the target by predicting the duck’s flight path. The split-second process ends with a trigger pull. The shooter’s movements mimic an engineered system: the hunter is network, computer, and actuator in one. Replace the bird with a faraway and fast enemy aircraft and the hunter with an antiaircraft battery, and doing the work of eyes, brain, and arms becomes a major engineering challenge.

This challenge involved configuring the interplay between human and machine (to allow, for instance, a human operator to move an enormous weapon precisely at speed), engineering radar that could detect a plane before a human eye could see it, creating a network that would relay information from a radar to a computational device, building a computer that could predict the path of an enemy plane through the sky and predict where to fire, and constructing the apparatus that could transfer the prediction into the operation of a weapon in real time.

What was going on here? The creation, by a large network of scientists and engineers within a military-industrial system, of machines that could sense, learn, calculate and predict. Machines that could exert control over the material world via ongoing cycles of feedback and learning. In June 1944, the Germans launched V-1 rockets across the English Channel toward London. The V-1 was ‘a terrifying new German weapon: an entire unmanned aircraft that would dive into its target, not simply drop a bomb’. The first cruise missile.

At this moment, Rid explains:

A shrewd, high-tech system lay in wait on the other side of the English Channel, ready to intercept the robot intruders. As the low-flying buzz bombs cruised over the rough waves of the Atlantic coast, invisible and much faster pulses of microwaves gently touched each drone’s skin, 1707 times per second. These microwaves set in motion a complex feedback loop that would rip many of the approaching unmanned flying vehicles out of the sky.

The Allies had engineered a ‘cybernetic’ system. A combination of technological devices that could sense, calculate, predict and execute decisions. These devices included primitive digital computers.

Following the war, the mathematician Norbert Wiener was a key figure in popularising the idea of ‘cybernetics’. There are three critical concepts in ‘cybernetics’: feedback, learning and control. Cybernetics comes from a Greek word which means ‘to steer’. It articulates a process of exercising control by learning from feedback. A key feature of humans is that we can learn and adjust by using our senses and decision-making capacities. Cybernetics was the effort to construct ‘intelligent machines’ that could also learn. Wiener would often imply that he was central to solving the ‘prediction’ problem during the war.

It is true that Wiener was one of many scientists funded to undertake experiments, and Wiener did propose a mathematical model for predicting the path of an enemy aircraft. He did not however ‘solve’ the prediction problem; his model didn’t work. The lesson here is that complex technological systems are the result of a network of actors. There is rarely any one individual genius who ‘invents’ them. Jennifer Light makes this point emphatically in her study ‘When Computers Were Women’, explaining that while two male engineers are often credited with automating ballistics computations during the war, critical to the effort were ‘nearly 200 young women’ who worked as human ‘computers’, performing ballistics calculations. The first computers were hundreds of female mathematicians.

In 1948, Wiener coined and popularised the term ‘cybernetics’ as the science of ‘control and communication in the animal and the machine’. In short, cybernetics views the state of any system – biological, informational, economic, and political – in terms of the regulation of information. A cybernetic device can sense or collect information, and be programmed to respond to that information. In the case of wartime anti-aircraft defence, a radar detects movement, it tracks an enemy plane across the sky. Information is relayed to a primitive computer, which calculates aircraft trajectory. This calculation is passed on to an anti-aircraft weapon, which fires at the enemy aircraft.
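To see how small the core idea is, here is a toy sketch in Python of that predict-and-aim loop, assuming a target that keeps a constant velocity. The numbers and function names are illustrative; the wartime predictors were far more sophisticated than this linear extrapolation.

```python
# A toy sketch of the predict-and-aim loop: the 'radar' observes a target's
# recent positions, the 'computer' extrapolates where it will be when the shell
# arrives, and the 'gun' is pointed at that predicted position.
# Constant-velocity motion and these numbers are illustrative assumptions.

def predict_intercept(p_prev, p_now, dt, time_of_flight):
    """Linear extrapolation: assume the target keeps its current velocity."""
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return (p_now[0] + vx * time_of_flight, p_now[1] + vy * time_of_flight)

# Two radar fixes one second apart, and a shell that takes 3 seconds to arrive.
previous_fix = (1000.0, 2000.0)   # metres
current_fix = (1100.0, 1980.0)
aim_point = predict_intercept(previous_fix, current_fix, dt=1.0, time_of_flight=3.0)
print(f"aim the battery at {aim_point}")  # fire where the plane will be, not where it is
```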

Wiener is a significant figure in the story of cybernetics because he articulated how these computational technologies would reshape industry, society and culture. In his 1950 book, The Human Use of Human Beings: Cybernetics and Society, Wiener made an important historical move by placing ‘cybernetics’ at the heart of what he called the second industrial revolution.

The first industrial revolution brought about new forms of energy, such as steam and electricity created by machines. Harnessing these energy sources enabled the production of goods on a scale far beyond what humans on their own could make. Wiener claims that in the first industrial revolution the machine acted as an ‘alternative to human muscle’. For example, one of the first applications of the steam engine was to pump water out of mines, a job that had previously been done by humans and animals. Many changes resulted from replacing human muscle with machines – factories emerged, urban labour forces created mass cities, and the demand for raw materials stimulated the growth of plantations and mines in the colonies, and hence rail and shipping networks for transportation.

For Wiener, machines ‘stimulated’ the development of an entire industrial and social system. In the second industrial revolution a new kind of machine emerged - the computer - which extended the machine from the domain of energy into the domain of communication. This is how he put it: ‘for a large range of computation work, [machines] have shown themselves much faster and more accurate than the human computer’. Wiener thought that computers would eventually communicate with and modulate a range of instruments. These instruments would act as ‘sense organs’. They would feed information back to the computer, so that it could make decisions and learn about its environment. Computers in factories would be programmed to generate and collect data to give feedback on production processes.

Thomas Rid makes the point that ‘Wiener didn’t change what the military did, the military changed what Wiener did’. What does that mean? He means that Wiener’s peripheral involvement in wartime efforts to create machines that could sense, calculate, predict and execute decisions led him to perceive the development of a new kind of society. A society organised around devices and systems that were cybernetic – able to control their environment through processes of feedback and learning. Able to collect, store and process information in ways that were once confined to the human.

Wiener and the other mid-century engineers, scientists and thinkers involved in the development of cybernetics imagined how media technologies would usher in complicated relationships between human forms of sense-making and decision-making and the capacity of computational devices to simulate, augment and even exceed those human capacities.

Media are symbolic, technical, digital

Media are technologies that organise human life and experience. They symbolically represent reality and they also collect information about reality.

How did they come to do this? First up, we often think of digital media as ‘new’. We register this most clearly in the advertising and corporate rhetoric of technology companies. Go and trace the history of Apple advertisements and product launches from their Macintosh personal computer in 1984 through to their iPod and iPhone launches. Listen to Mark Zuckerberg from Facebook when he tells us about the artificial intelligence he built, named Jarvis, that runs his house. Or, Facebook engineers when they tell us that they want to build a brain machine interface that will enable us to type from our brain. Or, Jeff Bezos from Amazon when he tells us that his AI Alexa will run our homes by listening in to our conversations.

Over and over the digital media industries present their technologies within a narrative of straightforward, linear progress. The next technology we build will be better than the last. And, implied in that sense of better, is what we might call a ‘technological imaginary’.

If we build all the cool gadgets, all the human problems will go away!

Here, I think of John Durham-Peters in Speaking into the Air: ‘‘Communication’, whatever it might mean, is not a matter of improved wiring or freer self-disclosure but involves a permanent kink in the human condition. That we can never communicate like angels is a tragic fact, but also a blessed one.’

The kinks of human experience cannot be solved with technologies. And, new technologies are not ‘better’ than the last ones. As in, they don’t automatically make for a ‘better’ human experience. One way we can think about media technologies then is how they emerge out of the experimental effort of humans to exercise power in the world. This is not a straightforward process.

That means we should listen carefully to Apple, Facebook and Google when they tell us what they are experimenting with, and where they think they are headed – not because this enables us to ‘narrate’ the development of technology, but because it offers us a way of thinking carefully about the kind of human experience they are imagining and creating.

With that in mind, let’s turn back to Kittler who takes this ‘genealogical’ approach to a history of media technology. Genealogy is a method inspired by Nietzsche and Foucault, a way of doing history that pays attention to how material technologies emerge as part of historically-conditioned discourses, social formations and modes of power.

Kittler identifies three historically significant media systems.

Symbolic

The first is a symbolic media system. In this system, writing, physical speech sounds, or musical tones are transposed by a human into visual symbols, which are then re-translated by users into a sound, word, or idea in the mind. Think the alphabet, musical notation, or paintings and drawings. These systems work because the human users create and follow rules. Alphabets and musical notation have technical specifications that the users have to follow if they are going to work. For example, the alphabet is a media system, with visible symbols and rules for how sounds in speech are to be captured, stored, and processed. This symbolic media system dominated until about 1900 and allowed for the development of new forms of social, cultural, and political life.

Technical

A second system, technical media, emerged during the 19th century and into the 20th century. While writing transposed physical sensations into symbols, technical media could capture physical sensations directly as impressions on a medium. The difference between symbolic and technical is crucial here. With a symbolic system the physical sensation – sound or light – has to pass through the human body to be transposed into a symbol. The human ear hears a word, and transposes it into letters from an alphabet. With a technical system that physical sensation is recorded directly as an impression in another medium, without the human body having to turn it into a symbol. Photographs, which capture light, and phonographs, which capture sound, are the key technologies. Both emerge in the 19th century, and become mass technologies in the early 20th century. Photography is a process where film records a physical impression of light on a medium. Phonography records the physical impression of sound on a record or tape. Those impressions can then be ‘converted’ back via the medium into an accurate representation of the original image or sound.

The first phase of the age of technical media was the capacity to ‘capture’ and ‘store’ images and sound, while the second phase was the transmission of those images and sounds over distance, via radio and later, television. This system is analogue. Think about a vinyl LP where the physical grooves in the record are ‘impressions’ of a sound that are read and converted into an audio signal you can hear via the technical device of the record player. Analogue devices, such as record players and tapes, read the media by scanning the physical data off the device.

Digital

By the end of the twentieth century, the age of technical media gave way to our present epoch - the age of digital media. Rather than record data as a ‘physical’ trace, a digital system converts all data into a numerical system. The really important point Kittler makes here is that the digital system collapses all ‘senses’ into one medium. This enables media to calculate, process, and simulate. In the mid-1980s, Kittler predicts that sooner or later we will be hooked into an information channel that can be used for any medium. Movies, music, phone calls, and mail will reach households via fibre optic cables. Once any medium can be translated into 1s and 0s, and passed through the one infrastructure of digital computers and networks, the capacity of media to experiment with reality dramatically explodes in scale.

With digital media the physical properties of the input data are converted into numbers. Media processes are coded into the symbolic realm of mathematics, which can then be ‘immediately subjected to the mathematical processes of addition, subtraction, multiplication, and division through algorithms contained within software.’
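A small sketch makes this tangible. In the Python fragment below a continuous tone stands in for any analogue input (sound, light); once it is sampled and quantised into integers, ‘processing’ it is simply arithmetic. The sample rate and bit depth are arbitrary choices for illustration.

```python
# The shift Kittler describes: a continuous 'physical' signal becomes a list of
# plain numbers, which can then be manipulated mathematically.
import math

SAMPLE_RATE = 8000        # samples per second
BIT_DEPTH = 8             # 256 possible levels
LEVELS = 2 ** BIT_DEPTH

def sample_tone(freq_hz: float, seconds: float) -> list[int]:
    """Turn a continuous tone into a list of integers: the medium becomes numbers."""
    n_samples = int(SAMPLE_RATE * seconds)
    samples = []
    for n in range(n_samples):
        amplitude = math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)  # -1.0 .. 1.0
        samples.append(int((amplitude + 1) / 2 * (LEVELS - 1)))        # 0 .. 255
    return samples

tone = sample_tone(freq_hz=440.0, seconds=0.01)

# Once the sound is numbers, 'processing' is just arithmetic:
quieter = [s // 2 for s in tone]            # halve the amplitude
reversed_tone = list(reversed(tone))        # play it backwards
print(tone[:8], quieter[:8])
```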

Think of the present moment.

Our bodies permanently tethered to, and integrated with, digital devices like smartphones. These devices convert human experience into data. They store, they calculate, they predict as much as they represent. Our imagination is entangled with the data-driven, algorithmic flows of images, sounds and texts streaming via their screens. The genealogy of this kind of human experience can be traced back, at least, to the mid-nineteenth century.

In the mid-1800s the technologies for storing reality emerge. The phonograph stores sound, the photograph stores light, the typewriter standardises the storage of alphabet, numbers and code. From the early 1900s, the technologies for electronic transmission of sound, light and code over distance emerge in the form of telegraph, radio and television. In the mid-1900s, around the schema of the typewriter, the capacity of media to calculate and predict emerges. For Kittler, Turing’s mathematical definition of computability and the codebreaker he built during World War II mark the moment where media become first and foremost calculative devices for intervening in reality.

 

Media calculate

Imagine this.

Wars have been fought for as long as humans have sought to get together in groups and occupy territory. To mark out and defend a space, and within that space to construct a kind of technocultural habitat. An atmosphere in which to live.

How to represent that territory though?

At first, territory is marked and defined in lived practices. The understanding of where territory begins and ends is carried in the living bodies, brains and practices of people. Features in a landscape are known to inhabitants. Over time, humans develop symbolic methods for drawing territory. Think of a map. A map represents territory. It draws in the features, it marks borders.

In warfare, from ancient times through to the early twentieth century, the commander of an army sees the territory they are invading or defending as it appears to their own two eyes, with their feet on the ground, or as it is drawn on a handmade map. Imagine the moment then when territory is first seen, like a bird, out of an airplane.

In his Optical Media lectures the German media theorist Friedrich Kittler takes up this moment. I’m riffing here a bit on his account. In 1914 the French and Germans are engaged in trench warfare on the Western front. Trenches are dug into the ground, partly so that the opposing army cannot see the enemy lines clearly.

Imagine, though, that you can see the trenches like a bird: fly over them and see their exact formation in the landscape. This happens for the first time in 1914. Aircraft are used to undertake reconnaissance of enemy lines. In August 1914, the French led a successful counter-strike on the Germans using photographic records made by reconnaissance aircraft. Coupling an aircraft with a camera enabled armies to view territory from the sky, to disclose invisible soldiers, camouflaged artillery positions, and unnoticed rearward connections to the enemy.

The Germans urgently needed to respond. What emerged was an experimental interplay between aircraft engineers, photographers and cinematographers. We often think of cinema’s role in the two World Wars as that of a propaganda machine. Cinema was used to induce in mass populations fear of the enemy and support for the national war effort. But, here at the same time, we can see that the camera was always also an instrument of reconnaissance, surveillance and data collection.

Kittler explains to us that in 1916, one of the founding directors of the German film industry, Oskar Messter, who had been charged by the government with filming propaganda newsreels on the war front, ‘constructed and patented his target practice device for the detection of deviations by means of photographic records.’ Simply, he mounted a camera in the machine gun turret of a plane and used a clock mechanism to make the camera take an automatic sequence of photos of the ground below. The planes would fly the same route day after day, taking the same sequence of photos, in order to produce detailed surveys of changes in enemy lines. The fact that the camera was mounted in the gun turret of the plane is a crucial detail.
Kittler writes, ‘Messter’s ingenious construction… could only be improved by combining shooting and filming, serial death, and serial photography, into a single act’.

What does he mean?

Well, of course, in warfare Kittler is pre-empting autonomous weapons like drones. Weapons that can ‘see’ the target and then shoot it. But, beyond the specific illustration of warfare, there is a fundamental conceptual point being made here about what media are. The camera mounted in the plane is a device that collects and stores information. The plane goes up, collects footage, comes back down. A photographer develops the film in the camera. Army commanders view the photographs, compare them to previous reconnaissance. They make a plan on how to attack enemy lines.

That’s a relatively convoluted process. What if, the camera in the plane was linked to some kind of device that could ‘read’ the image in real time and then shoot? That is, if the media device didn’t just collect and store information, it could also then process that information and execute an action. Think here of the line of technological development that stretches from these first camera-enabled planes in 1914 to the autonomous drones used by the US in warfare today. This process of development is what began to unfold during World War I.

Another German filmmaker drafted into the war effort, Guido Seeber, constructed a machine gun sight for fighter planes, which was combined with a small film camera that shot frames whenever the gun fired. Filming and flying coincide. World War I produced ‘a new kind of film director’. A film director whose visual perception had been ‘technologically altered’. That is, once you’ve seen landscape, territory, human habitat from the bird’s eye view, you never forget it; you imagine human territory differently.

The bird’s eye shot we are familiar with as viewers of film and television is created in the reconnaissance flights above German and French lines in 1914. Kittler explains that the ‘experimental and entertainment films made with a camera that was’ now mobile and airborne ‘converted the perceptual world of World War I’ – its reconnaissance vision – into ‘mass entertainment’. Kittler shows us the technical role that cameras and cinema played in warfare. There is widespread awareness of the use of cinema as war propaganda during the twentieth century, but less attention to its use as a reconnaissance tool. Media technologies, like film, develop not just out of cultural or artistic interest but as part of the technical requirements of other industries and activities.

As an aside, this historical description of the use of planes for reconnaissance in World War I reminds me of Jesse Lecavalier’s account of Walmart in The Rule of Logistics. Lecavalier explains that Walmart founder Sam Walton would use a plane to fly around the outskirts of regional towns and cities to scout for Walmart locations. He was looking for the patterns of urban expansion, in order to find land ahead of time for future Walmart stores. This was from the 1970s. So, you can see here the logic of using aircraft and cameras for surveillance extends beyond military uses. By the 1980s, Walmart became one of the first retailers to invest in their own satellites, which they could use to manage their distribution network of stores, trucks and warehouses; but also to scout for new locations, to track urban expansion – in the way that we might do now on Google Maps.

For Kittler, war is a critical incubator of new media technologies. The relationship between media as promotional or entertainment technology, and as reconnaissance technologies, is a dynamic one. Kittler quips that ‘all media are a misuse of military equipment’. By which he means that many aspects of our everyday media culture, are products of the military-industrial complex. The ‘perspective’ created in the reconnaissance flights of World War I inform the cinematic narratives and images on our screens. He describes cinemas as ‘churches of state propaganda’ that praised ‘war technology and electrification’.

This argument is a familiar one. Think how many Hollywood blockbusters celebrate violence, war, military dominance. How many of our cinematic experiences place us in the perspective of the omnipotent soldier or fighter pilot unleashing firepower upon the enemy. Perhaps my favourite example Kittler offers of the ‘misuse of military equipment’ is the strobe light in discos, concerts and clubs. The strobe light mimics the flashing light of machine gun fire, which was used to distract and disorient the enemy. And, for Kittler, one way to make sense of the dark, pulsating, strobing club is that it is the simulation of the fantasies and pleasures of warfare. Soldiers and clubbers alike mangled on amphetamines.

OK. So, where are we headed with this?

Kittler shows us how a media technology – the camera, the cinema, the strobe light – can be placed in a longer history. Media technologies are used for both promotion and reconnaissance. Promotion and persuasion via symbolic narratives and sensory stimulation. When we sit in the cinema and watch films, the world is represented to us; when the strobe light pulses in the club, our bodies are aroused. But, media technologies are also always invented, experimented with and used as technologies for data collection, storage and processing.

The big point, and this really matters, is that media are calculative and symbolic technologies. Too often, much of our attention focuses on their symbolic capacities. Think of how we often talk about Facebook, Instagram or Snapchat. Our accounts of them focus on how they enable new forms of participatory expression. But, they are also technologies of calculation. They collect, store and process data. And, I’d argue, if we follow the investment of resources and the logic of the business models, they are much more driven by calculative rather than symbolic control. The cinema of the twentieth century was central to symbolic modes of control: the creation of narratives that inform, promote and persuade, that represent the world to populations and make certain ways of life appear desirable.

If we look at a platform like Facebook or Google, well – they seem much more fundamentally organised around the logic of calculation. Facebook or Google don’t make symbolic narratives, they build media technologies that collect, store and process data. That’s why Kittler’s account of the technical data collection, storage and processing capacity of media from the 19th century matters so much. It enables us to ‘revisit’ the media technologies and cultures of the twentieth century and recognise that they were never just symbolic.

So, what do media technologies and platforms do? Well, they have symbolic and calculative functions. They create symbols like images, sounds, and narratives that convey the meanings out of which shared ways of life are constructed. And, they calculate by collecting, storing, and processing information.

 

Media Experiment

In 1878 the photographer Eadweard Muybridge ran an experiment to settle a bet made by Leland Stanford, the founder of Stanford University.

The bet was this: when a horse was at full gallop was it ever completely airborne? No part of it touching the ground.

Muybridge set up 12 cameras along a race track on Stanford’s estate. As the horse galloped past, it tripped wires attached to the cameras, triggering a sequence of photos.

Once developed, the sequence of images showed clearly that at the top of its stride all four legs were tucked beneath the horse.

Here’s the thing though, the experiment set off something much bigger than settling Stanford’s bet.

Let me explain.

Muybridge was one of a number of people experimenting with photography as a new technology for capturing and storing light in the 19th century.

Technologies like the camera and phonograph dramatically changed how humans represented reality.

Prior to photography, an image could only be captured by a human who drew or painted. Think about it like this. Light made an impression, through the eyeball onto the optical nerve, where it was somehow turned into an image in the brain and then converted, via the hand, into a painting. Photography transferred this previously human process to a machine. Light passed through a lens and made an impression on another medium: a metal plate or film.

This enabled reality, in the form of light, to be stored in a medium without having to first pass through the living human body.

This is an incredible period in the relations between humans and their ‘reality-producing’ technologies.

A symbolic media system began to give way to a technical one.

By symbolic I mean that reality has to be transposed into a socially-constructed symbol – letters and words, musical notes, a handmade drawing.

The human hears a song. They cannot capture the sound directly. They instead need to discern the sounds with their ear and then write them onto paper using musical notation, which they could pass on to another human who, if they could read the symbolic code – in this case, the musical score – could play back the sound. The same goes for writing and reading. Someone talks to me; I discern the words and transpose them into letters on a page, which someone else could read back.

Technical media is different to this. With technical media like the photograph or the phonograph, one medium – light or sound – makes an impression on another – vinyl, wax, a metal plate, film.

Muybridge’s experiment is a critical part of this experimental social process of developing techniques for capturing and storing reality because he figures out a way to capture and store moving reality, something more akin to ‘living’ or ‘live’ images.

This had been a huge problem. Humans knew that when they looked at the world it was both colourful and full of movement, and yet devices like early cameras could only capture a still black and white image. The big question was whether humans could create devices that captured moving images that looked more like the images we saw in our own heads.

So, here’s Muybridge, looking at his twelve images of the horse and realising that not only had the images settled the bet, they could also be passed before the eye in a steady flow to give the appearance of the horse actually moving.

Muybridge kept experimenting, and a couple of years later – in 1879 – created a device called, elegantly, a zoopraxiscope – which was critical in the creation of cinema.

The zoopraxiscope was a small wheel that had a sequence of images printed around the outer edge. When spun the images appeared, to the human eye, to move.

This device inspired Edison and Dickson’s kinetoscope, the first commercial form of moving image film.

Why tell this story now?

Well, it is one of those critical moments during the late 19th and early 20th century where humans developed ways of storing light and sound, and in a sense, storing impressions of reality outside the living body.

So, it alerts us to something important about media cultures and technologies.

Media are technocultural processes through which humans store, process, augment, and play with reality.

Muybridge was experimenting with techniques for representing reality in ways that went beyond storing it in the human mind and senses.

But, that’s not all. Listen to this.

If Muybridge’s was one of the great experiments for developing media devices that represented reality, he returns in 2017 with a cameo in one of the contemporary efforts to create forms of bio-technical media that experiment with lived experience, and life, itself.

In 2017, Harvard scientists encoded a moving-image gif of Muybridge’s horse experiment into the DNA of a living cell, where, as The New York Times explains,

it can be retrieved at will and multiplied indefinitely as the host divides and grows. The advance, reported on Wednesday in the journal Nature by researchers at Harvard Medical School, is the latest and perhaps most astonishing example of the genome’s potential as a vast storage device.

The scientists involved in the experiment think that it

may be possible one day to do something even stranger: to program bacteria to snuggle up to cells in the human body and to record what they are doing, in essence making a “movie” of each cell’s life. When something goes wrong, when a person gets ill, doctors might extract the bacteria and play back the record.

Or, outside the human body, we might create living bacteria or organisms that monitor the environment, or record how the brain works.

One of the geneticists involved in the project at Harvard says, ‘What we’re trying to develop is a molecular recorder that can sit inside living cells and collect data over time’.

I’ll be honest, I don’t really get it. As in, I don’t really get the science – the bit where the image is transposed into information that can be stored in a living cell. But, to be really crude about it, it follows – I think – the logic of the digital. Once all information can be collapsed into 1s and 0s, then the contents of any medium can be stored in another medium. The contents of film can be stored in bacteria.
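To make that crude logic visible, here is a toy sketch in Python that maps bits onto DNA’s four-letter alphabet, two bits per base. This is a common textbook convention for thinking about DNA storage, not the encoding scheme the Harvard team actually used.

```python
# Once any medium is reduced to bits, those bits can be re-encoded in any other
# medium, including DNA's four nucleotides. Two bits per base, purely illustrative.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def text_to_dna(text: str) -> str:
    """Encode any byte stream (here, text) as a nucleotide sequence."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(sequence: str) -> str:
    """Recover the original bytes from the nucleotide sequence."""
    bits = "".join(BITS_FOR_BASE[base] for base in sequence)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

sequence = text_to_dna("galloping horse")
print(sequence)                 # 'CGCTCGAC...' -- the message as nucleotides
print(dna_to_text(sequence))    # and back again
```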

OK, but apart from its fantastic strangeness, this experiment is one of many taking place in the early 21st century that are transforming what we understand media to be.

If Muybridge’s was one of a series of 19th and 20th century experiments in capturing lived experience, then the Harvard scientists who put his film in the DNA of a living cell are part of early 21st century experiments with developing technologies that engineer and experiment with lived experience.

If, in the 19th and 20th century media represented reality, in the 21st century media experiment with reality.

Storing information in DNA is very experimental, but I’d argue we should see it as part of the development of media technologies in two important ways.

The first is conceptual: media are devices for capturing, storing and processing information.

The second is more industrial: major platforms like Facebook, Google and Amazon, and techno-capitalists like Elon Musk, are all investing in these kinds of technologies.

This is Regina Dugan, a Facebook executive, talking at F8 in 2017.

Think about that, here’s a Facebook executive saying ‘let’s start with your brain’. Facebook are calling this a brain-machine interface project. What’s important here is not what Facebook can do now, but what they are trying to do. They are trying to reduce the ‘friction’ between your living biological body and the calculative capacities of their media technologies.

Platforms like Facebook and Google have been imagining stuff like this for years. In 2004, one of Google’s co-founders Larry Page told Wired magazine that ‘eventually you’ll have the implant where if you think about a fact, it will just tell you the answer’.

When Elon Musk launched Neuralink he told the media that ‘over time I think we will see a closer merger of biological intelligence and digital intelligence’.

Gizmodo reported in 2015 that

A group of chemists and engineers who work with nanotechnology published a paper this month in Nature Nanotechnology about an ultra-fine mesh that can merge into the brain to create what appears to be a seamless interface between machine and biological circuitry. Called "mesh electronics", the device is so thin and supple that it can be injected with a needle — they have already tested it on mice, who survived the implantation and are thriving. The researchers describe their device as "syringe-injectable electronics", and say it has a number of uses, including monitoring brain activity, delivering treatment for degenerative disorders like Parkinson's, and even enhancing brain capabilities.

Neural lace, wetware, brain-machine interfaces. Whatever we call it we can see the impulse here, if the effort in the 19th and 20th century was to store reality outside the living body, in the 21st century the impulse is to incorporate the living body into media technology itself. To engineer life itself, and to incorporate lived experience within the technical, calculative, logistical infrastructure of media platforms.

When Donna Haraway wrote her Cyborg Manifesto in the 1980s it, super importantly, contained a dialectical impulse. Horror at the effort of technologists to fundamentally transform the human body and experience, and at the incorporation of that effort within the political economy of global capitalism and empire. But, also, fascination with the way these visions opened up new ways of imagining what it might mean to be human. The human was no longer, if we ever were, just a living body. The human is entangled, integrated with its machines.

So, here we are. In the first part of the 21st century at least one of our tasks is to think about media platforms’ experiments with reality, lived experience and living bodies.

To think about what these experiments mean for living cultures and societies. To think about media platforms like Facebook, Google, Amazon, Netflix, Instagram, Snapchat, Tinder and what they do in our world we need to go back – at least to the early twentieth century – to think about the effort to create media as logistical technologies that collect, store and process data about the human experience.

Here’s John Durham-Peters on how we might think about media in this way from his essay 'Infrastructuralism'.

Media are infrastructures that regulate traffic between nature and culture. They play logistical roles in providing order and containing chaos. […] Once communication is understood not only as sending signals – which is certainly an essential function – but as altering existence, media cease to be only studios and stations, messages and channels, and become infrastructures, habitats, and forms of life. Media are not only important for scholars and citizens who care about news and entertainment, education and public opinion, art and ideology, but for everyone who breathes, stands on two feet, or navigates the ocean of memory. Media are our environments, our infrastructures of being, the habitats and materials through which we act and are.

To continue along John Durham-Peters' line of thinking, here's an excerpt from his book The Marvellous Clouds: ancient media like ‘registers, indexes, census, calendars, catalogues have always been in the business of recording, transmitting, and processing culture […] or organising time, space and power’.

The symbolic understanding of media as audio-visual ‘entertainment machines’, which undergirds most accounts of advertising and society, is something of an historical exception: ‘digital media return us to the norm of media as data-processing devices’.

We spent much of the twentieth century thinking about how media represent reality; now we must also pay attention to the historical process through which media experiment with reality.