The difference between representation and simulation

What’s the difference between representation and simulation?

Let’s take representation to be the basic social process through which we create signs that refer to a shared sense of reality. The twentieth century is remarkable in part because humans created an enormous culture industry that managed this social process of representation. Through radio, film and television, enormous populations came to understand the vast social reality within which their lives were embedded. Critically, representation only works because people feel that the signs they see actually refer to, cohere with or match their really-lived experience.

One way to think about simulation is that it upends the order of representation. Let me borrow the famous illustration of the French philosopher Jean Baudrillard. We can say that a map is a representational text. A map of a city represents that real city. You can use that map to find your way around a real-world place: its really-existing streets, buildings and landmarks. What if, Baudrillard suggests, a map stops functioning as a representation and begins to function as a simulation? If in the order of representation the territory precedes the map, then in a simulation the map precedes the territory. That is, in representation the map comes after the real world, but in simulation the map comes first and begins to shape the real world.

OK, hang in here. Baudrillard has a fundamental insight for us, one that really matters in a society increasingly organised around the logic of prediction. Here’s a fairly basic example of this claim that simulations are signs that precede reality, from William Bogard. Think of a simulation in the sense of a computer ‘sim’, like the software that teaches pilots to fly. In a simulation like this, signs are only related to other signs. The signs the trainee pilot sees on the screen, such as the runway, geographical features and so on, are only meaningful or operational within the simulation, in relation to the other signs enclosed within that system. When the pilot is sitting in the simulator ‘flying’ there is, of course, no real underlying reality they are ‘flying through’. What they see on the screen is not real sky, clouds or ground.

But, even so, this simulation is not a production of pure fiction; it is related to the real world. Simulations intervene in the real world and can only be understood in relation to it. In this case, a fighter pilot can only learn to fly by first using a simulation. The simulator enables them to habituate their bodies to the rapid, almost pre-conscious decision making and the physiological impact of flying at supersonic speed.
So, we might say that while the simulation has no underlying reality, fighter pilots can only fly supersonic planes in the real world because they can train their minds and bodies in a simulation first. The simulation brings into being, in the real world, a fighter pilot. The fighter pilot could not exist without the simulation. The simulation then precedes and shapes reality.

So, here’s the thing to start thinking about. Think about all the ways in which our capacity to ‘simulate’, to create things in advance of their existence in the real world and to predict the likelihood of events before they take place, actually affects our really-lived lives. Simulations intervene in the real world.

For example, think about the capacity to clone animals or even genetically-engineer humans. Here’s William Bogard offering us a thought experiment on genetically-engineered children.

No longer bound by their ‘real’ genetic codes carried in their own bodies, parents may be able to ‘compile’ their ideal child from a genetic database. A program might even help them calculate their ideal child by drawing on other data sets. For example, information about the parents’ personalities might be used to compile a child who they will get along with, or information about the cultural or industrial dynamics of the city where the parents live might be used to compile a child likely to fit in with that cultural milieu or have the aptitude for the forms of employment available in that region. The child ‘born’ as a result of such interventions would always be a simulation, always be virtual, because they were the product of a code or computation performed using databases. This does not mean the child is not ‘real’ – the child of course exists – but they are virtual in the sense that they could not exist without the technologies of surveillance and simulation which brought them into reality.

Is the child a ‘real’ child? Of course it is. But it is also a simulation, in the sense that its very biological form was predicted and engineered in advance. We begin to project our views of what an ‘ideal’ child is into the future production of the species. We can think here of Bogard’s ‘child’ as a metaphor for our public and media culture. Of course, it is our ‘real’ or ‘really lived’ experience, but it would not exist without the collection and processing of data, and the simulations that are produced from that processing. Simulations require data, and that data is produced via technologies of surveillance. To clone a sheep you need a complete dataset of the sheep’s genetic code, so you need the technologies to map that code. To build a realistic flight simulator you need mapping technologies to construct simulations of the physical world. As Bogard argues, simulation, in its most generic sense, is the point where the imaginary and real coincide, where our capacity to imagine certain kinds of futures coincides with our capacity to predict and produce them in advance.

The larger philosophical point here is this: imagine a human experience where the future becomes totally imaginable and predictable, where its horizon closes in around whatever powerful humans today want. Bogard lays it out like this.

Technologies of simulation are forms of hyper-surveillant control, where the prefix ‘hyper’ implies not simply an intensification of surveillance, but the effort to push surveillance technologies to their absolute limit... That limit is an imaginary line beyond which control operates, so to speak, in ‘advance’ of itself and where surveillance – a technology of exposure and recording – evolves into a technology of pre-exposure and pre-recording, a technical operation in which all control functions are reduced to modulations of preset codes.

Bogard introduces some significant critical ideas here. Firstly, he indicates that technologies of simulation are predictive, but they can only make reliable predictions if they have access to data collected by technologies of surveillance. For example, Norbert Wiener’s invention of a machine that could compute the trajectory of enemy pilots in World War II combined surveillance, prediction and simulation. The radar conducted surveillance to collect data on the actual trajectory of an enemy aircraft; a computational machine then used algorithms to simulate the likely trajectory of that aircraft. This interrelation of surveillance and simulation is especially important because it underpins much of the direction of present-day digital media platforms and devices.

Secondly, Bogard suggests that by using data surveillance, simulations can not only predict the likely future, they can actually create the future based on data about the past. By predicting a likely future, we make it inevitable by acting to construct it; by acting ‘as if’ an event is likely to unfold, we ensure that it does. Admittedly, this can be a fairly complicated logic to think through. The critical idea to draw from it is that surveillance is not just a technology of the past (observing what people have done) or of the present (observing what people are doing). Surveillance also constructs the future: once coupled with simulation, it becomes a building block in a system of control where pre-set codes and models program the future and what people will do.

Thus these technologies usher in, and here I’m quoting Bogard, ‘a fantastic dream of seeing everything capable of being seen, recording every fact capable of being recorded, and accomplishing these things, whenever and wherever possible prior to the event itself.’ The virtual is made possible when ‘surveillance’ and ‘simulation’ become simultaneous, linked together in an automatic way which enables the past to be immediately apprehended and analysed in ways that code the present. Let’s go back to Bogard’s example of the genetically-engineered child.
No longer bound by their ‘real’ genetic codes carried in their own bodies, parents may be able to ‘customise’ their ideal child from a genetic database. The child is real, they exist. But they are also virtual in the sense that they could not exist without the technologies of surveillance and simulation which brought them into reality.

Bogard is being deliberately imaginative in his account. He is attempting to conceptualise the ‘limits’ of surveillance and simulation technologies and indicate how the technologies of simulation can be interwoven with reality in complex ways. If information that can be collected and stored becomes limitless and the capacity to predict, calculate, and simulate using that information also expands, then the role media technologies play in our societies will shift dramatically in the years ahead. It might even profoundly unsettle our understanding of what media is. For example, if parents can ‘compile’ a desirable child using a combination of surveillance and simulation technologies, would the resulting child be a media product?

In many respects the child could be construed as a customised media device, containing information suited to the consumers’ requests. This sounds messed up, but in Bogard’s proposition we need to think about the limits of surveillance technologies. If surveillance becomes ‘complete’ then the possible future becomes ‘visible’. Crucially, the future can repeat the past because it no longer ‘unfolds’ randomly, but can be ‘managed’ by drawing on data about the past, which enables it to be not just ‘predicted’ but brought into being – to be virtualised.

If all of this sounds a bit fanciful, then at least consider this point. Our media system is characterised by the increasing capacity to conduct surveillance and create simulations. Surveillance is the capacity to watch, simulation the capacity to respond. The two are interdependent. This system is productive and predictive. Together surveillance and simulation make calculations and judgments about what will happen next, and in doing so shape the coordinates within which institutions and individuals act. Technologies of surveillance and simulation then prompt us to think carefully about what the human experience is, and what the interrelationships are between humans and increasingly predictive and calculative technologies.

In the last post I mentioned the episode Be Right Back from Charlie Brooker’s Black Mirror. A young woman, Martha, gets a robot version of her dead partner Ash. The robot is programmed based on all the data Ash generated while he was alive. It looks like him, has his gestures and interests, and speaks like him. Martha’s robot version of Ash can learn to perform as Ash: his language, speech and expressions. For instance, Martha tells the robot what ‘throwing a jeb’ meant in their relationship, and later he uses that expression in context. But the robot is unable to make its own autonomous decisions. Martha is the robot’s ‘administrator’ and he will do whatever she asks. The robot is missing the nuances of human relationships. It knows how to argue, but not how to fight. The robot cannot be affected. It can’t engage in open-ended, deeply human creativity. It can’t ‘spend time’ with another human. The night before she takes him to a cliff, Martha and Ash the robot have this exchange.

Martha: get out, you are not enough of him…
Robot: did I ever hit you?
Martha: of course not, but you might have done.
Robot: I can insult you, there is tons of invective in the archive, I like speaking my mind, I could throw some of that at you.

The robot can manipulate how Martha feels, but it can’t understand her feelings or feel for itself. What do our intimate others know about us that our devices cannot? What can humans know about each other that technologies of surveillance cannot? What we look like when we cry, what we might do but haven’t yet done, how we respond to our intimate others as they change. Martha reaches this impasse with Ash. Ash is the product of surveillance and simulation. He doesn’t just ‘represent’ their relationship, he intervenes in it. He begins to shape Martha’s reality in ways living Ash did not, and now – having passed away – cannot. Martha takes Ash to a cliff.

Robot: Noooo, don’t do it! (joking). Seriously, don’t do it.
Martha: I’m not going to.
Robot: OK
Martha: See he would’ve worked out what was going on. This wouldn’t have ever happened, but if it had, he would’ve worked it out.
Robot: Sorry, hang on that’s a very difficult sentence to process.

Why is it difficult to process? Because as much as it’s a sensible statement, it is an affective one: it is about how she feels, but also about the open-ended nature of human creativity – the capacity to consider and imagine things that are not, or might have been, or could be.

Martha: Jump
Robot: What, over there? I never express suicidal thoughts or self-harm.

The robot is always rational.

Martha: Yeah well you aren’t you, are you?
Robot: That’s another difficult one.
Martha: You’re just a few ripples of you, there is no history to you, you’re just a performance of stuff that he performed without thinking and it’s not enough.
Robot: C’mon I aim to please.
Martha: Aim to jump, just do it.
Robot: OK if you are absolutely sure.
Martha: See Ash would’ve been scared, he wouldn’t have just leapt off he would’ve been crying

She makes a decision that might surprise us though. Rather than put photos of her daughter’s father in the attic, she puts the robot up there. The daughter visits a simulation of her father each weekend. We can see, I would argue, that Brooker draws us toward thinking about our ambivalent entanglements with these devices: the intimacy and comfort they provide, our dependence on them, the way they unsettle, control and thwart us.

As I thought about this problem, which Brooker brings to a head at the cliff, I thought of John Durham Peters:

Communication, in the deeper sense of establishing ways to share one’s hours meaningfully with others, is sooner a matter of faith and risk than of technique and method. Why others do not use words as I do or do not feel or see the world as I do is a problem not just in adjusting the transmission and reception of messages, but in orchestrating collective being, in making space in the world for each other. Whatever ‘communication’ might mean, it is more fundamentally a political and ethical problem than a semantic one.

Machines can displace forms of human knowing and doing in the world, but they seem confined to reducing communication to a series of logical procedures, calculations and predictions. What’s left is the human capacity to make space in the world for each other, to exercise will, to desire, to spend time with one another. The relationships between humans and their simulations are complicated, and the logic of simulation cannot encompass or obliterate the human subjective process of representation. What makes the human, in part, is their capacity to use language to spend time and make worlds with each other.

Nicholas Carah and Deborah Thomas