Technocultural bodies

Our bodies are becoming, in the words of sociologist Gavin Smith, ‘walking sensor platforms’. They increasingly host devices that translate life into data. This process is at the heart of technocultural capitalism. If we look carefully, we can discern in many Silicon Valley investments the effort to engineer away the friction between living bodies and the capacity of platforms to translate life into data, calculate and intervene.

To understand media platforms as technocultural projects, then, we need to trace all the ways in which our living bodies are entangled with them. We need to investigate the sensory touchpoints between biology and hardware, between living flesh and digital processing. The expansion of the sensory capacities of media and the affective capacities of the body depend on a range of ‘communicative prostheses’ that envelop, are attached to, or even implanted in, our living bodies.

We can see this in efforts to engineer bio-technologies like augmented reality, neural lace, digital prosthetics and cortically-coupled vision. These technologies aim to change how the body experiences reality, to expand the embodied capacity to act and pay attention, and to alter the biological composition of the body itself.

Just listen to how Silicon Valley technologists talk about the relations between our bodies, brains and their digital devices.

A technology like augmented or mixed reality, according to Kevin Kelly, ‘exploits peculiarities in our senses. It effectively hacks the human brain in dozens of ways to create what can be called a chain of persuasion’. The perception of reality, once confined to the fleshy body, becomes an experience partly constructed by the brain and partly by digital technology.

Magic Leap’s founder Rony Abovitz explains that mixed reality is ‘the most advanced technology in the world where humans are still an integral part of the hardware… (it is) a symbiont technology, part machine, part flesh.’ This part machine, part flesh vision has a long history in culture and technology. To think of the human is to pay attention to the process through which a living species entangles itself with non-human technologies, from early tools onwards. Since Mary Shelley’s Frankenstein, at least, our cultural imagination has thought about the possibility of technologies that might transform our living biology. Technologies are emerging that seem to be doing just this.

Research scientists have prototyped a robotic arm that can be controlled with thoughts alone. An implant in the person’s brain detects neural activity, and trains a computer to drive the arm and hand through increasingly fine motor skills. Recently, Facebook has experimented with a similar technology that enables a human to type out words just by thinking them.

Over the past decade, researchers have been experimenting with cortically-coupled vision. The basic idea is that computers learn from the visual system in our brain, tracking how the brain efficiently processes huge amounts of visual data. This technology could be used to train computers to process vision the way humans do, or to learn patterns of human attention – for instance, what kinds of things particular humans enjoy looking at. Imagine if, as you walk down a street, a biometric media technology gradually learned what kinds of things attract your attention, give you pleasure or irritate you.
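To make that pattern-learning loop concrete, here is a minimal sketch in Python of how such a system might work in principle. It assumes, purely for illustration, that each image viewing can be summarised as a vector of neural features (say, averaged EEG readings) labelled by whether the viewer’s attention was captured; the data, feature dimensions and model are hypothetical placeholders, not any actual research pipeline.

```python
# Toy sketch of attention-pattern learning, under loose assumptions:
# each image viewing is summarised as a vector of neural features
# (e.g. averaged EEG readings) with a label marking whether the
# viewer's attention was captured. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: 200 viewings, each a 16-dimensional
# neural feature vector, labelled 1 if attention was captured, else 0.
neural_features = rng.normal(size=(200, 16))
attention_labels = rng.integers(0, 2, size=200)

# Learn a mapping from neural response to attention.
model = LogisticRegression().fit(neural_features, attention_labels)

# Given neural responses to a new stream of images, rank those images
# by how strongly they appear to have attracted this viewer's attention.
new_responses = rng.normal(size=(10, 16))
interest = model.predict_proba(new_responses)[:, 1]
print("Images ranked by inferred attention:", np.argsort(interest)[::-1])
```

A system like this, scaled up and run continuously, is what would allow a biometric medium to learn, over time, what a particular body attends to.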

Elon Musk is one of several technologists to invest in neural lace, an emergent – some say technically improbable – idea. The basic objective is to create a direct interface between computers and the human brain, which may involve implanting an ultra-fine digital mesh that grows into the organic structure of the brain, directly translating neural activity into digital data. In an experiment implanting neural lace in mice, researchers found that ‘The most amazing part about the mesh is that the mouse brain cells grew around it, forming connections with the wires, essentially welcoming a mechanical component into a biochemical system.’

Musk has said that ‘over time I think we will probably see a closer merger of biological intelligence and digital intelligence’. The brain-computer interface, he argues, is mostly constrained by bandwidth: ‘the speed of the connection between your brain and the digital version of yourself, particularly output’. Let’s pause there for a second: the bandwidth observation alerts us to something important. Maybe we could say that the biggest engineering challenge for companies like Google, Facebook and Amazon is the bottleneck at the interface between the human brain and the digital processor. All our methods of translating human intelligence – in all its sublime creativity, open-endedness and efficiency – into digital information are currently hampered by the clunky devices that sit at the interface between body and computer: keyboards, mice, touchscreens, augmented reality lenses. This is the truly wicked problem; perhaps whoever solves it will become the next major media platform. Just as Facebook, Amazon and Google disrupted mass media, the next disruption will centre on whoever can code the human body and consciousness into the computer.
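Some rough arithmetic makes the scale of that bottleneck vivid (the figures here are back-of-envelope estimates, offered only for illustration). A fast typist produces perhaps 80 words per minute; at around five characters per word and, following Shannon’s classic estimates, something like one to one and a half bits of information per character of English, that amounts to roughly 10 bits per second flowing from brain to machine. An ordinary home broadband connection, by contrast, moves on the order of 100 million bits per second. Output from the mind is throttled, relative to the machines it feeds, by roughly seven orders of magnitude.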

In each of these cases we can see a technocultural process through which media platforms, technologists and researchers invest in engineering the interface between the living body and non-human digital processors. This process is transforming what it means to be human.
It becomes increasingly difficult, or even pointless, to attempt to understand the human as somehow distinct from the technocultural atmospheres we create to sustain our existence. Living bodies are becoming coextensive with digital media.

Media platforms become like bodies, bodies become like media. In one direction we have the expansion of the sensory capacities of media. That is, media become more able to do things that once only bodies could do. Media technologies can sense and respond to bodies in a range of ways: know our heart rate, predict our mood, track our movement, identify us via biometric impressions like voice or fingerprint. And, in the other direction, we have bodies becoming coextensive with media technologies. Machines are becoming prostheses of the body, and in the process changing what a body is and does. Digital technologies alter how we perform, imagine, experience, and manage our bodies.

In the technocultures we call home, our bodies are cyborgs: composed of organic biological matter and machines. Our glasses, hearing aids, prosthetics, watches, and smartphones are all machines we attach to our bodies to enable them to function in the complex technocultures we inhabit. Many of these devices are now sensors attached to digital media platforms. Our smartphones are loaded with sensors that enable platforms to ‘know’ our bodies: voice processors, ambient light meters, accelerometers, gyroscopes, GPS. All of these sensors in various ways collect data about our bodies – their expressions, their movements in time and space, their mood and physical health.

Beyond the smartphone, many of us attach smart watches and digital self-trackers to our bodies. We use these devices to know, reflect upon, judge and manage our embodied experience. Following the steady stream of prototypes from Silicon Valley, we can see a future where devices might be integrated or implanted into the body. Sony recently patented a smart contact lens that records and plays back video. The lens would see and record what you see, and then, using biometric data, select and play back to you scenes from your everyday life. The lens could augment your view of the material world around you, or even take over your vision to immerse you in a virtual reality. With a lens like this, vision can no longer be seen as a strictly biological and embodied process; it becomes an experience co-constructed by intelligent machines.