I deleted the Facebook app from my phone in November last year. It was like breaking up with a machine. I was pulling at the News Feed a lot. It was habitual, autonomic even. I would find my finger bouncing the top of the feed, my gaze in a trance, even when I was in the middle of doing something: cooking dinner, walking to my car, looking for something at the supermarket.
This intriguing relationship between a human and a non-human computational procedure is now at the heart of everyday media use. Ted Striphas (2015) calls this algorithmic culture. We now live in societies where we delegate a range of judgments about our culture to machines.
I don’t think I deleted Facebook because I was ‘addicted’. This was not a moment of moral panic about my lack of self-control. I deleted it because it was too immersive, too programmed, too affirmative.
What happens in that moment when you open your app or pull at the top of your News Feed?
Facebook engineers explain that 'News Feed ranks each possible story (from more to less important) by looking at thousands of factors relative to each person'.
Facebook designed the algorithm to do this because
there is now more content being made than there is time to absorb it. On average there are 1500 stories that could appear in a person’s News Feed each time they log onto Facebook. Rather than showing people all possible content, News Feed is designed to show each person on Facebook the content that’s most relevant to them. Of the 1500+ stories a person might see whenever they log onto Facebook, News Feed displays approximately 300.
The News Feed is driven by a content-recommendation algorithm that learns how to satisfy each individual user, to keep them engaged with Facebook more often and for longer periods of time.
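The mechanics can be pictured as a scoring-and-selection step. This is a toy sketch, not Facebook's actual algorithm: the signal names, weights and scoring function here are all invented for illustration. The real system reportedly weighs thousands of factors; the sketch reduces that to a per-user weighted sum over a handful of hypothetical engagement signals, then keeps the top slice of candidate stories.

```python
def rank_feed(stories, weights, limit=300):
    """Return the highest-scoring stories for one user (toy model)."""
    def score(story):
        # Weighted sum of whatever signals this story carries.
        return sum(weights.get(signal, 0.0) * value
                   for signal, value in story["signals"].items())
    return sorted(stories, key=score, reverse=True)[:limit]

# Hypothetical learned weights for one user: how much they respond to
# recency, closeness to the poster, and past clicks on similar topics.
user_weights = {"recency": 1.0, "friend_affinity": 2.5, "topic_clicks": 1.5}

candidates = [
    {"id": "a", "signals": {"recency": 0.9, "friend_affinity": 0.1}},
    {"id": "b", "signals": {"recency": 0.2, "friend_affinity": 0.8,
                            "topic_clicks": 0.7}},
    {"id": "c", "signals": {"recency": 0.5}},
]

top = rank_feed(candidates, user_weights, limit=2)
```

The point of the sketch is the shape of the process, not the numbers: every candidate story gets a personalised score, and only the best-scoring fraction ever reaches the screen.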
Designing the algorithm is complicated. Computer scientists and engineers create machine learning processes that process the expanding array of data Facebook collects about users.
Psychologists, anthropologists and user-experience researchers observe human users to understand how they use Facebook in real life.
I gave each participant a stack of recent stories from their feed, printed out on paper, and asked them to pick out the ones that interested them and discard the rest. I then asked them to sort the remaining, interesting stories into piles according to what they liked about each.
She took her insights back to Facebook, where engineers tested them against their machine learning models and incorporated them into the algorithm design.
Designing the News Feed is a process of harmonising the interplay between human and machine decision making.
The machine learns to predict what the human wants by observing them. I deleted Facebook shortly after Trump was elected because the News Feed kept immersing me in a loop of outrage, anxiety and disbelief. On the one hand it had learned my preferences correctly. I had been clicking lots of Trump stories. On the other hand, it didn’t know how to make a judgment about what I was thinking. I didn’t want to read any more stories that simply affirmed and recycled progressive outrage. There was no point to it.
The News Feed algorithm, though, could not break out of this loop.
This relationship between human and algorithm is critical to Facebook, Instagram, Twitter, Tinder, Netflix, Amazon, Google. Reading news, watching television, shopping for clothes, looking for dates. We do the work of coding our lived experience into the databases of media platforms; that data feeds the algorithmic machines that then shape our experiences.
The data is all of the content you ever create on the platform: status updates, chat logs, photos. Metadata is data about data – a record of your engagements with a media platform: what you view, like, share, comment on, which groups you join, what pages you search, which friends you ‘stalk’, what kinds of content you ‘pause’ on as you scroll down your News Feed and so on.
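The content/metadata split can be made concrete with two records. This is a purely hypothetical illustration: the field names are invented, not Facebook's actual schema. The first record is something you deliberately created; the second is the behavioural trace the platform could log around a story you merely lingered on.

```python
# Data: the content itself, deliberately authored by the user.
status_update = {
    "text": "Back from the coast, photos soon",
    "photos": ["beach_01.jpg"],
}

# Metadata: data about your behaviour around content. You never
# typed any of this; it is recorded as a side effect of scrolling.
engagement_event = {
    "action": "pause",                    # stopped scrolling on a story
    "story_id": "9f3a",                   # which story held your attention
    "duration_ms": 4200,                  # for how long
    "timestamp": "2017-02-11T08:45:12Z",
    "device": "mobile",
}
```

Notice that the metadata record says nothing about what the story contained and everything about what you did; that behavioural trace is exactly the raw material the ranking machinery learns from.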
Go to Facebook and click the drop-down arrow at the far right of the blue bar along the top.
Click on settings.
You’ll see at the bottom of your general account settings there’s a link titled ‘download a copy of your Facebook data’.
Facebook give you a list of what’s included in the archive you can download: your about me section, the ads you’ve clicked on over a limited period, ad topics associated with your profile, history of your chats, photos you’ve uploaded, personal information you’ve entered.
Hit the ‘start my archive’ button.
Now wait, sometimes a while, for Facebook to send an email with a link to download your archive in a zip file.
It arrives. I open it.
My first impression is that a lot of stuff is missing.
Take ‘ads clicked on’ and ‘ad preferences’. Facebook will only give you the past two months or so of the ads you clicked on. With ad preferences they will only give you vague categories, like your political views.
My archive records what ‘preferences’ Facebook assigned me, but not how those preferences were generated. It tells me what ads I clicked on, but not what data was used to target those ads to me.
That said, there are some revealing observations.
Go to ‘ads you clicked on’ and have a look. Like most users, I thought ‘I don’t click on the ads’.
But when I look, there are a number of things I clicked on that I didn’t realise were ads.
There is a Buzzfeed story about food safety. Turns out that was a promoted story. The Queensland government paid Buzzfeed to write and promote the story into the Facebook news feeds of Queenslanders.
Nearly all the ‘ads I clicked on’ are not ‘ads’ in the traditional sense; they are promoted stories in my News Feed like this Buzzfeed one. Some are oblique. A story about the best cafes in my area turned out to be a post that one of those cafes had promoted into the feeds of people who like their page.
For the most part though, this archive of ‘personal information’ is not very useful. This information is only valuable to Facebook because they can put it to work within their platform eco-system. Take the facial recognition data for instance. Go look for that in your archive. You’ll see that Facebook have assigned your face a unique number, which is basically a biometric identifier for you. OK, so they’ll tell you the number – but what’s the point?
You don’t have access to the software that can compute that number and use it to find your face on Facebook. To really be transparent about biometrics Facebook would tell you not just the unique biometric number for your face, but perhaps how many times your face has been recognised or classified on Facebook. Or, they would enable access to a tool that enables you to generate an archive of every image Facebook thinks you are in (or has assigned you to) regardless of whether you tagged yourself or not.
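The asymmetry here can be sketched in code. This is a toy model, not Facebook's system: real face recognition derives a high-dimensional embedding from an image with a deep network, and the vectors, threshold and photo ids below are invented. The point is that the ‘unique number’ only does anything inside software that can measure distances between it and every other stored face – software the user never gets.

```python
import math

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_matches(query, gallery, threshold=0.5):
    """Return ids of stored faces close enough to the query embedding."""
    return [face_id for face_id, emb in gallery.items()
            if distance(query, emb) < threshold]

# Hypothetical gallery of embeddings computed from uploaded photos.
gallery = {
    "photo_123": [0.11, 0.80, 0.42],
    "photo_456": [0.90, 0.05, 0.33],
}

# Your 'unique number' is, in effect, an embedding like this one.
my_face = [0.10, 0.78, 0.45]

matches = find_matches(my_face, gallery)
```

Handing someone `my_face` without the gallery and the matching function tells them nothing; the platform keeps both, which is why disclosing the number alone is such thin transparency.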
All the data in this archive – your preferences, interests, your age, your relationships – only becomes valuable when Facebook puts it into a database that enables them to make judgments about you in relation to the 1 billion other users of the platform.
The value Facebook generates from your activity comes from them associating your data with the data of other users. It is the scale of the social networks that Facebook can analyse that enables them to generate value. One archive on its own has hardly any value.
I have a box under my house full of stuff from when I was a teenager. Invitations to parties, actually printed on paper. Notes and letters. Playlists. Sets of photographs from parties, festivals and road trips. Concert tickets. Street press. Gig posters. Beer coasters. I don’t know what to do with it. My Facebook archive contains about ten years of personal information, and so does the box under my house. The difference between them, though, is that the box under my house is connected to nothing but my own living memory.
In The Culture of Connectivity, José van Dijck (2013) suggests that social media platforms are socio-technical architectures. By socio-technical she means that 'online sociality has increasingly become a coproduction of humans and machines'.
The Facebook platform is a constantly evolving assembly of living human experience and the data-processing power of computational machines. In this computational architecture a personal archive acquires a value it never previously had. Data about each individual becomes the basis of orchestrating the relationships between humans and the media platform: shaping their experience in granular ways, often driven by the commercial imperatives of platforms and the advertisers that fund them.