Participation in Experiments

Here’s a Tweet I saw an hour ago. It’s a play on those memes that compare social media platforms. This one goes, for 2017:

Facebook: Essential oils
Snapchat: I’m a bunny!
Instagram: I ate a hamburger
Twitter: [all caps] THIS COUNTRY IS BURNING TO THE GROUND

OK, it reminds us that platforms are different. But also that platforms can affect our mood. And, in the era of Trump, the experience of Twitter for many people is frantic, panic-inducing, rancorous.

Imagine this. Imagine that the ‘mood’ of the platform – its feel-goodness in the case of Instagram, its agitation in the case of Twitter – is not just created by the users, but is deliberately engineered by the platform. And imagine the platform was doing that just to see what would happen to its users.

Say you use Facebook every day. You open the app on your phone and scan up and down the News Feed, you like friends’ posts, you share news stories, you occasionally comment on someone’s post. Then, one day, all the posts in your News Feed are a little more negative. Maybe you don’t notice, maybe you do. But you’d be inclined to think that people are a bit unhappy today. What if, though, the posts in your feed were negative one day because Facebook was running an experiment in which it randomly made some users’ feeds negative to see what would happen?

That’s not a hypothetical story. Facebook actually did that in 2014, to 689,003 users. They changed the ‘mood’ of their News Feeds. Some people got happier feeds, some got sadder feeds. They wanted to see whether, if they ‘tweaked’ your feed sad, you would get sad. To this day they still have not told the users who were ‘selected’ for this experiment that this happened to them. If you use Facebook, you might have been one of them. You might care, you might not. The point is this: media platforms are engineering new kinds of interventions in public culture. These engineering projects include machine learning, artificial intelligence and virtual reality.

The development and training of an algorithmic media infrastructure depends on continuous experimentation with users. Public communication on social media routinely doubles as participation in experiments like A/B tests, which are part of the everyday experience of using platforms like Google and Facebook. These tests are invisible to users. An A/B test involves creating alternative versions of a web page, set of search results, or feed of content. Group A is diverted to the test version, Group B is kept on the current version, and the behaviours of the two groups are compared. A/B testing enables the continuous evolution of platform interfaces and algorithms.
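To make the mechanics concrete, here is a minimal sketch of an A/B test in Python. The hash-based traffic split, the 5 per cent test fraction and the clicked/not-clicked metric are illustrative assumptions, not a description of any platform’s actual code.

```python
import hashlib

def assign_variant(user_id, test_fraction=0.05):
    """Deterministically divert a small fraction of users to the test version (A);
    everyone else stays on the current version (B). The hash-based split is an
    illustrative assumption, not any platform's real method."""
    bucket = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % 100
    return "A" if bucket < test_fraction * 100 else "B"

def compare_groups(observations):
    """Compare a behavioural metric between the two groups, e.g. 1 if a user
    clicked on a result and 0 if not, returning the average for each group."""
    return {group: sum(vals) / len(vals) for group, vals in observations.items() if vals}

# Hypothetical usage: record each user's behaviour under the version they saw,
# then compare the group averages to decide whether the test version 'wins'.
observations = {"A": [1, 0, 1, 1], "B": [0, 0, 1, 0]}
print(assign_variant("user-42"), compare_groups(observations))
```

A real platform runs thousands of these tests at once, but the logic of each one is this simple: split the users, observe their behaviour, compare the groups.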

Wired reported that in 2011 Google ‘ran more than 7000 A/B tests on its search algorithm’. The results of these tests informed the ongoing development of the algorithm’s decision making sequences.

Two widely publicised Facebook experiments – popularly known as the ‘mood’ and ‘voting’ experiments – illustrate how these A/B tests are woven into public culture, contribute to the design of platforms, and raise substantial questions about the impact the data-processing power of media has on public communication. Each experiment was reported in peer-reviewed scientific journals and generated extensive public debate.

Let’s recap them both.

Facebook engineers and researchers published the ‘voting’ experiment in Nature in 2012. The experiment was conducted during the 2010 US congressional election and involved 61 million Facebook users. The researchers explained that on the day of the election all US Facebook users who accessed the platform were randomly assigned to a ‘social message’, ‘informational message’ or ‘control’ group. The 60 million users assigned to the social message group were shown a button that read ‘I Voted’, together with a link to poll information, a counter of how many Facebook users had reported voting, and photos of friends who had voted. The informational message group were shown the same information, but without the photos of friends.

The control group were not shown any message relating to voting. About 6.3 million Facebook users were then matched to public voting records, so that their activity on Facebook could be compared with their actual voting activity. The researchers found that users ‘who received the social message were 0.39% more likely to vote’ and on this basis estimated that the ‘I Voted’ button ‘increased turnout directly by about 60,000 voters and indirectly through social contagion by another 280,000 voters, for a total of 340,000 additional votes’.
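A minimal sketch of that experimental design, in Python, may help. The group weights, the assignment method and the stand-in voting records are illustrative assumptions drawn from the description above, not the researchers’ actual code.

```python
import random

GROUPS = ["social", "informational", "control"]
# Illustrative weights: almost every user saw the social message, with small
# informational and control groups, roughly as described above.
WEIGHTS = [0.98, 0.01, 0.01]

def assign_group(user_id):
    """Randomly assign a user to one of the three message conditions."""
    rng = random.Random(user_id)          # repeatable assignment per user
    return rng.choices(GROUPS, weights=WEIGHTS)[0]

def turnout_by_group(user_ids, voted):
    """Compare validated turnout across the three conditions. `voted` maps
    user_id -> True/False, standing in for the matched public voting records."""
    counts = {g: [0, 0] for g in GROUPS}  # [votes, users] for each group
    for uid in user_ids:
        g = assign_group(uid)
        counts[g][0] += int(voted.get(uid, False))
        counts[g][1] += 1
    return {g: votes / total for g, (votes, total) in counts.items() if total}
```

The experimental logic is the same as any A/B test; what is distinctive is that the behaviour being compared is not clicking a link but turning up to vote.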

The experiment, and Facebook’s reporting on it, reveals how the platform understands itself as infrastructure for engineering public social action: in this case, voting in an election. The legal scholar and critic Jonathan Zittrain described the experiment as ‘civic-engineering’. The ambivalence in this term is important. A more positive understanding of civic engineering might present it as engineering for the public good. A more negative interpretation might see it as the manipulative engineering of civic processes. Facebook certainly presented the experiment as a contribution to the democratic processes of civil society, illustrating that their platform could mobilise participation in elections. The deeper lesson, however, is that the experiment demonstrates the power digital media may be acquiring to shape the electoral activity of citizens.

Data-driven voter mobilisation methods have been used by the Obama, Clinton and Trump campaigns in recent presidential elections. These data-driven models draw on a combination of market research, social media and public records data. While data-driven voter mobilisation within campaigns might be part of the strategic contest of politics, the Facebook experiment raises more profound questions.

Jonathan Zittrain, like many critics, raised questions about the capacity of Facebook, an ostensibly politically neutral media institution, to covertly influence elections. The experiment could be run again, except that rather than choosing participants at random, Facebook could choose to mobilise particular participants based on their political affiliations and preferences. To draw a comparison with the journalism of the twentieth century, no media proprietor in the past could automatically prevent a specified part of the public from reading information they published about an election.

Facebook’s ‘mood’ experiment was reported in the Proceedings of the National Academy of Sciences in 2014. Like the voting experiment, the mood experiment involved the manipulation of users’ News Feeds. The purpose of the study was to test whether ‘emotional states’ could be transferred via the News Feed. The experiment involved 689,003 Facebook users. To this day, none of them know they were involved. The researchers explained that the ‘experiment manipulated the extent to which people were exposed to emotional expressions in their News Feed’. For one week, one group of users was shown a News Feed with reduced positive emotional content from friends, while another group was shown a feed with reduced negative emotional content. The researchers reported that ‘when positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred’. In short, Facebook reported that they could, to an admittedly small degree, manipulate the emotions of users by tweaking the News Feed algorithm.
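Here is a minimal sketch of the kind of feed filtering the study describes, assuming each post already carries a simple sentiment label. The function name and the omission probability are illustrative assumptions, not Facebook’s actual News Feed code.

```python
import random

def filtered_feed(posts, condition, omission_probability=0.5):
    """Probabilistically withhold emotional posts from a user's feed for the test week.
    `condition` is 'reduced_positive' or 'reduced_negative'; each post is a dict with a
    precomputed 'sentiment' label. The omission probability here is an assumed value,
    not the one used in the study."""
    target = "positive" if condition == "reduced_positive" else "negative"
    feed = []
    for post in posts:
        if post["sentiment"] == target and random.random() < omission_probability:
            continue  # quietly drop this emotional post from the user's feed
        feed.append(post)
    return feed
```

What the researchers then measured was whether users shown these filtered feeds went on to produce more or fewer emotional posts of their own.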

Much of the public debate about the mood experiment focussed on the ‘manipulation’ of the user experience, the question of informed consent to participate in A/B experiments, and the potential harm of manipulating the moods of vulnerable users. These concerns matter. But, as was rightly noted, a focus on this one experiment obscures the fact that manipulation of the user News Feed is a daily occurrence; it is just that this experiment was publicly reported. More importantly, the voting and mood experiments illustrate how public communication doubles as the creation of vast troves of data and as participation in experiments with that data. When we express our views on Facebook we are not only persuading other humans, we are also contributing to the compilation of databases and the training of algorithms that can be used to shape our future participation in public culture.

The responses of critics like Jonathan Zittrain, Kate Crawford and Joseph Turow to the data-driven experiments of platforms like Facebook highlight some of the new public interest concerns they generate. Crawford argues that all users should be able to choose to ‘opt in’ to and ‘opt out’ of A/B experiments, and to see the results of experiments they participated in. Zittrain proposes that platforms should be made ‘information fiduciaries’, in the way that professionals like doctors and lawyers are. Like Crawford, he envisions that this would require users to be notified of how their data is used and for what purpose, and would proscribe certain uses of data. Turow proposes that all users have access to a dashboard where they can see how data is used to shape their experience, and choose to ‘remove’ or ‘correct’ any data in their profile.
All these suggestions seem technically feasible, but would likely meet stiff resistance from the platforms. They are valuable because they articulate an emerging range of public interest communication concerns specifically related to our participation in the creation of data, and to the use of that data to shape our thoughts, feelings and actions.

These proposals need to be considered as collective actions, not just as tools that give individual users more choice. The bigger point is that, as much as the algorithmic infrastructures of media platforms generate pressing questions about who speaks and who is heard, they also generate pressing questions about who gets to experiment with data. Public communication is now a valuable resource used to experiment with public life. Mark Andrejevic describes this as the ‘big data divide’. The power relations of public communication now also include who has access to the infrastructure to process public culture as data and to intervene in it on the basis of those experiments.