email from Nathan (Item 18)

Searching for Justice

Revisiting some writing I did for a rough experimental piece of group work a couple of months ago, I felt there was more to learn from it, so I’m going to extract, rephrase and abridge some of the justification I put together for it at the time. The work itself was a month-long piece of anecdotal and personal research in which each of us in the group took daily screenshots of our Instagram FYPs, varying our activities to see how we could nudge the algorithm, in particular through a literal daily search for ‘justice’ (lol?).

I’m not sharing the final outcomes here because they were, honestly, a bureaucratic necessity for some institutional recognition; the piece was intended as a participative, ongoing educational activity rather than a conclusive result. The exercise was kept malleable and somewhat passive to allow us to develop varied studies influenced by our own social media accounts and personal experiences. Despite my deflection from outcomes, though, the activity does direct itself towards a retrospective meta-analysis of all of the collated data: it’s only since completing the month-long process that any holistic understanding of it has really felt possible.

Searching for Justice couldn’t achieve a predetermined result, but it could and did form a naïve framework, allowing emergent, explorative results to appear. To make this clearer, we might contrast exploration with discovery: discovery we could see as a direct, active process of simply obtaining a determined piece of knowledge. Exploration, rather, we could envision as the setting up of a less direct, more open-ended process in which what we will discover is not known in advance, but previously unknown results are allowed to emerge and reveal themselves.

At its core, the exercise was little more than an exploration of the algorithm. I’ve referred a lot in my work this year to Graham Meikle, whose poorly-titled book *Deepfakes* offers a greater breadth of insight into socio-political digital phenomena than the specificity of its name implies. Meikle points to Thomas H. Cormen’s broad definition of an algorithm: it’s simply a set of steps to accomplish a task, like brushing your teeth or driving to work. The algorithms which mediate our experiences on an app such as Instagram are machine algorithms, as Cormen defines them, “a set of steps to accomplish a task that is described precisely enough that a computer can run it”. Meikle notes that in this way the algorithm is not a magic spell, as it is often perceived, but more like a precise recipe. As I invoked in my work *Sparring Partners* last year, Betti Marenko describes computational ‘glitch events’: unpredictable effects or errors which are generated not by faulty instructions being provided to the machine, but by valid instructions being performed in unpredictable situations. These glitch events are, as she poetically remarks, “The machine caught in the act of revealing itself”. The Instagram algorithm as we observe it is performing a precise calculation, and our collective exercise in prompting the algorithm was an attempt to force that calculation to make itself evident.
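A trivial sketch of both ideas, in Python (purely illustrative, and obviously not Instagram’s code): the function is a Cormen-style recipe, precise enough for a machine to run, and the final lines show a tiny Marenko-style glitch event, where perfectly valid instructions expose the machine’s own workings.

```python
# A recipe, not a spell: steps described precisely enough for a machine.
# Everything here is invented for illustration -- nothing from Instagram.

def steep_tea(water_ml: int = 250, wait_s: int = 180) -> list[str]:
    """An everyday task as an exact, repeatable sequence of steps."""
    return [
        f"boil {water_ml} ml of water",
        "place one teabag in a cup",
        "pour the water over the teabag",
        f"wait {wait_s} seconds",
        "remove the teabag",
    ]

for step in steep_tea():
    print(step)

# And a tiny "glitch event": valid instructions, faithfully executed,
# producing an effect the recipe never mentioned -- the machine caught
# in the act of revealing itself (here, binary floating point).
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
```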

Through an experimental approach we underline the potential for the exploration to fail, and emphasise its propensity to remain a productive process regardless. The experimentation introduces a degree of risk to the perceived success of the work: what if the results turn out to be indecipherable, and the machine doesn’t reveal itself? What if the results are offensive, or worse, simply mundane? The tension invoked by tenuously entering the unknown at the outset of the experiment feels artistically relevant for the work, as it poignantly reflects the uncertainty, fear and paranoia that so often epitomise users’ interactions with technology, and in particular their experiences of algorithmically determined technological mediation.

James Bridle’s *New Dark Age* is a venerable text these days, its mid-2010s technoskepticism feeling both cringe and irritatingly prescient; I’m loath to refer to it simply because I don’t know how to truly position it now that we’re living through the futures it discusses. Yet I can’t deny the validity of a statement like this: “paranoia in the age of network excess produces a feedback loop: the failure to comprehend a complex world leads to the demand for more and more information, which only clouds our understanding”. Bridle asserts that we attempt to control this spiral by creating oversimplified narratives, reducing a complex, overwhelming tide of data into paranoiac theories of the world.

Instagram’s public-facing, legally safe corporate explanation of how its Explore page selects content to display is predictably vague, stating that users are offered posts “automatically selected on a variety of factors”, including posts the user has liked, accounts the user follows, and the user’s “connection on Instagram”.

The intricacies of Instagram’s variety of factors are left to the reader’s imagination, and imagine they do: one cloyingly persistent conspiratorial myth claims that our phones are listening to and transmitting all of our conversations to advertisers. Despite tech platforms’ long-standing denial that this occurs, and, more importantly, technical and rational evidence that it does not, the belief continues to propagate as users form simplified, navigable narratives for the uncanny ability of digital spaces to serve advertising so precisely targeted that it feels eerie. Tech and advertising corporations don’t need to waste the significant resources required for the computationally intensive and illegal task of surreptitiously listening to users’ audible conversations, because users already voluntarily provide far more processable data, in quantities sufficient to procedurally determine their interests, activities and friendships with unnerving precision. In this way, we can see the algorithm’s opaque function as a ‘black box’ that creates tension, its inexplicability breeding deep uncertainty and paranoia.
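To gesture at why the listening myth is unnecessary, here’s a toy sketch in Python of how freely given behavioural signals alone can be aggregated into an interest profile. Every signal type, topic and weight is invented; this is the general shape of the idea, not Instagram’s actual model.

```python
from collections import Counter

# Toy interest inference from voluntarily provided signals.
# Signal names, topics and weights are all invented for illustration.
WEIGHTS = {"like": 1.0, "follow": 3.0, "search": 5.0, "dwell": 0.5}

events = [
    ("search", "justice"),
    ("like",   "justice"),
    ("follow", "legal_news"),
    ("dwell",  "memes"),
    ("like",   "memes"),
    ("search", "justice"),
]

profile: Counter[str] = Counter()
for signal, topic in events:
    profile[topic] += WEIGHTS[signal]

# The highest-scoring topics stand in for "what to show next".
print(profile.most_common(3))
# [('justice', 11.0), ('legal_news', 3.0), ('memes', 1.5)]
```

No microphone required: a handful of taps and searches already ranks ‘justice’ far above everything else.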

A surprising and salient feature of Searching for Justice was the relatively slow pace with which shifts would occur in the algorithmically-generated content. Rather than rapid changes in response to user stimulus, there was a gradual shifting and growing, the algorithm progressively iterating.
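That slowness is what you’d expect if interest profiles are nudged incrementally rather than replaced wholesale. A minimal sketch of the idea, as an exponential moving average with a small learning rate; the mechanism and numbers are my assumption for illustration, not Instagram’s disclosed method.

```python
# Gradual iteration modelled as an exponential moving average.
# The update rule and rate are assumptions, purely illustrative.
ALPHA = 0.05  # small learning rate => slow, cumulative drift

def update(profile: float, signal: float, alpha: float = ALPHA) -> float:
    """Nudge a single interest score toward today's signal."""
    return (1 - alpha) * profile + alpha * signal

score = 0.0                 # initial interest in "justice" content
for day in range(30):       # one daily search per day of the study
    score = update(score, 1.0)
    if day % 10 == 9:
        print(f"day {day + 1}: {score:.3f}")
# day 10: 0.401 / day 20: 0.642 / day 30: 0.785 -- a month of
# daily nudges still leaves the score well short of saturation.
```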

This stands in contrast to what we might expect of contemporary technological mediation. Anecdotally, assumptions seem to abound that narratives develop quickly, instigating sudden, perhaps drastic cultural shifts. However, the expectation that media iterate through immediate or powerful narrative shifts is not new; I enjoy coming back again and again to studies performed by the Glasgow Media Group throughout the 1980s (books with nice titles like *Bad News* and *Really Bad News*), looking at how TV news shaped the opinions of its viewers. In *Seeing and Believing*, Greg Philo of the GMG points to an early phase in communications research which assumed that television and radio could cause “direct media effects” on their audiences, and which failed to prove that this was the case. Summarising earlier research by Denis McQuail, he writes that “the search for instant, measurable effects on the individual has led to a neglect of the role of the media in developing political and social cultures over long periods of time”.

This, to me, has felt crucial: rather than cultures and beliefs being drastically and rapidly shifted by the diktats of the media, these fundamental parts of society gradually evolve through social processes in which they are continuously produced, contested and reproduced over time. I saw echoes of this process in Searching for Justice: as we traverse time, fragments of media remain day after day, persisting through the daily tides of the algorithm; others submerge beneath the waves, while yet more emerge from unknown depths.

from Nathan