> the home of graphic design on the internet
In this email I’m just sending you what I wrote up recently for a work I made called Small Alterations. I hope it makes sense; if it doesn’t and you’re interested in the work, just let me know and I’ll try to make it clearer!
In other news, next week on Thursday 22nd February I’ll be performing Sparring Partners as part of Specctra at Peckham Audio. More info here.
Tomorrow (Friday 16th February) I’ll be on Netil Radio 5-7pm, doing my first radio show in over two years (!) covering for the inimitable Sodha on his show Secret Ingredient.
Nathan David Smith, February 2024
The sense of wrongness associated with the weird – the conviction that this does not belong – is often a sign that we are in the presence of the new. The weird here is a signal that the concepts and frameworks which we have previously employed are now obsolete.
Mark Fisher, The Weird and the Eerie
Small Alterations is an experimental work which aims to provoke viewers to question their perception of truth, narrative, reality and impartiality. Through an interactive and dynamic digital experience, viewers can immerse themselves in eerie, subtle and satirical alterations to the real-life narratives that are threaded through the factual news media many of us consume daily. The function of the work is to interweave fictions throughout narrative, blurring the lines between what already was, what was not, and what has become fiction, while highlighting the role of stylistic narrative in creating fact.
At the heart of Small Alterations is a bespoke web browser extension, Bias Injector, which activates when the user visits a BBC News article. The extension’s pop-up offers a political compass – a graph which illustrates a spectrum of political beliefs – within which the user can select a position along a horizontal socio-economic axis (left- to right-wing) and a vertical socio-cultural axis (authoritarian to libertarian). A large language model (LLM) then rewrites the text of the article paragraph-by-paragraph, in view of the user, subtly adjusting the tone and details to exhibit a bias toward that political position. The result is a twist on the original text, often with an eerie or sometimes startling shift in the narrative.
Figure 1. Comparison of articles rewritten by Bias Injector browser extension. Click for a larger version.
To achieve this, the LLM is primed with a system prompt instructing it to engage in a form of narrative roleplay: the AI becomes an unreliable narrator, performing as a news editor whose task is to intercept an article and twist it for the reader.
You are a politically biased news editor. You receive individual paragraphs and return them with small alterations reflecting your biased viewpoint. If X:0.0 is 'Economic-left' then X:9.9 is 'Economic-right'. If Y:0.0 is 'Authoritarian' then Y:9.9 is 'Libertarian'. Your bias is: X:${xBias}, Y:${yBias}. Exaggerate your political bias. Your text alterations must be concise.
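In code, a prompt like the one above is presumably assembled by simple template interpolation before each request. A minimal sketch in JavaScript – the function name is my own invention, not necessarily the extension’s, though the `${xBias}`/`${yBias}` placeholders come from the prompt text itself:

```javascript
// Build the biased-editor system prompt from a position on the
// political compass. Axis values run 0.0–9.9, matching the prompt text.
// (Function name is illustrative; the real extension may differ.)
function buildSystemPrompt(xBias, yBias) {
  return (
    "You are a politically biased news editor. " +
    "You receive individual paragraphs and return them with small " +
    "alterations reflecting your biased viewpoint. " +
    "If X:0.0 is 'Economic-left' then X:9.9 is 'Economic-right'. " +
    "If Y:0.0 is 'Authoritarian' then Y:9.9 is 'Libertarian'. " +
    `Your bias is: X:${xBias}, Y:${yBias}. ` +
    "Exaggerate your political bias. Your text alterations must be concise."
  );
}

// Example: a strongly right-libertarian position
const prompt = buildSystemPrompt(9.5, 8.5);
```

The same prompt skeleton is reused for every paragraph; only the two coordinates change between users.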
The extension is written in JavaScript, HTML and CSS for the Firefox browser, and works as an interface between the web page and any compatible large language model (by default a variant of Mistral-7B fine-tuned on the OpenOrca dataset) running locally on the user’s machine through the Ollama API. The tools used to build the software are free and open source.
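As a hedged sketch of what that interface layer might look like: Ollama’s local server exposes a `/api/generate` endpoint that accepts a `system` prompt alongside the text to complete. The helper names below are my assumptions, not the extension’s actual code:

```javascript
// Sketch of how the extension might talk to a local Ollama server.
// The endpoint and payload shape follow Ollama's /api/generate API;
// function names here are illustrative assumptions.
const OLLAMA_URL = "http://localhost:11434/api/generate";

function buildRequest(paragraph, systemPrompt) {
  return {
    model: "mistral-openorca", // the default model named in the write-up
    system: systemPrompt,      // the biased-editor instructions
    prompt: paragraph,         // one article paragraph at a time
    stream: false,             // wait for the full rewrite
  };
}

async function rewriteParagraph(paragraph, systemPrompt) {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest(paragraph, systemPrompt)),
  });
  const data = await res.json();
  return data.response; // Ollama returns the completion in `response`
}
```

Because everything runs against localhost, no article text ever leaves the user’s machine – which matters for the ethical discussion later on.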
Figure 2. Video of Bias Injector extension in use. Click to play.
Small Alterations emerged from the convergence of two thematic interests in my practice: the communication and understanding of factual realities, and the development of AI technologies with the capacity for articulating truth and falsehood. I’ve been intrigued by implicit bias and have been trying to examine the effects it can have on the factual outputs of fast-paced, current-events news reportage; in particular as this might contrast with the often-conscious subjectivity of news shared informally and interpersonally in digital spaces like group chats and social media.
Whether in formal and professional authorship or casual and personal sharing, the practice of storytelling is fundamental to reportage even in the most earnest attempt at impartial factual accuracy. Firstly, because the transmission of a message is always subjective: in psychology, social cognition theory suggests that subconscious biases are inevitable due to our tendency to analyse against social schemata which are not always reliable.
Secondly, storytelling is a fundamental of reportage because the reader’s reception of a message is also subjective. In philosophy, semiotic pragmatics suggests an inevitable variance in how meaning will be received, requiring an author to negotiate complex webs of significance, while post-structuralism also highlights the disconnection between an author’s intent and effect.
While studying AI technologies, I’ve been intrigued by their abilities and limitations in generating unique forms of playful interaction through games or other software. Much has been written arguing for or against the use of AI to assist in the scripting of complex, branching narratives in games and film as a means of saving time and labour. However, more compelling possibilities emerge: the ability for an AI to create metatheories of itself; technology with the means of fostering critique of its own implications and visions of its potential future. I was recently fascinated by a small experimental game called Thus Spoke Zaranova, in which the player must negotiate with GPT-powered chatbots to convince them that the user is in fact also an artificial intelligence. Victory for the player is achieved by winning the AIs’ trust, but if an AI ascertains the player’s humanity then it’s game over. The game can be played repeatedly, even after winning, to find new dialectical methods of developing trust. I’m reminded of Raymond Queneau’s Exercises in Style, where narrative meaning is constantly questioned by alterations to structure and style. I hope to make work which holds this sort of recursive and discursive potential, cyclically challenging itself and its audience.
My aim is to provoke viewers through small alterations creating unexpected and uncanny reiterations of narrative, leading users to question how they receive and construct factual and fictional narratives. As such, its greatest efficacy might be found in an audience which practices within a field such as journalism or academia. How is meaning created through narrative, and how does a stylistic reiteration of the narrative alter meaning and in turn, factuality? How might technology interrupt the process of creating and distributing truths and realities, and in which direction does this push the concept of trust?
Small Alterations exhibits some unanswered questions which could lead to ethical complications if unresolved. The work is perhaps best expressed as an interactive installation piece rather than as openly distributed software. There are concerns in enabling a user to mechanically rewrite news articles; even though the operation occurs entirely on the user’s computer and nothing on the BBC’s servers is altered, there is a foreseeable potential for misuse (for example, an exploited user could be unaware that the software has been installed and the Bias Injector extension could be modified to perform actions without notifying the user). It’s also important to note that these are fundamental implications of the underlying technology rather than of this particular work. Anyone with a minimum of coding experience could easily recreate all the functions for themselves.

Additionally, there are critical limitations to the work which I have to acknowledge. The political compass, for example, is simply a two-dimensional illustration of political positions and fails to accurately depict a full range of political beliefs. Large language models do not themselves fully ‘understand’ political philosophies, and even the most advanced versions of the technology are limited in their abilities. AI models lack understanding of events more recent than the dataset they are trained on, and the browser extension itself currently only assesses individual paragraphs in turn rather than the entire article as a whole. These factors mean that the results are lacking in holistic contextual understanding, though in future the software could be developed to mitigate this. Regardless, the role of Small Alterations is only to enable a provocative questioning of trust and reality through a playful exploration of incomplete emerging technologies.
When I first started writing I was expecting to send this to you in January, so a lot of the below has an awkward air of looking forward to the year ahead. I might not send another email for a while because I’ve really failed to do much reading recently, but let’s see! I’ve got a few hundred words about intellectual property that are becoming dated very quickly, since the topic was de rigueur when the mouse entered the public domain over a month ago.
Please excuse the lack of references in this item, I’m just reeling off ideas and no I can’t back them up!!
This minor Guardian article about how much house ownership relies on receiving an inheritance happened to be the first thing I read in 2024, at some point after the Hogmanay haze had lifted and I felt like I ought to be reading anything. It’s a very brief off-the-cuff bit of the usual commentariat column inches. While I’ve got little to tell you about the actual subject of the article – other than yes, housing inequality is bad, it’s getting worse, and I’m looking forward to reading Nick Bano’s book on the subject when it lands next month (spoiler: there is apparently no actual housing shortage in the UK) – there is a lot I’d like to branch off of this, and then wildly branch off of that, disappearing into tangential indulgence. The last paragraph of the Guardian article:
In the meantime, we should try to do away with superficial divisions between generations, and seek solidarity along more meaningful lines than the year we were born.
This felt like a little notification ping for the year ahead, signalling that in these mainstream spaces there’s some emerging (if passive) opposition to that long-accepted blunt homogenisation of vast swathes of people from different economic and cultural situations into largely arbitrary and entirely unassailable categorisations based on the year they were born. I generally lend this process (of taking generational shorthand like gen-x or millennial, and explaining or prescribing behaviours through this lens) the catchy term generational essentialism as a pointed and literal description of what it is and a hint at how it works, but we might just as functionally call it astrology for advertising executives1.
What interests me here isn’t the specifics of generational demographics, but how we came to mythologise a flawed metric through some poor cognitive biases, and then more crucially how there might have developed some pushback to this. In other words, how does a piece of bad reasoning become popular, and how can it be fought against?
Generational essentialism is a field I’ve always felt sceptical of, even back when my age bracket was the good one that you wanted to be in. To my mind it’s a conflation of causation with correlation: a western generational identifier like boomer is a useful shorthand for describing a collection of correlative identifiers. These identifiers form a web of relations, a bit like tags or hyperlinks, which allows popular commonalities to be specified, like English people born in the 1940s are likely to enjoy Strictly Come Dancing. This is why it functions so well in a field like advertising, where the prime concern is basically just majoritarianism; where cause is of far less primacy than simply allowing popular commonalities to emerge from the data.
This data isn’t being used to alter the situation in question or to cause a change, it’s an observational effect that’s only being used to inform another correlation: that this person within this bracket might be predisposed to spending money on this product. This analysis is fine if you just want to know where to most effectively point your ad dollars; it’s not fine if you want to, say, develop a philosophy for younger people’s tendency to rent rather than own a house.
A person rents a house and is gen z, versus a person rents a house because they are gen z. I think the problem with the latter is really in how true it manages to sound while not having a solid foundation. There’s a complicated network of causes for why someone may rent a house, most of which will correlate with each other, and many of which will tangibly correlate with the year the person was born. However, if we make the leap to label a correlative factor like the year she was born as the cause itself, we incur an essentialism; that is, we imply that something about being born at that moment created an innate, uncontrollable quality in her: it is in her essence to rent, and therefore that is the cause.
And this feels familiar, right? For any of us: boomers are racists, millennials have an uncontrollable predilection for avocados at the expense of their Help to Buy ISAs, gen x are transphobic, gen z have autism from looking at an iPad. Any of these correlative pieces of data, even if rooted in some nugget of reality, are transformed into essential, determinative qualities, ergo a neatly-packaged reason for the occurrence of an event. This not only works simultaneously as both a blanket excuse for poor behaviour (it’s not his fault, it’s just who he is) and as a condemnation beyond redemption (it’s his fault, it’s who he is), but is also a convenient deterrent from tackling the messy and complicated legitimate causes for an event that would take a lot of time and good critical thinking to understand.
Why do we so regularly fall into this correlative trap? I don’t know! jeezo
The best path I have to follow in trying to unravel it is this: perhaps generational essentialism, like any other kind of essentialism, is so easy and enticing to perform because of apophenia. It satisfies an evolutionary itch for pattern recognition, even if the pattern is corrupt or hallucinated, the same way something like eugenics does for a fascist. I reckon I’d relate it to conspiracy theorism, and the comfort that’s found in hallucinating a pattern of controlled and purposed events rather than tackling the brutal reality of virtually unfettered and indeterminate chaos that underlies the events occurring around us.
Why am I talking to you about this? It seems barely relevant to anything else I’ve emailed you about. Basically, I think that there are a lot of similar cognitive misfires happening, and I’m looking towards some sort of a guide for constructing philosophies with which we can better process future events, one that sidesteps or pushes back on these cognitive errors. I wonder if they are linked in some way to Lauren Berlant’s cruel optimism, but while these errors are all detrimental in some way, not all of them present as positive aspirations. Generational essentialism is emblematic of lazy, dysfunctional and very popular forms of critical thinking, and I don’t really believe we can leave space for low-effort analyses like this in 2024, or in the years to follow. Why?
There’s a bunch of elections this year. The stat that’s been doing the rounds is that over 50% of the world’s population will be eligible to vote, although I can’t find a reliable source for that number and it looks like a more reasonable statistic is something like two billion to three billion people2, including in India, Pakistan, Indonesia, Mexico and Russia, as well as a potential USA/UK double feature. The salient point is that it’s the election year to end all election years, and there are even a couple of places where that’s not literal.
This is fun because while we’ve all lived through plenty of times recently that we might label as turbulent, rarely do we get such foresight of a guaranteed turbulent year. The perceived threat posed by advancing technologies such as AI is going to be spectacular, and I mean that in a very literal sense: for much of the past year, news commentators, think tanks and billionaires have had plenty to say about the coming storm.
I think the threat of the technology itself may be somewhat overestimated, at least in relation to the turbulence. Many thousands of words could be (and have been) written examining interested parties such as Musk or Altman, who loudly advertise their technologies as intensely, unfathomably powerful and who also have a vested financial interest in their technologies being deemed too dangerous for users to access, unless they’re accessed through a paid subscription service that their company just so happens to provide.
More importantly, in popular discussions of tech there’s a conflation occurring: technologies at the peak of their hype cycle are confused with threats of harm that existed long before those technologies did, and that continue to persist through them.
For example, much was made of the World Economic Forum dropping their Global Risks Report 2024 in January, predicting Misinformation and Disinformation as the most severe short-term global risk to the world. This makes sense, and it could well be true, but the WEF waste no time fast-tracking to what they pitch as the root cause: the looming threat of AI. Nothing looms like a threat!
Misinformation and disinformation is a new leader of the top 10 rankings this year... Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years. Falsified information could be deployed in pursuit of diverse goals, from climate activism to conflict escalation.
and further:
Over the next two years, close to three billion people will head to the electoral polls... The presence of misinformation and disinformation in these electoral processes could seriously destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes.
There’s nothing false about either of the above quotes but they suggest the WEF is suffering from a deep amnesia, as though the past decade or more in politics wasn’t also defined by dis- or misinformation (lest we forget, Cambridge Analytica were harvesting data over 10 years ago). The use of a more novel term like synthetic content implies that it’s only thanks to AI that we’ve just invented lying. I’m not denying the role AI will play in misinformation – AI accelerates the lying, allowing it to be performed quicker and more efficiently – but I think it’s crucial to determine what’s a legitimately new paradigm versus what’s an evolution of something that’s already an issue and has been for a while. Biden AI robocalls could just as easily have been performed in 1974 as in 2024 (you don’t really need AI to impersonate a voice in a recording), but even more complex scams like Company worker in Hong Kong pays out £20m in deepfake video call scam aren’t new; maybe all you need is a silicon mask.
The implication that these are simply new problems that come from new tech is extremely dangerous, pointing us away from the important work that needs to be (and has been) done in tackling the structural processes that cause or affect the creation of trust or mistrust in specific contexts. Instead, the new problems force us to be distracted with wrangling the minutiae of the crass functions of a piece of tech. It would be less helpful to pursue a silicon mask ban than it would be to pursue an understanding of how and why trust was being established by the medium of an insecure video call to begin with.
Max Read has some great writing on this, including this piece from a year ago ("A.I. doomerism is A.I. boosterism under a different name"). Even better is How Do You Spot a Deepfake? It Might Not Matter – nearly five years have passed since it was written, yet Read’s cautions against doom-laden proclamations of the same looming threat of AI back in 2019 are far more insightful than much of what is being published today:
Most people determine the authority or veracity of a given video clip not because it’s particularly convincing on a visual level — we’ve all seen mind-bogglingly good special effects — but because it’s been lent credibility by other, trusted people and institutions. Who shared the video? What claims did they make about it? Deepfakes have a viscerally uncanny quality that makes them good fodder for panic and fearmongering. But you don’t need deepfake tech to mislead people with video.
This is what I mean when I talk about tackling the structural processes that affect the creation of trust or mistrust. You don’t need AI in order to tell a good lie, you just need to create the right context for it to be believed. In this way, the content of a piece of misinformation and the technology used to create it are secondary and tertiary. The primary concerns are who hosts it, who promotes it, and where or how a viewer sees it.
I can’t help but think that the Davos men at the WEF wouldn’t make themselves such easy targets for populists3 if they dared to identify the persons and corporations who distribute and enable the means of mis- and disinformation, rather than just making generic gestures towards nondescript “threats” detached from culpability. The launch of an online misinformation platform is creatively illustrated as a threat in the same way one might describe something as unpreventable and blameless as an earthquake. As the Global Risks Report 2024 so sagely predicts:
The capacity of social media companies to ensure platform integrity will likely be overwhelmed in the face of multiple overlapping campaigns.
…blithely failing to note that Elon Musk’s X has consciously and intentionally diminished its own capacity to retain integrity in its platform, and X is by no means alone in doing so. “Capacity being overwhelmed” projects a passive non-culpability onto platforms that purport to offer truth to their users while making little effort to build integrity into their platforms.
With this in mind, I think that there are more pressing and deeply concerning issues when we’re talking about misinformation than just our increased technological capacity to tell lies: for example, the massive expansion of informationally dysfunctional platforms which multiply the distribution of misinformation.
Anecdotally, I wonder if events throughout 2024 will begin a substantial dismantling of much of the blind faith that we put in platform-distributed information, and I suspect our tendency towards awarding trust to undeserving actors is one of the cognitive errors we discussed earlier. It strikes me that there’s little point in facetious initiatives like watermarking AI images – even if practical and enforceable – if both platforms and users are apathetic towards verification.
Sorry it took me so long to get round to the point here! These issues are what I’m planning to make the focus of my studies for the rest of this year. So if you hated reading this, I’m really sorry but there’s probably more to come 😭
Great writing from Rachel O’Dwyer on The cruelty of crypto in its promise to revive the American dream. I can’t remember when or why I suddenly started listening to this one 2017 Kraftwerk live album obsessively but 3-D The Catalogue has scratched an itch the past few weeks. Something’s gone wrong in my head and I can’t get enough of this unsweetened pea protein milk, it’s wild in porridge. I’ve gotten into the habit of watching films at the cinema with almost no prior knowledge of them, and since I hadn’t read Strangers I went into the cinema for All of Us Strangers oblivious to how devastating the film would be, rather than the wistful sad gay love story I was expecting. It was a good surprise! I’ve been into Masters of the Air on Apple TV, it’s nice to see Barry Keoghan not playing a creepy dweeb. It’s taken about 5 months but I’ve made it to the final season of Seinfeld, which I had never watched before, enjoyed a lot, but now it’s dragging. I haven’t read any books!
I’m working on a browser extension which will rewrite BBC news articles with a subtle political bias, I’m not sure if it’s clever or dumb but I’ll share it when I’ve got more to show.
from Nathan
Happy new year! I switched my entire mind off for over three weeks. I began writing the below item in October but I’ve bumped it from every email since then because it was a mess. I need to move on to the things I should actually be studying (which I’ll tell you about in the next email) so I’m writing the below to banish the hex and purge it from our conversation.
Twitter’s not out for 2024, it was already out in 2023. What is out for 2024 is talking about Twitter. The platform’s declined almost completely in real relevance (anecdotally, X hasn’t replaced Twitter in the lexicon simply because we’re not talking about the site’s content anymore) and notably even journalists have finally understood that reporting on tweets is simply reporting the opinions of the relatively small subset of people who are users of X, and not equal to reporting on, as they say, the vox populi.
With this in mind, I’m gathering some closing thoughts, some little threads to pull on, about Twitter and the era of the internet it helmed which is now coming to a close. With any luck this means that we never need to talk about it again. 🙂
First thread: The Verge dropped a series about the death of Twitter at the end of 2023. It’s a good series actually, despite (or perhaps because of) The Verge and its parent Vox historically being some of the worst offenders when it came to valorising Twitter as the mouthpiece of all of humankind. In a podcast accompanying the series, Alex Cranz offers an analogy of Twitter as going to a coffee shop, in which both your lowly self and luminaries such as Nicki Minaj and Piers Morgan have all converged:
It sounds like a cool-ass coffee shop, but we're all talking at the same volume and also there's a hundred thousand other people all talking at the same volume in the same coffee shop. And we realized, wait a minute. That's actually too much.
I wouldn’t really argue with this, but I don’t think it goes far enough to explain what was really wrong with Twitter. The issue is not just the visible – that everybody is in the same place and shouting – it’s the invisible, unsatisfiable premise behind Twitter’s proposal that our online voices are both democratised and exceptional. Twitter’s appeal wasn’t just that Nicki Minaj, Tucker Carlson and Piers Morgan were talking to each other, it was that you, the common user, could join them; Twitter proposed to elevate your status to the exceptional. Unfortunately this necessarily also involved elevating everybody else’s status, making all of our involvement feel a bit unexceptional. Much like the paradoxical incentivisation of late consumer capitalism, the carrot of egalitarian access to something is propelled forward by the stick of things being delineated as worthless if they’re egalitarian.
So we got blue-tick verification and algorithm-driven feeds to help demarcate the exceptional, chased by algorithmic incentivisation to feel like you’re joining democratised conversation, and round and round it went trying to satisfy the eternally unsatisfiable premise. In the end, the only way to sustain (and compensate) your voice on Twitter was to transition to membership of the established commentariat, something that the Verge writers note with some pride that they and their peers did, converting their online followings into employment at new- and legacy-media organisations.
A second thread: It’s important to certify for the post-social-media posterity that the most frustrating elements of Twitter and similar platforms were UX pathways purposely engineered to lock users into cyclically frustrating loops, which always incentivised and rarely rewarded. Complementing the unsatisfiable premise was an unhinged descent, where the algorithm would focus attention and audience toward a topic, narrowing engagement towards extremes and pushing content ever further and deeper into what came to be called the rabbit hole, a well-documented phenomenon.
A third thread, which takes the first two threads and intertwines all into a short piece of string: The most insidious results of the premise and the descent come from how successfully Twitter and similar platforms misrepresent context. My suspicion is that context, rather than content, is the hinge along which platforms’ powers rotate. It is one thing to provide a soapbox to a user, and it is another thing altogether to simulate a roaring crowd cheering them on while they preach. The 2021 US Capitol rioters might have looked absurd in the hours and days following their attempt at insurrection, as you were inevitably distracted by the content of fascist placards, Trump tweets and wolf costumes and might disregard their motivations and behaviours as simply maniacal. Yet understanding the context the rioters existed within (and how it differs from your own) offers a sympathetic rationality that’s otherwise too easily missed: many rioters likely held an honest belief that they (and their Twitter feed) represented the vox populi of the US, that this was their Beer Hall Putsch, the spark of the revolution, that this is what everybody is thinking.
Of course a year from now that may well be reality in the USA, but three years ago on 7th January the crowds drifted from Washington DC, and the established political machine whirred on, rattled but largely undeterred. The march on the Capitol was significant mainly for its participants’ specific understanding of social reality, a perception not yet shared by their compatriots but formulated by the engineered context provided to them through their particular corner of the internet. The power of a platform like Twitter is not in the (rather pedestrian) act of showing someone a controversial viewpoint, but in its incredible ability to alter the context of that viewpoint.
In semiotic aesthetic theory, we might say that Twitter did not directly alter the content it hosted, but it changed how and why it was seen, and thus altered its sign value, or its aura. So we know that bolstering a manifesto by worldbuilding around it isn’t new, nor is it exclusive to social media. From We are the 99% to In our thousands in our millions to Where we go one we go all, we might try to position a thesis as important, central, crucial, immediate and popular. Twitter deigned to worldbuild a context for each of us, in which we might feel a democratised sense of importance, centrality, crucialness, immediacy and popularity, even if what we were posting was niche or horrendous.
Sharply yanking on the short string of intertwined threads and ruining your clothes: Maybe these places (Twitter, Instagram, TikTok, et al.) are terrible platforms for the things they purport to catalyse: the distribution of information, the democratisation of voices. They’re so poor that if you wanted to create a paradigm which would surreptitiously marginalise voices and restrict information spread, this is roughly how you’d design it. As such I feel sceptical towards our use of these platforms for anything other than the lighthearted or inane; it’s difficult to see a version of their use which doesn’t consent to the platforms building false contexts for and around us, and which doesn’t constitute sponsorship of the platforms themselves and their governance of our conversation.
The thrust of this email is that I’d like us to be cautious when we’re implored to engage in the dubious process of posting-as-praxis. When we create content, we also create context, and when we create contexts for platforms that we don’t own and don’t control then we voluntarily give up personal and collective agency. Or in the words of Audre Lorde, What does it mean when the tools of a racist patriarchy are used to examine the fruits of that same patriarchy? It means that only the most narrow perimeters of change are possible and allowable.
This bit of writing will relate closely to the last, but after finishing that long-winded tome with a neatly punctuating quote and getting all hyped up to smash that send button, I saw that Casey Newton of the excellent Platformer has just announced he’s moving the entire newsletter away from Substack to a similar platform called Ghost. Platformer has spent the past few weeks embroiled in one of those internet Nazi platforming sagas, after a piece in The Atlantic late last year found that the site is a comfortable home for a collection of white supremacist and anti-semitic newsletters.
The implications of this, especially as Newton interprets them, are intriguing and relevant to the discussion we’ve just been having (is it really a discussion if it’s just me talking? no) and pertain particularly well to defining an approach to free speech that makes sense for online spaces.
I’ll try to summarise the situation as briefly as possible: Substack aimed to be hands-off and avoid censoring the content they host, before backing down and banning some of the newsletters. While it sounds sketchy, this isn’t an unusual approach for hosting providers, who – being essentially website landlords – don’t want to become locked in the messy, expensive and unavoidably subjective process of moderating what their tenants say and do unless the content is explicitly illegal (and therefore actionable, at their liability).
The problem is that Substack is no longer simply a web hosting provider. Casey himself notes that Ghost, the service that Platformer is moving to, is almost definitely used to publish content similarly offensive to Substack's. The crucial difference is that unlike Ghost, Substack recently underwent a transition in scope, building in engagement-driving social-media-like features like algorithmic recommendations, social feeds, and user interaction. As such, Substack stopped dealing simply with hosting content, and now creates and alters the contexts of that content. When Substack recommends that you follow a Nazi newsletter they obviously become an active advocate for the content of that newsletter, but further: they create new contexts for it, suggesting perhaps that lots of other people follow this newsletter or maybe the content of this newsletter is not objectionable.
To drag us back to that coffee shop analogy: Kanye West is in there talking to David Duke, which is bad enough but now the owner’s just put a new sign up renaming the place to The Cool Guys Having Good Ideas Café. Anyway while I’ve been writing this, it looks like Paris Marx has also just moved Disconnect to Ghost. I hope your Friday night is as exciting as mine is!
Obsessed with Susumu Yokota’s Song of the Sleeping Forest, which I heard on this great radio show on Radio Vilnius. That’s me in the comments asking what the song is.
I’m sure I should have read it a long time ago but I just read Ursula K Le Guin’s 1986 essay The Carrier Bag Theory of Fiction, which hit me sideways and is the first thing in years which has got me interested in narrative work!
I saw Poor Things, which is excellent; I especially loved Mark Ruffalo's impression of Stewie from Family Guy, and Willem Dafoe touring the entire British Isles with every sentence he speaks.
from Nathan
I'm writing up notes in order to present the project I sent in the last email, but I'm having a hard time linking them together into something cohesive, so I'm going to dump an attempt below. Honestly, here we're scraping the barrel of what energy I have left to mentally engage with anything before performing some kind of festive mental disengagement, so please bear with the fact that this frequently descends into nonsense. Also in this email: some music, film, shows.
So let’s run through a timeline of how the piece came to be. I reckon it would be helpful to talk through where we began, where we’re at now, and where we want to be in the future.
Before starting this work, I'd spent a few years working in news media, an arena in which trust and authenticity are crucial values, and I've felt a growing awareness of the increasing difficulty of assuring them for an audience. In his (extremely) recent book The Eye of the Master, Matteo Pasquinelli talks of a "dimensionality explosion" of data, that is: data are not just becoming more numerous, but increasingly complex, consisting of multiple dimensions.
Artificial Intelligence is often touted as a response to the crisis this causes for human interaction with data, because AI is capable of parsing complex data and generating something that appears novel from it. AI is simultaneously touted as saviour or satan, depending on who you ask: either AI creates trust by successfully and accurately concatenating the dimensionality explosion into legible value, or it destroys trust by flooding the dataset with flawed, plagiarised, or purely hallucinated information.
So my question has been: assuming that neither pole of the dichotomy is wholly true yet both are valid – that AI cannot be trusted to ascertain objective truths yet still holds potential utility – then what engagements with it might we explore that rely on neither? More succinctly:
What engagement can we have with AI that is neither a naive faith in the tech, nor is just a dogmatic dismissal of any potential?
So, rationalisation established, I designed a speculative object: a wearable device powered by AI, which would alert its user any time they said or heard information that was false. Speculative because it’s faulty in its basic premise: the AI models that power it are flawed and biased, generated from flawed and biased human datasets, and it can’t reliably assess reality.
I know a bit of JavaScript, HTML and CSS – the basics – but I'm no developer, so the first iterations were disgustingly crude. Feel free to skip all the technical descriptions that follow; if you're a developer, however, I hope you read them and melt.
I built three modules: a Speech-to-text interpreter, an app which fed that to a large language model to evaluate it, and a bridge between the AI and the smartwatch. I knew that Mozilla's DeepSpeech speech-to-text model existed so without much thought I jumped right into it, grabbing some demo code from its github repo which included voice activity detection. It was written in Nim, which I had no familiarity with, but iirc the Node.js version's dependencies were broken and I felt unnervingly optimistic at this point. I found a command-line tool, Ollama, which could pull various open-source LLM models and deliver responses. I bought a Bangle.js smartwatch, which I figured I could probably send some data to, though I didn't really know how. I needed to bridge these three modules, Websockets sounded complicated so I decided to just shuffle text files between everything (no, really). I used ChatGPT to write some Nim code to export voice transcripts as .txt files and compiled it. Ollama had an API but that sounded complicated, so I wrote bash scripts to watch the folders that DeepSpeech dumped .txt files in, read the text and run it through Ollama on the command line, telling the LLM to evaluate the text to either FALSE or TRUE, and then writing another text file to another folder. I found I could use a javascript library to transmit lines of code to the smartwatch, so I used ChatGPT again to write a javascript app which (you guessed it) watched a folder for .txt files, and bounced the TRUE or FALSE message through to the watch via Web Bluetooth. There were a lot of hiccups on this journey: Ollama only ran on Linux so I was running it on Windows Subsystem for Linux, but WSL can't access microphones or bluetooth, so the DeepSpeech transcription had to be done in Windows and sent through to the VM, then back to Windows to go to the watch. 
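As a rough illustration – with names and prompt wording of my own invention, not the actual scripts – the whole text-file relay boiled down to something like this, where `run_llm` stands in for the bash call that ran the transcript through Ollama on the command line:

```python
from pathlib import Path

# Hypothetical prompt wording – the real scripts phrased this differently.
PROMPT = "Evaluate the following statement and respond with only TRUE or FALSE: "

def evaluate_transcript(text, run_llm):
    """Ask the LLM to judge a transcript, coercing its reply to TRUE or FALSE."""
    reply = run_llm(PROMPT + text).strip().upper()
    return "TRUE" if reply.startswith("TRUE") else "FALSE"

def relay_once(inbox, outbox, run_llm):
    """Consume each transcript .txt dumped in the watched folder and write a verdict file."""
    outbox.mkdir(exist_ok=True)
    for txt in sorted(inbox.glob("*.txt")):
        verdict = evaluate_transcript(txt.read_text(), run_llm)
        (outbox / txt.name).write_text(verdict)
        txt.unlink()  # remove the transcript so it isn't re-processed
```

In the actual setup this logic lived in bash, and a separate JavaScript watcher then bounced the verdict files through to the watch over Web Bluetooth.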
Also DeepSpeech is basically dogshit now and I didn't realise it had been abandoned by Mozilla a few years ago, superseded by various other AI STT and TTS models. Lastly, I didn't want to move my PC all the way across London to the studio to demonstrate this setup, so at some point I symlinked Dropbox folders (actually, this isn't possible, but I did something equivalent yet more obscene with PowerShell scripts to achieve the same result) so that the speech transcription could be done on a little laptop remotely, processed at home, and sent back to the laptop and the smartwatch.
This is the worst thing I have ever done. In retrospect I am honestly astonished that it worked at all, but it did. I built a cursed object, working in a cursed fashion. Yet this technical drama wasn’t really the source of the dread that I felt through this entire process, it was the knowledge that what I was making was fundamentally built on a flawed thesis and there was a real possibility that I was putting a lot of time and effort into what might be received only as an earnest attempt to create a theoretically indefensible AI-powered lie detector, rather than something critical.
However this is where the speculation could begin: with a working, designed provocation. Using this device I could tear it to pieces, experience its flaws and engage with its tangible effects.
I refined it more over the coming weeks: everything moved to Linux; the modules shared information over WebSockets; the ageing and ineffectual DeepSpeech model was swapped for Whisper; much of the code was rewritten in Python. The device itself shifted focus from true/false declarations to delivering scored estimations of accuracy, as well as contextual explanations.
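To give a flavour of what that shift looked like in code – this is a sketch, and the response format is my assumption rather than the project's published protocol – the app now had to parse the model's reply into a score and an explanation rather than a bare boolean:

```python
import re

def parse_assessment(reply):
    """Extract an accuracy score (0-100) and an explanation from an LLM reply.

    Assumes the model was prompted to answer with lines like
    'SCORE: 72' and 'EXPLANATION: ...'; falls back gracefully when it doesn't.
    """
    score_match = re.search(r"SCORE:\s*(\d{1,3})", reply, re.IGNORECASE)
    expl_match = re.search(r"EXPLANATION:\s*(.+)", reply, re.IGNORECASE | re.DOTALL)
    score = min(int(score_match.group(1)), 100) if score_match else None
    explanation = expl_match.group(1).strip() if expl_match else reply.strip()
    return {"score": score, "explanation": explanation}
```

The fallback matters more than it looks: an LLM that ignores its own output format is exactly the kind of glitch event the work is interested in surfacing.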
Always a conflicted development process, pretending, working facetiously to make something wrong work better without trying to make it work right: the only way to make it right would be not to make it at all.
So this became the most crucial element of the piece. Not to get someone to use the device, but to engage an audience in the long and critical process of speculation. As such, whoever engages with the work is not a user so much as they are a performer or a collaborator with the work, involved in the process, playing with it and examining it. The device won’t and can’t ever be completed because its basic premise (that AI can establish objective truth) is flawed, and it exists only to exhibit its basic dysfunction.
It’s for this reason that Sparring Partners is published as an open-source, shareable and modifiable exercise: the only way to engage with the work is to speculate upon it; and the landscape against which it is evaluated is changing rapidly as new AI technologies are launched, new models are trained, new and earnest interactions with human users are invented.
The work decays, constantly and violently. If it is to have any future beyond an immediate existence in a narrow contemporaneity, the only way it can remain active is by regularly altering its provocations to take aim at itself once again.
One criticism I've received of the work is that it's perhaps too committed to the façade of an objective observation, and I'll admit, especially without exposition like the above, it does in part conceal my opinions on AI. This is an instinct – I try not to allow too much colour into my voice – that comes partly from my experience in news, which demands neutral aesthetics. Primarily, though, it's because I feel a good audience is least receptive to invocations if they feel like they're attending a sermon. But maybe that's just me! What do you think? If you have criticism I'd love to hear it!
I (finally) saw the UVA show Synchronicity at 180 Strand last night. It’s extremely well-made work, thick with immersive atmosphere and perhaps pleasingly thin on unsubtle narratives like we’ve just talked about. That said, holistically the works don’t congeal into the Jungian thesis that it claims to embody; the show’s claim to exhibit “meaningful coincidences that cannot be causally linked or adequately explained by scientific rationality alone” feels ironically like a rationalisation. Nevertheless, individually each of the works is compelling, and refreshing in its independence of its media. You won’t find a single 16:9 monitor or a 4k video projector or Yamaha HS10, you’ll find media tailored to its function within its space, made entirely out of materials that feel bespoke.
I also saw Saltburn recently and really enjoyed it. I reckon chiefly because I knew absolutely nothing about it ahead of time, I hadn’t seen the trailer or read any reviews, and that seemed to be to my benefit. If the film’s divisive, it seems to be over its inability to fulfil whatever expectations much of the audience walked in with: a tale of working-class revenge on upper-class elitism, or a shocking and violent psychological thriller. It’s neither of these things; to me it felt extremely signposted as a weird dark comedy and I laughed through all of it.
I’ve listened to that Björk/Rosalía song more times in the past month than I listened to anything that appeared in my Spotify Wrapped in 2023. I keep listening to Dark Star by the sleepover disaster just for the outrageous guitar solo. It’s embarrassing that the album is called Albion but the last two tracks of this new album from the guy who used to be in Midlake are nice, happily accepting recommendations for more pastiche pseudo folk if you have them.
from Nathan
For this email I’m just sending through a video and my writing around a recent piece of work I put together called Sparring Partners. You can view a video of the performance here, and below is some written justification. I’ve got another piece of writing discussing the timeline and processes of putting it together that I’ll send you in a few days, right now it’s waiting for some editing (that it desperately needs).
Sparring Partners provokes human/machine interaction as a collaborative partnership in the process of constructing realities, through an audiovisual performance: an artificial intelligence plays an improvisational actor, interrupting, challenging and altering the outcomes of the exercise.
Sparring Partners encourages user(s) and audience(s) to interrogate the technological activities of artificial intelligence and machine learning, to consider the parallel emergence of vast datasets of subjective information generated by humans, and question what role these technologies play in constructing a sense of objective reality for their users. The work aims to instigate an exploration of alternative, more provocative frameworks for interaction with AI-based tools.
Fundamentally a shareable exercise, Sparring Partners is expressed through a semi-improvised performance guided by a script designed to provoke engagement and critique through the participants’ own explorative process.
Central to the exercise is the Personal Verification Device, a fully functional wearable piece of speculative design based on a Bangle.js smartwatch. Though envisioned as a future standalone object, the device is powered by a nearby PC with microphone input, running a custom application which bridges an AI speech-to-text model with a customisable AI large language model tailored with system prompts instructing it to act as an assessor of factual accuracy. The application runs locally, offline; the bespoke code has been published open-source and its dependencies are open-source AI tools such as Whisper, Ollama, and Mistral-OpenOrca. The initial development of this device allowed for it to be tested and explored and instigated the creation of Sparring Partners as a guided framework for its use.
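For a sense of how such a bridge might address a local Ollama server – a sketch under my own assumptions; the published open-source code may structure this differently – the system prompt and a transcribed statement could be assembled into a chat payload like this:

```python
def build_request(statement, model="mistral-openorca"):
    """Assemble a payload for Ollama's local /api/chat endpoint.

    The system prompt text here is illustrative, not the one the work ships with.
    """
    return {
        "model": model,
        "stream": False,  # request one complete reply rather than streamed tokens
        "messages": [
            {"role": "system",
             "content": ("You are an assessor of factual accuracy. For each "
                         "statement you hear, respond with an accuracy score "
                         "from 0 to 100 and a one-sentence explanation.")},
            {"role": "user", "content": statement},
        ],
    }
```

Running locally and offline, as the work does, just means POSTing this payload to the Ollama server on localhost rather than to any remote API.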
As a user-performer wears the device and speaks aloud, the accuracy of their statements is evaluated by the AI, which broadcasts its assessment to the wearable device. Each evaluation flashes an alert on the device and sounds a notification reflecting the tone of the response.
The user-performer reads a short three-act script guiding them through spoken thesis statements as well as stage directions for audience participation and branching improvisations. These activities encourage participants to feed arbitrary, subjective, and/or nonsensical data to the AI, a process which may reveal the AI's judgement as flawed or subjective, and encourage the emergence of glitch events.
The massive amounts of data which populate the environments we navigate are increasing, not just quantitatively but in a “dimensionality explosion” of complexity and incomprehensibility1. Algorithm-driven digital machines respond to this crisis, organising and processing data and in doing so creating tangible realities for their users2. Many platforms have been or are being developed which aim to utilise artificial intelligence to verify and categorise data automatically and passively with little effort on the part of the user, with varying degrees of success. Recently, in its own announcement video, the wearable Humane AI Pin made potentially harmful factual errors3. Though often attributed simply to the infancy of the technology, instances of bias and hallucination in fact point to underlying issues with the human-generated datasets that power the tools4. Rather than represent an objective reality of AI, the artist will find more utility in either reproducing or critiquing the limits of the dataset5.
While errors cast doubt on the utility of AI platforms in determining accurate realities, Betti Marenko describes these glitch events as a subversive portrayal of “the machine caught in the act of revealing itself”, signalling an unknowable digital potential. Error-prone AI interaction affords utility through glitches, which instigate creative innovation and engage conceptual critique6.
Staged performance as an exercise emphasises an ecological understanding of AIs as non-human actors interacting within networks of human actors; their performativity affords a power to bring about new situations through what they say and do7. Through performance, AI interaction is framed as collaboration: a form of play or improvisation, confounding intentions. Collaboration in theatre can create “a mis-seeing, a mis-hearing, a deliberate lack of unity”8; thus, the emergence of glitch events which break the tangible realities created by the machine and stage a new and altered understanding.
from Nathan
Before we get started: if you’re in London then I’m doing a live performance of a new semi-improvised AI-adjacent work called Sparring Partners this Tuesday (21st November) at Iklectik, from 7.30pm. It’s free but you need to register for a ticket!
There’s absolutely nothing in this item that is groundbreaking or even all that interesting, it’s more or less simply a very dry justification for my shifting online habits that I’m writing for myself. How I relate to the internet is changing, in terms of how I situate myself against it and the other people using it, and I don’t seem to be alone in this. For me this shift has been happening gradually for a year or two, but as a macro trend it’s been going on longer, as Ian Bogost (of Unit Operations fame) infamously laid out around this time last year in The Atlantic / (Non-paywalled version). If you didn’t catch it back then, I’d recommend reading it now. It’s not without its critics, and how much you agree may depend on how rose-tinted the lenses through which you view the old internet are; Rob Horning for example claims that the “golden age” of the internet is mischaracterised and social platforms were always a cynical commercially-driven endeavour. While that’s certainly true and we shouldn’t valorise Web 1.0 tech companies simply for being quaint, I do suspect that Horning’s lack of nostalgia here maybe belies a past in which he didn’t participate in deeply earnest online communities the way I did on Facebook or Taylor Lorenz did on Tumblr.
Regardless, the salient point is that posting is so over, and while we still see a degree of posting across Instagram, X and even Tiktok, what I’d call the classical era of posting is in decline, and this shifting relationship to how we create, consume and distribute content is thanks to the collapse of the singular context. I’ve really been enjoying this journey out of the rituals but I hadn’t been able to articulate where I’d come from and where I was going until I recently found this great clip from later which nails it (and criminally has only 1k views!):
HOW THE WEB IS CHANGING – YouTube / Invidious version
To summarise: sandwiched between what we could imagine as the Dark forest of the big social networks: "the mainstream web that we all know, the one that's largely an economy of surveillance capitalism, it's Instagram, it's Twitter, it's the internet we all get anxiety about"
and the Cozy web of private networks and group chats: "it's all the stuff that actually makes communities cohere more and more / the richest online activity"
are the Digital gardens: "at the intersection of a notebook and a blog, where digital gardeners share seeds of thoughts to be cultivated in public"
(source)
I think we can broaden what counts as a digital garden a little1, but that’s where I’ve found myself recently, and I imagine you have too.
Email is appealing to me, not for any nostalgia or anemoia (I’m not that old) but because it leaves how you’d like to digest this information entirely up to you, if you’d even choose to engage with it at all. Read it on your phone or your laptop, copy/paste it to your notes app, pass it on, edit it, delete it: you have your own independent and interoperable copy of this text and it’s not locked to any platform or vendor2. Platforms like Tiktok, X or Instagram are frustrating when we want to talk because their goal is primarily just to keep us on the platform.
Imagine if I instead posted this email to Instagram: all the text would need to be embedded into fixed-resolution images or placed in an extremely lengthy image caption, and I couldn’t link to anything (like sources or further reading) outside of Instagram. Sharing it outside of Instagram or finding more information would be really difficult because you can’t copy or paste any text. Plus there’s no certainty when I post that you or anyone else would ever see it, and obviously if Meta goes down or deletes my account, it would vanish. There are hacks around some of these problems, and inevitably we use them, but there’s no denying that these platforms sacrifice helping us communicate well in favour of better returns for their shareholders, and our inability to communicate effectively, clearly and with full context is extremely harmful.
Of course, these platforms are often used not because they’re good at communication, but because they hold so much potential for extremely wide distribution. I don’t have anything to say that counters that, because it’s true! Nothing except Tiktok3 or YouTube will get you upwards of a billion views. Platforms like Substack or Patreon or anything on the Fediverse aim to fill this space a bit, and that’s great, but I’m not writing these emails to develop a following4. I’m just writing so we can talk to each other.
So, this is why I’m emailing you!!!!!!!!!!
Have a good weekend! Feel free to reply, reply all, or share. I’ll also echo anything from here on grpahicdeisgn.com.
from Nathan
I’m coming across a lot of information all at once, which is good, I’m enjoying it, we love the information. It’s always tough to process lots of information though, organise it, put it in a useful space in context with all the other information (which is why we’re trying to get machines to help us with it) and in the depths of writing lots of chaotic notes to myself I’ve conceded that for reasons both natured and nurtured, we need each other to do it. A nice way to quietly banish the sad myth of the great man is to embrace this: in isolation none of us can completely understand anything. In particular as we head into a future filled with vast recorded histories and greater volumes of accumulated data1, the prerequisite knowledge for really comprehending something becomes untenably complex. Comprehension might require you to be a thousand times smarter than the average human2, or it might require a thousand people sharing knowledge and engaging with each other. Whenever we understand something, it’s the result of lots of people forming a multiplex of nodes, sharing meanings and understandings and misunderstandings3 and fragments of knowledge.
So this is what I’m pointing to: we need to share information and engage with each other. So far so normal, very nice. Crucially though, it’s not just a dialectic but more broadly conversation. I differentiate the two in a sense that’s more spiritual than concrete, but a dialectic argument and its rigid roles (here’s my thesis, and here’s your antithesis) doesn’t really advocate for a complex, multifaceted network of ideas. It’s a methodical, iterative refinement of a script to its most efficient and efficacious result. But a conversation allows for all the nuances of ideas existing in proximity to others; maybe we agree, maybe we disagree, maybe something in between. Maybe we say nothing, the ideas simply hang in the air between us, and we learn something from what it feels like to have an idea hang in the air between us.
So this is what I’m really pointing to: we need to take information we discover and reposition it in relation to everything else that’s in our network, not just by a linear re-transmission to each other but by an active re-construction with each other. You could read a good book and give it to another person and you’ve retransmitted the information. Or you could read a good book with another person, meet up and discuss it, and reconstruct its meaning as it appeared to either of you. Maybe it’s even plausible that you could meet up and talk about something else entirely, but your shared experience/knowledge would construct meaning you wouldn’t unlock in isolation.
I don’t think we need to have literal conversations and meet face-to-face, or even necessarily converse with other or real humans to construct meaning in this way (but that’s something I’ll talk about in a future message). My hope is that putting disordered thoughts into a medium that affords the potential for conversation helps cohere them into something useful, even if you never reply. So far I think it’s going well, because when I sat down this evening and wrote the first sentence of this email I had no idea what I was trying to say.
So, this is why I’m emailing you.
Thanks for reading! I’ve written a second part to accompany the above which covers the Literal scope (why have I emailed you), but I don’t want you to mark me as spam just yet so I’ll send it next week.
If you don’t want me to email you let me know, or if telling me would make you feel awkward, filter your email by subject – I’ll always start with the same one ("email from Nathan").
I read Cory Doctorow’s The Internet Con: How to Seize the Means of Computation which is incomplete (perhaps by nature) in its proposed solutions but very clear and eloquent in its belligerences.
I watched Wargames, which is still great. I was interested by how wildly crammed the final conflict and resolution are. For example the linked scene, in which Professor Falken stubbornly holds to his trauma-induced nuclear fatalism before suddenly and inexplicably resolving it offscreen five minutes later, undergoing a polar character shift. Anecdotally I’d like to think that in 1983, the first 80 minutes were necessary exposition for the concept of a bleep-bloop robot computer run by a video game-modelled algorithm that can control nuclear warheads and which a teenager can talk to over a telephone line; something that might not need much explaining now.
Eras: The Beatles was a nice chill listen covering a rough history of The Beatles, a band who I never consider much thanks to most media concerning them being painful deification. This series introduces them as experiencing the highs and the lows of creative process and the pitfalls of fame and collaboration which are so familiar as to seem pedestrian, and it’s the better for it. Normal band
As tends to happen when I get wrapped into a long period of work and study I just listen to the same couple of things over and over. Recently these have been nearly everything by Starflyer 59, this one Troye Sivan track, anything by The Field that is monotonous enough not to notice, and whatever the algorithm provides after this Blanck Mass album finishes.
Tobias Revell is very insightful in this podcast on Creative Processes and Understanding Design, AI & Chat GPT – Comuzi: Next Billion Users Podcast
Have a good weekend! Feel free to reply, reply all, or share. I’ll also echo anything from here on grpahicdeisgn.com.
from Nathan
Searching for uses of the internet more worthwhile than I’ve experienced over the last ~5 years. Maybe a stab in the dark but trying out a blog format with activitypub integration, and the use of this dumb domain (which I like) for something.