here’s why (the looming threat of AI)
There’s a bunch of elections this year. The stat that’s been doing the rounds is that over 50% of the world’s population will be eligible to vote, although I can’t find a reliable source for that number, and a more reasonable figure looks like something in the region of two to three billion people1, including in India, Pakistan, Indonesia, Mexico and Russia, as well as a potential USA/UK double feature. The salient point is that it’s the election year to end all election years, and there are even a couple of places where that’s not literal.
This is fun because while we’ve all enjoyed plenty of recent times we might label as turbulent, rarely do we get such foresight of a guaranteed turbulent year. The perceived threat posed by advancing technologies such as AI is going to be spectacular, and I mean that in a very literal sense: for much of the past year, news commentators, think tanks and billionaires have had plenty to say about the coming storm.
I think the threat of the technology itself may be somewhat overestimated, at least in relation to the turbulence. Many thousands of words could be (and have been) written examining interested parties such as Musk or Altman, who loudly advertise their technologies as intensely, unfathomably powerful while also having a vested financial interest in those technologies being deemed too dangerous for users to access, unless they’re accessed through a paid subscription service that their company just so happens to provide.
More importantly, in popular discussions of tech there’s a conflation occurring: technologies at the peak of their hype cycle get confused with threats of harm that existed long before those technologies did, and that continue to persist through them.
For example, much was made of the World Economic Forum dropping their Global Risks Report 2024 in January, which predicted Misinformation and Disinformation as the most severe short-term global risk to the world. This makes sense, and it could well be true, but the WEF waste no time fast-tracking to what they pitch as the root cause: the looming threat of AI. Nothing looms like a threat!
Misinformation and disinformation is a new leader of the top 10 rankings this year... Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years. Falsified information could be deployed in pursuit of diverse goals, from climate activism to conflict escalation.
and further:
Over the next two years, close to three billion people will head to the electoral polls... The presence of misinformation and disinformation in these electoral processes could seriously destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes.
There’s nothing false about either of the above quotes, but they suggest the WEF is suffering from a deep amnesia, as though the past decade or more in politics wasn’t also defined by dis- or misinformation (lest we forget, Cambridge Analytica were harvesting data over 10 years ago). The use of a more novel term like synthetic content implies that it’s only thanks to AI that we’ve just invented lying. I’m not denying the role AI will play in misinformation – AI accelerates the lying, allowing it to be performed quicker and more efficiently – but I think it’s crucial to determine what’s a legitimately new paradigm versus what’s an evolution of something that has already been an issue for a while. Biden AI robocalls could just as easily have been performed in 1974 as in 2024 (you don’t really need AI to impersonate a voice in a recording), and even more complex scams like Company worker in Hong Kong pays out £20m in deepfake video call scam aren’t new either; maybe all you need is a silicon mask.
The implication that these are simply new problems that come from new tech is extremely dangerous, because it points us away from the important work that needs to be (and has been) done in tackling the structural processes that cause or affect the creation of trust or mistrust in specific contexts. Instead, the “new” problems distract us into wrangling the minutiae of the crass functions of a piece of tech. It would be less helpful to pursue a silicon mask ban than it would be to pursue an understanding of how and why trust was being established by the medium of an insecure video call to begin with.
Max Read has some great writing on this, including this piece from a year ago (“A.I. doomerism is A.I. boosterism under a different name”). Even better is How Do You Spot a Deepfake? It Might Not Matter – nearly 5 years have passed since it was written, yet Read’s cautions against doom-laden proclamations of the same looming threat of AI back in 2019 are far more insightful than much of what is being published today:
Most people determine the authority or veracity of a given video clip not because it’s particularly convincing on a visual level — we’ve all seen mind-bogglingly good special effects — but because it’s been lent credibility by other, trusted people and institutions. Who shared the video? What claims did they make about it? Deepfakes have a viscerally uncanny quality that make them good fodder for panic and fearmongering. But you don’t need deepfake tech to mislead people with video.
This is what I mean when I talk about tackling the structural processes that affect the creation of trust or mistrust. You don’t need AI in order to tell a good lie, you just need to create the right context for it to be believed. In this way, the content of a piece of misinformation and the technology used to create it are secondary and tertiary. The primary concerns are who hosts it, who promotes it, and where or how a viewer sees it.
I can’t help but think that the davos men at the WEF wouldn’t make themselves such easy targets for populists2 if they dared to identify the persons and corporations who distribute and enable the means of mis- and disinformation, rather than just making generic gestures towards nondescript “threats” detached from culpability. The launch of an online misinformation platform is creatively illustrated as a threat in the same way one might describe something as unpreventable and blameless as an earthquake. As the Global Risks Report 2024 so sagely predicts:
The capacity of social media companies to ensure platform integrity will likely be overwhelmed in the face of multiple overlapping campaigns.
…blithely failing to note that Elon Musk’s X has consciously and intentionally diminished its own capacity to retain integrity in its platform, and X is by no means alone in doing so. “Capacity being overwhelmed” projects a passive non-culpability onto platforms that purport to offer truth to their users while making little effort to build integrity into their products.
With this in mind, I think there are more pressing and deeply concerning issues in discussions of misinformation than just our increased technological capacity to tell lies: for example, the massive expansion of informationally dysfunctional platforms that multiply the distribution of misinformation.
Anecdotally, I wonder if events throughout 2024 will begin a substantial dismantling of much of the blind faith that we put in platform-distributed information, and I suspect our tendency towards awarding trust to undeserving actors is one of the cognitive errors we discussed earlier. It strikes me that there’s little point in facetious initiatives like watermarking AI images – even if practical and enforceable – if both platforms and users are apathetic towards verification.
Sorry it took me so long to get round to the point here! These issues are what I’m planning to make the focus of my studies for the rest of this year. So if you hated reading this, I’m really sorry but there’s probably more to come 😭
- Maybe some number wanging over age of majority, eligible voters etc.; either way it’s a lot of people, and more than usual ↩︎
- William Davies does a nice breakdown of how populism functions in claims like Jacob Rees-Mogg’s here, somewhere in this NS podcast. ↩︎