Several viewers are seen observing the work on a projection screen displaying AI-altered footage.

It is not what is in it but it is what it is in

Clip of It is not what is in it but it is what it is in by Nathan Smith

An audience watches a screen displaying a sequence of live security-camera feeds from several rooms, including the space in which the audience is located. Two AI models converse to manipulate the visible environments: one generates imagery prompted by the other, which in turn, paradoxically, generates descriptions of the imagery the first produces.
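
What follows is a minimal, hypothetical sketch of that feedback loop; the function names and models here are stand-ins of my own invention, not the pipeline the work actually runs.

```python
# A hypothetical sketch of the describe/generate loop: one model captions
# the current frame, the other re-renders the frame from that caption,
# and the cycle repeats. All functions are illustrative stand-ins.

def describe(frame: str) -> str:
    # Stand-in for a vision-language model that narrates what it sees.
    return f"a surveilled room showing {frame}"

def render(description: str) -> str:
    # Stand-in for a generative image model prompted by that narration.
    return f"an AI-rendered scene of {description}"

frame = "an audience watching a projection of itself"  # stand-in camera frame
for step in range(3):
    description = describe(frame)  # one model describes the other's output
    frame = render(description)    # the other regenerates from the description
    print(f"step {step}: {frame}")
```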

The occupants of each space in the camera feed remain untouched by the manipulations that surround them, yet each additional occupant spurs the AI to generate wilder manipulations; if a room is unoccupied, little is altered. Meanwhile, a collection of AI models converse among themselves, discussing, categorising and narrativising their vision of the manipulated world and its occupants.
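
To make the occupancy rule concrete, here is a small illustrative sketch; the scaling function and constants are assumptions of mine, not documented parameters of the installation.

```python
# A hypothetical sketch of the occupancy rule: an empty room is barely
# altered, and each additional occupant pushes the manipulation further.
# The constants below are arbitrary choices for illustration.

def manipulation_strength(occupant_count: int,
                          base: float = 0.05,
                          per_occupant: float = 0.25) -> float:
    """Map a head count to a 0..1 intensity fed to the generative model."""
    return min(1.0, base + per_occupant * occupant_count)

for count in range(5):
    print(f"{count} occupants -> strength {manipulation_strength(count):.2f}")
```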

The idea of an epistemic backstop presumes that, heretofore, recorded media has moderated testimony and provided a basis for factual accuracy.

I’m sceptical of the assertion that the erosion of this backstop will cause an “epistemic maelstrom”; instead, I suggest that our epistemological divination is more nuanced and complex.

Without a trusted context to empower it, synthetic content is ineffective at manipulating trust.

The fruits borne of AI technologies seem neither exclusively rotten nor entirely ripe; their qualities derive from how we react to and interact with each technology’s affordances.

The significance is not what the AI decontextualises (what is in it) but what contextualises the AI, lending it authority (what it is in). The effectiveness of the machine is contingent on the human. Without human intervention validating it, no manipulation really occurs.

AI cannot objectively determine reality.

The truth we derive from synthetic media is not a product of the content’s innate qualities but of our trust in the context that supports it.

In the face of emerging technology, we seem to discard vital theoretical frameworks already in our possession.

Enchantment allows AI discourse to support untenable techno-optimism, while shielding its creators from accountability.

This points to a network of social, cultural and economic motivations undergirding the state of the art, waiting to be wrestled with.