A cute example of poisoning the well

Max Read’s piece also cites a funny example of Google’s attempts at factual and useful AI interactions rather than entertaining ones: asking Gemini for food names ending in “um.” Gemini returns fruit names with “um” creatively appended, like “Tomatum”, ignores the existence of plums altogether, and offers a plain coconut:

For my sins, I actually do use these AI answers fairly often when searching with Kagi, which has had the feature for a while now and does manage to provide very brief, useful summaries with links to sources. Kagi uses Anthropic’s Claude Haiku model, so I was curious whether it could handle the question any better, and at first I was surprised to find exactly the same result from a totally different AI model!

That was, until I realised it had gleaned its answer by crawling all the shared meme JPEGs of Gemini’s mistake: a stunning example of one company’s dodgy AI model managing to poison another’s. Kagi actually goes a step further by offering a regular algorithmic answer summary, which tells us that the Japanese fruit “Umeboshi” ends with “um”:

…which has unfortunately managed to scrape its info from Poe.com, which itself generates answers from LLMs. Ironically, we’ve got a little closer to a correct answer, because umeboshi translates to “salted Japanese plums”. Another stunning example, this time of some beautiful GIGO (garbage in, garbage out). Don’t worry, I’m sure this is a sustainable information infrastructure, and all we need here is more compute? Or for Amazon to spend a billion dollars training a single AI model? Everything’s fine.