Slopback: A Storm in a Milk Carton

I recently wrote at length about the historical context and the moral and ethical reactions to synthetic content, particularly the low-quality variety colloquially known as "slop".

https://iain.so/ai-slop-psychology-history-and-the-problem-of-the-ersatz

Over the Christmas period, there was an interesting storm in a teacup (with two handles?) when tech blogger John Gruber published two posts accusing Apple CEO Tim Cook of sharing “AI slop” on Twitter/X. The image in question, a whimsical illustration of milk and cookies promoting Apple TV+'s Pluribus, was created by established artist Keith Thomson.

Keith Thomson Pluribus image for Apple

What followed was a textbook case of what I'm calling "slopback": a reflexive, evidence-free accusation of AI-generated imagery that reveals more about the accuser's assumptions than about the work itself. Gruber's posts demonstrate how the legitimate concern about AI-generated content has metastasised into something far less useful: a witch-hunt mentality that now threatens the reputations of working artists.

The First Post: Accusation as Headline

On 27 December, Gruber published a post titled “Tim Cook Posts AI Slop in Christmas Message on Twitter/X, Ostensibly to Promote 'Pluribus'.”

https://daringfireball.net/linked/2025/12/27/slopibus

The title construction is interesting; this isn't “Does Tim Cook's Image Look Like AI?” or “Questions About Cook's Christmas Post.” The headline presents the accusation as an established fact. So we presume we’re going to see some, you know, actual evidence.

It turns out that Gruber's case rests on several subjective observations about the image:

The soft-focus tree with a crisp edge. This is a standard technique in both photography and illustration. Selective focus with defined edges appears throughout Thomson's portfolio. It's not evidence of AI; it's evidence of artistic choice.

The milk carton labelled both "Whole Milk" and "Lowfat Milk." Gruber found this damning. He later added an update acknowledging that the actual props from Pluribus have exactly the same labelling. But rather than reconsidering his thesis, he dismissed this as "a stupid mistake to copy", when, in fact, the image is simply an accurate reflection of the show's props.

Furthermore, milk cartons are central to the plot of Pluribus. In Episode 5, "Got Milk," protagonist Carol Sturka investigates mysterious milk cartons that the hive-mind Others consume. The conflicting labels aren't a mistake to copy; they're a deliberate, calculated reference to the show's core mystery.

The "Cow Fun Puzzle" maze. Gruber writes that he "can't recall ever seeing a puzzle of any kind on a milk carton" and suggests this conflates milk cartons with cereal boxes. This is simply a failure of memory or imagination. Mazes and puzzles have appeared on milk cartons for decades, particularly in the American market where Pluribus is set. For example, a 2002 Packaging World article documented a Crayola school milk program where cartons were printed with "puzzles and brainteasers" on the side panel.

The general “weirdness” of the image. Subjective aesthetic judgments dressed up as forensic analysis aren't evidence. Thomson's established body of work frequently features surreal, off-kilter scenes that blend everyday objects with unexpected elements. His style has been compared to a modern, whimsical Edward Hopper.

Multiple critics have noted that Pluribus functions as an allegory for generative AI. The hive mind consumes human-generated content and produces outputs that approximate, but never quite replicate, genuine human creation. James Poniewozik of the New York Times explicitly drew parallels between the show's premise and “the modern lure of AI, which promises to deliver progress and plenty for the low, low price of smooshing all human intelligence into one obsequious collective mind.”

The Scam Theory

Most troublingly, Gruber wrote: “Apple must have somehow fallen for a scam, because that Keith Thomson's published paintings are wonderful.”

Let's be clear about what's being alleged here: that a professional artist with an established portfolio and decades of work deliberately defrauded one of the world's largest companies by submitting AI-generated work as his own.

This is an extraordinary accusation made without evidence. In the UK, where defamation law places the burden of proof on the defendant, publicly accusing a named individual of fraud, in a statement published to a wide audience and bearing directly on their professional reputation, is the kind of thing defamation lawyers dream about.

The Follow-Up: Doubling Down

Two days later, Gruber published “Slop Is Slop,” which was somehow even worse.

https://daringfireball.net/2025/12/slop_is_slop

The “Non-Denial Denial”

When journalists contacted Keith Thomson, he responded: “I'm unable to comment on specific client projects. In general, I always draw and paint by hand and sometimes incorporate standard digital tools.”

Gruber's interpretation? “That is a non-denial denial that he used generative AI to create the image.”

This reading is remarkable. An artist says he draws and paints by hand and sometimes uses standard digital tools. Gruber treats this as a confession of AI use because... it didn't explicitly exclude AI? By this standard, any artist who doesn't specifically deny using every conceivable tool in every interview is implicitly admitting to using them.

“Standard digital tools” in the illustration world typically refers to a wide range of software, such as Photoshop, Illustrator, and Procreate, that have been industry standards for decades. Interpreting this phrase as a coded admission of generative AI use requires a level of motivated reasoning that borders on the paranoid.

Rejecting the Obvious Explanation

M.G. Siegler, a former Google Ventures partner, suggested the image might be deliberately referencing Pluribus's AI themes.

The show is explicitly about a hive mind that functions eerily like a large language model. It can't create anything truly new; it can only recombine existing knowledge. Siegler wondered whether the promotional image might be playing with these very themes.

Gruber's response was contemptuous: “I think MG didn't put enough y's in the wayyyy in 'I'm sure I'm reading wayyyy too much into that tweet'. There is no 3D chess being played here.”

But consider what Gruber is asking us to believe: that Apple, a company notoriously obsessive about brand presentation, accidentally published obviously sloppy AI-generated artwork to promote their flagship new show, credited a specific artist by name, and doubled down when challenged, all without anyone noticing it was AI.

Against this, Siegler's theory that a promotional image for a show about AI themes might deliberately play with AI aesthetics seems almost boringly straightforward.

The Occam's Razor Misapplication

Gruber invokes Occam's razor, arguing that “the simplest explanation is that it simply is AI-generated slop, and Keith Thomson suckered Apple into paying for it.”

This is a fundamental misuse of Occam's razor. The principle isn't “assume the most cynical interpretation.” It's “don't multiply explanatory entities unnecessarily.”

The simplest explanation for the image is:

  1. Apple commissioned promotional art from a professional artist
  2. The artist created an image referencing the show's plot (milk cartons are central to Pluribus)
  3. The image was designed to look slightly “off” to match the show's themes about compromised reality
  4. The artist's style, which has always embraced surreal elements, was deliberately deployed
  5. Apple published it.

Gruber's “simple” explanation requires:

  1. Keith Thomson, an established artist with decades of work, decided to commit professional fraud
  2. He submitted AI-generated work as his own hand-made art
  3. Apple's entire marketing apparatus failed to notice
  4. When challenged, Apple doubled down and explicitly credited the work as human-made
  5. Thomson gave statements that carefully avoided denying AI use (implying conspiracy)
  6. Everyone at Apple and everyone who worked on the campaign remained silent about the deception.

Which of these scenarios actually requires fewer assumptions?

Separately, Gruber has written thoughtfully about AI and art. In October 2025, he published a piece acknowledging that “generative AI tools not only can be, but already are, used to create genuine art.” He claims his objection isn't to AI itself but to “slop”: low-quality output passed off as craftsmanship.

Fair enough. But the Pluribus incident shows how easily this reasonable concern can metastasise into something uglier: a presumption of guilt, a refusal to consider alternative explanations, and a willingness to publicly accuse working artists of fraud based on aesthetic hunches.

Conclusion

The backlash against AI-generated imagery is understandable. Genuine slop exists, and people have every right to be concerned about it. But “slopback”, the reflexive accusation of AI use based on vibes and pattern-matching, helps no one.

The Pluribus promotional image controversy should serve as a case study in how not to respond to suspected AI-generated imagery. John Gruber, a normally careful writer, let his suspicions outrun his evidence and publicly accused a working artist of fraud.

The benefit of the doubt, as PiunikaWeb noted, “is gone in 2025.” Perhaps we should work on getting it back.

I am a partner in Better than Good. We help companies make sense of technology and build lasting improvements to their operations. Talk to us today: https://betterthangood.xyz/#contact