Stock photos of glowing robots tell us nothing about AI. Old science textbooks might.
May 2024
Some thoughts on why I find AI-generated pictures depicting AI horrible.
Open any article about artificial intelligence and you'll likely encounter the same image: a glowing humanoid figure, perhaps with visible circuitry, often reaching out a hand toward the viewer. If it's a particularly uninspired piece, there might be hexagons. Maybe binary code cascading in green, à la The Matrix.
There's something peculiar happening here. We have developed a visual vocabulary for AI before we've developed the thing itself. Or rather, we've borrowed the wrong vocabulary altogether. As Neil Richards and William Smart observed in their influential paper "How Should the Law Think About Robots?", we fall prey to what they call the "Android Fallacy": the assumption that robots (and by extension AI) are "just like people", and that humanoid form is therefore meaningful (Richards & Smart, 2016). But AI is not a body. It has no form. A large language model looks like nothing at all.
This creates a representational problem. How do you illustrate the invisible?
The answer we've settled on is to reach for science fiction. The robots of Metropolis, Blade Runner, Ex Machina and I, Robot have colonised our visual imagination so thoroughly that stock image libraries can offer little else. Researchers at the Oxford Internet Institute have documented how these images remain static even as the technology transforms beneath them. The same blue-glowing androids illustrating articles today appeared in 1995 (Mustaklem, 2024). The technology has changed. Our pictures of it have not.
The irony is that the images most commonly used to represent AI are now largely generated by AI itself. There's a recursive quality to this. The medium depicting itself, like Velázquez painting himself into Las Meninas. Mise en abyme, the French call it: an image containing a smaller copy of itself, receding into infinity. AI generates images of AI that look like what AI thinks we think AI looks like.
But beyond the recursion, I confess a simpler objection: I find these images ugly. Visually offensive. A failure of imagination dressed up in chrome.
When I need to visualise something abstract, whether for a project, a presentation, or my own thinking, I ask myself: what is the visual identity here? And I've come to know my own answer. I lean toward something like a 1970s Soviet mathematics textbook, the kind you might read in a cottage on a Norwegian fjord. Oddly specific, I know. But the aesthetic carries meaning.
What draws me to this tradition is how mid-century scientific illustration handled abstraction. Before we could render anything photorealistically, illustrators had to think about structure. Look at how physicists visualised atomic models, how mathematicians illustrated set theory, how textbooks explained probability. Simple forms, almost childlike. Circles, arrows, nested shapes. Diagrams that show relationships and progressions without pretending to show the thing itself.
This, I think, is what AI imagery needs. Not technical accuracy. You cannot illustrate a transformer architecture for a general audience, nor should you try. But you can show structure. You can show process. A simplified diagram of a language model, with boxes and arrows showing how text flows through layers of processing, tells you more about AI than any glowing android ever could.
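The kind of diagram I have in mind needs nothing more than boxes and arrows; even plain text will do. A minimal Python sketch, where the stage names are my own illustrative simplifications rather than an actual architecture:

```python
def draw_pipeline(stages):
    """Render a list of stage names as a one-line boxes-and-arrows diagram."""
    boxes = [f"[ {stage} ]" for stage in stages]
    return " --> ".join(boxes)


# Illustrative stages only: a deliberately simplified view of a language model.
stages = ["text in", "tokens", "embeddings", "transformer layers", "text out"]
print(draw_pipeline(stages))
```

Crude, but honest: it shows relationships and a progression without pretending to show the thing itself.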
The Better Images of AI project offers similar principles: represent the human and social impacts of AI systems; reflect the "realistically messy, complex, repetitive and statistical nature" of the technology; avoid anthropomorphism. What they're asking for, really, is honesty.
Stop making it look like a person. Stop making it glow.
Make it look like what it is: a system. And systems, it turns out, can be beautiful, if we remember how we used to draw them.
References
Better Images of AI. (2024). Better Images of AI. https://betterimagesofai.org/
Mustaklem, M. (2024). What's wrong with the robots? An Oxford researcher explains how we can better illustrate AI news stories. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/
Richards, N. M., & Smart, W. D. (2016). How Should the Law Think About Robots? In R. Calo, A. M. Froomkin, & I. Kerr (Eds.), Robot Law (pp. 3-22). Edward Elgar Publishing.