Kate Hodesdon

The reason art is protected from automation is that there is something intrinsically human about it: art is produced with intent. In many cases, our appreciation of a piece of art consists in our recognition of a mood or outlook that the artist conveys. Part of the artistic value of an Edward Hopper painting is the urban loneliness it captures. Were the same image produced by a random process, with no agent behind it aiming to communicate, it would not have the same value. Natural objects are produced by the agent-less process of evolution: no matter how beautiful, no sunset, shell or flower is _art_.

What we have with text2img models is something halfway between: while the AI is responsible for much of the work, there is a human behind the images, guiding the generative process with words and communicating something through them. It will be interesting to see the truly original moves that AI makes in image generation, analogous to those AlphaGo made in the game of Go. Beyond a vocal minority of digital artists, though, I doubt that artists are all that worried about being replaced by AI.

Even setting aside fine art photography, if we consider journalistic photography, it is clear that here too there are differences in value between human-generated and AI-generated images. Consider a stock photograph, say a woman laughing alone with salad. Do we care whether she exists? Not especially. The opposite is true of iconic photographs such as Tank Man, the protestor standing in front of a column of tanks in Tiananmen Square, or Dorothea Lange's 'Migrant Mother', the portrait of a Dust Bowl refugee family. These photographs have a cultural value that a purely synthetic image, no matter how realistic, can never have. It is not just that the photographs record historically significant events, but that the photographer has, with the photograph, captured and communicated a particularly poignant emotion.
Consider language models, by analogy. Nobody expects ChatGPT to replace poets or novelists, though it might churn out limericks, some of them funny, or even formulaic stories for children. The low-hanging fruit is the generation of functional text, such as marketing copy or mid-tier undergraduate essays, which just need to tick a few boxes. Language models are better at synthesising text from a learnt corpus than at generating completely novel ideas, which is why applications like ChatGPT, which do question answering, are so effective.