April 12, 2024, by Brigitte Nerlich
Hunting for AI metaphors
Thousands of articles and blog posts have been written about generative AI, especially ChatGPT. Some of these, especially blog posts, are about metaphors. As a metaphor hunter (see image!) I feel a bit ashamed that I haven’t done much on metaphor and AI. A little bit, yes: for example, on what metaphors ChatGPT uses about itself, on what it says about metaphors, and on how it thinks I study metaphors.
But there is more to ‘AI’ and metaphor, of course. Others have pointed out that it can help you generate metaphors or understand difficult metaphors, and much more. In this post I have tracked down a few blog posts etc. on metaphor and AI and started to sort the metaphors used for AI into groups. But before I do that, just a little warning about metaphors.
Insight and illusion
Metaphors map or match what we (think we) know about one thing (source) onto another thing (target) so as to understand or convey it better, say, to bring something huge and complex down to our level of ‘intelligence’ (‘black hole’, ‘carbon footprint’), or to give something a new spin or emphasis (‘squashing the sombrero’). This is a creative process, but one that is also full of pitfalls.
Metaphorically speaking, metaphors are “the cognitive fire that ignites when the brain rubs two different thoughts together” (States 2001: 105). They can illuminate but also inflame. They can be used to cook a delicious meal for the mind but also cremate it. Ok, enough of the meta-metaphors.
One important aspect of metaphors is that they can give us an illusion of knowledge. This phenomenon can be illustrated with a very ordinary example. I once said about a philosophy book that it was “pop-tart philosophy”. I was not understood by my German friend who had, müsli-eater that he was, never seen a pop-tart. When it comes to AI, there is more at stake though.
As Dominik Lukes says: “Metaphors give an illusion of understanding AI unless accompanied by actual knowledge of how AI works.” There are two issues here that make understanding difficult. On the one hand, popular understanding of ‘AI’ is shaped from the start by the word ‘intelligence’ and possibly shunted onto an illusory track. On the other hand, although there are a few people who have ‘actual knowledge’ of AI, there is a certain vacuum of knowledge at the heart of AI, something often called the ‘black box’, where understanding sort of evaporates. So, in a way, metaphors are all we have for the moment to circle that black box… But what metaphors do we have at our disposal? Here are some types of metaphors that I managed to hunt down, poor things.
Ur-metaphors
To start with, we have what I call ‘dead metaphors’ or root metaphors or ‘Ur’-metaphors. These are metaphors that we don’t actually see anymore, because they are buried so deep in the AI discourse, such as ‘AI is a brain’, or metaphors like ‘intelligence’ (as some say, a real ‘illusion’ HT @sarahmay1.bsky.social), ‘learning’, ‘predicting’, ‘machine learning’, even ‘self-training’, ‘reasoning’ and ‘neural network’.
The last metaphor is perhaps the least dangerous, as the phrase doesn’t have a ‘thick’ meaning like ‘reasoning’, which misleads us into attributing all sorts of things to what is basically a machine trained to synthesise existing texts. In that sense, the phrase ‘neural network’ is more akin to the word ‘code’ in genetics, which actually helps scientists trying to understand how genetic phenomena work. But then, who knows, over time ‘learning’ will probably become literal, or perhaps it already has… (see its use in compounds like supervised learning, unsupervised learning, and reinforcement learning; or transfer learning, federated learning, and continual learning; and, of course, the mother of them all: machine learning).
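As an aside, and purely as a toy sketch of my own (not drawn from any of the posts discussed here), the ‘learning’ in machine learning can be cashed out in a few lines of code: it is the repeated adjustment of numbers so as to reduce an error score, nothing more studious than that.

# A hypothetical toy example (mine, not from the literature cited here):
# 'learning' as parameter fitting. The 'machine' does not study anything;
# it nudges one number, w, downhill on an error score.

xs = [1.0, 2.0, 3.0, 4.0]        # toy inputs
ys = [2.1, 3.9, 6.2, 7.8]        # toy outputs, roughly 2 * x

w = 0.0                          # the single 'learned' parameter
learning_rate = 0.01             # how big each nudge is

for step in range(1000):
    # gradient of the mean squared error for the current guess of w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad    # 'learning' = moving w to shrink the error

print(round(w, 2))               # ends up close to 2.0

Whether one wants to call that ‘learning’ literally or only metaphorically is, of course, exactly the question.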
Agential metaphors
Then we have a whole bunch of anthropomorphising or, perhaps better, agential metaphors, so beloved by ChatGPT itself, which can highlight and hide various social roles or relationships between humans and bots.
On the one hand, certain words can position AI as a partner or assistant (or buddy, co-pilot, tutor), or, by disgruntled people, as a smart, drunk intern. On the other hand, AI can be positioned as a nefarious and capricious agent, not a partner but a master, and become a shoggoth, a sorcerer’s apprentice, a robot overlord, or the ‘Terminator’.
Related to the sorcerer’s apprentice metaphor is that of “King Midas” or a “Genie”. “In this metaphor, AI is like one of the many entities in legends – Genie, Monkey’s paw, etc. – that follow their user’s literal wishes, but in the least helpful way possible.”
Linked to this is the fear that AI is emerging as a separate, advanced culture or species which might swallow us up if we fail to adapt. More commonly though, bots are just seen as ‘hallucinating’ or ‘confabulating’ or generating bullshit.
Impact metaphors
There are also metaphors for the impact of AI on us and on the world. Here I’ll only talk about flood and bomb metaphors.
Some fear the impacts of an AI tsunami ‘crashing’ into the shore of the labour market (look at the illustration!), while others want to ‘ride’ the tsunami.
Some fear that AI will be like a nuclear explosion or atomic bomb or a nuclear weapon while others say it is not. As Stella Biderman said on X/Twitter: “The AI models = nuclear weapons analogy is terrible for a lot of reasons, but most importantly it heavily misleads policy-makers. Nuke-inspired regulation won’t prevent people from building powerful AIs, but it will protect tech companies from competition”.
Debates about this and other impact metaphors are linked to heated debates about whether AI poses an ‘existential risk’ or not.
Pollution metaphors
More realistically, there is also speculation about a more immediate threat posed by AI, namely the pollution or contamination of what one may call the epis-sphere. Instead of, or actually on top of, polluting the atmosphere (think of the energy use of data servers), AI may also pollute human knowledge and understanding (epistemology).
The pollution metaphor is now quite widespread, encapsulated in metaphors and analogies like the exponential enshittification of science (Gary Marcus), the contamination of scientific literature (Zen Faulkes), an oil spill into our information eco-system (Emily Bender), web pollution (Saminathan Balasundaram), knowledge pollution and many more.
This is nicely summarised in a thread on Mastodon. Here, Rich Felker taps into the dangers posed by fossil fuels to frame the dangers of AI: “AI is a lot like fossil fuel industry. Seizing and burning something (in this case, the internet, and more broadly, written-down human knowledge) that was built up over a long time much faster than it could ever be replenished.”
Celebrity metaphors
There are also a few celebrity metaphors, such as AI as a Stochastic parrot (or, for short, Infernal Stochastic Gibberish Generator), a paperclip maximizer, a blurry JPEG of the web (although others might say that ChatGPT is rather a ‘simulacrum’ HT @christophstc.bsky.social), or AI as the Library of Babel or Borges’ universal library. These metaphors need a lot of unpacking to be understood. They don’t really provide even an illusion of understanding unless you are already in the know.
Competition metaphors
As with all technological advances, competition between developers of AI is conceptualised as “a ‘global AI race’, often positioning the EU as struggling for a bronze medal behind the USA and China”. Some even talk about an ‘arms race’.
Creative metaphors
And finally, people also invent creative metaphors, metaphors that might provide new insights into what’s going on with AI in general or in particular contexts and situations.
Some creative metaphors are relatively novel, such as ‘AI is the new electricity’ (which is not totally novel, as it follows the pattern of ‘data is the new oil’) or ‘AI is snake oil’ (which follows a well-known pattern used when rejecting pseudo-science); some metaphors are a bit more novel, such as ‘common sense is the dark matter of AI’ (although here too we can see an underlying pattern, as this reminds us of ‘Junk DNA is the dark matter of genomics’); while some metaphors are really novel and quite situation-specific.
Here is an example – ‘AI is an untested vaccine’: “Having tried and failed to explain the risks of AGI to my Mom (‘I really don’t think Siri has the brains to kill me, dear’), I’ve found that emphasising AI as an invasive concept (think untested vaccine being injected into your house, car, plane, health system) has resonated slightly: she has now started writing letters to local politicians.”
Another example is this – AI is, unintentionally, something like a medical scan: “I’m beginning to think of ChatGPT as a tool that reveals weak points in institutional credibility; illuminating checks that humans should make but skip out of lack of reward, laziness etc. LLM shibboleths then show up in the text like idoine [sic] glowing in your veins before a CT scan”.
Do you have other examples of fresh and creative metaphors about AI?
Addendum, 25 June 2024. I have just discovered an article in the domain of ‘critical AI literacy’ which discusses a few metaphors: “Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies”. And, added 10 August 2024, a great post by Sean Trott on metaphors for LLMs!
Addendum, 15 November 2024: Melanie Mitchell has published an article on AI metaphors in Science!!!
Addendum, 18 November 2024. There is a call for papers: Metaphors of AI in Higher Education – Discourses, Histories and Practices – looks great!!
While looking at earlier writings on generative AI (and its ancestors, ‘expert systems’ and ‘machine learning’), I came across this great metaphor. It would be interesting to know more about the history of metaphors used for generative AI over time:
From: The American Banker, June 3, 1985
For those interested in AI and metaphors, this article is essential:
Metaphors for designers working with AI
Authors
Dave Murray-Rust, Delft University of Technology
Iohanna Nicenboim, Delft University of Technology
Dan Lockton, Eindhoven University of Technology
Abstract
In this paper, we explore the use of metaphors for people working with artificial intelligence, in particular those that support designers in thinking about the creation of AI systems. Metaphors both illuminate and hide, simplifying and connecting to existing knowledge, centring particular ideas, marginalising others, and shaping fields of practice. The practices of machine learning and artificial intelligence draw heavily on metaphors, whether black boxes, or the idea of learning and training, but at the edges of the field, as design engages with computational practices, it is not always apparent which terms are used metaphorically, and which associations can be safely drawn on. In this paper, we look at some of the ways metaphors are deployed around machine learning and ask about where they might lead us astray. We then develop some qualities of useful metaphors, and finally explore a small collection of helpful metaphors and practices that illuminate different aspects of machine learning in a way that can support design thinking.