October 27, 2023, by Brigitte Nerlich

ChatGPT and its magical metaphors

Last week, a new issue of Technoscienza, an Italian journal of Science and Technology Studies, landed in my inbox. It had a very intriguing cover, co-created between Sergio Minniti and ChatGPT — a portrait drawn by ChatGPT using ASCII.

But that was the least of it. After Sergio had prompted ChatGPT to create this stunning masterpiece (add a slight tone of irony here), he asked it to give the portrait a title. ChatGPT obliged and suggested “TextBot”. Sergio then asked: “Can you imagine a more interesting title?” Of course it could, and said: “Here’s an alternative title for the text-based avatar: ‘The Linguistic Luminary’. This title emphasizes the AI’s expertise in language and its ability to illuminate and provide insights through its textual responses. It adds a touch of intrigue and sophistication to the avatar’s identity.”

So, ChatGPT now has an identity. Whatever next! I wanted to explore that ‘identity’ a bit more through the medium of metaphor. I had once had a conversation with ChatGPT about how it generates metaphors. This time, I wanted to chat with it about what metaphors for itself it would use – those that matched its ‘identity’ – and what metaphors for AI more generally it thought were out there.

ChatGPT and its magical metaphors

I asked it four times what metaphor it would use to describe itself… I could have gone on but stopped. It identified metaphorically as a Swiss Army knife, a boundless library of knowledge, a cosmic observatory and a digital compass. (If you want the full transcript with ChatGPT’s longer explanations of the metaphors, let me know.)

Looking at these self-images (and what it said about its self-portrait), one has to conclude that ChatGPT is a bit full of itself! Everything about it is wonderful, magical, beneficial. The chatbot tries to assist, help, answer, explain, shed light, provide insights, navigate, explore etc. There is nothing negative or dangerous about it at all. The self-anthropomorphised version of the chatbot is a veritable wonder to behold.

To dig a bit deeper, I asked it to provide me with some metaphors, comparisons, or analogies for AI in general. Again, I leave out the explanations here and just list the metaphors, which, again, are as positive as it gets. It says that AI has been compared to a digital assistant, an infinite encyclopedia, a data doctor, a virtual artist, a simulated teacher, an algorithmic chef, a financial advisor, an environmental steward, a robotic companion, and even a language wizard – a metaphor that chimes with ‘linguistic luminary’. Look at these words – assistant, doctor, companion, advisor etc. – nothing negative at all, just wonderful wizardry.

To make sure I wasn’t just seeing things, I also asked for metaphors for LLMs. ChatGPT became even more hyperbolic, and I was told about LLMs being a knowledge ocean, an information oracle, a storytelling djinn, and, to top it all, a technical magician. Still nothing negative, just magic.

So, I asked whether it could give me any metaphors for the impacts of AI, and finally I got some, but only some, critical metaphors. It said one could see AI as a double-edged sword, as a Pandora’s Box, and a cultural mirror, “reflecting societal biases and values”; but the other metaphors were, again, quite positive, a force multiplier, a game changer, a time traveller (relating to forecasting), a “navigator… that guides us through the vast sea of information and decision-making, helping us make choices and find our way”, and also a digital ecosystem “with interconnected elements that influence each other, similar to how ecosystems in nature evolve and adapt”.

I asked myself where in the world ChatGPT got its hyped-up sense of self from. What were the training data that allowed it to pattern-match in such a hyperbolic way? It didn’t seem to be aware of some of the existential risk metaphors floating about in abundance, such as nuclear explosion or bomb or tsunami, until I asked it about them directly, at which point it finally defined them.

When it came to the nuclear bomb, it gave quite an interesting explanation: “AI as a Nuclear Bomb: This metaphor might be used to highlight the potential destructive power of AI if not properly controlled or used ethically. It could emphasize concerns related to AI’s misuse, privacy violations, or even autonomous weapons.”

But then it swerved back into positive mode and said: “While these metaphors can effectively convey the significant and potentially disruptive nature of AI, it’s essential to use them judiciously and responsibly, as they can also invoke fear and negative associations. It’s generally advisable to strike a balanced and nuanced approach when discussing the impacts of AI to foster constructive conversations and responsible development.”

This is interesting, as it never advocated for a balanced, judicious and responsible use of magical and hyperbolic metaphors! And talking about the impacts of AI, it did not mention one of the most discussed negative metaphors, namely knowledge pollution. It had, however, used the metaphor of a digital ecosystem. So let’s explore that conjunction of metaphors a bit.

ChatGPT and knowledge pollution

I asked it what it thought about the metaphor of knowledge pollution. It came up with some good bullet points summarising what knowledge pollution means but then said: “Ultimately, while the “knowledge pollution” metaphor can be a thought-provoking way to address concerns about AI-generated content, it’s important to recognize that AI systems have the potential to provide significant benefits in terms of efficiency, accessibility, and information retrieval. Balancing the benefits and challenges of AI in information dissemination is an ongoing consideration in AI ethics and development.”

Hmmm, how is it possible to retain that balance in an increasingly polluted environment? And this brings us back to the AI’s insouciant use of the metaphor of ‘ecosystem’. This insouciance is perhaps a bit misguided.

Emily Bender, a professor of linguistics, talks for example about the creation of “an equivalent of an oil spill into our information ecosystem” and says: “The reason I make the analogy to oil spills is that this isn’t just about the harms to the person who initially receives the information. There are systemic risks as well: the more polluted our information ecosystem becomes with synthetic text, the harder it will be to find trustworthy sources of information and the harder it will be to trust them when we’ve found them. […] companies say they’ll try to mitigate some pollution down the road, but do not wish to do anything about the toxic waste they’re currently spewing.” And I wonder: Is the hyperbolic stuff ChatGPT is spewing out about itself perhaps also a form of pollution?

Conclusion

So we have a problem. On the one hand we have an AI full of its own importance, glory and wonder, using metaphors that extol its promises and benefits – probably based on anthropomorphised language used in its promotion. On the other hand we have worries about knowledge pollution, and, not to forget, paperclip maximisers and stochastic parrots, metaphors used by experts that don’t exactly trip off the tongue.

It would be great to think more about how the positive metaphors that permeate ChatGPT arise and how negative ones vanish from its remit. It would also be good to think more about what metaphors we can use to communicate about AI with wider publics. At the moment we haven’t any that really work across communities or between experts and lay people, I think. But that’s another post.

Further reading: I discovered this great blog post about metaphors and AI after drafting this post. It’s by Dominik Lukes and entitled “How (not) to learn about AI with metaphors: And how to use ChatGPT as a metaphor generation assistant”.

Image: Wikimedia commons
Posted in artificial intelligence, Metaphors