January 19, 2024, by Brigitte Nerlich
Humanising artificial intelligence and dehumanising actual intelligence
Millions of people will by now have interacted with a new kind of accessible artificial intelligence, such as ChatGPT, DALL-E or Midjourney. Many will have had (at first) quite strange feelings of empathy with the bot, saying please and thank you and trying not to overburden it. We might also admire its apparent humility or get exasperated by its arrogance.
We, as humans, have a tendency to anthropomorphise, that is, “to imbue the real or imagined behaviour of non-human agents with humanlike characteristics, motivations, intentions or emotions”. This tendency to humanise the non-human can be dangerous though, especially in a world where fake politics dehumanises humans.
In the following, I’ll try to point readers to some potential dangers when interacting with (and humanising) artificial intelligence bots: attributing knowledge and agency, falling into epistemic traps, faking reality, spreading epistemic pollution, and dealing with shape-shifting personas.
Attributing knowledge and agency
When, some years ago now, I interacted with the Mars Curiosity rover on Twitter/X, I humanised this assemblage of metal and wires quite readily. I also do this with Larry, the Number 10 Downing Street cat, who is still on Twitter, and whom many consider to be more human than many of the humans inhabiting that building. This is fun. We know what we are doing and we can distinguish between pretence and reality (the humans writing the tweets). We know the communicative intent that lies behind this anthropomorphising and we enjoy playing along, knowing it is a game.
With recent AIs that’s different – there is no human ‘puppeteer’ in sight, no pretence, just, as some allege, ‘a fancy Markov chain‘. The synthetic text produced by ChatGPT, for example, “represents”, as Emily Bender points out, “no one’s communicative intent. But it is well-formed and so people who encounter it interpret it” and, in the process, attribute intention and human properties to the AI.
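To give a flavour of what that allegation gestures at, here is a minimal sketch, in Python, of a genuinely simple Markov chain text generator. It is only an illustration, and emphatically not how ChatGPT works under the hood (that involves a large transformer neural network), but it shows how text can look well-formed while expressing no one’s communicative intent: the program merely continues statistical patterns found in its training text.

```python
# A toy illustration of text generation without communicative intent:
# a simple bigram Markov chain built from a few sentences. ChatGPT is far
# more sophisticated (a transformer, not literally a Markov chain), but the
# point carries over: the output continues statistical patterns, it does
# not express anyone's intent.
import random
from collections import defaultdict

corpus = ("the rover sends pictures from mars . the cat sits in downing street . "
          "the bot answers questions with great confidence .").split()

# Record which words follow which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start="the", length=12):
    """Produce plausible-looking text by repeatedly sampling a next word."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate())  # e.g. "the bot answers questions with great confidence ."
```

The output often reads like a sentence someone meant to write; nobody did.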
Falling into epistemic traps
When we interact with this newly found, always available and even trusted friend, we easily fall into what one may call ‘epistemic traps’. We believe what the polite text generating bot tells us or shows us.
There is a problem though, namely “that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce”.
We are taken in by the ease of access, the ease of getting an answer, the plausibility of the answer and so on, and we overlook the cracks and glitches in the answers, especially when we ask something we really don’t know much about. We need a lot of actual human knowledge to realise that the artificial knowledge spat out by the AI is sometimes rather threadbare.
It’s easy to believe something that sounds right or looks right, and so we don’t ask, and even more often don’t know, whether it is right.
Faking reality
If this crumbly knowledge accumulates unchecked, finding reliable information below the rubble might become increasingly difficult. It might become normal to live in a reality that is ‘fabricated’, rather than a reality that just is. The knowledge we’ll have of that artificial world will be artificial knowledge which will itself be entirely fabricated. We are not quite there yet, but look for example at these fake images spreading on Facebook.
In the long run, in a world of inaccuracy and fakery, we will no longer know what’s real and what’s not real, what’s artificial ‘intelligence’ and what’s human intelligence. We’ll also have difficulties in discerning what an ‘error’ is, and truth will become a quaint old concept. (Perhaps I am too pessimistic here. Some people think that “You can use AI for fact checking. You can automate fact checking. You can automate filters. You can create watermarks. You can signal to people what are more reliable sources.”)
But anyway, if you want to experience this new reality and if you speak French, you can read this novel entitled 404, where fake news becomes reality. (Is there more fiction on this topic out there, I wonder…?) Or, perhaps closer to home, look at these news anchor avatars. As an article in Ars Technica devoted to their emergence says: “These human-looking AI avatars now seem well on their way to climbing out of the uncanny valley”….
Polluting knowledge
And there is a related problem. Knowledge builds on knowledge and knowledge connects with other knowledge. So, when one piece in the knowledge puzzle or one block in the knowledge building is faulty, the whole knowledge system becomes unstable and unreliable. Or, to use a better metaphor, one used by Chirag Shah and Emily Bender: “the use of present-day synthetic media machines (large language models as well as image generation systems) is polluting the information ecosystem of the Web”. They also talk about toxic spill, contamination, or even “LLM-extruded sludge”!
This pollution has repercussions on trust. What knowledge can you still trust and whose knowledge can you still trust in a knowledge-polluted world? This also has implications for those who normally clear up polluted knowledge, those who fact-check knowledge. When the spill becomes too wide and deep, this clear-up operation becomes more and more difficult. While all this is happening, more and more people are talking to AIs and so the process continues.
The same goes for science, where AI and machine learning are used more and more (and, one should stress, often really help in the sciencing process), but might be contributing to the reproducibility crisis that’s already so difficult to tackle. (And one can’t really blame AI when it can’t sift the wheat from the chaff in a world of paper mills, scientific fraud, fake journals and faked data….)
Shape-shifting personas
And there is one more danger lurking around the AI corner. When we talk to a chatbot like ChatGPT and attribute human qualities to it, we expect that bot to be quite stable in its ‘persona’, as we are used to with, say, Alexa, who has a built-in and well-designed persona – or even the Mars Curiosity Rover or Larry the Cat, who have personas designed by those who animate them, so to speak.
However, when Ben McCulloch, an expert in this sort of stuff, including designing AI conversations, conversed with ChatGPT, he found out something rather unsettling. As he expected, “ChatGPT has a persona”, but “the key characteristic of that persona is inconsistency”.
While he was chatting with ChatGPT about a very specific topic, it also offered him ‘personalized medical advice’. This alarmed him somewhat. Ben came to see ChatGPT as a ‘social chameleon’ or a ‘shapeshifter’.
When ChatGPT chats with us, it sounds like an expert, “so we imagine a qualified health professional like a doctor. Then it talks about conversation design […] and we think it’s an expert on that domain too. Then it talks to us about something else and the persona evolves further.” Ben found that rather disconcerting. He wonders whether “One day there may be tools available that allow us to keep the personas of LLMs such as ChatGPT in line, but for now it’s a social chameleon. You still need to be the one who keeps it in line when it tries to dazzle the crowd with its limitless boasting.”
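Some of this persona drift can, at least in principle, already be constrained by whoever builds a product on top of the model: developers typically send a fixed ‘system’ instruction with every request to pin the persona down and fence off things like medical advice. The sketch below uses the OpenAI Python client as an illustrative assumption (the model name, wording and helper function are mine, not Ben’s); a system prompt constrains, rather than cures, the shape-shifting he describes.

```python
# A rough sketch of how developers try to keep an LLM persona "in line":
# a fixed system instruction sent with every request. Illustrative only;
# the model name and prompt wording here are assumptions, and a system
# prompt reduces but does not eliminate persona drift.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

PERSONA = (
    "You are a conversation-design assistant. Stay in that role. "
    "Do not offer medical, legal or financial advice; instead, say that "
    "this is outside your role and suggest consulting a qualified professional."
)

def ask(user_message: str) -> str:
    """Send the user's message together with the fixed persona instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("I have a headache - what medication should I take?"))
```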
At the moment, knowledge and expertise are the stuff of actual intelligence. We have to beware of the knowledge and expertise with which artificial intelligence dazzles us and which we reflect back on it through anthropomorphism. It’s very easy to be hoodwinked, to attribute knowledge and expertise where they are not warranted, and to sideline hard-earned warranted knowledge in the process.
Corporations and curiosity
We also have to remember something else, something rather human about artificial intelligence! As Chuck Wendig points out: “Artificial intelligence isn’t a person. It’s not even really, despite how I describe it, a machine. It’s the representative of a company. It’s the tool of not just one corporation, but many. And it only exists because real people did real art. Without something to chew up, it has nothing to spit out.”
We have to be careful. By humanising artificial intelligence, we might gradually be degrading and devaluing actual intelligence and actual creativity.
There is a bit of hope though. As Charlie Beckett, an expert in journalism, says: “Curiosity rather than creativity is something that machines will struggle with. Generative AI can respond to prompts, but the human quality of wanting to know something that you don’t necessarily know, or to know more and to investigate, that, I think is currently at least impossible to replicate through AI.” Let’s hope he is right.
Image: Pareidolia (Wikimedia Commons)