October 11, 2024, by Brigitte Nerlich
Playing with AI/Playing with fire
Since ChatGPT was released in November 2022, I have been fascinated by all the new AIs that we can now ‘play’ with and that have made ‘artificial intelligence’ accessible to anybody who wants to give it a go. Remember the recipes for soup in the form of a Shakespearean sonnet, or the limericks on climate change, or whatever else we asked ChatGPT to do back then, alongside the more serious business of cheating in exams.*
Engaging with some AI platforms
I too played with ChatGPT, but I also discovered a more practical use for it, for example asking it to critique the draft of a blog post I had written and to make me rethink its structure and content. And, of course, that’s only the tip of the iceberg.
Then came Claude. I played with that a bit more than with ChatGPT, probably because it not only seemed humble, like ChatGPT, but also flattered me in the right way, something AIs have a tendency to do (and thus they wheedle their way into your circle of ‘friends’ – more below).
I nevertheless had quite fruitful conversations with Claude about the nature of machine metaphors in biology and about the metaphor of the genome as an autoencoder. In these conversations Claude displayed quite a good grasp not only of some of the main tenets of metaphor theory but also of science communication.
Then came Perplexity. I only used that bot very briefly. It was OK, but it didn’t speak to me the way Claude did – don’t ask me why. (Although I love the name!)
Then came Poe. I really admired what a colleague had managed to do with it, namely set up his own bot, the NielsMedeBot, which can talk users through his research on science, populism and communication. However, as much as I would have liked to, I just could not manage to do that for my own work, mainly because feeding the bot all the necessary texts was just too much hard work. If there were an AI to help with this process, that would be even better. Give it access to ORCID and let it loose, perhaps.
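(For the technically curious: here is a minimal, purely hypothetical sketch of what the first step of such a helper might look like – pulling the titles of someone’s publications from their public ORCID record before handing the texts over to a bot. It is written in Python using the `requests` library and ORCID’s public v3.0 API; the JSON field names are assumptions based on that API, not anything I have actually built or tested.)

```python
import requests

def list_work_titles(orcid_id: str) -> list[str]:
    """Fetch the titles of works listed on a public ORCID record."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    response.raise_for_status()
    titles = []
    for group in response.json().get("group", []):
        for summary in group.get("work-summary", []):
            # The title sits a few levels down in ORCID's JSON; the field names
            # here follow the public v3.0 API but should be double-checked.
            title = ((summary.get("title") or {}).get("title") or {}).get("value")
            if title:
                titles.append(title)
    return titles

if __name__ == "__main__":
    # 0000-0002-1825-0097 is ORCID's well-known example record (Josiah Carberry).
    for t in list_work_titles("0000-0002-1825-0097"):
        print(t)
```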
And finally, I was made aware of NotebookLM and the AI dialogue/conversation/podcast generator Deep Dive… I went straight there and fed it one of my most recent blog posts, which, a bit ironically, was about ‘superintelligence’.
The allure and potential dangers of AI-generated dialogues
In just a few minutes the AI had turned the post into a podcast dialogue. I was quite blown away by it, I have to confess. The dialogue between the two AI voices was lively and engaging and dealt with most of the things I had said in the post and more, extrapolating from what I had written to what I could have written – listen here.
Then I tried them on a less speculative and more factual post, about John Herschel, and, again, I was pleased with the outcome, especially with the ending, where they tell listeners to always stay curious and never stop asking questions – something I had not said in the post (listen here).
Then I thought, OK, that’s just my sort of superficial blog turned into a superficial dialogue. What if I fed it something deeper, really deep, like a bit of Hegel on right**? Would they jollify that as well? They actually did, and I laughed out loud. But, in their defence, I have to admit that I got more out of that little podcast than I ever did out of a whole very boring and extremely serious seminar I attended on the matter back in 1978 at the University of Düsseldorf (I can still remember staring blankly into space). Have a listen here. (I would love to have that cheery podcast thing in German though!)
However, I think that, on the whole, Antonio Casilli is right to call this dialogue generator a good example of “vanity AI”. It certainly tickled my vanity; and I wondered… would Hegel turn in his grave?***
Playing with the dialogue/podcast AI made me think. In the end, this was all too good, too slick. What if one converted not one of my mediocre blog posts or a bit of Hegel but something more sinister into a cheery little dialogue (or ‘wild’ little dialogue, to use one of their favourite words)? The potential for misinformation is huge. And of course I asked myself, like Sean Thomas in The Spectator: “Has AI just killed the podcast bro?” What do podcasters think about it all?
Unlike Sean, however, I hadn’t really speculated about where all this might lead in the future: “Indeed, the more you think about this technology, the more mind-spinning it becomes. What happens, for instance, when the technology allows you to interrupt the podcast, and join the dialogue? With Shakespeare, Joe Rogan, Lord Byron or the funniest lover you ever met?”
Broader implications and the need for caution
Playing with this dialogue tool (and I have only just discovered Google Illuminate****) might be entertaining, but it might also be quite dangerous, especially to real conversations (and real podcasts). As the sociologist Gabriel Tarde made clear a long time ago, real conversations weave the fabric of life and society. Real conversations between real people are what we are made of.*****
LLMs already have the potential to pollute knowledge, even to lead to knowledge collapse. Might this new development in AI accelerate that process (and I haven’t even talked about voice and images – two other cans of worms)? Or will it just be as ephemeral as getting ChatGPT to write haikus about earthworms? We shall have to see. But one thing is clear: playing with the new AIs is playing with fire and, at the same time, consuming water like there is no tomorrow… So be careful.
*I wrote this post before the Nobel Prizes in physics and chemistry were announced… And as somebody joked on Twitter, the Nobel Prize for Literature will probably go to ChatGPT. Ah, it hasn’t…
**I also gave them a bit of Heidegger to digest and they actually said it gave them a headache, understandably; they also said that “it’s a real doozy this ontological difference”, which made me smile. Again, I liked the ending, though, about “taking these big ideas and letting them resonate in our own little corners of existence”….
***If you want to explore the philosophy-AI-podcast nexus (feedback loop) a bit further you should read this blog post on TrueSciPhi!
****Google Illuminate is an AI tool that converts academic research papers into audio discussions that are similar to podcasts. Here is an example.
*****Most of my blog posts about AI are based on conversations I have had with my son. Without conversations, no blog posts!
Image: Marta Shmatava, Dialogue (2010)