[Image: the words “Artificial Intelligence” against a turquoise and black background with a geometric pattern of connections]

January 6, 2023, by Brigitte Nerlich

Artificial Intelligence: Education and entertainment

I have heard about artificial intelligence or AI for decades, but I have never really played with it. I guess this is the same for many people. We might have AI all around us, but at the end of 2022 it became much more tangible – we had it at our fingertips. A company called OpenAI had launched something called ChatGPT and we could all experiment with it.

As Wikipedia, our (so far) general knowledge store, pointed out: “ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3.5 family of large language models, and is fine-tuned with both supervised and reinforcement learning techniques.” You ask it a question and it provides you with an answer. You can also engage it in conversation to generate more sophisticated answers.

At first, I saw a lot of people worrying that ChatGPT might threaten essay writing and exams, as it was so easy to use and could generate answers to exam questions that gave at least the illusion of accuracy. But then things changed and worrying about this AI was, it seems, replaced by playing with it. It was the time of Christmas and New Year, after all.

Of course, I played as well, just like many other people who made it do the most obscure things in the form of sonnets or haikus. And it accomplished these tasks with panache. But when it came to writing an essay on a specific expertise-laden topic, things were different. People who were ‘real’ experts in a field, discipline or topic saw flaws and shortcomings. Some experts in Germany called it a “wissenschaftliche Fake-News-Schleuder”, ‘a scientific fake-news slingshot’…

There was also a third use of ChatGPT: to work, in a way, as Santa’s little helper. It could indeed help people write or program better. As Kombiz Lavasany said on Twitter: “i’ve found chatgtp extremely useful for a few things: 1.) Writing I hate doing 2.) Creating lists of choices I can investigate 3.) Editing my writing 4.) Suggesting social media posts for articles 5.) Helping me code in python and fix python code”.

Overall, ChatGPT can be useful and fun, but also dangerous. This is nothing new when it comes to AI, but now we can all see it, experience it and perhaps discuss it. Students of science and technology should have a field day. For a very long time, they have studied knowledge and expertise and how they function in science and society. Here we have a live performance of this happening all around us. There are three things that I think need studying.

First, the issue of pleasure, fun and entertainment. Second, the issue of knowledge, expertise and education. And third, the issue of bullshit, ‘knowledge pollution’ and dependency.

I’ll briefly talk about the first two, before coming to the third, and, at the very end, I’ll give ChatGPT a chance to speak for itself about the thorny issue of knowledge pollution. [Health warning: I have no idea what’s going on under the bonnet of this AI; it’s just as deep a mystery to me as a carburettor – of course, I now know what that is, because I have asked the AI to give me a definition…]

Pleasure, fun and entertainment

The internet is full of examples where people have real fun with ChatGPT, and I can’t list all that fun, of course. Occasions for having fun vary widely and wildly, depending on people’s preoccupations and predilections.

To give only one weird example: on Tuesday there was a brief discussion on Twitter about the awful terminology that is used in academic grant writing – a rather niche topic, but one I can sympathise with, as I have always hated words like ‘deliverables’. In most tweets people just moaned about such cumbersome words, but one person told ChatGPT to write a poem about this and, of course, it obliged: “Great, another grant application to write, Another work package to ignite. Another set of deliverables to meet. Another milestone to greet”. If you want funnier stuff, such as the removal of a peanut butter sandwich from a VCR in the form of biblical verse or Fermat’s last theorem in the style of a cooking recipe, read this, for example.

I asked myself: how can something that, basically, just strings words together according to some probabilistic model, without knowing their meaning, create pleasure, generate fun and aid creativity, as some have claimed? It even deals with metaphors, although sometimes it refuses to, and its earnestness is itself quite funny!
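(As an aside for the technically curious: the phrase ‘strings words together according to a probabilistic model’ can be made concrete with a toy sketch. The little Python program below is my own illustrative stand-in, not anything like ChatGPT’s actual machinery – it simply counts which word follows which in a scrap of text, here the grant poem above, and then samples from those counts. Even something this crude produces fluent-looking fragments without any grasp of meaning.)

import random
from collections import defaultdict

# A toy bigram "language model" (illustration only, nothing like ChatGPT):
# count which word follows which in a scrap of text, then sample from the counts.
training_text = ("another grant application to write another work package to ignite "
                 "another set of deliverables to meet another milestone to greet")

successors = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    successors[current_word].append(next_word)

def generate(start_word, length=12):
    # String words together purely by observed frequency; no meaning involved.
    output = [start_word]
    for _ in range(length):
        options = successors.get(output[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        output.append(random.choice(options))  # duplicates make this frequency-weighted
    return " ".join(output)

print(generate("another"))

Scale that basic idea up from word-pair counts to a vast neural network trained on much of the internet and you get something far more fluent – but the underlying trick, predicting a plausible next word, is the same.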

Then I thought, it’s actually quite similar to other things that we use for fun and games. Think about hoops or rope or chalk. In and of themselves, these things are not ‘fun’, but in the hands of creative children, they can generate a lot of fun. When it comes to AI, a new form of play is perhaps emerging, a whole new social practice. However, ChatGPT also has a darker side that one might call CheatGPT, and that is a social practice that poses real dangers.

Knowledge, expertise and education

ChatGPT uses existing knowledge to provide answers to questions. The answers it provides depend on the questions you pose and how you structure your questions, but more importantly on the knowledge that it can retrieve.

At the moment this is ‘our’ knowledge, warts, biases and all. Whatever knowledge we put into the knowledge space, say Wikipedia, it spits it out. Strangely, this is also the model that informs much of school and university ‘learning/education’ – put knowledge in, spit knowledge out. And so ChatGPT can be used to cheat in exams, as it can provide quick, albeit rough, answers to exam questions – a good example here. This poses problems for examiners and, more generally, for the ‘education’ system, as ChatGPT is good at creating the illusion of knowledge. However, real domain experts can mostly still detect whether ChatGPT outputs are accurate, correct or, indeed, novel.

Given the sheer power of the pipeline model of knowledge, education and learning, there is a danger, though, that in the long run this process might create problems not only for the knowledge regurgitation system (education), but, more importantly, for knowledge production, for example in science.

In education people might have to resort to talking to each other! Perhaps we have to rethink education as ‘Bildung’, where education and emancipation go together. But what about knowledge production or creation (which can, of course, also be part of education)?

Bullshit, knowledge pollution and dependency

ChatGPT depends on retrieving existing knowledge. Existing knowledge increases over time. Now, interestingly, that rising pool of knowledge will also be fed by the outputs of ChatGPT, and there is a lot of it. And the more we play with it, the more output there will be. Some of that output will not be accurate or correct; indeed, some say most of the output produced by ChatGPT is bullshit rather than knowledge.

And so the pool of our knowledge may gradually get polluted and diluted. (One should point out that this also happens without AI, e.g. in popular science or history books that gloss over, perpetuate or muddle up well-established scientific/historical facts in such a way that experts cry out in pain – see many reviews on The Renaissance Mathematicus site.)
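(Again purely as an illustration: the worry can be sketched as a simple feedback loop. All the numbers in the little simulation below are assumptions I have plucked out of the air – the share of new material written by AI, how much extra error it adds, how fast the pool turns over – but iterating the loop gives a feel for how the accurate share of the pool could drift downwards once machine output is recycled into the very pool it draws on.)

# A toy feedback-loop model of 'knowledge pollution'.
# Every parameter here is an illustrative assumption, not a measurement.
human_accuracy = 0.95   # assumed accuracy of human-written additions
ai_share = 0.30         # assumed share of new additions generated by AI
pool_accuracy = 0.95    # starting accuracy of the knowledge pool

for year in range(1, 11):
    # Assume the AI reflects the pool's current accuracy, minus errors of its own.
    ai_accuracy = pool_accuracy * 0.9
    # New material is a mix of human and AI contributions...
    new_accuracy = (1 - ai_share) * human_accuracy + ai_share * ai_accuracy
    # ...and the pool slowly turns over into this new mix.
    pool_accuracy = 0.8 * pool_accuracy + 0.2 * new_accuracy
    print(f"year {year}: accurate share of pool = {pool_accuracy:.3f}")

On these made-up numbers the accurate share slides from 95% towards roughly 91% over a decade; the figures themselves mean nothing, but the direction of travel is the point.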

These developments are especially dangerous in science, where, as Nick Brown says, “[w]hat has risen to the fore over the last few years […] is not only a replication crisis, but also a fabrication crisis.” (See the work by Dorothy Bishop on this matter.) The potential convergence of knowledge fabrication, fake science and knowledge dilution may be quite dangerous.

I think this is something that people who have, unlike me, an actual understanding of this type of AI need to keep an eye on – especially those interested in responsible innovation. I bet they already are.

Another danger might be an evolving dependency on ChatGPT to generate arguments. For example: I could not resist asking it a question about knowledge pollution and it provided what was, in my view, a quite comprehensive argument for responsible innovation – one which would have taken me more than a few seconds to write up in a blog post!

Thus spoke ChatGPT

My question: “What are the dangers of open AI polluting human knowledge?”

Its answer:

“There are several potential dangers of artificial intelligence (AI) polluting human knowledge. One concern is that AI systems may produce or disseminate inaccurate or misleading information. For example, an AI system trained on biased or incomplete data could generate false or misleading conclusions that are then widely disseminated and accepted as fact. This could have serious consequences for fields such as journalism, science, and politics, where the accuracy and integrity of information are critical.

Another danger is that AI may be used to manipulate or deceive people by generating fake content or news stories. For example, an AI system could be trained to generate realistic-sounding news articles or social media posts that spread false or misleading information. This could have negative consequences for public discourse and trust in media and institutions.

Another potential danger is that AI may be used to automate certain tasks or processes that require critical thinking and judgment. For example, an AI system may be able to analyze and process large amounts of data more quickly than a human, but it may not be able to understand the broader context or implications of that data. This could lead to the erosion of important skills and knowledge among humans and potentially result in negative consequences for society.

Overall, it is important to carefully consider the potential dangers of AI and to develop responsible and ethical approaches to its development and use in order to minimize the risks of AI polluting human knowledge.”

Can experts in responsible innovation do better? Answers on a postcard, please!
