ALT=""

December 13, 2023, by Ben Atkinson

An Introduction to AI for Higher Education

In this post, John Horton, Learning Technology Senior Project Manager (Pedagogies), reflects on the development of AI and how it might impact higher education at the University of Nottingham and the wider sector.

“Artificial Intelligence” (AI) – more specifically, generative AI – is arguably the biggest computational development to reach the general public in many years … perhaps since email became widely available. Essentially, AI is the predictive text we already know from our phones, but on a vastly greater scale. It is a probabilistic algorithm – put simply, AI guesses … but guesses very well. It is important, therefore, to recognise what AI cannot do as well as appreciating what it can. AI can certainly write pages of text from just a few sentences of instructions, or the “prompt” (to use the word now popular for the instructions given to AI). On the other hand, it struggles in some circumstances, particularly where detail or technicality is involved. We obviously don’t want text to contain errors, so this weakness of the probabilistic technique must be acknowledged. It is particularly unfortunate, therefore, that AI-generated text always sounds plausible, even when it contains nonsense! There is always the temptation for users not to read AI-generated text thoroughly before relying on it.
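To make that “guessing” concrete, here is a deliberately tiny sketch of word prediction (my own toy illustration, not how any real model is built): it counts which word follows which in a scrap of text, then generates by picking likely continuations at random. Generative AI rests on the same principle, applied to unimaginably more text and context.

```python
import random
from collections import Counter, defaultdict

# A toy corpus; real models learn from vastly more text than this.
corpus = (
    "holly is a christmas plant . mistletoe is a christmas plant . "
    "mistletoe has white berries . holly has red berries ."
).split()

# Count how often each word follows each other word (a "bigram" model).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def guess_next(word):
    """Choose the next word with probability proportional to how often
    it followed `word` in the corpus - a guess, not understanding."""
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation, one probabilistic guess at a time.
word = "mistletoe"
output = [word]
for _ in range(4):
    word = guess_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "mistletoe has red berries ."
```

Note that this sketch can output “mistletoe has red berries” as readily as the true sentence, because it tracks only how often words follow one another, not which statements are true – exactly the plausible-sounding nonsense described above.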

That AI covers its mistakes so easily may be the biggest challenge to its long-term use, especially in education. The immediate problem, however, is deciding what constitutes legitimate use of AI and how illicit use can be detected, ideally leading to its prevention. Programs called AI detectors can flag text that may have been written by AI, but they do not reach that judgement in a human way, and their verdicts are not certainties. In fairness to accused students, therefore, it may not be possible to use such evidence against them. This knotty problem remains to be resolved.

Let’s look further ahead, to a time when the use of AI has been normalised. It is not clear what this will look like or how long it will take … if it happens at all. Plagiarism, for instance, has occurred for generations and has become even easier in recent years, yet the sector still wrestles with it. Even if the use of AI can be normalised to some extent, problems remain. One is that students must not only be sufficiently educated to identify the errors that AI can make but also independent-minded enough to challenge them. Weaker students may well find this difficult – they may have neither the knowledge to identify the errors nor the intellectual confidence to challenge them.

To my mind, a second major – though less obvious – problem is that extensive use of AI-generated text may deprive people of the chance to develop their own written style. For many people, the ability to build a well-argued case is entwined with that style.

Perhaps others can see further problems.

I’ll end with two examples of the sort of error mentioned above, one academic and the other with a Christmas theme. Given the opening sentence of Virgil’s epic, the Aeneid, ChatGPT offered me a viable translation containing one slight but definite error. Having found this error, I challenged ChatGPT about it. ChatGPT acknowledged the error but only amended its translation after several further challenges.

Finally, the Christmas theme – asked for some quiz questions about Christmas (each with its correct answer marked), ChatGPT included this:

Which popular Christmas plant is known for its white berries?

a) Mistletoe
b) Holly
c) Ivy
d) Pine

Correct Answer: b) Holly

The important message to take away is that AI is not human; it does not think and it does not understand – it is a probabilistic algorithm only. On the Internet, white berries and holly are both associated with Christmas, but that does not make holly berries white: they are red, and it is mistletoe that bears the white ones.

Posted in Artificial Intelligence, Learning technology