[Image: red sunset on the horizon over a blue sea, its red mirrored in the water]

November 3, 2023, by Brigitte Nerlich

Super-intelligence and Supercomputers: When frontiers collide

This post has been written in collaboration with Alan Miguel Valdez, Lecturer in Technology and Innovation Management, The Open University, Milton Keynes (the home of Bletchley Park and of little roaming robots).

***

This week the UK AI Safety Summit took place at Bletchley Park, an iconic location associated with British codebreaking feats during World War II, Alan Turing and the advent of intelligent machines. Indeed, there was much talk about machines achieving a super-human level of intelligence. There was less talk about Isambard-AI, a super-duper supercomputer, named after Isambard Kingdom Brunel, an iconic figure in engineering history. In this post we shall bring the two together and see what emerges.

In a previous post, Brigitte talked about the concept of ‘frontier’, which was, for a while, front and centre in announcements of the AI summit, intimating that it was dealing with both cutting-edge technology and the cutting-edge dangers of that technology, in fact, “risks created or exacerbated by the most powerful AI systems”. We shall see how this concept fares when super-intelligence collides with supercomputers.

Super risks and super safety reassurances

In a speech of 26 October foreshadowing the AI summit, the UK Prime Minister Rishi Sunak mentioned several ‘frontier’ or cutting-edge risks, including the emergence of super-intelligence, which might, he intimated, lead to the extinction of humanity. But he assured listeners that the government would keep everybody safe. Indeed, he repeated the words ‘safe’/’safety’ 18 times in his speech. So there are great risks, but there is nothing really to worry about. Given the background noise from the Covid inquiry taking place at the same time as the summit, one might wonder whether such assurances are trustworthy… but that’s another topic.

Alongside risks and safety, there are, of course, also opportunities. Sunak stressed that “we’re going to make it even easier for ambitious people with big ideas to start, grow, and compete in the world of AI.” (Isn’t that a bit risky, perhaps?) He continued: “That’s not just about having the technical skills, but the raw computing power. That’s why we’re investing almost a billion pounds in a supercomputer thousands of times faster than the one you have at home. And it’s why we’re investing £2.5bn in quantum computers, which can be exponentially quicker than those computers still.”

So we have the risk of super-intelligence on the one hand (but it’s all fine, we are safe) and the promise of supercomputers on the other (risks, what risks?). 

Supercomputers and new frontiers

Sunak pointed to Google’s Sycamore quantum computer as an example of a new supercomputer. He could also have mentioned the world’s first exascale computer, which came online in 2022 at the Department of Energy’s Oak Ridge National Laboratory in the US. Interestingly, it’s called Frontier and boasts 1.1 exaflops of performance. When you go to its website, you find an interesting quote on the front page: “It has been basic United States policy that the Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers. Although these frontiers have more or less disappeared, the frontier of science remains.” That quote is from Vannevar Bush’s Science, The Endless Frontier (1945).

Frontier “is the first [system] to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.” What is exascale?

A UK government press release about new supercomputers defines it thus: “Exascale is the next frontier in computing power, where systems are built to carry out extremely complex functions with increased speed and precision. This in turn enables researchers to accelerate their work into some of the most pressing challenges we face, including the development of new drugs, and advances in nuclear fusion to produce potentially limitless clean low-carbon energy.” 
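To get a feel for these orders of magnitude, here is a minimal back-of-the-envelope sketch. A quintillion is 10^18 calculations per second; the desktop figure of roughly 100 gigaflops is our own illustrative assumption, not a number from the press release or from Sunak’s speech.

```python
# Back-of-the-envelope comparison of exascale with a typical home computer.
# The desktop figure is an illustrative assumption, not an official number.

FRONTIER_FLOPS = 1.1e18  # Frontier's reported performance: 1.1 exaflops
DESKTOP_FLOPS = 1e11     # assumed ~100 gigaflops for a home machine

ratio = FRONTIER_FLOPS / DESKTOP_FLOPS
print(f"Frontier is roughly {ratio:,.0f} times faster than the assumed desktop.")
# -> roughly 11,000,000 times

# Put differently: one second of Frontier time corresponds to about
# 11 million seconds of computing on the assumed desktop.
print(f"One Frontier-second = {ratio / 86_400:,.1f} desktop-days.")
```

On these assumptions, “thousands of times faster than the one you have at home” turns out to be a considerable understatement.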

At the moment, there is talk of a Bristol supercomputer or AI Research Resource (AIRR), also called Isambard-AI, referencing Isambard Kingdom Brunel of Clifton Suspension Bridge fame, and of a new exascale computing facility in Edinburgh. During the summit it was announced that Cambridge will host a supercomputer too and that the Bristol and Cambridge supercomputers “will help researchers to analyse the safety of advanced AI models and drive further breakthroughs”.

Capacity and opacity

Would this mean that we’ll have ever bigger and ever more inscrutable supercomputers protecting us from ever bigger and ever more inscrutable supercomputers? This feels rather circular, and it also infantilises us humans. Is there a danger that we’ll lose track of what ‘we’ are doing with an AI that has the capacity to do things ‘we’ can’t possibly do? Is that ‘safe’? Is this exascaling in line with responsible capability scaling?

A deeper question is that of the inevitable and probably exponential growth in the opacity of AI and the concomitant loss of human autonomy. As Vaassen (2022) observes, “AI decision algorithms are opaque even when they are reliable: they might deliver the right results, but they do not provide users or affected parties any insight as to how they came to produce those results”. Such opacities may be incompatible with the idea of safe and trusted AI.

In his article on ‘black box’ artificial intelligence, Carabantes (2020) distinguished between three forms of opacity: (1) opacity as intentional corporate or institutional concealment of algorithms, including those of machine learning; (2) opacity as technological illiteracy that prevents society from understanding a field as specialised as computer programming; and (3) opacity as a cognitive mismatch between the complex mathematical operations performed by machine learning algorithms and the type of reasoning used by human beings.

Scaling up does nothing to address the first two forms of opacity, and may even exacerbate the third one.
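To make the third form of opacity, the cognitive mismatch, a little more concrete, here is a minimal sketch of our own (using scikit-learn on synthetic data; it has nothing to do with any system discussed at the summit). The model’s answer is easy to obtain and even reliable, but the ‘reasoning’ behind it is spread across hundreds of trees and thousands of learned thresholds that no human can follow as an argument.

```python
# Illustrative sketch of opacity as cognitive mismatch: a model that is
# reliable on a toy task, yet whose internal "reasoning" is not the kind
# of reasoning a human being could follow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # 1,000 synthetic cases, 20 features
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # hidden rule the model must learn

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

case = rng.normal(size=(1, 20))
print("Decision:", model.predict(case)[0])  # the answer comes out easily...

# ...but the "explanation" behind it is an ensemble of learned split rules:
n_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print(f"That decision rests on {len(model.estimators_)} trees "
      f"containing {n_nodes:,} decision nodes in total.")
```

Even with full access to the code and the trained model, what we get is a forest of numerical thresholds rather than reasons, which is precisely the mismatch Carabantes describes.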

Frontiers, scales and maps – where are we heading?

The notion of a frontier is associated with the great unknown, with uncharted territory, but also with something that is knowable, mappable and, in the end, safe and controllable. Given the ever-growing scale of computing power and the associated increase in the opacity of AI, are we reaching what one might call, after Vannevar Bush, an ‘endless frontier’? That is, are we venturing into territories that cannot be responsibly scaled or mapped and that therefore escape our control?

Image: Photo by ToryYu1989 from PxHere

Posted in artificial intelligence