
October 20, 2023, by Brigitte Nerlich

Frontier AI: Tracing the origin of a concept

The UK government has convened an international AI Safety Summit at Bletchley Park, Buckinghamshire, which will take place on 1 and 2 November 2023. On 16 October the Department for Science, Innovation and Technology tweeted: “The agenda for the opening day of the #AISafetySummit has been published. The UK is laying out a focused plan and will seek to reach a global consensus on the risks of Frontier AI and how they are best managed.”

Thereupon Jack Stilgoe, a champion of responsible AI and a participant in the summit, wrote: “Back in July, I argued that this would be an opportunity to lead on AI regulation. This opportunity has not been taken. Instead, Government have let the AI industry set the agenda. The term ‘Frontier AI’ was coined by Open AI… The risks being discussed here are mostly imaginary. We need to talk about people’s actual hopes and fears for AI.”

In reply to Jack’s tweet, Seán Ó hÉigeartaigh, an AI risk researcher, asked for the source regarding the coining of the term ‘Frontier AI’ and said “The first proper definition I can find came from this paper (with a few OpenAI coauthors, but predominantly others), but I understand it was in use earlier?”. This was a paper from 2023.

This made me think about the origins of the phrase ‘Frontier AI’ and I began to trace them. Here are the results – and I will come back to the paper Seán mentioned. For the moment, I’ll just provide the paper’s definition of ‘Frontier AI’ as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model’s capabilities from proliferating broadly”. So ‘Frontier’ here is linked to risk and danger, rather than risk and opportunity. Interesting.

From frontier technology to Frontier AI

When it comes to English words and concepts, I always start with the Oxford English Dictionary. In this case, I didn’t expect to find much, and I was right about that, but what I found was still quite interesting.

The OED doesn’t have a definition yet for ‘Frontier AI’, but it has one for ‘frontier’ used as an adjective more generally: “At the forefront or cutting edge of research or technology; pioneering, innovative.” This meaning is linked to the longstanding frontier mythology that has been pervasive in the United States, from the conquest of the ‘Wild West’ to science to Star Trek’s final frontier. It’s quite a positive meaning.

The OED provides two examples of the use of that adjective. The first one is from 1955 (incidentally, that’s ten years after Vannevar Bush’s famous report on an ‘endless frontier’ for science): “Supporting research is the lifeblood of frontier technology, and guided missiles are indeed at the frontier of our technology” (Ordnance (U.S.), September 1955, 243/2). Guided missiles! Now there you have danger. But the stress was on innovation.

The other example is from 2018: “Frontier technologies including blockchain, big data, artificial intelligence, biometrics, and machine learning present powerful potential ways to address difficult development and health challenges” (@HefterScott, 8 August 2018, twitter.com, accessed 20 Feb. 2023). Artificial intelligence has become a ‘frontier technology’ – and again, it’s about opportunities rather than danger. The year 2018 is interesting, as we shall see in a moment.

Frontier AI in 2018 and 2019 and early Chinese voices

To get more clarity on the first uses of the phrase ‘Frontier AI’, I then scoured the news database Nexis using the search terms “frontier AI” and “frontier artificial intelligence”. There was a lot to look at because, despite putting the phrase in quotation marks, Nexis showed hundreds of extraneous hits relating to AI in the context of phrases like the last, next, new or final frontier. But skimming through things from 2006 onwards, I noted that around 2018 things changed and there were first attestations of the phrase ‘frontier AI’ as such. More interestingly still, quite a few were from Chinese news sources and their meanings were quite neutral. (More research needed!)

On 12 March 2018 China Daily reported that: “Wan [Gang, minister of science and technology] said China will strengthen its AI research and train a new generation of experts to tackle key and frontier AI-related science issues. The nation will also accelerate the commercialization and application of AI technologies to ‘solve the public’s concerns, such as security, health and environment’, he said. At the same time, he added, China will strengthen research into related laws and regulations in response to possible ethical and social challenges caused by AI technologies, such as privacy, employment and national security.” Intriguing, as these are all topics that the November Frontier AI summit will probably cover.

On 9 April 2018, the CIO Magazine reported [only on Nexis, no hyperlink]: “The larger tech companies are now the largest sources of cutting edge research in AI outside of China. Google’s DeepMind astonished the world with software that taught itself to play Go better than any human with only a knowledge of the rules of the game as a starting point. But China is a major rival. It uses an intricate system of subsidies, incentives and below-cost loans from state-owned banks to direct research and development to AI. It is now leading the way on frontier AI research such as deep learning.” This is also interesting, as China has been invited to the November summit, amidst some controversy.

To get a different focus on the 2018 date for the onset of Frontier AI discourse, I then went to Scopus, the database of academic/scientific articles, and searched for Frontier AI. I got only nine hits, with the first article published in 2019 by Chinese authors on AI, medicine and healthcare.

The article says: “Despite great potentials of frontier AI research and development in the field of medical care, the ethical challenges induced by its applications has[sic] put forward new requirements for governance. To ensure ‘trustworthy’ AI applications in healthcare and medicine, the creation of an ethical global governance framework and system as well as special guidelines for frontier AI applications in medicine are suggested.” Note that in 2019 Chinese researchers were talking about what’s now called ‘responsible AI’.

When I looked for “frontier artificial intelligence”, I found only three articles. Again, the first was published in 2019 by Chinese researchers, this time on financial innovation and they said: “The frontier artificial intelligence technologies, such as the technology of expert system, machine learning and knowledge discovery in database are combed to explore the financial applications of artificial intelligence.” 

Around that time the ‘deep learning revolution’ was in full swing. Although deep learning, as part of the broader family of machine learning methods, has a long history, over the last decade or so deep learning algorithms based on neural networks have transformed artificial intelligence technologies substantially. In 2016, Roger Parloff mentioned a “deep learning revolution” transforming the AI industry, and in March 2019 Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the Turing Award “for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing”. Yoshua Bengio is now part of the UK Government’s Frontier AI Taskforce.

Defining Frontier AI in 2023

Now we jump forward to 2023 and backwards to the beginning of this post – leaving out a lot of developments. On 6 July Markus Anderljung and many others, some from OpenAI, published a preprint or white paper on arXiv entitled “Frontier AI Regulation: Managing Emerging Risks to Public Safety”. That was the paper from which I got the definition of Frontier AI that I provided earlier on. This paper was simultaneously published on an OpenAI website, and on 10 July, Markus Anderljung, Jonas Schuett and Robert Trager wrote a blog post about it for the Center for the Governance of AI (founded in 2018). That post provides a slightly different definition of ‘Frontier AI’ compared to the paper.

“We define ‘frontier AI models’ as highly capable foundation models, which could have dangerous capabilities that are sufficient to severely threaten public safety and global security. Examples of capabilities that would meet this standard include designing chemical weapons, exploiting vulnerabilities in safety-critical software systems, synthesising persuasive disinformation at scale, or evading human control.” In a footnote, they define foundation models as “models trained on broad data that can be adapted to a wide range of downstream tasks.” Again, the focus is on risk and danger. This contrasts with meanings of ‘frontier’ as ‘to boldly go where no one has gone before’ and replaces this with a sign saying ‘danger ahead’.

On 26 July 2023 OpenAI, Anthropic, Google and Microsoft teamed up to establish the “Frontier Model Forum” (here ‘Model’ replaces ‘AI’) “to ensure the safe and responsible development of so-called frontier artificial intelligence (AI) models. The goal is to minimize the potential risks they may pose to the individual and society, according to OpenAI, Anthropic, Google and Microsoft. Frontier Model Forum was created with the aim of promoting the responsible development of frontier models, the most sophisticated ones.” They refer to “large-scale machine learning models that exceed the capabilities currently present in the most advanced existing models”.

In April 2023 the UK Government had announced £100 million in funding for a ‘Foundation Model Taskforce’ which later, in September, I believe, became the ‘Frontier AI Taskforce’. At the announcement in September it was pointed out that the “team was initially known as the Foundation Model Taskforce but the name was changed to reflect the focus on ‘Frontier AI’.” The announcement said that the “term describes highly capable foundation models, which could have dangerous capabilities that are sufficient to severely threaten public safety and global security.” This reflects the definition in the July blog post written for the Center for the Governance of AI.

Incidentally, that was also the time when Jack Stilgoe wrote his blog post warning that debate about responsible AI might be hijacked by industry voices and, I suppose, by a focus on the mitigation of over-hyped rather than realistic risks.

AI, society and risks

Now back to Jack Stilgoe and his misgivings that set me off down this rabbit hole. His criticism was that the new ‘frontier’ task force was perhaps too much beholden to the AI industry, rather than investigating societal issues around AI.

So, at a fringe event on AI and society on 31 October Jack and others will discuss issues around the climate impacts of AI; justice; education; policing and borders; workers’ rights; democracy; good governance; misinformation; creativity; privacy and surveillance. Similarly, the Institute for the Future of Work tweeted “While discussions at Bletchley focus on the future ‘x-risk’ of ‘god-like’ Frontier AI, at our ‘Making the Future Work’ summit we will be grounding debate in the here-and-now of how work is being transformed and working lives impacted.” 

That was interesting, especially in terms of paraphrasing Frontier AI as ‘god-like’ and as posing ‘x-risks’, i.e. existential risks, something I didn’t see in the more neutral/positive uses of ‘Frontier AI’ around 2018.

AI experts have long criticised the focus on existential risks, pithily summarised by Professor Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics at the University of Sheffield: “AI poses many dangers to humanity but there is no existential threat or any evidence for one. The risks are mostly caused by the natural stupidity of believing the hype. Many AI systems employed in policing, justice, job interviews, airport surveillance and even automatic passports have been shown to be both inaccurate and prejudiced against people of colour, women, people with disabilities. There is a lot of research on this and we desperately need to regulate AI. Looking for risks that don’t yet exist or might never exist distracts from the fundamental problems.” That’s what Jack Stilgoe had called ‘imaginary risks’.

A focus on ‘frontier’ AI and ‘cutting-edge’ safety concerns might distract from discussing these fundamental problems. The focus on the doomsday scenario of AI destroying the world might also distract from discussing AI opportunities which the Government seemed so keen to explore at one point. But using the word ‘frontier’ is probably a good framing for attracting attention. It has a sort of duck-rabbit meaning in the context of AI – on the one hand meaning innovative and pioneering, on the other pointing to potential existential risks.

Image by Pexels from Pixabay

Posted in artificial intelligence