December 15, 2023, by Brigitte Nerlich

Making science public 2023: End-of-year round up of blog posts

The year 2023 began with a bang. Suddenly there was a new form of ‘artificial intelligence’, and by ‘new’ I mean a form of AI that even I could use and vaguely understand. There was, it seems, some monstrous machine (called an LLM) gobbling up everything we have ever produced in science, literature and art and spitting it back at us in any form we liked: from a recipe for shakshuka in the style of Bruno Latour to a picture of the tree of life in all its glory (and that was only on one random day on Twitter/X). So quite a few of my posts tried to deal with this emerging phenomenon.

Artificial Intelligence

Over Christmas 2022 my son had told me that there was a new AI kid on the block, called ChatGPT. The news began to fill up with stories about this amazing new chatbot. So, after Christmas, I looked a bit more closely at it and wrote a post about what I saw, especially about the various things one could do with it or that it could do for you – you could use it to help you write essays and cheat, to enhance your work, especially coding, to just entertain yourself by playing with it, and much more. I also warned about ‘knowledge pollution’, a more serious topic that needs some attention.

There weren’t a lot of metaphors for this new form of AI around, but I found one, and so, in my next post I dissected the metaphor “Common sense is the dark matter of (artificial) intelligence”, or at least I tried. 

I then became more adventurous and started to chat with ChatGPT about metaphor itself and found that actually quite enlightening. However, when I probed it about the metaphors it would use for itself, things became slightly surreal. The image it had of itself was an anthropomorphised image of a super-helpful, even magical, companion, which is perhaps not surprising given the data, i.e. hype, it worked with.

Over time, things became more technical in the news and I started to collect not only metaphors but a menagerie of ‘large language models’, the heart of this new form of AI, including llamas, alpacas, Dolly the sheep etc. But, I have to confess, I am slightly overwhelmed by it all. Give me a real llama any time!

I also reflected on the merging of AI and biology and how old metaphors of ‘the language of life’ bumped into LLMs or large language models. And that made me think about previous discussions of ‘red’ and ‘green’ research into genetically modified organisms (which is, in a way, where our Institute started), red or medical research being perceived as ‘good’, green as ‘bad’ – and I applied that to thinking about potential attitudes to medical and non-medical AI.

Then things began to get more serious and alarming, so I wrote a post about the history of alarmism in AI and talked about existential threats. I also fitted in some musings about ‘alignment’, a concept that was totally new to me.

In November the AI summit happened at Bletchley Park, focusing in part on existential risks posed by so-called ‘frontier’ AI. I wrote two blog posts, one tracing the origin of the frontier concept in AI, the other (written with Alan Miguel Valdez) exploring the contradictory talk of superintelligence and supercomputers.

In my last post on the matter of AI, again written with Alan, we explored the opposite of the rather scary ‘frontier’ AI stories, namely the use of AI in the shape of more homely delivery robots deployed in Milton Keynes, where Alan studies them.

At the end of the year, in December, Google DeepMind brought out a new multimodal chatbot, Gemini, but I haven’t tested it out yet. It says on the front page: “Gemini is built from the ground up for multimodality — reasoning seamlessly across text, images, video, audio, and code.” The anthropomorphic language seems now to be built in as standard.

And, more or less at the same time, “EU lawmakers have agreed the terms for landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime on the development of the technology” (Financial Times). I wonder if anybody, apart from Emily M. Bender, is really looking not only at large language models, but also at the emerging language surrounding this emerging technology.

Climate change

Of course, pancake recipes in the form of Shakespearian sonnets or whatever written by ChatGPT don’t really help with the most urgent problem the world is facing, namely climate change, but nicely divert attention from it. This year was the hottest on record and was marked by outrageously extreme weather events, ranging from monstrous wildfires to atrocious floods, basically all over the place.

Two posts in particular dealt with heat and wildfires, one on the new phrase ‘global boiling’ and one on wildfires, for which I coined a new phrase (with the help of Twitter friends), namely, ‘orange is the new bleak’. I also expressed exasperation at the fact that climate change is trying to ‘speak’ to the world through words and pictures but that nobody is listening. And finally I briefly tackled a problem which featured quite prominently at this year’s mega-COP in Dubai, namely climate change and health. While I was finalising this post, COP28 published its statement, and perhaps people are finally listening to what the climate is telling us.

Genetics and genomics

Alongside these main topics covered in my blog posts this year, I still maintained some interest in my old hobby-horses of genetics and genomics, and, of course metaphor in all its forms – more about that later.

In one post I examined the metaphors of bombs and bullets used when discussing the dangers of gene drive, while in another I talked about the metaphor of ‘gene shears’ used to talk about gene editing in the past, while nowadays the metaphor of scissors is much more prominent. Some posts dealt with more topical issues, such as the genome editing summit in London, synthetic embryos and how to talk about them, and also mitochondrial replacement and the pangenome.


Metaphor

If you have read this overview up to this point, haha, you’ll have noticed that metaphor is never far away from my thinking and writing. This year I wrote a few more incidental posts on particular metaphors, such as cancer metaphors, gravitational wave and music metaphors, superconductor metaphors, and even crumbling building metaphors (the RAAC scandal), a post I entitled ‘Metaphors we live in’.

I also thought a bit more deeply about metaphor itself and wrote a post detailing what metaphors are actually for.

Other topics

One history of science post dealt with an intriguing scientific family, the Gmelin family, whose members studied everything from chemistry to phlogiston to permafrost… Another post went back in time (although not that far) and looked at what I wrote about bird flu in the past and how it might apply to bird flu now; and yet another post mused about the choice and meaning of names for storms, viruses and heatwaves.

In a final substantive post, I dissected the important issue of science and trust as treated in a recent report by the International Science Council and pointed out that some parts of this report are built on strawmen.

Guest posts

This year, I published two posts written by colleagues: one, a guest post by Jack Morgan Jones reflecting on whether metaphors can hinder scientific progress, and the other a repost by Dimitrinka Atanasova asking whether generative AI contributes to more culturally inclusive higher education and research.


At the beginning of this year, I wrote an overview of ten years of blogging, which made me quite nostalgic. Towards the end of this year I could/should have blogged about the FDA approval of gene editing for sickle cell treatment, about the COP28 in Dubai, or about the UK government’s national vision for engineering biology (a new term for synthetic biology about which I have written before), but life in various shapes and forms interfered.

I hope to resume blogging in the new year, but after 12 years, the inspiration and energy sometimes run out; so we’ll have to wait and see. In the meantime, we can only hope that the world doesn’t get worse than it is already. On this happy note, I wish you all a peaceful and healthy New Year.


Posted in artificial intelligence, Climate Change, gene drive, history of science, Metaphors