September 14, 2018, by Brigitte Nerlich
Anticipating public reactions to emerging sciences and technologies: Nano, synbio and AI
In around 2003 I woke up to nanotechnology because I was watching my son play a computer game that involved ‘nano-armour’. That piqued my curiosity. Later I came across a quote from Howard Lovy, then editor of Small Times: “Nanotechnology, independent of its development as a science, is spreading as a cultural idea and icon. This separate branch of nanotech – a little bit of fact and a whole lot of imagination – can be turned into a powerful force.” (May 28, 2004; the link to that document is now dead)
This made me want to study the spread of nanotechnology as a cultural idea, but, unfortunately, I never got the funding to do this. Nevertheless, in 2005 I published an article on the visual construction of nanoscience. This was the time when, in terms of images, nanobots still dominated the utopian and dystopian scene. By contrast, when you now search for nanotechnology on the Science Photo Library, you still get some nanobots and even nanosubmarines, but also a lot of nanoparticles, Bucky balls and nanotubes.
The 2003-2006 nano hype gradually fizzled out, and with it nano as a cultural idea and cultural force.
Interestingly, nano never provoked the public outcry that academics anticipated, and this despite (or because of) a lot of upstream and anticipatory public engagement, science communication and awareness-raising activities. General public awareness never rose to the challenge, for good or for ill. That was quite unexpected, given the fear of nano provoking another ‘GM debate’…
What can we learn from nano for public debates about other emerging sciences and technologies, such as synthetic biology and now AI? And what can we learn about how to better anticipate public reactions?
Richard Jones on nanotechnology and lessons for science communication
Richard Jones has recently written a short article entitled “Between promise, fear and disillusion: two decades of public engagement around nanotechnology”. He provides an overview of early hopes and fears around nanotechnology, fuelled by a particular constellation of events, and reflects on lessons learned.
Jones tells us that the work of the futurist K. Eric Drexler, whose books were scientifically controversial but popularly inspiring, was central to making nano at least somewhat ‘public’, leading other futurists, even transhumanists, to become involved. Another aspect of early nano was the influx of a lot of venture capital, as well as government endorsements, which fanned the flames of hope while also expecting science communicators to ward off the evils of public rejection and fear (keeping the spectre of GM always in mind).
Going beyond Jones, one can ask: How does this compare to the situation in which synthetic biology finds itself? We have Craig Venter, Drew Endy, George Church…, but there is no Drexler. There is an influx of venture capital and there is government endorsement, which fans the hopes of monetary gain while also asking scientists to work ‘responsibly’ and science communicators to ward off rejection and fear (keeping the spectre of GM always in mind). There are also concerted efforts by scientists to be open and transparent and to engage young scientists and members of the general public, something that we also saw in the nano era.
So, there are some similarities, but also some differences, between nano and synbio. These differences include the fact that, over and above the stress on ‘upstream engagement’ and ethics (ethical, legal and social issues) popular during the height of the nano hype, we now also have Responsible Research and Innovation (RRI) and anticipatory governance.
This means that science communicators (including social scientists and other intermediaries) have to shoulder a heavy burden in terms of ‘responsibility’, especially as catchphrases like upstream engagement, RRI (which in its AREA definition includes ‘anticipation’) and anticipatory governance are extremely fuzzy and difficult to operationalise.
Science communication and engagement dilemmas
Richard Jones summarised one of the dilemmas faced by science communicators in the following way: “It’s essential to be able to make some argument about why research needs to be funded and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an unhealthy, if possibly inevitable, tendency for those claimed benefits to inflate to bubble proportions.”
There is, however, no real advice on how to navigate a course between these cliffs of public engagement. Jones quotes Alfred Nordmann (a philosopher of science who was very active in early nano-debates) on “responsible representation”, which, according to Nordmann, “involves determination of plausibility in light of ongoing trends rather than radical novelty” and “requires that communicators take responsibility for their representations by being prepared to defend their credibility”.
Again, the question is: how can science communicators do that, namely ‘responsibly’ determine plausibility and defend credibility at the beginning of a trajectory that might lead to an explosion of public concern, just fizzle out, or anything in between? This is particularly difficult in view of the fact that STS advocates of upstream engagement, RRI and anticipatory governance stress that “upstream forms of public engagement with science are emphatically not about earlier prediction (and subsequent management) of impacts” (Wynne 2006: 73).
Science communicators in the broadest sense are supposed to become “experts in social engineering” with all the responsibilities this entails, but nobody really tells them how that can be achieved ‘responsibly’. Lessons from the past, as summarised by Jones, seem to be rather dilemmatic and problematic.
Anticipating future public reactions by exploring the cultural past
As was the case with nano, public awareness of synthetic biology is low (possibly even lower than it was in the case of nano) and public debate is non-existent. That’s what nano and synbio debates have in common. But there is also a difference.
Nano discourses were, in the early days, structured by extreme nano-hyperbole and nanophilia on the one hand and rather extreme nanophobia on the other. In synbio, as far as I can tell, we have had neither the “dramatic nanophobia” of a Bill Joy (2000) nor the fearmongering of a Michael Crichton (Prey, 2002). So public discourse didn’t have to weave between such extremes before now fizzling out, just as nano discourse did some years ago despite all the hype. And unlike nano, synbio didn’t really inspire the public imagination.
What about AI? I think that here public awareness is quite high (one would have to look into this properly) and we are seeing the emergence of two poles of AI-philia and AI-phobia (combined with algorithm phobia; see the recent emergence of the phrase ‘Franken-algorithm’). AI also seems to inspire the public imagination, as it links up with a long history of AI/robotic speculation. Between the hyped hopes and the hyped doom, some more nuanced discourses are emerging, such as the interventions by Jim Al-Khalili.
So how do we do upstream engagement, RRI and AI-Ethics in this situation?
One idea would be to do what Chris Toumey did in the early days of nano when trying to anticipate public reactions. To anticipate public reactions to AI (it’s too late for synbio), we need to gain insights into what relevant stories have been told in the past, and about AI’s past. Only in that way can we anticipate future reactions.
As Chris said in 2005 of nano (I replaced nanotechnology with AI in the quote): “One of the ways people try to envision the future of AI is to tell stories about the past, expecting that the future will continue certain features of the past.” We need to explore AI as a cultural icon, a cultural idea, even a metaphor. We need to map which stories and images are out there and how they might impact public debate and public understanding.
We also need to know not only about inter-cultural differences but also about inter-generational differences in stories of hopes and fears around AI: between those who know the difference between “AI” and “an AI”, for example, and those who don’t.
This type of work is quite different to RRI and upstream engagement but should be complementary to it. You can’t do engagement without knowing what stories and myths are out there reflecting the hopes, fears and values of those with whom you engage.
As Chris said in 2005 (and again I replaced nano with AI): “For a short time, we have the luxury of anticipating the possible forms of public reactions to AI.” It might already be too late though….
Image: Google images, labelled for reuse