March 30, 2016, by Warren Pearce

Reviewing the evidence on transparency in science: a response to Lewandowsky & Bishop.

Transgress the boundaries… get a red flag?

Co-authors: Warren Pearce, Sarah Hartley & Brigitte Nerlich.


In January, Nature published a Comment piece by Lewandowsky and Bishop entitled “Don’t let transparency damage science”. The authors argued that some of the “measures that can improve science — shared data, post-publication peer review and public engagement on social media — can be turned against scientists”. Following this observation, they proposed a series of ‘red flags’ that may be raised about researchers or their critics within a number of categories. The piece caused some consternation in the science blogosphere and here at Making Science Public, leading us to pen a brief response that Nature published in February. Here, we expand a little on the themes of our correspondence.

The danger of simplified dichotomies

Social problems are complex, often bewilderingly so. A key task for social scientists is to make sense of this complexity. As the aphorism (not quotation!) says: “everything should be made as simple as possible, but not simpler”. Unfortunately, Lewandowsky and Bishop do over-simplify the complex issues in play when thinking about transparency in science, portraying them as dichotomies that pit researchers against their critics. What’s worse is that this over-simplification appears in the pages of Nature, science’s most high-profile forum. This dangerously inflames tensions in controversial areas of public science and stymies efforts to break deadlocks. This was noticeable in ‘below the line’ comments on Nature, especially from commentators concerned with chronic fatigue syndrome (CFS) research. The authors expressed surprise at such reactions, but such a caricatured presentation of delicate issues was always likely to garner robust opposition.

The dangers of unsubstantiated claims

Lewandowsky and Bishop claim to have “identified ten red-flag areas”, but no evidence is provided as to how these were identified. For example, it is claimed that “hard-line opponents” to research on nuclear fallout, vaccination, CFS or GM organisms “have employed identical strategies” to opponents of climate change and tobacco control research. No evidence is supplied to support this view, which has the effect (intended or otherwise) of discrediting campaigners in these areas (only CFS researchers were represented at the Royal Society meeting that prompted the Nature piece and that one of us (Pearce) attended). Indeed, the cases are very different. For example, corporate interests play very different roles within the debates over climate change and GM organisms. It is imperative to focus on social contexts when trying to understand what drives some, but not all, of the criticisms levelled at scientists. Not doing so is especially troubling as there is social science evidence available that sheds light on these ‘red-flag’ issues, but which is ignored in the piece (see below for a list of further reading, short and long). Writing such an important piece in a journalistic style and making sweeping claims unsupported by evidence is dangerous, as it may inflame the debate still further. It is somewhat ironic that scientists who are also experts in science communication have seen fit to write a piece that is so cavalier with evidence, especially evidence about expertise.

Who is the expert?

Lewandowsky and Bishop present researchers and their critics operating outside of their area of training and/or expertise as worthy of a ‘red flag’. However, no consideration is given as to how such an ‘area’ might be delineated. As Harvey Graff explains in his recent book on interdisciplinarity, the exchange of ideas between different areas of knowledge has been central to the emergence of many of today’s established disciplines. Yet for Lewandowsky and Bishop, such boundary transgressions earn researchers a red flag. A decade ago, Sheila Jasanoff succinctly summarised the problem:

“Difficulties in securing responsible criticism are compounded when, as is often the case for public science, claims and data cut across disciplines, involve significant uncertainties or entail significant methodological innovations.”

So the fundamental question is who counts as an expert, and under what conditions? This is by no means self-explanatory. In the case of oophorectomies, for example, Stephen Turner shows (£) how online commenters provided an important check on the inflated rhetoric of medical experts who claimed that the procedure had no significant side effects. Blog posters who drew on their own personal experiences were subsequently proved correct. This is not to say that we should always privilege online commenters over professional researchers. Rather, they provide different types of expertise, which should be included in controversial areas of public science.

Where are the public?

Unsurprisingly, but depressingly, there is a large hole in Lewandowsky & Bishop’s analysis where the public should be. There is no mention of what the public interest might be in raising ‘red flags’, or of what role sections of the public can play in science. Close reading of the article reveals it to be really about science governance, an issue too important to be left to the research community alone. What the piece does demonstrate is that a broader public discussion about the role of scientific experts in society is needed. Lewandowsky & Bishop argue that science is vulnerable to abuse. We agree, and scientists should be subject to the same legal safeguards as any other members of society. However, attempting to delineate general (and to some extent off the cuff) rules for distinguishing legitimate from illegitimate criticism risks doing more harm than good, further distancing science from the society it is supposed to serve. A more fruitful approach to addressing public doubts about science was proposed by David Demeritt in 2001 (writing about climate change but, we argue, more generally applicable):

“The proper response to public doubts is not to increase the public’s technical knowledge about and therefore belief in the scientific facts of global warming. Rather, it should be to increase public understanding of and therefore trust in the social process through which those facts are scientifically determined. Science does not offer the final word, and its public authority should not be based on the myth that it does, because such an understanding of science ignores the ongoing process of organized skepticism that is, in fact, the secret of its epistemic success. Instead scientific knowledge should be presented more conditionally as the best that we can do for the moment. Though perhaps less authoritative, such a reflexive understanding of science in the making provides an answer to the climate skeptics and their attempts to refute global warming as merely a social construction.”

Lewandowsky and Bishop pinpoint some pitfalls in the social process of science, but a cavalier attitude to evidence has inadvertently reinforced a caricatured image of this process.

Social scientists: up your game!

What is noticeable is how little these social science critiques have cut through to those in the natural sciences. To be clear, there is no excuse for ignoring the existing evidence base. However, we believe that social scientists must be more proactive in using that evidence base in order to lead the debate from a position of strength.

Further reading

Short reads:

Janz, N. (2016, January 29). Getting the idea of transparency all wrong. Political Science Replication.

Kiser, B. (2015, November 16). The undisciplinarian. A View from the Bridge. (an interview with Harvey Graff)

Murcott, T. (2012, July 11). Unreasonable doubt. Research Fortnight.

Nerlich, B. (2012, October 12). Making the invisible visible: On the meanings of transparency. Making Science Public.

Pearce, W. (2013, November 3). The Subterranean War on Science? A comment. Making Science Public.

Tamblyn, J. (2013, May 22). Bring on the yawns: Time to expose science’s ‘dirty little secret’. Making Science Public.

Hulme, M., & Ravetz, J. (2009, December 1). ‘Show Your Working’: What ‘ClimateGate’ means. BBC.

Long reads:

Demeritt, D. (2001). The construction of global warming and the politics of science. Annals of the Association of American Geographers, 91, 307–337.

Graff, H. (2015). Undisciplining Knowledge: Interdisciplinarity in the Twentieth Century. Baltimore: Johns Hopkins University Press.

Hess, D. J. (2010). To tell the truth: on scientific counterpublics. Public Understanding of Science, 20(5), 627–641.

Jasanoff, S. (2006). Transparency in public science: purposes, reasons, limits. Law and Contemporary Problems, 69(3), 21–45.

Stilgoe, J., Irwin, A., & Jones, K. (2006). The Received Wisdom: Opening up Expert Advice. London: Demos.

Turner, S. (2013). The blogosphere and its enemies: the case of oophorectomy. The Sociological Review, 61, 160–179. (*subscription required)

Image: Livestock Chicago 1947, Wikimedia Commons

Posted in: interdisciplinarity, open access, public engagement with science, public participation, publics, Social science, transparency