May 25, 2014, by Brigitte Nerlich
Science is not what you want it to be
This is a GUEST POST by PHILIP MORIARTY
This level of engagement between natural scientists and sociologists is great to see and, given the momentum we established last week, it would be a great shame if the Circling… conference did not become an annual event. The Making Science Public blog will, I’m sure, keep you posted on this. (I think we must have done something right when a physics PhD student saw fit to tweet the message below…)
Leaving aside the perennial question of whether or not an aptitude for impenetrable writing is worn as a badge of honour in some areas of sociology – see here; and here; and, for examples spanning the truly awful to the actually rather good, here – the aspect of the conference that really exercised me was the vexed question of the independence and objectivity of scientific research. I was perhaps being somewhat mischievous with the choice of title for my post on the Circling… conference for physicsfocus: “The laws of physics are undemocratic”.
But not that mischievous.
I’ve been informed by e-mail that I entirely missed the point and that, of course, no-one had ever said that all scientific evidence is tainted by investigator bias, or that sociologists in general have any beef with the idea of scientific objectivity. (This line, from the description of an influential book by Sheila Jasanoff, would suggest otherwise: “… who should define what counts as good science when all scientific claims incorporate social factors and are subject to negotiation?”). Nestling beside those messages in my inbox are e-mails whose authors are equally convinced that, of course, we’re all subject to cognitive biases so how could we ever have truly objective data? And over at Occam’s Typewriter, Athene Donald is of the opinion that I am “verging on the simplistic” in my belief “that science is inherently neutral”.
Put simply, if the scientific process is not inherently neutral then we’re not doing it right. I hate to succumb to the usual temptation to quote Feynman, but the following words are so apposite to this debate, I can’t resist. (Sorry, Athene!).
“If you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it… The first principle is that you must not fool yourself – and you are the easiest person to fool… After you’ve not fooled yourself, it’s easy not to fool [others].”
I stress yet again that I am not referring to how scientific data will be used by politicians or reported by the media. Nor am I suggesting that all science is free of observer bias. (Indeed, I’m embroiled in a debate at the moment about a series of high-profile papers which are based on a worrying mix of experimental artefacts and strong researcher bias.)
But Rule #1 of Science 101 is that we must aim to hunt out sources of random and systematic error and reduce them as much as we can. We drive undergraduate physicists to distraction in their laboratory sessions because of this focus on experimental uncertainties. We then decimate – sometimes even literally – their marks for a lab. write-up if they don’t include error analyses. (I’ve often paraphrased Pauli – somewhat out of context — in 1st year lab. lectures I’ve given, stating that to report a result without its associated uncertainty is “not even wrong”.)
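To make that lab-report requirement concrete, here is a minimal sketch (my illustration, not from the post) of the most basic error analysis an undergraduate write-up would be expected to contain: quoting a result as a mean with its standard error. The readings below are made-up numbers, not real data.

```python
import math

# Illustrative only: report a repeated measurement as mean +/- standard error.
# These readings are invented values for a measurement of g, in m/s^2.
readings = [9.79, 9.83, 9.81, 9.78, 9.84]

n = len(readings)
mean = sum(readings) / n

# Sample standard deviation (with Bessel's n-1 correction),
# then the standard error of the mean.
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
sem = s / math.sqrt(n)

print(f"g = {mean:.3f} +/- {sem:.3f} m/s^2")
```

A result quoted without that `+/-` term is, per the Pauli paraphrase above, “not even wrong”.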
Science must always strive to be neutral in its methods and its data interpretation. This is a very simple, if perhaps not simplistic, message. The remarkable success of science in explaining so much of the natural world via reproducible experiments coupled to mathematical models, simulations, and theory shows that scientific neutrality can indeed be attained, and is not just the idealisation which some suggest it is.
“Regulatory science” is not science.
It was very clear from the Circling the Square conference that what a practicing scientist understands by the term “science” is very often not that closely related to how “science” is viewed by those in sociology. This is the crux of the debate and there are misconceptions and preconceptions on both sides.
A colleague in Sociology and Social Policy here in Nottingham, Sujatha Raman, brought this important document to my attention yesterday. (Thanks, Sujatha!). Although I’m familiar with the concepts of post-academic and so-called “Mode-2” science, the “regulatory science” to which Jasanoff refers in that document is something with which I’m not particularly familiar.
So I spent a little time genning up on the topic of regulatory science.
And I got more and more frustrated.
I realise that I am yet again picking at old “science wars” wounds (cf. last week’s comments on the Sokal affair) but those wounds really need to be re-opened. (I agree entirely with Reiner Grundmann, Chair of the Science and Technology Studies Strategy Group here at Nottingham, that frank discussion is required).
Let’s make the first incision…
“Regulatory science” is an oxymoron. The differences between “regular science” (i.e. science) and regulatory ‘science’ are spelt out in the table below (from Sheila Jasanoff’s book The Fifth Branch: Science Advisors as Policymakers). Regulatory ‘science’ is a damn good recipe for introducing deliberate bias, for diluting the quality of scientific evidence, and for reducing public trust in the independence and reliability of scientific conclusions.
I’m certainly not going out on a limb in stating this. Irwin and co-authors start off their 1997 discussion of regulatory science with the following excerpts from earlier papers:
It may well be that when sociologists speak about “science” that they sometimes have this type of regulatory ‘science’ in mind. If so, very many scientists and sociologists are going to be speaking past each other until we agree some common ground.
For scientific evidence to be credible and trustworthy, the scientific process has to be as independent and disinterested as possible. Regulatory science — or, indeed, any of the other “next generation” forms of science including post-academic, Mode-2, etc. — wilfully erodes this independence and disinterestedness.
The parallels between regulatory ‘science’ and Research Councils UK’s advice on impact are also rather striking. But that’s a blog post for another day…
Coda: Reiner pointed me towards a fascinating blog post and associated comments thread on the matter of the extent to which physical laws are fundamental and objective.
I think that any physicist reading Cartwright’s comments quoted in that post will grind their teeth quite a bit and may, if so inclined, mutter some rather choice language under their breath. (I should stress that Reiner is at pains in the comments thread to state that he is just quoting Cartwright, not supporting her arguments).
Newton’s second law most definitely does apply to the situation which Cartwright describes, contrary to her belief. Cartwright’s straw-man argument is flawed to its core (particularly the idea of physics being a “fundamentalist faith”). The issue with the falling banknote is not that Newton’s second law fails, it’s that one has to apply it correctly! There are a number of forces acting on the banknote and these must all be taken into account.
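To sketch what “applying it correctly” means (my illustration, with invented parameter values rather than the actual aerodynamics of a banknote), here is Newton’s second law for a falling object once air resistance is included alongside gravity. The drag term is what makes the motion differ from free fall; the law itself holds throughout.

```python
# Minimal sketch: m * dv/dt = m*g - c*v^2, i.e. Newton's second law with
# gravity plus quadratic air drag (downward taken as positive).
# m, c and the drag model are illustrative assumptions, not measured
# properties of a real banknote.
def fall_with_drag(m=0.001, g=9.81, c=0.005, dt=1e-3, t_max=5.0):
    """Integrate the speed of a falling object with simple Euler steps."""
    v, t = 0.0, 0.0
    while t < t_max:
        a = g - (c / m) * v * v  # net acceleration: gravity minus drag
        v += a * dt
        t += dt
    return v

# The speed settles at the terminal velocity sqrt(m*g/c), where drag
# exactly balances gravity -- F = ma with a = 0, not a failure of F = ma.
print(fall_with_drag())
```

The tumbling and fluttering of a real banknote adds lift and torque terms that are hard to write down, but “hard to write down” is not “the second law does not apply”.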
Predicting the motion of an object on the basis of a combination of forces is, of course, not always straightforward (and can often be intractable) but that’s not because Newton’s second law is not valid! I thoroughly recommend James Gleick’s book Chaos for a thorough grounding in how simple laws of motion can produce complex behaviour.
One doesn’t even have to consider a system as complicated as the banknote fluttering to the ground to see how exceptionally intricate behaviour can originate from a simple equation which has Newton’s second law at its core:
(Jump to ~ 18:20 if you want to cut to the chase!)
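A standard textbook illustration of the same point (my sketch, not taken from the post or the video) is the damped, driven pendulum: one equation with Newton’s second law at its core, yet two trajectories starting a millionth of a radian apart end up in quite different places. The parameter values below are conventional choices for the chaotic regime, used here purely for illustration.

```python
import math

# theta'' = -sin(theta) - b*theta' + F*cos(omega*t): a damped pendulum
# driven by a periodic force, in dimensionless units. For this driving
# amplitude F the motion is chaotic -- illustrative parameters only.
def pendulum(theta0, b=0.5, F=1.2, omega=2.0 / 3.0, dt=1e-3, steps=100_000):
    """Integrate the pendulum angle with semi-implicit Euler steps."""
    theta, v, t = theta0, 0.0, 0.0
    for _ in range(steps):
        a = -math.sin(theta) - b * v + F * math.cos(omega * t)
        v += a * dt
        theta += v * dt
        t += dt
    return theta

# Two nearly identical starting angles: the deterministic law is obeyed
# exactly at every step, yet the outcomes diverge.
print(pendulum(0.2), pendulum(0.2 + 1e-6))
```

The unpredictability here is a property of the solution, not a breakdown of the law being solved.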