October 23, 2015, by Warren Pearce
Methodological clarity required when publishing social science in natural science journals
The latest issue of Nature Climate Change features a Correspondence from Peter Jacobs and colleagues concerning a recent Letter that appeared in the same journal; our Reply is also published. We do not wish to deny that there are real and significant differences between ourselves and Jacobs et al, but we think the correspondence also says something about the publication of qualitative research studies in prestigious natural scientific outlets. This Correspondence/Reply received three peer reviews: one reviewer appeared unsure whether our Reply was worthy of publication; one felt Jacobs et al should be ignored; and a third, rather presciently, felt that both social scientists and natural scientists would think they had ‘won’ and return to their houses none the wiser. Given that Nature Climate Change has published many calls for qualitative and/or interpretive studies from the social sciences and humanities, we’d like to muse, very briefly, on what might be learned from this exchange by those writing, or reading, such reports in the future. From our perspective, we’d like to highlight the importance, and difficulty, of inductive research.
We could think of hypothetico-deductive, hypothesis-driven, research as being “top-down” – one starts with a theory or a question and then interrogates a data set in order to see how that theory stands in relation to the data. Inductive research is quite different and is “bottom-up” – you start with the data, see patterns or interesting things, and the theories and broader claims are integrated later. This latter approach is certainly the one we took in our Letter; the two main theoretical sources in that piece – Jeanne Fahnestock’s work on meaning in science (£) and Susan Leigh Star’s writings on certainty in science – were only incorporated after we had analysed the press conference transcript itself. This approach is incredibly common in the social sciences; Grounded Theory, the archetypal inductive approach, is quite possibly the most widely used research method. Inductive research is, however, far less common in the natural sciences (although some claim that Big Data is changing this).
When reading back the methods section of our Letter, it is noticeable that the word ‘inductive’ does not appear at all, and the role of theory in our coding scheme (i.e. it played a largely insignificant role) is not as well explained as it might be. To a social scientific audience we don’t think this would be problematic, but in this context we think it proved to be so; there is a strong sense that Jacobs et al think we didn’t test our hypotheses rigorously, but this is precisely because there weren’t any hypotheses. A perfect example of different answers arising from an unacknowledged difference in questions, rather than anything fundamental about the answers themselves.
As calls (and pushes!) for interdisciplinary research intensify, our experience at the sharp end within a high-profile journal has taught us two basic but important lessons for future engagements: i) qualitative researchers need to be clear about what methods they are using when publishing in natural science journals, and ii) those more accustomed to hypothetico-deductive approaches need to at least be aware that qualitative methods frequently follow different rules from those they most commonly encounter.