September 26, 2014, by Brigitte Nerlich

The invisibles: Science, publics and surveys

This is a guest post by two science communication researchers, one working at the University of Otago, New Zealand, the other at the University of Queensland, Australia: Fabien Medvecky and Joan Leach.

How much can large-scale surveys tell us about attitudes to science, and what can we say about the categories of publics constructed around these attitudes? Not much! In fact, much less than we think we can. We should be grateful for this, because it’s this void of explicit knowledge that invites us to think more interestingly about just what kinds of publics exist in the world.

Surveys of public attitudes to science

Large-scale surveys about attitudes to science and technology are a substantial dish in the banquet that is science communication. Here, we have in mind the Eurobarometer special report on science and technology and the like. These surveys, among other things, seek to find out about public attitudes to science and technology, and classically differentiate between the interested or engaged and the disinterested or disengaged. We know who the disengaged are: they are the survey respondents who express a lack of interest or engagement with science. The disengaged have long been a category of interest to science communication, often perceived as the ill-behaved, disappointing or naïve child who fails to see the importance and unquestionable value of science.

More recently, there has been a more nuanced reading of the disengaged. While they may not engage in the way we want them to, the simple fact that we know they exist forces us to question why we think it is so important to be engaged. The disengaged, then, can be understood as enablers of critical reflection, and that surely must be a good thing. But the question we want to tackle here is about our capacity to make claims about levels of engagement based on large-scale surveys. We’ll suggest two major stumbling blocks to making any ontologically reliable claims about levels of engagement from such surveys, and both hinge on a rarely discussed subset of the surveyed population, namely the non-respondents: the invisible men and women.

A methodological challenge

The first challenge is methodological and concerns how far we can make claims about the world based on the responses we get. Put simply, the single largest determinant of response rates is interest in, or care about, the survey topic. You’re more likely to be willing to respond to (and complete) a survey about jazz if you like jazz, have an interest in jazz, or even really, really dislike jazz; likewise, you’re more likely to respond to a survey about science if you have an interest in science or a particular bone to pick with it. Given that those with no interest in science are less likely to respond to a survey on science, and those with an interest in science are more likely to respond to such a survey, any question about levels of interest in science is likely to yield biased answers. So unless we can find out about the attitudes of the non-respondents, our capacity to construct accurate categories seems pretty slim.
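To see how this selection effect plays out, here is a minimal simulation sketch in Python. Everything in it is invented for illustration: the population size, the interest scores, and the assumption that the chance of responding rises with interest.

```python
import random

random.seed(42)

# Hypothetical population: each person has an "interest in science" score in [0, 1].
population = [random.random() for _ in range(100_000)]

# Illustrative assumption: the chance of answering the survey rises with interest,
# from 5% for the utterly uninterested to 60% for the most interested.
def responds(interest):
    return random.random() < 0.05 + 0.55 * interest

respondents = [p for p in population if responds(p)]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)

print(f"True mean interest in the population: {true_mean:.2f}")
print(f"Mean interest among respondents:      {observed_mean:.2f}")
```

On a run like this, the mean interest among respondents comes out well above the true population mean, purely because of who chose to answer.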

A conceptual challenge

The second challenge is conceptual: how can we, and how should we, understand the richness and complexity of the ways individuals engage with science when the majority of the surveyed population does not want to tell us about their engagement? (The non-response rate for large-scale surveys is often above 80%.) Complain all we might about those assessed as disengaged in large-scale surveys, but truth be told, they are at least engaged in enough of a dialogue with us to tell us they’re not interested. The non-respondents, on the other hand, might be fully engaged but busy; they might be disengaged; they might not trust us; and so forth. The catch is that they don’t let us know, though we can safely assume there is a variety of reasons why people don’t respond to surveys. And while we can’t survey those who don’t respond to find out why (since that survey would have its own non-respondents, and thus we enter an infinite regress), we can’t just assume that non-response has no meaning, or that non-response is not, at least at times, a form of expression.
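A back-of-the-envelope calculation shows just how little an 80% non-response rate lets us say. The sketch below uses invented numbers and the simple worst-case/best-case bounding move: the silent 80% could, in principle, all be engaged or all be disengaged.

```python
# Hypothetical survey: 10,000 people contacted, only 20% respond.
contacted      = 10_000
respondents    = 2_000
disengaged_obs = 600   # respondents who report being disengaged

# Among those who answered, 30% look disengaged.
rate_among_respondents = disengaged_obs / respondents

# The 8,000 silent people are unobserved, so the population-level rate
# is only bounded, not identified:
lower = disengaged_obs / contacted                              # if every non-respondent is engaged
upper = (disengaged_obs + contacted - respondents) / contacted  # if every non-respondent is disengaged

print(f"Disengaged among respondents: {rate_among_respondents:.0%}")  # 30%
print(f"Possible population range:    {lower:.0%} to {upper:.0%}")    # 6% to 86%
```

The observed 30% tells us almost nothing about where in that 6% to 86% range the population actually sits.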

Non-response – what does it mean?

So how should we read non-response? Let’s learn from political science and use nonvotes as an analogy. Nonvotes (abstentions and spoiled ballots) have been well studied, and like non-response, they stem from a variety of reasons and motivations, ranging from confusion to disinterest to discontent. But thinking about non-response as nonvotes is not only a neat way to make sense of those we can’t see; it also makes us more critical about what we can see. Firstly, it invites us to re-interpret the disengaged and the other constructed categories born of the surveys (those we initially put in the “disengaged” category did, after all, stop to tell us they were not engaged!). Secondly, thinking of non-response in terms of nonvotes helps make the political nature of such large-scale surveys explicit. And they are inherently political: they drive policies; they drive assumptions about the force of science relative to other forms of knowledge; they create more or less politically relevant categories of publics; and so on.

So next time a survey on public attitudes to science comes out, let’s be more critical about the limited information we can actually get from it, and let’s remember those who chose not to respond, the invisible people, and what their silence tells us.

Image: Google images (free for reuse)


Posted in publics, Science