[Image: crowd of small symbolic 3D figures linked by lines]

June 21, 2012, by Brigitte Nerlich

Open data, trust and data/visual literacy

Two reports

When I opened my Twitter timeline on 21 June, a stream of tweets announced the publication of two reports relating to open access and open data: The Royal Society’s report on Science as a Public Enterprise (plus an article about it in the THE and a Nature news blog) and the RCUK’s Open Data Dialogue final report. There was talk about open access, accessible information, intelligent openness, data communication and so on.

Data, technology and trust

This calls up images of interested individuals poring intelligently over openly accessible scientific articles, data and data sets in order to make up their own minds about issues that affect their lives, especially health, the health of the planet or whatever issue might be at stake at any one time. That sounds good. But does it actually work?

The Royal Society report is illustrated with one of the increasingly iconic ‘maps’ generated by data mining and data visualisation software; in this case it is a mapping of the Spanish cucumber E. coli outbreak. Such images have a beguiling aesthetic quality, but their emergence and meaning are deeply impenetrable to the non-expert. In addition, the inner workings of the software used to generate the image may even be a black box to the scientists who employ it.
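To make the point concrete, here is a minimal, purely illustrative sketch (in Python, with a toy edge list invented for this example, not the actual outbreak data) of how off-the-shelf tools turn a table of links into one of these network ‘maps’:

```python
# Illustrative only: a toy edge list standing in for data-mined links.
# The node names are invented for this example.
import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("cucumber", "farm_A"), ("farm_A", "distributor"),
    ("distributor", "market_1"), ("distributor", "market_2"),
    ("market_1", "case_1"), ("market_1", "case_2"), ("market_2", "case_3"),
]
G = nx.Graph(edges)

# spring_layout runs an iterative force-directed simulation; its parameters
# and random seed shape the final picture, so the same data can yield
# visually different 'maps'.
pos = nx.spring_layout(G, seed=42)

nx.draw(G, pos, with_labels=True, node_color="lightsteelblue",
        node_size=900, font_size=8)
plt.title("Toy network 'map' (illustrative)")
plt.show()
```

Even this tiny example hides choices (the layout algorithm, the random seed, node sizes and colours) that shape what a viewer takes away from the picture, and those choices are rarely visible in the finished image.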

Social scientists, art historians, science communicators, and sociologists of science are beginning to study the relation between data sets, data visualisation technologies, aesthetics, truth and trust, for example at a forthcoming European Science Foundation conference on Images and Visualisation. The issue of trust in particular is important here. The underlying premise of some of the open access and open data agendas seems to be that freely available data will generate more trust in science. There also seems to be an assumption that people just don’t trust science or scientists to interpret data for them and that they therefore want to double check what’s going on. But is this actually possible in an age of what Geoffrey Boulton calls a ‘data deluge’, where much of the data generation, visualisation and interpretation is left to the ‘machine’ (the data crunching software with its algorithms)? The question then becomes: Can one trust the machine and/or algorithm? And what does this mean for the desired accessibility, intelligibility, assessability and usability of the data by non-specialists, be they individuals or groups of individuals?

The Royal Society’s report says: “Large scale data collection and analysis creates challenges for the traditional autonomy of individual researchers. The internet provides a conduit for networks of professional and amateur scientists to collaborate and communicate in new ways and may pave the way for a second open science revolution, as great as that triggered by the creation of the first scientific journals. At the same time many of us want to satisfy ourselves as to the credibility of scientific conclusions that may affect our lives, often by scrutinising the underlying evidence, and democratic governments are increasingly held to account through the public release of their data.”

There is a tension here, it seems, between, on the one hand, individuals wanting to scrutinise data and data sets (and their ability to do so without, at least initially, having the necessary expertise and tools) and, on the other hand, (special interest) groups or communities scrutinising the data (using novel tools which might not be available to individuals). This tension between individuals, groups, technologies and tools has consequences for the democratic governance and politics of science. Let us explore one aspect of this conundrum in a bit more detail.

The politics of data visualisation

Science, technology, engineering, mathematics and medicine increasingly employ images, imaging technologies and systematic visualisations of data to formulate problems, report on discoveries, and propose new avenues of research and treatment. Advanced imaging and data visualisation technologies allow us to see the unseeable (the incredibly small, the incredibly large, the incredibly far away or the incredibly complex), to integrate and map huge amounts of information, to simulate or model the future and much more. However, the progressive sophistication of such technologies, their proliferation and the increasing ease with which they can be used, pose challenges to science and society. These emergent issues may also challenge received understandings of the relationship between science and politics.

To make political decisions in a modern world, policy makers (and citizens) have to rely on scientific information, which increasingly also includes data and maps of social, political and digital networks. This information is more and more visually designed and delivered through digital and social media. Information researchers have stressed that this visualisation of information “is not the mere decoration of factual information. It is elemental to the construction of meaning and how it is perceived. It’s what Richard Saul Wurman calls ‘the design of understanding’”. As understanding determines what social and political actions we perform or encourage or reward as individuals, as politicians and as communities, we should really have a better understanding of how this design of understanding works. There needs to be more research into the politics and rhetoric of visual persuasion, for example.

As Tony Prichard, an expert on design for visual communication, has pointed out, “successive British governments fail to acknowledge visual methods as intrinsic to solving many of the challenges facing society.” Politicians and policy makers seem to be blind to the fact that they are surrounded by (data) visualisations, and that their understanding of science and society is indeed ‘visually designed’ and mediated. This blindness is perhaps understandable, as the outward face of British government (Parliament, the House of Commons and the House of Lords) is purely verbal, textual and rhetorical, in fact visually sterile, unlike many other modern environments. However, political decision making relies increasingly on expert advice that is extracted from immense data sets, models and simulations and is visually presented. Pie charts and graphs are no longer enough, and a multitude of other types of visual maps and infographics are taking their place (as this periodic table of visualisation methods illustrates).

Recently there have been many calls for politicians to gain a better understanding of science and the scientific method. There should also be a call for politicians to gain a better understanding of the (visual) ‘design of understanding’, that is, the production of visual maps based on very large data sets using data mining and data visualisation tools. A starting point may be reading The Guardian’s data blog.

Calling for open access, accessible information, intelligent openness, data communication and transparency is not enough. We also need something that might be called data literacy or data visualisation literacy (or, dare I say, public understanding of data and data visualisation). Without it, all the open data in the world and all the visually pleasing ‘maps’ out there will remain incomprehensible and meaningless to citizens and politicians alike or, even worse, may be utterly misleading. Without the ability to read the data and make sense of data visualisations, democratic governance of science will remain elusive. However, we also have to take into consideration issues such as time and cost, both financial and information-processing costs, which may mean that some politicians and some citizens may want to leave the data analysis to elected and/or trusted experts.

PS

Edoardo from Blu Frame contacted me today (7 January 2015) to tell me about a review he has done of the 20 best big data visualisation tools – well worth a look!

Posted in Science and Government, Science Communication, Science Policy, Trust