Wrap Up on Venice: What is "Quality" Science Communication?

[Photo: the Bridge of Sighs, Venice]

Day two of the expert workshop on science communication at the Venice Institute of Science & Arts focused more narrowly on the question of defining and evaluating forms of science communication, including journalism, institutional outreach, advertising and marketing, entertainment programming, digital media, policy campaigns, and public engagement initiatives such as consensus conferences or deliberative forums. (The workshop was organized by Massimiano Bucchi, professor of sociology of science at the University of Trento, Italy, where he chairs the Science & Society Programme.)

Many scientists, organizations, journalists, advocates, and scholars talk about the importance of quality science communication, but the term is used broadly and typically poorly defined. This presents problems for understanding the role and effective use of science communication in policy decision-making, culture, citizen participation, and personal or consumer decisions, to name just a few examples.

First, there is the basic problem of conceptualization, including the nature of the communication process and the attributes that make for quality science communication. As one participant put it: is science communication only about the information produced by scientists, or does it involve other things such as matters of governance and public participation, or even questions of entertainment, culture, and ethics?

Several content-related attributes were mentioned, including significance, relevance, meaningfulness, and accuracy. Of course, there are even more basic criteria that might apply, such as cost-effectiveness or professional production values, depending on the type of communication being evaluated.

In terms of process, it was emphasized that science communication should not be thought of as transmission from elites to the masses, but rather should directly involve lay citizens. Forms of citizen participation in science communication include consensus-conference and deliberative-forum exercises, but also user-centered digital media. (On the shift from a transmission model to a participatory model of communication, see this op-ed I contributed last week to the Italian daily La Stampa.)

The question was even raised of who sets the criteria for "quality" science communication. Normally it is either scientists and their institutions or journalists who are most concerned with evaluating quality in science communication, but as one participant concluded, it is difficult to leave criteria-setting to just these two groups, since self-interests and biases are unavoidable. (Consider, alternatively, what focus groups and polling might reveal about what the public looks for in "quality" science communication.)

Related to the need for a clear conceptual framework, another looming challenge in assessing the quality of science communication in society is the absence of reliable, valid, and comparable indicators that track communication and public engagement activities. Comparing the intensity and nature of science communication activities in the U.S. with those in other countries such as Canada, Italy, Germany, and the UK would be a very valuable exercise, and several researchers thought the idea was promising enough to carry out. (Note: if you are an interested funder, please contact me.)

Finally, there is the problem of defining and measuring impacts and outcomes. These impacts can be assessed at the national or local/community level, or within specific demographics of interest. They could involve individual outcomes such as learning about the technical and political sides of science; attitudes and preferences; or political and personal behavior. They could also track group-level impacts such as coalition building or shifts in the agenda status of science within an institutional or organizational context. They might even involve how science communication efforts affect other scientists or experts, such as this study of the impact of science fiction films on scientific research. Other impacts could track national or cross-national news and entertainment media indicators, as well as policy agendas, framing, and decisions at the national, state, or community level.

These issues and questions are already part of the academic work in the area, come up in many personal conversations I have had with government agencies and organizations here in DC, and are commonly raised at meetings and even on blogs. Expect more detailed thoughts to come out of the Venice conference in the form of a special journal issue or edited book. In the meantime, check out this previous edited volume inspired by the first Venice conference held in 2007.


BioinfoTools:
There are more than two stakeholders, and the "criteria-setting" comment is meant to address the point that the most centrally involved "player" in this equation is fundamentally neither the scientists nor the journalists, but rather the citizens who have to make sense of the results and who, ultimately and politically, will decide whether they want more of such information and research or less.
Given the difficulty of meaningful citizen involvement, using journalists and scientists as competing interest groups to "balance" the criteria could suffice. But this seems like an appropriate checks-and-balances system only as long as the two groups are considered to be in conflict, and I hate to perpetuate the myth that scientists and journalists are naturally in competition.

Two quick points:

1. I realise you are probably quite busy, but my post to your previous thread seems stuck in moderation.

2. re [...] it is difficult to leave criteria-setting to just these two groups since self-interests and biases are unavoidable.

I would agree (up to a point) if you mean either of the two groups alone. But if you mean that both groups together, "balancing" each other's positions, are not sufficient, why not? After all, they are the two players: the latter (journalists and publishers) are representing the former (scientists and their institutions). Why is some other party needed? (Perhaps I'm not understanding what you are trying to say and you need to elaborate on what you mean by "criteria-setting"?)

Scientists and their institutions are, of course, primarily interested in the part of science communication that affects them: that the science (i.e. the "factual material") presented is (reasonably) accurate and that scientists are fairly represented. After all, it is their "industry" (for want of a better term) that is being represented. Do other industries have a third player involved in how they are represented? (Other than the obvious "standards committees".)

By BioinfoTools (not verified) on 21 Jan 2009

Thanks for sharing. I'll definitely re-read this before I go to the AAAS Communicating Science Workshop in Chicago.