Framing and ethics (part 1).

If it's spring, it must be time for another round of posts trying to get clear on the framing strategies advocated by Matthew C. Nisbet, and on why these strategies seem to be so controversial among scientists and science bloggers.

My past attempts to figure out what's up with framing can be found here:

The present post has been prompted by Matt's recent post on the ethics of framing science.

If you haven't the stomach for another round of the framing wars (or the attempt at analysis from here on the sidelines), come back later for tasty framing-free content: This afternoon I'll be posting an illustration by the elder Free-Ride offspring, and this evening I'll be revealing the identities of the mystery crops in my garden.

For those still reading, here's my plan: First, in this post, I'll consider the four ethical principles Matt says ought to be guiding scientists, journalists, and other communicators who are framing science. In the next post, I'll say something about what seems to be going on when proponents of the framing strategy object that scientists are not applying it correctly. Finally, I'll try to draw some broader lessons about the folks interested in communicating science - and about the strategies that might be useful (or counterproductive) in trying to sell scientists on the utility of the framing strategy.

You'll recall that "framing" is a communication strategy concerned with reaching your intended audience by highlighting those aspects of the issue you're trying to communicate that will resonate with that audience's core values. In short, the aim is to get that audience to understand why they should care about the issue - how that issue connects to other things they already care about. The aim is not to bombard the intended audience with a bunch of technical details that they don't have the background or interest to follow. (How does one figure out which core values will resonate best with a particular audience? Data from focus groups and the like.) Predictably, scientists and science bloggers raised concerns that framing amounted to dumbing down scientific information in order to persuade non-scientists to accept or reject certain policies or points of view. Concerns were also raised about whether gathering focus group data was as feasible or reliable as framing advocates thought, and about whether doing so might somehow shift scientists from their roots as denizens of the reality-based community to "spinners" or public relations hacks.

Matt's post sets out the following as ethical principles that should guide those framing science:

  1. Whenever possible, dialogue should be a focus of science communication efforts, rather than traditional top-down and one-way transmission approaches.
  2. No matter their chosen role, whether as "issue advocates" or "honest brokers," scientists and journalists should always emphasize the values-based reasons for a specific policy action.
  3. Scientists and journalists must be accurate, respecting the uncertainty that is inherent to any technical question and resisting engaging in hyperbole.
  4. Scientists and journalists should avoid using framing to denigrate, stereotype, or attack a particular social group or to use framing in the service of partisan or electoral gains.

The first principle touches on something that hasn't been entirely clear in earlier discussions about framing. I wondered about this very issue last summer:

The sort of data Nisbet identifies as crucial prior to developing your strategy to communicate your message -- from focus groups and polls -- assumes not only that the group of people with whom you're communicating is relatively homogeneous and stable (with regard to the assumptions and core values with which your message will need to resonate), but also that you pretty much have one shot at getting your message across. In other words, framing is a strategy that assumes mass communication (via TV, radio, or print media, for example) where the person trying to communicate the message lays it out and the intended audience takes it or leaves it.

It is not a strategy that assumes a back and forth interaction between communicator and target audience.

I think the bloggers and others who are not sold on the strategic importance of polling and focus groups see themselves engaged in communication that involves a real back and forth. In that exchange with the people with whom they're communicating, they can find out what it is those people take as given and what they value. Indeed, to the extent that their communications are happening at a smaller scale (maybe in online conversations of a hundred people on the high end), they can probably get better information about the people they're talking to than polls or focus groups would yield, since they aren't getting information about people approximately like their target audience -- they're getting information from their actual audience!

If framers feel the pull of an ethics in which dialogue is preferable to unidirectional transmission - of listening to the intended audience before and after making the case for a particular position - it's much more likely that the information reaching that intended audience will at least approximately address the concerns voiced by that audience.

Starting with some reasonable model of what people in this kind of audience value (whether built on polling data or focus group data) obviously helps initiate the dialogue in more productive terrain than opening with no idea at all of what matters to the people you're trying to persuade. However, in an actual dialogue, listening is important - so much so that information gathered from the actual people you're trying to persuade probably ought to trump prior data from samples intended to model people like the ones you're trying to persuade.

The second principle asserts that framers ought to focus on the values the target audience cares about (or at least, on those that can be connected to the values the target audience already recognizes and/or prioritizes). I think the idea here is to be clear that the facts themselves do not argue for any particular response to them. Rather, policies ought to be chosen on the basis of what we value, and how we can secure what we value in the light of the facts.

Being clear about the difference between how things are and how we want to respond is an excellent goal. Of course, framers need to keep in mind here that, even when people agree about a given set of facts, they can have legitimate disagreements about how to respond to those facts. The values of your target audience may lead them to different conclusions than the ones you are advocating - and knowing what they value in no way guarantees that you can make your case successfully.

The third ethical principle emphasizes making accurate claims and coming clean about the uncertainties in the scientific knowledge at hand. No surprise that I think this is a good idea. Given that scientists have specialized expertise and thus can build reliable knowledge that non-scientists cannot, they also have a duty not to abuse these powers to take advantage of non-scientists. Pumping up the apparent strength of the evidence that makes one policy option look more likely to succeed or more pressing - and doing this to persuade people without the expertise to evaluate the evidence themselves - is an abuse of power.

The fourth ethical principle here reads like another exhortation for scientists and journalists not to abuse their powers. Surely, just on practical grounds, denigrating, stereotyping, or attacking are not things you want to do to a social group with whom you are trying to communicate - since prefacing your pitch for a particular policy with "You suck!" is not likely to persuade your audience that you are arguing from shared (or even compatible) values. If the idea is to make your case for X by showing that X is a good fit with your audience's values, implicitly your case relies on those values being legitimate ones to hold.

It seems the strategy the fourth ethical principle is supposed to rule out is something like this:

"People who are against X display absolutely crazy values, which you can see in play in these examples ... Assuming you're not crazy like those people, you'll be for X."

I agree that this is not a great strategy for selling X, as there may be a number of legitimate values one could hold while still being against X. Moreover, the "crazy" behavior pointed to as a reason to reject a particular set of (X-rejecting) values might well have some other source than those values. Finally, it's much harder to present an objective argument for or against a particular set of values than it is to present one for or against a particular scientific conclusion.

The last part of the fourth principle, I imagine, will be a little more contentious. What exactly is covered under using framing in the service of partisan or electoral gains? Since the strategy of framing has been commended to scientists as a way to make the case for specific policy actions, implementing your desired policy presumably doesn't count as a partisan gain. But given the tight connection between policies and the parties and elected officials that institute them, it's hard to know just where the line is supposed to be. My best guess is that scientists (and journalists) are being urged to use their powers as scientists (and as journalists) to present honest arguments engaging the values of their target audiences, rather than arguments that turn on ulterior motives.

Taken as a group, what do these four guiding ethical principles tell us about the type of activity framing is supposed to be? The idea seems to be that effective communication about science should focus less on what scientists know and how they know it than on the ways that an audience's pre-existing values and concerns are (or can be) connected to scientists' values and concerns; that effective communication about issues that matter to scientists will involve scientists actually listening to the people with whom they're communicating and taking their concerns seriously; that scientists will be honest and accurate in their representations of what is known and what is not, and will draw clear distinctions between knowledge and values; and that scientists won't use their "expert" status to unfairly rig the dialogue or to unfairly marginalize the voices of others in the dialogue.

On its face, this picture of communication between scientists and non-scientists doesn't strike me as terribly objectionable. But perhaps we need to consider specific examples in which scientists have been taken to task for not using framing well - or for using the framing strategy less than ethically -- to understand the rift between the framers and the framing skeptics. Since Matt's post suggests that Richard Dawkins (and the crowd he runs with) runs afoul of these four ethical principles, we'll dig into the particulars in the next post.


There seem to be two issues here which you aren't quite addressing. First, in regard to Nisbet's point 2: As written and as apparently practiced, it seems that Nisbet wants to emphasize values instead of facts. This is not good. Indeed, most policy issues don't actually require much discussion of the relevant values involved.

I disagree with your interpretation of his fourth point. Even without the Dawkins material, the expulsion of PZ from Expelled (which you analyzed in an earlier post) seems to be more what Nisbet is thinking of here. If that's the case, then calling Ben Stein and his friends a bunch of lousy hypocrites isn't just ethical; it is the most ethical thing to do in the situation. Being ethical is not the same as being overly diplomatic or pulling punches.

There's also the amazing coincidence that Nisbet's claims allow him to conveniently label as unethical exactly the behavior that he personally doesn't like.

Framing, my favorite subject. Ok, maybe somewhere in the top 10. As a former academic researcher and biology professor now working on conservation policy, I have a bit of experience with framing science. I think framing has gotten tarred and feathered by many science bloggers; thanks for being more open-minded.

My experience learning to achieve policy gains was a much harder apprenticeship than my Ph.D. in biology. But I've been fascinated to watch many academic biologists assume that they know how to do my current job better than I do without having any real experience.

So I suggest another ethical principle: scientists should respect the expertise of others, including respecting the expertise of communicators regarding communicating.

Perhaps in another post, you could address the observation that, by positioning his particular set of strategies as an ethical issue, Matt seems to depict those who disagree with him as not merely wrong, but bad.

I strongly object to this one:

No matter their chosen role, whether as "issue advocates" or "honest brokers," scientists and journalists should always emphasize the values-based reasons for a specific policy action.

Poppycock! In no way is that a proper ethical principle that scientists should always follow.

A scientist's #1 ethical obligation in this context is to properly communicate the science itself. A scientist may choose to address values and policies, but in doing so s/he is stepping beyond the role of scientist.

Suggesting that scientists have an obligation to address values and policies implies that values and policies are part of the scientific purview. Of course they're not.

Mark Powell:

My experience learning to achieve policy gains was a much harder apprenticeship than my Ph.D. in biology.

You would have been well prepared for the biology research by your previous academic work, but you probably encountered the policy role with the disadvantage of an older mind already cluttered with complex ideas. Some aspects of the scientific research life (focusing on a single issue, discussing it exclusively with domain experts) probably work against the more fluid and social role of the policy operator.

But I've been fascinated to watch many academic biologists assume that they know how to do my current job better than I do without having any real experience.

It is a shame to see how often people can fail to appreciate the expertise that goes into other jobs. In some ways it's a duck-swimming issue - you don't see the hard work underneath the water. When you add water, cream and sugar to coffee, it seems so trivial, yet there are huge infrastructures providing the coffee from the beans on the tree, the cup of fired clay, the milk from cows, the sugar from the cane, the water from the reservoir and the power to heat the water.

Also, the better a person does a job, the easier it looks to an outsider (sometimes it helps to big oneself up in the consulting world - try to find a polite way of saying "yes, it looks really easy when I do it, but that's only because I'm soooo good!").

Another issue is that a good decision or action is partly defined by all the bad decisions or actions that were avoided and unseen - so the observer might not appreciate that you put Flange A onto Sprocket B, twisted Knob C clockwise by two notches, checked indicator D and gauge E before flipping Switch F in that precise order because any other sequence would cause gas to leak, the power supply to fail, the wings to drop off, the sensor to overload, the mouse to burst into flames, whatever.

So people know how difficult their own job is because they know why they do it so carefully and how many details and variables they are controlling, but they don't make the mental jump to work out that other people's work has those hidden dimensions too.

So I suggest another ethical principle: scientists should respect the expertise of others, including respecting the expertise of communicators regarding communicating.

Many do, I think. But there do seem to be a lot of commenters on ScienceBlogs who think the world outside their field is a lot simpler than it really is!

But on the Nisbet thing, there are other issues. Matt Nisbet has annoyed the "community" and I think there is a circling of wagons against the enemy. PZ Myers has a very loyal flock of acolytes who react as an angry anthill when their main man or his homies are dissed. It is a very potent mutual admiration society.

Just as with Wikipedia, what you are seeing here is an enforcement of cultural norms of behavior. It is not always - or even often - about right or wrong conclusions, so much as acceptable or unacceptable behavior.

And, I think, many are unconvinced that Nisbet has the claimed expertise as a communicator, myself included. He labels himself as such, but someone who annoys a large sector (who would be naturally sympathetic to the idea of getting more science Out There) so persistently must have his skills called into question.

If the science side has its faults, the Nisbet side is mirroring those.

1. Whenever possible, dialogue should be a focus of science communication efforts, rather than traditional top-down and one-way transmission approaches.

Hmm, depends really. I can see the advantages in some situations but in some cases a dialogue just isn't appropriate. For example in dealing with young-earth creationists there is no room for dialogue given their disregard for evidence (or reality). In such cases science just has to stand up and say "sorry, but that's way beyond wrong".

2. No matter their chosen role, whether as "issue advocates" or "honest brokers," scientists and journalists should always emphasize the values-based reasons for a specific policy action.

No, no, no. Matt is way off base with this one; there are no values in science.

3. Scientists and journalists must be accurate, respecting the uncertainty that is inherent to any technical question and resisting engaging in hyperbole.

Fair enough on respecting uncertainty, I think that's more or less universally acknowledged. Hyperbole is not an ideal debating tool but it does serve a purpose in certain cases, for example in highlighting certain extreme positions or the ludicrous consequences of a particular course of action.

4. Scientists and journalists should avoid using framing to denigrate, stereotype, or attack a particular social group or to use framing in the service of partisan or electoral gains.

Here Matt seems to be assuming that we're dealing with entirely innocent parties all the time. The fact is, though, that some social groups need to be attacked; they need to be denigrated.

I think he's pretty roundly failed in his task. As mentioned in the comments his little list just amounts to "if you disagree with me, you're unethical".

1) If you're going to tell people not only that you know how to do something well but that they should cease doing it in their way, you ought to have (a) significant general achievements in communicating and (b) a pretty good rationale for not only why your method should work, but why the current method is counterproductive (if it's merely nonproductive it doesn't hurt, and if it helps, even slightly, there isn't a reason to stop unless the new method is substantially better than the old one). I don't know whether Dr. Nisbet has either of those - I know he's published in Science, but I don't know whether he has actually successfully used his techniques to communicate messages that were not effectively communicated previously.

2) Telling a set of uppity atheists to shut up isn't going to go well, no matter how rabid or calm the fan base is. (An elementary reading of Pharyngula should have told him that.) Besides the fact that no one likes being told to shut up (even if it's by someone eminently qualified to do so, which Dr. Nisbet does not appear to be), that manner of speaking is likely to invoke previous historical modes of belittling and minimizing atheists' ability to believe (so to speak) what they do and to propagate said beliefs. It ought to be obvious to an observer that this is a bad framing strategy, and that Dr. Nisbet continues to insist on it seems to indicate either 1) he doesn't have a clue how to communicate (at least to this set of people, which in this case would seem to be a large problem) or 2) he dislikes this form of the atheist message and would prefer it to go away (which is neither his right nor a conclusion that seems supportable with his evidence). Neither option is beneficial.

3) However you slice it, treating people with dignity and respect requires one to tell the truth. Often the truth is not going to be popular with sets of people, and while there may be good ways to impart it and bad ways to impart it, going on popularity to decide how to impart a message can easily slide into eliding messages people don't want to hear. The truth isn't sufficient (I have to care about the people I am talking to as well), but it is necessary. Telling a portion of your audience to shut up doesn't seem to give hope that people in the other intended audiences will be exposed to evidence and truths they don't like, because the pretense that the less popular parts of your message shouldn't be said sounds a lot like the problematic implementation of framing discussed above.

Soft-pedaling your message to different targets is a strategy that can easily be used against you by those with dishonest motives - while you try to sell a weaker form of your message, those willing to be dishonest can either claim that you are arguing for a weaker version of your actual claims, or pull the discourse closer to them by simply not moving. Framing may be more useful in discussions in which all the participants value honesty, but that does not appear to be the case in those arguments for which framing is claimed to be useful (AGW, evolution).

from HP: "... by positioning his particular set of strategies as an ethical issue, Matt seems to depict those who disagree with him as not merely wrong, but bad."

This is critical. As a philosopher of ethics, I cannot help but notice that what Nisbet calls "ethical" codes are little more than proposed Codes of Conduct, with very little in the way of ethical content other than, perhaps, "be nice" and "be accurate." These seem far-fetched as ethical principles. But by using the word "ethics" Matt gets to suggest that those who don't abide by his proposed codes of conduct are bad people, immoral, or evil. What a frame-job. He's taken his lessons from the worst on the far-right, I see.