MIT Science Tracker On Coverage of the Pew Science Survey

Over at MIT's Knight Science Journalism Tracker, the wise Charlie Petit has a great round-up of coverage of yesterday's Pew science survey. On what I described earlier today as a troubling "fall from grace" narrative in some reporting and commentary, Petit points to the obvious difficulties science reporters might have in covering an issue they deeply care about:

One notes that bylines [in coverage] tend to belong to science writers. Science writers can hope to cover science itself with a semblance of objective dispassion. But they have an inbuilt conflict of interest when the topic is the standing and penetration of science as a way to reach conclusions. Imagine the difference in coverage if a survey showing that the public thinks Shakespeare plays are outdated, stuffy nonsense were reported by theatre critics, or alternately by hockey writers or stock analysts. One wonders: would the stories on this survey be much different if handed over to the closest science-phobic, ex-English-major political or general assignment reporter?

All of which suggests that a great follow-up and parallel to this latest Pew survey would be a survey of journalists on their own perceptions of science and its relationship to society. You might as well throw in Congressional staffers too. Sounds like a worthy grant proposal. ;-)


To gauge the impressions the public may be getting from science journalists' coverage of the Pew survey (or any other scientific topic), another interesting follow-up would be to compile how various news outlets headline and feature the articles themselves.

People love to slap numbers and statistics on things and call it 'science' worth lavishing attention on. The Pew findings are just survey research -- fun stuff, but very loose, very pliable, and not all that meaningful or scientific. Words like "science," "evolution," "achievement," "general public," and so on are actually highly imprecise terms that carry different meanings for different respondents. Alter the semantics of the questions even slightly and administer them to a new so-called "random" sample, and you'll get different results. Great fun and ink-consuming for the media, but otherwise just a lot of 'sound and fury' over very little.