Skeptical science and medical reporting (#Scio13 wrap-up)

Ivan Oransky and I moderated a session last week at ScienceOnline, the yearly conference covering all things at the intersection of science and the internets. We discussed the topic "How to make sure you're being appropriately skeptical when covering scientific and medical studies."

We started out discussing some of the resources we'd put up at the Wiki link. Ivan teaches medical journalism at NYU, and noted that he recommends these criteria when evaluating medical studies. I noted that I use similar guidelines and, as a scientist, think about papers in a journal club format before I cover them on the blog, considering their strengths and weaknesses (especially in study design and analysis). Ivan also mentioned the need, sometimes, to consult a real statistician if you don't understand some of the analyses--suggesting you "keep a biostatistician in your back pocket" or, failing that, reach out to those at your local university, as "they tend to be lonely people anyway." (Just kidding, biostats friends and colleagues!) A number of stats references for journalists were also mentioned (see the Storify for specific links). From there, we handed the discussion over to the audience.

One of the first topics we tackled was just what is meant by being "appropriately skeptical," a theme we kept coming back to throughout the session. How does one do that without being an asshole? The importance of criticizing a study's limitations and weaknesses--without necessarily being a jerk to the authors--was noted. No study is going to be perfect, after all. It was also pointed out that anyone reporting on a study should go beyond the press release, and that failing to do so is in fact "journalistic malpractice." Bora also started an interesting tangent--are medical studies more likely to be fake (or more deserving of skepticism about their results) than basic science reports? And is it worth reporting on bad studies at all? Sometimes doing so can help point out the bad science (like that recent mouse-GMO study, which was reported on--negatively--in many venues). Annalee Newitz also brought up a recent study on "out" versus "closeted" homosexuals in Montreal--a small study that was widely reported, but was it designed and powered correctly to examine the questions it supposedly answered? (I haven't read it, but just looking over the article, the answer looks like "no.")

Audience members also asked how to find sources to comment on studies. Ivan has previously written a post on this, and others in the audience recommended looking at the references cited in the study itself, or at reviews and meta-analyses on the topic, to see who else has expertise in the area. However, SciCurious noted that you need to be somewhat skeptical of those as well, and examine whether the authors of those reviews or analyses have their own biases that may skew the information being presented.

The idea of "Glamour Mags" was also introduced. How should those reporting on a story know whether the results were published in a "good" journal or not? Several people pointed out that just because a study appears in a journal with a lower impact factor doesn't necessarily mean it's not to be trusted. Eli elaborated, noting that fraud is actually more common in the big, fancy journals, and that studies ending up in lower-tier journals have in some cases gone through *more* peer review, having been rejected from higher-impact publications.

Unfortunately, since I was moderating, I wasn't taking notes, and I can't recall exactly what we ended the session on (but it was a great comment, and there was general agreement that it tied things up nicely). I've also tried to Storify the session based on the #medskep hashtag, but I'm new to Storify and it doesn't want to embed for me. If you were there, please feel free to add to the discussion in the comments below.


One dimension that all science journalists have to check is the financial one. How was the research project financed? For instance, do the authors have any financial interest in the product they test or evaluate? Or any financial link with the company that commercializes the product? If that is the case, how is their independence guaranteed? If there is no explanation, then forget the study; it stinks.

By Florence Piron on 04 Feb 2013

Yep, COI (conflict of interest) was discussed as well--there's a bit of that in the Storify. Obviously a thorny issue.