Science journalism: don't forget the editors

Continuing the ongoing discussion of the questionable quality of popular science journalism, British researcher Simon Baron-Cohen weighs in at the New Scientist with his personal experiences of misrepresented research. Baron-Cohen complains that earlier this year, several articles on his work linking prenatal testosterone levels to autistic traits, including coverage in the Guardian, were titled and subtitled misleadingly:

It has left me wondering: who are the headline writers? Articles and columns in newspapers are bylined so there is some accountability when they get things wrong. In this case, it was a nameless headline writer who seems to be to blame. Did he or she actually read the journalist's article?

I've experienced this problem myself: I once wrote and fact-checked a press release, only to find that the title had been altered to something misleading after it left my hands. The change happened without my input, and I was unaware of it until the researcher I had interviewed contacted me (assuming, no doubt, that it was my doing). The problem was eventually fixed, but I felt that my credibility as a writer and scientist had been damaged, and the researcher was understandably unhappy about the process. It was an unfortunate situation all around.

In journalism, titling is often divorced from the writing process - as a writer, you can suggest titles, but they will likely be changed by editors or other staff. This is not unique to science journalism, but it may disproportionately affect science pieces, simply because it is so challenging to briefly and accurately summarize a study's outcomes and relevance without resorting to jargon. Titles inevitably oversimplify the science. That's not a problem as long as the article builds on the title to clarify and explain further. But when the editors responsible for titling pieces don't understand the science or its context, the title can end up not merely simplified, but misleading, inaccurate, or just plain creepy. Bad titles are particularly problematic because the title frames the entire story for readers, and can predispose them to read everything in the article in a completely inappropriate light. Misleading photos, illustrations, or captions can have a similar effect. Yet these elements are not necessarily vetted by a qualified individual who understands the research.

Baron-Cohen works on autism. His work has highly emotional implications for parents of autistic children, and is thus particularly vulnerable to sensationalization by unscrupulous or incautious editors. He suggests that such misrepresentations can cause serious harm to the public:

Scientists are rightly regulated by ethics committees because they can do harm to the public. The media too has the potential to do harm. Should there be some similar before-the-event regulation here too?

This is an intriguing question. Who, in fact, holds the media accountable for accuracy in science journalism? Often, criticism of bloggers centers on their lack of accountability to editors or formal media outlets. Traditional journalists are seen as more accountable, both because they have others checking their work, and because they have an incentive to maintain their own professional credibility and that of their organization. Yet even within the framework of traditional media, mistakes are clearly made. And the recent debate about scientists vs. science journalists as communicators obscures the fact that many mistakes originate not with the scientist or the journalist, but with editors and others involved in the publication process. Unfortunately, when such mistakes or misrepresentations are eventually identified and corrected, the correction may not strike the public as substantially different from the original, and it rarely gets as much attention as the original error. (You never get a second chance to make a first impression, right?)

So can linking scientists directly with the public help? One of the benefits attributed to science blogs is that researchers may be able to both explain and frame their work more accurately than a media outlet peopled by generalists and editors looking for catchy ledes. I think that by providing direct access to scientists, science blogs can effectively complement, though perhaps not replace, science journalism. Several of our colleagues are currently working to get scientists more involved in communication, both in the blogosphere and in graduate training.

In Baron-Cohen's case, the Guardian stepped up and let him publish a clarification of his research. His key point - that hormonal levels predict autistic traits, not autism per se - was a subtle one, and not surprisingly, it didn't seem to get as much attention as the original piece. The comments on his clarification represent varying degrees of disbelief and ire, including accusations that autism scientists are deliberately concealing and misrepresenting their findings. Ugh. Reaching out directly to the public might have worked better if it had happened before the original article framed the discussion in an emotionally charged manner.

Just last week, Ben Goldacre took the British press to task for misrepresenting new prostate cancer screening research - for, in his words, choosing to "ignore one half of the evidence and fail to explain the other half properly." The Guardian's own article on the topic was titled "Prostate cancer screening could cut deaths by 20%." That's technically correct (for men between 55 and 69). But as Goldacre points out, the original NEJM research article is equivocal about the risks and benefits of prostate cancer screening with this particular technique. To prevent a single prostate cancer death, 1410 men would have to be screened and 48 men would have to be treated. Treatment carries the risk of serious complications like impotence, and the test has a high rate of false positives (as much as 50%). The Guardian article (which has the same byline as the piece on Baron-Cohen's autism research) mentions this risk - but only in the very last two paragraphs, where it does very little to counterbalance that Pollyanna-ish title. Headlines from the WaPo ("Prostate cancer screening may not reduce deaths") and Sydney Morning Herald ("Prostate cancer blood test does little to decrease death rate") are more consistent with the authors' conclusions (and with those of a second new prostate cancer study, which the British press inexplicably neglected to mention).
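To see what those figures mean in absolute terms, here's a minimal sketch. The only inputs are the numbers quoted above from the coverage; everything else follows by simple arithmetic, so treat it as an illustration rather than a reanalysis of the trial:

```python
# A minimal sketch converting the headline's relative figure into absolute
# terms. The three inputs are the numbers quoted in the coverage above;
# everything else is derived arithmetic.

number_needed_to_screen = 1410    # men screened to prevent one death
number_needed_to_treat = 48       # men treated to prevent one death
relative_risk_reduction = 0.20    # the headline's "20%"

# Screening 1,410 men prevents one death, so the absolute risk reduction is:
absolute_risk_reduction = 1 / number_needed_to_screen              # ~0.07%

# A 20% relative cut implies this baseline risk of dying of prostate cancer:
baseline_risk = absolute_risk_reduction / relative_risk_reduction  # ~0.35%

print(f"Absolute risk reduction:     {absolute_risk_reduction:.2%}")
print(f"Implied baseline risk:       {baseline_risk:.2%}")
print(f"Treated per death prevented: {number_needed_to_treat}")
```

Framed that way, the trial says screening lowers a man's risk of dying of prostate cancer from roughly 0.35% to 0.28% - accurate, but a much less arresting headline than "could cut deaths by 20%."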

As Goldacre explains, oversimplified and misleading headlines - like "Prostate cancer screening could cut deaths by 20%" - do the public a disservice:

For complex risk decisions such as screening, it has been shown in three separate studies that patients, doctors, and NHS purchasing panels make more rational decisions about treatments and screening programmes when they are given the figures as real numbers, as I did above, instead of percentages. I'm not saying that PSA screening is either good or bad: I am saying that people deserve the figures in the clearest form possible so they can make their own mind up.

But in order to find the "real numbers" in this study, along with the authors' own interpretation of their work, you'd have to go to the NEJM article itself - and the Guardian article doesn't link to it. It's surprisingly common practice for press releases and articles to fail to cite the source paper, or even name a lead author - a reader often has to go to the journal's website and search for the relevant article using keywords. In the intertube age, it's downright ridiculous that media outlets can't just paste in a URL to the original article. Both Goldacre and Mark Liberman at Language Log have called for explicit links from media coverage to the peer-reviewed articles being discussed. This might help short-circuit the propagation of error that happens when a mediocre press release is processed into a worse brief article and a hopelessly inaccurate sound bite, without anyone in the chain apparently understanding the original research. At the very least, it would give a concerned reader somewhere to look for more information.

Liberman says,

What Ben is suggesting, I guess, would be something like Google News for science, with the addition of links to the underlying scientific publications (if any), and to coverage in blogs and web forums. Also, it would be useful to have the clusters of links be durable -- i.e. available across time, unlike Google News -- and perhaps linked into a loose network of higher-order relationships.

Coverage by bloggers is not necessarily any better -- it may often be worse -- but at least you have a shot at finding someone who has read the paper, not just the press release, and who knows the field well enough to understand the paper and to offer an independent interpretation.

To this end, Adam Bernard has created The Science Behind It: "The site was built to deal with my frustration at journalists summarizing scientific papers without citing their sources. It tries to infer from the details that they do include the probable candidate articles." In a similar vein, Dougal Stanton contributed "Just Fucking Cite It", an all-purpose link to use when calling out articles that totally fail to represent the science accurately.
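The inference trick doesn't need to be sophisticated. Here's a toy sketch - my own illustration, not Adam Bernard's actual code - of turning the few concrete details a story does include into a literature-search query:

```python
# A toy sketch of the inference idea (my own illustration, not The Science
# Behind It's actual implementation): collect the details a news story
# usually does include - researcher names, a journal title, topic keywords -
# and turn them into a literature-search URL.

from urllib.parse import urlencode

def scholar_search_url(researchers, journal, keywords):
    """Build a Google Scholar query from the details a story mentions."""
    terms = list(researchers) + [journal] + list(keywords)
    return "https://scholar.google.com/scholar?" + urlencode({"q": " ".join(terms)})

# Details gleaned from the Guardian's prostate cancer story, which names
# the journal but no lead author:
print(scholar_search_url(
    researchers=[],
    journal="New England Journal of Medicine",
    keywords=["prostate cancer", "screening", "mortality"],
))
```

Even something this crude will often surface the right paper near the top of the results, because stories almost always name at least the journal and the topic.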

So will pushing science journalists to include proper citations in mainstream articles help? Well, the PLoS ONE community blog discussed their take on the problem Thursday:

PLoS ONE articles often receive a lot of coverage in the media and because our open-access articles are freely available to read online as soon as they are published, we always encourage journalists to link to the original study in online versions of their reports. However, with some notable exceptions, this doesn't seem to happen very often (although increasingly, links are being added to the PLoS ONE homepage where we often highlight newsworthy articles for interested readers). Of course, the reporters to whom we send our press releases (which include the URL for the articles) aren't always the people who are able to add the URLs to the stories with various web teams and editors often involved in the process of getting a news story online.

It all comes back to the editors, doesn't it? Regrettably, in discussions of the mainstream media process and how to improve science journalism, editors and other staff are often overlooked. But although it's necessary to have both scientists who can communicate effectively and journalists who can understand and explain the science, it's not sufficient. You also need an editorial team who won't screw the whole thing up!

A media outlet needs the right people in place, or the right process in place, to check the accuracy and objectivity of details like titles, layout, pictures, illustrations, captions, and citations. A media outlet under acute financial pressure to downsize, or one that prioritizes sensational headlines over accuracy in science reporting, may not have those safeguards.

While blogs have their downsides, at least there is usually just one person responsible for what you're reading. In some ways, that gives bloggers an advantage over mainstream journalists, who may have very little control over the way their piece is edited and presented. First impressions count, and in mainstream science journalism, editors have a lot of control over the way the science appears to the public - sometimes more than the scientists or journalists themselves.

Comments

Mainstream press editors are a *cause* of shitty journalism, because their jobs are directly and explicitly tied to the financial success of the media outlet. This leads to an inexorable pressure to sensationalize and oversimplify everything.

I'm thinking it is time the gloves come off and the naming and shaming begins in earnest. It's not like editors spring fully formed from editor school. They don't get awarded a degree in editing after successfully completing their education. They are given this power. Hold them accountable for their decisions and actions just as you would any other body. We're not going to change these policies any other way.

To be fair, CPP, unless they're freelancers, the journalists' jobs are also tied to the financial success of the outlet. But we still expect the journalists to do an ethical, credible job of reporting. I think editors can resist the pressure to sensationalize and oversimplify, and we should expect them to do so. :)

This only shows how much scientists need to become science bloggers and properly title (and subtitle) their own writings. A gifted writer can surely conjure up a soundbite that accurately encompasses the subject matter and catches people's attention too.

I'm not sure if you know this, but it doesn't matter in the slightest what the article is about on the Guardian's site. The comments are always dreadful.

By Marc Abian (not verified) on 30 Mar 2009

It's a good point, Marc. Comments at most mainstream outlets are pretty dreadful, aren't they? I find the comments at places like Sb a lot better... and of course I'd argue MY commenters at BioE are the best of all.

I see misleading sensationalist headlines all the time, in a variety of publications and above articles on a variety of topics (not just science). The question is, what is the purpose of a headline or title? When the purpose is to accurately reflect the conclusion of the article, we get titles of the form found in peer-reviewed research -- unintelligible to the layperson and about a newspaper paragraph long! When they are designed to grab attention so that the audience will read the article, they inevitably wander from the facts of the story. Part of being media literate these days is being able to 'get over' the headline and read the article for what it actually says.

However, I think editors re-titling articles without even *informing* the author is terrible. At least provide a chance for compromise!

Well, there are headline mistakes of every description, some as basic as this: "Third of EMS Stethoscopes Carry MRSA Virus" (http://bit.ly/OVib) - MRSA is a bacterium, not a virus. What bothers me isn't an oversimplification or two - that's inevitable, and a well-written article will flesh out the details. What bothers me is a misdirection of the reader to believe the science says something it doesn't, or has implications it doesn't. The headline on Baron-Cohen's work suggested he was trying to create a prenatal test to screen for and terminate autistic pregnancies, when he was instead doing basic research on the development of autistic traits. That's not an oversimplification, it's misleading and inflammatory. I agree that a savvy reader could "get over" an inaccurate headline, but I don't think the majority of the audience is able to evaluate and dismiss it - I think it powerfully frames the way they read and interpret the piece.

I realise now that this article is no longer linked from the top page of your website, so probably no-one is reading (!), but I just wanted to quickly say that I appreciate you writing this article. I have been thinking over this issue for some time, as science communication is a line of work I am considering adding to my consultancy. I came to similar conclusions, and it is nice to see that I am not alone in thinking along these lines.

By BioinfoTools (not verified) on 06 Apr 2009

Thanks! It's always great to hear from people with similar interests. :) Best of luck with getting into the communications realm!

Writing headlines is harder than it looks. As a copy editor, you are forced to use a certain number of letters, and what you really want to say may be too long or too short to fit the space. You have to make the headline say something new, or else it will sound like an old story people have seen before and they won't read it. Reporters don't write headlines, so don't blame them. Newspapers are cutting back drastically on the number of copy editors, so there is more work for fewer editors under deadline pressure, and the editors may have to work on articles for the web site in addition to the printed paper.

It's usually easy to find the citations missing from news stories. I use Google Scholar. A lot of people don't even know it exists. You can type in the important words from the news story, with the names of the scientists and the name of the journal, and chances are you will find the article plus a lot of related articles.

There are only a few science reporters I trust. I'm very skeptical and go to the source if I really want to check the facts. I know, most people don't do that, and that's the problem.