To correct or to retract? The ethics of setting the record straight.

An important part of the practice of science is not just the creation of knowledge but also the transmission of that knowledge. Knowledge that's stuck in your head or your lab notebooks doesn't do anyone else any good. So, scientists are supposed to spread the knowledge through means such as peer-reviewed scientific journals. And, scientists are supposed to do their level best to make sure the findings they report are honest and accurate.

Sometimes, scientists screw up. When a screw-up makes its way into a scientific communication, the scientists responsible need to set the record straight. Just what is involved in setting the record straight, however, sometimes presents interesting problems, as the following case illustrates nicely.

The case, involving a team of physicists at Oak Ridge National Laboratory led by Stephen Pennycook, is described in a Boston Globe story by Eugenie Samuel Reich that was published on November 27, 2006:

Thirteen years after the fact, scientists at a top US Department of Energy laboratory have admitted misrepresenting key data in a landmark paper on the use of electron microscopes to analyze materials at the atomic scale.

Publication of the correction earlier this month in the science journal Nature was more than a historical footnote. It followed an allegation that scientists in the same lab had manipulated data in a new paper submitted to a sister journal. The allegation, made by a reviewer for Nature Physics, was confidential. ...

Pennycook said the data problems were mistakes that didn't undermine the papers' conclusions.

There seem to be at least two important issues to track here. One is the question of the source of the misrepresentation: was it an honest mistake or an intentional effort to mislead? The second is the question of the effect of the misrepresentation: would the correct data support the same conclusions as the incorrectly presented data?

It will surprise no one that I view intentional efforts to mislead as very bad, given that they undermine the trust that is necessary for any sort of useful communication. My inclination is that, where a misrepresentation has been established to be intentional, the original paper requires a retraction that clearly identifies the intent to deceive. After all, part of the point of sharing scientific findings in journals is so that scientists don't have to collect every bit of relevant data for every question that interests them on their own. If findings are reported in a peer-reviewed journal, the hope is that we can trust those findings. If we discover that you misled readers in your paper, flagging that paper -- and you, as the author who set out to deceive us with it -- as unreliable seems like a sensible corrective.

Honest mistakes are sometimes made. When discovered, they should be identified clearly and corrected swiftly. (As well, extra efforts should be directed at avoiding such mistakes in the future.) That's what honest scientists do. If you're directed by the goal of building a reliable body of knowledge, you're going to want to clear out unreliable bits as soon as you notice them.

Undoubtedly, there is a gray area between honest mistake and intentional deception. There are mistakes that stem from sloppy measurements, sloppy data analysis, and sloppy thinking. There are mistakes that flow from bias toward seeing the result you hope to see. There are presentations of data that leave out bits that don't quite fit your hunch (and that, as a result, you can't quite explain to your own satisfaction or the anticipated satisfaction of the imagined referee for your paper). Each of these sorts of mistakes might be avoided, whether by taking more care with the research itself or by being more skeptical of one's own results. Making these mistakes doesn't make you a liar, but it might make you a less reliable source on the piece of science you're trying to work out. Again, to the degree that you're committed to the project of building reliable scientific knowledge, you should also be committed to rooting out such mistakes and publicly correcting them if you find them.

If the correct data do not support the same conclusions as those drawn from the incorrectly presented data, it seems obvious that the correct data should be reported and that the original conclusions should be identified as not supported by the research. Basically, this requires retraction of the original paper. Even if the originally presented data were incorrect as the result of honest mistakes, it is important to publicly identify the original conclusion as suspect, at least in light of the correct data.

One might wonder whether it is so important to publish corrections if the correct data support the same conclusions that were drawn from the incorrectly presented data. Maybe the fact that they lead you to the same conclusions means that the originally reported data cannot have been too incorrect. However, if you know you've presented the data incorrectly, you should still correct them. Some of the scientists reading your paper may be more interested in the data you report than in the conclusions you draw -- the data may be of interest because they connect to some other scientific question, so getting them right is important. Moreover, your argument in favor of your conclusion is weakened if it rests on false premises -- even if there exist true premises (i.e., the data correctly reported) that could support the same conclusion. Finally, your reliability as a source of good scientific knowledge is undermined if you develop too casual a relationship with the truth.

In a perfect world, every scientist would be committed to reporting findings honestly and completely, and the main challenge would be wresting good data out of one's experimental systems. In the real world, scientists grapple not only with the reliability of their own measurements, but also with the reliability of their scientific peers. Here, the system of peer review complicates matters. From the Globe article:

[T]he Oak Ridge case highlights the fact that even when peer reviewers suspect that authors are manipulating or fabricating data, there is no certainty other scientists will be alerted to their concerns.

In some cases, reviewers may be hesitant to get involved in a messy situation and simply recommend rejection of the paper. If they do inform editors, the editors are barred from speaking publicly, though they can notify the authors' institution.

This can leave authors free to revise the paper and shop it around to other journals with a less rigorous review process.

As I've discussed before, peer reviewing of scientific papers does not generally include full-scale attempts to replicate the research reported in the papers:

The reviewer, a scientist with at least moderate expertise in the area of science with which the manuscript engages, is evaluating the strength of the scientific argument. Assuming you used the methods described to collect the data you present, how well-supported are your conclusions? How well do these conclusions mesh with what we know from other studies in this area? (If they don't mesh well with these other studies, do you address that and explain why?) Are the methods you describe reasonable ways to collect data relevant to the question you're trying to answer? Are there other sorts of measurements you ought to make to ensure that the data are reliable? Is your analysis of the data reasonable, or potentially misleading? What are the best possible objections to your reasoning here, and do you anticipate and address them?

While aspects of this process may include "technical editing" (and while more technical scrutiny, especially of statistical analyses, may be a very good idea), good peer reviewers are bringing more to the table. They are really evaluating the quality of the scientific arguments presented in the manuscript, and how well they fit with the existing knowledge or arguments in the relevant scientific field. They are asking the skeptical questions that good scientists try to ask of their own research before they write it up and send it out. They are trying to distinguish well-supported claims from wishful thinking. ...

[I]n the process of this evaluation, peer reviewers are taking the data presented in the manuscript as given.

To the extent that peer reviewers are expressing concerns about how data are presented in the papers they are reviewing, this is probably because they happen to be working on a closely related experimental system and seeing very different results, or because a reasonable knowledge of the field helps them identify data that could not possibly be correct. In the latter case, the reviewers may well infer that the authors of the paper have no idea what they are doing, or that they are making stuff up.

You can be honest and be out of your depth. A referee's comments might help you recognize that and withdraw a manuscript that ought not to go out until you figure out what you're doing.

If you're making stuff up, honesty would seem not to be on your agenda. Trying to fool a different set of referees at a different journal might advance your agenda, but it won't necessarily be a contribution to a body of reliable scientific knowledge.

The peer reviewers cannot get inside your head to determine whether you are mistaken or dishonest. If you are mistaken, their comments can help you correct your mistakes. If you are dishonest, they might help you adjust your misrepresentation the better to avoid detection. Back to the Globe article:

Karl Ziemelis, Nature's physical sciences editor, said the journal is unable to publicly discuss submitted manuscripts, even ones with serious problems, because they are confidential. "In general, there is nothing sinister about this -- one of the key purposes of peer review is to identify honest mistakes, which may subsequently be corrected," he said. "But, of course, peer review confidentiality could be exploited."

This possibility was envisioned by the reviewer for Nature Physics in April. "I find that there is direct, incontrovertible evidence for systemic data manipulation and scientific misconduct in this manuscript," the reviewer wrote about a paper whose lead author was Maria Varela, a staff scientist in Pennycook's group. ...

The reviewer went on to raise the concern that the comments might "effectively 'aid and abet' improper behavior," enabling the authors to quietly correct or remove the evidence of misconduct from their manuscript and resubmit it.

I'm hesitant to suggest that authors of scientific papers should be presumed to be deceptive by peer reviewers. On the other hand, the confidential nature of peer review means that good evidence of deception may go unreported. Peer reviewers are working scientists, and they are well aware of how harmful it can be when bad information makes its way into the scientific literature and masquerades as good. Time and resources can be lost when people try to build new knowledge on published papers that turn out to be wrong.

It's bad enough when this results from honest mistakes. If you have reason to believe that there are scientists who are less than committed to honesty in their scientific reports, you want to bench them, or at least make sure that their unreliability as sources is well known.

A system that does more to protect bad actors than to protect truthful reporting is not a good way for the community of science to advance the goal of building good knowledge.

During a political discussion the other day, I made the observation that the strength of an institution or practice could be measured by how resilient it is to the machinations of someone like Karl Rove (i.e., a smart but utterly unprincipled liar and manipulator). The discussion concerned how much lasting damage (if any) Rove et al. have done to the institution of the US government -- how resilient is what Berube calls our "civic nationalism"?

However, during the discussion I used "Science" as an example of an area where a Rove would not be able to do any significant (and certainly not lasting) damage from within. (I don't regard someone like Lysenko as working from within -- he was an "agent" of outside forces, which can of course be extremely damaging to science.) I am not a working scientist, but I do hope that someone like him would wash out early -- though I can see where maybe not. Still, I believe the true strength of science is its capacity for self-correction -- the best humans have devised yet, but still vulnerable, especially to corrupting external forces.

My work experience is in the corporate world rather than the world of science. But although the stakes may not be quite as high in "knowledge" terms, the types of "misrepresentations" which I've seen map exactly to those you identify. (My favourite ... and it's a corporate classic ... is basing a recommendation on a "difference" which is well within the margin of error.)

In any event ...

* Mistakes - I add 8+7 and get 16.
* Gray areas - I use a statistical technique in a way that is probably not advisable.
* Flogging offenses - I'm missing terms 23-26, so I interpolate between 22 and 27.
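To make that last item concrete, here is a minimal sketch (in Python, with invented numbers -- the series and its values are hypothetical, not from the comment) of how trivially such gap-filling can be done, and how invisible it is in the reported series:

```python
# Hypothetical illustration (invented numbers): the "flogging offense"
# above -- terms 23-26 are missing, so they get manufactured by linear
# interpolation between the measured terms 22 and 27.

measured = {22: 4.10, 27: 4.85}  # the terms we actually have

def interpolate_gap(lo_idx, hi_idx, values):
    """Fill the missing indices between lo_idx and hi_idx by linear interpolation."""
    lo, hi = values[lo_idx], values[hi_idx]
    step = (hi - lo) / (hi_idx - lo_idx)
    return {i: round(lo + step * (i - lo_idx), 2) for i in range(lo_idx + 1, hi_idx)}

filled = interpolate_gap(22, 27, measured)
print(filled)  # {23: 4.25, 24: 4.4, 25: 4.55, 26: 4.7}
# Nothing in the filled-in values marks them as manufactured rather than
# measured -- which is exactly what makes reporting them silently an offense.
```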

When I say that the stakes aren't quite as high in "knowledge" terms, I mean that people are less likely to be basing future work on these misrepresentations. Funding decisions may be based on faulty interpretations, but it's less a question of "faulty intellectual capital for the future" than it is in science.

I suppose part of the point of writing this is that it's struck me before that the worlds of science and business aren't as different as many would like to believe ...

By Scott Belyea (not verified) on 26 Apr 2007 #permalink

I'm hesitant to suggest that authors of scientific papers should be presumed to be deceptive by peer reviewers.

As one such author (or at least I used to be), I am not in the least so hesitant. Most papers are wrong, and of those, my personal estimate is that at least half contain known errors -- that is, the author/s fudged something, left out a point that didn't fit their argument, showed only the one run that worked instead of the 6 that didn't, whatever. Science is so bitterly competitive that misconduct -- at least, behaviours that fit into the "grey area" and so can be rationalized away -- is rapidly becoming the norm. "Everyone does it", "I'll never get tenure without playing the game" (oh how I hate that expression), "Once I get tenure I won't have to do this any more" -- I've heard it all. And it's all crap.

the stakes may not be quite as high

"Academic politics is so vicious precisely because the stakes are so low" -- WS Sayre (paraphrased).

The enterprise of science today is driven by greed and power, just like any other for-profit corporation. Success in science is measured by how many dollars one has secured for one's research, not by how much new knowledge one has uncovered. In many ways, it is the fault of academic institutions, of NIH funding priorities, and, of course, of those scumbag scientists among us who discard their integrity and honesty to achieve fame and riches.

I should know, since I was personally involved as a whistleblower in a case where all three factors played their roles.
http://www.brownwalker.com/book.php?method=ISBN&book=1581124228

How long before we witness the first murder being committed on the road to "individual scientific success"?

By S. Rivlin (not verified) on 27 Apr 2007 #permalink
