Open peer review: an idea whose time has come?

Over at the Nature blogs, they're soliciting comments and opinions about open peer review:

The goal of any change in the peer review system must be to improve the quality of review, where quality is determined by two distinct functions: filtering manuscripts for publication in a given journal; and making constructive suggestions on how the manuscript or study could be improved. Would open review (in which reviewers sign their reviews) accomplish this goal? I have experienced several cases of open review, intentional and unintentional, with mixed results.

It's an interesting question, which perhaps should be the next Ask a Scienceblogger question, and I hope my readers will chime in here.

Traditional peer review is anonymous: the authors don't know who the peer reviewers are. The purpose of anonymity is to let peer reviewers be as honest and critical as possible, without fear of making enemies of the authors whose papers they review. The problem is that this anonymity can also allow peer reviewers to scuttle, with impunity, the papers of rivals or articles that don't agree with their own work. True, most journals ask authors if there are any people who should not review their paper because of a conflict of interest. Fortunately, because there's not (yet) a lot of competition in the area in which I'm working, there really are only one or two people who in my estimation might have a scientific conflict of interest and thus should not review my papers on one particular topic, and even then I'm not sure whether it's paranoia or reasonable caution that leads me to list their names. However, in highly competitive areas of research, it is easy to imagine peer reviewers nixing papers by their rivals or slowing down their publication by asking for ridiculous amounts of additional work. And, of course, "anonymous" peer review often isn't really all that anonymous in areas of research in which there are very few hardcore experts. These people all know each other and can often recognize each other by the kinds of questions they ask and the works they cite.

Although the traditional anonymous peer review system has served science well for several decades, recently there have been a number of initiatives to try to radically alter and, it is hoped, improve the peer review system, such as "open access publishing" as proposed by PLoS and as commented on by Evolving Thoughts. Nature, by contrast, seems to be seeking reform rather than revolution. For example, Nature links to an article by Tom DeCoursey advocating a form of open peer review. He makes a number of good points and lists what he considers to be several advantages of open peer review:

But I do think that there are several advantages to an open peer-review system. First, reviewers would be more tactful and constructive. I admit that I have used sarcasm when reviewing studies that seem to be thrown together haphazardly. Sometimes I feel that I put more thought into my review than the authors have in designing the study and writing the manuscript.

Second, reviewers with a vested interest in suppressing the publication of a manuscript could be more easily unmasked by authors. Although manuscripts are rarely reviewed by a single reviewer, anonymous review does offer unscrupulous reviewers more opportunities for blocking publication without repercussion.

Third, a completely open review system would have reviewers' names published in a footnote to each paper to further encourage reviewers to do a thorough job. When bad science is published, the negligence of reviewers can be as aggravating as the incompetence of authors.

All of these are excellent points, and, if an open peer review system were implemented, I would tend to support publishing the names of the peer reviewers, although one thing would cause me a little trepidation. Fraudulent research can be very difficult for reviewers to detect. Usually fraud is uncovered when another scientist (or other scientists) tries and fails to replicate published results, leading to questions and a closer examination of the original data. It is rarely caught by peer reviewers, who do not have access to the raw data, although occasionally a reviewer will notice figures that have obviously been heavily Photoshopped or autoradiographs that just don't look right. The worry is that researchers might be less willing to serve as peer reviewers if they knew that their names would be tied to every paper they reviewed and that they would share in the disgrace of a fraudulent paper.

In addition, I agree with one commenter that perhaps an even bigger problem is whether reviewers should know the names of the authors whose papers they are reviewing. All too often, I see work by big, well-established, well-entrenched labs in high-profile journals like Cell or Nature and am surprised that the work got accepted in such prestigious journals. I and a fair number of other younger academicians sometimes harbor a suspicion that, once you reach a certain level of scientific prestige, reviewers tend to give you a pass on a lot of things, and a lot of these prominent researchers are chummy with the editors of various journals. As Dan Kolker put it:

We have all seen, in our various fields, papers by prominent scientists accepted at top-name journals, even when deep inside we have felt that the quality of the work alone probably does not merit such prominence. In some cases a reviewer may feel compelled to ask fewer questions of a prominent researcher than he or she might of a more junior scientist.

Maybe. But I can also envision a case in which a less prominent peer reviewer might want to prove his or her mettle by "gunning" for such prominent researchers and taking them down a notch or two, and the anonymity of peer review would let them do it. Either way, though, I can see a definite benefit in making the authors of papers anonymous to the peer reviewers. Yes, in areas with few investigators, peer reviewers will be able to guess which labs many of the papers they receive come from, just as authors can now guess who peer reviewers are by their comments. Also, most researchers cite their previous work in the introductions to their papers in order to familiarize readers with the background behind their research, and to be truly effective, anonymizing a paper would necessitate stopping that practice. Nonetheless, I can see a definite advantage to blinding peer reviewers to the authors: peer reviewers wouldn't be as likely to be dazzled by the big guns in their field.

Personally, I'm leaning towards fusing both of these concepts, but I'd go Dr. DeCoursey one step further. Manuscripts would be stripped of the authors' names before being sent out to peer reviewers, and the peer reviewers at this stage would also remain anonymous, a "double-blinding," so to speak. If the manuscript is rejected, peer reviewers and authors would remain anonymous to each other, with the editor mediating any complaints and synthesizing a reason for rejection from the comments of the peer reviewers. Both reviewers and authors would thus be protected to some extent from the repercussions of a rejection by their anonymity. In contrast, if the manuscript is accepted, then the authors and the peer reviewers would be "unblinded," each informed of who the other is, with the full constructive comments and requests or suggestions for revisions conveyed from the peer reviewers to the authors through the editor, much as the system works now. Peer reviewers would be listed in a footnote to the manuscript when published, thus sharing with the journal the responsibility for the scientific content of the paper. The journal could even publish the reviewers' critiques on its website! Such a system, in my view, would decrease the number of studies that find their way into top-tier journals based primarily on the reputation of the investigator rather than on the quality of the science; it would let reviewers reject papers that in their opinion should be rejected while being less likely to accumulate enemies in doing so; and at the same time it would force reviewers to share more fully with the journal the responsibility for the science that is published therein.
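Purely to illustrate, here's a rough sketch in Python of how an editorial office might track a submission under the scheme I'm proposing. Every class, field, and function name here is hypothetical (it's not modeled on any real journal's system); the only point it's meant to capture is that identities stay with the editor until acceptance, at which point both sides are unblinded and the reviewers get credited in the published paper.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Submission:
        """A manuscript as the editor sees it; hypothetical, for illustration only."""
        title: str
        authors: List[str]                      # visible only to the editor until acceptance
        reviewers: List[str] = field(default_factory=list)
        status: str = "submitted"

    def send_for_review(ms: Submission, reviewers: List[str]) -> Dict[str, str]:
        """Strip the author names before the manuscript goes out (the 'double-blinding')."""
        ms.reviewers = reviewers
        ms.status = "under review"
        return {"title": ms.title, "authors": "withheld"}   # what a reviewer actually receives

    def decide(ms: Submission, accept: bool, reviews: List[str]) -> Dict[str, object]:
        """On rejection, both sides stay anonymous and the editor synthesizes the reasons;
        on acceptance, both sides are unblinded and the reviewers are credited in a footnote."""
        if not accept:
            ms.status = "rejected"
            return {"decision": "reject",
                    "editor_summary": " / ".join(reviews),   # mediated, unsigned
                    "reviewers_disclosed": False}
        ms.status = "accepted"
        return {"decision": "accept",
                "signed_reviews": reviews,                   # could even go on the journal's website
                "reviewer_footnote": ", ".join(ms.reviewers),
                "authors_disclosed_to_reviewers": ms.authors,
                "reviewers_disclosed": True}

    # Example: the identities only meet once the paper is accepted.
    ms = Submission("MicroRNA regulation of X", ["A. Author", "B. Senior"])
    print(send_for_review(ms, ["R. One", "R. Two"]))
    print(decide(ms, accept=True, reviews=["Fix Figure 2.", "Clarify the statistics."]))

The asymmetry I'm proposing lives in the decision step: a rejection produces only an unsigned, editor-mediated summary, while an acceptance produces signed reviews and a reviewer footnote.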

Thoughts?

I think that anonymous authors and anonymous reviewers are the way to go.

But I can also envision a case in which a less prominent peer reviewer might want to prove his or her mettle by "gunning" for such prominent researchers and taking them down a notch or two, and the anonymity of peer review would let them do it. Either way, though, I can see a definite benefit in making the authors of papers anonymous to the peer reviewers.

I recently had a nice manuscript that I was trying to get published, in which I (a young researcher) was the corresponding author and one of my co-authors was a very well-known collaborator. I sent it off to four prestigious journals; three declined it without review, and the fourth gave it only a cursory review and declined it for very bad reasons. I then submitted the exact same manuscript to another equally prestigious journal with my well-known co-author listed as the corresponding author, and it was accepted quickly with only minimal comment. Perhaps it was just the luck of the draw with the reviewers, but I have my doubts...

One problem with anonymous authorship, to my mind, is that simply omitting the names from the manuscript won't be much of a blind in most cases. My experience, for example (and maybe this is not typical in general), is that most authors reference themselves more than anyone else, since they are usually building on their own research. This means you can get a good handle on identity merely by glancing at the list of refs and finding the name(s) mentioned the most. This is in addition to other indicators like style and research area, which may be well familiar to any potential referee.

I really like the Open Peer Commentary format of journals like Current Anthropology and Behavioral and Brain Sciences. It's very intensive work for the editors, however.

So will this be called Scientific Idol? Will Simon Cowell referee the match?

I think the idea has merit if and only if people act without political bias, focus on the science only, and don't contact each other to let their buddies know that a paper is under review (facultyof1000 smacks of this - it's bad).

In short, it's not going to happen with journals that have serious impact in a competitive field. Can you imagine a microRNA paper undergoing review like this? As they say in the US army - wear your kevlar on your back.

Reviewing a paper is a scholarly task, ain't it? And one which should bring recognition, to my mind anyway. Wouldn't it be nice to get a reputation as a really good peer reviewer?

As noted above, there are ways to subvert any system, but the more level you make the playing field, the better. I like double-blinded peer review. And if I, as a new author, want to play the system, I will put in as many references as I can to well-published authors, perhaps even mimicking their style, to get my first few papers published. That's not so different from the way papers are written now anyway.

Sorry, one addition:

How about a little more quality review of the peer reviewers as well? (anonymously, of course)

I favor the double-blind method of review. As others have pointed out, it's not going to be absolutely blind any more than a "double-blind" drug trial comparing two drugs with completely different side effects is truly blind, but at least it's something of a start.

An open review system would give too much advantage to established researchers, especially famous researchers. Suppose, for example, you were given a paper by James Watson to review. Suppose it turned out to be complete nonsense. Would you really want to be the one to turn down a paper by a Nobel Prize winner with a reputation for vindictiveness? It might mean the end of your career.

Another advantage of blinding the reviewers to the authors' identities is that it would offer some protection against bias, unconscious or conscious, influencing the outcome. Several studies have shown that papers or grant proposals get downgraded when the authors are female. If the authors are unknown, that is no longer a problem.

I favor the double-blind method of review. One thing that hasn't been mentioned so far is the role of established, senior researchers in the advancement of junior folk. In the past year I found myself sitting in a job interview with a senior investigator, quite prominent in the field, after having given a very poor review to one of his manuscripts (it was supposed to be double-blind, but the authors included identifying information in the acknowledgements). In this case, I fully stand by my review -- I believe to this day that the analysis was poorly conceived and executed. That said, I had more than a moment of worry during that interview.

I think there is quite a bit that editors can do to improve the process (my apologies, editors - I know you're already busy!). I take my reviewing responsibilities very seriously and would welcome specific feedback on my performance. Plus, we've all probably read reviews where we scratch our heads and think, "Did this person actually read the manuscript???" Editor feedback might minimize that problem too...

I don't think double-blind is a good idea, at least in my field (computational fluid dynamics).

Dave S.'s comment brings up an important point, but I think it's worth expanding on. In the area of computational fluid dynamics that I work in, it takes several years of work to develop a system that can get significant large-scale results, and these systems are unique enough that it takes at least two or three full papers to describe how they work. Those inner workings are fairly critical in evaluating the quality of the reported results, so it's quite important that the reader of a "results" paper have access to a complete description of the system. The appropriate way to handle this is by saying something like, "This computation was performed using the method of papers X, Y, and Z," in which case it's quite clear which lab the paper came from. Perhaps mentioning the papers could be avoided by including 150 pages of redundant introduction, but even so it would be immediately recognizable.

The result of that is that double-blind simply would not work for these papers. They cannot be written in a way that would make the authors anonymous, and they cannot be adequately reviewed without including the parts that reveal the authors.

Double-blind is not merely of little worth in such cases; it is actively dangerous. It provides the appearance of a solution without actually providing a solution. By doing that it will obscure any bias problems that persist, and will hinder efforts to find a real solution to them.

I have to disagree with Brooks; anyone can cite manuscripts that have been previously published and build on the work therein. I don't know about computational fluid dynamics, but in computational biophysics this is quite common; one group will take a system built by another group and modify it or expand upon it. The fact that traditionally only one lab in your field does so in no way invalidates the idea that such information could be included as references; if reviewers wish to assume that these references must indicate authorship by a certain group of people, that is their concern.

And I don't understand what Brooks means by "obscuring bias problems that persist." No one is suggesting that vital references be left out of a paper; only that the reviewers not be able to see the authors' names and that the authors not refer to their previous work using first-person pronouns. As such, the work will be forced to stand on its own merit, and not on the reputation of the authors. As epador pointed out, if reviewer bias remains a problem under a double-blind system because reviewers try to guess the authors' identities, newer investigators can take advantage of that by building upon the previous work of known senior researchers. Such a practice is not only ethical science, but would correct the reviewer-bias problem quickly.