How do researchers perceive peer review?

You don't have to look far to find mutterings about the peer review system, especially about the ways in which anonymous reviewers might hold up your paper or harm your career. On the other hand, there are plenty of champions of the status quo who argue that anonymous peer review is the essential mechanism by which reports of scientific findings are certified as scientific knowledge.

So how do scientists feel about anonymous peer review? A 2008 paper in Science and Engineering Ethics by David B. Resnik, Christina Gutierrez-Ford, and Shyamal Peddada, titled "Perceptions of Ethical Problems with Scientific Journal Peer Review: An Exploratory Study", attempts to get a preliminary handle on that question. They write:

Although most scientists agree that ethical problems can occur in journal peer review, evidence has been anecdotal, consisting of personal accounts published in news stories, letters, or commentaries. In this article, we report the results of an exploratory survey of scientists' perceptions of ethical problems with journal peer review. (306)

Specifically, Resnik et al. studied the perceptions of scientists at the National Institute of Environmental Health Sciences (NIEHS) in Research Triangle Park, North Carolina. They probed the views of researchers, research staff, post-doctoral trainees, and technicians using an anonymous survey distributed at mandatory Responsible Conduct of Research (RCR) training sessions.

They distributed 556 surveys, of which 283 were returned. The researchers set aside the surveys returned by contractors, graduate students, people in non-research positions, respondents who didn't indicate their position at NIEHS, and those who hadn't published any papers, leaving 220 completed surveys for them to analyze.

Those 220 surveys left in the pool were from postdocs (94), principal investigators (38), staff scientists (55) and technicians (33) who together represented 22 different biomedical disciplines.
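
To make the bookkeeping explicit, here is a minimal sketch (my own, in Python, using only the counts reported above) that checks that the position breakdown sums to the analyzed sample and works out the return and retention rates.

```python
# Survey bookkeeping for Resnik et al. (2008), using the counts reported above.
distributed = 556
returned = 283

analyzed_by_position = {
    "postdocs": 94,
    "principal investigators": 38,
    "staff scientists": 55,
    "technicians": 33,
}

analyzed = sum(analyzed_by_position.values())
assert analyzed == 220  # matches the 220 surveys left after the exclusions above

print(f"return rate:    {returned / distributed:.1%}")   # ~50.9% of distributed surveys came back
print(f"retention rate: {analyzed / returned:.1%}")      # ~77.7% of returned surveys were analyzed
```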

In addition to asking for demographic information (like age and level of education), the surveys asked respondents to answer the following questions:

1. Approximately how many articles have you published in peer-reviewed scientific or professional journals?

Have any of the following ever happened to you during the peer review process:

2. The review period took longer than 6 months.
3. The review period took longer than a year.
4. Comments from reviewers included personal attacks.
5. A reviewer was incompetent (i.e. he/she did not carefully read the article, was not familiar with the subject matter, or made mistakes of fact or reasoning in his/her review).
6. A reviewer was biased (i.e. didn't give an article a fair hearing, prejudged it).
7. A reviewer breached the confidentiality of the article without your permission.
8. A reviewer used your ideas, data, or methods without your permission.
9. A reviewer delayed the review so that he/she could publish an article on the same topic.
10. A reviewer required you to include unnecessary references to his/her publication(s). (310)

For questions 2-10, the available responses were "yes," "no," and "don't know". In their analysis, the researchers focused on responses to questions 4 through 10.

Here's the table in the paper that presents the aggregate data they collected:

[Table 1 from Resnik et al. (2008), summarizing the aggregate responses to questions 2-10]

Worth noting here is that some of the ethical breaches we might consider most serious in peer review -- using the veil of anonymity to steal ideas, data, or methods from the author whose work you are reviewing, using the power of the reviewer to slow down the publication of the work under review so that the reviewer can get his or her own work published first, or otherwise violating the confidentiality of peer review -- were reported by relatively few respondents (less than 10%). Much larger numbers of respondents indicated that they thought a reviewer was incompetent (61.8%) or biased (50.5%). It is also interesting that none of questions 4 through 10 received only "no" or "don't know" responses -- each of them drew at least 10 "yes" responses from among the 220 respondents in the final pool.
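
To put those percentages into absolute numbers, here is a rough back-of-envelope sketch (mine, not the authors'), assuming the percentages are taken over all 220 analyzed respondents; the paper's table may use slightly different denominators for each question.

```python
# Back-of-envelope headcounts implied by the reported percentages, assuming the
# denominator is all 220 analyzed respondents (the paper's table may differ).
n_respondents = 220
reported_shares = {
    "incompetent reviewer (Q5)": 0.618,
    "biased reviewer (Q6)": 0.505,
}

for problem, share in reported_shares.items():
    print(f"{problem}: roughly {round(n_respondents * share)} of {n_respondents} respondents")

# "At least 10 'yes' responses" out of 220 works out to at least ~4.5% per question.
print(f"10 of {n_respondents} is {10 / n_respondents:.1%}")
```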

The researchers found some interesting patterns in the responses. Older respondents were more likely than younger ones to answer "yes" to question 4 (about whether comments from reviewers included personal attacks). Researchers with more published papers were more likely to report incompetent reviewers or biased review. As well, the respondents who were PIs and postdocs tended to identify more negative experiences of peer review than did the respondents who were technicians and staff scientists. (The researchers suggest that this may be because postdocs and PIs are more likely to be first authors on papers -- and that reviewers' critiques may feel more personal to them as a result.) Postdocs, too, gave the most reports of incompetent reviewers.

Now, there are some clear limits to the conclusions that can be drawn from the results of this study. The research focused on scientists at a single research institution, rather than respondents from different institutions. The culture of NIEHS might not be representative of the community of biomedical researchers as a whole (even in the U.S.). Further, the surveys asked researchers whether they had ever had the particular experiences of interest with peer reviewers, but did not ask how frequently these experiences occurred.

Perhaps most importantly, the researchers were measuring respondents' perceptions of problems they had encountered with peer review, but there was no effort to establish whether the problems they reported had actually happened -- whether the reviewers were actually biased, or incompetent, or swiping information from the manuscripts they were reviewing.

To this concern, Resnik et al. respond:

[D]ocumenting that scientists perceive that there are ethical problems with journal peer review can be an important finding in its own right, because a scientist may change his/her behavior in response to what he/she perceives to be a problem. A researcher who is concerned that his/her ideas will be stolen, for example, may not disclose all the information that is needed to repeat his/her experiments. A researcher who is concerned that a reviewer is incompetent or biased may choose to ignore the reviewer's comments rather than address the concerns (which may in fact be valid), especially if they involve further time and effort in the laboratory. (308-309)

How researchers' perceptions about what peer reviewers are doing (or might be doing) affect their own scientific conduct is, of course, a question that might merit further research.

Indeed, Resnik et al. suggest that even in the absence of this further research, the results they report may warrant action:

As mentioned earlier, commentators have made various proposals for reforming the peer review procedures used by scientific journals. Our survey provides support for these reforms, since it demonstrates that biomedical researchers perceive that there are some problems with the integrity and quality of peer review. Since incompetence and bias were by far the most frequent problems respondents claimed to have experienced during peer review, journals, research institutions, and scientific societies should consider ways of dealing with these problems, such as providing additional education and training for reviewers on the scientific and ethical standards for peer review, requiring reviewers to disclose conflicts of interest, and paying more careful attention to the selection of reviewers. A more radical reform, such as open review, may be needed to counteract the perceptions of the most serious violations of peer review ethics, such as breach of confidentiality and use of data, methods, or ideas without permission. (309)

I welcome your discussion in the comments on the question of whether significant perception of problems with peer review on the part of the scientists whose careers depend on it (at least to some extent) is reason enough to examine the status quo and reform peer review.

_____

Resnik, D., Gutierrez-Ford, C., & Peddada, S. (2008). Perceptions of Ethical Problems with Scientific Journal Peer Review: An Exploratory Study. Science and Engineering Ethics, 14(3), 305-310. DOI: 10.1007/s11948-008-9059-4


I feel I have to post a quote from a review of one of my papers:

"While, I am not aware of any publication discussing these variants, I would expect them having been tested before (so far without significant success)."

One wonders how the reviewer knows that the variants are unsuccessful if he is not aware of any publications on the topic and only 'expects' they've been tested before.

I've published some 60-odd papers. I have had one paper lost in the process for a while but, other than that, have never had any problems with reviewers. Some journals give the identity of the reviewers on the first page of the published paper. I like that. Other journals publish a yearly list of reviewers in alphabetical order. I have refused to review an NSF grant application because I did not agree with the purpose of the program.

My favorite review of one of my papers went something like this, "I am sorry to say I have lost the MS you sent me for review. It covered important material perfect for the journal. It was extremely well written and I recommend it be published without revision."

By Jim Thomerson (not verified) on 01 Apr 2010

Perceptions of a problem are enough reason to re-examine an issue, but not enough reason to conclude problems definitively exist.

My personal opinion is that the biggest limitations facing the peer review system are that (1) scientists often don't put nearly enough time into peer review ("I don't have time") and (2) scientists often value quantity more than quality of publications. Despite complaining about it, such behavior continues to be widely practiced.

Quis custodiet ipsos custodes? (Who will watch the watchmen themselves?)

Resnik's study shows that either there are biased peer reviewers, or some of the scientists who participated in the study are. But they are all humans and scientists, and except for their relative positions toward the article (one is the author and the other is the reviewer) there is no difference between them. Why should we consider that only the authors are subjective, and the reviewers are not? Are scientists more subjective when they have the role of the author than when they are reviewers? Isn't this bias already?

Psychological studies have shown us that humans are biased, no matter how smart they are. In many quantum theory textbooks it is written that Einstein and Bohm were biased against the Copenhagen interpretation. Even Bohr said that Einstein was biased because of his philosophical beliefs. So, at least one of them was biased. The history of science contains many examples of very good scientists who, at some point or another, made simple mistakes or obviously wrong claims because of their subjectivity. If they were biased, why can't we accept, as part of our human condition, that we all may be, to some degree, subjective, including the peer reviewers?

Are scientists accepted as peer reviewers only after taking a test which measures their "bias quotient"?

Then why is it so taboo to question the peer review system? Don't the peer reviewers' reviews need to be reviewed in turn?

Is there any scientific proof that the peer review system cannot be improved?

Acknowledged mainstream scientists tend to favor the current peer review system, and less acknowledged or out-of-the-mainstream ones tend to criticize it. But isn't it natural to praise the very system that made you the acknowledged scientist you are?

Interesting study, though I guess not super-surprising.

As for your question about whether a study of "the scientists whose careers depend on it" provides reason for reform, I'd like to see studies of other groups' perceptions of peer review: journalists, university administrators, policy makers, students... That's not to say that the researchers' views aren't useful, but they aren't the whole picture, and I don't know why studies of peer review seem to obsess over them (it only exacerbates one of the things I perceive as a problem, that it's such a closed shop).

That said, researchers are fascinating people who do know and care a lot about peer review, and it'd also be really interesting to compare between disciplines.

It would have been interesting to include questions about the behavior of participants when they were acting as reviewers, along the lines of what they felt about authors (e.g., did they ever feel that an author hadn't really finished the development of a manuscript and was hoping that reviewers' comments might suggest ways to finish it?) and, more importantly, whether they felt that they had ever been less than fully competent, or had some bias, regarding documents (grants, manuscripts) they were asked to review.

By Greg Shenaut (not verified) on 02 Apr 2010

Thanks for the heads up on this article. I've had a couple of run-ins with peer review, where in one case one reviewer said "publishable as is", another said "needs revision", and a third said "not publishable at all" -- the latter seemed like a euphemism for "has no idea what he's talking about", based on the other remarks left by that reviewer. Not a "personal attack", but close, if you are a fledgling academic still unsure of the value of your contributions. I've done some peer review myself, and sometimes I feel in the same boat when I read a paper that needs major revision. How do you bring that across gently? That's not an easy task. As flawed as the peer review process may seem, I don't see how we can replace it. What does need improvement, though, is the time it takes from review to acceptance to actually being published.

What about the case where one reviewer found an error in my manuscript that the other reviewers missed? By the definition of this survey, the ones who failed to spot my mistake are incompetent for not reading carefully enough.

Survey questions of the form "Have any of the following ever happened to you?" are--for obvious reasons--highly dependent on the exposure level of the respondent to the possibility of those types of things having happened. Almost none of those things have ever happened to first-year technicians, because they have never been paper authors. Senior big-wig investigators have experienced most of them, because they have authored from dozens to hundreds of papers.

A much better way to gauge how researchers perceive peer review would have been to ask them, "How do you consider each of the following as problems with the peer review system?"

And supplied as allowable answers something like: "Not at all, a little bit, substantial, a lot, massive".

I agree with Resnik's article that scientific peer review is severely flawed and needs to be changed. As a professor and one who has published many articles, I think it's very biased, and reviewers do hide behind their veils of anonymity to make comments that prevent good papers from being published. This hurts everyone, ultimately.

What especially bothers me is that there are a handful of scientists who appear on almost every editorial board of the top journals, and who consequently wield unusual power as to what gets published and what does not. This happens more often in U.S.-based journals than in European journals. Then, those same people also sit on the peer review committees at the NIH, thereby regulating who gets funded and who does not.

So, not only do these people now control what gets published, but they also control what gets researched. And if you research an aspect of a disease that differs from the mainstream approach and may be more likely to lead to a cure, you can bet that your research will neither get funded nor published! Meanwhile, these same people will fund their friends, who in turn fund them, and they will collaborate with each other and facilitate the publication of each other's work [through favorable reviews], while very worthwhile, exciting work by others does not see the light of day.

It's not supposed to happen this way, but it does. I have complained about this to editors-in-chief of journals and to top people in charge at NIH [with solid examples], all to no avail. It's all YOUR tax dollars, people!

For me, personally, I only have a few years left in science, and I publish now mostly in European journals. But I worry about the future. This corruption has to stop. Science is supposed to be the last bastion of pure truth and knowledge. But in the last 15 years, it has become extremely corrupted. And everyone just looks away....

Neuroprof

By Neuroprof (not verified) on 04 Apr 2010

Technically, isn't the confidentiality between the reviewer and the editor, not between the reviewer and the author? I know of many cases where reviewers have asked the editor's permission to fact-check something with an additional person, but I've never heard of the editor asking the author if that is OK.

I've published lots of papers and have experienced all the problems mentioned.

There are similar problems with editors. E.g., one paper was rejected after /two/ years with apparently no review having occurred; a few months later the journal published a paper by another author with the same theory.

I'm an editor myself, and I see one responsibility of an editor being to evaluate reviewers just as reviewers evaluate papers. It seems to be honored more in the breach than in the observance, though.

I have published very few papers, and as an active conservationist I am considering not publishing any more, as they may detract from conservation goals. I was recently accused by a 'bank-teller' scientist at a major US government department of being underhand by not offering this person the full dues that he/she reckoned he/she deserved in a review. Conservation of the 'ego' does not sit well with conservation. This all stemmed from the fact that I decided to review this person's chapter in a book only on the condition of anonymity -- I do believe that this is a pillar of academic freedom. However, I was 'outed', and the other party then made more of this, due to the unstable nature of his/her personality. Can't win -- but conservation in a poor continent suffers.

By k ferguson (not verified) on 22 May 2010