Peer Reviews Faked in a Tiny Percentage of a Small Percentage of Journals; Heads Will Roll.

The academic world and its detractors are all a-tizzy about this recent news reported here:

Springer, a major science and medical publisher, recently announced the retraction of 64 articles from 10 of its journals. The articles were retracted after editors found the authors had faked the peer-review process using phony e-mail addresses.

The article goes on to say that science has been truly sullied by this event, and anti-science voices are claiming that this is the end of the peer-review system, proving it is corrupt. The original Springer statement is here.

See this post at Retraction Watch and links therein for much more information on this and related matters.

Here are the papers.

Personally, I think that anyone who circumvents the process like this should be tossed out of academia. Academics toss each other out of their respective fields for much, much less, and often seem to enjoy doing it. Cheating like this (faking peer review as well as making up data) is the sort of thing that needs to end a career, and a publishing company that allows this to happen has to examine its procedures. This is probably worse than Pal Review, but Pal Review is probably more widespread and harder to detect because it does not use blatantly obvious fake people. (See this for more on Pal Review.)

But, there is an utter, anti-scientific and rather embarrassing lack of perspective and context here. Let us fix it.

Ten of Springer's journals are sullied by this process. Is that most of Springer's journals? All of them? Half of them? Well, Springer produces 2,900 journals, so ten is about a third of one percent of them. I'm not sure how many papers Springer publishes in a year, but about 1,400,000 peer-reviewed papers are published per year across all publishers. So the 64 papers known to be affected by this nefarious behavior amount to roughly 0.005 percent of a single year's output, a number too small to be meaningful. More papers were eaten by dogs before publication.
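
For concreteness, here is the back-of-envelope arithmetic in a few lines of Python, using the figures just cited (the 1,400,000 figure is an industry-wide estimate, not Springer's own output):

    # Scale of the retractions, using the figures cited above.
    journals_affected = 10
    springer_journals = 2_900       # Springer's catalog of journals
    papers_retracted = 64
    papers_per_year = 1_400_000     # peer-reviewed papers per year, all publishers

    print(f"share of journals: {journals_affected / springer_journals:.2%}")  # ~0.34%
    print(f"share of papers:   {papers_retracted / papers_per_year:.4%}")     # ~0.0046%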

So there are two obvious facts here. One is that authors who faked their research or its qualities, including faking the peer-review process, need to be run out of town on a rail, drawn and quartered, and otherwise gone medieval on. The other is that the conversation we are going to have about this on social media and elsewhere is likely to be a useless pile of steaming bull dung unless we start it with both feet planted firmly in reality, rather than with context-free, scary-looking numbers presented scale-free, whether willfully (I assume) or out of ignorance.

Springer's full statement, which actually indicates that the process works rather than that it does not, is here:

Retraction of articles from Springer journals

London | Heidelberg, 18 August 2015
Springer confirms that 64 articles are being retracted from 10 Springer subscription journals, after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports. After a thorough investigation we have strong reason to believe that the peer review process on these 64 articles was compromised. We reported this to the Committee on Publication Ethics (COPE) immediately. Attempts to manipulate peer review have affected journals across a number of publishers as detailed by COPE in their December 2014 statement. Springer has made COPE aware of the findings of its own internal investigations and has followed COPE’s recommendations, as outlined in their statement, for dealing with this issue. Springer will continue to participate and do whatever we can to support COPE’s efforts in this matter.
The peer-review process is one of the cornerstones of quality, integrity and reproducibility in research, and we take our responsibilities as its guardians seriously. We are now reviewing our editorial processes across Springer to guard against this kind of manipulation of the peer review process in future.

In all of this, our primary concern is for the research community. A research paper is the result of funding investment, institutional commitment and months of work by the authors, and publishing outputs affect careers, funding applications and institutional reputations.

We have been in contact with the corresponding authors and institutions concerned, and will continue to work with them.


On the one hand, you're right. On the other hand, this suggests that the editors responsible were not checking to see if the suggested referees had a reasonable publication record and were not collaborators or colleagues of the authors. (I'm not clear how they could have done even a cursory check and not noticed that the referees in question were not actual people.) If that's right then there were certainly a much larger number of papers that were reviewed by spouses, cousins and close friends. You say that the authors should be bounced from academia. I agree, but how about tossing the editors also? They seem unaware of the concept of due diligence.

By Ethan Vishniac (not verified) on 20 Aug 2015 #permalink

Ethan, I think you've missed the point of my comment. Not knowing much else, we see that the editors identified 64 cases out of millions, after the fact. We can assume they identify some, maybe many, before the fact. That may well mean that the editorial process in peer review works rather well, not that it does not work.

Meanwhile, you mention Pal Review. Note my link on that in the post above. Mashey's work suggests that we are more likely to see Pal Review in fake sciences, which makes it even easier to spot.

Also, the Springer retraction list seems almost entirely Chinese. If that is correct, then maybe there is something about the system that makes Chinese contributors harder to check. I can think of a number of factors that may make that the case. I've personally had a very hard time running down contacts that I want to talk to in China, owing to email and web access issues there. I even wonder, actually, whether some of these "fake" people were simply real people who seem fake. Hopefully not!

Finally, your argument may undercut itself. Yes, how could anyone do even a cursory check and not find fakes? I don't necessarily think you are right about this (I don't know), but if THAT is your starting assumption, then your assertion based on the cockroach theory (if you see one, there are thousands) is invalid. You need to make up your mind, and before doing that, I suggest adding more data to the mix to get a better handle on what is going on (we all need to do that).

Greg,

I got your point. I even agree with it. I just think that you're being way too easy on the editors involved. IMHO an editor who doesn't know who the referee is, is an editor who has failed to exercise due diligence. I know that Chinese names, particularly Chinese names rendered in the Roman alphabet, can be ambiguous. There are astronomers who use identical renderings of their names who have nothing to do with one another. My point is that if you haven't taken the time to distinguish between people then you're not doing your job as an editor.

It's yet another argument for getting journals to use ORCID so that authors have unique identifiers, but there will always be people who haven't gotten one yet. People have to do their jobs for the journals to work.

Full disclosure: I work in a relatively small field (astrophysics) so this may affect my outlook.

Cheers
Ethan Vishniac

By Ethan Vishniac (not verified) on 20 Aug 2015 #permalink

The purpose of peer review is not to determine if a paper is correct, or is fraudulent, or is not fraudulent but merely incorrect. The purpose of peer review is to determine if something should be published in the scientific literature.

If you can't evaluate whether a paper in the scientific literature is good or bad, is correct or wrong, then you should not use that paper in your research, and should not cite that paper in your publications.

The problem is that the scientific literature is being used for things that it was never intended to be used for, and which it is not suitable for. The number of citations a journal has is not a measure of the "scientific value" of articles published in that journal. The "scientific value" of a particular article can only be determined by those knowledgeable enough to understand the paper and its context in the scientific field it is in.

It would be "scientific misconduct" to pretend that the "scientific value" of a particular paper can be evaluated by where it is published. In other words, there is zero scientific basis for determining the scientific value of a paper based on where it is published.

I appreciate that the people using citation index metrics to evaluate "scientific value" are not scientists. That doesn't make the process acceptable. It means the evaluations have even less value.

Less than zero value means the process is hurting scientific progress.

By David Whitlock (not verified) on 20 Aug 2015 #permalink

It’s yet another argument for getting journals to use ORCID so that authors have unique identifiers

That won't solve the problem, either, because it can be defeated by the simple expedient of registering an alias with the database.
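
For concreteness, an ORCID iD is just a 16-character identifier whose last character is an ISO 7064 MOD 11-2 check digit. A submission system can use it to reject typos and malformed iDs, but a valid checksum says nothing about who actually controls the record. A minimal sketch, using ORCID's published example iD:

    def orcid_checksum_ok(orcid):
        """Validate the ISO 7064 MOD 11-2 check digit on an ORCID iD."""
        chars = orcid.replace("-", "")
        if len(chars) != 16:
            return False
        total = 0
        for ch in chars[:-1]:
            if not ch.isdigit():
                return False
            total = (total + int(ch)) * 2
        result = (12 - total % 11) % 11
        expected = "X" if result == 10 else str(result)
        return chars[-1].upper() == expected

    print(orcid_checksum_ok("0000-0002-1825-0097"))  # True: well-formed, but not a verified person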

Even requiring the editor to know who the reviewer is can be problematic. It's one thing to do that in a small field like astronomy or geophysics (the latter being my area). But even there, you run a risk of turning it into pal review--climate change, a field mentioned above, is part of geophysics. In a field as large and varied as medicine, the risk of pal review becomes overwhelming, simply because the field is too big for everybody to know everybody. Thus, as a quick perusal of Orac's archives will demonstrate, pal review is a serious issue, resulting in obvious nonsense like homeopathy and therapeutic touch showing up in peer-reviewed publications.

By Eric Lund (not verified) on 21 Aug 2015 #permalink

P.S. According to the link at Retraction Watch, there have been cases where the faking of reviews was done without the authors' knowledge. There was a case of more than 30 faked reviews at Hindawi journals where the editors were found responsible for the faked reviews. So while I agree that this ought to have been a career-ending move on the part of the responsible party, it isn't always the authors.

By Eric Lund (not verified) on 21 Aug 2015 #permalink

I've learned a little bit more about this, and it appears that in some cases the suggested referees were real people, with substantial reputations. The trick was that the submitters gave a phony email address. In this case I would agree that the editors who got fooled deserve some sympathy. We maintain a database of names with professional email addresses and if there is a discrepancy we use the email given in recent publications.
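
In code terms, that cross-check amounts to something like the following sketch (names and addresses invented for illustration):

    # Compare the author-supplied referee address against a database of
    # known professional addresses; on any discrepancy, fall back to the
    # address given in the referee's recent publications.
    KNOWN_ADDRESSES = {"A. Referee": "a.referee@example.edu"}

    def referee_address(name, supplied, from_recent_papers):
        known = KNOWN_ADDRESSES.get(name)
        if known and supplied.lower() != known.lower():
            return from_recent_papers  # distrust the author-supplied address
        return supplied or from_recent_papers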

Also, it may be that the authors were not always aware of this game. It seems that some companies that help non-English-speaking authors polish their manuscripts were also helping with the submission.

By Ethan Vishniac (not verified) on 24 Aug 2015 #permalink

I have always found it safest to assume that reviewers recommended by the authors (or apparently, in the biomedical field, some middleman) were either grad school buddies or past collaborators of the authors or at least people who are known to be sympathetic to their approach or ideology. You can certainly use one of them, but you also go out and find someone who wasn't on the list.

You can certainly use one of them, but you also go out and find someone who wasn’t on the list.

That, too, can be gamed, although it's not guaranteed to work. One of the ways an editor can pick reviewers is by looking at the reference list: someone who has multiple papers cited in the manuscript is more likely to be familiar with the science topic, if not the specific research protocol. Even in a small field, no editor will know all of the relevant literature, so the authors can stack the deck by citing the papers of someone who is likely to be a sympathetic reviewer and not citing the papers of someone who is not likely to be sympathetic. This won't work if the topic of the manuscript involves a sufficiently well-known controversy, because even the editor will notice if the introduction fails to acknowledge the existence of the controversy. But with non-controversial topics or large fields, this technique is more likely to succeed.
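
A toy illustration of that selection heuristic, and of why citation-stacking can game it (all names invented; a real editorial system would also screen candidates for collaborations with the authors):

    from collections import Counter

    def rank_candidate_reviewers(reference_authors, manuscript_authors):
        """Most-cited authors first, excluding the manuscript's own authors."""
        counts = Counter(a for a in reference_authors if a not in manuscript_authors)
        return counts.most_common()

    # Invented reference list: "Chen" dominates the citations, so Chen tops
    # the ranking -- which is exactly how citation-stacking skews the choice.
    refs = ["Chen", "Chen", "Chen", "Garcia", "Okafor", "Chen", "Garcia"]
    print(rank_candidate_reviewers(refs, {"Smith"}))
    # [('Chen', 4), ('Garcia', 2), ('Okafor', 1)]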

Of course, even if the authors successfully game the review in this fashion, it will still be a genuine review, unless somebody is successfully spoofing the reviewer database.

There should be techniques for catching spoofed e-mail addresses of suggested reviewers. A Google or MSN e-mail address should be automatically regarded with suspicion, unless the potential reviewer is identified as having a Google or Microsoft affiliation (most journals I am familiar with require authors to supply affiliations for suggested reviewers). For that matter, an address that does not match a known affiliation of the reviewer should raise a flag. Any reputable organization (and quite a few that aren't) should have its own domain by now.
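
A rough sketch of those checks, with the webmail list and the affiliation-domain test as illustrative placeholders rather than any journal's actual configuration:

    FREE_MAIL_DOMAINS = {"gmail.com", "googlemail.com", "yahoo.com",
                         "hotmail.com", "outlook.com"}

    def reviewer_email_flags(email, affiliation_domains):
        """Return reasons a suggested-reviewer address deserves extra scrutiny."""
        flags = []
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in FREE_MAIL_DOMAINS:
            flags.append("free webmail address")
        if affiliation_domains and domain not in {d.lower() for d in affiliation_domains}:
            flags.append("domain does not match the stated affiliation")
        return flags

    # A reviewer listed with a university affiliation but a webmail
    # address trips both checks:
    print(reviewer_email_flags("j.smith@gmail.com", {"example.edu"}))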

By Eric Lund (not verified) on 25 Aug 2015 #permalink