Having spent the last couple of days dealing with pure woo, such as germ theory denialism and naturopathic quackery, I think now's as good a time as any to move on to a more serious topic.
One of the most important aspects of science is the publication of scientific results in peer-reviewed journals. This publication serves several purposes, the most important of which is to communicate experimental results to other scientists, allowing them to replicate, build on, and in many cases find errors in the results. In the ideal situation, this communication results in the steady progress of science, as dubious results are discovered and sound results replicated and built upon. Of course, scientists being human and all, the actual process is far messier than that. In fact, it's incredibly messy. Contrary to popular misconceptions about science, it doesn't progress steadily and inevitably. Rather, it progresses in fits and starts, and most new scientific discoveries go through a period of uncertainty of varying length, with competing labs reporting conflicting results. Achieving consensus about a new theory can take anywhere from relatively little time (for example, the less than a decade it took for Marshall and Warren's hypothesis that peptic ulcer disease is largely caused by H. pylori to be accepted, or the relatively rapid acceptance of Einstein's Theory of Relativity) to much longer.
One of the pillars of science has traditionally been the peer review system. In this system, scientists submit their results to journals for publication in the form of manuscripts. Editors send these manuscripts out to other scientists to review them and decide if the science is sound, if the methods are appropriate, and if the conclusions are justified by the data presented. This step of the process is very important, because if editors don't choose reviewers with the appropriate expertise, serious errors in review may occur. Also, if editors choose reviewers with biases so strong that they can't be fair, then science that challenges such reviewers' biases may never see print in their journals. The same thing can occur with grant applications. At the NIH, for instance, the scientists running study sections must be even more careful in choosing scientists to serve on their study sections and review grant applications, not to mention in picking which scientists review which grants. Biases in reviewing papers are one thing; biases in reviewing grant applications can result in the denial of funding to worthy projects in favor of less worthy projects that happen to correspond to the biases of the reviewers.
I've discussed peer review from time to time, although perhaps not as often as I should. My view tends to be that, to paraphrase Winston Churchill's invocation of a famous quote about democracy, peer review is the worst way to weed out bad science and promote good science, except for all the others that have been tried. One thing's for sure: if there's a sine qua non of an anti-science crank, it's that he will attack peer review relentlessly, as HIV/AIDS denialist Dean Esmay did. Indeed, in the case of Medical Hypotheses, the lack of peer review let the cranks run free to the point where even Elsevier couldn't ignore it any more. Peer review may have a lot of defects and blind spots, but the lack of peer review is even worse. It's no wonder that cranks of all stripes loved Medical Hypotheses.
None of this means that the current system of peer review is sacrosanct or that it can't be improved. In the 25 years or so I've been doing science, particularly in the 20 years since I began graduate school, I've periodically heard lamentations asking, "Is peer review broken?" or demanding that the peer review system be radically altered or even abolished. Usually they occur every two or three years, circulate in scientific circles for a while, and then fade away, like the odor of a particularly stinky fart. It looks as though it's time yet again, as evidenced by a rather amusingly titled article in The Scientist, I Hate Your Paper: Many say the peer review system is broken. Here's how some journals are trying to fix it:
Twenty years ago, David Kaplan of the Case Western Reserve University had a manuscript rejected, and with it came what he calls a "ridiculous" comment. "The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published," he recalls, but the study was not about structure. The x-ray crystallography results, therefore, "had nothing to do with that," he says. To him, the reviewer was making a completely unreasonable request to find an excuse to reject the paper.

Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it's employed by higher-impact journals. Theoretically, peer review should "help [authors] make their manuscript better," he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons--if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example--or simply for convenience, since top journals get too many submissions and it's easier to just reject a paper than spend the time to improve it. Regardless of the motivation, the result is the same, and it's a "problem," Kaplan says, "that can very quickly become censorship."
I daresay pretty much every scientist has submitted a paper (probably several), only to have outrageously unreasonable reviewer comments returned to them similar to those Kaplan described above. I myself have experienced this phenomenon on multiple occasions. Most recently, it took me multiple submissions to four different journals to get a manuscript published. It took nearly a year and a half and more hours of writing and rewriting and doing more experiments than I can remember. But "censorship"? I'm half tempted to respond to Dr. Kaplan: Censorship. You keep using that word. I do not think it means what you think it means. In fact, I just did.
No, incompetent or biased peer review is not "censorship." It's incompetent or biased peer review, and it's a problem that needs to be dealt with wherever and whenever possible. As for "rejecting papers for convenience," perhaps Dr. Kaplan could tell us what a journal editor should do when he or she gets so many submissions that it's only possible to publish 10 or 20% of them. Peer reviewers aren't paid; with the proliferation of journals, the appetite of the scientific literature for peer reviewers is insatiable. Moreover, reviewing manuscripts is hard work. That's why higher impact journals not infrequently use a triage system, in which the editor does a brief review of each submitted manuscript in order to determine whether it is appropriate for the journal or has any glaring deficiencies and then decides whether to send it out for peer review.
I have the same problem with another complaint in the article, that of Keith Yamamoto:
"It's become adversarial," agrees molecular biologist Keith Yamamoto of the University of California, San Francisco, who co-chaired the National Institutes of Health 2008 working group to revamp peer review at the agency. With the competition for shrinking funds and the ever-pervasive "publish or perish" mindset of science, "peer review has slipped into a situation in which reviewers seem to take the attitude that they are police, and if they find [a flaw in the paper], they can reject it from publication."
He says that as though that were a bad thing. There is no inherent right to publish in the scientific literature, and papers with major flaws should be rejected. How major or numerous the flaws have to be to trigger rejection comes down to the policies of each peer reviewed journal. Don't get me wrong. I'm not all Pollyannaish, thinking that our current peer review system is the best of all possible worlds. Improvement in the system can only be good for science, if true improvement it is, and there are some good suggestions for improving peer review in the article.
Perhaps the most pernicious problem in peer review is the problem of reviewers with a bias or an axe to grind. To attack this problem, some journals are trying to eliminate anonymous peer review. The idea is that, if everything is completely open and transparent, with the peer reviews being "part of the record," so to speak, reviewers will be more careful and more fair. I can see the appeal of this change. A reviewer is less likely to "be a dick" if he or she knows that the review will be in the public record, for all to see, or at least that the manuscript authors know who the peer reviewers are. Personally, I have a problem with this, mainly because I think the downsides of getting rid of reviewer anonymity outweigh the potential good. For example, I rather suspect that a lot of reviewers would be reluctant to be too hard on the manuscripts submitted by big names in their field if they knew their names would be on the review. You don't want to piss off the big kahunas in your field. These are the people who organize conferences, invite outside speakers, and sit on study sections. In general, it's not a good idea to get on their bad side, particularly if you're still young and struggling to make a name for yourself in the field. For example, I'm a breast surgeon, and I know I would be reluctant to apply even deserved respectful insolence to a paper by, for example, Monica Morrow or Armando Giuliano (two very big names in the field) if I knew they would know who was reviewing their papers, even if the paper I was reviewing was obviously crap.
Personally, I like the idea expressed here:
Frontiers journals are trying to find a balance by maintaining reviewer anonymity throughout the review process, allowing reviewers to freely voice dissenting opinions, but once the paper is accepted for publication, their names are revealed and published with the article. "[It] adds another layer of quality control," says cardiovascular physiologist George Billman of The Ohio State University, who serves on the editorial board of Frontiers in Physiology. "Personally, I'd be reluctant to sign off on anything that I did not feel was scientifically sound."
As would I.
Another idea I've proposed before is to go for full anonymity. In other words, reviewers are anonymous to the authors of manuscripts, and--here's the change--the authors are anonymous to the reviewers. One advantage to such an approach is that it would tend to alleviate any effect of personal dislikes or even animosity, and it would "take the glow" off of big names submitting papers, hopefully making it less likely that reviewers would give a weak paper a pass because it came from a big name lab. On the other hand, in small fields, everyone knows what everyone else is doing; so anonymizing the manuscript authors would often not hide the identity of the authors.
The last two problems with peer review discussed by this paper are highly intertwined:
- Peer review is too slow, affecting public health, grants, and credit for ideas
- Too many papers to review
The first of the two problems above is largely a function of the second. As I pointed out above, the appetite of journals for peer reviewers is insatiable, and peer reviewers are not paid. They're expected to do it out of the goodness of their hearts, because it's service back to the community of science. True, peer review activity counts when it comes time to be considered for promotion and tenure, but it's a lot of work for very little reward, not to mention thankless, given articles like the one under discussion, in which seemingly no one can get it right. Oddly enough, there was one suggestion that I didn't see anywhere in this article, and that's to pay reviewers for their hard work. Apparently the financial model of journal publishing won't support it.
Be that as it may, one solution proposed for this problem is to go to a model like that of PLoS ONE:
An alternative way to limit the influence of personal biases in peer review is to limit the power of the reviewers to reject a manuscript. "There are certain questions that are best asked before publication, and [then there are] questions that are best asked after publication," says Binfield. At PLoS ONE, for example, the review process is void of any "subjective questions about impact or scope," he says. "We're literally using the peer review process to determine if the work is scientifically sound." So, as long as the paper is judged to be "rigorous and properly reported," Binfield says, the journal will accept it, regardless of its potential impact on the field, giving the journal a striking acceptance rate of about 70 percent.
"The peer review that matters is the peer review that happens after publication when the world decides [if] this is something that's important," says Smith. "It's letting the market decide--the market of ideas."
This approach has also proven successful, with PLoS ONE receiving their first ISI impact factor this June--an impressive 4.4, putting it in the top 25 percent of the Biology category. And with a 6-fold growth in publication volume since 2007, Binfield estimates that "in 2010, we will be the largest journal in the world." Since its inception in December 2006, the online journal has received more than 12 million clicks and nearly 21,000 citations, according to ISI.
I realize that my experience is anecdotal, but among the worst reviewer experiences I ever had was submitting a manuscript to PLoS ONE. In my case, at least, the reviewers were every bit as brutal as any I have ever experienced, which I found odd, because the manuscript I submitted had been all but accepted at an excellent cancer journal. The sole reason it wasn't accepted is that the reviewers wanted animal studies, and I didn't have them, nor did I want to delay publication to do them. So I submitted to PLoS ONE, believing its mantra that it's all about the scientific merit of the paper, only to have my manuscript rejected with extreme prejudice. I later reformatted it for another journal and got it accepted after one round of revisions, at a journal with an impact factor significantly higher than PLoS ONE's. Maybe my experience was anomalous, but I don't buy that PLoS ONE represents the savior of anything, or even that much of an improvement over traditional publication methods. Certainly, I doubt I'll submit anything to PLoS ONE again for a long time, if ever.
More intriguing is the concept of letting authors take their peer review with them when they resubmit their manuscript to a different journal after rejection. When I first heard of this concept, I was quite skeptical. After all, if your paper was rejected, chances are that the reviews weren't that positive or that they were, at best, lukewarm. Personally, I can say unequivocally that after I've had a paper rejected by a journal, the last thing I want is to have to show the next journal to which I submit my manuscript the crappy reviews that I got the first time around. Why on earth would anyone want that? I want a fresh start; that's why I resubmit the manuscript in the first place! Peer reviews from a journal that rejected my manuscript are not baggage I want to keep attached to the new manuscript as I submit it to another journal.
In the end, peer review is the mainstay of scientific publishing. While it has a great deal of difficulty detecting fraud, it can generally detect bad science. No one claims that the current system is perfect or even that it doesn't have a lot of problems, some of them serious. However, the cries that "peer review is broken" strike me as a perennial complaint without that much substance. As scientists, we can and should do whatever is feasible to shore up the peer review process, and we shouldn't be afraid of trying out new models of peer review, such as some of the models described in this article. Just don't throw the baby out with the bathwater. Peer review may have significant problems, but it works surprisingly well, given its ad hoc nature, and it's incumbent upon those who would overthrow it to show that the systems that vie to replace it would result in better science being published.
Agreed with what you have written above.
However, in my chosen field (herpetology), there are some shocking examples of papers published in supposedly peer reviewed journals where authors have clearly perverted the process through friendly editors, reviewers, or the like, in a process they have been able to rort or rig in their favour to get demonstrably bad and dishonest work published.
In the two journals I have edited, the review process (what the reviewers do and don't do, and how they are to conduct their tasks) is spelt out in print and published for all to see.
This makes the process more transparent for authors, editors, reviewers, and readers alike, in that they know exactly what kind of quality control is and isn't being done.
One of the journals in which this process was used did not market itself as peer reviewed, even though it was, while the other is promoted as peer reviewed.
I can give examples of such cases, but I am sure there are many I am also unaware of.
Put simply, I think the practice of publishing the rules reviewers work under should be more widely adopted, to make a generally successful process better.
ALL THE BEST
The reviewers are supposed to prevent errors (flaws) from getting through to publication. Perhaps they should be viewed as editors. They are supposed to make suggestions for revisions that will make the paper appropriate for publication, but an editor is also supposed to point out when something is not appropriate.
It is odd that the sentence uses the ever-pervasive "publish or perish" mindset as a criticism of peer review, as if peer review created that mindset.
Quite the opposite. If "publish or perish" is ever-pervasive, then the importance of strict peer review becomes even more important. It is a mistake to lower standards in response to higher volume. It is not an uncommon mistake, but it is a mistake.
The plethora of papers and shortage of peer reviewers mean that I have quite often had papers reviewed by people who obviously know less about the subject than I do, or who have expertise in the content area but absolutely no experience with the methods we used. Comments in these cases can be extremely annoying. On the other hand, if the reviewers the editor can come up with don't understand what you did, many readers probably won't either, so you do need to provide more background information and explanation. Or maybe you have the wrong journal.
Another major annoyance is when a reviewer has some pet idea and insists that you need to cite so and so -- probably a collaborator, meaning the reviewer is a co-author -- or consider the blemophery theory -- probably his own brilliant idea -- which in fact has nothing to do with your work. It's usually counterproductive just to say, "that's really not relevant"; you end up having to incorporate or acknowledge it somehow to get the paper published on resubmission, even if it ends up distorting your argument.
I don't really have any solutions, however, except that peer reviewers shouldn't behave corruptly.
What are your thoughts on this solution to "the peer review crisis"?
Jeremy Fox and Owen Petchey recently published this article in the Bulletin of the Ecological Society of America suggesting a "PubCred" banking system in which one must rack up credits reviewing submissions in order to be allowed to submit papers. They're suggesting all journals buy into the idea, and people are considering it as a viable option.
They've even got an online petition up to encourage backers to commit to their ideas.
"Another major annoyance is when a reviewer has some pet idea and insists that you need to cite so and so -- probably a collaborator, meaning the reviewer is a co-author -- or consider the blemophery theory -- probably his own brilliant idea -- which in fact has nothing to do with your work. It's usually counterproductive just to say, "that's really not relevant"; you end up having to incorporate or acknowledge it somehow to get the paper published on resubmission, even if it ends up distorting your argument."
Sometimes when I'm reading a journal article, I like to try and spot the paragraphs that were most likely inserted to placate a reviewer. A paragraph stuck in the middle of the introduction with very little relationship to anything before or after about a largely irrelevant theory? Bingo!
I vaguely recall, many years ago when I did some peer review (I got out of academia, so I don't anymore), that the manuscripts were blinded as to authorship. Might have been a peculiarity of the particular journals, I don't know. I do know it was generally very obvious whose research lab the papers were coming from, if only by looking at the citations!
The problem is that many journals' triage systems are run by relatively inexperienced scientists, and therefore the status quo is enforced because they don't know the fields well enough. I won't go the "failed post-doc" argument route, but the fact is they do not have enough experience.
Just brainstorming here... it seems like reviewer anonymity is a bad thing for papers by authors with few publications, but a good thing for papers by authors with a lot of publications. Perhaps journals could have some kind of metric that determines whether you get to know who your reviewers are?
I'm mostly joking with this next suggestion, but it does seem to have some cosmic justice: If you, as a peer reviewer, have more publications than the author(s) of the paper you are reviewing, then you have to disclose your name, otherwise you don't. heh...
I do know it was generally very obvious whose research lab the papers were coming from, if only by looking at the citations!
This is why blinding the article authorship to the reviewer won't work, at least in my field. The field is small enough that everybody who has been around for a while knows who everybody else is.
Frontiers journals are trying to find a balance by maintaining reviewer anonymity throughout the review process, allowing reviewers to freely voice dissenting opinions, but once the paper is accepted for publication, their names are revealed and published with the article.
One of the leading journals in my field, the Journal of Geophysical Research, has a similar system, but the reviewer can opt out of revealing his name. It does add some credibility when the reviewer says "publish this paper, and you can quote me on that."
I don't know if this is standard or a function of the particular journals I have submitted to, but requests for certain reviewers to review, or not to review, a manuscript can be made. That is not to say that you are going to get what you want. I agree with the idea of requesting that certain reviewers not review a manuscript, but I'm not so sure about asking for particular reviewers, as that fosters cronyism.
James Sweet,
That's the Matthew Effect.
http://en.wikipedia.org/wiki/Matthew_effect
The most annoying thing, particularly when submitting to high profile journals, is that it is very easy for reviewers to demand more. It is rare that a study ties everything up in a bow, leaving no unanswered questions, so it is easy for a reviewer to demand that you answer those, too. Sometimes, I suspect that a reviewer is doing this simply because he couldn't find anything wrong with the results presented, and felt like he needed to say something negative to maintain his credibility as a critical scientist.
On the other hand, I don't think I've ever had publication blocked by reviewers. Sometimes publication is delayed. We might have to do additional work that I'd have preferred to put in a follow-up study (which often results in authorship issues, in which two people have made major contributions but only one can be listed first). Or I may have to go to a lower-profile journal than I wanted. But the work always gets published. And in almost every case, the final paper benefits from the reviews I've received.
I agree with this wholeheartedly. I used to get all bothered by reviewer comments, especially those where I felt the reviewer "didn't get it" or "missed the point" or otherwise didn't have enough expertise to understand it. Then I realized, isn't that the whole point? Of course I know more about this than everyone else; that is why I am writing the paper, and it is a mistake to expect everyone to understand it all a priori. The whole purpose of my article is to help those who don't know this (which is pretty much everyone) to understand it. So if they "didn't get it" or "missed the point," then that is my fault, not theirs. I need to write it better.
Science Mom, the journals I submit to also ask for reviewer selections, but I have heard that the "do not review" reviewers sometimes actually get the manuscript, on the theory that they would know a great deal about what you are talking about and would be able to rip your manuscript a new one. One of my favorite moments in grad school was watching a faculty member rage about how they had given the manuscript to his nemesis (he actually used the words "nemesis" and "arch enemy" and cussed loudly in multiple languages) even though he had put their name on the "do not review" list.
I have had reviewers either purposely reveal their names in their reviews, or make a mistake like having the track changes feature show their initials. But I'm not in a contentious field, either.
One comment about the "reviewers should only evaluate the acceptability of the methodology and leave the quality assessment to the community" approach taken by the bio journal that Orac mentions (I know it's not quite that simple, but to a large extent it is). Unfortunately, reviewers unwilling to stand up and criticize garbage studies accepted on the grounds of "well, there's nothing technically wrong with it" have led to the downfall of some journals, which have been overrun by so much crap that the good stuff gets lost. A reviewer has to be willing to object to papers on the grounds that a paper is uninteresting garbage with no sophistication or intellectual merit at all, and does not advance the state of knowledge except in the details.
When I see a paper that is a trivial topic using unsophisticated methods, I'll call it out. My usual phrase is, "This looks like a study that could have been carried out by a high school student over a weekend using their laptop."
There could be nothing technically wrong with it, but when papers like that become rampant, they destroy a journal.
The "anonymous author" approach does not work well. When I get a paper to review, the author (PI) is pretty obvious from the topic and references to previous work.
@Pablo #14
Elitism! Just because you want your WESTERN science to "mean" something and "have an impact" on "people's lives" doesn't mean you have the right to prevent me from padding my CV with dead-end tripe!
ELITISM! I refudiate!
@LovleAnjel, LOL at your faculty member, and I can relate. I didn't think I was in a contentious field either, but it does have a core of investigators rather well known to one another. One of my experiences was with a particular submission that overlapped the investigations of a very well-known investigator, except that mine was more thorough in its parameters, as I had the benefit of seeing what was done before me. Boy, did I get ripped; the actual corrections required were insignificant, but the comments were brutal. Needless to say, I just re-submitted to a different journal (requesting the non-involvement of the aforementioned investigator), and the comments were very fair and the corrections easily remedied.
I think situations such as this demonstrate the weaknesses of peer review when peer reviewers try to quash the 'competition'.
Many peer-reviewed conferences in computer science use double-blind review. As Orac says, experienced reviewers can usually guess who the authors are. And it feels pretty silly to blank your name when submitting a follow up paper that obviously cites yourself. Still, maybe it's a small improvement.
I have had reviewers either purposely reveal their names in their reviews, or make a mistake like having the track changes feature show their initials.
And it is sometimes possible to identify the reviewer through the style and/or substance of the comments they make. For example, if the reviewer admonishes you for not citing the work of Fulano et al. (2002, 2004, 2005, 2006, 2008), then it is reasonable to guess that Dr. Fulano reviewed your manuscript. I've played the guess-the-reviewer game with some success, and I know that many of my colleagues also play.
The second half of the line I quoted is why I never use Microsoft Word to write a review of a manuscript submitted to a journal. There are too many ways to accidentally reveal your identity to the corresponding author (and some of my colleagues have run afoul of this problem). The journals I work with generally have web forms for entering the review, so I write the review in TextEdit (Notepad and emacs/vi would also work for Windows and Linux users, respectively) and copy-paste the text into the appropriate box.
I rather like this version (follow link) of the peer-review system...it can't be any worse than the current system. :)
http://community.acs.org/journals/acbcct/cs/Portals/0/wiki/PeerReview.j…
I have that picture stuck on my office wall where it meets with knowing nods by fellow workers and visitors.
Isn't there any work done on the review process by economists or game theorists?
I thought this was an interesting take on the subject. I have a ms under review at this journal right now, so we'll see if they walk the walk as well as they talk the talk :)
And sometimes you've just got to do the work. There was a time when the solar wind was a ludicrous idea in the minds of many. Then we got evidence of the solar wind. Once, saying that a bacterium caused ulcers was silly on the face of it; then we found just such a bacterium. Accepting things too easily leads to bad things, and backtracking; better to be a tad stubborn and demand evidence that demonstrates the truth of a matter.
I am convinced by the Patterson/Gimlin film with regard to sasquatch, but I can see why others are reluctant to accept it and the subject it portrays. Thanks to hoaxsters and the mulish, we need more evidence. Science is not about composing convincing arguments; science is about composing arguments based on solid evidence. When you don't have the latter, the former does you no damn good whatsoever. To paraphrase a popular saying, it's the forensics, stupid.
So keep demanding more work, but make sure you're asking for the right thing. Understand what you're dealing with, and ask for the right kind of supporting work. And know when it's time to let the world have a chance at the subject, for you never know where somebody might have the information you don't.
Orac,
A rather timely post given the Virology Journal debacle.
Peer review's broken? Nah! That's impossible. Peers never make mistakes or inject personal/political objections into their reviews. Peer reviewed anything is infallible and impervious to mistakes.
Of course it is broken. I have been saying this for a long time now.
Anytime I have an argument against big pharma or evilution or the global warming scam, liberals always say the same thing:
"Where is your peer reviewed research and proof?" My reply now is "where is your peer reviewed journal repairman?"
So offer an alternative; it's always easy to criticise, now construct.
If you believe that you, personally, are being supressed by mainstream science: let's see some journal-ready papers. Let's see what was rejected.
It's surprisingly rare to see this. Even rarer to find such a paper that doesn't immediately and obviously look terrible.
Doctor Smart: You are confusing problems that annoy researchers and cause them more work with problems of scientific accuracy.
You've never had a conservative ask you about peer reviewed research? I find that hard to believe. There are many smart conservatives who understand science. Perhaps you just label those who disagree with you as liberals.
Here's my challenge to you: find me a publication-ready paper that disproves evolution, and which was rejected.
I already constructed. Wanna see?
@Science Mom
You just asked a creationist, antivaxer, and global warming dumb ass his opinion on how to fix peer review.
Find me a paper that disproves creation.
Yes, there are some people out there who parade around as conservatives who will stab real conservatives in the back. My former favorite conservative, Ann Coulter, has done this. Her treachery and treason to conservatism should not be overlooked. Never again will I buy her books, only burn the ones I have. She has betrayed the movement and is no longer worthy or allowed into the movement. I just wonder if Sean Hannity will condemn her actions.
Remember George Bush? Traitor. He paraded around as a conservative, but once in office he became some silly little moderate-to-left winger. He betrays his fans too.
Real conservatives stick with their beliefs and hold on to them through thick and thin, even if it means losing your job, your house, your family, and even your life. Some people are too weak to call themselves conservative. Change can be a good thing, but it can also be a very destructive thing. Therefore, they are liberals or liberal sympathizers.
Only people who believe in the global warming scam are dumbasses. The rest are normal sane people. Yes, I am a creationist, a normal sane person.
Yes I am anti-killer/population control vaccine.
I refuse your killer vaccines. I would rather take my chances with the disease than your cure. It is safer that way. Global warming is a scam. Evolution is a scam. However, I cannot figure out which one is the bigger hoax. Maybe you can enlighten me on this?
Adam_Y @ 29:
Oh, you haven't scratched the surface yet. Follow his link to his Web site and you'll find Obama & Hitler comparisons, Bilderberg conspiracy crap, a whole raft of crazy.
Insofar as I can see, the only difference between this guy and the Mabus is that this guy can type without bursting into ALL CAPS!!!!!
Seriously, people. Stop feeding the idiots. If you follow the link he posted, it goes to a website written by an electronics technician.
Ooh. Mr. Adam Y just read some important material and decided to entice you all with his knowledge of the force.
What, I guess electronics is no longer a "science"? Who is the idiot now?
Ooohkaaaay, just read your subsequent comments and I can only suggest a soothing cup of tea (in a plastic cup), hopefully enjoyed in the presence of a large attendant with restraints at the ready.
@Adam_Y, I don't think 'dumbass' even begins to adequately describe (backs away slowly and maintains eye contact).
I have a degree in electrical engineering you idiot. Just because you know the mechanics of it doesn't make you a freaking scientist. And even then it doesn't give you the ability to critique any type of science you wish.
Oh dear God. Tell me it's not so. I know electrical engineers, and all of them do things completely bass ackwards. They are smart, but have little common sense. I may not have the answer you seek, but my URL here does.
Yes, I can critique evolution or global warming if I so choose, since both of them are massive scams. Technicians make better workers than engineers because engineers think too much. We techs actually test and use real methods instead of formulas to solve problems. Formulas are good for designing things, but trial and error works best in the tech department. Working with the same equipment every day, I can tell you that keeping notes on problems and what fixed them eases the day.
Engineers can come up with some weird stuff sometimes. Using common sense rather than book sense is often the best approach.
Nor does being an electronics technician give you any kind of special knowledge about biology. Especially if you cannot understand what is reported on Fox News. Now which vaccine did Desiree Jennings get last year? And if she was permanently paralyzed why is she walking and playing with her dog?
Obviously. Something the MM/DrS/troll does not indulge in.
Actually, it leaves many more questions, such as how one fails so badly at trolling.
Well, if it isn't my personal little turd stalker Chris. What took you so long to come on to me this time?
What is a fart, but the lonely cry of an imprisoned turd?
I fail at everything and am very proud of it. Thank you very much.
I may be bad at trolling, but I am excellent at turd farming. Wanna play?
I bet chris is one of those annoying turd burgler people.
turd burgler
When you are taking a crap in a public restroom and someone tries to come into your stall; even though it's locked, they try forcing their way in because they don't know you are there.
Chris my url will explain your curious obsession with my turds.
Just out of curiosity would Doctor Smart be the first person to be banned from this blog?
No, I am sorry, Adam, but there have been at least three people who have been banned.
The first was a "Generation Angel" who said regrettable things when Orac's dog died. The second was a very clueless and cruel anti-vax troll. The third was an idiot who posted a link that threw X-rated photos onto your screen (including those using Macs, like Orac!).
Orac does have his limits. This clueless troll has not come anywhere near them. He just gives us entertainment with his idiocy.
I would also like to say that not every electronic technician is as stupid as our troll. My nephew is one, and presently works as a technical supervisor at a television subscription service (if you call for technical support for your equipment, you would get someone who is under his supervision). He is very bright and knows his stuff. Though it may be because he got his training from his service in the US Navy.
Oops, I dropped a word: It was a "Generation Rescue Angel."
His name is John Best. The second one was a "Dawn", who became known as "Crazy Dawn."
Chris...I thought crazy evil Dawn (not to be confused with the good Dawn) had said some truly horrific things regarding Orac and his mom-in-law when she died which was why he banned her. Maybe I have my cause and effect mixed up, but I remember reading her comments and wondering what sort of unbalanced person would think it ok to spew such bile. It made you want to nuke her computer and forever keep her off the web, not to mention rescue her children before they're poisoned by her twisted hate. Orac briefly laid into her and then told her good-bye. It really was good riddance.
Also made me long for mandatory sterilization laws---or at least have wanna-be parents take a course and obtain a license before they have children---you need to get a license for pretty much everything from building a fire to driving a vehicle (paddling a canoe), so why not for one of the biggest and most important decisions you'll ever make that has far-reaching consequences if you screw it up.
@ 27 Gopiballava writes:
. . .
@ 28 DS responds with:
Clearly incapable of recognizing anything publication-ready.
Apparently, DS is nowhere near the first to make this mistake.
Orac- As a PLoS ONE section editor, I was dismayed to read about your experiences with PLoS ONE. What you described should not have happened. Certainly, if you were able to get your paper published in another journal after a round of review, it should have been published in PLoS ONE. Pete Binfield once said, "if a paper we rejected gets published in another journal, either we made a mistake or they did".
It sounds like you were the unfortunate recipient of a bad set of reviews, which will happen no matter what journal you go to, and I don't know of any way to completely eliminate that. We do have an appeals process for such cases, though.
Anyway, I hope you will give us another try.
Ivan
Papers that interest me often involve rather complicated statistical models applied to large data sets in biomedical fields. Most, I repeat, most of the papers I read have methods that are inadequately described: I cannot follow or reproduce what was done. It is common that the data are not even provided. It is common that complete results are not provided. All of those are unacceptable in my opinion, but apparently others found them OK. Stop doing that!
Fudging of various kinds seems common. There are dozens of flavors. Sometimes we see smart and sly cheaters, other times it is just error, or using self-fooling methods without realizing it (distinguishing which is difficult). The major cause of this is poor review. I am talking about papers in very prestigious journals.
Take a look at some of Baggerly and Coombes' work on SELDI-TOF or array-based mRNA assays, e.g., "Microarrays: retracing steps," Nature Medicine 2007; 13(11):1276-7.
The reviewers are 1) too tolerant of inadequate description, documentation, and data availability, 2) technically incompetent, and/or 3) not spending enough time on the review. It can take me days to adequately review a serious paper with heavy data. I suggest that good journals publishing such papers retain the necessary technical expertise. Complete review is impractical due to cost. However, I do claim that it may not cost too much to demand that descriptions of methods be sufficient, and that data and complete results be available and in good order; we are nowhere near that yet.
On a different note, there have been discussions of public comment after publication: the kind of thing where I can say that a result is crap, or fudged, or circular, or inadequately described, after publication. Briefly, the problem there is that if comments are anonymous, cranks can be too free with lies in their complaints, but if they are not anonymous, I might face repercussions. Sorry I'm not spending time to give links to some of the best discussions I've seen. The reference I gave above shows a case where, even with highly public comment, what really happened is still not clear, despite a rather high-impact journal, which is depressing.
David J. Andrews:
Thanks, I forgot about that. I do remember she kept trying to come back.
you mean Daniel?
I think you are a little too dismissive of the censorship charge, Orac.
Even if not the intent of the peer review process, what if it has that effect?
In physics or mathematics, even if the reviewer couldn't identify the author from the topic alone, they could punch up the arXiv and probably find a preprint of the article. Heck, some journals let you submit papers by sending in their arXiv reference numbers, so whatever the reviewer sees, they'd be able to find in e-print form, with the author's names right up top.
Oops, yes I did! Sorry Daniel!
@chris
can be very confusing around here :P
cervantes made a comment about people reviewing papers who have less expertise than the author.
That is a potential problem in scientific publishing, but to change the subject here somewhat, I shall mention recent experiences here of another perversion of this theme.
Recently, the law enforcement bureaucracy has tried to legislate our reptile education business out of business so to speak.
They impose idiotic rules and restrictions on the basis of "science" they know to be false and that is easily demonstrated to be so.
Then, with the aid of biased judicial officers with NO knowledge of the subject, we get judgements made against us that fly in the face of all logic.
I am in Australia, but in the USA, S373 (AKA the constrictor ban bill) is similarly based on dodgy science (not peer reviewed in that case).
All the best
Raymond Hoser
Peer review can be pretty random. I'm currently revising a paper that was rejected by the first journal I submitted it to. Reviews were...mixed. One reviewer said our method was incurably flawed and the paper shouldn't be published, end of story. The second was obviously recommending rejection with encouragement to resubmit, since s/he made a lot of specific suggestions on how to improve the paper (thanks, anonymous reviewer!). The third suggested acceptance without change. Go figure.
Having been a reviewer, I think that the editors are more likely to reject a paper outright than the reviewers. I have several times put a lot of work into trying to constructively point out flaws in a paper and suggest resubmission only to have the editor reject it outright. It's annoying.
One suggestion, though I'm not sure how to implement it: right now, reviewing papers is something one does as a service to the field. It's a nice compliment to be asked to review, but it doesn't get one's own name in print, bring in grant money, or educate any students... so it's kind of a time sink in terms of advancing one's career and research. Maybe if the papers reviewed and the quality of the reviews were part of what universities considered in tenure review, it would be easier to find reviewers, and they'd put more effort into it?
On several occasions the author of a paper I have reviewed has told me that they knew I was the reviewer. And then we had some drinks.
I have occasionally been able to guess who reviewed one of mine. And again, we had some drinks.
In each case this has applied for accepted and rejected work.
Partly this is due to being in a rather specialized and small community, and perhaps we're getting too inbred. But I have never tried to hide my identity in a review, and on occasion have even felt it necessary to essentially identify myself to either clarify or address a misinterpretation of my cited work.
I have never written anything in a review that I would not want my name to be attached to. But I have certainly received a few where I felt someone was hiding behind the tattered cloak of anonymity to say things they would never say otherwise. (And I'm not a big fish that needs to be feared and attacked under cover of dark.)
Just a quick note to say how much I like the new Frontiers system. Two advantages that have not yet been mentioned: (1) You can remove your name from the review, effectively saying that you do not think the authors can fix the paper to the point where you will sign off on it. This is equivalent to rejecting the paper from the journal. If no one that the editor feels is qualified will vouch for the paper, then it doesn't get published. But if someone else feels that the paper should get published, then it's their name on the reviewer list, not yours. (2) Reviewers get some credit/blame for the published papers. I have seen several papers that I decided to work through in detail because I saw who the reviewers were.
I have just come across a typical case of an absurd peer review process. Nearly two months after submission, I got an answer from the EIC that my manuscript had been recommended "Publish Unaltered" by one reviewer and "Publish after minor changes" by the other. Nearly a month after I submitted my revised manuscript, I got a letter from the EIC suggesting that my manuscript could be published after the minor changes suggested by the reviewers. But I was taken aback to find a completely new set of comments, entirely different from the previous ones (which I had already answered). I responded to all those points and submitted a second revised manuscript. A month and a half later, I got a letter from the journal's EIC stating that "your manuscript could not be published due to some major comments." I was very annoyed to see yet another all-new set of comments (of lower quality and not relevant to my work), completely different from the previous two sets. I strongly believe that such ridiculous comments seriously hamper the thought process. This must stop.
Law S - remember, you are not getting "a letter from the journal" you are getting a letter from an editor. If you have concerns about the process, call the editor in charge of your manuscript. He/she can describe their thought process and reasoning behind the way the paper was treated.
These are not automatons. These are people who are trying to make informed decisions. We don't always agree with their decisions, but in that case, the most important thing to do is to figure out the basis for the decision in the first place.
Come up with a list of specific questions that you want to ask (not just, "why did you reject my paper?") and call them up.