I had a weird experience dealing with journals and peer review a little while ago. Recent discussions of the CRU e-mail hack (especially Janet's) have made me think more about it, and wonder about how the scientific community ought to think about expertise when it comes to peer review.
A little while ago, I was asked to be a reviewer for a journal article. That's a more common experience for people at research universities than for someone like me, but it's still something that's part of my job. I turned down the request because I didn't feel qualified to review the paper. That wouldn't have been weird, except that I couldn't figure out why the editors would have chosen me, out of all the structural geologists in the world, to ask to be a reviewer. I mean, I had written a blog post about a related paper, but...
Did a journal ask me to be a peer reviewer because I had written a blog post about a related piece of research?
The thought was horrifying enough before the blogosphere started discussing the CRU e-mails. But in these discussions of whether climate researchers were trying to unethically interfere with the process of peer review, it may be worth discussing how reviewers are found. (My experience is relevant because it was with a journal that publishes a broad range of geoscience research, including climate change.)
I think about my expertise (and lack thereof) a lot, in part because my job involves developing expertise in other people (my undergrads). I may have a Ph.D. in science, I may teach at the college level, I may go to scientific conferences, I may have published my own work in peer-reviewed journals. But that doesn't mean I'm an expert in most things. In fact, as I was thinking about the comments on last week's post about getting undergrads to critically read the literature, I started to categorize the way I think about scientific papers:
Level 1 - I have no clue what the abstract says, even in journals like Science or Nature (which supposedly publish papers that are aimed at a broad audience). I get my information about these subjects from science journalists or other popular writers, not directly from research articles. Anything dealing with genomes or quantum mechanics falls into this category for me.
Level 2 - I can understand the point of the abstract, and the basic reasoning of the paper makes sense. I probably don't know much about the methods used, but I've seen other papers that have used them, or they're based on concepts I learned in college, so they make sense to me. Most geoscience papers (including climate change work) fall into this category for me.
Level 3 - I understand the paper (including methods and context) well enough to explain it to a less-experienced audience (like a class of undergraduates). These are the types of papers that I feel comfortable blogging about. They include papers in structural geology, tectonics, metamorphic petrology, regional geology of places where I've worked, and some mineralogy, igneous petrology and earthquake studies. (I've taught all those topics in courses for geology majors.)
Level 4 - I've got enough expertise to see problems with the methods or with the conclusions. That means I've got experience with the topic myself - maybe I've done work on those rocks, and I know things that the authors neglect to mention. Maybe I've got experience teaching a course discussed in a pedagogy paper. Maybe I've used a technique enough to recognize when another scientist is misusing it, or misinterpreting the results. These are the papers that I would be qualified to review.
I hope that undergrads come out of my junior-level courses able to make sense of the basic reasoning of papers in the field (level 2), and I want senior thesis students to understand work related to their work well enough to explain it to other undergraduates (level 3). So one of the goals of my junior-level writing class is to push students towards a higher level of expertise in their chosen sub-field (which may not be mine, and yes, that makes the class especially challenging to teach).
I don't expect my undergrads to become qualified to review research articles, however, and I don't think journal editors would consider them qualified. And that brings me back to my concern. Why would an editor think I was qualified to review a paper when my C.V. (which is online, and which includes a list of all the papers I've ever published) doesn't indicate any research expertise in the subject? Did they really judge my expertise based on my blogging?
I turned down the request. But there are plenty of bloggers who write about scientific topics who might be delighted to participate in peer review. Some of the people who don't think humans affect climate, for instance. Would they give a positive review to a paper whose methods they didn't understand, because they agreed with the conclusions?
If I were an editor, I would find it difficult to figure out where to draw the line on expertise. The geosciences are filled with people who have switched specialties throughout their careers - climate scientists with degrees in Applied Math, for instance, or planetary geologists who used to do field work on Hawaii or Antarctica. And even reviewers with appropriate expertise can be unfairly harsh when a paper disagrees with ideas that they like (or easy-going when a paper supports their view). And what's an editor to do when faced with a paper that is genuinely ground-breaking - where experience with the topic might not exist? (Actually, I've got a mental list of Big Thinkers who would probably be fair and insightful reviewers of wild ideas. I bet editors do, as well.) So I'm somewhat sympathetic to the journals. But I'm also concerned that, given the politics and media noise around climate change, they may feel pressured to find sympathetic reviewers for mediocre-to-bad papers. (More pressured than if, say, an Expanding Earth paper* was submitted to the same journal.)
And if they're using the blogosphere to find potential reviewers? Ack.
Is it ok to talk about these kinds of issues in the scientific community? Or is that an unreasonable interference in the process of peer review?
*Yes, there have been legitimate geoscientists who believed the Earth is expanding. The most famous, S. Warren Carey, is deceased, but he was still alive and publishing while I was in grad school (and received the GSA Structure/Tectonics division's Career Contribution Award in 2000). He wasn't publishing in highly respected journals any more, however. If there had been a lot of political attention given to people who think subduction has no evidence to support it, would Carey's late-career work have been published in mainstream journals?
And if they're using the blogosphere to find potential reviewers? Ack.
My initial feeling is that all peer review ought to have a section asking reviewers to disclose their relevant expertise and what portions of the paper they are able to assess as a peer in that research area.
Many journals have a space in the reviewer's report for you to indicate your expertise in assessing the article. Does this one?
Even if such a section is absent, I just write the equivalent in a quick cover letter expressing any concerns. Seems like the right thing to do, to my mind.
Do you know either the handling editor or the author? Is there some mechanism by which you could have been suggested as a reviewer? When you declined, did you give them the name of someone more appropriate, or was it so far off that you don't even know who works in that field?
No, I didn't know the editor or any of the authors. And I didn't know a more appropriate reviewer to suggest, unfortunately. (I could have looked in Georef, but the editors should be able to do that, too.)
On the other hand, my blog post shows up on the first page of the first Google search I did on a related term. I certainly hope that journal editors aren't using Google to find reviewers...
Or even a google scholar search, which is exactly as easy as a google search...
Editors use google. The same as everybody else.
Do scienceblogs turn up in Google scholar?
Editors use google. The same as everybody else.
Yes, but I would hope that they would do more critical evaluation of their search results than the students in my intro class do!
And I don't know if Sb turns up in Google scholar. I don't use Google scholar much. (I prefer to use Georef through my library - I've got a lot of practice using the various types of searches, and we've got decent links back to the library's collection, so I can figure out whether I've got access to the articles or not.)
Let's put it this way - I sure hope Sb (or blogspot, where I spent more time) doesn't show up on Google scholar searches.
I've also refereed papers about things I'm not an expert in. I took the view that as long as I could say something reasonable, it's OK. I always have the option of admitting my ignorance in my comments.
I have declined to review, when it's clear I can't say anything non-trivial.
It's very likely that the authors suggested you. They may have found your blog; they may even have discussed it in seminars. They may like your writing and think that you're likely to agree with their paper.
Any editor depends on a large network of reviewers. If you can't think off hand of a more appropriate reviewer, then the editor's current network may only include one or two people with more relevant expertise. Maybe she already asked both; maybe they've both given reviews for other papers within the last few months. It's unfair to tap the same people for unpaid work again and again.
To be sure, you were right to refuse to review if you felt you didn't have the relevant expertise. From the editor's point of view, though, the simplest way to assess whether you fall in category 3 or 4 (and to additionally see what kind of reviewer you are) is to ask you to review a paper.
Some of the people who don't think humans affect climate, for instance. Would they give a positive review to a paper whose methods they didn't understand, because they agreed with the conclusions?
I think the inverse is the worse case. Lazy reviewers are a lot more likely to go with the consensus than to buck it. "Peer pressure" can be more influential than peer review.
I got my first request to review a paper when I was still a grad student (addressed to "Dr. "), so I wrote back to the editor saying that I was happy to review the paper (it was in my field) but pointing out that perhaps I wasn't sufficiently senior. The editor was happy to have me do it. It was a good paper, so it was an easy task... unlike a more recent 100-page monstrosity of a review paper that was the first paper I have ever had to give a "recommend not publishing" review for...
On John Hawks' statement: while I might agree that a paper that agrees with the consensus is more likely to slip past lazy reviewers, I would argue that those papers mostly end up disappearing and so don't matter. It's the politically contentious papers that are total junk that get published that become a headache for the entire community, because skeptics keep citing them over and over as "published and peer reviewed and therefore quality work"! Think: G&T, Khilyuk and Chilingar, Beck, etc. Some people do take the time to rebut them: I suggest reading the rebuttal published by the journal: http://www.springerlink.com/content/36w570322514n204/ , which is one of the most scathing rebuttals I've ever read in a peer-reviewed setting... but then, the paper they were rebutting included fine logic like stating that human CO2 emissions were clearly insignificant because they are orders of magnitude smaller than the outgassing of the earth's crust - over the 4.5 billion year lifetime of the earth. In a normal field, papers like these would be an embarrassment, swept under the rug. In the climate field... not so much.
Yes, there have been legitimate geoscientists who believed the Earth is expanding.
Not just the Earth - they think that all planets are expanding. There are some kooky people out there anyway.
Back on topic, I'm divided...on the one hand, the Scientific Method is robust: even if situations arise like this one, where a bad paper could get published or a good one rejected, the truth will out. People trying to replicate the results of the bad paper will find that they cannot...or the frustrated author will turn to a journal of slightly lesser impact factor, and the deserving paper will get published. On the other hand, even if science cannot be corrupted for long by such events, the effect they have on society is likely to be profoundly detrimental. Most people vote on the issues of the day, and if they don't trust scientists (and have been subjected to copious amounts of specious spin via the good old mainstream media, not to mention election attack ads), they will vote for politicians who are ignorant of and indifferent to science.
You know what, in the absence of other issues, I don't think I would worry. Unfortunately, these days private funding can buy you tailor-made results, sometimes even published in a tailor-made journal! Being asked to review a paper because you wrote about a similar paper in a blog post is just one tiny crystal in the groundmass of a porphyritic andesite...there are 3-cm plag crystals to take care of before we worry about this.