Two recent events put in stark relief the differences between the old way of doing things and the new way of doing things. What am I talking about? The changing world of science publishing, of course.
Let me introduce the two examples first, and make some of my comments at the end.
Example 1: Publishing a Comment about a journal article
My SciBling Steinn brought to our collective attention a horrific case of a scientist who spent a year fighting the editors of a journal, trying to have a Comment published about a paper that was, in his view, erroneous (for the sake of the argument it does not matter whether the author of the original paper or the author of the Comment was right - this is about the way the system works, er, does not work). You can read the entire saga as a PDF - it will make you want to laugh and cry and, in the end, scream with frustration and anger. Do not skip the Addendum at the end.
Thanks to Shirley Wu for putting that very long PDF into a much more manageable and readable form so you can easily read the whole thing right here:
See? That is the traditional way for science to be 'self-correcting'.... Sure, this is a particularly egregious example, but it is the system itself that allows such a case to exist somewhere on the edge of that continuum - this is not a unique case, just a bit more extreme than usual.
Janet wrote a brilliant post (hmmm, it's Janet... was there ever a time I linked to her without noting it was a "brilliant post"? Is it even possible to do?) dissecting the episode and hitting all the right points, including, among others, these two:
Publishing a paper is not supposed to bring that exchange to an end, but rather to bring it to a larger slice of the scientific community with something relevant to add to the exchange. In other words, if you read a published paper in your field and are convinced that there are significant problems with it, you are supposed to communicate those problems to the rest of the scientific community -- including the authors of the paper you think has problems. Committed scientists are supposed to want to know if they've messed up their calculations or drawn their conclusions on the basis of bad assumptions. This kind of post-publication critique is an important factor in making sure the body of knowledge that a scientific community is working to build is well-tested and reliable -- important quality control if the community of science is planning on using that knowledge or building further research upon it.
----------snip----------
The idea that the journal here seems to be missing is that they have a duty to their readers, not just to the authors whose papers they publish. That duty includes transmitting the (peer reviewed) concerns communicated to them about the papers they have published -- whether or not the authors of those papers respond to these concerns in a civil manner, or at all. Indeed, if the authors' response to a Comment on their paper were essentially, "You are a big poopyhead to question our work!" I think there might be a certain value in publishing that Reply. It would, at least, let the scientific community know about the authors' best responses to the objections other scientists have raised.
Example 2: Instant replication of results
About a month ago, a paper came out in the Journal of the American Chemical Society, which suggested that a reductant acted as an oxidant in a particular chemical reaction.
Paul Docherty, of the Totally Synthetic blog, posted about a different paper from the same issue of the journal on the day it came out. The very second comment on that post pointed out that something must be fishy about the reductant-as-oxidant paper. And then all hell broke loose in the comments!
Carmen Drahl, in the August 17 issue of C&EN describes what happened next:
Docherty, a medicinal chemist at Arrow Therapeutics, in London, was sufficiently intrigued to repeat one of the reactions in the paper. He broadcast his observations and posted raw data on his blog for all to read, snapping photos of the reaction with his iPhone as it progressed. Meanwhile, roughly a half-dozen of the blog's readers did likewise, each with slightly different reaction conditions, each reporting results in the blog's comment section.
The liveblogging of the experiment by Paul and the commenters is here. Every single one of them failed to replicate the findings, and they came up with possible reasons why the authors of the paper got an erroneous result. The paper, while remaining on the Web, was not published in the hard-copy version of the journal, and the original authors, the journal and the readers are working on figuring out exactly what happened in the lab - which may actually be quite informative and novel in itself.
Compare and contrast
So, what happened in these two examples?
In both, a paper with presumably erroneous data or conclusions passed peer-review and got published.
In both, someone else in the field noticed it and failed to replicate the experiments.
In both, that someone tried to alert the community that is potentially interested in the result, including the original authors and the journal editors, in order to make sure that people are aware of the possibility that something in that paper is wrong.
In the first example, the authors and editors obstructed the process of feedback. In the second, the authors and editors were not in a position to obstruct the process of feedback.
In the first example, the corrector/replicator tried to go the traditional route and got blocked by gatekeepers. In the second example, the corrector/replicator went the modern route - bypassing the gatekeepers.
If you had no idea about any of this, and you are a researcher moving in from a semi-related field, and you find the original paper via a search, what are the chances you will know that the paper is being disputed?
In the first example - zero (until last night). In the second example - large. But in both cases, in order to realize that the paper is contested, one has to use Google! You cannot just read the paper itself and hope it's fine - you gotta google it to find out. Most working scientists do not do that yet! It is not part of the research culture at this time, unfortunately.
Even if the Comment had been published in the first example, the chances that a reader of the paper would then search the later issues of the journal for comments and corrections are very small. Thus even if the Comment (and the authors' Reply) had been published, nobody but a very small inner circle of people currently working on that very problem would ever know.
Back in grad school I was a voracious reader of the literature in my field, including some very old papers. Every now and then I would bump into a paper that seemed really cool. Then I would wonder why nobody ever followed up or even cited it! I'd ask my advisor who would explain to me that people tried to replicate but were not successful, or that this particular author is known to fudge data, etc. That is tacit knowledge - something that is known only by a very small number of people in an Inner Circle. It is a kind of knowledge that is transmitted orally, from advisor to student, or in the hallways at meetings. People who come into the field from outside do not have access to that information. People in the field who live in far-away places and cannot afford to come to conferences do not have access to that information.
Areas of research also go in and out of fashion. A line of research may bump into walls and the community abandons it only to get picked up decades later once the technological advances allow for further studies of the phenomenon. In the meantime, the Inner Circle dispersed, and the tacit knowledge got lost. Yet the papers remain. And nobody knows any more which paper to trust and which one not to. Thus one cannot rely on published literature at all! It all needs to be re-tested all over again! Yikes! How much money, time and effort would have to be put into that!?
Now let's imagine that the lines of research in our two Examples go that way: they get abandoned for a while. Let's assume that 50 years from now a completely new generation of scientists rediscovers the problem and re-starts studying it. All they have to go on are some ancient papers. No Comment was ever published about the paper in the first Example, though there was plenty of blogging about both afterwards. But in 50 years, will those blogs still exist, or will all the links found on Google (or whatever is used to search stuff online in 50 years) have rotted? What are the chances that the researchers of the future will be able to find all the relevant discussions and refutations of these two papers? Pretty small, methinks.
But what if all the discussions and refutations and author replies are on the paper itself? No problem then - it is all public and all preserved forever. The tacit knowledge of the Inner Circle becomes public knowledge of the entire scientific community. A permanent record available to everyone. That is how science should be, don't you think?
You probably know that, right now, only BMC, BMJ and PLoS journals have this functionality. You can rate articles, post notes and comments and link/trackback to discussions happening elsewhere online. Even entire Journal Clubs can happen in the comments section of a paper.
Soon, all scientific journals will be online (and probably only online). Next, all the papers - past, present and future - will become freely available online. The limitations of paper will be gone, and nothing will prevent publishers from implementing more dynamic approaches to scientific publishing - including commentary attached directly to the papers themselves.
If all the journals started implementing comments on their papers tomorrow I would not cry "copycats!" No. Instead, I'd be absolutely delighted. Why?
Let's say that you read (or at least skim) between a dozen and two dozen papers per day. You found them through search engines (e.g., Google Scholar), or through reference managers (e.g., CiteULike or Mendeley), or as suggestions from your colleagues via social networks (e.g., Twitter, FriendFeed, Facebook). Every day you will land on papers published in many different journals (it really does not matter any more which journal the paper was published in - you have to read all the papers, good or bad, in your narrow domain of interest). Then one day you land on a paper in PLoS and you see the Ratings, Notes and Comments functionality there. You shake your head - "Eh, what's this weird newfangled thing? What will they come up with next? Not for me!" And you move on.
Now imagine if every single paper in every single journal had those functionalities. You see them between a dozen and two dozen times a day. Some of the papers actually have notes, ratings and comments submitted by others which you - being a naturally curious human being - open and read. Even if you are initially a skeptical curmudgeon, your brain will gradually get trained. The existence of comments becomes the norm. You become primed....and then, one day, you will read a paper that makes you really excited. It has a huge flaw. It is totally crap. Or it is tremendously insightful and revolutionary. Or it is missing an alternative explanation. And you will be compelled to respond. ImmediatelyRightThisMoment!!!11!!!!11!!. In the old days, you'd just mutter to yourself, perhaps tell your students at the next lab meeting. Or even brace yourself for the long and frustrating process (see Example 1) of submitting a formal Comment to the journal. But no, your brain is now primed, so you click on "Add comment", you write your thoughts and you click "Submit". And you think to yourself "Hey, this didn't hurt at all!" And you have just helped thousands of researchers around the world today and in the future have a better understanding of that paper. Permanently. Good job!
That's how scientific self-correction in real time is supposed to work.
Great! Thank you.
This is an excellent example of how those in power, with privilege use that power and privilege to maintain that power and privilege, even in science.
How soon do you think that will happen?
I worked with someone trying to reproduce a result so we could use it for something new. The method didn't work, but I don't know if anyone called the original researcher. Instant commenting would be useful.
One word: Trolling. PLoS is admirable for its efforts, but one needs to distinguish posts that contain worthless information from actual criticism.
Papers should have a streamlined, official commenting system: a comment should be reviewed by an editor for clarity and structure, and by reviewers to check for scientific mistakes. All that needs to change is to make the whole process public and efficient.
A simple example: a PLoS article comes out. Someone posts an informal comment like "great!" and it appears among the normal comments.
But someone else wants to dispute/add to it: they submit a "Scientific Commentary" (SC) or some such pompous differentiator; the public status of the paper changes to: "Paper has one SC awaiting moderation".
An editor gets emailed, and will go remove the grammer mistaiqes. Status changes to: "One SC awaiting review" and suddenly it appears on the site as is. The fact that the status appears puts public pressure on the journal to update, and the comment appearing immediately gives pause to anyone who was going to jump in and accept the paper as true.
But only when a commissioned reviewer panel gets access to it and says "It's A-OK" will it appear as published. The debate and comments of the reviewers and authors should all be published on the site and appear publicly at all turns. This makes it more transparent, so that nothing "suddenly disappears". I doubt PLoS would do such a thing, but it would sure set a standard for everyone else.
This proposed "status" could be dynamic, stating "This paper has 2 SCs, to which there are 2 replies", each commentary being a separate thread of discussion. Another idea would be to give the SC display a "Google Docs" style, where the latest version appears, but people can hover over sections to read reviewer notes (PLoS already does something similar for papers) and even go back to previous versions.
I think this would be a much better and more reliable system than simply including a forum-type comments thread or keeping the current system.
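The moderation pipeline proposed above is essentially a small state machine (submitted SC, awaiting moderation, awaiting review, published, with a public status line on the paper at every step). Here is a minimal Python sketch just to make the transitions concrete; all class names and status strings are my own hypothetical illustrations, not any journal's actual system:

```python
class Paper:
    """A paper with attached 'Scientific Commentaries' (SCs), per the proposal above."""

    def __init__(self, title):
        self.title = title
        self.commentaries = []

    def submit_sc(self, text):
        # A newly submitted SC immediately changes the paper's public status.
        sc = {"text": text, "status": "awaiting moderation"}
        self.commentaries.append(sc)
        return sc

    def editor_pass(self, sc):
        # Editor tidies clarity/structure; the SC now appears on the site as-is.
        sc["status"] = "awaiting review"

    def panel_approve(self, sc):
        # The reviewer panel signs off; the SC counts as formally published.
        sc["status"] = "published"

    def public_status(self):
        # The always-visible status line that puts public pressure on the journal.
        counts = {}
        for sc in self.commentaries:
            counts[sc["status"]] = counts.get(sc["status"], 0) + 1
        if not counts:
            return "no SCs"
        return "; ".join(f"{n} SC(s) {s}" for s, n in counts.items())
```

The point of the sketch is only that each transition is public: the status string changes the moment an SC is submitted, before any editor or reviewer has acted, so nothing can quietly disappear.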
This will only become commonplace, accepted practice if the gatekeepers of scientific resources--funding, jobs, promotions--allocate scientific credit for such activity. If not, it will remain the sole province of buffs, cranks, and other parties motivated by considerations other than scientific progress itself.
Thanks for bringing this interesting story into an even larger context- I was completely unaware of the first example. Maybe this will inspire another PhD comic on the topic...
I've been on both ends of this sort of thing, and even when the process appears to work from the outside, it is still deeply flawed from the inside. You can even see this from what are apparently the actual comments.
If I hadn't read the account of Trebino and didn't have experience wrangling with Nature on one of my own comments, I wouldn't really think anything of it. If I were in even a slightly different field, I'd think "Well, that topic appears to still be up for debate." In my experience with Nature, it was really frustrating, despite the fact that we managed to have a dialogue with the other authors before publication. We kept going back and forth on what we thought would be the final versions of responses. And when the process came to an end, we ultimately had a response that seemed to line up well with the original paper and the authors' response to our comment. However, it turned out to be a red herring, as the authors got the opportunity for at least one final round of revision. And so they were able to once again move the goalposts and make some of our comments seem less important. Furthermore, they raised additional objections that we could have easily answered, but didn't because of space constraints. Altogether an unpleasant experience.
In this case, our community responded rather rapidly, though. In less than 2 years there were more than 10 critical comments on the original paper (3 in Nature alone). Most of the responses came as independent papers in other field-specific journals.
This is funny - I am not the only one thinking about what comments on the Watson/Crick DNA structure paper would look like. And check Ref. #3 - LOL, the best way to get cited!
Actually, Posterous gets the credit for putting the PDF up in web-readable form - it automatically converts PDFs :).
While the ideal sounds good, I think in practice hosting comments would need yet another layer of "policing".
I've seen web-hosted comments on some papers in some journals (the particular case I'm thinking of wasn't in PLoS) where the person writing hasn't disclosed a conflict of interest despite the comment form stating explicitly that they must. In the example I am thinking of, an anti-vaccine spokeswoman wrote against a paper without declaring her interest in selling the "competing" "natural remedy" that she referred to, nor that she was the founder/spokesperson of an anti-vaccine organisation. I wrote asking either for these details to be included or for her comment to be removed (as the rules required). The journal in question didn't even acknowledge my letter. I wasn't impressed.
That is an important part of my job - monitoring comments, deleting spam and trolls, forcing/reminding people to state conflict of interest, etc.
Wonderful post. Sad but wonderful paper. I really appreciate the EXTRA time after the comment procedure itself, that the author invested to make this example known to the general community.