When collaboration ends badly.

Back before I was sucked into the vortex of paper-grading, an eagle-eyed Mattababy pointed me to a very interesting post by astronomer Mike Brown. Brown details his efforts to collaborate with another team of scientists working on the same scientific question he was, what became of that attempted collaboration, and the bad feelings that followed when Brown and the other scientists ended up publishing separate papers on the question.

Here's how Brown lays it out:

You would think that two papers that examine the same set of pictures from the Cassini spacecraft to map clouds on Titan would come up with the same answers, but they don't. And therein lies the root of the problem. When the main topic of a paper is where there are and aren't clouds on Titan and you sometimes say there are clouds when there aren't and there aren't clouds when there are, well, then you have a problem. They have a problem, since theirs is the paper that makes the mistakes. So why are they mad at me? I think perhaps I know the answer, and, perhaps I even think they might have some justification. Let me see if I can sort it out with a little of the convoluted history.

I started writing my paper about 18 months ago. A few months later I realized the other team was writing the exact same paper. Rather than write two identical papers, I joined their team and the two papers merged. The problem was that as I worked with their team through the summer, it became clear that their analysis was not very reliable. I spent hours going over pictures in detail, showing them spots where there were or were not clouds in contradiction to their analysis. Finally I came to the conclusion that their method of finding clouds, and thus their overall paper, was unsalvageable. I politely withdrew my name from their paper and explained my reasons in detail to the senior members of the team overseeing the paper. I then invited them to join me in my analysis, done in a demonstrably more accurate way. The senior member of the team agreed that it seemed unlikely that their method was going to work, and he said they would discuss and get back to me.

I felt pretty good about this. I had saved a team of people who I genuinely liked from writing a paper which would be an embarrassment to them, and I had done it - apparently - without alienating anyone. I remember at the end of the summer being proud of how adeptly I had navigated a potentially thorny field and come out with good science and good colleagues intact. Scientists are usually not so good at this sort of thing, so I was extra pleased.

I never did hear back from them about joining with me, so when I wanted to present the results of the analysis at a conference in December, I contacted the team again and asked them if they would like to be co-authors on my presentation in preparation for writing up the paper. I was told no, they had decided to do the paper on their own. Oh oh, I thought. Maybe things won't end up so rosy after all.

Their paper came out first, in June of this year, in the prestigious journal Nature of all places (it's not hard to figure out the reason for the catty comment often heard in the hallways "Just because it's in Nature doesn't necessarily mean that it is wrong."). I was a bit shocked to see it; I think I had really not believed they would go ahead with such a flawed analysis after they had been shown so clearly how flawed it was (and don't get me started about refereeing at this point). Our paper came out only this week, but, since their paper was already published, one of the referees asked us to compare and comment on their paper. I had avoided reading their paper until then, I will admit, because I didn't want to bias our own paper by knowing what their conclusions were and because - I will also admit - I was pretty shocked that they had, to my mind, rushed out a paper that they knew to be wrong simply to beat me to publishing something. I hoped that perhaps they had figured out a way to correct their analysis, but when I read their paper and found most of the erroneous cloud detections and non-detections still there, I realized it was simply the same paper as before, known flaws and all.

So what did I do? In my paper I wrote one of the most direct statements you will ever read that someone else's paper contains errors. Often things like that are said in couched terms to soften the blow, but, feeling that they had published something they knew to be wrong, I felt a more direct statement was in order.
And now they're mad.

You should really read the whole thing, as Brown does an admirable job trying to put himself in the shoes of the scientists who are now mad at him.

Here, I just want to add a few thoughts of my own on the general subject of collaboration.

The hope, with a scientific collaboration, is that the scientists involved, each bringing their knowledge, skills, and insight to the table, will work together to produce a better piece of scientific knowledge than any of them could on their own. Ideally, you get something more than a mere division of labor, with a few extra sets of hands to make the workload more manageable. Beyond that, you ought to get more rigorous examination of the knowledge claims by members of the group before they are sent on to the larger scientific community (by way of journal editors and referees).

In other words, applying some organized skepticism -- to data, to methods of analyzing that data, to the conclusions being drawn from the analyses, etc. -- is part of the job of the scientists involved in the collaboration.

Here, I think a group of collaborating scientists needs to resist the impulse to see themselves as working together against the rest of the scientific community. They remain part of that larger community of scientists, all of whom are involved in trying to address particular scientific questions (about how to get information on a particular phenomenon, how to interpret particular results, how to test a particular model, or what have you) and in evaluating the answers others in their community propose to those questions. Disagreement about conclusions, and about the best ways to arrive at them, is an expected part of scientific discourse. As such, ignoring such disagreement within a group of collaborators, rather than acknowledging it and dealing with it, seems like a really bad call.

My sense is that Brown did the right thing in pulling out of the collaboration when the methods his collaborators were using to deal with the data did not pass his gut-check. When he pulled out, he clearly communicated his reasons for withdrawing to the senior scientists on the team. Given that the rest of the team apparently disagreed with Brown's critique of their methods, the most honest thing for them to do would have been to acknowledge the objections he raised and then respond to them -- that is, to give reasons why they felt their analysis was not as vulnerable to those objections as Brown thought it was.

Apparently, though, they didn't do that. And failing to acknowledge Brown's criticisms, even if they felt that these criticisms were not in the end a good reason to switch to Brown's preferred methodology, looks an awful lot like failing to acknowledge what Brown contributed while he was collaborating with them.

Making honest disagreements between scientists part of the larger discussion within the scientific community seems more useful to that community than hiding such disagreements between collaborators in order to present a united front and get your results published before a competitor's.

But trying to suppress such disagreements, or getting pissed when someone else airs them, might mean you've taken your eyes off the larger project of building good scientific knowledge and working out better strategies (whether experimental or inferential) for getting that knowledge. It is an awful lot like hogging the ball rather than helping Team Science score a goal. Sometimes that works out, but sometimes, when you ignore the feedback from others on the team, your shot goes wide.

Actually, they do acknowledge Brown, albeit obliquely:

"We thank M. E. Brown for discussions that allowed us greatly to improve the quality of this study."

On the same theme, I highly recommend a book called "The Hubble Wars" by Eric J. Chaisson.

Many astronomers working on the Space Telescope made demands to "hog the ball" until Congress nearly ended the Hubble altogether. The ferocity of the arguments and the absolute unwillingness to compromise for the good of all is extraordinary.

As we all know, refereeing is far from perfect, even (perhaps especially) at GlamourMags. I've read more than one paper where I've asked myself, "How the #&$^ did that get past the referees?" Which is a question I'm sure Brown was asking when he read that paper.

I agree with you that the authors of the GlamourMag pub should have acknowledged, and made some attempt to refute, Brown's criticisms. Instead they tried to brazen it out, and probably put Brown on the list of people who should not referee the paper (most journals I am familiar with let you make both positive and negative suggestions about referees). I don't blame Brown for taking offense, but since I've only heard his side and am not an expert in his field I cannot judge his criticisms on the merits. I have, however, witnessed a comparable kerfuffle, the great cometesimal debate of the late 1990s (instrument PI George Parks withdrew from Lou Frank's paper after learning that the features Frank claimed were evidence of cometesimals had appeared in laboratory calibration data). In that case, the critics of cometesimals were right.

Is there some reason Brown chose not to make a comment on the paper? That is the normal means, at least in my field, of airing a scientific dispute of this kind.

By Eric Lund (not verified) on 20 Oct 2009

I was involved with two colleagues who were doing exactly the same study. I made a strong effort to get them to collaborate, but was unsuccessful to the max. I was unhappy about this at the time. Later both studies were published. They used different genes and somewhat different techniques, but reached very similar conclusions. So, instead of one study, we have two studies which support each other. I have come to think this is a good thing.

By Jim Thomerson (not verified) on 20 Oct 2009

Dr Brown's post presents his side of the issue, and your post reflects that, but I don't see that we've heard from the other side. They might see it very differently.

The question of objective measures versus subjective evaluation comes up in many science fields, and each side has its fans (fanatics, sometimes). Dr Brown's position is that the automated system misses things he thinks are clouds. I'm sure the supporters of the automated system would counter that his subjective selection of clouds could be biased by his desire to detect a particular pattern, or that different observers might have different thresholds for identifying clouds.

Did Dr Brown's group use multiple independent observers and compute a kappa statistic demonstrating inter-observer reproducibility? Actually, I can't tell what he did, as he didn't post full references to the papers involved. Finding the Rodriguez article was easy, as he mentioned Nature, June '09, but finding his article takes some experienced digging, and I don't know the field. His lack of accurate citations in his blog rant is a problem. But reading his blog rant makes it clear that he considers himself to be an expert on cloud identification, and I suspect he didn't go the extra mile of validating his subjective ratings.
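For anyone who hasn't met it, Cohen's kappa is just the observed agreement between two raters, corrected for the agreement they would reach by chance. Here is a minimal sketch of the calculation in Python, using made-up cloud/no-cloud calls rather than anything from either paper:

```python
# Minimal sketch of Cohen's kappa for two observers labelling the same
# images as "cloud" (1) or "no cloud" (0). The labels below are invented
# purely for illustration; they are not data from either paper.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Inter-observer agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)

    # Observed agreement: fraction of images both observers label the same way.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement: what you'd expect if each observer labelled
    # independently at their own base rates.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical cloud/no-cloud calls from two observers on ten images.
observer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(f"kappa = {cohens_kappa(observer_1, observer_2):.2f}")  # ~0.62 here
```

A kappa near 1 would mean the two observers' subjective cloud calls are essentially interchangeable; a value near 0 would mean their agreement is no better than chance.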

In any event, the first authors (Rodriguez et al.) address the false-positive and false-negative issue briefly; in the caption to Fig. 1, they state: "We deliberately choose a conservative threshold to avoid false positives. This can lead to the rare non-detection of optically thin or low altitude clouds, of clouds much smaller than a VIMS pixel or of clouds that are too close to the limb."

I can't really tell what the best false negative rate is, but to my inexpert reading of the limited information, the two groups seem to reach similar final conclusions. So Dr Brown seems to be upset that the other group used a quicker, cheaper and more objective method to get the right result before him.
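The trade-off that Fig. 1 caption describes is easy to picture with a toy example. The sketch below is not the VIMS pipeline -- the detection scores and ground truth are synthetic -- it just illustrates how raising a conservative threshold drives false positives down while misses of faint (low-score) clouds creep up:

```python
# Toy illustration (not the actual VIMS analysis) of the conservative-threshold
# trade-off: synthetic "cloud scores" where real clouds tend to score high and
# cloud-free scenes low, with some overlap standing in for optically thin clouds.

import random

random.seed(0)

truth = [1] * 200 + [0] * 800  # 1 = real cloud present, 0 = cloud-free
scores = [random.gauss(0.7, 0.15) if t else random.gauss(0.3, 0.15) for t in truth]

for threshold in (0.4, 0.5, 0.6, 0.7):
    detected = [s >= threshold for s in scores]
    false_pos = sum(d and not t for d, t in zip(detected, truth))
    false_neg = sum(t and not d for d, t in zip(detected, truth))
    print(f"threshold={threshold:.1f}  false positives={false_pos:3d}  false negatives={false_neg:3d}")
```

Which end of that trade-off you prefer depends on whether a spurious cloud or a missed cloud is the more damaging error for the question at hand.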

And, as pointed out by commenter @1 above, they do credit Dr Brown's comments for helping to improve their study.