Not on my watch, or it's not my job to watch?

Via Evolgen, an article by Nicholas Wade on tools to recognize doctored images that accompany scientific manuscripts. Perhaps because "seeing is believing," pictures (including visual presentations of data) have been a favored weapon in the scientist's persuasive arsenal. But this means, as we know, that just as images can persuade, they can also deceive.

The deceptions Wade discusses in the linked article rely primarily on using Photoshop to cover up inconvenient features (like bands on gels), to resize isolated parts of images, to rotate things, and the like. Wade writes:

At The Journal of Cell Biology, the test has revealed extensive manipulation of photos. Since 2002, when the test was put in place, 25 percent of all accepted manuscripts have had one or more illustrations that were manipulated in ways that violate the journal's guidelines, said Michael Rossner of Rockefeller University, the executive editor. The editor of the journal, Ira Mellman of Yale, said that most cases were resolved when the authors provided originals. "In 1 percent of the cases we find authors have engaged in fraud," he said.

(Emphasis added.)

Notice that while most of the manipulations were not judged to be fraud, there was a fairly high proportion -- a quarter of all accepted manuscripts -- with illustrations that violated JCB guidelines.

Possibly this just means that the "Instructions to Authors" aren't carefully read by authors. But it seems likely that this is also indicative of a tendency to cherry-pick images to make one's scientific case in a manner that would seem pretty darn sneaky were it applied to data. You can't just base your analysis on the prettiest data; why should you get to support your scientific claims with the prettiest available images?

RPM has a lovely discussion of this, including the phenomenon of "picture selection". And the Wade article gives a nice feel for how the mathematical features of digital images can make alterations invisible to the naked eye quite easy to find with the right algorithms. Either this kind of image doctoring will get smacked down quicker than a student paper cut-and-pasted from the internets ... or the job opportunities for mathematicians in science labs may increase. (Knowing how the algorithms work may make it possible to find ways to defeat detection, too.)
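For the curious, here's a minimal sketch of the crudest such check: flagging regions of an image that are bit-for-bit identical, a telltale sign of cloning. (This is my own toy illustration in Python, not the actual screening tools Wade describes, and the filename figure1.tif is made up.)

```python
# Toy copy-move check: hash every 16x16 block of a grayscale image and
# report blocks that recur. Real forensic tools are far more sophisticated
# (they survive rotation, rescaling, and recompression); this only catches
# exact, block-aligned duplication.
from collections import defaultdict

import numpy as np
from PIL import Image

def find_duplicate_blocks(path, block=16):
    """Return groups of (row, col) offsets whose pixel blocks match exactly."""
    img = np.asarray(Image.open(path).convert("L"))
    seen = defaultdict(list)
    rows, cols = img.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            tile = img[r:r + block, c:c + block]
            if tile.std() < 2:  # skip flat background; it matches trivially
                continue
            seen[tile.tobytes()].append((r, c))
    return [locs for locs in seen.values() if len(locs) > 1]

if __name__ == "__main__":
    for locs in find_duplicate_blocks("figure1.tif"):
        print("identical non-background blocks at:", locs)
```

Defeating a check this naive is trivial (nudge one copy by a pixel, or resave as JPEG), which is why the serious algorithms key on features that survive small edits.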

But that's not the part of the Wade article that got my dander up today. The bit I want to discuss (below the fold) is whose responsibility it is to catch the folks trying to lie with prettied-up images.

I should explain that before I read the Wade article, I was at a beginning-of-the-semester faculty meeting. At this meeting, someone raised a question about how we philosophy instructors ought to deal with students in our super-large "service" class that supports another college in the university. Despite the fact that course prerequisites include completing core general education requirements and passing a writing skills test, a frighteningly large proportion of the students in this service class can't write intelligibly in English. Here in the Philosophy Department, of course, even our service classes require a great deal of writing. But we don't see ourselves as teachers of spelling, grammar, punctuation, and the like; we're trying to teach philosophy. Ought we to cut these semi-literate students a break by squinting our eyes until we think we can make out a philosophical point in their essays? Ought we to be hard-asses and insist that skills in written communication are a necessary precondition for demonstrating the required philosophical skills? (If not, can we shove electrodes into their brains to measure their understanding and take worries about writing skills off the table?)

Despite the fact that I don't teach this super-large "service" class, I could feel the pull of arguments on both sides. On the one hand, the students' home college (and whoever the heck was supposed to be teaching the students how to write in their core general education classes) passed them right through to us, despite the fact that they can't write for beans. Maybe in their part of the workforce (and the corresponding departments of the university), written expression just doesn't matter, and failing them just because of our silly attachment to comprehensible sentences is cruel. On the other hand, the fact that others may have abandoned their standards does not require that we abandon ours; perhaps we ought to be taking a principled stand for literacy as a requirement for passing a writing-intensive college course.

And here's where we get back to the article on doctored images. In it, Wade writes:

Emilie Marcus, editor of Cell, said that she was considering the system [for detecting manipulation of digital images], but that she believed in principle that the ethics of presenting true data should be enforced in a scientist's training, not by journal editors.

The problem of manipulated images, she said, arises from a generation gap between older scientists who set the ethical standards but don't understand the possibilities of Photoshop and younger scientists who generate a paper's data. Because the whole scientific process is based on trust, Dr. Marcus said: "Why say, 'We trust you, but not in this one domain?' And I don't favor saying, 'We don't trust you in any.' "

Rather than having journal editors acting as enforcers, she said, it may be better to thrust responsibility back to scientists, requiring the senior author to sign off that the images conform to the journal's guidelines.

Those guidelines, in her view, should be framed on behalf of the whole scientific community by a group like the National Academy of Sciences, and not by the fiat of individual editors.

(Emphasis added.)

In some ways, the choice Dr. Marcus sets out here is a lot like the choice my colleagues were wrestling with in the faculty meeting: Whose job is it to ensure that a certain standard has been met?

Is it primarily the authors' responsibility to ensure that "representative" images really are fair representations, free of any alterations that might mislead their viewers? Even if this requires that they come up to speed on new-fangled doohickeys like Photoshop that their younger collaborators are using so they can check up on those collaborators? Sure. If you're doing science, it's part of the job description to be honest -- with your data sets, words, and images. And, as recent events seem to bear out, being your collaborator's keeper might be a good idea.

But does this mean journal editors don't also have a responsibility to be alert to doctored images (as well as other signs of fabrication, falsification, plagiarism, and, while we're at it, impenetrable prose)? I'm not sure this follows. Journal editors are, after all, members of the scientific community too. If they have reason to suspect that the images that come in with manuscripts are not meeting standards for honest scientific communication -- and the JCB numbers look like reason to suspect this, whether or not there's any ill intent in the creation of these images -- then they have a responsibility to the community to at least say something about it. Indeed, since journal editors have a relatively large degree of control over the "finished products" of scientific labor -- published findings -- they may have special duties to exercise due diligence. Otherwise, how can the rest of the community put its trust in the scientific literature?

Dishonest conduct by scientists reflects badly on the community of science as a whole. Yes, there's already a lot of work involved in being a journal editor. Yes, authors ought to take a more active role in making sure what gets submitted to the journals is truly true. But instead of saying, "Hey, that bit of policing isn't my job!", why can't everyone in the community demonstrate a commitment to honesty by stepping up and calling doctored images out?


Circadian rhythms in many organisms are awfully messy, thus most of the generated actographs are god-awful examples of modern art (I am lucky with my choice of animal, the Japanese quail, and of measured output, body temperature, which give unusually clean and pretty records). It is understood in the field that the "representative" record in the figure is the best you've got. However, that representative record is not supposed to be doctored in any way, and all manipulations, if any were done (which is rare), must be noted in the figure legend. Also, you must do your statistics on ALL your data, no matter how visually unappealing they may be, and experienced people in the field are going to immediately notice if your "representative" record and your statistical data do not fit together well.
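(A back-of-the-envelope version of that last consistency check, for the curious: the summary statistic, the names, and the z-score cutoff below are my illustrative assumptions, not anything standard in the commenter's field.)

```python
# Sketch of the sanity check described above: does the chosen
# "representative" record sit near the center of the statistics computed
# over ALL the records? Peak-to-peak range (np.ptp) stands in for whatever
# measure the field actually cares about.
import numpy as np

def representative_is_plausible(all_records, chosen, stat=np.ptp, z_max=1.5):
    """True if the chosen record's statistic is typical of the whole data set."""
    values = np.array([stat(r) for r in all_records])
    if values.std() == 0:  # degenerate case: every record scores identically
        return True
    z = (stat(chosen) - values.mean()) / values.std()
    return abs(z) <= z_max
```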

I don't understand Marcus' comment at all, except as a way of rationalizing not paying for something you don't want to pay for.

The job of a journal editor is to judge the quality of the work and filter out the bad work. We journal readers rely on journals to provide us with trustworthy information. Running an elementary check for fraud seems as much a part of the editor's job as finding appropriate peer reviewers. And she certainly can't believe that we should just rely on trust. The experience of the Journal of Cell Biology shows that trust alone doesn't work.

And as for writing: writing is not something you learn in one course and then have down pat for the rest of your life. Good writing takes more than one course to learn, and has to be practiced even once it is learned. So of course it is a philosopher's job to teach good writing. Constantly.

It seems a sad state of affairs when the scientific community is contemplating routine fraud checks on manuscripts submitted for publication.

While I would applaud any journal's effort to implement such scrutiny, its value may be more in showing willingness to act upon an (apparent) increase in fraud, rather than actually 'catching' fraud on a regular basis. Image manipulation is really only one of the many ways a malevolent scientist can deceive others. And the crudest at that.

To tackle the problem at its roots, a more thorough education of the budding scientist would perhaps help. And certainly every senior author should be able to vouch for all the data presented in the manuscript. But evidently, a determined individual will always be able to deceive, no matter what checks are in place.

One measure that I consider a great improvement, and that has not yet been adopted by all journals, is a clear attribution of responsibility in every manuscript (who provided what data, who analysed it, etc.). At least in the aftermath of any fraud detection, this would clearly identify the culprit(s).

Remember that "staged" photos of moths is one of the key objections of IDists to Darwinian theory. It's wonderful that someone would bother to check on science publications to measure accuracy -- I expect to see this article cited in the Discovery Institute's criticisms of science in textbook fights in the states, however.

It would be interesting to measure something other than a violation of "author's instructions," for the sake of accuracy in these hearings. How many of these photo manipulations changed the results of the paper in any way?

Heck, now I wonder: Would Kettlewell's photos of moths violate the journal's standards? He used dead moths attached to bark to illustrate how one variant of the moth blended into the coloring of that bark, and how another moth variant stood out in contrast to it. Would that be allowed?

By Ed Darrell on 26 Jan 2006