The Impact Factor Folly

The latest issue of Epidemiology features an (only somewhat tongue-in-cheek) article by Miguel A. Hernán: Epidemiologists (of All People) Should Question Journal Impact Factors. It is well worth reading and thinking about:

Developing a good impact factor is a nontrivial methodologic undertaking that depends on the intended goal of the rankings. Hence, a scientific discussion about any impact factor requires that its goal is made explicit and its methodology is described in enough detail to make the calculations reproducible. Paradoxically, the methodology of the impact factor that is used to evaluate peer-review journals cannot be fully evaluated in a peer-reviewed journal. As illustrated above, a manuscript describing the Thomson Scientific impact factor would be a hard sell for most journals, and hardly acceptable for the American Journal of Epidemiology, the International Journal of Epidemiology, or Epidemiology.

The same issue also features several interesting responses:

Impact Factor: Good Reasons for Concern
How Come Scientists Uncritically Adopt and Embody Thomson's Bibliographic Impact Factor?
Rise and Fall of the Thomson Impact Factor
The Impact Factor Follies
