Measuring scientific impact where it matters

Everyone and their grandmother knows that the Impact Factor is a crude, unreliable and simply wrong metric to use in evaluating individuals for career-making (or career-breaking) purposes. Yet so many institutions (or rather, their bureaucrats - scientists would abandon it if their bosses let them) cling to the IF anyway. Probably because nobody has pushed for a good alternative yet. In the world of science publishing, when something needs to be done, people usually look at us (that is: PLoS) to make the first move. The beauty of being a trail-blazer!

So, in today's post, 'PLoS Journals - measuring impact where it matters', on the PLoS Blog and the everyONE blog, PLoS Director of Publishing Mark Patterson explains how we are moving away from the IF world (basically by ignoring it, despite our journals' potential for marketing via their high IFs, until the others catch up with us and start ignoring it as well) and focusing our energies on providing as many article-level metrics as possible instead. Mark wrote:

Article-level metrics and indicators will become powerful additions to the tools for the assessment and filtering of research outputs, and we look forward to working with the research community, publishers, funders and institutions to develop and hone these ideas. As for the impact factor, the 2008 numbers were released last month. But rather than updating the PLoS Journal sites with the new numbers, we've decided to stop promoting journal impact factors on our sites altogether. It's time to move on, and focus efforts on more sophisticated, flexible and meaningful measures.

In a series of recent posts, Peter Binfield, managing editor of PLoS ONE, explained the details of the article-level metrics that are now employed and displayed on all seven PLoS journals. These will be added to and upgraded regularly, whenever we and the community feel there is a need to include another metric.

What we will not do is try to reduce these metrics to a single number ourselves. We want to make all the raw data available to the public to use as they see fit and we will all watch as the new standards emerge. We feel that different kinds of metrics are important to different people in different situations, and that these criteria will also change over time.

One paper of yours may be important for your peers to see (perhaps for career-related reasons, which is nothing to frown upon), in which case the citation numbers and download statistics may be much more important than the bookmarking statistics, the media/blog coverage, or the on-article user activity (e.g., ratings, notes and comments). At least for now - this may change in the future. Another paper, on the other hand, you may want to be seen by physicians around the world (or science teachers, or political journalists, etc.), in which case media/blog coverage numbers are much more important to you than citations: you measure your success by how broad an audience you can reach.

This will differ from paper to paper, from person to person, from scientific field to field, from institution to institution, and from country to country. I am sure there will be people out there who will plug those numbers into various formulae, crunch them, and come up with some kind of "summary value" or "article-level impact value", which may or may not become a new standard in some places - time will tell.
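As a purely hypothetical illustration, such a formula might look something like the sketch below; the metric names and weights are invented for this example, not anything PLoS provides or endorses:

    # A hypothetical "summary value" built from article-level metrics.
    # Metric names and weights are invented for illustration only; this is
    # not a PLoS formula or a proposed standard.
    def summary_value(metrics, weights):
        """Weighted sum of whatever article-level metrics are available."""
        return sum(w * metrics.get(name, 0) for name, w in weights.items())

    weights = {
        "citations": 5.0,     # slow to accumulate, so weighted heavily
        "downloads": 0.01,    # fast and abundant, so weighted down
        "bookmarks": 0.5,
        "blog_mentions": 1.0,
        "comments": 0.2,
    }

    paper = {"citations": 12, "downloads": 3400, "bookmarks": 25, "blog_mentions": 4}
    print(summary_value(paper, weights))  # 110.5 - one of many possible "impact values"

Different communities would choose different weights, which is exactly why no single formula will suit everyone.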

But making all the numbers available is what matters most to the scientific community as a whole. And this is what we will provide. And then the others will have to start providing them as well, because authors will demand to see them. Perhaps this is a historic day in the world of science publishing....


This is not good news for young scientists.

We need the impact factor to show the importance of our work because we do not have the seniority to point to our citation counts or h-index.

Five years after something is published, these individualised metrics begin to become useful. One hopes one still has a job at that point...

By antipodean (not verified) on 22 Jul 2009

Getting rid of the IF is good for all the young scientists whose bosses are not in bed with the editors at the GlamMagz. Thus, it is good for science in general, as personal relationships between authors and editors become less influential.

Hey Bora! I have to say that I think this post comes off a bit confused. Impact Factors are a trivially simple metric of the average citation rate of articles in a journal -- a per-article measure of influence. Nothing more, nothing less. People and institutions that judge individuals with the IF and other similar _journal_ metrics are being foolish. If you want to judge an individual, look at who cites them and read their papers. I'll be curious to see the new things you're coming up with, but at some point science isn't just about numbers. Anyway, I imagine we agree on these points, but for PLoS to shun IFs for the reasons stated in this post comes off a little (OK, a lot) holier-than-thou and misses the point of what they do. And, for the record, I think improved measures of influence like the Eigenfactor.org project are cool and informative, at the _journal_ and discipline level.

By Dave Hewitt (not verified) on 22 Jul 2009

antipodean:

I am confused. Citations are the slowest of all the metrics. Downloads (in HTML, XML or PDF), bookmarks, trackbacks, etc. can happen minutes after publication and then accumulate over time, for months or years before the paper gets its first citation. There was a recent paper demonstrating that the number of downloads correlates quite strongly with the subsequent citation rate of the paper. There is, on the other hand, pretty much no correlation between an individual paper's citations and the journal's IF - 20% of Nature papers are responsible for its high IF; the remaining 80% are hardly cited at all. Put the article-level metrics for your papers in your CV - if they are good, they will impress the people considering hiring you.
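To make that skew concrete (with numbers invented for illustration, not Nature's actual distribution): imagine a journal publishing 100 papers, where 20 collect 50 citations each and the other 80 collect 1 each. The journal-level average is (20 × 50 + 80 × 1) / 100 = 1080 / 100 = 10.8, yet the median paper has exactly 1 citation. The journal's number tells you almost nothing about any individual paper in it.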

Dave Hewitt:

"Impact Factors are a trivially simple metric about the average citation rate of articles in a journal -- a per-article measure of influence."

But IF is not a per-article measure. It is a journal-level measure, which is irrelevant for judging an individual (for career purposes, for example).
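For reference, this is all the two-year impact factor is - a journal-level average for year Y:

    \mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}

Every paper in the journal carries the same IF, however often it is actually cited.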

"People and institutions that judge individuals with the IF and other similar _journal_ metrics are being foolish."

They are. But you would be surprised how many bureaucrats in various departments, institutes, and even entire countries rely ENTIRELY on the IF for promotion and hiring decisions.

Semantics, I guess, but the IF measures citations per [average] article, thus the name (the same framing is used for the Eigenfactor measures as well). I think it is a well-accepted term regardless of whether one likes or dislikes the concept, and I'm sticking with it since I used it in a paper recently.

Nonetheless, I have also heard of big decisions being made based entirely on 2-yr window IFs (the old ones; Thomson now follows the Eigenfactor lead and uses a 5-yr window as well). Scary that folks with so much power would be so poorly informed.

By Dave Hewitt (not verified) on 24 Jul 2009

Bora

I quite agree that download numbers would be great. But downloads are time-dependent in just the same way that citation counts are. Old papers will have been downloaded more than new papers. This will once again favour older scientists, just as citation counts and the h-index do.

I certainly like downloads as one of a handful of options. We will need a culture of understanding around them before they can become useful, though. The problem will be deciding how many downloads, over how long a period, correspond to what measure of quality.

By antipodean (not verified) on 27 Jul 2009
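One crude way to handle the time-dependence antipodean describes would be to compare download rates rather than raw totals. The sketch below is purely illustrative - the function and the 30-day "month" are assumptions, not a PLoS metric:

    # Invented sketch: compare papers by download *rate* rather than raw
    # totals, so that older papers' head start matters less. Not a PLoS
    # metric; the 30-day "month" is a simplifying assumption.
    from datetime import date

    def downloads_per_month(total_downloads, published, today=None):
        """Average monthly downloads since publication."""
        today = today or date.today()
        months = max(1, (today - published).days // 30)
        return total_downloads / months

    # A newer paper with fewer total downloads can still show a higher rate:
    old = downloads_per_month(9000, date(2006, 7, 1), today=date(2009, 7, 27))
    new = downloads_per_month(1200, date(2009, 5, 1), today=date(2009, 7, 27))
    print(round(old, 1), round(new, 1))  # 243.2 vs 600.0

Even then, as antipodean notes, the community would still have to agree on what rate counts as "good" for a given field and paper age.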