NSF Workshop on Scholarly Evaluation Metrics – Afternoon 1

This continues my stream of consciousness notes from the workshop held in DC, December 16, 2009.

Peter Binfield (PLoS) - article-level metrics. Not talking about OA, not talking about the journal level. The journal is just packaging, and articles shouldn't necessarily be judged by their packaging. PLoS ONE publishes about half a percent of all the publications that appear in PubMed.

Evaluating an article after publication instead of before, so article-level metrics are of interest. The JIF measures the journal, not the article (or the person). Some things that could be used: citations, web usage, expert ratings, social bookmarking, community rating, media/blog coverage, commenting activity. They do all of these except expert ratings (so far). Each article now has a basket of metrics (click on the tab for each article - most measures go to a landing page at an external service like CiteULike or Scopus). How: other data providers - an emerging ecosystem - each needs an open API, an easily digestible data point, and a landing page that is freely available. Uses the "alm" app (open source) and keys everything on DOIs. Why these measures? Social bookmarking is a predictor of future use/value. They like when people comment on their site because the comment can be attached to the article, but they understand that people often want to comment in their own space for reputation, findability, or other reasons.

ALM data are downloadable. GitHub - plos-alm.opensci.info/articles_search.
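To make the "basket of metrics" idea concrete, here's a minimal sketch of what pulling article-level metrics by DOI could look like. The endpoint URL, query parameter, and response fields are my assumptions for illustration, not the actual PLoS ALM API.

```python
# Minimal sketch: fetch article-level metrics for one DOI from an ALM-style
# service. The endpoint URL, query parameter, and response fields here are
# assumptions for illustration, not the actual PLoS ALM API.
import json
import urllib.parse
import urllib.request

ALM_ENDPOINT = "https://alm.example.org/api/articles"  # placeholder URL

def fetch_alm(doi):
    """Return a dict of per-source counts (citations, usage, bookmarks, ...)."""
    url = ALM_ENDPOINT + "?doi=" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    # Each data provider (e.g. Scopus, CiteULike) contributes one easily
    # digestible count plus a landing page where the raw numbers live.
    return {src["name"]: src["count"] for src in record.get("sources", [])}

print(fetch_alm("10.1371/journal.pone.0000000"))  # placeholder DOI
```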

Others doing this: the Frontiers journals, with article impact analytics. Also need to do this at the author level - ACM is doing this.

What's missing? Predictive, day-one metrics - in the past, all we had was the journal name. Expert ratings - they're working with F1000. Trying to track mass media coverage is very hard - outlets never use the DOI and sometimes don't even mention the journal. Tracking conversations outside the publisher's site. Reputation metrics for commenters. Then filtering and navigation tools that help you use the ALM.

In the community: need more data sources providing more complete data. What new metrics are needed? De-duplicated data. Expert analysis of the data. Standards. Community understanding and adoption. Other publishers need to do this too, or maybe a trusted third party.

(fwiw - he's wearing the Zazzle t-shirt with Bollen's map on it :) )


Julia Lane (NSF/OSTP) - Developing an Assessment Portfolio. OMB and OSTP (and other similar institutions both in the US and overseas) want hard data on the impact of science for relative funding decisions (vs. education or transportation or whatever else).

Science of Science & Innovation

Can't develop metrics unless you have some theoretical framework. Economists found that for the '90s, 3/4 of economic growth was attributable to science and technology. This was calculated using a production function framework, which is great for aggregate impacts (Haskell/Clayton). But we need information at a micro level - discovery to innovation is non-linear, the unit of analysis is unclear, and outcomes depend on organizational systems. Where should I put the money (NSF or NIH or...) and for what types of funding (centers, small grants, infrastructure)?
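For readers unfamiliar with the production function approach, the aggregate calculation is typically something like a Cobb-Douglas form augmented with a knowledge/R&D term - this is a generic textbook sketch, not the specific model Lane cited:

```latex
% Generic growth-accounting sketch (not the specific Haskell/Clayton model):
% output Y as a function of capital K, labor L, and an R&D/knowledge stock R.
\[
  Y_t = A_t \, K_t^{\alpha} \, L_t^{\beta} \, R_t^{\gamma}
\]
% Taking logs and differencing attributes observed growth to each input:
\[
  \Delta \ln Y_t = \Delta \ln A_t + \alpha \, \Delta \ln K_t
                 + \beta \, \Delta \ln L_t + \gamma \, \Delta \ln R_t
\]
% The share of growth "attributable to science and technology" is then the
% combined contribution of the R&D term and the residual A (total factor
% productivity) - useful for aggregate impacts, but silent on individual
% grants, people, or discovery paths, which is the micro-level gap she notes.
```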

Increasing inputs does not necessarily lead to increases in output. For example, Sweden saw massive increases in funding without accompanying increases in jobs or tech exports.

Can't just do standard labor economics, health policy, or education policy studies - these things don't work the same way. There's no theoretical model. No counterfactuals - what would have happened without that funding? And which outcome measures - economic, social, scientific?

Empirical challenges: structural - there is no systematic sampling frame of the individuals touched by science funding. We have the PI, but what about the support staff? Data collection potential: we have patent and citation data, with potential in other sources like clickstream, but it's fragmentary and voluntary.

Role of evaluation - there are very few levers: science funding, R&D tax credits (which may or may not help)... There's now a science of science and innovation policy. STAR - Science and Technology for America's Recovery. A framework for who is touched by science, built from agency and university records. Reduce the burden on PIs and universities. Systematized, standardized, and validated measures - economic, scientific, social.
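A rough sketch of the record linkage this implies: joining agency award records to university administrative records to enumerate everyone a grant supports, not just the PI. All table and field names below are invented for illustration, not the actual STAR schema.

```python
# Illustrative sketch only: link agency award records to university payroll
# records to list everyone supported by a grant (PI, postdocs, students,
# support staff). Field names are invented, not any program's real schema.
from collections import defaultdict

awards = [  # from the funding agency
    {"award_id": "NSF-0001", "pi": "Smith", "institution": "State U"},
]
payroll = [  # from university administrative records
    {"award_id": "NSF-0001", "person": "Smith", "role": "PI"},
    {"award_id": "NSF-0001", "person": "Jones", "role": "postdoc"},
    {"award_id": "NSF-0001", "person": "Lee", "role": "admin support"},
]

def people_touched_by_funding(awards, payroll):
    """Map each award to the full set of people it supports."""
    by_award = defaultdict(list)
    for row in payroll:
        by_award[row["award_id"]].append((row["person"], row["role"]))
    return {a["award_id"]: by_award[a["award_id"]] for a in awards}

print(people_touched_by_funding(awards, payroll))
```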

Collect this and let the community decide how to use, what metrics, etc. Report this back to scientists who can automate their annual reports and also see how their work is received.

There's an office of tax policy in every country - shouldn't there be the same for science? cs.scienceofsciencepolicy.net. Submit a proposal to the SciSIP program (it has to have a science base - some theoretical background, not just a number for numbers' sake).
