Cite this post! (On publication metrics)

Posted on behalf of Raghuveer Parthasarathy (Physics)


All departments at the University of Oregon are being called upon to create metrics for evaluating our “scholarship quality.” We’re not unique; there’s a broader trend at universities toward creating quantitative metrics. I think this is neither necessary nor good — in brief, quality is better judged by informed, narrative assessments of activity and accomplishments, and university departments are small enough that such assessments should be possible — but I’ll leave an elaboration of that for another day. Here, I’ll point out that even a “simple” measure of research productivity, publications, is not simple at all, even applied within a single department. There’s nothing novel about this argument; similar things have been written by others. Still, since metric-fever persists, these arguments apparently aren’t obvious.

I think everyone would agree that published papers are a major component of research quality. Papers are what we leave behind as our contribution to the body of scientific knowledge, our stories of what we’ve learned and how we’ve learned it. If one wanted some sort of quantitative measure of research activity, papers should undoubtedly figure in it. But how?

Similarly, one could argue that citations of papers are an important indicator of their impact on science. This seems straightforward to quantify — or is it?

I’ll illustrate some of the challenges in ascribing quality to content-free lists of papers or citations by looking at some example papers.

Here’s one:

[1] Raghuveer Parthasarathy, “Rapid, accurate particle tracking by calculation of radial symmetry centers,” Nature Methods 9:724-726 (2012). [Link]

As of today, this paper has been cited 205 times — a pretty high number. (I’m very fond of this paper, by the way; I think it’s one of the most interesting and useful things I’ve figured out, and it’s my only purely algorithmic / mathematical publication.)

Here’s another:

[2] G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A.A. Abdelalim, O. Abdinov, … , L. Zwalinski, “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Physics Letters B 716: 1-29 (2012). [Link]

This is from a physics department colleague, who is (by all accounts) an excellent scientist and a leader in his field. The paper has a stunning 11,201 citations! It also has an even more stunning 2,932 authors, spanning 178 different affiliations.

These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.

Is it “better” to have a single-author paper with 205 citations, or a 2,900-author paper with 11,000 citations? One could argue that the former is better, since the number of citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies a greater overall impact. Really, though, the question is silly and unanswerable.
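
For concreteness, here is a minimal Python sketch of the per-author arithmetic behind that comparison, using the citation and author counts quoted above (these are snapshot numbers and will of course have drifted by the time you read this):

```python
# Per-author citation arithmetic for the two example papers above.
# Citation counts are the snapshot figures quoted in this post.

papers = {
    "Parthasarathy 2012 (Nature Methods)": {"citations": 205, "authors": 1},
    "ATLAS Higgs 2012 (Phys. Lett. B)": {"citations": 11201, "authors": 2932},
}

for name, p in papers.items():
    per_author = p["citations"] / p["authors"]
    print(f"{name}: {p['citations']} citations / {p['authors']} authors "
          f"= {per_author:.1f} citations per author")

# The single-author paper "wins" by citations per author (205 vs. ~3.8),
# while the ATLAS paper "wins" by raw citation count -- which is the point:
# the ranking flips depending on an arbitrary choice of metric.
```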

Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. If the goal, for example, is maximizing papers-per-author (or citations-per-author), this would reward hiring in areas like biophysics, or theory. If the goal is maximizing papers (or citations) in themselves, this would reward hiring in fields populated by large collaborations. If these metrics were applied everywhere, for example at the other research universities to which we like to compare ourselves, the net result would be an unintended shift towards some areas and away from others.

There are many other issues with counting papers or citations. Papers can be short or long, and norms for what constitutes a “publishable unit” differ between fields. Different journals have different average levels of quality, with high variance both between and within them. Some papers are heavily cited at first and then forgotten; others show the opposite pattern. Despite a cottage industry of indexes (h-indexes, impact factors, and so on), no single metric captures all of this, nor, as far as I know, has any existing metric actually been shown to be a good measure of “quality.”
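
As one illustration of how mechanical these indexes are, here is a minimal sketch of the standard h-index calculation — the largest h such that h of a researcher’s papers each have at least h citations. The citation counts in the example are made up purely for illustration:

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers (illustration only).
print(h_index([205, 48, 31, 12, 9, 4, 1]))  # prints 5
```

Note that nothing in this calculation knows anything about field norms, author counts, or whether the citations reflect genuine influence.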

Why, then, ask for quantitative metrics? I really don’t know. Our university departments are small enough that, I would hope, our administrators have first-hand knowledge of what we’re doing and how well we’re doing it. Moreover, Physics and other departments already do a pretty good (though not perfect) job of assessing what each faculty member is doing, through yearly narrative evaluations. These could be, and perhaps already are, conveyed to the higher-ups. Developing quantitative metrics of things like publications and citations is futile.

Is there anything positive I can say, towards having something “simple” to point to when guiding the allocation of resources? If I were in charge, I’d pay more attention to external reviews of departments (which happen every decade or so, with little impact as far as I can tell), and I’d focus more on the small fraction of faculty who are unproductive by any measure, trying to construct carrots or sticks to boost their activity. Both of these would have a larger impact on the university’s research productivity than number-chasing.

