Metrics and Equity

Posted on behalf of Nicole Swann (Human Physiology).

I just wanted to share my concerns about metrics. Specifically, many of the quantitative metrics proposed by the university have been shown to be impacted by gender disparities (see citations below). I think this should be carefully considered, especially in light of the university's commitment to diversity and inclusion. Even if these metrics are used only at a department level, they could disadvantage departments that have taken steps to reduce gender disparities. Although the citations below focus on gender disparities, many of the same arguments apply to people of color and other groups under-represented in higher education.

Theatre Arts: “Ultimate Simplicity”

Below is a guest post by Harry Wonham (Theatre Arts), who writes:

I don’t have much useful to say about the department’s deliberations, except that TA took seriously the advice that “ultimate simplicity” (in terms of process and result) should be our mantra. I could elaborate, but that would not be fully in keeping with the spirit of “US.”

UO Theatre Arts Metrics

In the list of Scholarly Outputs, Theatre Arts would like to add the categories Theatre Production and Outside Professional Work to go along with Articles, Books, Book Chapters, Conference Papers.

The Venue of Work for the Theatre Productions category would be Producing Theatres. The Quality Index ranking for the Theatre Productions category would be:

  1. National/Regional Theatres
  2. Local Theatres
  3. University Theatres
  4. Community Theatres

The Venue of Work for the Outside Professional Work category would also be Producing Theatres. The Quality Index ranking for the Outside Professional Work category would also be:

  1. National/Regional Theatres
  2. Local Theatres
  3. University Theatres
  4. Community Theatres

The Quality Index ranking for other forms of Scholarly Output should read:

  1. Top Tier Journals in Theatre Arts (listed in random order)
    1. The Drama Review
    2. Performing Arts Journal
    3. Theatre Journal
    4. Comparative Drama
    5. Theater
    6. New Theatre Quarterly
    7. Contemporary Theatre Review
    8. Theatre Topics
    9. Journal of Dramatic Theory and Criticism
    10. Studies in Theatre and Performance
    11. American Theatre Magazine
    12. Modern Drama
    13. New England Theatre Journal
    14. Canadian Theatre Review
    15. Latin American Theatre Review
    16. Arte Publico
    17. Journal of Stage Directors and Choreographers
    18. Theatre Design and Technology
    19. Ecumenica
    20. Shakespeare Bulletin
  2. Top Tier Academic/Trade Presses for books published in Theatre Arts (listed in random order)
    1. Yale University Press
    2. Routledge
    3. University of California Press
    4. Norton
    5. Theatre Communications Group (TCG)
    6. Bloomsbury/Methuen Drama
    7. Duke University Press
    8. Palgrave Macmillan
    9. Dramatists Play Service
    10. Samuel French Inc.
    11. Southern Illinois University Press
    12. University of Oklahoma Press
    13. University of Iowa Press
    14. Focal Press
    15. University of Michigan Press
    16. Northwestern University Press
    17. McFarland & Co.
    18. Oregon State University Press
  3. Top Professional Organizations in the field (listed in random order)
    1. American Society for Theatre Research (ASTR)
    2. Association for Theatre in Higher Education (ATHE)
    3. International Federation for Theatre Research (IFTR)
    4. United States Institute for Theatre Technology (USITT)
    5. Kennedy Center American College Theater Festival (KCACTF)
    6. United Scenic Artists
    7. Actors Equity Association
    8. Society for Directors and Choreographers
    9. Theatre Communications Group
    10. Screen Actors Guild and the American Federation of Television and Radio Artists (SAG-AFTRA)
    11. Literary Managers and Dramaturgs of the Americas
  4. Top Awards (listed in random order)
    1. Scholarly Awards (National or International)
      1. Granted by the American Society for Theatre Research
        1. Distinguished Scholar Award
        2. Sally Banes Publication Prize
        3. Errol Hill Award
        4. Gerald Kahan Scholar’s Prize
        5. Oscar G. Brockett Essay Prize
        6. Cambridge University Press Prize
        7. Community Engagement Award
        8. José Esteban Muñoz Targeted Research Working Session
      2. Granted by the Association for Theatre in Higher Education
        1. Ellen Stewart Career Achievement in Professional Theatre and Career Achievement in Academic Theatre
        2. Oscar Brockett Outstanding Teacher of Theatre in Higher Education
        3. Leadership in Community-Based Theatre and Civic Engagement
        4. Outstanding Book
        5. Outstanding Article
        6. Excellence in Editing
        7. Judith Royer Excellence in Playwriting
        8. Jane Chambers Playwriting
        9. ATHE-ASTR Award for Excellence in Digital Scholarship
      3. Granted by the International Federation for Theatre Research
        1. New Scholars’ Prize
        2. Helsinki Prize
    2.  Performance
      1. Tony Award/Regional Theatre Tony Award
      2. Drama Desk Award
      3. Drama League Award
      4. New York Drama Critics’ Circle Award
      5. Obie Award
      6. Outer Critics Circle Award
      7. Susan Smith Blackburn Prize
      8. Lucille Lortel Award
    3.  Design
      1. USITT
      2. Prague Quadrennial
  5. Top Fellowships (listed in random order)
    1. Performance
      1. The Shepard and Mildred Traube Fellowship (Stage Directors and Choreographers Foundation)
      2. The Denham Fellowship (Stage Directors and Choreographers Foundation)
      3. The Mike Ockrent Fellowship (Stage Directors and Choreographers Foundation)
      4. The Sir John Gielgud Fellowship (Stage Directors and Choreographers Foundation)
      5. The Charles Abbott Fellowship (Stage Directors and Choreographers Foundation)
      6. The Kurt Weill Fellowship (Stage Directors and Choreographers Foundation)
    2.  Scholarship
      1. ASTR Research Fellowships
      2. ASTR Targeted Research Fellowships
      3. IFTR Leverhulme Early Career Fellowship Opportunity at the University of Kent
      4. Fulbright
      5. USITT



State of the Department – Further information

Posted on behalf of Elliot Berkman (Psychology)

Ulrich Mayr from my department presented an idea for presenting narrative information alongside quantitative metrics with an annual State of the Department report. In a recent CAS Heads meeting, Ulrich brought up the idea of having a template for the SOTD report to introduce some standardization across departments. A template would also save departments some work by providing an outline of the kinds of content that the Provost is looking for.

I put together the draft template below. Departments could fill in the following sections then provide a full list of books, chapters, articles, grants, etc., in an appendix. In the end, the SOTD would resemble a kind of annotated departmental CV for each calendar year. (It seems like calendar year would be the easiest time frame given that most publications are dated by the year.)

Executive Summary

  • A paragraph or two written by the department head that highlights key successes and activities
  • After the first year, this section should include an evaluation of the progress toward the previous year’s goals

Contextualizing Information

  • A brief summary or disclaimer about interpreting the mission metrics with respect to other activities
  • Important info to include might be: # of faculty / total TTF FTE, avg effective teaching load / TTF, quantity of service / TTF, budget / TTF
  • See this post for further thoughts on this.


Scholarship

  • Qualitative / quantitative summary of peer-reviewed or otherwise juried scholarship, including why it is important
  • The main question being addressed here is: How has your department advanced the research mission of the University this year?


Impact

  • Qualitative / quantitative summary of impact on society and the research community.
  • Can include things like citations and awards, as well as outreach activities, community involvement, etc.

Grants and Fellowships

  • A summary of external funding sought and received
  • This includes funding for research projects and also fellowships to support faculty time


Mentorship

  • Qualitative / quantitative summary of graduate and undergraduate student mentorship that is not captured by teaching metrics.
  • Can also include research mentorship of postdocs and junior faculty.

Equity, Diversity, and Inclusion

  • List concrete contributions to equity, diversity and inclusion
  • Consider describing outreach efforts as well as indices about current diversity

Other Relevant Information [OPTIONAL]

  • Additional information or narrative content related to research that is noteworthy but not covered in the other sections.

Goals for the coming year

  • Specific, measurable goals for the following year
  • Do not need to present goals in all categories – can be focused on specific areas (e.g., increase outreach activities by 10%)

Then the appendix would be a complete list of the activities and products related to each of these categories. I created a Google Doc of this template for easy sharing. One way to do this would be to have faculty copy-paste info from their CVs into the appropriate section of the appendix, then the Head and chairs of relevant committees would fill in the narrative components.


Contextualizing metrics

Posted on behalf of Elliot Berkman (Psychology)

A common theme that comes up in conversations about metrics is that they cannot be interpreted in isolation. For example, departments where faculty have unusually large service or teaching loads cannot be expected to maintain the same level of research productivity as departments where faculty have smaller loads.

I want to put in a plug for “contextualizing metrics” or other information about factors that influence scholarship productivity to be included alongside the departmental mission metrics.

Several candidates for contextualizing metrics have come up in various conversations. For example, it might be very helpful for consumers of your metrics to know about:

  • Total number of active faculty
  • Average effective teaching load (where “effective” means # courses actually taught per year per 1.0 TTF after accounting for course releases, etc.)
  • Level of university service (number of assignments per TTF? Or just a list of the major committees?)
  • Level of field-level service (e.g., major editorships)
  • Average # of graduate students supervised
  • Departmental budget per TTF
  • Etc

I emphasize that the point of these contextualizing metrics is not to provide a full accounting of your other activities (teaching, service, mentorship) — the Provost hopes to measure those, too, but not quite yet — but rather to give a more complete picture of how faculty in your department spend their time so the scholarship metrics can be interpreted with more nuance.
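To make the "effective" teaching-load figure concrete, here is a minimal sketch with hypothetical numbers (the faculty list, FTE values, and release counts are all invented for illustration, not drawn from any department's data):

```python
# Hypothetical illustration of "average effective teaching load":
# courses actually taught per year per 1.0 TTF, after accounting
# for course releases. All numbers below are invented.
faculty = [
    # (FTE, courses assigned per year, course releases)
    (1.0, 4, 1),
    (1.0, 4, 0),
    (0.5, 2, 0),
]

total_ttf_fte = sum(fte for fte, _, _ in faculty)
courses_taught = sum(assigned - released for _, assigned, released in faculty)
avg_effective_load = courses_taught / total_ttf_fte
print(avg_effective_load)  # courses per year per 1.0 TTF
```

The same pattern (a departmental total divided by total TTF FTE) would apply to the service, mentorship, and budget figures listed above.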


What, Why and How of University Metrics

Posted on behalf of Scott Pratt (Executive Vice Provost, Department of Philosophy)


Metrics (“indicators,” standards of measurement) as used here include two categories of information: Operational and Mission. The former are intended to provide information about faculty teaching workload and departmental cost and efficiency and the latter information about how well we are achieving our basic missions of teaching and research.

Operational metrics are aggregated at the College, School and department level and include SCH and majors per TTF and NTTF, number of OA and classified staff FTE per TTF, average and median class size, and degrees per TTF.

Mission metrics include data regarding undergraduate and graduate education (including serving diverse populations) and, still under development, data regarding faculty research.

Undergraduate data describes the undergraduate program in each college, school and department in terms of number of majors and minors, demographic information, major declaration patterns, graduation rates, and time to degree.

Graduate data describes the graduate program at the college, school and degree level in terms of completion rate and time to degree, demographic information, admission selectivity, and information regarding student experience.

Research metrics (under development) are data regarding faculty research/creative-activity productivity specific to each discipline and subdiscipline, where the data to be collected are specified by the faculty in the field. These so-called “local metrics” are intended to provide a faculty-determined set of measures that describe faculty work both quantitatively and qualitatively. It is clear that no single standard or number applies across all fields, and so whatever metrics are produced, they will not be reducible to either a single standard or a single number.

It is important to note that research metrics will be revisable over time in response to changes in departments, disciplines and subfields, information available, and as we learn what are good and less good indicators of progress. The mode of reporting (also still under development) will likewise be revisable. (One approach to reporting is suggested by Ulrich Mayr, Psychology, on this blog.)

Note that PhD completion rate and time to degree are also reported by the AAU Data Exchange at the degree-program level, so UO information can be compared with degree programs at other AAU institutions. All other data are comparable over time and, in a limited way, among departments (so that one could compare data within a school or college, or among departments with similar pedagogy, for example).

Other graduate data is currently being collected through the AAUDE exit survey so that over the next several years sufficient data will be available to report PhD initial placement, graduate student research productivity (represented in publications), and data regarding student assessment of graduate advising support. These data will be available by degree program across all reporting AAU institutions.

Information about faculty service is not currently collected. Since service is a vital part of faculty work, we hope to develop a means of defining and collecting service data so that this can also be reported at the college, school, and department levels.


There are at least three reasons that operational and mission metrics will be collected. They are (1) external communication/accountability, (2) internal communication/continuous improvement/accountability, and (3) to provide information to help guide the allocation of limited resources.

Public research universities have a need for external communication that provides an account of their work to students and their families, the public, government agencies, disciplines and other constituencies. While the university already attempts to be accountable as a whole to its mission, it also has some obligation to be accountable in its parts.  Diverse academic units support the mission of the university in different ways. A general accounting of the work of the university (which necessarily attempts to reduce the university’s work to a few standards) is insufficient to the latter task and so the ability to account for work accomplished at the department or disciplinary level is vital to ensure that our constituencies understand the value of both the whole and its parts.

Anecdotal information and “storytelling” are part of this effort, but so are systematically collected data.  Whatever information is used to promote communication needs to be presented with explanatory information so that others will understand the differences between programs and what constitutes success.  Data for external communications (such as student success data currently available to the public) must be limited to aggregated information and data sets that are large enough to ensure anonymity.  Research metrics are important to this function because they provide a picture of what our faculty do, especially those programs whose work and expected results are less familiar to the wider public, legislators, and so on.

Internal communication is likewise essential both in order to ensure that university and college leadership understand the work done by faculty and so that departments themselves have a shared understanding of their work, needs, and the meaning of success.  Communication with leadership needs to involve both a quantitative dimension that provides some idea of how much work is common in a particular field and how quality is defined for that field. Some of this information (e.g. student success data) is common enough across disciplines to suggest a general conception of quality using quantitative proxies (e.g. graduation rates), while other quantitative data (e.g. class size, research productivity) requires more explanation and narrower application.

Internal communication also concerns communication with faculty in helping make transparent department expectations for teaching and research. While review standards are often obscure, faculty nevertheless need a shared sense of what it means for faculty to be successful in aggregate in their fields. The development, implementation and regular review of metrics at all levels by faculty and administrative leadership provides a means to foster a shared vision of success; the ability to identify goals, opportunities, and problems; and determine how best to move forward.

Resources at every university are limited and allocating them requires both good information and good critical deliberation. Past budgeting systems have relied on formulaic systems that depend on reducing unit quality to two or three indicators for all programs (e.g. SCH, number of majors, number of degrees).  Such an approach, when fully implemented, excludes aspects of department success that are not captured in the indicators. When not fully implemented, the need to allocate resources beyond what is partially specified by the limited set of indicators means that allocations must be made ad hoc. Rather than the reductionist approach taken in recent years, the current budget model aims to implement a deliberative model informed by both quantitative and qualitative data.


How the metrics figure in the allocation process varies. In the Institutional Hiring Plan, the full complement of metrics is to be considered in deciding where particular TTF lines will be created. The IHP involves a structured review process involving department generated proposals, vetting by the school or college, review by the Office of the Provost, a faculty committee, and the deans’ council, with the final decision by the Provost.

In allocating GE lines, data regarding teaching needs is combined with graduate student success data and (when available) research data. Decision-making takes into account enrollment goals, student success data, other program data, and regular meetings with deans and Directors of Graduate Study.

The block allocation process (that establishes the base operating budget for each college and school) considers operational metrics and past budget allocations. Block allocations are proposed by the Office of the Provost and negotiated with the individual schools and colleges.

The strategic initiative process will consider in part data relevant to the proposals at hand (e.g. undergraduate success for proposals for undergraduate programs, graduate success data for proposals related to new program development).  The initiative process involves a faculty committee with recommendations to the provost.

In general, use of the metrics will be guided by our goal to advance the UO as a liberal arts R1 university.  This means that while there are other goals to be met (see below), meeting them must take into account the character and purpose of the UO as a liberal arts university.

  • We should always be working towards improvement.
  • Undergraduate student educational needs must be met.
  • PhD programs must be sustained and improved both in terms of enrollments and placement.
  • Diversity and inclusion must be fostered in the faculty, student population and academic programs.
  • Excellent programs should be supported and expanded where there is a demonstrable possibility of expansion.
  • Less successful programs should receive resources to support evidence-based plans for excellence consistent with the other goals.
  • Programs that are successfully meeting their own and university goals should be supported to continue that success.
  • Programs that are unsuccessful and do not have workable plans for improvement may be eliminated according to the HECC and CBA guidelines.


Process is important

Posted on behalf of Leah Middlebrook (Comparative Literature, Romance Languages)

One element that has risen to the top in our conversations about metrics, both here in the blog and at various meetings, is narrative. The Provost has requested a way to discuss and describe —i.e., narrate—the work we do in our departments and research units.  It is interesting (and not surprising) to observe something of a consensus among colleagues across the sciences and the humanities with respect to how that narrative should be assembled. Nearly all of us agree that databases and various kinds of quantitative reports give the illusion of transparency and objectivity, whereas in fact what one learns from the reports generated by means of these tools is guided by the narratives one draws from them. These narratives are subjective at some stage of the process: whether it is a person interpreting a report or a menu of filters that one selects in order to curate data, people, and their (our) opinions and judgments are involved —because ultimately, people write software, and groups of people edit and update that software (and forgive me if my training in lit. crit. inflects that short summary of how data and software work).

So the snapshots or views or talking points “generated” by databases and quantitative reports are as conditioned and inflected by human conscious and unconscious biases as are prose narratives. What is different between many kinds of quantitative and numbers-based reports and other kinds of evaluations is a lack of transparency regarding the minds and subjectivities that condition the rubrics, the filters and, hence, the findings and conclusions of those reports. This situation is problematic, generally, and it poses particular concerns when we have set inclusivity, equity and diversity at the heart of our mission to grow as a university and a community.

To the fine analysis and suggestions proposed elsewhere in this blog, I’d like to add that one solution to the challenges of unconscious and/or implicit bias is process. So the question I hope we can introduce into our discussion is What kind of process would facilitate productive, accurate and equitable evaluation of departments and campus units? As a second point: the university administration appears to be searching for an expedient way to look across units to evaluate and compare our strengths. That makes sense. We are all overworked. So: What kind of evaluative process will conform to our aspirations for equity and inclusiveness, while attending to the need for expediency?

This might be a good time to take a new look at the system by which universities across the U.S. collaborate in the work of reviewing candidates for tenure and promotion. The combination of internal and external evaluations, collected in a portfolio and evaluated with care by bodies such as our own FPC, has the virtue of being time-tested and ingrained in academic culture at a moment in which we are feeling the lack of congruence between how businesses (on the one hand) and academic and research institutions (on the other) conduct their affairs. For example, a colleague of mine pointed out recently that the academic context changes yearly as new subfields emerge and existing concerns / approaches / objects of study recede. Taking a snapshot in any given year freeze-frames evaluative criteria that should in fact adapt to changing intellectual horizons (horizons that faculty keep up with by reading in our fields). This acute observation underscores the importance and the value of faculty-based review. How often would an abstract, general metric need to be modified to keep up with how knowledges work “IRL”?

When I raised the model of external review recently, an objection was raised:  Why would departments consent to having their research profiles evaluated by outside bodies? Wouldn’t they prefer to keep that information internal? The question was delivered in passing, so I am not sure if I understood it correctly, but my answer would be that to feed our information into databases is, in fact, sending it out for review. The advantage of working with committees and signed, confidential reports is that it is possible to trace the process by which judgements and opinions are formed, and also to ask questions.

I have worked on both the DAC and on the FPC, and am continually impressed by the professionalism, the thoughtfulness and ethics displayed by nearly all of those who undertake the work of serving on our committees or serving as external evaluators of a file. Furthermore, although I respect my colleagues to the skies (!!), I would maintain that the very good work that gets done through those processes is a function of their design. Each stage of the process of review builds from what has come before, but adds new perspectives and new opportunities to ask questions. The work of reviewing takes time and is, in the end, work – work that results, I hasten to add, in a fairly short report that goes to the Provost’s office. But this report is backed by a varied, well-organized file that carries layers of signatures and clear narratives of the process at every step of the way.

A good place to start on this process is the “State of the department” report proposed by Ulrich Mayr in this blog.  Periodically (say, every three or five years), this report could go through a process of external review, undertaken via agreements with comparator institutions and departments. As in personnel cases, a number of reviewers would read and consider the department reports and prepare a confidential letter commenting on the strengths and relative standing of the unit with respect to the comparator pool. Finally, the file and letters would be reviewed and considered by an FPC-like body here on campus and a report would be issued to the Provost’s office. Given the fundamental importance of this work, and the priority we have placed as an institution on fostering equity, inclusivity and diversity, it is reasonable to expect that the university would commit funds to compensate work by the relevant committees and external evaluators (it seems likely that these funds would add up to less than the price of many software packages and database subscriptions).


Cite this post! (On publication metrics)

Posted on behalf of Raghuveer Parthasarathy (Physics)

All departments at the University of Oregon are being called upon to create metrics for evaluating our “scholarship quality.” We’re not unique; there’s a trend at universities to create quantitative metrics. I think this is neither necessary nor good — in brief, quality is better assessed by informed, narrative assessments of activity and accomplishments, and the scale of university departments is small enough that these sorts of assessments should be possible — but I’ll leave an elaboration of that for another day. Here, I’ll point out that even a “simple” measure of research productivity, publications, is not simple at all, even applied within a single department. There’s nothing novel about this argument; similar things have been written by others. Still, since metric-fever persists, apparently these arguments are not obvious.

I think everyone would agree that published papers are a major component of research quality. Papers are what we leave behind as our contribution to the body of scientific knowledge, our stories of what we’ve learned and how we’ve learned it. If one wanted some sort of quantitative measure of research activity, papers should undoubtedly figure in it. But how?

Similarly, one could argue that citations of papers are an important indicator of their impact on science. This seems straightforward to quantify — or is it?

I’ll illustrate some of the challenges in ascribing quality to content-free lists of papers or citations by looking at some example papers.

Here’s one:

[1] Raghuveer Parthasarathy, “Rapid, accurate particle tracking by calculation of radial symmetry centers,” Nature Methods 9:724-726 (2012). [Link]

As of today, this paper has been cited 205 times — a pretty high number. (I’m very fond of this paper, by the way; I think it’s one of the most interesting and useful things I’ve figured out, and it’s my only purely algorithmic / mathematical publication.)

Here’s another:

[2] G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A.A. Abdelalim, O. Abdinov, … , L. Zwalinski, “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Physics Letters B 716: 1-29 (2012). [Link]

This is from a physics department colleague, who is (by all accounts) an excellent scientist and a leader in his field. This paper has a stunning 11,201 citations! It’s also important to note that it has an even more stunning 2932 authors, with 178 different affiliations.

These are extreme examples, but they illustrate real differences between fields even within Physics. Biophysical studies typically involve one or at most a few labs, each with a few people contributing to the project. I’d guess that the average number of co-authors on my papers is about 5. High-energy physics experiments involve vast collaborations, typically with several hundred co-authors.

Is it “better” to have a single-author paper with 205 citations, or a 2900-author paper with 11,000 citations? One could argue that the former is better, since the number of citations per author (or even per institution) is higher. Or one could argue that the latter is better, since the high citation count implies a greater overall impact. Really, though, the question is silly and unanswerable.
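The per-author arithmetic behind that comparison is easy to make explicit; this short sketch just uses the citation and author counts quoted above:

```python
# Citations per author for the two example papers, using the counts
# quoted in the text (a single-author paper vs. a 2932-author
# collaboration). The counts are snapshots from the post itself.
papers = [
    ("Parthasarathy 2012, Nature Methods", 205, 1),
    ("Aad et al. 2012, Physics Letters B", 11201, 2932),
]

for name, citations, authors in papers:
    print(f"{name}: {citations / authors:.1f} citations per author")
```

The ranking flips depending on whether one divides by author count, which is exactly why the question has no principled answer.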

Asking silly questions isn’t just a waste of time, though; it alters the incentives to pursue research in particular directions. If the goal, for example, is maximizing papers-per-author (or citations-per-author), this would reward hiring in areas like biophysics, or theory. If the goal is maximizing papers (or citations) in themselves, this would reward hiring in fields populated by large collaborations. If these metrics were applied everywhere, for example at the other research universities to which we like to compare ourselves, the net result would be an unintended shift towards some areas and away from others.

There are many other issues with counting papers or citations. Papers can be short or long, and norms of what constitutes a “publishable unit” differ between fields. Different journals have different (average) levels of quality, with high variance both between and within them. Some papers are initially highly cited and then forgotten, or the opposite. Despite a cottage industry of various indexes (h-indexes, impact factors, etc.), there’s no metric that captures all of this, nor have any of the existing metrics actually been assessed, as far as I know, as being good measures of “quality.”

Then, why ask for quantitative metrics? I really don’t know. Our university departments are small enough that, I would hope, our administrators have first-hand knowledge of what we’re doing and how well we’re doing it. Also, Physics and other departments do a pretty good (though not perfect) job of assessing what each faculty member is doing, through yearly narrative evaluations. These could be, and perhaps already are, conveyed to the higher-ups. Developing quantitative metrics of things like publications and citations is futile.

Is there anything positive I can say about having something “simple” to point to when guiding the allocation of resources? If I were in charge, I’d pay more attention to external reviews of departments (which happen every decade or so, with little impact as far as I can tell), and also focus more on the small fraction of faculty who are unproductive by any measure, trying to construct carrots or sticks to enhance their activity. These steps would have a larger impact on the university’s research productivity than number-chasing.


Citation trajectories

Posted on behalf of Greg Bothun (Physics)

Measuring the impact of one’s peer-reviewed publications is generally a highly subjective process; it is often assumed that if one publishes in the leading journals of one’s field, then those papers automatically have high impact.

Here I suggest a modest, data-driven process to remove this subjectivity and replace it with a simple, objective procedure that can easily be applied via Google Scholar, since the data is readily available.

This procedure basically involves using the time history of citations as an impact guide. The concept of half-life is relevant here: after a paper is published, there is some timescale over which half of its citations have occurred. Papers with a longer half-life have likely had a larger impact. By way of example, we offer the following kinds of profiles:

  1. A high-impact paper is one that has sustained referencing over time, at a quasi-constant level. Two example time histories of such papers are shown below (note that the shape of the graph is more important than the actual citation numbers).

Paper I: published in 2005, showing not decaying but fairly sustained referencing over 12 years:

Paper II: published in 1986, showing quasi-constant referencing over a 30-year period (yes, it sucks to be old):


  2. The next example is a rarer form, which we will refer to as Pioneer impact. In this case the paper is not well acknowledged near the time of publication but is rediscovered in later years.

In terms of numbers, the period 2000 through 2011 contains half of the total citation count (244), while the other half occurred over the six-year period 2012-2017. This is a qualitatively different citation history than the previous example.

Another example, this time on a shorter citation-lifetime timescale, is shown below:

And here is an example of almost no early impact, where the techniques described in the paper were rediscovered, or re-emerged, 10 years later:

  3. Then there is the case of the average or Moderate Impact paper, which has a decaying citation curve and a relatively short half-life. Examples below:

For the latter case, although there is a long tail of low annual citations, half the citations were earned in a five-year period near the time of publication.


  4. Finally there is the case of the Low Impact paper (like most of mine), which has just a brief burst of annual citations before decaying:

I am not advocating that any of the above procedures be adopted; I am simply saying that information beyond a raw citation count is available and can be used to better characterize impact.
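The half-life idea above can be computed mechanically from a paper’s citations-per-year history. Below is a minimal sketch in Python; the citation curves are made-up illustrations of the “moderate impact” and “high impact” profiles described above, not real Google Scholar data:

```python
def citation_half_life(counts_by_year):
    """Return the number of years, counted from the first recorded year,
    needed to accumulate half of the paper's total citations."""
    years = sorted(counts_by_year)
    total = sum(counts_by_year.values())
    running = 0
    for elapsed, year in enumerate(years, start=1):
        running += counts_by_year[year]
        if running * 2 >= total:  # reached (at least) half the total
            return elapsed
    return len(years)

# Illustrative profiles (year -> citations that year):
decaying = {2005: 40, 2006: 30, 2007: 15, 2008: 8, 2009: 4, 2010: 2, 2011: 1}
sustained = {y: 10 for y in range(2005, 2015)}  # quasi-constant referencing

print(citation_half_life(decaying))   # short half-life: 2 years
print(citation_half_life(sustained))  # longer half-life: 5 years
```

On these toy curves, the decaying paper earns half its citations within two years, while the sustained paper takes five, matching the qualitative distinction drawn in the profiles above.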


Value-Driven Metrics?

Posted on behalf of Elliot Berkman (Psychology)

Following on the article “Applying the Yardstick, Department by Department” that Dr. Bothun recommended on this blog, I was inspired to think through what a more faculty-driven set of metrics might look like. The article quotes Bret Danilowicz, Dean of CAS at Oklahoma State, on the metrics system they implemented there:

Mr. Danilowicz thought there was a better way, starting with getting lower-level administrators, department heads, and faculty members to participate in the assessment process every year.

“To me, the president and provost are too high a level for this,” he says. “If the goal is to give your departments a chance to take a good look at themselves, and with an eye toward improvement, you need to get faculty and chairs more involved in the process.”

A marine biologist by training, Mr. Danilowicz had served as dean of science and technology at Georgia Southern University. Now he wanted to understand the wider range of disciplines he was sizing up at Oklahoma State. It was important, he thought, to develop qualitative measures, not just numbers that could be mashed up.

“I’m a scientist, so grants and publications were very important to me,” he says. But as he talked with chairs and professors outside the sciences, he saw that each discipline brought its own yardstick.

“I came to learn that people in humanities want to review the quality of their scholarship. And arts people value creativity, which is really hard to measure,” he says. “The more I learned, the more it seemed natural to have the departments develop their own criteria and do their own assessments, and for my office to give them my thoughts on what they come up with.”

This process could start with a broad set of values and principles that can differ for each department. For example, in Psychology, I might articulate some of our values as they pertain to scholarship as:

  • High quality, high impact research
  • Diversity, equity, and inclusion in our research
  • Professional training and mentorship
  • Interdisciplinary, team science
  • Open and reproducible scholarship

Note that this is my personal articulation of some of our values and is not necessarily representative or universal. The set of values needs to be a product of the entire department. I can imagine that having a series of high-level conversations within a department about what values it hopes to promote with its scholarship might be a useful and interesting exercise in its own right.

But, for now, let’s start with this initial set of values. How might these be translated into measurable variables? In psychology, for better or worse, the unit of scholarship is the peer-reviewed publication, typically in a journal. So, I can attempt to articulate ways that the values above could be translated into metrics on a per-paper level. For each paper, I can ask the following yes/no questions:

  • Is the paper published in what my department considers to be a high quality, high impact, journal?
  • Is the sample or authorship team diverse, equitable, and inclusive?
  • Is a graduate student or postdoc first-author (or, possibly, any author)?
  • Is the research team interdisciplinary, as defined by bridging subdisciplines or spanning fields?
  • Are the methods open and reproducible?

How would this work with actual papers? Here are a few recent papers from my lab with scores:

  • Cosme, D, *Mobasser, A., Zeithamova, D., Berkman, E.T., & Pfeifer, J.H. (in press). Choosing to regulate: Does choice enhance craving regulation? Social Cognitive and Affective Neuroscience.
    • High-quality journal? ✅ This journal is considered a top journal in my field.
    • Diverse sample or authorship team? ✅ The sample is not particularly diverse but the authorship team is.
    • Student or postdoc first author? ✅ Cosme is a graduate student in psychology. 
    • Interdisciplinary? ✅ The authors span 3 of the 4 areas within our department.
    • Open science? ✅ The data and code are publicly available.
  • Berkman, E.T. (2018). Value-based choice: An integrative, neuroscience-informed model of health goals. Psychology & Health, 33, 40-57.
    • High-quality journal? Eh. It’s a niche journal but not what I’d consider top-tier.
    • Diverse sample or authorship team? Nope.
    • Student or postdoc first author? Nope.
    • Interdisciplinary? The content is sorta interdisciplinary, but the team is not.
    • Open science? This is a theory paper, and it is not open in the sense that the journal is not open access.
  • Giuliani, N.R., Merchant, J.S., *Cosme, D., & Berkman, E.T. (in press). Neural predictors of eating behavior and dietary change. Annals of the New York Academy of Sciences.
    • High-quality journal? Moderate. This journal would probably not make a selective list.
    • Diverse sample or authorship team? ✅ Diverse authorship team.
    • Student or postdoc first author? No, Giuliani is faculty in the College of Education.
    • Interdisciplinary? ✅ Yes, Giuliani is in the COE.
    • Open science? ✅ This is a review, but the paper and some of the materials we used are publicly available.

So what do I make of these ratings? As an author of these papers, this ordering (Cosme et al. > Giuliani et al. > Berkman) comports with my understanding of the “excellence” of these papers as I think of that term. Does this mean I think the Berkman (2018) paper is bad or low-quality? Not at all. In fact, I think there are some good ideas in that paper and that it might be influential in the field (a hypothesis I could test by watching how often it gets cited in the next few years). What it means is that the paper doesn’t advance my department’s values as much as the other two. I still get “credit” for it – it goes toward my publication count – but this system allows for a way to differentiate among my papers. The system prescribes a simple, moderately objective rubric for quickly assessing whether a paper promotes the values that I want to advance with my scholarship.

The incentives for our department are to publish papers that are in the journals we think are good; use diverse samples and authors; are authored by students and postdocs; are cross-disciplinary; and use open data and materials. I can game this system by publishing more papers like Cosme et al. Will I stop writing solo-authored theory pieces like the Berkman (2018) paper? No, because sometimes they’re fun and useful, and there’s not really a direct disincentive for me to write them (again, they still “count” and go on my CV and will be part of my P&T and merit reviews).

What would this process look like in practice? I can imagine that faculty score their own papers annually. When we send our CVs to CAS (which we already do each year), we could score all the new papers we published that year. Perhaps we could cap the score at some number, say 4, even if there are more than 4 values, so there are multiple ways for a paper to achieve the highest possible score. The departmental executive committee (or comparable), which already reviews files as part of merit reviews, could provide a sanity check on the scores produced by faculty.
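As a sketch of how this annual self-scoring might be mechanized, the snippet below encodes the five yes/no questions and the suggested cap of 4 from the discussion above. The variable names and the example papers are illustrative stand-ins, not actual departmental criteria or real scores:

```python
# The five per-paper yes/no questions, as keys (hypothetical names):
VALUES = [
    "high_quality_journal",
    "diverse_sample_or_team",
    "trainee_first_author",
    "interdisciplinary",
    "open_science",
]

def values_score(paper, cap=4):
    """Count the 'yes' answers for a paper, capped below the number of
    values so there is more than one way to reach the top score."""
    yes_count = sum(1 for v in VALUES if paper.get(v, False))
    return min(yes_count, cap)

# Illustrative papers: all five boxes checked vs. none checked.
cosme = dict.fromkeys(VALUES, True)
berkman = dict.fromkeys(VALUES, False)

print(values_score(cosme))    # capped at 4, even with 5 yeses
print(values_score(berkman))  # 0
```

The cap is the interesting design choice: with five values but a maximum score of 4, a paper that misses any one criterion can still earn the top score, which softens the incentive to chase every box on every paper.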

At the department level, what we’d get is a count of total papers produced by the department that year, as well as an average values score for the papers in that department. Perhaps we could also supplement those metrics with some basic and readily available additional data such as citations and media mentions. Those decisions would be made at the department level.

In the end, the average values scores are not interpretable on their own. They would need to be contextualized primarily by year-over-year trends (as a department, we want to see the scores go up next year) and possibly by similar data from comparator departments. We could gather that data for a small number of other departments ourselves or, by making our process open and transparent, encourage other departments to start collecting it themselves.