Marrying Metrics and Narratives: The “State of the Department Report”

Posted on behalf of Ulrich Mayr (Psychology)


As is obvious from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that provides a valid basis for communicating about departments’ strengths and weaknesses.  The department-specific grading rubrics may seem like a step in the right direction, as they allow idiosyncratic context to be built into the metrics.  However, this eliminates any basis for comparison and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and the danger of trickling down to evaluation at the individual level.  I think many of us agree that we would like our faculty to think about producing serious scholarly work, not about how to rack up points on a complex scoring scheme.

Within Psychology, we would therefore like to try an alternative procedure: an annual “State of the Department” report, made available at the end of every academic year.

Authored by the department head (with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity with regard to all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service, etc.).  Importantly, the account would marry no-frills, basic quantitative metrics with a contextualizing narrative.  For example, the section on research may present the number of peer-reviewed publications or grants acquired during the preceding year; it may compare these numbers to previous years or, where available, to numbers at peer institutions.  It can also highlight particularly outstanding contributions as well as areas that need further development.

Currently, we are thinking of a 3-part structure: (I) a very short executive summary (1 page); (II) a somewhat longer, but still concise, narrative, potentially including tables or figures for the metrics; (III) an appendix that lists all department products (e.g., individual articles, books, grants, etc.), similar to a departmental “CV” that covers the previous year.

Advantages:

––When absolutely necessary, the administration can make use of the simple quantitative metrics.

––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems.  This preserves an element of expert judgment (which is, after all, the cornerstone of evaluation in academia) and reduces the risk of decision errors that come from taking numbers at face value.

––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors).  Yet, to many of us it is not obvious how this would be helped by department-specific grading systems.  Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.

––Arguably, for departments to engage in such an annual self-evaluation process is a good idea no matter what.  We intend to do this irrespective of the outcome of the metrics discussion, and I have heard rumors that some departments on campus are doing this already.  The administration could piggyback on such efforts and provide a standard reporting format to facilitate comparisons across departments.

Disadvantages:

––More work for heads (I am done in 2019).

REEES Metrics

Posted on behalf of Jenifer Presto (Russian, East European, and Eurasian Studies)


Here is information for calibrating local metrics in REEES.  Perhaps the most important point we highlight in our document is that there are multiple ways of judging quality and impact aside from citations (peer reviews, course adoptions, appearance on syllabi and MA/Ph.D. reading lists).  Moreover, citations in a small field such as our own cannot be compared to citations in a larger field, even one with a similar methodology; they would need to be calibrated somehow to account for the size of the field.
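The document linked below does not pin down a specific calibration method, so purely as an illustration, here is a minimal sketch of one way such a calibration could look, assuming raw citation counts are divided by the average citations per paper in the relevant field; the field names and averages are invented, not REEES data.

```python
# Minimal sketch of one possible field-size calibration (illustration only,
# not REEES policy).  Assumption: divide a paper's raw citation count by the
# average citations per paper in its field, so that 1.0 means "typical for
# that field".  Field names and averages below are invented.

FIELD_AVG_CITATIONS = {
    "small area-studies field": 3.0,     # hypothetical field-wide average
    "large social-science field": 25.0,  # hypothetical
}

def field_normalized_citations(raw_citations: float, field: str) -> float:
    """Citations expressed relative to the field's average citations per paper."""
    return raw_citations / FIELD_AVG_CITATIONS[field]

# Six citations in a small field can signal more relative impact than
# thirty citations in a much larger one.
print(field_normalized_citations(6, "small area-studies field"))     # 2.0
print(field_normalized_citations(30, "large social-science field"))  # 1.2
```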

REEES metrics document (PDF, view or download): https://blogs.uoregon.edu/casmetrics/files/2018/03/REEES-METRICS-Mar-2018-yuya6j.pdf

Political Science Metrics

Posted on behalf of Craig Parsons (Head, Political Science).


Here is a summary of what PS has agreed to do on research metrics, following a faculty vote last Thursday. Let me stress that I was very surprised that we were able to reach agreement, so maybe there are useful hints here for other departments that don’t have an obvious path forward at this point.

Our solution for journals/presses starts from our recently renegotiated Course Load Adjustment policy. That system includes a “top” category of 8 journals and 14 presses that receive “bonus” points, and then gives “normal” points for all other peer-reviewed articles and books.

From there we decided that each faculty member can nominate ONE more “top general” journal, to be added to those existing 8, plus three journals of their choice for a “top specialty” category. There will be some overlap in their suggestions (which we know because we did a partial mock survey before this real one), so with our 20-ish faculty I’m going to bet we end up with 20-ish “top general” journals and 45-ish “top specialty.” Not the tightest list, but not ridiculous.
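For concreteness, here is a minimal sketch of how such nominations could be collapsed into the two journal tiers, assuming each final list is simply the deduplicated union of the existing “top” journals and everyone’s picks; the journal names, the three-item stand-in for the existing 8, and the sample nominations are all invented.

```python
# Minimal sketch: collapse overlapping faculty nominations into tier lists.
# Assumption: each final list is just the deduplicated union of the existing
# "top" journals and all nominations.  All journal names are invented, and the
# three-item set below merely stands in for the department's existing 8.

EXISTING_TOP_GENERAL = {"Journal A", "Journal B", "Journal C"}

def build_journal_tiers(nominations):
    """Each nomination has one 'top_general' pick and three 'top_specialty' picks."""
    top_general = set(EXISTING_TOP_GENERAL)
    top_specialty = set()
    for n in nominations:
        top_general.add(n["top_general"])
        top_specialty.update(n["top_specialty"])
    return top_general, top_specialty

# Two (invented) faculty nominations with overlapping picks.
sample = [
    {"top_general": "Journal D", "top_specialty": ["Journal X", "Journal Y", "Journal Z"]},
    {"top_general": "Journal D", "top_specialty": ["Journal X", "Journal Q", "Journal R"]},
]
general, specialty = build_journal_tiers(sample)
print(sorted(general))    # overlapping picks collapse: the existing three plus "Journal D"
print(sorted(specialty))  # five unique specialty journals, not six
```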

Then—the brilliant suggestion from my colleague Priscilla Yamin, which brought an overall deal within reach—we added an “interdisciplinary venues” category. We have a lot of people who do interdisciplinary research, and were having a lot of trouble with how to fit those options into the top general/top specialty tiers. For this third category, we decided that we would include a set of flagship journals in related areas, plus each faculty could name two other journals.

On presses, since we were already starting from a generous list of 14 “top” presses, we just asked everyone to name three “top specialty” presses. We specified that these must be university presses, which makes sense in political science.

For awards and grants, things were simpler—we’re mostly just shooting for a capacious list—and you can see what we did in the links below.

Of course there are a few more details, but that’s the basic idea. It isn’t going to produce a beautiful hierarchical measure of quality, but no such measure would be acceptable to our department. This process sets fairly reasonable bounds on a deal that lets everyone name the venues they respect the most.
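To make the “bonus”/“normal” distinction from the Course Load Adjustment system concrete, here is an illustrative sketch of how venue tiers could map to points; the post does not give the actual point values, so the numbers below are placeholders rather than departmental policy.

```python
# Illustration only: map an article's venue tier to points.  The actual
# Course Load Adjustment point values are not given in the post, so these
# numbers are placeholders, not PS policy.

TIER_POINTS = {
    "top_general": 3,        # placeholder "bonus" value
    "top_specialty": 2,      # placeholder
    "interdisciplinary": 2,  # placeholder
}
NORMAL_POINTS = 1            # any other peer-reviewed article or book

def article_points(venue_tier):
    """Placeholder points for a peer-reviewed item, by venue tier."""
    return TIER_POINTS.get(venue_tier, NORMAL_POINTS)

print(article_points("top_general"))          # 3
print(article_points("other_peer_reviewed"))  # 1
```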

NOTE: PLEASE DON’T ENTER ANYTHING IN THESE LINKS. THEY ARE FEEDING INTO OUR SURVEY. [VIEW ONLY]

Scholarly Output: https://blogs.uoregon.edu/polisci/wp-admin/admin-ajax.php?action=frm_forms_preview&form=mtury

Awards: https://blogs.uoregon.edu/polisci/wp-admin/admin-ajax.php?action=frm_forms_preview&form=wtjh1

Grants: https://blogs.uoregon.edu/polisci/wp-admin/admin-ajax.php?action=frm_forms_preview&form=tullh