Posted on behalf of Ulrich Mayr (Psychology)
As is evident from posts on this blog, there is skepticism that we can design a system of quantitative metrics that achieves the goal of comparing departments within campus or across institutions, or that presents a valid basis for communicating about departments’ strengths and weaknesses. The department-specific grading rubrics may seem like a step in the right direction, as they allow idiosyncratic context to be built into the metrics. However, this eliminates any basis for comparison and still preserves all the negative aspects of scoring systems, such as susceptibility to gaming and the danger of trickling down to evaluation at the individual level. I think many of us agree that we would like our faculty to think about producing serious scholarly work, not about how to rack up points on a complex scoring scheme.
Within Psychology, we would therefore like to try an alternative procedure: an annual State of the Department report, made available at the end of every academic year.
Authored by the department head (with help from the executive committee and committee chairs), the report will present a concise summary of past-year activity across all relevant quality dimensions (e.g., research, undergraduate and graduate education, diversity, outreach, contribution to university service). Importantly, the account would marry no-frills, basic quantitative metrics with a contextualizing narrative. For example, the section on research may present the number of peer-reviewed publications or grants acquired during the preceding year; it may compare these numbers to previous years or, where available, to numbers at peer institutions. It can also highlight particularly outstanding contributions as well as areas that need further development.
Currently, we are thinking of a three-part structure: (I) a very short executive summary (one page); (II) a somewhat longer but still concise narrative, potentially including tables or figures for metrics; (III) an appendix that lists all department products (e.g., individual articles, books, grants), similar to a departmental “CV” that covers the previous year.
––When absolutely necessary, the administration can make use of the simple quantitative metrics.
––However, the accompanying narrative provides evaluative context without requiring complex, department-specific scoring systems. This preserves an element of expert judgment (which is, after all, the cornerstone of evaluation in academia) and reduces the risk of decision errors that come from taking numbers at face value.
––One stated goal behind the metrics exercise is to provide a basis for communicating about a department’s standing with external stakeholders (e.g., board members, potential donors). Yet, to many of us it is not obvious how this would be helped by department-specific grading systems. Instead, we believe that the numbers-plus-narrative account provides an obvious starting point for communicating about a department’s strengths and weaknesses.
––Arguably, it is a good idea for departments to engage in such an annual self-evaluation process no matter what. We intend to do this irrespective of the outcome of the metrics discussion, and I have heard rumors that some departments on campus are doing this already. The administration could piggyback on such efforts and provide a standard reporting format to facilitate comparisons across departments.
––One downside: more work for heads (I am done in 2019).