Process is important

Posted on behalf of Leah Middlebrook (Comparative Literature, Romance Languages)


One element that has risen to the top in our conversations about metrics, both here in the blog and at various meetings, is narrative. The Provost has requested a way to discuss and describe (i.e., narrate) the work we do in our departments and research units. It is interesting (and not surprising) to observe something of a consensus among colleagues across the sciences and the humanities with respect to how that narrative should be assembled. Nearly all of us agree that databases and various kinds of quantitative reports give the illusion of transparency and objectivity, whereas in fact what one learns from the reports generated by these tools is guided by the narratives one draws from them. These narratives are subjective at some stage of the process: whether it is a person interpreting a report or a menu of filters that one selects in order to curate data, people are involved, along with their (our) opinions and judgments, because ultimately people write software, and groups of people edit and update that software (and forgive me if my training in lit. crit. inflects that short summary of how data and software work).

So the snapshots or views or talking points “generated” by databases and quantitative reports are as conditioned and inflected by human conscious and unconscious biases as prose narratives are. What distinguishes many kinds of quantitative, numbers-based reports from other kinds of evaluations is a lack of transparency regarding the minds and subjectivities that condition the rubrics, the filters and, hence, the findings and conclusions of those reports. This situation is problematic in general, and it raises particular concerns when we have set inclusivity, equity, and diversity at the heart of our mission to grow as a university and a community.

To the fine analysis and suggestions proposed elsewhere in this blog, I’d like to add that one solution to the challenges of unconscious and/or implicit bias is process. So the question I hope we can introduce into our discussion is this: What kind of process would facilitate productive, accurate, and equitable evaluation of departments and campus units? As a second point: the university administration appears to be searching for an expedient way to look across units to evaluate and compare our strengths. That makes sense. We are all overworked. So: What kind of evaluative process will conform to our aspirations for equity and inclusiveness while attending to the need for expediency?

This might be a good time to take a new look at the system by which universities across the U.S. collaborate in the work of reviewing candidates for tenure and promotion. The combination of internal and external evaluations, collected in a portfolio and evaluated with care by bodies such as our own FPC, has the virtue of being time-tested and ingrained in academic culture at a moment when we are feeling the lack of congruence between how businesses, on the one hand, and academic and research institutions, on the other, conduct their affairs. For example, a colleague of mine pointed out recently that the academic context changes yearly as new subfields emerge and existing concerns, approaches, and objects of study recede. Taking a snapshot in any given year freeze-frames evaluative criteria that should in fact adapt to changing intellectual horizons (horizons that faculty keep up with by reading in our fields). This acute observation underscores the importance and the value of faculty-based review. How often would an abstract, general metric need to be modified to keep up with how knowledges work “IRL”?

When I raised the model of external review recently, an objection came up: Why would departments consent to having their research profiles evaluated by outside bodies? Wouldn’t they prefer to keep that information internal? The question was delivered in passing, so I am not sure I understood it correctly, but my answer would be that to feed our information into databases is, in fact, to send it out for review. The advantage of working with committees and signed, confidential reports is that it is possible to trace the process by which judgments and opinions are formed, and also to ask questions.

I have worked on both the DAC and the FPC, and I am continually impressed by the professionalism, thoughtfulness, and ethics displayed by nearly all of those who undertake the work of serving on our committees or serving as external evaluators of a file. Furthermore, although I respect my colleagues to the skies (!!), I would maintain that the very good work that gets done through those processes is a function of their design. Each stage of review builds from what has come before but adds new perspectives and new opportunities to ask questions. The work of reviewing takes time and is, in the end, work: work that results, I hasten to add, in a fairly short report that goes to the Provost’s office. But this report is backed by a varied, well-organized file that carries layers of signatures and clear narratives of the process at every step of the way.

A good place to start on this process is the “State of the department” report proposed by Ulrich Mayr in this blog. Periodically (say, every three or five years), this report could go through a process of external review, undertaken via agreements with comparator institutions and departments. As in personnel cases, a number of reviewers would read and consider the department reports, and each would prepare a confidential letter commenting on the strengths and relative standing of the unit with respect to the comparator pool. Finally, the file and letters would be reviewed and considered by an FPC-like body here on campus, and a report would be issued to the Provost’s office. Given the fundamental importance of this work, and the priority we have placed as an institution on fostering equity, inclusivity, and diversity, it is reasonable to expect that the university would commit funds to compensate the work of the relevant committees and external evaluators (it seems likely that these funds would add up to less than the price of many software packages and database subscriptions).


 
