What, Why and How of University Metrics

Posted on behalf of Scott Pratt (Executive Vice Provost, Department of Philosophy)


What? 

Metrics (“indicators,” standards of measurement) as used here include two categories of information: Operational and Mission. The former are intended to provide information about faculty teaching workload and departmental cost and efficiency; the latter, information about how well we are achieving our basic missions of teaching and research.

Operational metrics are aggregated at the college, school, and department level and include student credit hours (SCH) and majors per tenure-track faculty member (TTF) and non-tenure-track faculty member (NTTF), number of officer of administration (OA) and classified staff FTE per TTF, average and median class size, and degrees per TTF.
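
To make the arithmetic behind these ratios concrete, here is a minimal sketch in Python using hypothetical department figures. The field names and numbers are illustrative assumptions, not actual UO data or the official calculation.

```python
# Minimal sketch: per-TTF operational ratios from hypothetical department figures.
# All names and numbers are illustrative assumptions, not UO data definitions.
from dataclasses import dataclass

@dataclass
class DeptYear:
    sch: float        # student credit hours taught in the year
    majors: int       # declared majors
    degrees: int      # degrees awarded
    ttf_fte: float    # tenure-track faculty FTE
    nttf_fte: float   # non-tenure-track faculty FTE
    staff_fte: float  # OA + classified staff FTE

def operational_metrics(d: DeptYear) -> dict:
    # Ratios of the kind listed above; class-size figures would come from course-level data.
    return {
        "sch_per_ttf": d.sch / d.ttf_fte,
        "sch_per_nttf": d.sch / d.nttf_fte,
        "majors_per_ttf": d.majors / d.ttf_fte,
        "staff_fte_per_ttf": d.staff_fte / d.ttf_fte,
        "degrees_per_ttf": d.degrees / d.ttf_fte,
    }

example = DeptYear(sch=9000, majors=310, degrees=85, ttf_fte=18.0, nttf_fte=6.5, staff_fte=4.0)
print(operational_metrics(example))
```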

Mission metrics include data regarding undergraduate and graduate education (including serving diverse populations) and, still under development, data regarding faculty research.

Undergraduate data describes the undergraduate program in each college, school and department in terms of number of majors and minors, demographic information, major declaration patterns, graduation rates, and time to degree.

Graduate data describes the graduate program at the college, school, and degree level in terms of completion rate and time to degree, demographic information, admission selectivity, and information regarding student experience.
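
As a concrete illustration of two of these measures, the sketch below computes a cohort completion rate and a median time to degree from hypothetical student records. The cohort and the simple entry-year/completion-year representation are assumptions for illustration, not the reporting methodology used here or by the AAU Data Exchange.

```python
# Minimal sketch: cohort completion rate and median time to degree.
# Hypothetical records only; not actual program data or official definitions.
from statistics import median

# Each record: (entry_year, completion_year), with None if no degree was completed.
cohort = [
    (2010, 2015),
    (2010, 2016),
    (2010, None),
    (2010, 2017),
    (2010, None),
    (2010, 2016),
]

completed = [(start, end) for start, end in cohort if end is not None]
completion_rate = len(completed) / len(cohort)                # 4/6, about 67%
median_ttd = median(end - start for start, end in completed)  # median of [5, 6, 7, 6] = 6

print(f"completion rate: {completion_rate:.0%}; median time to degree: {median_ttd} years")
```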

Research metrics (under development) are data regarding faculty research/creative activity productivity specific to each discipline and subdiscipline, where the data to be collected are specified by the faculty in the field. These so-called “local metrics” are intended to provide a faculty-determined set of measures that describe faculty work both quantitatively and qualitatively. It is clear that no single standard or number applies across all fields, and so whatever metrics are produced will not be reducible to either a single standard or a single number.

It is important to note that research metrics will be revisable over time in response to changes in departments, disciplines, and subfields, to changes in the information available, and as we learn which indicators of progress work well and which do not. The mode of reporting (also still under development) will likewise be revisable. (One approach to reporting is suggested by Ulrich Mayr, Psychology, on this blog.)

Note that PhD completion rate and time to degree are also reported by the AAU Data Exchange at the degree-program level, so UO information can be compared with that of degree programs at other AAU institutions. All other data is comparable over time and, in a limited way, comparable among departments (so that one could, for example, compare data within a school or college, or among departments with similar pedagogy).

Other graduate data is currently being collected through the AAUDE exit survey, so that over the next several years sufficient data will be available to report PhD initial placement, graduate student research productivity (represented by publications), and student assessment of graduate advising support. These data will be available by degree program across all reporting AAU institutions.

Information about faculty service is not currently collected. Since service is a vital part of faculty work, we hope to develop a means of defining and collecting service data so that this can also be reported at the college, school, and department levels.

Why?

There are at least three reasons that operational and mission metrics will be collected. They are (1) external communication/accountability, (2) internal communication/continuous improvement/accountability, and (3) information to help guide the allocation of limited resources.

Public research universities need external communication that provides an account of their work to students and their families, the public, government agencies, disciplines, and other constituencies. While the university already attempts to be accountable as a whole to its mission, it also has some obligation to be accountable in its parts. Diverse academic units support the mission of the university in different ways. A general accounting of the work of the university (which necessarily reduces that work to a few standards) is insufficient to the latter task, and so the ability to account for work accomplished at the department or disciplinary level is vital to ensuring that our constituencies understand the value of both the whole and its parts.

Anecdotal information and “storytelling” are part of this effort, but so are systematically collected data. Whatever information is used to promote communication needs to be presented with explanatory context so that others will understand the differences between programs and what constitutes success. Data for external communications (such as the student success data currently available to the public) must be limited to aggregated information and data sets large enough to ensure anonymity. Research metrics are important to this function because they provide a picture of what our faculty do, especially in programs whose work and expected results are less familiar to the wider public, legislators, and so on.

Internal communication is likewise essential, both to ensure that university and college leadership understand the work done by faculty and so that departments themselves have a shared understanding of their work, their needs, and the meaning of success. Communication with leadership needs to convey both how much work is typical in a particular field and how quality is defined for that field. Some of this information (e.g., student success data) is common enough across disciplines to suggest a general conception of quality using quantitative proxies (e.g., graduation rates), while other quantitative data (e.g., class size, research productivity) requires more explanation and narrower application.

Internal communication also concerns communication with faculty, helping to make department expectations for teaching and research transparent. While review standards are often obscure, faculty nevertheless need a shared sense of what it means to be successful, in aggregate, in their fields. The development, implementation, and regular review of metrics at all levels by faculty and administrative leadership provide a means to foster a shared vision of success; to identify goals, opportunities, and problems; and to determine how best to move forward.

Resources at every university are limited, and allocating them requires both good information and good critical deliberation. Past budgeting systems have relied on formulas that reduce unit quality to two or three indicators for all programs (e.g., SCH, number of majors, number of degrees). Such an approach, when fully implemented, excludes aspects of department success that are not captured in the indicators. When not fully implemented, the need to allocate resources beyond what the limited set of indicators specifies means that allocations must be made ad hoc. Rather than the reductionist approach taken in recent years, the current budget model aims to implement a deliberative model informed by both quantitative and qualitative data.

How?

How the metrics figure in the allocation process varies. In the Institutional Hiring Plan (IHP), the full complement of metrics is to be considered in deciding where particular TTF lines will be created. The IHP is a structured review process: department-generated proposals are vetted by the school or college and then reviewed by the Office of the Provost, a faculty committee, and the deans’ council, with the final decision made by the Provost.

In allocating GE (graduate employee) lines, data regarding teaching needs is combined with graduate student success data and (when available) research data. Decision-making takes into account enrollment goals, student success data, other program data, and regular meetings with deans and Directors of Graduate Study.

The block allocation process (that establishes the base operating budget for each college and school) considers operational metrics and past budget allocations. Block allocations are proposed by the Office of the Provost and negotiated with the individual schools and colleges.

The strategic initiative process will consider, in part, data relevant to the proposals at hand (e.g., undergraduate success data for proposals concerning undergraduate programs, graduate success data for proposals related to new program development). The initiative process involves a faculty committee that makes recommendations to the Provost.

In general, use of the metrics will be guided by our goal of advancing the UO as a liberal arts R1 university. This means that while there are other goals to be met (see below), meeting them must take into account the character and purpose of the UO as a liberal arts university.

  • We should always be working towards improvement.
  • Undergraduate student educational needs must be met.
  • PhD programs must be sustained and improved both in terms of enrollments and placement.
  • Diversity and inclusion must be fostered in the faculty, student population and academic programs.
  • Excellent programs should be supported and expanded where there is a demonstrable possibility of expansion.
  • Less successful programs should receive resources to support evidence-based plans for excellence consistent with the other goals.
  • Programs that are successfully meeting their own and university goals should be supported to continue that success.
  • Programs that are unsuccessful and do not have workable plans for improvement may be eliminated according to the HECC and CBA guidelines.

 
