By Robert Voelker-Morris
In 2004, as part of my work on an Institute of Museum and Library Services (IMLS)-funded project for the University of Oregon’s Museum of Natural and Cultural History, I took part in the IMLS’s required Outcome-based Evaluation training. That training was the focus of my June 2004 piece in CultureWork.
Since that training, and the completion of the IMLS-funded project, the direction of my professional work has changed. I now work for the University of Oregon’s Teaching Engagement Program (TEP) and have taught multiple undergraduate courses for the university. Even though my work has largely shifted away from the arts and culture sector, and museum administration specifically, I have found Outcome-based Evaluation to be an essential consideration in my leadership of faculty development programming and my facilitation of feedback and evaluation.
Looking back at the key “foundational” questions of program evaluation:
“How has my program made a difference?”
“How are the lives of the program participants better as a result of my program?”
I see how this foundational framework informs student learning (or any type of learner and learning experience) and can easily be reframed into instructional objectives:
“How has my teaching made a difference?”
“In what ways have the students learned essential skills and content as a result of my class?”
At one level, this reframing could be read as a narrower, more “specialized” focus, but I don’t see it that way. I see it as a simple interchange of “program” with “teaching”; in essence, both serve the same need and endpoint. What one accomplishes with Outcome-based Evaluation is the assessment of one’s educational programming, whether that is an exhibit, an after-school tour program, or a college course.
Many of the Outcome-based Evaluation categories fit well into any type of educational setting. Consider, for instance, the overarching question: what needs to be designed into your course to help learners reach the goals you have for them? In Outcome-based Evaluation these are the “services,” which “are the elements that directly involve the end-users.” Or consider who is being served: who are the students (learners)? Are you designing the “grading” and feedback “to measure all the participants and materials or a specific subgroup, and what special characteristics of the target audience can be utilized to further clarify the group being measured”? Again, these are important questions whether applied to a museum program, an educational performance, or a school or university classroom.
What has really resonated over the years from my Outcome-based Evaluation training is just how universal it can be to any type of learning experience. At a deeper philosophical level, it also reminds me how important it is to share ideas, theories, applications, case studies, and experiences across wide and diverse audiences. The arts and culture sectors do not impact only their own “arts” and “culture” communities; we inform many other areas of our society, and we are powerfully informed and impacted by other sectors in turn. All of this is a wonderful reminder of the power of coalitions, collaborations, and partnerships that involve multiple stakeholders. As I concluded in my original piece, “With education a museum can better serve its public, reaching them from a level that each individual brings to the learning experience.” In this case, the learning experience is the sharing of management experiences, where everyone has something to bring to the table in helping answer the question, “How are the lives of the program participants better as a result of my program?”
Robert Voelker-Morris is a faculty technology consultant in the Teaching Engagement Program at the University of Oregon. He has been affiliated faculty with the university’s Arts & Administration Program, First-Year Programs, and Comics & Cartoon Studies Minor. Robert served as co-editor of CultureWork for 10 years.