Faculty Success Program: Pedagogy

Resources: if you have concerns about student evaluations, TEP has resources for you on their website.

Backward design: Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses (Rev. and updated ed., Jossey-Bass higher and adult education series). San Francisco: Jossey-Bass. Online library copy.

Think of the story arc for the course: the big picture, but also the details, with some flexibility. Address different learning styles in the classroom: lecture, calling out, turn to your neighbor, group work, whole-class discussion. Have some narrative closure, and also keep things current.

Tiny tools for studio-based classes, and for other kinds of classes too. The class is a journey: students have to go somewhere, and they need to check in on where they are with respect to the subject matter. Mark the beginning by having them share their narrative. Use a memo they can send at any time, marked either OK to share or private, about how the course is landing and showing up in their own lives. Around mid-term, use stop-start-continue. Stop: what should we stop doing? Start: what aren't we doing that we could be? Continue: what's working?

Towards the end of the course: draw a timeline on the board showing speakers, assignments, etc. Use guided meditation/journaling: think about the first day of class, then think forward to now. Who are you now? Identify two points of transformation. Share in small groups. This gives students tools while they are learning.

Teaching methods: group work. There is a tension between having each person understand everything and a split where some people do all the work and others slide by. Use a self-evaluation: give a group grade on the project, plus an individual paper responding to the group's work that earns an individual grade. Have students reflect on what they did and turn that in. Students don't like group projects, but the projects teach real-world skills; tell them that at the beginning.

Have a group communicator who can report to the instructor.

Look in the literature. The biggest take-home: don't give them a project they could do on their own!

TEP has a handout; Sierra has something from a MOOC to share.

Peer evaluations: what are those like? See more here: http://academicaffairs.uoregon.edu/content/peer-evaluation-teaching

The same link includes tools, such as a research-based observation rubric. Meet beforehand to talk about the syllabus and the goals, and how they connect.

Make a commitment to do them systematically. It's easier when you know what you're trying to do and have a clear, organized process.

Ask TEP to come and do confidential observations: loose notes, not the official evaluation process.

Strategies for dealing with bias in student evaluations:

Head off the all-caps emails and the "Miss Gash" or first-name emails: introduce yourself as Professor Gash, and require that all emails/communication be addressed to Prof. —-. This gives students a moment to remember who I am; the things you might say to Professor Gash might be different from what you'd say to Alison. The hope is that if you think about this as you start, you won't say things you might later regret.

Mike Urbancic found this awesome flowchart from Andrea Eidinger to use as a slide for his class.

It puts students on notice, explicitly, about a lot of things.

Find positive ways to share your intellectual journey and research identity. Follow a question to its conclusion. Librarians are interested in teaching these frames (and more): Scholarship as Conversation and Research as Inquiry!

Teaching a 400/500-level class: I have some ideas, but would love tips.

Teaching to both at once didn't work; I taught toward the undergrads and added extras for the grad students (more work for me, e.g., a research paper). It was never successful the other way around.

Some resented having to teach two classes for the price of one. Have grad students be mentors to undergrads; it depends on the mix, too. Have grad students share what they discuss with the undergrads.

Invite grad students to use the course for their own insight. Have them observe the arc of the discussion instead of participating, and then report back.

Be careful not to use them as unpaid GEs!

Help them figure out their goals and design the syllabus around their own interests.

Sierra Dawson's blog for teaching large classes. [mentioned after the formal panel]

TEP handout for mid-term evaluations and upcoming TEP events.

Why I print things on dead trees and hand them out in classes

From the enduringly excellent Nielsen Norman Group is this article, "Reading Content on Mobile Devices," by Kate Meyer, which repeats tests of reading speed and accuracy on mobile devices and computers. People are getting better at reading on mobile devices; I'll have to accept that and move on. However, it takes longer and is harder for readers to understand more difficult material on mobile devices. I wonder if this too will change, although I suspect that will take longer. So, for now, if I want students to read and understand relatively complex material, I'll continue to print it out and bring it to class, since otherwise many of us will be reading it on our mobile devices.

Also, while they recommend open-ended questions instead of closed ones, and I agree with them for some purposes, I think our work on assessment keeps the questions deliberately closed for useful reasons. For one, it's more in line with the Kirkpatrick levels. Closed-ended questions keep the options limited for the purposes of learning and assessing goals. We hope they also do these three things, from the article's list of when to ask closed-ended questions:

“When collecting data that must be measured carefully over time, for example with repeated (identical) research efforts

When the set of possible answers is strictly limited for some reason

After you have done enough qualitative research that you have excellent multiple-choice questions that cover most of the cases”

We hope to be doing this over time; the possible answers are limited to conform to our learning outcomes; and we do hope they are excellent questions that cover the cases we are interested in measuring.
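To make that concrete, here is a hypothetical example (mine, not from the article or our study): instead of the open question "What did you learn today?" or a Likert confidence rating, ask something like "You need two peer-reviewed articles for your report. Where would you look first? (a) Google, (b) a library database, (c) the library catalog, (d) Wikipedia." The options are limited, they map onto a learning outcome, and the identical question can be repeated term after term.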

What happened to Beall’s list?

Update: It was voluntarily shut down due to "threats & politics." See a bit more at the Chronicle.
From my STS listserv, the following helpful bits of information:
“Rumors via twitter are that Cabell’s is subsuming Beall’s list.  They’ve been incorporating it into their criteria for a couple of years now.
-Gail P. Clement  | Head of Research Services  | Caltech Library
What is Cabell's? Is it yet another subscription? Will Ulrich's do any of this? Please, pretty please, let them be doing some of this work.
and this helpful report, along with some of the reasons his list was problematic:

“According to the Support for Open Access Publishing website, Beall’s website has been shut down.

– Gillian Clinton, Principal. Clinton Research.   www.clintonresearch.ca
This is another site that seems to be getting some traction: http://thinkchecksubmit.org

Critically thinking about peer review

I’m starting here and mean to return:

Touching on issues that I deal with when teaching about peer review and predatory publishing:

http://www.nytimes.com/2016/12/29/upshot/fake-academe-looking-much-like-the-real-thing.html?_r=0

Currently, I give students examples of popular and scholarly sources in hard-copy, unbound format. I ask them to construct a grid of criteria to determine the characteristics of each. I also give them some less clear examples of literature, mostly newsletters from reputable sources and trade publications, that have good and accurate information but aren't peer-reviewed. I just tried adding more direction: Who is the audience? Who are the authors?
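For illustration only (the point is that students generate their own criteria), a grid might start along these lines:

Audience: researchers (scholarly) vs. general readers (popular) vs. practitioners (trade)
Authors: credentialed experts vs. journalists vs. industry writers
References: extensive citations vs. few or none vs. occasional
Review process: peer review vs. editors only vs. editors only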

Article with some interesting history of peer review, with an irresistible lead: http://physicstoday.scitation.org/doi/full/10.1063/PT.3.3463

And this proposed new course: Calling Bullshit in the Age of Big Data

Stop using Likert scales, write better learning outcomes, and more!

Our article is available for everyone to enjoy here:

Turnbow, D., & Zeidman-Karpinski, A. (2016). Don't Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment That Matters. Communications in Information Literacy, 10(2). Retrieved January 7, 2017, from http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=v10i2p143

Thank you for all it took to make this possible. It was years in the making and several years to write it all down. We couldn’t be more grateful to the amazing librarians at Radford University for hosting the most terrific conference, The Innovative Library Classroom, and letting us present this work in its initial stages there.

Fake news, critical thinking, reliable sources

Some ideas for student activities:

http://learning.blogs.nytimes.com/2015/10/02/skills-and-strategies-fake-news-vs-real-news-determining-the-reliability-of-sources/

and http://learning.blogs.nytimes.com/2014/10/08/guest-post-practical-tools-for-teaching-news-literacy/

More on the fake news-makers:

http://www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs

Slate’s answer for finding fake news on FB:

http://www.slate.com/articles/technology/technology/2016/12/introducing_this_is_fake_slate_s_tool_for_stopping_fake_news_on_facebook.html

Crummy sites:

crummy sites to use for evaluation

And

'False, Misleading, Clickbait-y, and Satirical "News" Sources' by Melissa Zimdars

Fake news, reliable sources libguide from Nathan Rinne

Construct your own matrix:

Benjes-Small, C., Archer, A., Tucker, K., Vassady, L., & Resor Whicker, J. (2013). Teaching Web Evaluation: A Cognitive Development Approach. Communications In Information Literacy, 7(1). Retrieved March 6, 2014, from http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=v7i1p39

Especially Appendix A

And Alyssa Archer and Candice Benjes-Small also developed "Evaluating news sites: Credible or Clickbait?" at http://www.projectcora.org/assignment/evaluating-news-sites-credible-or-clickbait


An infographic, "A decent breakdown of all things real and fake news," here: http://i.imgur.com/7xHaUXf.jpg

The accompanying article includes sources for verifying news: https://greenwichfreepress.com/news/darien-reference-librarian-shares-ways-to-spot-fake-news-click-bait-satire-79507/

A nice overview:

"Campuses, she said, will have to 'either put our money where our mouths are and follow through on this, or accept that our students are not going to be as information- and media-literate as we believe they should be.'"

http://www.chronicle.com/article/How-Can-Students-Be-Taught-to/238652

Stop using Likert scales: 3 minute version

Description:

I’ll present a performance-based assessment model that produces actionable data as an alternative to Likert scale questions. [longer article forthcoming in the journal linked below]

Dominique Turnbow and I have been working on developing performance-based replies for our end-of-class evaluations, based on work by Will Thalheimer, who does this in business settings. We have a lot to say about this in an upcoming article in Communications in Information Literacy; we'll be sure to let you know when you can read all about it. We suggest turning your Likert scales into something more useful. Dominique and Amanda Roth, both at UCSD, have this lovely example in Qualtrics.

Or like this in Google Forms.

I have also printed out copies and brought them to class for students to fill out; that's a good strategy when I don't have students working on computers. Qualtrics is much nicer, but anything will work. Whenever possible I use our LMS (it's now Canvas) to make a simple survey for students to fill out. We don't have a formal system for reporting our results, but I can use these results to see what worked and what didn't, and they serve as documentation of my instruction for contract renewals and performance reviews. Here is one that I used:

If I haven't convinced you about this, that's OK, but for Likert scales please use Qualtrics, which lets you label each reply, NOT Google Forms.

Please consider reading the full published article, now available.

COPUS in Library instruction

The recording instrument is the file named COPUS.xlsx in the folder linked here.

You will need some training first. I am happy to show you what I have learned.

Using another department's template, I was able to make two documents for two different library instruction sessions with two different instructors: one was a single class within a full-term course, and the other was a one-off session.

The teaching inventory would work for the full-term class, but not as well for the one-off class. I'm working on a version that will work for one-off classes, but any changes will mean that it's no longer nationally calibrated. I am open to ideas.
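If you export a completed observation sheet to CSV, tallying it is easy to script. Here is a minimal sketch in Python, assuming one row per 2-minute interval and one 0/1 column per COPUS code; the file name and column layout are my assumptions, not part of the official instrument:

import csv
from collections import Counter

def copus_summary(path):
    # Count how many 2-minute intervals each COPUS code was marked in.
    counts = Counter()
    total = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            for code, marked in row.items():
                if (marked or "").strip() == "1":
                    counts[code] += 1
    # Report each code's share of the observed intervals.
    for code, n in counts.most_common():
        print(f"{code}: {n}/{total} intervals ({100 * n / total:.0f}%)")

copus_summary("copus_session.csv")  # hypothetical export of one session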

Why active learning? part 1

For the beginner:

I have gone so far down the path of figuring out how to use active learning effectively in a classroom that I was more than a bit taken aback recently when I had to defend the reasoning behind it. Active learning works for understanding material and for acquiring skills (which we do a lot of in our one-off sessions), but also for understanding important concepts in a field. This is especially true when you are trying to change thinking and patterns about things the learners feel or think are true: everything isn't on Google, the world is complicated, etc., etc.

While we might change our minds in another 10 years, here is what we know now:

Across STEM fields and at all levels, students fail less and learn more when classes involve more active learning and less lecture. Female students and those from disadvantaged backgrounds do even better in active-learning classes than in traditional lectures.

“The authors examined two outcome measures: the failure rate in courses and the performance on tests. They found the average failure rate decreased from 34% with traditional lecturing to 22% with active learning (Fig. 1A), whereas performance on identical or comparable tests increased by nearly half the SD of the test scores (an effect size of 0.47).”

and

“However, in undergraduate STEM education, we have the curious situation that, although more effective teaching methods have been overwhelmingly demonstrated, most STEM courses are still taught by lectures—the pedagogical equivalent of bloodletting. Should the goals of STEM education research be to find more effective ways for students to learn or to provide additional evidence to convince faculty and institutions to change how they are teaching? Freeman et al. do not answer that question, but their findings make it clear that it cannot be avoided.”

from Wieman, C. E. (2014). Large-scale comparison of science teaching methods sends clear message. Proceedings of the National Academy of Sciences, 111(23), 8319–8320. doi:10.1073/pnas.1407304111

Referring to this work: Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. doi:10.1073/pnas.1319030111
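To put those numbers in perspective (my arithmetic, with an assumed scale): the failure rate dropping from 34% to 22% is 12 percentage points, or roughly one-third fewer students failing. And an effect size of 0.47 means scores rose by about half a standard deviation, so on an exam with, say, a 10-point SD, the average student in an active-learning section scores about 4.7 points higher on the identical test.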

Classrooms

Active learning in classrooms designed for it. I wish there were more references here, but it seems clear that students are happier in active-learning classrooms, and they think they are learning more (perceived learning). Likely there is some study with better data about actual learning and interactivity (see slide 12). I am not sure what higher-order active learning might refer to.

Pearling and footnote chasing

A recent post on the Medical Libraries Discussion List suggests that this is yet another topic that might need a new article. I learned the following in library school, from the ever-so-clever Dr. Marcia Bates.

footnote chasing (Bates, 1989) or backwards chaining (Ellis, 1989)

citation searching (Bates, 1989) – I could swear that I learned "pearling" in library school, but maybe not from Dr. Bates? Booth (2008)* is a likely suspect, but I still don't know where I got pearling from.

berrypicking (Bates, 1989)  – “Browsing is undirected, while berrypicking is more directed.” (Bates, 2002)

My attempts at representing these ideas in PowerPoint slides. Feel free to use, share, and remix: pearling footnote chasing

*Booth (2008) makes a distinction between "citation pearl growing" and "citation searching." What I call pearling is what he calls "citation searching." I will figure out where to put his citation pearl growing (building search terms from a relevant article) in my model. I also think he mischaracterizes Bates' summary of berrypicking on p.

Bates, M. (2002). Toward An Integrated Model of Information Seeking and Searching. Retrieved February 7, 2016, from https://pages.gseis.ucla.edu/faculty/bates/articles/info_SeekSearch-i-030329.html

Bates, M. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review. Retrieved from http://www.emeraldinsight.com/doi/abs/10.1108/eb024320  [http://comminfo.rutgers.edu/~tefko/Courses/e530/Readings/Bates_Berrypicking.pdf]

Booth, A. (2008). Unpacking your literature search toolbox: on search styles and tactics. Health Information and Libraries Journal, 25(4), 313–7. doi:10.1111/j.1471-1842.2008.00825.x

Ellis, D. (1989). A behavioural approach to information retrieval system design. Journal of Documentation. Retrieved from http://www.emeraldinsight.com/doi/abs/10.1108/eb026843