Why I print things on dead trees and hand them out in classes

From the enduringly excellent Nielsen Norman Group comes this article, “Reading Content on Mobile Devices,” by Kate Meyer, which reruns earlier tests of reading speed and accuracy on mobile devices and computers. Folks are getting better at reading on mobile devices; I’ll have to accept that and move on. However, it takes longer and is harder for readers to understand more difficult material on mobile devices. I wonder if this too will change, although I suspect that will take longer. So, for now, if I want students to read and understand relatively complex material, I’ll continue to print it out and bring it to class, since many of us would otherwise be reading it on our mobile devices.

Also, while they recommend open-ended questions instead of closed ones, and I agree with them for some purposes, our work on assessment keeps the questions deliberately closed for useful reasons. For one, it’s more in line with the Kirkpatrick levels. Closed-ended questions keep the options limited for the purposes of assessing learning goals. We hope they also do three of the things on the article’s list of when to ask closed-ended questions:

“When collecting data that must be measured carefully over time, for example with repeated (identical) research efforts

When the set of possible answers is strictly limited for some reason

After you have done enough qualitative research that you have excellent multiple-choice questions that cover most of the cases”

We hope to be doing this over time; the possible answers are limited so they conform to our learning outcomes, and we do hope that they are excellent questions that cover the cases we are interested in measuring.

What happened to Beall’s list?

Update: It was voluntarily closed down due to “threats & politics”. See a bit more at the Chronicle.
From my STS listserv, the following helpful bits of information:
“Rumors via Twitter are that Cabell’s is subsuming Beall’s list. They’ve been incorporating it into their criteria for a couple of years now.”
– Gail P. Clement | Head of Research Services | Caltech Library
What is Cabell’s? Is it yet another subscription? Will Ulrich’s do any of this? Please, pretty please, let them be doing some of this work.
and this helpful report, along with some of the reasons his list was problematic:

“According to the Support for Open Access Publishing website, Beall’s website has been shut down.”

– Gillian Clinton, Principal, Clinton Research. www.clintonresearch.ca
This is another site that seems to be getting some traction: http://thinkchecksubmit.org

Critically thinking about peer review

I’m starting here and mean to return:

Touching on issues that I deal with when teaching about peer review and predatory publishing:

http://www.nytimes.com/2016/12/29/upshot/fake-academe-looking-much-like-the-real-thing.html?_r=0

Currently, I give students examples of popular and scholarly sources in hard copy, unbound. I ask them to construct a grid of criteria to identify the characteristics of each. I also give them some less clear-cut examples, mostly newsletters from reputable sources and trade publications, that have good and accurate information but aren’t peer-reviewed.

And this proposed new course: Calling Bullshit in the Age of Big Data

Stop using Likert scales, write better learning outcomes and more!

Our article is available for everyone to enjoy here:

Turnbow, D., & Zeidman-Karpinski, A. (2016). Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment That Matters. Communications in Information Literacy, 10(2). Retrieved January 7, 2017, from http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=v10i2p143

Thank you for all it took to make this possible. It was years in the making and several years to write it all down. We couldn’t be more grateful to the amazing librarians at Radford University for hosting the most terrific conference, The Innovative Library Classroom, and letting us present this work in its initial stages there.

Fake news, critical thinking, reliable sources

Some ideas for student activities:

http://learning.blogs.nytimes.com/2015/10/02/skills-and-strategies-fake-news-vs-real-news-determining-the-reliability-of-sources/

and http://learning.blogs.nytimes.com/2014/10/08/guest-post-practical-tools-for-teaching-news-literacy/

More on the fake news-makers:

http://www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs

Slate’s answer for finding fake news on FB:

http://www.slate.com/articles/technology/technology/2016/12/introducing_this_is_fake_slate_s_tool_for_stopping_fake_news_on_facebook.html

Crummy sites:

crummy sites to use for evaluation

And

“False, Misleading, Clickbait-y, and Satirical ‘News’ Sources” by Melissa Zimdars

Fake news, reliable sources libguide from Nathan Rinne

Construct your own matrix:

Benjes-Small, C., Archer, A., Tucker, K., Vassady, L., & Resor Whicker, J. (2013). Teaching Web Evaluation: A Cognitive Development Approach. Communications in Information Literacy, 7(1). Retrieved March 6, 2014, from http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=v7i1p39

Especially Appendix A

and: Alyssa Archer and Candice Benjes-Small also developed “Evaluating news sites: Credible or Clickbait?” at http://www.projectcora.org/assignment/evaluating-news-sites-credible-or-clickbait


An infographic, “A decent breakdown of all things real and fake news,” is here: http://i.imgur.com/7xHaUXf.jpg

This one includes sources for verifying news: https://greenwichfreepress.com/news/darien-reference-librarian-shares-ways-to-spot-fake-news-click-bait-satire-79507/

A nice overview:

“Campuses, she said, will have to ‘either put our money where our mouths are and follow through on this, or accept that our students are not going to be as information- and media-literate as we believe they should be.’”

http://www.chronicle.com/article/How-Can-Students-Be-Taught-to/238652

Stop using Likert scales: 3-minute version

Description:

I’ll present a performance-based assessment model that produces actionable data as an alternative to Likert scale questions. [longer article forthcoming in the journal linked below]

Dominique Turnbow and I have been working on developing performance-based replies for our end-of-class evaluations, based on work by Will Thalheimer, who does this in business settings. We have a lot to say about it in an upcoming article that will be in Communications in Information Literacy; we’ll be sure to let you know when you can read all about it. We suggest turning your Likert scales into something more useful. Dominique and Amanda Roth, both at UCSD, have this lovely example in Qualtrics.

Or like this in Google Forms.

I have also printed out copies and brought them to class for students to fill out; when I don’t have students working on computers, that’s a good strategy. Qualtrics is so much nicer, but anything will work. Whenever possible I use our LMS (now Canvas) to make a simple survey for students to fill out. We don’t have a formal system for reporting our results, but I can use them to see what worked and what didn’t, and that becomes my own documentation of my instruction for contract renewals and performance reviews. Here is one that I used:

If I haven’t convinced you about this, that’s OK, but please use Qualtrics, which lets you label each reply, NOT Google Forms, for Likert scales.

Please consider reading the full published article, which is now available.

COPUS in library instruction

The instrument for recording is the COPUS.xlsx file in the folder linked here.

You will need some training first. I am happy to show you what I have learned.

Using another department’s template, I was able to make two documents for two different library instruction sessions with two different instructors. One was a single session within a full-term class, and the other was a one-off session.

The teaching inventory would work for the full-term class, but not as well for the one-off class. I’m working on a version that will work for one-off classes, but any changes will mean that it’s not nationally calibrated. I am open to ideas.
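For what it’s worth, here is a minimal sketch of how I might tally a session once the observations are exported, assuming a CSV with one row per two-minute interval and one 0/1 column per behavior code. The file name and column layout are made up for illustration; this is not the actual structure of the COPUS.xlsx instrument.

import pandas as pd

# Assumed export (hypothetical layout): one row per 2-minute interval and one
# 0/1 column per COPUS-style code, e.g. student codes "L", "Ind", "CG" and
# instructor codes "Lec", "FUp".
obs = pd.read_csv("copus_session1.csv")  # hypothetical file name

code_columns = [c for c in obs.columns if c != "interval"]
# The mean of a 0/1 column is the fraction of intervals in which that code was marked.
percent_of_intervals = obs[code_columns].mean() * 100

print(percent_of_intervals.sort_values(ascending=False).round(1))

Something like that per-code percentage (how much of the session was lecture versus students doing something) is the kind of summary I would want in those session documents anyway.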

Why active learning? Part 1

For the beginner:

I have gone so far down the path of figuring out how to use active learning effectively in a classroom that I was more than a bit taken aback recently when I had to defend the reasoning behind it: active learning for understanding material, for acquiring skills (which we do a lot of in our one-off sessions), and also for understanding important concepts in a field. It matters especially when you are trying to change what learners feel or think is true: not everything is on Google, the world is complicated, etc., etc.

While we might change our minds in another 10 years, here is what we know now:

Across STEM fields and at all levels, students fail less often and learn more when classes involve more active learning and less lecture. Female students and those from disadvantaged backgrounds gain even more from active learning classes than from traditional lectures.

“The authors examined two outcome measures: the failure rate in courses and the performance on tests. They found the average failure rate decreased from 34% with traditional lecturing to 22% with active learning (Fig. 1A), whereas performance on identical or comparable tests increased by nearly half the SD of the test scores (an effect size of 0.47).”

and

“However, in undergraduate STEM education, we have the curious situation that, although more effective teaching methods have been overwhelmingly demonstrated, most STEM courses are still taught by lectures—the pedagogical equivalent of bloodletting. Should the goals of STEM education research be to find more effective ways for students to learn or to provide additional evidence to convince faculty and institutions to change how they are teaching? Freeman et al. do not answer that question, but their findings make it clear that it cannot be avoided.”

from Wieman, C. E. (2014). Large-scale comparison of science teaching methods sends clear message. Proceedings of the National Academy of Sciences, 111(23), 8319–8320. doi:10.1073/pnas.1407304111

Referring to this work: Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. doi:10.1073/pnas.1319030111
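To unpack the two figures quoted above (my own arithmetic on the reported numbers, nothing beyond what the paper states): the 0.47 is a standard Cohen’s d, the difference in mean exam scores expressed in units of the pooled standard deviation, and the failure rates imply roughly a one-third relative reduction in failures.

\[ d = \frac{\bar{x}_{\text{active}} - \bar{x}_{\text{lecture}}}{s_{\text{pooled}}} \approx 0.47 \qquad \frac{0.34 - 0.22}{0.34} \approx 0.35 \]

In other words, the average student in an active learning section scores about half a standard deviation higher on the same test, and roughly a third of the failures seen under straight lecturing go away.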

Classrooms

Active learning in classrooms designed for it. I wish there were more references here, but it seems clear that students are happier in active learning classrooms, and they think they are learning more (perceived learning). Likely there is some study with better data about actual learning and interactivity (see slide 12). I am not sure what “higher-order active learning” might refer to.

Pearling and footnote chasing

A recent post on the Medical Libraries Discussion List suggests that this is yet another topic that might need a new article. I learned the following in library school, from the ever-so-clever Dr. Marcia Bates.

footnote chasing (Bates, 1989) or backwards chaining (Ellis, 1989)

citation searching (Bates, 1989) – I could swear that I learned “pearling” in library school, but maybe not from Dr. Bates? Booth (2008)* is a likely suspect, but I still don’t know where I got pearling from.

berrypicking (Bates, 1989)  – “Browsing is undirected, while berrypicking is more directed.” (Bates, 2002)

My attempts at representing these ideas in PowerPoint slides. Feel free to use, share, and remix: pearling footnote chasing

*Booth (2008) makes a distinction between “citation pearl growing” and “citation searching.” What I call pearling is what he calls “citation searching.” I will figure out where to put his citation pearl growing (building search terms from a relevant article) in my model. I also think he mischaracterizes Bates’ summary of berrypicking on p.

Bates, M. (2002). Toward An Integrated Model of Information Seeking and Searching. Retrieved February 7, 2016, from https://pages.gseis.ucla.edu/faculty/bates/articles/info_SeekSearch-i-030329.html

Bates, M. (1989). The design of browsing and berrypicking techniques for the online search interface. Online Review. Retrieved from http://www.emeraldinsight.com/doi/abs/10.1108/eb024320  [http://comminfo.rutgers.edu/~tefko/Courses/e530/Readings/Bates_Berrypicking.pdf]

Booth, A. (2008). Unpacking your literature search toolbox: on search styles and tactics. Health Information and Libraries Journal, 25(4), 313–7. doi:10.1111/j.1471-1842.2008.00825.x

Ellis, D. (1989). A behavioural approach to information retrieval system design. Journal of Documentation. Retrieved from http://www.emeraldinsight.com/doi/abs/10.1108/eb026843

Pearling and footnote chasing: list summary

From the recent discussion on the Medical Libraries Discussion List:
Pearl growing/building/purling/citation pearling – citation associations (or keyword associations through pearl growing) can be conceptually incestuous when it comes to building a comprehensive search. For example, you would be unlikely to see someone who uses a behavioural approach to intervention citing a cognitive theorist, or vice versa. In either case, if you stayed within one paradigm you’d miss important points of view and evidence. Also: reviewing and using additional index or key terms.
citation tracking/tracking/searching/chasing/referencing/mining.
Ancestral searching
tree – bread crumbing – staircase
In The Oxford Guide to Library Research by Thomas Mann (4th edition, 2015), Chapter Six is titled Citation Search, and Mann uses the term Citation Searching. I and other faculty have used this text in our library school online research classes at the SJSU School of Information.
Citation Searching
http://hlwiki.slais.ubc.ca/index.php/Snowballing
Snowballing is sometimes referred to as *reference harvesting* or *pearl growing search method*; here are some other terms that evoke similar kinds of search strategies:
– forward citation searching, footnote chasing, reference scanning, reference harvesting, hand-searching & powerbrowsing
– backward chaining, forward chaining, digital browsing, footnote chasing
– pearl growing, reference harvesting, reference lists, reference searches, ‘cited by searching’ … and so on
Bates articulates 29 different search tactics in an era before online databases:
Bates, M. (1979). Information search tactics. Journal of the American Society for Information  …. Retrieved from http://onlinelibrary.wiley.com/doi/10.1002/asi.4630300406/full  [https://pages.gseis.ucla.edu/faculty/bates/articles/Information%20Search%20Tactics.html]