Statistics and methods classes at UO

Contributed by Rose Maier and John Flournoy.

There are stats and research methods classes offered by several departments around campus, but it can be hard to know which classes to take, or who to ask for advice. This is a venue for us to pool our knowledge about stats and methods courses offered around campus.

After the jump, you’ll find an extensive, but likely incomplete, list of statistics and methods courses at UO. If you’re aware of any that are missing, especially courses listed under the 607 or 610 numbers, please speak up in the comments or email us.

If you have taken any quantitative classes outside of psych, please take a moment to write up a little information about your experiences. In particular, please try to include the following:

  1. The name and course number of the class, including department code (e.g. “LING610: Empirical Methods II”)
  2. The instructor’s name
  3. The strengths and weaknesses of the class, from your own experience as a psychology student (e.g. applicability to psych research, level of redundancy with psych stats classes like 611/612/613, etc.). If you have a copy of the syllabus, consider uploading that with your post.

(Also please keep in mind that this is a public blog, not an anonymous eval form.)

Hopefully this will turn out to be a handy little resource! Thanks for your help!

This Thursday – analyzing a longitudinal intervention study

This Thursday, Sept 13, Ted Bell and Mandy Hampton Wray will be presenting a data analysis problem involving a longitudinal intervention study. Ted wrote out some notes summarizing the problem; they should give you the gist of what they’re looking for. See you all Thursday!

*****

Longitudinal data sets from an intervention.

Time series:

Pre-test, Post-test, Longitudinal 1, Longitudinal 2, Longitudinal 3

Between-subjects factor: intervention vs. control group

Considerations:

Testing dates are not on a fixed schedule; the intervals between the post-test, Longitudinal 1, and the later waves vary considerably across subjects.

Some data points are missing.

Many potential dependent variables.

Many potential predictors and individual subject characteristics.

Important covariates: age, gender, pre-test score?

We have some dyadic interaction data for families and children.

Questions of interest:

Do our treatment groups differ from controls longitudinally on any of a variety of continuous and categorical variables?

Does the above interact with any subject-level factor, such as pre-test score, intervention gains, intervention fidelity, age, or gender?

Stats questions:

What are the most straightforward statistical techniques to apply in these situations?

What programs/platforms would you recommend?
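
To seed the discussion, one common starting point for data like these is a growth-curve mixed model (a.k.a. multilevel model) with time treated as a continuous variable. Because the model uses each subject’s actual testing dates and every available row, it accommodates both the irregular scheduling and the missing occasions (under a missing-at-random assumption). Below is a minimal sketch using Python’s statsmodels; the file name and all column names are hypothetical, and the same model is straightforward to fit in R or SAS.

```python
# Minimal sketch: growth-curve mixed model for a longitudinal intervention.
# Assumes a long-format table with one row per subject per testing occasion.
# The file name and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal.csv")  # columns: subject, group, months, score, age, gender

# Treating time as continuous (months since pre-test) handles the irregular
# testing dates; the model uses all available rows, so subjects with missing
# occasions still contribute (assuming data are missing at random).
model = smf.mixedlm(
    "score ~ months * group + age + gender",  # months:group tests the treatment effect
    data=df,
    groups=df["subject"],  # random effects grouped by subject
    re_formula="~months",  # random intercept and slope for each subject
)
result = model.fit()
print(result.summary())
```

The months:group term is the key test: it asks whether the treatment and control trajectories diverge over time. You would fit one such model per dependent variable, and subject-level moderators (pre-test score, intervention fidelity, age, gender) can be added as interactions with time and group.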


Followup to our discussion of Simonsohn’s fraud paper

Here are a few links following up on things that came up in our discussion today.

1. For those of you who are interested in learning more about the Diederik Stapel fraud case, here is a collection of links to commentaries about it.

2. Regarding Dirk Smeesters, I mentioned that when the case was first revealed, one of his collaborators posted a lengthy comment about it at Retraction Watch, including his own involvement and some pretty raw emotions. (See also here at Wired.)

3. Some interesting parallels between Simonsohn’s work and that of an earlier fraud detective, NIH biologist Walter Stewart.

*****

The paper we discussed today dealt with detecting outright fraud — fabrication of data out of nothing. But some other issues came up today related to other aspects of research integrity and quality…

5. I mentioned today that there’s been some work on the problem of improbable strings of successful replications in multi-study papers, which can be interpreted as evidence of publication bias. Some important statistical work on this subject was done by Ioannidis and Trikalinos. Lately Greg Francis has been applying these methods to papers in psychology; see the “Statistics” section of his publications page (but note that Francis’s use of the I&T method is controversial: several of his papers have drawn rebuttals, and Simonsohn wrote a critique of how he applies it). But I think an absolutely terrific paper on this issue is an in-press paper by Uli Schimmack titled The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles. (That paper might be a good one to read for a future meeting.)
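
To make the logic concrete, here is a toy calculation (just the core idea, not the full I&T procedure, which estimates power from the observed effect sizes): if each study in a multi-study paper has modest power, an unbroken run of significant results is itself improbable. All the numbers below are invented for illustration.

```python
# Toy illustration of the excess-significance idea (not the exact
# Ioannidis & Trikalinos procedure): under a shared power estimate, the
# count of significant studies should follow a binomial distribution.
from scipy.stats import binom

n_studies = 10         # hypothetical multi-study paper
n_significant = 10     # all ten studies "worked"
estimated_power = 0.5  # hypothetical average power of the studies

# Probability of at least this many significant results, given that power
p = binom.sf(n_significant - 1, n_studies, estimated_power)
print(f"P(>= {n_significant} of {n_studies} significant | power = {estimated_power}) = {p:.5f}")
# ~0.001: a run of successes this long suggests that something besides
# luck (publication bias, questionable practices) is at work.
```

This is essentially Schimmack’s point as well: ten out of ten significant results from studies with 50% power should make a reader less confident, not more.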

6. More broadly on the issue of cutting corners and questionable research practices, there are many, many resources out there. Here are a few recent papers dealing with these issues:

– An article on “False-Positive Psychology” that argues that flexible, undisclosed choices in data collection and analysis (“researcher degrees of freedom”) make it possible to get almost any result; a small simulation after this list illustrates the point. Simonsohn was one of the authors of that paper. (See also this discussion by the Psych Science editorial board on whether to implement the paper’s recommendations, and some similar recommendations by Russ Poldrack about flexibility in fMRI analysis.)

– A survey of the prevalence of various questionable research practices.

– An analysis of Daryl Bem’s ESP paper that guesses about some of the ways he could have nudged his data (and speculates that many of them may be commonly used).
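
Because the “researcher degrees of freedom” problem lends itself to simulation, here is a small self-contained demonstration in the spirit of the False-Positive Psychology paper (our own toy example, not the authors’ code). It implements just one flexible practice, optional stopping, and shows how it inflates the false-positive rate above the nominal 5%:

```python
# Simulation of one "researcher degree of freedom": optional stopping.
# Two groups with NO true difference; test at n = 20 per group, and if
# p > .05, add 10 more subjects per group and test again. A toy example
# in the spirit of the False-Positive Psychology demonstrations.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(size=20)
    b = rng.normal(size=20)
    if ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
        continue
    # "Just run a few more subjects" and peek at the p-value again
    a = np.concatenate([a, rng.normal(size=10)])
    b = np.concatenate([b, rng.normal(size=10)])
    if ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# One extra peek already pushes the rate noticeably above .05
print(f"False-positive rate: {false_positives / n_experiments:.3f}")
```

Stacking on the other flexible practices the paper discusses (multiple dependent variables, optional covariates, dropping conditions) pushes the rate far higher.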

 

Scientific fraud

For our next Meth Lab meeting, this Thursday at noon, we are going to talk about scientific fraud.

As many of you probably know, this summer brought revelations of two high-profile fraud cases in psychology, both of which resulted in the resignation of the accused researcher (Dirk Smeesters of Erasmus University and Lawrence Sanna of the University of Michigan). Both were caught when Uri Simonsohn, a psychologist at Wharton, investigated statistical anomalies in their published studies.

Here is a link to a working paper by Simonsohn (under review at Psych Science) that details his method for detecting improbable results. That paper will be the focus of our discussion.
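
If you would like a head start on the statistical intuition, here is a toy version of the general logic (our illustration, not Simonsohn’s actual procedure): take summary statistics reported across supposedly independent samples and ask, via simulation, how often honest sampling variation would produce numbers that similar. Every value below is invented for the example.

```python
# Toy version of the logic behind detecting "too similar" results (an
# illustration of the general idea, not Simonsohn's actual procedure).
# Suppose three independent conditions (n = 15 each) report standard
# deviations of 25.09, 25.11, and 25.10. How often would honest sampling
# produce three SDs that close together?
import numpy as np

rng = np.random.default_rng(42)
n_per_cell, n_cells, sims = 15, 3, 100_000
pop_sd = 25.0           # assume equal population SDs (the most favorable case)
reported_spread = 0.02  # max SD minus min SD in the (hypothetical) reported data

# Simulate many honest experiments and record the spread of the cell SDs
samples = rng.normal(0.0, pop_sd, size=(sims, n_cells, n_per_cell))
sds = samples.std(axis=2, ddof=1)
spread = sds.max(axis=1) - sds.min(axis=1)

p = (spread <= reported_spread).mean()
print(f"P(SD spread <= {reported_spread}) ~= {p:.5f}")
# Expect zero hits in 100,000 simulations: the reported statistics are far
# more similar than sampling variation should allow.
```

The real paper is much more careful (it works with the statistics the papers actually reported, under the most charitable assumptions available), but the underlying question is the same.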

In addition, if you aren’t up on the fraud cases, here are a few links with some supplemental background information:

News report on Smeesters’s resignation:
http://news.sciencemag.org/scienceinsider/2012/06/rotterdam-marketing-psychologist.html

News report on Sanna’s resignation:
http://www.nature.com/news/uncertainty-shrouds-psychologist-s-resignation-1.10968

Official report of Erasmus University’s investigation of Smeesters (longish but fascinating):
http://www.eur.nl/fileadmin/ASSETS/press/2012/Juli/report_Committee_for_inquiry_prof._Smeesters.publicversion.28_6_2012.pdf

Interview with Uri Simonsohn in Nature News:
www.nature.com/news/the-data-detective-1.10937

See you all Thursday at noon!

First meeting – stimulus effects article and discussion

Our first meeting will be Thursday, July 19, from 12-1 in 143 Straub. Here’s the agenda:

1. Introductory and organizational stuff. We’ll talk about why we’re all here and what we are hoping to accomplish. We will also talk about possible topics for future meetings.

2. For our first topic, we thought it would be fun to read and discuss a recent article on analyzing stimulus effects in experiments.

Say you’re running an experiment that has multiple trials within each of two or more conditions, and each trial presents a stimulus drawn from a larger possible set. If that sounds vague, that’s because it is a pretty common situation in cognitive and social-cognitive experiments. (The article has some specific examples from social psych; see if you can think of others from your field.) The classic PSY 611 approach has two steps: in step 1 you average together responses from all trials within each condition, and then in step 2 you run an ANOVA on the condition averages. That approach makes the experiment amenable to the assumptions of ANOVA in step 2. But it ignores the variability among stimuli and the fact that you are making generalizations about some larger hypothetical population of stimuli. Judd et al. discuss some statistical problems with the classical approach, and show how to use mixed models (a.k.a. multilevel models) to model stimulus effects and test your hypotheses in the same analysis:

Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103, 54-69. [The link should take you to a full text copy if you are on campus or on the VPN.]
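
To make the proposal concrete, here is a minimal sketch of the core model: random intercepts for subjects and for stimuli, crossed with each other, estimated alongside the condition effect. The sketch uses Python’s statsmodels (via its variance-components interface, which can express crossed random factors); the file name and all column names are hypothetical, and note that Judd et al. also recommend random slopes for condition, which this bare-bones version omits.

```python
# Minimal sketch of "stimuli as a random factor": crossed random intercepts
# for subjects and stimuli, fit alongside the fixed condition effect.
# The file name and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trials.csv")  # columns: subject, stimulus, condition, rt

# statsmodels expresses crossed random effects by putting all trials in a
# single "group" and declaring each random factor as a variance component.
model = smf.mixedlm(
    "rt ~ condition",
    data=df,
    groups=np.ones(len(df)),  # one group containing every trial
    re_formula="0",           # no random effect for the (single) group itself
    vc_formula={
        "subject": "0 + C(subject)",    # random intercept per subject
        "stimulus": "0 + C(stimulus)",  # random intercept per stimulus
    },
)
result = model.fit()
print(result.summary())
```

In R, the equivalent lme4 call would be lmer(rt ~ condition + (1 | subject) + (1 | stimulus)), which may be the more familiar notation.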

To ground the discussion a bit, Karyn is going to talk about an example of how she modeled trial-level effects in a recent paper of hers. If you’re so inclined, you are welcome to read that paper too (but we aren’t expecting you to).

We also encourage you to think of examples from your own research or other work you have read, and come ready to talk about them.

See you on the 19th!

Welcome to Meth Lab

Do you have a design, measurement, or analysis problem that you’d like some feedback on? Or is your summer feeling empty without a little statistics to keep it going?

Three of us from the Department of Psychology (Elliot Berkman, Karyn Lewis and Sanjay Srivastava) are starting up a brownbag for consulting and discussion of statistics and methods. We’re calling it the Quantitative Methods Laboratory, which is a mouthful but conveniently abbreviates to “Meth Lab.”

Meetings will serve two purposes:

  1. People can present on problems in design or analysis from their own research and get consultation and feedback from the group.
  2. We will discuss journal articles and current events related to quantitative methods (which may occasionally involve a little light reading).

Our first meeting will be Thursday, July 19, from 12-1 in 143 Straub. We will meet biweekly through the summer. Anybody is welcome to drop in any time. We will post our upcoming agenda and other items of interest on this blog.

NOTE: Anyone may present for consulting and feedback. But we hope to get a regular core of attendees who find this stuff interesting or want to keep up to date on quantitative methods. Toward that end, regular attendees will get priority in presenting or sponsoring their labmates to present. (The latter is encouragement to send at least one delegate from your lab on a regular basis.)

If you are interested, send me an email and I will add you to our list for announcements, updates, etc. Hope to see you there!