Registration!

You can actually register for Bayes Club now! Sorry for the huge delay on that.

Remember that you don’t need to register for Bayes Club to attend. Anyone is welcome to just show up. There are two differences between taking Bayes Club for credit (registering) vs. just showing up:

1) It shows up on your transcript, i.e. you get official credit for it, obviously.

2) You are required to provide a little evidence that you’re actually participating in Bayes Club, or you run the risk of failing or getting an incomplete. The exact nature of this “evidence” is twofold: You have to actually show up for Bayes Club (we’ll take attendance), and you have to send a short write-up to Sanjay, the professor of record, by the end of the term discussing what you learned and/or did in Bayes Club that term.

So basically, you should register if you want it to show up on your transcript and you’re not worried about being able to meet the requirements. If either of those things isn’t true, then just plan to show up without registering. Don’t worry – we won’t run out of space (and if we do, we’ll just find a bigger room).

CRN: 17487

Normal vs. t-dist priors?

We model most continuous variables with normal distributions because 1) many of them genuinely are normally distributed, 2) the normal distribution is mathematically convenient, and 3) it’s a pretty ingrained habit (honestly, that is probably the real reason in most cases). In Bayesian modeling in particular, there’s the additional attraction that normal priors are conjugate (for a normal likelihood, at least) where less elegant distributions like t aren’t. Conjugacy is really important if you’re doing Bayesian modeling by hand*, but it makes no difference if you’re relying on a sampler (e.g. MCMC) to generate your posterior. That legacy preference for mathematically tidy distributions, left over from before we had the computing power to just brute-force everything, probably accounts for some of the lingering preference for normal priors in modern Bayesian modeling.

However, you may have noticed some Bayesians using t-distributions when you might have otherwise expected a normal – usually a Cauchy distribution, which is t with 1 degree of freedom. So what’s the difference?

John D. Cook has a nice explanation of one important difference on his blog (which often has good Bayes content, by the way): http://www.johndcook.com/blog/2010/08/30/robust-prior-illustration/
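To make the tail difference concrete, here’s a quick sketch in R (just my own illustration) plotting the two densities and checking how each one treats an extreme value:

# compare a standard normal prior to a Cauchy (t with 1 df) prior
xs <- seq(-10, 10, .01)
plot(xs, dnorm(xs), type = "l", xlab = "x", ylab = "density")
lines(xs, dcauchy(xs), lty = 2)
legend("topright", legend = c("normal", "Cauchy (t, df = 1)"), lty = c(1, 2))

# the tails tell the story: how plausible is a value way out at x = 6?
dnorm(6)    # ~6e-09 -- the normal says "basically impossible"
dcauchy(6)  # ~0.0086 -- the Cauchy keeps an open mind

Those heavy tails are what make the Cauchy “robust” as a prior: if the data land far from where the prior expected, the prior mostly gets out of the way instead of dragging the posterior back toward itself.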

Enjoy.

* But don’t do that. Seriously. If your computer breaks, just take a couple days off and write some poetry or something.

End of term reminder

Just a friendly reminder, since we’re approaching the end of spring term. If you’re taking Bayes Club for credit, you need to:

1) attend Bayes Club (we’ve been keeping track of attendance, so you’re covered there)

2) send a one-page-ish write-up to Sanjay (sanjay@uoregon.edu) by the end of the term, describing what you’ve done and/or what you’ve learned in Bayes Club this term.

If you are signed up for credit and you think you’ll have trouble meeting one or both of these requirements, email Sanjay.

Sampler Exampler

x <- rnorm(10)  # what's the mean of x?

# i'm a sampler!!

# try beta = 1
beta <- 1
# get the probability of each observation given my model
# (i.e. "the data are normally distributed") and a parameter estimate ("beta = 1")
p <- dnorm(x, mean = beta)
# assuming the observations are independent, the probability of all of the data
# for a particular beta value is the probability of each observation multiplied together
PofD_B1 <- prod(p)

# try beta = 0
beta <- 0
p <- dnorm(x, mean = beta)
PofD_B0 <- prod(p)
# beta = 0 looks better than beta = 1. I'll move to beta = 0 and keep looking around there.

# try lots of betas!
# this is a dumb sampler since it's just chugging through betas and not choosing them
# based on probability, but it demonstrates how this would result in a likelihood distribution.
betas <- seq(-2, 2, .1)
PofD_B <- numeric(length(betas))
for (i in 1:length(betas)) {
  beta <- betas[i]
  p <- dnorm(x, mean = beta)
  PofD_B[i] <- prod(p)
}
results <- cbind(betas, PofD_B)
plot(PofD_B ~ betas, type = "l")
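The natural next step is a sampler that does choose betas based on probability. Here’s a minimal sketch of a Metropolis sampler for the same one-parameter problem (my own toy illustration; it’s the same family of algorithm that JAGS and friends use, but vastly simplified):

# a bare-bones Metropolis sampler
set.seed(1)
x <- rnorm(10)  # same setup as above

n_iter <- 5000
chain <- numeric(n_iter)
beta <- 1  # arbitrary starting value

for (i in 1:n_iter) {
  # propose a new beta near the current one
  proposal <- beta + rnorm(1, sd = .5)
  # compare how well the proposal vs. the current beta explain the data;
  # the log scale avoids the underflow you'd get from prod() with lots of data
  log_ratio <- sum(dnorm(x, mean = proposal, log = TRUE)) -
    sum(dnorm(x, mean = beta, log = TRUE))
  # jump to the proposal with probability min(1, ratio), otherwise stay put
  if (log(runif(1)) < log_ratio) beta <- proposal
  chain[i] <- beta
}

hist(chain, breaks = 50)  # samples pile up around mean(x)
mean(chain)

Unlike the grid version above, this one spends its time where the likelihood is high instead of chugging through implausible betas, which is the whole point of MCMC.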

A Quick Reading for Tomorrow

Tomorrow, for our first Bayes Club meeting of the term, we’ll be going through some JAGS code and talking more about MCMC algorithms. If you have time beforehand, take 5-10 minutes to look through a StackExchange post here. It’s a list of suggestions people had for explaining MCMC to a beginner (a category that includes all of us), and it was really helpful to me. Some of the answers are clearer than others, so feel free to skim the page for whatever makes the most immediate sense.
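If you’ve never seen JAGS code before, here’s a rough preview. This is my own toy model (not the code we’ll use tomorrow), which just estimates the mean of some normal data, run from R via the rjags package (it assumes you have JAGS and rjags installed):

library(rjags)

# a toy JAGS model: estimate the mean (and precision) of normal data
model_string <- "
model {
  for (i in 1:N) {
    x[i] ~ dnorm(mu, tau)   # likelihood; note JAGS uses precision, not SD!
  }
  mu ~ dnorm(0, .001)       # vague normal prior on the mean
  tau ~ dgamma(.01, .01)    # vague gamma prior on the precision
}
"

x <- rnorm(20, mean = 3)
jags <- jags.model(textConnection(model_string),
                   data = list(x = x, N = length(x)))
samples <- coda.samples(jags, variable.names = c("mu", "tau"),
                        n.iter = 5000)
summary(samples)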

See you tomorrow!

Summer Bayes!

Nothing says “summer vacation” like taking a bunch of advanced stats classes, amirite?

Here are a couple of summer classes on Bayesian analysis for social scientists. Please comment on this post if you know of other classes or training opportunities this summer (or email me, and I’ll add them to the body of the post itself).