Normal vs. t-dist priors?
We model most continuous variables with normal distributions because 1) many of them genuinely are normally distributed, 2) the normal distribution is mathematically convenient, and 3) it’s a pretty ingrained habit (honestly, that is probably the real reason in most cases). In Bayesian modeling in particular, there’s the additional attraction of conjugacy: a normal prior on the mean of a normal likelihood gives you a normal posterior in closed form, which less elegant distributions (like the t) won’t. That’s really important if you’re doing Bayesian modeling by hand*, but it makes no difference if you’re relying on a sampler (e.g. MCMC) to generate your posterior. Some of the preference for normal priors in modern Bayesian modeling is, I think, a legacy of the days before we had the computing power to just do everything by sheer brute force.
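To make the conjugacy point concrete, here’s a minimal sketch of the closed-form normal-normal update for an unknown mean with known variance (plain Python; the prior and data values are made up for illustration):

    # Conjugate normal-normal update for an unknown mean (known variance).
    # Prior: mu ~ Normal(m0, s0^2); likelihood: y_i ~ Normal(mu, s^2).
    # The posterior is normal again, so you get it in closed form.
    m0, s0 = 0.0, 10.0       # prior mean and sd (made-up values)
    s = 1.0                  # known likelihood sd
    y = [2.1, 1.8, 2.4]      # observed data (made up)
    n = len(y)
    post_var = 1.0 / (1.0 / s0**2 + n / s**2)
    post_mean = post_var * (m0 / s0**2 + sum(y) / s**2)
    print(post_mean, post_var)   # no sampler required

Swap the normal prior for a t and that closed form disappears, which is exactly the point where you hand the problem to a sampler instead.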
However, you may have noticed some Bayesians using a t-distribution where you might otherwise have expected a normal – usually a Cauchy distribution, which is a t with 1 degree of freedom. So what’s the difference?
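The short answer is tails: the Cauchy puts far more probability mass far from its center than the normal does. A quick comparison with scipy (my own sketch; the cutoff of 4 is arbitrary, and both distributions have scale 1):

    from scipy import stats

    # Probability of landing more than 4 scale units from the center.
    cutoff = 4.0
    p_normal = 2 * stats.norm.sf(cutoff)      # ~6.3e-05
    p_cauchy = 2 * stats.cauchy.sf(cutoff)    # ~0.16
    print(p_normal, p_cauchy)
    # stats.t with df=1 gives the same answer as stats.cauchy.

What that extra tail mass does to your posterior is the interesting part.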
John D. Cook has a nice explanation of one important difference on his blog (which often has good Bayes content, by the way): http://www.johndcook.com/blog/2010/08/30/robust-prior-illustration/
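If you want to poke at the phenomenon yourself, here’s a rough grid-approximation sketch in the spirit of Cook’s illustration (the specific numbers are mine, not his): put a normal or a Cauchy prior centered at 0 on a location parameter, observe one surprising data point, and compare the posterior means.

    import numpy as np
    from scipy import stats

    y = 10.0                                      # one observation, far from the prior center
    theta = np.linspace(-20, 30, 5001)            # grid over the location parameter
    lik = stats.norm.pdf(y, loc=theta, scale=1)   # Normal(theta, 1) likelihood

    for name, prior in [("normal", stats.norm.pdf(theta, scale=1)),
                        ("cauchy", stats.cauchy.pdf(theta, scale=1))]:
        post = lik * prior
        post /= post.sum()                        # normalize over the grid
        print(name, (theta * post).sum())         # approximate posterior mean
    # The normal prior drags the posterior mean halfway back toward 0 (~5);
    # the heavy-tailed Cauchy prior mostly concedes to the data (~9.8).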
Enjoy.
* But don’t do that. Seriously. If your computer breaks, just take a couple days off and write some poetry or something.