Despite always becoming more certain about the parameter, we can still become less certain about whether the patient has CHF. For instance, a posterior of Beta(120,120) has its mean right at 50%, so it is less sure than the prior about the CHF outcome (in terms of the entropy of that outcome), even though it is more sure about where the parameter lies.
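A quick numeric sketch of this (the skewed prior Beta(20,4) is an assumption for illustration; the text doesn't specify the prior): the variance of the parameter shrinks going from prior to posterior, while the entropy of the predicted CHF outcome grows.

```python
import math

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def bernoulli_entropy(p):
    """Entropy in bits of a Bernoulli outcome with probability p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical skewed prior over P(CHF): Beta(20, 4), mean ~0.83
prior_a, prior_b = 20, 4
# Posterior from the text: Beta(120, 120), mean exactly 0.5
post_a, post_b = 120, 120

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)

# Parameter uncertainty shrinks...
print(beta_var(prior_a, prior_b), beta_var(post_a, post_b))
# ...but outcome uncertainty grows (entropy at p=0.5 is the maximum, 1 bit).
print(bernoulli_entropy(prior_mean), bernoulli_entropy(post_mean))
```

So the two notions of "certainty" move in opposite directions here: the posterior pins down the parameter more tightly, yet the predicted outcome sits at maximum entropy.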

Can it actually happen in Bayesian inference that the posterior has broader tails than the prior? I don't think this can happen with the Dirichlet-multinomial, for instance: because the likelihood can only add positive counts to the prior, the posterior has to be more peaked than the prior (though maybe this only holds for alpha > 1?).
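One way to probe this question numerically, using variance as a rough proxy for peakedness (a sketch, not a proof; entropy could behave differently): start from a sharply skewed Beta prior and feed it data concentrated on the other side.

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Sharp, skewed prior: Beta(1, 100), confident that theta is near 0
prior = (1, 100)
# Observe 99 successes and 0 failures; the conjugate update just adds counts
posterior = (prior[0] + 99, prior[1] + 0)  # -> Beta(100, 100)

print(beta_var(*prior))      # prior variance
print(beta_var(*posterior))  # posterior variance, larger here
```

On this measure, at least, adding counts doesn't guarantee a more peaked posterior: data that pulls hard against a confident prior can leave the posterior with greater variance than the prior had.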

Hmm: I wonder if this is a general property of exponential families. So in order to get a posterior that is less peaked than the prior we’d need to find an exponential family where increasing the sufficient statistics by adding the log likelihood results in a less peaked distribution. I wonder if there are any such families of distributions?
