Cross Validated question, asked by Maverick Meerkat on November 17, 2020
Chapter 7 of Jim Albert's book considers using a hierarchical model to estimate heart-transplant mortality rates $\lambda_i$ for 94 hospitals, each with its own exposure $e_i$ (number of operations), with:
$$
y_i \sim \mathrm{Poisson}(e_i\lambda_i), \qquad
\lambda_i \sim \mathrm{Gamma}\!\left(\alpha, \frac{\alpha}{\mu}\right)
$$
Under this model, the posterior distribution of $\lambda_i$ is $\mathrm{Gamma}\!\left(y_i + \alpha,\; e_i + \frac{\alpha}{\mu}\right)$. It is also shown that the posterior mean of $\lambda_i$ is $\mathbb{E}(\lambda_i \mid y_i, \alpha, \mu) = (1-B_i)\frac{y_i}{e_i} + B_i\,\mu$, which demonstrates shrinkage.
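For concreteness, here is how I read that result (shape–rate parametrisation of the Gamma throughout; the explicit form of $B_i$ below is my reading of the book's notation):

$$
\begin{aligned}
p(\lambda_i \mid y_i, \alpha, \mu)
  &\propto e^{-e_i\lambda_i}(e_i\lambda_i)^{y_i} \cdot \lambda_i^{\alpha-1} e^{-\frac{\alpha}{\mu}\lambda_i}
  \;\propto\; \lambda_i^{\,y_i+\alpha-1}\, e^{-\left(e_i+\frac{\alpha}{\mu}\right)\lambda_i}, \\
\mathbb{E}(\lambda_i \mid y_i, \alpha, \mu)
  &= \frac{y_i+\alpha}{e_i+\frac{\alpha}{\mu}}
  = \frac{e_i}{e_i+\frac{\alpha}{\mu}}\cdot\frac{y_i}{e_i}
  + \frac{\frac{\alpha}{\mu}}{e_i+\frac{\alpha}{\mu}}\cdot\mu
  = (1-B_i)\frac{y_i}{e_i} + B_i\,\mu,
\qquad B_i = \frac{\alpha}{\alpha + e_i\mu}.
\end{aligned}
$$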
In addition, after assuming the following priors for the hyperparameters $\mu$ and $\alpha$:
$$
g(\mu) \propto \frac{1}{\mu}, \qquad
h(\alpha) = \frac{z_0}{(\alpha+z_0)^2}
$$
we also obtain the marginal posterior density of the two hyperparameters. This allows us to sample values for them, and then use those samples to draw plausible $\lambda_i$ values and, in turn, $y_i$ values.
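To make that simulation step concrete, here is a minimal sketch of how I understand it, assuming we already have arrays `y` and `e` of counts and exposures and posterior draws `alpha_draws` and `mu_draws` of the hyperparameters (all the names are mine, not the book's):

```python
import numpy as np

def posterior_predictive(y, e, alpha_draws, mu_draws, rng=None):
    """For each hyperparameter draw, sample lambda_i from its conditional
    posterior Gamma(y_i + alpha, rate = e_i + alpha/mu), then a replicated
    count y_i ~ Poisson(e_i * lambda_i)."""
    rng = rng or np.random.default_rng(0)
    S, n = len(alpha_draws), len(y)
    lam = np.empty((S, n))
    y_rep = np.empty((S, n), dtype=int)
    for s in range(S):
        a, m = alpha_draws[s], mu_draws[s]
        rate = e + a / m
        # numpy parametrises the Gamma by shape and *scale* = 1/rate
        lam[s] = rng.gamma(shape=y + a, scale=1.0 / rate)
        y_rep[s] = rng.poisson(e * lam[s])
    return lam, y_rep
```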
At some point he states that $\mathbb{E}(\lambda_i \mid \text{data})$ can be approximated by $(1-\mathbb{E}(B_i\mid\text{data}))\frac{y_i}{e_i} + \mathbb{E}(B_i\mid\text{data})\, \frac{\sum y_i}{\sum e_i}$.
I think he means that we replace the parameters with the averages of their simulated draws; that is what he does with $\mathbb{E}(B_i\mid\text{data})$, anyway. But why does he replace $\mu$ with $\frac{\sum y_i}{\sum e_i}$?
My hunch is that the posterior mean of $\mu$ equals $\frac{\sum y_i}{\sum e_i}$; at least the two are numerically very close in the simulated samples. Is there a way to show this analytically?
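For reference, the numerical closeness can be checked directly from the marginal posterior of $(\alpha,\mu)$, i.e. the product of the negative-binomial marginal likelihoods and the two priors. Below is a rough sketch of such a check, using made-up data in place of the book's 94 hospitals and a plain random-walk Metropolis sampler on $(\log\alpha,\log\mu)$; the data, the value of $z_0$, and all tuning constants are placeholders of mine, not the book's:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

# --- simulated data standing in for the 94 hospitals (assumption, not the book's data) ---
n, true_alpha, true_mu = 94, 10.0, 0.0009
e = rng.uniform(100, 3000, size=n)                      # exposures
lam = rng.gamma(true_alpha, true_mu / true_alpha, n)    # lambda_i ~ Gamma(alpha, rate=alpha/mu)
y = rng.poisson(e * lam)                                # observed death counts

z0 = 0.5  # arbitrary placeholder for the hyperprior constant

def log_post(theta):
    """Log marginal posterior of (log alpha, log mu): negative-binomial
    marginal likelihoods times the priors g(mu) ~ 1/mu and h(alpha)."""
    log_a, log_m = theta
    a, m = np.exp(log_a), np.exp(log_m)
    ll = np.sum(gammaln(y + a) - gammaln(a)
                + a * (np.log(a) - np.log(m))
                - (y + a) * np.log(e + a / m))
    log_prior = -np.log(m) - 2.0 * np.log(a + z0)
    # add log|Jacobian| = log(a) + log(m) for the (log alpha, log mu) transformation
    return ll + log_prior + log_a + log_m

# --- plain random-walk Metropolis on (log alpha, log mu) ---
step = np.array([0.3, 0.1])
theta = np.array([np.log(5.0), np.log(y.sum() / e.sum())])
cur_lp, draws = log_post(theta), []
for _ in range(20000):
    prop = theta + step * rng.normal(size=2)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        theta, cur_lp = prop, prop_lp
    draws.append(theta.copy())
mu_draws = np.exp(np.array(draws[5000:])[:, 1])  # drop burn-in, back-transform

print("posterior mean of mu :", mu_draws.mean())
print("sum(y) / sum(e)      :", y.sum() / e.sum())
```

On runs like this the two numbers come out close but not identical, which is what makes me suspect the relation is an approximation rather than an exact identity.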