Cross Validated
Asked by Probability-Stats-Optimisation on November 6, 2021
I have the following question:
Let $X_1,\dots,X_n$ be independent, identically distributed random variables with
$$
P(X_i=1)=\theta = 1-P(X_i=0)
$$
where $\theta$ is an unknown parameter, $0<\theta<1$, and $n\geq 2$. It is desired to estimate the quantity $\phi = \theta(1-\theta) = n\operatorname{Var}\bigl((X_1+\dots+X_n)/n\bigr)$.
Suppose that a Bayesian approach is adopted and that the prior distribution for $\theta$, $\pi(\theta)$, is taken to be the uniform distribution on $(0,1)$. Compute the Bayes point estimate of $\phi$ when the loss function is $L(\phi,a)=(\phi-a)^2$.
Now, my solution so far:
It can easily be proven that, under quadratic loss, the Bayes estimate $a$ is the mean of the posterior. Also, as $\theta$ ranges over $(0,1)$, $\phi$ ranges over $(0,\tfrac{1}{4}]$. Hence, we have that
$$
a = \int_0^{\frac{1}{4}}\phi\cdot f(\phi\mid x_1,\dots,x_n)\,d\phi.
$$
Now, we have that
$$
f(\phi\mid x_1,\dots,x_n)\propto f(x_1,\dots,x_n\mid\phi)\cdot \pi(\phi).
$$
Given that $\theta$ follows $U[0,1]$, the CDF of $\phi$ is
$$
P(\Phi\leq t) = P\bigl(\theta(1-\theta)\leq t\bigr) = 1-\sqrt{1-4t}, \qquad 0<t\leq\tfrac{1}{4},
$$
since $\theta(1-\theta)\leq t$ exactly when $\theta\leq\frac{1-\sqrt{1-4t}}{2}$ or $\theta\geq\frac{1+\sqrt{1-4t}}{2}$, two intervals of total length $1-\sqrt{1-4t}$.
Hence we can derive $\pi(\phi)$. However, I am not sure how to derive $f(x_i\mid\phi)$.
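Indeed, differentiating this CDF gives the prior density of $\phi$ explicitly:
$$
\pi(\phi) = \frac{d}{dt}\left(1-\sqrt{1-4t}\right)\Big|_{t=\phi} = \frac{2}{\sqrt{1-4\phi}}, \qquad 0<\phi<\tfrac{1}{4}.
$$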
Help with how to proceed, and pointing out any mistakes I have made so far, would be very much appreciated.
$$ \theta \sim \text{Beta}(a_0,b_0) $$
$$ X_i\mid\theta\sim\text{Ber}(\theta) \qquad\qquad i=1,\dots,n $$
$$ X:=X_1+\dots+X_n $$
$$ X\mid\theta \sim \text{Bin}(n,\theta) $$
$$ \theta \mid X = x \sim \text{Beta}(x + a_0,\, n - x + b_0) $$
$$ \text{E}[\theta \mid X = x] = \frac{x + a_0}{n + a_0 + b_0} $$
$$ \text{Var}[\theta \mid X = x] = \frac{(x+a_0)(n-x+b_0)}{(n + a_0 + b_0)^2(n + a_0 + b_0 + 1)} $$
$$ \phi=\text{Var}[X_i \mid \theta]=\theta(1-\theta) $$
Under quadratic loss, the Bayes estimate for $\phi$ is:
\begin{align*}
\hat{\phi}_{\text{Bayes}}(x) &= \text{E}[\phi \mid X = x] \\
&= \text{E}[\theta \mid X = x] - \text{E}[\theta^2 \mid X = x] \\
&= \text{E}[\theta \mid X = x] - \text{Var}[\theta \mid X = x] - \text{E}^2[\theta \mid X = x] \\
&= \frac{(x+a_0)(n-x+b_0)}{(n + a_0 + b_0)(n + a_0 + b_0 + 1)}
\end{align*}
In particular, for the uniform prior in the question ($a_0=b_0=1$), this gives $\hat{\phi}_{\text{Bayes}}(x) = \frac{(x+1)(n-x+1)}{(n+2)(n+3)}$.
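As a quick sanity check, here is a minimal R sketch of this closed form; the helper function `phi_bayes` is illustrative, not part of the original answer:

```
# Closed-form Bayes estimate of phi = theta * (1 - theta) under a
# Beta(a0, b0) prior, given x successes in n Bernoulli trials
phi_bayes <- function(x, n, a0 = 1, b0 = 1) {
  (x + a0) * (n - x + b0) / ((n + a0 + b0) * (n + a0 + b0 + 1))
}
# Example: uniform prior (a0 = b0 = 1), x = 4 successes in n = 10 trials
phi_bayes(4, 10)  # 35/156, approximately 0.2244
```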
Answered by Zen on November 6, 2021
Since $X_i$ is a Bernoulli random variable, we can write $f(x_i\mid\theta)= \theta^{x_i}(1-\theta)^{1- x_i}$. But here the conditioning is on $\phi$, so using the equation $\phi = \theta(1-\theta)$, write $\theta = g(\phi)$ and substitute into the equation above to get
$$f(x_i\mid\phi)= g(\phi)^{x_i}(1-g(\phi))^{1- x_i}$$
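Explicitly, solving the quadratic $\theta^2 - \theta + \phi = 0$ shows that $g$ has two branches:
$$
\theta = g(\phi) = \frac{1 \pm \sqrt{1-4\phi}}{2},
$$
where the minus sign corresponds to $\theta \leq \tfrac{1}{2}$ and the plus sign to $\theta \geq \tfrac{1}{2}$; both roots map to the same value of $\phi$.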
Answered by Nisarg Jain on November 6, 2021
One idea would be to use simulation, since you are taking the Bayesian approach. The posterior for $\theta$ is of closed form, hence you can easily simulate from $p(\theta\mid x)$. Then you just apply your function, $\phi^{(m)} = f(\theta^{(m)}) = \theta^{(m)}(1 - \theta^{(m)})$ for $m = 1,\ldots, N$, where $N$ is the number of points simulated from the posterior. Finally, you estimate $\hat{\phi} = \frac{1}{N}\sum_{m=1}^N \phi^{(m)}$.
Let me clarify a little bit more. The posterior for $\theta$ has the following form:
\begin{align}
\theta\mid x \sim \mathcal{B}\Bigl(\alpha + \sum x_i,\; \beta + n - \sum x_i\Bigr),
\end{align}
where the prior is $\pi(\theta) = \mathcal{U}(0,1) = \mathcal{B}(1,1)$, hence $\alpha = \beta = 1$, and $\mathcal{B}(\cdot,\cdot)$ denotes a beta distribution. Please refer to the wiki for clarification: https://en.wikipedia.org/wiki/Conjugate_prior. Now you can simulate from this density. See the attached code.
```
# Set a seed for reproducibility
set.seed(3)
# Number of observations
n_obs <- 1e2
# Number of draws from the posterior
n_draws <- 1e4
# Set the true value to check against
theta_true <- 0.5
# Compute the true phi
phi_true <- theta_true * (1 - theta_true)
# Simulate the data given the true parameter
x <- rbinom(n_obs, size = 1, prob = theta_true)
# Posterior parameters: the uniform prior gives Beta(1 + sum(x), 1 + n - sum(x))
alpha_new <- 1 + sum(x)
beta_new <- 1 + n_obs - sum(x)
# Sample from the posterior of theta
theta_sample <- rbeta(n = n_draws, shape1 = alpha_new, shape2 = beta_new)
# Posterior mean of theta versus the true value: these should be close
mean(theta_sample)
theta_true
# Transform the posterior draws of theta into draws of phi
phi_sample <- theta_sample * (1 - theta_sample)
# Posterior mean of phi versus the true value: these should be close
mean(phi_sample)
phi_true
```
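As a cross-check (an addition to the answer above, not part of the original), the simulated posterior mean can be compared with the exact closed form from Zen's answer, continuing the R session above:

```
# Exact posterior mean of phi under the uniform (Beta(1,1)) prior:
# E[phi | x] = (x + 1)(n - x + 1) / ((n + 2)(n + 3)), with x = number of successes
phi_exact <- (sum(x) + 1) * (n_obs - sum(x) + 1) /
  ((n_obs + 2) * (n_obs + 3))
phi_exact
mean(phi_sample)  # agrees with the exact value up to Monte Carlo error
```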
Answered by Koval Boris on November 6, 2021