Asked by Noe Vidales on December 1, 2021
I am trying to wrap my head around the Neyman–Pearson lemma for a simple vs. simple hypothesis test, that is,
$$H_0\colon \theta = \theta_0 \qquad\qquad H_a\colon \theta = \theta_1$$
with respective pdfs $f_0$ and $f_1$. I am trying to understand when randomization of the hypothesis test occurs. We reject the null with probability $\gamma$ on the set $\{y\in Y: f_1(y)=kf_0(y)\}$, where $Y$ is the range of the random variable $X$ and $k$ is a constant chosen to give the test the appropriate size.
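For reference, the randomized test in the Neyman–Pearson lemma is usually written as a test function
$$\varphi(y)=\begin{cases}1, & f_1(y)>k\,f_0(y),\\ \gamma, & f_1(y)=k\,f_0(y),\\ 0, & f_1(y)<k\,f_0(y),\end{cases}$$
where $\varphi(y)$ is the probability of rejecting $H_0$ after observing $y$, and $k$ and $\gamma$ are chosen so that $\mathbb{E}_{H_0}[\varphi(X)]$ equals the desired size $\alpha$.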
When is randomization needed? I have seen randomization applied to both continuous and discrete random variables (for example, $H_0\colon \text{Uniform}(0,10)$ vs. $H_a\colon \text{Uniform}(2,12)$, with randomization on $X\in(2,10)$). In simple vs. simple hypothesis testing with continuous random variables, is randomization never required, or is it a case-by-case matter in which the type of random variable tells us nothing about randomization?
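In that uniform example, both densities equal $\tfrac{1}{10}$ on their supports: $f_0(y)=\tfrac{1}{10}$ on $(0,10)$ and $f_1(y)=\tfrac{1}{10}$ on $(2,12)$, so the likelihood ratio $f_1(y)/f_0(y)$ is $0$ on $(0,2]$ and $1$ on $(2,10)$, while only $f_1$ is positive on $[10,12)$. With $k=1$ the boundary set $\{y: f_1(y)=kf_0(y)\}$ contains $(2,10)$ and so has positive probability under $H_0$, which (if I understand correctly) is why randomization shows up there even though $X$ is continuous.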
Here is my thinking for a continuous random variable $X$: consider the sets $A_k=\{y\in Y: f_1(y)=kf_0(y)\}$. If $P(A_k)>0$ for all $k\in\mathbb{R}$, this would contradict $P$ being a probability measure. Is this argument correct, or even enough to justify that for a continuous RV the set $A_k$ must be a null set?
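One way I can see to make this precise (just a sketch): for $k\neq k'$, a point $y$ with $f_0(y)>0$ cannot lie in both $A_k$ and $A_{k'}$, so the sets $A_k$ are pairwise disjoint on $\{f_0>0\}$. A probability measure can assign positive mass to at most countably many pairwise disjoint sets, so $P_0(A_k)>0$ can hold for at most countably many values of $k$, not for every $k\in\mathbb{R}$.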
Randomization in this context never gets you a better test than the one you would have without randomization. With discrete distributions, the distribution of the p-value under the null hypothesis is itself discrete, so it may be, for example, that the p-value is $0.07$ for one value of the test statistic and $0.04$ for another. In that case, a size of exactly $0.05$ can be achieved only with a randomized test. People may talk about this to illustrate some theoretical point, but in practice it amounts to discarding some of the data, so it is not a good thing.
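As a concrete, purely illustrative sketch of the discrete case, the snippet below builds a randomized level-$0.05$ test for $H_0\colon p=0.5$ against $H_a\colon p=0.7$ based on $X\sim\text{Binomial}(20,p)$; the sample size, hypotheses, and level are assumptions chosen for illustration, not values taken from the answer. Since the likelihood ratio is increasing in $x$, the Neyman–Pearson test rejects for large $X$ and randomizes on the boundary value.

```python
# Illustrative randomized test: H0: p = 0.5 vs Ha: p = 0.7, X ~ Binomial(20, p),
# target size alpha = 0.05.  The likelihood ratio is increasing in x, so the
# Neyman-Pearson test rejects for large X and randomizes on the boundary value.
from scipy.stats import binom

n, p0, alpha = 20, 0.5, 0.05

# P0(X >= c) for each candidate cutoff c; binom.sf(c - 1) = P(X > c - 1) = P(X >= c)
size = {c: binom.sf(c - 1, n, p0) for c in range(n + 1)}

# smallest cutoff whose non-randomized size does not exceed alpha
c = min(k for k, s in size.items() if s <= alpha)

# rejection probability gamma on the boundary value c - 1, using up the remaining level
gamma = (alpha - size[c]) / binom.pmf(c - 1, n, p0)

print(f"reject if X >= {c} (size {size[c]:.4f})")                        # X >= 15, size 0.0207
print(f"if X == {c - 1}, reject with probability gamma = {gamma:.3f}")   # gamma approx 0.79
print(f"overall size: {size[c] + gamma * binom.pmf(c - 1, n, p0):.4f}")  # exactly 0.0500
```

Without randomization the achievable sizes jump from about $0.021$ (reject if $X\ge 15$) to about $0.058$ (reject if $X\ge 14$), so only the randomized test attains $0.05$ exactly, which is precisely the phenomenon described above.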
Answered by Michael Hardy on December 1, 2021