Asked on November 24, 2021
I recently learned that two independent random variables $X$ and $Y$ must have a covariance of $0$. That means that the correlation between them is also $0$.
However, apparently, the converse is not true. Two random variables $X$ and $Y$ can have a correlation of $0$, yet still be dependent. I don’t understand why this is. Doesn’t a correlation of $0$ imply that the random variables do not affect each other?
The definition of covariance between two random variables $X,Y$ is $operatorname{Cov}(X,Y)=E(XY)-E(X)E(Y)$. Zero covariance means that $E(XY)=E(X)E(Y)$. But the independence condition is much stricter: $E(g(X)h(Y))=E(g(X)) E(h(Y))$ for any functions $g,h$. To give some intuition on the difference, I will assume that $g,h$ are smooth (infinitely differentiable) functions and Taylor expand both of them. I will expand around zero in order to keep the presentation clear. Doing so gives $$g(X)=sum_ndfrac{1}{n!}g^{(n)}(0)X^n$$ where $g^{(n)}$ is the $n$-th derivative of $g$, and similarly for $h(Y)$. Then, using the linearity of $E(cdot)$, namely $E(aX+bY)=aE(X)+bE(Y)$ for constants $a,b$, we get $$E(g(X)h(Y))=sum_{n,m}dfrac{1}{n!m!}g^{(n)}(0)h^{(m)}(0)E(X^nY^m)$$ Now, if $operatorname{Cov}(X,Y)=0$, then the $n=m=1$ term factorizes through $E(XY)=E(X)E(Y)$, but we can say nothing about the remaining terms containing $E(X^nY^m)$. This shows that covariance measures independence only at a "linear level", whereas dependence of two random variables is much more general than their correlation.
Remark for applications: for $x,y$ sufficiently small ($x,y$ being the realizations of $X,Y$), we might be able to treat them as approximately independent (although they are not). This is what we do whenever we care about a system's behaviour for small $x,y$ (we do this a lot in physics). More concretely, if we can neglect all powers $n,m>1$, then we can approximate uncorrelatedness by independence.
So, as you can see, for $X,Y$ to be independent we need $E(X^nY^m)=E(X^n)E(Y^m)$ for all $n,m$. If this is the case, the double sum above factorizes as $$E(g(X)h(Y))=sum_{n}dfrac{1}{n!}g^{(n)}(0)E(X^n)sum_mdfrac{1}{m!}h^{(m)}(0)E(Y^m)$$ and, using linearity and the Taylor expansions of $g(X)$ and $h(Y)$, we arrive at the final result: $$E(g(X)h(Y))=E(g(X))E(h(Y))$$
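As a concrete sketch of a higher-order term failing even though the covariance vanishes: take $X sim N(0,1)$ and $Y = X^2$. Then $operatorname{Cov}(X,Y)=E(X^3)-E(X)E(X^2)=0$, so the $n=m=1$ term factorizes, but for $n=2, m=1$ we get $E(X^2Y)=E(X^4)=3$ while $E(X^2)E(Y)=1cdot 1=1$, so $X$ and $Y$ are uncorrelated yet dependent.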
Answered by TheQuantumMan on November 24, 2021
No, it doesn't imply that. Correlation is just a one-dimensional measure, whereas dependence can take many forms. For instance, the indicator variable of the event that a normally distributed random variable is within one standard deviation of the mean is uncorrelated with the random variable itself, but is clearly not independent of it.
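A minimal simulation sketch of this example (assuming Python with NumPy; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)        # X ~ N(0, 1)
ind = (np.abs(x) < 1.0).astype(float)     # indicator of the event {|X| < 1 sd}

# The exact covariance is 0 by symmetry, so the sample correlation is ~0...
print("corr(X, ind):", np.corrcoef(x, ind)[0, 1])

# ...but the variables are clearly dependent: conditioning on X changes the indicator.
print("P(ind = 1)         ~", ind.mean())
print("P(ind = 1 | X > 1) =", ind[x > 1.0].mean())   # exactly 0 by construction
```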
Answered by joriki on November 24, 2021
Correlation is a measure of linear dependence, so it gives you an indication of how the two variables are related linearly. It doesn't, however, capture more complicated behaviour.
Therefore if you have $X$ and $X^2$ with $X sim N(0,1)$, then
$$operatorname{Cov}(X, X^2) = E(X^3) - E(X)E(X^2) = 0$$
since $E(X^3)=0$ and $E(X)=0$ (odd moments of $N(0,1)$ vanish by symmetry), but the two random variables are clearly dependent.
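A quick numerical check of this example (a sketch assuming Python with NumPy; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)   # X ~ N(0, 1)
y = x ** 2                           # Y = X^2, completely determined by X

# Sample covariance is ~0, matching E(X^3) - E(X)E(X^2) = 0.
print("cov(X, X^2) ~", np.cov(x, y)[0, 1])

# But Y is a deterministic function of X, so they are highly dependent:
# e.g. E(Y | |X| > 2) is far larger than E(Y) = 1.
print("E(Y)           ~", y.mean())
print("E(Y | |X| > 2) ~", y[np.abs(x) > 2.0].mean())
```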
Answered by johnny on November 24, 2021