Cross Validated Asked by DomB on December 13, 2021
In the context of best linear unbiased predictors (BLUP), Henderson specified the mixed-model equations (see Henderson (1950): Estimation of Genetic Parameters. Annals of Mathematical Statistics, 21, 309-310). Let us assume the following mixed effects model:
$y = X\beta + Zu + e$
where $y$ is a vector of $n$ observable random variables, $\beta$ is a vector of $p$ fixed effects, $X$ and $Z$ are known matrices, and $u$ and $e$ are vectors of $q$ and $n$ random effects such that $E(u) = 0$ and $E(e) = 0$ and
$\operatorname{Var}\begin{bmatrix} u \\ e \end{bmatrix} = \begin{bmatrix} G & 0 \\ 0 & R \end{bmatrix}\sigma^2$
where $G$ and $R$ are known positive definite matrices and $\sigma^2$ is a positive constant.
According to Henderson (1950), the BLUP estimates $\hat{\beta}$ of $\beta$ and $\hat{u}$ of $u$ are defined as the solutions of the following system of equations:
$X'R^{-1}X\hat{\beta} + X'R^{-1}Z\hat{u} = X'R^{-1}y$
$Z'R^{-1}X\hat{\beta} + (Z'R^{-1}Z + G^{-1})\hat{u} = Z'R^{-1}y$
(Also see: Robinson (1991): That BLUP is a good thing: the estimation of random effects (with discussion). Statistical Science, 6:15–51).
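A standard fact about these equations, and a useful sanity check on them, is that the $\hat{\beta}$ they produce coincides with the generalized least-squares (GLS) estimate under the marginal covariance $V = R + ZGZ'$. A minimal numerical sketch (with made-up example matrices, not from Henderson's paper):

```python
# Check: beta from Henderson's mixed-model equations equals the GLS beta
# computed with the marginal covariance V = R + Z G Z'. Data are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 8, 2, 3
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
y = rng.standard_normal(n)
G = np.eye(q) * 2.0   # known positive definite covariance of u
R = np.eye(n) * 0.5   # known positive definite covariance of e

Ri = np.linalg.inv(R)
# Henderson's coefficient matrix and right-hand side
C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + np.linalg.inv(G)]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
beta_mme = np.linalg.solve(C, rhs)[:p]

# GLS estimate of beta using V = R + Z G Z'
Vi = np.linalg.inv(R + Z @ G @ Z.T)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

print(np.allclose(beta_mme, beta_gls))  # True
```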
I have not found any derivation of this solution, but I assume that Henderson approached it by minimizing the generalized least-squares criterion
$(y - X\beta - Zu)'V^{-1}(y - X\beta - Zu)$
where $V = R + ZGZ'$. The solutions should then be
$X'V^{-1}X\hat{\beta} + X'V^{-1}Z\hat{u} = X'V^{-1}y$
$Z'V^{-1}X\hat{\beta} + Z'V^{-1}Z\hat{u} = Z'V^{-1}y$.
We also know that $V^{-1} = R^{-1} - R^{-1}Z(G^{-1}+Z'R^{-1}Z)^{-1}Z'R^{-1}$.
However, how does one proceed to arrive at the mixed-model equations?
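The identity for $V^{-1}$ used here is the Woodbury matrix identity (note that the middle factor $G^{-1}+Z'R^{-1}Z$ must itself be inverted). A quick numerical check with arbitrary example matrices:

```python
# Check the Woodbury-type identity
#   V^{-1} = R^{-1} - R^{-1} Z (G^{-1} + Z' R^{-1} Z)^{-1} Z' R^{-1}
# for V = R + Z G Z'. Data are arbitrary positive definite matrices.
import numpy as np

rng = np.random.default_rng(1)
n, q = 6, 3
Z = rng.standard_normal((n, q))
G = np.diag(rng.uniform(0.5, 2.0, q))   # positive definite
R = np.diag(rng.uniform(0.5, 2.0, n))   # positive definite

V = R + Z @ G @ Z.T
Ri = np.linalg.inv(R)
inner = np.linalg.inv(np.linalg.inv(G) + Z.T @ Ri @ Z)
V_inv_woodbury = Ri - Ri @ Z @ inner @ Z.T @ Ri

print(np.allclose(V_inv_woodbury, np.linalg.inv(V)))  # True
```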
For a very simple derivation, without making any assumption on normality, see my paper
A. Neumaier and E. Groeneveld, Restricted maximum likelihood estimation of covariances in sparse linear models, Genet. Sel. Evol. 30 (1998), 3-26.
Essentially, the mixed model $$y = X\beta + Zu + \epsilon, \qquad \operatorname{Cov}(u) = \sigma^2 G, \qquad \operatorname{Cov}(\epsilon) = \sigma^2 D,$$ where $u$ and $\epsilon$ have zero mean and w.l.o.g. $G = LL^T$ and $D = MM^T$, is equivalent to the assertion that with $x = \begin{pmatrix} \beta \\ u \end{pmatrix}$ and $P = \begin{pmatrix} M & 0 \\ 0 & L \end{pmatrix}$, $E = \begin{pmatrix} I \\ 0 \end{pmatrix}$, $A = \begin{pmatrix} X & Z \\ 0 & I \end{pmatrix}$, the random vector $P^{-1}(Ey - Ax)$ has zero mean and covariance matrix $\sigma^2 I$. Thus the best linear unbiased predictor is given by the solution of the normal equations for the overdetermined linear system $P^{-1}Ax = P^{-1}Ey$. This gives Henderson's mixed model equations.
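This stacked-system view can be verified numerically: the least-squares solution of $P^{-1}Ax = P^{-1}Ey$ matches the solution of Henderson's equations. A sketch with arbitrary example data, using the answer's notation ($D$ for the residual covariance):

```python
# Check: the least-squares solution of the whitened stacked system
# P^{-1} A x = P^{-1} E y equals the solution of Henderson's equations.
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 8, 2, 3
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
y = rng.standard_normal(n)
# Build positive definite G, D and their Cholesky factors L, M
Aq = rng.standard_normal((q, q)); G = Aq @ Aq.T + np.eye(q)
An = rng.standard_normal((n, n)); D = An @ An.T + np.eye(n)
L = np.linalg.cholesky(G)   # G = L L'
M = np.linalg.cholesky(D)   # D = M M'

# Stacked system A x = E y with x = (beta, u)
A = np.block([[X, Z], [np.zeros((q, p)), np.eye(q)]])
E = np.vstack([np.eye(n), np.zeros((q, n))])
P = np.block([[M, np.zeros((n, q))], [np.zeros((q, n)), L]])
Pi = np.linalg.inv(P)
x_stacked, *_ = np.linalg.lstsq(Pi @ A, Pi @ E @ y, rcond=None)

# Henderson's mixed-model equations directly
Di, Gi = np.linalg.inv(D), np.linalg.inv(G)
C = np.block([[X.T @ Di @ X, X.T @ Di @ Z],
              [Z.T @ Di @ X, Z.T @ Di @ Z + Gi]])
rhs = np.concatenate([X.T @ Di @ y, Z.T @ Di @ y])
x_mme = np.linalg.solve(C, rhs)

print(np.allclose(x_stacked, x_mme))  # True
```

The agreement follows because $(P^{-1}A)'(P^{-1}A) x = (P^{-1}A)'P^{-1}Ey$ expands, with $P^{-T}P^{-1} = \operatorname{diag}(D^{-1}, G^{-1})$, exactly into Henderson's coefficient matrix and right-hand side.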
Answered by Arnold Neumaier on December 13, 2021
One approach is to form the log-likelihood, differentiate it with respect to the random effects $\mathbf{u}$ and set the result equal to zero, then repeat, differentiating with respect to the fixed effects $\boldsymbol{\beta}$.
With the usual normality assumptions we have:
$$ \begin{align*} \mathbf{y \mid u} &\sim \mathcal{N}(\mathbf{X\boldsymbol{\beta} + Zu, R}) \\ \mathbf{u} &\sim \mathcal{N}(\mathbf{0, G}) \end{align*} $$ where $\mathbf{y}$ is the response vector, $\mathbf{u}$ and $\boldsymbol{\beta}$ are the random-effects and fixed-effects coefficient vectors, and $\mathbf{X}$ and $\mathbf{Z}$ are the model matrices for the fixed effects and random effects respectively. The log-likelihood is then:
$$ -2\log L(\boldsymbol{\beta}, \mathbf{u}) = \log|\mathbf{R}| + (\mathbf{y - X\boldsymbol{\beta} - Zu})'\mathbf{R}^{-1}(\mathbf{y - X\boldsymbol{\beta} - Zu}) + \log|\mathbf{G}| + \mathbf{u'G^{-1}u} $$ Differentiating with respect to the random and fixed effects: $$ \begin{align*} \frac{\partial \log L}{\partial \mathbf{u}} &= \mathbf{Z'R^{-1}}(\mathbf{y - X\boldsymbol{\beta} - Zu}) - \mathbf{G^{-1}u} \\ \frac{\partial \log L}{\partial \boldsymbol{\beta}} &= \mathbf{X'R^{-1}}(\mathbf{y - X\boldsymbol{\beta} - Zu}) \end{align*} $$ After setting these both equal to zero, with some minor re-arranging, we obtain Henderson's mixed model equations:
$$ \begin{align*} \mathbf{Z'R^{-1}y} &= \mathbf{Z'R^{-1}X\boldsymbol{\beta}} + \mathbf{(Z'R^{-1}Z + G^{-1})u} \\ \mathbf{X'R^{-1}y} &= \mathbf{X'R^{-1}X\boldsymbol{\beta}} + \mathbf{X'R^{-1}Zu} \end{align*} $$
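The stationarity conditions above can be checked numerically: at the solution of the mixed-model equations, both gradients vanish. A minimal sketch with arbitrary example data:

```python
# Check: the gradients of the joint (penalized) objective
#   (y - Xb - Zu)' R^{-1} (y - Xb - Zu) + u' G^{-1} u
# are zero at the solution of Henderson's mixed-model equations.
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 8, 2, 3
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
y = rng.standard_normal(n)
G = np.eye(q) * 1.5
R = np.eye(n) * 0.7
Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)

# Solve Henderson's equations for (beta, u)
C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
sol = np.linalg.solve(C, rhs)
b, u = sol[:p], sol[p:]

resid = y - X @ b - Z @ u
grad_b = X.T @ Ri @ resid            # derivative w.r.t. beta (up to sign)
grad_u = Z.T @ Ri @ resid - Gi @ u   # derivative w.r.t. u (up to sign)
print(np.allclose(grad_b, 0), np.allclose(grad_u, 0))  # True True
```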
Answered by Robert Long on December 13, 2021