Efficient computation of marginalized multivariate normal likelihood

Computational Science Asked by nwknoblauch on February 11, 2021

In general, if we know the marginal Gaussian distribution for some variable $\textbf{x}$ and a conditional Gaussian distribution for some $\textbf{y}|\textbf{x}$ of the form
$$p(\textbf{x}) = \mathcal{N}(\textbf{x}|\boldsymbol{\mu},\Lambda^{-1})$$

$$p(\textbf{y}|\textbf{x}) = \mathcal{N}(\textbf{y}|A\textbf{x}+\textbf{b},L^{-1})$$
then the marginal distribution of $\textbf{y}$ and the conditional distribution of $\textbf{x}|\textbf{y}$ are given by:

$$p(\textbf{y}) = \mathcal{N}(\textbf{y}|A\boldsymbol{\mu}+\textbf{b},L^{-1}+A\Lambda^{-1}A^{T})$$
$$p(\textbf{x}|\textbf{y}) = \mathcal{N}(\textbf{x}|\Sigma\left\{A^{T}L(\textbf{y}-\textbf{b})+\Lambda\boldsymbol{\mu}\right\},\Sigma)$$

where:
$$\Sigma = (\Lambda + A^{T}LA)^{-1}$$
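These identities are easy to sanity-check numerically. The sketch below (with arbitrary toy choices of $\boldsymbol{\mu}$, $A$, $\textbf{b}$, $\Lambda$, and $L$, all my own stand-ins) draws $\textbf{y} = A\textbf{x}+\textbf{b}+\textbf{e}$ by simulation and compares the empirical mean and covariance against the stated marginal $\mathcal{N}(A\boldsymbol{\mu}+\textbf{b},\,L^{-1}+A\Lambda^{-1}A^{T})$:

```python
import numpy as np

# Monte Carlo check of the marginal-of-y formula; all matrices here are
# arbitrary synthetic stand-ins, not part of the original question.
rng = np.random.default_rng(0)
p, q, n = 3, 2, 200_000

mu = rng.normal(size=p)
A = rng.normal(size=(q, p))
b = rng.normal(size=q)
Lam_inv = np.diag(rng.uniform(0.5, 2.0, size=p))  # Lambda^{-1}: prior covariance of x
L_inv = np.diag(rng.uniform(0.5, 2.0, size=q))    # L^{-1}: conditional covariance of y|x

x = rng.multivariate_normal(mu, Lam_inv, size=n)
y = x @ A.T + b + rng.multivariate_normal(np.zeros(q), L_inv, size=n)

mean_theory = A @ mu + b
cov_theory = L_inv + A @ Lam_inv @ A.T

mean_err = np.max(np.abs(y.mean(axis=0) - mean_theory))
cov_err = np.max(np.abs(np.cov(y.T) - cov_theory))
```

With 200,000 samples both errors come out small, consistent with Monte Carlo noise.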

In my situation, I have a prior and likelihood: $$\boldsymbol{\beta} \sim \mathcal{N}(0,I_p\sigma^2_\beta)$$
$$\hat{\boldsymbol{\beta}} | \boldsymbol{\beta} \sim \mathcal{N}(\hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}}^{-1}\boldsymbol{\beta},\hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}}),$$
where $\hat{\textbf{S}}$ is a diagonal matrix and $\hat{\textbf{R}}$ is a symmetric correlation matrix. Making the substitutions above, I arrive at the following marginalized likelihood:

$$\hat{\boldsymbol{\beta}}|\sigma_\beta^2 \sim \mathcal{N}(0,\sigma_\beta^2\hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}}^{-2}\hat{\textbf{R}}\hat{\textbf{S}}+\hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}})$$

My question is this: if I want to find the value of $\sigma_\beta^2$ that maximizes the marginalized likelihood, do I have to recompute the Cholesky factorization of that entire covariance matrix from scratch every time I try a new value of $\sigma_\beta^2$, or do I have other options?
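For concreteness, here is a sketch of the naive approach I am describing, with small synthetic stand-ins for $\hat{\textbf{S}}$, $\hat{\textbf{R}}$, and $\hat{\boldsymbol{\beta}}$: every candidate $\sigma_\beta^2$ pays for a fresh $O(p^3)$ Cholesky. (Writing $B = \hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}}^{-1}$, the covariance is $\sigma_\beta^2 BB^T + \hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}}$, since $BB^T = \hat{\textbf{S}}\hat{\textbf{R}}\hat{\textbf{S}}^{-2}\hat{\textbf{R}}\hat{\textbf{S}}$.)

```python
import numpy as np

# Naive baseline: rebuild and refactor the full covariance for every
# sigma2. S_diag, R, and beta_hat are synthetic stand-ins.
rng = np.random.default_rng(1)
p = 50
S_diag = rng.uniform(0.5, 2.0, size=p)

W = rng.normal(size=(p, 2 * p))
C = W @ W.T
d = np.sqrt(np.diag(C))
R = C / np.outer(d, d)                     # a valid correlation matrix

S = np.diag(S_diag)
SRS = S @ R @ S
B = S @ R @ np.diag(1.0 / S_diag)          # B = S R S^{-1}
beta_hat = rng.multivariate_normal(np.zeros(p), SRS)  # toy observation

def marginal_loglik(sigma2):
    cov = sigma2 * (B @ B.T) + SRS
    cho = np.linalg.cholesky(cov)          # O(p^3) redone per sigma2
    z = np.linalg.solve(cho, beta_hat)     # triangular solve: cho @ z = beta_hat
    logdet = 2.0 * np.sum(np.log(np.diag(cho)))
    return -0.5 * (p * np.log(2 * np.pi) + logdet + z @ z)
```

This is correct but wasteful when scanning many values of $\sigma_\beta^2$.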

If $\hat{\textbf{S}}=\text{diag}(1)$, then we can simply take the eigenvalue decomposition of $\textbf{R}$:
$$\hat{\boldsymbol{\beta}}|\sigma_\beta^2 \sim \mathcal{N}(0,\sigma_\beta^2\textbf{R}^2+\textbf{R}) = \mathcal{N}(0,\sigma_\beta^2\textbf{Q}\textbf{D}^2\textbf{Q}^T+\textbf{Q}\textbf{D}\textbf{Q}^T) \rightarrow \textbf{Q}^T\hat{\boldsymbol{\beta}}|\sigma_\beta^2 \sim \mathcal{N}(0,\sigma_\beta^2\textbf{D}^2+\textbf{D})$$
where $\textbf{Q}$ is the matrix of eigenvectors of $\textbf{R}$ and $\textbf{D}$ is the diagonal matrix of eigenvalues. That is very cheap to recompute for different values of $\sigma_\beta^2$. With $\hat{\textbf{S}}\neq\text{diag}(1)$, none of that works. My intuition is that I'm trying to get a cheap full-rank update of an eigenvalue decomposition, which is in general a no-go, but I would love to hear thoughts.
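The $\hat{\textbf{S}}=\text{diag}(1)$ shortcut can be sketched as follows (with a synthetic $\textbf{R}$ and $\hat{\boldsymbol{\beta}}$ as stand-ins): one eigendecomposition and one rotation up front, then each candidate $\sigma_\beta^2$ costs only $O(p)$ because the rotated covariance $\sigma_\beta^2\textbf{D}^2+\textbf{D}$ is diagonal.

```python
import numpy as np

# One-time O(p^3) setup, then O(p) per sigma2. R and beta_hat are
# synthetic stand-ins for the matrices in the question.
rng = np.random.default_rng(2)
p = 50
W = rng.normal(size=(p, 2 * p))
C = W @ W.T
d = np.sqrt(np.diag(C))
R = C / np.outer(d, d)                      # a valid correlation matrix
beta_hat = rng.multivariate_normal(np.zeros(p), R)  # toy observation

evals, Q = np.linalg.eigh(R)                # R = Q D Q^T, computed once
z = Q.T @ beta_hat                          # rotate the data once

def marginal_loglik(sigma2):
    v = sigma2 * evals**2 + evals           # eigenvalues of sigma2 R^2 + R
    return -0.5 * np.sum(np.log(2 * np.pi * v) + z**2 / v)
```

Scanning a grid of $\sigma_\beta^2$ values then touches only the length-$p$ vectors `evals` and `z`, never a dense matrix.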
