Computational Science — Asked by VF1 on December 3, 2020
I have a system where $A$ is a large $n\times n$ matrix with fast MVMs. It may have many nonzero entries (albeit structured so as to allow fast MVMs), and is not necessarily diagonally dominant.
However, $A$ is weakly positive definite.
$Y$ is an $n\times m$ dense matrix.
I understand iterative Krylov-subspace-based methods exist for finding $X=A^{-1}Y$, and they perform well. Are there any optimizations that can be implemented if I am only interested in finding:
$$
\mathbf{x}=\operatorname{diag}\left(Y^\top A^{-1}Y\right)
$$
In other words, I only want the $i$-th entry of the solution, $\mathbf{y}_i^\top\mathbf{x}_i$, where $A\mathbf{x}_i=\mathbf{y}_i$ and $Y=[\mathbf{y}_i]_i$.
MVM stands for matrix-vector multiplication. In my application an MVM takes only $O(n)$ time, compared to $O(n^2)$ for a dense matrix.
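For reference, the direct route is one Krylov solve per column of $Y$ followed by a dot product. A minimal sketch, assuming $A$ is available only as a fast MVM callable (the names `A_mv`, `cg_solve`, and `diag_quadratic_form` are illustrative, not from the post):

```python
import numpy as np

def cg_solve(A_mv, b, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients for SPD A, given only the MVM v -> A @ v."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def diag_quadratic_form(A_mv, Y):
    """x_i = y_i^T A^{-1} y_i: one CG solve per column of Y."""
    return np.array([Y[:, i] @ cg_solve(A_mv, Y[:, i])
                     for i in range(Y.shape[1])])
```

Each column costs a modest number of MVMs, so the total is $m$ Krylov solves; the question is whether the diagonal alone can be had for less.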
An approach by Papandreou and Yuille for diagonal $A$ relates variance estimation to the expectation of a quadratic form. The logic extends more generally: since $A$ is PD, so is its inverse. Then $Z\sim N(0, A^{-1})$ is well-defined, and $\mathbf{x}=\mathbb{E}\left[(Y^\top Z)^2\right]$, with the square taken elementwise.
If we can sample $Z$ then an approximation is possible (and converges). The linked paper has success when we can find the square root of $A$: solving $AZ=A^{1/2}G$ for samples $G\sim N(0,I)$ offers a way of sampling $Z$, since $A^{-1/2}G\sim N(0,A^{-1})$.
When an operator $A^{1/2}$ is available, this provides a Monte Carlo route to estimating the variance terms $\mathbf{x}$.
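Put together, each sample draws $g\sim N(0,I)$, solves $Az = A^{1/2}g$ (so $z\sim N(0,A^{-1})$), and accumulates the elementwise square of $Y^\top z$. A sketch under these assumptions (the names `A_mv`, `A_half_mv`, and the inline CG solver are illustrative, not from the paper):

```python
import numpy as np

def cg_solve(A_mv, b, tol=1e-10, maxiter=1000):
    """Minimal conjugate gradients for SPD A given only its MVM."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def diag_mc_estimate(A_mv, A_half_mv, Y, n_samples, seed=0):
    """Monte Carlo estimate of diag(Y^T A^{-1} Y).

    Each sample: g ~ N(0, I); solve A z = A^{1/2} g so that z ~ N(0, A^{-1});
    accumulate (Y^T z)^2 elementwise. The mean converges to x.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    acc = np.zeros(m)
    for _ in range(n_samples):
        g = rng.standard_normal(n)
        z = cg_solve(A_mv, A_half_mv(g))
        acc += (Y.T @ z) ** 2
    return acc / n_samples
```

Each sample costs one application of $A^{1/2}$ plus one CG solve, and since $\mathbf{y}_i^\top Z\sim N(0,\mathbf{x}_i)$ the per-sample variance of $(\mathbf{y}_i^\top Z)^2$ is $2\mathbf{x}_i^2$, so the relative error decays like $\sqrt{2/N}$; this pays off when $m$ exceeds the number of samples needed.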
Answered by VF1 on December 3, 2020