
How to Invert a Poorly Conditioned Matrix

Computational Science Asked by Brian.C.Seymour on March 26, 2021

In my research, I need to invert a Fisher matrix in order to get a covariance matrix for parameter estimation. Unfortunately, the values of the Fisher matrix vary by many orders of magnitude, and the numerical errors in the inverse are significant (see the image at the end).

I first attempted this in Python using scipy.linalg.inv, but it does not give stable enough results. Do you have any recommendations on how to proceed?

Edit: Some context for the comments. I am trying to find parameter uncertainties $\theta^i$ given a measurement $h(f, \theta^i)$. The Fisher matrix is defined as
$$
\Gamma_{ij} = \left( \frac{\partial h}{\partial \theta^i} \,\middle|\, \frac{\partial h}{\partial \theta^j} \right) = 2 \int_0^\infty S_n(f)^{-1} \left( \frac{\partial h^\star}{\partial \theta^i} \frac{\partial h}{\partial \theta^j} + \frac{\partial h}{\partial \theta^i} \frac{\partial h^\star}{\partial \theta^j} \right) df \, .
$$
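For concreteness, here is a minimal sketch of how $\Gamma_{ij}$ could be assembled numerically from sampled derivatives of the signal. The names `freqs`, `Sn`, and `dh` are hypothetical placeholders, not from the original post.

```python
import numpy as np

def fisher_matrix(freqs, Sn, dh):
    """Assemble Gamma_ij = 2 * integral of (dh_i* dh_j + dh_i dh_j*) / Sn df.

    freqs : frequency grid, shape (n_freqs,)
    Sn    : noise power spectral density sampled on freqs, shape (n_freqs,)
    dh    : complex derivatives dh/dtheta^i, shape (n_params, n_freqs)
    """
    n = dh.shape[0]
    gamma = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # The integrand is real by construction (a term plus its conjugate);
            # np.real only discards the zero imaginary part.
            integrand = (np.conj(dh[i]) * dh[j] + dh[i] * np.conj(dh[j])) / Sn
            gamma[i, j] = 2.0 * np.real(np.trapz(integrand, freqs))
    return gamma
```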

Then the parameter uncertainties are
$$
\langle \Delta \theta^i \, \Delta \theta^j \rangle = \left( \Gamma^{-1} \right)^{ij} \, .
$$

So my numerical issues are occurring when I try to find $\left( \Gamma^{-1} \right)^{ij}$.
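As a diagnostic, here is a short sketch of the straightforward inversion together with a condition-number check; the file name fisher.txt is a hypothetical placeholder.

```python
import numpy as np
from scipy import linalg

gamma = np.loadtxt("fisher.txt")   # hypothetical: load the Fisher matrix

# The condition number tells you roughly how many digits are lost in the
# inverse: with ~1e-16 relative error in the entries, cond ~ 1e12 leaves
# only about 4 significant digits.
print("condition number:", np.linalg.cond(gamma))

cov = linalg.inv(gamma)            # covariance matrix, Gamma^{-1}
sigmas = np.sqrt(np.diag(cov))     # 1-sigma parameter uncertainties
```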

[Image: example of a matrix I am trying to invert]

One Answer

There is no simple fix. For an ill-conditioned matrix $A$, the harm (loss of precision) is already done the moment you store those numbers in a NumPy array, because the tiny $\sim 10^{-16}$ relative perturbation away from the exact, non-representable values is already enough to corrupt the inverse.

You could increase your working precision; but at that point the question is whether your matrix entries $A_{ij}$ can really be computed to more than 16 correct digits. The answer is almost surely no for data that depend on real-world measurements.
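If you do want to try higher working precision anyway, one option is mpmath. A sketch, assuming the matrix is available as a NumPy array (the file name is again a hypothetical placeholder); note that this only pays off if the entries themselves are trustworthy beyond double precision.

```python
import numpy as np
import mpmath as mp

mp.mp.dps = 50                     # 50 decimal digits of working precision

gamma = np.loadtxt("fisher.txt")   # hypothetical: load the Fisher matrix
G = mp.matrix(gamma.tolist())      # convert to an mpmath matrix
cov = G**-1                        # inversion carried out at 50 digits

# Convert back to float64 for downstream use (this re-rounds to ~16 digits).
cov_np = np.array([[float(cov[i, j]) for j in range(cov.cols)]
                   for i in range(cov.rows)])
```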

Another hope is that a diagonal rescaling can improve the condition number; it looks like row and column 2 have the largest values in your data, so you could scale those down. This may give you better accuracy in individual entries (but not necessarily if you measure the accuracy of the computed $B \approx A^{-1}$ by $\|B - A^{-1}\|$).
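Here is a sketch of one such symmetric rescaling, using the diagonal of $\Gamma$ itself so that the scaled matrix has unit diagonal (this assumes a positive diagonal, which a Fisher matrix should have):

```python
import numpy as np

def inv_with_rescaling(gamma):
    """Invert gamma after symmetric diagonal rescaling.

    With D = diag(1/sqrt(diag(gamma))), the scaled matrix D @ gamma @ D has
    unit diagonal and is often much better conditioned when the ill
    conditioning comes from parameters with wildly different scales.
    The inverse of the original matrix is then D @ inv(D @ gamma @ D) @ D.
    """
    d = 1.0 / np.sqrt(np.diag(gamma))
    D = np.diag(d)
    gamma_scaled = D @ gamma @ D
    cov_scaled = np.linalg.inv(gamma_scaled)
    return D @ cov_scaled @ D
```

If the scaled matrix is still badly conditioned, the ill conditioning is intrinsic (near-degenerate parameter combinations), and no diagonal rescaling will help.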

So you will never get the inverse with better normwise error than that. Since you know statistics, you are used to functions of your data being uncertain; you can treat this as just another instance of that phenomenon. This is not the fault of the algorithm; it is the fault of the uncertainties in your input data.

Answered by Federico Poloni on March 26, 2021
