
How to verify the solution from a preconditioned linear system solver?

Computational Science Asked on May 12, 2021

I am solving $Ax=b$. $A$ has a very large condition number ($> 10^{10}$).

I am using the conjugate gradient method with point-Jacobi preconditioning. I obtained a solution $x$ that "looks" reasonable. What is the best way to verify that my solution is correct when using this method with preconditioning?

Note that I cannot obtain an exact solution to the original equation.

My matrix is symmetric, and I believe it is negative semidefinite: at least one of its eigenvalues is 0, so it cannot be negative definite.

2 Answers

You've started with a singular linear system of equations $Ax=b$. As a practical matter, it's unlikely that $b$ lies exactly in the range of $A$, so at best you can find a least squares solution that minimizes

$\min_x \| Ax - b \|_{2}$

Because the system is singular, the null space of $A$ is nontrivial, and there are infinitely many solutions to this least squares problem. In particular, if $x_{LS}$ is some particular least squares solution and $v$ is any vector in the null space of $A$, then any $x$ of the form $x=x_{LS}+\alpha v$ is also a least squares solution.
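To see this concretely, here is a minimal sketch in Python (numpy assumed; the singular matrix and right-hand side are made up for illustration). Shifting a least squares solution by a null-space vector leaves the residual norm unchanged:

    import numpy as np

    # Made-up singular symmetric matrix (a 1-D Neumann Laplacian); its
    # null space is spanned by the constant vector [1, 1, 1].
    A = np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
    b = np.array([1.0, 0.0, 0.0])  # deliberately not in the range of A

    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # one least squares solution
    v = np.ones(3)                                # null-space vector
    for alpha in (0.0, 1.0, -3.5):
        # The residual norm printed here is identical for every alpha.
        print(np.linalg.norm(A @ (x_ls + alpha * v) - b))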

Which of these infinitely many least squares solutions is the "correct" one?

Perhaps you don't care and will accept any least squares solution. In that case, any solution that satisfies the normal equations $A^{T}Ax=A^{T}b$ will be correct.
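That criterion is easy to turn into a direct verification test, independent of how $x$ was computed. A minimal sketch (numpy assumed; the helper name is made up):

    import numpy as np

    def normal_eq_residual(A, x, b):
        # ||A^T (A x - b)||_2 is (near) zero exactly when x is a least
        # squares solution, regardless of which solver produced x.
        return np.linalg.norm(A.T @ (A @ x - b))

Note that for a symmetric $A$ this is just $\| A(Ax-b) \|_{2}$, and that a small value only certifies that $x$ is some least squares solution, not which one.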

Perhaps you want to simultaneously minimize the norm of $x$ to get a minimum norm least squares solution? You can regularize the least squares problem by minimizing

$\min_x \| Ax-b \|_{2}^{2} + \lambda^{2} \| x \|_{2}^{2}$

where $\lambda$ is a small parameter. This least squares problem is strictly convex, so it has a unique optimal solution.
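You don't have to form this regularized problem yourself; for example, LSQR in scipy solves exactly this damped formulation. A minimal sketch (the matrix, right-hand side, and value of $\lambda$ are placeholders):

    import numpy as np
    from scipy.sparse.linalg import lsqr

    # Same style of made-up singular symmetric A and b as above.
    A = np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
    b = np.array([1.0, 0.0, 0.0])
    lam = 1e-6  # small regularization parameter (placeholder value)

    # lsqr's `damp` argument minimizes ||Ax - b||^2 + damp^2 ||x||^2; with
    # a small damp it approximates the minimum norm least squares solution.
    x_reg = lsqr(A, b, damp=lam)[0]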

What happens in floating point arithmetic with limited precision? In that case, it's likely that $A$ will have a very large but still finite condition number.

You can throw a preconditioner $M$ at $A$, in hopes that $M^{-1}A$ will be better conditioned than $A$. This might work in the sense that $M^{-1}A$ is numerically better conditioned than $A$, but since we know that $A$ is theoretically singular, this is a mirage: $M^{-1}A$ is still singular in exact arithmetic.

So what is the solution to the numerically well-conditioned system $M^{-1}Ax=M^{-1}b$ doing? This is similar to regularization in the sense that $M^{-1}A$ is better conditioned than $A$, but you don't know much about the direction in which the preconditioner has pushed the solution. If $M^{-1}Ax-M^{-1}b$ is small, and $M^{-1}$ is well behaved, then perhaps $Ax-b$ is small and you've got a least squares solution. On the other hand, if $M^{-1}$ is very badly scaled, it may be that the residual in the original problem $Ax-b$ is quite large.
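This suggests a simple sanity check: compare the residual the preconditioned iteration sees with the true residual of the original system. A minimal sketch for point-Jacobi preconditioning (numpy assumed; it requires $A$ to have a nonzero diagonal):

    import numpy as np

    def check_residuals(A, x, b):
        # Point Jacobi: M = diag(A), so applying M^{-1} scales each
        # component of the residual by 1 / A[i, i].
        r = A @ x - b
        r_prec = r / np.diag(A)
        return np.linalg.norm(r), np.linalg.norm(r_prec)

If the two norms differ by many orders of magnitude, a small preconditioned residual tells you little about the residual $Ax-b$ you actually care about.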

There has been some research on preconditioners that are designed to achieve a particular regularization. See for example:

Calvetti, Daniela. 2007. “Preconditioned Iterative Methods for Linear Discrete Ill-Posed Problems from a Bayesian Inversion Perspective.” Journal of Computational and Applied Mathematics, Special Issue: Applied Computational Inverse Problems, 198 (2):378–95. https://doi.org/10.1016/j.cam.2005.10.038.

Correct answer by Brian Borchers on May 12, 2021

If you don't care too much about which $b$ you're working with (say, you just want to exercise the linear algebra), then you can use the Method of Manufactured Solutions (MMS), much as we differential equations geeks do for our problems: start with a known exact solution $x^*$, derive a $b$, and then apply your solver to see how good an $x$ you can recover. By picking $b=Ax^*$ for any $x^*$ you make up, you can see how well your solver performs on that $b$ and compare your computed $x$ to $x^*$ however you like, computing $\|x-x^*\|$ in almost any norm. If you require that $b$ or $x^*$ have certain properties, then you may have more trouble.

If you're committed to your particular $A$ and $b$, the problem is somewhat harder, and I'd suggest using MMS in the original problem that led to $A$ and $b$ if you can control it (which is what we PDE geeks do when we can). I can't tell for sure whether you're trying to pick the best solver for your underlying problem or trying to design a new linear algebra solver for a class of poorly conditioned problems, but either way MMS gives you exact solutions to work with. For your PDE, the manufactured $x^*$s might not be very physical, but the $b$ generated from them will be as representative as possible of what your system $A$ can produce.
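A minimal sketch of this workflow (scipy assumed; the model matrix, manufactured $x^*$, and preconditioner are all stand-ins for illustration):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg, LinearOperator

    n = 100
    # Stand-in SPD model matrix (a 1-D Dirichlet Laplacian).
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

    x_star = np.sin(np.linspace(0.0, np.pi, n))  # manufactured solution
    b = A @ x_star                               # derived right-hand side

    # Point-Jacobi preconditioner, supplied as the action of M^{-1}.
    M = LinearOperator((n, n), matvec=lambda v: v / A.diagonal())

    x, info = cg(A, b, M=M)  # info == 0 means CG reported convergence
    print(np.linalg.norm(x - x_star) / np.linalg.norm(x_star))

The printed relative error against $x^*$ is a direct measure of solver quality, independent of whatever the preconditioner did to the residual.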

I recommend starting with simple $x^*$s and working up to complex ones. For error analysis, people tend to recommend functions that excite as many modes of the physical solution as possible and have no singularities (unless you expect them), such as transcendental and other high-order functions, but starting with polynomials on simple domains is probably best when picking functions for arbitrary, smooth PDEs.

The original paper on MMS can be found at the original MMS paper link (PDF warning, if you care).

Answered by Bill Barth on May 12, 2021
