Computational Science Asked by Soumyadipta Sarkar on December 22, 2020
I am presently working on solving very large symmetric (but not positive definite) systems generated by certain algorithms. These matrices have a nice block sparsity that can be exploited for parallel solving, but I can't decide whether I should use a direct approach (such as a multifrontal method) or an iterative one (preconditioned GMRES or MINRES). All my tests show that iterative solvers, even with pretty fast convergence (about 7 inner iterations), fail to beat the direct backslash (\) operator in MATLAB. Yet in theory, direct methods are supposed to be costlier. How is this happening? Is there any up-to-date document or paper on such cases? Can block sparsity be exploited in parallel by direct methods as efficiently as by flexible iterative solvers like GMRES?
An iterative solver can beat a direct method only if the problem is sufficiently large (where "large" depends on several factors, such as the storage required and the efficiency of the implementation). Also, Krylov methods (GMRES, for example) are only good in practice if you use an appropriate preconditioner; without preconditioning, Krylov methods are generally of little use. I too work with block-sparse matrices (they come from PDE applications) and have observed that a block-preconditioned (overlapping additive Schwarz) Krylov solver, either restarted GMRES or BiCGStab, coupled with a coarse-grid correction (or a multigrid method), is scalable for this type of matrix. A simplified sketch of the block-preconditioning idea is given below.
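Overlapping additive Schwarz with a coarse-grid correction is not built into MATLAB, but a minimal sketch of the simpler, non-overlapping block-Jacobi variant shows the general pattern. The block size nb and the function names below are purely illustrative, not anyone's actual code:

    function x = blockJacobiGmresDemo(A, b, nb)
    % Solve A*x = b with GMRES preconditioned by non-overlapping block Jacobi.
    % A is assumed sparse with diagonal blocks of size nb (illustrative only).
        n    = size(A, 1);
        nblk = ceil(n / nb);
        fac  = cell(nblk, 1);
        for k = 1:nblk
            idx    = (k-1)*nb + 1 : min(k*nb, n);
            fac{k} = decomposition(A(idx, idx));   % factor each diagonal block once
        end
        prec = @(r) applyBlocks(r, fac, nb, n);    % function handle returning M\r
        x = gmres(A, b, 30, 1e-8, 200, prec);      % restarted GMRES(30)
    end

    function z = applyBlocks(r, fac, nb, n)
    % Apply the block-Jacobi preconditioner: solve each diagonal block separately.
        z = zeros(n, 1);
        for k = 1:numel(fac)
            idx    = (k-1)*nb + 1 : min(k*nb, n);
            z(idx) = fac{k} \ r(idx);              % reuse the cached factorization
        end
    end

In practice the blocks would come from the matrix's own block structure, and a coarse-grid or multigrid component would be layered on top, as described above.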
Answered by user1964178 on December 22, 2020
If your matrix has dimension in the mid tens of thousands or less, use a direct method, although there aren't many freely available direct solvers for indefinite symmetric systems (none that I know of are open source). There is MA57 from HSL, but it is free only for academic use. You can certainly ignore the symmetry and use UMFPACK.
At around the low hundreds of thousands in dimension, the memory use of a direct method starts to exceed what a typical desktop computer can reasonably handle, so unless you have a beefy shared-memory machine, you will need to move to iterative methods. For indefinite problems, you can specialize BiCG (biconjugate gradient) to symmetric systems, although breakdown is possible. There is also the recently released MINRES-QLP for symmetric systems, which offers better numerical stability.
The two approaches require rather different implementations: for direct methods you actually need to assemble the matrix, whereas for iterative methods you usually won't form it explicitly, only apply it to vectors.
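As a small, hypothetical illustration of that last point, MATLAB's Krylov solvers accept a function handle that applies the operator, so the matrix never has to be assembled. The shifted 1-D Laplacian below is just a stand-in operator:

    n     = 1e4;
    shift = 1.5;                                % the shift makes the operator indefinite
    % Apply (tridiag(-1, 2, -1) - shift*I)*v without ever forming the matrix.
    afun  = @(v) (2 - shift)*v - [v(2:end); 0] - [0; v(1:end-1)];
    b     = ones(n, 1);
    [x, flag, relres] = minres(afun, b, 1e-8, n);   % flag/relres report convergence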
There are a number of reasons why one approach might be faster than the other, especially as a function of matrix dimension. For highly ill-conditioned systems, iterative methods can stagnate badly. For not-so-sparse matrices, direct methods create a lot of fill-in, which slows things down considerably. Also, the direct methods in MATLAB are highly optimized (backslash uses UMFPACK or MA57 internally), while iterative methods are usually coded directly in MATLAB, and there are fewer opportunities to exploit level-3 BLAS since the bottlenecks are matrix-vector products and dot products.
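For what it's worth, the symmetric-indefinite direct path can also be invoked explicitly through MATLAB's sparse ldl (the MA57 route mentioned above). A rough sketch on a stand-in matrix:

    A = gallery('poisson', 100) - 1.5*speye(100^2);   % stand-in symmetric indefinite matrix
    b = randn(size(A, 1), 1);
    [L, D, P, S] = ldl(A);                            % P'*S*A*S*P = L*D*L'
    x = S * (P * (L' \ (D \ (L \ (P' * (S * b))))));  % unwind the factorization to solve
    fprintf('relative residual: %.2e\n', norm(A*x - b) / norm(b));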
Answered by Victor Liu on December 22, 2020
The MUMPS sparse direct solver can handle symmetric indefinite systems and is freely available (http://graal.ens-lyon.fr/MUMPS/). Iain Duff was one of the authors of both MUMPS and MA57, so the algorithms have many similarities.
MUMPS was designed for distributed-memory parallel computers but it also works well on single-processor machines. If you link it with a multi-threaded BLAS library you can achieve reasonable speedups on shared memory, multicore processors.
You didn't say how large "very large" is but MUMPS also has an out-of-core capability to handle problems where the factored matrix will not fit in the available memory.
Answered by Bill Greene on December 22, 2020
The general problem that direct solvers suffer from is the fill-in phenomenon: the triangular factors of a sparse matrix can be far denser than the matrix itself (in the extreme case, the inverse of a sparse matrix is generally dense). This leads to huge memory requirements if the structure of the matrix is not "suitable".
There are attempts to work around these issues, and MATLAB's default lu function employs a few of them. See http://www.mathworks.de/de/help/matlab/ref/lu.html for an overview of permutations, diagonal scaling, and the like.
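A quick way to see the effect of those reorderings is to compare the fill in the factors with and without the fill-reducing column permutation; the gallery matrix here is just a stand-in:

    A = gallery('wathen', 60, 60);        % sparse symmetric test matrix
    [L1, U1, P1]     = lu(A);             % row pivoting only
    [L2, U2, P2, Q2] = lu(A);             % adds a fill-reducing column ordering (UMFPACK)
    fprintf('nnz of factors: %d (no reordering) vs %d (with reordering)\n', ...
            nnz(L1) + nnz(U1), nnz(L2) + nnz(U2));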
The same holds true of course for more advanced packages like MUMPS (http://graal.ens-lyon.fr/MUMPS/).
In general, though, if your problem is large enough, you will hit that memory boundary, and you'll have to use iterative (Krylov) methods.
If your problem is symmetric and indefinite, MINRES may be the obvious choice. Note, however, that its underlying short-recurrence Lanczos process can suffer from loss of orthogonality. GMRES can help here: in exact arithmetic it produces the same iterates as MINRES on a symmetric problem, but its full (Arnoldi) orthogonalization tends to cope better with round-off, at the expense of higher memory requirements than MINRES. Some people regard GMRES as "the best implementation of MINRES".
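A side-by-side sketch of the two solvers on a toy symmetric indefinite (saddle-point-like) matrix; the sizes and the matrix itself are illustrative only:

    n = 5000;
    B = 0.1 * sprandn(n, n, 1e-3);
    A = [speye(n), B; B', -speye(n)];        % symmetric indefinite, eigenvalues near +/-1
    b = randn(2*n, 1);
    tol = 1e-10; maxit = 300;
    [xm, flagm, resm, itm] = minres(A, b, tol, maxit);      % short Lanczos recurrence
    [xg, flagg, resg, itg] = gmres(A, b, [], tol, maxit);   % full (Arnoldi) orthogonalization
    fprintf('minres: flag %d, relres %.1e, %d iterations\n', flagm, resm, itm);
    fprintf('gmres : flag %d, relres %.1e, %d iterations\n', flagg, resg, itg(2));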
In any case, you'll probably profit from preconditioning your system. If you don't want to spend time tailoring a preconditioner, an incomplete LU factorization (ILU) preconditioner might already get you somewhere.
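As a rough sketch of that suggestion, one can compare GMRES with and without an ILU preconditioner on a stand-in matrix (a shifted Laplacian here; the drop tolerance and sizes are arbitrary, and note that ILU can break down with zero pivots on a general indefinite matrix):

    A = gallery('poisson', 200) - 1.2*speye(200^2);    % stand-in symmetric indefinite matrix
    b = randn(size(A, 1), 1);
    [~, flag0, res0] = gmres(A, b, 30, 1e-8, 100);               % no preconditioner
    opts = struct('type', 'ilutp', 'droptol', 1e-4);
    [L, U] = ilu(A, opts);                                       % incomplete LU factors
    [~, flag1, res1] = gmres(A, b, 30, 1e-8, 100, L, U);         % ILU-preconditioned
    fprintf('no preconditioner : flag %d, relres %.1e\n', flag0, res0);
    fprintf('ILU preconditioner: flag %d, relres %.1e\n', flag1, res1);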
Answered by Nico Schlömer on December 22, 2020
MATLAB's backslash (\) operator is highly efficient thanks to top-notch engineering. You can see some of the ideas, and the shortcuts it exploits, in Timothy Davis's book Direct Methods for Sparse Linear Systems.
In MATLAB you can use gmres, which is well developed and stable. minres, which in theory should be ideal for your case, may not be as reliable (at least in my experience). You should get equal or better efficiency from MATLAB's gmres if you pair it with a suitable preconditioner.
For even bigger systems, use a tool like PETSc.
Answered by Soumyadipta Sarkar on December 22, 2020