Computational Science Asked by Florian Brucker on April 13, 2021
I’m currently looking into parallel methods for ODE integration. There is a lot of new and old literature out there describing a wide range of approaches, but I haven’t found any recent surveys or overview articles describing the topic in general.
There’s the book by Burrage [1], but it’s almost 20 years old and hence does not cover many of the more modern ideas like the parareal algorithm.
[1] K. Burrage, Parallel and Sequential Methods for Ordinary Differential Equations, Clarendon Press, Oxford, 1995
I'm not aware of any recent overview articles, but I am actively involved in the development of the PFASST algorithm, so I can share some thoughts.
There are three broad classes of time-parallel techniques that I am aware of: those that parallelize across the method, across the problem, and across the time domain.
Methods that parallelize across the method usually perform very close to spec but don't scale beyond a handful of (time) processors. They are typically easier to implement than other methods and are a good choice if you have a few extra cores lying around and are looking for predictable and modest speedups.
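To make "parallelism across the method" concrete, here is a small sketch (my own illustration, not taken from any of the papers below) using extrapolation: each entry of the first extrapolation column advances the solution with a different number of Euler substeps, and those computations are independent of one another, so a handful of threads can evaluate them concurrently before a cheap serial extrapolation combines them.

```python
# Sketch of parallelism "across the method" via extrapolation of explicit
# Euler, for a generic ODE u' = f(t, u).  Illustrative code, not from the
# cited papers: the independent substep computations run concurrently.
from concurrent.futures import ThreadPoolExecutor

def euler_substeps(f, t, u, H, n):
    """Advance u from t to t+H with n explicit Euler substeps."""
    h = H / n
    for i in range(n):
        u = u + h * f(t + i * h, u)
    return u

def extrapolation_step(f, t, u, H, levels=4):
    """One extrapolation step of order `levels` (harmonic substep sequence)."""
    with ThreadPoolExecutor() as pool:
        # Independent work: one task per substep count -- this is the
        # "across the method" parallelism.
        T = list(pool.map(lambda n: euler_substeps(f, t, u, H, n),
                          range(1, levels + 1)))
    # Cheap serial part: Aitken-Neville extrapolation toward h -> 0.
    for k in range(1, levels):
        for j in range(levels - 1, k - 1, -1):
            n_j, n_jk = j + 1, j + 1 - k
            T[j] = T[j] + (T[j] - T[j - 1]) / (n_j / n_jk - 1)
    return T[-1]
```

Note the scaling limitation mentioned above: the number of independent tasks equals the number of extrapolation levels, so the available concurrency is fixed by the method's order, not by the problem size.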
Methods that parallelize across the problem include waveform relaxation: the ODE system is split into subsystems that are integrated independently over the whole time interval and coupled through an iteration.

Methods that parallelize across the time domain include Parareal, PITA, and PFASST. These methods are all iterative and consist of inexpensive (but inaccurate) "coarse" propagators and expensive (but accurate) "fine" propagators. They achieve parallel efficiency by iteratively evaluating the fine propagator in parallel to improve a serial solution obtained using the coarse propagator.
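The coarse/fine interplay is easiest to see in the Parareal correction, $U_{k+1}^{n+1} = G(U_{k+1}^{n}) + F(U_k^{n}) - G(U_k^{n})$. Here is a minimal sketch for the scalar test problem $u' = \lambda u$, with both propagators taken to be explicit Euler (an illustrative choice; in a real implementation the per-slice fine solves, written as a list comprehension here, are the part distributed across time processors):

```python
# Minimal Parareal sketch for u' = lam*u on [0, T], split into n_slices
# time slices.  Coarse propagator G = one Euler step per slice; fine
# propagator F = n_fine Euler steps per slice.  Illustrative only.
import numpy as np

def parareal(lam, u0, T, n_slices, n_fine, n_iters):
    dt = T / n_slices

    def G(u):                  # cheap, inaccurate coarse propagator
        return u * (1 + lam * dt)

    def F(u):                  # expensive, accurate fine propagator
        h = dt / n_fine
        return u * (1 + lam * h) ** n_fine

    # Initial serial sweep with the coarse propagator.
    U = np.empty(n_slices + 1)
    U[0] = u0
    for n in range(n_slices):
        U[n + 1] = G(U[n])

    for k in range(n_iters):
        # Fine solves use the previous iterate only: embarrassingly parallel.
        Fk = [F(U[n]) for n in range(n_slices)]
        Gk_old = [G(U[n]) for n in range(n_slices)]
        # Serial correction sweep:
        # U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n])
        for n in range(n_slices):
            U[n + 1] = G(U[n]) + Fk[n] - Gk_old[n]
    return U
```

A standard property worth checking: after $k$ iterations the first $k$ slices agree exactly with the serial fine solution, so with as many iterations as slices Parareal reproduces the serial fine integrator (and all parallel speedup is lost, which motivates the efficiency bound below).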
The Parareal and PITA algorithms suffer from a rather unfortunate upper bound on their parallel efficiency $E$: $E < 1/K$ where $K$ is the number of iterations required to obtain convergence throughout the domain. For example, if your Parareal implementation required 10 iterations to converge and you are using 100 (time) processors, the largest speedup you could hope for would be 10x. The PFASST algorithm relaxes this upper bound by hybridizing the time-parallel iterations with the iterations of the Spectral Deferred Correction time-stepping method and incorporating Full Approximation Scheme corrections to a hierarchy of space/time discretizations.
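A back-of-the-envelope cost model (my own simplification, not taken from any particular paper) makes the $E < 1/K$ bound easy to see: even if the coarse propagator were free, $K$ parallel fine sweeps still cost $K$ fine solves of one slice each, versus $P$ fine solves for the serial method.

```python
def parareal_speedup(P, K, coarse_to_fine_ratio):
    """Speedup of Parareal under a crude cost model (illustrative only).

    P time slices / processors, K iterations, and a coarse propagator
    costing `coarse_to_fine_ratio` times one fine solve per slice.
    Serial cost: P fine solves.  Parareal cost: K+1 serial coarse sweeps
    over all P slices, plus K parallel fine solves (one slice each).
    """
    serial = P * 1.0
    parallel = (K + 1) * P * coarse_to_fine_ratio + K * 1.0
    return serial / parallel

# Even with a free coarse propagator, P = 100 and K = 10 gives
# parareal_speedup(100, 10, 0.0) == 10.0, i.e. E = 10/100 = 1/K.
```

Any nonzero coarse cost only pushes the speedup further below $P/K$, which is why so much effort goes into making the coarse propagator cheap.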
Lots of games can be played with all of these methods to try to speed them up, and the performance of these across-the-domain techniques seems to depend on what problem you are solving and which techniques are available for speeding up the coarse propagator (coarsened grids, coarsened operators, coarsened physics, etc.).
Some references (see also references listed in the papers):
This paper demonstrates how various methods can be parallelised across the method: A theoretical comparison of high order explicit Runge-Kutta, extrapolation, and deferred correction methods; Ketcheson and Waheed.
This paper also shows a nice way of parallelizing across the method, and introduces the RIDC algorithm: Parallel high-order integrators; Christlieb, MacDonald, Ong.
This paper introduces the PITA algorithm: A Time-Parallel Implicit Method for Accelerating the Solution of Nonlinear Structural Dynamics Problems; Cortial and Farhat.
There are lots of papers on Parareal (just Google it).
Here is a paper on the Nievergelt method: A minimal communication approach to parallel time integration; Barker.
This paper introduces PFASST: Toward an efficient parallel in time method for partial differential equations; Emmett and Minion.
This paper describes a neat application of PFASST: A massively space-time parallel N-body solver; Speck, Ruprecht, Krause, Emmett, Minion, Windel, Gibbon.
I have written two implementations of PFASST that are available on the 'net: PyPFASST and libpfasst.
Correct answer by Matthew Emmett on April 13, 2021
Here is a short introduction to waveform relaxation. When talking about time-parallel methods like Parareal, PITA, or others, one should distinguish between dissipative and conservative (Hamiltonian) ODE systems. The latter seem more difficult to parallelize in the time dimension by partitioning into time sub-intervals. Here is an analysis of Parareal for Hamiltonian systems. Dissipative systems are easier because an error in the initial value $u_0$ tends to disappear due to the dissipation: $u(t) = e^{-\lambda t} u_0$, $\mathrm{Re}\,\lambda > 0$.
Answered by Hui Zhang on April 13, 2021
Although this post is now two years old, in case someone stumbles across it, let me give a brief update:
Martin Gander recently wrote a nice review article that gives a historical perspective on the field and discusses many different parallel-in-time (PinT) methods: http://www.unige.ch/~gander/Preprints/50YearsTimeParallel.pdf
There is now also a community website that lists many references and gives descriptions of the different methods: http://www.parallel-in-time.org/
A discussion of the Parareal parallel-in-time algorithm in particular can be found here: https://en.wikipedia.org/wiki/Parareal
Update: There is now a newer 2020 review paper by Ong and Schroder.
Answered by Daniel on April 13, 2021