Method to solve a linear, first-order ODE of generalized matrix-matrix form

Computational Science Asked by BlueLemon on October 23, 2021

The equation and its meaning:

Consider two sets $(A_i)_{i=0,\dots,m_a}$, $(B_i)_{i=0,\dots,m_b}$ of Hermitian matrices and a set of positive semidefinite matrices $(C_i)_{i=0,\dots,m_c}$. Each matrix has dimension $n \times n$. These form the ODE:
$$
y^\prime(t) = \sum_{i=0}^{m_a} \left(A_i y + y A_i^\dagger\right) + \sum_{i=0}^{m_b} \left(B_i y - y B_i^\dagger\right) + \sum_{i=0}^{m_c} C_i y C_i^\dagger. \qquad (*)
$$
Note that $y(0)$ is a Hermitian $n \times n$ matrix, too.

Equations like $(*)$ describe the norm-preserving evolution of a linear operator via the anti-commutator (1st sum), the commutator (2nd sum), and a "basis transform" (3rd sum). Equations like this occur often in quantum mechanics; see "master equations" for more information.

The problem with vectorization of the matrix-matrix form:

The most common ODE solvers operate on vectors. So suppose $x(t)$ is the vector that stacks the columns of $y(t)$; then, with the Kronecker product $\otimes$, the ODE becomes:
$$
x^\prime(t) = \left(\sum_{i=0}^{m_a} \left(1 \otimes A_i + A_i^* \otimes 1\right) + \sum_{i=0}^{m_b} \left(1 \otimes B_i - B_i^* \otimes 1\right) + \sum_{i=0}^{m_c} C_i^* \otimes C_i\right) x \equiv X x(t).
$$
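This form follows from the standard column-stacking identity $\operatorname{vec}(MYN) = (N^T \otimes M)\operatorname{vec}(Y)$; for the third sum, for instance,
$$
\operatorname{vec}(C_i y C_i^\dagger) = \left((C_i^\dagger)^T \otimes C_i\right)\operatorname{vec}(y) = \left(C_i^* \otimes C_i\right)\operatorname{vec}(y).
$$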
But $X$ is a full $n^2 \times n^2$ matrix! Note that the Jacobian coincides with $X$. So each product $Xx(t)$ costs $O(n^4)$ operations, asymptotically worse than the $O(n^3)$ matrix-matrix products in $(*)$. Thus, I'm looking for an algorithm to solve the ODE in the matrix form $(*)$ for $y(t_{i=0,\dots,m_t})$.

A few additional properties

  • The eigenvalues of the matrices may vary over several orders of magnitude,

  • The dimension $n$ is never larger than $10^3$; usually it is between $10^1$ and $10^2$,

  • The matrices $A_i, B_i, C_i$ may individually be sparse, but their sum, i.e. $X$, is not sparse,

  • The eigenvalues can be negative,

  • Stiffness and time-dependent scaling of the matrices, i.e. $A_j \to \alpha_j(t) A_j$, can occur.

  • Precision should be at least $10^{-5}$, but more than $10^{-8}$ is never needed.

What I’ve tried

I know ODEPACK, but its solvers rely on the Jacobian, which is unnecessarily big here. RK4 fails except for very, very small step sizes. So, are there solvers that are easy to adapt to this problem, preferably in Fortran?

Related:

Algorithms for linear system of ODEs

How to choose a method for solving linear equations

One Answer

There are a few ways you can do this. Even though RK4 fails unless step sizes are really small, one thing that happens a lot is that a model starts out stiffer than it really is over the full timespan. Thus I would still try something adaptive like MATLAB's ode45 or Julia's Tsit5() before ruling out non-stiff solvers. If that fails, then I would try something made for large equations like Sundials. You can directly call Sundials from something like DifferentialEquations.jl and use the options for linear solver choices and see if anything does well enough. If nothing does well enough, then you need to get fancy.
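As a minimal sketch of that first step, assuming placeholder data (the matrix lists A, B, C, the dimension n, and the tolerances are all illustrative, not from the question): DifferentialEquations.jl accepts matrix-valued states, so $(*)$ can be integrated directly in matrix form without vectorizing by hand.

    using OrdinaryDiffEq, LinearAlgebra

    # Placeholder problem data: one matrix per sum and n = 4, just for illustration.
    n = 4
    herm(n) = (M = randn(ComplexF64, n, n); (M + M') / 2)  # random Hermitian matrix
    A = [herm(n)]                  # anti-commutator terms
    B = [herm(n)]                  # commutator terms
    C = [randn(ComplexF64, n, n)]  # "basis-transform" terms

    # Right-hand side of (*), evaluated with O(n^3) matrix-matrix products;
    # the n^2 × n^2 matrix X is never formed.
    function rhs!(dy, y, p, t)
        fill!(dy, zero(eltype(dy)))
        for Ai in A; dy .+= Ai*y .+ y*Ai'; end
        for Bi in B; dy .+= Bi*y .- y*Bi'; end
        for Ci in C; dy .+= Ci*y*Ci';      end
        return nothing
    end

    y0 = Matrix{ComplexF64}(I, n, n) / n  # example Hermitian initial condition
    prob = ODEProblem(rhs!, y0, (0.0, 1.0))
    sol = solve(prob, Tsit5(); reltol = 1e-5, abstol = 1e-8)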

Then there are three options. The problem I see here is that you want to avoid vectorization / building the matrix and instead want to do something that is matrix-free. The most standard option is then option 1: use a matrix-free representation of your operator in a Krylov subspace method. So take any stiff solver: BDF, Rosenbrock, SDIRK, etc. What you want to do is re-define the linear solver that it uses: not a factorization method, but an iterative solver like GMRES, applied without constructing the matrices. In Julia, you can do this by defining A = LinearMap{T}(f) from LinearMaps.jl, where f defines the matrix-vector product (your equation $(*)$).
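A minimal sketch of that operator, reusing rhs! and the placeholder data from the sketch above (the in-place function-map constructor with ismutating = true is my reading of the LinearMaps.jl API):

    using LinearMaps

    # The action x ↦ X*x computed through the matrix form (*): view the
    # vector as an n×n matrix, apply rhs!, and view the result as a vector.
    function Xmul!(dx, x)
        rhs!(reshape(dx, n, n), reshape(x, n, n), nothing, 0.0)
        return dx
    end

    Xmap = LinearMap{ComplexF64}(Xmul!, n^2; ismutating = true)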

Now you can use A in IterativeSolvers.jl (or any library which allows abstract types) to solve Ax=b through your $(*)$ without the slow path of forming X. Great! We're almost there. Now we have to get it working like this inside the ODE solver. The trick is that the LinearMap type knows how to compose linear maps, so if you write I - gamma*A (I is the identity-matrix type in Julia, and gamma here is a scalar), it automatically generates another LinearMap which can also be used in an iterative solver. This is the fact we want to exploit.
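For instance (a sketch; gmres is IterativeSolvers.jl's solver, and the composed map stays matrix-free):

    using IterativeSolvers, LinearAlgebra

    γ = 0.01          # example scalar coming from the implicit method
    W = I - γ * Xmap  # lazily composed LinearMap; W is never materialized
    b = randn(ComplexF64, n^2)
    w = gmres(W, b)   # solves W*w = b using only products with W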

I am sad to say that right now you don't have full control over the internal Jacobian typing, so a quick modification to the algorithms would be required. But let's say you want to use the 2nd-order adaptive Rosenbrock method. Then you'd go to the code which defines the implicit system and make it so that the Rosenbrock W is simply

W = I - γ*A

(you'll need to do scoping to define A in there, etc. etc. but that's standard stuff). Just see Shampine's MATLAB ODE Suite paper which describes this algorithm and it's pretty clear exactly why this is the needed change. Then use the linear solver specification to tell it to use that matrix-free GMRES to solve the implicit equations. And there you go, that's an adaptive stiff solver you can use your equation * to solve all of the implicit equations.

In a few months I plan on releasing an update to DifferentialEquations.jl's function definitions which will allow you to explicitly define W, so that Option 1 can be done without modifying the code. But for now, that's what you've got.

Option 2 would be to implement the full method yourself and make all of the linear solves matrix-free. The second-order Rosenbrock method really isn't difficult, and Shampine explains it well, so an implementation where you replace the linear solver would be pretty quick to make; the stages are sketched below.
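For reference, this is ode23s from Shampine and Reichelt's MATLAB ODE Suite paper. Written for the autonomous problem $y^\prime = f(y)$ with $W = I - h d J$ and $d = 1/(2+\sqrt{2})$, each application of $W^{-1}$ is exactly where the matrix-free GMRES solve goes:
$$
k_1 = W^{-1} f(y_n), \qquad
k_2 = k_1 + W^{-1}\left(f(y_n + \tfrac{h}{2} k_1) - k_1\right), \qquad
y_{n+1} = y_n + h k_2,
$$
with a third stage of the same form used only for the embedded error estimate.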

Option 3 would be to use some form of exponential integrator. There are specialized "Strang splitting" methods which can handle this type of equation well, but I don't know of any implementations. We're building one into DifferentialEquations.jl right now, so you can see some of our discussions and use those, plus Hairer's geometric integration book, to piece together how this is done and implement one yourself before us. It'll be a lot easier since you can make it specifically for your set of operators instead of making it generic. But the hard part is that you'll want to code a fast path for computing expmv, i.e. w = exp(A*dt)*v using a matrix-free A.
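A sketch of that last piece, assuming the Krylov-based expv routine from SciML's ExponentialUtilities.jl (the Krylov dimension m is a tuning assumption):

    using ExponentialUtilities

    dt = 0.01
    v = randn(ComplexF64, n^2)

    # Krylov approximation of w = exp(X*dt)*v using only matrix-vector
    # products with the matrix-free Xmap defined above.
    w = expv(dt, Xmap, v; m = 30)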

Answered by Chris Rackauckas on October 23, 2021
