
What is an efficient way to code the Successive Over-Relaxation (SOR) method in Mathematica?

Mathematica Asked by Naitree on February 18, 2021

I have written code for the SOR (successive over-relaxation) method using C-style procedural loops.

However, I think there might be (much) better ways to achieve the same end in Mathematica that avoid these loops.

So my question is: is it feasible to code the SOR method using functions like Nest or Fold in a way that is substantially more efficient?

For example, the following is a simple linear system with a tridiagonal coefficient matrix (my actual equations will have a larger coefficient matrix, e.g. a 100*100 tridiagonal sparse array):

$-4x_1 + x_2 = 1$
$x_1 - 4x_2 + x_3 = 1$
$x_2 - 4x_3 + x_4 = 1$
$x_3 - 4x_4 + x_5 = 1$
$x_4 - 4x_5 = 1$

  1. Let listnew be the list that stores the unknowns $x_1,x_2,x_3,x_4,x_5$ after each iteration.
  2. And listold correspondingly stores the unknowns before each iteration.
  3. b stores the right-hand side of the system above.
  4. k is the iteration count.
  5. i is the index of $x_i$.
  6. ϵ is the pre-defined maximum error (tolerance).
  7. error is the maximum absolute difference between listnew and listold after each iteration.
  8. ω is the so-called relaxation factor.

Then, my (dumb) code using a For loop would be:

ϵ = 0.1;
ω = 1.5;
b = {1, 1, 1, 1, 1};
listnew = listold = Table[0, {i, 5}];
error = ϵ + 1;
For[k = 0, error > ϵ, k++,
    listnew[[1]] = listold[[1]] - ω (b[[1]] + 4 listold[[1]] - listold[[2]])/4;
    For[i = 2, i < 5, i++, 
        listnew[[i]] = listold[[i]] - 
                       ω (b[[i]] + 4 listold[[i]] - listnew[[i - 1]] - listold[[i + 1]])/4];
    listnew[[5]] = listold[[5]] - ω (b[[5]] + 4 listold[[5]] - listnew[[4]])/4;
    error = Max[Abs[listnew - listold]]; listold = listnew;]

I’m sorry if it looks so dumb…

2 Answers

You are asking for an "efficient" implementation. SOR is an iterative process: one does a step, checks whether it has converged, and if not, repeats the step. I found Mathematica to be very fast using plain old Do. I do not see the point of replacing a well-understood algorithm expressed naturally as Do[f[i, ...], {i, 1, n}] with something like f[#, ...] & /@ Range[n] or other variations, just for the sake of a syntax change.

But the reason I am writing is not this. I just thought your iterative implementation was not clear, so I will post an implementation of SOR based directly on the formula I am familiar with, as seen on this Mathematical Association of America page:

$$x_i^{(k+1)} = x_i^{(k)} + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\,x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij}\,x_j^{(k)}\right),\qquad i = 1,\dots,n$$

The above solves $Ax = b$ using SOR. Here $a_{ii}$ is the $i$-th diagonal entry of $A$, the superscript $(k+1)$ marks the new solution and $(k)$ the old one. The initial guess is the zero vector.

The loop above is implemented using Do, and the inner summations are implemented with the Sum command. The convergence test uses a relative error check. For additional speed, one can try to Compile this. As others have said, SOR is not used in practice to solve real problems. It is mainly of academic interest, since there are faster iterative methods these days, and if you have lots of RAM you can just use a direct solver, i.e. LinearSolve in Mathematica.

Here is an implementation of the above equation. You can change omega and, for a fixed tolerance, see the effect on how many iterations are needed to converge.


Manipulate[tick;
 Module[{keepRunning = True, i, j, xOld = Table[0, {5}], n = 5}, 
  While[keepRunning, 
   Do[x[[i]] = xOld[[i]] + w/mat[[i, i]] (b[[i]] - 
    Sum[mat[[i, j]] x[[j]], {j, 1, i - 1}] - Sum[mat[[i, j]] xOld[[j]], {j, i, n}])
    , {i, 1, n}
    ];
   If[Norm[x - xOld]/Norm[x] < e,
    keepRunning = False,
    xOld = x;
    k++
    ]
   ]
  ];

 Grid[{
   {Row[{"Iteration needed to converge ", k}], SpanFromLeft},
   {TableForm[{x, direct}, TableHeadings -> {{"SOR", "direct"}, 
     {"x1", "x2", "x3", "x4", "x5"}}], SpanFromLeft}},
  Spacings -> {1, 1}, Frame -> All], 

 Grid[{{Button["RUN", k = 0; x = Table[0, {5}]; tick = Not[tick]]}}],

 {{w, 1.5, "w"}, 1.1, 1.9, .01, Appearance -> "Labeled"},(*sor omega*)
 {{e, 0.0001, "error tolerance"}, 0.000000001, 0.001, 0.000000001, 
  Appearance -> "Labeled"},

 {{x, Table[0, {5}]}, None}, 
 {{k, 0}, None}, 
 {{tick, False}, None}, 
 TrackedSymbols :> {tick},

 Initialization :> (SetOptions[$FrontEndSession, PrintPrecision -> 16];
   mat = {{-4, 1, 0, 0, 0}, {1, -4, 1, 0, 0}, {0, 1, -4, 1, 0}, {0, 0, 1, -4, 1}, 
   {0, 0, 0, 1, -4}};
   b = Table[1, {5}];
   direct = N@LinearSolve[mat, b]
   )
 ]
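
If you only want the iteration itself, outside of the Manipulate scaffolding, a minimal stand-alone sketch of the same loop could look like this (the name sorSolve, the zero starting guess, and the returned {solution, iteration count} pair are choices made here for illustration, not part of the code above):

sorSolve[a_?MatrixQ, b_?VectorQ, w_, tol_] :=
 Module[{n = Length[b], x, xOld, k = 0},
  x = xOld = ConstantArray[0., n]; (* start from the zero guess *)
  While[True,
   Do[x[[i]] = xOld[[i]] + w/a[[i, i]] (b[[i]] -
       Sum[a[[i, j]] x[[j]], {j, 1, i - 1}] -
       Sum[a[[i, j]] xOld[[j]], {j, i, n}]), {i, 1, n}];
   If[Norm[x - xOld]/Norm[x] < tol, Break[], xOld = x; k++]];
  {x, k}]

For example, sorSolve[{{-4, 1, 0, 0, 0}, {1, -4, 1, 0, 0}, {0, 1, -4, 1, 0}, {0, 0, 1, -4, 1}, {0, 0, 0, 1, -4}}, {1, 1, 1, 1, 1}, 1.5, 10^-6] should agree with LinearSolve on the question's system.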

Correct answer by Nasser on February 18, 2021

Nasser has given a good answer; here are just some additions.

First, a literal translation of the first formula given in the Wikipedia article on SOR, $x^{(k+1)} = (D + \omega L)^{-1}\left(\omega b - [\omega U + (\omega - 1) D]\, x^{(k)}\right)$, where $A = D + L + U$ is split into its diagonal, strictly lower and strictly upper parts:

methodSOR1[a_, b_, ω_, ϕ_, ϵ_] :=
 Module[{d, l, u, coe1, coe2, coe3},
  (* split a into its diagonal, strictly lower and strictly upper parts *)
  {d, l, u} = N@{DiagonalMatrix@Diagonal@#, LowerTriangularize[#, -1], UpperTriangularize[#, 1]} &@a;
  coe1 = Inverse[d + ω l]; coe2 = ω b; coe3 = ω u + (ω - 1) d;
  (* iterate until successive solutions differ by less than ϵ *)
  FixedPoint[coe1.(coe2 - coe3.#) &, ϕ, SameTest -> (Max@Abs[#1 - #2] < ϵ &)]]
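
As a quick illustration, this can be applied to the 5×5 system from the question (the matrix and right-hand side are the ones given above; the tolerance 10^-6 is just an example value) and compared with the direct solution:

a = {{-4, 1, 0, 0, 0}, {1, -4, 1, 0, 0}, {0, 1, -4, 1, 0}, {0, 0, 1, -4, 1}, {0, 0, 0, 1, -4}};
b = {1, 1, 1, 1, 1};
methodSOR1[a, b, 1.5, ConstantArray[0., 5], 10^-6]
N@LinearSolve[a, b] (* direct solution for comparison *)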

Then, the implementation of the second formula (a compiled version). Notice that the Module inside the code is necessary for the speed-up, because it localizes the intermediate variables ϕ, ϕold and n inside the compiled function:

methodSOR2 = 
  Compile[{{a, _Real, 2}, {b, _Real, 1}, {ϕ0, _Real, 1}, {ω, _Real}, {ϵ, _Real}}, 
   Module[{ϕ = ϕ0, ϕold = ϕ0 + 1, n = Length@ϕ0}, 
    While[Max@Abs[ϕold - ϕ] > ϵ, ϕold = ϕ; 
     Do[ϕ[[i]] = (1 - ω) ϕ[[i]] + 
        ω/a[[i, i]] (b[[i]] - Sum[a[[i, j]] ϕ[[j]], {j, i - 1}] - 
           Sum[a[[i, j]] ϕ[[j]], {j, i + 1, n}]), {i, n}]]; ϕ]];
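
The compiled function takes its arguments in the order a, b, initial guess, ω, ϵ and expects machine reals, so a call on the same 5×5 example system would look like:

methodSOR2[N@a, N@b, ConstantArray[0., 5], 1.5, 1.*^-6]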

Finally, since you have already optimized the formula for your specific problem, I'd like to give an example of compiling your original code with minimal effort:

specificSOR = 
  With[{end = 5000}, 
   With[{ϵ = 0.1, ω = 1.5, b = ConstantArray[1., {end}], list = ConstantArray[0., {end}]},
    Compile[{}, 
     Module[{listold = list, listnew = list, error = ϵ + 1, k, i},
      For[k = 0, error > ϵ, k++, 
       listnew[[1]] = listold[[1]] - ω (b[[1]] + 4 listold[[1]] - listold[[2]])/4;
       For[i = 2, i < end, i++, 
        listnew[[i]] = listold[[i]] - 
                       ω (b[[i]] + 4 listold[[i]] - listnew[[i - 1]] - listold[[i + 1]])/4];
       listnew[[end]] = listold[[end]] - 
                        ω (b[[end]] + 4 listold[[end]] - listnew[[end - 1]])/4;
       error = Max[Abs[listnew - listold]]; listold = listnew]; 
      listnew]]]];
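
Calling it is then just specificSOR[], with no arguments, since the problem data were injected through With. A small usage sketch (keep in mind the stopping tolerance is the question's ϵ = 0.1):

sol = specificSOR[]; (* 5000-component solution vector *)
Short[sol]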

Of course, using Do and While instead of For would make the code look better.

Finally, it bears repeating that unless you're coding SOR as a programming exercise, just use LinearSolve for your equations.
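
For the 100*100 tridiagonal sparse case mentioned in the question, that could look like the following sketch (the Band specification is one convenient way to build the tridiagonal SparseArray; the entries -4 and 1 are chosen to match the question's system):

n = 100;
aSparse = SparseArray[{Band[{1, 1}] -> -4., Band[{1, 2}] -> 1., Band[{2, 1}] -> 1.}, {n, n}];
LinearSolve[aSparse, ConstantArray[1., n]]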

Answered by xzczd on February 18, 2021
