
Parallelisation strategies for mixed FE formulations

Computational Science Asked by Chenna K on January 16, 2021

Mixed FE formulations with LBB-stable elements require two different meshes for the primary and the constraint variables, for example, displacement and pressure. With a continuous approximation for the pressure field, I am finding it difficult to parallelise this for distributed-memory architectures.

I am interested in learning some commonly employed parallelisation strategies for such problems. I would very much appreciate any useful resources on this topic.

Note that I use the PETSc library for solving the matrix system in my C++ code.

2 Answers

It's a misunderstanding that you need two different meshes: the proper way to see things is that you are using the same mesh, but different polynomial spaces for the two variables. For example, for the Stokes equation, you'd have quadratic polynomials for the velocity $\mathbf{u}$ and linear polynomials for the pressure $p$.
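For concreteness, the discrete problem is then the standard saddle-point system: find $(\mathbf{u}_h, p_h) \in V_h \times Q_h$ such that

$$
\begin{aligned}
\int_\Omega \nabla\mathbf{u}_h : \nabla\mathbf{v}\,dx \;-\; \int_\Omega p_h\,(\nabla\cdot\mathbf{v})\,dx &= \int_\Omega \mathbf{f}\cdot\mathbf{v}\,dx && \forall\,\mathbf{v}\in V_h,\\
-\int_\Omega q\,(\nabla\cdot\mathbf{u}_h)\,dx &= 0 && \forall\,q\in Q_h,
\end{aligned}
$$

where, for example, the Taylor-Hood pair takes $V_h$ as continuous piecewise quadratics and $Q_h$ as continuous piecewise linears, both defined on the same mesh.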

Appropriate parallelization strategies are then to partition the mesh among processors. This also induces a partitioning of degrees of freedom, and consequently of those rows of the matrix (and vector elements) each processor stores. It's really no different than if you had a scalar problem.
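As a minimal sketch of what this looks like with PETSc (assuming a recent PETSc version; n_local_u and n_local_p are hypothetical placeholders for the numbers of velocity and pressure DOFs owned by this rank after the mesh has been partitioned):

    #include <petscvec.h>
    #include <petscmat.h>

    // Each rank owns one contiguous block of rows; the block simply contains
    // both the velocity and the pressure DOFs living on the locally owned cells.
    PetscErrorCode create_mixed_system(MPI_Comm comm,
                                       PetscInt n_local_u, /* locally owned velocity DOFs */
                                       PetscInt n_local_p, /* locally owned pressure DOFs */
                                       Mat *A, Vec *x)
    {
      PetscInt n_local = n_local_u + n_local_p; /* rows/entries owned by this rank */

      PetscCall(MatCreate(comm, A));
      PetscCall(MatSetSizes(*A, n_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE));
      PetscCall(MatSetFromOptions(*A));
      PetscCall(MatSetUp(*A));

      PetscCall(VecCreate(comm, x));
      PetscCall(VecSetSizes(*x, n_local, PETSC_DETERMINE));
      PetscCall(VecSetFromOptions(*x));
      return PETSC_SUCCESS;
    }

Assembly and communication then proceed exactly as for a scalar problem; if you later want a field-aware preconditioner such as PCFIELDSPLIT, you only need to tell PETSc which rows belong to $\mathbf{u}$ and which to $p$.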

Answered by Wolfgang Bangerth on January 16, 2021

You do not want to have two or more different meshes that are partitioned differently: that would create massive communication. Degrees of freedom from the different fields should be as close as possible to each other in the adjacency graph, to keep interprocessor communication to a minimum.

You have one mesh, but DOFs are associated with different entities. For example, for an H1 space with piecewise-linear continuous approximation, the DOFs are on vertices, whereas for an L2 space with piecewise-linear discontinuous approximation, the DOFs are on cells. That is the simple case; for vectorial spaces like H-div or H-curl, things are a bit more complicated. For a hierarchical space, for example, you can have DOFs on vertices, edges, faces, and cells.
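As an illustration of attaching DOFs to entities rather than to a separate mesh per field (the type and function names here are hypothetical, just a sketch of the idea):

    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    enum class EntityType { Vertex, Edge, Face, Cell };

    struct DofMap {
      // (entity type, entity id) -> global DOF indices attached to that entity
      std::map<std::pair<EntityType, std::int64_t>, std::vector<std::int64_t>> dofs;
    };

    // Continuous P1 field: one DOF on every vertex. A piecewise-constant
    // discontinuous field would instead put its DOFs on the cells.
    void number_p1_field(DofMap &dof_map, std::int64_t n_vertices, std::int64_t first_dof)
    {
      for (std::int64_t v = 0; v < n_vertices; ++v)
        dof_map.dofs[{EntityType::Vertex, v}] = {first_dof + v};
    }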

So you partition the cells. Sub-entities, i.e. vertices, edges and faces, on the skin of a partition are shared between partitions. DOFs on shared entities are typically owned by the partition with the lower rank; on the other partitions, they are so-called ghost DOFs. You can create a special vector with ghost DOFs; PETSc provides such vectors.
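A minimal sketch with PETSc's ghosted vectors (assuming a recent PETSc; the ghost index array is whatever your DOF numbering assigns to the shared entities owned by other ranks):

    #include <petscvec.h>

    // n_owned  : number of DOFs owned by this rank
    // ghosts[] : global indices of DOFs owned by other ranks but needed locally,
    //            i.e. the DOFs sitting on the shared skin entities
    PetscErrorCode create_ghosted_vec(MPI_Comm comm, PetscInt n_owned,
                                      PetscInt n_ghost, const PetscInt ghosts[],
                                      Vec *v)
    {
      PetscCall(VecCreateGhost(comm, n_owned, PETSC_DETERMINE, n_ghost, ghosts, v));
      return PETSC_SUCCESS;
    }

After updating the owned entries, VecGhostUpdateBegin/VecGhostUpdateEnd refresh the ghost copies, and VecGhostGetLocalForm gives a local vector containing the owned entries followed by the ghosts.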

To partition the cells, you need to build a graph; then you can use METIS or ParMETIS to partition it. There are many strategies for how to partition the graph, and the graph itself can be built in different ways. One way is to number the cells and build an adjacency matrix by finding neighbouring cells through a bridge entity. The bridge entity can be a vertex, an edge or a face. For classical FEM you would use a vertex as the bridge entity. For an H-div/L2 formulation the bridge entity should be a face, since for the H-div space the DOFs are on faces (and volumes). When you are using an H-curl space, the bridge entity will be an edge. For discontinuous Petrov-Galerkin, the bridge entity will also be a face, since the DOFs are on the skeleton.
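A minimal sketch of building the cell-to-cell graph through a vertex bridge entity (plain C++; the cell-to-vertex connectivity is a hypothetical input, and the resulting CSR arrays can be handed to METIS after conversion to its idx_t type):

    #include <cstdint>
    #include <set>
    #include <unordered_map>
    #include <vector>

    // cells[c] holds the vertex ids of cell c. Two cells are adjacent if they
    // share at least one vertex (vertex bridge entity); for an H-div/L2 or DPG
    // formulation you would key on faces instead.
    void build_cell_graph(const std::vector<std::vector<std::int64_t>> &cells,
                          std::vector<std::int64_t> &xadj,   // CSR row pointers
                          std::vector<std::int64_t> &adjncy) // CSR column indices
    {
      std::unordered_map<std::int64_t, std::vector<std::int64_t>> vertex_to_cells;
      for (std::size_t c = 0; c < cells.size(); ++c)
        for (auto v : cells[c])
          vertex_to_cells[v].push_back(static_cast<std::int64_t>(c));

      xadj.assign(1, 0);
      adjncy.clear();
      for (std::size_t c = 0; c < cells.size(); ++c) {
        std::set<std::int64_t> nbrs;              // unique neighbours of cell c
        for (auto v : cells[c])
          for (auto other : vertex_to_cells[v])
            if (other != static_cast<std::int64_t>(c))
              nbrs.insert(other);
        adjncy.insert(adjncy.end(), nbrs.begin(), nbrs.end());
        xadj.push_back(static_cast<std::int64_t>(adjncy.size()));
      }
    }

To use a face as the bridge entity instead, key the map on a canonical face identifier (for example the sorted vertex ids of the face) rather than on single vertices.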

Moreover, each cell can have a weight if you have a heterogeneous order of approximation. That is needed for load balancing, to distribute the work equally among processors.
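For example, with METIS 5.x the per-cell weights go into the vwgt argument of METIS_PartGraphKway (the weight here is a hypothetical cost estimate, e.g. the number of DOFs on the cell):

    #include <metis.h>
    #include <vector>

    // Partition the cell graph (CSR arrays as built above) into nparts parts,
    // weighting each cell by its approximate cost.
    std::vector<idx_t> partition_cells(std::vector<idx_t> &xadj,
                                       std::vector<idx_t> &adjncy,
                                       std::vector<idx_t> &cell_weights,
                                       idx_t nparts)
    {
      idx_t nvtxs = static_cast<idx_t>(xadj.size()) - 1;
      idx_t ncon  = 1;                              // one balancing constraint
      idx_t objval;
      std::vector<idx_t> part(nvtxs);               // part[c] = rank owning cell c
      METIS_PartGraphKway(&nvtxs, &ncon, xadj.data(), adjncy.data(),
                          cell_weights.data(), nullptr, nullptr,
                          &nparts, nullptr, nullptr, nullptr,
                          &objval, part.data());
      return part;
    }

PETSc also wraps this machinery in its MatPartitioning interface if you prefer to stay within one library.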

In the end, there are many solutions, many strategies.

But why do it all yourself? I can point you to the FEM code, which does it all for you.

Answered by likask on January 16, 2021
