Custom (global) mesh refinements in MPI
My current project requires a barycenter-refined mesh, i.e. for every triangle, I compute the barycenter (the average of its three vertices), then create three new triangles from the original. I have an ad-hoc tool that computes this refinement. It's a simple script that reads in a triangular mesh from FEniCS and uses that information to write a new XML file for the barycenter-refined mesh; I then just open the new mesh from the XML file. This works fine for simple simulations. However, I run into problems when I try to run simulations in parallel using MPI.
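For concreteness, the refinement my tool performs amounts to the following (a minimal numpy sketch of the idea; `barycentric_refine` and the array layout are my naming, not the actual script):

```python
import numpy as np

def barycentric_refine(points, cells):
    """Split each triangle into three by connecting its barycenter
    to the original vertices.

    points: (n, 2) float array of vertex coordinates
    cells:  (m, 3) int array of triangles (vertex indices)
    """
    bary = points[cells].mean(axis=1)         # (m, 2) barycenters
    new_points = np.vstack([points, bary])    # barycenters appended after old vertices
    b = np.arange(len(cells)) + len(points)   # indices of the new barycenter vertices
    v0, v1, v2 = cells[:, 0], cells[:, 1], cells[:, 2]
    new_cells = np.vstack([
        np.column_stack([v0, v1, b]),         # each sub-triangle keeps the
        np.column_stack([v1, v2, b]),         # parent's orientation
        np.column_stack([v2, v0, b]),
    ])
    return new_points, new_cells
```

One triangle becomes three, and each refined mesh has the original n vertices plus one new vertex per original cell.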
If I naively run my script under MPI, I get this error:
Parallel XML mesh output is not supported. Use HDF5 format instead.
My first idea for a workaround was to simply "pre-refine" my mesh before running my code under MPI. However, that won't work because, odds are, FEniCS will draw the subdomain boundaries along "new" edges (edges that terminate at one of the barycenters), which breaks things for unimportant mathematical reasons. Thus, my only option is to refine my mesh after it has been partitioned into subdomains.
My question is as follows: does it make more sense to create a subprocess script that uses my tool with the HDF5 format? (This would work as a wrapper: each MPI process would save its subdomain's mesh as an HDF5 file; a subprocess would open it, refine it using the XML method, and save it to a new HDF5 file, which the original process would then open.) I ask because this is the only obvious path forward that I see.
There's no way this is ideal.
I would love to simply build a new mesh object from two numpy arrays (one array for the mesh nodes, and a second for the triangles), but I've yet to see an obvious (or documented) way to do this.
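For what it's worth, the closest thing I've found is legacy dolfin's MeshEditor, though I don't know how (or whether) it plays with MPI-distributed meshes. A sketch of what I mean, assuming the legacy dolfin API (`mesh_from_arrays` is my own name, and the `MeshEditor.open()` signature has varied between dolfin versions):

```python
import numpy as np

def mesh_from_arrays(points, cells):
    """Build a dolfin Mesh from an (n, 2) vertex array and an (m, 3)
    triangle array. Sketch only -- assumes legacy FEniCS (dolfin)."""
    from dolfin import Mesh, MeshEditor
    mesh = Mesh()
    editor = MeshEditor()
    # cell type, topological dimension, geometric dimension
    editor.open(mesh, "triangle", 2, 2)
    editor.init_vertices(points.shape[0])
    editor.init_cells(cells.shape[0])
    for i, p in enumerate(points):
        editor.add_vertex(i, p)
    for i, c in enumerate(cells):
        editor.add_cell(i, c)
    editor.close()
    return mesh
```

Whether a mesh built this way on each process would be treated as that process's local partition, or whether dolfin would try to re-partition it, is exactly the part I'm unsure about.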
Any advice would be greatly appreciated!
Thank you all!
Community: FEniCS Project