Assemble over submesh in parallel


Hi!

I am solving two PDEs. One is solved on the whole mesh, but the other is only solved on part of the mesh. I have created the mesh in serial using SubMesh and written it to an HDF5 file, like this:
from dolfin import *

parameters["form_compiler"]["representation"] = "quadrature"

mesh = RectangleMesh(mpi_comm_world(), Point(0.0, 0.0), Point(1.0, 1.0), 100, 100)

class Obstacle(SubDomain):
    def inside(self, x, on_boundary):
        return (between(x[1], (0.5, 0.7)) and between(x[0], (0.2, 1.0)))

obstacle = Obstacle()
subdomains = MeshFunction('size_t', mesh, 2)
subdomains.set_all(0)
obstacle.mark(subdomains, 1)

submesh = SubMesh(mesh, subdomains, 1)

# Write the full mesh, the submesh and the subdomain markers to HDF5
mesh_file = HDF5File(mpi_comm_self(), "mesh_test.h5", 'w')
mesh_file.write(mesh, '/mesh')
mesh_file.write(submesh, '/submesh')
mesh_file.write(subdomains, "/subdomains")
mesh_file.close()

Then I run my code:
from dolfin import *
import numpy as np

f = HDF5File(mpi_comm_world(), "mesh_test.h5", 'r')

# Read the full mesh, the submesh and the subdomain markers,
# distributed across all processes
mesh = Mesh(mpi_comm_world())
f.read(mesh, "/mesh", False)

submesh = Mesh(mpi_comm_world())
f.read(submesh, "/submesh", False)

subdomains = MeshFunction("size_t", mesh)
f.read(subdomains, "/subdomains")

dx = dx(domain=mesh, subdomain_data=subdomains)

# F lives on the whole mesh, Fsub only on the submesh
V = FunctionSpace(mesh, "CG", 1)
F = Function(V)
F.vector().set_local(np.ones(F.vector().get_local().size))
F.vector().apply('')

Vsub = FunctionSpace(submesh, "CG", 1)
Fsub = Function(Vsub)
Fsub.vector().set_local(0.5*np.ones(Fsub.vector().get_local().size))
Fsub.vector().apply('')

print(assemble(Fsub*F*dx(1)))

The problem is that this works fine in serial, but when I run it in parallel I get in trouble with assemble:
*** Error:   Unable to evaluate function at point.
*** Reason:  The point is not inside the domain. Consider calling "Function::set_allow_extrapolation(true)" on this Function to allow extrapolation.
*** Where:   This error was encountered inside Function.cpp.
*** Process: 1
*** 
*** DOLFIN version: 2017.2.0

I suspect this is because the DOFs of the two functions live on different processes when running in parallel.

In the real problem I need to solve, the solution of the PDE defined on the submesh is used as a source term for the PDE defined on the whole mesh, and it needs to run fast and in parallel.

Is there a nice way of solving this problem? MultiMesh? Defining both functions on the big mesh and somehow masking out the DOFs that are not needed when solving on the submesh? How? :)

Thanks for any pointers,
Søren
Community: FEniCS Project

1 Answer


Running in parallel requires setting what the error message suggests on your Functions, namely F and Fsub. Although MPI/parallel code is generally not yet well documented in FEniCS, similar questions and solutions can be found here:
https://www.allanswered.com/local/search/page/?q=set_allow_extrapolation

I have also encountered the need to set the "ghost_mode" parameter, but I am not sure when that is necessary or whether it is in your case. I suggest searching for this if it still doesn't work.
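As a minimal sketch of what that could look like for the assembly script above (whether ghost_mode is actually needed here, and which mode, is an assumption; it has to be set before the meshes are read):

from dolfin import *
import numpy as np

# "shared_facet" is an assumption; possibly not needed at all in this case
parameters["ghost_mode"] = "shared_facet"

f = HDF5File(mpi_comm_world(), "mesh_test.h5", 'r')

mesh = Mesh(mpi_comm_world())
f.read(mesh, "/mesh", False)

submesh = Mesh(mpi_comm_world())
f.read(submesh, "/submesh", False)

subdomains = MeshFunction("size_t", mesh)
f.read(subdomains, "/subdomains")

dx = dx(domain=mesh, subdomain_data=subdomains)

V = FunctionSpace(mesh, "CG", 1)
F = Function(V)
F.vector().set_local(np.ones(F.vector().get_local().size))
F.vector().apply('')

Vsub = FunctionSpace(submesh, "CG", 1)
Fsub = Function(Vsub)
Fsub.vector().set_local(0.5*np.ones(Fsub.vector().get_local().size))
Fsub.vector().apply('')

# Allow point evaluation outside the cells owned by the local process,
# as the error message suggests
F.set_allow_extrapolation(True)
Fsub.set_allow_extrapolation(True)

print(assemble(Fsub*F*dx(1)))
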
Thanks a lot! Just do what FEniCS says, so simple :-)
written 4 months ago by SMadsen  
I ran into some sort of bug or hidden feature... If I change the mesh, the program stalls with "allow_extrapolation". This seems to be due to the BoundingBoxTree used in the extrapolation, since it starts working again if I explicitly call

self.mesh.bounding_box_tree().build(mesh)

on my meshes. Is the bounding box tree cached somewhere?
written 4 months ago by SMadsen  
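For reference, a minimal sketch of that workaround (the mesh name and the coordinate change are just illustrative assumptions); the tree appears to be built lazily and cached on the Mesh object, so it has to be rebuilt explicitly after the mesh changes:

from dolfin import *

mesh = UnitSquareMesh(10, 10)
tree = mesh.bounding_box_tree()  # built lazily and cached on the mesh

# The mesh geometry changes here (illustrative example: scale the coordinates)
mesh.coordinates()[:] *= 2.0

# Force a rebuild of the cached tree so point evaluation sees the new geometry
mesh.bounding_box_tree().build(mesh)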
