Using MPI for a 3D cell problem with periodic boundary conditions


I'm trying to solve cell problems for diffusion on a periodic cube with two subdomains determined from imported data (with constant diffusion coefficients within each subdomain). It works fine in serial, but when I run in parallel using mpirun -n np python script.py (with np > 1), I get incorrect results that change with the number of cores used.

After some checks, I believe that the diffusion coefficients are being applied properly in both serial and parallel (even though the method presented here is slow). Since each of the solutions $u_1$, $u_2$ and $u_3$ changes with the number of cores used, I'm thinking that there is something wrong with the mesh partition.

Here is a condensed version of my code:

from dolfin import *
import numpy as np
import scipy.io as io
from mpi4py import MPI as nMPI

#------------------------------------------------------------------------------------------------------------
#CLASSES

# Class for Periodic boundary condition
class PeriodicBoundary(SubDomain):

    def inside(self, x, on_boundary):
        # Return True if on the left (x[0] = 0), bottom (x[1] = 0) or front (x[2] = 0)
        # face boundary, AND NOT on the points (0, 1, 1), (1, 1, 0) or (1, 0, 1),
        # nor on the edges (x = 1, z = 0: excluding (1, 0, 0)),
        # (y = 0, z = 1: excluding (0, 0, 1)) and (x = 1, y = 0: excluding (1, 0, 0)).
        return bool((near(x[0], 0) or near(x[1], 0) or near(x[2],0)) and
                    (not(
                    (near(x[0], 0) and near(x[1], 1) and near(x[2], 1)) or
                    (near(x[0], 1) and near(x[1], 1) and near(x[2], 0)) or
                    (near(x[0], 1) and near(x[1], 0) and near(x[2], 1)) or
                    ((near(x[0], 1) and near(x[2], 0)) and not (near(x[0], 1) and near(x[1], 0) and near(x[2], 0))) or
                    ((near(x[1], 0) and near(x[2], 1)) and not (near(x[0], 0) and near(x[1], 0) and near(x[2], 1))) or
                    ((near(x[0], 1) and near(x[1], 0)) and not (near(x[0], 1) and near(x[1], 0) and near(x[2], 0)))
                     )) and
                    on_boundary)

    def map(self, x, y):
        if near(x[0], 1) and near(x[1], 1) and near(x[2], 1):
           y[0] = x[0] - 1
           y[1] = x[1] - 1
           y[2] = x[2] - 1
        elif near(x[0], 1) and near(x[1], 1) and near(x[2], 0):
            y[0] = x[0] - 1
            y[1] = x[1] - 1
            y[2] = x[2] + 1
        elif near(x[0], 0) and near(x[1], 1) and near(x[2], 1):
            y[0] = x[0] + 1
            y[1] = x[1] - 1
            y[2] = x[2] - 1
        elif near(x[0], 1) and near(x[1], 0) and near(x[2], 1):
            y[0] = x[0] - 1
            y[1] = x[1] + 1
            y[2] = x[2] - 1
        elif near(x[0], 1) and near (x[1], 0):
            y[0] = x[0] - 1
            y[1] = x[1] + 1
            y[2] = x[2]
        elif near(x[0], 1) and near (x[1], 1):
            y[0] = x[0] - 1
            y[1] = x[1] - 1
            y[2] = x[2]
        elif near(x[0], 1) and near (x[2], 0):
            y[0] = x[0] - 1
            y[1] = x[1]
            y[2] = x[2] + 1
        elif near(x[0], 1) and near (x[2], 1):
            y[0] = x[0] - 1
            y[1] = x[1]
            y[2] = x[2] - 1
        elif near(x[1], 0) and near (x[2], 1):
            y[0] = x[0]
            y[1] = x[1] + 1
            y[2] = x[2] - 1
        elif near(x[1], 1) and near (x[2], 1):
            y[0] = x[0]
            y[1] = x[1] - 1
            y[2] = x[2] - 1
        elif near(x[0], 1):
            y[0] = x[0] - 1
            y[1] = x[1]
            y[2] = x[2]
        elif near(x[1], 1):
            y[0] = x[0]
            y[1] = x[1] - 1
            y[2] = x[2]
        elif near(x[2], 1):
            y[0] = x[0]
            y[1] = x[1]
            y[2] = x[2] - 1
        else:
            y[0] = -1000
            y[1] = -1000
            y[2] = -1000

#--------------------------------------------------------------------------------------------------------
#CREATE MESH

mesh = UnitCubeMesh(4, 4, 4)

# Define variational problem
V = FunctionSpace(mesh, "CG", 1, constrained_domain=PeriodicBoundary())
dx = Measure("dx")

V0 = FunctionSpace(mesh, 'DG', 0)
D = Function(V0)

D_values = [1, 100]  # diffusion values in the subdomains

vec = D.vector()
values = vec.get_local()

dofmap = V0.dofmap()
first, last = dofmap.ownership_range()

mat_file = io.loadmat('tiny_test.mat')
subdomain_vec = mat_file['List_FENICS']
subdomain_vec = np.array(subdomain_vec, dtype = 'float_')
subdomain_vec = subdomain_vec[0]

visited = []

for cell in cells(mesh):
    dofs = dofmap.cell_dofs(cell.index())                   # local dof indices
    for dof in dofs:
        if dof not in visited:
            global_dof = dofmap.local_to_global_index(dof)  # global dof index
            if first <= global_dof < last:                  # ownership_range() is half-open: [first, last)
                visited.append(dof)
                if subdomain_vec[global_dof] == 0.0:
                    values[dof] = D_values[0]  # set values in the first subdomain
                else:
                    values[dof] = D_values[1]  # set values in the second subdomain

vec.set_local(values) #Set diffusion coefficients within this process
vec.apply('insert')

#SOLVE INDIVIDUAL CELL PROBLEMS

#X DIRECTION (I've omitted the y and z direction for the sake of brevity)
#Define LHS and RHS

u_1 = TrialFunction(V)
v = TestFunction(V)

a_x = D*(inner(nabla_grad(u_1), nabla_grad(v)))*dx()
L_x = - D*(nabla_grad(v)[0])*dx()

u_1 = Function(V)
solve(a_x == L_x, u_1)

#COMPUTE EFFECTIVE DIFFUSION TENSOR

#These values change with the number of processes
print(assemble(u_1*dx()))

D_11 = D*((Dx(u_1,0) + 1))
D_12 = D*(Dx(u_1,1))
D_13 = D*(Dx(u_1,2))

print("Results")

print(assemble(D_11*dx()))
print(assemble(D_12*dx()))
print(assemble(D_13*dx()))


Could the issue be with the periodic boundary in parallel? Once the mesh is partitioned, it will be trying to set values to be equal on opposite sides of the cube that are no longer necessarily contained in the same partition. I'm not receiving an error though, just incorrect solutions. Any help would be greatly appreciated.
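
For reference, this is the kind of check I mean (a rough sketch reusing nMPI, D, u_1 and dx from the script above): both integrals are global quantities and should not depend on the partitioning, so if the first one already changes with the number of processes, the problem is in the coefficient assignment rather than in the periodic constraint.

# Sketch of a partition-independence check (reuses nMPI, D, u_1 and dx from the script above).
coeff_check = assemble(D*dx())   # depends only on how the coefficients were assigned
sol_check = assemble(u_1*dx())   # depends on the full periodic solve

if nMPI.COMM_WORLD.rank == 0:
    print("int D  dx = %g" % coeff_check)
    print("int u1 dx = %g" % sol_check)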

I've added the illustrative mat file "tiny_test.mat" to Dropbox at https://www.dropbox.com/s/6gsn73lpkaehv27/tiny_test.mat?dl=0, which you'd need to run the script.

Community: FEniCS Project
I'm not sure if this is the correct answer, but periodic boundaries are not compatible with DG at the moment.

https://bitbucket.org/fenics-project/dolfin/issues/558/periodic-boundary-conditions-for-dg
written 4 months ago by Lukas O.  
It appears the DG space is just being used to represent the cell-wise constant material coefficients. That shouldn't be a problem.
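
A rough sketch to make that concrete (assuming legacy DOLFIN and the PeriodicBoundary class from the question): the constrained_domain argument only merges dofs in the CG solution space, while the DG0 coefficient space is built without it and keeps one dof per cell.

from dolfin import *

# PeriodicBoundary is the class defined in the question above.
mesh = UnitCubeMesh(4, 4, 4)
V_plain = FunctionSpace(mesh, "CG", 1)   # one dof per vertex, unconstrained
V_per = FunctionSpace(mesh, "CG", 1, constrained_domain=PeriodicBoundary())
V0 = FunctionSpace(mesh, "DG", 0)        # one dof per cell, never constrained

# The periodic space has fewer dofs (slave dofs are merged into their masters);
# V0.dim() simply equals the global number of cells.
print("%d %d %d" % (V_plain.dim(), V_per.dim(), V0.dim()))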
written 4 months ago by Nate  
@Lee:  I am having an issue with periodic BC in parallel also.  See:
https://www.allanswered.com/post/xnqop/mpi-segmentation-fault-with-c-demo-periodic/

Are you able to run the "Periodic" demo in Python with MPI and periodic BCs (yours or any simple one)?
In C++, the code crashes at the definition of the FunctionSpace using a periodic BC object, so I'm not sure what the problem is there or whether it is related to your issue. JW
written 4 months ago by jwinkle  

No, I haven't got anything working with MPI and periodic BCs currently.

In my case, I can run the above code without error, but in parallel it gives incorrect results. As in your question, I get a segmentation fault when I try to run the code with

parameters['ghost_mode'] = 'shared_vertex'

It fails at the definition of the FunctionSpace using the periodic BCs, as yours does. I have yet to find an explanation for why this is, but I will keep you updated if I do.
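
One thing worth noting in case it matters: the ghost mode has to be set before the mesh is created, since the ghost layer is built when the mesh is partitioned. This is the ordering I mean (a minimal sketch, with PeriodicBoundary as defined in the question):

from dolfin import *

# Ghost mode is applied when the mesh is partitioned, so it must be
# set before the mesh is constructed.
parameters["ghost_mode"] = "shared_vertex"

mesh = UnitCubeMesh(4, 4, 4)
# This is the line that segfaults for me once ghost mode is enabled
# (PeriodicBoundary as defined in the question above).
V = FunctionSpace(mesh, "CG", 1, constrained_domain=PeriodicBoundary())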

written 4 months ago by Lee Curtin  

It turns out that I did have code that works in parallel in 3D; I'd just not tried it in parallel before because I hadn't needed to. I haven't needed ghost mode at all to get it working.

The periodic BCs I presented here work fine in parallel. My problem was with the dof mapping changing between serial and parallel runs, which I'm writing my own code to fix since I couldn't find an appropriate built-in function.
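
For anyone who runs into the same thing, one partition-independent alternative is to assign the DG0 values by cell midpoint rather than by dof index, since a cell's midpoint does not change with the partitioning. Below is a rough sketch (not my actual fix) that assumes the imported array is laid out on a regular nx x ny x nz voxel grid over the unit cube with x varying fastest; the index formula would have to match however List_FENICS was actually generated.

# Sketch of a midpoint-based assignment (reuses mesh, subdomain_vec and D_values
# from the script in the question; the voxel layout below is an assumption).
nx = ny = nz = 4  # hypothetical resolution of the imported voxel data

V0 = FunctionSpace(mesh, "DG", 0)
D = Function(V0)
dofmap = V0.dofmap()
values = D.vector().get_local()

for cell in cells(mesh):
    p = cell.midpoint()
    i = min(int(p.x()*nx), nx - 1)
    j = min(int(p.y()*ny), ny - 1)
    k = min(int(p.z()*nz), nz - 1)
    voxel = i + nx*(j + ny*k)  # assumed ordering of the imported data
    local_dof = dofmap.cell_dofs(cell.index())[0]  # single DG0 dof per cell
    if subdomain_vec[voxel] == 0.0:
        values[local_dof] = D_values[0]
    else:
        values[local_dof] = D_values[1]

D.vector().set_local(values)
D.vector().apply("insert")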

written 3 months ago by Lee Curtin  
In this case, were there any changes made to your code so that it no longer crashes with a seg fault? You seem to hint that it was working all along, so I'm unclear which code crashed before and, if it doesn't now, what was changed. --JW
written 3 months ago by jwinkle  
My code only crashed with a segmentation fault when I used ghost mode. The code I presented in this question didn't crash, but rather gave incorrect results in parallel. I thought the issue was with the periodic BCs in parallel, but instead it was with my dof mapping.

I have only ever seen a seg fault due to ghost mode. What error do you get if you run your code without setting ghost mode?
written 3 months ago by Lee Curtin  