Run Code in parallel


294 views · asked 7 months ago by hirshikesh
I am trying to run my code in parallel, for which I have used the command: mpirun -np 2 python myexample.py


But the code simply runs twice. I have read a few documents where the same problem is described. How can I resolve this issue?

Thanks for the help. 
from dolfin import *

mesh = Mesh('./Mesh/CT.xml')
materials = MeshFunction('size_t', mesh, './Mesh/CT_physical_region.xml')

class K(Expression):
    def __init__(self, materials, k_0, k_1, **kwargs):
        self.materials = materials
        self.k_0 = k_0
        self.k_1 = k_1

    def eval_cell(self, values, x, cell):
        if self.materials[cell.index] == 3 or self.materials[cell.index] == 4:
            values[0] = self.k_0
        else:
            values[0] = self.k_1
#-------------------------------------------------------
# Space
#-------------------------------------------------------
V = VectorFunctionSpace(mesh,'CG',1)
Q = FunctionSpace(mesh,'CG',1)

u , v = TrialFunction(V), TestFunction(V)
E = Function(Q)
nu = Function(Q)
Gc = Function(Q) 
EG = 80.90e9
EH = 64.44e9
nuG = 0.43
nuH = 0.3
K1 = 40e6   # MPa*sqrt(m)
K2 = 0.74e6 # MPa*sqrt(m)
def fac_Gc(nu):
    return (1-nu**2)
Gc_G = (K1**2)*fac_Gc(nuG)/EG
Gc_H = (K2**2)*fac_Gc(nuH)/EH

E.interpolate(K(materials,EH,EG, degree = 1))
Gc.interpolate(K(materials,Gc_H, Gc_G, degree = 1 ))
nu.interpolate(K(materials,nuH,nuG, degree = 1))
lmbda = Expression('E*nu/((1.0 + nu )*(1.0-2.0*nu))',E = E, nu = nu, degree = 1 )
mu = Expression(' E/(2*(1+nu))', E = E, nu = nu, degree  = 1)



def eps(u):
    return sym(grad(u))

def sigma(u):
    return lmbda*(tr(eps(u)))*Identity(2) +2*mu*eps(u)


class left(SubDomain):
    def inside(self,x,on_boundary):
        tol = 1e-10
        #return abs(x[1]) < tol and on_boundary
        return (x[0]-6.5)**2 + (x[1]-8.45)**2 - 3.25**2 <= tol and on_boundary

class right(SubDomain):
    def inside(self,x,on_boundary):
        tol = 1e-10
        #return abs(x[1]-1.20) < tol and on_boundary
        return (x[0]-6.5)**2 + (x[1]-22.75)**2 - 3.25**2 <= tol and on_boundary

  
#--------------------------------------------------------
#    Boundary Conditions
#-------------------------------------------------------

Left = left()
Right = right()

fix_b_bottom_x =  DirichletBC(V,Constant((0.0,0.0)),Left)
disp_top = DirichletBC(V.sub(1),Constant(0.1),Right) 
bc_disp = [fix_b_bottom_x,  disp_top ]

E_du = inner(grad(v),sigma(u))*dx  
u = Function(V)
problem_disp = LinearVariationalProblem(lhs(E_du), rhs(E_du), u, bc_disp )
solver_disp = LinearVariationalSolver(problem_disp)
solver_disp.solve()
u1,u2 = split(u)
plot(u, mode='displacement')
plot(u1, interactive=True)
plot(u2, interactive=True)
Community: FEniCS Project
xml is not a suitable format for parallel input. You should try to use HDF5 instead (see here)
written 7 months ago by Hernán Mella  
Thanks for the help. I have now used this:
# conversion, run once in serial:
hdf = HDF5File(mesh.mpi_comm(), "file.h5", "w")
hdf.write(mesh, "/mesh")
hdf.write(materials, "/subdomains")

# then, in the parallel run:
mesh = Mesh()
hdf = HDF5File(mesh.mpi_comm(), "file.h5", "r")
hdf.read(mesh, "/mesh", False)
subdomains = CellFunction("size_t", mesh)
hdf.read(subdomains, "/subdomains")

But the code still runs twice.

Note: dolfin-version 1.6

written 7 months ago by hirshikesh  
1
In the past I have experienced this behavior. My solution was to remove FEniCS completely and then reinstall it (maybe someone else can help you with a more elegant solution).
written 7 months ago by Hernán Mella  
I love this answer; it is truly the best option sometimes.  I have bash scripts set up and a USB drive with me at all times in case I need to format and reinstall my entire Ubuntu OS.  It is better this way.
written 7 months ago by pf4d  
Thank you, I will do this.
written 7 months ago by hirshikesh  
Thanks, with the new installation it works ... :)
written 7 months ago by hirshikesh  

I do believe that the mpi command with FEniCS will always spawn multiple processes. The question is: do they each solve the problem separately, or do they solve a single problem together? Did you check whether the runtime is halved?

I had a similar problem myself, where each process simply solved the whole problem without any communication. I was working on a cluster, though. For me the trick was to explicitly 'load' a specific MPI module in the shell script. I know that is awfully specific to my own situation, but maybe it gives someone an idea.
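One quick way to tell the two cases apart (a sketch, not from this thread): have every process print its rank, e.g. with print(MPI.rank(mesh.mpi_comm())) in dolfin 1.6, and compare what the processes report. In a healthy N-process run the ranks are 0..N-1; if every copy reports rank 0, the processes are independent copies. The helper name below is hypothetical:

```python
def is_truly_parallel(ranks):
    """ranks: the rank each process of one mpirun reported.
    A real parallel run yields one distinct rank per process;
    N independent copies all report rank 0."""
    return sorted(ranks) == list(range(len(ranks)))

print(is_truly_parallel([0, 1]))  # healthy 2-process run -> True
print(is_truly_parallel([0, 0]))  # two independent copies -> False
```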

written 7 months ago by Stein Stoter  
Same with this code: it still runs twice, without any communication.
written 7 months ago by hirshikesh  
I had similar problems in the past. This was because I had compiled FEniCS with MPICH, whereas for execution I tried to use OpenMPI. This is also described in: https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_All_my_processes_get_rank_0 . Can you try to run the program with a different version of MPI? (On Ubuntu you can do this, for instance, by specifying mpirun.openmpi or mpirun.mpich.)
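As a rough diagnostic for this kind of mismatch (a sketch; OMPI_COMM_WORLD_RANK and PMI_RANK are the environment variables commonly exported by Open MPI's mpirun and MPICH's Hydra launcher, respectively, and may differ between versions):

```python
import os

def detect_mpi_launcher(env):
    """Guess which MPI launcher started this process from its environment.
    If mpirun comes from a different MPI than the one FEniCS was compiled
    against, the library will not see these variables and every process
    falls back to rank 0."""
    if "OMPI_COMM_WORLD_RANK" in env:    # exported by Open MPI's mpirun
        return "openmpi", int(env["OMPI_COMM_WORLD_RANK"])
    if "PMI_RANK" in env:                # exported by MPICH's Hydra launcher
        return "mpich", int(env["PMI_RANK"])
    return "none", 0                     # launched without (a known) mpirun

print(detect_mpi_launcher(os.environ))
```

Running this under each mpirun variant shows which implementation is actually launching the processes.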
written 6 months ago by SQ  