Mesh distribution issue


Dear all,

I'm running into a problem when I try to go parallel: in short, the code is executed once on each process (it is not being parallelized). This example shows the issue:

from dolfin import *

# MPI parameters
comm = mpi_comm_world()
rank = MPI.rank(comm)

print('rank: '+str(rank))
print('comm: '+str(comm))

mesh = UnitSquareMesh(10, 10)
plot(mesh, interactive=True)


The output for the above code is:

rank: 0
comm: <petsc4py.PETSc.Comm object at 0x7f6cf43cc7e8>
rank: 0
comm: <petsc4py.PETSc.Comm object at 0x7f41d00ea7e8>
rank: 0
comm: <petsc4py.PETSc.Comm object at 0x7f91e8110878>
rank: 0
comm: <petsc4py.PETSc.Comm object at 0x7f0d601a07e8>


Can anyone confirm and explain this behavior? At the moment I'm using dolfin 2017.1 on a machine running Linux Mint.
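
In case it helps, here is a minimal sketch of the check I have in mind (assuming dolfin's MPI wrapper exposes the communicator size the same way it exposes the rank):

from dolfin import *

comm = mpi_comm_world()

# In a genuine parallel run, size should equal the number passed to mpirun -n;
# if each process is running its own serial copy, every rank reports size 1.
print('rank {} of {}'.format(MPI.rank(comm), MPI.size(comm)))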

 

Regards!

I tried to run your code on Linux Mint using

mpirun -n 4 python code.py

and in both 1.6.0 and 2016.2 it seemed to work as expected, with typical output like

rank: 2
comm: <petsc4py.PETSc.Comm object at 0x7f4340410c10>
rank: 0
comm: <petsc4py.PETSc.Comm object at 0x7f3711a42c10>
rank: 1
comm: <petsc4py.PETSc.Comm object at 0x7ff9b060cc10>
rank: 3
comm: <petsc4py.PETSc.Comm object at 0x7f7459960c10>
written 11 months ago by Adam Janecka  
Thanks for your comment. A few minutes ago I checked this code on my laptop (with Linux Mint and dolfin 2017.1) and it works fine. It seems that the problem is specific to my desktop machine.
written 11 months ago by Hernán Mella  

1 Answer


Completely removing and reinstalling the mpich and openmpi libraries solved my problem.
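
For anyone who hits the same symptom: I suspect (though I have not verified it) that mpirun came from one MPI implementation while dolfin was built against the other, so each launched process initialized its own MPI_COMM_WORLD of size 1. After the reinstall, a quick sketch to verify that the mesh is really being distributed (the script name is just for illustration):

# check_mpi.py -- run with: mpirun -n 4 python check_mpi.py
from dolfin import *

comm = mpi_comm_world()
rank = MPI.rank(comm)
size = MPI.size(comm)  # should equal 4 after the fix, not 1

# With a working MPI, the 200 cells of the 10x10 mesh are split across
# the ranks, so each process reports only its local share.
mesh = UnitSquareMesh(10, 10)
print('rank {}/{}: {} local cells'.format(rank, size, mesh.num_cells()))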
