Running multiple solves in parallel using MPI
Asked 6 months ago
After some support (https://fenicsproject.org/qa/3772/spawning-independent-dolfin-processes-with-mpi4py/), I was able to use the following code to force each process to operate only on its local problem (and not partition the mesh):
import dolfin as do
from petsc4py import PETSc
from mpi4py import MPI

comm = PETSc.Comm(MPI.COMM_SELF)
mesh = do.UnitSquareMesh(comm, 10, 10)

This solution doesn't work any more. At least on my machine I get the error:

~/.emacs.d/.python-environments/tm3/lib/python3.5/site-packages/dolfin/cpp/mesh.py in __init__(self, *args)
   6385
   6386         """
-> 6387         _mesh.UnitSquareMesh_swiginit(self, _mesh.new_UnitSquareMesh(*args))
   6388     __swig_destroy__ = _mesh.delete_UnitSquareMesh
   6389 UnitSquareMesh_swigregister = _mesh.UnitSquareMesh_swigregister

TypeError: (size_t) expected positive 'int' for argument 1
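For reference, the default constructor call without an explicit communicator is not an option here, since it distributes the mesh across the ranks, which is exactly what I am trying to avoid:

import dolfin as do

# Default behaviour: the mesh is created on the world communicator and
# partitioned across all ranks when the script is run under mpirun.
mesh = do.UnitSquareMesh(10, 10)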
Did something change in the mesh generation lately?
Also, right now I actually need to use the do.Mesh constructor and load an XML mesh instead of using do.UnitSquareMesh, i.e. I would like to do:
import dolfin as do
from petsc4py import PETSc
from mpi4py import MPI

comm = PETSc.Comm(MPI.COMM_SELF)
mesh = do.Mesh(comm, 'mesh.xml')
Any idea? Thanks!
Answered 6 months ago:
On 2017.2.0 (the stable Docker image): in any case, it is better to use FEniCS's own methods to get the MPI communicator. You won't run into this type of problem then; what you are seeing is a discrepancy between MPI wrapped as an mpi4py communicator and as a petsc4py communicator.
2017.2.0 and earlier use mpi_comm_self() (and mpi_comm_world()); the development pybind11 version uses MPI.comm_self instead. The two return different Python objects, but their use should be consistent throughout FEniCS (also for Mesh creation).
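For example, a minimal sketch of what that looks like in 2017.2.0 (assuming the file mesh.xml from the question exists in the working directory):

import dolfin as do

# Self communicator obtained through DOLFIN itself (2017.2.0 and earlier),
# so each MPI rank builds/loads its own complete, unpartitioned mesh.
comm = do.mpi_comm_self()

mesh1 = do.UnitSquareMesh(comm, 10, 10)   # built-in mesh, one full copy per rank
mesh2 = do.Mesh(comm, 'mesh.xml')         # XML mesh, one full copy per rank

Running this with e.g. mpirun -np 4 python script.py gives every rank its own complete mesh, so each rank can carry out an independent solve.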