output something in parallel


Hello everyone,

I am running a damage simulation in which I have to extract the crack tip position, i.e. the coordinates of the nodes whose nodal phase field (stored as self.d) is larger than a threshold (damMax). Here is the code for this purpose, which works nicely on 1 processor:

def define_crack_tip(self):
    """
    Define the crack tip as the node whose phase field is larger than
    a threshold, say 0.8.
    """
    damMax  = self.parameters.post_processing.damMax
    indices = np.where(self.d.vector() > damMax)[0]
    dofx    = self.V_d.tabulate_dof_coordinates().reshape((-1, 2))

    if len(indices) > 0:
        coords = dofx[indices]        # coordinates of all nodes with damage > damMax
        xmax = np.amax(coords[:, 0])  # right-most x-coordinate of those nodes
        ymax = np.amax(coords[:, 1])  # top-most y-coordinate of those nodes

        self.xtip = xmax
        self.ytip = ymax
    else:
        self.xtip = self.parameters.post_processing.xtip0
        self.ytip = self.parameters.post_processing.ytip0

but it does not seem to work on multiple processors.

How can I make it work in parallel? I guess the reason is that np.where() and np.amax() above are executed separately on each sub-domain, but I have no idea how to fix it.

Thanks,
Phu
Community: FEniCS Project

2 Answers


You might try all-reducing the local maximums to get a global maximum.  Here is a minimal example of using MPI_Allreduce() from DOLFIN:

from dolfin import MPI, mpi_comm_world

mycomm = mpi_comm_world()
myrank = MPI.rank(mycomm)

# compute the maximum rank; substitute, e.g., xmax for myrank
globalMax = MPI.max(mycomm,myrank)

print(myrank,globalMax)
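
Run it with, for example, mpirun -n 4 python demo.py (substituting your own script name for demo.py); each rank then prints its own rank together with the global maximum over all ranks, which is 3 in this case.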

Thanks David, with your suggestion I edited my code to:

def define_crack_tip(self):
    """
    Define the crack tip as the node whose phase field is larger than
    a threshold, say 0.8.
    """
    mycomm = mpi_comm_world()

    damMax  = self.parameters.post_processing.damMax
    indices = np.where(self.d.vector() > damMax)[0]
    dofx    = self.V_d.tabulate_dof_coordinates().reshape((-1, 2))

    if len(indices) > 0:
        coords = dofx[indices]        # coordinates of all nodes with damage > damMax
        xmax = np.amax(coords[:, 0])  # right-most x-coordinate of those nodes
        ymax = np.amax(coords[:, 1])  # top-most y-coordinate of those nodes

        xMax = MPI.max(mycomm, xmax)
        yMax = MPI.max(mycomm, ymax)

        self.xtip = xMax
        self.ytip = yMax
    else:
        self.xtip = self.parameters.post_processing.xtip0
        self.ytip = self.parameters.post_processing.ytip0








But it does not work: what I obtained was an endless loop (the program hangs). Without that change (the two MPI.max lines), it runs as expected, even though the result is wrong. Any ideas why?

Phu

written 7 months ago by Phu Nguyen  
My guess is that the program gets stuck because only some tasks are making it inside the if statement.  If only some of the MPI tasks call max(), the program will hang while those tasks wait for the other tasks to call it.
written 7 months ago by David Kamensky  
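
A minimal sketch of that kind of hang (illustrative only): MPI.max() is a collective reduction, so every rank must reach the call; if it sits inside rank-dependent branching, the ranks that do call it wait forever for the ones that don't.

from dolfin import MPI, mpi_comm_world

mycomm = mpi_comm_world()
myrank = MPI.rank(mycomm)

if myrank == 0:
    # only rank 0 reaches this collective call, so when run on more than
    # one process the program hangs here waiting for the other ranks
    globalMax = MPI.max(mycomm, myrank)
    print(globalMax)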

Thanks David. Do you know how to resolve this issue? I changed my code to


def define_crack_tip(self):
    """
    Define the crack tip as the node whose phase field is larger than
    a threshold, say 0.8.
    """
    damMax  = self.parameters.post_processing.damMax
    indices = np.where(self.d.vector() > damMax)[0]
    dofx    = self.V_d.tabulate_dof_coordinates().reshape((-1, 2))

    if len(indices) == 0:
        self.xtip = self.parameters.post_processing.xtip0
        self.ytip = self.parameters.post_processing.ytip0
        return

    coords = dofx[indices]        # coordinates of all nodes with damage > damMax
    xmax = np.amax(coords[:, 0])  # right-most x-coordinate of those nodes
    ymax = np.amax(coords[:, 1])  # top-most y-coordinate of those nodes

    mycomm = mpi_comm_world()

    xMax = MPI.max(mycomm, xmax)
    yMax = MPI.max(mycomm, ymax)

    self.xtip = xMax
    self.ytip = yMax

which I thought should work, but it did not. I guess there is something wrong with xMax = MPI.max(mycomm, xmax)?

P

written 7 months ago by Phu Nguyen  
The updated code can still run into problems due to the return in the if block (if I'm guessing the indentation right), since the tasks that return at that point will never call MPI.max().  If any task doesn't reach the call to MPI.max(), the tasks that do will get stuck there, since it is a collective operation.  
written 7 months ago by David Kamensky  
Thanks a lot David. With your help, my code works perfectly now.
written 7 months ago by Phu Nguyen  
Assuming that the value self.parameters.post_processing.xtip0 is less than any x-coordinate in the domain, you could try replacing self.xtip with another variable, say, localXMax, inside of the if/else, then put "self.xtip = MPI.max(mycomm,localXMax)" after the if/else, so it will be called by all tasks.  (And likewise for y.)  
written 7 months ago by David Kamensky  
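
A sketch of that last suggestion applied to the method above (localXMax/localYMax are just illustrative names; it assumes xtip0 and ytip0 are lower bounds for any coordinate in the domain, so they are safe inputs to the max reduction):

def define_crack_tip(self):
    """
    Define the crack tip as the node whose phase field is larger than
    a threshold, say 0.8.
    """
    damMax  = self.parameters.post_processing.damMax
    indices = np.where(self.d.vector() > damMax)[0]
    dofx    = self.V_d.tabulate_dof_coordinates().reshape((-1, 2))

    if len(indices) > 0:
        coords    = dofx[indices]
        localXMax = np.amax(coords[:, 0])
        localYMax = np.amax(coords[:, 1])
    else:
        # fall back to the initial tip position on ranks with no damaged nodes
        localXMax = self.parameters.post_processing.xtip0
        localYMax = self.parameters.post_processing.ytip0

    # collective reductions, reached by every rank
    mycomm = mpi_comm_world()
    self.xtip = MPI.max(mycomm, localXMax)
    self.ytip = MPI.max(mycomm, localYMax)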
This discussion may give you some ideas: https://fenicsproject.org/qa/3025/solving-a-problem-in-parallel-with-mpi-&-petsc
Best