### How to run multiple iterations of a problem in parallel, distributed?

I am trying to calibrate a set of parameters using gradient descent. Every few iterations, I want to evaluate a Jacobian, which requires five additional runs of the model (one per parameter; I have 5 parameters). I would also like every model evaluation to be distributed.

For example:

I make my initial guess, and run the model, split into 4 sub-domains using "mpirun -np 4 python MyModel.py 1 1 1 1 1"

I then need to do the following five runs to calculate a Jacobian:

"mpirun -np 4 MyModel.py 1.01 1 1 1 1"

"mpirun -np 4 MyModel.py 1 1.01 1 1 1"

"mpirun -np 4 MyModel.py 1 1 1.01 1 1"

"mpirun -np 4 MyModel.py 1 1 1 1.01 1"

"mpirun -np 4 MyModel.py 1 1 1 1 1.01"

Instead of doing that, and having to write to a file at each Jacobian evaluation, I have written code that encapsulates the model and returns a vector of my quantity of interest.

An example of what I'd like to implement is below. I would like to be able to split up the for-loop in the Jacobian function so that each run is done in parallel, while each mesh is partitioned into 4 parts. I have access to the 20 cores required to do this on HPC.
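To make the finite-difference step concrete, here is a toy version of that Jacobian loop on a two-parameter model (the `toy_model` below is invented purely for illustration; it is not my FEniCS model):

```python
import numpy as np

def toy_model(params):
    # Stand-in for the real model: returns a small vector of "QOI" values
    p = np.asarray(params, dtype=float)
    return np.array([p @ p, p.sum()])

def jacobian(func, params, step=0.01):
    """Forward-difference Jacobian: one extra model run per parameter."""
    base = func(params)
    cols = []
    for i in range(len(params)):
        perturbed = np.array(params, dtype=float)  # copy before perturbing
        perturbed[i] += step
        cols.append((func(perturbed) - base) / step)
    return np.column_stack(cols)  # shape (len(QOI), n_params)

J = jacobian(toy_model, [1.0, 2.0])
# analytic Jacobian is [[2, 4], [1, 1]]; forward difference gives ~[[2.01, 4.01], [1, 1]]
```

Each pass of that loop is independent of the others, which is what makes the five runs embarrassingly parallel.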


```
from fenics import *
from mshr import *
import numpy as np

def MyModel(params):
    p1, p2, p3, p4, p5 = params
    mesh = Mesh('MyMesh.xml')
    # run the FEM using FEniCS
    # calculate the Quantity of Interest (QOI)
    return QOI

def Jacobian(func, current_params):
    J = []
    base = func(current_params)  # unperturbed QOI for the forward difference
    for p in range(len(current_params)):
        params_J = np.array(current_params, dtype=float)  # copy: do not modify current_params in place
        params_J[p] += 0.01
        J.append(func(params_J) - base)
    J = np.asarray(J) / 0.01  # row p holds dQOI/dp
    return J

def UpdateParams(params, err, J):
    # linear algebra to determine new parameters
    return NewParams

def CalibrateModel(data, initial_guess):
    initial_model = MyModel(initial_guess)
    err = np.sum(np.power(data - initial_model, 2))
    tol = 1.0e-6
    params = initial_guess
    while err > tol:
        J = Jacobian(MyModel, params)
        params = UpdateParams(params, err, J)
        err = np.sum(np.power(data - MyModel(params), 2))  # update error
    return params
```
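One way to get that layout on 20 ranks (a sketch only: it assumes mpi4py is available and that `MyModel` can be changed to build its mesh on a given communicator, which dolfin's `Mesh` constructor supports) is to split `MPI.COMM_WORLD` into five 4-rank sub-communicators with `Split`:

```python
# Sketch: launch with "mpirun -np 20 python calibrate.py".
# ranks_per_model and the MyModel signature are assumptions, not tested code.

def run_color(rank, ranks_per_model=4):
    """Map a global rank to the index of the perturbed run it works on."""
    return rank // ranks_per_model

def jacobian_runs():
    from mpi4py import MPI  # deferred import so run_color is usable without MPI
    world = MPI.COMM_WORLD                 # 20 ranks
    color = run_color(world.rank)          # five groups: colors 0..4
    sub = world.Split(color, world.rank)   # one 4-rank communicator per run

    params = [1.0] * 5
    params[color] += 0.01                  # each group perturbs a different parameter
    # QOI = MyModel(params, comm=sub)      # MyModel would partition its mesh over `sub`
    # world.gather(QOI, root=0)            # assemble the Jacobian on world rank 0
```

Each four-rank group then behaves exactly like a standalone `mpirun -np 4` run of the model.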

Community: FEniCS Project

### 1 Answer


Hi Ryan,

Your question is closely related to https://www.allanswered.com/post/vnkjo/running-multiple-solve-in-parallel-using-mpi/ and https://fenicsproject.org/qa/3772/spawning-independent-dolfin-processes-with-mpi4py/

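Judging by their titles, the first thread splits a communicator into groups, while the second spawns independent solver processes from a controlling script. A rough sketch of the spawning route (illustrative only; `perturbed_args` and `spawn_all` are invented here, and `MyModel.py` must read its parameters from `sys.argv` as in your command-line examples):

```python
# Illustrative sketch (not from the linked posts): build the five perturbed
# parameter sets, then spawn one 4-rank MyModel.py run per set via mpi4py.

def perturbed_args(base, step=0.01):
    """Command-line argument lists for the five Jacobian runs."""
    runs = []
    for i in range(len(base)):
        p = list(base)
        p[i] += step
        runs.append([str(x) for x in p])
    return runs

def spawn_all(base=(1.0,) * 5):
    from mpi4py import MPI  # deferred so perturbed_args works without MPI
    for argv in perturbed_args(list(base)):
        # Each Spawn starts an independent 4-rank MyModel.py instance
        MPI.COMM_SELF.Spawn("python", args=["MyModel.py"] + argv, maxprocs=4)
```

The spawned runs would still need to report their QOI vectors back (e.g. over the intercommunicator that `Spawn` returns) for the Jacobian assembly.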
