Different results with or without using MPI for assembly


Hello,
I use the code given at the end of this message to test the accuracy and scaling of FEniCS.
The code takes a mesh filename as an argument; I create the mesh separately with

from dolfin import *
f = File("uc_20x20x20.xml")
m = UnitCubeMesh(20,20,20)
f << m

The problem is that the L2 norm of the error is computed as 0.00407236658149
when run on a single core, but if I run mpirun -n 2 python testcode.py uc_20x20x20.xml I get nan
(the same happens with higher numbers of processes).

I tried this with a conda installation as well as with the FEniCS Ubuntu package.

Is the accuracy of FEniCS for parallel runs tested in some repository?


# --------------------------------
from dolfin import *
import sys
import math
from mpi4py import MPI

comm = MPI.COMM_WORLD

mesh = Mesh(sys.argv[1])

V = FunctionSpace(mesh, "Lagrange", 1)

def boundary(x, on_boundary):
    return on_boundary

# Define boundary condition
u0 = Constant(0.0)
bc = DirichletBC(V, u0, boundary)

# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Expression("3*pi*pi*sin(pi*x[0])*sin(pi*x[1])*sin(pi*x[2])", pi=math.pi, degree=2)

soln_exact = Expression("sin(pi*x[0])*sin(pi*x[1])*sin(pi*x[2])", pi=math.pi, degree=2)

a = inner(grad(u), grad(v))*dx
L = f*v*dx

u = Function(V)

A = PETScMatrix()
b = PETScVector()
x = PETScVector()

solver = KrylovSolver(A, "gmres")

comm.Barrier()
tic = MPI.Wtime()
A = assemble(a)
toc = MPI.Wtime()
time_assembleA = toc - tic

comm.Barrier()
tic = MPI.Wtime()
b = assemble(L)
toc = MPI.Wtime()
time_assembleb = toc - tic

comm.Barrier()
tic = MPI.Wtime()
bc.apply(A, b)
toc = MPI.Wtime()
time_assemblebc = toc - tic
solution=Function(V, x)

comm.Barrier()
tic = MPI.Wtime()
solver.solve(A,x,b)
toc = MPI.Wtime()
time_solve = toc - tic

L2_error = math.sqrt(assemble((soln_exact-solution)*(soln_exact-solution)*dx))
print ('{} {} {} {} {}'.format( L2_error, time_assembleA, time_assembleb, time_assemblebc, time_solve))
Community: FEniCS Project

2 Answers


If I avoid using the Function constructor with a GenericVector argument, it works fine for me:

comm.Barrier()
tic = MPI.Wtime()
bc.apply(A, b)
toc = MPI.Wtime()
time_assemblebc = toc - tic

###############################
#solution=Function(V, x)
solution=Function(V)
###############################

comm.Barrier()
tic = MPI.Wtime()

##################################
#solver.solve(A,x,b)
solver.solve(A,solution.vector(),b)
##################################

toc = MPI.Wtime()
time_solve = toc - tic


The C++ API documentation here

https://fenicsproject.org/olddocs/dolfin//2017.2.0/cpp/programmers-reference/function/Function.html

says "Warning: This constructor is intended for internal library use only", so there may be some tricky assumptions that must be satisfied. 

Thank you very much,
now it works :-)