Local Refinement and MPI
The following code is used to refine the mesh. Run in serial it works well and the result is correct; in parallel the mesh is refined correctly, but the result is nonsense:
cell_markers = CellFunction("bool", mesh)
cell_markers.set_all(False)
origin = Point(x_refine, y_refine)
for cell in cells(mesh):
    p = cell.midpoint()
    if p.distance(origin) < refine_criteria:
        cell_markers[cell] = True
refined_mesh = refine(mesh, cell_markers, True)
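As a sanity check, the distance-based marking rule itself can be exercised without DOLFIN; this is just a plain-Python sketch of the predicate used above, with made-up midpoints, origin, and radius:

```python
import math

def mark_cells(midpoints, origin, refine_criteria):
    """Return one boolean per cell: True if the cell's midpoint lies
    within refine_criteria of the refinement origin."""
    ox, oy = origin
    markers = []
    for (x, y) in midpoints:
        dist = math.hypot(x - ox, y - oy)
        markers.append(dist < refine_criteria)
    return markers

# Example: three cells, refine around (0, 0) with radius 0.5
print(mark_cells([(0.1, 0.1), (0.4, 0.4), (1.0, 1.0)], (0.0, 0.0), 0.5))
# [True, False, False]
```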
For each refined mesh, the function space (V), the functions u and u_n, and the problem are adapted, and the new problem is solved:
adapt(V, refined_mesh)
adapt(u_n, refined_mesh)
adapt(u, refined_mesh)
adapt(problem, refined_mesh)
V = V.child()
problem = problem.child()
u_n = u_n.child()
u = u.child()
solver = LinearVariationalSolver(problem.leaf_node())
solver.solve()
In addition, the following parameters are applied for mesh partitioning:
parameters["allow_extrapolation"] = True
parameters["partitioning_approach"] = "PARTITION"
parameters["mesh_partitioner"] = "ParMETIS"
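One more parameter may be worth checking (this is an assumption about the adapt() machinery, not something confirmed in the question): legacy DOLFIN's default refinement algorithm does not store the parent cell/facet relations that adapt() relies on, which has been reported to cause wrong results in parallel. Selecting the variant that keeps parent facets, set before any refinement takes place, might help:

```python
from dolfin import parameters

# Assumption: legacy DOLFIN. "plaza_with_parent_facets" keeps the
# parent relations that adapt() needs when the mesh is distributed.
parameters["refinement_algorithm"] = "plaza_with_parent_facets"
```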
This could be the result of the mesh not being repartitioned correctly in parallel (although the mesh shows the correct refinement), or perhaps of processes writing their results into other processes' part of the solution.
Is it at all possible to do local refinement in parallel like this, or should I implement it in another way?
I have searched for a solution but have not found one yet. I would really appreciate any help or hints.
Community: FEniCS Project