Is it possible to reduce the RAM required for computation?


210 views · asked 5 months ago
I'm trying to solve the Laplace equation in an electrical stimulation problem.
An example of the code I use is as follows:

from fenics import *

mesh = UnitCubeMesh(8, 9, 10)
V = FunctionSpace(mesh, 'P', 1)

u_Left = Constant(1)
def boundary_Left(x, on_boundary):
    tol = 1e-14
    return on_boundary and near(x[0], 0, tol)
bc_L = DirichletBC(V, u_Left, boundary_Left)

u_Right = Constant(0)
def boundary_Right(x, on_boundary):
    tol = 1e-14
    return on_boundary and near(x[0], 1, tol)
bc_R = DirichletBC(V, u_Right, boundary_Right)

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(0)
conductivity = Constant(1)
a = conductivity*dot(grad(u), grad(v))*dx
L = f*v*dx

u = Function(V)
solve(a == L, u, [bc_L, bc_R])

I'm solving the problem in 3D; the UnitCubeMesh size is 200 x 220 x 100.
In the real problem I set the conductivity variable using an Expression (Section 4.3.1, page 88, of https://fenicsproject.org/pub/tutorial/pdf/fenics-tutorial-vol1.pdf).
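For illustration, a simplified sketch of such a piecewise conductivity, in the spirit of that tutorial section (the values and the split along x[0] here are placeholders, not my real data):

conductivity = Expression('x[0] <= 0.5 + tol ? sigma_0 : sigma_1',
                          degree=0, tol=1e-14, sigma_0=1.0, sigma_1=0.01)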
By running a smaller task and extrapolating, I estimate that this task needs about 128 GB of RAM. Is it possible to reduce the amount of RAM required, for example by using an iterative solver or in some other way?

Community: FEniCS Project
Have you tried with an iterative solver?
written 5 months ago by Eleonora Piersanti  

1 Answer


answered 5 months ago
list_krylov_solver_methods()

shows you the available iterative solvers.

list_krylov_solver_preconditioners()

shows you the available preconditioners.

You can apply them, e.g., by calling

solve(a == L, u, [bc_L, bc_R],
      solver_parameters={'linear_solver': 'gmres', 'preconditioner': 'ilu'})
I tried several combinations of solver and preconditioner. Most of them did not converge for my task. The combination that works is gmres with petsc_amg; a working example is shown in the code below:

from fenics import *

# Mesh
mesh = UnitCubeMesh(100, 110, 50)
V = FunctionSpace(mesh, 'P', 1)

# Boundary conditions
u_Left = Constant(1)
def boundary_Left(x, on_boundary):
    tol = 1e-14
    return on_boundary and near(x[0], 0, tol)
bc_L = DirichletBC(V, u_Left, boundary_Left)

u_Right = Constant(0)
def boundary_Right(x, on_boundary):
    tol = 1e-14
    return on_boundary and near(x[0], 1, tol)
bc_R = DirichletBC(V, u_Right, boundary_Right)

# Problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(0)
conductivity = Constant(1)
a = conductivity*dot(grad(u), grad(v))*dx
L = f*v*dx
u = Function(V)

# Solver configuration
problem = LinearVariationalProblem(a, L, u, [bc_L, bc_R])
solver = LinearVariationalSolver(problem)
solver.parameters.linear_solver = 'gmres'
solver.parameters.preconditioner = 'petsc_amg'
prm = solver.parameters.krylov_solver  # short form
prm.monitor_convergence = True

# Solving
solver.solve()
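
If needed, the Krylov tolerances and iteration limit can be tuned through the same parameter set, before calling solver.solve() (these are standard krylov_solver parameters; the values below are only examples):

prm.relative_tolerance = 1e-6   # stop once the residual has dropped by a factor of 1e6
prm.absolute_tolerance = 1e-10  # absolute residual threshold
prm.maximum_iterations = 1000   # abort if not converged by then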

The memory consumption of the direct solver is about 26 KB per voxel, while the iterative solver needs only about 2 KB per voxel: the 200 x 220 x 100 task (about 4.4 million voxels) then requires about 8.6 GB of RAM instead of the 128 GB estimated for the direct solver. The computation time of the iterative solver is approximately the same as the fastest runs of the direct solver.

When I run the script with mpirun -np 2 ./code.py, the computation freezes. This is not a big problem for me, but what is the reason for these freezes?

written 5 months ago by Mikhail  
