Which solver should I use for large meshes? UMFPACK out of memory, mumps "Problem with integer stack size 1 1 9"


asked 4 months ago by Sa Wu

I would like to know which direct solver (i.e. which solver_params to pass to solve) I can reliably use for large meshes in a problem with heterogeneous coefficients.

Currently, I have PETSc built with 32-bit indices (required for MUMPS) using its own configure script, and the dolfin components from git.

I seem to be running into memory problems; the rest of this post merely documents them.

 

When using larger meshes, e.g.

resolution = 12

in the following abridged code sample, without heterogeneities, on my 16 GB RAM workstation (PETSc/MUMPS with 32-bit indices, dolfin and mshr built from git/source), I seem to run into what I interpret as memory problems.

Lower resolutions seem to run fine.

The code

from dolfin import *
from mshr import *
import numpy as np
import matplotlib.pyplot as plt

solver_params = ('mumps',)
resolution = 12

domain = Rectangle(Point(-1,-1),Point(1,1))-Rectangle(Point(0,0),Point(1,1))
near_corner = AutoSubDomain(lambda xx, on: np.sqrt(xx@xx) < 0.1)
nearest_corner = AutoSubDomain(lambda xx, on: np.sqrt(xx@xx) < 0.05)
boundary = AutoSubDomain(lambda xx, on: on)
mesh = generate_mesh(domain, int(2**resolution))
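# Mark and refine twice around the re-entrant corner at the origin (radius 0.1, then 0.05).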
mf = CellFunction('bool', mesh, False)
near_corner.mark(mf, True)
mesh = refine(mesh, mf)
mf = CellFunction('bool', mesh, False)
nearest_corner.mark(mf, True)
mesh = refine(mesh, mf)

def compute(space,aa_form,LL_form,bc_form):
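    """Assemble the variational problem on the global mesh and solve it with solver_params."""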
    VV = space(mesh)
    test = TestFunction(VV)
    trial = TrialFunction(VV)
    aa = aa_form(test, trial)
    LL = LL_form(test)
    bc = bc_form(VV)
    uu = Function(VV,name='u')
    AA,bb = assemble_system(aa,LL,bc)
    solve(AA, uu.vector(), bb, *solver_params)
    return uu

space = lambda mesh: VectorFunctionSpace(mesh, 'CG', 1)
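# Lamé parameters of linear elasticity, nondimensionalised with E = 1.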
poisson = 0.3
ll = poisson/((1.+poisson)*(1.-2.*poisson))
mu = 1/(2.*(1.+poisson))
aa_form = lambda test, trial: inner(2*mu*sym(grad(trial))+ll*tr(sym(grad(trial)))*Identity(2),sym(grad(test)))*dx
LL_form = lambda test: inner(Constant((0,-6)), test)*dx
bc_form = lambda VV: DirichletBC(VV, Constant((0,0)), boundary)
uu = compute(space, aa_form, LL_form, bc_form)
File('elasticity_{:d}.pvd'.format(resolution)) << uu

yields

Traceback (most recent call last):
...
    solve(AA, uu.vector(), bb, *solver_params)
  File "/home/wu/.local/lib/python3.6/site-packages/dolfin/fem/solving.py", line 310, in solve
    return cpp.la_solve(*args)
  File "/home/wu/.local/lib/python3.6/site-packages/dolfin/cpp/la.py", line 4903, in la_solve
    return _la.la_solve(*args)
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to solve linear system using PETSc Krylov solver.
*** Reason:  Solution failed to converge in 0 iterations (PETSc reason DIVERGED_PCSETUP_FAILED, residual norm ||r|| = 0.000000e+00).
*** Where:   This error was encountered inside PETScKrylovSolver.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2017.2.0.dev0
*** Git changeset:  4da7bfb2502e1e09f00770010ed13f4983e460f0
*** -------------------------------------------------------------------------

Is this an out-of-memory error? That is my assumption, but I am not sure.

 

Without MUMPS, i.e. with the default UMFPACK, I already get out-of-memory errors at a lower resolution:

UMFPACK V5.7.6 (May 4, 2016): ERROR: out of memory

Traceback (most recent call last):
...
    solve(AA, uu.vector(), bb, *solver_params)
  File "/home/wu/.local/lib/python3.6/site-packages/dolfin/fem/solving.py", line 310, in solve
    return cpp.la_solve(*args)
  File "/home/wu/.local/lib/python3.6/site-packages/dolfin/cpp/la.py", line 4903, in la_solve
    return _la.la_solve(*args)
RuntimeError: 

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics-support@googlegroups.com
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to successfully call PETSc function 'KSPSolve'.
*** Reason:  PETSc error code is: 76 (Error in external library).
*** Where:   This error was encountered inside /data/wu/gits/dolfin/dolfin/la/PETScKrylovSolver.cpp.
*** Process: 0
*** 
*** DOLFIN version: 2017.2.0.dev0
*** Git changeset:  4da7bfb2502e1e09f00770010ed13f4983e460f0
*** -------------------------------------------------------------------------

 

Moving to a larger workstation with 512 GB of RAM, MUMPS only yields the cryptic

Problem with integer stack size           1           1           9

which, according to Google, is an error in MUMPS that is undocumented apart from its appearance in the source code.

 

Can I just use different solver_params? Or do I need PETSc with 64-bit indices? Which solver?
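For reference, which names are valid as solver_params depends on how PETSc was configured; the backends available in a given build can be printed with dolfin's helper functions (a minimal sketch):

from dolfin import (list_linear_solver_methods, list_lu_solver_methods,
                    list_krylov_solver_methods, list_krylov_solver_preconditioners)

# Print the solver backends this dolfin/PETSc build provides.
list_linear_solver_methods()           # everything accepted as a method name by solve()
list_lu_solver_methods()               # direct (LU) backends, e.g. umfpack, mumps, ...
list_krylov_solver_methods()           # iterative methods, e.g. cg, gmres, ...
list_krylov_solver_preconditioners()   # preconditioners, e.g. ilu, petsc_amg, hypre_amg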


2 Answers


answered 4 months ago by
I'd be amazed if you could solve, with a direct solver, any problem that actually requires the indexing space of 64-bit indices! That is certainly not your issue.

For a scalable iterative strategy for solving the elasticity equations, try PETSc AMG + conjugate gradient. You can see an example of setting this up here: https://bitbucket.org/fenics-project/dolfin/src/bc08b002b887cff99bf126028e2281ef27a81b7e/demo/undocumented/elasticity/python/demo_elasticity.py?at=master&fileviewer=file-view-default
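Adapted from that demo to the 2D problem in the question, a minimal (untested) sketch could look like the following, reusing the AA, bb, uu and VV objects from compute() (i.e. in place of the solve(...) call there); the near-nullspace of rigid body modes is what makes smoothed-aggregation AMG effective for elasticity:

def build_nullspace_2d(VV, xx):
    """Rigid body modes (two translations, one rotation) as a near-nullspace for AMG."""
    basis = [xx.copy() for _ in range(3)]
    VV.sub(0).dofmap().set(basis[0], 1.0)   # translation in x
    VV.sub(1).dofmap().set(basis[1], 1.0)   # translation in y
    VV.sub(0).set_x(basis[2], -1.0, 1)      # rotation: (-y, x)
    VV.sub(1).set_x(basis[2],  1.0, 0)
    for vec in basis:
        vec.apply('insert')
    return VectorSpaceBasis(basis)

null_space = build_nullspace_2d(VV, uu.vector())
as_backend_type(AA).set_near_nullspace(null_space)

# Conjugate gradient preconditioned with PETSc's smoothed-aggregation AMG (GAMG).
solver = PETScKrylovSolver('cg', 'petsc_amg')
solver.parameters['relative_tolerance'] = 1.0e-8
solver.parameters['monitor_convergence'] = True
solver.set_operator(AA)
solver.solve(uu.vector(), bb)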

Thank you for the pointer. I could use this.

I am still interested, though, in whether there is a direct solver I can use for large systems, for example for mixed formulations or additional constraints with Lagrange multipliers.

I would also be interested in whether there are parameters for solve such that my example code would simply run without the aforementioned errors.

written 4 months ago by Sa Wu  
answered 4 months ago by
I recommend MUMPS and SuperLU_dist as direct solvers for larger problems. 

UMFPACK seems to store an unnecessarily large integer internally, and this limits its use to smaller problems than one would expect for a given int type.
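For example, one of these backends can be selected by name (a minimal sketch with the AA, bb, uu objects from the question's compute(); 'superlu_dist' only appears if PETSc was configured with SuperLU_DIST):

# Direct solve with an explicitly chosen LU backend; the name must be one of
# those reported by list_lu_solver_methods().
solve(AA, uu.vector(), bb, 'superlu_dist')   # or 'mumps'

# Equivalently, via an explicit LU solver object:
solver = LUSolver(AA, 'superlu_dist')
solver.solve(uu.vector(), bb)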
