superlu_dist in fenics 2016.1.0

4 months ago

Hi there, I am using FEniCS stable 1.6.0 (through Docker on Windows 10) and trying to solve a nonlinear elasticity problem. MUMPS is unable to solve the problem and the process gets killed on my system (Core i7, 32 GB RAM). I have also tried a Krylov solver (1572975 x 1572975 system), but it is not converging.

I have read in several posts that SuperLU_DIST is also meant for large systems of equations, but I am unable to find superlu_dist in the list of available linear solvers in FEniCS 1.6.0.

I am stuck because the process gets killed when I use MUMPS. Can anyone suggest how I can add SuperLU while using the same FEniCS version?
Community: FEniCS Project

2 Answers

4 months ago
You will need to compile PETSc with SUPERLU_DIST support:

If you can manage using the Docker version, it has this package nicely installed for you. Otherwise, you can investigate the FEniCS devs' Dockerfiles for help installing from source:

Hope that helps!

EDIT: I seem to have misread that you are using the Docker image. Is your question simply how to use SUPERLU_DIST instead of MUMPS? If so, you can find which linear solvers are installed via ``list_linear_solver_methods()``; this will give you the key to pass to the solver parameters.
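A minimal sketch of what that looks like in practice (the Poisson problem here is just a placeholder to show where the solver key goes; "superlu_dist" only appears in the list if your PETSc build includes it):

```python
from dolfin import *

# Print the solver backends this build actually supports.
list_linear_solver_methods()
list_lu_solver_methods()

# Minimal Poisson example to show where the solver key is passed.
mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, 'P', 1)
u, v = TrialFunction(V), TestFunction(V)
a = dot(grad(u), grad(v)) * dx
L = Constant(1.0) * v * dx
bc = DirichletBC(V, Constant(0.0), 'on_boundary')

uh = Function(V)
# Swap in any key printed by list_linear_solver_methods(), e.g. "mumps".
solve(a == L, uh, bc,
      solver_parameters={'linear_solver': 'superlu_dist'})
```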
No, you did not misread my issue. In fact, I have tried using superlu_dist, but it is not installed as part of FEniCS stable 1.6.0. When I obtain the list of available solvers through list_linear_solver_methods() and list_lu_solver_methods(), superlu or superlu_dist is not displayed in the list.

Now I am trying to install the dev-env version available through the link you shared. Is it possible to amend the FEniCS 1.6.0 Dockerfile and install SuperLU as part of FEniCS 1.6.0?
written 4 months ago by Ovais  
This I do not know. Is there any particular reason you don't want to use version 2017.2.0?
written 4 months ago by pf4d  
Actually, I wrote the code for the FEniCS 2016 version, and it was giving MixedFunctionSpace-related errors in the latest version. But now I guess I have to make the changes, because SuperLU is readily available in the latest version.

written 4 months ago by Ovais  
I know this issue; this is what you are looking for:

from dolfin import *

mesh = UnitCubeMesh(8, 8, 8)  # any tetrahedral mesh
scalar = FiniteElement('P', tetrahedron, 1)
vector = VectorElement('P', tetrahedron, 1)
mixed_element = MixedElement([scalar, vector])
Space = FunctionSpace(mesh, mixed_element)
written 4 months ago by Emek  
I have been able to implement the code in the latest FEniCS version. But the problem does not get solved with SuperLU as the solver either. With MUMPS the problem gets solved for a lower quadrature degree, i.e. 2, but for quadrature degree 4 the process gets killed.

My guess: the process is being killed NOT because of computational expense but because the OOM killer stops it. I reached this conclusion by monitoring CPU usage and RAM usage. I have even tried to turn the OOM killer off through the Docker settings, but my process still gets killed. The question is: how do I stop Linux from killing my process? There is only 13-15 percent CPU usage; MUMPS seems to be working towards solving the problem, but the process gets killed while FEniCS is busy. Any suggestions? What should I do? (I can post the complete code, but I have intentionally not done so to keep the focus on the OOM killer issue rather than the code.)

I am using Windows 10 on a Core i7 with 32 GB of RAM.
written 4 months ago by Ovais  
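One thing worth checking (an assumption on my part, not something stated in the thread): on Windows, Docker runs containers inside a VM whose memory is capped in Docker's settings, so the container may see far less than the machine's 32 GB. The VM-wide cap is raised in Settings -> Advanced; per-container limits can also be set on the command line (the image name and 24g value below are illustrative):

```shell
# Raise the per-container memory limit; without --memory-swap the
# container may still be swap-limited to twice the --memory value.
docker run -it --memory 24g --memory-swap 24g \
    quay.io/fenicsproject/stable:latest
```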
Ubuntu is a very good OS and kills memory-hungry processes (instead of running out of memory and showing a blue screen, as is the case in other OSes). Your problem is that quadrature degree 4 leads to high memory consumption. Here is an important detail for you: with an n-point Gauss rule, e.g. n = 2, you can integrate a polynomial of degree 2n-1 = 3 exactly. In other words, up to cubic Lagrange elements there is no need to use anything higher than quadrature degree 2. Consider whether your choice is meaningful.
written 4 months ago by Emek  
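The 2n-1 exactness rule above can be checked with a few lines of plain Python (a sketch using the standard two-point Gauss-Legendre rule on [-1, 1]; no FEniCS needed):

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), weights 1.
nodes = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]
weights = [1.0, 1.0]

def gauss2(f):
    """Approximate the integral of f over [-1, 1] with 2 Gauss points."""
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Exact for polynomials up to degree 2n - 1 = 3:
print(gauss2(lambda x: x**2))  # 2/3, matches the exact integral
print(gauss2(lambda x: x**3))  # 0,   matches the exact integral
# Degree 4 is the first degree where the rule is no longer exact:
print(gauss2(lambda x: x**4))  # 2/9, while the exact value is 2/5
```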
4 months ago
A nonlinear elasticity problem can be solved using cg as long as the material parameters are not very different; a soft material attached to a metal, for example, may cause problems for an iterative solver. Try different iterative solvers. You may first check on a smaller mesh whether everything works as expected.
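As a sketch of what trying different iterative solvers can look like in DOLFIN (the linear elasticity forms below are placeholders standing in for the actual problem; available solver and preconditioner names depend on the PETSc build):

```python
from dolfin import *

# Placeholder linear elasticity-like setup; substitute your own forms.
mesh = UnitCubeMesh(8, 8, 8)
V = VectorFunctionSpace(mesh, 'P', 1)
u, v = TrialFunction(V), TestFunction(V)
a = inner(sym(grad(u)), sym(grad(v))) * dx
L = dot(Constant((0.0, 0.0, -1.0)), v) * dx
bc = DirichletBC(V, Constant((0.0, 0.0, 0.0)), 'on_boundary')

uh = Function(V)
# cg with an algebraic multigrid preconditioner often behaves much
# better on elasticity than plain cg; gmres with ilu is another
# combination worth trying.
solve(a == L, uh, bc,
      solver_parameters={'linear_solver': 'cg',
                         'preconditioner': 'amg'})
```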
Thanks for the reply, Emek.

I have solved the problem for simpler and smaller meshes numerous times with different solvers. But now the issue is to solve the problem on a finer mesh. I have tried a few iterative solvers, but they did not converge. Is it possible to change the parameters of MUMPS to make it more efficient? Any suggestion regarding superlu_dist would also be very helpful.
written 4 months ago by Ovais  