PETSc and SNES in DOLFIN: Jacobian test fails with O(1) relative error

4 months ago by
I was having a problem with a diverging Newton method, so I decided to try the SNES nonlinear solver. As a sanity check, I used the "test" method to test my Jacobian, and I got the following output:

Testing hand-coded Jacobian, if the ratio is
O(1.e-8), the hand-coded Jacobian is probably correct.
Run with -snes_test_display to show difference
of hand-coded and finite difference Jacobian.
Norm of matrix ratio 1.00017, difference 31232.9 (user-defined state)
Norm of function ratio 1., difference 4371.8 (user-defined state)
Norm of matrix ratio 1.00017, difference 31232.9 (constant state -1.0)
Norm of function ratio 1.00015, difference 4411. (constant state -1.0)
Norm of matrix ratio 1.00017, difference 31232.9 (constant state 1.0)
Norm of function ratio 1.00015, difference 4411. (constant state 1.0)

After running, it exits with an error (normal behavior for test method). I am using a Jacobian computed automatically, so this result seems strange to me -- shouldn't the Jacobian computed with DOLFIN be exact, up to numerical precision? Is this the expected output from running the SNES test?

Community: FEniCS Project

Could you perhaps try this same test with a simpler nonlinear PDE and post a minimal working example (MWE)?

I'm interested in where this goes, since I've had some odd issues with fenics.derivative in the past, and never figured out what was wrong. (as of now, fenics.derivative works fine for my problem)

Also, I'm guessing this isn't practical for your case, but can you derive the Jacobian (Gateaux derivative) by hand and test that instead? I did this a few times when I was having issues; it's quite tedious, though.
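For a small system, you can also do this kind of check outside of FEniCS entirely. Here is a minimal numpy sketch of the comparison the SNES test performs, with a hypothetical residual F and hand-coded Jacobian J standing in for the assembled forms (nothing here is DOLFIN API, and the norms only roughly mirror what SNES reports):

```python
import numpy as np

# Hypothetical residual F and hand-coded Jacobian J for a 2x2 system;
# these stand in for the assembled DOLFIN forms.
def F(u):
    return np.array([u[0]**2 + u[1] - 3.0,
                     np.exp(u[0]) - u[1]**2])

def J(u):
    return np.array([[2.0 * u[0],    1.0],
                     [np.exp(u[0]), -2.0 * u[1]]])

def fd_jacobian(F, u, h=1e-6):
    """Central-difference approximation of dF/du, one column at a time."""
    n = u.size
    Jfd = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        Jfd[:, j] = (F(u + e) - F(u - e)) / (2.0 * h)
    return Jfd

# Roughly the quantities SNES prints: the norm of the difference and a ratio.
u0 = np.array([0.7, -1.3])
Jfd = fd_jacobian(F, u0)
difference = np.linalg.norm(J(u0) - Jfd)
ratio = difference / np.linalg.norm(Jfd)
print("difference = %.3g, ratio = %.3g" % (difference, ratio))
```

A correct Jacobian should drive the ratio down toward the O(1.e-8) mentioned in the test banner; an O(1) ratio like the one in the question means the two matrices disagree almost everywhere.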

written 4 months ago by Alexander G. Zimmerman  
So I'm not sure if this is "the" answer, but the derivative test seems particularly sensitive to the input vector. As long as the solution vector is initialized with some initial condition (setting u = u0 before calling "solve"), the Jacobian test seems to pass reasonably well. The function ratio is still high, but the matrix ratio is much smaller and less than unity. I haven't tested this rigorously; it's just something I noticed while debugging. Is it possible that the Jacobian has been linearized in some way so that it's only valid near the true solution? That wouldn't make much sense to me, but it might explain the observation. I'm open to ideas!
written 4 months ago by Ellen Price  

I actually thought that the whole point of this was linearization, and I would indeed expect the result to be strongly dependent on the initial guess/solution. Perhaps I'm misunderstanding something.

You only have to call fenics.derivative once, but the nonlinear solver still has to evaluate the discrete Jacobian at the current solution iterate each time it forms a linear system.

For my own use case, I am always solving a time-dependent initial boundary value problem (IBVP), so I simply initialize the solution either with the initial values of my IBVP or with the solution from the previous time step. That being said, I have observed at least one case where this somehow fails and leaving the default solution initialization (which should be zeros, at least when compiling with gcc) works miraculously.
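The sensitivity to the initial state is the same mechanism behind Newton divergence from a poor guess. A standard scalar illustration (plain Python, nothing DOLFIN-specific): Newton's method on f(u) = arctan(u) uses an exact derivative at every state, yet it only converges from initial guesses close enough to the root u = 0.

```python
import math

# Newton's method on f(u) = arctan(u), with the exact derivative
# f'(u) = 1/(1 + u^2).  The Jacobian is exact everywhere, but the
# iteration overshoots and diverges for initial guesses too far out.
def newton(u0, steps=20):
    u = u0
    for _ in range(steps):
        u = u - math.atan(u) * (1.0 + u * u)  # u - f(u)/f'(u)
        if abs(u) > 1e12:
            return None  # diverged
    return u

print(newton(1.0))  # converges toward 0
print(newton(2.0))  # None: overshoots and diverges
```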

written 4 months ago by Alexander G. Zimmerman  
Is it possible that the residual has non-smooth or discontinuous behavior near certain points?  For example, if you have a conditional() or abs() somewhere, it may be that the finite difference approximation used by the SNES test is inaccurate.  (Consider, e.g., differentiating a Heaviside function at a point \(\epsilon\) away from zero; the exact Jacobian is zero there, but a finite difference approximation might return a very large number.)
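This failure mode is easy to reproduce in isolation; here is a tiny Python sketch of the Heaviside example above (illustrative only, not what SNES runs internally):

```python
# Differentiate a Heaviside step a distance eps from the jump: the exact
# derivative there is 0, but a finite-difference stencil wide enough to
# straddle the jump returns O(1/h) instead.
def heaviside(x):
    return 1.0 if x >= 0.0 else 0.0

eps, h = 1e-9, 1e-7
fd = (heaviside(eps + h) - heaviside(eps - h)) / (2.0 * h)
print("exact derivative: 0.0, finite difference: %g" % fd)  # 5e+06
```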
written 4 months ago by David Kamensky  
One trick that might be helpful for debugging derivatives is to look at the symbolic expression for the Jacobian.  You can get it with code like the following:
from dolfin import *

mesh = UnitSquareMesh(10,10)

# scalar case: output is easy to understand
S = FiniteElement("Lagrange",mesh.ufl_cell(),1)

# vector case: output uses an ugly index notation and requires a bit more
# patience to understand, but still eventually makes sense
#S = VectorElement("Lagrange",mesh.ufl_cell(),1)

V = FunctionSpace(mesh,S)

u = Function(V)
v = Function(V)

a = exp(sqrt(inner(u,v)))*dx

deriv_a = derivative(a,u)

# Replace the trial-function argument of the derivative with a concrete
# Function so the printed expression is fully expanded.
u_trial = Function(V)

action_deriv_a = action(deriv_a, u_trial)

print(action_deriv_a)
written 4 months ago by David Kamensky  