Managing a nonlinear Poisson system with large boundary values

4 months ago by
I'm trying to solve the nonlinear Poisson equation (Poisson-Boltzmann). The classic system can be set up in terms of the potential alone, but I'm trying to get it working with explicit source functions, as a precursor to solving time-dependent drift-diffusion problems.

My solution works well enough when the boundary values are small (in physical terms, below about 500 mV) but diverges for high boundary values. I've tried rescaling and reframing the source terms, and using different direct and iterative solvers. All of that works in itself (including rescaling the functions to order unity), but doesn't solve the problem: the solve still diverges for large boundary values. It seems the high boundary value makes the system ill-conditioned. I'm interested to hear clues for other strategies I should try.

The minimal example below seems sufficient to illustrate the problem. The mixed solution u has 3 components: u.sub(0) is the potential (the "main" solution), and u.sub(1), u.sub(2) are the positive and negative source terms (charge concentrations cp, cn). I've presented it on a regular 1D mesh with 10000 cells (in my actual implementation I use NonlinearVariationalProblem and a mesh that becomes finer towards the boundary; that helps it get further, but the basic problem still occurs when the boundary value is sufficiently high).

With boundaryValue=20 the solve completes satisfactorily. With boundaryValue=30 it proceeds slowly and hits the iteration limit. With boundaryValue=100 the iterations diverge and hit nan.
import matplotlib.pyplot as plt
from dolfin import *

# Sub domain for Dirichlet boundary condition
class DirichletBoundary(SubDomain):
    def inside(self, x, on_boundary):
        return (abs(x[0]) < DOLFIN_EPS or abs(x[0] - 1.0) < DOLFIN_EPS) and on_boundary

# Create mesh and define function space
mesh = UnitIntervalMesh(10000)

#V = FunctionSpace(mesh, "CG", 1)
singleElement = FiniteElement('CG', interval, 1)
elementList = [ singleElement, singleElement, singleElement ]
mixedElement = MixedElement(elementList)
V = FunctionSpace(mesh, mixedElement)
# Define boundary condition
boundaryValue = 100  # diverges; 20 converges, 30 hits the iteration limit
g = Constant(boundaryValue)
bc = DirichletBC(V.sub(0), g, DirichletBoundary())

# Define variational problem
u = Function(V)
v = TestFunction(V)

(potential, cp, cn) = split(u)
a = ( inner(nabla_grad(potential), nabla_grad(v[0]))*dx
      + inner(cp, v[1])*dx - inner(exp(-potential), v[1])*dx
      + inner(cn, v[2])*dx - inner(exp(+potential), v[2])*dx )

f = cp-cn
L = f*v[0]*dx

F = a - L

# Compute solution
solve(F == 0, u, bc)

# Plot solution
(potential, cp, cn) = u.split()
plot(potential, title="Potential")
plot(cp, title="Source Positive")
plot(cn, title="Source Negative")
plt.show()

# Save solution in VTK format
file = File("nonlinear_poisson.pvd")
file << u

Happy to hear clues. Would it just be a matter of pushing mesh refinement harder at the boundary, or something more subtle?
Community: FEniCS Project

1 Answer

4 months ago by
A nonlinear problem is linearized using the Newton--Raphson method and then solved. The linearization rests on an important assumption: that the change from the initial guess is small (quadratic terms are neglected). Hence you cannot solve the system if the initial guess is far from the solution, and that is probably what happens as the boundary condition gets larger. Try solving the system in time (or in a time-like fashion, incrementing the boundary condition). In simple terms, increase the boundary value in several steps in a for/while loop, so that the result of the last solve is used as the starting point for the next Newton--Raphson linearization.
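The ramping loop described above might be sketched as follows. The helper name `ramp_values` and the 10% growth factor are illustrative choices, not from dolfin; the commented lines indicate where the FEniCS solve from the question would go.

```python
# Continuation (ramping) sketch: compute a geometric schedule of
# boundary values, then solve at each one, reusing the previous
# converged u as the Newton initial guess.

def ramp_values(start, target, factor=1.1):
    """Geometric schedule of boundary values from start up to target."""
    values = []
    value = start
    while value < target:
        values.append(value)
        value *= factor
    values.append(target)  # finish exactly on the target value
    return values

for bv in ramp_values(20.0, 100.0):
    pass
    # g.assign(bv)           # update the Dirichlet value in place
    # solve(F == 0, u, bc)   # previous u is the initial guess here
```

Each step stays within the Newton method's basin of convergence because consecutive boundary values differ by at most the chosen factor.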

Best, Emek
Thanks Emek, your analysis is sound. It's a valuable tip: I've found I can reach the high boundary values I need by ramping up from a lower, converging value.

I had to take care when using a converged solution as the initial guess for the next boundary value as I approached the target. I needed to use u.copy(deepcopy=True); with only a shallow u.copy(), the previously converged solution would get overwritten with nan if the next solve diverged. The issue with shallow vs deep copy is currently being addressed in and .

I found the residuals plateau at a level that grows with the boundary value. An absolute tolerance of 2e-6 * |boundaryValue| seemed to work. But after rescaling to constrain the functions to values near unity, a consistent absolute tolerance of 1e-6 works well.
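For reference, the absolute tolerance can be passed to legacy dolfin's Newton solver through solver_parameters; "newton_solver", "absolute_tolerance", "relative_tolerance" and "maximum_iterations" are standard dolfin NewtonSolver option names, while the 1e-6 value follows the rescaled case above (the other values are illustrative):

```python
# Newton solver settings for the rescaled problem (functions near unity).
newton_params = {
    "newton_solver": {
        "absolute_tolerance": 1e-6,   # consistent after rescaling
        "relative_tolerance": 1e-10,
        "maximum_iterations": 50,
    }
}

# Passed to the variational solve as:
# solve(F == 0, u, bc, solver_parameters=newton_params)
```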

I needed to increase from a converged solution in steps of 10%, i.e. multiplying boundaryValue by a factor of 1.1. Steps of 15% worked for ramping up by one order of magnitude, but I needed 10% steps to push out by two orders of magnitude (to "10000 mV").
written 4 months ago by Drew Parsons  
