Manually compute residual form in radial coordinates


asked 12 weeks ago by scaramouche-00
Hello,

I'm solving a non-linear system of two coupled 1D equations.

I've manually implemented a Newton solver, and at every iteration I solve the linear system using FEniCS's solve() method (i.e. I've linearised the equations and I pass bilinear and linear forms). The convergence criterion I'm using is $\left\|u_k-u_{k-1}\right\|_{\infty}<\epsilon_r\left\|u_0\right\|_{\infty}+\epsilon_a$, where $\epsilon_r,\epsilon_a$ are chosen relative and absolute tolerances. Below is a summary of the code I'm using.
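That stopping test can be sketched outside FEniCS in plain NumPy (the helper name `has_converged` and the tolerance values are illustrative, not from the code below):

```python
import numpy as np

def has_converged(u_new, u_old, u0, eps_rel=1e-10, eps_abs=1e-14):
    # combined test: ||u_k - u_{k-1}||_inf < eps_rel * ||u_0||_inf + eps_abs
    du_norm = np.linalg.norm(u_new - u_old, ord=np.inf)
    u0_norm = np.linalg.norm(u0, ord=np.inf)
    return du_norm < eps_rel * u0_norm + eps_abs

u0 = np.array([1.0, 2.0, 3.0])
u_prev = np.array([1.5, 2.5, 3.5])
print(has_converged(u_prev + 1e-12, u_prev, u0))  # True: update far below tolerance
print(has_converged(u_prev + 1.0, u_prev, u0))    # False: update still large
```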

I would like to compute the residual form after the non-linear solver has converged. I'm not sure how to do this, because I need to compute the Laplacian in radial coordinates, and I'm not sure what the proper FEniCS way to do that is.

I know the built-in Newton solver computes the residual form, but it does so using test functions. I'm not sure how I can get the maximum nodal residual this way.

Can you please help? Thanks in advance!

def nonlinear_solver( self ):

        p = self.problem
        
        Dirichlet_bc = p.get_Dirichlet_bc()
        
        u = d.TrialFunction(p.V)
        phi, H = d.split(u)
        
        v1, v2 = d.TestFunctions(p.V)
        
        # create a vector with the linearised solution at every iteration
        u_k = d.Function(p.V)
        phi_k, H_k = d.split(u_k)

        [...]

        r2 = Expression('pow(x[0],2)', degree=p.func_degree)

        du = d.Function(p.V)   # difference between solutions at two iterations
        converged = False

        while (not converged) and self.i < self.max_iter:

            [...]

            # define bilinear form    
            # Eq.1
            a1 = - inner( grad(phi), grad(v1) ) * r2 * dx - ( m / Mn )**2 * phi * v1 * r2 * dx \
                 + alpha * inner( grad(H), grad(v1) ) * r2 * dx
        
            # Eq.2
            a2 = alpha * inner( grad(phi), grad(v2) ) * r2 * dx - \
                 inner( grad(H), grad(v2) ) * r2 * dx \
                 - ( M / Mn )**2 * H * v2 * r2 * dx - lam/2. * ( Mf / Mn )**2 * H_k**2 * H * v2 * r2 * dx
            
            # both equations
            a = a1 + a2

            # define linear form
            L1 = p.rho / Mp * Mn / Mf * v1 * r2 * dx          # Eq.1
            L2 = - lam / 3. * ( Mf / Mn )**2 * H_k**3 * v2 * r2 * dx # Eq.2
            L = L1 + L2                             # both equations

            # define a vector with the solution
            sol = d.Function(p.V)
            
            # solve linearised system
            pde = d.LinearVariationalProblem( a, L, sol, Dirichlet_bc )
            solver = d.LinearVariationalSolver( pde )
            solver.solve()

            # store norm of initial solution - used in convergence test
            if self.i == 0:
                # this nested if is to preserve the structure of the original built-in FEniCS norm function
                if self.norm=='linf':
                    # infinity norm, i.e. max abs value at vertices
                    u0_norm = r2_norm( sol.vector(), p.func_degree, norm_type=self.norm )
                else:
                    u0_norm = r2_norm( sol, p.func_degree, norm_type=self.norm )

            # compute and store the error at this iteration
            # this nested if is to preserve the structure of the original built-in FEniCS norm function
            if self.norm=='linf':
                # infinity norm, i.e. max abs value at vertices
                du.vector()[:] = sol.vector()[:] - u_k.vector()[:]
                self.du_norm[self.i] = r2_norm( du.vector(), p.func_degree, norm_type=self.norm )
            else:
                self.du_norm[self.i] = r2_norm( du, p.func_degree, norm_type=self.norm )
            self.rel_err[self.i] = self.du_norm[self.i] / u0_norm

            print( 'Non-linear solver, iteration %d\tabs_err = %.3e (tol = %.1e)\trel_err = %.3e (tol=%.1e)' \
                % (self.i, self.du_norm[self.i], self.abs_tol, self.rel_err[self.i], self.rel_tol ) )

            # check convergence
            converged = ( self.du_norm[self.i] < self.rel_tol * u0_norm + self.abs_tol )

            # if maximum number of iterations has been reached without converging, raise an exception
            if ( self.i+1 == self.max_iter and ( not converged ) ):
                message = "UV_ERROR: The solver hasn't converged in the maximum number of iterations"
                raise Convergence_Error(message)
            
            # update for next iteration
            self.i += 1
            u_k.assign(sol)
            

        return sol
Community: FEniCS Project
What do you mean by "residual form"? Do you want to compute the residual w.r.t. the strong form of the PDE, in each cell?
The Laplacian in 1D is simply phi.dx(0).dx(0) or Dx(Dx(phi, 0), 0).
If I understood correctly, I'd just write the strong form of your PDE in terms of your solution, and then project it to a suitable DG space, with the project(residual, DG) method.
written 12 weeks ago by David Nolte  
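As an aside on what the suggested project(residual, DG) does in the lowest-order case: the L² projection onto DG0 is just the per-cell average. A minimal NumPy sketch on a uniform 1D mesh (the function and mesh here are made up for illustration):

```python
import numpy as np

def project_dg0(f, vertices):
    # L2 projection onto piecewise constants = cell average of f on each cell;
    # Simpson's rule computes the averages exactly for low-degree integrands
    a, b = vertices[:-1], vertices[1:]
    mid = 0.5 * (a + b)
    return (f(a) + 4.0 * f(mid) + f(b)) / 6.0

mesh = np.linspace(0.0, 1.0, 3)        # two cells: [0, 0.5] and [0.5, 1]
print(project_dg0(lambda x: x, mesh))  # [0.25 0.75], the exact cell averages
```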

Hello, thanks for your reply!

> Do you want to compute the residual w.r.t. the strong form of the PDE, in each cell?
yes! Sorry if it was confusing! I would like to use it to cross-check convergence and have an estimate of the residual.

> The Laplacian in 1D is simply phi.dx(0).dx(0) or Dx(Dx(phi, 0), 0).
I see. These are spherical coordinates, but I guess I can then adjust what you wrote to
Constant(2.)/r * phi.dx(0) + phi.dx(0).dx(0),
after having defined r with an Expression.
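That radial form can be sanity-checked outside FEniCS with finite differences: for $\phi(r)=r^2$ the spherical Laplacian $\phi'' + \frac{2}{r}\phi'$ is exactly 6 (pure NumPy below, nothing here is FEniCS code):

```python
import numpy as np

r = np.linspace(0.1, 1.0, 1001)  # keep away from the r = 0 singularity
h = r[1] - r[0]
phi = r**2                       # test function with known spherical Laplacian = 6

# central differences on interior points
dphi  = (phi[2:] - phi[:-2]) / (2.0 * h)
d2phi = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
lap = d2phi + 2.0 / r[1:-1] * dphi   # phi'' + (2/r) phi'

print(np.max(np.abs(lap - 6.0)))  # round-off level: the stencils are exact for r^2
```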

In the weak form I use grad(phi) for phi.dx(0): is there any subtle finite-element reason why they should not be the same thing, or can they be interchanged safely? From a FEniCS point of view I can't use grad(phi) in this case because it would create a vector quantity, but I'm wondering whether grad(phi) and phi.dx(0) are otherwise represented/computed differently inside FEniCS in 1D.

> and then project it to a suitable DG space, with the project(residual, DG) method.
Why a DG space? Can you please explain?

Thanks a lot again for your reply! There are subtle things to consider when using a new mathematical framework and library, and I wasn't sure!

written 12 weeks ago by scaramouche-00  
I think the difference between grad(u) and u.dx(0) in 1D is subtle: grad(u) is a vector of shape (1,), u.dx(0) is truly scalar. So you can't do grad(u)*grad(v), but have to use inner(). Turns out you can write div(grad(u)) for a 1D (cartesian) Laplacian, but I guess I found that awkward in 1D ;)
The advantage of using grad and div is that the same code could also work in 2D or 3D (for cartesian coordinates at least).

The space for the residual depends on the space of the solution and your PDE. For example, if the function space is P1, then first derivatives are piecewise constant functions, i.e. DG0, and adding P1 functions (if there are linear, non-differentiated $\phi$ terms), you get functions that lie in DG1. For P2, the second derivatives are DG0, the first derivatives are DG1, 'zeroth' derivatives P2, so you get DG2 overall, etc.
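The degree bookkeeping can be verified with NumPy's polynomial tools, since each derivative drops the polynomial degree by one (the polynomial below is an arbitrary stand-in for a P2 function restricted to one cell):

```python
import numpy as np

p2 = np.polynomial.Polynomial([1.0, -2.0, 3.0])  # 1 - 2x + 3x^2, degree 2 like P2

first  = p2.deriv(1)   # -2 + 6x : degree 1, cell-wise this lives in DG1
second = p2.deriv(2)   # 6       : degree 0, cell-wise this lives in DG0

print(first.degree(), second.degree())  # 1 0
```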
written 12 weeks ago by David Nolte  
Thanks for your reply! There is one more thing I don't understand though: in the FEniCS tutorial, section 5.5.2, they describe the computation of the gradient. Because for post-processing you typically want a continuous gradient field (which is closer to the true mathematical gradient), they project it onto a P space.

I understand why derivatives of piecewise polynomial functions are discontinuous across cells; however, I don't understand why projecting the discontinuous gradient (or Laplacian) field onto a continuous function space is not recommended here.

Thanks a lot for your time!
written 12 weeks ago by scaramouche-00  
You can do it of course, just be aware that you lose information. That may be acceptable, though.

Cheers
written 12 weeks ago by David Nolte  
I think this answers my question, thanks!
written 10 weeks ago by scaramouche-00  

1 Answer


answered 9 weeks ago by scaramouche-00
David Nolte answered my question, so I copy it here for future reference.

The residual wrt the strong form of the PDE requires the Laplacian to be computed: as I'm working in spherical coordinates, the FEniCS way to write it is

Constant(2.)/r * phi.dx(0) + phi.dx(0).dx(0)

after having defined r as an Expression.

The Laplacian (and the rest of the residual) must then be projected onto a 'DG' space:
> The space for the residual depends on the space of the solution and your PDE. For example, if the function space is P1, then first derivatives are piecewise constant functions, i.e. DG0, and adding P1 functions (if there are linear, non-differentiated $\phi$ terms), you get functions that lie in DG1. For P2, the second derivatives are DG0, the first derivatives are DG1, 'zeroth' derivatives P2, so you get DG2 overall, etc.

Note that, even though you might expect grad(phi) and phi.dx(0) to be the same in 1D, they are not:
> I think the difference between grad(u) and u.dx(0) in 1D is subtle: grad(u) is a vector of shape (1,), u.dx(0) is truly scalar. So you can't do grad(u)*grad(v), but have to use inner(). Turns out you can write div(grad(u)) for a 1D (cartesian) Laplacian, but I guess I found that awkward in 1D ;)
> The advantage of using grad and div is that the same code could also work in 2D or 3D (for cartesian coordinates at least).

Thanks David!