
2.3 The DUNE Framework

2.3.3 PDELab

PDELab [102, 101] provides a number of high-level abstractions to build PDE solvers on top of DUNE grids. The multi domain simulation software developed as part of this work is realized as a set of extensions to the PDELab framework and consequently reuses many of those abstractions as well as the associated terminology.

For that reason, we provide a short overview of the typical building blocks of a PDELab simulation as well as their API.

Remark 2.1. DUNE places all of its implementation into the namespace Dune::, and PDELab-specific functionality can be found in the namespace Dune::PDELab::. In order to improve the readability of the code examples in this section, we omit both of those namespace prefixes and assume that they have been imported by means of appropriate using directives.
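In a complete program, this corresponds to directives along the following lines (shown only to make that assumption explicit):

using namespace Dune;
using namespace Dune::PDELab;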

One of the most fundamental PDELab concepts is the function space, which is defined in terms of a GridView (a read-only view of a DUNE grid), a FiniteElementMap that associates a specific finite element with each grid cell, a constraints engine (that is used to assemble information about constrained DOFs) and a vector backend tag that is used to select and parameterize the vector implementation used for the DOF vectors. In general, a function space is created like this:

// Define type of function space
typedef GridFunctionSpace<
  LeafGridView,      // GridView on which the function space is defined
  FiniteElementMap,  // map from cells to finite elements from dune-localfunctions
  Constraints,       // constraints engine (Dirichlet, hanging nodes, parallel...)
  VectorBackend      // control DOF vector structure (blocking etc.)
  > GFS;

// Instantiate fully specified function space type
GFS gfs(
  grid_view,
  finite_element_map,
  constraints_engine
);

Note that due to the statically typed and templated nature of the library, PDELab components are typically created by a combination of a typedef to fill in the template parameters of the component template, followed by an instantiation of the newly defined type.

Constrained spaces do not directly store their constraints, but use an external constraints container for this purpose, which has to be manually created and filled with the actual constraints. For the common case of a continuous Galerkin space with Dirichlet constraints, this is achieved by

typedef typename GFS::template ConstraintsContainer<double>::Type C;
C cg;
constraints(constraints_parameters,gfs,cg);

The constraints_parameters can carry additional information required for the constraints assembly; here they are used to determine the boundary type (Dirichlet vs. Neumann) at a given location.
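As an illustration, the following sketch shows a possible boundary type parameter class. The derivation from DirichletConstraintsParameters and the isDirichlet() signature follow the usual PDELab pattern, while the class name BoundaryType and the concrete boundary layout are made up for this example:

// Hypothetical boundary classification: Dirichlet on the x = 0 boundary,
// Neumann everywhere else.
class BoundaryType
  : public DirichletConstraintsParameters
{
public:
  template<typename I>
  bool isDirichlet(
    const I& intersection,
    const FieldVector<typename I::ctype, I::dimension-1>& coord
  ) const
  {
    // map the intersection-local coordinate to a global position
    auto x = intersection.geometry().global(coord);
    return x[0] < 1e-8;
  }
};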

The problem-specific code written by the user is mostly concerned with the implementation of the cell- and intersection-restricted residual and Jacobian terms. These are encapsulated in a class that has to conform to the LocalOperator concept.

As this part of the simulation is a direct translation of the residual form, it will typically be provided by the simulation developer, but PDELab also includes a number of implementations for common problems, e.g. an operator for the Poisson equation:

typedef Poisson<
  SourceTerm,
  ConstraintsParameters,
  NeumannTerm
  > LocalOperator;

LocalOperator local_operator(
  source_term,
  constraints_parameters,
  neumann_term,
  quadrature_order
);

In keeping with the largely dimension-agnostic nature of DUNE, most local operators will work for any grid dimension, making it easy to, e.g., develop a method using a simple 2D example and then switch to a full 3D application at a later point. They typically also do not require a specific function space; the Poisson operator shown above will work for any type of continuous function space and only requires the user to specify a sufficiently high quadrature order. The local operator contains a number of local kernels that are used by PDELab to set up the sparse matrix pattern and assemble the residual or its Jacobian (cf. Section 2.2.5). These kernels are implemented as callback methods of the LocalOperator, e.g. for the α_vol term:

// tell PDELab to call the alpha_volume() kernel
static const bool doAlphaVolume = true;

template<
  typename Cell,          // grid cell (for geometry information etc.)
  typename LFSU,          // local ansatz space for current cell
  typename LFSV,          // local test space for current cell
  typename X, typename R> // local solution and residual vectors
void alpha_volume(
  const Cell& cell,
  const LFSU& lfsu, const X& x, const LFSV& lfsv,
  R& r
) const;

Note the additional flag that must be set; it is evaluated at compile time by PDELab and makes it possible to optimize the global assembly (Algorithm 2.1) by tailoring it to the set of active kernels. If the user does not want to provide an explicit implementation of the Jacobian, PDELab can calculate it automatically by performing a numerical differentiation of the residual.
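To make the role of these kernels more concrete, the following is a minimal sketch of what an alpha_volume() body might look like for a simple reaction term (the integral of c·u·v over the cell). It is not PDELab's actual Poisson kernel; the coefficient c and the quadrature order are assumed to be members of the surrounding local operator class, while the basis evaluation and accumulation calls follow the standard PDELab idioms.

template<
  typename Cell, typename LFSU, typename LFSV,
  typename X, typename R>
void alpha_volume(
  const Cell& cell,
  const LFSU& lfsu, const X& x, const LFSV& lfsv,
  R& r
) const
{
  // range type of the local basis (scalar for a continuous Galerkin space)
  typedef typename LFSU::Traits::FiniteElementType::Traits::
    LocalBasisType::Traits::RangeType Range;
  const int dim = Cell::Geometry::mydimension;

  // loop over a quadrature rule of sufficient order on this cell
  // (c and quadrature_order: assumed members of the operator class)
  const auto& rule =
    QuadratureRules<double,dim>::rule(cell.geometry().type(), quadrature_order);
  for (const auto& qp : rule)
  {
    // evaluate all basis functions at the quadrature point
    std::vector<Range> phi(lfsu.size());
    lfsu.finiteElement().localBasis().evaluateFunction(qp.position(), phi);

    // evaluate the local solution u = sum_j x_j * phi_j
    double u = 0.0;
    for (std::size_t j = 0; j < lfsu.size(); ++j)
      u += x(lfsu,j) * phi[j];

    // accumulate r_i += c * u * phi_i * weight * |det J|
    const double factor =
      qp.weight() * cell.geometry().integrationElement(qp.position());
    for (std::size_t i = 0; i < lfsv.size(); ++i)
      r.accumulate(lfsv, i, c * u * phi[i] * factor);
  }
}

For the numerical differentiation mentioned above, PDELab provides mixin classes such as NumericalJacobianVolume, from which the local operator can derive to obtain a finite difference implementation of the corresponding Jacobian kernel.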

The user-provided LocalOperator only contains an implementation of the local residual contributions; as explained in Section 2.2.5, the remaining parts of the assembly algorithm are of a mostly problem-independent nature and can be provided by the framework in a generic manner. In PDELab, this job is performed by the grid operator, which applies the LocalOperator to a pair of (possibly constrained) ansatz and test function spaces. Moreover, it also handles the creation of matrices for the Jacobian, which are usually stored in a sparse format. The sparsity pattern depends on the mesh topology and the discretization scheme; matrix creation thus requires a mesh traversal to extract this information and set up the pattern.
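The pattern setup is again delegated to a kernel on the LocalOperator. As a sketch (modeled on the full volume coupling that PDELab ships for this purpose, under the same assumptions as the kernel example above), a pattern kernel that couples all local test and ansatz functions looks like this:

// tell PDELab to call the pattern_volume() kernel
static const bool doPatternVolume = true;

template<typename LFSU, typename LFSV, typename LocalPattern>
void pattern_volume(
  const LFSU& lfsu, const LFSV& lfsv,
  LocalPattern& pattern
) const
{
  // register every pair (i,j) of local test and ansatz functions,
  // i.e. matrix entry (i,j) may become nonzero during assembly
  for (std::size_t i = 0; i < lfsv.size(); ++i)
    for (std::size_t j = 0; j < lfsu.size(); ++j)
      pattern.addLink(lfsv, i, lfsu, j);
}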

The exact format of the matrix (block structure etc.) can again be chosen by a GridOperator parameter. Resuming our example from above, we can define the grid operator by

typedef GridOperator<
  GFS,GFS,              // ansatz and test function spaces
  LocalOperator,        // user-provided local integration kernels
  MatrixBackend,        // descriptor for matrix structure / implementation
  double,double,double, // numeric types for solution, residual and jacobian
  C,C                   // constraints containers for ansatz and test spaces
  > GO;

GO grid_operator(
  gfs,cg,
  gfs,cg,
  local_operator,
  matrix_backend
);

Using the grid operator, we can now easily create vectors and matrices and calculate residuals and Jacobians to feed into the algebraic solver:

typedef typename GO::Traits::Domain Vector;
Vector solution(gfs,0.0);
Vector residual(gfs,0.0);

typedef typename GO::Traits::Jacobian Matrix;
Matrix jacobian(grid_operator,0.0);

grid_operator.residual(solution,residual);
grid_operator.jacobian(solution,jacobian);

PDELab also contains components to solve the assembled algebraic problem (sequential and parallel linear solvers, preconditioners, a Newton solver) as well as time integrators for implicit and explicit methods and tools for exporting the solution to a VTK file, but since our focus is on problem assembly, we omit a description of that functionality at this point.