
8.1.1 Iterative Solution With Dirichlet-Neumann Scheme

In order to demonstrate our support for weakly coupled solution schemes, we have also implemented a version of the coupled Poisson-Poisson problem that uses a Dirichlet-Neumann fixed point iteration. This scheme alternates between solving the problem on each of the two subdomains, using the solution on the respective other subdomain to provide either Dirichlet or Neumann boundary data on the coupling interface.

A detailed mathematical description of this scheme can be found in [123]. In the following, we provide an outline of how this scheme can be implemented in Dune-Multidomain to demonstrate the large amount of support our framework provides for the development of advanced multi domain solution schemes.
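To fix ideas, one common way of writing such a scheme for two subdomains Ω_L and Ω_R with coupling interface Γ is the following sketch (in our own notation; the precise formulation, in particular the placement of the damping, may differ from [123] and from the implementation shown below): given an interface value λ^k, compute

\[
\begin{aligned}
-\Delta u_L^{k+1} &= f \ \text{in } \Omega_L, & u_L^{k+1} &= \lambda^k \ \text{on } \Gamma,\\
-\Delta u_R^{k+1} &= f \ \text{in } \Omega_R, & \partial_n u_R^{k+1} &= \partial_n u_L^{k+1} \ \text{on } \Gamma,\\
\lambda^{k+1} &= \alpha\, u_R^{k+1}\big|_\Gamma + (1-\alpha)\,\lambda^k,
\end{aligned}
\]

with a damping parameter α ∈ (0, 1); the exterior boundary conditions of the original problem remain unchanged.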

As a starting point, we assume that the simulation with a monolithic solver has already been set up as explained in the previous section. We will reuse the function spaces and subproblems created for that problem. However, we have to use a different vector backend tag for the MultiDomainGridFunctionSpace:

typedef Dune::PDELab::ISTLVectorBackend<
  Dune::PDELab::ISTLParameters::dynamic_blocking
> MDVBE;

With this tag, PDELab will create vectors and matrices with separate blocks for each subproblem:

\[
\begin{pmatrix} A_L & 0 \\ 0 & A_R \end{pmatrix}
\begin{pmatrix} u_L \\ u_R \end{pmatrix}
=
\begin{pmatrix} f_L \\ f_R \end{pmatrix} .
\]

As our problem only couples via boundary conditions (i.e. the right hand side), the off-diagonal blocks of the matrix are empty. With this matrix structure in place, we can directly access the individual blocks and for example pass A_L to an ISTL solver without having to copy any data. At the same time, Dune-Multidomain retains all information about the global problem structure and identifies corresponding DOFs on each side of the coupling interface.
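As an illustration of the ISTL part of that statement, the following stand-alone sketch shows what handing a single diagonal block to an ISTL Krylov solver looks like. Since the exact nested container type produced by the blocked backend is not spelled out here, the sketch assembles a small 1D Laplace matrix as a stand-in for A_L; all names and solver parameters are our own:

#include <iostream>

#include <dune/common/fmatrix.hh>
#include <dune/common/fvector.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/bvector.hh>
#include <dune/istl/operators.hh>
#include <dune/istl/preconditioners.hh>
#include <dune/istl/solvers.hh>

int main()
{
  typedef Dune::BCRSMatrix<Dune::FieldMatrix<double,1,1> > Matrix;
  typedef Dune::BlockVector<Dune::FieldVector<double,1> > Vector;

  // assemble a tridiagonal 1D Laplace matrix as a stand-in for A_L
  const std::size_t n = 100;
  Matrix A(n, n, 3*n, Matrix::row_wise);
  for (Matrix::CreateIterator row = A.createbegin(); row != A.createend(); ++row)
  {
    if (row.index() > 0)   row.insert(row.index()-1);
    row.insert(row.index());
    if (row.index() < n-1) row.insert(row.index()+1);
  }
  for (std::size_t i = 0; i < n; ++i)
  {
    A[i][i] = 2.0;
    if (i > 0)   A[i][i-1] = -1.0;
    if (i < n-1) A[i][i+1] = -1.0;
  }

  Vector x(n), b(n);
  x = 0.0;
  b = 1.0;

  // wrap the matrix block and solve with BiCGSTAB preconditioned by SSOR
  Dune::MatrixAdapter<Matrix,Vector,Vector> op(A);
  Dune::SeqSSOR<Matrix,Vector,Vector> prec(A, 1, 1.0);
  Dune::BiCGSTABSolver<Vector> solver(op, prec, 1e-10, 500, 1);

  Dune::InverseOperatorResult stat;
  solver.apply(x, b, stat);  // note: ISTL solvers overwrite the right hand side
  std::cout << "converged: " << stat.converged
            << ", iterations: " << stat.iterations << std::endl;
}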

As a first step, we need to create a local operator that calculates the residual contributions of the Neumann and Dirichlet coupling terms. For the purpose of this example, we have decided to apply both Neumann and Dirichlet boundary conditions on the coupling interface in a weak fashion; the implementation of the local operator can be found in Listing A.3. It is based on the default PDELab DG operator for convection-diffusion problems and was created by changing the skeleton integrals to coupling terms and adjusting them to assemble either Neumann or Dirichlet boundary conditions using the solution from the other subdomain as boundary data. All other integration kernels were simply deleted.
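To give an impression of what these coupling kernels boil down to, the following self-contained sketch shows the residual contribution at a single quadrature point of the coupling interface, assuming a Nitsche-type (interior penalty) treatment of the weak Dirichlet condition. It is purely schematic: the names, the scalar-valued signature and the sign conventions for the shared interface normal are our own and do not reproduce the actual Dune-Multidomain local operator interface (see Listing A.3 for that):

// Schematic sketch only -- not the Dune-Multidomain local operator interface.
#include <iostream>

enum class CouplingMode { Neumann, Dirichlet };

// Residual contribution tested with v at one interface quadrature point.
//   u, du_n : trial function value and normal derivative on this subdomain
//   g, dg_n : value and normal derivative of the other subdomain's solution,
//             used as boundary data on the coupling interface
//   v, dv_n : test function value and normal derivative
//   penalty : mesh-dependent penalty factor for the Nitsche terms
double couplingResidual(CouplingMode mode,
                        double u, double du_n,
                        double g, double dg_n,
                        double v, double dv_n,
                        double penalty)
{
  if (mode == CouplingMode::Neumann)
    // weak Neumann condition: impose the other subdomain's flux
    return -dg_n * v;
  // weak Dirichlet condition: consistency, symmetry and penalty terms
  // that enforce u = g on the interface
  return -du_n * v - (u - g) * dv_n + penalty * (u - g) * v;
}

int main()
{
  std::cout << couplingResidual(CouplingMode::Neumann,
                                1.0, 0.5, 0.8, 0.4, 1.0, 0.2, 10.0) << "\n"
            << couplingResidual(CouplingMode::Dirichlet,
                                1.0, 0.5, 0.8, 0.4, 1.0, 0.2, 10.0) << std::endl;
}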

We create two instances of this operator, one for each subdomain:

typedef ConvectionDiffusionDGNeumannDirichletCoupling<
  Params
> NDCoupling;                     // type of the coupling operator
NDCoupling nd_coupling_neumann(   // variant that applies a Neumann coupling
  CouplingMode::Neumann,
  params,
  ...
);
NDCoupling nd_coupling_dirichlet( // variant that applies a Dirichlet coupling
  CouplingMode::Dirichlet,
  params,
  ...
);

With these local operators in hand, we can then create two coupling descriptors.

In this case, we decide to couple the left subproblem to the right subproblem by means of Dirichlet conditions:

typedef MultiDomain::Coupling<
  LeftSubProblem,
  RightSubProblem,
  NDCoupling
> DirichletCoupling;
DirichletCoupling dirichlet_coupling(
  left_sp,
  right_sp,
  nd_coupling_dirichlet
);

Here, we omit the second coupling; it is an exact mirror of the first one, apart from the different local operator for the coupling (Neumann instead of Dirichlet).

Before we can create a GridOperator, we first have to construct a matrix backend that stores a separate estimate of the number of nonzero entries per row for each of the matrix blocks:

typedef std::array<std::array<std::size_t,2>,2> EntriesPerRow;
typedef istl::BCRSMatrixBackend<EntriesPerRow> MBE;
MBE mbe(
  {{
    {27,  0}, // number of nonzero entries for each
    { 0, 27}  // of the four large matrix blocks
  }});

For the Dirichlet-Neumann scheme we then need two separate GridOperators, one for each subdomain. Consequently, each operator assembles one subproblem and the associated coupling descriptor:

typedef MultiDomain::GridOperator<
  MultiGFS,MultiGFS,
  MBE,R,R,R,C,C,
  LeftSubProblem,
  DirichletCoupling
> LeftOperator;
LeftOperator left_operator(
  multigfs,multigfs,
  cg,cg,
  mbe,
  left_sp,
  dirichlet_coupling
);

We again omit the code for the second GridOperator. Each of these two grid operators only assembles contributions into the parts of the residual and the Jacobian matrix that belong to its corresponding subproblem.

In order to complete our simulation, we now need to set up a linear solver backend for the problem. For this purpose, we use the custom ISTL_SEQ_Subblock_Backend, which accepts the global system matrix containing both subproblems, but only operates on one block row of the matrix, corresponding to one subproblem:

typedef ISTL_SEQ_Subblock_Backend<
  SeqSSOR,
  BiCGSTABSolver
> LinearSolver_DN;
LinearSolver_DN linear_solver_dn_0(
  0,   // select the block row on which to operate (0 for left, 1 for right)
  ...  // solver parameters
);

Finally, we can create a pair of standard PDELab linear problem solvers, which take care of assembling residuals and Jacobians and of solving the assembled system by running the linear solver backend:

typedef Dune::PDELab::StationaryLinearProblemSolver<
  LeftOperator,
  LinearSolver_DN,
  V
> LeftPDESolver;
LeftPDESolver left_pde_solver(
  left_operator,      // GridOperator for left subproblem
  linear_solver_dn_0, // linear solver for corresponding matrix block
  ...                 // solver parameters
);

With all components in place, we are only left with implementing the actual fixed point iteration, which now becomes a very short loop that directly corresponds to the mathematical description of the scheme:

while (r / r_0 > rel_reduction && r > max_error && i < max_iterations)
{
  u_old = u;
  left_pde_solver.apply(u,!reassemble_jacobian_left && (i>0));
  u *= alpha;
  u.axpy(1.0-alpha,u_old);

  u_old = u;
  right_pde_solver.apply(u,!reassemble_jacobian_right && (i>0));
  u *= alpha;
  u.axpy(1.0-alpha,u_old);

  r = residual.two_norm();
  ++i;
}

Solver              h     inner reduction   damping   iterations   time (s)
Monolithic          2⁻⁷   –                 –         204          4.87
Neumann-Dirichlet   2⁻⁷   10⁻⁴              0.8       67           202
Dirichlet-Neumann   2⁻⁷   10⁻⁴              0.8       24           52.4
Dirichlet-Neumann   2⁻⁷   10⁻⁴              0.7       29           62.5
Monolithic          2⁻⁶   –                 –         105          0.559
Dirichlet-Neumann   2⁻⁶   10⁻⁴              0.8       17           8.07

Table 8.2 — Iteration counts and solution times of monolithic solver and Dirichlet-Neumann iteration for coupled Poisson problems. The solution time comprises the complete simulation (assembly and solver). Benchmark performed on hardware configuration B.2.

Here, we have assumed that a suitable start defect r_0 has been calculated outside of the loop; α ∈ (0, 1) is a damping parameter. A fully working implementation of this example can be found in the test directory of Dune-Multidomaingrid.
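Written out (in our notation), each damped half step of the loop above computes

\[
u \leftarrow \alpha\,\tilde u + (1-\alpha)\,u_{\text{old}},
\]

where ũ denotes the result of the subdomain solve applied to the current iterate; this is exactly what the sequence u *= alpha; u.axpy(1.0-alpha, u_old); implements.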

As with the monolithic solver, we were able to reuse a large number of building blocks for this implementation; the subproblems and their local operators did not have to be changed at all compared to the monolithic multi domain problem. At the same time, we were able to completely change the block structure of the linear algebra containers by simply changing a single template parameter on the otherwise unchanged function space. Finally, while this version of the problem involves more grid operators and solvers, that is a direct consequence of the fact that we need to solve two separate problems for the Dirichlet-Neumann iteration; moreover, the construction of all of these components is very straightforward as long as the user takes care to use the special linear solver backend. The only genuinely new code for this example was the local operator for the coupling, and even here we profited from the fact that the interface of that operator in Dune-Multidomain closely resembles a standard DG skeleton residual, allowing us to copy and paste most of the code from an existing DG operator implementation.

We have assessed the performance of this weakly coupled solver by comparing it with that of the monolithic solver from the previous section. We chose a combination of a continuous Q2 space and a DG2 space to demonstrate the versatility of our Dirichlet-Neumann implementation, which applies the coupling boundary conditions at the equation level instead of the linear algebra level. We solved the problem to a target defect reduction of 10⁻¹⁰; the Dirichlet-Neumann solver was run with different damping parameters for the iterative scheme, and we relaxed the precision of the subdomain solves to a relative reduction of 10⁻⁴ to improve its performance. The results can be found in Table 8.2. As is to be expected in such a simple setting, the monolithic solver greatly outperforms the iterative scheme because it has much more information about the coupling effects between the two subdomains. Keep in mind that for the Dirichlet-Neumann scheme, the table only lists the number of outer iterations; each of those iterations involved two linear solves, one for each subdomain, albeit with less precision than the linear solver in the monolithic approach.

Figure 8.1 — Stokes-Darcy problem solved using our example application built using Dune-Multidomain. Left: Domain with two free-flow channels from left to right and two impermeable areas in the porous medium. Center: Computational grid. Right: Pressure and velocity fields of an example simulation.

Looking at this example, we conclude that our framework has attained its goals of greatly simplifying the development of multi domain simulations, while at the same time offering the developer flexibility in the choice of solution scheme and providing competitive performance.