

$\tilde a_x(r,\theta) := a(x, x+rp(\theta))$ can be written as
$$\tilde a_x(r,\theta) = \sum_{|\alpha|\ge 2} \frac{1}{\alpha!}\,(\partial^\alpha f)(x)\, p(\theta)^\alpha\, r^{|\alpha|-2}\,[\tilde c_x(r,\theta)]^{-2}\,[J(x+rp(\theta))]^{-1},$$
which is clearly a differentiable function in the polar domain. This proves that $\tilde k^K_{2,x}(r,\theta) := k^K_2(x, x+rp(\theta))$ is a smooth function in the polar domain with
$$\lim_{r\to 0} \tilde k^K_{2,x}(r,\theta) = \frac{1}{2\pi}\sum_{|\alpha|=2}\frac{1}{\alpha!}\,(\partial^\alpha f)(x)\, p(\theta)^\alpha\,[\tilde c_x(0,\theta)]^{-3}\,[J(x)]^{-1}.$$

5.2.3 The locally corrected quadrature scheme

The decomposition (5.8) is our starting point for a second decomposition that truncates the singular part to a function with compact support. To this end we introduce an $n$-times continuously differentiable, symmetric cut-off function $\chi_{a,b}:\mathbb{R}\to[0,1]$ for some constants $0<a<b$ such that
$$0\le \chi_{a,b}(t)\le 1, \quad t\in\mathbb{R}, \qquad\text{and}\qquad \chi_{a,b}(t)=\begin{cases}1, & |t|<a,\\ 0, & |t|>b.\end{cases}$$
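For illustration, one possible construction of such a cut-off function is the classical smooth bump transition; this is a minimal sketch, not necessarily the choice made in the text:

```python
import numpy as np

def chi(t, a, b):
    """A C^infinity cut-off function: 1 for |t| < a, 0 for |t| > b, and a smooth,
    monotone transition on a <= |t| <= b.

    One standard construction (not necessarily the one used in the text): blend
    with the bump function f(s) = exp(-1/s) on the transition zone.
    """
    t = np.abs(np.asarray(t, dtype=float))
    s = (b - t) / (b - a)                     # s = 1 at |t| = a, s = 0 at |t| = b
    def f(u):
        return np.where(u > 0.0, np.exp(-1.0 / np.maximum(u, 1e-300)), 0.0)
    return f(s) / (f(s) + f(1.0 - s))         # equals 1 for |t| < a, 0 for |t| > b

# chi(0.75, a=0.5, b=1.0) lies in the middle of the transition zone and equals 0.5.
```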

A typical example of such a cut-off function is shown in Figure 5.1. With the help of such a cut-off function we define a second decomposition

$$K(x,y) = K_{\mathrm{local}}(x,y) + K_{\mathrm{global}}(x,y), \qquad (5.26)$$
where
$$K_{\mathrm{local}}(x,y) := \chi_{a,b}(|x-y|)\,\frac{1}{|x-y|}\,K_2(x,y), \qquad (5.27)$$
$$K_{\mathrm{global}}(x,y) := K_1(x,y) + \bigl[1-\chi_{a,b}(|x-y|)\bigr]\,\frac{1}{|x-y|}\,K_2(x,y). \qquad (5.28)$$
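As a sketch, the splitting (5.27)–(5.28) translates directly into code once the smooth parts $K_1$, $K_2$ and a one-argument cut-off are available; all callables below are assumptions for illustration (for instance `chi` could be a partial application of the sketch above):

```python
import numpy as np

def K_local(x, y, K2, chi):
    """Local part (5.27): chi_{a,b}(|x-y|) * K2(x, y) / |x-y|, for x != y.
    It vanishes for |x-y| > b, so it has compact support around the diagonal."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return chi(r) / r * K2(x, y)

def K_global(x, y, K1, K2, chi):
    """Global part (5.28): K1(x, y) plus the portion of the singular term cut
    away by 1 - chi_{a,b}; smooth, since 1 - chi vanishes near the diagonal."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if r < 1e-15:                 # on the diagonal only K1 contributes
        return K1(x, y)
    return K1(x, y) + (1.0 - chi(r)) / r * K2(x, y)
```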

Figure 5.1: An example of a cut-off function $\chi_{a,b}$.

Hence we can write (5.1) as the sum of $A_g$ and $A_l$, where

$$(A_g\psi)(x) := \int_{\mathbb{R}^2} K_{\mathrm{global}}(x,y)\,J(y)\,\psi(y)\, dy, \quad x\in\mathbb{R}^2, \qquad (5.29)$$
$$(A_l\psi)(x) := \int_{\mathbb{R}^2} \chi_{a,b}(|x-y|)\,\frac{1}{|x-y|}\,K_2(x,y)\,J(y)\,\psi(y)\, dy, \quad x\in\mathbb{R}^2. \qquad (5.30)$$
The kernel function of the global operator is smooth by assumption, so its smoothness is limited only by the smoothness of the cut-off function $\chi_{a,b}$. Applying the quadrature scheme to the global operator we obtain the operator
$$(A_{g,h}\psi)(x) := Q_h^2\bigl[K_{\mathrm{global}}(x,\cdot)\,J\,\psi\bigr], \quad x\in\mathbb{R}^2, \qquad (5.31)$$
which we can write as the infinite sum

$$(A_{g,h}\psi)(x) = h^2 \sum_{j\in\mathbb{Z}^2} K_{\mathrm{global}}(x, hj)\,J(hj)\,\psi(hj), \quad x\in\mathbb{R}^2. \qquad (5.32)$$
For the treatment of the local operator (5.30) we introduce some additional notation. To ease the understanding of the following rather technical definitions, the situation is illustrated in Figure 5.2. Let $j_x^{ll}, j_x^{ur}\in\mathbb{Z}^2$ denote the indices of the lower left and upper right corner points of the smallest rectangle $R_x$ in the grid that contains $B_b(x)$, the open ball of radius $b$ and centre $x$. More formally, we define the two index sets

$$J_x^{ll} := \{j\in\mathbb{Z}^2 : hj\le y \text{ for all } y\in B_b(x)\}$$
and
$$J_x^{ur} := \{j\in\mathbb{Z}^2 : y\le hj \text{ for all } y\in B_b(x)\}.$$
Then $j_x^{ll}\in J_x^{ll}$ is the index such that $j\le j_x^{ll}$ for all $j\in J_x^{ll}$, and in an analogous fashion $j_x^{ur}\in J_x^{ur}$ is the index such that $j_x^{ur}\le j$ for all $j\in J_x^{ur}$. Hence $R_x = [hj_x^{ll}, hj_x^{ur}]\subset\mathbb{R}^2$. The function
$$\Lambda_x(y) := \sqrt{\chi_{a,b}(|x-y|)}\,J(y)\,\psi(y), \quad y\in\mathbb{R}^2, \qquad (5.33)$$


has support in $B_b(x)\subset R_x$, so it possesses a uniquely determined periodic extension that coincides with $\Lambda_x$ on $R_x$. We approximate this periodic extension by the uniquely determined two-dimensional trigonometric interpolation polynomial.
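In code the corner indices of $R_x$ reduce to floor/ceil operations: since $hj\le y$ for all $y\in B_b(x)$ means $hj\le x-b$ componentwise, and similarly for the upper corner, a minimal sketch reads as follows (the function name is illustrative, not from the text):

```python
import numpy as np

def corner_indices(x, h, b):
    """Indices of the lower-left / upper-right corners of the smallest grid
    rectangle R_x = [h*j_ll, h*j_ur] containing the ball B_b(x).

    Illustrative sketch: j_ll is the largest index j with h*j <= x - b
    componentwise, j_ur the smallest index j with x + b <= h*j.
    """
    x = np.asarray(x, dtype=float)
    j_ll = np.floor((x - b) / h).astype(int)   # largest grid index below the ball
    j_ur = np.ceil((x + b) / h).astype(int)    # smallest grid index above the ball
    return j_ll, j_ur

# Example: x = (0.3, 0.7), grid spacing h = 0.25, cut-off radius b = 0.6
j_ll, j_ur = corner_indices((0.3, 0.7), h=0.25, b=0.6)
# R_x = [h*j_ll, h*j_ur] then contains B_0.6((0.3, 0.7)).
```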

Figure 5.2: The setting for the local interpolation scheme: for a point $x$ (green point), we depict the ball $B_b(x)$ (gray) on which the function $\Lambda_x$ is approximated. The grid points (red points) for which the locally corrected weights are computed are the points inside the ball. The rectangle $R_x$ is the blue square in the middle; it is determined by the lower left corner point $hj_x^{ll}$ (enlarged blue point) and the upper right corner point $hj_x^{ur}$ (small blue point). The set of all enlarged points inside $R_x$ is used for the interpolation.

For a function $\psi : R_x\to\mathbb{C}$ we define the interpolation operator
$$(P_x\psi)(y) := \sum_{j_x^{ll}\le j< j_x^{ur}} \psi(hj)\,\ell_{j-j_x^{ll}}(y-hj_x^{ll}), \quad y\in\mathbb{R}^2, \qquad (5.34)$$
where $\ell_j(y)$ for $0\le j< L$ denotes the Lagrange basis for the square $[0, hL]$, where we have set $L := j_x^{ur}-j_x^{ll}$.

Example 5.3. In the case that the number of points used for the trigonometric interpolation is even, i.e. $L = (L, L)\in 2\mathbb{Z}^2$, the Lagrange basis is given through
$$\ell_j(y) := \tilde\ell_{j_1}\!\Bigl(\tfrac{2\pi}{Lh}\,y_1\Bigr)\,\tilde\ell_{j_2}\!\Bigl(\tfrac{2\pi}{Lh}\,y_2\Bigr), \quad 0\le j\le L-1,$$
where $\tilde\ell_k$, for $k = 0,\dots,L-1$, denotes the one-dimensional trigonometric Lagrange basis for the interval $[0,2\pi]$, cf. [35, formula (11.13)].
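For orientation, the following sketch evaluates a trigonometric Lagrange basis of the standard form used for interpolation at an even number $L$ of equispaced points on $[0,2\pi)$; whether this coincides exactly with the basis of [35, formula (11.13)] should be checked against that reference.

```python
import numpy as np

def trig_lagrange(k, t, L):
    """Trigonometric Lagrange basis for L (even) equispaced nodes t_j = 2*pi*j/L:
    a trigonometric polynomial with trig_lagrange(k, t_j, L) = 1 if j == k else 0.

    Standard form assumed here; verify against [35, formula (11.13)].
    """
    t = np.asarray(t, dtype=float)
    tk = 2.0 * np.pi * k / L
    m = np.arange(1, L // 2)                                  # m = 1, ..., L/2 - 1
    s = 1.0 + 2.0 * np.cos(np.multiply.outer(m, t - tk)).sum(axis=0) \
            + np.cos((L // 2) * (t - tk))
    return s / L

# Quick check: the basis is 1 at its own node and 0 at all other nodes.
L = 8
nodes = 2.0 * np.pi * np.arange(L) / L
# np.round(trig_lagrange(3, nodes, L)) -> [0, 0, 0, 1, 0, 0, 0, 0]
```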

We write the local operator in the form
$$(A_l\psi)(x) = \int_{B_b(x)} \sqrt{\chi_{a,b}(|x-y|)}\,\frac{1}{|x-y|}\,K_2(x,y)\,\Lambda_x(y)\, dy,$$
where $\Lambda_x$ is given through (5.33). Replacing $\Lambda_x$ by its trigonometric interpolation polynomial we define the operator
$$(A_{l,h}\psi)(x) := \int_{B_b(x)} \sqrt{\chi_{a,b}(|x-y|)}\,\frac{1}{|x-y|}\,K_2(x,y)\,(P_x\Lambda_x)(y)\, dy,$$
which we can write in the form
$$(A_{l,h}\psi)(x) = \sum_{j_x^{ll}\le j< j_x^{ur}} \tilde\alpha_j(x)\,J(hj)\,\psi(hj), \qquad (5.35)$$
where
$$\tilde\alpha_j(x) := \sqrt{\chi_{a,b}(|x-hj|)}\int_{B_b(x)} \sqrt{\chi_{a,b}(|x-y|)}\,\frac{1}{|x-y|}\,K_2(x,y)\,\ell_{j-j_x^{ll}}(y-hj_x^{ll})\, dy$$
denote the locally corrected weights, which have to be computed by numerical integration. We set $\tilde\alpha_j(x) = 0$ for $j\notin\{i\in\mathbb{Z}^2 : j_x^{ll}\le i< j_x^{ur}\}$ to write (5.35) in the form
$$(A_{l,h}\psi)(x) = \sum_{j\in\mathbb{Z}^2} \tilde\alpha_j(x)\,J(hj)\,\psi(hj), \quad x\in\mathbb{R}^2.$$
Using the function $p$ introduced above, cf. (5.9), we can write, after the change of variables $y = x+rp(\theta)$,
$$\tilde\alpha_j(x) = \sqrt{\chi_{a,b}(|x-hj|)}\int_0^{2\pi}\!\!\int_0^b \sqrt{\chi_{a,b}(r)}\,K_2(x, x+rp(\theta))\,\ell_{j-j_x^{ll}}(x+rp(\theta)-hj_x^{ll})\, dr\, d\theta.$$
We observe that the singularity is completely removed through the Jacobian of this change of variables. The rather unmotivated use of the square root of the cut-off function $\chi_{a,b}$ in the definition of $\Lambda_x$ can now be justified. The assumption (W2) ensures that the integrand is a smooth function in the polar domain, whereas the cut-off function allows us to interpret the integrand as a periodic function in the polar domain with $[-b,b]\times[0,2\pi]$ as its domain of periodicity. Thus the application of the composite trapezoidal rule, both in the radial and the angular direction, yields an efficient high-order integration scheme.
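As an illustration, a minimal sketch of how one locally corrected weight might be computed with the composite trapezoidal rule in polar coordinates; the callables `chi`, `K2` and `lagrange` and their signatures are assumptions for the sketch, not definitions from the text:

```python
import numpy as np

def local_weight(x, j, h, j_ll, b, chi, K2, lagrange, n_r=32, n_theta=64):
    """Sketch of one locally corrected weight, using the composite trapezoidal
    rule in polar coordinates y = x + r*p(theta), p(theta) = (cos, sin).

    chi(t), K2(x, y) and lagrange(idx, y) are assumed user-supplied callables
    (hypothetical signatures). The Jacobian r cancels the 1/|x-y| singularity,
    so no singular factor appears below; K2(x, y) is assumed to return its
    limit value when y equals x.
    """
    x = np.asarray(x, dtype=float)
    j = np.asarray(j)
    j_ll = np.asarray(j_ll)
    thetas = 2.0 * np.pi * np.arange(n_theta) / n_theta       # periodic in theta
    rs = np.linspace(0.0, b, n_r + 1)                         # radial nodes
    w_r = np.full(n_r + 1, b / n_r)
    w_r[[0, -1]] *= 0.5                                       # trapezoidal weights
    total = 0.0
    for theta in thetas:
        p = np.array([np.cos(theta), np.sin(theta)])
        for r, wr in zip(rs, w_r):
            y = x + r * p
            total += wr * np.sqrt(chi(r)) * K2(x, y) * lagrange(j - j_ll, y - h * j_ll)
    total *= 2.0 * np.pi / n_theta                            # trapezoidal in theta
    return np.sqrt(chi(np.linalg.norm(x - h * j))) * total
```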

The operator $A$ is approximated by
$$(A_h\psi)(x) := (A_{g,h}\psi)(x) + (A_{l,h}\psi)(x), \quad x\in\mathbb{R}^2,$$
which again is an infinite sum of the form
$$(A_h\psi)(x) = \sum_{j\in\mathbb{Z}^2} K_{h,j}(x)\,J(hj)\,\psi(hj), \quad x\in\mathbb{R}^2, \qquad (5.39)$$
where
$$K_{h,j}(x) := h^2 K_{\mathrm{global}}(x, hj) + \tilde\alpha_j(x), \quad j\in\mathbb{Z}^2. \qquad (5.40)$$
In analogy to the convergence results summarised in Lemma 5.1, we prove a similar result for the weakly singular kernels that shows super-algebraic convergence.

Lemma 5.4. Let $A$ be given through (5.1), where $K :\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{C}$ denotes a weakly singular kernel function that in addition satisfies the following condition: there exists a constant $c>0$ such that
$$|\partial^\alpha K(x,y)| \le \frac{c}{(1+|x-y|)^2} \quad\text{for } |x-y|\ge 1, \qquad (5.41)$$
for all multi-indices $\alpha$, where the constant $c$ may depend on $\alpha$. Let $\psi\in BC^q(\mathbb{R}^2)$ with $q>0$ and $f\in BC(\mathbb{R}^2)$. Then the operator $A_h$ for $h>0$, given through (5.2) with $K_{h,j}$ given through (5.40), converges pointwise to $A$ and the error can be estimated through

$$\|A\psi - A_h\psi\|_{BC(\mathbb{R}^2)} \le \|A_g\psi - A_{g,h}\psi\| + \|A_l\psi - A_{l,h}\psi\|,$$
where
$$\|A_g\psi - A_{g,h}\psi\| \le C_1\,\|K_{\mathrm{global}}(x,\cdot)\|_{BC^m_2(\mathbb{R}^2)}\,\|J\|_{BC^m(\mathbb{R}^2)}\,\|\psi\|_{BC^q_m(\mathbb{R}^2)}\,h^m \qquad (5.42)$$
and
$$\|A_l\psi - A_{l,h}\psi\| \le C_2\,\sup_{x\in\mathbb{R}^2}\int_{B_b(x)} |K_3(x,y)|\, dy\;\|\Lambda_x - P_x\Lambda_x\|_{BC(\mathbb{R}^2)} \qquad (5.43)$$
with
$$K_3(x,y) := \sqrt{\chi_{a,b}(|x-y|)}\,\frac{1}{|x-y|}\,K_2(x,y) \qquad (5.44)$$
and the constants $C_1, C_2$ depend only on $m$ and $q$.

Proof. The assumption (5.41) ensures that $K_{\mathrm{global}}\in BC^m_2(\mathbb{R}^2)$ for all $m\ge 0$. Hence we can apply the same proof as in Lemma 5.1 to get the estimate (5.42).

For the local operator we estimate
$$|A_l\psi(x)-A_{l,h}\psi(x)| \le \int_{B_b(x)} |K_3(x,y)|\,|\Lambda_x(y)-(P_x\Lambda_x)(y)|\, dy \le \sup_{x\in\mathbb{R}^2}\int_{B_b(x)} |K_3(x,y)|\, dy\;\|\Lambda_x - P_x\Lambda_x\|_{BC(\mathbb{R}^2)} \le C\,\|\Lambda_x - P_x\Lambda_x\|_{BC(\mathbb{R}^2)}$$
for some constant $C>0$ independent of $x$. Now it follows from standard error estimates for smooth periodic functions that trigonometric interpolation yields a super-algebraically convergent scheme with respect to the mesh size $h$ for a fixed cut-off radius $b$ and a smooth cut-off function $\chi_{a,b}$. The assumption on the weakly singular kernel, cf. Lemma 1.15 for the details, ensures that $\int_{B_b(x)} |K_3(x,y)|\, dy$ is bounded uniformly in $x\in\mathbb{R}^2$.

Remark 5.5. The computation of the locally corrected weights is the dominating cost in the overall scheme. We therefore think it is best to limit the number of points for which the locally corrected weights are computed to a fixed number. A radius of $2h$ or $3h$, which means that 9 or 21 weights are computed, seems to be a good choice.

Chapter 6

Nyström methods for rough surface scattering

Using the parametrisation (1.8) of the scattering surface $\Gamma_f$ we can write the boundary integral equation (18) as an integral equation on $\mathbb{R}^2$. Setting
$$\psi(x) := \varphi(x, f(x)) \quad\text{and}\quad \phi_z(x) := -2\,G((x, f(x)), z) \qquad (6.1)$$
for some source point $z\in D_f$, we get an equation of the form
$$\psi(x) + (W\psi)(x) = \phi_z(x), \quad x\in\mathbb{R}^2, \qquad (6.2)$$
where $W$ denotes the integral operator

$$(W\psi)(x) := \int_{\mathbb{R}^2} k(x,y)\,J(y)\,\psi(y)\, dy, \quad x\in\mathbb{R}^2, \qquad (6.3)$$
with kernel function
$$k(x,y) := 2\,\frac{\partial G(\mathbf{x},\mathbf{y})}{\partial\nu(\mathbf{y})} - 2\mathrm{i}\eta\, G(\mathbf{x},\mathbf{y}), \quad x\ne y, \qquad (6.4)$$
for $\mathbf{x} = (x, f(x))$, $\mathbf{y} = (y, f(y))$ and surface area element $J(y) = \sqrt{1+|\nabla f(y)|^2}$. Introducing the operators

$$(W_\xi\psi)(x) := \int_{\mathbb{R}^2} k_\xi(x,y)\,J(y)\,\psi(y)\, dy, \quad x\in\mathbb{R}^2, \qquad (6.5)$$
for $\xi\in\{S, K\}$ with kernel functions $k_S$ and $k_K$ given through (5.6) and (5.7), we can write the operator (6.3) in the form
$$W = W_K - \mathrm{i}\eta\, W_S.$$

We approximate both operators $W_K$ and $W_S$ as described in the previous chapter, i.e. the integral operator $W$ is approximated by
$$(W_h\psi)(x) := \sum_{j\in\mathbb{Z}^2} k_{h,j}(x)\,J(hj)\,\psi(hj), \quad x\in\mathbb{R}^2, \qquad (6.6)$$
where
$$k_{h,j}(x) := \Bigl[h^2 k^K_{\mathrm{global}}(x, hj) - \mathrm{i}\eta\, h^2 k^S_{\mathrm{global}}(x, hj)\Bigr] + \Bigl[\tilde\alpha^K_j(x) - \mathrm{i}\eta\,\tilde\alpha^S_j(x)\Bigr]. \qquad (6.7)$$
Now one approximates the solution of equation (6.2) by the solution of the equation

$$\psi_h(x) + (W_h\psi_h)(x) = \phi_z(x), \quad x\in\mathbb{R}^2.$$
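In code, the weight (6.7) is just the $h^2$-scaled global kernel parts of the two operators combined with the two families of locally corrected weights; a minimal sketch with hypothetical callables (`kK_glob`, `kS_glob`, `alphaK`, `alphaS` are assumed names, not from the text):

```python
def combined_weight(x, j, h, eta, kK_glob, kS_glob, alphaK, alphaS):
    """Combined Nystrom weight k_{h,j}(x) as in (6.7) for W = W_K - i*eta*W_S.

    kK_glob(x, y) / kS_glob(x, y) evaluate the global kernel parts at y = h*j,
    alphaK(x, j) / alphaS(x, j) the locally corrected weights (hypothetical
    signatures, assumed user-supplied).
    """
    y = (h * j[0], h * j[1])
    return (h**2 * (kK_glob(x, y) - 1j * eta * kS_glob(x, y))
            + alphaK(x, j) - 1j * eta * alphaS(x, j))
```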

In complete analogy to the Nyström method for the case of finite intervals one proves the following theorem.

Lemma 6.1. For some $h>0$ let $\psi_h$ be a solution of
$$\psi_h(x) + \sum_{j\in\mathbb{Z}^2} k_{h,j}(x)\,J(hj)\,\psi_h(hj) = \phi_z(x), \quad x\in\mathbb{R}^2. \qquad (6.8)$$
Then the sequence $\psi^h = (\psi^h_j)_{j\in\mathbb{Z}^2}$ with $\psi^h_j := \psi_h(hj)$, $j\in\mathbb{Z}^2$, solves the infinite set of equations
$$\psi^h_i + \sum_{j\in\mathbb{Z}^2} k_{h,j}(hi)\,J(hj)\,\psi^h_j = \phi_z(hi), \quad i\in\mathbb{Z}^2. \qquad (6.9)$$
Conversely, if the sequence $\psi^h = (\psi^h_j)_{j\in\mathbb{Z}^2}$ is a solution of (6.9), we get a solution of (6.8) through
$$\psi_h(x) := \phi_z(x) - \sum_{j\in\mathbb{Z}^2} k_{h,j}(x)\,J(hj)\,\psi^h_j, \quad x\in\mathbb{R}^2, \qquad (6.10)$$
which agrees with the sequence $(\psi^h_j)_{j\in\mathbb{Z}^2}$ at the set of quadrature points. This interpolation function is called the Nyström interpolant.

Proof. The first part is trivial. For the second part one can argue as follows. For a sequence $(\psi^h_j)_{j\in\mathbb{Z}^2}$ that solves (6.9) we see that the function $\psi_h$ given through (6.10) takes the values
$$\psi_h(hi) = \phi_z(hi) - \sum_{j\in\mathbb{Z}^2} k_{h,j}(hi)\,J(hj)\,\psi^h_j = \psi^h_i, \quad i\in\mathbb{Z}^2,$$
where the second equality follows from (6.9). Inserting this, together with (6.10), into (6.8) shows that $\psi_h$ solves the equation (6.8).
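For a finite (truncated) set of quadrature points the Nyström interpolant (6.10) is straightforward to evaluate; the following sketch assumes hypothetical callables `phi_z`, `k_h` and `J`, together with the solved grid values at the corresponding points:

```python
def nystrom_interpolant(x, phi_z, k_h, J, points, values):
    """Evaluate the Nystrom interpolant (6.10) at an arbitrary point x.

    phi_z(x): right-hand side; k_h(x, p): quadrature weight k_{h,j}(x) for the
    grid point p = h*j; J(p): surface element; points/values: the (truncated)
    quadrature points and the solved grid values psi_j. Hypothetical signatures.
    """
    s = sum(k_h(x, p) * J(p) * v for p, v in zip(points, values))
    return phi_z(x) - s
```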

Remark 6.2. The system (6.9) can be written as a linear equation in $\ell^2(\mathbb{Z}^2)$, i.e.
$$\psi^h + \widehat{W}_h\psi^h = \phi^{z,h}, \qquad (6.11)$$
where $\phi^{z,h} = (\phi^{z,h}_j)_{j\in\mathbb{Z}^2}$ with $\phi^{z,h}_j := \phi_z(hj)$ for $j\in\mathbb{Z}^2$ and where we have introduced the operator
$$\widehat{W}_h : \ell^2(\mathbb{Z}^2)\to\ell^2(\mathbb{Z}^2), \quad \psi \mapsto \Bigl(\sum_{j\in\mathbb{Z}^2} k_{h,j}(hi)\,J(hj)\,\psi_j\Bigr)_{i\in\mathbb{Z}^2}.$$


Though it is clear that we can never implement this infinite linear system on a computer, the approach is interesting from a theoretical point of view. In the case of scattering by one-dimensional rough surfaces, some results on the convergence of the Nyström method were proven in [2], [40] and [41].

6.1 Method I: Discretisation-truncation

To get a finite dimensional linear system that we can actually implement on a computer we restrict the quadrature points to the finite set
$$\{hj : -N \le j \le N-1\}, \qquad (6.12)$$
where we have set
$$N := (n, n)\in\mathbb{N}^2 \qquad (6.13)$$
for some $n\in\mathbb{N}$. Thus instead of using the solution of (6.8) as an approximation to the true solution, we approximate the solution of (6.2) by the solution of
$$\psi_{h,\varrho}(x) + (W_{h,\varrho}\psi_{h,\varrho})(x) = \phi_z(x), \quad x\in\mathbb{R}^2,$$
where we have set
$$\varrho := h\cdot n \qquad (6.14)$$
and
$$(W_{h,\varrho}\psi)(x) := \sum_{j=-N}^{N-1} k_{h,j}(x)\,J(hj)\,\psi(hj), \quad x\in\mathbb{R}^2. \qquad (6.15)$$
The meaning of the parameter $\varrho$ will become clear in the next section. In the same manner as before one proves the following theorem.

Lemma 6.3. For some $h>0$ and $n\in\mathbb{N}$ let $\psi_{h,\varrho}$ be a solution of
$$\psi_{h,\varrho}(x) + \sum_{j=-N}^{N-1} k_{h,j}(x)\,J(hj)\,\psi_{h,\varrho}(hj) = \phi_z(x), \quad x\in\mathbb{R}^2, \qquad (6.16)$$
where $N$ and $\varrho$ are given through (6.13) and (6.14). Then the two-dimensional array $\psi = (\psi_j)_{j=-N,\dots,N-1}\in\mathbb{C}^{(2n)\times(2n)}$ with $\psi_j := \psi_{h,\varrho}(hj)$ for $j = -N,\dots,N-1$ solves the finite dimensional system
$$\psi_i + \sum_{j=-N}^{N-1} k_{h,j}(hi)\,J(hj)\,\psi_j = \phi_z(hi), \quad i = -N,\dots,N-1. \qquad (6.17)$$
Conversely, if the array $\psi = (\psi_j)_{j=-N,\dots,N-1}$ is a solution of (6.17), we get a solution of (6.16) through
$$\psi_{h,\varrho}(x) := \phi_z(x) - \sum_{j=-N}^{N-1} k_{h,j}(x)\,J(hj)\,\psi_j, \quad x\in\mathbb{R}^2. \qquad (6.18)$$

Remark 6.4. We note that the linear system (6.17) is simply a truncated version of the infinite linear system (6.9).
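To make the truncation concrete, here is a minimal sketch that assembles and solves the dense system (6.17) with a direct solver; `k_h`, `J` and `phi_z` are assumed user-supplied callables (hypothetical signatures), and the resulting grid values can then be inserted into the Nyström interpolant (6.18).

```python
import numpy as np
from itertools import product

def solve_truncated_system(n, h, k_h, J, phi_z):
    """Assemble and solve the truncated Nystrom system (6.17).

    Sketch with hypothetical callables: k_h(i, j) returns the weight k_{h,j}(h*i)
    for index pairs i, j in {-n, ..., n-1}^2, J(p) the surface element and
    phi_z(p) the right-hand side at a point p in R^2. The dense direct solve
    is only feasible for moderate n, since there are (2n)^2 unknowns.
    """
    idx = list(product(range(-n, n), repeat=2))        # all indices -N <= j <= N-1
    m = len(idx)                                       # m = (2n)^2 unknowns
    A = np.eye(m, dtype=complex)                       # identity from the psi_i term
    rhs = np.empty(m, dtype=complex)
    for a, i in enumerate(idx):
        rhs[a] = phi_z(h * np.asarray(i, dtype=float))
        for b, j in enumerate(idx):
            A[a, b] += k_h(i, j) * J(h * np.asarray(j, dtype=float))
    psi = np.linalg.solve(A, rhs)                      # grid values psi_j
    return dict(zip(idx, psi))                         # map index j -> psi_j
```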