
3.3 The linearized operator L

3.3.2 Fredholm theory and spectral analysis

has a solution $\mathbf w = (w,\zeta) \in Y_\eta$. Since $(sI-\mathcal L)\mathbf u = \mathbf r \in X^1_\eta$, we obtain by integration by parts, for all $\varphi \in C_0^\infty(\mathbb R,\mathbb R^m)$,
\[
\int_{\mathbb R} u_{xx}(x)\varphi_x(x)\,dx
= \int_{\mathbb R} A^{-1}\bigl[su - cu_x - S_\omega u - Df(v_\star)u - r\bigr](x)\,\varphi_x(x)\,dx
\]
\[
= -\int_{\mathbb R} A^{-1}\bigl[su_x - cu_{xx} - S_\omega u_x - Df(v_\star)u_x - D^2f(v_\star)[v_{\star,x},u] - r_x\bigr](x)\,\varphi(x)\,dx.
\]
Thus $u_x \in H^1_\eta \cap H^2_{\mathrm{loc}}$ with
\[
u_{xxx} = A^{-1}\bigl[su_x - cu_{xx} - S_\omega u_x - Df(v_\star)u_x - D^2f(v_\star)[v_{\star,x},u] - r_x\bigr] \in L^2_\eta.
\]
Therefore, $u_x \in H^2_\eta$ solves

\[
(sI-\mathcal L)\begin{pmatrix}u_x\\0\end{pmatrix}
=\begin{pmatrix}(sI-L)u_x\\0\end{pmatrix}
=\begin{pmatrix}\partial_x\bigl[su-Au_{xx}-cu_x-S_\omega u-Df(v_\star)u\bigr]+D^2f(v_\star)[v_{\star,x},u]\\0\end{pmatrix}
=\begin{pmatrix}r_x+D^2f(v_\star)[v_{\star,x},u]\\0\end{pmatrix}.
\]
Since $\mathbf w=(w,\zeta)$ is the unique solution of (3.37), we conclude $w=u_x$, $\zeta=0$. This proves i). Now ii) follows from (3.36) by applying Lemma 3.10 to the equation
\[
(sI-\mathcal L)\begin{pmatrix}u_x\\0\end{pmatrix}
=\begin{pmatrix}r_x+D^2f(v_\star)[v_{\star,x},u]\\0\end{pmatrix}.
\]

In addition, let us consider the operator

\[
L_\eta: H^2\to L^2,\qquad u\mapsto \eta\,L(\eta^{-1}u).
\]
A straightforward calculation shows that $L_\eta$ can be written as a second order differential operator on $L^2$,
\[
L_\eta u = A u_{xx} + B_\mu u_x + C_\mu u,
\]
with coefficients given by
\[
B_\mu(x) = cI - 2\mu\frac{x}{\sqrt{x^2+1}}\,A,
\]
\[
C_\mu(x) = S_\omega + Df(v_\star(x)) + \mu^2 A\,\frac{x^2}{x^2+1} - c\mu I\,\frac{x}{\sqrt{x^2+1}} - \mu A\left(\frac{1}{\sqrt{x^2+1}} - \frac{x^2}{(x^2+1)^{3/2}}\right).
\]
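The following SymPy sketch verifies the weight-generated terms of this conjugation; it assumes the convention $\eta(x)=e^{\mu\sqrt{x^2+1}}$, so that $\eta'/\eta=\mu x/\sqrt{x^2+1}$ (with the opposite sign convention for the exponent the first-order coefficient changes sign accordingly).

```python
import sympy as sp

x, mu = sp.symbols('x mu', real=True, positive=True)
U0, U1, U2 = sp.symbols('U0 U1 U2')           # placeholders for u, u_x, u_xx
u = sp.Function('u')(x)
eta = sp.exp(mu * sp.sqrt(x**2 + 1))          # assumed weight function

# eta * d^2/dx^2 (eta^{-1} u): the second-order part of  eta L(eta^{-1} u)
expr = sp.expand(eta * sp.diff(u / eta, x, 2))
expr = (expr.subs(sp.Derivative(u, (x, 2)), U2)
            .subs(sp.Derivative(u, x), U1)
            .subs(u, U0))

print(sp.simplify(expr.coeff(U2)))   # 1                        -> coefficient of u_xx
print(sp.simplify(expr.coeff(U1)))   # -2*mu*x/sqrt(x**2+1)     -> weight term of B_mu
print(sp.simplify(expr.coeff(U0)))   # (eta'/eta)^2 - (eta'/eta)' -> mu-terms of C_mu
```

The remaining contributions, $cI$ in $B_\mu$ and $-c\mu I\,x/\sqrt{x^2+1}$ in $C_\mu$, arise in the same way from the first-order term $\eta\,c\,\partial_x(\eta^{-1}u)$.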

The next step is to introduce the map
\[
\psi: X_\eta \to L^2_\eta\times\mathbb R^2,\qquad \begin{pmatrix}u\\\rho\end{pmatrix}\mapsto\begin{pmatrix}u-\rho\hat v\\\rho\end{pmatrix}.
\]
Taking the norm $\|(u,\rho)\|^2_{L^2_\eta\times\mathbb R^2}:=|\rho|^2+\|u\|^2_{L^2_\eta}$ on $L^2_\eta\times\mathbb R^2$, it follows immediately that $\psi$ defines an isometry from $X_\eta$ to $L^2_\eta\times\mathbb R^2$, and its inverse is given by
\[
\psi^{-1}: L^2_\eta\times\mathbb R^2\to X_\eta,\qquad \begin{pmatrix}u\\\rho\end{pmatrix}\mapsto\begin{pmatrix}u+\rho\hat v\\\rho\end{pmatrix}.
\]

Lemma 3.12. The map $\psi: X_\eta\to L^2_\eta\times\mathbb R^2$ is an isometric isomorphism. Moreover, if $0\le\mu<2$, then $\psi: Y_\eta\to H^2_\eta\times\mathbb R^2$ is a homeomorphism.

Proof. By the previous observation it remains to show the continuity of $\psi$ from $Y_\eta$ to $H^2_\eta\times\mathbb R^2$. From Proposition 2.7 we obtain for $0\le\mu<2$ and $\mathbf u=(u,\rho)\in Y_\eta$
\[
\|\psi(\mathbf u)\|^2_{H^2_\eta\times\mathbb R^2} = |\rho|^2 + \sum_{\alpha=0}^{2}\|\partial_x^\alpha(u-\rho\hat v)\|^2_{L^2_\eta}
\le C\bigl(1+\|\hat v_x\|^2_{L^2_\eta}+\|\hat v_{xx}\|^2_{L^2_\eta}\bigr)|\rho|^2 + C\bigl(\|u-\rho\hat v\|^2_{L^2_\eta}+\|u_x\|^2_{L^2_\eta}+\|u_{xx}\|^2_{L^2_\eta}\bigr)
\le C\|\mathbf u\|^2_{Y_\eta}.
\]

With the homeomorphism $\psi$ we can define the operator
\[
\mathcal L^\psi: H^2_\eta\times\mathbb R^2\to L^2_\eta\times\mathbb R^2,\qquad \mathbf u\mapsto \psi\mathcal L\psi^{-1}\mathbf u.
\]
Again, a straightforward calculation shows that for $\mathbf u=(u,\rho)$ the operator $\mathcal L^\psi$ can be written as
\[
\mathcal L^\psi\mathbf u=\begin{pmatrix}Au_{xx}+cu_x+S_\omega u+Df(v_\star)u+A\rho\hat v_{xx}+c\rho\hat v_x+(Df(v_\star)-Df(v_\infty))\rho\hat v\\ S_\omega\rho+Df(v_\infty)\rho\end{pmatrix}.
\]
If Assumptions 1 and 2 are satisfied, it follows that $\mathcal L^\psi$ defines a closed linear operator on $L^2_\eta\times\mathbb R^2$ with $D(\mathcal L^\psi)=H^2_\eta\times\mathbb R^2$. Furthermore, since $\psi$ is a homeomorphism and therefore a Fredholm operator of index $0$, we conclude from Lemma A.2 that the Fredholm indices of $sI-\mathcal L$ and $sI-\mathcal L^\psi$ coincide. The same holds true for the operators $sI-L$ and $sI-L_\eta$, since the multiplication operator associated with $\eta$ from Lemma 3.1 is a homeomorphism.

Furthermore, a compact perturbation argument will show that the variable coefficient operator $sI-L_\eta$ on $L^2$ has the same Fredholm index as the piecewise constant coefficient operator given by
\[
L_{\eta,\infty}: H^2\to L^2,\qquad u\mapsto Au_{xx}+B_{\mu,\infty}u_x+C_{\mu,\infty}u,
\]
where
\[
B_{\mu,\infty}(x)=\begin{cases}cI-2\mu A,& x\ge 0,\\ cI+2\mu A,& x<0,\end{cases}\qquad
C_{\mu,\infty}(x)=\begin{cases}S_\omega+Df(v_\infty)+\mu^2A-c\mu I,& x\ge 0,\\ S_\omega+Df(0)+\mu^2A+c\mu I,& x<0.\end{cases}\tag{3.39}
\]
We note and prove these observations in the following lemma.

Lemma 3.13. Let Assumptions 1 and 2 be satisfied, $0\le\mu\le\min(\mu_\star,2)$ and $s\in\mathbb C$, with $\mu_\star$ from Theorem 2.6. Then the following statements are equivalent:

i) The operator $(sI-\mathcal L): Y_\eta\to X_\eta$ is a Fredholm operator of index $k$.

ii) The operator $(sI-\mathcal L^\psi): H^2_\eta\times\mathbb R^2\to L^2_\eta\times\mathbb R^2$ is a Fredholm operator of index $k$.

iii) The operator $(sI-L): H^2_\eta\to L^2_\eta$ is a Fredholm operator of index $k$.

iv) The operator $(sI-L_\eta): H^2\to L^2$ is a Fredholm operator of index $k$.

v) The operator $(sI-L_{\eta,\infty}): H^2\to L^2$ is a Fredholm operator of index $k$.

Proof. i) $\Leftrightarrow$ ii): By Lemma 3.12, the maps $\psi: X_\eta\to L^2_\eta\times\mathbb R^2$ and $\psi: Y_\eta\to H^2_\eta\times\mathbb R^2$ are homeomorphisms and therefore Fredholm operators of index $0$. Thus, the equivalence of i) and ii) follows by Lemma A.2.

iii) $\Leftrightarrow$ iv): By Lemma 3.1, the multiplication operators $m_\eta: L^2_\eta\to L^2$ and $m_\eta: H^2_\eta\to H^2$ are homeomorphisms and therefore Fredholm operators of index $0$. Thus, the equivalence of iii) and iv) follows by Lemma A.2.

ii) $\Leftrightarrow$ iii): The operator $\mathcal L^\psi$ can be decomposed into $\mathcal L^\psi=\tilde{\mathcal L}+K$, where $\tilde{\mathcal L}$ is given by
\[
\tilde{\mathcal L}: H^2_\eta\times\mathbb R^2\to L^2_\eta\times\mathbb R^2,\qquad \tilde{\mathcal L}\begin{pmatrix}u\\\rho\end{pmatrix}:=\begin{pmatrix}Au_{xx}+cu_x+S_\omega u+Df(v_\star)u\\(S_\omega+Df(v_\infty))\rho\end{pmatrix}
\]
and the operator $K$ by
\[
K: H^2_\eta\times\mathbb R^2\to L^2_\eta\times\mathbb R^2,\qquad K\begin{pmatrix}u\\\rho\end{pmatrix}:=\begin{pmatrix}A\rho\hat v_{xx}+c\rho\hat v_x+(Df(v_\star)-Df(v_\infty))\rho\hat v\\0\end{pmatrix}.
\]
Since $sI-S_\omega-Df(v_\infty)\in\mathbb R^{2,2}$ is a Fredholm operator of index $0$ on $\mathbb R^2$, Lemma A.3 implies that $sI-\tilde{\mathcal L}$ is a Fredholm operator of index $k$ if and only if $(sI-L): H^2_\eta\to L^2_\eta$ is. We show that $K$ is a compact operator. Then the assertion follows by Lemma A.4.

To see the compactness of $K$, let $\{\mathbf u_n\}_{n\in\mathbb N}\subset H^2_\eta\times\mathbb R^2$, $\mathbf u_n=(u_n,\rho_n)$, be a bounded sequence and let $C>0$ denote a universal constant. Then there exists a subsequence $\rho_{n_k}$ such that $\rho_{n_k}\to\rho$ as $k\to\infty$. We define
\[
w:=A\rho\hat v_{xx}+c\rho\hat v_x+(Df(v_\star)-Df(v_\infty))\rho\hat v.
\]
Then Assumption 1 and Theorem 2.6 imply $w\in L^2_\eta$. Moreover, we have
\[
\|(Df(v_\star)-Df(v_\infty))(\rho_{n_k}-\rho)\hat v\|_{L^2_\eta}
\le \|(Df(v_\star)-Df(v_\infty))(\rho_{n_k}-\rho)\hat v\|_{L^2_\eta(\mathbb R_-)}+\|(Df(v_\star)-Df(v_\infty))(\rho_{n_k}-\rho)\hat v\|_{L^2_\eta(\mathbb R_+)}
\]
\[
\le C\|\hat v\|_{L^2_\eta(\mathbb R_-)}|\rho_{n_k}-\rho|+C|\rho_{n_k}-\rho|\,\|Df(v_\star)-Df(v_\infty)\|_{L^2_\eta(\mathbb R_+)}
\le C\|\hat v\|_{L^2_\eta(\mathbb R_-)}|\rho_{n_k}-\rho|+C|\rho_{n_k}-\rho|\,\|v_\star-v_\infty\|_{L^2_\eta(\mathbb R_+)}
\le C|\rho_{n_k}-\rho|.
\]
This implies for $\mathbf w:=(w,0)\in L^2_\eta\times\mathbb R^2$
\[
\|K\mathbf u_{n_k}-\mathbf w\|_{L^2_\eta\times\mathbb R^2}
=\|A\rho_{n_k}\hat v_{xx}+c\rho_{n_k}\hat v_x+(Df(v_\star)-Df(v_\infty))\rho_{n_k}\hat v-w\|_{L^2_\eta}
\]
\[
=\|A(\rho_{n_k}-\rho)\hat v_{xx}+c(\rho_{n_k}-\rho)\hat v_x+(Df(v_\star)-Df(v_\infty))(\rho_{n_k}-\rho)\hat v\|_{L^2_\eta}
\]
\[
\le C\bigl(1+\|\hat v_{xx}\|_{L^2_\eta}+\|\hat v_x\|_{L^2_\eta}\bigr)|\rho_{n_k}-\rho|+\|(Df(v_\star)-Df(v_\infty))(\rho_{n_k}-\rho)\hat v\|_{L^2_\eta}
\le C|\rho_{n_k}-\rho|\to 0,\qquad k\to\infty.
\]
Thus, $K\mathbf u_{n_k}\to\mathbf w$ in $L^2_\eta\times\mathbb R^2$ as $k\to\infty$. This shows the compactness of $K$ and the assertion follows by Lemma A.4.
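For intuition: $K\mathbf u$ depends on $\mathbf u=(u,\rho)$ only through $\rho\in\mathbb R^2$, so $K$ has rank at most two. The following small sketch (a hypothetical discretization, not taken from the text) illustrates that any finite-dimensional discretization of such an operator is a matrix of rank $\le 2$, which is the structural reason behind its compactness.

```python
import numpy as np

# Hypothetical discretization of K(u, rho): the image depends only on rho in R^2,
# so the matrix has rank at most 2 (hence the operator is compact).
N = 400                                    # number of grid points per component
x = np.linspace(-20.0, 20.0, N)
k1 = np.exp(-np.abs(x))                    # stand-in for the column generated by rho_1
k2 = np.exp(-x**2)                         # stand-in for the column generated by rho_2

K = np.zeros((2 * N + 2, 2 * N + 2))       # acts on (u, rho) with u in R^{2N}, rho in R^2
K[:N, 2 * N] = k1                          # contribution of rho_1 to the first component
K[N:2 * N, 2 * N + 1] = k2                 # contribution of rho_2 to the second component

print(np.linalg.matrix_rank(K))            # -> 2
```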

iv) $\Leftrightarrow$ v): Since the operator $\partial_x: H^{k+1}\to H^k$, $k\ge 0$, is bounded, Lemma D.4 and Theorem 2.6 imply that the operator
\[
L_\eta-L_{\eta,\infty}: H^2\to L^2,\qquad u\mapsto (B_\mu-B_{\mu,\infty})u_x+(C_\mu-C_{\mu,\infty})u
\]
is compact. Hence the assertion follows by Lemma A.4.

The lemma shows that the Fredholm properties of $sI-\mathcal L$ on $X_\eta$ are determined by the piecewise constant coefficient operator $sI-L_{\eta,\infty}$. Thus, we are interested in the solvability of the resolvent equation
\[
(sI-L_{\eta,\infty})u=r,\qquad u\in H^2,\ r\in L^2,\ s\in\mathbb C.\tag{3.40}
\]
We use the classical approach, for instance from [36] or [56], [32], where the solution of (3.40) is constructed using exponential dichotomies. For this purpose we transform (3.40) into a first order system via $w=(u,u_x)$ and obtain
\[
\mathcal M(s)w=h,\qquad \mathcal M(s)=\partial_x-M(s,\cdot),\qquad h=(0,-A^{-1}r)\tag{3.41}
\]
with
\[
M(s,x)=\begin{cases}M_+(s),& x\ge 0,\\ M_-(s),& x<0,\end{cases}\qquad
M_\pm(s)=\begin{pmatrix}0&I\\A^{-1}(sI-C_\pm)&-A^{-1}B_\pm\end{pmatrix}
\]
and the matrices $B_\pm$, $C_\pm$ given by
\[
B_\pm:=cI\mp 2\mu A,\qquad C_\pm:=S_\omega+Df(v_\pm)+\mu^2A\mp c\mu I.
\]

From [22] we know that the operator $\mathcal M(s)$ has an exponential dichotomy on the half-line $\mathbb R_\pm$ if and only if the matrix $M_\pm(s)$ is hyperbolic, cf. Proposition B.4. Therefore, we define the set
\[
\Omega_F:=\{s\in\mathbb C: M_+(s)\text{ and }M_-(s)\text{ are hyperbolic}\}.
\]
For $s\in\Omega_F$ we denote by $m^\pm_{s,u}(s)$ the dimensions of the stable and unstable subspaces of $M_\pm(s)$, i.e. $m^\pm_s(s)$ denotes the sum of the multiplicities of the eigenvalues of $M_\pm(s)$ with negative real part and $m^\pm_u(s)$ the sum of those with positive real part. Now we have the following classical result, which can be found at several places in the literature, see for instance [36, Lem. 3.1.10] or [48], [49], [56, Sec. 3].

Lemma 3.14. Let Assumptions 1 and 2 be satisfied and $0\le\mu\le\min(\mu_\star,2)$ with $\mu_\star$ from Theorem 2.6. Then the operator $sI-L_{\eta,\infty}: H^2\to L^2$ is a Fredholm operator if and only if $s\in\Omega_F$. If $s\in\Omega_F$, then the Fredholm index is given by
\[
\operatorname{ind}(sI-L_{\eta,\infty})=m^+_s(s)-m^-_s(s).
\]
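In practice the index formula can be evaluated by counting eigenvalues of $M_\pm(s)$ directly. The following Python sketch does this for illustrative $2\times 2$ data; the matrices and parameter values below are placeholders, not quantities from the text.

```python
import numpy as np

def M_matrix(s, A, B, C):
    """Companion-type matrix M_pm(s) = [[0, I], [A^{-1}(sI - C), -A^{-1}B]]."""
    m = A.shape[0]
    Ainv = np.linalg.inv(A)
    top = np.hstack([np.zeros((m, m)), np.eye(m)])
    bot = np.hstack([Ainv @ (s * np.eye(m) - C), -Ainv @ B])
    return np.vstack([top, bot])

def stable_dim(M, tol=1e-10):
    """m_s(s): number of eigenvalues with negative real part; None if not hyperbolic."""
    ev = np.linalg.eigvals(M)
    if np.any(np.abs(ev.real) < tol):
        return None                      # an eigenvalue on the imaginary axis
    return int(np.sum(ev.real < 0))

def fredholm_index(s, A, Bp, Cp, Bm, Cm):
    """ind(sI - L_{eta,infty}) = m_s^+(s) - m_s^-(s), provided s lies in Omega_F."""
    mp = stable_dim(M_matrix(s, A, Bp, Cp))
    mm = stable_dim(M_matrix(s, A, Bm, Cm))
    return None if mp is None or mm is None else mp - mm

# Illustrative data in the notation of the text (placeholder values only)
a1, a2, c, om, mu = 1.0, 0.2, 0.8, 0.5, 0.1
A   = np.array([[a1, -a2], [a2, a1]])
Sw  = np.array([[0.0, -om], [om, 0.0]])
Dfp = np.diag([-0.6, -0.6])              # stand-in for Df(v_inf)
Dfm = np.diag([0.3, 0.3])                # stand-in for Df(0)
Bp, Cp = c*np.eye(2) - 2*mu*A, Sw + Dfp + mu**2*A - c*mu*np.eye(2)
Bm, Cm = c*np.eye(2) + 2*mu*A, Sw + Dfm + mu**2*A + c*mu*np.eye(2)
print(fredholm_index(2.0 + 0.0j, A, Bp, Cp, Bm, Cm))   # expected: 0 for s far to the right
```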

Lemma 3.14 together with Lemma 3.13 implies that $sI-\mathcal L: Y_\eta\to X_\eta$ is a Fredholm operator if and only if $s\in\Omega_F$. Moreover, since the matrices $M_\pm$ depend continuously on $s\in\mathbb C$, we conclude that the Fredholm index of $sI-\mathcal L$ stays constant in any connected component of $\Omega_F$. Recall the dispersion set from (0.27) given by
\[
\sigma_{\mathrm{disp},\mu}(\mathcal L)=\sigma^-_{\mathrm{disp},\mu}(\mathcal L)\cup\sigma^+_{\mathrm{disp},\mu}(\mathcal L).
\]
Then we have the following lemma.

Lemma 3.15. Let Assumptions 1 and 2 be satisfied. Then the set $\Omega_F$ is the complement of the dispersion set, i.e. $\Omega_F=\mathbb C\setminus\sigma_{\mathrm{disp},\mu}(\mathcal L)$.

Proof. First we show that $\det(-\nu^2A+i\nu B_\pm+C_\pm-sI)=0$ for some $\nu\in\mathbb R$ if and only if $M_\pm(s)$ is not hyperbolic. Assume $M_\pm(s)$ is not hyperbolic, i.e. there exist $\nu\in\mathbb R$ and $w\in\mathbb C^4$, $|w|=1$, such that $i\nu w=M_\pm(s)w$. This implies, with $w=(w_1,w_2)$, that $i\nu w_1=w_2$ and hence $w_1\ne 0$ due to $|w|=1$. Moreover,
\[
-\nu^2w_1=A^{-1}(sI-C_\pm)w_1-A^{-1}B_\pm w_2=A^{-1}(sI-C_\pm)w_1-i\nu A^{-1}B_\pm w_1.
\]
Hence,
\[
(-\nu^2A+i\nu B_\pm+C_\pm-sI)w_1=0.
\]
Thus, $\det(-\nu^2A+i\nu B_\pm+C_\pm-sI)=0$.

Conversely, suppose $\det(-\nu^2A+i\nu B_\pm+C_\pm-sI)=0$ for some $\nu\in\mathbb R$. Then there exists $w_1\in\mathbb C^m$, $w_1\ne 0$, such that $(-\nu^2A+i\nu B_\pm+C_\pm-sI)w_1=0$. Now setting $w_2=i\nu w_1$ leads to
\[
\begin{pmatrix}0&I\\A^{-1}(sI-C_\pm)&-A^{-1}B_\pm\end{pmatrix}\begin{pmatrix}w_1\\w_2\end{pmatrix}
=\begin{pmatrix}w_2\\A^{-1}(sI-C_\pm)w_1-A^{-1}B_\pm w_2\end{pmatrix}
=\begin{pmatrix}i\nu w_1\\A^{-1}(sI-C_\pm-i\nu B_\pm)w_1\end{pmatrix}
=\begin{pmatrix}i\nu w_1\\-A^{-1}\nu^2Aw_1\end{pmatrix}
=i\nu\begin{pmatrix}w_1\\w_2\end{pmatrix}.
\]
Hence $i\nu$ is an eigenvalue of $M_\pm(s)$, so $M_\pm(s)$ is not hyperbolic. Since the dispersion set consists precisely of those $s\in\mathbb C$ for which $\det(-\nu^2A+i\nu B_\pm+C_\pm-sI)=0$ for some $\nu\in\mathbb R$, the assertion follows.

It holds true that the Fredholm index of $sI-\mathcal L$ stays constant in any connected component of $\Omega_F$, see [36]. Therefore, we are interested in the shape and location of the dispersion set and in particular in the connected components of $\Omega_F$. The Fredholm region $\Omega_F$ can be written as
\[
\Omega_F=\bigl\{s\in\mathbb C:\det(sI-D_\pm(\nu))\ne 0\ \forall\nu\in\mathbb R\bigr\},\qquad D_\pm(\nu):=-\nu^2A+i\nu B_\pm+C_\pm.
\]
Hence we look for eigenvalues of the matrices $D_\pm(\nu)$, $\nu\in\mathbb R$. The characteristic polynomial is given by
\[
d_\pm(s,\nu)=s^2-\operatorname{tr}D_\pm(\nu)\,s+\det D_\pm(\nu).
\]

The roots of $d_\pm(\cdot,\nu)$ can be computed explicitly. We have $s\in\sigma^+_{\mathrm{disp},\mu}(\mathcal L)$ if and only if
\[
\begin{aligned}
s&=\frac{\operatorname{tr}D_+(\nu)}{2}\pm\sqrt{\frac{(\operatorname{tr}D_+(\nu))^2}{4}-\det D_+(\nu)}\\
&=-\alpha_1\nu^2+i(c-2\alpha_1\mu)\nu+\mu^2\alpha_1-c\mu+g_1'(|v_\infty|^2)|v_\infty|^2\\
&\quad\pm\Bigl[-\alpha_2^2\nu^4-4i\alpha_2^2\mu\nu^3+\bigl(6\alpha_2^2\mu^2+2\alpha_2g_2'(|v_\infty|^2)|v_\infty|^2\bigr)\nu^2
+4i\bigl(\alpha_2^2\mu^3+\mu\alpha_2g_2'(|v_\infty|^2)|v_\infty|^2\bigr)\nu\\
&\qquad\ -\alpha_2^2\mu^4-2\alpha_2\mu^2g_2'(|v_\infty|^2)|v_\infty|^2+\bigl(g_1'(|v_\infty|^2)|v_\infty|^2\bigr)^2\Bigr]^{\frac12}
\end{aligned}\tag{3.42}
\]
for some $\nu\in\mathbb R$, and $s\in\sigma^-_{\mathrm{disp},\mu}(\mathcal L)$ if and only if
\[
\begin{aligned}
s&=\frac{\operatorname{tr}D_-(\nu)}{2}\pm\sqrt{\frac{(\operatorname{tr}D_-(\nu))^2}{4}-\det D_-(\nu)}\\
&=-\alpha_1\nu^2+i(c+2\alpha_1\mu)\nu+\mu^2\alpha_1+c\mu+g_1(0)\\
&\quad\pm\Bigl[-\alpha_2^2\nu^4+4i\alpha_2^2\mu\nu^3+\bigl(6\alpha_2^2\mu^2+2\alpha_2(g_2(0)+\omega)\bigr)\nu^2
-4i\bigl(\alpha_2^2\mu^3+\mu\alpha_2(g_2(0)+\omega)\bigr)\nu\\
&\qquad\ -\alpha_2^2\mu^4-2(g_2(0)+\omega)\alpha_2\mu^2-(g_2(0)+\omega)^2\Bigr]^{\frac12}
\end{aligned}\tag{3.43}
\]
for some $\nu\in\mathbb R$. Roughly speaking, these are four curves in the complex plane running from $-\infty-i\infty$ to $-\infty+i\infty$. In the special case $\alpha_2=0$ the equations (3.42) and (3.43) simplify to
\[
s=-\alpha_1\nu^2+i(c-2\alpha_1\mu)\nu+\mu^2\alpha_1-c\mu+g_1'(|v_\infty|^2)|v_\infty|^2\pm g_1'(|v_\infty|^2)|v_\infty|^2
\]
and
\[
s=-\alpha_1\nu^2+i(c+2\alpha_1\mu)\nu+\mu^2\alpha_1+c\mu+g_1(0)\pm i(g_2(0)+\omega).
\]
Then the dispersion set consists of four parabolas in the complex plane opening to the left, see Figure 0.4 b).
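The four curves can also be obtained numerically as the eigenvalues of $D_\pm(\nu)$ over a grid of $\nu$, which avoids the explicit root formulas. A minimal sketch, reusing the same placeholder data as in the sketch after Lemma 3.14 (illustrative values only):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data (illustrative values only)
a1, a2, c, om, mu = 1.0, 0.2, 0.8, 0.5, 0.1
A  = np.array([[a1, -a2], [a2, a1]])
Sw = np.array([[0.0, -om], [om, 0.0]])
Bp, Cp = c*np.eye(2) - 2*mu*A, Sw + np.diag([-0.6, -0.6]) + mu**2*A - c*mu*np.eye(2)
Bm, Cm = c*np.eye(2) + 2*mu*A, Sw + np.diag([0.3, 0.3]) + mu**2*A + c*mu*np.eye(2)

def curves(B, C, nus):
    """Eigenvalues of D(nu) = -nu^2 A + i nu B + C, i.e. the two dispersion curves."""
    return np.array([np.linalg.eigvals(-nu**2*A + 1j*nu*B + C) for nu in nus])

nus = np.linspace(-4.0, 4.0, 1601)
for B, C, col in [(Bp, Cp, 'tab:red'), (Bm, Cm, 'tab:blue')]:
    ev = curves(B, C, nus)
    plt.plot(ev.real, ev.imag, '.', ms=1, color=col)
plt.xlabel('Re s'); plt.ylabel('Im s'); plt.title('dispersion curves (placeholder data)')
plt.show()
```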

We are now in the position to formulate and prove the main result of this section, describing the essential spectrum of $\mathcal L$ on the exponentially weighted space $X_\eta$.

Theorem 3.16. Let Assumptions 1–3 be satisfied and $0<\mu<\min(\mu_{\mathrm{ess}},\mu_\star,2)$ with $\mu_{\mathrm{ess}}$ from Assumption 3 and $\mu_\star$ from Theorem 2.6. Then there are $\varepsilon>0$, $\gamma<0$ and a unique connected component $\Omega$ of $\Omega_F$ satisfying:

i) $S_{\varepsilon,\gamma}:=\bigl\{s\in\mathbb C:|\arg(s-\gamma)|<\tfrac{\pi}{2}+\varepsilon,\ s\ne\gamma\bigr\}\subset\Omega$.

ii) For all $s\in\Omega$ the operator $sI-\mathcal L: Y_\eta\to X_\eta$ is Fredholm of index $0$.

iii) $\partial\Omega\subset\sigma_{\mathrm{disp},\mu}(\mathcal L)$.

iv) $\sigma_{\mathrm{ess}}(\mathcal L)\subset\mathbb C\setminus\Omega$.

Proof. For $s\in\Omega_F$, Lemma 3.13 and Lemma 3.14 imply that $sI-\mathcal L$ is a Fredholm operator of index $\operatorname{ind}(sI-\mathcal L)=m^+_s(s)-m^-_s(s)$. For $s\in\sigma_{\mathrm{disp},\mu}(\mathcal L)$ we have $\operatorname{Re}s\to-\infty$ as $|s|\to\infty$. Thus, $\sup\operatorname{Re}\sigma_{\mathrm{disp},\mu}(\mathcal L)<\infty$. Now let $s_0>\sup\operatorname{Re}\sigma_{\mathrm{disp},\mu}(\mathcal L)$. Recall for a matrix $M\in\mathbb C^{m,m}$ its lower spectral bound $\alpha(M):=\min\{\operatorname{Re}(x^{\mathsf H}Mx):|x|=1\}$, and let $s_0$ be so large that
\[
\alpha(s_0I-C_\pm)\ge s_0-\max\{\operatorname{Re}(x^{\mathsf H}C_\pm x):|x|=1\}>\frac{\mu|\alpha_2|}{\alpha_1}.
\]
Then for all $s\ge s_0$ we also have
\[
\alpha(sI-C_\pm)\ge s-\max\{\operatorname{Re}(x^{\mathsf H}C_\pm x):|x|=1\}>\frac{\mu|\alpha_2|}{\alpha_1},
\]
and since $\alpha(A)=\alpha_1$ we obtain
\[
|B_\pm-B_\pm^{\mathsf H}|=|2\mu(A-A^{\mathsf H})|=4\mu|\alpha_2|=4\alpha(A)\frac{\mu|\alpha_2|}{\alpha_1}<4\alpha(A)\alpha(sI-C_\pm)\qquad\forall s\in[s_0,\infty).
\]
Now for all $s\in[s_0,\infty)$, Lemma D.1 implies that $M_\pm(s)$ is hyperbolic with $m^\pm_s(s)=2$. Thus $sI-\mathcal L$ is a Fredholm operator of index $0$. Since both $M_\pm$ depend continuously on $s$, we conclude that $m^\pm_s(s)=2$ for all $s$ in the connected component $\Omega$ of $\Omega_F$ containing $[s_0,\infty)$. Thus $sI-\mathcal L$ is Fredholm of index $0$ for $s\in\Omega$ and $\partial\Omega\subset\partial\Omega_F=\sigma_{\mathrm{disp},\mu}(\mathcal L)$. Therefore ii) and iii) hold. Moreover, iv) follows by the definition of the essential spectrum, cf. Definition 1.10. It remains to show i). Using ii), it is sufficient to show that there is a sector $S_{\varepsilon,\gamma}$, $\varepsilon>0$, $\gamma<0$, with $\sigma_{\mathrm{disp},\mu}(\mathcal L)\cap S_{\varepsilon,\gamma}=\emptyset$. The dispersion set consists of four curves given by the equations (3.42) and (3.43). For each of these curves we can choose a parametrization $\chi:\mathbb R\to\mathbb C$ such that (3.42), respectively (3.43), is equivalent to $\chi(\nu)=s$ and $\chi$ is given by

\[
\chi(\nu)=-\alpha_1\nu^2+i\xi_1\nu+\xi_2\pm\sqrt{-\alpha_2^2\nu^4+p(\nu)},\qquad\nu\in\mathbb R,
\]
where $\xi_1,\xi_2\in\mathbb R$ and $p(\nu)=i\beta_3\nu^3+\beta_2\nu^2+i\beta_1\nu+\beta_0$ is a polynomial of degree $3$ over $\mathbb C$ with $\beta_i\in\mathbb R$. For the derivative of the parametrization we obtain
\[
\chi'(\nu)=-2\alpha_1\nu+i\xi_1\mp\frac{2\alpha_2^2\nu^3}{\sqrt{-\alpha_2^2\nu^4+p(\nu)}}\pm\frac{p'(\nu)}{2\sqrt{-\alpha_2^2\nu^4+p(\nu)}}.
\]
Now,
\[
\frac{2\alpha_2^2\nu^2}{\sqrt{-\alpha_2^2\nu^4+p(\nu)}}=\frac{2\alpha_2^2}{\sqrt{-\alpha_2^2+\nu^{-4}p(\nu)}}\;\longrightarrow\;\frac{2\alpha_2^2}{\sqrt{-\alpha_2^2}}=\frac{2\alpha_2^2}{i|\alpha_2|}=-2i|\alpha_2|,\qquad\nu\to\pm\infty.
\]
Therefore,
\[
\nu^{-1}\chi'(\nu)\to-2\alpha_1\pm 2i|\alpha_2|,\qquad\nu\to\pm\infty.
\]
Let $T_\chi(\nu):=\frac{\chi'(\nu)}{|\chi'(\nu)|}$ be the tangent vector of the curve at $\chi(\nu)$. Then for $\nu>0$
\[
T_\chi(\nu)=\frac{\nu^{-1}\chi'(\nu)}{|\nu^{-1}\chi'(\nu)|}\to\frac{-\alpha_1\pm i|\alpha_2|}{|\alpha|},\qquad\nu\to\infty.
\]
For $\nu<0$ we obtain
\[
T_\chi(\nu)=-\frac{\nu^{-1}\chi'(\nu)}{|\nu^{-1}\chi'(\nu)|}\to\frac{\alpha_1\mp i|\alpha_2|}{|\alpha|},\qquad\nu\to-\infty.
\]

[Figure 3.2: Geometric situation in the proof of Theorem 3.16. Panel a): the dispersion set $\sigma_{\mathrm{disp},\mu}(\mathcal L)$ and the sector $S_{\tilde\varepsilon,\tilde\gamma}$ with opening parameter $\tilde\varepsilon$ and vertex $\tilde\gamma$. Panel b): the dispersion set, the bound $-\beta_0$, and the sectors $S_{\tilde\varepsilon,\tilde\gamma}$ and $S_{\varepsilon,\gamma}$.]

Since $\operatorname{Re}\chi(\nu)\to-\infty$ as $\nu\to\pm\infty$, we find a sector $S_{\tilde\varepsilon,\tilde\gamma}$ with $\tilde\gamma>0$ and $0<\tilde\varepsilon<\tan^{-1}\bigl(\tfrac{\alpha_1}{|\alpha_2|}\bigr)$ such that $\sigma_{\mathrm{disp},\mu}(\mathcal L)\subset(S_{\tilde\varepsilon,\tilde\gamma})^c$, cf. Figure 3.2 a). Now Assumption 3 implies $\sigma_{\mathrm{disp},\mu}(\mathcal L)\subset\{\operatorname{Re}s<-\beta_0\}$. Then for every $-\beta_0<\gamma<0$ there is $0<\varepsilon\le\tilde\varepsilon$ such that $S_{\varepsilon,\gamma}\cap\sigma_{\mathrm{disp},\mu}(\mathcal L)=\emptyset$, cf. Figure 3.2 b).

Remark 3.17. To fully describe the essential spectrum of the linearized operator $\mathcal L$ according to Definition 1.10, one can use Lemma 3.14 and compute $m^+_s(s)$, $m^-_s(s)$ in the connected components of $\Omega_F$. In each connected component the dimensions $m^+_s(s)$, $m^-_s(s)$ stay constant. The dimensions are given by the number of eigenvalues with negative real part of the matrices $M_+(s)$, $M_-(s)$, and the Fredholm index is then given by $\operatorname{ind}(sI-\mathcal L)=m^+_s(s)-m^-_s(s)$. The essential spectrum is shown in Figure 3.3 for the case of (QCGL) with parameters $\alpha=1$, $\mu=-\tfrac18$, $\beta=1+i$, $\gamma=-1+i$. In this case the matrices are explicitly given by
\[
M_+(s)=\begin{pmatrix}0&0&1&0\\0&0&0&1\\ s-2g_1'(|v_\infty|^2)|v_\infty|^2&0&-c&0\\ -2g_2'(|v_\infty|^2)|v_\infty|^2&s&0&-c\end{pmatrix}
\]
and
\[
M_-(s)=\begin{pmatrix}0&0&1&0\\0&0&0&1\\ s-g_1(0)&-\omega-g_2(0)&-c&0\\ \omega+g_2(0)&s-g_1(0)&0&-c\end{pmatrix}.
\]
The numbers in the connected components of $\Omega_F$ indicate the Fredholm index of $sI-\mathcal L$.

[Figure 3.3: The essential spectrum of the linearized operator $\mathcal L$ (green) with the dispersion sets (red/blue) in an example of (QCGL). The numbers in the connected components of $\Omega_F$ indicate the Fredholm index of $sI-\mathcal L$.]

We note that the essential spectrum strongly depends on the choice of its definition, which differs in the literature, cf. [25]. However, for the stability behavior of the TOF the precise choice of the definition has no effect, since the essential spectrum is bounded by the dispersion set in any case.
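For illustration, index labels as in Figure 3.3 can be recomputed from these explicit matrices via the eigenvalue count of Lemma 3.14. The following sketch uses hypothetical numerical values for $c$, $\omega$, $g_1(0)$, $g_2(0)$ and for $g_1'(|v_\infty|^2)|v_\infty|^2$, $g_2'(|v_\infty|^2)|v_\infty|^2$; they are not the actual (QCGL) quantities behind the figure.

```python
import numpy as np

c, om    = 0.5, 1.0        # hypothetical wave speed and frequency
g10, g20 = -1.0, 1.0       # hypothetical g_1(0), g_2(0)
dg1, dg2 = -0.8, 0.6       # hypothetical g_1'(|v_inf|^2)|v_inf|^2, g_2'(|v_inf|^2)|v_inf|^2

def M_plus(s):
    return np.array([[0, 0, 1, 0],
                     [0, 0, 0, 1],
                     [s - 2*dg1, 0, -c, 0],
                     [-2*dg2,    s, 0, -c]], dtype=complex)

def M_minus(s):
    return np.array([[0, 0, 1, 0],
                     [0, 0, 0, 1],
                     [s - g10, -(om + g20), -c, 0],
                     [om + g20, s - g10,    0, -c]], dtype=complex)

def index(s, tol=1e-9):
    dims = []
    for M in (M_plus(s), M_minus(s)):
        ev = np.linalg.eigvals(M)
        if np.any(np.abs(ev.real) < tol):
            return None                       # s lies on the dispersion set: not Fredholm
        dims.append(int(np.sum(ev.real < 0)))
    return dims[0] - dims[1]                  # ind(sI - L) = m_s^+(s) - m_s^-(s)

for s in (2.0 + 0.0j, -0.5 + 1.0j, -1.5 + 0.0j):   # a few sample points in the s-plane
    print(s, index(s))
```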

Now we take Assumption 4 into account and conclude the section by studying the zero eigenvalue. Assumption 4 states that zero is an eigenvalue of the linearized operator $\mathcal L$ with algebraic multiplicity at most $2$. Using the fact that the whole group orbit $a(\gamma)(v_\star,v_\infty)$, $\gamma\in\mathcal G$, consists of stationary solutions of the Cauchy problem (0.22) and that the group action $a(\gamma)$ is continuously differentiable, we will see that one finds two linearly independent eigenfunctions of $\mathcal L$. Then by Assumption 4 it follows that zero is an eigenvalue of algebraic multiplicity equal to $2$.

Lemma 3.18. Let Assumptions 1–4 be satisfied and $0<\mu<\min(\mu_{\mathrm{ess}},\mu_\star,2)$ with $\mu_{\mathrm{ess}}$ from Assumption 3 and $\mu_\star$ from Theorem 2.6. Then $s=0$ is an eigenvalue of $\mathcal L$ with algebraic multiplicity two and linearly independent eigenfunctions given by
\[
\varphi_1=-\begin{pmatrix}S_1v_\star\\S_1v_\infty\end{pmatrix}\in Y_\eta,\qquad
\varphi_2=-\begin{pmatrix}v_{\star,x}\\0\end{pmatrix}\in Y_\eta,
\]
such that
\[
N(\mathcal L)=\operatorname{span}\{\varphi_1,\varphi_2\}=:\Phi.
\]

Proof. Clearly, $\varphi_1,\varphi_2$ are linearly independent. Assumption 2 and Theorem 2.6 imply $\varphi_1,\varphi_2\in Y_\eta$. Thus it remains to show $\mathcal L\varphi_i=0$ for $i=1,2$; then the claim follows by Assumption 4. Recall that $(v_\star,v_\infty)$ is a stationary solution of (0.22), i.e.
\[
0=\begin{pmatrix}Av_{\star,xx}+cv_{\star,x}+S_\omega v_\star+f(v_\star)\\S_\omega v_\infty+f(v_\infty)\end{pmatrix}.
\]
By applying the group action $a(\gamma)$ for $\gamma=(\theta,\tau)\in\mathcal G$ we obtain
\[
0=\begin{pmatrix}AR_\theta v_{\star,xx}(\cdot-\tau)+cR_\theta v_{\star,x}(\cdot-\tau)+S_\omega R_\theta v_\star(\cdot-\tau)+f(R_\theta v_\star(\cdot-\tau))\\S_\omega R_\theta v_\infty+f(R_\theta v_\infty)\end{pmatrix}.\tag{3.44}
\]
Differentiating (3.44) with respect to $\theta$ and evaluating at $(\theta,\tau)=(0,0)$ yields
\[
0=\begin{pmatrix}AS_1v_{\star,xx}+cS_1v_{\star,x}+S_\omega S_1v_\star+Df(v_\star)S_1v_\star\\S_\omega S_1v_\infty+Df(v_\infty)S_1v_\infty\end{pmatrix}
=\mathcal L\begin{pmatrix}S_1v_\star\\S_1v_\infty\end{pmatrix}=-\mathcal L\varphi_1,
\]
hence $\mathcal L\varphi_1=0$. Further, differentiating (3.44) with respect to $\tau$ and evaluating at $(\theta,\tau)=(0,0)$ leads to
\[
0=\begin{pmatrix}-Av_{\star,xxx}-cv_{\star,xx}-S_\omega v_{\star,x}-Df(v_\star)v_{\star,x}\\0\end{pmatrix}
=-\mathcal L\begin{pmatrix}v_{\star,x}\\0\end{pmatrix}=\mathcal L\varphi_2.
\]
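The two eigenfunctions arise from nothing more than differentiating the group orbit. The following SymPy sketch carries out this differentiation for the rotation-and-translation action used here, $a(\gamma)v=R_\theta v(\cdot-\tau)$ with $R_\theta$ the planar rotation; the profile components are left as abstract functions (a sketch only, not the actual wave).

```python
import sympy as sp

x, th, tau = sp.symbols('x theta tau', real=True)
v1 = sp.Function('v1')(x - tau)             # components of the abstract profile v_*(x - tau)
v2 = sp.Function('v2')(x - tau)
R = sp.Matrix([[sp.cos(th), -sp.sin(th)],
               [sp.sin(th),  sp.cos(th)]])   # R_theta
orbit = R * sp.Matrix([v1, v2])              # a(gamma) v_* = R_theta v_*(. - tau)

d_theta = orbit.diff(th).subs({th: 0, tau: 0}).doit()
d_tau   = orbit.diff(tau).subs({th: 0, tau: 0}).doit()

print(d_theta)   # = (-v2(x), v1(x)) = S_1 v_*(x)  with S_1 = [[0,-1],[1,0]]
print(d_tau)     # = (-v1'(x), -v2'(x)) = -v_{*,x}(x)
```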

By Assumption 4 the half-plane $\{\operatorname{Re}s>\gamma\}$ for some $\gamma<0$ does not contain any eigenvalues of $\mathcal L$ except for the eigenvalue $s=0$. Thus we can assume that the sector $S_{\varepsilon,\gamma}$ from Theorem 3.16 lies in the resolvent set except for the zero eigenvalue.

Corollary 3.19. Let Assumptions 1–4 be satisfied and let $0<\mu<\min(\mu_{\mathrm{ess}},\mu_\star,2)$ with $\mu_{\mathrm{ess}}$ from Assumption 3 and $\mu_\star$ from Theorem 2.6. Then there are $\varepsilon>0$, $\gamma<0$ such that
\[
S_{\varepsilon,\gamma}\subset\rho(\mathcal L)\cup\{0\}.
\]

Proof. The claim is a direct consequence of Theorem 3.16 and Assumption 4 by taking $\varepsilon$ and $|\gamma|$ sufficiently small.

As Corollary 3.19 shows, the spectrum of $\mathcal L$ is contained in the open left half-plane except for the zero eigenvalue. However, since this is an isolated eigenvalue of finite multiplicity, we are able to block this neutral direction using the projector onto $N(\mathcal L)$. This will lead to time-decaying estimates of the semigroup on the corresponding orthogonal complement, cf. [32, Thm. 1.5.3]. For this purpose we have to take the adjoint of $\mathcal L$ into account, which will be considered in the following. Since $X_\eta$ is a Hilbert space, we may identify its dual space $X_\eta'$ with $X_\eta$ via the Riesz isomorphism. We define the (abstract) adjoint operator of $\mathcal L$, cf. [61, Definition IV.11], by
\[
\mathcal L^*: D(\mathcal L^*)\subset X_\eta\to X_\eta,\qquad u\mapsto\mathcal L^*u.
\]
For a detailed construction and properties of the adjoint operator $\mathcal L^*$ we refer to [61, IV.11]. Since $\mathcal L$ has closed range we have, cf. [61, (11-7)],
\[
R(\mathcal L)^\perp=N(\mathcal L^*),\qquad R(\mathcal L)=N(\mathcal L^*)^\perp.\tag{3.45}
\]

Lemma 3.20. Let Assumptions 1–4 be satisfied and $0<\mu<\min(\mu_{\mathrm{ess}},\mu_\star,2)$ with $\mu_{\mathrm{ess}}$ from Assumption 3 and $\mu_\star$ from Theorem 2.6. Then there are adjoint eigenfunctions $\psi_1,\psi_2\in D(\mathcal L^*)$ such that

i) $N(\mathcal L^*)=\operatorname{span}\{\psi_1,\psi_2\}=:\Psi$,

ii) $(\psi_i,\varphi_j)_{X_\eta}=\delta_{ij}$, $i,j=1,2$,

iii) $X_\eta=\Phi\oplus\Psi^\perp$,

iv) there is a continuous projection $P: X_\eta\to X_\eta$ onto $\Phi$, i.e.
\[
P(\Phi)=\Phi,\qquad P(\Psi^\perp)=\{0\},\qquad P^2=P,
\]
which is given by
\[
Pv:=\sum_{i=1}^{2}(\psi_i,v)_{X_\eta}\,\varphi_i,
\]

v) the subspace $\Psi^\perp\subset X_\eta$ is invariant under $\mathcal L$, i.e. $\mathcal L(\Psi^\perp\cap Y_\eta)\subset\Psi^\perp$.

Proof. i), ii). Let $(\cdot,\cdot)=(\cdot,\cdot)_{X_\eta}$. $\mathcal L$ is a Fredholm operator of index $0$. Thus by Lemma A.5 the adjoint $\mathcal L^*: D(\mathcal L^*)\to X_\eta$ is a Fredholm operator of index $0$. By Assumption 4 this implies $\dim N(\mathcal L^*)=\dim N(\mathcal L)=2$. Then choose linearly independent $\tilde\psi_1,\tilde\psi_2\in D(\mathcal L^*)$ such that
\[
N(\mathcal L^*)=\operatorname{span}\{\tilde\psi_1,\tilde\psi_2\}.
\]
Now by Lemma A.2, the operator $\mathcal L^n$ is also a Fredholm operator of index $0$ on $X_\eta$ for all $n\in\mathbb N$. Thus $(\mathcal L^n)^*=(\mathcal L^*)^n$ is Fredholm of index $0$, which implies by Assumption 4 and Lemma 3.18 that $\dim N((\mathcal L^*)^n)=\dim N(\mathcal L^n)=2$ for $n\ge 2$. Hence $\mathcal L^*$ has no generalized eigenfunctions at zero and therefore $\tilde\psi_1,\tilde\psi_2\notin R(\mathcal L^*)$. Now consider the matrix
\[
A:=\begin{pmatrix}(\tilde\psi_1,\varphi_1)&(\tilde\psi_1,\varphi_2)\\(\tilde\psi_2,\varphi_1)&(\tilde\psi_2,\varphi_2)\end{pmatrix}.
\]
We show that $A$ is invertible. Assume the contrary. Then there is $z=(z_1,z_2)\in\mathbb R^2$, $z\ne 0$, with $z^\top A=0$. This implies
\[
(z_1\tilde\psi_1+z_2\tilde\psi_2,\varphi_i)=0,\qquad i=1,2,
\]
and therefore $z_1\tilde\psi_1+z_2\tilde\psi_2\in N(\mathcal L)^\perp=R(\mathcal L^*)$ due to (3.45). Then we find $w\in D(\mathcal L^*)$ such that $\mathcal L^*w=z_1\tilde\psi_1+z_2\tilde\psi_2\in N(\mathcal L^*)\setminus\{0\}$, a contradiction since $\mathcal L^*$ has no generalized eigenfunctions. Hence $A$ is invertible. Now define
\[
\psi_1=b_1\tilde\psi_1+b_2\tilde\psi_2,\quad b=A^{-\top}\begin{pmatrix}1\\0\end{pmatrix},\qquad
\psi_2=c_1\tilde\psi_1+c_2\tilde\psi_2,\quad c=A^{-\top}\begin{pmatrix}0\\1\end{pmatrix}.
\]
Then $\psi_1,\psi_2$ are linearly independent with $N(\mathcal L^*)=\operatorname{span}\{\psi_1,\psi_2\}$ and $(\psi_i,\varphi_j)=\delta_{ij}$.
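A finite-dimensional analogue of this biorthogonalization and of the projector in iv) reads as follows (purely illustrative; the matrix $L$ below is a random stand-in with a semisimple double eigenvalue at zero, not the operator $\mathcal L$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
V = rng.normal(size=(n, n))                       # eigenbasis of a stand-in operator L
L = V @ np.diag([0.0, 0.0, -1.0, -2.0, -3.0, -4.0]) @ np.linalg.inv(V)

phi = V[:, :2]                                    # basis of N(L)       ("phi_1, phi_2")
psi_tilde = np.linalg.inv(V)[:2, :].T @ rng.normal(size=(2, 2))   # some basis of N(L^T)

Gram = psi_tilde.T @ phi                          # matrix ((psi~_i, phi_j)), invertible
psi = psi_tilde @ np.linalg.inv(Gram).T           # biorthogonalized: (psi_i, phi_j) = delta_ij
assert np.allclose(psi.T @ phi, np.eye(2))

P = phi @ psi.T                                   # P v = sum_i (psi_i, v) phi_i
assert np.allclose(P @ P, P)                      # projection
assert np.allclose(P @ phi, phi)                  # P = identity on Phi = N(L)
assert np.allclose(P @ L, np.zeros((n, n)))       # P vanishes on R(L) = N(L^T)^perp
```

Here the Gram matrix plays the role of the matrix $A$ above, and `np.linalg.inv(Gram).T` is the $A^{-\top}$ used to enforce $(\psi_i,\varphi_j)=\delta_{ij}$.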

iii). We may write $u\in X_\eta$ as
\[
u=u-\sum_{i=1}^{2}(\psi_i,u)\varphi_i+\sum_{i=1}^{2}(\psi_i,u)\varphi_i.
\]
Then we have $\sum_{i=1}^{2}(\psi_i,u)\varphi_i\in N(\mathcal L)=\Phi$ and, due to ii),
\[
\Bigl(\psi_j,\,u-\sum_{i=1}^{2}(\psi_i,u)\varphi_i\Bigr)=(\psi_j,u)-\sum_{i=1}^{2}(\psi_i,u)(\psi_j,\varphi_i)=0
\]
