3.2 Solutions to the Linear System
3.2.2 Analysis of a Mixed-Order BVP
We consider the spatial operator A(∂_x). In order to match the assumptions in [34], which are needed later, we rewrite A(∂_x) as an equivalent operator A(D) with respect to (u_d, …, u_1, u_0). The operator A(D) has constant coefficients, and its entries A_{jk}(D) are linear differential operators defined on Ω of order not exceeding s_j + m_k. For ξ = (ξ_1, …, ξ_d) ∈ R^d let A(ξ) denote the symbol and Å(ξ) the principal symbol of A(D), respectively (Å_{jk}(ξ) consists of the terms in A_{jk}(ξ) which are exactly of order s_j + m_k).
We compute

det Å(ξ) = (ν_0|ξ|^2)^{d−1} [ (ν_0|ξ|^2)^2 + (ε^2/4)|ξ|^4 ]

and infer

det Å(ξ) = 0 ⟺ ξ = 0.
According to [5], A(D) is an elliptic differential matrix operator of mixed order (a general system of Agmon–Douglis–Nirenberg type).
Next we shall be concerned with the following boundary value problem, with a parameter η in some closed sector of the complex plane with vertex at the origin, for the unknown functions u(x) := (u_d, u_{d−1}, …, u_1, u_0)^T:

A(D)u(x) − ηu(x) = f(x) in Ω,
B(D)u(x) = g(x) on ∂Ω,        (3.2.32)

where f(x) = (f_1(x), …, f_{d+1}(x))^T is a (d+1)×1 matrix function defined in Ω and g(x) = (g_1(x), …, g_{d+1}(x))^T is defined on ∂Ω. The (d+1)×(d+1) matrix operator B(D) describes the Dirichlet boundary conditions and is defined as

B(D) := I_{d+1},

where I_{d+1} denotes the (d+1)×(d+1) identity matrix. For B(D) we define another sequence of integers {r_j}_1^{d+1} by

r_1 = r_2 = ⋯ = r_d = 0,  r_{d+1} = −1;        (3.2.33)

then it is easy to see that the entries of B(D) are linear differential operators defined on ∂Ω of order not exceeding r_j + m_k, with r_j < m := s_j + m_j = 2, and defined to be zero if r_j + m_k < 0.
Before stating our main result we study the pullback of a linear differential operator. We first consider linear differential operators whose domain of definition and range consist of scalar-valued functions.
Definition 3.2.1. For two nonempty bounded open sets Ω, G ⊂ R^n let Φ : Ω → G be a bijection. Given a function u : G → C, the function Φ*u := u ∘ Φ : Ω → C is called the pullback of u. Assume Φ : Ω → G is a C^m-diffeomorphism, let 1 < p < ∞, and let A(x, D) be a linear differential operator on G of order m with domain W_p^m(G); then the operator B(x, D) := Φ*A := A(· ∘ Φ^{−1}) ∘ Φ, defined on Ω with domain W_p^m(Ω), is called the pullback of A(x, D).
Theorem 3.2.4. In Definition 3.2.1 assume the Jacobi matrix Φ′ of Φ is constant in x ∈ Ω, i.e., ∂_jΦ′ ≡ 0 (j = 1, …, n). Then

B(x, D) = A(Φ(x), [Φ′(x)]^{−T} D),        (3.2.34)
B̊(x, D) = Å(Φ(x), [Φ′(x)]^{−T} D),        (3.2.35)

where [·]^{−T} := ([·]^{−1})^T, (·)^T denotes the transpose, and B̊(x, D), Å(x, D) denote the principal parts of B(x, D) and A(x, D) respectively.
Proof. Without loss of generality assume (A(y, D)u)(y) = a(y)(D_{i_1} ⋯ D_{i_m} u)(y) with a function a : G → C and i_1, …, i_m ∈ {1, …, n}. Let v ∈ W_p^m(Ω); then v ∘ Φ^{−1} ∈ W_p^m(G) (this follows from Theorem 3.14 in [1]) and

∂_{i_1}(v ∘ Φ^{−1})(y) = Σ_{j_1=1}^{n} (∂_{j_1}v)(Φ^{−1}(y)) S_{j_1 i_1},        (3.2.36)

where (S_{jk})_{j,k=1,…,n} := (Φ^{−1})′. By the product rule,

∂_{i_2}∂_{i_1}(v ∘ Φ^{−1})(y) = Σ_{j_2=1}^{n} Σ_{j_1=1}^{n} (∂_{j_2}∂_{j_1}v)(Φ^{−1}(y)) S_{j_2 i_2} S_{j_1 i_1}.        (3.2.37)

Then it follows by iteration that

∂_{i_1} ⋯ ∂_{i_m}(v ∘ Φ^{−1})(y) = Σ_{j_m=1}^{n} ⋯ Σ_{j_1=1}^{n} (∂_{j_1} ⋯ ∂_{j_m}v)(Φ^{−1}(y)) Π_{l=1}^{m} S_{j_l i_l},        (3.2.38)

which implies

(A(y, D)(v ∘ Φ^{−1}))(y) = a(y) Σ_{j_m=1}^{n} ⋯ Σ_{j_1=1}^{n} (D_{j_1} ⋯ D_{j_m}v)(Φ^{−1}(y)) Π_{l=1}^{m} S_{j_l i_l}
= ( a(Φ(x)) Π_{l=1}^{m} [ ((S_{jk})_{j,k=1,…,n})^T (D_1, D_2, …, D_n)^T ]_{i_l} v )(x)

with y = Φ(x). Since (Φ^{−1})′ = [Φ′]^{−1}, the matrix S^T equals [Φ′(x)]^{−T}. Thus we obtain (3.2.34), and (3.2.35) follows as a direct consequence. □
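Since the Jacobian in Theorem 3.2.4 is constant, the identity (3.2.34) can be checked symbolically on a sample operator. The following sketch is illustrative only: the shear matrix L, the coefficient a and the test function v are arbitrary choices, not taken from the text; it verifies B(x, D)v = A(Φ(x), [Φ′]^{−T}D)v for A(y, D)u = a(y)D_1D_2u.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
L = sp.Matrix([[2, 1], [0, 1]])            # Phi(x) = L x, constant Jacobian
Linv = L.inv()

a = y1**2 + sp.sin(y2)                     # sample coefficient a(y)
v = sp.exp(x1)*sp.cos(x2) + x1*x2**2       # sample test function v(x)

# Left-hand side: (A(v o Phi^{-1})) o Phi with A(y, D)u = a(y) D_1 D_2 u, D = -i d/dy
xinv = Linv * sp.Matrix([y1, y2])          # Phi^{-1}(y)
u = v.subs({x1: xinv[0], x2: xinv[1]})     # v o Phi^{-1}
Au = a * (-sp.I)**2 * sp.diff(u, y1, y2)
yx = L * sp.Matrix([x1, x2])               # y = Phi(x)
lhs = Au.subs({y1: yx[0], y2: yx[1]})

# Right-hand side: A(Phi(x), [Phi']^{-T} D) v, i.e. a(Phi(x)) (S^T D)_1 (S^T D)_2 v
def SD(f, i):
    # ((L^{-1})^T D)_i applied to f, with D_k = -i d/dx_k
    return sum(Linv.T[i, k] * (-sp.I) * sp.diff(f, [x1, x2][k]) for k in range(2))

rhs = a.subs({y1: yx[0], y2: yx[1]}) * SD(SD(v, 1), 0)
assert sp.simplify(lhs - rhs) == 0         # (3.2.34) holds for this sample
```

Because the Jacobian is constant, the coefficients (S_{jk}) pass through the derivatives, which is exactly why the pullback is again an operator with the substituted symbol.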
Compared to the scalar case, the pullback of a vector field is defined in a different way. We consider only the special case that the transform Φ : Ω → G is a rotation, so that Φ is given by a rotation matrix. Let u : G → C^n be a vector field; then the pullback of u is Φ*u := Φ^{−1} ∘ u ∘ Φ : Ω → C^n. Now we give two important examples of the pullback of relevant operators acting on a vector field (or whose range is a vector field).

Example 3.2.1. Let Φ : Ω → G be a rotation and u : G → C a scalar field; then we obtain the pullback u ∘ Φ : Ω → C of u from Definition 3.2.1. Let A be the gradient, applied to u; then the corresponding differential operator on u ∘ Φ is again the gradient, i.e.,

∇(u ∘ Φ)(x) = Φ^{−1}∇u(y) with y = Φ(x).

Example 3.2.2. Suppose u : G → C^n is a vector field and A is the divergence, applied to u; then the divergence of Φ^{−1} ∘ u ∘ Φ : Ω → C^n equals A(u), i.e.,

div(Φ^{−1} ∘ u ∘ Φ)(x) = div u(y) with y = Φ(x).
Remark 3.2.1. From the point of view of field theory, the gradient of a scalar field and the divergence of a vector field describe intrinsic features of the scalar field and the vector field respectively; they are independent of the choice of the coordinate system (up to rotation).
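The two examples above can be verified symbolically; the concrete scalar and vector fields below are arbitrary sample choices, not taken from the text.

```python
import sympy as sp

x1, x2, y1, y2, th = sp.symbols('x1 x2 y1 y2 theta', real=True)
Q = sp.Matrix([[sp.cos(th), -sp.sin(th)],
               [sp.sin(th),  sp.cos(th)]])   # rotation Phi; Q^{-1} = Q^T
y = Q * sp.Matrix([x1, x2])                  # y = Phi(x)

# Example 3.2.1: gradient of the pullback of a scalar field u
u = sp.exp(y1)*sp.sin(y2)                    # sample scalar field on G
pb_u = u.subs({y1: y[0], y2: y[1]})          # u o Phi
grad_pb = sp.Matrix([sp.diff(pb_u, x1), sp.diff(pb_u, x2)])
grad_u = sp.Matrix([sp.diff(u, y1), sp.diff(u, y2)])
rhs = (Q.T * grad_u).subs({y1: y[0], y2: y[1]})
assert sp.simplify(grad_pb - rhs) == sp.zeros(2, 1)   # grad(u o Phi) = Phi^{-1} grad u

# Example 3.2.2: divergence of the pullback of a vector field w
w = sp.Matrix([y1**2*y2, sp.cos(y1)])        # sample vector field on G
pb_w = Q.T * w.subs({y1: y[0], y2: y[1]})    # Phi^{-1} o w o Phi
div_pb = sp.diff(pb_w[0], x1) + sp.diff(pb_w[1], x2)
div_w = (sp.diff(w[0], y1) + sp.diff(w[1], y2)).subs({y1: y[0], y2: y[1]})
assert sp.simplify(div_pb - div_w) == 0      # div(Phi^{-1} o w o Phi) = (div w) o Phi
```

Both checks rely only on Q^TQ = I, which is the rotation invariance expressed in Remark 3.2.1.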
After these preparations we have the following
Theorem 3.2.5. The boundary value problem (3.2.32) is elliptic with parameter in any sector L in the complex plane with vertex at the origin such that L ⊂ S_1 ∪ S_2, where

S_1 := { η ∈ C : −π + arctg(ε/(2ν_0)) < arg η < −arctg(ε/(2ν_0)) },
S_2 := { η ∈ C : arctg(ε/(2ν_0)) < arg η < π − arctg(ε/(2ν_0)) }.

Assume ε/(2ν_0) ≤ √3; then the boundary value problem (3.2.32) is elliptic with parameter in any sector L with

L ⊂ { z ∈ C : arg z ≠ 0, ±arctg(ε/(2ν_0)) }

in the sense of Definition 2.2.2.
Proof. For d = 1,

det(Å(ξ) − ηI) = 0 ⟹ η = ν_0|ξ|^2 ± i(ε/2)|ξ|^2;

for d > 1,

det(Å(ξ) − ηI) = 0 ⟹ η = ν_0|ξ|^2 or η = ν_0|ξ|^2 ± i(ε/2)|ξ|^2.

Then the first condition of Definition 2.2.2 follows immediately.
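For d = 2 these roots can be confirmed symbolically. The matrix below is the principal symbol assembled from Å(ξ′, D_d), displayed further on in this section, with D_d replaced by ξ_2; this assembly is an assumption of the sketch, not a display taken verbatim from the text.

```python
import sympy as sp

nu0, eps = sp.symbols('nu0 eps', positive=True)
eta, xi1, xi2 = sp.symbols('eta xi1 xi2')
s = xi1**2 + xi2**2                           # |xi|^2

# Principal symbol of A(D) for d = 2 (D_d -> xi_2 in the matrix of (3.2.40))
A = sp.Matrix([
    [nu0*s, 0,     -sp.I*(eps**2/4)*s*xi2],
    [0,     nu0*s, -sp.I*(eps**2/4)*s*xi1],
    [-sp.I*xi2, -sp.I*xi1, nu0*s]])

charpoly = (A - eta*sp.eye(3)).det()

# det(A(xi) - eta I) = (nu0|xi|^2 - eta)[(nu0|xi|^2 - eta)^2 + (eps^2/4)|xi|^4],
# hence eta = nu0|xi|^2 or eta = nu0|xi|^2 +- i(eps/2)|xi|^2, as claimed.
target = (nu0*s - eta)*((nu0*s - eta)**2 + (eps**2/4)*s**2)
assert sp.expand(charpoly - target) == 0

# In particular det A(xi) = nu0(nu0^2 + eps^2/4)|xi|^6 vanishes only at xi = 0.
assert sp.expand(A.det() - nu0*(nu0**2 + eps**2/4)*s**3) == 0
```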
Fix now x_0 ∈ ∂Ω and let Φ_{x_0} denote the coordinate transformation from the original coordinate system into a local coordinate system at x_0 (here Φ_{x_0}(0) = x_0 and ν ↦ e_n, where ν is the interior normal to ∂Ω at x_0 and (e_1, …, e_n) denotes the standard basis in R^n). We achieve this coordinate transformation via a translation and a rotation. Notice that the operator under consideration reads
( −ν_0Δ                      −div
  −T_0∇ + (ε^2/4)∇Δ          −ν_0Δ + τ^{−1} ),        (3.2.39)

then from Examples 3.2.1, 3.2.2 and Δ = ∇ · ∇ we can rewrite this operator in the local coordinate system associated with x_0 in the same form as (3.2.39). Thus the principal part of the pullback of A(D) reads the same as that of A(D).
Checking the Lopatinskii–Shapiro condition (the second condition of Definition 2.2.2) will be carried out with the help of [5] (pp. 45–51), since we consider the following problem:

Å(ξ′, D_d)v(t) − ηv(t) = 0 for t = x_d > 0,
v(t) = 0 at t = 0,        (3.2.40)
|v(t)| → 0 as t → ∞,

which is a system of linear ordinary differential equations with constant (complex) coefficients, and
Å(ξ′, D_d) :=

( ν_0|ξ′|^2 + ν_0D_d^2    0    ⋯    0    Å_d
  0    ν_0|ξ′|^2 + ν_0D_d^2    ⋯    ⋮    Å_{d−1}
  ⋮    ⋮    ⋱    0    ⋮
  0    0    ⋯    ν_0|ξ′|^2 + ν_0D_d^2    Å_1
  −iD_d    −iξ_{d−1}    ⋯    −iξ_1    ν_0|ξ′|^2 + ν_0D_d^2 )

with

Å_d := −i(ε^2/4)|ξ′|^2 D_d − i(ε^2/4)D_d^3,
Å_j := −i(ε^2/4)ξ_j|ξ′|^2 − i(ε^2/4)ξ_j D_d^2,  j = 1, …, d − 1.
We apply the results of S. Agmon, A. Douglis, and L. Nirenberg in [5] pp. 45-51 and record the corresponding theorem here for easy reference.
Theorem (see [5]). Let L(D_d) and L_{jk}(D_d) denote det(Å(ξ′, D_d) − ηI_{d+1}) and the adjugate of the matrix Å(ξ′, D_d) − ηI_{d+1}, respectively. Assume the polynomial L(r) has exactly p_+ complex roots with Im r > 0, and L(r) = L_+(r)L_−(r), where the roots of L_+(r) are those roots of L(r) for which Im r > 0. Let ξ′ ∈ R^{d−1}, η ∈ L and |ξ′| + |η| ≠ 0. Then the following statements are all equivalent to one another:

1. The Complementing Condition 2 in [5] (p. 48) holds, i.e., the rows of the p_+ × (d+1) matrix (B_{σj}(r)L_{jk}(r)) are linearly independent modulo the polynomial L_+(r); that is, the equations

Σ_σ C_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r))

imply that the constants C_σ are all zero, where B_{σj}(r) is a submatrix of B(r) with σ = 1, …, p_+, j = 1, …, d+1.

2. The problem

Å(ξ′, D_d)v(t) − ηv(t) = 0 for t = x_d > 0,
B_{σj}(D)v_j(t) = a_σ at t = 0 (σ = 1, …, p_+),        (3.2.41)
|v(t)| → 0 (exponentially) as t → ∞

has a solution for arbitrary a_σ ∈ C.

3. The problem

Å(ξ′, D_d)v(t) − ηv(t) = 0 for t = x_d > 0,
B_{σj}(D)v_j(t) = 0 at t = 0 (σ = 1, …, p_+),        (3.2.42)
|v(t)| → 0 (exponentially) as t → ∞

has a unique solution: v_j = 0 is the only exponentially decaying solution.
We will apply this theorem to complete the proof.
After some calculations we obtain

L(r) = (ν_0(|ξ′|^2 + r^2) − η)^{d−1} [ (ν_0(|ξ′|^2 + r^2) − η)^2 + (ε^2/4)(|ξ′|^2 + r^2)^2 ]

and

L(r) = 0 ⟺ r^2 = η/ν_0 − |ξ′|^2 or r^2 = (4/(4ν_0^2 + ε^2))(ν_0 − iε/2)η − |ξ′|^2 or r^2 = (4/(4ν_0^2 + ε^2))(ν_0 + iε/2)η − |ξ′|^2.

Since, according to the assumptions, the argument of η equals neither 0 nor ±arctg(ε/(2ν_0)), L(r) has exactly d + 1 complex roots with positive imaginary part. Define y := r^2 + |ξ′|^2 and let q_1, q_2, q_3 denote the roots of L(r) with positive imaginary part, where

q_1^2 = η/ν_0 − |ξ′|^2,
q_2^2 = (4/(4ν_0^2 + ε^2))(ν_0 − iε/2)η − |ξ′|^2,
q_3^2 = (4/(4ν_0^2 + ε^2))(ν_0 + iε/2)η − |ξ′|^2;

then

L_+(r) = (r − q_1)^{d−1}(r − q_2)(r − q_3).
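That the stated values of q_2^2 and q_3^2 indeed annihilate the quadratic factor of L(r) can be checked directly; this is a small verification sketch in the variable y = |ξ′|^2 + r^2.

```python
import sympy as sp

nu0, eps = sp.symbols('nu0 eps', positive=True)
eta, y = sp.symbols('eta y')
c = 4/(4*nu0**2 + eps**2)

# The quadratic factor of L(r), written in y = |xi'|^2 + r^2:
quad = (nu0*y - eta)**2 + (eps**2/4)*y**2

# y = c*(nu0 -+ i*eps/2)*eta, i.e. r^2 = q_2^2 resp. q_3^2, annihilates it,
# since then nu0*y - eta = +-(i*eps/2)*y.
for sign in (1, -1):
    assert sp.simplify(quad.subs(y, c*(nu0 + sign*sp.I*eps/2)*eta)) == 0
```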
Now we compute the (d+1)×(d+1) matrix (B_{σj}(r)L_{jk}(r)) for d = 2 and d = 3 separately.

d = 2:

(B_{σj}(r)L_{jk}(r)) =

( (ν_0y − η)^2 + (ε^2/4)|ξ′|^2 y    −(ε^2/4)rξ_1 y    i(ν_0y − η)(ε^2/4)ry
  −(ε^2/4)ξ_1 ry    (ν_0y − η)^2 + (ε^2/4)r^2 y    i(ν_0y − η)(ε^2/4)ξ_1 y
  ir(ν_0y − η)    iξ_1(ν_0y − η)    (ν_0y − η)^2 )
d = 3:

(B_{σj}(r)L_{jk}(r)) = ((B_{σj}(r)L_{jk}(r))_{ij}) (i, j = 1, …, 4), where the entries read as follows:

(B_{σj}(r)L_{jk}(r))_{11} = (ν_0y − η)[(ν_0y − η)^2 + (ε^2/4)|ξ′|^2 y],
(B_{σj}(r)L_{jk}(r))_{12} = −(ν_0y − η)(ε^2/4)ξ_2 ry,
(B_{σj}(r)L_{jk}(r))_{13} = −(ν_0y − η)(ε^2/4)ξ_1 ry,
(B_{σj}(r)L_{jk}(r))_{14} = i(ν_0y − η)^2(ε^2/4)ry,
(B_{σj}(r)L_{jk}(r))_{21} = −(ν_0y − η)(ε^2/4)ξ_2 ry,
(B_{σj}(r)L_{jk}(r))_{22} = (ν_0y − η)[(ν_0y − η)^2 + (ε^2/4)y(r^2 + ξ_1^2)],
(B_{σj}(r)L_{jk}(r))_{23} = −(ν_0y − η)(ε^2/4)ξ_1ξ_2 y,
(B_{σj}(r)L_{jk}(r))_{24} = i(ν_0y − η)^2(ε^2/4)ξ_2 y,
(B_{σj}(r)L_{jk}(r))_{31} = −(ν_0y − η)(ε^2/4)ξ_1 ry,
(B_{σj}(r)L_{jk}(r))_{32} = −(ν_0y − η)(ε^2/4)ξ_1ξ_2 y,
(B_{σj}(r)L_{jk}(r))_{33} = (ν_0y − η)[(ν_0y − η)^2 + (ε^2/4)(r^2 + ξ_2^2)y],
(B_{σj}(r)L_{jk}(r))_{34} = i(ν_0y − η)^2(ε^2/4)ξ_1 y,
(B_{σj}(r)L_{jk}(r))_{41} = ir(ν_0y − η)^2,
(B_{σj}(r)L_{jk}(r))_{42} = iξ_2(ν_0y − η)^2,
(B_{σj}(r)L_{jk}(r))_{43} = iξ_1(ν_0y − η)^2,
(B_{σj}(r)L_{jk}(r))_{44} = (ν_0y − η)^3.
The following cases need to be considered.

1. Case 1: η = 0.
According to the assumption |ξ′| + |η| ≠ 0 we have |ξ′| ≠ 0. From the calculations above we obtain q_1 = q_2 = q_3 = i|ξ′| and

L_+(r) = (r − i|ξ′|)^{d+1}.
We consider two cases:

(a) d = 2:
The equations

Σ_σ a_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r))  (a_σ ∈ C, σ = 1, 2, 3)

imply that i|ξ′| is a root of multiplicity 3 of the linear combination of the first column of (B_{σj}(r)L_{jk}(r)):

l_1(r) := a_1ν_0^2y^2 + a_1(ε^2/4)|ξ′|^2 y − a_2(ε^2/4)ξ_1 ry + ia_3 rν_0 y
        = y [ a_1ν_0^2 y + a_1(ε^2/4)|ξ′|^2 − a_2(ε^2/4)ξ_1 r + ia_3 rν_0 ]
        = (r − i|ξ′|)(r + i|ξ′|) [ a_1ν_0^2 y + a_1(ε^2/4)|ξ′|^2 − a_2(ε^2/4)ξ_1 r + ia_3 rν_0 ].

Since a_1ν_0^2 y + a_1(ε^2/4)|ξ′|^2 − a_2(ε^2/4)ξ_1 r + ia_3 rν_0 is a polynomial of degree 2 in r and must contain the factor (r − i|ξ′|)^2, it follows that

a_1ν_0^2 y + a_1(ε^2/4)|ξ′|^2 − a_2(ε^2/4)ξ_1 r + ia_3 rν_0 = a_1ν_0^2 (r − i|ξ′|)^2,

which is equivalent to

a_1ν_0^2 r^2 − 2ia_1ν_0^2|ξ′|r − a_1ν_0^2|ξ′|^2 = a_1ν_0^2 r^2 + (ia_3ν_0 − a_2(ε^2/4)ξ_1) r + a_1|ξ′|^2(ν_0^2 + ε^2/4).

By comparison of coefficients we obtain

−a_1ν_0^2|ξ′|^2 = a_1|ξ′|^2(ν_0^2 + ε^2/4),        (3.2.43)

which implies

a_1 = 0.        (3.2.44)

Then the linear combination of the second column of (B_{σj}(r)L_{jk}(r)) reduces to

l_2(r) := a_2ν_0^2y^2 + a_2(ε^2/4)r^2 y + ia_3ξ_1ν_0 y
        = (r − i|ξ′|)(r + i|ξ′|) [ (a_2ν_0^2 + a_2 ε^2/4) r^2 + a_2ν_0^2|ξ′|^2 + ia_3ξ_1ν_0 ].

By the same reasoning,

(a_2ν_0^2 + a_2 ε^2/4) r^2 + a_2ν_0^2|ξ′|^2 + ia_3ξ_1ν_0 = (a_2ν_0^2 + a_2 ε^2/4)(r − i|ξ′|)^2,

from which it follows that

a_2 = 0.        (3.2.45)

Then a_3 = 0 follows immediately. Combining (3.2.44) and (3.2.45), the rows of (B_{σj}(r)L_{jk}(r)) are linearly independent modulo the polynomial L_+(r).
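The elimination a_1 = a_2 = a_3 = 0 carried out above can be reproduced mechanically: reduce each column combination modulo L_+(r) = (r − i|ξ′|)^3 and solve the resulting linear system. The matrix below is the adjugate at η = 0 written with |ξ′| = ξ_1 (a sketch assuming that identification).

```python
import sympy as sp

r, x, nu0, eps = sp.symbols('r xi1 nu0 eps', positive=True)
a1, a2, a3 = sp.symbols('a1 a2 a3')
e = eps**2/4
y = r**2 + x**2                        # y = r^2 + |xi'|^2, eta = 0, d = 2

# Adjugate matrix (B_sigma_j L_jk) at eta = 0, entries as listed above.
M = sp.Matrix([
    [nu0**2*y**2 + e*x**2*y, -e*r*x*y,               sp.I*e*nu0*r*y**2],
    [-e*x*r*y,               nu0**2*y**2 + e*r**2*y, sp.I*e*nu0*x*y**2],
    [sp.I*nu0*r*y,           sp.I*nu0*x*y,           nu0**2*y**2]])

Lplus = sp.expand((r - sp.I*x)**3)     # L_+(r) = (r - i|xi'|)^3
eqs = []
for k in range(3):                     # one congruence per column
    lk = a1*M[0, k] + a2*M[1, k] + a3*M[2, k]
    rem = sp.rem(sp.expand(lk), Lplus, r)
    eqs += [sp.expand(rem).coeff(r, j) for j in range(3)]

sol = sp.solve(eqs, [a1, a2, a3], dict=True)
assert sol == [{a1: 0, a2: 0, a3: 0}]  # only the trivial combination survives
```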
(b) d = 3:
By the same procedure as in the case d = 2, the equations

Σ_σ a_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r))  (a_σ ∈ C, σ = 1, 2, 3, 4)

imply that i|ξ′| is a root of multiplicity 4 of the linear combination of the first column of (B_{σj}(r)L_{jk}(r)):

L_1(r) := a_1ν_0y[ν_0^2y^2 + (ε^2/4)|ξ′|^2 y] − a_2ν_0(ε^2/4)ξ_2 ry^2 − a_3ν_0(ε^2/4)ξ_1 ry^2 + ia_4ν_0^2 ry^2
        = [ a_1ν_0^3 r^2 + (ia_4ν_0^2 − a_2ν_0(ε^2/4)ξ_2 − a_3ν_0(ε^2/4)ξ_1) r + a_1ν_0^3|ξ′|^2 + a_1ν_0(ε^2/4)|ξ′|^2 ] (r − i|ξ′|)^2(r + i|ξ′|)^2.        (3.2.46)

The first factor of L_1(r) is a polynomial of degree 2 and must contain the factor (r − i|ξ′|)^2; thus

a_1ν_0^3 r^2 + (ia_4ν_0^2 − a_2ν_0(ε^2/4)ξ_2 − a_3ν_0(ε^2/4)ξ_1) r + a_1ν_0^3|ξ′|^2 + a_1ν_0(ε^2/4)|ξ′|^2
= a_1ν_0^3(r − i|ξ′|)^2 = a_1ν_0^3 r^2 − 2ia_1ν_0^3|ξ′|r − a_1ν_0^3|ξ′|^2.

Comparison of coefficients yields

a_1 = 0.        (3.2.47)

Then the linear combination of the second column of (B_{σj}(r)L_{jk}(r)) reduces to

L_2(r) := a_2ν_0y[ν_0^2y^2 + (ε^2/4)y(r^2 + ξ_1^2)] − a_3ν_0(ε^2/4)ξ_1ξ_2 y^2 + ia_4ξ_2ν_0^2 y^2
        = (r − i|ξ′|)^2(r + i|ξ′|)^2 [ a_2ν_0^3 y + a_2ν_0(ε^2/4)(r^2 + ξ_1^2) − a_3ν_0(ε^2/4)ξ_1ξ_2 + ia_4ξ_2ν_0^2 ]
        = (r − i|ξ′|)^2(r + i|ξ′|)^2 R,

where

R := (a_2ν_0^3 + a_2ν_0 ε^2/4) r^2 + a_2ν_0^3|ξ′|^2 + a_2ν_0(ε^2/4)ξ_1^2 − a_3ν_0(ε^2/4)ξ_1ξ_2 + ia_4ξ_2ν_0^2.

R is a polynomial of degree 2 in r and contains the factor (r − i|ξ′|)^2; thus

R = (a_2ν_0^3 + a_2ν_0 ε^2/4)(r − i|ξ′|)^2.

Comparison of coefficients yields

a_2 = 0.        (3.2.48)

Then the linear combination of the third column of (B_{σj}(r)L_{jk}(r)) reduces to

L_3(r) := a_3ν_0y[ν_0^2y^2 + (ε^2/4)(r^2 + ξ_2^2)y] + ia_4ξ_1ν_0^2 y^2
        = (r − i|ξ′|)^2(r + i|ξ′|)^2 [ (a_3ν_0^3 + a_3ν_0 ε^2/4) r^2 + a_3ν_0^3|ξ′|^2 + a_3ν_0(ε^2/4)ξ_2^2 + ia_4ξ_1ν_0^2 ].

The same reasoning provides

(a_3ν_0^3 + a_3ν_0 ε^2/4) r^2 + a_3ν_0^3|ξ′|^2 + a_3ν_0(ε^2/4)ξ_2^2 + ia_4ξ_1ν_0^2 = (a_3ν_0^3 + a_3ν_0 ε^2/4)(r − i|ξ′|)^2.

Comparison of coefficients yields

a_3 = 0.        (3.2.49)

Then the linear combination of the fourth column of (B_{σj}(r)L_{jk}(r)) is

L_4(r) := a_4ν_0^3(r − i|ξ′|)^3(r + i|ξ′|)^3.

That L_4(r) must contain the factor (r − i|ξ′|)^4 implies

a_4 = 0.        (3.2.50)

Thus (3.2.47)–(3.2.50) yield that the rows of (B_{σj}(r)L_{jk}(r)) are linearly independent modulo the polynomial L_+(r).
2. Case 2: η ≠ 0.
In this case q_1, q_2, q_3 are pairwise distinct and we again consider two cases.

(a) d = 2:
The equations

Σ_σ a_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r))  (a_σ ∈ C, σ = 1, 2, 3)

are equivalent to l_i(q_j) = 0 (i, j = 1, 2, 3), where l_i(r) denotes the corresponding linear combination of the i-th column. Since q_1 is a root of the linear combination of the second column of (B_{σj}(r)L_{jk}(r)),

l_2(r) := −a_1(ε^2/4)rξ_1 y + a_2[(ν_0y − η)^2 + (ε^2/4)r^2 y] + ia_3ξ_1(ν_0y − η),        (3.2.51)

the condition l_2(q_1) = 0 is equivalent to

a_1ξ_1 − a_2q_1 = 0.        (3.2.52)

By the same reasoning, q_2 and q_3 are roots of the linear combination of the third column of (B_{σj}(r)L_{jk}(r)),

l_3(r) := ia_1(ν_0y − η)(ε^2/4)ry + ia_2(ν_0y − η)(ε^2/4)ξ_1 y + a_3(ν_0y − η)^2;

l_3(q_2) = 0 is equivalent to

(ε/2)a_1q_2 + (ε/2)a_2ξ_1 − a_3 = 0;        (3.2.53)

l_3(q_3) = 0 is equivalent to

(ε/2)a_1q_3 + (ε/2)a_2ξ_1 + a_3 = 0.        (3.2.54)

Now we compute l_1(q_1) = 0, l_1(q_2) = 0, l_1(q_3) = 0, l_2(q_2) = 0, l_2(q_3) = 0, l_3(q_1) = 0 and obtain

l_1(q_1) = 0 ⟺ ξ_1(a_1ξ_1 − a_2q_1) = 0;
l_1(q_2) = 0 ⟺ (ε/2)a_1q_2 + (ε/2)a_2ξ_1 − a_3 = 0;
l_1(q_3) = 0 ⟺ (ε/2)a_1q_3 + (ε/2)a_2ξ_1 + a_3 = 0;
l_2(q_2) = 0 ⟺ [(ε/2)a_1q_2 + (ε/2)a_2ξ_1 − a_3]ξ_1 = 0;
l_2(q_3) = 0 ⟺ [(ε/2)a_1q_3 + (ε/2)a_2ξ_1 + a_3]ξ_1 = 0;
l_3(q_1) = 0 ⟺ (a_1, a_2, a_3)^T ∈ C^3 is arbitrary.

Finally it is easy to verify that l_i(q_j) = 0 (i, j = 1, 2, 3) is equivalent to (3.2.52), (3.2.53) and (3.2.54). The system (3.2.52)–(3.2.54) with respect to (a_1, a_2, a_3)^T has only the trivial solution iff

det ( ξ_1        −q_1        0
      (ε/2)q_2   (ε/2)ξ_1   −1
      (ε/2)q_3   (ε/2)ξ_1    1 ) ≠ 0,

which is equivalent to

ξ_1^2 + (1/2)q_1(q_2 + q_3) ≠ 0.

Thus we obtain that the following two statements are equivalent:

i. Σ_σ a_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r)) (a_σ ∈ C, σ = 1, 2, 3) implies that the constants a_σ are all zero;
ii. ξ_1^2 + (1/2)q_1(q_2 + q_3) ≠ 0.
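The equivalence of the determinant condition with statement ii can be confirmed by expanding the 3×3 determinant; the factor ε cancels out.

```python
import sympy as sp

xi1, q1, q2, q3, eps = sp.symbols('xi1 q1 q2 q3 eps')

# Coefficient matrix of the system (3.2.52)-(3.2.54) for (a1, a2, a3)^T.
M = sp.Matrix([
    [xi1,        -q1,        0],
    [eps/2*q2, eps/2*xi1, -1],
    [eps/2*q3, eps/2*xi1,  1]])

# det = eps*(xi1^2 + q1*(q2 + q3)/2), so the system has only the trivial
# solution iff xi1^2 + (1/2)*q1*(q2 + q3) != 0 (the factor eps is irrelevant).
assert sp.expand(M.det() - eps*(xi1**2 + q1*(q2 + q3)/2)) == 0
```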
(b) d = 3:
Since each entry of (B_{σj}(r)L_{jk}(r)) has the factor ν_0y − η = ν_0(r + q_1)(r − q_1), the equations

Σ_σ a_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r))  (a_σ ∈ C, σ = 1, 2, 3, 4)

are equivalent to L_i(q_j) = 0 (i = 1, 2, 3, 4; j = 1, 2, 3), where L_i(r) := l_i(r)/(ν_0y − η) and l_i(r) denotes the corresponding linear combination of the i-th column of (B_{σj}(r)L_{jk}(r)). Some calculations yield that the conditions L_i(q_j) = 0 (i = 1, 2, 3, 4; j = 1, 2, 3) are equivalent to the equations

a_1|ξ′|^2 − a_2ξ_2q_1 − a_3ξ_1q_1 = 0,
−a_1ξ_2q_1 + a_2(q_1^2 + ξ_1^2) − a_3ξ_1ξ_2 = 0,
−a_1ξ_1q_1 − a_2ξ_1ξ_2 + a_3(q_1^2 + ξ_2^2) = 0,        (3.2.55)
(ε/2)(a_1q_2 + a_2ξ_2 + a_3ξ_1) − a_4 = 0,
(ε/2)(a_1q_3 + a_2ξ_2 + a_3ξ_1) + a_4 = 0.

We consider (3.2.55) as a system of linear equations

B(a_1, a_2, a_3, a_4)^T = 0

with coefficient matrix B. Recalling the case d = 2, we obtain the conclusion that for both cases (d = 2, 3) and η ≠ 0 the rows of the p_+ × (d+1) matrix (B_{σj}(r)L_{jk}(r)) are linearly independent modulo the polynomial L_+(r), i.e., the equations

Σ_σ C_σ B_{σj}(r)L_{jk}(r) ≡ 0 (mod L_+(r))

imply that the constants C_σ are all zero, iff

|ξ′|^2 + (1/2)q_1(q_2 + q_3) ≠ 0.        (3.2.56)

In the case ξ′ = 0, (3.2.56) holds for all η ∈ L since q_1 ≠ 0 and q_2 + q_3 ≠ 0; thus we consider only the case ξ′ ≠ 0. We have the implication
|ξ′|^2 + (1/2)q_1(q_2 + q_3) = 0
⟹ (2|ξ′|^2 + q_1(q_2 + q_3))(2|ξ′|^2 − q_1(q_2 + q_3))(2|ξ′|^2 + q_1(q_2 − q_3))(2|ξ′|^2 − q_1(q_2 − q_3)) = 0.        (3.2.57)

Expanding the right-hand equation of (3.2.57) with respect to η, we find

(ε^2c^2/ν_0^2)η^3 − (2ε^2c^2|ξ′|^2/ν_0)η^2 + |ξ′|^4(ε^2c^2 + 16c)η − 16|ξ′|^6(ν_0c + 1/ν_0) = 0,

which is equivalent to

P(η) := η^3 − 2ν_0|ξ′|^2η^2 + ν_0^2|ξ′|^4(5 + 16ν_0^2/ε^2)η − (16|ξ′|^6ν_0^2/(cε^2))(2ν_0 + ε^2/(4ν_0)) = 0,        (3.2.58)

where c := 4/(4ν_0^2 + ε^2).
P(η) is a polynomial of degree 3 in η, and we check that

P(0) = −(16|ξ′|^6ν_0^2/(cε^2))(2ν_0 + ε^2/(4ν_0)) < 0;

hence P(η) has a positive real root. The discriminant Δ of P(η) reads

Δ = p^3/27 + q^2/4,

with

p := ν_0^2|ξ′|^4 (11/3 + 16ν_0^2/ε^2),
q := ν_0^2|ξ′|^6 [ −(16/27)ν_0 + (2ν_0/3)(5 + 16ν_0^2/ε^2) − (16/(cε^2))(2ν_0 + ε^2/(4ν_0)) ].

Since p > 0 we have Δ > 0, and by the standard criterion for cubics P(η) = 0 has one real root and a pair of complex conjugate roots. Factor P(η) as

P(η) = (η − α)(η^2 + βη + γ),

where α denotes the positive real root of P(η) = 0 and β, γ denote the resulting coefficients. Comparison of coefficients yields

β = α − 2ν_0|ξ′|^2.
Substituting 2ν_0|ξ′|^2 into P(η) we obtain P(2ν_0|ξ′|^2) < 0; since α > 0 is the only real root of P(η) = 0 and P(η) → +∞ as η → +∞, we conclude 2ν_0|ξ′|^2 < α, thus β > 0. Then the two complex conjugate roots of P(η) = 0 are situated in the second and the third quadrant of the complex plane respectively.
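The location of the roots of P(η) can be illustrated numerically; the values of ν_0, ε and s = |ξ′|^2 below are sample choices (satisfying ε/(2ν_0) ≤ √3), not values from the text.

```python
import numpy as np

# Sample values with eps/(2*nu0) <= sqrt(3); s = |xi'|^2.
nu0, eps, s = 1.0, 2.0, 1.5
c = 4.0/(4.0*nu0**2 + eps**2)

# Coefficients of P(eta) as in (3.2.58).
coeffs = [1.0,
          -2.0*nu0*s,
          nu0**2 * s**2 * (5.0 + 16.0*nu0**2/eps**2),
          -16.0*s**3*nu0**2/(c*eps**2) * (2.0*nu0 + eps**2/(4.0*nu0))]
roots = np.roots(coeffs)

real_roots = [r for r in roots if abs(r.imag) < 1e-9]
cplx_roots = [r for r in roots if abs(r.imag) >= 1e-9]

assert len(real_roots) == 1 and real_roots[0].real > 2.0*nu0*s   # alpha > 2*nu0*|xi'|^2
assert len(cplx_roots) == 2 and all(r.real < 0 for r in cplx_roots)  # 2nd/3rd quadrant
# The conjugate pair lies inside S1 and S2: |arg| < pi - arctg(eps/(2*nu0)).
bound = np.pi - np.arctan(eps/(2.0*nu0))
assert all(abs(np.angle(r)) < bound for r in cplx_roots)
```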
Notice that (3.2.56) holds for any η ∈ S_1 ∪ S_2, where

S_1 := { x ∈ C : −π + arctg(ε/(2ν_0)) < arg x < −arctg(ε/(2ν_0)) },
S_2 := { x ∈ C : arctg(ε/(2ν_0)) < arg x < π − arctg(ε/(2ν_0)) },

since (1/2)q_1(q_2 + q_3) always has a negative imaginary part for η ∈ S_1 and a positive imaginary part for η ∈ S_2.
Furthermore, from P(η) = 0 we see

η[(ε^2c^2/ν_0^2)η^2 + |ξ′|^4(ε^2c^2 + 16c)] = 2|ξ′|^2[(ε^2c^2/ν_0)η^2 + 8|ξ′|^4(ν_0c + 1/ν_0)].

A necessary condition for P(η) = 0 is that the argument of the complex root of P(η) = 0 with positive imaginary part is greater than or equal to 0 and less than (2/3)π; otherwise the left-hand side η[(ε^2c^2/ν_0^2)η^2 + |ξ′|^4(ε^2c^2 + 16c)] lies in the first or the second quadrant while the right-hand side 2|ξ′|^2[(ε^2c^2/ν_0)η^2 + 8|ξ′|^4(ν_0c + 1/ν_0)] lies in the third or the fourth quadrant, i.e., the two sides would be situated in different quadrants.
Assume ε/(2ν_0) ≤ √3; then arctg(ε/(2ν_0)) ≤ π/3, hence π − arctg(ε/(2ν_0)) ≥ (2/3)π, and the complex conjugate roots of P(η) = 0 lie in S_1 ∪ S_2. Consequently (3.2.56) holds for all η ∈ C except on the nonnegative real axis. □
Theorem 3.2.6. Let 1 < p < ∞. There exists an η_0 > 0 such that for all η ∈ L with |η| ≥ η_0, where L is defined in Theorem 3.2.5, the boundary value problem (3.2.32) has a unique solution u(x) = (u_d, u_{d−1}, …, u_1, u_0) ∈ Π_{j=1}^{d+1} W_p^{m_j+m}(Ω) for any f = (f_1, …, f_{d+1}) ∈ Π_{j=1}^{d+1} W_p^{m_j}(Ω) and g = (g_1, …, g_{d+1}) ∈ Π_{j=1}^{d+1} W_p^{2−r_j−1/p}(∂Ω), where the sequences of integers {m_j}_1^{d+1} and {r_j}_1^{d+1} are defined in (3.2.30) and (3.2.33) respectively. Furthermore, the a priori estimate

Σ_{j=0}^{d} |||u_j|||_{m_{d+1−j}+2, p, Ω} ≤ c ( Σ_{j=1}^{d+1} |||f_{d+2−j}|||_{m_{d+2−j}, p, Ω} + Σ_{j=1}^{d+1} |||g_{d+2−j}|||_{2−r_{d+2−j}−1/p, p, ∂Ω} )        (3.2.59)

holds, where the constant c does not depend on f, g, η. Here we use norms depending on the parameter η ∈ C∖{0}: for v ∈ W_p^s(Ω) and g ∈ W_p^{s−1/p}(∂Ω), where s is an integer satisfying 1 ≤ s ≤ m,

|||v|||_{s,p,Ω} := ‖v‖_{W_p^s(Ω)} + |η|^{s/m} ‖v‖_{L_p(Ω)},        (3.2.60)
|||g|||_{s−1/p, p, ∂Ω} := ‖g‖_{W_p^{s−1/p}(∂Ω)} + |η|^{(s−1/p)/m} ‖g‖_{L_p(∂Ω)}.        (3.2.61)

Proof. Since the boundary value problem (3.2.32) is elliptic with parameter in L by Theorem 3.2.5, the existence, uniqueness and the a priori estimate follow directly from Theorem 2.2.1. □