
Appendix B Proofs of the theoretical results from Section 2.2

2. It is important to note that the assertion of the lemma does not depend on the concrete form of $\eta_i$ and hence will also hold if $\eta_i$ contains the additional part $\int_a^b\beta(t)X_i(t)\,dt$, where it is assumed that $\beta(t)\in L^2([a,b])$ with $|\beta(t)|\le M_\beta$ for some constant $M_\beta<\infty$. Following the remark, in the proofs leading to Theorem 2.1 we will assume that $\eta_i$ is given by $\eta_i=\alpha+\sum_{r=1}^{S}\beta_r X_i(\tau_r)+\int_a^b\beta(t)X_i(t)\,dt$, where $\beta(t)\in L^2([a,b])$ with $|\beta(t)|\le M_\beta$ for some constant $M_\beta<\infty$. Theorem 2.1 may then be recovered by letting $\beta(t)\equiv 0$.

We now focus on the lemmas needed to prove Theorem 2.1. Under the moment condition given in Assumption 2.2 one may adapt Lemma 1 and Lemma 2 from Kneip et al. (2016b) to our setting:

Lemma B.1. Under Assumption 2.2 there exist constants $0<D_1<\infty$ and $0<D_2<\infty$ such that for all $n$, all $0<\delta<(b-a)/2$, all $t\in[a+\delta,b-\delta]$, all $0<s\le 1/2$ with $\delta^{\kappa}s^{\kappa}\le 2$, and every $0<z\le\sqrt{n}$ we obtain

$$
P\left(\sup_{t-s\delta\le u\le t+s\delta}\Big|\frac{1}{n}\sum_{i=1}^{n}\big[(Z_{\delta,i}(t)-Z_{\delta,i}(u))Y_i-E\big((Z_{\delta,i}(t)-Z_{\delta,i}(u))Y_i\big)\big]\Big|\le z\,D_1\sqrt{\frac{\delta^{\kappa}s^{\kappa}}{n}}\right)\ge 1-2\exp(-z^2)
\tag{B.2}
$$

and

$$
P\left(\sup_{t-s\delta\le u\le t+s\delta}\Big|\frac{1}{n}\sum_{i=1}^{n}\big[(Z_{\delta,i}(t)^2-Z_{\delta,i}(u)^2)-E\big(Z_{\delta,i}(t)^2-Z_{\delta,i}(u)^2\big)\big]\Big|\le z\,D_2\,\delta^{\kappa}\sqrt{\frac{s^{\kappa}}{n}}\right)\ge 1-2\exp(-z^2).
\tag{B.3}
$$

Proof of Lemma B.1. Assertion (B.3) follows directly from Lemma 1 in Kneip et al. (2016b).

For the proof of (B.2) we follow the notation of Lemma 1 in Kneip et al. (2016b) and define $Z_{\delta,i}(q):=\frac{1}{\sqrt{s^{\kappa}\delta^{\kappa}}}\big(Z_{\delta,i}(t+qs\delta)Y_i-E(Z_{\delta,i}(t+qs\delta)Y_i)\big)$ as well as $Z_{\delta}(q):=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}Z_{\delta,i}(q)$. Note that $Y_i$ is no longer assumed to be Gaussian. However, by bounding the absolute moments $E(|Y_i|^{2m})$ via Assumption 2.2, one can easily verify that for $K=4\sqrt{L_{1,3}\,|q_2-q_1|^{\min\{1,\kappa\}}}\,\sigma_{|y|}$, where the constant $0<L_{1,3}<\infty$ is taken from (C.7) in Kneip et al. (2016b), the Bernstein condition

$$
E\big(|Z_{\delta,i}(q_1)-Z_{\delta,i}(q_2)|^m\big)\le\frac{m!}{2}K^{m-2}K^2
$$

holds for all $0<s\le 0.5$, all integers $m\ge 2$, all $q_1,q_2\in[-1,1]$, and all $0<\delta<(b-a)/2$.

An application of Corollary 1 in van de Geer and Lederer (2013) then guarantees that the Orlicz norm of $Z_{\delta}(q_1)-Z_{\delta}(q_2)$ is bounded, i.e., for all $q_1,q_2\in[-1,1]$ one has

$$
\|Z_{\delta}(q_1)-Z_{\delta}(q_2)\|_{\Phi}\le L_{1,4}\,|q_1-q_2|^{\min\{1/2,\,\kappa/2\}}
$$

for some constant $0<L_{1,4}<\infty$. The assertion then follows by the same arguments as given in Kneip et al. (2016b).
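As a quick consistency check, note that the constant $K$ in the Bernstein condition above can be rewritten as

$$
K=4\sqrt{L_{1,3}}\,\sigma_{|y|}\,\big(|q_2-q_1|^{\min\{1,\kappa\}}\big)^{1/2}=4\sqrt{L_{1,3}}\,\sigma_{|y|}\,|q_2-q_1|^{\min\{1/2,\,\kappa/2\}},
$$

which matches the exponent $\min\{1/2,\kappa/2\}$ appearing in the Orlicz-norm bound.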

A slightly more difficult task is to obtain an analogue of Lemma 2 in Kneip et al. (2016b). We derive the following:

Lemma B.2. Under the assumptions of Theorem 2.1 there exist constants $0<D_3<D_4<\infty$ and $0<D_5<\infty$ such that

$$
0<D_3\delta^{\kappa}\le\inf_{t\in[a+\delta,b-\delta]}E\big(Z_{\delta,i}(t)^2\big)\le\sigma^2_{z,\sup}:=\sup_{t\in[a+\delta,b-\delta]}E\big(Z_{\delta,i}(t)^2\big)\le D_4\delta^{\kappa}
\tag{B.4}
$$

$$
\lim_{n\to\infty}P\left(\sup_{t\in[a+\delta,b-\delta]}\Big|\frac{1}{n}\sum_{i=1}^{n}\big[Z_{\delta,i}(t)^2-E\big(Z_{\delta,i}(t)^2\big)\big]\Big|\le D_5\,\delta^{\kappa}\sqrt{\frac{1}{n}\log\Big(\frac{b-a}{\delta}\Big)}\right)=1.
\tag{B.5}
$$

Moreover, there exists a constant $0<D<\infty$ such that for any $A$ with $D<A<\infty$ we obtain as $n\to\infty$:

$$
P\left(\sup_{t\in[a+\delta,b-\delta]}\Big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(t)^2\Big)^{-\frac{1}{2}}\Big|\frac{1}{n}\sum_{i=1}^{n}\big(Z_{\delta,i}(t)Y_i-E(Z_{\delta,i}(t)Y_i)\big)\Big|\le A\sqrt{\frac{\sigma^2_{|y|}}{n}\log\Big(\frac{b-a}{\delta}\Big)}\right)\to 1,
\tag{B.6}
$$

$$
P\left(\sup_{t\in[a+\delta,b-\delta]}\Big|\frac{1}{n}\sum_{i=1}^{n}\big(Z_{\delta,i}(t)Y_i-E(Z_{\delta,i}(t)Y_i)\big)\Big|\le A\sqrt{\frac{\sigma^2_{|y|}D_4\delta^{\kappa}}{n}\log\Big(\frac{b-a}{\delta}\Big)}\right)\to 1.
\tag{B.7}
$$

Proof of Lemma B.2. Again we can follow the proof and the notation given in Kneip et al. (2016b). Assertions (B.4) and (B.5) follow immediately from the proof of Lemma 2 in Kneip et al. (2016b) for any $\omega_2>\omega_1>1$. In order to show (B.6) one can follow the proof given in Kneip et al. (2016b) until assertion (C.17).

The crucial point is then to show that

$$
\lim_{n\to\infty}P\left(\sup_{j\in\{2,3,\dots,N_{\omega_1}\}}\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{\big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)^2\big)^{1/2}}\le A\sqrt{\frac{\sigma^2_{|y|}}{n}\log\Big(\frac{b-a}{\delta}\Big)}\right)=1.
$$

Recall that it follows from (B.4) and (B.5) that, with probability converging to 1 (as $n\to\infty$), there exists a constant $0<L_{2,1}<\infty$ such that

$$
\inf_{u\in[a+\delta,b-\delta]}\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(u)^2\ge L_{2,1}\,\delta^{\kappa}.
$$
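A concrete choice of $L_{2,1}$ can be read off from (B.4) and (B.5): on the event from (B.5), and provided that $\frac{1}{n}\log\big(\frac{b-a}{\delta}\big)\to 0$,

$$
\inf_{u\in[a+\delta,b-\delta]}\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(u)^2\ \ge\ D_3\delta^{\kappa}-D_5\,\delta^{\kappa}\sqrt{\frac{1}{n}\log\Big(\frac{b-a}{\delta}\Big)}\ \ge\ \frac{D_3}{2}\,\delta^{\kappa}
$$

for all sufficiently large $n$, so that one may take, for instance, $L_{2,1}=D_3/2$.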

Hence, on an event whose probability converges to 1 (as $n\to\infty$), it is sufficient to show that

$$
\sup_{j\in\{2,3,\dots,N_{\omega_1}\}}\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{(L_{2,1}\delta^{\kappa})^{1/2}}\le A\sqrt{\frac{\sigma^2_{|y|}}{n}\log\Big(\frac{b-a}{\delta}\Big)}
$$

holds with probability converging to 1 (as $n\to\infty$).

Remember that by (B.4) there exists a constant $0<D_4<\infty$ such that for all sufficiently small $\delta>0$ we have $\sup_{t\in[a+\delta,b-\delta]}E(Z_{\delta,i}(t)^2)\le D_4\delta^{\kappa}$. Choose an arbitrary point $s_j$ and define

$$
W_i(s_j):=\frac{1}{\sqrt{D_4\delta^{\kappa}\sigma^2_{|y|}}}\big(Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big).
$$

Then $E(W_i(s_j))=0$, and it is easy to show that under Assumption 2.2, with $K=4$ (a constant which is independent of $s_j$), $W_i(s_j)$ satisfies the Bernstein condition in Corollary 1 of van de Geer and Lederer (2013), i.e., we have for all $m=2,3,\dots$:

$$
E\big(|W_i(s_j)|^m\big)\le\frac{m!}{2}K^{m-2}K^2.
$$

It immediately follows from an application of Corollary 1 in van de Geer and Lederer (2013) that there exists a constant $0<L_3<\infty$ such that the Orlicz norm $\big\|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}W_i(s_j)\big\|_{\Psi}$ can be bounded by $L_3<\infty$. Hence we can infer that

$$
E\left(\exp\left(\frac{n}{6}\left(\sqrt{1+2\sqrt{\frac{6}{L_3^2\,n}}\,\Big|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}W_i(s_j)\Big|}-1\right)^2\right)\right)\le 2.
$$

It then follows from similar steps as in the proof of Lemma 1 in Kneip et al. (2016a) that there exists a constant $0<L_4<\infty$ such that for all $0<z\le\sqrt{n}$ we obtain

$$
P\left(\Big|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}W_i(s_j)\Big|>z\,L_4\right)=P\left(\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{\sqrt{L_{2,1}\delta^{\kappa}}}>z\,L_4\sqrt{\frac{D_4\delta^{\kappa}\sigma^2_{|y|}}{n\,L_{2,1}\delta^{\kappa}}}\right)\le 2\exp(-z^2).
$$
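Note that the factor $\delta^{\kappa}$ cancels in the threshold of the last display:

$$
z\,L_4\sqrt{\frac{D_4\delta^{\kappa}\sigma^2_{|y|}}{n\,L_{2,1}\delta^{\kappa}}}=z\,L_4\sqrt{\frac{D_4}{L_{2,1}}}\sqrt{\frac{\sigma^2_{|y|}}{n}},
$$

so that the constant below may, for instance, be taken as $L_5=L_4\sqrt{D_4/L_{2,1}}$.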

We may thus conclude that there exists a constant $0<L_5<\infty$ such that

$$
P\left(\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{\sqrt{L_{2,1}\delta^{\kappa}}}>z\,L_5\sqrt{\frac{\sigma^2_{|y|}}{n}}\right)\le 2\exp(-z^2).
$$

Finally, it follows from the union bound that

$$
\begin{aligned}
P\left(\sup_{j\in\{2,3,\dots,N_{\omega_1}\}}\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{(L_{2,1}\delta^{\kappa})^{1/2}}\le z\,L_5\sqrt{\frac{\sigma^2_{|y|}}{n}}\right)
&\ge 1-\sum_{j=1}^{N_{\omega_1}}P\left(\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{(L_{2,1}\delta^{\kappa})^{1/2}}>z\,L_5\sqrt{\frac{\sigma^2_{|y|}}{n}}\right)\\
&\ge 1-N_{\omega_1}\,2\exp(-z^2)\\
&\ge 1-2\Big(\frac{b-a}{\delta}\Big)^{\omega_1}\exp(-z^2).
\end{aligned}
$$

Setting $z=\sqrt{\omega_2\log\big(\frac{b-a}{\delta}\big)}$ for some $\omega_2>\omega_1$ we then have, for sufficiently large $n$, $z\le\sqrt{n}$ and

$$
1-2\Big(\frac{b-a}{\delta}\Big)^{\omega_1}\exp(-z^2)\ge 1-2\Big(\frac{b-a}{\delta}\Big)^{\omega_1-\omega_2}\to 1.
$$

There now obviously exists a constant $D$ with $0<\sqrt{\omega_2}\,L_5=D<\infty$ for which assertion (B.6) will hold.

(As a side note we mention here that in the special case of a logistic regression one may set $D=4\sqrt{\frac{D_4}{L_{2,1}}}$ and choose $\omega_1$ and $\omega_2$ such that $1<\omega_1<\omega_2<\frac{A}{D}$. Indeed, it is easy to show that in this case $Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)$ is sub-Gaussian with parameter $\sigma$ of at most $2^{3/2}\sqrt{E(Z_{\delta,i}(s_j)^2)}$. It then follows from the Hoeffding bound that for all $j$ we have

$$
P\left(\frac{\big|\frac{1}{n}\sum_{i=1}^{n}\big(Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big)\big|}{\sqrt{L_{2,1}\delta^{\kappa}}}\ge\frac{4z}{\sqrt{n}}\sqrt{\frac{D_4}{L_{2,1}}}\right)\le 2\exp(-z^2)
$$

and assertion (B.6) follows again from the union bound, now setting $z=\omega_2\sqrt{\log\big(\frac{b-a}{\delta}\big)}$.)
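For this choice of $z$, the union bound from above can be evaluated in the same way; a short check of the exponents gives

$$
N_{\omega_1}\,2\exp(-z^2)\le 2\Big(\frac{b-a}{\delta}\Big)^{\omega_1}\exp\Big(-\omega_2^2\log\Big(\frac{b-a}{\delta}\Big)\Big)=2\Big(\frac{b-a}{\delta}\Big)^{\omega_1-\omega_2^2}\to 0,
$$

since $\omega_2^2>\omega_2>\omega_1>1$.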

Finally, (B.7) now follows again from similar steps as in Kneip et al. (2016b).

The difference from Lemma 2 in Kneip et al. (2016b) is that we no longer have $D=\sqrt{2}$ but $D=\sqrt{\omega_2}\,L_5$ for some constants $\omega_2>1$ and $L_5$. This is the price to pay for not assuming Gaussian $Y_i$.

Remarks on Lemma B.2 concerning the cut-off $\lambda$:

1. Using a slight abuse of notation, first note that there is a close connection between $\lambda=A\sqrt{\sigma^2_{|y|}\log\big(\frac{b-a}{\delta}\big)/n}$ for some $A>D$, as given in Theorem 2.1, and $\widetilde\lambda:=A\sqrt{\sqrt{E(Y_i^4)}\log\big(\frac{b-a}{\delta}\big)/n}$ for $A=\sqrt{2}\sqrt{3}$, as used in our simulations. Indeed, set $\sigma^2_{|y|}=E(Y_i^2)$. Jensen's inequality implies that there exists a constant $0<\widetilde D\le 1$ such that $E(Y_i^2)=\widetilde D\sqrt{E(Y_i^4)}$. We can therefore rewrite the expression for $\widetilde\lambda$ in the form of $\lambda$ presented in Theorem 2.1, namely as $A\sqrt{\sigma^2_{|y|}\log\big(\frac{b-a}{\delta}\big)/n}$ with $A=\sqrt{2}\sqrt{3/\widetilde D}$.
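The following minimal numerical sketch illustrates this rewriting; the sample size, the interval $[a,b]$, the value of $\delta$ and the distribution of $Y_i$ are hypothetical choices made only for illustration and are not taken from the simulation study.

```python
import numpy as np

# Illustration of the connection between the cut-off used in the simulations
# (based on sqrt(E(Y^4))) and the form of lambda in Theorem 2.1 (based on E(Y^2)).
rng = np.random.default_rng(1)
n, a, b, delta = 500, 0.0, 1.0, 0.05      # hypothetical sample size and interval
Y = rng.standard_t(df=8, size=n)          # hypothetical non-Gaussian responses

log_term = np.log((b - a) / delta)
A_tilde = np.sqrt(2) * np.sqrt(3)

m2 = np.mean(Y**2)                        # empirical analogue of sigma^2_{|y|} = E(Y_i^2)
m4 = np.mean(Y**4)                        # empirical analogue of E(Y_i^4)

# cut-off in the form used in the simulations
lambda_tilde = A_tilde * np.sqrt(np.sqrt(m4) * log_term / n)

# the same quantity rewritten in the form of Theorem 2.1
D_tilde = m2 / np.sqrt(m4)                # Jensen's inequality gives 0 < D_tilde <= 1
A = A_tilde / np.sqrt(D_tilde)            # i.e. A = sqrt(2) * sqrt(3 / D_tilde)
lambda_theorem_form = A * np.sqrt(m2 * log_term / n)

print(lambda_tilde, lambda_theorem_form)  # identical up to floating-point error
```

By construction the two printed values coincide; only the role of the moment changes between the two representations.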

We proceed to give more details about the motivation for the cut-off used in the simulations:

2. Arguments for the applicability of the cut-off $\lambda$ in the proof of Theorem 2.1 follow from Lemma B.2. The crucial step for determining an operable cut-off $\lambda$ is to derive useful bounds on

$$
\sup_{j\in\{2,3,\dots,N_{\omega_1}\}}\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{\big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)^2\big)^{1/2}}.
$$

Define $V_\delta(t):=\big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(t)Y_i-E(Z_{\delta,i}(t)Y_i)\big)\big/\big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(t)^2\big)^{1/2}$. It is then easy to see that under our assumptions $\sqrt{n}\big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(t)Y_i-E(Z_{\delta,i}(t)Y_i)\big)$ satisfies the Lyapunov conditions. We can hence conclude that $\sqrt{n}\,V_\delta(t)$ converges for all $t$ in distribution to $N\big(0,\,V(Z_{\delta,i}(t)Y_i)/E(Z_{\delta,i}(t)^2)\big)$, while at the same time the Cauchy-Schwarz inequality implies $V(Z_{\delta,i}(t)Y_i)/E(Z_{\delta,i}(t)^2)\le\sqrt{3E(Y_i^4)}$ (a worked version of this bound is given after this remark).

If the convergence to the normal distribution is sufficiently fast, using again the union bound as in the proof of Lemma B.2, now together with an elementary bound on the tails of the normal distribution, then leads to

$$
P\left(\sup_{j\in\{2,3,\dots,N_{\omega_1}\}}\frac{\big|\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)Y_i-E(Z_{\delta,i}(s_j)Y_i)\big|}{\big(\frac{1}{n}\sum_{i=1}^{n}Z_{\delta,i}(s_j)^2\big)^{1/2}}\le A\sqrt{\frac{\sqrt{E(Y_i^4)}}{n}\log\Big(\frac{b-a}{\delta}\Big)}\right)\to 1
$$

for some $A\ge\sqrt{2}\sqrt{3}$. The choice $A\sqrt{\sqrt{E(Y_i^4)}\log\big(\frac{b-a}{\delta}\big)/n}$ for some $A\ge\sqrt{2}\sqrt{3}$ for the cut-off would then be an immediate consequence.
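The variance bound used in the remark above can be spelled out as follows. The derivation combines the Cauchy-Schwarz inequality with the Gaussianity of $Z_{\delta,i}(t)$ (so that $E(Z_{\delta,i}(t)^4)=3\,E(Z_{\delta,i}(t)^2)^2$), which holds whenever the process $X_i$ is Gaussian:

$$
V\big(Z_{\delta,i}(t)Y_i\big)\le E\big(Z_{\delta,i}(t)^2Y_i^2\big)\le\sqrt{E\big(Z_{\delta,i}(t)^4\big)}\,\sqrt{E\big(Y_i^4\big)}=\sqrt{3}\,E\big(Z_{\delta,i}(t)^2\big)\sqrt{E\big(Y_i^4\big)},
$$

and dividing by $E\big(Z_{\delta,i}(t)^2\big)$ yields $V\big(Z_{\delta,i}(t)Y_i\big)/E\big(Z_{\delta,i}(t)^2\big)\le\sqrt{3E(Y_i^4)}$.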

Lemma 3 in Kneip et al. (2016b) remains unchanged and is repeated for convenience.

Lemma B.3. Under the assumptions of Theorem 2.1 there exists a constant $0<M_{\sup}<\infty$ such that for all $n$, all $0<\delta<(b-a)/2$ and every $t\in[a+\delta,b-\delta]$ we obtain

$$
E\left(Z_{\delta,i}(t)\int_a^b\beta(s)X_i(s)\,ds\right)\le M_{\sup}\,\delta^{\min\{2,\kappa+1\}}.
\tag{B.8}
$$

Note that this lemma is trivial in the case where $\beta(t)\equiv 0$.

Due to Lemma 2.1, we obtain a slightly modified version of Lemma 4 in Kneip et al. (2016b):

Lemma B.4. Under the assumptions of Theorem 2.1 let $I_r:=\{t\in[a,b]\,:\,|t-\tau_r|\le\min_{s\ne r}|t-\tau_s|\}$, $r=1,\dots,S$.

If $S>0$, there then exist constants $0<Q_1<\infty$ and $0<Q_2<\infty$ as well as $0<c<\infty$ such that for all sufficiently small $\delta>0$ and all $r=1,\dots,S$ we have, with $M_{\sup}$ as in Lemma B.3,

$$
|E(Z_{\delta,i}(t)Y_i)|\le Q_1\,\frac{\delta^2}{\max\{\delta,|t-\tau_r|\}^{2-\kappa}}+M_{\sup}\,\delta^{\min\{2,\kappa+1\}}\quad\text{for every }t\in I_r,
\tag{B.9}
$$

as well as

$$
\sup_{t\in I_r,\,|t-\tau_r|\ge\delta/2}|E(Z_{\delta,i}(t)Y_i)|\le(1-Q_2)\,c\,|\beta_r|\,c(\tau_r)\,\delta^{\kappa},
\tag{B.10}
$$

and for any $u\in[-0.5,0.5]$

$$
\big|E(Z_{\delta,i}(\tau_r)Y_i)-E(Z_{\delta,i}(\tau_r+u\delta)Y_i)\big|=\Big|-c\,\beta_r\,c(\tau_r)\,\delta^{\kappa}\Big(|u|^{\kappa}-\tfrac{1}{2}\big(|u+1|^{\kappa}-1\big)-\tfrac{1}{2}\big(|u-1|^{\kappa}-1\big)\Big)+R_{5;r}(u)\Big|,
\tag{B.11}
$$

where $|R_{5;r}(u)|\le\widetilde M_r\,\big||u|^{1/2}\delta\big|^{\min\{2\kappa,2\}}$ for some constants $\widetilde M_r<\infty$, $r=1,\dots,S$.

Proof of Lemma B.4. Lemma 2.1 guarantees the existence of a constant $c_0$ such that

$$
E(Z_{\delta,i}(t)Y_i)=c_0\left(\int_a^b\beta(s)E\big(Z_{\delta,i}(t)X_i(s)\big)\,ds+\sum_{r=1}^{S}\beta_r\,E\big(Z_{\delta,i}(t)X_i(\tau_r)\big)\right).
$$

The proof then follows immediately from the same steps as in Kneip et al. (2016b), after multiplying the constants $Q_1$ and $M_{\sup}$ appearing there by $c=|c_0|$.

Proof of Theorem 2.1. By Lemma 2.1 we have, for some constant $c_0\ne 0$ with $|c_0|<\infty$:

$$
E(Z_{\delta,i}(t)Y_i)=E(X_i(t)Y_i)-0.5\,E(X_i(t-\delta)Y_i)-0.5\,E(X_i(t+\delta)Y_i)=c_0\cdot E\left(Z_{\delta,i}(t)\Big(\sum_{r=1}^{S}\beta_r X_i(\tau_r)+\int_a^b\beta(s)X_i(s)\,ds\Big)\right).
$$

From this it is immediately seen that one simply has to adjust some of the constants appearing in the proof of Theorem 4 in Kneip et al. (2016b). In particular, with $c=|c_0|$ one has to exchange the term $|\beta_r|c(\tau_r)$ by $c\,|\beta_r|\,c(\tau_r)$ whenever it appears. Since $c$ is a constant which is independent of $s$, and the assertions of our Lemmas B.1–B.4 correspond exactly to the assertions of Lemmas 1–4 in Kneip et al. (2016b), the proof of the theorem then follows by the same steps as given in the proof of Theorem 4 in Kneip et al. (2016b).