
As in the case of η, the asymptotic variance is symmetric at η = 0.5 for fixed α and has a pole at η = 0 of the form

\[ g(\alpha,\eta) \sim \frac{1}{\eta^6}\cdot\frac{7056\,\alpha\,(8-5\alpha^2+12\alpha)(1-\alpha)(2-\alpha)}{2\,(8-\alpha)(6-\alpha)(1-\alpha)^4}. \]

As can be seen in figure 10, not all components of the asymptotic covariance matrix are symmetric. For a complete numerical implementation, see section 6.

[Figure 10: six panels plotted against the true knot η ∈ {0.3, 0.4, 0.5, 0.6, 0.7} and d ∈ [−0.2, 0.2], showing the asymptotic variance of η̂, the asymptotic variances of â₅ and â₄, the asymptotic covariance of â₄ and â₅, and the trace and determinant of the asymptotic covariance matrix.]

Figure 10. Top: Asymptotic variance of η̂, â₅ and â₄ in the case of a cubic spline with one free knot and different values of d. In contrast to η̂ and â₅, the variance of â₄ is not symmetric at η = 0.5.

Bottom: Similarly to the variance of â₄, the covariance of â₄ and â₅ is not symmetric. Note that the trace is not symmetric, whereas the determinant appears to be symmetric at η = 0.5.

5.3 Estimating the dimension of the model

5.3.1. Throughout sections 5.1 and 5.2 we assumed that our model is identifiable in the sense of definition 4.1.4. In practice, however, this assumption may not hold: the exact number of knots is typically unknown and therefore needs to be estimated.

There exist several methods to determine the number of knots. In the case of two nested models, say zero knots vs. one knot, formal tests, e.g. maximum likelihood ratio type statistics, can be applied. It is well known that the asymptotics of these ratio test statistics do not follow the usual χ² distribution but some extremal law, see e.g. [AHHK08, Dav02, AJ03, Jar98, Fed75a] for both continuous and noncontinuous change points. For the noncontinuous case, testing procedures based on the bootstrap are discussed for example in [HP05]. Other testing procedures for the noncontinuous case are the CUSUM test [Kim94, BDE75] and a union-intersection approach [Haw89].

We will not pursue these approaches further but turn to an alternative method: the use of information criteria. Classical examples where information criteria are used to determine the number of parameters of a model are linear models and autoregressive time series models, see e.g. [Sha97, HQ79, BBO98]. Similarly, the number of knots can be consistently estimated by applying an adequate information criterion. Yao [Yao88] showed consistency of the Schwarz criterion for piecewise constant functions and iid normal random variables. Liu et al. [LWZ97] showed that for locally exponentially bounded iid errors the number of knots can be estimated consistently by minimising a modified Schwarz criterion

\[ MIC(l) = \log\big[\,S(\hat\eta_1,\dots,\hat\eta_l)/(n-p)\,\big] + \frac{p\,c_0\,(\log n)^{2+\delta_0}}{n}. \]

Here, $S(\hat\eta_1,\dots,\hat\eta_l)$ denotes the residual sum of squares obtained under a model with $l$ knots and $p$ parameters in total; $c_0$ and $\delta_0$ denote strictly positive constants. By using a law of the iterated logarithm, we wish to generalise the latter result to fractional processes. To this end, we first need to fix some additional notation.
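As a numerical illustration of the criterion of Liu et al., the following sketch fits truncated-power spline models with a crude grid search over candidate knots; the constants `c0`, `delta0`, the grid, and the data-generating example are placeholder assumptions, not taken from the text.

```python
import numpy as np
from itertools import combinations

def mic(rss, n, p, c0=0.3, delta0=0.1):
    """Modified Schwarz criterion MIC(l) (sketch; c0, delta0 are placeholders).

    rss : residual sum of squares S(eta_1, ..., eta_l) of the fitted model
    n   : sample size, p : total number of parameters of the model
    """
    return np.log(rss / (n - p)) + p * c0 * np.log(n) ** (2 + delta0) / n

# toy data: one true knot at 0.5, iid normal noise
rng = np.random.default_rng(0)
n = 500
t = np.arange(1, n + 1) / n
x = 2 * np.maximum(t - 0.5, 0.0) + 0.1 * rng.standard_normal(n)

def fit_rss(num_knots):
    # crude grid search over candidate knot positions (illustration only)
    grid = np.linspace(0.1, 0.9, 17)
    best = np.inf
    for knots in combinations(grid, num_knots):
        cols = [np.ones(n), t] + [np.maximum(t - k, 0.0) for k in knots]
        X = np.column_stack(cols)
        beta = np.linalg.lstsq(X, x, rcond=None)[0]
        best = min(best, np.sum((x - X @ beta) ** 2))
    return best

# p = 2 polynomial coefficients + l hinge coefficients + l knot positions
scores = {l: mic(fit_rss(l), n, p=2 + 2 * l) for l in range(3)}
```

Minimising `scores` over `l` should recover the single knot in this toy setup.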

5.3.2. In what follows, let

\[ x(t) = \mu(t/n) + \xi(t), \qquad t = 1,\dots,n, \]

with $\mu(s) = \sum_{k=0}^{l}\sum_{j=1}^{p_k} a_{j,k}\,(s-\eta_k)_+^{b_{j,k}}$ denoting a piecewise polynomial and $\xi(\cdot)$ satisfying the assumptions of theorem 1.3.5¹ and one of the conditions stated in definition 1.1.5.

Definition 5.3.3. We say that $\mu(s) = \sum_{k=0}^{l}\sum_{j=1}^{p_k} a_{j,k}(s-\eta_k)_+^{b_{j,k}}$ is the minimal representation of $\mu$ if $a_{j,k} \neq 0$ for all $j,k$. Clearly, each piecewise polynomial has a unique minimal representation.

Definition 5.3.4. Let $\mu(s) = \sum_{k=0}^{l}\sum_{j=1}^{p_k} a_{j,k}(s-\eta_k)_+^{b_{j,k}}$ be the minimal representation of $\mu$. We then refer to $l$ as the number of knots of $\mu$.

1Consider remark 1.3.6 for a brief discussion about these assumptions.
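Piecewise polynomials in this truncated-power notation are straightforward to evaluate directly. A minimal sketch, using the names `knots`, `coeffs` and `powers` for $(\eta_k)$, $(a_{j,k})$ and $(b_{j,k})$, with hypothetical coefficient values:

```python
import numpy as np

def mu(s, knots, coeffs, powers):
    """Evaluate mu(s) = sum_k sum_j a[k][j] * (s - eta_k)_+^(b[k][j])
    in the truncated power basis (all powers b >= 1 here; a constant
    term could be added separately if needed)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    for eta, a_k, b_k in zip(knots, coeffs, powers):
        plus = np.maximum(s - eta, 0.0)  # the truncated power (s - eta)_+
        for a, b in zip(a_k, b_k):
            out += a * plus ** b
    return out

# hypothetical example: linear part on [0,1] plus a cubic hinge at eta_1 = 0.5
y = mu(np.linspace(0.0, 1.0, 5),
       knots=[0.0, 0.5],
       coeffs=[[2.0], [4.0]],
       powers=[[1], [3]])
```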

Definition 5.3.5. Let $l \in \mathbb N$ and let $p_k \in \mathbb N\setminus\{0\}$ and $b_{j,k} \in \mathbb N$ for all $j = 1,\dots,p_k$, $k = 0,\dots,l$. Given a triple $(l,(p_k),(b_{j,k}))$, define the parametric family $\mathcal M = \mathcal M_{l,(p_k),(b_{j,k})}$ of piecewise polynomials by

\[ \mathcal M = \Big\{ s\colon [0,1]\to\mathbb R,\; s(t) = \sum_{k=0}^{l}\sum_{j=1}^{p_k} a_{j,k}(t-\kappa_k)_+^{b_{j,k}},\; (\kappa_k)_{k=0}^{l} \in \{0\}\times S^l,\; a_{j,k}\in\mathbb R \Big\}. \]

Likewise, define

\[ \mathcal M^{(\Delta)} = \Big\{ s\colon [0,1]\to\mathbb R,\; s(t) = \sum_{k=0}^{l}\sum_{j=1}^{p_k} a_{j,k}(t-\kappa_k)_+^{b_{j,k}},\; (\kappa_k)_{k=0}^{l} \in \{0\}\times S^l_\Delta,\; a_{j,k}\in\mathbb R \Big\} \]

with

\[ S^l_\Delta = \big\{ (\kappa_1,\dots,\kappa_l)\in S^l : |\kappa_i-\kappa_j| \ge \Delta \text{ for } i\neq j \big\}. \]

We refer to $l$ as the number of knots of $\mathcal M$ and to $p_{\mathcal M} = l + \sum_{k=0}^{l} p_k$ as the total number of parameters of $\mathcal M$. For the sake of simplicity, we omit the subscript $l,(p_k),(b_{j,k})$ if it is clearly determined from the context.

Given $k\in S^l$, we define the design matrix corresponding to $\mathcal M$ by $M_{n,\mathcal M}(k) = M_{n,l,(p_k),(b_{j,k})}(k)$, see 4.2.5 for details.

Definition 5.3.6. Let $\mathbb M = \{\mathcal M_1,\dots,\mathcal M_q\}$ be a set of parametric families of piecewise polynomials. We say $\mathbb M$ is compatible with the piecewise polynomial $\mu$ if there exists $\mathcal M_k\in\mathbb M$ such that

(i) $\mu\in\mathcal M_k$,

(ii) the number of knots of $\mathcal M_k$ is equal to the number of knots of $\mu$,

(iii) $p_{\mathcal M_k} < p_{\mathcal M_j}$ for all $j\neq k$ such that $\mu\in\mathcal M_j$.

In this case, we refer to $\mathcal M_k$ as the compatible model. Clearly, if $\mathcal M_k$ exists, then it is uniquely determined. If there exists no compatible model, we say that $\mathbb M$ is incompatible with $\mu$.

Remark 5.3.7. Some examples may help to clarify the above definition. To that end, define

\[ \mathcal M_1 = \{ s(t) = a + bt \text{ for } a,b\in\mathbb R \} \]
\[ \mathcal M_2 = \{ s(t) = a + bt + c\,(t-\kappa)_+ \text{ for } a,b,c\in\mathbb R \text{ and } \kappa\in(0,1) \} \]
\[ \mathcal M_3 = \big\{ s(t) = a + \textstyle\sum_{j=1}^{2} b_j t^j + \sum_{j=1}^{2} c_j (t-\kappa)_+^j \text{ for } a,b_j,c_j\in\mathbb R \text{ and } \kappa\in(0,1) \big\} \]
\[ \mathcal M_4 = \big\{ s(t) = a + \textstyle\sum_{j=1}^{4} b_j t^j,\; a,b_j\in\mathbb R \big\} \]

Assume that $\mu(s) = (s-0.5)_+$ and $\mathbb M = \{\mathcal M_1,\mathcal M_2,\mathcal M_3\}$. Then $\mathbb M$ is compatible with $\mu$ and the compatible model is given by $\mathcal M_2$.

If $\mu(s) = s^2 + (s-0.5)_+$, $\mathbb M$ is still compatible with $\mu$, with $\mathcal M_3$ being the compatible model.

Now, let $\mu(s) = s$ and $\mathbb M = \{\mathcal M_2,\mathcal M_4\}$. Then $\mathbb M$ is not compatible with $\mu$. In fact, we have $\mu\in\mathcal M_2$ and $\mu\in\mathcal M_4$ with $p_{\mathcal M_2} < p_{\mathcal M_4}$. But at the same time $\mathcal M_2$ violates condition (ii) and so it is not a compatible model. On the other hand, if $\mu(s) = s^2$, then $\mu\notin\mathcal M_2$. Thus $\mathcal M_4$ becomes the compatible model and $\mathbb M$ is compatible with $\mu$.
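The selection rule of definition 5.3.6 can also be phrased operationally. In the sketch below, each family is reduced to its knot and parameter counts and the membership test $\mu\in\mathcal M_i$ is supplied by the caller as a boolean list; this encoding is an assumption of ours, not the text's.

```python
def compatible_model(models, mu_knots, contains_mu):
    """Return the index of the compatible model, or None if incompatible.

    models      : list of (num_knots, num_params) pairs, one per family M_i
    mu_knots    : number of knots of the minimal representation of mu
    contains_mu : contains_mu[i] is True iff mu lies in M_i
    """
    containing = [i for i, c in enumerate(contains_mu) if c]
    if not containing:
        return None  # condition (i) fails for every family
    # candidate with minimal parameter count among families containing mu
    best = min(containing, key=lambda i: models[i][1])
    knots_ok = models[best][0] == mu_knots          # condition (ii)
    strictly_minimal = all(models[i][1] > models[best][1]
                           for i in containing if i != best)  # condition (iii)
    return best if knots_ok and strictly_minimal else None

# the families of remark 5.3.7 as (knots, total parameters p_M = l + sum p_k)
M1, M2, M3, M4 = (0, 2), (1, 4), (1, 6), (0, 5)
# mu(s) = (s - 0.5)_+ lies in M2 and M3; the compatible model is M2 (index 1)
idx = compatible_model([M1, M2, M3], mu_knots=1,
                       contains_mu=[False, True, True])
```

For $\mu(s) = s$ with $\{\mathcal M_2,\mathcal M_4\}$ the same function returns `None`, mirroring the incompatible case of the remark.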

Definition 5.3.8. Let $x(t) = \mu(t/n) + \xi(t)$ as in 5.3.2. Define $(r(n))_{n\in\mathbb N}\subset\mathbb R_+$ as in theorems 5.2.3 and 5.2.4, i.e.

\[ r^2(n) = \frac{n^2}{D^2(n)} \sim n^\alpha L(n) \]

with $\alpha\in(0,2)$ and $L$ slowly varying. Furthermore, define $(h(n))_{n\in\mathbb N}\subset\mathbb R_+$ such that $h(n)\to\infty$.

Lastly, if $\xi(\cdot)$ has either long or short memory, define $(\lambda(n))_{n\in\mathbb N}\subset\mathbb R_+$ such that

\[ \frac{\lambda(n)}{r^2(n)} \to 0 \quad\text{and}\quad \frac{h(n)\log\log(n)}{\lambda(n)} \to 0, \]

and if $\xi(\cdot)$ is antipersistent, define $(\lambda(n))_{n\in\mathbb N}\subset\mathbb R_+$ such that

\[ \frac{\lambda(n)}{n} \to 0 \quad\text{and}\quad \frac{h(n)}{\lambda(n)} \to 0. \]

For example, choose $\lambda(n) = \log(n)$ and $h(n) = \sqrt{\lambda(n)/\log\log(n)}$.

Definition 5.3.9. Let $x(t) = \mu(t/n) + \xi(t)$, $t = 1,\dots,n$ as in 5.3.2 and let $\mathbb M = \{\mathcal M_1,\dots,\mathcal M_q\}$ be a compatible set of parametric families of piecewise polynomials. For $i\in\{1,\dots,q\}$, define $\tilde\sigma^2_{\mathcal M_i,h}$ by

\[ \tilde\sigma^2_{\mathcal M_i,h} = \inf\Big\{ \frac{1}{n}\sum_{t=1}^{n}\big(x_t - f(t/n)\big)^2 : f \in \mathcal M_i^{(h^{-1}(n))} \Big\}. \]

The determination of $\tilde\sigma^2_{\mathcal M_i,h}$ can be regarded as a restricted least squares estimation: given an observed series $x(1),\dots,x(n)$, we obtain the optimal fit $\hat\mu\in\mathcal M_i^{(h^{-1}(n))}$. The space of admissible functions is restricted by a lower bound $h^{-1}(n)$ for the minimal distance between knots. In fact, we have

\[ \tilde\sigma^2_{\mathcal M_i} = \inf\Big\{ \frac{1}{n}\big\| x_n - \mathrm{pr}_{M_{n,\mathcal M_i}(k)}\, x_n \big\|^2 : k \in S^{l_i}_{h^{-1}(n)} \Big\} \]

with $S^{l_i}_{h^{-1}(n)}$ defined as above. Note that by the same arguments as in 5.1.1, there exists $\tilde k^i_n \in S^{l_i}_{h^{-1}(n)}$ such that

\[ \frac{1}{n}\big\| x_n - \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)}\, x_n \big\|^2 = \inf\Big\{ \frac{1}{n}\big\| x_n - \mathrm{pr}_{M_{n,\mathcal M_i}(k)}\, x_n \big\|^2 : k \in S^{l_i}_{h^{-1}(n)} \Big\}. \]
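A minimal grid-based version of this restricted least squares problem might look as follows; the truncated-power design, the candidate grid, and the toy data are simplifications of the general setup.

```python
import numpy as np
from itertools import combinations

def sigma2_restricted(x, num_knots, degree, min_sep, grid_size=40):
    """Minimise n^{-1} * ||x_n - pr_{M(k)} x_n||^2 over knot vectors k
    whose pairwise distances are at least min_sep (i.e. h^{-1}(n))."""
    n = len(x)
    t = np.arange(1, n + 1) / n
    grid = np.linspace(0.05, 0.95, grid_size)
    best = np.inf
    for k in combinations(grid, num_knots):
        if any(b - a < min_sep for a, b in zip(k, k[1:])):
            continue  # violates the minimal knot distance restriction
        cols = [t ** j for j in range(degree + 1)]
        cols += [np.maximum(t - kappa, 0.0) ** degree for kappa in k]
        X = np.column_stack(cols)
        proj = X @ np.linalg.lstsq(X, x, rcond=None)[0]  # projection of x_n
        best = min(best, np.mean((x - proj) ** 2))
    return best

# toy series: one knot at 0.4, small iid noise
rng = np.random.default_rng(1)
n = 200
t = np.arange(1, n + 1) / n
x = np.maximum(t - 0.4, 0.0) + 0.05 * rng.standard_normal(n)
s2 = sigma2_restricted(x, num_knots=1, degree=1, min_sep=0.1)
```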

Furthermore, note that $h^{-1}(n)\to 0$, since $h(n)\to\infty$. Now, the information criteria can be defined as follows:

Definition 5.3.10. If $\xi(\cdot)$ has either short or long memory, define

\[ IC(\mathcal M_i) = \log\tilde\sigma^2_{\mathcal M_i} + p_{\mathcal M_i}\,\frac{\lambda(n)}{r^2(n)}. \]

If $\xi(\cdot)$ is antipersistent, define

\[ IC(\mathcal M_i) = \log\tilde\sigma^2_{\mathcal M_i} + p_{\mathcal M_i}\,\frac{\lambda(n)}{n}. \]
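Numerically, the criterion is a one-liner once $\tilde\sigma^2_{\mathcal M_i}$ is available. The sketch below takes $r^2(n)\approx n^\alpha$ with a known $\alpha$ and hypothetical fitted variances; in practice $\alpha$ itself must be estimated (cf. 5.3.17).

```python
import numpy as np

def ic(sigma2, p, n, alpha, lam=None, antipersistent=False):
    """Information criterion of definition 5.3.10 (sketch).

    sigma2 : restricted least squares variance estimate for the model
    p      : total number of parameters p_M
    alpha  : exponent in r^2(n) ~ n^alpha (assumed known, L taken as 1)
    lam    : penalty sequence lambda(n); defaults to log(n)
    """
    lam = np.log(n) if lam is None else lam
    norm = n if antipersistent else n ** alpha
    return np.log(sigma2) + p * lam / norm

# choose among three hypothetical fits by minimising the criterion
n = 1000
fits = {"0 knots": (0.020, 2), "1 knot": (0.010, 4), "2 knots": (0.0099, 6)}
scores = {name: ic(s2, p, n, alpha=1.0) for name, (s2, p) in fits.items()}
best = min(scores, key=scores.get)
```

The tiny variance gain of the two-knot fit does not outweigh the penalty for its two extra parameters, so the one-knot model is selected.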

Given a set of models $\{\mathcal M_1,\dots,\mathcal M_q\}$, we estimate the true model by minimising $IC(\mathcal M_i)$ with respect to $i$. This estimation procedure is consistent in the following sense:

Theorem 5.3.11. Let $x(t) = \mu(t/n) + \xi(t)$, $t = 1,\dots,n$ as in 5.3.2 and let $\mathbb M = \{\mathcal M_1,\dots,\mathcal M_q\}$ be a compatible set of families of piecewise polynomials. Denote by $\mathcal M_k$ the compatible model. Then, for all $i\neq k$,

\[ P\big( IC(\mathcal M_i) > IC(\mathcal M_k) \big) \to 1 \quad\text{as } n\to\infty. \]

Proof. Case 1: Let $i\neq k$ such that $\mu\notin\mathcal M_i$. Then, in the case of long and short memory,

\[ IC(\mathcal M_i) > IC(\mathcal M_k) \iff \log\tilde\sigma^2_{\mathcal M_i} + p_{\mathcal M_i}\frac{\lambda(n)}{r^2(n)} > \log\tilde\sigma^2_{\mathcal M_k} + p_{\mathcal M_k}\frac{\lambda(n)}{r^2(n)} \iff \frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} > \exp\Big( (p_{\mathcal M_k}-p_{\mathcal M_i})\,\frac{\lambda(n)}{r^2(n)} \Big). \]

Likewise, in the case of antipersistence,

\[ IC(\mathcal M_i) > IC(\mathcal M_k) \iff \frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} > \exp\Big( (p_{\mathcal M_k}-p_{\mathcal M_i})\,\frac{\lambda(n)}{n} \Big). \]

It follows from lemma 5.3.13 and 5.1.4 that $\tilde\sigma^2_{\mathcal M_k} = \sigma^2 + o_p(1)$. On the other hand,

\[ \tilde\sigma^2_{\mathcal M_i} = n^{-1}\big\| \mu_n + e_n - \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)}\, x_n \big\|^2 = n^{-1}\|e_n\|^2 + n^{-1}\big\| \mu_n - \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)}\, x_n \big\|^2 + \frac{2}{n}\Big\langle e_n,\; \mu_n - \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)}\, x_n \Big\rangle. \]

As in the proof of theorem 5.1.9, we have $\frac{2}{n}\langle e_n,\, \mu_n - \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)} x_n\rangle = o_p(1)$. However, since $\mu\notin\mathcal M_i$, there is $\Delta > 0$ such that

\[ \liminf_n \frac{1}{n}\big\| \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)}\, x_n - \mu_n \big\|^2 > \Delta \quad\text{a.e.} \]

Consequently, by 5.1.4, $\tilde\sigma^2_{\mathcal M_i} \ge \sigma^2 + \Delta + o_p(1)$ and thus, by definition of $(\lambda(n))_{n\in\mathbb N}$,

\[ P\big( IC(\mathcal M_i) - IC(\mathcal M_k) > 0 \big) = P\Big( \frac{\sigma^2+\Delta}{\sigma^2} + o_p(1) > 1 + o(1) \Big) \to 1. \]

Case 2: Let $i\neq k$ such that $\mu\in\mathcal M_i$. Since $\mathcal M_k$ is the compatible model, we have $p_{\mathcal M_i} - p_{\mathcal M_k} > 0$. Thus

\[ IC(\mathcal M_i) > IC(\mathcal M_k) \iff (p_{\mathcal M_i}-p_{\mathcal M_k})\,\frac{\lambda(n)}{r^2(n)} > -\log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} \]

in the case of long and short memory, and

\[ IC(\mathcal M_i) > IC(\mathcal M_k) \iff (p_{\mathcal M_i}-p_{\mathcal M_k})\,\frac{\lambda(n)}{n} > -\log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} \]

in the case of antipersistence. In the case of short and long memory, lemma 5.3.12 implies

\[ \frac{r^2(n)}{\lambda(n)}\,\log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} = O_p\big( \lambda^{-1}(n)\, h(n)\log\log(n) \big). \]

But by definition, $h(n)\log\log(n)/\lambda(n)\to 0$ and therefore

\[ P\big( IC(\mathcal M_i) > IC(\mathcal M_k) \big) \to 1. \]

In the case of antipersistence, we obtain

\[ \frac{n}{\lambda(n)}\,\log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} = O_p\big( \lambda^{-1}(n)\, h(n) \big). \]

Again, by definition, $h(n)/\lambda(n)\to 0$, which gives the desired result.

Lemma 5.3.12. Under the same assumptions and notations as in theorem 5.3.11, let $\mathcal M_i\in\mathbb M$ such that $\mu\in\mathcal M_i$. If $\xi(\cdot)$ has long or short memory, then

\[ \log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big). \]

If $\xi(\cdot)$ is antipersistent, then

\[ \log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}} = O_p\big( n^{-1} h(n) \big). \]

Proof. Let $x\in\big(-\tfrac12,\tfrac12\big)$. By Taylor's theorem, there exists $\tilde x$ with $|\tilde x|\le|x|$ such that $\log(1+x) = \frac{x}{1+\tilde x}$, hence the upper bound

\[ |\log(1+x)| \le 2|x| \quad\text{for all } x\in\big(-\tfrac12,\tfrac12\big). \]

Assume that $\xi(\cdot)$ has long or short memory. Defining the set

\[ B = \Big\{ \frac{\big|\tilde\sigma^2_{\mathcal M_i}-\tilde\sigma^2_{\mathcal M_k}\big|}{\tilde\sigma^2_{\mathcal M_k}} < \frac12 \Big\}, \]

we get the inequality

\[ P\Big( \frac{r^2(n)}{h(n)\log\log(n)}\,\Big|\log\frac{\tilde\sigma^2_{\mathcal M_i}}{\tilde\sigma^2_{\mathcal M_k}}\Big| \ge \Delta \Big) \le P(B^c) + P\Big( \frac{r^2(n)}{h(n)\log\log(n)}\cdot \frac{2\big|\tilde\sigma^2_{\mathcal M_i}-\tilde\sigma^2_{\mathcal M_k}\big|}{\tilde\sigma^2_{\mathcal M_k}} > \Delta \Big). \]

By lemma 5.3.13 and 5.1.4, $P(B^c) = o(1)$ and

\[ \frac{r^2(n)}{h(n)\log\log(n)}\cdot \frac{\big|\tilde\sigma^2_{\mathcal M_i}-\tilde\sigma^2_{\mathcal M_k}\big|}{\tilde\sigma^2_{\mathcal M_k}} = O_p(1), \]

which proves the assertion. The same argument applies in the antipersistent case.

Lemma 5.3.13. Under the same assumptions and notations as in theorem 5.3.11, let $\mathcal M_i\in\mathbb M$ such that $\mu\in\mathcal M_i$. If $\xi(\cdot)$ has long or short memory, then

\[ \big|\tilde\sigma^2_{\mathcal M_i} - n^{-1}\|e_n\|^2\big| = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big). \]

If $\xi(\cdot)$ is antipersistent, then

\[ \big|\tilde\sigma^2_{\mathcal M_i} - n^{-1}\|e_n\|^2\big| = O_p\big( n^{-1} h(n) \big). \]

Proof. Let $\mu(s) = \sum_{k=0}^{l}\sum_{j=1}^{p_k} a_{j,k}(s-\eta_k)_+^{b_{j,k}}$ be the minimal representation of $\mu$ and denote by $l_i$ the number of knots of $\mathcal M_i$. Setting $b = \max\big(b_{\mathcal M_i}, \max_{j,k}(b_{j,k})\big)$ and $\eta_{l+1} = 1$, we immediately obtain

\[ \mu_n \in \bigoplus_{k=0}^{l} \mathrm{sp}\big( F^{b}_{\eta_k,\eta_{k+1},n} \big). \]

Likewise, for all $k = (\kappa_1,\dots,\kappa_{l_i}) \in S^{l_i}_{h^{-1}(n)}$, $\kappa_0 = 0$ and $\kappa_{l_i+1} = 1$,

\[ \mathrm{sp}\big( M_{n,\mathcal M_i}(k) \big) \subset \bigoplus_{k=0}^{l_i} \mathrm{sp}\big( F^{b}_{\kappa_k,\kappa_{k+1},n} \big). \]

Denote

\[ W_n(k) = \bigoplus_{k=0}^{l} \mathrm{sp}\big( F^{b}_{\eta_k,\eta_{k+1},n} \big) + \bigoplus_{k=0}^{l_i} \mathrm{sp}\big( F^{b}_{\kappa_k,\kappa_{k+1},n} \big). \tag{5.34} \]

Since $\mu\in\mathcal M_i$, it follows from the definition of the restricted least squares estimate that there exists $N\in\mathbb N$ such that $\|e_n\|^2 \ge n\tilde\sigma^2_{\mathcal M_i}$ for all $n\ge N$. Hence, for such $n$,

\[ \|e_n\|^2 \ge n\tilde\sigma^2_{\mathcal M_i} = \big\| \mu_n + e_n - \mathrm{pr}_{M_{n,\mathcal M_i}(\tilde k^i_n)}[\mu_n+e_n] \big\|^2 \ge \big\| \mu_n + e_n - \mathrm{pr}_{W_n(\tilde k^i_n)}[\mu_n+e_n] \big\|^2 = \|e_n\|^2 - \big\| \mathrm{pr}_{W_n(\tilde k^i_n)}\, e_n \big\|^2 \]

and thus

\[ 0 \ge n\tilde\sigma^2_{\mathcal M_i} - \|e_n\|^2 \ge -\big\| \mathrm{pr}_{W_n(\tilde k^i_n)}\, e_n \big\|^2. \]

In particular, we obtain for all $n\ge N$

\[ \big| \|e_n\|^2 - n\tilde\sigma^2_{\mathcal M_i} \big| \le \big\| \mathrm{pr}_{W_n(\tilde k^i_n)}\, e_n \big\|^2 \le \sup\big\{ \|\mathrm{pr}_{W_n(k)}\, e_n\|^2 : k \in S^{l_i}_{h^{-1}(n)} \big\}. \]

We would now like to apply lemma 5.3.14 and lemma 5.3.15 to the summands in (5.34) to conclude that

\[ \sup\big\{ n^{-1}\|\mathrm{pr}_{W_n(k)}\, e_n\|^2 : k \in S^{l_i}_{h^{-1}(n)} \big\} = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big). \tag{5.35} \]

However, as the sum in (5.34) is not orthogonal, this is not directly possible. In fact, depending on the value of $k$, the collinearity between $\bigoplus_{k=0}^{l}\mathrm{sp}(F^{b}_{\eta_k,\eta_{k+1},n})$ and $\bigoplus_{k=0}^{l_i}\mathrm{sp}(F^{b}_{\kappa_k,\kappa_{k+1},n})$ can become arbitrarily close to 1. Note, however, that we may assume w.l.o.g. that $\eta_t \neq \kappa_s$ for all $t = 1,\dots,l$ and $s = 1,\dots,l_i$.²

To prove (5.35), we need to condition on the position of the free knots $(\kappa_s)_{s=1}^{l_i}$ relative to the fixed knots $(\eta_t)_{t=0}^{l}$. More specifically, let $v\in\mathbb N^{l+1}$ such that $|v|_1 = \sum_{t=1}^{l+1} v_t = l_i$. We say that $(\kappa_1,\dots,\kappa_{l_i}) \in S^{l_i}_{h^{-1}(n)}$ is of type $v$ if $\sum_{s=1}^{l_i} 1_{(\eta_{k-1},\eta_k)}(\kappa_s) = v_k$ for all $k = 1,\dots,l+1$, i.e. if the number of free knots $\kappa_s$ between $\eta_{k-1}$ and $\eta_k$ is equal to $v_k$. Denote

\[ S^{l_i}_{h^{-1}(n)}(v) = \big\{ k \in S^{l_i}_{h^{-1}(n)} : k \text{ is of type } v \big\}. \]

Since $S^{l_i}_{h^{-1}(n)} = \bigcup_{v\in\mathbb N^{l+1},\,|v|_1 = l_i} S^{l_i}_{h^{-1}(n)}(v)$, we have

\[ \sup\big\{ \|\mathrm{pr}_{W_n(k)}\, e_n\|^2 : k \in S^{l_i}_{h^{-1}(n)} \big\} = \max_{v\in\mathbb N^{l+1},\,|v|_1 = l_i} \sup\big\{ \|\mathrm{pr}_{W_n(k)}\, e_n\|^2 : k \in S^{l_i}_{h^{-1}(n)}(v) \big\}. \]

Hence, it suffices to show that for all $v$

\[ \sup\big\{ n^{-1}\|\mathrm{pr}_{W_n(k)}\, e_n\|^2 : k \in S^{l_i}_{h^{-1}(n)}(v) \big\} = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big). \]

²Indeed, if $k\in S^{l_i}_{h^{-1}(n)}$ is such that $\kappa_s = \eta_t$ for some $s$ and $t$, then the subspaces $\big(\mathrm{sp}(F^{b}_{\eta_k,\eta_{k+1},n})\big)_{k=0}^{l}$ and $\big(\mathrm{sp}(F^{b}_{\kappa_k,\kappa_{k+1},n})\big)_{k=0}^{l_i}$ are linearly dependent, and there exists $k'\in S^{l_i}_{h^{-1}(n)}$ such that $\kappa'_s \neq \eta_t$ for all $s,t$ and $W_n(k)\subset W_n(k')$. In particular, $\|\mathrm{pr}_{W_n(k)}\, e_n\|^2 \le \|\mathrm{pr}_{W_n(k')}\, e_n\|^2$.

So, let $v\in\mathbb N^{l+1}$ such that $|v|_1 = l_i$ and $k = (\kappa_1,\dots,\kappa_{l_i}) \in S^{l_i}_{h^{-1}(n)}(v)$. Set $u_1 = 0$ and $u_{t+1} = v_t + u_t$ for $t\ge 1$ and define $W_{n,t}(k)$, $(t = 1,\dots,l+1)$ by

\[ W_{n,t}(k) = \begin{cases} \mathrm{sp}\big( F^{b}_{\eta_{t-1},\eta_t,n} \big) & v_t = 0, \\[4pt] \mathrm{sp}\big( F^{b}_{\eta_{t-1},\kappa_{u_t+1},n} \big) \oplus \mathrm{sp}\big( F^{b}_{\kappa_{u_t+1},\eta_t,n} \big) & v_t = 1, \\[4pt] \mathrm{sp}\big( F^{b}_{\eta_{t-1},\kappa_{u_t+1},n} \big) \oplus \displaystyle\bigoplus_{k=1}^{v_t-1} \mathrm{sp}\big( F^{b}_{\kappa_{u_t+k},\kappa_{u_t+k+1},n} \big) \oplus \mathrm{sp}\big( F^{b}_{\kappa_{u_t+v_t},\eta_t,n} \big) & v_t > 1. \end{cases} \]

Clearly, $W_{n,s}(k)$ and $W_{n,t}(k)$ are orthogonal for $s\neq t$ and

\[ W_n(k) = \bigoplus_{t=1}^{l+1} W_{n,t}(k). \]

Therefore,

\[ \sup_{k\in S^{l_i}_{h^{-1}(n)}(v)} \|\mathrm{pr}_{W_n(k)}\, e_n\|^2 \le \sum_{t=1}^{l+1}\; \sup_{k\in S^{l_i}_{h^{-1}(n)}(v)} \|\mathrm{pr}_{W_{n,t}(k)}\, e_n\|^2. \]

But it follows from a case-by-case application of lemma 5.3.14, lemma 5.3.15 and remark 5.3.16 that

\[ \sup_{k\in S^{l_i}_{h^{-1}(n)}(v)} n^{-1}\|\mathrm{pr}_{W_{n,t}(k)}\, e_n\|^2 = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big) \qquad (t = 1,\dots,l+1) \]

if $\xi(\cdot)$ has short or long memory, and

\[ \sup_{k\in S^{l_i}_{h^{-1}(n)}(v)} n^{-1}\|\mathrm{pr}_{W_{n,t}(k)}\, e_n\|^2 = O_p\big( n^{-1} h(n) \big) \qquad (t = 1,\dots,l+1) \]

if $\xi(\cdot)$ is antipersistent, which proves our claim.

Lemma 5.3.14. Under the same assumptions and notations as in theorem 5.3.11, we have

\[ \sup_{\substack{a,b\in S^2 \\ b-a > h^{-1}(n)}} \frac{1}{n}\big\| \mathrm{pr}_{V^{p,n}_{a,b}}\, e_n \big\|^2 = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big). \tag{5.36} \]

Proof. We may w.l.o.g. assume that $V^{p,n}_{a,b}$ has full column rank. Indeed, if $a < b$ are such that $\mathrm{rank}\big(V^{p,n}_{a,b}\big) < p$, then there exist $a' < b'$ such that $\mathrm{rank}\big(V^{p,n}_{a',b'}\big) = p$ and $\mathrm{sp}\big(V^{p,n}_{a,b}\big) \subset \mathrm{sp}\big(V^{p,n}_{a',b'}\big)$, thus $\big\|\mathrm{pr}_{V^{p,n}_{a,b}}\, e_n\big\| \le \big\|\mathrm{pr}_{V^{p,n}_{a',b'}}\, e_n\big\|$. Hence

\[ \Big\langle e_n, \mathrm{pr}_{V^{p,n}_{a,b}}\, e_n \Big\rangle = \Big\langle e_n,\; V^{p,n}_{a,b}\big( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} \big)^{-1} V^{p,n\,\prime}_{a,b}\, e_n \Big\rangle = \Big\langle V^{p,n\,\prime}_{a,b}\, e_n,\; \big( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} \big)^{-1} V^{p,n\,\prime}_{a,b}\, e_n \Big\rangle \le \big\| V^{p,n\,\prime}_{a,b}\, e_n \big\|^2 \cdot \lambda_{\max}\Big[ \big( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} \big)^{-1} \Big]. \]

Note that the maximum eigenvalue $\lambda_{\max}\big[ ( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} )^{-1} \big]$ is bounded: since

\[ \big( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} \big)_{jk} = \frac{ \sum_{t=\lfloor an\rfloor+1}^{\lfloor bn\rfloor} \big( t/n - \lfloor an\rfloor/n \big)^{k+j-2} }{ \sqrt{ \sum_{t=\lfloor an\rfloor+1}^{\lfloor bn\rfloor} \big( t/n - \lfloor an\rfloor/n \big)^{2j-2} }\, \sqrt{ \sum_{t=\lfloor an\rfloor+1}^{\lfloor bn\rfloor} \big( t/n - \lfloor an\rfloor/n \big)^{2k-2} } } = \frac{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} (t/n)^{k+j-2} }{ \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} (t/n)^{2j-2} }\, \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} (t/n)^{2k-2} } } \]

\[ = \frac{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \Big( \frac{t}{\lfloor bn\rfloor-\lfloor an\rfloor} \Big)^{k+j-2} }{ \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \Big( \frac{t}{\lfloor bn\rfloor-\lfloor an\rfloor} \Big)^{2j-2} }\, \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \Big( \frac{t}{\lfloor bn\rfloor-\lfloor an\rfloor} \Big)^{2k-2} } } \;\xrightarrow{\;\lfloor bn\rfloor-\lfloor an\rfloor\to\infty\;}\; \frac{ \int_0^1 s^{k+j-2}\,ds }{ \sqrt{ \int_0^1 s^{2k-2}\,ds }\, \sqrt{ \int_0^1 s^{2j-2}\,ds } }, \]

there exists $\varepsilon > 0$ such that $\inf_{n\in\mathbb N,\, a,b\in S^2,\, \lfloor bn\rfloor-\lfloor an\rfloor\ge p} \lambda_{\min}\big( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} \big) > \varepsilon$.

For proving (5.36) it thus suffices to verify that

\[ \sup_{\substack{a,b\in S^2 \\ b-a > h^{-1}(n)}} \frac{1}{n}\big\langle v^{a,b}_{j,n}, e_n \big\rangle^2 = O_p\big( r^{-2}(n)\, h(n)\log\log(n) \big) \tag{5.37} \]

for $j = 1,\dots,p$. Here, $v^{a,b}_{j,n}$ denotes the $j$-th column of $V^{p,n}_{a,b}$, cf. definition 4.2.5.
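The convergence of the normalised Gram entries to the ratio of integrals above is easy to check numerically; a quick sketch:

```python
import numpy as np

def gram_entry(m, j, k):
    """Entry (j,k) of the normalised Gram matrix for m = floor(bn) - floor(an)
    grid points, i.e. the correlation of the monomial columns t^{j-1}, t^{k-1}."""
    t = np.arange(1, m + 1) / m
    num = np.sum(t ** (j + k - 2))
    den = np.sqrt(np.sum(t ** (2 * j - 2))) * np.sqrt(np.sum(t ** (2 * k - 2)))
    return num / den

def limit_entry(j, k):
    # int_0^1 s^{j+k-2} ds / (sqrt(int s^{2j-2} ds) * sqrt(int s^{2k-2} ds))
    return np.sqrt((2 * j - 1) * (2 * k - 1)) / (j + k - 1)

err = abs(gram_entry(10_000, j=2, k=3) - limit_entry(2, 3))
```

For `j = 2`, `k = 3` the limit is `sqrt(15)/4`, and the finite-sample entry is within `O(1/m)` of it.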

To begin with, we need a convenient upper bound for $\langle v^{a,b}_{j,n}, e_n\rangle$. To that end, define $Y_t = \sum_{k=1}^{t}\xi_k$. Then,

\[ \big\langle v^{a,b}_{j,n}, e_n \big\rangle = \frac{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} (t/n)^{j-1}\, \xi_{t+\lfloor an\rfloor} }{ \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} (t/n)^{2j-2} } } = \frac{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} t^{j-1}\, \big[ \big( Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big) - \big( Y_{t-1+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big) \big] }{ \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} t^{2j-2} } } \]

\[ = \frac{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor-1} \big[ t^{j-1} - (t+1)^{j-1} \big] \big( Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big) + \big( \lfloor bn\rfloor-\lfloor an\rfloor \big)^{j-1} \big( Y_{\lfloor bn\rfloor} - Y_{\lfloor an\rfloor} \big) }{ \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} t^{2j-2} } }, \]

hence

\[ \big| \big\langle v^{a,b}_{j,n}, e_n \big\rangle \big| \le \frac{ 2\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big| Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big|\, \big( \lfloor bn\rfloor-\lfloor an\rfloor \big)^{j-1} }{ \sqrt{ \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} t^{2j-2} } } = \frac{ 2\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big| Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big|\, \big( \lfloor bn\rfloor-\lfloor an\rfloor \big)^{-1/2} }{ \sqrt{ \big( \lfloor bn\rfloor-\lfloor an\rfloor \big)^{-1} \sum_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big( t/(\lfloor bn\rfloor-\lfloor an\rfloor) \big)^{2j-2} } }. \]

Since $\lfloor bn\rfloor-\lfloor an\rfloor \ge p$, the denominator of the last fraction is bounded away from zero. Furthermore,

\[ \frac{(b-a)\,n}{\lfloor bn\rfloor-\lfloor an\rfloor} \le \frac{\lfloor bn\rfloor-\lfloor an\rfloor+1}{\lfloor bn\rfloor-\lfloor an\rfloor} \le \frac{p+1}{p}. \]

Hence, there exists a constant $K$ such that

\[ \sup_{\substack{a,b\in S^2 \\ b-a > h^{-1}(n)}} \frac{1}{\sqrt n}\,\big| \big\langle v^{a,b}_{j,n}, e_n \big\rangle \big| \le \sup_{\substack{a,b\in S^2 \\ b-a > h^{-1}(n)}} \frac{ K\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big| Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big| }{ n\sqrt{b-a} } \le \sup_{a,b\in S^2} \frac{ K\sqrt{h(n)}\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big| Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big| }{ n } \le \frac{ 2K\sqrt{h(n)}\, \sup_{t=1}^{n} |Y_t| }{ n }. \]
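The summation-by-parts step used above is a purely algebraic identity and can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(2)
T, j = 50, 3
xi = rng.standard_normal(T)
Y = np.cumsum(xi)                 # Y_t = xi_1 + ... + xi_t, so xi_t = Y_t - Y_{t-1}

t = np.arange(1, T + 1)
lhs = np.sum(t ** (j - 1) * xi)   # sum_{t=1}^T t^{j-1} xi_t

# Abel summation: sum_{t=1}^{T-1} [t^{j-1} - (t+1)^{j-1}] Y_t + T^{j-1} Y_T
s = np.arange(1, T)
rhs = np.sum((s ** (j - 1) - (s + 1.0) ** (j - 1)) * Y[:-1]) + T ** (j - 1) * Y[-1]
```

Both sides agree up to floating-point rounding, for any weights $t^{j-1}$ and any increments $\xi_t$.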

The assertion thus follows if we can show that

\[ \frac{ r(n) }{ \sqrt{ h(n)\log\log(n) } }\cdot \frac{ \sqrt{h(n)}\, \sup_{t=1}^{n}|Y_t| }{ n } = \frac{ \sup_{t=1}^{n}|Y_t| }{ \sqrt{ D^2(n)\log\log(n) } } \]

is tight. To do so, note that by theorem 1.3.5, in conjunction with remark 1.3.7,

\[ P\bigg( \bigcap_{k=1}^{\infty} \bigcup_{t\ge k} \Big\{ \frac{ |Y_t| }{ \sqrt{ D^2(t)\log\log(t) } } > 1 \Big\} \bigg) = 0, \]

i.e. for all $\varepsilon > 0$ there exists $N\in\mathbb N$ such that

\[ P\bigg( \bigcup_{t > N} \Big\{ \frac{ |Y_t| }{ \sqrt{ D^2(t)\log\log(t) } } > 1 \Big\} \bigg) \le \varepsilon. \tag{5.38} \]

Assuming $n > N$, we obtain for any $\Delta > 0$

\[ P\bigg( \frac{ \sup_{t=1}^{n}|Y_t| }{ \sqrt{ D^2(n)\log\log(n) } } > \Delta \bigg) \le P\bigg( \bigcup_{t=1}^{n} \Big\{ \frac{ |Y_t| }{ \sqrt{ D^2(n)\log\log(n) } } > \Delta \Big\} \bigg) \le P\bigg( \bigcup_{t=1}^{N} \Big\{ \frac{ |Y_t| }{ \sqrt{ D^2(n)\log\log(n) } } > \Delta \Big\} \bigg) + P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ |Y_t| }{ \sqrt{ D^2(n)\log\log(n) } } > \Delta \Big\} \bigg). \]

Since $\big( D^2(n)\log\log(n) \big)^{-1} \to 0$, the former summand is $o(1)$. It follows from (1.2) that $\sup_{t=1}^{n} D^2(t)/D^2(n)$ is bounded. Thus, the latter summand is smaller than $\varepsilon$ for sufficiently large $\Delta$ and $n$.

Lemma 5.3.15. Under the same assumptions and notations as in theorem 5.3.11, let $a\in(0,1)$. Then, if $\xi(\cdot)$ has short or long memory,

\[ \sup_{b\in(a,1)} \frac{1}{n}\big\| \mathrm{pr}_{V^{p,n}_{a,b}}\, e_n \big\|^2 = O_p\big( r^{-2}(n)\log\log(n) \big). \tag{5.39} \]

If $\xi(\cdot)$ is antipersistent, then

\[ \sup_{b\in(a,1)} \frac{1}{n}\big\| \mathrm{pr}_{V^{p,n}_{a,b}}\, e_n \big\|^2 = O_p\big( n^{-1} \big). \tag{5.40} \]

Proof. We argue similarly to the proof of lemma 5.3.14. As there, we may w.l.o.g. assume that $V^{p,n}_{a,b}$ has full column rank and obtain

\[ \Big\langle e_n, \mathrm{pr}_{V^{p,n}_{a,b}}\, e_n \Big\rangle \le \big\| V^{p,n\,\prime}_{a,b}\, e_n \big\|^2 \cdot \lambda_{\max}\Big[ \big( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} \big)^{-1} \Big] \]

with $\sup_{n\in\mathbb N,\, b\in(a,1),\, \lfloor bn\rfloor-\lfloor an\rfloor\ge p} \lambda_{\max}\big[ ( V^{p,n\,\prime}_{a,b} V^{p,n}_{a,b} )^{-1} \big]$ being bounded.

In the case of short and long memory, it is thus again sufficient to show that

\[ \sup_{b\in(a,1)} \frac{1}{n}\big\langle v^{a,b}_{j,n}, e_n \big\rangle^2 = O_p\big( r^{-2}(n)\log\log(n) \big) \]

for all $j = 1,\dots,p$. But by the same argument as before, we obtain

\[ \sup_{b\in(a,1)} \frac{1}{\sqrt n}\,\big| \big\langle v^{a,b}_{j,n}, e_n \big\rangle \big| \le \sup_{b\in(a,1)} \frac{ K\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big| Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big| }{ n\sqrt{b-a} }. \]

However, since $a$ is fixed this time and $\xi(\cdot)$ is strictly stationary, we have

\[ \sup_{b\in(a,1)} \frac{ K\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} \big| Y_{t+\lfloor an\rfloor} - Y_{\lfloor an\rfloor} \big| }{ n\sqrt{b-a} } \overset{d}{=} \sup_{b\in(a,1)} \frac{ K\, \sup_{t=1}^{\lfloor bn\rfloor-\lfloor an\rfloor} |Y_t| }{ n\sqrt{b-a} } \le K\, \sup_{t=1}^{n} \frac{ |Y_t| }{ \sqrt{tn} }. \]

Again, we apply the law of the iterated logarithm in order to show that

\[ \frac{ r(n) }{ \sqrt{\log\log(n)} }\, \sup_{t=1}^{n} \frac{ |Y_t| }{ \sqrt{tn} } \]

is tight: let $N\in\mathbb N$ be as in (5.38). Assuming $n > N$, we obtain for any $\Delta > 0$

\[ P\bigg( \frac{ r(n) }{ \sqrt{\log\log(n)} }\, \sup_{t=1}^{n} \frac{ |Y_t| }{ \sqrt{tn} } > \frac{\Delta}{2K} \bigg) \le P\bigg( \bigcup_{t=1}^{n} \Big\{ \frac{ \sqrt n\, |Y_t| }{ \sqrt{ D^2(n)\, t\log\log(n) } } > \frac{\Delta}{2K} \Big\} \bigg) \]

\[ \le P\bigg( \bigcup_{t=1}^{N} \Big\{ \frac{ \sqrt n\, |Y_t| }{ \sqrt{ D^2(n)\, t\log\log(n) } } > \frac{\Delta}{2K} \Big\} \bigg) + P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ \sqrt n\, |Y_t| }{ \sqrt{ D^2(n)\, t\log\log(n) } } > \frac{\Delta}{2K} \Big\} \bigg). \]

Since $n/\big( D^2(n)\log\log(n) \big) \to 0$, the former summand is $o(1)$. We can estimate the latter summand by

\[ P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ \sqrt n\, |Y_t| }{ \sqrt{ D^2(n)\, t\log\log(n) } } > \frac{\Delta}{2K} \Big\} \bigg) \le P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ |Y_t| }{ \sqrt{ D^2(t)\log\log(t) } } > \frac{ \sqrt{ t\, D^2(n)\log\log(n) } }{ \sqrt{ n\, D^2(t)\log\log(t) } }\, \frac{\Delta}{2K} \Big\} \bigg). \]

Suppose that $\xi(\cdot)$ has short memory. Then $D^2(n)/n \to \sum_{k\in\mathbb Z}\gamma(k) = c$, say. Hence, we may w.l.o.g. assume that $\tfrac c2 < D^2(t)/t < 2c$ for all $t > N$. Therefore,

\[ \frac{ t\, D^2(n)\log\log(n) }{ n\, D^2(t)\log\log(t) } \ge \frac{ t\, D^2(n) }{ n\, D^2(t) } \ge \frac{ c/2 }{ 2c } = \frac14 \]

for all $t > N$ and thus

\[ P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ \sqrt n\, |Y_t| }{ \sqrt{ D^2(n)\, t\log\log(n) } } > \frac{\Delta}{2K} \Big\} \bigg) \le \varepsilon \]

for $\Delta$ sufficiently large, by (5.38).

Suppose now that $\xi(\cdot)$ has long memory, i.e. $D^2(t) = t^{2-\alpha}L(t)$ for some $\alpha\in(0,1)$ and a slowly varying function $L$. The function $\bar L(t) = L(t)\log\log(t)$ is a product of two slowly varying functions and therefore slowly varying itself. Hence

\[ \sup_{t=1}^{n} \frac{ n\, D^2(t)\log\log(t) }{ t\, D^2(n)\log\log(n) } = \sup_{t=1}^{n} \frac{ t^{1-\alpha} }{ n^{1-\alpha} }\, \frac{ \bar L(t) }{ \bar L(n) } \le \sup_{t=1}^{n} \bigg| \Big( \frac tn \Big)^{1-\alpha} \frac{ \bar L(t) }{ \bar L(n) } - \Big( \frac tn \Big)^{1-\alpha} \bigg| + 1 \le q^{n}_{1,1-\alpha} + 1 \to 1 \]

by 1.1.4; in other words, $\lim_n \inf_{t=1}^{n} \frac{ t\, D^2(n)\log\log(n) }{ n\, D^2(t)\log\log(t) } = 1$. As above, we conclude from (5.38) that

\[ P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ \sqrt n\, |Y_t| }{ \sqrt{ D^2(n)\, t\log\log(n) } } > \frac{\Delta}{2K} \Big\} \bigg) \le \varepsilon \]

for $\Delta$ sufficiently large.

If $\xi(\cdot)$ is antipersistent, we must show that

\[ \sup_{b\in(a,1)} \frac{1}{n}\big\langle v^{a,b}_{j,n}, e_n \big\rangle^2 = O_p\big( n^{-1} \big). \tag{5.41} \]

Once more, the law of the iterated logarithm paves the way:

\[ P\bigg( \sqrt n\, \sup_{t=1}^{n} \frac{ |Y_t| }{ \sqrt{tn} } > \frac{\Delta}{2K} \bigg) \le P\bigg( \bigcup_{t=1}^{N} \Big\{ \frac{ |Y_t| }{ \sqrt t } > \frac{\Delta}{2K} \Big\} \bigg) + P\bigg( \bigcup_{t=N+1}^{n} \Big\{ \frac{ |Y_t| }{ \sqrt t } > \frac{\Delta}{2K} \Big\} \bigg) \]

for all $n > N$ and all $\Delta > 0$. As $D^2(t)/t \to 0$, both summands can be made arbitrarily small for appropriate choices of $N$ and $\Delta$.

Remark 5.3.16. Since $\xi(\cdot)$ is strictly stationary, we have under the assumptions of theorem 5.3.11 for all $b\in(0,1)$

\[ \sup_{a\in(0,b)} \frac{1}{n}\big\| \mathrm{pr}_{V^{p,n}_{a,b}}\, e_n \big\|^2 = O_p\big( r^{-2}(n)\log\log(n) \big). \tag{5.42} \]

5.3.17. There are several unsolved problems in connection with theorem 5.3.11. First of all, the result seems incomplete, since in the case of antipersistent noise the normalising sequence $r^2(n)$ has to be replaced by $n$. Furthermore, in the case of long and short memory we have $\log\log(n)/\lambda(n)\to 0$, which is not in line with the results of Hannan and Quinn [HQ79], who also admit $\lambda(n) = \log\log(n)$. In practical applications, it is not clear how the normalising sequence $r^2(n)$ is to be calculated, since the corresponding parameter $\alpha$ has to be estimated from the data. Moreover, the sequences $h$ and $\lambda$ have to be determined.

Finally, note that by means of theorem 1.3.14 an analogous result can be derived if $\xi(t)$ is subordinated to a Gaussian process in the sense of 1.3.8.

6 Simulation studies