
\[
dB(y_1)\int_{-\infty}^{y_1} dB(y_2)\cdots\int_{-\infty}^{y_{m-1}} dB(y_m)\int_0^t \prod_{i=1}^{m}(s-y_i)^{H_0-\frac{3}{2}}\,ds
\;+\;\int_0^t dB(y_1)\int_{-\infty}^{y_1} dB(y_2)\cdots\int_{-\infty}^{y_{m-1}} dB(y_m)\int_{y_1}^t \prod_{i=1}^{m}(s-y_i)^{H_0-\frac{3}{2}}\,ds\Bigg\}
\]

is well-defined in the sense of Itô [Ito51]. Here, $c_{H,m}$ denotes a normalising coefficient ensuring that $\operatorname{Var}(H_m^H(1)) = 1$. Recall that by symmetry of the multiple Wiener integral

\[
H_m^H(t) = \tilde c_{H,m}\int_{\mathbb{R}^m}\left(\int_0^t \prod_{j=1}^{m}(s-y_j)_{+}^{-\left(\frac{1}{2}+\frac{1-H}{m}\right)}\,ds\right) dB(y_1)\ldots dB(y_m),
\]

see property (I.2) in [Ito51]. The process $\{H_m^H(t),\ t\in\mathbb{R}_+\}$, usually referred to as the Hermite process of order $m$, is $H$-sssi with mean zero and finite variance, see [Taq77, Taq78]. In particular, its second order structure is given by (1.17). Note that $H_m^H(\cdot)$ is in fact a fractional Brownian motion if $m = 1$ and non-Gaussian if $m > 1$.
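The Gaussian case $m=1$ can be read off directly from the representation above: the multiple Wiener integral collapses to a single one, and the exponent $-(\frac{1}{2}+\frac{1-H}{m})$ becomes $H-\frac{3}{2}$, which is the classical moving-average kernel of fractional Brownian motion:

```latex
% Case m = 1: a single Wiener integral with the fBm moving-average kernel.
H_1^H(t) = \tilde c_{H,1} \int_{\mathbb{R}} \left( \int_0^t (s-y)_+^{H-\frac{3}{2}} \, ds \right) dB(y),
\qquad \tfrac{1}{2} < H < 1 .
```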

1.3 Limit theorems for partial sums

1.3.1. The sole assumption that $\{\xi(t),\ t\in\mathbb{Z}\}$ is second order stationary is usually not strong enough to draw any conclusions about the asymptotic distribution of the broken line process $Z_{n,\xi}(\cdot)$. Therefore, we need to impose more restrictive assumptions on $\{\xi(t),\ t\in\mathbb{Z}\}$.

We start with linear processes with independent innovations. More precisely, we assume that
\[
\xi(t) = \sum_{j=0}^{\infty} b_j\,\varepsilon(t-j)
\]
with $(b_j)_{j\in\mathbb{Z}} \in \ell^2(\mathbb{Z})$ and $\{\varepsilon(t),\ t\in\mathbb{Z}\}$ denoting an iid sequence such that $E(\varepsilon_t) = 0$ and $\operatorname{Var}(\varepsilon_t) < \infty$.

Note that a linear process is always strictly stationary. By [Han70, theorem 3, p. 204], a linear process is mixing, in particular ergodic. Since {ξ(t), t∈Z} has a finite mean, it follows from the ergodic theorem of Birkhoff that the strong law of large numbers holds, see e.g. [Han70, theorem 2, p. 203].
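As a quick numerical illustration of the strong law just quoted, the following sketch (the MA(2) coefficients, seed and sample size are ad hoc illustrative choices, not from the text) simulates a finite-order linear process and checks that its sample mean is close to $E[\xi(t)] = 0$:

```python
import numpy as np

# A toy check of the strong law of large numbers for a linear process:
# xi(t) = b0*eps(t) + b1*eps(t-1) + b2*eps(t-2) with iid N(0,1) innovations.
# The MA(2) coefficients, seed and sample size are illustrative choices.
rng = np.random.default_rng(0)
b = np.array([1.0, 0.5, 0.25])
n = 200_000
eps = rng.standard_normal(n + len(b) - 1)

# np.convolve in "valid" mode applies the linear filter sum_j b_j eps(t - j)
xi = np.convolve(eps, b, mode="valid")

sample_mean = xi.mean()
print(sample_mean)  # should be close to E[xi(t)] = 0
```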

Theorem 1.3.2. Let $\xi(t) = \sum_{j=0}^{\infty} b_j\varepsilon(t-j)$ be a linear process as in 1.3.1. Assume that $\operatorname{Var}\left[\sum_{t=1}^{n}\xi(t)\right] \sim n^{2H}L^2(n)$ as $n\to\infty$, with $0 < H < 1$ and $L$ slowly varying at infinity. Then
\[
Z_{n,\xi}(t) \xrightarrow{d} B_H(t) \qquad (1.19)
\]
where $\xrightarrow{d}$ denotes convergence in the sense of the finite dimensional distributions.

Proof. The original result is due to Davydov [Dav70]. For additional comments, see [Taq75, Taq03].

Remark 1.3.3. By [Taq75, lemma 2.1], the sequence $\{Z_{n,\xi}(\cdot),\ n = 1,2,3,\ldots\}$ is tight in $D([0,1])$ for $\frac{1}{2} < H < 1$. In particular, we have $Z_{n,\xi}(t) \xrightarrow{D} B_H(t)$, where $\xrightarrow{D}$ denotes weak convergence in the sense of Skorokhod, see e.g. [Bil68]. If $E(|\varepsilon(t)|^{2r}) < \infty$ for some $r \ge 2$ and $\frac{1}{r+2} < H < 1$, then it follows from the arguments of Davydov that $\{Z_{n,\xi}(\cdot),\ n = 1,2,3,\ldots\}$ is tight in $D([0,1])$, cf. [Taq75, page 228].

As an immediate consequence of theorem 1.3.2, we obtain the following corollary:

Corollary 1.3.4. Let $\{\xi(t),\ t\in\mathbb{Z}\}$ be as in theorem 1.3.2.

(i) If $\{\xi(t),\ t\in\mathbb{Z}\}$ is a linear short-memory process, then $Z_{n,\xi}(t) \xrightarrow{d} B(t)$.

(ii) If $\{\xi(t),\ t\in\mathbb{Z}\}$ is either a long-memory or antipersistent process, then $Z_{n,\xi}(t) \xrightarrow{d} B_H(t)$ with $H = \frac{2-\alpha}{2}$ and $\alpha$ as in definition 1.1.5.
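A deterministic numerical check of part (ii): starting from truncated coefficients $b_j \sim j^{d-1}$ one can compute the partial-sum variance $D^2(n)$ exactly from the autocovariances and estimate its growth exponent, which should approach $2H = 2-\alpha$. All parameter choices below ($d$, the truncation level $J$, and $n$) are illustrative, not from the text:

```python
import numpy as np

# Deterministic check of the growth D^2(n) ~ n^(2 - alpha), alpha = 1 - 2d,
# for truncated coefficients b_j ~ j^(d - 1).  All parameter choices are
# illustrative; the truncation level J introduces a small downward bias.
d = 0.3                              # so alpha = 1 - 2d = 0.4 and 2H = 1.6
J = 1 << 18                          # truncation level of the coefficients
b = (np.arange(J) + 1.0) ** (d - 1.0)

# autocovariances gamma(k) = sum_j b_j b_{j+k} via FFT autocorrelation
f = np.fft.rfft(b, 2 * J)
gamma = np.fft.irfft(f * np.conj(f))[:J]

def D2(n):
    # Var(sum_{t=1}^n xi(t)) = n*gamma(0) + 2*sum_{k=1}^{n-1} (n - k)*gamma(k)
    k = np.arange(1, n)
    return n * gamma[0] + 2.0 * np.sum((n - k) * gamma[k])

n = 2000
exponent = np.log2(D2(2 * n) / D2(n))
print(exponent)  # expected near 2 - alpha = 1.6, up to finite-size bias
```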

Theorem 1.3.5. Let $\xi(t) = \sum_{j=0}^{\infty} b_j\varepsilon(t-j)$ be as in theorem 1.3.2. Suppose that for all $\gamma > 0$ there exists $\delta < 1$ such that
\[
\limsup_{n}\, \max_{\delta n \le i \le n} D^2(i)/D^2(n) < 1+\gamma, \qquad (1.20)
\]
\[
E(|\varepsilon(t)|^r) < \infty \quad\text{for some } r > 2, \qquad (1.21)
\]
and
\[
\liminf_{n}\, D^2(Kn)/D^2(n) > K^{2/r} \quad\text{for some integer } K \ge 2. \qquad (1.22)
\]
Then
\[
\limsup_{n} \frac{|Z_{n,\xi}(1)|}{\sqrt{2\log\log(D^2(n))}} \le 1 \quad\text{almost surely.} \qquad (1.23)
\]
Proof. This result is due to Lai and Wei [LW82, theorem 3].

Remark 1.3.6. Assume $D^2(n) = n^{2-\alpha}L(n)$ for some $\alpha\in(0,2)$ and a slowly varying $L$. Then
\[
\limsup_{n}\, \max_{\delta n \le i \le n} \frac{D^2(i)}{D^2(n)} = \limsup_{n}\, \max_{\delta n \le i \le n} \frac{i^{2-\alpha}L(i)}{n^{2-\alpha}L(n)} \le \limsup_{n}\, \max_{\delta n \le i \le n} \frac{L(i)}{L(n)} = 1,
\]
by (1.4). In particular, $\limsup_{n} \max_{\delta n\le i\le n} D^2(i)/D^2(n) < 1+\gamma$ for all $\gamma > 0$; in other words, condition (1.20) always holds if $\{\xi(t),\ t\in\mathbb{Z}\}$ has short memory, long memory or is antipersistent.

Likewise, under the same assumptions,
\[
\liminf_{n} \frac{D^2(Kn)}{D^2(n)} = \liminf_{n}\, K^{2-\alpha}\,\frac{L(Kn)}{L(n)} = K^{2-\alpha}.
\]
Thus, condition (1.22) holds if $r(2-\alpha) > 2$. Since $r > 2$ by assumption (1.21), condition (1.22) does not impose any additional restriction if $\{\xi(t),\ t\in\mathbb{Z}\}$ is either a long- or short-memory process. One should note that this is not the case for $\alpha \in (1,2)$, i.e. antipersistence. In fact, since the required moment order $r$ must exceed $2/(2-\alpha)$, which tends to infinity as $\alpha\to 2$, the conditions imposed by (1.21) can be rather strict.
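For instance, in the antipersistent case $\alpha = 3/2$ the computation above gives:

```latex
% Worked example: alpha = 3/2.  Condition (1.22) then requires
r(2-\alpha) > 2
\quad\Longleftrightarrow\quad
r > \frac{2}{2-\alpha} = 4,
% so (1.21) must hold with moments of order strictly larger than 4.
```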

Remark 1.3.7. One should also note that we can replace $\sqrt{2\log\log(D^2(n))}$ in (1.23) by the more familiar term $\sqrt{2\log\log(n)}$. Since
\[
\frac{\log\log n^{\beta}}{\log\log n} = \frac{\log\left[\beta\log n\right]}{\log\log n} = \frac{\log\beta + \log\log n}{\log\log n} \to 1
\]
for all $\beta\in(0,2)$, simply choose $\beta_1, \beta_2$ such that $\frac{n^{\beta_1}}{D^2(n)} \to \infty$ and $\frac{n^{\beta_2}}{D^2(n)} \to 0$.

1.3.8. In a more general setup, we now assume that $\{\xi(t),\ t\in\mathbb{Z}\}$ is not necessarily linear itself but only subordinated, see [Maj81, page 3], to another linear process. More specifically, we assume that $\xi(t)$ is given by
\[
\xi(t) = G(\eta(t)) \qquad (1.24)
\]
where $\eta(t)$ denotes a linear process yet to be specified and $G:\mathbb{R}\to\mathbb{R}$ a sufficiently regular function.

1.3.9. We start with the case of a stationary Gaussian process $\eta(t)$. More precisely, let $\{\eta(t),\ t\in\mathbb{Z}\}$ be a standardised¹, stationary Gaussian process such that
\[
\lim_{k\to\infty} \operatorname{Cov}\left[\eta(t), \eta(t+k)\right] = 0
\]
and denote by $G$ a measurable function satisfying
\[
E\left[G(\eta(t))\right] = 0, \qquad E\left[G^2(\eta(t))\right] < \infty.
\]
Under these assumptions, $\xi(t)$ as given in (1.24) defines a strictly stationary process with mean zero and finite variance. It is ergodic by [Sto74, theorem 3.5.8].

Denoting
\[
\mathcal{G} = L^2\!\left(\mathbb{R},\ \tfrac{1}{\sqrt{2\pi}}\exp\left(-\tfrac{x^2}{2}\right)dx\right), \qquad (1.25)
\]
we immediately find that $G\in\mathcal{G}$. It is well-known, see e.g. [Roz67] or [Abr72], that the Hermite polynomials
\[
H_k(x) = (-1)^k \exp\left(\frac{x^2}{2}\right)\frac{d^k}{dx^k}\exp\left(-\frac{x^2}{2}\right) \qquad (k = 0,1,2,\ldots) \qquad (1.26)
\]
form a complete orthogonal system in $\mathcal{G}$. Thus

\[
G(\cdot) = \sum_{k=0}^{\infty} \frac{C_k}{k!}\,H_k(\cdot)
\]
with $C_k = \langle H_k, G\rangle_{\mathcal{G}}$ and convergence understood with respect to the scalar product $\langle\cdot,\cdot\rangle_{\mathcal{G}}$. Note that $\|H_k\|^2_{\mathcal{G}} = k!$ and thus $\sum_{k=0}^{\infty} \frac{C_k^2}{k!} < \infty$. It follows that
\[
\xi(t) = \sum_{k=0}^{\infty} \frac{C_k}{k!}\,H_k\!\left(\eta(t)\right)
\]
where convergence is understood in the sense of the $L^2(P)$-norm.
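The orthogonality relation $\langle H_j, H_k\rangle_{\mathcal{G}} = k!\,\delta_{jk}$ used above can be verified numerically; the sketch below relies on NumPy's probabilists' Hermite module (`numpy.polynomial.hermite_e`), whose polynomials coincide with (1.26), and on Gauss-Hermite quadrature, which is exact for polynomial integrands:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Verify <H_j, H_k>_G = k! * delta_{jk} for the polynomials (1.26), using
# Gauss-Hermite quadrature with weight exp(-x^2/2) (exact for polynomials).
x, w = He.hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)        # normalise to the standard normal density

def inner(j, k):
    Hj = He.hermeval(x, [0.0] * j + [1.0])   # coefficient vector selects He_j
    Hk = He.hermeval(x, [0.0] * k + [1.0])
    return float(np.sum(w * Hj * Hk))

for j in range(6):
    for k in range(6):
        expected = math.factorial(k) if j == k else 0.0
        assert abs(inner(j, k) - expected) < 1e-8
print("orthogonality of H_0, ..., H_5 verified")
```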

As in [Taq75], we define the Hermite rank of $G$ by $m_G = \min\{k\in\mathbb{N} : C_k \ne 0\}$. Note that $m_G > 0$, since $E[G(\eta_t)] = 0$. The covariance structure of $\xi(t)$ is given by
\[
\operatorname{Cov}\left(\xi(t), \xi(s)\right) = \sum_{k=m_G}^{\infty} \frac{C_k^2}{k!}\,\gamma_{\eta}^{k}(t-s),
\]
compare e.g. [Roz67, page 183]. Consequently, $\sum_{n\in\mathbb{Z}}|\gamma_{\xi}(n)| < \infty$ if $\sum_{n\in\mathbb{Z}}|\gamma_{\eta}(n)| < \infty$. On the other hand, if $\eta(t)$ is a long-memory process, we find
\[
\gamma_{\xi}(n) \sim \frac{C_{m_G}^2}{m_G!}\,\gamma_{\eta}^{m_G}(n). \qquad (1.27)
\]

¹Throughout this section, we refer to a finite variance process as standardised if $E[\eta(t)] = 0$ and $\operatorname{Var}[\eta(t)] = 1$.

Hence, $\xi(t)$ has long memory with parameter $\alpha m_G$ if $\alpha m_G < 1$, and $\sum_{k\in\mathbb{Z}}|\gamma_{\xi}(k)| < \infty$ if $\alpha m_G > 1$.

Finally, note that antipersistence of $\eta(t)$ does in general not entail antipersistence of $\xi(t)$: if $C_m = 0$ for all odd $m$, then $\gamma_{\xi}(k) \ge 0$ and thus $\sum_{k\in\mathbb{Z}}\gamma_{\xi}(k) > 0$.
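For a concrete instance of the covariance expansion, take $G = H_2$, i.e. $G(x) = x^2 - 1$ with Hermite rank $m_G = 2$ and $C_2 = \|H_2\|^2_{\mathcal{G}} = 2$; the expansion then predicts $\operatorname{Cov}(G(X), G(Y)) = \frac{C_2^2}{2!}\rho^2 = 2\rho^2$ for jointly standard normal $(X, Y)$ with correlation $\rho$. A small Monte Carlo sketch (seed and sample size are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of the covariance expansion for G(x) = x^2 - 1 = H_2(x):
# the expansion predicts Cov(G(X), G(Y)) = (C_2^2 / 2!) * rho^2 = 2 * rho^2
# for jointly standard normal (X, Y) with correlation rho.
rng = np.random.default_rng(42)
rho = 0.6
n = 2_000_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

cov = np.mean((x**2 - 1.0) * (y**2 - 1.0))
print(cov)  # the expansion predicts 2 * rho^2 = 0.72
```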

Theorem 1.3.10. Let $\{\eta(t),\ t\in\mathbb{Z}\}$ and $G$ be as in 1.3.9. Define $\xi(t) = G(\eta(t))$. Assume that $\{\xi(t),\ t\in\mathbb{Z}\}$ is a short-memory process in the sense of definition 1.1.5. Then
\[
Z_{n,\xi}(\cdot) \xrightarrow{d} B(\cdot).
\]

Proof. This particular result is due to Giraitis and Surgailis [GS85, theorem 5]. Compare also [BM83].

Remark 1.3.11. Note that the tightness criteria given by Taqqu [Taq75] and Davydov [Dav70] require the existence of moments of order higher than 2 if applied to short-memory processes.

Theorem 1.3.12. Let $\{\eta(t),\ t\in\mathbb{Z}\}$ be a stationary, standardised Gaussian process and let $G$ be as in 1.3.9. Define $\xi(t) = G(\eta(t))$. Assume $\eta(t)$ is a long-memory process such that $\alpha m_G < 1$. Then
\[
Z_{n,\xi}(\cdot) \xrightarrow{D} \frac{C_{m_G}}{m_G!}\,H^{H}_{m_G}(\cdot)
\]
where $H^{H}_{m_G}$ denotes a Hermite process of order $m_G$ with self-similarity parameter $H = \frac{2-\alpha m_G}{2}$.

Proof. This pioneering result is due to Taqqu [Taq75, Taq79] and Dobrushin and Major [DM79].

Remark 1.3.13. For the case $\gamma_{\xi}(k) \sim \frac{L(|k|)}{k}$, see [GS85, theorem 6].

Theorem 1.3.14. Let $\{\varepsilon(t),\ t\in\mathbb{Z}\}$ be a standardised Gaussian iid sequence and let $(b_j)_{j\in\mathbb{N}}$ be as in 1.1.8 and such that $\sum b_j^2 = 1$. Define
\[
\eta(t) = \sum_{j=0}^{\infty} b_j\,\varepsilon(t-j) \qquad (t\in\mathbb{Z}).
\]
Then $\eta(t)$ is long-range dependent with $\alpha = 1-2d$. Let $G$ be as in 1.3.9 such that $\alpha m_G < 1$ and define
\[
\xi(t) = G(\eta(t)) \qquad (t\in\mathbb{Z}).
\]

Furthermore, define the polygonal line process $\tilde Z_{n,\xi}(\cdot)$ by
\[
\tilde Z_{n,\xi}(s) =
\begin{cases}
0 & s = 0, \\[4pt]
\dfrac{1}{D(n)}\displaystyle\sum_{i=1}^{t}\xi(i) & s = \dfrac{t}{n},\ t = 1,\ldots,n, \\[4pt]
\tilde Z_{n,\xi}\!\left(\dfrac{t-1}{n}\right) + n\left(s-\dfrac{t-1}{n}\right)\left[\tilde Z_{n,\xi}\!\left(\dfrac{t}{n}\right) - \tilde Z_{n,\xi}\!\left(\dfrac{t-1}{n}\right)\right] & \dfrac{t-1}{n} < s < \dfrac{t}{n},\ t = 1,\ldots,n.
\end{cases}
\]
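The definition amounts to linear interpolation of the normalised partial sums between the knots $s = t/n$. A minimal sketch (the function name and test values are ad hoc):

```python
import numpy as np

# Minimal sketch: the polygonal line process is linear interpolation of the
# normalised partial sums between the knots s = t/n.  Names are ad hoc and
# the normalisation D(n) is passed in explicitly.
def polygonal(xi, D_n):
    n = len(xi)
    knots = np.concatenate(([0.0], np.cumsum(xi) / D_n))  # values at s = t/n
    grid = np.arange(n + 1) / n
    return lambda s: float(np.interp(s, grid, knots))

xi = np.array([1.0, -2.0, 3.0, 0.5])
Z = polygonal(xi, D_n=2.0)
print(Z(0.0), Z(0.25), Z(1.0))  # 0.0, 0.5 and 1.25 by direct computation
```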

Then, with probability one, the sequence
\[
\left\{(2\log\log n)^{-\frac{m_G}{2}}\,\tilde Z_{n,\xi}(\cdot)\right\}_{n\in\mathbb{N}}
\]
is relatively compact in $C([0,1])$. The set of limit points is given by $\frac{C_{m_G}}{m_G!}\mathcal{K}$, where $\mathcal{K}$ denotes a compact subset of $C([0,1])$ consisting of functions which have the representation
\[
f(t) = \int_0^t \left(\int_{-\infty}^{v} (v-u)^{-\alpha} g(u)\,du\right)^{m} dv
\]
for some $g\in L^2(\mathbb{R})$ such that $\int g^2(u)\,du = 1$.

Proof. For a proof of this particular result, see [MO87, theorem 5]. For a similar result, cf. [Taq77]. Lai and Stout [LS80] obtained the upper half of the law of the iterated logarithm in a more general setting.

Remark 1.3.15. In the case $m = 1$, $\mathcal{K}$ can be interpreted as the unit ball of the reproducing kernel Hilbert space of a fractional Brownian motion, see e.g. [Taq77, PT00b].

1.3.16. We saw in (1.27) and theorem 1.3.12 that the asymptotic behaviour of $\gamma_{\xi}$ and the limit distribution of $Z_{n,\xi}(\cdot)$ essentially depend on $G$ only via the Hermite rank $m_G$ if the underlying Gaussian process has long memory. This so-called reduction principle still applies if we replace the underlying Gaussian process by a general linear one. For such processes, the role of the Hermite expansion is played by an expansion in Appell polynomials, which we will describe in the following. The pioneering result in this field was obtained by Surgailis [Sur82, theorem 2], who derived an analogous result to theorem 1.3.12 for a certain class of linear processes and an entire function $G(x) = \sum_{k=0}^{\infty} c_k x^k$ satisfying some regularity assumptions. It is worthwhile noting that Surgailis established his result without the use of Appell polynomials. The link between these two topics was discovered later, see [Gir85, GS86, AT87].

This topic is technically more involved, so we restrict ourselves to the very basic definitions and results. For a detailed introduction, we refer to [Sch09, section 2.2 and 2.3]. See also [Sur03b, Sur03a].

Definition 1.3.17. Let $X$ be a random variable such that $E|X|^k < \infty$. As in [AT87], we define the Appell polynomials recursively by
\[
A_0(x) = 1, \qquad \frac{d}{dx}A_n(x) = n\,A_{n-1}(x), \qquad E\left[A_n(X)\right] = \delta_{0,n}
\]
for $n = 1,\ldots,k$.
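The recursion is constructive: each $A_n$ is the antiderivative of $n A_{n-1}$, with its constant term fixed by $E[A_n(X)] = 0$. A sketch implementing this with coefficient arrays and the moment sequence of $X$ (all names are ad hoc); for the standard normal, the output should agree with the Hermite polynomials (1.26):

```python
import numpy as np

# Appell polynomials via the recursion of definition 1.3.17: A_n is the
# antiderivative of n*A_{n-1}, with the constant term fixed by E[A_n(X)] = 0.
# Polynomials are coefficient arrays (index i = coefficient of x^i) and
# moments[i] = E[X^i].  A sketch with ad hoc names.
def appell(moments, k):
    polys = [np.array([1.0])]                        # A_0(x) = 1
    for n in range(1, k + 1):
        prev = polys[-1]
        a = np.zeros(n + 1)
        a[1:] = n * prev / np.arange(1, n + 1)       # d/dx A_n = n * A_{n-1}
        a[0] = -np.dot(a[1:], moments[1:n + 1])      # enforce E[A_n(X)] = 0
        polys.append(a)
    return polys

# standard normal moments E[X^i]: 1, 0, 1, 0, 3 -- the A_n should coincide
# with the Hermite polynomials (1.26), e.g. A_2 = x^2 - 1, A_3 = x^3 - 3x
A = appell(np.array([1.0, 0.0, 1.0, 0.0, 3.0]), 4)
print(A[2], A[3])
```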

Remark 1.3.18. Equivalently, the Appell polynomials may be defined in terms of their generating function, i.e. by the relation
\[
\sum_{n=0}^{\infty} \frac{z^n}{n!}\,A_n(x) = \frac{\exp(zx)}{E\left(\exp(zX)\right)}, \qquad (1.28)
\]
see e.g. [Gir85, GS86, AT87]. For detailed arguments, compare also [Sch09] and references therein. If $X$ is a standardised Gaussian variable, the generating function of the Appell polynomials and the generating function of the Hermite polynomials coincide, see [Abr72]. Hence, the Appell polynomials of a standardised Gaussian distribution and the Hermite polynomials are the same.

If the moment generating function does not exist as an analytical object, the generating function can be defined as a formal power series, see e.g. [AT87, Sch09].
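The Gaussian case can be checked directly: for standard normal $X$ we have $E(\exp(zX)) = \exp(z^2/2)$, so (1.28) reduces to the classical Hermite generating function:

```latex
\sum_{n=0}^{\infty} \frac{z^n}{n!}\,A_n(x)
  = \frac{\exp(zx)}{\exp(z^2/2)}
  = \exp\!\left(zx - \frac{z^2}{2}\right)
  = \sum_{n=0}^{\infty} \frac{z^n}{n!}\,H_n(x).
```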

Likewise, Appell polynomials can be defined in terms of the characteristic function of $X$. In this case, we have the relation
\[
A_k(x) = (-i)^k \left.\frac{d^k}{dz^k}\,\frac{\exp(izx)}{E\left(\exp(izX)\right)}\right|_{z=0},
\]
see [Sur03b]. Note that $\frac{d^k}{dz^k}\frac{\exp(izx)}{E(\exp(izX))}$ exists if $E|X|^k < \infty$.

Definition 1.3.19. Let $X$ be a random variable and let $G$ be a function which admits an expansion in Appell polynomials, i.e.
\[
G(x) = \sum_{k=0}^{\infty} c_k A_k(x)
\]
where $A_k(x)$ denotes the $k$-th Appell polynomial associated with $X$, or rather the distribution of $X$. We define the Appell rank of $G$ by
\[
m_G = \min\{k = 0, 1, 2, \ldots : c_k \ne 0\}.
\]

Remark 1.3.20. Note that Surgailis [Sur82, Sur00, Sur03a] defines the Appell rank by
\[
m_G = \min\left\{k\in\mathbb{N} : E\left[G^{(k)}(X)\right] \ne 0\right\}
\]
where $G^{(k)}(x) = \frac{d^k}{dx^k}G(x)$. Both definitions coincide if $G$ is a polynomial. If $G$ is not a polynomial but admits an expansion $G(x) = \sum_{k=0}^{\infty} c_k A_k(x)$, both definitions are still equivalent under some regularity assumptions, see [Gir85, proposition 2].
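As a simple worked example, take $G(x) = x^2 - 1$ with $X$ standard normal. Then $G = A_2$ in the Appell (= Hermite) expansion, so $c_0 = c_1 = 0$ and $c_2 = 1$, giving $m_G = 2$; Surgailis' definition agrees:

```latex
E\left[G'(X)\right] = E[2X] = 0, \qquad E\left[G''(X)\right] = 2 \neq 0,
\qquad\text{hence } m_G = 2 \text{ under both definitions.}
```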

Theorem 1.3.21. Let $\{\varepsilon(t),\ t\in\mathbb{Z}\}$ be a standardised sequence of iid random variables. Assume that $E\left[\varepsilon^k(t)\right] < \infty$ for all $k\in\mathbb{N}$. Let $(b_j)_{j\in\mathbb{Z}}$ be a real sequence such that $\sum_{j\in\mathbb{Z}} b_j^2 = 1$ and let $G$ be a polynomial such that $E[G(\eta(t))] = 0$ and $E[G^2(\eta(t))] < \infty$. Define
\[
\eta(t) = \sum_{j\in\mathbb{Z}} b_j\,\varepsilon(t-j), \qquad \xi(t) = G(\eta(t)).
\]
Assume that $\sum_{k\in\mathbb{Z}}|\gamma_{\xi}(k)| < \infty$ and $\sum_{k\in\mathbb{Z}}\gamma_{\xi}(k) > 0$. Then
\[
Z_{n,\xi}(\cdot) \xrightarrow{d} B(\cdot).
\]
Proof. This result can be found in [Sur03a, theorem 1.1]. The original result is due to Giraitis [Gir85, propositions 6 and 8].

Theorem 1.3.22. Let $\{\varepsilon(t),\ t\in\mathbb{Z}\}$ be a standardised sequence of iid random variables. Assume that $\Delta = E\left[\exp(r_0|\varepsilon(t)|)\right] < \infty$ for some $r_0 > 0$. Let $(b_j)_{j\in\mathbb{Z}}$ be a real sequence such that $\sum_{j\in\mathbb{Z}} b_j^2 = 1$. Let $G$ be a function such that $E[G(\eta(t))] = 0$ and $E[G^2(\eta(t))] < \infty$. Define
\[
\eta(t) = \sum_{j\in\mathbb{Z}} b_j\,\varepsilon(t-j), \qquad \xi(t) = G(\eta(t)).
\]
Suppose that $G(x) = \sum_{k=m_G}^{\infty} c_k A_k(x)$, where $A_k$ denotes the $k$-th Appell polynomial associated with $\eta(t)$ and $m_G$ the corresponding Appell rank of $G$. Furthermore, suppose that the coefficients $c_k$ satisfy $\sum_{k=m_G}^{\infty} |c_k|\,k!\,d^k < \infty$ for some $d > 3(1+2\Delta)/r_0$.

(i) Assume that $m_G > 1$. Then, the conditions
\[
\sum_{k\in\mathbb{Z}}|\gamma_{\xi}(k)| < \infty \qquad\text{and}\qquad \sum_{k\in\mathbb{Z}}\left|\gamma_{\eta}^{m_G}(k)\right| < \infty
\]
are equivalent.

(ii) Assume that $\sum_{k\in\mathbb{Z}}|\gamma_{\xi}(k)| < \infty$ and $\sum_{k\in\mathbb{Z}}\gamma_{\xi}(k) > 0$. Then
\[
Z_{n,\xi}(\cdot) \xrightarrow{d} B(\cdot).
\]
Proof. The first statement follows from [Gir85, proposition 8 and lemma 5]. The second statement is identical to [Gir85, theorem 2].

Remark 1.3.23. Giraitis also proves a central limit theorem for the case $\gamma_{\xi}(k) \sim \frac{L(|k|)}{k}$, see [Gir85, theorem 3].

Theorem 1.3.24. Let $\{\varepsilon(t),\ t\in\mathbb{Z}\}$ be a sequence of iid random variables such that $E[\varepsilon(t)] = 0$ and $E[\varepsilon^{2m}(t)] < \infty$ for $m\in\mathbb{N}\setminus\{0\}$. Let $b_j \sim L(j)j^{d-1}$, $j\in\mathbb{N}$, be as in 1.1.8 such that $\alpha = (1-2d) < \frac{1}{m}$ and define
\[
\eta(t) = \sum_{j=0}^{\infty} b_j\,\varepsilon(t-j).
\]
Denote by $A_m(x)$ the $m$-th Appell polynomial associated with the distribution of $\eta(t)$ and define $\xi(t) = A_m(\eta(t))$. Then
\[
D^2_{\xi}(n) \sim c_m n^{2-m\alpha}L^{2m}(n)
\]
for some constant $c_m > 0$ and
\[
Z_{n,\xi}(\cdot) \xrightarrow{D} H^{H}_{m}(\cdot) \qquad\text{with } H = 1-\frac{\alpha m}{2}.
\]
Proof. This result is taken from [AT87, theorem 2].

1.3.25. Clearly, theorems 1.3.21 and 1.3.22 can be regarded as the counterpart to theorem 1.3.10, whereas theorem 1.3.24 generalises theorem 1.3.12 for polynomials. In order to obtain theorem 1.3.24 for a general function $G$, let $\{\eta(t),\ t\in\mathbb{Z}\}$ be as in theorem 1.3.24 and assume that $G$ admits an expansion as in theorem 1.3.22. Suppose there exists $m^*\in\mathbb{N}$ such that $\alpha m^* < 1$ and $\alpha(m^*+1) > 1$. Then, we have
\[
G(x) = \sum_{k=m_G}^{m^*} c_k A_k(x) + \sum_{k=m^*+1}^{\infty} c_k A_k(x) = G_1(x) + G_2(x).
\]
Theorem 1.3.24 implies
\[
Z_{n,G_1(\eta)}(\cdot) \xrightarrow{D} H^{H}_{m_G}(\cdot).
\]
Moreover, by theorem 1.3.22, $\operatorname{Var}\left[\sum_{t=1}^{n} G_2(\eta(t))\right] = O(n)$. Thus, we deduce
\[
Z_{n,G(\eta)}(\cdot) \xrightarrow{d} H^{H}_{m_G}(\cdot).
\]
If $\alpha m^* < 1$ and $\alpha(m^*+1) = 1$, we may apply [GS85, theorem 3] instead. Note also the result of Surgailis [Sur82, theorem 2].

2 Fractional calculus

2.1 Fractional integrals on a compact interval