
4.3.2 Asymptotic discretization error of the one-scale subsampling estimators

CHAPTER 4. 4.3. STABLE LIMIT THEOREM

Proposition 4.3.3. On the Assumptions 1, 4.1 and 4.3, the discretization error of the one-scale subsampling estimator with subsampling frequency $i_N$ converges stably in law to a centred mixed normal limit as $i_N \to \infty$, $N \to \infty$, $i_N/N^{\alpha} \to 0$ for every $\alpha > 2/3$:

$$\sqrt{\frac{N}{i_N}}\left(\frac{1}{i_N}\sum_{j=i}^{N}\big(X_{g_j}-X_{l_{j-i+1}}\big)\big(Y_{\gamma_j}-Y_{\lambda_{j-i+1}}\big)-\langle X,Y\rangle_T\right)\stackrel{st}{\longrightarrow}N(0,\mathrm{AVAR}_{dis,sub})\,,$$

with asymptotic variance

$$\mathrm{AVAR}_{dis,sub}=\frac{2}{3}\,T\int_0^T G'(t)\,(\sigma_t^X\sigma_t^Y)^2(1+\rho_t^2)\,dt\,.\qquad(4.19)$$

Proposition 4.3.4. On the Assumptions 1, 4.1 and 4.3, the discretization error of the generalized multiscale estimator with the noise-optimal weights given in (4.14) converges with rate $\sqrt{N/M_N}$ stably in law to a centred mixed Gaussian limit as $M_N\to\infty$, $N\to\infty$, $M_N/N^{\alpha}\to 0$ for every $\alpha>2/3$:

$$\sqrt{\frac{N}{M_N}}\left(\sum_{i=1}^{M_N}\frac{\alpha_{i,M_N}^{opt}}{i}\sum_{j=i}^{N}\big(X_{g_j}-X_{l_{j-i+1}}\big)\big(Y_{\gamma_j}-Y_{\lambda_{j-i+1}}\big)-\langle X,Y\rangle_T\right)\stackrel{st}{\longrightarrow}N\big(0,\mathrm{AVAR}_{dis,multi}\big)\,,$$

with asymptotic variance

AVARmultidis = 26 35T

Z T 0

G0(t)(σXt σYt )2(1 +ρ2t)dt . (4.20) In the following, it is proved that the errors due to the lack of synchronicity for the one-scale subsampling estimator and the generalized multione-scale estimator are asymptotically negligible. Therefore, Propositions 4.3.3 and 4.3.4 above are implied by the corresponding limit theorems for the discretization errors of the closest synchronous approximation given as Propositions 4.3.5 and 4.3.6 in the next paragraph.

Note that $M_N$ is chosen of order $\sqrt{N}$ in Theorem 4.1, but the condition $M_N=\mathcal{O}(N^{2/3})$ is needed here as a regularity assumption.
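For intuition, the one-scale construction can be sketched numerically. The sketch below is a simplification under stated assumptions: it works on a synchronous grid (the thesis' asynchronous sampling times $g_j, l_j, \gamma_j, \lambda_j$ are not reproduced here), and the function name is illustrative.

```python
import numpy as np

def subsampling_cov(X, Y, i):
    """One-scale subsampling estimator of <X,Y>_T on a synchronous grid:
    (1/i) * sum_{j >= i} (X_j - X_{j-i}) * (Y_j - Y_{j-i})."""
    dX = X[i:] - X[:-i]
    dY = Y[i:] - Y[:-i]
    return float(np.sum(dX * dY)) / i

# Simulated correlated Brownian motions with <X,Y>_T = rho * T
rng = np.random.default_rng(0)
N, T, rho = 20000, 1.0, 0.5
dW = rng.normal(0.0, np.sqrt(T / N), size=(2, N))
X = np.concatenate([[0.0], np.cumsum(dW[0])])
Y = np.concatenate([[0.0], np.cumsum(rho * dW[0] + np.sqrt(1 - rho**2) * dW[1])])
est = subsampling_cov(X, Y, i=30)
```

On this simulated path the estimate is close to $\rho T = 0.5$, with a discretization error whose order matches the $\sqrt{i_N/N}$ rate of the propositions.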

Discretization error of the closest synchronous approximation

Proposition 4.3.5. Suppose that Assumptions 1, 4.1 and 4.3 hold true. As $i_N\to\infty$, $N\to\infty$, $i_N/N^{\alpha}\to 0$ for every $\alpha>2/3$, the discretization error of the closest synchronous approximation for the one-scale subsampling estimator with subsampling frequency $i_N$ converges stably in law to a centred mixed normal limit:

$$\sqrt{\frac{N}{i_N}}\left(\frac{1}{i_N}\sum_{j=i}^{N}\big(X_{T_j}-X_{T_{j-i}}\big)\big(Y_{T_j}-Y_{T_{j-i}}\big)-\langle X,Y\rangle_T\right)\stackrel{st}{\longrightarrow}N(0,\mathrm{AVAR}_{syn,sub})\,,$$

with asymptotic variance

$$\mathrm{AVAR}_{syn,sub}=\frac{2}{3}\,T\int_0^T G'(t)\,(\sigma_t^X\sigma_t^Y)^2(1+\rho_t^2)\,dt\,.$$


Proposition 4.3.6. Let Assumptions 1, 4.1 and 4.3 hold true. For the discretization error of the closest synchronous approximation of the generalized multiscale estimator with the noise-optimal weights given in (4.14), the following stable central limit theorem holds true as $M_N\to\infty$, $N\to\infty$, $M_N/N^{\alpha}\to 0$ for every $\alpha>2/3$:

$$\sqrt{\frac{N}{M_N}}\left(\sum_{i=1}^{M_N}\frac{\alpha_{i,M_N}^{opt}}{i}\sum_{j=i}^{N}\big(X_{T_j}-X_{T_{j-i}}\big)\big(Y_{T_j}-Y_{T_{j-i}}\big)-\langle X,Y\rangle_T\right)\stackrel{st}{\longrightarrow}N\big(0,\mathrm{AVAR}_{syn,multi}\big)\,.$$

Note that it suffices to prove the foregoing limit theorems for the zero-drift case. Since our limit theorems are stable, asymptotic mixed normality is assured to hold for the general setting under Assumption 1. This has been emphasized in Subsection 1.1.2 and at the end of Section 1.2. Denote by $L_t=\int_0^t\sigma_s^X\,dW_s^X$ and $M_t=\int_0^t\sigma_s^Y\,dW_s^Y$ the continuous martingales that represent the efficient processes under the equivalent martingale measure.

As before in Section 3.3, the asymptotic mixed normality follows as the marginal distribution of a limiting time-changed Brownian motion, which is proven to be the weak limit of the process corresponding to the discretization error.

We start with the discretization error of the closest synchronous approximation of a one-scale subsampling estimator. Recall the illustration of the discretization error in (4.11).

Proposition 4.3.7. On the same assumptions as in Proposition 4.3.5, the continuous martingale corresponding to the discretization error converges stably in law to a limiting time-changed Brownian motion,

$$D_t^N\stackrel{st}{\longrightarrow}\int_0^t\sqrt{v_s^D}\,dW_s\,,$$

where $W$ is independent of $\mathcal{F}$ and

$$v_s^D=\frac{2}{3}\,G'(s)\,(\sigma_s^X\sigma_s^Y)^2(1+\rho_s^2)\,.$$


Proof of Proposition 4.3.7:

In the following proof, the subscript of the subsampling frequency is omitted, $C$ denotes a generic constant, and $\delta_N=\sup_{i\in\{1,\dots,N\}}(T_i-T_{i-1})$.

Jacod's Theorem 1.6 is applied in the manner of the proof of Proposition 3.3.2. Note that the conditions (1.3a), (1.3b) and the convergence of the quadratic variation process correspond to the conditions of the discrete-time version of the theorem, except that we do not need to prove a Lindeberg-type condition when working with the complemented continuous-time martingales.

Calculating the quadratic variation of $D_t^N$ yields $\langle D^N\rangle_t$:

In the first step, cross terms of the inner sums have been neglected. In the following step, the squared increments of $L$ and $M$ and the increments of the product $L\cdot M$ in these inner sums are substituted by the increments of the quadratic (co-)variation processes.

Next, on each block $(T_{k-1},\dots,T_{(k-i)\vee 0})$ the increments of the form $\int_{T_{k-l-1}}^{T_{k-l}} f(t)\,dt$ with continuous functions $f$, for $l=1,\dots,k\wedge i$, are approximated by $\Delta T_k\,f(T_{k-1})$. The error induced by the first two steps is shown to be asymptotically negligible in Lemmas 4.3.8 and 4.3.9 with similar methods as already needed for the proof of Theorem 3.1. The crucial blockwise approximation that leads to the simple closed-form expression for the asymptotic quadratic variation $\langle D^N\rangle$ is treated in Lemma 4.3.10. Here the concept of a time-changed quadratic variation of time comes into play and is needed to prove this approximation. It has been further used that $i^{-1}\sum_{l=1}^{i}(1-(l/i))^2=1/3+\mathcal{O}(i^{-1})$, and the convergence in probability follows by Assumption 4.3 and the convergence of the Riemann sums to the integral.
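The weight identity used for the limit of $\langle D^N\rangle$ can be checked directly; a quick numerical sanity check in plain Python, where the exact closed form $1/3-1/(2i)+1/(6i^2)$ is obtained by expanding the sum:

```python
def avg_sq_weight(i):
    # (1/i) * sum_{l=1}^{i} (1 - l/i)^2
    return sum((1.0 - l / i) ** 2 for l in range(1, i + 1)) / i

# closed form: 1/3 - 1/(2 i) + 1/(6 i^2), hence 1/3 + O(1/i)
vals = {i: avg_sq_weight(i) for i in (10, 100, 1000)}
```

The values approach $1/3$ at rate $1/i$, which is the remainder order used in the proof.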

Lemma 4.3.8. On the same assumptions as in Proposition 4.3.5, it holds true that

Proof. All three terms have an expectation equal to zero. The asymptotic orders of the three terms are deduced following the same principles, and we restrict ourselves to the proof of the third equation. The left-hand side can be written as

$$2\sum_{T_k\le t}\Delta\langle L,M\rangle_{T_k}\,\xi_k$$

with centred $i$-dependent random variables $\xi_k$. Since Brownian increments over disjoint time intervals are independent, applying the Itô isometry the variances $\mathrm{Var}(\xi_k)$, $k=1,\dots,N$, are bounded by

Thus, the second moment of the sum above is bounded by

Lemma 4.3.9. On the same assumptions as in Proposition 4.3.5, it holds true that

Proof. The left-hand sides are centred due to the Itô isometry. The three equations can be proved analogously, and we restrict ourselves to proving the first one. Denote by $\psi_k$ the addends, which are $i$-dependent centred random variables with finite variances $\mathrm{E}\psi_k^2$. It has been used that the cross terms are uncorrelated by the Itô isometry. For fixed $k$ with $(N-i)>k>(i+1)$, $\psi_k$ is correlated with $\psi_r$, $r\in\{k-i,\dots,k+i\}$, and the sum of all correlations ($r\neq k$) is bounded using the asymptotic order of the variances $\mathrm{E}\psi_k^2$ calculated above and the Cauchy–Schwarz inequality. We conclude the order of the second moment and, hence, the term converges to zero in probability.

Lemma 4.3.10. On the same assumptions as in Proposition 4.3.5, it holds true that the terms

converge to zero in probability.

Proof. The proof that the approximation errors of the type above converge to zero in probability is based on the concept of a time-change in the asymptotic quadratic variation of refresh times from Assumption 3.1 which has been presented and discussed in the last section as part of the preparations to establish a central limit theorem. We will suppose without loss of generality that the sampling design of the closest synchronous approximation satisfies (4.13).


First, an application of the mean value theorem, using that the volatilities are continuous on $[0,T]$ by Assumption 1, yields an approximation error which is $\mathcal{O}_{a.s.}(i\delta_N)$. The asymptotic orders even hold almost surely, which is denoted by $\mathcal{O}_{a.s.}$. Since $\rho$ is uniformly continuous as well, analogous conclusions hold true for the two other terms.

Using the Cauchy–Schwarz inequality and (4.13), we obtain a further approximation that holds, also almost surely, due to (4.13) and the Cauchy–Schwarz inequality. The last two approximations hold analogously for the two other terms. The four preceding approximation errors that were shown to converge to zero almost surely imply the statement of the lemma.

We proceed by proving that the quadratic covariations $\langle D^N,L\rangle_t$ and $\langle D^N,M\rangle_t$ converge to zero in probability for all $t\in[0,T]$.

The term has an expectation equal to zero for all $t\in[0,T]$, and the second moment is bounded above by the order that follows from the evaluation of the second moment carried out for the calculation of $\langle D^N\rangle_t$ before. For this reason the quadratic covariation $\langle D^N,L\rangle$ converges to zero in probability on $[0,T]$. It can be deduced directly that $\langle D^N,M\rangle_t=o_p(1)$, which is concluded analogously. For every bounded $(\mathcal{F}_t)$-martingale $M^{\perp}$ with $\langle M,M^{\perp}\rangle\equiv 0$ it also holds true that $\langle D^N,M^{\perp}\rangle_t\stackrel{p}{\longrightarrow}0$ for all $t\in[0,T]$. An application of Jacod's Theorem 1.6 in the same manner as for the proof of Theorem 3.1 completes the proof of Proposition 4.3.7.

The stable central limit theorem for the one-scale subsampling estimator in Proposition 4.3.5 is implied by the marginal distribution for $t=T$.

Proposition 4.3.11. On the same assumptions as in Proposition 4.3.6, the continuous martingale corresponding to the discretization error of the generalized multiscale estimator converges stably in law, as $M_N\to\infty$, $N\to\infty$, $M_N/N^{\alpha}\to 0$ for every $\alpha>2/3$, to a limiting time-changed Brownian motion.

Proof of Proposition 4.3.11:

The discretization error of the generalized multiscale estimator, calculated with the closest synchronous approximation under the equivalent martingale measure where the drift terms equal zero, equals the weighted sum of $M_N\to\infty$ discretization errors of the type considered in Proposition 4.3.5, because $\sum_{i=1}^{M_N}\alpha_{i,M_N}^{opt}=1$. Note that all approximation errors in the preceding proof of Proposition 4.3.7 converge to zero in probability as long as $N\to\infty$, $i/N^{\alpha}\to 0$ for every $\alpha>2/3$.
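The structure of the multiscale estimator as a weighted sum of one-scale subsampling estimators can be sketched as follows. This is an illustrative simplification on a synchronous grid; the Zhang (2006)-type weights below are a hypothetical stand-in for the noise-optimal weights (4.14), which this excerpt does not reproduce.

```python
import numpy as np

def one_scale(X, Y, i):
    # one-scale subsampling estimator on a synchronous grid
    dX = X[i:] - X[:-i]
    dY = Y[i:] - Y[:-i]
    return float(np.sum(dX * dY)) / i

def multiscale(X, Y, M):
    # weights summing to one; Zhang (2006)-type form as a stand-in for (4.14)
    alpha = np.array([6.0 * i * (2 * i - M - 1) / (M**3 - M)
                      for i in range(1, M + 1)])
    return float(sum(a * one_scale(X, Y, i)
                     for a, i in zip(alpha, range(1, M + 1))))

rng = np.random.default_rng(2)
N, T, rho, M = 20000, 1.0, 0.5, 100
dW = rng.normal(0.0, np.sqrt(T / N), size=(2, N))
X = np.concatenate([[0.0], np.cumsum(dW[0])])
Y = np.concatenate([[0.0], np.cumsum(rho * dW[0] + np.sqrt(1 - rho**2) * dW[1])])
est = multiscale(X, Y, M)
```

Here $M=100=\sqrt{N}$ mirrors the order of $M_N$ from Theorem 4.1, and the estimate is close to $\rho T=0.5$.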

As outlined in the last section we begin with the proof of a multivariate stable central limit theorem for a finite-dimensional vector.

Lemma 4.3.12. Consider the sequence of $K$-dimensional vectors $\mathbf{D}^N=\big(D^{i_N^1},\dots,D^{i_N^K}\big)$

with a $K$-dimensional Brownian motion $\mathbf{W}$ independent of $\mathcal{F}$ and a predictable process


For $\mathbf{D}_T^N$ we obtain the following multivariate stable central limit theorem:

For one single fixed $k$, this type of martingale, corresponding to the discretization error of the closest synchronous approximation of a one-scale subsampling estimator with subsampling frequency $i_N^k$, has been considered in Proposition 4.3.7. We already know the limit of the quadratic variation from the proof of this proposition, where $q_k$ additionally appears in the stochastic limit since $M_N T$ is the denominator in the root of the factor instead of $i_N^k T$.

The limit of the quadratic covariations $\langle D^{i_N^m},D^{i_N^k}\rangle$ is derived using the same approximations as for the quadratic variation in the proof of Proposition 4.3.7. It follows that


$$\times\sum_{l=1}^{\min(i_N^m,\,i_N^k,\,r)}\left(1-\frac{l}{i_N^m}\right)\left(1-\frac{l}{i_N^k}\right)+o_p(1)\;\stackrel{p}{\longrightarrow}\;2T\int_0^t(\rho_s^2+1)(\sigma_s^X\sigma_s^Y)^2\,G'(s)\,\frac{\min(q_m,q_k)}{6}\left(3-\frac{\min(q_m,q_k)}{\max(q_m,q_k)}\right)ds\,,$$

since $\sum_{l=1}^{m}(1-(l/m))(1-(l/M))=\frac{m}{2}-\frac{m^2}{6M}-\frac{1}{2}+\frac{1}{6M}$ for $m\le M\in\mathbb{N}$. The convergence in probability of the quadratic (co-)variations and a tightness criterion on $\langle\mathbf{D}^N,\mathbf{D}^N\rangle$, which is fulfilled here, are enough to establish weak convergence of the vector (cf. Corollary VI.6.30 in Jacod and Shiryaev [2003]).
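The closed form of the cross-weight sum, $m/2-m^2/(6M)-1/2+1/(6M)$ for $m\le M$, is elementary to verify by expanding the sum; a quick numerical check in plain Python (helper names are illustrative):

```python
def cross_weight_sum(m, M):
    # sum_{l=1}^{m} (1 - l/m)(1 - l/M), for m <= M
    return sum((1.0 - l / m) * (1.0 - l / M) for l in range(1, m + 1))

def closed_form(m, M):
    # m/2 - m^2/(6M) - 1/2 + 1/(6M)
    return m / 2 - m * m / (6 * M) - 0.5 + 1 / (6 * M)

pairs = [(2, 3), (10, 10), (25, 100), (400, 1000)]
checks = [abs(cross_weight_sum(m, M) - closed_form(m, M)) for m, M in pairs]
```

For $m=M$ this reduces to the one-scale identity $M\,(1/3-1/(2M)+1/(6M^2))$ used earlier in the proof of Proposition 4.3.7.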

Corollary 1.2.4 enables us to prove the stronger result of stable weak convergence of the vector, provided we can verify conditions (1.6a) and (1.6b). Since the reference continuous martingales for all entries of the vector $\mathbf{D}^N$ are $L$ and $M$, it is sufficient to show that

$$\langle\mathbf{D}^N,\mathbf{L}\rangle_t\stackrel{p}{\longrightarrow}0\,,\qquad\langle\mathbf{D}^N,\mathbf{M}\rangle_t\stackrel{p}{\longrightarrow}0\,,\qquad\forall\,t\in[0,T]\,,$$

where $\mathbf{L}$ denotes the vector with entries $L_j=L$, $j=1,\dots,K$, and $\mathbf{M}$ the vector with $M_j=M$, $j=1,\dots,K$, respectively, and

$$\langle\mathbf{D}^N,\mathbf{L}^{\perp}\rangle_t\stackrel{p}{\longrightarrow}0\,,\qquad\langle\mathbf{D}^N,\mathbf{M}^{\perp}\rangle_t\stackrel{p}{\longrightarrow}0\,,\qquad\forall\,t\in[0,T]\,,$$

where $\mathbf{L}^{\perp}$ and $\mathbf{M}^{\perp}$ are bounded $(\mathcal{F}_t)$-adapted martingales orthogonal to $L$ and $M$, respectively. The componentwise proof of the conditions above is the same as for the univariate case in the proof of Proposition 4.3.7. We conclude that conditions (1.5), (1.6a) and (1.6b) are fulfilled, and Corollary 1.2.4 yields the result of the lemma. The asymptotic distribution of the vector is described by a limiting Brownian motion on $[0,T]$, and the marginal distribution at time $T$ by a mixed Gaussian limit, where the normal distribution is defined, as for all componentwise marginals, on an orthogonal extension of the original underlying probability space.

From the preceding multivariate limit theorem, the Cramér-Wold device allows us to conclude the weak convergence of all one-dimensional linear combinations of the transformed discretization errors of a finite collection of one-scale subsampling estimators. In particular, a weighted sum of the type of our generalized multiscale approach converges weakly.

For an asymptotically $N(0,\Sigma)$-distributed random vector, the sum of all components is asymptotically normally distributed with variance $\sum_{i,j}\Sigma_{ij}$, by the Cramér-Wold device and the normality of any linear combination of components of a multivariate normal distribution (see e.g. pp. 516-517 in Rao [2001]).
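The variance rule for the sum of components of a multivariate normal vector can be illustrated by a short Monte Carlo check (the covariance matrix here is an arbitrary illustrative choice, not from the thesis):

```python
import numpy as np

# Variance of the sum of components of an N(0, Sigma) vector equals the
# sum of all covariance entries, Var(sum_i Z_i) = sum_{i,j} Sigma_ij.
Sigma = np.array([[2.0, 0.5, 0.1],
                  [0.5, 1.0, 0.3],
                  [0.1, 0.3, 1.5]])
rng = np.random.default_rng(1)
Z = rng.multivariate_normal(np.zeros(3), Sigma, size=200000)
S = Z.sum(axis=1)
sample_var = S.var()
total = Sigma.sum()   # sum of all covariance entries
```

With $2\cdot 10^5$ draws the sample variance of the component sum matches $\sum_{i,j}\Sigma_{ij}=6.3$ up to Monte Carlo error.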

The asymptotic variance in Proposition 4.3.6 is deduced from the multivariate limit and

$$\sum_{k,l}\Sigma_{k,l}=2\sum_{k=1}^{M_N}\sum_{l=1}^{k}\frac{l}{6M_N}\left(3-\frac{l}{k}\right)\alpha_{k,M_N}^{opt}\,\alpha_{l,M_N}^{opt}+o(1)=\frac{13}{35}+o(1)$$

with the weights (4.14).
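That the weighted double sum approaches $13/35$ can be checked numerically. The Zhang (2006)-type weights below are a hypothetical stand-in for the noise-optimal weights (4.14), which this excerpt does not reproduce; they sum to one exactly, which the `Fraction` arithmetic verifies.

```python
from fractions import Fraction

def zhang_weights(M):
    # Zhang (2006)-type multiscale weights, a stand-in for (4.14);
    # alpha_i = 6 i (2i - M - 1) / (M^3 - M), summing to one exactly
    return [Fraction(6 * i * (2 * i - M - 1), M**3 - M) for i in range(1, M + 1)]

def weighted_cov_sum(M):
    # 2 * sum_{k=1}^{M} sum_{l=1}^{k} (l / (6M)) (3 - l/k) a_k a_l
    a = [float(w) for w in zhang_weights(M)]
    s = 0.0
    for k in range(1, M + 1):
        for l in range(1, k + 1):
            s += (l / (6.0 * M)) * (3.0 - l / k) * a[k - 1] * a[l - 1]
    return 2.0 * s

M = 400
val = weighted_cov_sum(M)   # approaches 13/35 as M grows
```

Doubling the factor $13/35$ by the $(1+\rho_t^2)$-structure of the integrand recovers the constant $26/35$ in (4.20).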

For the completion of the proof of Propositions 4.3.11 and 4.3.6, it remains to extend the result to asymptotically infinitely many addends. This part of the proof uses similar methods as in Zhang [2006], where a central limit theorem for a multiscale estimator of the integrated volatility in the univariate setting is proved.

Let $0<\delta<1$ be an arbitrarily chosen real number and $\alpha=1-\delta/2$. The ansatz is to approximate an infinite sum by finite sums incorporating $J:=\min\{n\in\mathbb{N}\,|\,2\alpha^{n-1}\le\delta^2\}$ addends. For a subsampling frequency $i_N^k$ choose $\tilde\imath_N^k\in\{1,\dots,J\}$ such that $i_N^k/M_N\in(\alpha^{J-\tilde\imath_N^k-1},\alpha^{J-\tilde\imath_N^k})$. For our generalized multiscale estimator the subsampling frequencies follow the regular scheme $i_N^k=k\in\{1,\dots,M_N\}$.

Now, let $i_N\in\{1,\dots,M_N\}$ be the subsampling frequency for which the variance of the approximation errors for the one-scale estimators is maximized for fixed $t$:

$$i_N:=\operatorname*{arg\,max}_{i\in\{1,\dots,M_N\}}\,\dots$$

The multivariate limit theorem above yields


Since every subsequence has a subsequence along which $i_n/M_n$ converges, the result follows by letting $\delta\to 0$. This completes the proof of Propositions 4.3.11 and 4.3.6. The stability conditions (1.3a) and (1.3b) for the sum follow directly from the corresponding conditions that have been proved to show Lemma 4.3.12.


Discretization error due to the lack of synchronicity

We take up the illustration (4.12) of the leading term of the error due to interpolations at the times $T_i^{(N)}$, $i=0,\dots,N$, in the discretization part of the estimation error of a one-scale subsampling estimator. We start with analyzing the error for the one-scale estimator (4.3) and then consider the multiscale extension (4.2) from Section 4.2.

Denote by $L$ and $M$ the transformed efficient processes under the equivalent martingale measure, where drifts are zero as in the foregoing proof. Define the corresponding transformed term

The outer sum above is centred and a sum of uncorrelated random variables, since previous- and next-tick interpolations of one process do not overlap and, furthermore, $T_i^{(N)}\le g_i^{(N)}\le T_{i+1}^{(N)}$, $i=0,\dots,N$, holds. This will be thoroughly proved in the subsequent Proposition 4.3.14. First, we show that the addends including drift terms in (4.12) are of smaller order than the addends only including Brownian increments in the above term.

Lemma 4.3.13. On the Assumptions 1 and 4.1, the addends including drift terms in (4.12) are asymptotically negligible.

Proof. Products of two drift integrals are bounded by a constant multiplied with the time instant and have second moments at most of order $i^2\delta_N^4$. Therefore, the whole term has an expectation of order $N^{-1}$, since the mixed addends with Brownian increments are centred. Terms of the type $\int\sigma_t^X\,dB_t^X\int\mu_t^Y\,dt$ over the corresponding blocks have second moments at most of order $i^2\delta_N^3$, and terms of the type $\int\mu_t^X\,dt\int\sigma_t^Y\,dB_t^Y$ of the same order. The variance of the whole term is hence at most of order $i^{-2}N^2\cdot i^2\delta_N^3=N^2\delta_N^3$.


These findings imply the statement of the lemma.

Proposition 4.3.14. On the Assumptions 1 and 4.1, the term $A_T^N$ is the endpoint of a discrete martingale with respect to the filtration $\mathcal{F}_{j,N}=\mathcal{F}_{T_{j+1}^{(N)}}$. It holds true that $A_T^N=\mathcal{O}_p\big((iN)^{-1/2}\big)$.

Proof. Each addend is $\mathcal{F}_{j,N}$-measurable and the conditional expectation equals zero:

The martingale is centred, since the addends incorporate products of Brownian increments over disjoint time intervals. The conditional variance yields



The variance of the term is of order $(iN)^{-1}$, which can be proved by taking the expectation of the conditional variance given above and an upper bound of the second moment. We have used the Itô isometry in the above calculation. The asymptotic orders of the addends follow from taking the expectations using the Itô isometry and analyzing the differences of the addends minus their expectations, which converge to zero at a faster rate. This is analogous to the proofs for the discretization error of the closest synchronous approximation and we forgo a more detailed computation here.

Denote by $A_T^{N,i}$ the error due to non-synchronicity and interpolations for a fixed subsampling frequency $i=1,\dots,M_N$ in the following. The error due to asynchronicity of the generalized multiscale estimator (4.2) equals the weighted sum $\sum_{i=1}^{M_N}\alpha_{i,M_N}^{opt}A_T^{N,i}$. We obtain directly that drift terms are asymptotically negligible again. The term $\sum_{i=1}^{M_N}\alpha_{i,M_N}^{opt}A_T^{N,i}$ has expectation zero and the variance is of order

Thus, the error due to interpolations is of smaller asymptotic order than the discretization error of the closest synchronous approximation and asymptotically negligible.
