

\[
\;= \frac{x^{\tilde\gamma} - 1}{\tilde\gamma}, \qquad (3.38)
\]

where for ˜γ = 0 the right-hand side is interpreted as log(x).

(iii) There exists some positive function f such that
\[
\lim_{s \to x_F} \frac{1 - F(s + x f(s))}{1 - F(s)} = (1 + \tilde\gamma x)^{-1/\tilde\gamma}, \qquad (3.39)
\]
where 1 + ˜γx > 0.

3.5 Empirical versions of the MDA-conditions

In the last section it was shown that 1 − F ∈ RV−1/γ implies that the distribution of relative excesses above some threshold t converges to a Pareto distribution Fpa,γ as t tends to infinity. In particular, the parameter of the Pareto distribution corresponds to the tail index γ. Starting from an i.i.d. sequence of random variables X1, . . . , Xn with common distribution F ∈ MDA(Hγ), we seek to construct an empirical version of the MDA-condition 1 − F ∈ RV−1/γ. Therefore, we have to replace the threshold t with some empirical quantity which converges to infinity as the sample size grows. One obvious choice would be t = Xn,n, since we know that Xn,n →P ∞. However, since by definition there is no observation beyond Xn,n, we cannot construct an empirical version of the MDA-condition using this approach. Instead, defining k := k(n) such that k → ∞ but k/n → 0 as n → ∞ and setting t = Xn−k,n yields Xn−k,n →P ∞. Moreover, using this approach we obtain the k relative excesses Xn−k+1,n/Xn−k,n, . . . , Xn,n/Xn−k,n above Xn−k,n from a sample of i.i.d. random variables X1, . . . , Xn with common distribution F ∈ MDA(Hγ). Since k → ∞ as n → ∞, this choice enables us to establish asymptotic results as the sample size grows. We first obtain a result for relative excesses above U(n/k), the theoretical counterpart of Xn−k,n, and then state a complete empirical version of 1 − F ∈ RV−1/γ.
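As a small illustration of this construction, the following sketch simulates an exact Pareto sample (a hypothetical choice; any F ∈ MDA(Hγ) with γ > 0 would do), picks an intermediate sequence k(n), and extracts the k relative excesses above Xn−k,n. All names and parameter values are illustrative assumptions, not part of the text.

```python
import numpy as np

# Minimal sketch (illustrative, not from the text): choose an intermediate k(n)
# and extract the k relative excesses X_{n-k+1,n}/X_{n-k,n}, ..., X_{n,n}/X_{n-k,n}.
rng = np.random.default_rng(0)

gamma = 0.5                                     # assumed tail index
n = 100_000
sample = rng.pareto(1.0 / gamma, size=n) + 1.0  # exact Pareto: 1 - F(x) = x**(-1/gamma), x >= 1

k = int(n ** 0.6)                               # intermediate sequence: k -> inf, k/n -> 0
order_stats = np.sort(sample)                   # X_{1,n} <= ... <= X_{n,n}
threshold = order_stats[n - k - 1]              # X_{n-k,n}, the random threshold
relative_excesses = order_stats[n - k:] / threshold  # the k relative excesses (all >= 1)

print(k, threshold, relative_excesses.min())
```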

Theorem 3.6 (Empirical version of the MDA(Hγ)-condition 1 − F ∈ RV−1/γ).

Suppose X1, . . . , Xn are i.i.d. random variables with common distribution F ∈ MDA(Hγ) and let Fn be the corresponding empirical distribution function. Moreover, let X1,n ≤ · · · ≤ Xn,n be the pertaining order statistics and let U be the tail quantile function of F. Then, considering the empirical distribution of relative excesses above U(n/k), we have for t = U(n/k)
\[
F_{n,t}(x) := \frac{F_n(tx) - F_n(t)}{1 - F_n(t)} \xrightarrow{P} 1 - x^{-1/\gamma} \quad \text{for any } x \ge 1,
\]
as n → ∞ with k → ∞ but k/n → 0. Replacing U(n/k) by its empirical counterpart Xn−k,n results in
\[
F_{n,X_{n-k,n}}(x) := \frac{F_n(X_{n-k,n}\,x) - F_n(X_{n-k,n})}{1 - F_n(X_{n-k,n})} \xrightarrow{P} 1 - x^{-1/\gamma}, \qquad (3.40)
\]
for any x ≥ 1 as n → ∞ with k → ∞ but k/n → 0. We call (3.40) the empirical version of the MDA(Hγ)-condition 1 − F ∈ RV−1/γ. Note that Fn,Xn−k,n is the empirical distribution of relative excesses above the random threshold Xn−k,n.

Remark. To highlight the analogy between (3.40) and 1 − F ∈ RV−1/γ, observe that (3.40) is equivalent to
\[
\frac{1 - F_n(X_{n-k,n}\,x)}{1 - F_n(X_{n-k,n})} \xrightarrow{P} x^{-1/\gamma} \quad \text{for any } x \ge 1,
\]
as n → ∞ with k → ∞ but k/n → 0.
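A quick numerical check of this ratio form could look as follows; it is an illustrative sketch only, and the sample model, sample size and choice of k are assumptions rather than part of the text.

```python
import numpy as np

# Illustrative check of the ratio form of (3.40):
# (1 - F_n(X_{n-k,n} x)) / (1 - F_n(X_{n-k,n}))  should be close to  x**(-1/gamma).
rng = np.random.default_rng(1)

gamma = 0.5
n, k = 200_000, 2_000
sample = rng.pareto(1.0 / gamma, size=n) + 1.0   # assumed exact Pareto sample
threshold = np.sort(sample)[n - k - 1]           # X_{n-k,n}

def empirical_survival(t):
    """1 - F_n(t): fraction of observations exceeding t."""
    return np.mean(sample > t)

for x in (1.5, 2.0, 4.0):
    ratio = empirical_survival(threshold * x) / empirical_survival(threshold)
    print(x, round(ratio, 4), round(x ** (-1.0 / gamma), 4))   # empirical vs. Pareto limit
```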

For the proof of the pointwise convergence in (3.40) we follow the approach stated in Resnick (2007), using the tail empirical measure and the concept of weak convergence of random measures. Therefore, we define the so-called tail measure by

\[
\nu_\gamma : \mathcal{E} \to \mathbb{R}_+, \qquad (x,\infty] \mapsto \nu_\gamma((x,\infty]) := x^{-1/\gamma},
\]
where 𝓔 is the Borel σ-field on E := (0,∞]. Moreover, let M+(E) be the space of nonnegative Radon measures on E endowed with the vague topology. Vague convergence in M+(E), which is consistent with the metric generating the vague topology, is defined as follows.

Definition 3.5 (Vague convergence of nonnegative Radon measures).

Let
\[
C_K^+(E) := \{ f : E \to \mathbb{R}_+ : f \text{ is continuous with compact support} \}.
\]
Then, given a sequence {µn, n ≥ 0} with µi ∈ M+(E), µn is said to converge vaguely to µ0 if for all f ∈ CK+(E)
\[
\mu_n(f) := \int_E f(x)\,\mu_n(dx) \to \mu_0(f) := \int_E f(x)\,\mu_0(dx) \quad \text{as } n \to \infty.
\]

Note that, endowed with the vague metric, M+(E) is a complete, separable metric space. Moreover, for a sequence of i.i.d. random variables X1, . . . , Xn we define the tail empirical measure νn,k : 𝓔 → R+ by
\[
\nu_{n,k}(\cdot) := \frac{1}{k}\sum_{i=1}^{n} 1\bigl\{ X_i/U(n/k) \in \cdot\, \bigr\}, \qquad \text{where } k = k(n) < n.
\]
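For a model in which U is known explicitly, νn,k can be evaluated directly from simulated data. The following sketch (illustrative assumptions: exact Pareto model, so that U(t) = t^γ) compares νn,k((x,∞]) with the tail measure νγ((x,∞]) = x−1/γ.

```python
import numpy as np

# Illustrative sketch: evaluate the tail empirical measure
#   nu_{n,k}((x, inf]) = (1/k) * #{ i : X_i / U(n/k) > x }
# for an exact Pareto model, where U(t) = t**gamma is known in closed form.
rng = np.random.default_rng(2)

gamma = 0.5
n, k = 500_000, 5_000
sample = rng.pareto(1.0 / gamma, size=n) + 1.0   # 1 - F(x) = x**(-1/gamma), x >= 1
u_nk = (n / k) ** gamma                          # U(n/k) for this model

def nu_nk(x):
    """Tail empirical measure of the set (x, inf]."""
    return np.sum(sample / u_nk > x) / k

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, round(nu_nk(x), 3), round(x ** (-1.0 / gamma), 3))   # compare with nu_gamma((x, inf])
```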

Theorem 3.7 (Consistency of the tail empirical measure, Resnick (2007)).

Assume X1, . . . , Xn are i.i.d. random variables with common distribution function F ∈ MDA(Hγ). This implies
\[
\mu_{n,k}(\cdot) := \frac{n}{k}\, P\!\left( \frac{X_1}{U(n/k)} \in \cdot\, \right) \xrightarrow{v} \nu_\gamma(\cdot) \qquad (3.41)
\]
in M+(0,∞] as n → ∞ and k(n) → ∞ but n/k(n) → ∞. Then, in M+(0,∞],
\[
\nu_{n,k} \Rightarrow \nu_\gamma,
\]
where νγ((x,∞]) = x−1/γ, x > 0, γ > 0, and ⇒ stands for weak convergence on a general metric space.

Observe that, since νγ is a non-random element of M+(0,∞], we can deduce consistency of νn,k from its weak convergence to νγ.
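As a simple worked example of the vague convergence assumption (3.41), consider the exact Pareto distribution with 1 − F(x) = x−1/γ for x ≥ 1, so that U(t) = t^γ. For this model the mean measure coincides with its limit for every n (for x > 0 with U(n/k)x ≥ 1):
\[
\mu_{n,k}((x,\infty]) = \frac{n}{k}\, P\!\left( \frac{X_1}{U(n/k)} > x \right)
= \frac{n}{k}\,\bigl( U(n/k)\,x \bigr)^{-1/\gamma}
= \frac{n}{k}\cdot\frac{k}{n}\, x^{-1/\gamma}
= x^{-1/\gamma} = \nu_\gamma((x,\infty]).
\]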

Proof of Theorem 3.7.

Since F ∈ MDA(Hγ) we have 1 − F ∈ RV−1/γ. This implies
\[
\lim_{n\to\infty} \frac{n}{k}\bigl( 1 - F(U(n/k)\,x) \bigr) = x^{-1/\gamma} \quad \text{for any } x > 0.
\]

To prove vague convergence in (3.41) we have to show that for any f ∈ CK+((0,∞])
\[
\mu_{n,k}(f) := \int f(x)\, \frac{n}{k}\, P\!\left( \frac{X_1}{U(n/k)} \in dx \right) \to \nu_\gamma(f).
\]

Since f has compact support, there exists some δ > 0 such that the support is contained in (δ,∞]. Define on (δ,∞]
\[
P_n(\cdot) := \frac{\mu_{n,k}(\cdot)}{\mu_{n,k}((\delta,\infty])} \quad \text{and} \quad P(\cdot) := \frac{\nu_\gamma(\cdot)}{\nu_\gamma((\delta,\infty])},
\]
so that Pn, P are probability measures on (δ,∞]. Then, for y ∈ (δ,∞],
\[
P_n((y,\infty]) \to P((y,\infty]) = \frac{y^{-1/\gamma}}{\delta^{-1/\gamma}}.
\]
That means that Pn converges weakly to P. Since P and Pn are probability measures and f is bounded and continuous on (δ,∞], we conclude
\[
P_n(f) \to P(f).
\]

That means
\[
\frac{\mu_{n,k}(f)}{\mu_{n,k}((\delta,\infty])} \to \frac{\nu_\gamma(f)}{\delta^{-1/\gamma}}.
\]
This implies µn,k(f) → νγ(f) and therefore µn,k →v νγ.

In order to prove weak convergence of the tail empirical measure, we have to show that for a sequence hj ∈ CK+((0,∞])
\[
\bigl( \nu_{n,k}(h_j),\, j \ge 1 \bigr) \Rightarrow \bigl( \nu_\gamma(h_j),\, j \ge 1 \bigr) \quad \text{as } n \to \infty \text{ in } \mathbb{R}^\infty,
\]
which is equivalent to
\[
\bigl( \nu_{n,k}(h_j),\, 1 \le j \le d \bigr) \Rightarrow \bigl( \nu_\gamma(h_j),\, 1 \le j \le d \bigr)
\]
as n → ∞ in Rd for any d. Weak convergence is equivalent to the convergence of the joint Laplace transform. Therefore, for λj > 0, j = 1, . . . , d, we have to show that

\[
E\!\left[ e^{-\sum_{j=1}^d \lambda_j \nu_{n,k}(h_j)} \right] \to E\!\left[ e^{-\sum_{j=1}^d \lambda_j \nu_\gamma(h_j)} \right].
\]
Now, since

\[
\sum_{j=1}^d \lambda_j \nu_{n,k}(h_j) = \sum_{j=1}^d \lambda_j \int_E h_j(x)\,\nu_{n,k}(dx) = \int_E \sum_{j=1}^d \lambda_j h_j(x)\,\nu_{n,k}(dx) = \nu_{n,k}\!\left( \sum_{j=1}^d \lambda_j h_j \right),
\]

it suffices to show that
\[
E\!\left[ e^{-\nu_{n,k}(h)} \right] \to E\!\left[ e^{-\nu_\gamma(h)} \right] = e^{-\nu_\gamma(h)}
\]
for h := λ1h1 + · · · + λdhd ∈ CK+((0,∞]). Since the Xi are i.i.d.,
\[
E\!\left[ e^{-\nu_{n,k}(h)} \right] = \left( E\!\left[ e^{-h(X_1/U(n/k))/k} \right] \right)^{n} = \left( 1 - \frac{1}{n}\, E\!\left[ n\left( 1 - e^{-h(X_1/U(n/k))/k} \right) \right] \right)^{n},
\]
and, since n(1 − e^{−h(x)/k}) is asymptotically equivalent to (n/k)h(x) uniformly in x, the inner expectation behaves like µn,k(h). This converges to νγ(h) according to the vague convergence assumption (3.41). Hence, taking lim sup and lim inf of the right-hand side shows
\[
E\!\left[ e^{-\nu_{n,k}(h)} \right] \to e^{-\nu_\gamma(h)} = E\!\left[ e^{-\nu_\gamma(h)} \right],
\]
which proves νn,k ⇒ νγ.

Since the tail empirical measure still contains the unknown tail quantile function U, we define
\[
\hat\nu_{n,k}(\cdot) := \frac{1}{k}\sum_{i=1}^{n} 1\bigl\{ X_i/X_{n-k,n} \in \cdot\, \bigr\}
\]
in M+(0,∞] as an estimator of νn,k.

Lemma 3.1 (Consistency of ˆνn,k, Resnick (2007)).

In M+(0,∞] we have
\[
\hat\nu_{n,k} \Rightarrow \nu_\gamma \quad \text{as } n \to \infty,\ k \to \infty \text{ and } k/n \to 0.
\]

Proof. Following Resnick (2007) we define a scaling operator of Radon measures in M+(0,∞] by
\[
T : M_+((0,\infty]) \times (0,\infty) \to M_+((0,\infty]), \qquad (\mu, x) \mapsto T(\mu, x), \quad \text{where } T(\mu, x)(A) := \mu(xA)
\]
for any A ∈ 𝓔. Note that, using Proposition 3.1 in Resnick (2007), we have the joint weak convergence
\[
\left( \nu_{n,k},\ \frac{X_{n-k,n}}{U(n/k)} \right) \Rightarrow (\nu_\gamma,\, 1)
\]
in M+(0,∞] × (0,∞). Since

\[
\hat\nu_{n,k}(\cdot) = \nu_{n,k}\!\left( \frac{X_{n-k,n}}{U(n/k)}\, \cdot\, \right) = T\!\left( \nu_{n,k}(\cdot),\ \frac{X_{n-k,n}}{U(n/k)} \right),
\]
it remains to show that T is continuous at (νγ, 1). The desired convergence result will then follow by the continuous mapping theorem. Therefore, consider sequences µn ∈ M+(0,∞] and {xn, n ≥ 0} with xi ∈ (0,∞) where µn →v νγ and xn → x. In order to prove that T is continuous at (νγ, x), it suffices to show that for any f ∈ CK+(0,∞]

\[
\int_{(0,\infty]} f(t)\,\mu_n(x_n\,dt) = \int_{(0,\infty]} f(y/x_n)\,\mu_n(dy) \to \int_{(0,\infty]} f(y/x)\,\nu_\gamma(dy).
\]

In order to show that the difference can be made arbitrarily small, observe
\[
\left| \int f(y/x_n)\,\mu_n(dy) - \int f(y/x)\,\nu_\gamma(dy) \right|
\le \left| \int f(y/x_n)\,\mu_n(dy) - \int f(y/x)\,\mu_n(dy) \right|
+ \left| \int f(y/x)\,\mu_n(dy) - \int f(y/x)\,\nu_\gamma(dy) \right|.
\]
The second difference is o(1) since f(·/x) ∈ CK+(0,∞] and µn →v νγ by assumption. The first difference can be made small by exploiting the fact that for all sufficiently large n the supports of f(·/xn) and f(·/x) are contained in a common set (δ,∞] with δ > 0. Since f is continuous with compact support, f is uniformly continuous on (0,∞], which implies
\[
\left| \int f(y/x_n)\,\mu_n(dy) - \int f(y/x)\,\mu_n(dy) \right|
\le \sup_{y \in (\delta,\infty]} \bigl| f(y/x_n) - f(y/x) \bigr| \;\mu_n((\delta,\infty]) \to 0,
\]
which completes the proof of the continuity of the scaling map.

Proof of Theorem 3.6. Recall that we have νn,k ⇒ νγ. This implies
\[
\nu_{n,k}((x,\infty]) = \frac{n}{k}\bigl( 1 - F_n(U(n/k)\,x) \bigr) \xrightarrow{P} \nu_\gamma((x,\infty]) = x^{-1/\gamma} \quad \text{for any } x \ge 1,
\]
and hence
\[
F_{n,U(n/k)}(x) := \frac{F_n(U(n/k)\,x) - F_n(U(n/k))}{1 - F_n(U(n/k))} \xrightarrow{P} 1 - x^{-1/\gamma} \quad \text{for any } x \ge 1.
\]
This means that the empirical distribution of relative excesses above U(n/k) converges in probability to the Pareto distribution. However, note that Fn,U(n/k) remains dependent on the unknown tail quantile function U. Therefore, we replace U(n/k) by its empirical counterpart Xn−k,n, leading to the empirical version of the MDA(Hγ)-condition 1 − F ∈ RV−1/γ stated in (3.40).

To prove the second assertion, recall that we have ν̂n,k ⇒ νγ. This yields
\[
\hat\nu_{n,k}((x,\infty]) = \frac{n}{k}\bigl( 1 - F_n(X_{n-k,n}\,x) \bigr) \xrightarrow{P} \nu_\gamma((x,\infty]) = x^{-1/\gamma} \quad \text{for any } x \ge 1.
\]
Hence
\[
\frac{1 - F_n(X_{n-k,n}\,x)}{1 - F_n(X_{n-k,n})} \xrightarrow{P} x^{-1/\gamma} \quad \text{for any } x \ge 1,
\]
or equivalently
\[
F_{n,X_{n-k,n}}(x) := \frac{F_n(X_{n-k,n}\,x) - F_n(X_{n-k,n})}{1 - F_n(X_{n-k,n})} \xrightarrow{P} 1 - x^{-1/\gamma} \quad \text{for any } x \ge 1.
\]

Next, we consider the necessary and sufficient condition for F ∈ MDA(Hγ) given by U ∈ RVγ. Recall that U ∈ RVγ implies that quantiles of relative excesses above U(t) = F←(1 − 1/t) tend to quantiles of a Pareto distribution as t tends to infinity. In order to construct an empirical counterpart of the MDA-condition starting from an i.i.d. sequence X1, . . . , Xn of random variables with common distribution F ∈ MDA(Hγ), an appropriate empirical choice for U(t) is required.

Again, replacing U(t) by Xn,n leads to the same problem previously discussed, since there are no observations beyond Xn,n and therefore no relative excesses above Xn,n. To avoid this problem we define a sequence k := k(n) such that k → ∞, but k/n → 0 as n → ∞ and consider relative excesses above U(n/k).

The MDA-condition U ∈ RVγ then implies
\[
\lim_{n\to\infty} \frac{U\bigl( n/(k(1-p)) \bigr)}{U(n/k)} = (1-p)^{-\gamma} \quad \text{for } p \in [0,1).
\]

Note that Xn−k,n is the empirical counterpart of U(n/k). Considering relative excesses above Xn−k,n, the benefit of choosing an intermediate k becomes immediately apparent. On the one hand, the random threshold Xn−k,n tends to infinity as n → ∞, ensuring that the MDA-condition is still valid, while on the other hand the number of relative excesses above Xn−k,n increases with the sample size, allowing weak convergence results. Empirical p-quantiles (p ∈ (0,1)) of relative excesses above Xn−k,n are given by Xn−[(1−p)k],n/Xn−k,n. Thus, we obtain the following empirical counterpart of the MDA-condition U ∈ RVγ.

Corollary 3.6 (Empirical version of the MDA(Hγ)-condition U ∈ RVγ).

Let X1, . . . , Xn be i.i.d. random variables with common distribution F ∈ MDA(Hγ) and let X1,n ≤ · · · ≤ Xn,n be the corresponding order statistics. Then,
\[
\frac{X_{n-[(1-p)k],n}}{X_{n-k,n}} \xrightarrow{P} (1-p)^{-\gamma} \quad \text{for any } p \in [0,1), \qquad (3.42)
\]
provided k → ∞ and k/n → 0 as n → ∞.
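A direct numerical illustration of (3.42), again under the hypothetical exact Pareto model used above, could compare the empirical quantiles of relative excesses with their Pareto limits.

```python
import numpy as np

# Illustrative check of (3.42): X_{n-[(1-p)k],n} / X_{n-k,n}  ~  (1-p)**(-gamma).
rng = np.random.default_rng(3)

gamma = 0.5
n, k = 200_000, 2_000
order_stats = np.sort(rng.pareto(1.0 / gamma, size=n) + 1.0)   # X_{1,n} <= ... <= X_{n,n}
threshold = order_stats[n - k - 1]                             # X_{n-k,n}

for p in (0.0, 0.25, 0.5, 0.9):
    j = int((1 - p) * k)                                       # [(1-p)k]
    ratio = order_stats[n - j - 1] / threshold                 # X_{n-[(1-p)k],n} / X_{n-k,n}
    print(p, round(ratio, 3), round((1 - p) ** (-gamma), 3))   # empirical vs. limit
```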

To state the proof of Corollary 3.6 we require the following auxiliary result which follows from consistency of the tail empirical measure.

Corollary 3.7.

Suppose that X1, . . . , Xn are i.i.d. random variables with common distribution F ∈ MDA(Hγ). Then, νn,k ⇒ νγ in M+(0,∞] as n → ∞ and k = k(n) → ∞ with n/k → ∞ implies
\[
\frac{X_{n-k,n}}{U(n/k)} \xrightarrow{P} 1.
\]

Proof. We follow Resnick (2007). Observe that
\[
P\!\left( \left| \frac{X_{n-k,n}}{U(n/k)} - 1 \right| > \varepsilon \right)
= P\bigl( X_{n-k,n} > (1+\varepsilon)U(n/k) \bigr) + P\bigl( X_{n-k,n} < (1-\varepsilon)U(n/k) \bigr)
\]
\[
\le P\!\left( \frac{1}{k}\sum_{i=1}^{n} 1\Bigl\{ \tfrac{X_i}{U(n/k)} \in (1+\varepsilon,\infty] \Bigr\} \ge 1 \right)
+ P\!\left( \frac{1}{k}\sum_{i=1}^{n} 1\Bigl\{ \tfrac{X_i}{U(n/k)} \in [1-\varepsilon,\infty] \Bigr\} \le 1 \right).
\]
However, consistency of the tail empirical measure implies that

\[
\nu_{n,k}((1+\varepsilon,\infty]) = \frac{1}{k}\sum_{i=1}^{n} 1\Bigl\{ \tfrac{X_i}{U(n/k)} \in (1+\varepsilon,\infty] \Bigr\} \xrightarrow{P} \nu_\gamma((1+\varepsilon,\infty]) = (1+\varepsilon)^{-1/\gamma} < 1
\]
and
\[
\nu_{n,k}([1-\varepsilon,\infty]) = \frac{1}{k}\sum_{i=1}^{n} 1\Bigl\{ \tfrac{X_i}{U(n/k)} \in [1-\varepsilon,\infty] \Bigr\} \xrightarrow{P} \nu_\gamma((1-\varepsilon,\infty]) = (1-\varepsilon)^{-1/\gamma} > 1,
\]
provided n → ∞, k = k(n) → ∞ and n/k → ∞. Thus, both probabilities on the right-hand side tend to zero, and consistency of Xn−k,n as an estimator of U(n/k) follows.
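The consistency statement of Corollary 3.7 can be visualised in a model where U is available in closed form. The sketch below (assumptions as before: exact Pareto model, hence U(t) = t^γ) tracks the ratio Xn−k,n/U(n/k) over growing sample sizes.

```python
import numpy as np

# Illustrative check of Corollary 3.7: X_{n-k,n} / U(n/k) -> 1 in probability,
# shown here for an exact Pareto model where U(t) = t**gamma is known.
rng = np.random.default_rng(4)
gamma = 0.5

for n in (1_000, 10_000, 100_000, 1_000_000):
    k = int(n ** 0.6)                                       # intermediate sequence
    sample = rng.pareto(1.0 / gamma, size=n) + 1.0
    threshold = np.sort(sample)[n - k - 1]                  # X_{n-k,n}
    print(n, k, round(threshold / (n / k) ** gamma, 4))     # ratio should approach 1
```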

Proof of Corollary 3.6.

The empirical version of the MDA-condition follows immediately from Corollary 3.7. To see this, note that for any sequence k = k(n) satisfying k → ∞ and k/n → 0 as n → ∞ we have
\[
\frac{X_{n-k,n}}{U(n/k)} \xrightarrow{P} 1.
\]
Setting ˜k(n) := (1−p)k(n) for p ∈ [0,1), we still have ˜k → ∞ and ˜k/n → 0 as n → ∞ and therefore
\[
\frac{X_{n-[\tilde k],n}}{U(n/\tilde k)} \xrightarrow{P} 1.
\]
Since F ∈ MDA(Hγ) implies U ∈ RVγ, we obtain
\[
\lim_{n\to\infty} \frac{U(n/\tilde k)}{U(n/k)} = (1-p)^{-\gamma} \quad \text{for any } p \in [0,1).
\]
Thus,
\[
\frac{X_{n-[(1-p)k],n}}{U(n/k)} \xrightarrow{P} (1-p)^{-\gamma} \quad \text{for } p \in [0,1) \qquad (3.43)
\]
follows. Moreover, again due to Xn−k,n/U(n/k) →P 1, we end up with
\[
\frac{X_{n-[(1-p)k],n}}{X_{n-k,n}} \xrightarrow{P} (1-p)^{-\gamma} \quad \text{for any } p \in [0,1),
\]
as n → ∞ with k → ∞ but k/n → 0.

Remark. It is possible to derive a more powerful version of (3.43), given by
\[
\frac{X_{n-[kt],n}}{U(n/k)} \xrightarrow{P} t^{-\gamma} \quad \text{in } D(0,\infty].
\]
We refer to Resnick (2007), p. 82 for a proof. Obviously, this result implies the empirical MDA-condition stated in (3.42).

3.6 Second-order behavior of heavy-tailed