4.2.4 Harmonic Moment Tail Index Estimator

The Conditional Mean Tail Index Estimator is well defined only for 0 < γ < 1. In order to enlarge the range of admissible γ-values, we introduce a tuning parameter β ≥ 0 and consider the random variable $X^{1-\beta}$. For β = 1 the corresponding transformation is interpreted as the limiting case log(X) (since $(x^{1-\beta}-1)/(1-\beta) \to \log x$ as β → 1). To see the effect of this transformation, observe that if X is Pareto distributed with $F_{pa,\gamma}$, we have

$$E\left[X^{1-\beta}\right] = \left(\gamma(\beta-1)+1\right)^{-1} \quad \text{for any } \gamma < (1-\beta)^{-1},$$

for β ≠ 1, and $E[\log(X)] = \gamma$. In particular, for β ≥ 1, $E[X^{1-\beta}]$ is well defined for any γ > 0, while for β < 1 it is only well defined in the range 0 < γ < (1−β)^{−1}. Hence, the tuning parameter β enlarges the range of γ-values for which the first moment exists.
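As a quick check of this moment formula (our computation, for the standard Pareto distribution with $1-F_{pa,\gamma}(x) = x^{-1/\gamma}$ for x ≥ 1):

$$E\left[X^{1-\beta}\right] = \int_1^\infty x^{1-\beta}\,\tfrac{1}{\gamma}\,x^{-1/\gamma-1}\,dx = \frac{1}{\gamma}\int_1^\infty x^{-\beta-1/\gamma}\,dx = \frac{1}{\gamma\left(\beta-1+1/\gamma\right)} = \frac{1}{\gamma(\beta-1)+1},$$

which is finite precisely when β − 1 + 1/γ > 0, i.e. for every γ > 0 when β ≥ 1, and for 0 < γ < (1−β)^{−1} when β < 1.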

Again starting from an i.i.d. sample $Y_1,\dots,Y_k$ with common Pareto distribution $F_{pa,\gamma}$, we can estimate $1/(\gamma(\beta-1)+1)$ by the empirical moment

$$\frac{1}{k}\sum_{i=1}^{k} Y_i^{1-\beta}.$$

By a simple transformation we obtain

$$\hat{\gamma}_k(\beta) = \frac{1}{\beta-1}\left[\left(\frac{1}{k}\sum_{i=1}^{k} Y_i^{1-\beta}\right)^{-1} - 1\right]$$

as an estimator of the Pareto parameter γ. Replacing the original observations in $\hat{\gamma}_k(\beta)$ by the relative excesses above $X_{n-k,n}$ leads to the Harmonic Moment Tail Index Estimator, defined as follows.

Definition 4.6 (Harmonic Moment Tail Index Estimator (HME)).

Let $X_1,\dots,X_n$ be i.i.d. random variables with common distribution $F \in MDA(H_\gamma)$. The Harmonic Moment Tail Index Estimator of γ is defined by

$$\hat{\gamma}_{n,k}(\beta) = \frac{1}{\beta-1}\left[\left(\frac{1}{k}\sum_{i=1}^{k}\left(\frac{X_{n-i+1,n}}{X_{n-k,n}}\right)^{1-\beta}\right)^{-1} - 1\right], \qquad \beta \neq 1,$$

while for β = 1, interpreted as the limiting case,

$$\hat{\gamma}_{n,k}(1) = \frac{1}{k}\sum_{i=1}^{k}\log\frac{X_{n-i+1,n}}{X_{n-k,n}}$$

is the Hill estimator $\hat{\gamma}^{(H)}_{n,k}$. Moreover, $\hat{\gamma}^{(0)}_{n,k} := \hat{\gamma}_{n,k}(0)$ corresponds to the Conditional Mean Tail Index Estimator.
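The following is a minimal numerical sketch of Definition 4.6; the function name hme and all parameter choices are ours, not part of the text, and β = 1 is treated as the limiting Hill case:

import numpy as np

def hme(x, k, beta):
    """Harmonic Moment Tail Index Estimator gamma_{n,k}(beta) of Definition 4.6.

    x    : 1-d array of observations
    k    : number of upper order statistics used (intermediate: k -> inf, k/n -> 0)
    beta : tuning parameter; beta = 1 recovers the Hill estimator
    """
    xs = np.sort(x)
    excesses = xs[-k:] / xs[-(k + 1)]        # X_{n-i+1,n} / X_{n-k,n}, i = 1, ..., k
    if beta == 1.0:
        return np.mean(np.log(excesses))     # Hill estimator as limiting case
    m = np.mean(excesses ** (1.0 - beta))    # estimates 1/(gamma(beta-1) + 1)
    return (1.0 / m - 1.0) / (beta - 1.0)    # invert via g(theta) = ((1/theta) - 1)/(beta - 1)

For β = 0 the same function returns the Conditional Mean Tail Index Estimator, in line with the definition above.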

Remark. The class of Harmonic Moment Tail Index Estimators was introduced in Henry (2009), using a different tuning parameter θ, which equals 1/(β−1) in the definition above. Although the author mentions that the HME is applicable to $F \in MDA(H_\gamma)$, the asymptotic properties are derived under the quite restrictive assumption

$$\exists\, u > 0 \text{ such that } x > u \;\Rightarrow\; P(X > x) = c\,x^{-1/\gamma}, \qquad c, \gamma > 0,$$

where u is some high threshold; i.e., the class of distributions is restricted to those with an exact Pareto tail beyond some threshold u. Stehlík et al. (2009, 2010) proposed a t-score moment estimator, which is a special case of the Harmonic Moment Tail Index Estimator with β = 2. A comparison with the ML estimator for a single-parameter Pareto distribution revealed the favorable robustness properties of the t-score moment estimator. A related approach is due to Finkelstein et al. (2006), who proposed an M-estimator for the tail index of a one-parameter Pareto distribution based on the probability integral transform. Similar to the Harmonic Moment Tail Index Estimator, the idea is to robustify the ML estimator by reducing the influence of large observations.

Theorem 4.12 (Consistency of the HME, Beran et al. (2013b)).

Let $X_1,\dots,X_n$ be an i.i.d. sequence of random variables with common distribution $F \in MDA(H_\gamma)$. If k(n)/n → 0, then $\hat{\nu}_{n,k} \Rightarrow \nu_\gamma$ implies consistency of the Harmonic Moment Tail Index Estimator,

$$\hat{\gamma}_{n,k}(\beta) \xrightarrow{P} \gamma,$$

provided β > 1 − 1/γ.

Proof. Rewriting $\hat{\gamma}_{n,k}(\beta)$ as a functional of $\hat{\nu}_{n,k}$ will yield the desired consistency result. As before, we consider the functional

$$T^{(\beta)}(\mu) = \int_1^\infty \mu(x,\infty]\,\frac{dx}{x^{\beta}},$$

defined on $M_+(0,\infty]$, where β > 1 − 1/γ. In order to prove that

$$T^{(\beta)}(\hat{\nu}_{n,k}) \xrightarrow{P} T^{(\beta)}(\nu_\gamma) = \frac{\gamma}{\gamma(\beta-1)+1},$$

define, for M > 1, the truncated functional $T_M^{(\beta)}(\mu) = \int_1^M \mu(x,\infty]\,x^{-\beta}\,dx$.

Since the integration is over a finite region, the continuous mapping theorem yields

$$T_M^{(\beta)}(\hat{\nu}_{n,k}) \xrightarrow{P} T_M^{(\beta)}(\nu_\gamma) = \int_1^M \nu_\gamma(x,\infty]\,\frac{dx}{x^{\beta}}.$$

Hence, according to Theorem 4.2 in Billingsley (1968), it remains to show that for all ε > 0

$$\lim_{M\to\infty}\limsup_{n\to\infty} P\left(\int_M^\infty \hat{\nu}_{n,k}(x,\infty]\,\frac{dx}{x^{\beta}} > \varepsilon\right) = 0.$$

For this we use a decomposition as in Resnick (2007), which bounds this probability by a sum of three terms, $P_1 + P_2 + P_3$. The second probability is negligible due to $X_{n-k,n}/U(n/k) \xrightarrow{P} 1$. Using Markov's inequality we can bound $P_3$; note that regularly varying tails imply

$$\frac{n}{k}\,P\big(X > U(n/k)\,x\big) \longrightarrow x^{-1/\gamma} \quad \text{for every } x > 0,$$

so that the resulting bound vanishes as M → ∞, since β > 1 − 1/γ.

So far, we have shown that

$$Y_n^{(\beta)} := T^{(\beta)}(\hat{\nu}_{n,k}) \xrightarrow{P} \int_1^\infty \nu_\gamma(x,\infty]\,x^{-\beta}\,dx = \gamma/(1+\gamma(\beta-1)). \qquad (4.11)$$

Rewriting $Y_n^{(\beta)}$ we obtain, for β ≠ 1, by integration by parts,

$$Y_n^{(\beta)} = \frac{1}{1-\beta}\left(\frac{1}{k}\sum_{i=1}^{k}\left(\frac{X_{n-i+1,n}}{X_{n-k,n}}\right)^{1-\beta} - 1\right). \qquad (4.12)$$

Combining (4.11) and (4.12) we obtain

$$\frac{1}{k}\sum_{i=1}^{k}\left(\frac{X_{n-i+1,n}}{X_{n-k,n}}\right)^{1-\beta} \xrightarrow{P} \frac{1}{1+\gamma(\beta-1)},$$

and applying the continuous map $g(\theta) = \frac{1}{\beta-1}\left(\frac{1}{\theta}-1\right)$ yields $\hat{\gamma}_{n,k}(\beta) \xrightarrow{P} \gamma$.
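As an illustration of Theorem 4.12, one can check the convergence on simulated Pareto data, reusing the hme sketch from after Definition 4.6 (an illustrative setup; all names and values are ours):

rng = np.random.default_rng(1)
gamma, beta = 0.8, 2.0                             # beta > 1 - 1/gamma holds
for n in (1_000, 10_000, 100_000):
    x = rng.pareto(1.0 / gamma, size=n) + 1.0      # P(X > x) = x^{-1/gamma}, x >= 1
    k = int(n ** 0.6)                              # intermediate sequence: k -> inf, k/n -> 0
    print(n, k, hme(x, k, beta))                   # estimates should settle near gamma = 0.8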

Next, we derive the asymptotic normality of the Harmonic Moment Tail Index Estimator.

Theorem 4.13 (CLT for the Harmonic Moment Tail Index Estimator).

Let $X_1,\dots,X_n$ be i.i.d. random variables with common distribution $F \in MDA(H_\gamma)$ and assume that $U \in 2RV_{\gamma,\rho}$. Then, for any intermediate sequence k(n) → ∞ with k/n → 0 satisfying

$$\lim_{n\to\infty} \sqrt{k}\,A_0(n/k) = \lambda, \qquad (4.13)$$

we have, for β > 1 − 1/(2γ),

$$\sqrt{k}\left(\hat{\gamma}_{n,k}(\beta) - \gamma\right) \xrightarrow{d} N\left(\lambda\mu_\beta,\, \sigma^2_\beta\right), \qquad (4.14)$$

where

$$\mu_\beta := \mu_\beta(\gamma,\rho) = \frac{1+\gamma(\beta-1)}{1-\rho+\gamma(\beta-1)} \quad\text{and}\quad \sigma^2_\beta := \sigma^2_\beta(\gamma) = \frac{\gamma^2\left(1+\gamma(\beta-1)\right)^2}{1+2\gamma(\beta-1)}.$$

Remark. Note that, due to $A_0(t) \sim A(t)$, (4.13) is equivalent to

$$\lim_{n\to\infty} \sqrt{k}\,A(n/k) = \lambda. \qquad (4.15)$$

Remark. Under the simplifying assumption that for some fixed x₀ > 0 we have the exact equality $1-F(x) = P(X > x) = c\,x^{-1/\gamma}$ for all x > x₀ and some c, γ > 0, we obtain asymptotically a $N(0, \sigma^2_\beta)$-distribution, i.e. no asymptotic bias. This simple case was already considered in Henry (2009). Moreover, for β = 1, Theorem 4.13 coincides with previous results for the Hill estimator $\hat{\gamma}_{n,k}(1)$ (see for instance Theorem 4.9 or de Haan and Ferreira 2006, Theorem 3.2.5).

Proof of Theorem 4.13. We start with the asymptotic normality of the harmonic moment

$$\hat{x}^{(\beta)}_{H,k} := \frac{1}{k}\sum_{i=1}^{k}\left(\frac{X_{n-i+1,n}}{X_{n-k,n}}\right)^{1-\beta}.$$

First, rewrite $\hat{x}^{(\beta)}_{H,k}$ as an integral with respect to the empirical distribution function $F_n$. For β ≠ 1, integration by parts, followed by substituting s by $tU(n/k)$ inside the integral, leads to a decomposition of $\sqrt{k}\big(\hat{x}^{(\beta)}_{H,k} - (1+\gamma(\beta-1))^{-1}\big)$ into three terms, which we denote by I, II and III.

By the weighted uniform approximation of the tail empirical process (see Theorem 3.8), and since $X_{n-k,n}/U(n/k) \xrightarrow{P} 1$, we can use the Slutsky theorem to identify the limit of I.

Next, consider II. Again due to the uniform convergence in Theorem 3.8, and making use of Slutsky's theorem in the last step, II is asymptotically equal in distribution to $(1-\beta)\gamma\int_0^1 W_n(s)\,s^{-\gamma-1+\gamma\beta}\,ds$, where $W_n$ denotes the sequence of Brownian motions from the approximation in Theorem 3.8. Note that the integral appearing here is finite, where we implicitly assume that β > 1 − 1/γ; the bias part converges under the additional assumption $\lim_{n\to\infty}\sqrt{k}\,A_0(n/k) = \lambda$.

Considering the remaining term, we obtain by the delta method that III is asymptotically equal in distribution to $\frac{\gamma(1-\beta)}{\gamma(1-\beta)-1}\,W_n(1)$.

Now, since

$$\frac{\gamma(1-\beta)}{\gamma(1-\beta)-1}\,W_n(1) + (1-\beta)\gamma\int_0^1 W_n(s)\,s^{-\gamma-1+\gamma\beta}\,ds$$

is a linear combination of Gaussian random variables, we obtain the desired central limit theorem after computing its variance. Therefore we write

$$E\left\{\left((\gamma(1-\beta)-1)^{-1}\,W(1) + \int_0^1 W(s)\,s^{-\gamma-1+\gamma\beta}\,ds\right)^{2}\right\}$$
$$= (\gamma(1-\beta)-1)^{-2}\,E[W^2(1)] + 2(\gamma(1-\beta)-1)^{-1}\int_0^1 E[W(1)W(s)]\,s^{-\gamma-1+\gamma\beta}\,ds$$
$$\qquad + \int_0^1\!\!\int_0^1 E[W(t)W(s)]\,t^{-\gamma-1+\gamma\beta}\,dt\;s^{-\gamma-1+\gamma\beta}\,ds$$
$$=: S_1 + S_2 + S_3.$$

Obviously, $S_1 = (\gamma(1-\beta)-1)^{-2}$. Moreover, due to $E[W(s)W(t)] = \min(s,t)$, we obtain

$$S_2 = 2(\gamma(1-\beta)-1)^{-1}\int_0^1 s^{-\gamma+\gamma\beta}\,ds = -2(\gamma(1-\beta)-1)^{-2}.$$

Furthermore, we get

$$S_3 = 2\int_0^1\!\!\int_0^s t^{-\gamma+\gamma\beta}\,dt\; s^{-\gamma-1+\gamma\beta}\,ds = 2(\gamma(1-\beta)-1)^{-1}(2\gamma(1-\beta)-1)^{-1},$$

provided β > 1 − (2γ)^{−1}. Note that for β ≤ 1 − (2γ)^{−1}, $S_3$ is not defined.

Finally, we obtain

$$\mathrm{Var}\left(\frac{\gamma(1-\beta)}{\gamma(1-\beta)-1}\,W_n(1) + (1-\beta)\gamma\int_0^1 W_n(s)\,s^{-\gamma-1+\gamma\beta}\,ds\right) \qquad (4.17)$$
$$= \gamma^2(1-\beta)^2\left[(\gamma(1-\beta)-1)^{-2} - 2(\gamma(1-\beta)-1)^{-2} + 2(\gamma(1-\beta)-1)^{-1}(2\gamma(1-\beta)-1)^{-1}\right]$$
$$= \frac{\gamma^2(1-\beta)^2}{(1+\gamma(\beta-1))^2\,(1+2\gamma(\beta-1))}.$$
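The algebra for $S_1$, $S_2$, $S_3$ and (4.17) can be verified symbolically; here is a small sympy sketch (our code; it should print 0 twice if the simplifications above are correct):

import sympy as sp

g, b = sp.symbols('gamma beta', positive=True)
s, t = sp.symbols('s t', positive=True)
a = -g - 1 + g * b                  # exponent -gamma - 1 + gamma*beta
c = g * (1 - b) - 1                 # gamma(1 - beta) - 1

S1 = c ** -2
S2 = 2 / c * sp.integrate(s * s ** a, (s, 0, 1), conds='none')     # uses E[W(1)W(s)] = s
S3 = 2 * sp.integrate(sp.integrate(t * t ** a, (t, 0, s), conds='none') * s ** a,
                      (s, 0, 1), conds='none')

print(sp.simplify(S3 - 2 / (c * (2 * g * (1 - b) - 1))))           # expect 0
var = (g * (1 - b)) ** 2 * (S1 + S2 + S3)
target = (g * (1 - b)) ** 2 / ((1 + g * (b - 1)) ** 2 * (1 + 2 * g * (b - 1)))
print(sp.simplify(var - target))                                   # expect 0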

Now we are ready to state the central limit theorem for $\hat{x}^{(\beta)}_{H,k}$. Suppose k is intermediate, β > 1 − 1/(2γ), and $\lim_{n\to\infty}\sqrt{k}\,A_0(n/k) = \lambda$. Then

$$\sqrt{k}\left(\hat{x}^{(\beta)}_{H,k} - \frac{1}{1+\gamma(\beta-1)}\right) \xrightarrow{d} N\left(\lambda\tilde{\mu}_\beta,\, \tilde{\sigma}^2_\beta\right), \qquad (4.18)$$

where

$$\tilde{\mu}_\beta := \tilde{\mu}_\beta(\gamma,\rho) = \frac{1-\beta}{(1-\rho+\gamma(\beta-1))(1+\gamma(\beta-1))}$$

and

$$\tilde{\sigma}^2_\beta := \tilde{\sigma}^2_\beta(\gamma) = \frac{\gamma^2(1-\beta)^2}{(1+\gamma(\beta-1))^2\,(1+2\gamma(\beta-1))}.$$

We can construct an estimator for γ by applying the map $g(\theta) = \frac{1}{\beta-1}\left(\frac{1}{\theta}-1\right)$ to $\hat{x}^{(\beta)}_{H,k}$; the delta method then transfers (4.18) to $\hat{\gamma}_{n,k}(\beta)$ and completes the proof.
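For concreteness, the delta-method step can be written out as follows (our computation; it reproduces the constants $\mu_\beta$ and $\sigma^2_\beta$ stated in Theorem 4.13). With $\theta_0 = (1+\gamma(\beta-1))^{-1}$ and $g'(\theta) = -\big((\beta-1)\theta^2\big)^{-1}$, so that $g'(\theta_0) = -(1+\gamma(\beta-1))^2/(\beta-1)$:

$$\mu_\beta = g'(\theta_0)\,\tilde{\mu}_\beta = \frac{1+\gamma(\beta-1)}{1-\rho+\gamma(\beta-1)}, \qquad \sigma^2_\beta = g'(\theta_0)^2\,\tilde{\sigma}^2_\beta = \frac{\gamma^2\,(1+\gamma(\beta-1))^2}{1+2\gamma(\beta-1)}.$$

For β = 1 this reduces to $\mu_1 = 1/(1-\rho)$ and $\sigma^2_1 = \gamma^2$, the familiar constants for the Hill estimator, and for ρ = 0 we get $\mu_\beta \equiv 1$, in line with Corollary 4.3(a) below.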

Alternative proof of the asymptotic normality of the HME.

Recall that, according to Theorem 3.10, for ε small enough the intermediate order statistics $X_{n-[ks],n}$ admit a uniform approximation in which the $o_P$-term converges to zero uniformly for 0 < s ≤ 1. Moreover, note that $U \in RV_\gamma$, and $U \in 2RV_{\gamma,\rho}$ satisfies the second-order relation

$$\lim_{t\to\infty}\frac{U(tx)/U(t) - x^{\gamma}}{A_0(t)} = x^{\gamma}\,\frac{x^{\rho}-1}{\rho}.$$

This yields a uniform approximation for any t, tx > t₀. Applying similar arguments as in the proof of the second statement in Theorem 3.10, we obtain a corresponding expansion for the ratios $X_{n-[ks],n}/X_{n-k,n}$. Integrating this expansion over s ∈ (0,1] and using (4.17) for the variance of the resulting Gaussian part, we obtain the assertion, under the additional assumption $\lim_{n\to\infty}\sqrt{k}\,A_0(n/k) = \lambda$, by applying the delta method.

Corollary 4.3. Under the conditions of Theorem 4.13, the asymptotic mean squared error of $\hat{\gamma}_{n,k}(\beta)$ is given by

$$AMSE(\beta) := k^{-1}\left(\lambda^2\mu^2_\beta + \sigma^2_\beta\right).$$

Moreover,

(a) If ρ = 0, then $\mu_\beta \equiv 1$ and $\sigma_\beta$ is minimal for β = 1.

(b) If ρ < 0, then AMSE(1) < AMSE(β) for β > 1, and there exists some β ∈ (1 − 1/(2γ), 1) such that AMSE(1) > AMSE(β).

(c) If ρ → −∞, then $\mu_\beta \to 0$ and

$$eff(\beta,1) = \frac{AMSE(1)}{AMSE(\beta)} < 1$$

for any β ≠ 1 with β > 1 − 1/(2γ).

Proof of Corollary 4.3. The proofs of assertions (a) and (c) are straightforward. Assertion (b) follows from $\frac{d}{d\beta}\,AMSE(\beta)\big|_{\beta=1} > 0$ for ρ < 0.
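Assertion (b) can also be seen numerically by tabulating the AMSE expression; a small sketch (our code, with illustrative values for γ, ρ, λ and k, and with the constants of Theorem 4.13):

def amse(beta, gamma, rho, lam, k):
    """k^{-1} (lambda^2 mu_beta^2 + sigma_beta^2) for the HME."""
    d = 1.0 + gamma * (beta - 1.0)
    mu = d / (1.0 - rho + gamma * (beta - 1.0))
    sigma2 = gamma ** 2 * d ** 2 / (1.0 + 2.0 * gamma * (beta - 1.0))
    return (lam ** 2 * mu ** 2 + sigma2) / k

gamma, rho, lam, k = 0.5, -1.0, 1.0, 200
for beta in (0.5, 0.8, 0.9, 1.0, 1.5, 2.0):     # all satisfy beta > 1 - 1/(2*gamma) = 0
    print(beta, amse(beta, gamma, rho, lam, k))  # with rho < 0, AMSE dips below its
                                                 # beta = 1 value for some beta < 1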