
Maximum Domains of Attraction

We proceed with necessary and sufficient conditions on the tail quantile function U = (1/(1−F))^← to ensure the existence of nondegenerate limit distributions of linearly normalized maxima. To this end, we state the following definition.

Definition 3.3 (Maximum domain of attraction).

A distribution function F is in the maximum domain of attraction of some nondegenerate distribution G, written F ∈ MDA(G), if and only if there exist normalizing constants a_n > 0 and b_n ∈ ℝ such that

F^n(a_n x + b_n) = P(X_{n,n} ≤ a_n x + b_n) → G(x) for any continuity point x of G.

Corollary 3.2 (Characterization of MDA).

Let F be some distribution function and U = (1/(1−F))^← the corresponding tail quantile function. Then,

(i) F ∈ MDA(H_γ) (γ > 0) if and only if U ∈ RV_γ,

(ii) F ∈ MDA(H_{0,ρ}) (ρ ≤ 0) if and only if U ∈ 2RV_{0,ρ} with an ultimately positive auxiliary function A ∈ RV_ρ.

The normalizing sequences in (3.5) can be chosen as

(i) a_n = U(n), b_n = 0,

(ii) a_n = A(n)U(n) and b_n = U(n) for ρ ≤ 0.

Proof. Following the lines of the proof of Theorem 3.3, both assertions as well as the possible choices of normalizing sequences follow directly.
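The normalization in case (i) can be illustrated numerically. The following sketch is an assumed example using the standard Pareto distribution F(x) = 1 − 1/x (x ≥ 1), for which U(t) = t and γ = 1, so that a_n = U(n) = n and b_n = 0; the limit H_1(x) = exp(−x^{−1}) is taken from the proof of Corollary 3.3.

```python
import math

# Assumed example: standard Pareto distribution F(x) = 1 - 1/x for x >= 1,
# so U(t) = t, gamma = 1, and Corollary 3.2 (i) suggests a_n = U(n) = n, b_n = 0.
def F(x):
    return 1.0 - 1.0 / x if x >= 1.0 else 0.0

def H1(x):
    # H_gamma with gamma = 1, i.e. exp(-x**(-1))
    return math.exp(-1.0 / x)

x = 2.0
for n in [10, 1000, 100000]:
    # distribution of the normalized maximum: F^n(a_n * x + b_n)
    print(n, F(n * x) ** n, H1(x))
```

For growing n the two printed values agree to more and more digits, reflecting F^n(nx) = (1 − 1/(nx))^n → exp(−1/x).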

The normalizing sequences in case F ∈ MDA(H_{0,ρ}) can be modified in order to avoid the auxiliary function A for U ∈ 2RV_{0,ρ}, as stated by the next theorem.

Theorem 3.4. Let F ∈ MDA(H_{0,ρ}), i.e. there exist sequences a_n > 0 and b_n ∈ ℝ such that

lim_{n→∞} F^n(a_n x + b_n) = H_{0,ρ}(ax + b),   (3.18)

with a > 0 and b ∈ ℝ. If ρ = 0, then setting a_n = g(U(n)), where

g(t) := ∫_t^{x_F} (1 − F(s)) ds / (1 − F(t)),

and b_n = U(n), leads to a = 1 and b = 1. If ρ < 0, then a_n = x_F − U(n) and b_n = x_F yields a = −1/ρ and b = −1/ρ.

Proof. For ρ ≤ 0 we have b_n = U(n) and a_n = A(n)U(n), where A is a positive auxiliary function for U ∈ 2RV_{0,ρ}, as possible normalizing sequences. According to Theorem 3.3 this leads to a = 1 and b = 1 in (3.18). Hence, for ρ = 0 it remains to show that

lim_{n→∞} g(U(n)) / (A(n)U(n)) = lim_{n→∞} n ∫_{U(n)}^{x_F} (1 − F(s)) ds / (A(n)U(n)) = 1.

Therefore, observe that by integration by parts we have for any n > 0

n ∫_{U(n)}^{x_F} (1 − F(s)) ds = n ∫_{U(n)}^{x_F} s dF(s) − U(n) = n ∫_n^∞ U(s)/s² ds − U(n).

Moreover, we obtain by the dominated convergence theorem

lim_{n→∞} n ∫_{U(n)}^{x_F} (1 − F(s)) ds / (A(n)U(n)) = lim_{n→∞} ( n ∫_n^∞ U(s)/s² ds − U(n) ) / (A(n)U(n)) = lim_{n→∞} ∫_1^∞ (U(nx) − U(n)) / (A(n)U(n)) · dx/x² = 1,

where the last equality arises due to

lim_{n→∞} (U(nx) − U(n)) / (A(n)U(n)) = log(x), locally uniformly for any x > 0,

together with ∫_1^∞ log(x)/x² dx = 1.

For ρ < 0, it suffices to show that

lim_{n→∞} (x_F − U(n)) / (A(n)U(n)) = −1/ρ.   (3.19)

Since U ∈ 2RV_{0,ρ} (ρ < 0) with positive auxiliary function A, we have U ∈ ERV_ρ with some auxiliary function a(t) ∼ A(t)U(t). Then, (3.19) follows by Theorem 2.7.

Remark. In the classical literature on limits of linearly normalized maxima there is no distinction between the two parameters γ > 0 and ρ ≤ 0. Both parameters are usually summarized into the so-called extreme value index (EVI), γ̃ ∈ ℝ, where γ̃ := γ for γ > 0 and γ̃ := ρ for γ = 0 and ρ ≤ 0. This allows one to specify the possible limit distributions using the one-parametric family G_γ̃. However, we favor the γ-ρ-parametrization, since it allows us to distinguish between the first- and second-order behavior of the tail quantile function U of the underlying distribution F. In particular, this reveals in full that estimating γ and estimating ρ are two essentially different problems. In contrast, there exist several EVI-estimators designed for the estimation of γ̃ ∈ ℝ. This frequently results in different asymptotic behavior of the estimators for γ̃ > 0 and γ̃ ≤ 0, which is caused by the different behavior of the tail quantile function U in the two cases.

The following corollary establishes the relation between MDA(H_γ), MDA(H_{0,ρ}) and the MDAs of the classical limit distributions Fréchet (Φ_α), Gumbel (Λ) and Weibull (Ψ_α).

Corollary 3.3. We have the following equivalences:

(i) F ∈ MDA(H_γ) ⇐⇒ F ∈ MDA(Φ_α) with α = 1/γ.

(ii) F ∈ MDA(H_{0,0}) ⇐⇒ F ∈ MDA(Λ).

(iii) F ∈ MDA(H_{0,ρ}) (ρ ≠ 0) ⇐⇒ F ∈ MDA(Ψ_α) with α = −1/ρ.

Proof. Note that Φ_{1/γ}(x) = exp(−x^{−1/γ}) = H_γ(x), which proves (i). To prove (ii) observe that H_{0,0}(x) = Λ(x). For the last equivalence, H_{0,ρ}(x) = Ψ_{−1/ρ}(−ρx − 1) suffices.

Next, we state the corresponding equivalences between MDA(H_γ), MDA(H_{0,ρ}) and MDA(G_γ̃), where

G_γ̃(x) := exp(−(1 + γ̃x)^{−1/γ̃})   (γ̃ ∈ ℝ)

is the one-parametric representation of Jenkinson and von Mises.

Corollary 3.4. We have the following equivalences:

(i) For any γ > 0, we have F ∈ MDA(H_γ) ⇐⇒ F ∈ MDA(G_γ).

(ii) F ∈ MDA(H_{0,0}) ⇐⇒ F ∈ MDA(G_0).

(iii) For any ρ < 0, we have F ∈ MDA(H_{0,ρ}) ⇐⇒ F ∈ MDA(G_ρ).

Proof. Since G_γ(x) = H_γ(1 + γx), the distributions G_γ and H_γ are of the same type, which proves (i). The cases (ii) and (iii) are trivial.

Remark. One could interpret the two-parametric representation (H_γ, H_{0,ρ}) of the limit distributions as a compromise between the classical Fisher-Tippett representation (Φ_α, Λ, Ψ_α) and the one-parametric representation of Jenkinson and von Mises, given by G_γ̃. In recent literature the G_γ̃-representation seems to have become prevalent, probably due to the simple necessary and sufficient condition U ∈ ERV_γ̃ for F ∈ MDA(G_γ̃). However, as stated earlier, the class ERV_γ̃ contains essentially different information about its members for γ̃ > 0 and γ̃ ≤ 0. Hence, U ∈ ERV_γ̃ characterizes different types of behavior of the tail quantile function U while providing a one-parametric condition for F ∈ MDA(G_γ̃).

Adequate necessary and sufficient conditions for F ∈ MDA(H_γ) and F ∈ MDA(H_{0,ρ}) with ρ ≤ 0 in terms of F are stated by the next theorem.

Theorem 3.5 (Theorem 1.2.1, de Haan and Ferreira (2006)).

We have the following equivalences:

1. F ∈ MDA(H_γ) with γ > 0 if and only if

lim_{t→∞} (1 − F(tx)) / (1 − F(t)) = x^{−1/γ} for any x > 0,   (3.20)

i.e. 1 − F ∈ RV_{−1/γ}.

2. F ∈ MDA(H_{0,0}) if and only if

lim_{t→x_F} (1 − F(t(f(t)x + 1))) / (1 − F(t)) = e^{−x} for any x ∈ ℝ,   (3.21)

where f(t) := A(1/(1 − F(t))) and A is some ultimately positive function satisfying A ∈ RV_0.

3. F ∈ MDA(H_{0,ρ}) with ρ < 0 if and only if for any x with 1 + ρx > 0

lim_{t→x_F} (1 − F(t(f(t)x + 1))) / (1 − F(t)) = (1 + ρx)^{−1/ρ},   (3.22)

where f(t) := A(1/(1 − F(t))) and A is some ultimately positive function satisfying A ∈ RV_ρ.

Proof. We follow de Haan and Ferreira (2006), slightly adjusting their arguments. Since U ∈ RV_γ, we conclude that the left- and right-hand sides of (3.24) tend to (x/(1+ε))^γ and (x/(1−ε))^γ, respectively. Note that U is non-decreasing. Hence, inverting (3.25) yields

lim_{t→∞} (1 − F(t)) / (1 − F(tx)) = x^{1/γ} for any x > 0,

which corresponds to (3.20). The proof of the converse statement is similar.

For the second and third statement, let F ∈ MDA(H_{0,ρ}), which implies U ∈ 2RV_{0,ρ} with a positive auxiliary function A. Since U ∈ 2RV_{0,ρ}, the corresponding upper and lower bounds, cf. (3.28), equal log(1 ± ε), respectively. Next, substitute 1/(1 − F(s)) = t and note that t → ∞ as s → x_F. Inverting both relations yields the assertion with f(s) = A(1/(1 − F(s))). The proofs of the converse statements are similar.

The first part of the preceding theorem leads to the following definition.

Definition 3.4 (Heavy-tailed distribution, Tail index).

Assume that F ∈ MDA(H_γ) with γ > 0. Then F is said to be a heavy-tailed distribution, and the parameter γ is called the tail index of F.

Remark. The condition 1 − F ∈ RV_{−1/γ} states that the tail 1 − F of a distribution F ∈ MDA(H_γ) behaves roughly like a power function. Hence, the decay of 1 − F to zero is rather slow, justifying the term heavy-tailed distribution. Moreover, note that 1 − F ∈ RV_{−1/γ} can be relaxed to

lim_{t→∞} (1 − F(tx)) / (1 − F(t)) = x^{−1/γ} for any x ≥ 1.   (3.33)

To see this, fix some arbitrary x ≥ 1 and define t′ = tx^{−1}. Then, due to (3.33) we have

lim_{t′→∞} (1 − F(t′x)) / (1 − F(t′)) = lim_{t→∞} (1 − F(t)) / (1 − F(tx^{−1})) = x^{−1/γ}.

Hence, for y := x^{−1} we have

lim_{t→∞} (1 − F(ty)) / (1 − F(t)) = y^{−1/γ}.   (3.34)

Since x ≥ 1 was arbitrary, (3.34) holds for any 0 < y ≤ 1. Therefore, (3.33) is equivalent to 1 − F ∈ RV_{−1/γ}. The reason why we state this equivalence is that one can easily see that (3.33) is, in turn, equivalent to

lim_{t→∞} P(X/t ≤ x | X > t) = lim_{t→∞} (F(tx) − F(t)) / (1 − F(t)) = 1 − x^{−1/γ}   (3.35)

for any x ≥ 1. Hence, for X ∼ F with F ∈ MDA(H_γ), the conditional distribution of relative excesses X/t above some (high) threshold t tends to the Pareto distribution F_{pa,γ}(x) := 1 − x^{−1/γ}, x ≥ 1, as t → ∞. Note that deviations of X/t from exact Pareto behavior are scaled out by letting t tend to infinity.
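The Pareto limit (3.35) can be checked without simulation. The sketch below is an assumed example with F(x) = 1 − (1+x)^{−2}, hence γ = 1/2, and evaluates the conditional probability directly:

```python
# Assumed example: F(x) = 1 - (1+x)**-2, so 1 - F is regularly varying with
# index -2 and the tail index is gamma = 1/2.
def S(x):
    # survival function 1 - F(x)
    return (1.0 + x) ** -2

x = 3.0
for t in [1e1, 1e3, 1e6]:
    cond = 1.0 - S(t * x) / S(t)     # P(X/t <= x | X > t)
    print(t, cond, 1.0 - x ** -2.0)  # Pareto limit 1 - x**(-1/gamma)
```

As t grows, the conditional probability approaches the Pareto value 1 − x^{−1/γ} = 1 − x^{−2}.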

Moreover, we can interpret the condition U ∈ RV_γ in a similar way. To this end, define the conditional distribution of relative excesses above some threshold t > 0 by

F_t(x) := P(X/t ≤ x | X > t) = (F(tx) − F(t)) / (1 − F(t)) for x ≥ 1.

Due to F(tx) = F(t) + F_t(x)(1 − F(t)), the quantile function of F_t is given by

F_t^←(p) = F^←(p + (1−p)F(t)) / t for p ∈ [0,1).

Setting t := U(s) = F^←(1 − 1/s) with s ∈ (1,∞) yields

F_{U(s)}^←(p) = F^←(p + (1−p)F(U(s))) / U(s) = F^←(p + (1−p)(1 − 1/s)) / U(s) = F^←(1 − (1−p)/s) / U(s) = U(s/(1−p)) / U(s).

From U ∈ RV_γ we conclude

lim_{s→∞} U(s/(1−p)) / U(s) = (1−p)^{−γ} for any p ∈ [0,1).   (3.36)

Applying similar arguments as above, one can even show that (3.36) and U ∈ RV_γ are equivalent. Further, since

F_{pa,γ}^←(p) = (1−p)^{−γ}, p ∈ [0,1),

is the quantile function of the Pareto distribution, U ∈ RV_γ simply states that quantiles of relative excesses above the threshold U(s) tend to the corresponding quantiles of the Pareto distribution with parameter γ as s → ∞.
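The quantile version (3.36) admits the same kind of numerical check. The sketch is an assumed example: for F(x) = 1 − (1+x)^{−2} we have 1/(1−F(x)) = (1+x)², hence U(t) = √t − 1 and γ = 1/2.

```python
import math

# Assumed example: U(t) = sqrt(t) - 1 corresponds to F(x) = 1 - (1+x)**-2,
# which has tail index gamma = 1/2.
gamma = 0.5

def U(t):
    return math.sqrt(t) - 1.0

p = 0.9
for s in [1e2, 1e4, 1e8]:
    # quantile of relative excesses vs. the Pareto quantile (1-p)**(-gamma)
    print(s, U(s / (1.0 - p)) / U(s), (1.0 - p) ** -gamma)
```

The ratio U(s/(1−p))/U(s) settles at (1−p)^{−1/2} ≈ 3.162 for p = 0.9 as s grows.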

Remark. Note that F ∈ MDA(H_{0,ρ}) implies U ∈ RV_0, meaning that scaling the sample maximum by U(n) results in X_{n,n}/U(n) →_P 1. Moreover, (3.21) and (3.22) are equivalent to

lim_{t→x_F} (F(t(f(t)x + 1)) − F(t)) / (1 − F(t)) = 1 − e^{−x} for ρ = 0, and = 1 − (1 + ρx)^{−1/ρ} for ρ < 0.

Substituting t by U(s) and noting that f(U(s)) = A(s), we get

lim_{s→∞} P( (X/U(s) − 1)/A(s) ≤ x | X > U(s) ) = 1 − e^{−x} for ρ = 0, and = 1 − (1 + ρx)^{−1/ρ} for ρ < 0.

In particular, note that if F ∈ MDA(H_{0,0}), the conditional distribution of suitably scaled excesses above some high threshold t converges to the exponential distribution as t → ∞. For F ∈ MDA(H_{0,ρ}) with ρ < 0, the conditional distribution of suitably scaled excesses above some high threshold t converges to the generalized Pareto distribution with parameter ρ as t → ∞.
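For the case ρ = 0, the scaled-excess limit can be verified exactly. The following sketch assumes the standard exponential distribution, for which U(s) = log(s) and A(s) = 1/log(s) (Example 3.3 below with λ = 1):

```python
import math

# Assumed example: standard exponential distribution, U(s) = log(s),
# A(s) = 1/log(s). The scaled excesses (X/U(s) - 1)/A(s) given X > U(s)
# should follow the exponential limit 1 - exp(-x).
def S(x):
    # survival function 1 - F(x) = exp(-x)
    return math.exp(-x)

x = 2.0
for s in [1e1, 1e3, 1e6]:
    Us, As = math.log(s), 1.0 / math.log(s)
    threshold = Us * (As * x + 1.0)    # U(s) * (A(s)*x + 1)
    cond = 1.0 - S(threshold) / S(Us)  # P((X/U(s)-1)/A(s) <= x | X > U(s))
    print(s, cond, 1.0 - math.exp(-x))
```

Here the threshold simplifies to x + log(s), so the conditional probability equals 1 − e^{−x} for every s, reflecting the memorylessness of the exponential distribution.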

If F is absolutely continuous, there exists a simple sufficient condition for F ∈ MDA(H_γ) in terms of the density of F.

Corollary 3.5 (von Mises condition).

Let F be an absolutely continuous distribution function with x_F = ∞ and density f satisfying

lim_{x→∞} (1 − F(x)) / (xf(x)) = γ > 0,   (3.37)

then F ∈ MDA(H_γ). Conversely, if 1 − F ∈ RV_{−1/γ} with γ > 0, i.e. F ∈ MDA(H_γ), and f is ultimately monotone, then (3.37) holds.

Moreover, (3.37) is equivalent to

lim_{t→∞} U(t)^{−1} / (t f(U(t))) = γ.

Proof. See Embrechts et al. (1997), pp. 132 and 568.

It is interesting to note that the von Mises condition can be interpreted according to the heuristics stated at the beginning of this section. Indeed, observe that under the assumption of an absolutely continuous distribution function F with density f, the variance of the maximum can be approximated by

Var(X_{n,n}) ≈ n / ((n+1)²(n+2) f(U(n))²) ∼ 1 / (n² f(U(n))²) as n → ∞

(see Rice 1995). On the other hand, we require Var(X_{n,n}) = O(U(n)²) in order to stabilize the distribution of X_{n,n} by scaling with U(n). This obviously yields

lim_{n→∞} U(n)^{−1} / (n f(U(n))) = c > 0.

Now, setting U(n) = x and noting that n = U^←(x) = 1/(1 − F(x)), we obtain

lim_{x→∞} (1 − F(x)) / (xf(x)) = c > 0,

which corresponds to the von Mises condition with c = γ. Hence, one could say that maxima of distributions satisfying the von Mises condition grow uniformly in the sense that their approximate expected value U(n) and their approximate standard deviation increase at the same rate.

We give some simple examples for F ∈ MDA(H_γ) and F ∈ MDA(H_{0,ρ}).

Example 3.2 (Cauchy distribution).

For a standard Cauchy distributed random variable X we have

F(x) = 1/2 + π^{−1} arctan(x) = 1 − (πx)^{−1}(1 + o(1)) as x → ∞.

Thus we get

U(t) = π^{−1} t (1 + o(1)).

In particular, U ∈ RV_1 and therefore F ∈ MDA(H_1). That means X is heavy-tailed with tail index γ = 1. Moreover, possible normalizing sequences are given by a_n := π^{−1} n ∼ U(n) and b_n = 0.
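The tail index γ = 1 can also be confirmed through the von Mises condition (3.37). A minimal numerical sketch:

```python
import math

# Numerical check that the standard Cauchy distribution satisfies the
# von Mises condition (3.37) with gamma = 1.
def F(x):
    return 0.5 + math.atan(x) / math.pi

def f(x):
    # Cauchy density
    return 1.0 / (math.pi * (1.0 + x * x))

for x in [1e2, 1e4, 1e6]:
    # (1 - F(x)) / (x * f(x)) should approach gamma = 1
    print(x, (1.0 - F(x)) / (x * f(x)))
```

The printed ratio tends to 1, in accordance with (3.37).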

Example 3.3 (Exponential distribution).

Let X be exponentially distributed, i.e.

F_λ(x) = 1 − e^{−λx} for x > 0.

Then U(t) = λ^{−1} log(t) and therefore U ∈ RV_0. Moreover, for A(t) := 1/log(t), we obtain

lim_{t→∞} A(t)^{−1} (U(tx)/U(t) − 1) = log(x).

That means U ∈ 2RV_{0,0} with an ultimately positive auxiliary function A, which implies F_λ ∈ MDA(H_{0,0}). Possible normalizing sequences are given by a_n = λ^{−1} and b_n = λ^{−1} log(n).
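A quick numerical check of these sequences (a sketch assuming λ = 2): the normalized maxima F_λ^n(a_n x + b_n) should approach exp(−e^{−x}), a Gumbel-type distribution of the same type as H_{0,0}.

```python
import math

# Sketch assuming lambda = 2: with a_n = 1/lam and b_n = log(n)/lam the
# normalized maxima of exponential samples converge to exp(-exp(-x)),
# a Gumbel-type limit (the same type as H_{0,0}).
lam = 2.0

def F(x):
    return 1.0 - math.exp(-lam * x) if x > 0 else 0.0

x = 1.0
for n in [10, 1000, 100000]:
    an, bn = 1.0 / lam, math.log(n) / lam
    print(n, F(an * x + bn) ** n, math.exp(-math.exp(-x)))
```

The convergence reflects F_λ^n(a_n x + b_n) = (1 − e^{−x}/n)^n → exp(−e^{−x}).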

Example 3.4. Suppose X is uniformly distributed on (0,1). Then

U(t) = (1 − t^{−1}) 1_{[1,∞)}(t) for t ∈ ℝ.

Hence U ∈ RV_0. Moreover, setting A(t) := t^{−1}/(1 − t^{−1}) yields

lim_{t→∞} A(t)^{−1} (U(tx)/U(t) − 1) = 1 − x^{−1}.

Since A ∈ RV_{−1}, we have U ∈ 2RV_{0,−1} and therefore F ∈ MDA(H_{0,−1}) follows. Furthermore, a_n = n^{−1} and b_n = 1 − n^{−1} are possible choices for the normalizing sequences.