
\[
G_{\tilde\gamma,\mu,\sigma}(x) = \exp\left(-\left(1 + \tilde\gamma\,\frac{x-\mu}{\sigma}\right)^{-1/\tilde\gamma}\right), \quad \text{with } 1 + \tilde\gamma\,\frac{x-\mu}{\sigma} > 0,
\]
defines the Generalized Extreme Value Distribution. Note that any limit distribution of linearly normalized maxima belongs to the class of $G_{\tilde\gamma,\mu,\sigma}$.

Next, we discuss a one-to-one relation between the $G_{\tilde\gamma}$-parametrization and the $(\Phi_\alpha, \Lambda, \Psi_\alpha)$-parametrization. For $\tilde\gamma > 0$, we have
\[
G_{\tilde\gamma}((x-1)/\tilde\gamma) = \exp\left(-x^{-1/\tilde\gamma}\right) = \Phi_{1/\tilde\gamma}(x)
\]
for $x > 0$ and $0$ else. Further, for $\tilde\gamma = 0$ we get
\[
G_0(x) = \exp\left(-e^{-x}\right) = \Lambda(x), \quad x \in \mathbb{R},
\]
and for $\tilde\gamma < 0$
\[
G_{\tilde\gamma}(-(1+x)/\tilde\gamma) =
\begin{cases}
\exp\left(-(-x)^{-1/\tilde\gamma}\right), & x < 0, \\
1, & x \ge 0.
\end{cases}
\]
Thus $G_{\tilde\gamma}(-(1+x)/\tilde\gamma) = \Psi_{-1/\tilde\gamma}(x)$ for $x \in \mathbb{R}$. Hence, both representations of Extreme Value Distributions, together with their types, characterize the same set of distributions, and are therefore equivalent.
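This correspondence is easy to verify numerically. The following sketch (the function names, test points, and the choice $\mu = 0$, $\sigma = 1$ are mine) checks the identities for $\tilde\gamma = \pm 0.5$:

```python
import math

def G(g, x):
    """GEV distribution function in the gamma-parametrization (mu = 0, sigma = 1)."""
    if g == 0.0:
        return math.exp(-math.exp(-x))      # Gumbel case
    t = 1.0 + g * x
    if t <= 0.0:
        return 0.0 if g > 0 else 1.0        # outside the support constraint 1 + g*x > 0
    return math.exp(-t ** (-1.0 / g))

def Phi(a, x):                              # Frechet distribution function
    return math.exp(-x ** (-a)) if x > 0 else 0.0

def Psi(a, x):                              # Weibull distribution function
    return math.exp(-((-x) ** a)) if x < 0 else 1.0

g = 0.5                                     # gamma > 0: G(g, (x-1)/g) = Phi(1/g, x)
for x in (0.3, 1.0, 2.5):
    assert abs(G(g, (x - 1) / g) - Phi(1 / g, x)) < 1e-12

g = -0.5                                    # gamma < 0: G(g, -(1+x)/g) = Psi(-1/g, x)
for x in (-2.0, -0.5, 0.7):
    assert abs(G(g, -(1 + x) / g) - Psi(-1 / g, x)) < 1e-12
```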

Remark. There exists another approach to derive the explicit form of possible limit distributions of linearly normalized maxima. The idea is to use the equivalence $X_n \to_d X$ if and only if $\mathbb{E}(z(X_n)) \to \mathbb{E}(z(X))$ for all real, bounded, continuous functions $z$. We refer to Beirlant et al. (2004) for a detailed discussion.

3.2 Constructive approach

In this section we derive limit distributions of normalized maxima in a more constructive way by slightly modifying the arguments in Section 3.1. We start with some heuristics and subsequently provide a rigorous proof of the corresponding results using the theory of regularly varying functions.

Recall that from $X_{n,n} \to_P x_F \le \infty$ we concluded that a normalization of $X_{n,n}$ is necessary in order to obtain a nondegenerate limit distribution of the sample maximum. Assume for the moment that the first two moments of $X_{n,n}$ exist.

A first approach is to scale $X_{n,n}$ by some deterministic quantity which tends to $x_F$ as $n \to \infty$. A theoretical counterpart of $X_{n,n}$ is given by $U(n)$, where $U(t) := F^{\leftarrow}(1-1/t)$ is the tail quantile function. Note that, due to
\[
\mathbb{E}[X_{n,n}] \approx F^{\leftarrow}(n/(n+1)) \sim F^{\leftarrow}(1-1/n),
\]
$U(n)$ can be considered as an approximation of the expected value of $X_{n,n}$ (see Rice 1995). However, the scaled maximum $X_{n,n}/U(n)$ can exhibit rather different behavior depending on the variance of $X_{n,n}$. Three different cases are possible.
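The quality of this approximation is easy to check by simulation. A minimal sketch, where the choice $X_i \sim \mathrm{Exp}(1)$ (so that $U(t) = \log t$), the seed, and the sample sizes are mine:

```python
import math, random

random.seed(1)
n = 100_000
reps = 5_000
U_n = math.log(n)                     # tail quantile of Exp(1): U(t) = log t

# The maximum of n i.i.d. Exp(1) variables can be sampled in one step by
# inverting F^n(x) = (1 - e^{-x})^n: if V ~ Uniform(0,1), x = -log(1 - V^{1/n}).
mean_max = sum(-math.log(1.0 - random.random() ** (1.0 / n))
               for _ in range(reps)) / reps

# E[X_{n,n}] = log n + Euler-Mascheroni constant + o(1), so the ratio is near one.
print(mean_max / U_n)
```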

Firstly, if $\mathrm{Var}(X_{n,n})$ increases faster than $U(n)^2$ as a function of $n$, we cannot expect to obtain a weak convergence result for $X_{n,n}/U(n)$, since the deviations of $X_{n,n}$ from its expected value $U(n)$ explode as $n \to \infty$, even after scaling with $U(n)$.

Secondly, if $\mathrm{Var}(X_{n,n}) = O(U(n)^2)$, we expect to obtain $X_{n,n}/U(n) \to_d H$, where $H$ is some nondegenerate limit distribution, since the distribution of $X_{n,n}/U(n)$ stabilizes as $n \to \infty$. The third situation, where $\mathrm{Var}(X_{n,n}) = o(U(n)^2)$, yields $X_{n,n}/U(n) \to_P 1$, since the variance of $X_{n,n}$ grows too slowly as a function of $n$. Hence, the limit distribution of $X_{n,n}$ degenerates after scaling with $U(n)$ as $n \to \infty$.

Note that $x_F < \infty$ is sufficient but not necessary for the third case. Nevertheless, we can quantify the rate of convergence of $X_{n,n}/U(n)$ to one by considering
\[
\frac{1}{A(n)}\left(\frac{X_{n,n}}{U(n)} - 1\right)
\]
for some positive function $A$ with $\lim_{n\to\infty} A(n) = 0$. Since
\[
\mathrm{Var}\left(\frac{1}{A(n)}\left(\frac{X_{n,n}}{U(n)} - 1\right)\right) = \mathrm{Var}\left(\frac{1}{A(n)}\,\frac{X_{n,n}}{U(n)}\right),
\]
we can expect that $\frac{1}{A(n)}\left(\frac{X_{n,n}}{U(n)} - 1\right)$ converges to some nondegenerate limit if $\mathrm{Var}(X_{n,n}) = O(A^2(n)U^2(n))$.

Indeed, we obtain the second and third cases from the following theorem.

Theorem 3.3. Let $X_1, \ldots, X_n$ be i.i.d. random variables with common distribution $F$ and corresponding tail quantile function $U$. Moreover, suppose that there exist some sequences $a_n > 0$ and $b_n \in \mathbb{R}$ such that
\[
P((X_{n,n} - b_n)/a_n \le x) = F^n(a_n x + b_n) \to H(x), \qquad (3.12)
\]
weakly as $n \to \infty$, where $H$ is nondegenerate. Then $H$ is of the type of one of the following three classes:

(i) $H_\gamma(x) = \begin{cases} 0, & x \le 0, \\ \exp\left(-x^{-1/\gamma}\right), & x > 0, \end{cases} \qquad \gamma > 0.$

(ii) $H_{0,0}(x) = \exp\left(-e^{-x}\right), \qquad x \in \mathbb{R}.$

(iii) $H_{0,\rho}(x) = \begin{cases} \exp\left(-(1+\rho x)^{-1/\rho}\right), & 1 + \rho x > 0, \\ 1, & 1 + \rho x \le 0, \end{cases} \qquad \rho < 0.$
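The Fréchet class (i) can be illustrated by simulation. A sketch assuming a standard Pareto distribution $F(x) = 1 - x^{-\alpha}$ for $x \ge 1$, for which $U(t) = t^{1/\alpha}$ and $\gamma = 1/\alpha$ (the parameter values, seed, and tolerances are mine):

```python
import bisect, math, random

random.seed(7)
alpha, n, reps = 2.0, 10_000, 20_000
U_n = n ** (1.0 / alpha)              # U(t) = t^{1/alpha} for Pareto(alpha)

def sample_max():
    # Maximum of n Pareto(alpha) variables, drawn by inverting F^n(x) = (1 - x^{-alpha})^n.
    v = random.random()
    return (1.0 - v ** (1.0 / n)) ** (-1.0 / alpha)

scaled = sorted(sample_max() / U_n for _ in range(reps))

# Empirical distribution of X_{n,n}/U(n) versus H_gamma(x) = exp(-x^{-alpha}), gamma = 1/alpha.
for x in (0.5, 1.0, 2.0):
    ecdf = bisect.bisect_right(scaled, x) / reps
    assert abs(ecdf - math.exp(-x ** (-alpha))) < 0.02
```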

Proof. We first show how $H_\gamma$ appears as the limit of linearly normalized maxima. To this end, we ask under what conditions, if any, on $U$, and therefore on $F$, we have
\[
X_{n,n}/U(n) \to_d H, \quad \text{or equivalently} \quad F^n(U(n)x) \to H(x) \qquad (3.13)
\]
for any continuity point $x$ of $H$, where $H$ is some nondegenerate distribution function. Note that if $x_F < \infty$, then $F^n(U(n)x) \to \delta_1(x)$ for $x \in \mathbb{R}$. This implies $x_F = \infty$. Further, due to $U(n)x \to x_F$ for $x > 0$, we have

\[
\lim_{n\to\infty} \frac{-\log(F(U(n)x))}{1 - F(U(n)x)} = 1.
\]

This allows us to rewrite (3.13) as
\[
\lim_{n\to\infty} \frac{1}{n(1 - F(U(n)x))} = -\frac{1}{\log(H(x))}. \qquad (3.14)
\]

Inversion of (3.14) yields

n→∞lim

U(ny)

U(n) =H e−1/y

for any y >0. (3.15) SinceU is measurable, we conclude that U ∈RVγ and therefore the left-hand side of (3.15) is given byyγ with some γ ∈R. However, since limn→∞U(n) =xF = ∞, we can exclude the caseγ <0. Moreover, we obtain from (3.15) that for U ∈RVγ with γ >0,

\[
X_{n,n}/U(n) \to_d H \quad \text{as } n \to \infty, \quad \text{with limit distribution} \quad H(x) = H_\gamma(x) := \exp\left(-x^{-1/\gamma}\right) \quad \text{for } x > 0.
\]

The remaining case, $U \in RV_0$, leads to a degenerate limit $H_0(x) = \delta_1(x)$ in (3.13). This implies $X_{n,n}/U(n) \to_P 1$. Observe that for $x_F < \infty$, we have $X_{n,n}/U(n) \to_P 1$ and $U \in RV_0$ as well. Hence, scaling $X_{n,n}$ with $U(n)$ is not suitable in either case.
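This degeneracy can be observed numerically. A sketch for the standard exponential distribution, where $U(t) = \log t \in RV_0$ (the sample sizes and seed are mine):

```python
import math, random

random.seed(3)

def max_over_U(n):
    # One draw of X_{n,n}/U(n) for Exp(1), by inverting F^n(x) = (1 - e^{-x})^n;
    # expm1 keeps the inversion numerically stable for very large n.
    v = random.random()
    return -math.log(-math.expm1(math.log(v) / n)) / math.log(n)

stats = []
for n in (10**3, 10**6, 10**9):
    draws = [max_over_U(n) for _ in range(4_000)]
    m = sum(draws) / len(draws)
    sd = (sum((d - m) ** 2 for d in draws) / len(draws)) ** 0.5
    stats.append((n, m, sd))

# The mean tends to one and the spread collapses: X_{n,n}/U(n) -> 1 in probability.
assert stats[0][2] > stats[1][2] > stats[2][2]
assert abs(stats[-1][1] - 1.0) < 0.1
```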

However, we can assume that there exists some positive function $A$, satisfying $\lim_{n\to\infty} A(n) = 0$, such that
\[
\frac{1}{A(n)}\left(\frac{X_{n,n}}{U(n)} - 1\right) \to_d H \quad \text{as } n \to \infty, \quad \text{or equivalently} \quad F^n(U(n)(1 + A(n)x)) \to H(x) \quad \text{as } n \to \infty,
\]
for any continuity point $x$ of $H$, where $H$ is some nondegenerate distribution function. Applying similar arguments as above, we obtain

\[
\lim_{n\to\infty} \frac{1}{n(1 - F(U(n)(1 + A(n)x)))} = -\frac{1}{\log(H(x))},
\]
due to $U(n)(1 + A(n)x) \to x_F$ for any $x \in \mathbb{R}$. Inverting the last relation yields

\[
\lim_{n\to\infty} \frac{1}{A(n)}\left(\frac{U(ny)}{U(n)} - 1\right) = H^{\leftarrow}\left(e^{-1/y}\right) \quad \text{for any } y > 0.
\]

Recall that, if $U \in RV_0$, the only possible limit of the left-hand side is given by
\[
\psi(y) = \frac{y^\rho - 1}{\rho} \quad \text{for any } y > 0,
\]
where $\rho \le 0$ and $\psi(y) := \log(y)$ for $\rho = 0$. Note that this implies $U \in 2RV_{0,\rho}$ with a positive auxiliary function $A$ satisfying $A \in RV_\rho$. The resulting limit distributions are given either by

\[
H_{0,0}(x) := \exp(-e^{-x}), \quad x \in \mathbb{R}, \quad \text{for } \rho = 0,
\]
or
\[
H_{0,\rho}(x) := \begin{cases} \exp\left(-(1+\rho x)^{-1/\rho}\right), & 1 + \rho x > 0, \\ 1, & 1 + \rho x \le 0, \end{cases} \quad \text{for } \rho < 0.
\]
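For the standard exponential distribution, $U(t) = \log t \in RV_0$ with auxiliary function $A(t) = 1/\log t \in RV_0$ and $\rho = 0$: indeed, $(U(ny)/U(n) - 1)/A(n) = \log y$. The normalized maximum then reduces to $X_{n,n} - \log n$, which should be approximately Gumbel. A numerical sketch (the seed, sample sizes, and tolerances are mine):

```python
import bisect, math, random

random.seed(11)
n, reps = 100_000, 20_000
U_n, A_n = math.log(n), 1.0 / math.log(n)   # for Exp(1): U(t) = log t, A(t) = 1/log t, rho = 0

def sample_max():
    # Maximum of n i.i.d. Exp(1) variables, via inversion of F^n(x) = (1 - e^{-x})^n.
    return -math.log(1.0 - random.random() ** (1.0 / n))

# Note that (X_{n,n}/U(n) - 1)/A(n) reduces to X_{n,n} - log n here.
normed = sorted((sample_max() / U_n - 1.0) / A_n for _ in range(reps))

# Compare the empirical distribution with the Gumbel limit exp(-e^{-x}).
for x in (-1.0, 0.0, 1.0, 2.0):
    ecdf = bisect.bisect_right(normed, x) / reps
    assert abs(ecdf - math.exp(-math.exp(-x))) < 0.02
```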

So far, we have identified a two-parametric class of possible nondegenerate limit distributions, denoted by $(H_\gamma, H_{0,\rho})$ with $\gamma > 0$ and $\rho \le 0$. In order to ensure that there are no other possible limits, we follow classical arguments (see for instance de Haan 1976, Leadbetter et al. 1983, Resnick 1987 or Embrechts et al. 1997). We first prove that any limit distribution in (3.12) has to be max-stable, and then show that max-stable distributions are necessarily of the type of one of $(H_\gamma, H_{0,\rho})$.

Definition 3.2 (Max-stable distribution).
Let $X, X_1, \ldots, X_n$ be i.i.d. random variables with common distribution function $F$. Then $F$ is called max-stable if for any $n \ge 2$ there exist constants $a_n > 0$ and $b_n \in \mathbb{R}$ such that
\[
X_{n,n} =_d a_n X_1 + b_n.
\]

Next, we show that any (nondegenerate) limit distribution in (3.12) has to be max-stable. For any $k \in \mathbb{N}$ we have
\[
\lim_{n\to\infty} F^{nk}(a_n x + b_n) = \left(\lim_{n\to\infty} F^n(a_n x + b_n)\right)^k = H^k(x), \quad x \in \mathbb{R}.
\]
Moreover,
\[
\lim_{n\to\infty} F^{nk}(a_{nk} x + b_{nk}) = H(x), \quad x \in \mathbb{R}.
\]

Thus, according to the convergence to types theorem, there exist constants $\tilde a_k > 0$ and $\tilde b_k \in \mathbb{R}$ such that
\[
\lim_{n\to\infty} \frac{a_{nk}}{a_n} = \tilde a_k \quad \text{and} \quad \lim_{n\to\infty} \frac{b_{nk} - b_n}{a_n} = \tilde b_k,
\]
and such that for i.i.d. random variables $X_1, \ldots, X_k$ with common distribution $H$,
\[
X_{k,k} =_d \tilde a_k X_1 + \tilde b_k.
\]
Hence $H$ is necessarily max-stable.
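Max-stability can be verified in closed form for the Fréchet distribution $\Phi_\alpha$ with $a_n = n^{1/\alpha}$ and $b_n = 0$, since $\Phi_\alpha^n(n^{1/\alpha}x) = \exp(-n \cdot n^{-1} x^{-\alpha}) = \Phi_\alpha(x)$. A numerical spot check (the parameter values are mine):

```python
import math

def Phi(a, x):                        # Frechet distribution function
    return math.exp(-x ** (-a)) if x > 0 else 0.0

alpha = 1.5
for n in (2, 5, 50):
    a_n = n ** (1.0 / alpha)          # a_n = n^{1/alpha}, b_n = 0
    for x in (0.4, 1.0, 3.0):
        # Max-stability: F^n(a_n x + b_n) = F(x)
        assert abs(Phi(alpha, a_n * x) ** n - Phi(alpha, x)) < 1e-12
```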

It remains to prove that $(H_\gamma, H_{0,\rho})$ ($\gamma > 0$ and $\rho \le 0$) and their types are the only possible max-stable distributions. In the sequel we mainly follow the arguments stated in Resnick (1987), pp. 10-12.

From a continuous version of the max-stability condition, given by $H^t(x) = H(a(t)x + b(t))$ for any $t > 0$, we obtain for $t > 0$, $s > 0$
\[
H(a(ts)x + b(ts)) = H^{ts}(x) = (H^s(x))^t = H(a(t)a(s)x + a(t)b(s) + b(t)).
\]
This implies
\[
a(ts) = a(t)a(s), \qquad b(ts) = a(t)b(s) + b(t) = a(s)b(t) + b(s).
\]
Hence, $a(t) = t^{-\gamma}$ for some $\gamma \in \mathbb{R}$ follows. We distinguish the three cases $\gamma = 0$, $\gamma > 0$ and $\gamma < 0$.

For $\gamma = 0$ we have $a(t) \equiv 1$ and therefore
\[
b(ts) = b(s) + b(t),
\]
which yields $b(t) = c\log(t)$ for some $c \ne 0$. Note that $c = 0$ leads to a degenerate limit $H$. Hence, we have
\[
H^t(x) = H(x - c\log(t)) \quad \text{for any } t > 0.
\]
Since for any fixed $x$, $H^t(x)$ is nonincreasing in $t$, we conclude that $c > 0$. Moreover, assume that there exists some $x_0 \in \mathbb{R}$ such that $H(x_0) = 1$; then we obtain for all $t > 0$
\[
1 = H(x_0 - c\log(t)),
\]
which implies $H(x) = 1$ for all $x \in \mathbb{R}$. This contradicts the assumption of a nondegenerate limit $H$. The same holds if there exists some $x_0 \in \mathbb{R}$ such that $H(x_0) = 0$. Thus, we conclude $0 < H(x) < 1$ for any $x \in \mathbb{R}$.

For $x = 0$ we get
\[
H^t(0) = H(-c\log(t)) \;\Longleftrightarrow\; t\log H(0) = \log H(-c\log(t)) \;\Longleftrightarrow\; e^{t\log H(0)} = H(-c\log(t)).
\]
Setting $u = -c\log(t)$, i.e. $t = e^{-u/c}$, results in
\[
H(u) = \exp\left(e^{-u/c}\log H(0)\right) = \exp\left(-e^{-u/c}\, e^{\log(-\log(H(0)))}\right).
\]
Now, setting $\nu = c\log(-\log(H(0)))$ yields
\[
H(u) = \exp\left(-e^{-(u-\nu)/c}\right)
\]
for any $u \in \mathbb{R}$. Thus, $H$ is of $H_{0,0}$-type.
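One may verify directly that a distribution of $H_{0,0}$-type, $H(u) = \exp(-e^{-(u-\nu)/c})$, solves the continuous max-stability relation $H^t(x) = H(x - c\log(t))$. A numerical spot check (the values of $c$ and $\nu$ are mine):

```python
import math

c, nu = 2.0, 0.3                      # arbitrary scale c > 0 and location nu
H = lambda x: math.exp(-math.exp(-(x - nu) / c))   # H_{0,0}-type distribution function

for t in (0.5, 1.0, 3.0, 10.0):
    for x in (-1.0, 0.0, 2.5):
        # H^t(x) = H(x - c log t) for all t > 0
        assert abs(H(x) ** t - H(x - c * math.log(t))) < 1e-12
```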

For $\gamma \ne 0$, we obtain
\[
t^{-\gamma}b(s) + b(t) = s^{-\gamma}b(t) + b(s).
\]
This yields, for any $t \ne 1$ and $s \ne 1$,
\[
\frac{b(t)}{t^{-\gamma} - 1} = \frac{b(s)}{s^{-\gamma} - 1}.
\]
Hence, $b(t) = c(t^{-\gamma} - 1)$ follows. Inserting $a(t)$ and $b(t)$ yields
\[
H^t(x) = H(t^{-\gamma}x + c(t^{-\gamma} - 1)) = H(t^{-\gamma}(x + c) - c).
\]
Replacing $x + c$ by $y$ yields
\[
H^t(y - c) = H(t^{-\gamma}y - c).
\]
Now, consider $\tilde H(y) = H(y - c)$. Then $H$ and $\tilde H$ are of the same type, and it suffices to solve
\[
\tilde H^t(x) = \tilde H(t^{-\gamma}x). \qquad (3.16)
\]

We start with $\gamma > 0$. Evaluating (3.16) at $x = 0$ yields $\tilde H^t(0) = \tilde H(0)$ for any $t > 0$. This implies either $\tilde H(0) = 0$ or $\tilde H(0) = 1$. The latter case can be ruled out, since otherwise there exists some $x_0 < 0$ such that the left-hand side of (3.16) is decreasing in $t$, while the right-hand side is increasing. Thus, $\tilde H(0) = 0$ follows.

Moreover, since $\tilde H^t(1) = \tilde H(t^{-\gamma})$ for any $t > 0$, it follows that $0 < \tilde H(x) < 1$ for any $x > 0$. Observe that $\tilde H(1) = 0$ yields $\tilde H \equiv 0$, and $\tilde H(1) = 1$ results in $\tilde H = \delta_0$, implying a degenerate limit $\tilde H$ in both cases. From $\tilde H^t(1) = \tilde H(t^{-\gamma})$ we obtain
\[
\tilde H(t^{-\gamma}) = e^{t\log(\tilde H(1))}.
\]
Now, setting $x = t^{-\gamma}$ and $\nu = \left(-\log(\tilde H(1))\right)^{\gamma} > 0$ results in
\[
\tilde H(x) = e^{-(x/\nu)^{-1/\gamma}} \quad \text{for any } x > 0.
\]
In particular, $\tilde H$ is of $H_\gamma$-type.
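Again, one may check directly that $\tilde H(x) = e^{-(x/\nu)^{-1/\gamma}}$ solves (3.16). A numerical spot check (the values of $\gamma$ and $\nu$ are mine):

```python
import math

gamma, nu = 0.7, 1.4                  # arbitrary gamma > 0 and scale nu > 0

def H_tilde(x):
    # Frechet-type candidate solution of (3.16)
    return math.exp(-(x / nu) ** (-1.0 / gamma)) if x > 0 else 0.0

for t in (0.5, 2.0, 9.0):
    for x in (0.3, 1.0, 4.0):
        # (3.16): H_tilde(x)^t = H_tilde(t^{-gamma} x) for all t > 0
        assert abs(H_tilde(x) ** t - H_tilde(t ** (-gamma) * x)) < 1e-12
```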

Next, we consider (3.16) for $\gamma < 0$. Similarly as before, evaluating (3.16) at $x = 0$ yields $\tilde H^t(0) = \tilde H(0)$ for any $t > 0$. This implies either $\tilde H(0) = 0$ or $\tilde H(0) = 1$.

However, this time we can exclude $\tilde H(0) = 0$, since otherwise there exists some $x_0 > 0$ such that the left-hand side of (3.16), evaluated at $x_0$, is decreasing in $t$, while the right-hand side is increasing. Thus $\tilde H(0) = 1$ follows. Moreover, evaluating (3.16) at $x = -1$ yields
\[
\tilde H^t(-1) = \tilde H(-t^{-\gamma}) \quad \text{for any } t > 0. \qquad (3.17)
\]
Thus, for $\tilde H(-1) = 0$ or $\tilde H(-1) = 1$, we obtain a degenerate limit $\tilde H$. Hence $0 < \tilde H(-1) < 1$ follows. Moreover, we obtain from (3.17)
\[
\tilde H(-t^{-\gamma}) = e^{t\log(\tilde H(-1))}.
\]
Now, setting $x = -t^{-\gamma}$ and $\nu = \left(-\log(\tilde H(-1))\right)^{\gamma} > 0$ yields
\[
\tilde H(x) = e^{-(-x/\nu)^{-1/\gamma}} \quad \text{for any } x < 0,
\]
and therefore $\tilde H$ is of the type of $H_{0,\gamma}$ with $\gamma < 0$.

Hence any max-stable distribution is of the type of $(H_\gamma, H_{0,\rho})$ with $\gamma > 0$ and $\rho \le 0$.