STEIN ESTIMATION IN HIGH DIMENSIONS AND THE BOOTSTRAP 1

Rudolf Beran

University of California, Berkeley

The Stein estimator $\hat{\xi}_S$ and the better positive-part Stein estimator $\hat{\xi}_{PS}$ both dominate the sample mean, under quadratic loss, in the $N(\xi, I)$ model of dimension $q \ge 3$. Standard large sample theory does not explain this phenomenon well. Plausible bootstrap estimators for the risk of $\hat{\xi}_S$ do not converge correctly at the shrinkage point as sample size $n$ increases. By analyzing a submodel exactly, with the help of results from directional statistics, and then letting dimension $q \to \infty$, we find:

In high dimensions, $\hat{\xi}_S$ and $\hat{\xi}_{PS}$ are approximately admissible and approximately minimax on large compact balls about the shrinkage point. The sample mean is neither.

A new estimator of $\xi$, asymptotically equivalent to $\hat{\xi}_{PS}$ as $q \to \infty$, appears to dominate $\hat{\xi}_{PS}$ slightly.

Resampling from a $N(\hat{\xi}, I)$ distribution, where $|\hat{\xi}|^2$ estimates $|\xi|^2$ well, is the key to consistent bootstrap risk estimation for orthogonally equivariant estimators of $\xi$. Choosing $\hat{\xi}$ to be the Stein estimator, the positive-part Stein estimator, or the sample mean does not work.

Estimators of $|\xi|$ are subject to a sharp local asymptotic minimax bound as $q$ increases.

AMS 1991 subject classification: Primary 62J07; secondary 62C15.

Key words and phrases: Admissible, minimax, high dimension, orthogonal group, equivariant, directional statistics.

1 This research was supported in part by NSF grant DMS 9001710. Some of the author's work was done as a guest of Sonderforschungsbereich 123 at Universität Heidelberg.


1. Introduction.

For the mean vector $\xi$ of a $q$-variate $N(\xi, I)$ distribution, the sample mean is not an admissible estimator, under squared-error loss, when dimension $q \ge 3$. First proved by C. Stein (1956) and subsequently sharpened in James and Stein (1961), this remarkable result came as a surprise to the statistical community. Notable contributions to our understanding of the Stein phenomenon include Stein (1962), Brown (1966), Baranchik (1970), Strawderman (1972), Efron and Morris (1973), Stein (1981), and Berger and Wolpert (1983). A valuable survey article is Brandwein and Strawderman (1990).

Large sample theory has difficulty in explaining what Stein estimation is about.

Suppose $Y_1, Y_2, \ldots, Y_n$ are i.i.d. random $q$-vectors, each having a $N(\xi, I)$ distribution with $\xi$ unknown and $q \ge 3$. Let $|\cdot|$ denote euclidean norm on $R^q$ and let $\bar{Y}_n$ denote the sample mean vector. The basic Stein estimator

(1.1)  $\hat{\xi}_S = \big[\,1 - (q-2)/(n|\bar{Y}_n|^2)\,\big]\,\bar{Y}_n$

has risk

(1.2)  $R_{q,n}(\hat{\xi}_S, \xi) = q^{-1} n E|\hat{\xi}_S - \xi|^2 = 1 - E\big[\,\{(q-2)^2/q\}/(n|\bar{Y}_n|^2)\,\big],$

which is strictly less than the risk of $\bar{Y}_n$ at every $\xi$ and equals $2/q$ at the shrinkage point $\xi = 0$ (James and Stein 1961).
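The behavior of (1.1) and (1.2) is easy to check numerically. The sketch below is our addition, not part of the paper: a Monte Carlo experiment in Python with $n = 1$, estimating the normalized risk of $\hat{\xi}_S$ at the shrinkage point, where theory gives exactly $2/q$, against the constant risk 1 of the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def stein(y, q):
    # basic Stein estimator (1.1) with n = 1: [1 - (q-2)/|y|^2] y
    return (1.0 - (q - 2) / np.sum(y**2)) * y

def mc_risk(estimator, xi, q, reps=20000):
    # Monte Carlo estimate of the normalized risk q^{-1} E |est(X) - xi|^2
    x = rng.standard_normal((reps, q)) + xi
    err = np.array([estimator(row, q) for row in x]) - xi
    return np.sum(err**2) / (reps * q)

q = 10
xi0 = np.zeros(q)                           # the shrinkage point
risk_stein = mc_risk(stein, xi0, q)         # theory: exactly 2/q = 0.2
risk_mean = mc_risk(lambda y, _: y, xi0, q) # sample mean: risk 1 at every xi
```

With 20,000 replications the Monte Carlo error is well below 0.01, so the gap between the two risks at $\xi = 0$ is unmistakable.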

Standard large sample theory tells us that, as $n \to \infty$ with $q$ fixed:

(a) Both $\hat{\xi}_S$ and $\bar{Y}_n$ are locally asymptotically minimax estimators at every $\xi$, in the sense of Hajek (1972) and LeCam (1972).

(b) Both $\hat{\xi}_S$ and $\bar{Y}_n$ are Hajek regular at every $\xi \neq 0$ and are asymptotically least dispersed among such regular estimators of $\xi$, by virtue of Hajek's (1970) convolution theorem.

(c) At the shrinkage point $\xi = 0$, $\bar{Y}_n$ is still Hajek regular but $\hat{\xi}_S$ is not (van der Vaart 1988).

(d) At every $\xi \neq 0$, the risk of $\hat{\xi}_S$ improves upon the risk of $\bar{Y}_n$ by $O(n^{-1})$ (Ibragimov and Has'minskii 1981).

None of these results explains the form of $\hat{\xi}_S$ or whether significant improvement on $\hat{\xi}_S$ is possible.

Related to the lack of Hajek regularity in point (c) is the inconsistency at the shrinkage point, as $n \to \infty$, of plausible bootstrap estimators for the risk $R_{q,n}(\hat{\xi}_S, \xi)$. Writing $R_n(\xi)$ for this risk, two natural parametric bootstrap estimators are $R_n(\hat{\xi}_S)$ and $R_n(\bar{Y}_n)$. These correspond to resampling from the $N(\hat{\xi}_S, I)$ and $N(\bar{Y}_n, I)$ distributions respectively. Consider an arbitrary sequence $\{\xi_n \in R^q\}$ such that $n^{1/2}(\xi_n - \xi) \to h$, a fixed finite $q$-vector. Let $Z$ denote a standard normal random $q$-vector. Then

(1.3)  $\lim_{n\to\infty} R_n(\xi_n) = \begin{cases} 1 & \text{if } \xi \neq 0 \\ w(h) & \text{if } \xi = 0, \end{cases}$

where

(1.4)  $w(h) = 1 - E\big[\,\{(q-2)^2/q\}/|Z+h|^2\,\big].$

Thus, when $\xi = 0$ and $q$ is fixed as $n \to \infty$, the bootstrap risk estimator $R_n(\hat{\xi}_S)$ converges in distribution to the non-degenerate random variable $w([\,1 - (q-2)/|Z|^2\,]Z)$, rather than to the correct risk $w(0)$. Similarly, the alternative bootstrap risk estimator $R_n(\bar{Y}_n)$ converges in distribution to the non-degenerate random variable $w(Z)$.
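The inconsistency can be seen numerically. The sketch below is an illustration we add, not the paper's: it approximates $w(h)$ of (1.4) by Monte Carlo, confirms that $w(0) = 2/q$ (the correct limiting risk at the shrinkage point), and shows that a plug-in argument of the form $[1 - (q-2)/|Z|^2]Z$ yields a value that scatters randomly rather than settling at $w(0)$.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 10

def w(h, reps=40000):
    # w(h) = 1 - E[ ((q-2)^2/q) / |Z+h|^2 ],  Z ~ N(0, I_q)  -- equation (1.4)
    z = rng.standard_normal((reps, q))
    return 1.0 - np.mean((q - 2) ** 2 / (q * np.sum((z + h) ** 2, axis=1)))

w0 = w(np.zeros(q))                  # correct limit at the shrinkage point: 2/q
z = rng.standard_normal(q)           # one realization of the limiting noise
h_boot = (1.0 - (q - 2) / np.sum(z**2)) * z
w_boot = w(h_boot)                   # what R_n(Stein) converges to: random
```

Rerunning the last three lines with different draws of `z` gives different values of `w_boot`, which is exactly the non-degeneracy described above.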

This paper pursues the theme that dimensional asymptotics, in which $q \to \infty$, help to clarify the Stein phenomenon. The basis for this approach is in Stein (1956) and (1962). Asymptotics in $q$ have received little attention in the subsequent literature on Stein estimation, but are common in the logically related nonparametric regression literature, where $\xi_i$ is assumed to depend smoothly upon $i$.

Our results are organized as follows. Section 2 studies the best orthogonally equivariant estimator $\hat{\xi}_E(\rho_0)$ of $\xi$ in the $N(\xi, I)$ submodel where $\rho_0 = |\xi|$ is fixed. The orthogonal group is transitive on the parameter space of the submodel. We find an explicit analytical formula for $\hat{\xi}_E(\rho_0)$ by using theory from directional statistics for the Langevin (or Fisher-von Mises) distribution on the unit sphere. A mathematically simpler equivariant estimator,

(1.5)  $\hat{\xi}_{AE}(\rho_0) = (\rho_0^2/|\bar{Y}_n|^2)\,\bar{Y}_n,$

is shown to approximate $\hat{\xi}_E(\rho_0)$ well in high dimensions. It was the perception of this approximation that guided the treatment in Stein (1956) and (1962).

Section 3 develops good estimates $\hat{\rho}$ of $|\xi|$ in the full $N(\xi, I)$ model and then analyzes the adaptive estimators $\hat{\xi}_E(\hat{\rho})$ and $\hat{\xi}_{AE}(\hat{\rho})$ of $\xi$. In particular, Section 3.1 establishes that the estimators

(1.6)  $\hat{\rho}^2 = |\bar{Y}_n|^2 - (q-d)/n, \qquad \hat{\rho}^2 = [\,|\bar{Y}_n|^2 - (q-d)/n\,]_+,$

$d$ being any constant, are locally asymptotically minimax for $|\xi|^2$ as $q \to \infty$. Here $[x]_+$ is the positive-part function, equal to the larger of $x$ and $0$. When $\hat{\rho}^2$ is taken to be $|\bar{Y}_n|^2 - (q-2)/n$, then $\hat{\xi}_{AE}(\hat{\rho})$ is the Stein estimator $\hat{\xi}_S$. On the other hand, when $\hat{\rho}^2$ is $[\,|\bar{Y}_n|^2 - (q-2)/n\,]_+$, then $\hat{\xi}_{AE}(\hat{\rho})$ becomes the positive-part Stein estimator

(1.7)  $\hat{\xi}_{PS} = \big[\,1 - (q-2)/(n|\bar{Y}_n|^2)\,\big]_+\,\bar{Y}_n.$

Section 3.2 develops asymptotic optimality results concerning the estimation of $\xi$. We prove that as $q \to \infty$, with $\hat{\rho}$ given by (1.6), the estimators $\hat{\xi}_E(\hat{\rho})$ and $\hat{\xi}_{AE}(\hat{\rho})$ are asymptotically minimax and asymptotically $\epsilon$-admissible on large compact balls about the origin. The estimator $\bar{Y}_n$ has neither optimality property. The best choice of the constant $d$ in the estimator $\hat{\xi}_E(\hat{\rho})$ is not entirely clear. However, a numerical experiment strongly suggests that, for every $q$, there exist values of $d$ such that the estimator $\hat{\xi}_E(\hat{\rho})$ dominates the positive-part Stein estimator $\hat{\xi}_{PS}$.

Section 3.3 returns to the question of bootstrapping orthogonally equivariant estimators such as Stein's. We prove that resampling from the $N(\hat{\xi}, I)$ distribution, where $|\hat{\xi}|^2$ is an asymptotically efficient estimator of $|\xi|^2$ in the sense of Section 3.1, yields consistent risk estimators as $q \to \infty$ for regular orthogonally equivariant estimators of $\xi$. The argument also shows why resampling from the $N(\hat{\xi}_S, I)$ or $N(\hat{\xi}_{PS}, I)$ or $N(\bar{Y}_n, I)$ distributions fails. A simple adjustment reduces the bias of the proposed risk estimators.

2. The Fixed-Length Submodel.

Without any loss of generality, we fix the sample size $n$ at 1. Observed is the random $q$-vector $X = (X_1, \ldots, X_q)'$ whose distribution is $N(\xi, I)$, the vector $\xi \in R^q$ being unknown. The risk of an estimator $\hat{\xi} = \hat{\xi}(X)$ is

(2.1)  $R_q(\hat{\xi}, \xi) = q^{-1} E|\hat{\xi} - \xi|^2.$

Of special interest in this paper are estimators $\hat{\xi}$ that are equivariant under the orthogonal group: $\hat{\xi}(OX) = O\hat{\xi}(X)$ for every $q \times q$ orthogonal matrix $O$. Every such estimator can be written in the form

(2.2)  $\hat{\xi}(X) = h(|X|)\,X$

for some real-valued function $h$ (Stein 1956, Section 3).

2.1. Exact theory. Consider the estimation of $\xi$ when $|\xi|$ is fixed at a known value $\rho_0$ and only the direction vector $\theta = \xi/|\xi|$ is unknown. In this submodel, we derive the minimum risk equivariant estimator of $\xi$ and the minimum risk equivariant estimator of $\xi$ among estimators whose length is $\rho_0$.

The conditional risk, given $|X|$, of any equivariant estimator (2.2) is

(2.3)  $q^{-1}\big[\,h^2(|X|)|X|^2 - 2h(|X|)E(\xi'X \mid |X|) + \rho_0^2\,\big].$

Let $\hat{\theta} = X/|X|$ denote the direction vector of $X$. The choice of $h$ that minimizes (2.3) is

(2.4)  $h_0(|X|) = |X|^{-2} E(\xi'X \mid |X|) = \rho_0 |X|^{-1} E(\theta'\hat{\theta} \mid |X|).$

The conditional expectation in (2.4) may be evaluated as follows. When $q \ge 2$, the conditional distribution of $\hat{\theta}$ given $|X|$ is Langevin on the unit sphere in $R^q$, with mean direction $\mu = \theta = \xi/|\xi|$ and dispersion parameter $\kappa = \rho_0|X|$ (cf. Watson 1986). The density of this distribution, relative to spherical surface measure, is $a_q(\kappa)\exp(\kappa\mu'x)$, where

(2.5)  $a_q(\kappa) = (2\pi)^{-q/2}\,\kappa^{q/2-1}\,[I_{q/2-1}(\kappa)]^{-1}$

and $I_\nu(\kappa)$ is the modified Bessel function of the first kind and order $\nu$ (cf. Schou 1978). When $q = 1$, the conditional distribution of $\hat{\theta}$ is discrete, supported on the two points $\pm 1$, with

(2.6)  $P(\hat{\theta} = 1 \mid |X|) = [\exp(\rho_0\theta|X|) + \exp(-\rho_0\theta|X|)]^{-1}\exp(\rho_0\theta|X|),$
       $P(\hat{\theta} = -1 \mid |X|) = [\exp(\rho_0\theta|X|) + \exp(-\rho_0\theta|X|)]^{-1}\exp(-\rho_0\theta|X|).$

From the analysis in Appendix A of Watson (1986), it follows that for every integer $q \ge 2$,

(2.7)  $E(\theta'\hat{\theta} \mid |X|) = A_q(\rho_0|X|),$

where

(2.8)  $A_q(z) = I_{q/2}(z)/I_{q/2-1}(z), \qquad z \ge 0.$

For $q = 1$, the conditional distribution (2.6) yields

(2.9)  $E(\theta'\hat{\theta} \mid |X|) = \tanh(\rho_0|X|).$

This calculation agrees conveniently with formula (2.7) for $q = 1$. Thus, by (2.4), (2.7) and (2.9), the minimum risk orthogonally equivariant estimator of $\xi$ in the fixed-length submodel is

(2.10)  $\hat{\xi}_E(\rho_0) = \rho_0\,A_q(\rho_0|X|)\,\hat{\theta}, \qquad q \ge 1.$

If we restrict attention to equivariant estimators $\hat{\xi}$ such that $|\hat{\xi}| = \rho_0$, the only possibilities, according to (2.2), are $\hat{\xi} = \pm\rho_0\hat{\theta}$. The positive sign minimizes the conditional risk (2.3). Consequently, the best constrained-length equivariant estimator of $\xi$ is

(2.11)  $\hat{\xi}_{CE}(\rho_0) = \rho_0\,\hat{\theta},$

in agreement with intuition.

These considerations, the compactness of the orthogonal group on $R^q$, and the Hunt-Stein theorem prove the following result.

THEOREM 2.1. In the fixed-length submodel where $|\xi| = \rho_0$, the minimum risk orthogonally equivariant estimator of $\xi$ is $\hat{\xi}_E(\rho_0)$, defined in (2.10). This estimator is minimax and admissible among all estimators of $\xi$. Among estimators of $\xi$ whose length is constrained to be $\rho_0$, the minimum risk orthogonally equivariant estimator is $\hat{\xi}_{CE}(\rho_0)$, defined in (2.11). This alternative estimator is minimax and admissible among all estimators whose length is $\rho_0$.

It is of interest to compare $\hat{\xi}_E$ with $\hat{\xi}_{CE}$ and with two other orthogonally equivariant estimators: $X$ and

(2.12)  $\hat{\xi}_{AE}(\rho_0) = (\rho_0^2/|X|)\,\hat{\theta}.$

The latter estimator will be seen to approximate $\hat{\xi}_E(\rho_0)$ for large values of $q$ (Theorem 2.3). While $\hat{\xi}_E(\rho_0)$ strictly dominates every other orthogonally equivariant estimator in the fixed-length submodel, the improvement is large in the case of $X$ and is much smaller in the case of $\hat{\xi}_{AE}(\rho_0)$ or $\hat{\xi}_{CE}(\rho_0)$. These points will be clarified through the next two theorems.

From (2.3) and (2.7), the risk of the general orthogonally equivariant estimator $\hat{\xi} = h(|X|)X$ is

(2.13)  $R_q(\hat{\xi}, \xi) = q^{-1} E\big[\,h^2(|X|)|X|^2 - 2\rho_0\,h(|X|)|X|\,A_q(\rho_0|X|) + \rho_0^2\,\big].$

Substituting the appropriate values of $h(|X|)$ into (2.13) yields

THEOREM 2.2. In the fixed-length submodel where $|\xi| = \rho_0$,

(2.14)  $R_q(\hat{\xi}_E(\rho_0), \xi) = q^{-1} E\big[\,\rho_0^2 - \rho_0^2 A_q^2(\rho_0|X|)\,\big],$

(2.15)  $R_q(\hat{\xi}_{AE}(\rho_0), \xi) = q^{-1} E\big[\,\rho_0^2 - 2\rho_0^3|X|^{-1}A_q(\rho_0|X|) + \rho_0^4|X|^{-2}\,\big],$

(2.16)  $R_q(\hat{\xi}_{CE}(\rho_0), \xi) = q^{-1} E\big[\,2\rho_0^2 - 2\rho_0^2 A_q(\rho_0|X|)\,\big].$

The risks of the three estimators in Theorem 2.2 can also be computed by Stein's (1981) method for estimators of the form $\hat{\xi} = X + g(X)$:

(2.17)  $R_q(\hat{\xi}, \xi) = 1 + q^{-1} E\Big[\,|g(X)|^2 + 2\sum_{i=1}^{q} \partial g_i(X)/\partial X_i\,\Big],$

where $g_i$ is the $i$th component of $g$. This approach yields strikingly different, though necessarily equivalent, expressions for the risks of the three estimators. For example, Stein's formula gives

(2.18)  $R_q(\hat{\xi}_E(\rho_0), \xi) = q^{-1} E\big[\,\{|X| - \rho_0 A_q(\rho_0|X|)\}^2 + 2\rho_0^2\{1 - A_q^2(\rho_0|X|)\}\,\big] - 1.$

From (2.18) and (2.14), we see that the sufficient statistic $X$ is not complete in the fixed-length submodel.
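The equality in expectation of the two risk expressions (2.14) and (2.18), despite their different integrands, can be checked by simulation. The sketch below is our illustration, not the paper's: it evaluates both expressions by Monte Carlo at an arbitrary $\xi$ with $|\xi| = \rho_0$, using the Bessel-function form (2.8) of $A_q$ via SciPy's `iv`.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

rng = np.random.default_rng(2)
q, rho0, reps = 6, 2.0, 100000

xi = np.zeros(q)
xi[0] = rho0                          # any xi with |xi| = rho0 works, by equivariance
x = rng.standard_normal((reps, q)) + xi
r = np.sqrt(np.sum(x**2, axis=1))     # |X|
Aq = iv(q / 2, rho0 * r) / iv(q / 2 - 1, rho0 * r)   # A_q(rho0 |X|), eq. (2.8)

risk_214 = np.mean(rho0**2 * (1.0 - Aq**2)) / q       # Monte Carlo form of (2.14)
risk_218 = np.mean((r - rho0 * Aq) ** 2
                   + 2 * rho0**2 * (1.0 - Aq**2)) / q - 1.0   # form of (2.18)
```

Both quantities agree to within Monte Carlo error, while the integrands inside the two expectations differ as functions of $X$, which is the non-completeness point made above.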

2.2. Properties of $A_q(z)$. Further developments and the calculation of $\hat{\xi}_E(\rho_0)$ rely on the following results. The function $A_q$ satisfies the recursion

(2.19)  $A_q(z) = 1/A_{q-2}(z) - (q-2)/z, \qquad q \ge 3,$

by Schou (1978, Appendix A). In particular,

(2.20)  $A_1(z) = \tanh(z), \qquad A_3(z) = \coth(z) - 1/z,$

and so forth for odd orders $q$. The function $A_q$ also satisfies the differential equation

(2.21)  $A_q'(z) = 1 - (q-1)A_q(z)/z - A_q^2(z),$

as in Schou (1978, Section 2). For every integer $q \ge 1$, $A_q(z)$ is strictly monotone increasing and concave on $z \ge 0$, with $A_q''(z) \le 0$,

(2.22)  $A_q(0) = 0, \quad \lim_{z\to\infty} A_q(z) = 1, \quad A_q'(0) = 1/q, \quad \lim_{z\to\infty} A_q'(z) = 0,$

by Watson (1986, Appendix A). Finally, for every $z \ge 0$,

(2.23)  $\lim_{q\to\infty} z A_q(qz) = (z^2 + 1/4)^{1/2} - 1/2.$

To verify (2.23), let $B_q(z) = zA_q(qz)$ and write $B(z)$ for the limit of a convergent subsequence in $\{B_q(z) : q \ge 1\}$. Equation (2.21) and the second line in (2.22) give

(2.24)  $0 = z^2 - B(z) - B^2(z).$

The positive root of (2.24) is $B(z) = (z^2 + 1/4)^{1/2} - 1/2$, implying (2.23).
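The recursion (2.19), the closed forms (2.20), and the limit (2.23) are all easy to verify numerically. The following sketch is our addition; it assumes SciPy's `iv` for the modified Bessel function $I_\nu$.

```python
import numpy as np
from scipy.special import iv   # modified Bessel function I_nu

def A(q, z):
    # A_q(z) = I_{q/2}(z) / I_{q/2-1}(z)   -- equation (2.8)
    return iv(q / 2, z) / iv(q / 2 - 1, z)

z = 3.0
# recursion (2.19): A_q = 1/A_{q-2} - (q-2)/z, checked here with q = 5
rec_gap = A(5, z) - (1.0 / A(3, z) - 3.0 / z)
# closed forms (2.20): A_1 = tanh, A_3 = coth - 1/z
gap_1 = A(1, z) - np.tanh(z)
gap_3 = A(3, z) - (1.0 / np.tanh(z) - 1.0 / z)
# limit (2.23): z A_q(qz) -> sqrt(z^2 + 1/4) - 1/2 as q -> infinity
q, zz = 200, 0.7
B_q = zz * A(q, q * zz)
B_limit = np.sqrt(zz**2 + 0.25) - 0.5
```

We keep $q = 200$ in the last check because `iv` overflows for very large order and argument; even so, $B_q$ already sits close to its limit, consistent with the convergence used in the proof of Theorem 2.3.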

2.3. Asymptotic risks. For $t \ge 0$, let

(2.25)  $r_E(t) = t/(1+t)$

and

(2.26)  $r_{CE}(t) = \big[\,1 + \{(1+t)^{1/2} - t^{1/2}\}^2\,\big]\,t/(1+t).$

Evidently $r_{CE}(t) > r_E(t)$ whenever $t > 0$. The maximum difference between $r_{CE}(t)$ and $r_E(t)$ is only .091. As the next theorem shows, this figure is the maximum difference between the asymptotic risk of $\hat{\xi}_{CE}(\rho_0)$ and the asymptotic risk of the best orthogonally equivariant estimator $\hat{\xi}_E(\rho_0)$.
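The maximum gap can be reproduced by a grid search over (2.25) and (2.26); in the sketch below (our check, not the paper's) the maximum comes out near 0.09, attained around $t \approx 0.6$.

```python
import numpy as np

t = np.linspace(1e-6, 10.0, 200001)
r_E = t / (1.0 + t)                                                   # (2.25)
r_CE = (1.0 + (np.sqrt(1.0 + t) - np.sqrt(t)) ** 2) * t / (1.0 + t)   # (2.26)
gap = r_CE - r_E
max_gap = gap.max()          # maximum asymptotic risk gap between xi_CE and xi_E
t_star = t[gap.argmax()]     # where the gap is attained
```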

THEOREM 2.3. In the fixed-length submodel where $|\xi| = \rho_0$, the following uniform risk approximations hold for every finite $c > 0$:

(2.27)  $\lim_{q\to\infty} \sup_{\rho_0^2 \le qc} |R_q(\hat{\xi}_E(\rho_0), \xi) - r_E(\rho_0^2/q)| = 0,$

and likewise for $R_q(\hat{\xi}_{AE}(\rho_0), \xi)$, while

(2.28)  $\lim_{q\to\infty} \sup_{\rho_0^2 \le qc} |R_q(\hat{\xi}_{CE}(\rho_0), \xi) - r_{CE}(\rho_0^2/q)| = 0.$

Moreover, the estimators $\hat{\xi}_E(\rho_0)$ and $\hat{\xi}_{AE}(\rho_0)$ are asymptotically equivalent in the sense that

(2.29)  $\lim_{q\to\infty} \sup_{\rho_0^2 \le qc} q^{-1} E|\hat{\xi}_E(\rho_0) - \hat{\xi}_{AE}(\rho_0)|^2 = 0.$

PROOF. Let $\{\rho_q : q \ge 1\}$ be any sequence of positive numbers such that $\rho_q^2/q \to a$, where $a$ is finite. Let $\{\xi_q \in R^q\}$ be any sequence such that $|\xi_q| = \rho_q$. To prove (2.27) it suffices to show that

(2.30)  $\lim_{q\to\infty} R_q(\hat{\xi}_E(\rho_q), \xi_q) = r_E(a).$

Let $X_q$ be a random $q$-vector with $N(\xi_q, I)$ distribution. As $q \to \infty$,

(2.31)  $|X_q|^2/q \to 1 + a$ in probability,

and therefore, by (2.23),

(2.32)  $q^{-1}\rho_q|X_q|\,A_q(\rho_q|X_q|) \to a$ in probability.

Limit (2.27) now follows from (2.14) in Theorem 2.2.

Similar reasoning handles the risks of $\hat{\xi}_{AE}(\rho_0)$ and $\hat{\xi}_{CE}(\rho_0)$.

Let

(2.33)  $C_q = q^{-1}(\hat{\xi}_E - \xi_q)'(\hat{\xi}_{AE} - \xi_q)$

and observe that

(2.34)  $q^{-1}E_{\xi_q}|\hat{\xi}_E - \hat{\xi}_{AE}|^2 = R_q(\hat{\xi}_E, \xi_q) + R_q(\hat{\xi}_{AE}, \xi_q) - 2E_{\xi_q}(C_q).$

From (2.31) and (2.32), $C_q \to a/(1+a)$ in probability. Hence

(2.35)  $\liminf_{q\to\infty} E_{\xi_q}(C_q) \ge r_E(a).$

On the other hand, by Cauchy-Schwarz and (2.27) and its counterpart for $\hat{\xi}_{AE}(\rho_0)$,

(2.36)  $\limsup_{q\to\infty} E_{\xi_q}(C_q) \le r_E(a).$

Conclusion (2.29) now follows from (2.34) and these considerations.

2.4. Geometry of the asymptotics. Figure 1, which is suggested by the figures in Stein (1962) and Brandwein and Strawderman (1990), exhibits the geometry of the limits in Theorem 2.3. Under the triangular array asymptotics used to prove the theorem, the following relations are very nearly true with high probability when dimension $q$ is large:

(2.37)  $|q^{-1/2}\xi_q|^2 = a, \qquad |q^{-1/2}X - q^{-1/2}\xi_q|^2 = 1, \qquad |q^{-1/2}X|^2 = 1 + a.$

Consequently, the large triangle in Figure 1 is nearly right-angled, with

(2.38)  $\cos^2(\alpha) = a/(1+a).$

The circle in Figure 1 represents the parameter space of the fixed-length submodel in which $|q^{-1/2}\xi_q|^2 = a$.

As was noted in (2.2), orthogonally equivariant estimators lie along the vector $X$. The scaled equivariant estimator $q^{-1/2}\hat{\xi}$ that minimizes the loss $|q^{-1/2}\hat{\xi} - q^{-1/2}\xi_q|^2$ is the orthogonal projection of $q^{-1/2}\xi_q$ onto $X$. For large $q$, the minimizing $\hat{\xi}$ approximately satisfies

(2.39)  $q^{-1/2}\hat{\xi} = |q^{-1/2}\xi_q|\cos(\alpha)\,\hat{\theta} = [a/(1+a)]\,q^{-1/2}X$

with high probability. Algebraically, $q^{-1/2}\hat{\xi}$ coincides asymptotically with $q^{-1/2}\hat{\xi}_{AE}$. On the other hand, since minimizing loss also minimizes risk, $q^{-1/2}\hat{\xi}$ coincides asymptotically with $q^{-1/2}\hat{\xi}_E$.

Thus, from the geometry,

(2.40)  $q^{-1}|\hat{\xi}_E - \xi|^2 = q^{-1}|\hat{\xi}_{AE} - \xi|^2 = a\sin^2(\alpha) = r_E(a)$

is very nearly true with high probability for large $q$. This conclusion agrees with limit (2.27) in Theorem 2.3. Applying Pythagoras' theorem to the smallest right-angled triangle in Figure 1 yields the asymptotic approximation

(2.41)  $q^{-1}|\hat{\xi}_{CE} - \xi|^2 = r_E(a) + [a^{1/2} - a^{1/2}\cos(\alpha)]^2 = r_{CE}(a),$

in agreement with limit (2.28).

[Figure 1 goes here]

3. The Full Model.

In the full $N(\xi, I)$ model, with $\xi \in R^q$, we can pursue an adaptive strategy for estimating $\xi$: first devise a good estimator $\hat{\rho}$ of $|\xi|$ and then form

(3.1)  $\hat{\xi}_E(\hat{\rho}) = \hat{\rho}\,A_q(\hat{\rho}|X|)\,\hat{\theta}, \qquad \hat{\xi}_{AE}(\hat{\rho}) = (\hat{\rho}^2/|X|)\,\hat{\theta}, \qquad \hat{\xi}_{CE}(\hat{\rho}) = \hat{\rho}\,\hat{\theta}.$

When $\hat{\rho}^2$ is taken to be $|X|^2 - q + 2$ or $[\,|X|^2 - q + 2\,]_+$, then $\hat{\xi}_{AE}(\hat{\rho})$ becomes the Stein estimator $\hat{\xi}_S$ or the positive-part Stein estimator $\hat{\xi}_{PS}$, respectively. The proper choice of $\hat{\rho}$ and the performance of the estimators (3.1) for $\xi$ are the main themes of this section.
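The algebraic identity between $\hat{\xi}_{AE}(\hat{\rho})$ with $d = 2$ and the Stein estimator is direct to confirm on a computer. The sketch below is our illustration (one observation, $n = 1$; the true mean here is an arbitrary hypothetical choice): when $|X|^2 > q - 2$, so the positive part is inactive, $\hat{\xi}_{AE}$ reproduces $\hat{\xi}_S$ exactly.

```python
import numpy as np
from scipy.special import iv

rng = np.random.default_rng(3)
q = 20
x = rng.standard_normal(q) + 2.0         # one observation; true mean arbitrary

def A(q, z):
    # A_q(z) = I_{q/2}(z)/I_{q/2-1}(z), equation (2.8)
    return iv(q / 2, z) / iv(q / 2 - 1, z)

norm2 = np.sum(x**2)
rho = np.sqrt(max(norm2 - q + 2.0, 0.0))   # rho-hat of (3.7) with d = 2
theta = x / np.sqrt(norm2)                  # direction vector theta-hat

xi_E = rho * A(q, rho * np.sqrt(norm2)) * theta   # adaptive xi_E(rho-hat)
xi_AE = (rho**2 / np.sqrt(norm2)) * theta         # adaptive xi_AE(rho-hat)
xi_S = (1.0 - (q - 2) / norm2) * x                # Stein estimator, n = 1
# since rho^2/|x|^2 = 1 - (q-2)/|x|^2 here, xi_AE and xi_S coincide
```

The estimator `xi_E` differs from `xi_AE` in finite dimensions but, by Theorem 3.3 below, the two merge as $q \to \infty$.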

3.1. Estimation of $|\xi|^2$. The following triangular array central limit theorem suggests good estimators for $|\xi|^2$.

LEMMA 3.1. Let $\{\xi_q \in R^q\}$ be any sequence such that $|\xi_q|^2/q \to a < \infty$ as $q \to \infty$. Then

(3.2)  $L\big[\,q^{-1/2}(|X|^2 - q - |\xi_q|^2) \mid \xi_q\,\big] \Rightarrow N(0,\, 2 + 4a).$

The weak convergence in (3.2) is implied by the algebraic representation

(3.3)  $|X|^2 = q + |\xi_q|^2 + \{|X - \xi_q|^2 - q\} + 2\xi_q'(X - \xi_q)$

and the Lindeberg-Feller theorem. To apply the latter, note that $L[\,|X| \mid \xi_q\,]$ depends on $\xi_q$ only through $|\xi_q|$. Hence, there is no loss of generality in taking each component of $\xi_q$ to be $q^{-1/2}|\xi_q|$.
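The limiting variance $2 + 4a$ in Lemma 3.1 is easy to check in simulation. The sketch below is our illustration: with $a = 1$, the standardized statistic in (3.2) should have mean near 0 and variance near 6.

```python
import numpy as np

rng = np.random.default_rng(4)
q, a, reps = 400, 1.0, 20000

# each component set to q^{-1/2}|xi_q|, as the remark above permits
xi = np.full(q, np.sqrt(a))                           # |xi|^2 = q a
x = rng.standard_normal((reps, q)) + xi
t = (np.sum(x**2, axis=1) - q - q * a) / np.sqrt(q)   # statistic in (3.2)
m, v = t.mean(), t.var()                              # near 0 and 2 + 4a = 6
```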

The next theorem gives a local asymptotic minimax bound on the mean squared error of estimators of $|\xi|^2$ in high dimension. The proof is in Section 4.

THEOREM 3.2. In the full $N(\xi, I)$ model, for every finite $a > 0$,

(3.4)  $\lim_{c\to\infty}\, \liminf_{q\to\infty}\, \inf_{\hat{\rho}}\, \sup_{|\,|\xi|^2/q - a\,| \le q^{-1/2}c}\, q^{-1} E_\xi(\hat{\rho}^2 - |\xi|^2)^2 \;\ge\; 2 + 4a,$

the infimum being taken over all estimators $\hat{\rho}$.

The lower bound (3.4) is sharp in the following sense: if $\hat{\rho}^2 = |X|^2 - q + d$ or $[\,|X|^2 - q + d\,]_+$, where $d$ is a constant, then

(3.5)  $\lim_{q\to\infty}\, \sup_{|\,|\xi|^2/q - a\,| \le q^{-1/2}c}\, q^{-1} E_\xi(\hat{\rho}^2 - |\xi|^2)^2 = 2 + 4a$

for every finite $c > 0$. This assertion is immediate from Lemma 3.1. In particular, the uniformly minimum variance unbiased estimator of $|\xi|^2$, which is $\hat{\rho}^2 = |X|^2 - q$, is locally asymptotically minimax among all estimators of $|\xi|^2$ as dimension $q$ increases.

The UMVU estimator has the unfortunate property of being negative with positive probability.

An analogous lower bound for estimators of $|\xi|$ is

(3.6)  $\lim_{c\to\infty}\, \liminf_{q\to\infty}\, \inf_{\hat{\rho}}\, \sup_{|\,|\xi|^2/q - a\,| \le q^{-1/2}c}\, E_\xi(\hat{\rho} - |\xi|)^2 \;\ge\; (1 + 2a)/(2a).$

It is attained asymptotically by the estimator

(3.7)  $\hat{\rho} = [\,|X|^2 - q + d\,]_+^{1/2},$

where $d$ is any constant.

3.2. Estimation of $\xi$. We begin by computing the risks of the adaptive estimators (3.1) when $\hat{\rho}$ is given by (3.7). Since these adaptive estimators are orthogonally equivariant, it follows from (2.3) that

(3.8)  $R_q(\hat{\xi}_E(\hat{\rho}), \xi) = q^{-1} E\big[\,\hat{\rho}^2 A_q^2(\hat{\rho}|X|) - 2|\xi|A_q(|\xi||X|)\,\hat{\rho}\,A_q(\hat{\rho}|X|) + |\xi|^2\,\big],$

(3.9)  $R_q(\hat{\xi}_{AE}(\hat{\rho}), \xi) = q^{-1} E\big[\,\hat{\rho}^4|X|^{-2} - 2\hat{\rho}^2|\xi||X|^{-1}A_q(|\xi||X|) + |\xi|^2\,\big],$

(3.10)  $R_q(\hat{\xi}_{CE}(\hat{\rho}), \xi) = q^{-1} E\big[\,\hat{\rho}^2 - 2\hat{\rho}|\xi|A_q(|\xi||X|) + |\xi|^2\,\big].$

Formula (3.9) is also valid for $\hat{\rho}^2 = |X|^2 - q + d$. Thus, it applies to both the Stein estimator and the positive-part Stein estimator, which arise when $d = 2$. This choice of $d$ is known to minimize the risk of $\hat{\xi}_{AE}(\hat{\rho})$.

The selection of $d$ to minimize the risk of $\hat{\xi}_E(\hat{\rho})$ is less clear. A numerical study based on 40,000 Monte Carlo samples and double-precision arithmetic suggests that, when $q = 3$ and $d = 2.85$ or when $q = 5$ and $d = 2.7$, the estimator $\hat{\xi}_E(\hat{\rho})$ dominates the positive-part Stein estimator. Figure 2 exhibits the risk function difference computed in the experiment for $q = 3$. We conjecture that, for every $q$, there exist choices of $d$, depending on $q$, such that $\hat{\xi}_E(\hat{\rho})$ dominates the positive-part Stein estimator.

Any such improvement in risk must tend to zero as $q \to \infty$, because of the next result.

[Figure 2 goes here]

THEOREM 3.3. In the full $N(\xi, I)$ model with $\hat{\rho}^2 = [\,|X|^2 - q + d\,]_+$, the following risk approximations hold for every finite $c > 0$:

(3.11)  $\lim_{q\to\infty} \sup_{|\xi|^2 \le qc} |R_q(\hat{\xi}_E(\hat{\rho}), \xi) - r_E(|\xi|^2/q)| = 0,$

and likewise for $R_q(\hat{\xi}_{AE}(\hat{\rho}), \xi)$, while

(3.12)  $\lim_{q\to\infty} \sup_{|\xi|^2 \le qc} |R_q(\hat{\xi}_{CE}(\hat{\rho}), \xi) - r_{CE}(|\xi|^2/q)| = 0.$

Moreover, the adaptive estimators $\hat{\xi}_E(\hat{\rho})$ and $\hat{\xi}_{AE}(\hat{\rho})$ are asymptotically equivalent in the sense that

(3.13)  $\lim_{q\to\infty} \sup_{|\xi|^2 \le qc} q^{-1} E_\xi|\hat{\xi}_E(\hat{\rho}) - \hat{\xi}_{AE}(\hat{\rho})|^2 = 0.$

The proof of this theorem is similar to that for Theorem 2.3, relying on an asymptotic analysis of the exact risks given in (3.8) to (3.10). Equations (3.11) and (3.12) indicate that the exact risks are better plotted against $|\xi|^2/q$ rather than $|\xi|^2$. Figure 3 displays in this fashion the risk functions of the positive-part Stein estimator when $q = 3, 5, 9, 19$ (solid curves), computed from 40,000 Monte Carlo samples. The dotted curve in Figure 3 is the limiting risk function as $q \to \infty$, given by (3.11) and (2.25). The rate of convergence seems quick.

[Figure 3 goes here]

We turn now to the main result of this section: the asymptotic optimality of $\hat{\xi}_E(\hat{\rho})$ and $\hat{\xi}_{AE}(\hat{\rho})$ as dimension $q$ increases. For $\epsilon > 0$, an estimator $\hat{\xi}$ is said to be $\epsilon$-admissible on $B_q(c) = \{\xi \in R^q : |\xi|^2 \le qc\}$ if there does not exist another estimator $\tilde{\xi}$ such that

(3.14)  $R_q(\tilde{\xi}, \xi) < R_q(\hat{\xi}, \xi) - \epsilon$

for every $\xi \in B_q(c)$.

THEOREM 3.4. In the full $N(\xi, I)$ model, for every finite $c > 0$,

(3.15)  $\liminf_{q\to\infty}\, \inf_{\hat{\xi}}\, \sup_{|\xi|^2 \le qc} R_q(\hat{\xi}, \xi) \ge r_E(c),$

the infimum being taken over all estimators $\hat{\xi}$. If $\hat{\rho}^2 = [\,|X|^2 - q + d\,]_+$, then for every finite $c > 0$,

(3.16)  $\lim_{q\to\infty}\, \sup_{|\xi|^2 \le qc} R_q(\hat{\xi}_E(\hat{\rho}), \xi) = r_E(c),$

and, for every $\epsilon > 0$, $\hat{\xi}_E(\hat{\rho})$ is $\epsilon$-admissible on $B_q(c)$ for all sufficiently large $q$. The same assertions hold for $\hat{\xi}_{AE}(\hat{\rho})$, in which case $\hat{\rho}^2 = |X|^2 - q + d$ also works.

The theorem is proved in Section 4. It entails that both the positive-part Stein estimator and the Stein estimator, as well as the new estimator $\hat{\xi}_E(\hat{\rho})$, are asymptotically minimax and asymptotically $\epsilon$-admissible on large compact balls about the origin. The estimator $X$ has neither property, because

(3.17)  $\lim_{q\to\infty}\, \sup_{|\xi|^2 \le qc} R_q(X, \xi) = 1,$

in contrast to (3.16). Similarly, $\hat{\xi}_{CE}(\hat{\rho})$ lacks both asymptotic optimality properties.

3.3. Estimation of risk. Stein's formula (2.17) generates unbiased estimators for the risks of $\hat{\xi}_E(\hat{\rho})$ and $\hat{\xi}_{AE}(\hat{\rho})$ that are consistent as $q \to \infty$. Because this approach requires considerable algebra, at least in the case of $\hat{\xi}_E(\hat{\rho})$, it seems worth looking for simpler bootstrap or asymptotic risk estimators. Three such risk estimators are discussed in what follows.

Let $\hat{\xi}_I$ be any orthogonally equivariant estimator of $\xi$, with risk

(3.18)  $R_q(\hat{\xi}_I, \xi) = r_q(|\xi|^2).$

We say that $\hat{\xi}_I$ is regular if, for every sequence $\{\xi_q \in R^q\}$ such that $|\xi_q|^2/q \to a$ as $q$ increases, we have

(3.19)  $\lim_{q\to\infty} r_q(|\xi_q|^2) = r(a)$

for some function $r$ that does not depend on the sequence $\{\xi_q\}$. By Theorem 3.3, the estimators $\hat{\xi}_{CE}(\hat{\rho})$, $\hat{\xi}_E(\hat{\rho})$ and $\hat{\xi}_{AE}(\hat{\rho})$ are each regular in this sense, the function $r$ being respectively $r_{CE}$, $r_E$ and $r_E$.

The estimator $\hat{\xi}_{CE}(\hat{\rho})$, defined by (2.11) and (3.7), is just

(3.20)  $\hat{\xi}_{CE}(\hat{\rho}) = [\,1 - (q-d)/|X|^2\,]_+^{1/2}\, X.$

As estimators of the risk $R_q(\hat{\xi}_I, \xi)$, let us consider the parametric bootstrap estimator

(3.21)  $\hat{R}_B = r_q(|\hat{\xi}_{CE}(\hat{\rho})|^2)$

and the asymptotic estimator

(3.22)  $\hat{R}_A = r(|\hat{\xi}_{CE}(\hat{\rho})|^2/q).$
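A rough sketch of how (3.21) might be computed in practice for the positive-part Stein estimator follows. This is our illustration, not the paper's procedure: the inner risk function $r_q$ is not known in closed form here, so it is itself approximated by Monte Carlo, and the data and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
q, d = 50, 2.0

def pstein(x):
    # positive-part Stein estimator (1.7) with n = 1
    return max(1.0 - (q - 2) / np.sum(x**2), 0.0) * x

def r_q(norm2_xi, reps=2000):
    # Monte Carlo surrogate for r_q(|xi|^2); by orthogonal equivariance the
    # risk depends on xi only through |xi|^2, so put xi on the first axis
    xi = np.zeros(q)
    xi[0] = np.sqrt(norm2_xi)
    x = rng.standard_normal((reps, q)) + xi
    losses = [np.sum((pstein(row) - xi) ** 2) for row in x]
    return np.mean(losses) / q

x_obs = rng.standard_normal(q) + 1.0            # data; true |xi|^2 = q, i.e. a = 1
norm2_hat = max(np.sum(x_obs**2) - q + d, 0.0)  # |xi_CE(rho-hat)|^2 = rho-hat^2
R_B = r_q(norm2_hat)                  # bootstrap risk estimator (3.21)
R_bad = r_q(np.sum(pstein(x_obs) ** 2))   # plug-in via xi_PS: wrong length
```

The second plug-in feeds $r_q$ the squared length of $\hat{\xi}_{PS}$, which systematically underestimates $|\xi|^2$; this is the failure mode of resampling from $N(\hat{\xi}_{PS}, I)$ that the section describes.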
