Modulation Estimators and Confidence Sets

Rudolf Beran* and Lutz Dümbgen†

University of California, Berkeley, and Universität Heidelberg

January 1996, revised August 1997

Abstract. An unknown signal plus white noise is observed at $n$ discrete time points. Within a large convex class of linear estimators of $\theta$, we choose the estimator $\hat\theta$ that minimizes estimated quadratic risk. By construction, $\hat\theta$ is nonlinear. This estimation is done after orthogonal transformation of the data to a reasonable coordinate system. The procedure adaptively tapers the coefficients of the transformed data. If the class of candidate estimators satisfies a uniform entropy condition, then $\hat\theta$ is asymptotically minimax in Pinsker's sense over certain ellipsoids in the parameter space and shares one such asymptotic minimax property with the James-Stein estimator. We describe computational algorithms for $\hat\theta$ and construct confidence sets for the unknown signal. These confidence sets are centered at $\hat\theta$, have correct asymptotic coverage probability, and have relatively small risk as set-valued estimators of $\theta$.

AMS 1991 subject classifications. Primary 62H12; secondary 62M10.

Key words and phrases. Adaptivity, asymptotic minimax, bootstrap, bounded variation, coverage probability, isotonic regression, orthogonal transformation, signal recovery, Stein's unbiased estimator of risk, tapering.

* Research supported in part by National Science Foundation Grant DMS95-30492 and in part by Sonderforschungsbereich 373 at Humboldt-Universität zu Berlin.

† Research supported in part by European Union Human Capital and Mobility Program ERB CHRX-CT 940693.

1 Introduction

The problem of recovering a signal from observation of the signal plus noise may be formulated as follows. Let $X = X_n = [X(t)]_{t \in T}$ be a random function observed on the set $T = T_n = \{1, 2, \ldots, n\}$. The components $X(t)$ are independent with $\mathrm{E}\,X(t) = \theta(t) = \theta_n(t)$ and $\mathrm{Var}[X(t)] = \sigma^2$ for every $t \in T$. Working with functions on $T$ rather than vectors in $\mathbf{R}^n$ is very convenient for the present purposes. As just indicated, we will usually drop the subscript $n$ for notational simplicity. The signal $\theta$ and the noise variance $\sigma^2$ are both unknown. For simplicity we assume throughout that $X$ is Gaussian. Portions of the argument that hold for non-Gaussian $X$ are expressed by the lemmas in Section 6.2.

For any $g \in \mathbf{R}^T$, the space of real-valued functions defined on $T$, let
$$\operatorname{ave}(g) := n^{-1} \sum_{t \in T} g(t).$$
The loss of any estimator $\hat\theta$ for $\theta$ is defined to be
$$L(\hat\theta, \theta) := \operatorname{ave}[(\hat\theta - \theta)^2] \tag{1.1}$$
and the corresponding risk of $\hat\theta$ is
$$\rho(\hat\theta, \theta, \sigma^2) := \mathrm{E}\,L(\hat\theta, \theta).$$

The first goal is to devise an estimator that is efficient in terms of this risk. If $\theta$ and $X$ are electrical voltages, then $\operatorname{ave}(\theta^2)$ and $L(\hat\theta, \theta)$ are the time-averaged powers dissipated in passing the signal $\theta$ and the error $\hat\theta - \theta$ through a unit resistance.

Any estimator $\hat\theta$ of $\theta$ is governed by the asymptotic minimax bound
$$\liminf_{n \to \infty}\ \inf_{\hat\theta}\ \sup_{\operatorname{ave}(\theta^2) \le c} \rho(\hat\theta, \theta, \sigma^2) \ \ge\ \frac{\sigma^2 c}{\sigma^2 + c} \tag{1.2}$$
for every positive $c$ and $\sigma^2$. Inequality (1.2) follows from a more general bound proved by Pinsker (1980) for signal recovery in Gaussian noise (see Nussbaum 1996 and Section 2). It may also be derived from ideas in Stein (1956) by considering best orthogonally equivariant estimators in the submodel where $\operatorname{ave}(\theta^2) = c$ (see Beran 1996b). Let $\hat\sigma^2 = \hat\sigma_n^2$ be an estimator of $\sigma^2$ that is consistent as in display (2.2) of Section 2. Then
$$\hat\theta_S := \Big[1 - \frac{\hat\sigma^2}{\operatorname{ave}(X^2)}\Big]_+ X$$
is essentially the James-Stein (1961) estimator, where $[\cdot]_+$ denotes the positive part function. It achieves the Pinsker bound (1.2) because
$$\lim_{n \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \rho(\hat\theta_S, \theta, \sigma^2) = \frac{\sigma^2 c}{\sigma^2 + c} \tag{1.3}$$
for every positive $c$ and $\sigma^2$. The limit (1.3) follows from Corollary 2.3 or from asymptotics in Casella and Hwang (1982). For the maximum likelihood estimator $\hat\theta_{ML} = X$, the risk is always $\sigma^2$, which is strictly greater than the Pinsker bound.
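As a quick numerical illustration of these two risks — our own sketch, not code from the paper; the signal, sample size, and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 1000, 1.0
t = np.arange(1, n + 1)
theta = np.exp(-5.0 * t / n)                      # arbitrary decaying signal
X = theta + sigma * rng.standard_normal(n)

sigma2_hat = sigma**2                             # sigma^2 treated as known here
f_S = max(1.0 - sigma2_hat / np.mean(X**2), 0.0)  # scalar modulator [1 - s2/ave(X^2)]_+
theta_JS = f_S * X

print("loss of MLE X    :", np.mean((X - theta)**2))         # close to sigma^2 = 1
print("loss of theta_S  :", np.mean((theta_JS - theta)**2))  # near c*sigma^2/(sigma^2+c)
```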

Section 2 of this paper constructs estimators of $\theta$ that are asymptotically minimax over a variety of ellipsoids in the parameter space while achieving, in particular, the asymptotic minimax bound (1.2) for every $c > 0$. These modulation estimators take the form $\hat f X = [\hat f(t) X(t)]_{t \in T}$. Here $\hat f : T \to [0,1]$ depends on $X$ and is chosen to minimize the estimated risk of the linear estimator $fX$ over all functions $f$ in a class $\mathcal{F} = \mathcal{F}_n \subset [0,1]^T$. Many well-known estimators are of this form with special classes $\mathcal{F}$. In the present paper we analyze such estimators under rather general assumptions on $\mathcal{F}$. How large this class may be is at the heart of the analysis. Taking $\mathcal{F}$ to be the set of all functions from $T$ to $[0,1]$ leads to a poor modulation estimator. A successful choice is to let $\mathcal{F}$ be a closed convex set of functions with well-behaved uniform covering numbers. One example is the set of all functions in $[0,1]^T$ that are nonincreasing. The asymptotic theory of such modulation estimators, including links with the literature, is the subject of Section 2. Section 4 develops algorithms for computing $\hat f X$ in the example of $\mathcal{F}$ just cited.

Section 3 constructs confidence sets that are centered at a modulation estimator $\hat f X$ and have correct asymptotic coverage probability for $\theta$. The risk of the modulation estimator at the center is shown to determine the risk of the confidence set, when that is viewed as a set-valued estimator for $\theta$. In this manner, efficiency of a modulation estimator determines the efficiency of the associated confidence set.

Before estimation of $\theta$, the data $X$ may be transformed orthogonally without changing its Gaussian character. A modulation estimator computed in the new coordinate system can be transformed back into the original coordinate system to yield an estimator of $\theta$. Standard choices for such preliminary orthogonal transformation include Fourier transforms, wavelet transforms, or analysis-of-variance transforms. When applied in this manner, modulation estimators perform data-driven tapering of empirical Fourier, wavelet or analysis-of-variance coefficients. Section 5 includes numerical examples of modulation estimators and confidence bounds after Fourier transformation.

2 Modulation estimators

After defining modulation estimators, this section obtains uniform asymptotic approximations to their risks. Let $\mathcal{F} = \mathcal{F}_n$ be a given subset of $[0,1]^T$. Each function $f \in \mathcal{F}$ is called a modulator and defines a candidate linear estimator $fX = [f(t)X(t)]_{t \in T}$ for $\theta$. The risk of this candidate estimator under quadratic loss (1.1) is
$$\rho(fX, \theta, \sigma^2) = \mathrm{E}\,L(fX, \theta) = \operatorname{ave}[\sigma^2 f^2 + \theta^2 (1 - f)^2]. \tag{2.1}$$
For brevity, we will write $R(f, \theta, \sigma^2)$ in place of $\rho(fX, \theta, \sigma^2)$.

We will first construct a suitably consistent estimator $\hat R(f)$ of this risk. Suppose that $\hat\sigma^2 = \hat\sigma_n^2$ is an estimator of $\sigma^2$, constructed (for instance) by one of the methods described later. Let $X^*$ be a bootstrap random vector in $\mathbf{R}^T$ such that $\mathcal{L}(X^* \mid X, \hat\sigma^2) = N_T(X, \hat\sigma^2 I)$. The corresponding bootstrap risk estimator for $R(f, \theta, \sigma^2)$ is
$$\mathrm{E}\big[L(f X^*, X) \mid X, \hat\sigma^2\big] = R(f, X, \hat\sigma^2).$$
We call $R(f, X, \hat\sigma^2)$ the naive risk estimator because it is badly biased upwards, even asymptotically. The key point is
$$\mathrm{E}\,R(f, X, \sigma^2) = \operatorname{ave}\big[f^2 \sigma^2 + (1-f)^2(\theta^2 + \sigma^2)\big] = R(f, \theta, \sigma^2) + \operatorname{ave}\big[(1-f)^2 \sigma^2\big].$$

Two possible corrections to the naive risk estimator are
$$\hat R_C(f) := \operatorname{ave}\big[f^2 \hat\sigma^2 + (1-f)^2(X^2 - \hat\sigma^2)\big] = R(f, X, \hat\sigma^2) - \operatorname{ave}\big[(1-f)^2 \hat\sigma^2\big],$$
$$\hat R_B(f) := \max\big\{\operatorname{ave}(f^2 \hat\sigma^2),\ \hat R_C(f)\big\} = \operatorname{ave}(f^2 \hat\sigma^2) + \big[\operatorname{ave}\big((1-f)^2(X^2 - \hat\sigma^2)\big)\big]_+ .$$
Risk estimator $\hat R_C$ is essentially Mallows' (1973) $C_L$ criterion or Stein's (1981) unbiased estimator of risk, with estimation of $\sigma^2$ incorporated. Risk estimator $\hat R_B$ corrects the possible negativity of $\operatorname{ave}[(1-f)^2(X^2 - \hat\sigma^2)]$ as an estimator for $\operatorname{ave}[(1-f)^2 \theta^2]$. Let $X^*$ be a random vector in $\mathbf{R}^T$ such that $\mathcal{L}(X^* \mid X, \hat\sigma^2)$ is $N_T(\hat\mu, \hat\sigma^2 I)$, where $\hat\mu = \hat\mu(X, \hat\sigma^2)$ is a vector such that
$$\operatorname{ave}\big[(1-f)^2 \hat\mu^2\big] = \big[\operatorname{ave}\big((1-f)^2(X^2 - \hat\sigma^2)\big)\big]_+ ,$$
for instance $\hat\mu = \hat\gamma X$ with $\hat\gamma^2 = [\operatorname{ave}((1-f)^2(X^2 - \hat\sigma^2))]_+ \big/ \operatorname{ave}[(1-f)^2 X^2]$. Then the bootstrap risk estimator $\mathrm{E}[L(f X^*, \hat\mu) \mid X, \hat\sigma^2]$ is precisely $\hat R_B(f)$.
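In code, both corrected risk estimators are one-liners; a minimal sketch of our own, with $\operatorname{ave}$ realized as a mean over $t$:

```python
import numpy as np

def risk_C(f, X, sigma2_hat):
    """Mallows/Stein-type estimate: ave[f^2*s2 + (1-f)^2*(X^2 - s2)]."""
    return np.mean(f**2 * sigma2_hat + (1 - f)**2 * (X**2 - sigma2_hat))

def risk_B(f, X, sigma2_hat):
    """max{ave(f^2*s2), risk_C(f)}: corrects the possible negativity in risk_C."""
    return max(np.mean(f**2 * sigma2_hat), risk_C(f, X, sigma2_hat))
```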

Let $\hat R$ denote either $\hat R_C$ or $\hat R_B$. We propose to estimate $\theta$ by the modulation estimator $\hat f X$, where $\hat f$ is any function in $\mathcal{F}$ that minimizes $\hat R(f)$. Unless stated otherwise it is assumed throughout that $\mathcal{F}$ is a closed convex subset of $[0,1]^T$ containing all constants $c \in [0,1]$.

Because both $\hat R_C(\cdot)$ and $\hat R_B(\cdot)$ are convex functions on $[0,1]^T$, the minimizer $\hat f$ over $\mathcal{F}$ exists in each case. These minimizers are unique with probability one because $\hat R_C(f)$ is strictly convex in $f$ whenever $X(t) \ne 0$ for every $t \in T$. Similarly, the risk function $R(f, \theta, \sigma^2)$ defined through (2.1) is strictly convex over $[0,1]^T$, with unique minimizer $\tilde f$.

REMARK A. The modulation estimator $\hat f X$ behaves poorly when the class $\mathcal{F}$ is too large. For instance, let $\mathcal{F}$ be the class of all functions in $[0,1]^T$. The minimizer of $R(\cdot, \theta, \sigma^2)$ over $[0,1]^T$ is the "oracle" modulator (cf. Donoho and Johnstone 1994)
$$\tilde g := \theta^2 / (\theta^2 + \sigma^2),$$
the division being componentwise, while the minimizer of $\hat R(\cdot)$ over $\mathcal{F}$ is now the greedy modulator $\hat g_+$, where
$$\hat g := (X^2 - \hat\sigma^2) / X^2 .$$

To simplify the discussion, suppose that $\sigma^2$ is known and $\hat\sigma^2 \equiv \sigma^2$. Then the estimator $\hat g_+ X$ is of the general form $\hat\theta := [S(X(t))]_{t \in T}$ for some measurable function $S$ on the line. Since the maximum likelihood estimator $X$ is componentwise admissible, the risk function $\rho(\hat\theta, \theta, \sigma^2)$ of $\hat\theta$ is either identical to $\rho(X, \theta, \sigma^2) \equiv \sigma^2$, or there is a real number $\xi$ such that $\int (x - S(x))^2 \, dN(\xi, \sigma^2)(x) > \sigma^2$. In the latter case, if $\theta(\cdot) \equiv \xi$, then
$$\rho(\hat\theta, \theta, \sigma^2) \ >\ \sigma^2 = \rho(X, \theta, \sigma^2) \ >\ \frac{\sigma^2 \xi^2}{\sigma^2 + \xi^2},$$
the latter being the asymptotic risk of the James-Stein estimator $\hat\theta_S$. Thus, the maximum risk of $\hat g_+ X$ is worse than that of estimators achieving Pinsker's asymptotic minimax bound (1.2) and is even worse than that of the naive estimator $X$.
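A small Monte Carlo check of this phenomenon (our own sketch; the constant signal level $\xi = 3$ and $\sigma = 1$ are arbitrary choices, with $\sigma^2$ treated as known):

```python
import numpy as np

rng = np.random.default_rng(1)
n, xi, sigma = 100000, 3.0, 1.0                     # constant signal theta = xi
X = xi + sigma * rng.standard_normal(n)
g_plus = np.maximum((X**2 - sigma**2) / X**2, 0.0)  # greedy modulator, sigma^2 known
print("greedy loss:", np.mean((g_plus * X - xi)**2))          # exceeds sigma^2 = 1
print("MLE loss   :", np.mean((X - xi)**2))                   # ~ sigma^2 = 1
print("JS limit   :", sigma**2 * xi**2 / (sigma**2 + xi**2))  # 0.9, cf. (1.3)
```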

(6)

It should be mentioned that greedy modulation can be made successful in some sense if one overestimates the variance $\sigma^2$ systematically. Donoho and Johnstone (1994) propose threshold estimators of the componentwise form $\hat\theta = (1 - \lambda_n \sigma / |X|)_+ X$ or $\hat\theta = 1\{|X| \ge \lambda_n \sigma\}\, X$, and prove that they have surprising optimality properties if $\lambda_n = (2 \log n)^{1/2}(1 + o_n)$ with a suitable sequence $(o_n)_n$ tending to zero. These estimators are similar to $\hat g_+ X$ if $\hat g$ is computed with $\hat\sigma_n^2 := \lambda_n^2 \sigma^2$. While showing good performance in the case of "sparse signals," these estimators do not achieve the Pinsker bound (1.2) or the minimax bounds in Corollary 2.3 below. Also, the construction of confidence bounds for their loss seems to be intractable. Section 5 illustrates the possibly poor performance of hard thresholding for non-sparse signals.

REMARK B. Kneip's (1994) ordered linear smoothers are equivalent to certain modulation estimators computed after suitable orthogonal transformation of $X$. The conditions that we impose on $\mathcal{F}$ in this paper are substantially weaker than the ordering of $\mathcal{F}$ required by Kneip. Consequently, our results also apply to the ridge regression, spline estimation, and kernel estimation examples discussed in Kneip's paper. The earlier paper of Li (1987) treated non-diagonal linear estimators indexed by a parameter $h$. Li's optimality result may be compared with Theorem 2.1 below. However, it does not seem easy to relate Li's conditions on the range of $h$ to our conditions on $\mathcal{F}$. The latter conditions give access to empirical process results that yield asymptotic distributions for the loss of $\hat f X$ and hence confidence sets for $\theta$ centered at modulation estimators.

REMARK C. Nussbaum (1996) surveyed constructions of adaptive estimators that achieve Pinsker-type asymptotic minimax bounds. For instance, Golubev and Nussbaum (1992) treated adaptive, asymptotically minimax estimation when $\theta_i = g(x_i)$ and $g$ lies in an ellipsoid of unknown radius within a Sobolev space of unknown order. Corollary 2.3 below is of related character. However, our results make no smoothness assumptions on $\theta$. For instance, sample paths up to time $n$ of suitably scaled, discrete-time, independent white noise ultimately lie, as $n \to \infty$, within the ball $\operatorname{ave}(\theta^2) \le c$.

Useful classes of modulators $\mathcal{F}$ can be characterized through their uniform covering numbers, which are defined as follows. For any probability measure $Q$ on $T$, consider the pseudo-distance $d_Q(f, g)^2 := \int (f - g)^2 \, dQ$ on $[0,1]^T$. For every positive $u$, let
$$N(u, \mathcal{F}, d_Q) := \min\Big\{\#\mathcal{F}_o : \mathcal{F}_o \subset \mathcal{F},\ \inf_{f_o \in \mathcal{F}_o} d_Q(f_o, f) \le u \ \ \forall\, f \in \mathcal{F}\Big\}.$$
Define the uniform covering number
$$N(u, \mathcal{F}) := \sup_Q N(u, \mathcal{F}, d_Q),$$
where the supremum is taken over all probabilities on $T$. Let
$$J(\mathcal{F}) := \int_0^1 \sqrt{\log N(u, \mathcal{F})}\, du .$$

Throughout, $C$ denotes a generic universal real constant which does not depend on $n$, $\theta$, $\sigma^2$ or $\mathcal{F}$, but whose value may be different in various places.

THEOREM 2.1. Let $\mathcal{F}$ be any closed subset of $[0,1]^T$ containing $0$, let $\tilde f$ be a minimizer of $R(f, \theta, \sigma^2)$ over $f \in \mathcal{F}$, and let $\hat f$ minimize either $\hat R_C(f)$ or $\hat R_B(f)$ over $f \in \mathcal{F}$. Then
$$\mathrm{E}\,\big|G - R(\tilde f, \theta, \sigma^2)\big| \ \le\ C\,\Big(J(\mathcal{F})\,\frac{\sigma^2 + \sigma\sqrt{\operatorname{ave}(\theta^2)}}{\sqrt{n}} + \mathrm{E}\,|\hat\sigma^2 - \sigma^2|\Big),$$
where $G$ is any one of the following quantities:
$$L(\hat f X, \theta), \qquad \inf_{f \in \mathcal{F}} L(fX, \theta), \qquad \hat R_C(\hat f), \qquad \hat R_B(\hat f).$$
In particular,
$$\big|\rho(\hat f X, \theta, \sigma^2) - R(\tilde f, \theta, \sigma^2)\big| \ \le\ C\,\Big(J(\mathcal{F})\,\frac{\sigma^2 + \sigma\sqrt{\operatorname{ave}(\theta^2)}}{\sqrt{n}} + \mathrm{E}\,|\hat\sigma^2 - \sigma^2|\Big).$$

This theorem is about convergence of losses and risks. The next result uses convexity of $\mathcal{F}$ to establish that $\hat f$ and $\tilde f$, as well as $\hat f X$ and $\tilde f X$, converge to one another. Note that the second bound holds uniformly in $\theta \in \mathbf{R}^T$.

THEOREM 2.2. Let $\hat f$ be the minimizer of $\hat R_C$. Then
$$\mathrm{E}\operatorname{ave}\big[(\theta^2 + \sigma^2)(\hat f - \tilde f)^2\big] \ \le\ C\,\Big(J(\mathcal{F})\,\frac{\sigma^2 + \sigma\sqrt{\operatorname{ave}(\theta^2)}}{\sqrt{n}} + \mathrm{E}\,|\hat\sigma^2 - \sigma^2|\Big),$$
$$\mathrm{E}\operatorname{ave}\big[(\hat f X - \tilde f X)^2\big] \ \le\ C\,\Big(J(\mathcal{F})\,\frac{\sigma^2}{\sqrt{n}} + \mathrm{E}\,|\hat\sigma^2 - \sigma^2|\Big).$$

Given consistency of $\hat\sigma^2$ and boundedness of $\sigma^2 + \operatorname{ave}(\theta^2)$, a key assumption on $\mathcal{F}$ that ensures success of the modulation estimator $\hat f X$ defined above is that $J(\mathcal{F}) = o(n^{1/2})$.

Here are some examples of modulator classes $\mathcal{F}$ to which Theorem 2.1 applies.

EXAMPLE 1 (Stein shrinkage). Suppose that $\mathcal{F}$ consists of all constant functions in $[0,1]^T$. The minimizer over $\mathcal{F}$ of $R(f, \theta, \sigma^2)$ is
$$\tilde f_S \equiv 1 - \sigma^2 / \big[\sigma^2 + \operatorname{ave}(\theta^2)\big].$$
The minimizer of both $\hat R_C$ and $\hat R_B$ is
$$\hat f_S \equiv \big[1 - \hat\sigma^2 / \operatorname{ave}(X^2)\big]_+ .$$
The resulting modulation estimator $\hat f_S X$ is the (modified) James-Stein (1961) estimator $\hat\theta_S$. Here one easily shows that $N(u, \mathcal{F}) \le 1 + (2u)^{-1}$, whence $J(\mathcal{F})$ is bounded by a universal constant.
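The finiteness of $J(\mathcal{F})$ here is easy to check numerically from the covering bound (a sketch; the midpoint rule suffices because the integrand's singularity at $0$ is only logarithmic):

```python
import numpy as np

# With N(u, F) <= 1 + (2u)^{-1}:  J(F) <= int_0^1 sqrt(log(1 + 1/(2u))) du.
u = (np.arange(100000) + 0.5) / 100000
print(np.mean(np.sqrt(np.log(1.0 + 1.0 / (2.0 * u)))))  # finite universal constant, ~1
```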

EXAMPLE 2 (Multiple Stein shrinkage). Let $\mathcal{B} = \mathcal{B}_n$ be a partition of $T$ and define
$$\mathcal{F} := \Big\{\sum_{B \in \mathcal{B}} 1_B\, c(B) : c \in [0,1]^{\mathcal{B}}\Big\},$$
where $1_B$ is the indicator function of $B$. The values of $c(B)$ that define $\tilde f$ and $\hat f$, respectively, are
$$\tilde c(B) = \operatorname{ave}(1_B \theta^2) \big/ \operatorname{ave}\big[1_B(\theta^2 + \sigma^2)\big], \qquad \hat c(B) = \big[\operatorname{ave}\big(1_B(X^2 - \hat\sigma^2)\big)\big]_+ \big/ \operatorname{ave}(1_B X^2).$$
The modulation estimator $\hat f X$ now has the asymptotic form of the multiple shrinkage estimator in Stein (1966). Elementary calculations show that $N(u, \mathcal{F}) \le [1 + (2u)^{-1}]^{\#\mathcal{B}}$. Thus $J(\mathcal{F})$ is bounded by a universal constant times $(\#\mathcal{B})^{1/2}$, so that $J(\mathcal{F}) = o(n^{1/2})$ follows from the intuitively appealing condition $\#\mathcal{B} = o(n)$.
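A direct transcription of the blockwise formula for $\hat c(B)$ (our own sketch; `blocks` is any partition of the index set, supplied by the caller as integer index arrays):

```python
import numpy as np

def multishrink_modulator(X, blocks, sigma2_hat):
    """Blockwise modulator: on each block B of the partition,
    c_hat(B) = [ave(1_B (X^2 - s2))]_+ / ave(1_B X^2)."""
    f = np.empty_like(X, dtype=float)
    for B in blocks:                        # B: integer index array of one block
        num = max(np.mean(X[B]**2 - sigma2_hat), 0.0)
        f[B] = num / np.mean(X[B]**2)
    return f                                # the estimator is f * X
```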

EXAMPLE 3 (Monotone shrinkage). Let $\mathcal{F}_{mon}$ be the set of all nonincreasing functions in $[0,1]^T$. The class of candidate estimators $\{fX : f \in \mathcal{F}_{mon}\}$ includes the nested model-selection estimators $f_k X$, $0 \le k \le n$, defined by $f_k(t) := 1\{t \le k\}$. In fact, $\mathcal{F}_{mon}$ is the convex hull of $\mathcal{D}_{MS} := \{f_0, f_1, \ldots, f_n\}$. Elementary calculations show that
$$N(u, \mathcal{D}_{MS}) \ \le\ 1 + u^{-2} \ \le\ 2u^{-2} \qquad \text{for } 0 < u \le 1.$$
Together with Theorem 5.1 of Dudley (1987) it follows that
$$\log N(u, \mathcal{F}_{mon}) \ \le\ C u^{-1} \qquad \text{for all } u \in\, ]0,1].$$
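The paper defers algorithms to Section 4; here is one standard route, as a hedged sketch. Completing the square componentwise gives $\hat R_C(f) = \operatorname{ave}[X^2(f - \hat g)^2] + \text{const}$ with $\hat g = (X^2 - \hat\sigma^2)/X^2$, so minimizing over $\mathcal{F}_{mon}$ is a weighted antitonic least-squares problem; assuming the standard fact that clipping the unconstrained monotone fit to $[0,1]$ yields the interval-constrained minimizer, pool-adjacent-violators applies:

```python
import numpy as np

def pava_nonincreasing(y, w):
    """Weighted least-squares projection of y onto nonincreasing sequences,
    via pool-adjacent-violators on the reversed (nondecreasing) problem."""
    y, w = y[::-1].astype(float), w[::-1].astype(float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # violator: pool the two blocks
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; cnts[-2] += cnts[-1]; vals[-2] = v
            del vals[-1], wts[-1], cnts[-1]
    return np.repeat(vals, cnts)[::-1]

def monotone_modulator(X, sigma2_hat):
    """Minimize R_C over nonincreasing f in [0,1]^T (assumes X(t) != 0 for all t):
    R_C(f) = ave[X^2 * (f - g_hat)^2] + const, g_hat = (X^2 - s2)/X^2."""
    g_hat = (X**2 - sigma2_hat) / X**2
    return np.clip(pava_nonincreasing(g_hat, X**2), 0.0, 1.0)
```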

EXAMPLE 4 (Monotone shrinkage with respect to a quasi-order). Let $\preceq$ be a quasi-order relation on $T$ (cf. Robertson et al. 1988, Chapter 1.3), and let $\mathcal{F}$ be the set of all functions in $[0,1]^T$ that are nonincreasing with respect to $\preceq$. That means, for all $f \in \mathcal{F}$ and $s, t \in T$,
$$f(s) \ \ge\ f(t) \qquad \text{if } s \preceq t.$$
Here one can easily deduce from the conclusion of Example 3 that
$$\log N(u, \mathcal{F}) \ \le\ C N u^{-1} \qquad \text{for } 0 < u \le 1,$$
where $N = N_n$ is the minimal cardinality of a partition of $(T, \preceq)$ into totally ordered subsets. Thus $J(\mathcal{F})$ is of order $O(N^{1/2})$. To give an example, suppose that $X$ consists of $n = 2^{k+1} - 1$ empirical Haar (or wavelet) coefficients, arranged as a binary tree. If this tree is equipped with its natural order $\preceq$, then the monotonicity constraint $\hat f \in \mathcal{F}$ means that $\hat f X$ is a mixture of histogram estimators (cf. Engel 1994). Here $N = 2^k > n/2$. Therefore, in order to apply our theory one has to replace the class $\mathcal{F}$ with suitable subclasses.

EXAMPLE 5 (Shrinkage with bounded total variation). Let $\mathcal{F}(M)$ be the set of all functions $f$ in $[0,1]^T$ with total variation not greater than $M = M_n$, i.e.
$$\sum_{t=2}^{n} |f(t) - f(t-1)| \ \le\ M.$$
For instance, the class of functions $f(t) := \max\{\min\{p(t), 1\}, 0\}$, where $p$ is a polynomial of degree less than or equal to $M$, belongs to $\mathcal{F}(M)$. Any $f \in \mathcal{F}(M)$ can be written as $(M+1)(f_1 - f_2)$ with $f_1, f_2 \in \mathcal{F}_{mon}$. Hence
$$\log N\big(u, \mathcal{F}(M)\big) \ \le\ 2 \log N\big([2(M+1)]^{-1} u,\ \mathcal{F}_{mon}\big) \ \le\ C (M+1)\, u^{-1}$$
for $0 < u \le 1$. In particular, $J(\mathcal{F}(M)) = O\big[(M+1)^{1/2}\big]$.

The minimizers $\tilde f$ and $\hat f$ in Examples 3-5 lack closed forms. Section 4 describes computational algorithms for $\tilde f$ and $\hat f$ in Examples 3-4. Example 5 differs from the remaining examples both theoretically and computationally and will be treated in detail elsewhere.

A particular consequence of Theorem 2.1 is that the modulation estimators are asymptotically minimax optimal for a large class of submodels for $(\theta, \sigma^2)$. Namely, for $a \in [1, \infty]^T$ and $c > 0$ define the linear minimax risk
$$\tau^2(a, c, \sigma^2) := \inf_{g \in [0,1]^T}\ \sup_{\operatorname{ave}(a\theta^2) \le c} R(g, \theta, \sigma^2).$$

It is shown by Pinsker (1980) that the linear minimax risk approximates the unrestricted minimax risk in that
$$\inf_{\hat\theta}\ \sup_{\operatorname{ave}(a\theta^2) \le c} \rho(\hat\theta, \theta, \sigma^2) \Big/ \tau^2(a, c, \sigma^2) \ \to\ 1 \qquad \text{as } n\,\tau^2(a, c, \sigma^2) \to \infty.$$

Moreover,
$$\tau^2(a, c, \sigma^2) = \sup_{\operatorname{ave}(a\theta^2) \le c} R(g_o, \theta, \sigma^2) = R(g_o, \theta_o, \sigma^2),$$
where $g_o := [1 - (a/\lambda_o)^{1/2}]_+$, $\theta_o^2 := \sigma^2 [(\lambda_o/a)^{1/2} - 1]_+$, and $\lambda_o > 0$ is the unique real number satisfying $\operatorname{ave}\big(a[(\lambda_o/a)^{1/2} - 1]_+\big) = c/\sigma^2$. The special case $a \equiv 1$ yields (1.2).
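Numerically, $\lambda_o$ is easy to find, since the left-hand side $\operatorname{ave}(a[(\lambda/a)^{1/2} - 1]_+)$ is continuous and nondecreasing in $\lambda$; a bisection sketch of our own (the weights `a` and the values of `c` and `sigma2` below are arbitrary illustrations):

```python
import numpy as np

def pinsker_lambda(a, c, sigma2):
    """Solve ave(a * [(lam/a)^{1/2} - 1]_+) = c / sigma2 for lam by bisection."""
    h = lambda lam: np.mean(a * np.maximum(np.sqrt(lam / a) - 1.0, 0.0))
    lo, hi = 0.0, float(np.max(a))
    while h(hi) < c / sigma2:          # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(100):               # bisection: h is nondecreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < c / sigma2 else (lo, mid)
    return 0.5 * (lo + hi)

a = np.linspace(1.0, 10.0, 200)        # illustrative ellipsoid weights a(t) >= 1
lam = pinsker_lambda(a, c=1.0, sigma2=1.0)
g_o = np.maximum(1.0 - np.sqrt(a / lam), 0.0)              # minimax modulator
theta_o2 = 1.0 * np.maximum(np.sqrt(lam / a) - 1.0, 0.0)   # sigma2 * [...]_+
```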

If the minimax modulator $g_o = g_o(\cdot \mid a, c, \sigma^2)$ happens to be in $\mathcal{F}$, which is certainly true for $a \equiv 1$, then
$$\sup_{\operatorname{ave}(a\theta^2) \le c} \rho(\hat f X, \theta, \sigma^2) \ \le\ \sup_{\operatorname{ave}(a\theta^2) \le c} \big[\rho(\hat f X, \theta, \sigma^2) - R(\tilde f, \theta, \sigma^2)\big] + \tau^2(a, c, \sigma^2).$$
Thus Theorem 2.1 immediately implies the following minimax result, where the distribution of $(X, \hat\sigma^2)$ is assumed to depend on $(\theta, \sigma^2)$ only.

COROLLARY 2.3. Suppose that $J(\mathcal{F}) = o(n^{1/2})$, and that for every $c, \sigma^2 > 0$,
$$\Delta_n(c, \sigma^2) := \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{E}\,|\hat\sigma^2 - \sigma^2| \ \to\ 0 \qquad (n \to \infty). \tag{2.2}$$
Then the modulation estimator $\hat f X$ achieves the asymptotic minimax bound (1.2).

More generally, let $a = a_n \in [1, \infty]^T$ be such that
$$\big[1 - (a/\lambda)^{1/2}\big]_+ \in \mathcal{F} \qquad \text{for all constants } \lambda \ge 1. \tag{2.3}$$
Then for every $c, \sigma^2 > 0$,
$$\sup_{\operatorname{ave}(a\theta^2) \le c} \rho(\hat f X, \theta, \sigma^2) \ \le\ \tau^2(a, c, \sigma^2) + O\big(n^{-1/2} J(\mathcal{F}) + \Delta_n(c, \sigma^2)\big).$$

Specifically, let $a(t) = 1$ for $t \in A \subset T$ and $a(t) = \infty$ otherwise. Then $\operatorname{ave}(a\theta^2) \le c$ is equivalent to $\operatorname{ave}(\theta^2) \le c$ and $\theta \equiv 0$ on $T \setminus A$. Here one can easily see that condition (2.3) is equivalent to $1_A \in \mathcal{F}$. The linear minimax risk equals
$$\tau^2(a, c, \sigma^2) = \frac{\sigma^2 \operatorname{ave}(1_A)\, c}{\sigma^2 \operatorname{ave}(1_A) + c},$$
which can be significantly smaller than the bound in (1.2).

In case of $\mathcal{F} = \mathcal{F}_{mon}$, condition (2.3) is equivalent to $a$ being nondecreasing on $T$.

We end this section with some examples for $\hat\sigma^2$. Internal estimators of $\sigma^2$ depend only on $X$ and require additional smoothness or dimensionality restrictions on the possible values of $\theta$ to achieve the consistency property (2.2). One internal estimator of $\sigma^2$, analyzed by Rice (1984) and by Gasser et al. (1986), is
$$\hat\sigma^2_{(1)} = \big[2(n-1)\big]^{-1} \sum_{t=2}^{n} \big[X(t) - X(t-1)\big]^2. \tag{2.4}$$
Here $\mathrm{E}\,|\hat\sigma_n^2 - \sigma^2| \to 0$ as $n \to \infty$, provided that
$$n^{-1} \sum_{t=2}^{n} \big[\theta_n(t) - \theta_n(t-1)\big]^2 \ \to\ 0.$$
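A one-line transcription of (2.4):

```python
import numpy as np

def sigma2_rice(X):
    """First-difference estimator (2.4): [2(n-1)]^{-1} sum_{t=2}^n [X(t)-X(t-1)]^2."""
    d = np.diff(X)
    return np.sum(d**2) / (2.0 * (len(X) - 1))
```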

External estimators of variance are available in linear models, where one observes an $N$-dimensional normal random vector $Y$ with mean $\mathrm{E}\,Y = D\beta$ and covariance matrix $\mathrm{Cov}(Y) = \sigma^2 I_N$ for some design matrix $D \in \mathbf{R}^{N \times n}$, $N = N_n > n$. After suitable linear transformation of $Y$ and $\beta$ one may assume that $\theta$ is the expectation of the vector $X := (Y_1, Y_2, \ldots, Y_n)$. Then the standard estimator for $\sigma^2$ is given by
$$\hat\sigma^2_{(2)} := (N - n)^{-1} \sum_{i=n+1}^{N} Y_i^2,$$
which is independent of $X$, with $(N - n)\,\sigma^{-2}\hat\sigma^2_{(2)} \sim \chi^2_{N-n}$. This estimator also satisfies (2.2), provided that $N - n \to \infty$.

3 Confidence sets

Having replaced the maximum likelihood estimator $X$ with $\hat f X$, a natural question is to what extent $\hat f X$ is closer to the unknown signal $\theta$ than $X$. More precisely, we want to compare the distance $L(X, \hat f X)^{1/2}$ with an upper confidence bound $\hat r = \hat r(X, \hat\sigma^2)$ for $L(\hat f X, \theta)^{1/2}$. In geometrical terms, the confidence ball of primary interest is defined by
$$\hat C = \hat C_n := \big\{\eta \in \mathbf{R}^T : L(\hat f X, \eta) \le \hat r^2\big\}.$$

The radius $\hat r$ is chosen so that the coverage probability $\mathrm{P}(\theta \in \hat C)$ converges to $\alpha \in\, ]0,1[$ as $n$ increases. The full definition of $\hat C$ follows the theorem below. Underlying the construction is the confidence set idea sketched at the end of Stein (1981). The quality of $\hat C$ as a set-valued estimator of $\theta$ will be measured through the quadratic loss
$$L(\hat C, \theta) := \sup_{\eta \in \hat C} L(\eta, \theta) = \big[L(\hat f X, \theta)^{1/2} + \hat r\big]^2. \tag{3.1}$$
This is a natural extension of the quadratic loss defined in (1.1) and has an appealing projection-pursuit interpretation; see Beran (1996a).

One main assumption for this section is that
$$X_n \text{ and } \hat\sigma_n^2 \text{ are independent, with } \mathcal{L}(\sigma^{-2}\hat\sigma_n^2) \text{ depending only on } n, \tag{3.2}$$
such that
$$\lim_{n \to \infty} m\big[\mathcal{L}\big(n^{1/2}(\sigma^{-2}\hat\sigma_n^2 - 1)\big),\ N(0, \gamma^2)\big] = 0.$$
Here $\gamma^2 \ge 0$ is a given constant and $m(\cdot, \cdot)$ metrizes weak convergence of distributions on the line. For instance, the estimator $\hat\sigma^2_{(2)}$ of Section 2 satisfies Condition (3.2) with $\gamma^2 := 2 \lim_{n \to \infty} n/(N_n - n)$, provided that this limit exists. Condition (3.2) is made for the sake of simplicity. It could be replaced with weaker, but more technical, conditions in order to include special internal estimators of variance such as $\hat\sigma^2_{(1)}$. A second key assumption is that

$$\int_0^1 \sqrt{\sup_n \log N(u, \mathcal{F}_n)}\ du \ <\ \infty. \tag{3.3}$$
Roughly speaking, this condition allows us to pretend that $\hat f$ is equal to $\tilde f$. It is satisfied in all Examples 1-5, provided that $\#\mathcal{B}_n = O(1)$ in Example 2, $N_n = O(1)$ in Example 4, and $M_n = O(1)$ in Example 5.

At first let us consider confidence balls centered at the naive estimator $X$. Since $n\,\sigma^{-2}\operatorname{ave}[(X - \theta)^2]$ has a chi-squared distribution with $n$ degrees of freedom, we consider
$$\hat C_N := \big\{\eta \in \mathbf{R}^T : \operatorname{ave}[(X - \eta)^2] \le \hat\sigma^2 (1 + n^{-1/2} c)\big\}$$
for some fixed $c$. The inequality $\operatorname{ave}[(X - \theta)^2] \le \hat\sigma^2(1 + n^{-1/2} c)$ is equivalent to
$$n^{1/2}\big[\sigma^{-2}\operatorname{ave}[(X - \theta)^2] - 1\big] - n^{1/2}(\sigma^{-2}\hat\sigma^2 - 1) \ \le\ \sigma^{-2}\hat\sigma^2\, c = c + o_p(1).$$

Thus the Central Limit Theorem for the chi-squared distribution together with Condition (3.2) implies that $c = (2 + \gamma^2)^{1/2}\,\Phi^{-1}(\alpha)$ yields a confidence set $\hat C_N$ with
$$\lim_{n \to \infty}\ \sup_{\theta \in \mathbf{R}^T,\ \sigma^2 > 0} \big|\mathrm{P}\{\theta \in \hat C_N\} - \alpha\big| = 0,$$
where $\Phi^{-1}(\alpha)$ denotes the $\alpha$-quantile of $N(0,1)$. Moreover,
$$\lim_{n \to \infty}\ \sup_{\theta \in \mathbf{R}^T} \mathrm{P}\big\{|L(\hat C_N, \theta) - 4\sigma^2| > \epsilon\big\} = 0 \qquad \forall\, \epsilon > 0.$$

In what follows we shall see that confidence sets centered at a good modulation estimator $\hat f X$ dominate the naive confidence set $\hat C_N$ in terms of the loss $L(\hat C, \theta)$.

To construct these confidence sets, we first determine the asymptotic distribution of
$$\hat D = \hat D_n := n^{1/2}\big[L(\hat f X, \theta) - \hat R_C(\hat f)\big].$$
This difference compares the loss of $\hat f X$ with an estimate for the expected loss of $\hat f X$.

THEOREM 3.1. Under Conditions (3.2) and (3.3),
$$\lim_{n \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} m\big[\mathcal{L}(\hat D),\ N(0, \kappa^2)\big] = 0$$
for arbitrary $c, \sigma^2 > 0$, where
$$\kappa^2 = \kappa_n^2(\theta, \sigma^2) := 2\sigma^4 \operatorname{ave}\big[(2\tilde f - 1)^2\big] + \gamma^2\sigma^4 \big[\operatorname{ave}(2\tilde f - 1)\big]^2 + 4\sigma^2 \operatorname{ave}\big[\theta^2(1 - \tilde f)^2\big].$$

A consistent estimator $\hat\kappa^2 = \hat\kappa_n^2$ of $\kappa^2$ is obtained by substituting $\hat\sigma^2$ for $\sigma^2$, $\hat f$ for $\tilde f$, and $X^2 - \hat\sigma^2$ for $\theta^2$ in the expression for $\kappa^2$. The implied estimator of the approximating normal distribution $N(0, \kappa^2)$ is $N(0, \hat\kappa^2)$. This leads to the following definition of a confidence ball for $\theta$ that is centered at the modulation estimator $\hat f X$:
$$\hat C := \big\{\eta \in \mathbf{R}^T : L(\hat f X, \eta) \le \hat R_C(\hat f) + n^{-1/2}\,\hat\kappa\,\Phi^{-1}(\alpha)\big\}.$$

The intended coverage probability of $\hat C$ is $\alpha$. The next theorem establishes asymptotic properties of this confidence set construction. Beran (1994) treats in detail the example where $\hat f X$ is the James-Stein estimator. That situation is much easier to analyze than the general case.
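For concreteness, here is a plug-in sketch of the squared radius $\hat R_C(\hat f) + n^{-1/2}\hat\kappa\,\Phi^{-1}(\alpha)$, using exactly the substitutions just described; clipping $\hat\kappa^2$ at zero is our own safeguard, since the plug-in $X^2 - \hat\sigma^2$ for $\theta^2$ can be negative:

```python
import numpy as np
from scipy.stats import norm

def confidence_radius2(X, f_hat, sigma2_hat, alpha, gamma2=0.0):
    """Squared radius R_C(f_hat) + n^{-1/2} kappa_hat Phi^{-1}(alpha) of the ball C_hat."""
    n = len(X)
    theta2_hat = X**2 - sigma2_hat                 # plug-in for theta^2 (may be negative)
    kappa2_hat = (2.0 * sigma2_hat**2 * np.mean((2.0 * f_hat - 1.0)**2)
                  + gamma2 * sigma2_hat**2 * np.mean(2.0 * f_hat - 1.0)**2
                  + 4.0 * sigma2_hat * np.mean(theta2_hat * (1.0 - f_hat)**2))
    kappa2_hat = max(kappa2_hat, 0.0)              # guard against a negative plug-in value
    R_C = np.mean(f_hat**2 * sigma2_hat + (1.0 - f_hat)**2 * theta2_hat)
    return R_C + np.sqrt(kappa2_hat / n) * norm.ppf(alpha)
```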

THEOREM 3.2. Under the conditions of Theorem 3.1, for arbitrary $c, \sigma^2 > 0$,
$$\lim_{n \to \infty,\ K \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{P}\big\{|L(\hat C, \theta) - 4 R(\tilde f, \theta, \sigma^2)| \ge K n^{-1/2}\big\} = 0$$
and
$$\lim_{n \to \infty,\ K \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{P}\big\{|\hat r^2 - R(\tilde f, \theta, \sigma^2)| \ge K n^{-1/2}\big\} = 0.$$

Moreover, $\hat\kappa^2$ is consistent in that
$$\lim_{n \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{P}\big\{|\hat\kappa^2 - \kappa^2| > \epsilon\big\} = 0 \qquad \forall\, \epsilon > 0.$$
If
$$\liminf_{n \to \infty}\ \inf_{\operatorname{ave}(\theta^2) \le c} \kappa_n^2(\theta, \sigma^2) \ >\ 0, \tag{3.4}$$
then
$$\lim_{n \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \big|\mathrm{P}\{\theta \in \hat C\} - \alpha\big| = 0.$$

A sufficient condition for (3.4) is the following: For every $n$, $\mathcal{F} = \mathcal{F}_n$ is such that
$$1\{f \ge c\}\, f \in \mathcal{F} \qquad \text{for all } f \in \mathcal{F} \text{ and } c \in [0,1]. \tag{3.5}$$

Condition (3.4) ensures that $\mathcal{L}(\hat D)$ does not approach a degenerate distribution. Note that Condition (3.5) is satisfied in Examples 1-4. When $R(\tilde f, \theta, \sigma^2) = O(n^{-1/2})$ our confidence ball has loss $L(\hat C, \theta) = O_p(n^{-1/2})$. In fact, according to Theorem 2.1 of Li (1989) this is the smallest possible order of magnitude for a Euclidean confidence ball, unless one imposes further constraints on the signal $\theta$. The result on asymptotic coverage of $\hat C$ may be compared with the lower bound in Theorem 3.2 of Li (1989).

A key step in the proof of Theorem 3.1 is that in the definition of $\hat D$ one may replace $\hat f$ with $\tilde f$. Instead of the normal approximation underlying $\hat C$, a bootstrap approximation of $H = H_n := \mathcal{L}(\hat D)$ that imitates the estimation of $\tilde f$ seems to be more reliable in moderate dimension. Precisely, let $\hat H = \hat H_n$ be the conditional distribution (function) of $\hat D^*$ given $(X, \hat\sigma^2)$, where $\hat D^*$ is computed as $\hat D$ with the pair $(X^*, \hat\sigma^{*2})$ in place of $(X, \hat\sigma^2)$. More precisely, let $\hat\mu = \hat\mu(\cdot \mid X, \hat\sigma^2)$ be an estimator for $\theta$. Let $S_n^2$ be a random variable with a specified distribution depending only on $n$ such that
$$\lim_{n \to \infty} m\big[\mathcal{L}\big(n^{1/2}(S_n^2 - 1)\big),\ N(0, \gamma^2)\big] = 0,$$
where $S_n$ and $(X, \hat\sigma^2)$ are independent. Then
$$\mathcal{L}(X^*, \hat\sigma^{*2} \mid X, \hat\sigma^2) = N_T(\hat\mu, \hat\sigma^2 I) \otimes \mathcal{L}(\hat\sigma^2 S_n^2 \mid X, \hat\sigma^2),$$
the product of the probability measures $N_T(\hat\mu, \hat\sigma^2 I)$ and $\mathcal{L}(\hat\sigma^2 S_n^2 \mid X, \hat\sigma^2)$. The resulting bootstrap confidence bound $\hat r_b(\alpha)$ for $L(\hat f X, \theta)$ is given by
$$\hat r_b^2(\alpha) = \hat R(\hat f) + n^{-1/2}\, \hat H^{-1}(\alpha).$$
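Written out as a simulation, the bootstrap recipe looks as follows. This is a sketch under stated assumptions: $S_n^2$ is drawn as $\chi^2_{df}/df$ (appropriate for the external estimator $\hat\sigma^2_{(2)}$ with $df = N - n$), `fit_modulator` stands for whichever minimizer of $\hat R_C$ over $\mathcal{F}$ is in use (e.g. `monotone_modulator` above), and the bootstrap center `mu_hat` is left as an input, since Theorem 3.3 below constrains its choice (neither $X$ nor $\hat f X$ qualifies):

```python
import numpy as np

def bootstrap_bound2(X, sigma2_hat, mu_hat, fit_modulator, alpha, df, B=500, seed=0):
    """Bootstrap bound r_b^2(alpha) = R_hat(f_hat) + n^{-1/2} * H_hat^{-1}(alpha)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    f_hat = fit_modulator(X, sigma2_hat)
    R_hat = np.mean(f_hat**2 * sigma2_hat + (1 - f_hat)**2 * (X**2 - sigma2_hat))
    d_star = np.empty(B)
    for b in range(B):
        s2_star = sigma2_hat * rng.chisquare(df) / df             # sigma2_hat * S_n^2
        X_star = mu_hat + np.sqrt(sigma2_hat) * rng.standard_normal(n)
        f_star = fit_modulator(X_star, s2_star)
        loss = np.mean((f_star * X_star - mu_hat)**2)             # role of theta -> mu_hat
        R_C = np.mean(f_star**2 * s2_star + (1 - f_star)**2 * (X_star**2 - s2_star))
        d_star[b] = np.sqrt(n) * (loss - R_C)
    return R_hat + np.quantile(d_star, alpha) / np.sqrt(n)
```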

The last theorem of this section states conditions under which $\hat H$ is a consistent estimator for $H$. An interesting fact is that neither $\hat\mu = X$ nor $\hat\mu = \hat f X$ satisfies these conditions.

THEOREM 3.3. Under the assumptions of Theorem 3.1,
$$\lim_{n \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{P}\big\{m(\hat H_n, H_n) > \epsilon\big\} = 0 \qquad \forall\, \epsilon > 0,$$
provided that
$$\hat f = \mathop{\mathrm{argmin}}_{f \in \mathcal{F}} R(f, \hat\mu, \hat\sigma^2) \quad \text{almost surely}, \tag{3.6}$$
$$\limsup_{n \to \infty,\ K \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{P}\big\{\operatorname{ave}(\hat\mu^2) > K\big\} = 0, \tag{3.7}$$
$$\lim_{n \to \infty}\ \sup_{\operatorname{ave}(\theta^2) \le c} \mathrm{P}\Big\{\big|\operatorname{ave}[\hat\mu^2(1 - \hat f)^2] - \operatorname{ave}[\theta^2(1 - \tilde f)^2]\big| > \epsilon\Big\} = 0 \qquad \forall\, \epsilon > 0. \tag{3.8}$$

In particular, suppose that each $\mathcal{F}_n$ has the following property: For all $X \in \mathbf{R}^T$ with $X^2 > 0$ and any $c \in [0,1]$,
$$c = \big[\operatorname{ave}\big(1\{\hat f = c\}(X^2 - \hat\sigma^2)\big)\big]_+ \big/ \operatorname{ave}\big(1\{\hat f = c\}\, X^2\big) \qquad \text{if } \{\hat f = c\} \ne \emptyset,$$
$$c = \operatorname{ave}\big(1\{\tilde f = c\}\, \theta^2\big) \big/ \operatorname{ave}\big[1\{\tilde f = c\}(\theta^2 + \sigma^2)\big] \qquad \text{if } \{\tilde f = c\} \ne \emptyset.$$
