Two adaptive rates of convergence in pointwise density estimation

Cristina BUTUCEA

Humboldt-Universität zu Berlin, SFB 373, Spandauer Strasse 1, D-10117 Berlin, Germany

Université Paris VI, UPRES-A 7055 CNRS, 4, Place Jussieu, F-75005 Paris, France

(E-mail: butucea@wiwi.hu-berlin.de, butucea@ccr.jussieu.fr)

Abstract

We consider pointwise density estimation and look for the best attainable asymptotic rates of convergence. The problem is adaptive, which means that the regularity parameter $\beta$ describing the class of densities varies in a set $B$. We shall consider, successively, two classes of densities, issued from a generalization of $L_2$ Sobolev classes: $W(\beta, p, L)$ and $M(\beta, p, L)$.

Keywords: nonparametric density estimation, adaptive rates, Sobolev classes

1 Introduction

1.1 Adaptivity

We want to estimate the common probability density $f : \mathbb{R} \to [0, +\infty)$ of $n$ independent, identically distributed random variables $X_1, \ldots, X_n$, at a real point $x_0$. We assume that $f$ belongs to a large nonparametric class of functions, $H_\beta = H(\beta, p, L)$, characterized by its smoothness (e.g., order of derivability) $\beta$, a norm $L_p$, $p > 1$, and a radius $L > 0$.

For any estimator $\hat{f}_n$ of $f$, fixed real $x_0$ and $q > 1$, we consider a sequence $\varphi_n$ of positive numbers and define the maximal risk over the class $H$:

$$R_n(\hat{f}_n, \varphi_n, H) = \sup_{f \in H} \varphi_n^{-q}\, E_f\Big[\big|\hat{f}_n(x_0) - f(x_0)\big|^q\Big] \qquad (1.1)$$

where $E_f(\cdot)$ is the expectation with respect to the distribution $P_f$ of $X_1, \ldots, X_n$ when the underlying probability density is $f$.

We say that $\varphi_n$ is an optimal rate of convergence over the class $H = H(\beta, p, L)$ if the maximal risk over this class stays asymptotically positive for all possible estimators, and if there is an estimator whose maximal risk stays asymptotically finite. Minimax theory is concerned with finding the estimators attaining the optimal rates, which are obtained by minimizing the maximal risk over all estimators.

We are interested in adaptive estimation, which means that the regularity parameter $\beta$ is supposed unknown within a given set. An estimator $\hat{f}_n$ is called optimal rate adaptive if, for the optimal rate of convergence $\varphi_{n,\beta}$ over each class and a constant $C > 0$, we have

$$\limsup_{n \to \infty} \sup_{\beta \in B} R_n(\hat{f}_n, \varphi_{n,\beta}, H_\beta) \le C < \infty \qquad (1.2)$$

where $B$ is a non-empty set of values of $\beta$.

We shall prove here that over two different classes of probability density functions, to be defined below and commonly denoted by $H_\beta = H(\beta, p, L)$, no optimal rate adaptive estimator can be found. Similar results were obtained by Lepskii [20], Brown and Low [3] (on Hölder classes of functions) and Tsybakov [27] (on $L_2$ Sobolev classes) for the Gaussian white noise model. They are characteristic of pointwise (not global) estimation. We shall then introduce the definition of an adaptive rate of convergence, which is a modification by Tsybakov [27] of the definition of Lepskii [20] (see also Lepskii [21] and [22]). We also compute the adaptive rate over the same classes of functions, as well as the corresponding rate adaptive estimators.

More precisely, let us define the considered classes of densities. First, we define for integer $\beta \ge 1$, $L > 0$ and $1 < p < \infty$ the class of functions in $L_p$

$$W(\beta, p, L) = \left\{ f : \mathbb{R} \to [0, +\infty) \;:\; \int_{\mathbb{R}} f = 1,\ \int_{\mathbb{R}} \big|f^{(\beta)}(x)\big|^p\, dx \le L^p \right\}$$

where $f^{(\beta)}$, the derivative of order $\beta$ of $f$, is supposed to exist.

Secondly, let us introduce for any absolutely integrable function $f : \mathbb{R} \to [0, +\infty)$ its Fourier transform $F(f)(x) = \int_{\mathbb{R}} f(y)\, e^{-ixy}\, dy$, for any $x$ in $\mathbb{R}$. We define now, for real $\beta > 1 - 1/p$ and $2 \le p < \infty$, the class of absolutely integrable functions whose Fourier transforms belong to $L_p$:

$$M(\beta, p, L) = \left\{ f : \mathbb{R} \to [0, +\infty) \;:\; \int_{\mathbb{R}} f = 1,\ \int_{\mathbb{R}} |F(f)(x)|^p\, |x|^{\beta p}\, dx \le L^p \right\}.$$

From the results of optimal recovery of Donoho and Low [9], it is straightforward to obtain the optimal pointwise rates of convergence over these classes:

$$\varphi_{n,\beta}(W(\beta, p, L)) = \left(\frac{1}{n}\right)^{\frac{\beta - 1/p}{2(\beta - 1/p) + 1}} \quad \text{and} \quad \varphi_{n,\beta}(M(\beta, p, L)) = \left(\frac{1}{n}\right)^{\frac{\beta + 1/p - 1}{2(\beta + 1/p) - 1}}. \qquad (1.3)$$
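As a concrete illustration (not from the paper; the helper names are our own), the two rate exponents in (1.3) can be tabulated for a few parameter values with a short sketch:

```python
# Sketch: evaluate the optimal-rate exponents of (1.3) for W(beta, p, L)
# and M(beta, p, L). Helper names are illustrative assumptions.

def exponent_W(beta, p):
    """Exponent of 1/n in phi_{n,beta} over W(beta, p, L); needs beta > 1/p."""
    s = beta - 1.0 / p
    return s / (2.0 * s + 1.0)

def exponent_M(beta, p):
    """Exponent of 1/n in phi_{n,beta} over M(beta, p, L); needs beta > 1 - 1/p."""
    return (beta + 1.0 / p - 1.0) / (2.0 * (beta + 1.0 / p) - 1.0)

for beta in (1, 2, 3):
    for p in (2, 4):
        print(beta, p, round(exponent_W(beta, p), 4), round(exponent_M(beta, p), 4))
```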

In this paper, we prove that no optimal adaptive estimator can be found, and we look for the adaptive rates of convergence on the previously defined classes $W(\beta, p, L)$ and $M(\beta, p, L)$, for $\beta$ belonging to a set $B_{N_n}$ to be defined for each class. We prove that the adaptive rates of convergence are slower than the optimal rates by a power of $\log n$.

Remark 1.1 If $p'$ is the conjugate of $p$ (i.e., $1/p + 1/p' = 1$), then the optimal rates of convergence (1.3) coincide, for integer $\beta$, on the classes $W(\beta, p, L)$ and $M(\beta, p', L)$ (as do the adaptive rates (2.5) below). Moreover, by a result of Stein and Weiss [24] we have, for integer $\beta \ge 1$ and $1 < p \le 2$,

$$M(\beta, p', L) \subseteq W(\beta, p, L).$$

Thus, parts of our results on one scale of classes can be deduced from the results on the other scale, for certain values of the parameters. Nevertheless, our setups are considerably larger and the classes $W$ and $M$ are not comparable except in the particular case described above. For these reasons, we prefer the above notation and give independent proofs for both setups.
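The coincidence of the exponents under conjugacy is elementary to verify; a minimal numerical sketch (our own check, using $1/p' = 1 - 1/p$):

```python
# Sketch: for integer beta, the W(beta, p, L) rate exponent in (1.3) equals
# the M(beta, p', L) exponent when 1/p + 1/p' = 1 (Remark 1.1).
for beta in (1, 2, 5):
    for p in (1.5, 2.0, 3.0):
        pc = p / (p - 1.0)                              # conjugate exponent p'
        e_w = (beta - 1 / p) / (2 * (beta - 1 / p) + 1)
        e_m = (beta + 1 / pc - 1) / (2 * (beta + 1 / pc) - 1)
        assert abs(e_w - e_m) < 1e-12
```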


1.2 Previous results

The asymptotic study of minimax risks of estimators in the nonparametric framework has developed considerably since the first results of Stone [25] and [26], Bretagnolle and Huber [2], and Ibragimov and Hasminskii [15] and [16]. Beside the density model, nonparametric regression and Gaussian white noise models were studied. Estimation was done over Hölder, Sobolev or Besov classes. For an overview of the results in this area, see Korostelev and Tsybakov [19] and Härdle, Kerkyacharian, Picard and Tsybakov [14].

Almost optimal rates of convergence in pointwise density estimation over $L_p$ Sobolev classes $W(\beta, p, L)$ were obtained by Wahba [28], where techniques of Farrell [11] for the proof of the lower bounds yielded a rate of

$$\left(\frac{1}{n}\right)^{\frac{\beta - 1/s}{2(\beta - 1/s) + 1}}, \quad \text{for } s = p + \varepsilon,\ \varepsilon > 0 \text{ arbitrarily small}.$$

Note that the optimal rate for $W(\beta, p, L)$, as given in (1.3), is this expression with $\varepsilon = 0$.

Techniques of optimal recovery of Donoho [5], Donoho and Liu [8], and Donoho and Low [9] allow one to compute optimal rates of convergence for different risks, in different setups. In these papers the classes $M(\beta, p, L)$ and the corresponding rate in (1.3) first appear.

Lepskii [20] and Brown and Low [3] showed that, for pointwise estimation on Hölder classes, optimal rate adaptive estimators cannot be found, both in the Gaussian white noise and in the density model. In the Gaussian white noise model, Lepskii [20] first considered the problem of finding the adaptive rates. He showed that a loss of logarithmic order is unavoidable and introduced a procedure providing the adaptive estimator. For a detailed overview of adaptive rates of convergence we refer to Donoho, Johnstone, Kerkyacharian and Picard [6] and Härdle, Kerkyacharian, Picard and Tsybakov [14], who give adaptive rates over Besov classes using the wavelet thresholding procedure. Lepski, Mammen and Spokoiny [23], Goldenshluger and Nemirovski [12] and Juditsky [17] also gave adaptive rates of convergence using Lepski's scheme of adaptation. Most of these results are obtained for the Gaussian white noise model.

In density estimation, wavelet techniques were used in the minimax adaptive setup, for Besov classes and $L_p$ risk, by Donoho, Johnstone, Kerkyacharian and Picard [7], Kerkyacharian, Picard and Tribouley [18], and Juditsky [17]. Sharp results, where the asymptotic value of the maximal risk was found explicitly, were obtained over $L_2$ Sobolev classes in $L_2$ risk by Efromovich [10] and Golubev [13], and pointwise in Butucea [4].

In this paper, we are interested in adaptive rates in pointwise density estimation over the $L_p$ Sobolev-type classes $W(\beta, p, L)$ and $M(\beta, p, L)$.

2 Results

We consider the adaptive density estimation problem, at a fixed real point $x_0$, over the classes $H_\beta = H(\beta, p, L)$, when $\beta$ belongs to the discrete set $B_{N_n} = \{\beta_1, \ldots, \beta_{N_n}\}$.

Assumption (A) The set $B_{N_n}$ is such that $\beta_1 < \cdots < \beta_{N_n} < \infty$, for a non-decreasing sequence of positive integers $N_n$. From now on we shall consider two setups. When $H_\beta = W(\beta, p, L)$, the set $B_{N_n}$ contains positive integer values of $\beta$ ($\beta_1 \ge 1$) and $p > 1$, while $H_\beta = M(\beta, p, L)$ implies that $\beta$ can take real values ($\beta_1 > 1 - 1/p$) and $p \ge 2$. Moreover, we suppose that $\lim_{n \to \infty} \beta_{N_n} = \infty$ and, if

$$\Delta_n = \min_{i = 1, \ldots, N_n - 1} |\beta_{i+1} - \beta_i|,$$

we assume that it satisfies

$$\limsup_{n \to \infty} \Delta_n < +\infty \qquad (2.1)$$

together with

$$\lim_{n \to \infty} \frac{\Delta_n \log n}{\beta_{N_n}^2 \log\log n} = \infty. \qquad (2.2)$$
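One concrete grid satisfying Assumption (A), sketched numerically below, is the integer set $B = \{1, \ldots, N_n\}$ with $N_n$ growing like $(\log n)^{1/3}$; this choice is our own illustration, not taken from the paper.

```python
import math

# Sketch (our construction): B = {1, ..., N_n} with N_n ~ (log n)^(1/3).
# The spacing is Delta_n = 1, so (2.1) holds, and the ratio
# Delta_n * log(n) / (beta_{N_n}^2 * log log n) tends to infinity like
# (log n)^(1/3) / log log n -- very slowly and with jumps when N_n steps.
for n in (10**3, 10**6, 10**12, 10**24, 10**48):
    N_n = max(2, int(math.log(n) ** (1.0 / 3.0)))
    grid = list(range(1, N_n + 1))      # B_{N_n}; Delta_n = 1
    ratio = math.log(n) / (grid[-1] ** 2 * math.log(math.log(n)))
    print(n, N_n, round(ratio, 2))
```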

The following definition of an adaptive rate of convergence was introduced by Lepski; see Tsybakov [27]. The original definition of an adaptive rate of convergence by Lepskii [20] is not used here since it has a more special form.

Definition 2.1 The sequence $\psi_{n,\beta}$ is an adaptive rate of convergence over the scale of classes $\{H_\beta,\ \beta \in B_{N_n}\}$ if:

1. There exists an estimator $f_n^*$, independent of $\beta$ over $B_{N_n}$, which is called a rate adaptive estimator, such that

$$\limsup_{n \to \infty} \sup_{\beta \in B_{N_n}} R_n(f_n^*, \psi_{n,\beta}, H_\beta) < \infty. \qquad (2.3)$$

2. If there exist another sequence of positive reals $\eta_{n,\beta}$ and an estimator $f_n$ such that

$$\limsup_{n \to \infty} \sup_{\beta \in B_{N_n}} R_n(f_n, \eta_{n,\beta}, H_\beta) < \infty$$

and, at some $\beta_0$ in $B_{N_n}$, $\frac{\eta_{n,\beta_0}}{\psi_{n,\beta_0}} \to 0$ as $n \to \infty$, then there is another $\beta_0'$ in $B_{N_n}$ such that

$$\frac{\eta_{n,\beta_0}}{\psi_{n,\beta_0}} \cdot \frac{\eta_{n,\beta_0'}}{\psi_{n,\beta_0'}} \to +\infty \quad \text{as } n \to \infty.$$

Note that condition (2.3) introduces a wide class of rates. We choose between those rates by a criterion of uniformity over the set $B_{N_n}$, expressed in the second part of Definition 2.1. If some other rate satisfies a condition similar to (2.3), and if this rate is faster at some point $\beta_0$, then the loss at some other point $\beta_0'$ has to be infinitely greater for large sample sizes $n$.

Remark 2.2 If an optimal adaptive estimator exists, it is also rate adaptive. Indeed, an optimal adaptive estimator satisfies (2.3) by definition, for the optimal rate of convergence $\psi_{n,\beta} = \varphi_{n,\beta}$. We can easily verify that in this case condition 2 in Definition 2.1 is redundant, since such a sequence $\eta_{n,\beta}$ cannot exist.

In what follows we assign to any $\beta$ in $B_{N_n}$ the value

$$\tilde{\beta} = \tilde{\beta}(H) = \begin{cases} \beta - 1/p + 1/2, & \text{if } H = W, \\ \beta + 1/p - 1/2, & \text{if } H = M, \end{cases} \qquad (2.4)$$

where the equalities $H = W$ and $H = M$ denote the cases when we consider the scales of classes $\{W(\beta, p, L),\ \beta \in B_{N_n}\}$ or $\{M(\beta, p, L),\ \beta \in B_{N_n}\}$, respectively. We remark that in both setups $\tilde{\beta} > 1/2$.


Let us define $B^- = B_{N_n} \setminus \{\beta_{N_n}\}$ and

$$\psi_{n,\beta} = \psi_{n,\beta}(H) = \begin{cases} \left(\log n / n\right)^{\frac{\tilde{\beta} - 1/2}{2\tilde{\beta}}}, & \text{if } \beta \in B^-, \\ \left(1/n\right)^{\frac{\tilde{\beta} - 1/2}{2\tilde{\beta}}}, & \text{if } \beta = \beta_{N_n}. \end{cases} \qquad (2.5)$$

Then the rate $\psi_{n,\beta}(H)$ is slower than the optimal rate of convergence, except at the last point $\beta_{N_n}$. As by our hypothesis $\lim_{n \to \infty} \beta_{N_n} = \infty$, this asymptotic phenomenon is not characteristic and we can use the set $B^-$ instead of $B_{N_n}$.
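On the scale $W$, for instance, the loss relative to the optimal rate $\varphi_{n,\beta}$ is exactly a power of $\log n$; a small numeric sketch (parameter values are ours, for illustration):

```python
import math

# Sketch: ratio of the adaptive rate psi = (log n / n)^e to the optimal
# rate phi = (1/n)^e, where e = (tb - 1/2) / (2 tb) and tb = tilde(beta)
# from (2.4) on the W scale (tb = beta - 1/p + 1/2).
beta, p = 2.0, 2.0                        # illustrative values
tb = beta - 1.0 / p + 0.5
e = (tb - 0.5) / (2.0 * tb)
for n in (10**3, 10**6, 10**9):
    psi = (math.log(n) / n) ** e
    phi = (1.0 / n) ** e
    print(n, round(psi / phi, 2))         # equals (log n)^e: the log-price
```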

2.1 The adaptive procedure

Let us proceed to the construction of the estimator $f_n^*$, called the adaptive estimator. We start, for each $\beta$ in $B_{N_n}$, with the corresponding kernel estimator

$$f_{n,\beta}(x_0) = \frac{1}{n h_{n,\beta}} \sum_{i=1}^n K_\beta\left(\frac{X_i - x_0}{h_{n,\beta}}\right). \qquad (2.6)$$

Here the kernel $K_\beta$ is defined in the next section (differently for each setup) and the bandwidth is, in both problems,

$$h_{n,\beta} = \left(\frac{\log n}{n}\right)^{\frac{1}{2\tilde{\beta}}} \text{ if } \beta \in B^-, \qquad h_{n,\beta_{N_n}} = \left(\frac{1}{n}\right)^{\frac{1}{2\tilde{\beta}_{N_n}}},$$

where $\tilde{\beta} = \tilde{\beta}(H)$ and $\tilde{\beta}_{N_n} = \tilde{\beta}_{N_n}(H)$ are given by (2.4). We shall estimate the regularity $\beta$ of the unknown density and plug this estimate into the kernel estimator $f_{n,\beta}$ in order to obtain $f_n^*$, the adaptive estimator, in the spirit of Lepskii [20].

More precisely, let $a > 0$ be a sufficiently large constant and

$$\lambda_{n,\gamma} = a \left(\frac{\log n}{n}\right)^{\frac{\tilde{\gamma} - 1/2}{2\tilde{\gamma}}}.$$

Then we define

$$\hat{\beta} = \hat{\beta}(H) = \max\left\{ \beta \in B_{N_n} \;:\; |f_{n,\gamma}(x_0) - f_{n,\beta}(x_0)| \le \lambda_{n,\gamma}\ \ \forall \gamma < \beta,\ \gamma \in B_{N_n} \right\}.$$

In the sequel, $\tilde{\gamma}$ (appearing in $\lambda_{n,\gamma}$) is defined as in (2.4). Finally,

$$f_n^*(x_0) = f_{n,\hat{\beta}}(x_0). \qquad (2.7)$$
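A schematic implementation of the procedure (2.6)-(2.7) may clarify the selection rule. This is a sketch under our own simplifying assumptions: we use one Gaussian kernel for every $\beta$ (the paper's kernels $K_\beta$ are instead defined via their Fourier transforms in Section 3), the $W$-scale transform $\tilde{\beta} = \beta - 1/p + 1/2$, and illustrative values for $a$ and the grid.

```python
import numpy as np

def kernel_estimate(x, x0, h):
    """Kernel density estimate at x0 with bandwidth h (Gaussian kernel,
    our simplification; not the kernels K_beta of Section 3)."""
    u = (x - x0) / h
    return np.mean(np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)) / h

def adaptive_estimate(x, x0, grid, p=2.0, a=1.0):
    """Lepski-type rule (2.7): keep the largest beta whose estimate stays
    within lambda_{n,gamma} of every estimate at a rougher gamma < beta."""
    n = len(x)
    tb = {b: b - 1.0 / p + 0.5 for b in grid}              # tilde(beta), (2.4)
    h = {b: (np.log(n) / n) ** (1.0 / (2.0 * tb[b])) for b in grid}
    lam = {b: a * (np.log(n) / n) ** ((tb[b] - 0.5) / (2.0 * tb[b]))
           for b in grid}
    est = {b: kernel_estimate(x, x0, h[b]) for b in grid}
    beta_hat = grid[0]
    for b in grid:                                         # grid sorted increasingly
        if all(abs(est[g] - est[b]) <= lam[g] for g in grid if g < b):
            beta_hat = b
    return est[beta_hat], beta_hat

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # f = N(0,1); f(0) = 0.3989...
print(adaptive_estimate(x, 0.0, grid=[1, 2, 3]))
```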

2.2 Statement of results

Theorem 2.3 In both pointwise density estimation problems described above, no optimal rate adaptive estimator (see (1.2)) can be found over the scale of classes $\{H(\beta, p, L),\ \beta \in B\}$, as soon as $B$ has at least two different elements and $B \subseteq B_{N_n}$, where $B_{N_n}$ satisfies Assumption (A).

Theorem 2.4 The estimator $f_n^*(x_0)$ of $f(x_0)$ in (2.7) is a rate adaptive estimator, and $\psi_{n,\beta}(H)$ in (2.5) is the adaptive rate of convergence in the sense of Definition 2.1, over the scale of classes $\{H(\beta, p, L),\ \beta \in B\}$, where the set $B$ satisfies Assumption (A).


The proof is organized as follows. In Section 3 we prove that $f_n^*(x_0)$ in (2.7) satisfies, for a constant $C > 0$,

$$\limsup_{n \to \infty} \sup_{\beta \in B_{N_n}} R_n(f_n^*, \psi_{n,\beta}, H_\beta) \le C < \infty. \qquad (2.8)$$

This result will be called the upper bound. Section 4 is devoted to the proof of the lower bound

$$\liminf_{n \to \infty} \inf_{f_n} \sup_{\beta' \in \{\gamma, \beta\}} R_n(f_n, \psi_{n,\beta'}, H_{\beta'}) \ge c > 0,$$

where $\gamma$ and $\beta$ are arbitrarily chosen elements of $B_{N_n}$ such that $\gamma < \beta$, $c > 0$, and the infimum is taken over all possible estimators $f_n$ of $f$. These relations, Theorem 2.3 (proved in Section 5) and the fact that $\psi_{n,\beta}(H)$ is the adaptive rate of convergence over the set $B_{N_n}$ (see also Section 5) imply Theorem 2.4.

3 Upper bounds

We shall prove that the estimator $f_n^*$, independent of $\beta$ in $B_{N_n}$ and defined in (2.7), is such that the upper bound (2.8) holds. Throughout this section, $C$, $c_i$ and $C_i$, $i = 1, 2, \ldots$, denote positive constants, depending possibly on the fixed $q$, $\beta_1$ and $L$.

3.1 Auxiliary results

Definition 3.1 Let the density $f$ belong to the class $H_\beta = H(\beta, p, L)$. Define, for any kernel estimator $f_{n,\gamma}$ of $f$ (see (2.6)) with $\gamma$, $\beta$ in $B_{N_n}$ such that $\gamma \le \beta$, its bias term

$$B_{n,\gamma} = B_{n,\gamma}(x_0, H_\beta) = \big|E_f[f_{n,\gamma}(x_0)] - f(x_0)\big|$$

and its stochastic term

$$Z_{n,\gamma} = Z_{n,\gamma}(x_0, H_\beta) = \big|f_{n,\gamma}(x_0) - E_f[f_{n,\gamma}(x_0)]\big|.$$

Besov, Il'in and Nikol'skii [1], Theorem 15.1, implies the following:

Lemma 3.2 Let $\alpha$, $\beta$ be integers with $0 \le \alpha < \beta$, let $1 \le p_0, p_1, p \le \infty$ and $\beta > 1/p$. If there exists $\theta \in (\alpha/\beta, 1)$ such that

$$\frac{1}{p_0} - \alpha = (1 - \theta)\, \frac{1}{p_1} + \theta \left(\frac{1}{p} - \beta\right), \qquad (3.1)$$

then any function $f \in L_{p_1}(\mathbb{R})$ with $\|f^{(\beta)}\|_p < \infty$ satisfies

$$\|f^{(\alpha)}\|_{p_0} \le C\, \|f\|_{p_1}^{1-\theta}\, \|f^{(\beta)}\|_p^\theta,$$

where $C$ is a constant that depends only on $p_0$, $p_1$, $p$, $\alpha$, $\beta$.


Lemma 3.3 There exists a finite constant $\Lambda$, depending on $L$, $\beta_1$ and $p$ only, such that

$$\sup_{f \in H(\beta, p, L)} \|f\|_\infty \le \Lambda.$$

Proof. For $f \in W(\beta, p, L)$, we apply the previous result with $\alpha = 0$, $p_0 = \infty$, $p_1 = 1$. Then (3.1) takes the form

$$0 = (1 - \theta) + \theta \left(\frac{1}{p} - \beta\right),$$

which implies $\theta = 1/(\beta + 1 - 1/p)$. Then $\theta \in (0, 1)$ if $\beta > 1/p$, which holds by hypothesis. Thus, we apply the previous result, Lemma 3.2, and get

$$\|f\|_\infty \le C\, \|f\|_1^{1-\theta}\, \|f^{(\beta)}\|_p^\theta \le C L^\theta$$

for all $f$ in $W(\beta, p, L)$.

If $f \in M(\beta, p, L)$, then $\|F(f)\|_\infty \le 1$ since $f$ is a density. We have

$$\|f\|_\infty \le \frac{1}{2\pi} \int_{\mathbb{R}} |F(f)(y)|\, dy = \frac{1}{2\pi} \int_{\mathbb{R}} |F(f)(y)|\, \frac{1 + |y|^\beta}{1 + |y|^\beta}\, dy \le \frac{1}{2\pi} \left(\int_{\mathbb{R}} |F(f)(y)|^p \big(1 + |y|^\beta\big)^p\, dy\right)^{1/p} \left(\int_{\mathbb{R}} \frac{dy}{\big(1 + |y|^\beta\big)^{p'}}\right)^{1/p'},$$

where $1/p + 1/p' = 1$. This is less than a constant $\Lambda(L, p, \beta_1) > 0$, for $f$ in the class $M(\beta, p, L)$. □

Lemma 3.4 If $f \in H(\beta, p, L)$ and $\gamma$ is in $B_{N_n}$ such that $\gamma < \beta$, then $f \in H(\gamma, p, L')$, where $L' > 0$ depends only upon $L$ and $p$.

Proof. For the classes $W(\beta, p, L)$, put $p_0 = p$, $p_1 = 1$ in the auxiliary Lemma 3.2. Then (3.1) takes the form

$$\frac{1}{p} - \gamma = 1 - \theta + \theta \left(\frac{1}{p} - \beta\right),$$

which gives $\theta = (\gamma + 1 - 1/p)/(\beta + 1 - 1/p)$, and thus $\theta \in (\gamma/\beta, 1)$ if $\beta > 1/p$ (true, by hypothesis). By Lemma 3.2 we get

$$\|f^{(\gamma)}\|_p \le C\, \|f\|_1^{1-\theta}\, \|f^{(\beta)}\|_p^\theta \le C L^\theta$$

for all $f$ in $W(\beta, p, L)$.

For $f \in M(\beta, p, L)$, as $p > 1$ and $\|F(f)\|_\infty \le 1$, we write

$$\int_{\mathbb{R}} |F(f)(y)|^p |y|^{\gamma p}\, dy \le \int_{|y| \le 1} |F(f)(y)|^p\, dy + \int_{|y| > 1} |F(f)(y)|^p |y|^{\beta p}\, dy \le 2 + L^p. \;\Box$$


Lemma 3.5 If $\gamma$ and $\beta$ are in $B_{N_n}$ such that $\gamma \le \beta$, and if $f$ belongs to $H_\beta = H(\beta, p, L)$, then there exists $b_\gamma(H) > 0$ (given in the proof, and depending also on $L$ and $p$) such that

$$B_{n,\gamma}(x_0, H_\beta) \le b_\gamma(H)\, h_{n,\gamma}^{\gamma - 1/p} \quad \text{if } H = W(\beta, p, L), \qquad B_{n,\gamma}(x_0, H_\beta) \le b_\gamma(H)\, h_{n,\gamma}^{\gamma - 1 + 1/p} \quad \text{if } H = M(\beta, p, L),$$

and

$$E_f\big[Z_{n,\gamma}(x_0, H_\beta)\big]^2 \le \Lambda\, \frac{\|K_\gamma\|_2^2}{n h_{n,\gamma}} \overset{\mathrm{def}}{=} s_{n,\gamma}^2. \qquad (3.2)$$

Moreover, for the kernels $\{K_\gamma,\ \gamma \in B_{N_n}\}$ used in the proof, we can find constants $K_{\max}$, $k_{\min}$, $k_{\max}$ and $b_{\max}$, depending possibly on the fixed $p$ and $\beta_1$, such that

$$\|K_\gamma\|_\infty \le K_{\max}, \qquad k_{\min} \le \|K_\gamma\|_2 \le k_{\max}$$

for all $\gamma$ in $B_{N_n}$, and $b_\gamma(H) \le b_{\max}$ for all $\gamma$ and $\beta$ in $B_{N_n}$ such that $\gamma \le \beta$.

Remark 3.6 From now on, $\tilde{\gamma} = \tilde{\gamma}(H)$ is obtained as in (2.4). Then Lemma 3.5 says that $B_{n,\gamma}(x_0, H_\beta) \le b_\gamma\, h_{n,\gamma}^{\tilde{\gamma}(H) - 1/2}$.

Proof. If $H = W_\beta = W(\beta, p, L)$, let us introduce a kernel $K_\gamma$ of order $\gamma$ in the expression of the kernel estimator (2.6). Such a kernel must be bounded uniformly in $\gamma$ ($\|K_\gamma\|_\infty \le K_{\max}$ for all $\gamma$ in $B_{N_n}$), absolutely integrable, with a bounded $L_2$ norm ($k_{\min} \le \|K_\gamma\|_2 \le k_{\max}$ for all $\gamma$ in $B_{N_n}$), and such that $\int_{\mathbb{R}} K_\gamma(y)\, dy = 1$, $\int_{\mathbb{R}} y^j K_\gamma(y)\, dy = 0$ for $j = 1, \ldots, \gamma - 1$, and

$$\int_{\mathbb{R}} |K_\gamma(y)|\, |y|^{\gamma - 1/p}\, dy \le L_0 < \infty, \qquad (3.3)$$

where $L_0$ depends only on the fixed $p$ and $\beta_1$. It is not difficult to find examples of such kernels. For example, the kernel $K_\gamma$ having Fourier transform $F(K_\gamma)(u) = 1/(1 + |u|^{\gamma p})$ satisfies these conditions, and the proofs are given later on.

From now on we denote $\int = \int_{\mathbb{R}}$. Then the bias can be bounded as follows:

$$B_{n,\gamma}(x_0, W_\beta) = \left|\int K_\gamma(y)\, [f(x_0 + y h_{n,\gamma}) - f(x_0)]\, dy\right|$$
$$\le \left|\int K_\gamma(y) \sum_{j=1}^{\gamma-1} \frac{(y h_{n,\gamma})^j}{j!}\, f^{(j)}(x_0)\, dy\right| + \left|\int K_\gamma(y) \int_{x_0}^{x_0 + y h_{n,\gamma}} \frac{(x_0 + y h_{n,\gamma} - u)^{\gamma - 1}}{(\gamma - 1)!}\, f^{(\gamma)}(u)\, du\, dy\right|$$
$$\le \left|\sum_{j=1}^{\gamma-1} \frac{h_{n,\gamma}^j}{j!}\, f^{(j)}(x_0) \int y^j K_\gamma(y)\, dy\right| + \int |K_\gamma(y)|\, \|f^{(\gamma)}\|_p\, \frac{|y h_{n,\gamma}|^{\gamma - 1/p}}{(\gamma - 1)!\, ((\gamma - 1)p' + 1)^{1/p'}}\, dy,$$

where the first term is zero by the hypotheses on the kernel, and we applied Hölder's inequality, with $1/p + 1/p' = 1$, to the second term. This gives

$$B_{n,\gamma}(x_0, W_\beta) \le \frac{L'}{(\gamma - 1)!}\, \frac{h_{n,\gamma}^{\gamma - 1/p}}{((\gamma - 1)p' + 1)^{1/p'}} \int |K_\gamma(y)|\, |y|^{\gamma - 1/p}\, dy \le b_\gamma(W)\, h_{n,\gamma}^{\tilde{\gamma}(W) - 1/2},$$

where $L'$ is the constant from Lemma 3.4 and

$$b_\gamma(W) = \frac{L' \int |K_\gamma(y)|\, |y|^{\gamma - 1/p}\, dy}{(\gamma - 1)!\, ((\gamma - 1)p' + 1)^{1/p'}}.$$

We can also see that $b_\gamma(W) \le b_{\max}$, with $b_{\max}$ depending only on $p$, $L$ and $\beta_1$, for all $\gamma$ and $\beta$ in $B_{N_n}$ with $\gamma \le \beta$.

If $H = M_\beta = M(\beta, p, L)$, let us choose the kernel $K_\gamma$ defined by its Fourier transform as follows:

$$F(K_\gamma)(u) = \frac{1}{1 + |u|^{\gamma p}}.$$

By Plancherel's formula, this kernel satisfies

$$\|K_\gamma\|_2 = \frac{1}{\sqrt{2\pi}}\, \|F(K_\gamma)\|_2 = \left(\frac{1}{2\pi} \int \frac{du}{(1 + |u|^{\gamma p})^2}\right)^{1/2} \ge \left(\frac{1}{2\pi} \int_{|u| \le 1} \frac{du}{(1 + |u|^{\gamma p})^2}\right)^{1/2} \ge \left(\frac{1}{4\pi}\right)^{1/2} =: k_{\min}(p, \beta_1),$$

since $(1 + |u|^{\gamma p})^2 \le 4$ for $|u| \le 1$; also

$$\|K_\gamma\|_2 \le \left(1 + \frac{1}{2\pi} \int_{|u| > 1} \frac{du}{(1 + |u|^{\gamma p})^2}\right)^{1/2} =: k_{\max}(p, \beta_1)$$

and

$$\|K_\gamma\|_\infty \le \frac{1}{2\pi} \int |F(K_\gamma)(u)|\, du \le \frac{1}{2\pi} \left(2 + \int_{|u| > 1} \frac{du}{1 + |u|^{\gamma p}}\right) =: K_{\max}(p, \beta_1),$$

since $\gamma p \ge \beta_1 p > 1$ in our setting. Then the bias is

$$B_{n,\gamma}(x_0, M_\beta) = \left|\int \frac{1}{h_{n,\gamma}}\, K_\gamma\left(\frac{y - x_0}{h_{n,\gamma}}\right) f(y)\, dy - f(x_0)\right| = \frac{1}{2\pi} \left|\int F(f)(y)\, e^{i x_0 y}\, [F(K_\gamma)(h_{n,\gamma} y) - 1]\, dy\right| \le \frac{1}{2\pi} \int |F(f)(y)|\, \frac{|h_{n,\gamma} y|^{\gamma p}}{1 + |h_{n,\gamma} y|^{\gamma p}}\, dy.$$

Then we apply Hölder's inequality, with $1/p + 1/p' = 1$, as follows:

$$B_{n,\gamma}(x_0, M_\beta) \le \frac{h_{n,\gamma}^\gamma}{2\pi} \int |F(f)(y)|\, |y|^\gamma\, \frac{|h_{n,\gamma} y|^{\gamma(p-1)}}{1 + |h_{n,\gamma} y|^{\gamma p}}\, dy \le \frac{L'\, h_{n,\gamma}^{\gamma - 1/p'}}{2\pi} \left(\int \frac{|y|^{\gamma p}}{(1 + |y|^{\gamma p})^{p'}}\, dy\right)^{1/p'} = b_\gamma(M)\, h_{n,\gamma}^{\tilde{\gamma}(M) - 1/2},$$

where $L'$ is the constant from Lemma 3.4 and

$$b_\gamma(M) = \frac{L'}{2\pi} \left(\int \frac{|y|^{\gamma p}}{(1 + |y|^{\gamma p})^{p'}}\, dy\right)^{1/p'}.$$

This term is bounded as follows:

$$b_\gamma(M) \le \frac{L'}{2\pi} \left(\int_{|y| \le 1} dy + \int_{|y| > 1} \frac{dy}{|y|^{\gamma p'}}\right)^{1/p'} =: b_{\max}(p, L, \beta_1),$$

since $|y|^{\gamma p}/(1 + |y|^{\gamma p})^{p'} \le 1$ for $|y| \le 1$, while for $|y| > 1$ it is at most $|y|^{-\gamma p (p' - 1)} = |y|^{-\gamma p'}$, with $\gamma p' > 1$.

Let us check, at last, that condition (3.3) is fulfilled:

$$\int_{\mathbb{R}} |K_\gamma(y)|\, |y|^{\gamma - 1/p}\, dy \le \int_{|y| \le 1} |K_\gamma(y)|\, dy + \int_{|y| > 1} |K_\gamma(y)|\, |y|^{\gamma - 1/p}\, dy \le 2 K_{\max} + \left(\int_{|y| > 1} |K_\gamma(y)|^2 |y|^{2\gamma}\, dy\right)^{1/2} \left(\int_{|y| > 1} |y|^{-2/p}\, dy\right)^{1/2} \le L_0(p, \beta_1).$$

For the variance term, we write, using Lemma 3.3,

$$E_f\big[Z_{n,\gamma}(x_0, H_\beta)\big]^2 \le \frac{1}{n h_{n,\gamma}} \int \frac{1}{h_{n,\gamma}}\, K_\gamma^2\left(\frac{y - x_0}{h_{n,\gamma}}\right) f(y)\, dy \le \Lambda\, \frac{\|K_\gamma\|_2^2}{n h_{n,\gamma}}. \;\Box$$
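Since the kernels above are specified only through their Fourier transforms, it may help to see one recovered numerically. The sketch below (with our own discretization choices: truncation level and grids) inverts $F(K_\gamma)(u) = 1/(1 + |u|^{\gamma p})$ on a grid and checks $\int K_\gamma = F(K_\gamma)(0) = 1$ and the $L_2$ norm against Plancherel's identity.

```python
import numpy as np

def trapz(fvals, xs):
    """Trapezoidal rule along the last axis (kept explicit so the sketch
    does not depend on NumPy version details)."""
    return np.sum((fvals[..., 1:] + fvals[..., :-1]) * np.diff(xs) / 2.0, axis=-1)

g, p = 2.0, 2.0                        # gamma and p: illustrative values
U, m = 200.0, 8001                     # truncation and grid size: assumptions
u = np.linspace(-U, U, m)
FK = 1.0 / (1.0 + np.abs(u) ** (g * p))

# K(y) = (1/(2 pi)) * int F(K)(u) cos(u y) du, since F(K) is even.
y = np.linspace(-10.0, 10.0, 401)
K = trapz(FK * np.cos(np.outer(y, u)), u) / (2.0 * np.pi)

print(round(float(trapz(K, y)), 4))    # ~ 1.0 = F(K)(0)
l2_direct = float(np.sqrt(trapz(K**2, y)))
l2_plancherel = float(np.sqrt(trapz(FK**2, u) / (2.0 * np.pi)))
print(round(l2_direct, 4), round(l2_plancherel, 4))   # should nearly agree
```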

Let us recall the following inequalities (see, e.g., Härdle, Kerkyacharian, Picard and Tsybakov [14]).

Lemma 3.7 (Rosenthal's inequality) Let $q \ge 2$ and let $Y_1, \ldots, Y_n$ be independent random variables such that $E[Y_i] = 0$ and $E[|Y_i|^q] < \infty$. Then there exists a constant $C(q)$, depending on $q$, such that

$$E\left[\left|\sum_{i=1}^n Y_i\right|^q\right] \le C(q) \left\{\sum_{i=1}^n E[|Y_i|^q] + \left(\sum_{i=1}^n E[Y_i^2]\right)^{q/2}\right\}.$$

(Bernstein's inequality) Let $Y_1, \ldots, Y_n$ be i.i.d. random variables such that $|Y_i| \le M$, $E[Y_i] = 0$, and denote $b_n^2 = \sum_{i=1}^n E[Y_i^2]$. Then for any $\lambda > 0$,

$$P\left[\left|\sum_{i=1}^n Y_i\right| \ge \lambda\right] \le 2 \exp\left(-\frac{\lambda^2}{2(b_n^2 + \lambda M / 3)}\right).$$


Lemma 3.8 If $f$ belongs to $H_\beta = H(\beta, p, L)$ and $\gamma < \beta$, if $K_\gamma$ is the kernel function and

$$Z_{n,\gamma}(x_0, H_\beta) = \left|\frac{1}{n h_{n,\gamma}} \sum_{i=1}^n \left[K_\gamma\left(\frac{X_i - x_0}{h_{n,\gamma}}\right) - E_f K_\gamma\left(\frac{X_i - x_0}{h_{n,\gamma}}\right)\right]\right|,$$

then for any $u > 0$

$$P_f\big[Z_{n,\gamma}(x_0, H_\beta) \ge u\big] \le 2 \exp\left(-\frac{u^2}{2 s_{n,\gamma}^2 (1 + c_0 u)}\right),$$

where $c_0 > 0$ does not depend on $\gamma$.

Proof. Indeed, we can apply Bernstein's inequality, for $\lambda = nu$, to the i.i.d. centered variables

$$Y_i = \frac{1}{h_{n,\gamma}} \left[K_\gamma\left(\frac{X_i - x_0}{h_{n,\gamma}}\right) - E_f K_\gamma\left(\frac{X_i - x_0}{h_{n,\gamma}}\right)\right],$$

bounded as follows: $|Y_i| \le 2 \|K_\gamma\|_\infty / h_{n,\gamma}$. Then $b_n^2 \le n^2 s_{n,\gamma}^2$ by (3.2) and, by Lemma 3.5, $2 \|K_\gamma\|_\infty / (3 \Lambda \|K_\gamma\|_2^2) \le 2 K_{\max} / (3 \Lambda k_{\min}^2) =: c_0$, which does not depend on $\gamma$. □

Remark 3.9 For $q > 1$, we can find a constant $c(q) > 0$ such that the stochastic term of the kernel estimator satisfies

$$E_f\big[Z_{n,\gamma}(x_0, H_\beta)\big]^q \le c(q)\, s_{n,\gamma}^q,$$

where $s_{n,\gamma}^2$ is defined in (3.2). Indeed, for $q > 2$, we apply Rosenthal's inequality to the previous centered variables $Y_i$, bounded as follows: $|Y_i| \le 2 \|K_\gamma\|_\infty / h_{n,\gamma}$. Then we can find a constant $c'(q)$, depending on $q$, such that

$$E_f\left[\left|\frac{1}{n} \sum_{i=1}^n Y_i\right|^q\right] \le c'(q) \left\{\left(\frac{2 \|K_\gamma\|_\infty}{n h_{n,\gamma}}\right)^{q-2} \frac{1}{n}\, E_f[Y_1^2] + \left(\frac{1}{n}\, E_f[Y_1^2]\right)^{q/2}\right\},$$

and this leads to our result, for some constant $c(q)$, because of inequality (3.2). For $1 < q \le 2$, we can easily deduce the result from (3.2) by standard convexity inequalities.

Let us introduce the sequence

$$\lambda_{n,\gamma,\beta}^2 = C\, q\, s_{n,\gamma}^2 \left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}}\right) \log n,$$

where $\gamma < \beta$ are in $B_{N_n}$, $C > 0$ is a constant, and $\tilde{\gamma}$ and $\tilde{\beta}$ are defined by (2.4).

Lemma 3.10
1. If the set $B_{N_n}$ satisfies conditions (2.1) and (2.2), then

$$\frac{\log n}{\tilde{\beta}_N} \xrightarrow[n \to \infty]{} \infty, \qquad \log \tilde{\beta}_N\, \sqrt{\frac{\tilde{\beta}_N}{\log n}} \xrightarrow[n \to \infty]{} 0 \qquad \text{and} \qquad \log \frac{1}{\Delta_n}\, \sqrt{\frac{\tilde{\beta}_N}{\log n}} \xrightarrow[n \to \infty]{} 0,$$

where $\tilde{\beta}_N = \tilde{\beta}_{N_n}$ is defined by the transformation (2.4).

2. If $\gamma$, $\beta$ are in $B_{N_n}$ such that $\gamma < \beta$, then there exist constants $C_1$, $C_2$ depending only on previously fixed constants such that

$$\sup_{f \in H_\beta} \frac{B_{n,\gamma}^q(x_0, H_\beta) + s_{n,\gamma}^q}{\psi_{n,\gamma}^q} \le C_1, \qquad \sup_{f \in H_\beta} \frac{B_{n,\beta}^q(x_0, H_\beta) + \lambda_{n,\gamma,\beta}^q}{\psi_{n,\beta}^q} \le C_2 \left[\sqrt{\log n}\; n^{\frac{1}{2}\left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}}\right)}\right]^q.$$

Proof. 1. The limits are easy consequences of hypotheses (2.1) and (2.2).

2. By Lemma 3.5, there exist $b_{\max}$ and $k_{\max}$, not depending on $\gamma$, such that $b_\gamma \le b_{\max}$ and $\|K_\gamma\|_2 \le k_{\max}$ for any $\gamma$ in $B_{N_n}$. Thus, for $\gamma < \beta$ and $\beta \in B^-$:

$$\frac{B_{n,\gamma}(x_0, H_\beta)}{\psi_{n,\gamma}} \le b_{\max}, \qquad \frac{s_{n,\gamma}}{\psi_{n,\gamma}} \le k_{\max} \sqrt{\frac{\Lambda}{\log n}}$$

and

$$\frac{B_{n,\beta}(x_0, H_\beta)}{\psi_{n,\beta}} \le b_{\max}, \qquad \frac{\psi_{n,\beta}}{\psi_{n,\gamma}} = \left(\frac{\log n}{n}\right)^{\frac{1}{2}\left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}}\right)}.$$

Finally,

$$\frac{\lambda_{n,\gamma,\beta}}{\psi_{n,\beta}} = \sqrt{q C \left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}}\right) \log n}\; \frac{s_{n,\gamma}}{\psi_{n,\gamma}} \cdot \frac{\psi_{n,\gamma}}{\psi_{n,\beta}} \le \sqrt{\frac{q C \Lambda}{2 \tilde{\beta}_1}}\, k_{\max} \left(\frac{\log n}{n}\right)^{-\frac{1}{2}\left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}}\right)}.$$

Because $\tilde{\beta}_N / \log n \to 0$ when $n \to \infty$, we get the lemma for $\beta \in B^-$. For the case $\beta = \beta_{N_n}$, denoted $\beta_N$:

$$\frac{B_{n,\beta_N}(x_0, H_{\beta_N})}{\psi_{n,\beta_N}} \le b_{\max}, \qquad \frac{s_{n,\beta_N}}{\psi_{n,\beta_N}} \le k_{\max} \sqrt{\Lambda}.$$

Moreover,

$$\frac{B_{n,\beta_N}(x_0, H_{\beta_N})}{\psi_{n,\gamma}} \le b_{\max}\, \sqrt{\log n}\, \left(\frac{1}{n}\right)^{\frac{1}{2}\left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}_N}\right)} \qquad \text{and} \qquad \frac{\lambda_{n,\gamma,\beta_N}}{\psi_{n,\beta_N}} \le \sqrt{q C \Lambda}\, k_{\max}\, \sqrt{\log n}\; n^{\frac{1}{2}\left(\frac{1}{2\tilde{\gamma}} - \frac{1}{2\tilde{\beta}_N}\right)}. \;\Box$$
