Comparison of Nonparametric Goodness of Fit Tests

Henning Läuter, Cornelia Sachsenweger

University of Potsdam, Institute of Mathematics, PF 60 15 53, 14415 Potsdam

Abstract

We consider two tests for testing the hypothesis that a density lies in a parametric class of densities and compare them by means of simulation. Both tests are based on the integrated squared distance of the kernel density estimator from its hypothetical expectation; however, different kernels are used. The unknown parameter is replaced by its maximum likelihood estimate (m.l.e.).

The power of both tests is examined under local alternatives. Although both tests are asymptotically equivalent, it will be shown that there is a difference between their powers when a finite number of random variables is used.

Furthermore, it will be shown that asymptotically equivalent approximations of the power can differ significantly when finite sample sizes are used.

Keywords:

Goodness of fit, kernel estimator, local alternatives, simulation

AMS 1991 Subject Classification:

Primary 62G10, 62G20.

The research for this paper was carried out within Sonderforschungsbereich 373 and was printed using funds made available by the Deutsche Forschungsgemeinschaft.


1 Introduction

Let $X_{n1},\dots,X_{nn}$ be a sample of i.i.d. random variables with density $f_n$. We consider the problem of testing the hypothesis

$$H:\; f_n \in \mathcal F = \{f(\cdot,\vartheta) \mid \vartheta \in \Theta \subseteq \mathbb R^k\}$$

against the nonparametric alternative

$$K:\; f_n \notin \mathcal F,$$

where $\mathcal F$ is a parametric class of density functions. Our test statistic is the $L_2$-norm of the distance of the kernel density estimator

$$\hat f_n(t) = (nh_n)^{-1}\sum_{i=1}^n K\!\left(\frac{t - X_{ni}}{h_n}\right)$$

from its expectation under the hypothesis density $f_n(\cdot,\vartheta) = f_\vartheta$,

$$E\hat f_n(t) = e_{h_n}(t, f_\vartheta) = \int K(x)\,f(t - xh_n,\vartheta)\,dx.$$

Under certain assumptions on the density of the random variables, the kernel function, the bandwidth and the weight function of the kernel density estimator, the following holds by Liero/Läuter/Konakov (1998):

Theorem 1.1
If for a parameter estimator $\hat\vartheta_n$ under the hypothesis $H$ with $X_{ni} \sim f_\vartheta$ for each $\vartheta \in \Theta$

$$\sqrt n\,\big|\hat\vartheta_n - \vartheta\big| = O_P(1)$$

holds, and if $h_n(\ln n)^{\gamma} \to 0$ for a $\gamma > k/2$, then for $n \to \infty$ and all $x$

$$P_{f_\vartheta}\!\left(h_n^{-1/2}\,\frac{Q_n(f_{\hat\vartheta_n}) - \mu(f_{\hat\vartheta_n})}{\sigma(f_{\hat\vartheta_n})} \le x\right) \longrightarrow \Phi(x)$$

with

$$Q_n(f_{\hat\vartheta_n}) = nh_n\int\big(\hat f_n(t) - e_{h_n}(t, f_{\hat\vartheta_n})\big)^2\,a(t)\,dt,$$

$$e_{h_n}(t, f_{\hat\vartheta_n}) = \int K(x)\,f_{\hat\vartheta_n}(t - xh_n)\,dx,$$

$a(t)$ a weight function,

$$\hat f_n(t) = (nh_n)^{-1}\sum_{i=1}^n K\!\left(\frac{t - X_{ni}}{h_n}\right),$$

$$\mu(f_{\hat\vartheta_n}) = \kappa\int f_{\hat\vartheta_n}(t)\,a(t)\,dt \quad\text{with}\quad \kappa = \int K^2(x)\,dx,$$

$$\sigma^2(f_{\hat\vartheta_n}) = 2\lambda\int f_{\hat\vartheta_n}^2(t)\,a^2(t)\,dt \quad\text{with}\quad \lambda = \int (K*K)^2(x)\,dx,$$

$$\Phi(u_\alpha) = 1 - \alpha \quad\text{and}\quad \int := \int_{-\infty}^{\infty}.$$

In this way we get an asymptotic $\alpha$-test if we reject the hypothesis whenever

$$Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\,h_n^{1/2}\,\sigma(f_{\hat\vartheta_n}).$$

The power of a test is the probability that the test rejects the hypothesis, that is,

$$P\!\left(Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\,h_n^{1/2}\,\sigma(f_{\hat\vartheta_n})\right).$$
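To make the procedure concrete, here is a minimal numerical sketch of this $\alpha$-test (Python with NumPy/SciPy; the function name, the integration grids and the normal hypothesis class are our own illustrative choices, not specifications from the paper), using the Gauß kernel and $a(t)\equiv 1$:

```python
import numpy as np
from scipy import stats

def qn_test(x, h, alpha=0.05, grid=np.linspace(-5.0, 5.0, 2001)):
    """Asymptotic alpha-test of H: density in {N(mu, sigma^2)}, built from
    Q_n above with a(t) = 1, Gauss kernel and m.l.e. plug-in (a sketch)."""
    n = len(x)
    K = stats.norm.pdf                        # Gauss kernel
    kappa = 1.0 / (2.0 * np.sqrt(np.pi))      # int K^2 (closed form)
    lam = 1.0 / (2.0 * np.sqrt(2.0 * np.pi))  # int (K*K)^2; K*K is N(0,2)
    mu_hat, sig_hat = x.mean(), x.std()       # m.l.e. under H
    f_theta = stats.norm.pdf(grid, mu_hat, sig_hat)
    # kernel density estimator on the grid
    f_n = K((grid[:, None] - x[None, :]) / h).mean(axis=1) / h
    # e_h(t, f_theta) = int K(u) f(t - u h, theta) du by quadrature
    u = np.linspace(-6.0, 6.0, 601)
    e_h = np.trapz(K(u)[None, :]
                   * stats.norm.pdf(grid[:, None] - h * u[None, :],
                                    mu_hat, sig_hat), u, axis=1)
    Qn = n * h * np.trapz((f_n - e_h) ** 2, grid)
    mu_f = kappa * np.trapz(f_theta, grid)
    sigma_f = np.sqrt(2.0 * lam * np.trapz(f_theta ** 2, grid))
    crit = mu_f + stats.norm.ppf(1.0 - alpha) * np.sqrt(h) * sigma_f
    return Qn, crit, bool(Qn >= crit)
```

With samples drawn under the hypothesis, the rejection frequency of this sketch should be close to the nominal $\alpha$ for large $n$.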

We compare the power of the test with the Epanechnikov kernel to that with the Gauß kernel, always using $a(t)\equiv 1$. For this we use local alternatives of the form

$$K_n:\; f_n(\cdot,\vartheta_n) = f(\cdot,\vartheta_n) + N_n\,w\big((\cdot - c)\,b_n^{-1}\big) \tag{1}$$

with $\vartheta_n = \vartheta + n^{-\delta}\beta$, where $\beta$ is an unknown vector, $\delta > 0$, and $\vartheta$ is fixed. Let $N_n$ be a sequence of positive numbers converging to zero and let $c$ lie in the support of $f$. Let $w$ be a function with bounded support for which $\int w(x)\,dx = 0$. For $b_n \equiv b$ we get the Pitman alternative and for $b_n \to 0$ ($n\to\infty$) the sharp peak alternative.

Figure 1 shows three such local alternatives in comparison with the normal density $f(x;0.2,1)$.

[Figure 1: Local alternatives $f_n$ with $N_n = 0.38$, $b_n = 1$; with $N_n = 0.038$, $b_n = 1$; with $N_n = 0.38$, $b_n = 0.1$; and the hypothesis $f$.]
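For illustration, such a local alternative can be built and sampled in a few lines (a sketch in Python/NumPy; the grid-based sampler is a crude stand-in for exact inversion, and we already use the sine-shaped perturbation that Section 4 will choose — any $w$ with bounded support and $\int w = 0$ works):

```python
import numpy as np

def w(t):
    """Sine bump: bounded support [-1/2, 1/2] and integral zero."""
    return np.where(np.abs(t) <= 0.5, np.sin(-2.0 * np.pi * t), 0.0)

def local_alternative(f, Nn, bn, c=0.0):
    """f_n(t) = f(t) + Nn * w((t - c)/bn); bn fixed gives the Pitman,
    bn -> 0 the sharp peak alternative. Assumes Nn is small enough
    that f_n stays nonnegative."""
    return lambda t: f(t) + Nn * w((t - c) / bn)

def sample(fn, size, grid=np.linspace(-6.0, 6.0, 20001),
           rng=np.random.default_rng(0)):
    """Draw from fn by discretizing it on a fine grid (illustrative only)."""
    p = np.clip(fn(grid), 0.0, None)
    return rng.choice(grid, size=size, p=p / p.sum())
```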

2 The Maximum Likelihood Estimation

Because the parameter $\vartheta$ is unknown, it has to be estimated. If the hypothesis holds, then

$$\hat\vartheta_n - \vartheta = n^{-1/2} Z_n$$

holds for the m.l.e. $\hat\vartheta_n$ of the parameter, where $Z_n$ is asymptotically normally distributed with expectation zero. Under the following weak assumptions it can be shown that $\hat\vartheta_n$ behaves under the local alternatives similarly as under the hypothesis. The open kernel (interior) of the parameter space will be denoted by $\Theta$.

A1 For all $x \in \mathbb R$ the second derivatives $\frac{\partial^2}{\partial\vartheta_i\,\partial\vartheta_j} f(x,\vartheta)$, $i,j = 1,\dots,k$, exist. They are continuous for all $\vartheta \in \Theta$ and

$$\int \nabla_\vartheta f(x,\vartheta)\,dx = 0, \qquad \int \frac{\partial^2}{\partial\vartheta_i\,\partial\vartheta_j} f(x,\vartheta)\,dx = 0. \tag{2}$$

A2 For each $\vartheta \in \Theta$ there exist a $\varrho = \varrho(\vartheta)$ and a ball $U_\varrho(\vartheta)$ with radius $\varrho$ such that for all $i,j = 1,\dots,k$

$$\left|\frac{\partial^2}{\partial\vartheta_i\,\partial\vartheta_j}\ln f(x,\vartheta')\right| \le m(x,\vartheta) \qquad \forall\,\vartheta' \in U_\varrho(\vartheta) \tag{3}$$

for a measurable function $m$ with $\int m(x,\vartheta)\,f(x,\vartheta)\,dx < \infty$.

A3 The determinant of the Fisher information

$$I(\vartheta) = \int \nabla_\vartheta\ln f(x,\vartheta)\,\big(\nabla_\vartheta\ln f(x,\vartheta)\big)^t\,f(x,\vartheta)\,dx \tag{4}$$

is positive for each $\vartheta \in \Theta$.

We put

$$\Delta_n(t) = -n^{-\delta}\beta^t\,\nabla_\vartheta f(t,\tilde\vartheta) - N_n\,w\big((t-c)\,b_n^{-1}\big)$$

with a suitable $\tilde\vartheta$ between $\hat\vartheta_n$ and $\vartheta$.

Then it follows:

Theorem 2.1
If the assumptions A1, A2 and A3 hold, then

$$\int \nabla_\vartheta\ln f(t,\vartheta)\,\Delta_n(t)\,dt = O(\varepsilon_n)$$

for the sequence $\varepsilon_n = n^{-\delta} + N_nb_n$, and under $K_n$ we get for the m.l.e.

$$\hat\vartheta_n - \vartheta = n^{-1/2} Z_n + \varepsilon_n S + o_P(\varepsilon_n), \tag{5}$$

where $Z_n$ is asymptotically normally distributed with expectation zero and $S$ is a nonrandom vector.

The proof can be found in Liero/Läuter/Konakov (1998). In the family $\mathcal F$ of normal distribution densities with the $k(=2)$-dimensional unknown parameter $(\vartheta^{(1)},\vartheta^{(2)})$, with expectation $g(\vartheta^{(1)})$ and variance $\vartheta^{(2)} = \sigma^2$, which we will consider in the following, the assumptions A1, A2 and A3 hold.

Following Theorem 2.1, for the sequence $\varepsilon_n = n^{-\delta} + N_nb_n$ we have

$$\int \nabla_\vartheta\ln f(t,\vartheta)\,\Delta_n(t)\,dt = \int \begin{pmatrix} \dfrac{\big(t - g(\vartheta^{(1)})\big)\,g'(\vartheta^{(1)})}{\sigma^2} \\[8pt] \dfrac{\big(t - g(\vartheta^{(1)})\big)^2 - \sigma^2}{2\sigma^4} \end{pmatrix} \Delta_n(t)\,dt = O(\varepsilon_n).$$

Consequently, with Theorem 2.1 it follows that the m.l.e. behaves under the local alternatives similarly as under the hypothesis.

It is obvious that the m.l.e. of $g(\vartheta_n^{(1)})$ equals the arithmetic mean of the observations. To obtain the value of the m.l.e. $\hat\vartheta_n^{(1)}$ from $g(\hat\vartheta_n^{(1)})$ we require unique invertibility of the function $g$. In the following examples the function $g$ has this property.

The m.l.e. of $\vartheta^{(2)} = \sigma^2$ is always

$$\hat\sigma_n^2 = \frac1n\sum_{i=1}^n\big(x_i - g(\hat\vartheta_n^{(1)})\big)^2 = \frac1n\sum_{i=1}^n x_i^2 - g(\hat\vartheta_n^{(1)})^2.$$

Therefore we will only give the values of the m.l.e. of $\vartheta^{(1)}$ in the following.
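As a small sketch of this estimation step (Python; the argument `g_inverse` is hypothetical and stands for the inverse whose existence is assumed above):

```python
import numpy as np

def mle_normal_family(x, g_inverse):
    """M.l.e. in the family N(g(theta1), theta2): the m.l.e. of g(theta1)
    is the arithmetic mean, theta1 follows via the unique inverse of g,
    and sigma^2 uses the uncentered identity from the display above."""
    g_hat = x.mean()
    theta1_hat = g_inverse(g_hat)
    sigma2_hat = (x ** 2).mean() - g_hat ** 2
    return theta1_hat, sigma2_hat

# Model 1 below: g = identity, so g_inverse = lambda y: y.
# Model 2 below: g(lam) = exp(-lam * t), so g_inverse = lambda y: -np.log(y) / t.
```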

Now we want to determine the nonrandom vector $S$ in Theorem 2.1, which is independent of $n$. From the proof of the theorem we get

$$n\,T_n^{-1}(X_n,\vartheta)\;n^{1/2}\,E_n\nabla_\vartheta\ln f(X_{nl},\vartheta) = n^{1/2}\,\varepsilon_n\,\big(S + o_P(1)\big).$$

$T_n = (T_{n,ij})$ is the matrix with entries

$$T_{n,ij}(x,\vartheta) = \frac{\partial^2}{\partial\vartheta_i\,\partial\vartheta_j}\ln\prod_{l=1}^n f(x_{nl},\vartheta),$$

and with the law of large numbers we see

$$n^{-1}\,T_n(X_n,\vartheta) \xrightarrow[n\to\infty]{} -I(\vartheta)$$

with the Fisher information $I(\vartheta)$. It follows that

$$-\varepsilon_n^{-1}\,I(\vartheta)^{-1}\,E_n\nabla_\vartheta\ln f(X_{nl},\vartheta) =: \tilde S_n \xrightarrow[n\to\infty]{} S.$$

We get

$$I(\vartheta)^{-1} = \frac{2\sigma^6}{g'(\vartheta^{(1)})^2}\begin{pmatrix} \dfrac1{2\sigma^4} & 0 \\[4pt] 0 & \dfrac{g'(\vartheta^{(1)})^2}{\sigma^2} \end{pmatrix} = \begin{pmatrix} \dfrac{\sigma^2}{g'(\vartheta^{(1)})^2} & 0 \\[4pt] 0 & 2\sigma^4 \end{pmatrix}$$

and with $\beta^t = (\beta_1,\beta_2)$

$$E_n\nabla_\vartheta\ln f(X_{nl},\vartheta) = \begin{pmatrix} \displaystyle\int \frac{\big(t-g(\vartheta^{(1)})\big)\,g'(\vartheta^{(1)})}{\sigma^2}\,\Delta_n(t)\,dt \\[12pt] \displaystyle\int \frac{\big(t-g(\vartheta^{(1)})\big)^2 - \sigma^2}{2\sigma^4}\,\Delta_n(t)\,dt \end{pmatrix} = \begin{pmatrix} -n^{-\delta}\beta_1\,\dfrac{g'(\vartheta^{(1)})^2}{\sigma^2} - \dfrac{N_n\,g'(\vartheta^{(1)})}{\sigma^2}\displaystyle\int t\,w\big((t-c)b_n^{-1}\big)\,dt \\[12pt] -n^{-\delta}\beta_2\,\dfrac1{2\sigma^4} - \dfrac{N_n}{2\sigma^4}\displaystyle\int \big(t-g(\vartheta^{(1)})\big)^2\,w\big((t-c)b_n^{-1}\big)\,dt \end{pmatrix}.$$

For the vector $\tilde S_n$ we now get

$$\tilde S_n = -\varepsilon_n^{-1}\,I(\vartheta)^{-1}\,E_n\nabla_\vartheta\ln f(X_{nl},\vartheta) = \big(n^{-\delta} + N_nb_n\big)^{-1}\begin{pmatrix} n^{-\delta}\beta_1 + \dfrac{N_n}{g'(\vartheta^{(1)})}\displaystyle\int t\,w\big((t-c)b_n^{-1}\big)\,dt \\[12pt] n^{-\delta}\beta_2 + N_n\displaystyle\int \big(t-g(\vartheta^{(1)})\big)^2\,w\big((t-c)b_n^{-1}\big)\,dt \end{pmatrix}.$$

With the substitution $z := (t-c)/b_n$ and $\int w(z)\,dz = 0$ this becomes

$$\tilde S_n = \big(n^{-\delta} + N_nb_n\big)^{-1}\begin{pmatrix} n^{-\delta}\beta_1 + \dfrac{b_n^2 N_n}{g'(\vartheta^{(1)})}\displaystyle\int z\,w(z)\,dz \\[12pt] n^{-\delta}\beta_2 + N_nb_n^3\displaystyle\int z^2 w(z)\,dz + 2N_nb_n^2\,\big(c - g(\vartheta^{(1)})\big)\displaystyle\int z\,w(z)\,dz \end{pmatrix}.$$

Taking limits we determine the vector $S$, which is independent of $n$. For this we consider various cases according to the two local alternatives. We will find that depending on $\varepsilon_n$ we get different vectors $S$; consequently, the vector $S$ depends on the local alternatives. Let the vector $\tilde S_n$ converge for $n\to\infty$ to $S_{1j}$ in the case of Pitman alternatives and to $S_{2j}$ in the case of sharp peak alternatives. Let $j=1$ for $\frac{n^{-\delta}}{N_n}\to\infty$, $j=2$ for $\frac{n^{-\delta}}{N_n}\to0$ and $j=3$ for $\frac{n^{-\delta}}{N_n}\to\text{const}$. First let $K_n$ be Pitman alternatives, that is $b_n\equiv b$.

1.1 If $\frac{n^{-\delta}}{N_n}\to\infty$ we get

$$\varepsilon_n = n^{-\delta}\Big(1 + \frac{N_n}{n^{-\delta}}\,b_n\Big) \approx n^{-\delta},$$

and the terms of the vector $\tilde S_n$ containing the factor $\frac{N_n}{n^{-\delta}}$ can be eliminated for determining the vector $S$. We get

$$S_{11} = \begin{pmatrix}\beta_1\\\beta_2\end{pmatrix}.$$

This vector $S$ is consequently independent of the considered model, that is, of $g(\vartheta^{(1)})$.

1.2 If $\frac{n^{-\delta}}{N_n}\to0$ we get $\varepsilon_n \approx N_nb$, and the terms of the vector $\tilde S_n$ containing the factor $\frac{n^{-\delta}}{N_n}$ can be eliminated for determining the vector $S$. In this case we get

$$S_{12} = \begin{pmatrix} \dfrac{b}{g'(\vartheta^{(1)})}\displaystyle\int z\,w(z)\,dz \\[12pt] b^2\displaystyle\int z^2 w(z)\,dz + 2b\,\big(c - g(\vartheta^{(1)})\big)\displaystyle\int z\,w(z)\,dz \end{pmatrix}.$$

1.3 If $\frac{n^{-\delta}}{N_n}\to\text{const} =: L_1$ we get $n^{-\delta} \approx L_1N_n$ and $\varepsilon_n \approx (L_1+b)N_n$. In this case we have

$$S_{13} = \begin{pmatrix} \dfrac{L_1\beta_1}{L_1+b} + \dfrac{b^2}{(L_1+b)\,g'(\vartheta^{(1)})}\displaystyle\int z\,w(z)\,dz \\[12pt] \dfrac{L_1\beta_2}{L_1+b} + \dfrac{b^3}{L_1+b}\displaystyle\int z^2 w(z)\,dz + \dfrac{2b^2\big(c - g(\vartheta^{(1)})\big)}{L_1+b}\displaystyle\int z\,w(z)\,dz \end{pmatrix}.$$

For sharp peak alternatives, that is $b_n\to0$ as $n\to\infty$, we get:

2.1 If $\frac{n^{-\delta}}{N_nb_n}\to\infty$, then $S_{21} = \begin{pmatrix}\beta_1\\\beta_2\end{pmatrix}$.

2.2 If $\frac{n^{-\delta}}{N_nb_n}\to0$, then $S_{22} = \begin{pmatrix}0\\0\end{pmatrix}$.

2.3 If $\frac{n^{-\delta}}{N_nb_n}\to L_2$, then $S_{23} = \dfrac{L_2}{L_2+1}\begin{pmatrix}\beta_1\\\beta_2\end{pmatrix}$.

Obviously, under the sharp peak alternatives the vector $S$ is always independent of $g(\vartheta^{(1)})$.

Now the m.l.e. and the vectors $S_{12}$ and $S_{13}$ will be determined for two models of normal distribution under the special local alternatives with the error function

$$w(t) = \begin{cases}\sin(-2\pi t) & \text{for } t\in[-\tfrac12,\tfrac12],\\ 0 & \text{else.}\end{cases}$$

We have independent, identically normally distributed random variables $Y_i = \mu + \varepsilon_i$. Let $\varepsilon_i$ be normally distributed with expectation zero and variance $\sigma^2$, that is, $Y_i \sim N(\mu,\sigma^2)$. Here $g(\vartheta^{(1)}) = \mu$ and the m.l.e. is

$$\hat\mu = \frac1n\sum_{i=1}^n x_i.$$

For the vectors $S_{12}$ and $S_{13}$ we get

$$S_{12} = \begin{pmatrix} -\dfrac{b}{2\pi} \\[8pt] -\dfrac{b(c-\mu)}{\pi} \end{pmatrix}, \qquad S_{13} = \begin{pmatrix} \dfrac{L_1\beta_1}{L_1+b} - \dfrac{b^2}{2\pi(L_1+b)} \\[8pt] \dfrac{L_1\beta_2}{L_1+b} - \dfrac{b^2(c-\mu)}{\pi(L_1+b)} \end{pmatrix}.$$
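These limit vectors are easy to check numerically against $\tilde S_n$. A sketch (Python; for the sine perturbation above, $\int z\,w(z)\,dz = -\frac1{2\pi}$ and $\int z^2 w(z)\,dz = 0$):

```python
import numpy as np

def S_tilde(n, delta, beta, Nn, bn, g_prime, g_val, c=0.0):
    """S~_n after the substitution step, for the family N(g(theta1), sigma^2)
    with the sine perturbation w (a numerical check, not new theory)."""
    Izw, Iz2w = -1.0 / (2.0 * np.pi), 0.0
    eps = n ** (-delta) + Nn * bn
    s1 = n ** (-delta) * beta[0] + bn ** 2 * Nn / g_prime * Izw
    s2 = (n ** (-delta) * beta[1] + Nn * bn ** 3 * Iz2w
          + 2.0 * Nn * bn ** 2 * (c - g_val) * Izw)
    return np.array([s1, s2]) / eps

# Case 1.2 (n^-delta / N_n -> 0, b_n = b fixed): e.g. delta = 0.5,
# Nn = n**-0.1, g = identity; S_tilde then approaches
# S_12 = (-b/(2*pi), -b*(c - mu)/pi) as n grows.
```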

Now we consider independent, identically normally distributed random variables with $EY = e^{-\lambda t}$. The parameter $\vartheta^{(1)} = \lambda$ is to be estimated by

$$\hat\lambda_n = -\frac1t\,\ln\Big(\frac1n\sum_{i=1}^n x_i\Big).$$

For the vectors $S_{12}$ and $S_{13}$ we get

$$S_{12} = \begin{pmatrix} -\dfrac{b\,e^{-\lambda t}}{2\pi t} \\[8pt] -\dfrac{b\,(c - e^{-\lambda t})}{\pi} \end{pmatrix}, \qquad S_{13} = \begin{pmatrix} \dfrac{L_1\beta_1}{L_1+b} - \dfrac{b^2 e^{-\lambda t}}{2\pi(L_1+b)\,t} \\[8pt] \dfrac{L_1\beta_2}{L_1+b} - \dfrac{b^2(c - e^{-\lambda t})}{\pi(L_1+b)} \end{pmatrix}.$$

3 Power of the tests

When we use the test which rejects the hypothesis if

$$Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\,h_n^{1/2}\,\sigma(f_{\hat\vartheta_n}),$$

then $\hat\vartheta_n$ will be the m.l.e. in the following.

For the asymptotic power under the local alternatives we can find explicit approximations. In Liero/Läuter/Konakov (1998) the following result was proved:

Theorem 3.1
Given

$$nh_n^{1/2}N_n^2b_n = c_{1n}, \qquad n^{1-2\delta}h_n^{1/2} = c_{2n}, \qquad nh_n^{1/2}\varepsilon_n^2 = c_{3n},$$

$$h_n^{-1/2}N_nb_n = c_{4n}, \qquad h_n^{-1/2}n^{-\delta} = c_{5n}, \qquad h_n^{-1/2}\varepsilon_n = c_{6n}.$$
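For the numerical work below it is convenient to collect these six constants in one helper (a sketch; Python, names ours):

```python
import numpy as np

def coefficients(n, h, Nn, bn, delta):
    """The constants c_1n ... c_6n of Theorem 3.1; delta is the drift rate
    of the parameter and eps_n = n**-delta + Nn*bn as in Section 2."""
    sh = np.sqrt(h)
    eps = n ** (-delta) + Nn * bn
    return {"c1": n * sh * Nn ** 2 * bn, "c2": n ** (1.0 - 2.0 * delta) * sh,
            "c3": n * sh * eps ** 2,     "c4": Nn * bn / sh,
            "c5": n ** (-delta) / sh,    "c6": eps / sh}
```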

Under similar assumptions as in Theorem 1.1 and with $h_nb_n^{-1}\to0$, we get for the power

$$\beta(f_n) := P\big(Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha h_n^{1/2}\sigma(f_{\hat\vartheta_n}) \,\big|\, f_n\big)$$

that

$$\lim_{n\to\infty}\big(\beta(f_n) - \Phi(U_{1n})\big) = 0$$

with

$$\begin{aligned} U_{1n} = \sigma^{-1}(f_\vartheta)\bigg(&c_{1n}\int w^2(t)\,a(c-tb_n)\,dt + \int\Big[\big(\sqrt{c_{2n}}\,\beta - \sqrt{c_{3n}}\,S\big)^t\nabla_\vartheta f(t,\vartheta)\Big]^2 a(t)\,dt \\ &+ 2\sqrt{c_{1n}b_n}\int\big(\sqrt{c_{2n}}\,\beta - \sqrt{c_{3n}}\,S\big)^t\nabla_\vartheta f(c-tb_n,\vartheta)\,w(t)\,a(c-tb_n)\,dt \\ &+ c_{4n}\int w(t)\,a(c-tb_n)\,dt + \int\big(c_{5n}\beta - c_{6n}S\big)^t\nabla_\vartheta f(t,\vartheta)\,a(t)\,dt\bigg) - u_\alpha. \end{aligned}$$

Considering the asymptotic behaviour, $c_{1n}$ and $c_{4n}$ appear as the leading coefficients in the approximation of the power. Therefore we get the following simpler approximation of the power, $U_{2n}$, which is asymptotically equivalent to $U_{1n}$.

Theorem 3.2 (Pitman alternative)
Given $b_n\equiv b$. For

$$U_{2n} = \sigma^{-1}(f_\vartheta)\bigg[c_{1n}\int\Big(w\Big(\frac tb\Big) - S^t\nabla_\vartheta f(c-tb_n,\vartheta)\Big)^2 dt\bigg] - u_\alpha$$

we have

$$\lim_{n\to\infty}\big(\beta(f_n) - \Phi(U_{2n})\big) = 0.$$

Theorem 3.3 (Sharp peak alternative)
Given $b_n\to0$ as $n\to\infty$. For

$$U_{2n} = \sigma^{-1}(f_\vartheta)\bigg[c_{1n}b_n\int w^2(t)\,a(t)\,dt + T_n\bigg] - u_\alpha$$

with

$$T_n = c_{4n}b_n\bigg(b_n\,a'(c)\int w(t)\,t\,dt - \Big(\int \nabla_\vartheta f_\vartheta(t)\,a(t)\,dt\Big)^{t} S\bigg)$$

we have

$$\lim_{n\to\infty}\big(\beta(f_n) - \Phi(U_{2n})\big) = 0.$$

Under assumption A1 and the choice $a(t)\equiv1$ we have $T_n = 0$. Consequently we get for Pitman and sharp peak alternatives:

Theorem 3.4
If $b_nh_n^{-1} \xrightarrow[n\to\infty]{} 0$ and $nh_n^{3/2} \xrightarrow[n\to\infty]{} \infty$, then

$$\lim_{n\to\infty}\beta(f_n)\;\begin{cases} = 1 & \text{for } b_nc_{1n}\to\infty,\\ = \alpha & \text{for } b_nc_{1n}\to0,\\ \in(\alpha,1) & \text{for } b_nc_{1n}\to\text{const}>0. \end{cases}$$

4 Exactness of the approximations

In the following we examine how $\Phi(U_{1n})$ and $\Phi(U_{2n})$ approximate the power when we have a finite number of observations. Let $\mathcal F$ be the class of normal distributions $N(\mu\mathbf 1,\sigma^2 I)$ with $\mathbf 1^t = (1,\dots,1)$ and $\mu = g(\vartheta_n^{(1)})$. For constructing the local alternatives we choose

$$w(t) = \begin{cases}\sin(-2\pi t) & \text{for } t\in[-\tfrac12,\tfrac12],\\ 0 & \text{else,}\end{cases}$$

$c = 0$, and 1000 replications for the simulations. With $S = (S^{(1)},S^{(2)})^t$, $f_1 = \frac{\partial f(t,\vartheta)}{\partial\vartheta^{(1)}}$, $f_2 = \frac{\partial f(t,\vartheta)}{\partial\vartheta^{(2)}}$, $f_{11} = \frac{\partial f(c-tb_n,\vartheta)}{\partial\vartheta^{(1)}}$ and $f_{22} = \frac{\partial f(c-tb_n,\vartheta)}{\partial\vartheta^{(2)}}$, as well as assumption A1 about the hypothesis density, we get

$$\begin{aligned} U_{1n} = \sigma^{-1}(f_\vartheta)\bigg(&\frac{c_{1n}}2 + \int\Big(-\sqrt{c_{3n}}\,S^{(1)}f_1 + \big(\sqrt{c_{2n}} - \sqrt{c_{3n}}\,S^{(2)}\big)f_2\Big)^2 dt \\ &+ 2\sqrt{c_{1n}b_n}\int_{-1/2}^{1/2}\Big(-\sqrt{c_{3n}}\,S^{(1)}f_{11} + \big(\sqrt{c_{2n}} - \sqrt{c_{3n}}\,S^{(2)}\big)f_{22}\Big)\sin(-2\pi t)\,dt\bigg) - u_\alpha, \end{aligned}$$

$$U_{2n} = \sigma^{-1}(f_\vartheta)\bigg[c_{1n}\int_{-1/2}^{1/2}\Big(\sin\Big(-\frac{2\pi t}b\Big) - S^{(1)}f_1 - S^{(2)}f_2\Big)^2 dt\bigg] - u_\alpha \quad\text{(Pitman alternative)},$$

$$U_{2n} = \sigma^{-1}(f_\vartheta)\,\frac{c_{1n}b_n}2 - u_\alpha \quad\text{(sharp peak alternative)}.$$
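The sharp peak approximation in particular is a one-liner. A sketch (Python/SciPy; $\sigma(f_\vartheta)$ is passed in, computed from $\kappa$, $\lambda$ and the hypothesis density as in Theorem 1.1):

```python
import numpy as np
from scipy import stats

def power_approx_sharp_peak(n, h, Nn, bn, sigma_f, alpha=0.05):
    """Phi(U_2n) with U_2n = sigma^{-1}(f_theta) * c_1n * b_n / 2 - u_alpha,
    where c_1n = n * sqrt(h) * Nn**2 * bn and int w^2 = 1/2 for the sine w."""
    c1n = n * np.sqrt(h) * Nn ** 2 * bn
    u_alpha = stats.norm.ppf(1.0 - alpha)
    return stats.norm.cdf(c1n * bn / (2.0 * sigma_f) - u_alpha)
```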

As values for $N_n$ and $b_n$ we choose $0.38$, $0.2$, $0.038$ and $1.0$, $0.5$, $0.1$, respectively. Let $n$ be 50, 100 or 150.

Each of these cases can be embedded in the Pitman or in the sharp peak alternatives. The classification of a special case with fixed $n$ into one or the other of the two classes is somewhat subjective. We choose sharp peak alternatives as the model for our considerations.

For some $N_n$ and $b_n$ one finds the approximations $\Phi(U_{1n,0.12})$ in Table 1 and the simpler approximations $\Phi(U_{2n,0.12})$ in Table 2.

                     Epanechnikov kernel      Gauß kernel
  n    N_n          b_n=1.0  b_n=0.5        b_n=1.0  b_n=0.5  b_n=0.1
  50   0.380         0.9924   0.5921         1.0000   0.8958   0.1714
       0.200         0.4358     --           0.7170   0.3102   0.1333
       0.038           --       --           0.1353   0.1272   0.1228
  100  0.380           --       --           1.0000   0.9957     --
       0.200           --       --           0.9900   0.5699     --
       0.038           --       --             --       --     0.1232
  150  0.200           --       --           0.9957     --       --

Table 1: Approximations of power $\Phi(U_{1n,0.12})$

                     Epanechnikov kernel      Gauß kernel
  n    N_n          b_n=1.0  b_n=0.5        b_n=1.0  b_n=0.5  b_n=0.1
  50   0.380         0.9958   0.9260         1.0000   0.9958   0.8359
       0.200         0.5117     --           0.8287   0.5361   0.2843
       0.038           --       --           0.1369   0.1317   0.1268
  100  0.380           --       --           1.0000   1.0000     --
       0.200           --       --           0.9957   0.9106     --
       0.038           --       --             --       --     0.1313
  150  0.200           --       --           0.9957     --       --

Table 2: Approximations of power $\Phi(U_{2n,0.12})$

In simulations we considered the power of the test using the Epanechnikov kernel or the Gauß kernel.
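Schematically, each simulated power value below is a rejection frequency over repeated samples from the local alternative. A sketch reusing the helpers from the earlier code sketches (`qn_test`, `local_alternative`, `sample`), with bandwidth $h = 0.12$ — our reading of the tables' subscript $0.12$ — and 1000 replications as in this section:

```python
import numpy as np

def mc_power(f, Nn, bn, n=50, h=0.12, reps=1000,
             rng=np.random.default_rng(1)):
    """Monte Carlo estimate of the power: fraction of rejections of H
    among `reps` samples of size n drawn from f_n = f + Nn w((.-c)/bn)."""
    fn = local_alternative(f, Nn, bn, c=0.0)
    hits = 0
    for _ in range(reps):
        x = sample(fn, size=n, rng=rng)
        _, _, reject = qn_test(x, h)
        hits += reject
    return hits / reps
```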

Power when using the Epanechnikov kernel

In Figure 2 one can see the hypothesis, the local alternative for $N_n = 0.38$ and $b_n = 1$, the density estimator with the Epanechnikov kernel

$$K(x) = \begin{cases}\tfrac34\big(1 - x^2\big) & \text{for } |x|\le1,\\ 0 & \text{else,}\end{cases}$$

based on 50 generated random numbers, and the expectation of the density estimator. The results of the simulation can be found in Table 3.
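The kernel constants $\kappa = \int K^2$ and $\lambda = \int (K*K)^2$ entering $\mu$ and $\sigma$ change with the kernel; for the Epanechnikov kernel they are easy to obtain numerically (a sketch; $\kappa = 3/5$ in closed form):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 4001)
K = 0.75 * (1.0 - x ** 2)            # Epanechnikov kernel on [-1, 1]
dx = x[1] - x[0]
kappa = np.trapz(K ** 2, x)          # int K^2  -> 0.6
KK = np.convolve(K, K) * dx          # K*K, supported on [-2, 2]
lam = np.trapz(KK ** 2, dx=dx)       # int (K*K)^2
print(kappa, lam)
```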

[Figure 2: Local alternative $f_{50}$ with $N_{50} = 0.38$, $b_{50} = 1$, hypothesis $f$, expectation $e_{E,0.15}$ and density estimator $\hat f_{E,50}$.]

In Table 4 one finds the differences $\Phi(U_{i,50,0.12}) - \beta_1(f_{50})$, $i = 1,2$, and in Table 5 the relative distances $\big|\beta_1(f_{50}) - \Phi(U_{i,50,0.12})\big|/\beta_1(f_{50})$.

  N_50          b_50=1.0  b_50=0.5
  0.380          0.79      0.37
  0.200          0.28       --

Table 3: Power $\beta_1(f_{50})$ using the Epanechnikov kernel

Power when using the Gauß kernel

In Figure 3 one finds the hypothesis, the local alternative for $N_n = 0.38$ and $b = 1$, the density estimator with the Gauß kernel based on 50 generated random numbers, and the expectation of the density estimator. Table 6 contains the results of the simulations using the Gauß kernel. The resulting differences are shown in Table 7 and the relative distances in Table 8.

                $\Phi(U_{1,50,0.12})-\beta_1(f_{50})$   $\Phi(U_{2,50,0.12})-\beta_1(f_{50})$
  N_50          b_50=1.0  b_50=0.5                  b_50=1.0  b_50=0.5
  0.380          0.2024    0.2221                    0.2058    0.5560
  0.200          0.1558      --                      0.2317      --

Table 4: Differences between the power $\beta_1(f_{50})$ and the power approximations $\Phi(U_{i,50,0.12})$ using the Epanechnikov kernel

                $|\Phi(U_{1,50,0.12})-\beta_1(f_{50})|/\beta_1(f_{50})$   $|\Phi(U_{2,50,0.12})-\beta_1(f_{50})|/\beta_1(f_{50})$
  N_50          b_50=1.0  b_50=0.5                  b_50=1.0  b_50=0.5
  0.380          0.2562    0.6003                    0.2605    1.5030
  0.200          0.5564      --                      0.8275      --

Table 5: Relative distances between the power $\beta_1(f_{50})$ and the power approximations $\Phi(U_{i,50,0.12})$ using the Epanechnikov kernel

For the case $b_n = 0.1$, $N_n = 0.038$ we repeated the simulation both with $n = 50$ and with $n = 100$ random numbers. Using $n = 50$ random numbers we got the same power $\beta_2(f_{50}) = 0.15$ as in the first simulation. However, using $n = 100$ random numbers we got the power $\beta_2(f_{100}) = \frac{7}{30} \approx 0.2333$, which is still higher. Figure 4 suggests the reason for this phenomenon. In the figure one finds a density estimation with $n = 50$ random numbers, its expectation, the hypothesis, the alternative for $b_{50} = 0.1$ and $N_{50} = 0.038$, as well as the random numbers $x_{50,i}$ which fall into the considered interval. It is obvious that the density estimator is not near the alternative. The reason for this is that the number of random variables is too small for a convenient choice of the bandwidth $h_n$ when $b_n$ is so tiny. Consequently, a simulation with $b_n = 0.1$ and $N_n = 0.038$ demands a much greater number of random variables.


[Figure 3: Local alternative $f_{50}$ with $N_{50} = 0.38$, $b_{50} = 1$, hypothesis $f$, expectation $e_{G,0.15}$ and density estimator $\hat f_{G,50}$.]

  n    N_n          b_n=1.0        b_n=0.5        b_n=0.1
  50   0.380         0.72           0.27           0.05
       0.200         0.19           0.13           0.05
       0.038         0.06           0.07           0.15
  100  0.380         29/30≈0.9667   11/30≈0.3667     --
       0.200          8/30≈0.2667    1/30≈0.0333     --
       0.038           --             --           7/30≈0.2333
  150  0.200         0.50             --             --

Table 6: Power $\beta_2(f_n)$ using the Gauß kernel

Interpretation of the results

With the simulations we could confirm that both $L_2$-tests recognize a deviation from the hypothesis density the worse, the smaller the interval of deviation and the smaller the deviation itself. Very small deviations on small intervals are interpreted as random errors, and consequently the hypothesis is not rejected. Both tests discerned a deviation from the hypothesis worse when the intervals of deviation were longer ($b_n = 1$) but the deviation itself smaller ($N_n = 0.2$) than when the intervals were shorter ($b_n = \tfrac12$) and the deviation greater ($N_n = 0.38$). That is due to the squaring of the distance in $L_2$-tests: if the distances are less than one, squaring makes them smaller still, and so a rejection of the hypothesis becomes improbable.

Furthermore, the simulations show significant differences between the two tests. The power is always higher when using the Epanechnikov kernel. Though the decisions based on these tests are the same when we have an unlimited number of random variables, the differences should be considered when the number of random variables is limited.

                $\Phi(U_{1n,0.12})-\beta_2(f_n)$        $\Phi(U_{2n,0.12})-\beta_2(f_n)$
  n    N_n     b_n=1.0  b_n=0.5  b_n=0.1        b_n=1.0  b_n=0.5  b_n=0.1
  50   0.380    0.2800   0.6258   0.1214         0.2800   0.7258   0.7859
       0.200    0.5270   0.1802   0.0833         0.6387   0.4061   0.2343
       0.038    0.0753   0.0572  -0.0272         0.0769   0.0617  -0.0232
  100  0.380    0.0333   0.6290     --           0.0333   0.6333     --
       0.200    0.7233   0.5366     --           0.7290   0.8773     --
       0.038      --       --    -0.1101           --       --    -0.1020
  150  0.200    0.4957     --       --           0.4957     --       --

Table 7: Differences between the power $\beta_2(f_n)$ and the power approximations $\Phi(U_{i,n,0.12})$ using the Gauß kernel

                $|\Phi(U_{1n,0.12})-\beta_2(f_n)|/\beta_2(f_n)$   $|\Phi(U_{2n,0.12})-\beta_2(f_n)|/\beta_2(f_n)$
  n    N_n     b_n=1.0  b_n=0.5  b_n=0.1        b_n=1.0  b_n=0.5   b_n=0.1
  50   0.380    0.3889   2.3178   2.4280         0.3889   2.6881   15.7180
       0.200    2.7737   1.3862   1.6660         3.3616   3.1238    4.6860
       0.038    1.2550   0.8171   0.1813         1.2817   0.8814    0.1547
  100  0.380    0.0344   1.7153     --           0.0344   1.7270      --
       0.200    2.7112  16.1141     --           2.7334  26.3453      --
       0.038      --       --     0.4719           --       --      0.4372
  150  0.200    0.9914     --       --           0.9914     --        --

Table 8: Relative distances between the power $\beta_2(f_n)$ and the power approximations $\Phi(U_{i,n,0.12})$ using the Gauß kernel

Moreover, the asymptotically equivalent approximations of the power, $\Phi(U_{1n})$ and $\Phi(U_{2n})$, differ when the number of random variables is finite. We see that with a finite number of random variables $\Phi(U_{1n})$ approximates the power nearly always better, and never worse, than $\Phi(U_{2n})$. Consequently, the simplifications in $\Phi(U_{2n})$ do not seem convenient for a small number of random variables.
