Comparison of Nonparametric Goodness of Fit Tests
Henning Läuter, Cornelia Sachsenweger
University of Potsdam, Institute of Mathematics PF 60 15 53, 14415 Potsdam
Abstract
We consider two tests for the hypothesis that a density lies in a parametric class of densities and compare them by means of simulation. Both tests are based on the integrated squared distance of the kernel density estimator from its hypothetical expectation; however, different kernels are used. The unknown parameter is replaced by its maximum likelihood estimate (m.l.e.).
The power of both tests is examined under local alternatives. Although both tests are asymptotically equivalent, it is shown that there is a difference between their powers when a finite number of random variables is used.
Furthermore it is shown that asymptotically equivalent approximations of the power can differ significantly for finite sample sizes.
Keywords: Goodness of fit, kernel estimator, local alternatives, simulation
AMS 1991 Subject Classification: Primary 62G10, 62G20.
The research for this paper was carried out within Sonderforschungsbereich 373 and was printed using funds made available by the Deutsche Forschungsgemeinschaft.
1 Introduction

Let $X_{n1},\dots,X_{nn}$ be a sample of i.i.d. random variables with density $f_n$. We consider the problem of testing the hypothesis
$$H:\; f_n \in \mathcal{F} = \{ f_n(\cdot,\vartheta) \mid \vartheta \in \Theta \subseteq \mathbb{R}^k \}$$
against the nonparametric alternative
$$K:\; f_n \notin \mathcal{F},$$
where $\mathcal{F}$ is a parametric class of density functions. Our test statistic is the $L_2$-norm of the distance of the kernel density estimator
$$\hat f_n(t) = (n h_n)^{-1} \sum_{i=1}^{n} K\Big(\frac{t - X_{ni}}{h_n}\Big)$$
from its expectation under the hypothesis density $f_n(\cdot,\vartheta) = f_\vartheta$,
$$E \hat f_n(t) = e_{h_n}(t, f_\vartheta) = \int K(x)\, f_\vartheta(t - x h_n)\, dx.$$
Under certain assumptions on the density of the random variables, the kernel function, the bandwidth and the weight function of the kernel density estimator, the following holds by Liero/Läuter/Konakov (1998):
Theorem 1.1 If for a parameter estimator $\hat\vartheta_n$ under the hypothesis $H$ with $X_i \sim f_\vartheta$ for each $\vartheta \in \Theta$
$$\sqrt{n}\, |\hat\vartheta_n - \vartheta| = O_P(1)$$
holds, and if $h_n (\ln n)^{a} \to 0$ for some $a > \frac{k}{2}$, then for $n \to \infty$ and all $x$
$$P_{f_\vartheta}\Big( h_n^{-1/2}\, \frac{Q_n(f_{\hat\vartheta_n}) - \mu(f_{\hat\vartheta_n})}{\sigma(f_{\hat\vartheta_n})} \le x \Big) \longrightarrow \Phi(x)$$
with
$$Q_n(f_{\hat\vartheta_n}) = n h_n \int \big( \hat f_n(t) - e_{h_n}(t, f_{\hat\vartheta_n}) \big)^2\, a(t)\, dt,$$
$$e_{h_n}(t, f_{\hat\vartheta_n}) = \int K(x)\, f_{\hat\vartheta_n}(t - x h_n)\, dx, \qquad a(t) = \text{weight function},$$
$$\hat f_n(t) = (n h_n)^{-1} \sum_{i=1}^{n} K\Big(\frac{t - X_{ni}}{h_n}\Big),$$
$$\mu(f_{\hat\vartheta_n}) = \kappa \int f_{\hat\vartheta_n}(t)\, a(t)\, dt \quad\text{with}\quad \kappa = \int K^2(x)\, dx,$$
$$\sigma^2(f_{\hat\vartheta_n}) = 2\lambda \int f_{\hat\vartheta_n}^2(t)\, a^2(t)\, dt \quad\text{with}\quad \lambda = \int (K * K)^2(x)\, dx,$$
$\Phi(u_\alpha) = 1 - \alpha$ and $\int := \int_{-\infty}^{\infty}$.
This way we get an asymptotic $\alpha$-test if we reject the hypothesis whenever
$$Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\, h_n^{1/2}\, \sigma(f_{\hat\vartheta_n}).$$
The power of a test is the probability that the test rejects the hypothesis, that is,
$$P\Big( Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\, h_n^{1/2}\, \sigma(f_{\hat\vartheta_n}) \Big).$$
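To make the test concrete, the statistic and the rejection rule can be sketched in a few lines of Python (a minimal sketch, not the authors' code: the hypothesis density `f0` stands in for $f_{\hat\vartheta_n}$, i.e. the parameter is taken as already estimated, the level is fixed at $\alpha = 0.05$, and all function names are ours):

```python
import numpy as np

UA = 1.6449  # u_alpha for alpha = 0.05 (standard normal quantile)

def epan(x):
    """Epanechnikov kernel K(x) = 3/4 (1 - x^2) on [-1, 1]."""
    return np.where(np.abs(x) <= 1.0, 0.75 * (1.0 - x**2), 0.0)

def trap(y, x):
    """Trapezoidal rule for int y dx on the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def l2_gof_test(x, h, f0, grid=None):
    """Q_n = n h int (fhat - e_h)^2 dt with weight a(t) = 1; reject when
    Q_n >= mu + u_alpha h^(1/2) sigma, as in the rule above (sketch)."""
    if grid is None:
        grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 1501)
    n = len(x)
    fhat = epan((grid[:, None] - x[None, :]) / h).sum(axis=1) / (n * h)
    u = np.linspace(-1.0, 1.0, 401)
    K = epan(u)
    # e_h(t, f0) = int K(u) f0(t - u h) du, the smoothed hypothesis density
    eh = np.array([trap(K * f0(t - u * h), u) for t in grid])
    Qn = n * h * trap((fhat - eh)**2, grid)
    kappa = trap(K**2, u)            # int K^2 (= 3/5 for the Epanechnikov kernel)
    du = u[1] - u[0]
    KK = np.convolve(K, K) * du      # convolution K * K on the u-grid
    lam = np.sum(KK**2) * du         # int (K*K)^2
    mu = kappa * trap(f0(grid), grid)
    sigma = np.sqrt(2.0 * lam * trap(f0(grid)**2, grid))
    return Qn, bool(Qn >= mu + UA * np.sqrt(h) * sigma)

rng = np.random.default_rng(0)
f0 = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)  # N(0,1) hypothesis
Qn, reject = l2_gof_test(rng.normal(size=50), h=0.15, f0=f0)
```

All integrals are approximated on fixed grids; this is only meant to mirror the structure of the statistic, not the numerical setup used for the tables below.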
We compare the power of the test with the Epanechnikov kernel to that with the Gauß kernel, always using $a(t) \equiv 1$. For this purpose we use local alternatives of the form
$$K_n:\quad f_n(\cdot, \vartheta_n) = f(\cdot, \vartheta_n) + N_n\, w\big( (\cdot - c)\, b_n^{-1} \big) \qquad (1)$$
with $\vartheta_n = \vartheta + n^{-\tau} \beta$, where $\beta$ is an unknown vector, $\tau > 0$ and $\vartheta$ is fixed. Let $N_n$ be a sequence of positive numbers converging to zero and let $c$ lie in the support of $f$. Let $w$ be a function with bounded support for which $\int w(x)\, dx = 0$. For $b_n \equiv b$ we get the Pitman alternative and for $b_n \to 0$ ($n \to \infty$) the sharp peak alternative.
In Figure 1 three such local alternatives are shown in comparison with the normal density $f(x; 0.2, 1)$.
[Figure 1: Local alternatives $f_n$ with $N_n = 0.38$, $b_n = 1$; with $N_n = 0.038$, $b_n = 1$; with $N_n = 0.38$, $b_n = 0.1$; and the hypothesis $f$.]
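The alternatives of type (1) shown in Figure 1 are easy to reproduce numerically. The sketch below (our own code; the hypothesis is taken as the $N(0.2, 1)$ density and $w(t) = \sin(-2\pi t)$ on $[-\frac12, \frac12]$ as in the later simulations, both being our reading of the figure) checks that the perturbation leaves the total mass at one, which follows from $\int w\, dx = 0$:

```python
import numpy as np

def w(t):
    """Perturbation function: sin(-2*pi*t) on [-1/2, 1/2], zero outside;
    it has bounded support and integrates to zero."""
    return np.where(np.abs(t) <= 0.5, np.sin(-2.0 * np.pi * t), 0.0)

def local_alternative(x, Nn, bn, c=0.0, mu=0.2, sigma=1.0):
    """f_n = f + N_n w((x - c)/b_n) with f the N(mu, sigma^2) density, cf. (1)."""
    f = np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2.0 * np.pi))
    return f + Nn * w((x - c) / bn)

x = np.linspace(-8.0, 8.0, 32001)
fn = local_alternative(x, Nn=0.38, bn=1.0)
mass = float(np.sum(0.5 * (fn[1:] + fn[:-1]) * np.diff(x)))  # total mass, ~ 1
```

For the parameter values of Figure 1 the perturbed function also stays nonnegative, so it is a genuine density.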
2 The Maximum Likelihood Estimation

Because the parameter $\vartheta$ is unknown, it has to be estimated. If the hypothesis holds, then
$$\hat\vartheta_n - \vartheta = n^{-1/2} Z_n$$
is true for the m.l.e. $\hat\vartheta_n$ of the parameter, where $Z_n$ is asymptotically normally distributed with expectation zero. It can be shown that under the following weak assumptions $\hat\vartheta_n$ behaves under the local alternatives similarly as under the hypothesis. The open kernel of the parameter space will be denoted by $\Theta$.

A1 For all $x \in \mathbb{R}$ the second derivatives $\frac{\partial^2}{\partial\vartheta_i \partial\vartheta_j} f(x, \vartheta)$, $i, j = 1, \dots, k$, exist. They are continuous for all $\vartheta \in \Theta$ and
$$\int \nabla_\vartheta f(x, \vartheta)\, dx = 0, \qquad \int \frac{\partial^2}{\partial\vartheta_i \partial\vartheta_j} f(x, \vartheta)\, dx = 0. \qquad (2)$$

A2 For each $\vartheta \in \Theta$ there exist a $\delta = \delta(\vartheta)$ and a ball $U_\delta(\vartheta)$ with radius $\delta$ such that for all $i, j = 1, \dots, k$
$$\Big| \frac{\partial^2}{\partial\vartheta_i \partial\vartheta_j} \ln f(x, \vartheta') \Big| \le m(x, \vartheta) \qquad \forall\, \vartheta' \in U_\delta(\vartheta) \qquad (3)$$
for a measurable function $m$ with $\int m(x, \vartheta) f(x, \vartheta)\, dx < \infty$.

A3 The determinant of the Fisher information
$$I(\vartheta) = \int \nabla_\vartheta \ln f(x, \vartheta)\, \big( \nabla_\vartheta \ln f(x, \vartheta) \big)^{T} f(x, \vartheta)\, dx \qquad (4)$$
is positive for each $\vartheta \in \Theta$.
We put
$$\Delta_n(t) = -\, n^{-\tau}\, \beta^{T} \nabla_\vartheta f(t, \tilde\vartheta) - N_n\, w\big( (t - c)\, b_n^{-1} \big)$$
with a suitable $\tilde\vartheta$ between $\hat\vartheta_n$ and $\vartheta$. Then the following holds:

Theorem 2.1 If the assumptions A1, A2 and A3 hold, then
$$\int \nabla_\vartheta \ln f(t, \vartheta)\, \Delta_n(t)\, dt = O(\gamma_n)$$
for the sequence $\gamma_n = n^{-\tau} + N_n b_n$, and under $K_n$ we get for the m.l.e.
$$\hat\vartheta_n - \vartheta = n^{-1/2} Z_n + \gamma_n S + o_P(\gamma_n). \qquad (5)$$
$Z_n$ is asymptotically normally distributed with expectation zero and $S$ is a nonrandom vector. The proof can be found in Liero/Läuter/Konakov (1998).

In the family $\mathcal{F}$ of normal distribution densities with the $k(=2)$-dimensional unknown parameter $(\vartheta^{(1)}, \vartheta^{(2)})$ with expectation $g(\vartheta^{(1)})$ and variance $\vartheta^{(2)} = \sigma^2$, which we will consider in the following, the assumptions A1, A2 and A3 hold. Following Theorem 2.1, for the sequence $\gamma_n = n^{-\tau} + N_n b_n$,
$$\int \nabla_\vartheta \ln f(t, \vartheta)\, \Delta_n(t)\, dt = \int \begin{pmatrix} \frac{(t - g(\vartheta^{(1)}))\, g'(\vartheta^{(1)})}{\sigma^2} \\ \frac{(t - g(\vartheta^{(1)}))^2 - \sigma^2}{2\sigma^4} \end{pmatrix} \Delta_n(t)\, dt = O(\gamma_n).$$
Consequently, with Theorem 2.1 it follows that the m.l.e. behaves under the local alternatives similarly as under the hypothesis. It is obvious that the m.l.e. for $g(\vartheta_n^{(1)})$ equals the arithmetic mean of the observations. To obtain the value of the m.l.e. $\hat\vartheta_n^{(1)}$ from $g(\hat\vartheta_n^{(1)})$ we require the function $g$ to be uniquely invertible; in the following examples $g$ has this property. The m.l.e. of $\vartheta^{(2)} = \sigma^2$ is always
$$\hat\sigma_n^2 = \frac{1}{n} \sum_{i=1}^{n} \big( x_i - g(\hat\vartheta_n^{(1)}) \big)^2 = \frac{1}{n} \sum_{i=1}^{n} x_i^2 - g(\hat\vartheta_n^{(1)})^2.$$
Therefore we will only give the values of the m.l.e. for $\vartheta^{(1)}$ in the following.

Now we want to determine the nonrandom vector $S$ in Theorem 2.1, which is independent of $n$. From the proof of the theorem we get
$$n\, T_n^{-1}(X_n, \vartheta)\, n^{1/2} E_n \nabla_\vartheta \ln f(X_{nl}, \vartheta) = n^{1/2} \gamma_n \big( S + o_P(1) \big).$$
$T_n = (T_{n,ij})$ is the matrix with
$$T_{n,ij}(x, \vartheta) = \frac{\partial^2}{\partial\vartheta_i \partial\vartheta_j} \ln \prod_{l=1}^{n} f(x_{nl}, \vartheta),$$
and with the law of large numbers we see $n^{-1} T_n(X_n, \vartheta) \to -I(\vartheta)$ for $n \to \infty$ with the Fisher information $I(\vartheta)$. It follows
$$-\, \gamma_n^{-1}\, I(\vartheta)^{-1} E_n \nabla_\vartheta \ln f(X_{nl}, \vartheta) =: \tilde S_n \longrightarrow S \qquad (n \to \infty).$$
We get
$$I(\vartheta)^{-1} = \begin{pmatrix} \frac{\sigma^2}{g'(\vartheta^{(1)})^2} & 0 \\ 0 & 2\sigma^4 \end{pmatrix}$$
and with $\beta^{T} = (\beta_1, \beta_2)$
$$E_n \nabla_\vartheta \ln f(X_{nl}, \vartheta) = \begin{pmatrix} \int \frac{(t - g(\vartheta^{(1)}))\, g'(\vartheta^{(1)})}{\sigma^2}\, \Delta_n(t)\, dt \\ \int \Big( \frac{(t - g(\vartheta^{(1)}))^2}{2\sigma^4} - \frac{1}{2\sigma^2} \Big) \Delta_n(t)\, dt \end{pmatrix} = \begin{pmatrix} -\, n^{-\tau}\, \frac{g'(\vartheta^{(1)})^2 \beta_1}{\sigma^2} - \frac{N_n\, g'(\vartheta^{(1)})}{\sigma^2} \int t\, w\big( (t - c)\, b_n^{-1} \big)\, dt \\ -\, n^{-\tau}\, \frac{\beta_2}{2\sigma^4} - \frac{N_n}{2\sigma^4} \int \big( t - g(\vartheta^{(1)}) \big)^2 w\big( (t - c)\, b_n^{-1} \big)\, dt \end{pmatrix}.$$
Now for the vector $\tilde S_n$ we get
$$\tilde S_n = -\, \gamma_n^{-1}\, I(\vartheta)^{-1} E_n \nabla_\vartheta \ln f(X_{nl}, \vartheta) = \big( n^{-\tau} + N_n b_n \big)^{-1} \begin{pmatrix} n^{-\tau} \beta_1 + \frac{N_n}{g'(\vartheta^{(1)})} \int t\, w\big( (t - c)\, b_n^{-1} \big)\, dt \\ n^{-\tau} \beta_2 + N_n \int \big( t - g(\vartheta^{(1)}) \big)^2 w\big( (t - c)\, b_n^{-1} \big)\, dt \end{pmatrix}.$$
With the substitution $z := (t - c)/b_n$ and $\int w(z)\, dz = 0$ it is
$$\tilde S_n = \big( n^{-\tau} + N_n b_n \big)^{-1} \begin{pmatrix} n^{-\tau} \beta_1 + \frac{N_n b_n^2}{g'(\vartheta^{(1)})} \int z\, w(z)\, dz \\ n^{-\tau} \beta_2 + N_n b_n^3 \int z^2 w(z)\, dz + 2 N_n b_n^2 \big( c - g(\vartheta^{(1)}) \big) \int z\, w(z)\, dz \end{pmatrix}.$$
Taking limits we determine the vector $S$, which is independent of $n$. For this we consider various cases according to the two local alternatives. It will turn out that depending on the rates we get different vectors $S$; consequently, the vector $S$ depends on the local alternatives. Let the vector $\tilde S_n$ for $n \to \infty$ converge to $S_{1j}$ in the case of Pitman alternatives and to $S_{2j}$ in the case of sharp peak alternatives. Let $j = 1$ for $n^{-\tau}/N_n \to \infty$, $j = 2$ for $n^{-\tau}/N_n \to 0$ and $j = 3$ for $n^{-\tau}/N_n \to \text{const}$.

First let $K_n$ be Pitman alternatives, that is $b_n \equiv b$.

1.1 If $n^{-\tau}/N_n \to \infty$ we get $\gamma_n = n^{-\tau}\big( 1 + N_n n^{\tau} b \big)$, and the terms of the vector $\tilde S_n$ containing the factor $N_n n^{\tau}$ can be eliminated for determining the vector $S$. We get
$$S_{11} = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}.$$
This vector $S$ is consequently independent of the considered model, that is of $g(\vartheta^{(1)})$.

1.2 If $n^{-\tau}/N_n \to 0$ we get $\gamma_n \approx N_n b$, and the terms of the vector $\tilde S_n$ containing the factor $n^{-\tau}/N_n$ can be eliminated for determining the vector $S$. In this case we get
$$S_{12} = \begin{pmatrix} \frac{b}{g'(\vartheta^{(1)})} \int z\, w(z)\, dz \\ b^2 \int z^2 w(z)\, dz + 2 b \big( c - g(\vartheta^{(1)}) \big) \int z\, w(z)\, dz \end{pmatrix}.$$
1.3 If $n^{-\tau}/N_n \to \text{const} =: L_1$ we get $n^{-\tau} \approx L_1 N_n$ and $\gamma_n \approx (L_1 + b) N_n$. In this case we have
$$S_{13} = \begin{pmatrix} \frac{L_1 \beta_1}{L_1 + b} + \frac{b^2}{(L_1 + b)\, g'(\vartheta^{(1)})} \int z\, w(z)\, dz \\ \frac{L_1 \beta_2}{L_1 + b} + \frac{b^3}{L_1 + b} \int z^2 w(z)\, dz + \frac{2 b^2 \big( c - g(\vartheta^{(1)}) \big)}{L_1 + b} \int z\, w(z)\, dz \end{pmatrix}.$$
For sharp peak alternatives, that is $b_n \to 0$ for $n \to \infty$, we get:

2.1 If $n^{-\tau}/(N_n b_n) \to \infty$ then $S_{21} = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}$.

2.2 If $n^{-\tau}/(N_n b_n) \to 0$ then $S_{22} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.

2.3 If $n^{-\tau}/(N_n b_n) \to L_2$ then $S_{23} = \begin{pmatrix} \frac{L_2 \beta_1}{L_2 + 1} \\ \frac{L_2 \beta_2}{L_2 + 1} \end{pmatrix}$.

Obviously, under the sharp peak alternatives the vector $S$ is always independent of $g(\vartheta^{(1)})$.
Now the m.l.e. and the vectors $S_{12}$ and $S_{13}$ will be determined for two models of normal distribution under the special local alternatives with the error function
$$w(t) = \begin{cases} \sin(-2\pi t) & \text{for } t \in [-\tfrac12, \tfrac12], \\ 0 & \text{else.} \end{cases}$$
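The two moments of $w$ that enter $S_{12}$ and $S_{13}$ can be verified by quadrature: under this reading of $w$, $\int z\, w(z)\, dz = -\frac{1}{2\pi}$ and $\int z^2 w(z)\, dz = 0$ (a small numeric check, names ours):

```python
import numpy as np

z = np.linspace(-0.5, 0.5, 100001)   # the support of w
wz = np.sin(-2.0 * np.pi * z)        # w on its support
dz = z[1] - z[0]

def trap(y):
    """Trapezoidal rule on the fixed z-grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dz)

m0 = trap(wz)          # int w(z) dz      -> 0  (w is odd)
m1 = trap(z * wz)      # int z w(z) dz    -> -1/(2*pi), about -0.1592
m2 = trap(z**2 * wz)   # int z^2 w(z) dz  -> 0  (odd integrand)
```

These are exactly the constants that produce the factors $-\frac{b}{2\pi}$ and the vanishing $\int z^2 w$ terms in the vectors below.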
We have independent, identically distributed random variables $Y_i = \mu + \varepsilon_i$. Let $\varepsilon_i$ be normally distributed with expectation zero and variance $\sigma^2$, that is $Y_i \sim N(\mu, \sigma^2)$. It is $g(\vartheta^{(1)}) = \mu$, and the m.l.e. is
$$\hat\mu = \frac{1}{n} \sum_{i=1}^{n} x_i.$$
For the vectors $S_{12}$ and $S_{13}$ we get
$$S_{12} = \begin{pmatrix} -\frac{b}{2\pi} \\ -\frac{b\, (c - \mu)}{\pi} \end{pmatrix}, \qquad S_{13} = \begin{pmatrix} \frac{L_1 \beta_1}{L_1 + b} - \frac{b^2}{2\pi (L_1 + b)} \\ \frac{L_1 \beta_2}{L_1 + b} - \frac{b^2 (c - \mu)}{\pi (L_1 + b)} \end{pmatrix}.$$
Now we have independent, identically normally distributed random variables with $EY = e^{-\vartheta t}$. The parameter $\vartheta^{(1)} = \vartheta$ is to be estimated by
$$\hat\vartheta_n = -\, \frac{1}{t} \ln\Big( \frac{1}{n} \sum_{i=1}^{n} x_i \Big).$$
For the vectors $S_{12}$ and $S_{13}$ we get
$$S_{12} = \begin{pmatrix} \frac{b}{2\pi t\, e^{-\vartheta t}} \\ -\frac{b\, (c - e^{-\vartheta t})}{\pi} \end{pmatrix}, \qquad S_{13} = \begin{pmatrix} \frac{L_1 \beta_1}{L_1 + b} + \frac{b^2}{2\pi t\, e^{-\vartheta t} (L_1 + b)} \\ \frac{L_1 \beta_2}{L_1 + b} - \frac{b^2 (c - e^{-\vartheta t})}{\pi (L_1 + b)} \end{pmatrix}.$$
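The two m.l.e. maps just described are straightforward to write down (a sketch with our own function names; `mle_model2` assumes one common known $t$ for all observations, as in the model above):

```python
import numpy as np

def mle_model1(x):
    """Model Y_i ~ N(mu, sigma^2): g is the identity, so the m.l.e. of mu is the
    mean, and sigma^2-hat = mean(x^2) - mean(x)^2 as in the formula above."""
    mu_hat = float(np.mean(x))
    var_hat = float(np.mean(x**2) - mu_hat**2)
    return mu_hat, var_hat

def mle_model2(x, t):
    """Model EY = exp(-theta * t): g(theta) = exp(-theta * t) is uniquely
    invertible, so theta-hat = -log(mean(x)) / t."""
    return float(-np.log(np.mean(x)) / t)
```

For example, `mle_model1(np.array([1.0, 2.0, 3.0]))` returns the pair `(2.0, 2/3)`.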
3 Power of the tests

When we use the test which rejects the hypothesis if
$$Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\, h_n^{1/2}\, \sigma(f_{\hat\vartheta_n}),$$
then $\hat\vartheta_n$ will be the m.l.e. in the following. For the asymptotic power under the local alternatives one can find explicit approximations. In Liero/Läuter/Konakov (1998) the following result was proved:
Theorem 3.1 Given
$$n h_n^{1/2} N_n^2 b_n = c_{1n}, \quad n^{1-2\tau} h_n^{1/2} = c_{2n}, \quad n h_n^{1/2} \gamma_n^2 = c_{3n},$$
$$h_n^{-1/2} N_n b_n = c_{4n}, \quad h_n^{-1/2} n^{-\tau} = c_{5n}, \quad h_n^{-1/2} \gamma_n = c_{6n}.$$
Under similar assumptions as in Theorem 1.1 and $h_n b_n^{-1} \to 0$ we get for the power
$$\pi(f_n) := P\big( Q_n(f_{\hat\vartheta_n}) \ge \mu(f_{\hat\vartheta_n}) + u_\alpha\, h_n^{1/2}\, \sigma(f_{\hat\vartheta_n}) \,\big|\, f_n \big)$$
that $\lim_{n\to\infty} \big( \pi(f_n) - \Phi(U_{1n}) \big) = 0$ with
$$U_{1n} = \sigma^{-1}(f_\vartheta) \Big\{ c_{1n} \int w^2(t)\, a(c - t b_n)\, dt + \int \big[ (\sqrt{c_{2n}}\, \beta - \sqrt{c_{3n}}\, S)^{T} \nabla_\vartheta f(t, \vartheta) \big]^2 a(t)\, dt$$
$$+\; 2 \sqrt{c_{1n} b_n} \int (\sqrt{c_{2n}}\, \beta - \sqrt{c_{3n}}\, S)^{T} \nabla_\vartheta f(c - t b_n, \vartheta)\, w(t)\, a(c - t b_n)\, dt$$
$$+\; \kappa \Big[ c_{4n} \int w(t)\, a(c - t b_n)\, dt + \int (c_{5n}\, \beta - c_{6n}\, S)^{T} \nabla_\vartheta f(t, \vartheta)\, a(t)\, dt \Big] \Big\} - u_\alpha.$$
Considering the asymptotic behaviour we get $c_{1n} = n h_n^{1/2} N_n^2 b_n$ and $c_{4n} = h_n^{-1/2} N_n b_n$ as leading coefficients in the approximation of the power. Therefore we get the following simpler approximation of the power, $U_{2n}$, which is asymptotically equivalent to $U_{1n}$.

Theorem 3.2 (Pitman alternative) Given $b_n \equiv b$. For
$$U_{2n} = \sigma^{-1}(f_\vartheta) \Big[ c_{1n} \int \big( w(t) - b\, S^{T} \nabla_\vartheta f(c - t b_n, \vartheta) \big)^2\, dt \Big] - u_\alpha$$
we have $\lim_{n\to\infty} \big( \pi(f_n) - \Phi(U_{2n}) \big) = 0$.

Theorem 3.3 (Sharp peak alternative) Given $b_n \to 0$ for $n \to \infty$. For
$$U_{2n} = \sigma^{-1}(f_\vartheta) \Big[ c_{1n} b_n \int w^2(t)\, a(t)\, dt + T_n \Big] - u_\alpha$$
with
$$T_n = c_{4n} b_n \Big( b_n\, a'(c) \int w(t)\, t\, dt - S^{T} \int \nabla_\vartheta f_\vartheta(t)\, a(t)\, dt \Big)$$
we have $\lim_{n\to\infty} \big( \pi(f_n) - \Phi(U_{2n}) \big) = 0$.

Under assumption A1 and the choice $a(t) \equiv 1$ we have $T_n = 0$. Consequently we get for Pitman and sharp peak alternatives:

Theorem 3.4 If $h_n b_n^{-1} \to 0$ and $n h_n^{3/2} \to \infty$ for $n \to \infty$, then
$$\lim_{n\to\infty} \pi(f_n) \begin{cases} = 1 & \text{for } b_n c_{1n} \to \infty, \\ \in (\alpha, 1) & \text{for } b_n c_{1n} \to \text{const} > 0, \\ = \alpha & \text{for } b_n c_{1n} \to 0. \end{cases}$$
4 Exactness of the approximations

In the following we consider how $\Phi(U_{1n})$ and $\Phi(U_{2n})$ approximate the power when we have a finite number of observations. Let $\mathcal{F}$ be the class of normal distributions $N(\mu \mathbf{1}, \sigma^2 I)$ with $\mathbf{1}^{T} = (1, \dots, 1)$ and $\mu = g(\vartheta_n^{(1)})$. For constructing the local alternatives we choose
$$w(t) = \begin{cases} \sin(-2\pi t) & \text{for } t \in [-\tfrac12, \tfrac12], \\ 0 & \text{else,} \end{cases}$$
$c = 0$ and [?] $= 1000$. With $S = (S^{(1)}, S^{(2)})^{T}$, $f_1 = \frac{\partial f(t,\vartheta)}{\partial \vartheta^{(1)}}$, $f_2 = \frac{\partial f(t,\vartheta)}{\partial \sigma^2}$, $f_{11} = \frac{\partial f(c - t b_n,\vartheta)}{\partial \vartheta^{(1)}}$ and $f_{22} = \frac{\partial f(c - t b_n,\vartheta)}{\partial \sigma^2}$, as well as assumption A1 about the hypothesis density, we get
$$U_{1n} = \sigma^{-1}(f_\vartheta) \Big( \frac{c_{1n}}{2} + \int \big( -\sqrt{c_{3n}}\, S^{(1)} f_1 + (\sqrt{c_{2n}} - \sqrt{c_{3n}}\, S^{(2)}) f_2 \big)^2\, dt$$
$$+\; 2 \sqrt{c_{1n} b_n} \int_{-1/2}^{1/2} \big( -\sqrt{c_{3n}}\, S^{(1)} f_{11} + (\sqrt{c_{2n}} - \sqrt{c_{3n}}\, S^{(2)}) f_{22} \big) \sin(-2\pi t)\, dt \Big) - u_\alpha,$$
$$U_{2n} = \sigma^{-1}(f_\vartheta) \Big[ c_{1n} \int \big( w(t) - b\, (S^{(1)} f_{11} + S^{(2)} f_{22}) \big)^2\, dt \Big] - u_\alpha \qquad \text{(Pitman alternative)},$$
$$U_{2n} = \sigma^{-1}(f_\vartheta)\, \frac{c_{1n} b_n}{2} - u_\alpha \qquad \text{(sharp peak alternative)}.$$
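Under $a(t) \equiv 1$ and $\int w^2 = \frac12$ the sharp-peak approximation just stated also makes the trichotomy of Theorem 3.4 tangible numerically. The sketch below (our own code; the bandwidth rule $h_n = n^{-2/5}$, $\sigma(f_\vartheta) = 1$ and $\alpha = 0.05$ are illustrative assumptions, not the paper's choices) shows the approximate power climbing towards one when $b_n c_{1n} \to \infty$:

```python
from math import erf, sqrt

UA = 1.6449  # u_alpha for alpha = 0.05

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power_sharp_peak(n, Nn, bn, sigma_f=1.0):
    """Phi(U_2n) with a(t) = 1 and int w^2 = 1/2, using
    c_1n = n h^(1/2) N_n^2 b_n and the assumed rule h_n = n**(-2/5)."""
    h = n ** (-0.4)
    c1n = n * sqrt(h) * Nn**2 * bn
    return Phi((c1n * bn / 2.0) / sigma_f - UA)

# fixed N_n and b_n, so b_n * c_1n -> infinity: approximate power -> 1
powers = [approx_power_sharp_peak(n, Nn=0.38, bn=0.5) for n in (50, 500, 5000)]
```

With $N_n$ and $b_n$ held fixed, $b_n c_{1n}$ grows like $n^{4/5}$, which is the first case of Theorem 3.4.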
As values for $N_n$ and $b_n$ we choose 0.38, 0.2, 0.038 and 1.0, 0.5, 0.1, respectively. Let $n$ be 50, 100 or 150. Each of these cases can be embedded into the Pitman or the sharp peak alternatives; the classification of a special case with fixed $n$ into one of the two classes is somewhat subjective. We choose sharp peak alternatives as the model for our considerations.
For some $N_n$ and $b_n$ one can find the approximations $\Phi(U_{1n,0.12})$ in Table 1 and the simpler approximations $\Phi(U_{2n,0.12})$ in Table 2.

  n     N_n      Epanechnikov kernel       Gauß kernel
                 b_n = 1.0   b_n = 0.5     b_n = 1.0   b_n = 0.5   b_n = 0.1
  50    0.380    0.9924      0.5921        1.0000      0.8958      0.1714
        0.200    0.4358      -             0.7170      0.3102      0.1333
        0.038    -           -             0.1353      0.1272      0.1228
  100   0.380    -           -             1.0000      0.9957      -
        0.200    -           -             0.9900      0.5699      -
        0.038    -           -             -           -           0.1232
  150   0.200    -           -             -           0.9957      -

Table 1: Approximations of power $\Phi(U_{1n,0.12})$
  n     N_n      Epanechnikov kernel       Gauß kernel
                 b_n = 1.0   b_n = 0.5     b_n = 1.0   b_n = 0.5   b_n = 0.1
  50    0.380    0.9958      0.9260        1.0000      0.9958      0.8359
        0.200    0.5117      -             0.8287      0.5361      0.2843
        0.038    -           -             0.1369      0.1317      0.1268
  100   0.380    -           -             1.0000      1.0000      -
        0.200    -           -             0.9957      0.9106      -
        0.038    -           -             -           -           0.1313
  150   0.200    -           -             -           0.9957      -

Table 2: Approximations of power $\Phi(U_{2n,0.12})$
In simulations we considered the power of the test using the Epanechnikov kernel or the Gauß kernel.

Power when using the Epanechnikov kernel

In Figure 2 one can see the hypothesis, the local alternative for $N_n = 0.38$ and $b_n = 1$, the density estimator with the Epanechnikov kernel
$$K(x) = \begin{cases} \frac34 (1 - x^2) & \text{for } |x| \le 1, \\ 0 & \text{else,} \end{cases}$$
based on 50 generated random numbers, and the expectation of the density estimator. The results of the simulation can be found in Table 3.
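A simulated power value of this kind is simply the rejection frequency over repeated samples drawn from the local alternative. A minimal Monte Carlo loop of that shape could look as follows (our own sketch: the rejection sampler, the envelope constant, the $N(0.2, 1)$ hypothesis and the function names are illustrative assumptions; `reject` stands for any decision function):

```python
import numpy as np

def fn_density(x, Nn, bn, c=0.0, mu=0.2, sigma=1.0):
    """Local alternative f_n = f + N_n w((x-c)/b_n) with N(mu, sigma^2) hypothesis."""
    f = np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2.0 * np.pi))
    t = (x - c) / bn
    return f + Nn * np.where(np.abs(t) <= 0.5, np.sin(-2.0 * np.pi * t), 0.0)

def sample_fn(n, Nn, bn, rng, lo=-6.0, hi=6.0, fmax=0.85):
    """Rejection sampling from f_n with a flat envelope of height fmax on [lo, hi]."""
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(lo, hi, size=4 * n)
        u = rng.uniform(0.0, fmax, size=4 * n)
        out = np.concatenate([out, x[u < fn_density(x, Nn, bn)]])
    return out[:n]

def mc_power(reject, n=50, reps=100, Nn=0.38, bn=1.0, seed=1):
    """Fraction of Monte Carlo repetitions in which the test rejects."""
    rng = np.random.default_rng(seed)
    return sum(bool(reject(sample_fn(n, Nn, bn, rng))) for _ in range(reps)) / reps
```

For instance, `mc_power(lambda x: x.mean() > 0.4)` runs the loop with a dummy decision rule; in the actual study `reject` would be the $L_2$-test with the chosen kernel.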
[Figure 2: Local alternative $f_{50}$ with $N_{50} = 0.38$, $b_{50} = 1$, hypothesis $f$, expectation $e_{E,0.15}$ and density estimator $\hat f_{E,50}$.]

In Table 4 the differences $\Phi(U_{i,50,0.12}) - \pi_1(f_{50})$, $i = 1, 2$, are given, and in Table 5 the relative distances $|\pi_1(f_{50}) - \Phi(U_{i,50,0.12})| / \pi_1(f_{50})$.
  N_50     b_50 = 1.0   b_50 = 0.5
  0.380    0.79         0.37
  0.200    0.28         -

Table 3: Power $\pi_1(f_{50})$ using the Epanechnikov kernel
Power when using the Gauß kernel

In Figure 3 one finds the hypothesis, the local alternative for $N_n = 0.38$ and $b = 1$, the density estimator with the Gauß kernel based on 50 generated random numbers, and the expectation of the density estimator. Table 6 contains the results of the simulations using the Gauß kernel. We got the differences shown in Table 7 and the relative distances shown in Table 8.
  N_50     $\Phi(U_{1,50,0.12}) - \pi_1(f_{50})$     $\Phi(U_{2,50,0.12}) - \pi_1(f_{50})$
           b_50 = 1.0   b_50 = 0.5                   b_50 = 1.0   b_50 = 0.5
  0.380    0.2024       0.2221                       0.2058       0.5560
  0.200    0.1558       -                            0.2317       -

Table 4: Differences between the power $\pi_1(f_{50})$ and the power approximations $\Phi(U_{i,50,0.12})$ using the Epanechnikov kernel
  N_50     $|\Phi(U_{1,50,0.12}) - \pi_1(f_{50})| / \pi_1(f_{50})$     $|\Phi(U_{2,50,0.12}) - \pi_1(f_{50})| / \pi_1(f_{50})$
           b_50 = 1.0   b_50 = 0.5                                     b_50 = 1.0   b_50 = 0.5
  0.380    0.2562       0.6003                                         0.2605       1.5030
  0.200    0.5564       -                                              0.8275       -

Table 5: Relative distances between the power $\pi_1(f_{50})$ and the power approximations $\Phi(U_{i,50,0.12})$ using the Epanechnikov kernel
For the case $b_n = 0.1$, $N_n = 0.038$ we repeated the simulation with $n = 50$ as well as with $n = 100$ random numbers. Using $n = 50$ random numbers we got the same power $\pi_2(f_{50}) = 0.15$ as in the first simulation. However, using $n = 100$ random numbers we got the power $\pi_2(f_{100}) = 7/30 \approx 0.2333$, which is still higher. Figure 4 suggests the reason for this phenomenon. The figure shows a density estimation with $n = 50$ random numbers, its expectation, the hypothesis, the alternative for $b_{50} = 0.1$ and $N_{50} = 0.038$, as well as the random numbers $x_{50,i}$ which fall into the considered interval. Obviously the density estimator is not close to the alternative. The reason for this is that the number of random variables is too small for a suitable choice of the bandwidth $h_n$ when $b_n$ is so tiny. Consequently, a simulation with $b_n = 0.1$ and $N_n = 0.038$ demands a much greater number of random variables.

Interpretation of the results
[Figure 3: Local alternative $f_{50}$ with $N_{50} = 0.38$, $b_{50} = 1$, hypothesis $f$, expectation $e_{G,0.15}$ and density estimator $\hat f_{G,50}$.]

  n     N_n      b_n = 1.0        b_n = 0.5        b_n = 0.1
  50    0.380    0.72             0.27             0.05
        0.200    0.19             0.13             0.05
        0.038    0.06             0.07             0.15
  100   0.380    29/30 ≈ 0.9667   11/30 ≈ 0.3667   -
        0.200    8/30 ≈ 0.2667    1/30 ≈ 0.0333    -
        0.038    -                -                7/30 ≈ 0.2333
  150   0.200    -                0.50             -

Table 6: Power $\pi_2(f_n)$ using the Gauß kernel

With the simulations we could confirm that both $L_2$-tests recognize a distance from the hypothesis density worse with a declining interval of the distance and a declining distance itself. Very small distances on small intervals are interpreted as random errors, and consequently the hypothesis will not be rejected. Both tests discerned a distance from the hypothesis worse when the intervals of the distance were longer ($b_n = 1$) but the distance itself smaller ($N_n = 0.2$) than when the intervals were shorter ($b_n = 1/2$) and the distance greater ($N_n = 0.38$). That is due to the squaring of the distance in $L_2$-tests: if the distances are less than one, squaring makes them smaller still, and so a rejection of the hypothesis becomes improbable.

Furthermore, the simulations show significant differences between both tests. The power is always higher when using the Epanechnikov kernel. Though the decisions based on these tests are the same when we have an unlimited number of random variables, the differences should
  n     N_n      $\Phi(U_{1n,0.12}) - \pi_2(f_n)$          $\Phi(U_{2n,0.12}) - \pi_2(f_n)$
                 b_n = 1.0   b_n = 0.5   b_n = 0.1         b_n = 1.0   b_n = 0.5   b_n = 0.1
  50    0.380    0.2800      0.6258      0.1214            0.2800      0.7258      0.7859
        0.200    0.5270      0.1802      0.0833            0.6387      0.4061      0.2343
        0.038    0.0753      0.0572     -0.0272            0.0769      0.0617     -0.0232
  100   0.380    0.0333      0.6290      -                 0.0333      0.6333      -
        0.200    0.7233      0.5366      -                 0.7290      0.8773      -
        0.038    -           -          -0.1101            -           -          -0.1020
  150   0.200    -           0.4957      -                 -           0.4957      -

Table 7: Differences between the power $\pi_2(f_n)$ and the power approximations $\Phi(U_{in,0.12})$ using the Gauß kernel
  n     N_n      $|\Phi(U_{1n,0.12}) - \pi_2(f_n)| / \pi_2(f_n)$     $|\Phi(U_{2n,0.12}) - \pi_2(f_n)| / \pi_2(f_n)$
                 b_n = 1.0   b_n = 0.5   b_n = 0.1                   b_n = 1.0   b_n = 0.5   b_n = 0.1
  50    0.380    0.3889      2.3178      2.4280                      0.3889      2.6881     15.7180
        0.200    2.7737      1.3862      1.6660                      3.3616      3.1238      4.6860
        0.038    1.2550      0.8171      0.1813                      1.2817      0.8814      0.1547
  100   0.380    0.0344      1.7153      -                           0.0344      1.7270      -
        0.200    2.7112     16.1141      -                           2.7334     26.3453      -
        0.038    -           -           0.4719                      -           -           0.4372
  150   0.200    -           0.9914      -                           -           0.9914      -

Table 8: Relative distances between the power $\pi_2(f_n)$ and the power approximations $\Phi(U_{in,0.12})$ using the Gauß kernel
be considered when the number of random variables is limited.

Moreover, the asymptotically equivalent approximations of the power, $\Phi(U_{1n})$ and $\Phi(U_{2n})$, differ when the number of random variables is finite. We see that with a finite number of random variables $\Phi(U_{1n})$ approximates the power nearly always better and never worse than $\Phi(U_{2n})$. Consequently, the simplifications in (