The Asymptotic Properties of Burg Estimators


by

Günter Hainz

Universität Heidelberg

Address:
Günter Hainz
Institut für Angewandte Mathematik
Universität Heidelberg
Im Neuenheimer Feld 294
D-69120 Heidelberg
Germany

Abbreviated title: Properties of Burg Estimators

Abstract. There are estimators for multivariate autoregressive models which are regarded as multivariate versions of Burg's univariate estimator. For two of these multivariate Burg estimators the asymptotic equivalence with the Yule-Walker estimator is established in this paper, so central limit theorems for the Yule-Walker estimator extend to these estimators. Furthermore, the asymptotic bias of the univariate Burg estimator to terms of order $n^{-1}$ is shown to be the same as the bias of the least-squares estimator; $n$ is the number of observations. The main results are true even for mis-specified models.

AMS 1991 subject classifications. Primary 62M10.

Keywords and phrases. Burg estimator; asymptotic bias; central limit theorem.

1 Preliminaries

The most popular estimators for autoregressive models seem to be the Yule-Walker, the least squares, and the Burg estimators. The Burg estimator was introduced by Burg (1968) for univariate time series, and it was generalized to multivariate models by Morf et al. (1978), Strand (1977) and others (cf. Jones, 1978).

Central limit theorems are well known for the Yule-Walker and the least squares estimators, as is their asymptotic bias in the univariate case (Shaman and Stine, 1988). Nicholls and Pope (1988) also calculated the asymptotic bias of the multivariate least squares estimator. Kay and Makhoul (1983) showed the asymptotic equivalence of the univariate Burg estimator and the Yule-Walker estimator, but neither the asymptotic distribution of the multivariate Burg estimator nor its bias seems to be known. Simulations indicate that the bias of the univariate Burg estimator is about as large as the bias of the least squares estimator (Lysne and Tjøstheim, 1987), which tends to be smaller than the bias of the YW estimator, especially if the process has roots near the unit circle (Shaman and Stine, 1988). But unlike the least squares estimator, the Burg estimators are stable (or: causal), which is a property often asked for.

In this paper the asymptotic properties of the Burg estimators are investigated further: after the estimators have been defined, the asymptotic equivalence of the multivariate Burg estimators and the Yule-Walker estimator is established in Section 2; the equivalence holds even for mis-specified models. In Section 3 the asymptotic bias of the univariate Burg estimator is shown to be the same as the bias of the least squares estimator.

Let $\{X_t,\ t \in \mathbb{Z}\}$ be a stationary stochastic process with values in $\mathbb{R}^d$; let the mean of its components be zero and the variance be finite. The process can be forecasted by the linear predictor $\hat X_t^{f} := \sum_{j=1}^{p} A_j^{(p)} X_{t-j}$ or, in reversed time, by $\hat X_t^{b} := \sum_{j=1}^{p} B_j^{(p)} X_{t+j}$, where the $A_j^{(p)}$ and $B_j^{(p)}$ are the matrices which minimize the traces of $S_p^{f} := E(X_t - \hat X_t^{f})(X_t - \hat X_t^{f})^{\top}$ and $S_p^{b} := E(X_t - \hat X_t^{b})(X_t - \hat X_t^{b})^{\top}$. It is well known that this leads to the Yule-Walker equations

$$\begin{pmatrix} I & -A_1^{(p)} & \cdots & -A_p^{(p)} \\ -B_p^{(p)} & \cdots & -B_1^{(p)} & I \end{pmatrix} R_{p+1} = \begin{pmatrix} S_p^{f} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & S_p^{b} \end{pmatrix}, \qquad (1.1)$$

where

$$R_{p+1} := \bigl( R(i-j) \bigr)_{i,j=0,\dots,p} \qquad (1.2)$$

is a regular matrix of autocovariances $R(k) := E X_t X_{t+k}^{\top}$; $0$ and $I$ are the zero and identity matrices. If $d = 1$, then $A_j^{(p)} = B_j^{(p)}$ and $S_p^{f} = S_p^{b}$ hold, and we define $a_j^{(p)} := A_j^{(p)}$ and $\sigma_p^2 := S_p^{f}$. The autocovariances are estimated by $\hat R(k) := n^{-1} \sum_{t=1}^{n-k} X_t X_{t+k}^{\top}$, $\hat R(-k) := \hat R(k)^{\top}$ ($k \ge 0$), where $X_1, \dots, X_n$ are the available data. If the mean of the process is unknown, we subtract the arithmetic mean from the data.

After replacing $R$ by $\hat R$ in (1.2) we get the Yule-Walker (YW) estimator $\hat A_1^{(p)}, \dots, \hat A_p^{(p)}$, $\hat B_1^{(p)}, \dots, \hat B_p^{(p)}$, $\hat S_p^{f}$, $\hat S_p^{b}$ as solution of (1.1); in the univariate case we get $\hat a_1^{(p)}, \dots, \hat a_p^{(p)}$, $\hat\sigma_p^2$. $\hat A^{(p)}$ denotes the corresponding estimator of $A^{(p)} := (A_1^{(p)}, \dots, A_p^{(p)})$.

The YW estimator can be calculated recursively using the (multivariate) Levinson-Durbin algorithm (Whittle, 1963): $\hat A_1^{(1)} := \hat R(1)\hat R(0)^{-1}$, $\hat B_1^{(1)} := \hat R(-1)\hat R(0)^{-1}$, $\hat S_0^{f} := \hat S_0^{b} := \hat R(0)$; recursion for $k > 1$:

$$\hat A_k^{(k)} := \Bigl(\hat R(k) - \sum_{j=1}^{k-1} \hat A_j^{(k-1)} \hat R(k-j)\Bigr)\bigl(\hat S_{k-1}^{b}\bigr)^{-1},$$
$$\hat B_k^{(k)} := \Bigl(\hat R(-k) - \sum_{j=1}^{k-1} \hat B_j^{(k-1)} \hat R(j-k)\Bigr)\bigl(\hat S_{k-1}^{f}\bigr)^{-1},$$
$$\hat S_k^{f} := \bigl(I - \hat A_k^{(k)} \hat B_k^{(k)}\bigr)\hat S_{k-1}^{f}, \qquad \hat S_k^{b} := \bigl(I - \hat B_k^{(k)} \hat A_k^{(k)}\bigr)\hat S_{k-1}^{b},$$
$$\hat A_j^{(k)} := \hat A_j^{(k-1)} - \hat A_k^{(k)} \hat B_{k-j}^{(k-1)} \quad (j = 1, \dots, k-1),$$
$$\hat B_j^{(k)} := \hat B_j^{(k-1)} - \hat B_k^{(k)} \hat A_{k-j}^{(k-1)} \quad (j = 1, \dots, k-1). \qquad (1.3)$$
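The univariate case of this recursion is easy to state in code. The following sketch (our own illustration; the function name and interface are not from the paper) computes $\hat a^{(p)}$ and $\hat\sigma_p^2$ from given autocovariances:

```python
import numpy as np

def levinson_durbin(r, p):
    """Levinson-Durbin recursion (univariate case of (1.3)).

    r : array of autocovariances r[0], ..., r[p]
    Returns (a, sigma2): the Yule-Walker coefficients a_1^{(p)}, ..., a_p^{(p)}
    and the prediction-error variance sigma_p^2.
    """
    a = np.zeros(0)
    sigma2 = float(r[0])
    for k in range(1, p + 1):
        # reflection coefficient a_k^{(k)}
        kappa = (r[k] - np.dot(a, r[k - 1:0:-1])) / sigma2
        # a_j^{(k)} = a_j^{(k-1)} - kappa * a_{k-j}^{(k-1)}, then append kappa
        a = np.concatenate([a - kappa * a[::-1], [kappa]])
        sigma2 *= (1.0 - kappa ** 2)
    return a, sigma2
```

For the autocovariances $(1, 0.5, 0.25)$ of an AR(1) process with coefficient $0.5$, the recursion at order $p = 2$ returns the coefficients $(0.5, 0)$ and prediction-error variance $0.75$.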

For $d = 1$, Burg (1968) used the same recursive algorithm but estimated $a_k^{(k)}$ by

$$\tilde a_k^{(k)} := \frac{\sum_{t=k+1}^{n} \tilde e_t^{f,k-1}\, \tilde e_{t-k}^{b,k-1}}{\tfrac12 \sum_{t=k+1}^{n} \bigl((\tilde e_t^{f,k-1})^2 + (\tilde e_{t-k}^{b,k-1})^2\bigr)}, \qquad (1.4)$$

where $\tilde e_t^{f,k}$ and $\tilde e_t^{b,k}$ are the estimated forward and backward prediction errors

$$\tilde e_t^{f,k} = X_t - \sum_{j=1}^{k} \tilde a_j^{(k)} X_{t-j} \quad\text{and}\quad \tilde e_t^{b,k} = X_t - \sum_{j=1}^{k} \tilde a_j^{(k)} X_{t+j}, \qquad (1.5)$$

which can be calculated recursively by $\tilde e_t^{f,k} = \tilde e_t^{f,k-1} - \tilde a_k^{(k)} \tilde e_{t-k}^{b,k-1}$ and $\tilde e_{t-k}^{b,k} = \tilde e_{t-k}^{b,k-1} - \tilde a_k^{(k)} \tilde e_t^{f,k-1}$.
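Burg's recursion (1.4)-(1.5) can be sketched as follows (our own illustration, not the paper's code; the simulated check assumes an AR(1) model):

```python
import numpy as np

def burg(x, p):
    """Burg's estimator (1.4)-(1.5) for a univariate AR(p) model.

    x : data X_1, ..., X_n.  Returns (a, sigma2), analogous to the
    Levinson-Durbin output, but with the reflection coefficient
    estimated from the forward and backward prediction errors.
    """
    f = np.asarray(x, dtype=float).copy()   # forward errors  e^{f,k}_t
    b = f.copy()                            # backward errors e^{b,k}_{t-k}
    a = np.zeros(0)
    sigma2 = np.mean(f ** 2)                # \hat R(0)
    for k in range(1, p + 1):
        ff, bb = f[1:], b[:-1]
        # (1.4): reflection coefficient from the prediction errors
        kappa = 2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        a = np.concatenate([a - kappa * a[::-1], [kappa]])
        # (1.5): recursive error update
        f, b = ff - kappa * bb, bb - kappa * ff
        sigma2 *= (1.0 - kappa ** 2)
    return a, sigma2
```

On a long simulated AR(1) path with coefficient 0.6 and unit innovation variance, `burg(x, 1)` recovers a coefficient close to 0.6 and an error variance close to 1.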

For the multivariate case several versions of this method were proposed (see Jones, 1978), the most popular seeming to be the one described by Morf et al. (1978), which uses the Levinson-Durbin algorithm with some alterations:

$$\tilde K_k := \Bigl(\sum_t \tilde e_t^{f,k-1} (\tilde e_t^{f,k-1})^{\top}\Bigr)^{-1/2} \Bigl(\sum_t \tilde e_t^{f,k-1} (\tilde e_{t-k}^{b,k-1})^{\top}\Bigr) \Bigl(\Bigl(\sum_t \tilde e_{t-k}^{b,k-1} (\tilde e_{t-k}^{b,k-1})^{\top}\Bigr)^{-1/2}\Bigr)^{\top},$$
$$\tilde A_k^{(k)} := (\tilde S_{k-1}^{f})^{1/2}\, \tilde K_k\, (\tilde S_{k-1}^{b})^{-1/2}, \qquad \tilde B_k^{(k)} := (\tilde S_{k-1}^{b})^{1/2}\, \tilde K_k^{\top}\, (\tilde S_{k-1}^{f})^{-1/2}, \qquad (1.6)$$
$$\tilde S_k^{f} := (\tilde S_{k-1}^{f})^{1/2} \bigl(I - \tilde K_k \tilde K_k^{\top}\bigr) \bigl((\tilde S_{k-1}^{f})^{1/2}\bigr)^{\top}, \qquad \tilde S_k^{b} := (\tilde S_{k-1}^{b})^{1/2} \bigl(I - \tilde K_k^{\top} \tilde K_k\bigr) \bigl((\tilde S_{k-1}^{b})^{1/2}\bigr)^{\top},$$

where $M^{1/2}$ is the lower triangular matrix with positive diagonal elements defined by the Cholesky decomposition of the symmetric, positive definite matrix $M = M^{1/2} (M^{1/2})^{\top}$; further


2 Asymptotic Distribution of Multivariate Burg Estimators

Theorem 1. Under the assumptions on the process mentioned in Section 1,

$$\tilde A_j^{(p)} = \hat A_j^{(p)} + O(n^{-1}), \quad \tilde B_j^{(p)} = \hat B_j^{(p)} + O(n^{-1}), \quad \tilde S_p^{f} = \hat S_p^{f} + O(n^{-1}), \quad \tilde S_p^{b} = \hat S_p^{b} + O(n^{-1})$$

hold, where $\tilde A^{(p)} := (\tilde A_1^{(p)}, \dots, \tilde A_p^{(p)})$ etc.


the empirical error products $\sum_t \tilde e_t^{f,k-1}(\tilde e_t^{f,k-1})^{\top}$, $\sum_t \tilde e_{t-k}^{b,k-1}(\tilde e_{t-k}^{b,k-1})^{\top}$ and $\sum_t \tilde e_t^{f,k-1}(\tilde e_{t-k}^{b,k-1})^{\top}$ are taken over $t = k+1, \dots, n$. The matrices $\tilde A_j^{(k)}$, $\tilde B_j^{(k)}$, $\tilde S_k^{f}$, $\tilde S_k^{b}$ ($j = 1, \dots, k-1$) are defined as in the Levinson-Durbin algorithm.

A different version was suggested by Strand (1977): here $\tilde A_k^{(k)}$ is the solution of the linear matrix equation

$$\tilde A_k^{(k)} \Bigl( n^{-1} \sum_t \tilde e_{t-k}^{b,k-1} (\tilde e_{t-k}^{b,k-1})^{\top} \Bigr) (\tilde S_{k-1}^{b})^{-1} + (\tilde S_{k-1}^{f})^{-1} \Bigl( n^{-1} \sum_t \tilde e_t^{f,k-1} (\tilde e_t^{f,k-1})^{\top} \Bigr) \tilde A_k^{(k)} = 2\, n^{-1} \sum_t \tilde e_t^{f,k-1} (\tilde e_{t-k}^{b,k-1})^{\top} \qquad (1.7)$$

and $\tilde B_k^{(k)} := \tilde S_{k-1}^{b}\, (\tilde A_k^{(k)})^{\top}\, (\tilde S_{k-1}^{f})^{-1}$.

Unlike the other method, the algorithm proposed by Strand (1977) reduces to the univariate Burg estimator (1.4) for $d = 1$, but it is more expensive to calculate. Both Burg estimators are known to be stable and are calculable recursively, which is particularly useful if a model selection procedure has to be performed simultaneously; the same is true for the YW estimator.

We want to find central limit theorems for both multivariate versions of the Burg estimator by showing the asymptotic equivalence with the Yule-Walker estimator. In the following, $\tilde A$ always denotes one of the Burg estimators of Section 1.

Proof. First the lemma is proved by induction for the method of Morf et al., then for Strand's method.


For $k = 1$, $n^{-1}\sum_t \tilde e_t^{f,0}(\tilde e_t^{f,0})^{\top} = \hat S_0^{f} + O(n^{-1})$ and similarly $n^{-1}\sum_t \tilde e_{t-1}^{b,0}(\tilde e_{t-1}^{b,0})^{\top} = \hat S_0^{b} + O(n^{-1})$, from which

$$\tilde A_1^{(1)} = \bigl(\hat A_1^{(1)} \hat S_0^{b} + O(n^{-1})\bigr)\bigl(\hat S_0^{b} + O(n^{-1})\bigr)^{-1} = \hat A_1^{(1)} + O(n^{-1})$$

and $\tilde B_1^{(1)} = \hat B_1^{(1)} + O(n^{-1})$ follow.

Now we assume that the statement of the lemma has already been proved for $1, \dots, k-1$, i.e.

$$\tilde A_j^{(k-1)} = \hat A_j^{(k-1)} + O(n^{-1}), \qquad \tilde B_j^{(k-1)} = \hat B_j^{(k-1)} + O(n^{-1}),$$
$$\tilde S_{k-1}^{f} = \hat S_{k-1}^{f} + O(n^{-1}), \qquad \tilde S_{k-1}^{b} = \hat S_{k-1}^{b} + O(n^{-1}). \qquad (2.1)$$

Also let the following assertions be shown, which are obviously true for $k = 2$:

$$\tilde S_{k-1}^{f} = n^{-1} \sum_t \tilde e_t^{f,k-1} (\tilde e_t^{f,k-1})^{\top} + O(n^{-1}), \qquad \tilde S_{k-1}^{b} = n^{-1} \sum_t \tilde e_{t-k+1}^{b,k-1} (\tilde e_{t-k+1}^{b,k-1})^{\top} + O(n^{-1}), \qquad (2.2)$$

$$\hat A_k^{(k)} \hat S_{k-1}^{b} = n^{-1} \sum_t \tilde e_t^{f,k-1} (\tilde e_{t-k}^{b,k-1})^{\top} + O(n^{-1}), \qquad (2.3)$$

$$\hat B_k^{(k)} \hat S_{k-1}^{f} = n^{-1} \sum_t \tilde e_{t-k}^{b,k-1} (\tilde e_t^{f,k-1})^{\top} + O(n^{-1}). \qquad (2.4)$$

Next we prove (2.2)-(2.4) for $k+1$ instead of $k$.

Proof of (2.2): inserting the recursion for the prediction errors,

$$n^{-1} \sum_t \tilde e_t^{f,k} (\tilde e_t^{f,k})^{\top} = n^{-1} \sum_t \bigl(\tilde e_t^{f,k-1} - \tilde A_k^{(k)} \tilde e_{t-k}^{b,k-1}\bigr)\bigl(\tilde e_t^{f,k-1} - \tilde A_k^{(k)} \tilde e_{t-k}^{b,k-1}\bigr)^{\top} = \tilde S_k^{f} + O(n^{-1}),$$

using (2.1)-(2.4). Similarly $\tilde S_k^{b} = n^{-1} \sum_t \tilde e_{t-k}^{b,k} (\tilde e_{t-k}^{b,k})^{\top} + O(n^{-1})$ can be shown.


Proof of (2.3): expanding $n^{-1}\sum_t \tilde e_t^{f,k}(\tilde e_{t-k-1}^{b,k})^{\top}$ in the same way,

$$n^{-1} \sum_t \tilde e_t^{f,k} (\tilde e_{t-k-1}^{b,k})^{\top} = \hat A_{k+1}^{(k+1)} \hat S_k^{b} + O(n^{-1}),$$

using (2.1), (1.1), and the definition of $\hat A_{k+1}^{(k+1)}$ in the Levinson-Durbin algorithm (1.3).

Proof of (2.4): Because $\hat B_k^{(k)} \hat S_{k-1}^{f} = \bigl(\hat A_k^{(k)} \hat S_{k-1}^{b}\bigr)^{\top}$ holds and (2.3) has been shown,

$$n^{-1} \sum_t \tilde e_{t-k}^{b,k-1} (\tilde e_t^{f,k-1})^{\top} = \Bigl(n^{-1} \sum_t \tilde e_t^{f,k-1} (\tilde e_{t-k}^{b,k-1})^{\top}\Bigr)^{\top} = \bigl(\hat A_k^{(k)} \hat S_{k-1}^{b} + O(n^{-1})\bigr)^{\top} = \hat B_k^{(k)} \hat S_{k-1}^{f} + O(n^{-1}).$$

From these considerations (2.1) follows for $k+1$:

$$\tilde A_{k+1}^{(k+1)} = \bigl(\hat A_{k+1}^{(k+1)} \hat S_k^{b} + O(n^{-1})\bigr)\bigl(\hat S_k^{b} + O(n^{-1})\bigr)^{-1} = \hat A_{k+1}^{(k+1)} + O(n^{-1}),$$

and the proof for $\tilde B_{k+1}^{(k+1)}$, $\tilde S_{k+1}^{f}$, $\tilde S_{k+1}^{b}$, $\tilde A_j^{(k+1)}$, $\tilde B_j^{(k+1)}$ ($j = 1, \dots, k$) is obvious.

The lemma can be proved the same way for the estimator of Strand (1977), because $\tilde A_k^{(k)} = \hat A_k^{(k)} + O(n^{-1})$ follows from (1.7), using (2.2) and (2.3).

It should be noted that the equivalence still holds if the mean of the process has to be estimated by the arithmetic mean of the data.

Now central limit theorems for the Yule-Walker estimator can be extended to the Burg estimators under the very general assumptions of section 1.
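The equivalence is also easy to see numerically (a sketch of our own; the helper functions and parameter choices are not from the paper): on one simulated AR(1) series, the first-order Burg and Yule-Walker estimates differ far less than either differs from the true coefficient.

```python
import numpy as np

def yw_ar1(x):
    """Yule-Walker estimate a_1^(1) = R(1)-hat / R(0)-hat."""
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def burg_ar1(x):
    """Burg estimate (1.4) with k = 1."""
    f, b = x[1:], x[:-1]
    return 2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))

# Simulate an AR(1) process with a root fairly near the unit circle.
rng = np.random.default_rng(1)
n = 5000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + e[t]

gap = abs(burg_ar1(x) - yw_ar1(x))   # O(1/n): the two estimators agree
err = abs(burg_ar1(x) - 0.9)         # O(1/sqrt(n)): sampling error
```

The two estimators differ only through boundary terms in their denominators, which is exactly the $O(n^{-1})$ effect quantified above.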


Theorem 2. Let $\{X_t\}$, $t \in \mathbb{Z}$, be a $d$-variate stable AR($p$)-process with coefficients $A^{(p)} = (A_1^{(p)}, \dots, A_p^{(p)})$ and with independent and identically distributed innovations; the innovations have zero mean vector and a regular covariance matrix $\Sigma$. Then

$$\sqrt{n}\,\bigl(\operatorname{vec} \tilde A^{(p)} - \operatorname{vec} A^{(p)}\bigr) \Rightarrow \mathcal N\bigl(0,\ R_p^{-1} \otimes \Sigma\bigr) \quad\text{and}\quad \tilde S_p^{f} \to \Sigma \ \text{in probability}$$

hold. Here $\operatorname{vec} A^{(p)}$ is the vector which is created by stacking the columns of $A^{(p)}$, and $\otimes$ denotes the Kronecker product.

This theorem is proved in Hannan (1970), ch. VI.2, Th. 1, for the YW estimator. In the same way, central limit theorems for mis-specified models, as stated e.g. in Lewis and Reinsel (1988), (3.5), can be extended to the Burg estimators, too.

3 Bias of the Univariate Burg Estimator

In the following we investigate the asymptotic bias of the univariate Burg estimator $\tilde a^{(p)} = (\tilde a_1^{(p)}, \dots, \tilde a_p^{(p)})^{\top}$ to terms of order $n^{-1}$; it will be shown that it is as large as the asymptotic bias of the least squares estimator $a^{(p)*}$, which was calculated by Shaman and Stine (1988) for a true model. They used the assumptions $E X_t^{16} < \infty$ and $E\lvert \hat R_p^{-1} \rvert^{8} = O(1)$, where $\lvert M \rvert$ is the absolute value of the largest eigenvalue of the matrix $M$. In addition to that we assume for all $n$ (for simplicity)

$$E\,\tilde\sigma_k^{-8} = O(1) \quad\text{and}\quad E\,\hat\sigma_k^{-8} = O(1),$$

where $\tilde\sigma_k^2 := \tfrac12 n^{-1} \sum_t \bigl((\tilde e_t^{f,k})^2 + (\tilde e_{t-k}^{b,k})^2\bigr)$ and $\hat\sigma_k^2$ is the Yule-Walker estimate of $\sigma_k^2$; these assumptions guarantee the uniform integrability of the terms appearing in the proof.

As the Burg estimator is defined recursively and no other useful representation of it is known, we compare it with the YW estimator, which can be calculated recursively by the Levinson-Durbin algorithm, too. For the difference between the YW and least squares estimators

$$\lim_{n\to\infty} n E\bigl(\hat a^{(p)} - a^{(p)*}\bigr) = d^{(p)}$$


holds (see Shaman and Stine, 1988, (3.7)), where $\hat a^{(p)}$ is the YW estimator, $a^{(p)*}$ is the least squares estimator, and $d^{(p)} := (d_1^{(p)}, \dots, d_p^{(p)})^{\top}$ is the vector of constants, built from the true coefficients and indicator terms, given there.

We will prove that

$$\lim_{n\to\infty} n E\bigl(\tilde a^{(p)} - \hat a^{(p)}\bigr) = d^{(p)}, \qquad (3.1)$$

so that for the bias of $\tilde a^{(p)}$

$$\lim_{n\to\infty} n E\bigl(\tilde a^{(p)} - a^{(p)}\bigr) = \lim_{n\to\infty} n E\bigl(a^{(p)*} - a^{(p)}\bigr)$$

holds.

Theorem 3. Under the assumptions mentioned above and in Section 1, the bias of the univariate Burg estimator is equal to the bias of the least squares estimator:

$$\lim_{n\to\infty} n E\bigl(\tilde a^{(p)} - a^{(p)}\bigr) = \lim_{n\to\infty} n E\bigl(a^{(p)*} - a^{(p)}\bigr).$$

Proof. The proof follows by induction; if the mean is estimated, $X_t$ has to be substituted by $X_t - \bar X_n$ everywhere. For $k = 1$, (1.4) gives

$$\tilde a_1^{(1)} = \frac{\sum_{t=2}^{n} X_t X_{t-1}}{\tfrac12 \sum_{t=2}^{n} \bigl(X_t^2 + X_{t-1}^2\bigr)} = \hat a_1^{(1)} \Bigl(1 + \tfrac12\, \bigl(n \hat R(0)\bigr)^{-1} \bigl(X_1^2 + X_n^2\bigr) + \hat h_n\Bigr).$$

The proof of the uniform integrability of the terms appearing here and in the following is not difficult and therefore omitted. As the remainder satisfies $n E(\hat h_n \hat a_1^{(1)}) \to 0$, the assertion of the theorem holds for $p = 1$:

$$\lim_{n\to\infty} n E\bigl(\tilde a_1^{(1)} - \hat a_1^{(1)}\bigr) = \lim_{n\to\infty} E\Bigl(\hat a_1^{(1)}\, \tfrac12\, \hat R(0)^{-1} \bigl(X_1^2 + X_n^2\bigr)\Bigr) = d_1^{(1)}.$$

Let now $\lim_{n\to\infty} n E(\tilde a_j^{(k-1)} - \hat a_j^{(k-1)}) = d_j^{(k-1)}$ be shown for $1 \le j \le k-1$. Using a Taylor expansion of (1.4) and (1.5),


we find from (1.4) and (1.5) that $\tilde a_k^{(k)}$ can be written as $\hat a_k^{(k)}$ plus finitely many products of the differences $\tilde a_j^{(k-1)} - \hat a_j^{(k-1)}$, of YW estimates, and of averages of the form $n^{-1} \sum_t X_{t-j} X_{t-i}$, together with a remainder term of smaller order; here "$j$ terms of a given form" means a sum of $j$ such terms. Taking expectations and collecting the terms within the


brackets and using the Yule-Walker equations one finds that

$$\lim_{n\to\infty} n E\bigl(\tilde a_k^{(k)} - \hat a_k^{(k)}\bigr)$$

exists, because the expectations of most of the resulting terms are $O(1/n)$. The limit can be rewritten as $d_k^{(k)}$, and for the other components of $\tilde a^{(k)} - \hat a^{(k)}$,

$$\lim_{n\to\infty} n E\bigl(\tilde a_j^{(k)} - \hat a_j^{(k)}\bigr) = \lim_{n\to\infty} n E\Bigl(\bigl(\tilde a_j^{(k-1)} - \hat a_j^{(k-1)}\bigr) + \hat a_{k-j}^{(k-1)}\bigl(\hat a_k^{(k)} - \tilde a_k^{(k)}\bigr) + \tilde a_k^{(k)}\bigl(\hat a_{k-j}^{(k-1)} - \tilde a_{k-j}^{(k-1)}\bigr)\Bigr) \quad (j = 1, \dots, k-1)$$

hold because of (1.3). Introducing $d^{(k)} := (d_1^{(k)}, \dots, d_k^{(k)})^{\top}$ etc. we find by induction that

$$\lim_{n\to\infty} n E\Bigl(\bigl(\tilde a_1^{(k)}, \dots, \tilde a_k^{(k)}\bigr)^{\top} - \bigl(\hat a_1^{(k)}, \dots, \hat a_k^{(k)}\bigr)^{\top}\Bigr)$$

has the form required in (3.1); the last identity still has to be proved.


After multiplication with $R_k$, using the Yule-Walker equations and the definition of $d^{(k)} = (d_1^{(k)}, \dots, d_k^{(k)})^{\top}$, we see that the two sides of the asserted identity can be compared row by row. The identity holds for all rows: first for the rows $j = 1, \dots, k-1$, by the definition of the indicator terms,


and similarly for the last row, so (3.1) and the statement of the theorem follow.

The same theorem can be shown in a very similar way for the Burg estimator used by Morf et al. (1978) in the case $d = 1$.

As a consequence we get the bias of the univariate Burg estimator for a true model from the result of Shaman and Stine (1988):

Theorem 4. Let $\{X_t\}$ be a univariate stable AR($p$)-process, where the innovations $\varepsilon_t$ are independent and identically distributed with zero mean and finite variance, and let the assumptions of Theorem 3 hold. Then the asymptotic bias of $\tilde a^{(p)}$ equals the least squares bias given by Shaman and Stine (1988); it is expressed through indicator vectors over the even and odd coefficient indices, with one form if $p$ is even and another if $p$ is odd, if the mean of $X_t$ is known. If the mean has to be estimated by the arithmetic mean, an additional bias term appears.

We still have to investigate the asymptotic bias of

$$\tilde\sigma_p^2 = \prod_{k=1}^{p} \bigl(1 - (\tilde a_k^{(k)})^2\bigr)\,\hat R(0).$$

Theorem 5. Under the assumptions of Theorem 4, the bias of $\tilde\sigma_p^2$ is given by

$$\lim_{n\to\infty} n E\bigl(\tilde\sigma_p^2 - \sigma_p^2\bigr) = -(2p+1)\,\sigma_p^2$$

for known mean $EX_t$, and by a correspondingly larger constant for estimated mean.

Proof: Shaman (1983) showed that the YW estimator $\hat\sigma_p^2 = \prod_{k=1}^{p}(1 - (\hat a_k^{(k)})^2)\hat R(0)$ of $\sigma_p^2$ has the asymptotic bias

$$\lim_{n\to\infty} n E\bigl(\hat\sigma_p^2 - \sigma_p^2\bigr) = -(p+2)\,\sigma_p^2 \qquad (3.2)$$

if $EX_t$ is known. From the proof of Theorem 3 follows $\lim_{n\to\infty} n E(\tilde\sigma_p^2 - \hat\sigma_p^2) = -(p-1)\,\sigma_p^2$, so the bias of $\tilde\sigma_p^2$ is

$$\lim_{n\to\infty} n E\bigl(\tilde\sigma_p^2 - \sigma_p^2\bigr) = -(2p+1)\,\sigma_p^2.$$

It is easy to see that $\tilde\sigma_p^2$ and the error-sum estimate $\tfrac12 n^{-1} \sum_t \bigl((\tilde e_t^{f,p})^2 + (\tilde e_{t-p}^{b,p})^2\bigr)$ differ only by a term whose expectation is of order $n^{-1}$, which proves the assertion.

If the mean has to be estimated,

$$\lim_{n\to\infty} n E\bigl(\hat\sigma_p^2 - \sigma_p^2\bigr) = -(p+3)\,\sigma_p^2$$

holds instead of (3.2) (cf. Zhang, 1992, Th. 4.2, (2.10) and Th. 3.1).

4 Conclusion

The asymptotic bias of the YW estimator can become very large if the process has roots near the complex unit circle, but it can be reduced by (variable) tapering (Zhang, 1992). For the Burg estimator no improvement can be achieved in this way, because the untapered Burg estimator has the same asymptotic bias as the tapered YW estimator, and $n E(\tilde a^{(p)} - \hat a^{(p)}) \to 0$ if both estimators are tapered.

It is known from simulations that the Burg estimator has a smaller bias than the YW estimator if a root of the process is near the unit circle, but a proof has been missing. We have shown that the univariate Burg estimator has the same asymptotic bias as the least squares estimator, which is usually smaller than the bias of the YW estimator. This makes the reduction of the bias possible. Also, the Burg estimator is recursively computable and stable, in contrast to the least squares estimator, so the Burg estimator has the major advantages of both of these well known estimators. We have also shown that the multivariate Burg estimators have the same asymptotic distribution as the multivariate YW estimator, and some simulations (Strand, 1977, Morf et al., 1978) seem to indicate that their bias is smaller than that of the YW estimator, but no analytic results on this are known yet.
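The univariate part of this picture is easy to reproduce in a small Monte Carlo experiment (our own sketch; sample size, coefficient and replication count are arbitrary choices): with a root near the unit circle, the Burg estimate of an AR(1) coefficient is markedly less biased than the Yule-Walker estimate.

```python
import numpy as np

# Monte Carlo comparison of the AR(1) coefficient biases of the
# Yule-Walker and Burg estimators for a root near the unit circle.
rng = np.random.default_rng(2)
a_true, n, reps = 0.9, 50, 4000
yw, bg = [], []
for _ in range(reps):
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a_true * x[t - 1] + e[t]
    yw.append(np.dot(x[:-1], x[1:]) / np.dot(x, x))                 # Yule-Walker
    f, b = x[1:], x[:-1]
    bg.append(2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b)))   # Burg (1.4)
yw_bias = np.mean(yw) - a_true
burg_bias = np.mean(bg) - a_true
```

In such runs both biases are negative, with the Burg bias roughly matching the least-squares value $-(1+3a)/n$ suggested by Theorem 3 together with Shaman and Stine (1988), and the Yule-Walker bias noticeably larger in magnitude.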

Acknowledgement: I would like to thank Prof. Dr. R. Dahlhaus, who supervised my Diplomarbeit, the major results of which are presented here, and Dr. D. Janas for their support.

References

Burg, J. P. (1968) A New Analysis Technique for Time Series Data. NATO Advanced Study Institute on Signal Processing with Emphasis on Underwater Acoustics.

Hannan, E. J. (1970) Multiple Time Series. New York: Wiley.

Jones, R. H. (1978) Multivariate Autoregression Estimation Using Residuals. In: Proceedings of the First Applied Time Series Symposium, Tulsa, Okla., 1976 (ed. D. F. Findley). New York: Academic Press, pp. 139-162.

Kay, S. and Makhoul, J. (1983) On the Statistics of the Estimated Reflection Coefficients of an Autoregressive Process. IEEE Trans. Acoust., Speech, Signal Processing 31, 1447-1455.

Lewis, R. A. and Reinsel, G. C. (1988) Prediction Error of Multivariate Time Series with Mis-specified Models. J. Time Ser. Anal. 9, 43-57.

Lysne, D. and Tjøstheim, D. (1987) Loss of spectral peaks in autoregressive spectral estimation. Biometrika 74, 200-206.

Morf, M., Vieira, A., Lee, D. T. L. and Kailath, T. (1978) Recursive Multichannel Maximum Entropy Spectral Estimation. IEEE Trans. Geosci. Electron. 16, 85-94.

Nicholls, D. F. and Pope, A. L. (1988) Bias in the Estimation of Multivariate Autoregressions. Austral. J. Statist. 30A, Spec. Issue, 296-309.

Shaman, P. (1983) Properties of Estimates of the Mean Square Error of Prediction in Autoregressive Models. In: Studies in Econometrics, Time Series, and Multivariate Statistics (eds. S. Karlin, T. Amemiya, L. A. Goodman), New York: Academic Press, pp. 331-343.

Shaman, P. and Stine, R. A. (1988) The Bias of Autoregressive Coefficient Estimators. J. Amer. Statist. Assoc. 83, 842-848.

Strand, O. N. (1977) Multichannel Complex Maximum Entropy (Autoregressive) Spectral Analysis. IEEE Trans. Automat. Control 22, 634-640.

Whittle, P. (1963) On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral density matrix. Biometrika 50, 129-134.

Zhang, H.-C. (1992) Reduction of the Asymptotic Bias of Autoregressive and Spectral Estimators by Tapering. J. Time Ser. Anal. 13, 451-469.

Institut für Angewandte Mathematik
Universität Heidelberg
Im Neuenheimer Feld 294
69120 Heidelberg
Germany
