
Munich Personal RePEc Archive

Bootstrap Inference for Partially Linear Model with Many Regressors

Wang, Wenjie

3 March 2021

Online at https://mpra.ub.uni-muenchen.de/106391/

MPRA Paper No. 106391, posted 05 Mar 2021 03:53 UTC


Bootstrap Inference for Partially Linear Model with Many Regressors

Wenjie Wang

March 3, 2021

Abstract

In this note, for the case in which the disturbances are conditionally homoskedastic, we show that a properly re-scaled residual bootstrap procedure consistently estimates the limiting distribution of a series estimator in the partially linear model even when the number of regressors is of the same order as the sample size. Monte Carlo simulations show that the bootstrap procedure has better finite sample performance than asymptotic approximations when the sample size is small and the number of regressors is close to the sample size.

Keywords: Bootstrap approximation, Partially linear model, Many regressors asymptotics

Division of Economics, School of Social Sciences, Nanyang Technological University, HSS-04-65, 14 Nanyang Drive, 637332, Singapore. E-mail address: wang.wj@ntu.edu.sg.


1 Introduction

In recent years, various efforts have been made to relax the assumptions of the linear regression model and hence widen its applicability, since a misspecified regression function can lead to excessive modeling bias and erroneous conclusions. Of particular importance is the partially linear regression model, which leaves the relationship between the response and part of the covariates unspecified. A partially linear regression model can be written as
$$y_i = x_i\beta + g(z_i) + \epsilon_i, \qquad i = 1, \dots, n,$$

where $\beta$ is the parameter of interest and $g(\cdot)$ is an unknown function. Donald and Newey (1994) discussed estimating the model using a series approximation (e.g., polynomials or splines) to $g(z_i)$ and gave conditions for the asymptotic normality of this estimator under standard asymptotics. However, this result is not robust to the number of terms in the series approximation, since that number is assumed to be fixed as the sample size goes to infinity. Recently, Cattaneo, Jansson and Newey (2018) proposed an alternative asymptotic framework which allows the number of terms to grow as fast as the sample size. They show that under many regressors asymptotics, the limiting distribution of the series estimator has a larger than usual asymptotic variance. Moreover, when the disturbance is homoskedastic, this larger variance can be consistently estimated by the usual variance estimator provided that a degrees-of-freedom correction is used.

However, as can be seen from our simulation results, inference based on the asymptotic normal approximation of Cattaneo et al. (2018) can suffer size distortion in finite samples, especially when the sample size is small and the number of regressors is close to the sample size. It is therefore tempting to ask whether other methods, such as the bootstrap, can provide a better alternative to their asymptotic normal approximation. Indeed, in the similar context of the linear instrumental variable model in which the number of instruments is allowed to be a nontrivial fraction of the sample size, the bootstrap, when designed properly, is found to be asymptotically valid and to have better finite-sample performance than conventional asymptotic approximations in terms of size control for hypothesis testing; see, e.g., Wang and Kaffo (2016), Kaffo and Wang (2017), and Wang (2020).

In this paper, we propose a residual-based i.i.d. bootstrap which puts mass $1/n$ at each (re-scaled) residual in order to approximate the limiting distribution of the series estimator. The residuals are properly re-scaled to account for the effect of using a large number of regressors. We show analytically that our bootstrap technique provides a valid method to approximate the limiting distribution of the series estimator under Cattaneo et al. (2018)'s many regressors asymptotics. A Monte Carlo experiment shows that confidence intervals based on Cattaneo et al. (2018)'s normal approximation can have size distortion in finite samples, especially when the number of regressors is large. Our bootstrap procedure reduces these distortions.

The rest of this paper is organized as follows. Section 2 describes the setup of the model and the many regressors asymptotics in the partially linear model; it then presents the result for bootstrap-based inference on the partially linear model with many series regressors. In Section 3, we present some Monte Carlo simulation results, while Section 4 concludes.

2 Setup and Main Results

Let $(y_i, x_i, z_i)$, $i = 1, \dots, n$, be a random sample of the random vector $(y, x, z)$, where $y \in \mathbb{R}$ is a dependent variable and $x \in \mathbb{R}$, $z \in \mathbb{R}^{d_z}$ are explanatory variables. The partially linear model is given by
$$y_i = x_i\beta + g(z_i) + \epsilon_i, \qquad E[\epsilon_i \mid x_i, z_i] = 0,$$
and $x_i = h(z_i) + v_i$ with $h(z_i) = E[x_i \mid z_i]$; $g(\cdot)$ and $h(\cdot)$ are unknown functions. We also assume that the disturbances are homoskedastic: $E[\epsilon_i^2 \mid x_i, z_i] = \sigma_{\epsilon\epsilon}$ and $E[v_i^2 \mid z_i] = \Sigma_{vv}$. Following Cattaneo et al. (2018), we condition on $Z$ throughout the following discussion (alternatively, we could assume that $Z$ is non-random, as pointed out by Cattaneo et al. (2018)).

A series estimator of $\beta$ is obtained by using approximating functions of $z_i$. To describe the estimator, let $p^k(z) = (p_{1k}(z), \dots, p_{kk}(z))'$ be a vector of approximating functions, such as polynomials or splines, where $k$ denotes the number of terms in the regression. The unknown function $g(z)$ will be approximated by a linear combination of the approximating functions. Therefore, letting $Y = [y_1, \dots, y_n]'$, $X = [x_1, \dots, x_n]'$, and $P_Z = [p^k(z_1), \dots, p^k(z_n)]'$, a series estimator of $\beta$ is given by
$$\hat\beta = (X' M_Z X)^{-1} X' M_Z Y, \qquad M_Z = I - Q_Z, \qquad Q_Z = P_Z (P_Z' P_Z)^{-} P_Z',$$
where $A^{-}$ denotes a generalized inverse of a matrix $A$ (satisfying $A A^{-} A = A$) and $X' M_Z X$ is non-singular with probability approaching one under appropriate conditions.
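As a concrete illustration, the estimator can be computed by partialling out the series terms. The following Python sketch is our own (function and variable names are not from the paper) and treats the scalar-regressor case, using a pseudo-inverse for $(P_Z'P_Z)^{-}$:

```python
import numpy as np

def series_estimator(y, x, P):
    """Series estimator beta_hat = (x' M_Z x)^{-1} x' M_Z y with
    M_Z = I - P (P'P)^- P'.  A minimal sketch for scalar x and an
    n-by-k basis matrix P (e.g., powers of z)."""
    n = len(y)
    Q = P @ np.linalg.pinv(P.T @ P) @ P.T   # Q_Z = P (P'P)^- P'
    M = np.eye(n) - Q                       # annihilator M_Z
    return float((x @ M @ y) / (x @ M @ x))
```

When the basis spans $g(\cdot)$ well, the partialled-out regression recovers $\beta$ up to sampling noise.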

The limiting distribution of $\hat\beta$ was derived by Donald and Newey (1994) under conventional asymptotics, in which the number of series terms is assumed to be fixed. However, such asymptotics cannot provide a good approximation for the distribution of $\hat\beta$ when the number of regressors becomes large.

To obtain a better approximation in the case of many regressors, Cattaneo et al. (2018) recently proposed an alternative asymptotic framework, in which the number of terms in the series approximation is allowed to grow as fast as the sample size. More precisely, it is shown that when the disturbances are homoskedastic,
$$\sqrt{n}(\hat\beta - \beta) \to_d N(0, \Omega)$$
under many regressors asymptotics, where $\Omega = \sigma_{\epsilon\epsilon}\Gamma^{-1}$ and $\Gamma = (1-\alpha)\Sigma_{vv}$ with $\alpha \in [0,1)$. Evidently, the presence of many regressors leads to a larger asymptotic variance, which captures a term assumed away by the classical asymptotic result. Moreover, Cattaneo et al. (2018) show that this larger variance can be consistently estimated by using the usual asymptotic variance estimator with a proper degrees-of-freedom correction. In particular, they show that
$$\sqrt{n}\,\hat\Omega^{-1/2}(\hat\beta - \beta) \to_d N(0, 1),$$
where $\hat\Omega = \hat{s}^2\hat\Gamma^{-1}$, $\hat{s}^2 = \hat\epsilon'\hat\epsilon/(n-k-1)$, $\hat\epsilon = M_Z(Y - X\hat\beta)$, and $\hat\Gamma = X' M_Z X/n$.
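The degrees-of-freedom corrected variance estimator can be sketched as follows (the helper name and scalar-regressor setup are our own, not the authors' code):

```python
import numpy as np

def corrected_variance(y, x, P):
    """Degrees-of-freedom corrected estimator Omega_hat = s^2 / Gamma_hat,
    with s^2 = eps_hat'eps_hat / (n - k - 1) and Gamma_hat = x'M_Z x / n
    (scalar-x sketch; the division by n - k - 1 is the correction)."""
    n, k = P.shape
    Q = P @ np.linalg.pinv(P.T @ P) @ P.T
    M = np.eye(n) - Q
    beta_hat = (x @ M @ y) / (x @ M @ x)
    eps_hat = M @ (y - x * beta_hat)
    s2 = (eps_hat @ eps_hat) / (n - k - 1)   # df-corrected error variance
    gamma_hat = (x @ M @ x) / n
    return float(beta_hat), float(s2 / gamma_hat)
```

With $k$ close to $n$, dividing by $n - k - 1$ rather than $n$ changes the estimate substantially, which is exactly the many-regressors correction discussed above.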

However, as can be seen from our simulation results, inference based on Cattaneo et al. (2018)'s asymptotic approximation can still have serious distortion, especially when the number of regressors is large relative to the sample size. Here, we propose a bootstrap method as an alternative to their asymptotic normal approximation. We show both analytically and numerically that this method is valid under many regressors asymptotics, in the sense that it consistently estimates the limiting distribution of $\hat\beta$ even when the number of terms in the series approximation grows as fast as the sample size.

Specifically, our bootstrap procedure proceeds as follows.

1. Obtain the properly re-scaled residuals
$$\hat\epsilon = M_Z(Y - X\hat\beta), \qquad \hat V = M_Z X.$$
Denote $\bar\epsilon = n^{-1}\sum_{i=1}^n \hat\epsilon_i$. Let $\hat F_{n\epsilon}$ be the empirical distribution of $\sqrt{n/(n-k)}\,(\hat\epsilon_i - \bar\epsilon)$, so that $\hat F_{n\epsilon}$ puts mass $1/n$ at $\sqrt{n/(n-k)}\,(\hat\epsilon_i - \bar\epsilon)$ and $\int x \, d\hat F_{n\epsilon}(x) = 0$. Similarly, denote $\bar V = n^{-1}\sum_{i=1}^n \hat V_i$, and let $\hat F_{nv}$ be the empirical distribution of $\sqrt{n/(n-k)}\,(\hat V_i - \bar V)$, so that $\hat F_{nv}$ puts mass $1/n$ at $\sqrt{n/(n-k)}\,(\hat V_i - \bar V)$ and $\int x \, d\hat F_{nv}(x) = 0$.

2. Generate $\{\epsilon_1^*, \dots, \epsilon_n^*\}$, conditionally independent with common distribution $\hat F_{n\epsilon}$, and $\{V_1^*, \dots, V_n^*\}$, conditionally independent with common distribution $\hat F_{nv}$.

3. Set the bootstrap data generating process (DGP) as
$$X^* = Q_Z X + V^*, \qquad Y^* = X^*\hat\beta + Q_Z(Y - X\hat\beta) + \epsilon^*,$$
where $\epsilon^* = (\epsilon_1^*, \dots, \epsilon_n^*)'$ and $V^* = (V_1^*, \dots, V_n^*)'$.

4. Compute $\hat\beta^* = (X^{*\prime} M_Z X^*)^{-1} X^{*\prime} M_Z Y^*$ from the bootstrap pseudo-data.
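The four steps above can be sketched in Python as follows. This is an illustrative implementation under our own naming conventions, not the authors' code, and again treats the scalar-regressor case:

```python
import numpy as np

def rescaled_residual_bootstrap(y, x, P, B=399, rng=None):
    """Re-scaled residual i.i.d. bootstrap (Steps 1-4 above) for the
    partially linear series estimator; returns beta_hat and the B
    bootstrap draws beta_hat*."""
    rng = np.random.default_rng(rng)
    n, k = P.shape
    Q = P @ np.linalg.pinv(P.T @ P) @ P.T
    M = np.eye(n) - Q
    beta_hat = (x @ M @ y) / (x @ M @ x)
    # Step 1: re-centered residuals scaled by sqrt(n / (n - k))
    scale = np.sqrt(n / (n - k))
    eps_hat = M @ (y - x * beta_hat)
    v_hat = M @ x
    eps_pool = scale * (eps_hat - eps_hat.mean())
    v_pool = scale * (v_hat - v_hat.mean())
    draws = np.empty(B)
    for b in range(B):
        # Step 2: i.i.d. draws from the empirical distributions
        eps_star = rng.choice(eps_pool, size=n, replace=True)
        v_star = rng.choice(v_pool, size=n, replace=True)
        # Step 3: bootstrap DGP
        x_star = Q @ x + v_star
        y_star = x_star * beta_hat + Q @ (y - x * beta_hat) + eps_star
        # Step 4: bootstrap estimator
        draws[b] = (x_star @ M @ y_star) / (x_star @ M @ x_star)
    return float(beta_hat), draws
```

Without the `scale` factor this would be a conventional residual bootstrap; the re-scaling is what Remark 1 below argues becomes essential when $k/n$ is non-negligible.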

Remark 1. Note that in our bootstrap procedure, the residuals $(\hat\epsilon, \hat V)$ are properly re-scaled to account for the effect of using many terms in the series approximation. In the bootstrap literature, such degrees-of-freedom corrections are often implemented to obtain better finite sample performance. Here, our result gives a large sample justification for the correction: under many regressors asymptotics, this is no longer merely a matter of finite sample performance, and re-scaling the residuals becomes essential to establishing the validity of the residual-based bootstrap, as will be shown in Theorem 2.1 below. Intuitively, accounting for the correct degrees of freedom is important whenever the number of terms in the linear model is large relative to the sample size.


In order to formally investigate the asymptotic properties of the bootstrap estimator $\hat\beta^*$, we begin with the following assumptions, which are also used in Cattaneo et al. (2018).

Assumption 1.

(a) For some $\alpha_h > 0$, there is nonrandom $\eta_h \in \mathbb{R}^k$ such that $\sum_{i=1}^n E\left[\|h(z_i) - p^k(z_i)'\eta_h\|^2\right]/n = O(k^{-2\alpha_h})$.

(b) For some $\alpha_g > 0$, there is nonrandom $\eta_g \in \mathbb{R}^k$ such that $\sum_{i=1}^n E\left[\|g(z_i) - p^k(z_i)'\eta_g\|^2\right]/n = O(k^{-2\alpha_g})$.

As pointed out by Cattaneo et al. (2018), these conditions are implied by conventional assumptions from approximation theory. They are needed to control the bias from approximating the unknown functions by a linear combination of $p^k(\cdot)$. Next, we also assume that certain moments of the disturbances are bounded.

Assumption 2.

There is $C < \infty$ such that $E[V_i^4] \le C$ and $E[\epsilon_i^4] \le C$.

We are now ready to establish the main results.

Theorem 2.1 Suppose that Assumptions 1-2 hold. Then, if $\alpha_n = k/n \to \alpha \in [0,1)$,
$$\sup_{x\in\mathbb{R}}\left|P^*\!\left[\sqrt{n}(\hat\beta^* - \hat\beta) \le x\right] - P\!\left[\sqrt{n}(\hat\beta - \beta) \le x\right]\right| \to_p 0,$$
and
$$\sup_{x\in\mathbb{R}}\left|P^*\!\left[\sqrt{n}\,\hat\Omega^{*-1/2}(\hat\beta^* - \hat\beta) \le x\right] - P\!\left[\sqrt{n}\,\hat\Omega^{-1/2}(\hat\beta - \beta) \le x\right]\right| \to_p 0,$$
where $P^*$ denotes the probability measure induced by the bootstrap procedure proposed in this section.

Remark 2. The first result in Theorem 2.1 guarantees the asymptotic validity of percentile-type tests and confidence intervals (CIs) based on our bootstrap method. More precisely, percentile-type bootstrap intervals based on $\hat\beta$ and $\hat\beta^*$ are computed as
$$\hat\beta \pm q^*_{0.95}, \qquad (1)$$
where $q^*_{0.95}$ is such that $P^*\!\left[\left|\hat\beta^* - \hat\beta\right| \le q^*_{0.95}\right] = 0.95$. Moreover, the second result in Theorem 2.1 shows that percentile-t tests and CIs based on our bootstrap method are also valid. Therefore, we can construct a CI for $\beta$ using
$$\hat\beta \pm z^*_{0.95}\sqrt{\hat\Omega/n}, \qquad (2)$$
where $z^*_{0.95}$ is such that $P^*\!\left[\left|\hat\beta^* - \hat\beta\right|\big/\sqrt{\hat\Omega^*/n} \le z^*_{0.95}\right] = 0.95$. Percentile and percentile-t type bootstrap tests can be defined accordingly.

Remark 3. It is easy to see that our bootstrap is also valid under Donald and Newey (1994)’s conventional asymptotics where αn = k/n → 0. Therefore, this procedure can be seen as a unified inference approach which is valid regardless of the number of regressors used in the series estimation.


Table 1. Empirical rejection frequency (in percent) at the 5 percent nominal level.

                       n = 100                   n = 200
             k = 10   k = 40   k = 70   k = 10   k = 40   k = 70
t_HO,1         10.7      6.7      1.4     10.6      8.3      6.6
t_HO,2          9.0      1.8      0.0      9.7      5.2      2.1
beta*_part      5.3      5.0      5.8      5.8      4.7      5.7
beta*_lhs       5.6      3.0      0.3      5.7      4.0      2.8

Note: $t_{HO,1}$ and $t_{HO,2}$ denote the t-ratios studied by Cattaneo et al. (2018). $\hat\beta^*_{part}$ denotes the i.i.d. bootstrap proposed in this note. $\hat\beta^*_{lhs}$ denotes the i.i.d. bootstrap proposed in Liang et al. (2000).

3 Simulation

We conduct a Monte Carlo experiment to explore the finite sample performance of the bootstrap procedure proposed in the previous section. Throughout this section, the simulation study is based on 5,000 replications. We set the sample size to $n = 100$ and $n = 200$, and we set the number of bootstrap replications to $B = 399$.

For the partially linear model, we consider the following setting:
$$y_i = x_i\beta + g(z_i) + \epsilon_i, \qquad x_i = h(z_i) + v_i,$$
where $\beta = 0$, $z_i \sim U(-1,1)$, $\epsilon_i \sim N(0,1)$, $v_i \sim U(-1,1)$, and $g(z_i) = z_i(2 + z_i)^{1/2}$.
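One draw from this design can be generated as follows. Note that the note leaves $h(\cdot)$ unspecified, so the choice $h(z) = z$ below is our own assumption for illustration, as is the function name:

```python
import numpy as np

def simulate_design(n, k, rng):
    """One sample from the Section 3 design: beta = 0, z ~ U(-1,1),
    eps ~ N(0,1), v ~ U(-1,1), g(z) = z * sqrt(2 + z), together with
    the power-series basis p^k(z) = (1, z, ..., z^k)."""
    z = rng.uniform(-1.0, 1.0, size=n)
    v = rng.uniform(-1.0, 1.0, size=n)
    eps = rng.standard_normal(n)
    x = z + v                                   # h(z) = z (our assumption)
    y = 0.0 * x + z * np.sqrt(2.0 + z) + eps    # beta = 0
    P = np.vander(z, k + 1, increasing=True)    # columns 1, z, ..., z^k
    return y, x, P
```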

The estimator considered in the Monte Carlo experiment is based on power series approximation.

Specifically, we approximate $g(z_i)$ by $p^k(z_i)'\gamma$ with $p^k(z_i) = (1, z_i, z_i^2, \dots, z_i^k)'$. For the choice of $k$, the number of regressors in the partially linear model, we set $k = 10, 40, 70$. To explore the consequences of introducing many regressors in the partially linear model, we focus on the finite sample size properties of four competing methods: $t_{HO,m}$, $m = 1, 2$, the two t-ratios studied in Cattaneo et al. (2018); the percentile-type bootstrap test based on the residual i.i.d. procedure proposed in this note; and the percentile-type bootstrap test based on the procedure proposed in Liang et al. (2000).

The main finding from the simulation is presented in Table 1. It turns out that the finite sample rejection frequency of $t_{HO,m}$, $m = 1, 2$, is quite sensitive to the choice of $k$. Specifically, these tests tend to over-reject the null hypothesis when $k$ is relatively small, but tend to under-reject when $k$ becomes large. Also, the performance of $t_{HO,1}$ is not always superior to that of $t_{HO,2}$; when $k$ is large, the degrees-of-freedom corrected $t_{HO,1}$ has difficulty rejecting the null hypothesis. For the two bootstrap procedures, Liang, Härdle, and Sommerfeld (2000)'s bootstrap controls size well when $k$ is small, but it also under-rejects when $k$ becomes a non-trivial fraction of the sample size. In contrast, our bootstrap procedure has close-to-correct empirical size for the full range of $k$ in the simulation.


4 Conclusion

In this note, we employ the asymptotic result introduced by Cattaneo et al. (2018) to derive results concerning bootstrap-based inference for the partially linear model. For the homoskedastic case, we show that when the residuals are properly re-scaled to account for the presence of many regressors, the i.i.d. bootstrap is able to mimic well the limiting distribution of the series estimator in the partially linear model, even when the number of regressors goes to infinity at the same speed as the sample size. For future research, we shall investigate the higher-order properties of the bootstrap procedure when many regressors are involved by studying Edgeworth expansions in such circumstances.


References

Cattaneo, M. D., M. Jansson, and W. K. Newey (2012): “Alternative Asymptotics and the Partially Linear Model with Many Regressors,” CREATES Research Papers 2012-02, School of Economics and Management, University of Aarhus.

Cattaneo, M. D., M. Jansson, and W. K. Newey (2018): “Alternative asymptotics and the partially linear model with many regressors,” Econometric Theory, 34(2), 277–301.

Donald, S. G., and W. K. Newey (1994): “Series Estimation of Semilinear Models,” Journal of Multivariate Analysis, 50(1), 30–40.

Hansen, C., J. Hausman, and W. Newey (2008): "Estimation With Many Instrumental Variables," Journal of Business & Economic Statistics, 26, 398–422.

Kaffo, M., and W. Wang (2017): "On bootstrap validity for specification testing with many weak instruments," Economics Letters, 157, 107–111.

Liang, H., W. Härdle, and V. Sommerfeld (2000): "Bootstrap approximation in a partially linear regression model," Journal of Statistical Planning and Inference, 91(2), 413–426.

Wang, W. (2020): “On Bootstrap Validity for the Test of Overidentifying Restrictions with Many Instruments and Heteroskedasticity,” Discussion Paper No. 104858, University Library of Munich, Germany.

Wang, W., and M. Kaffo (2016): "Bootstrap inference for instrumental variable models with many weak instruments," Journal of Econometrics, 192(1), 231–268.


A Mathematical Proofs

Throughout this Appendix, letC be a generic positive constant that may be different in different use.

For any bootstrap statisticTwe writeTp 0 in probability whenlimn→∞P[P(|T|> δ)> δ] = 0 for anyδ > 0, i.e. P(|T|> δ) =op(1). Also, we say that T =Op(nλ) in probability if and only if

∀δ >0, There exists a Mδ <∞ such that limn→∞P[P(|n−λT|> Mδ)> δ] = 0, i.e. ∀δ >0, There exists aMδ<∞such that P(|nλT|> Mδ) =op(1). Finally, we writeTd Din probability, for any distribution D, when weak convergence under the bootstrap probability measure occurs in a set with probability converging to one.

Lemma A.1 If Assumption 1 is satisfied and $E[V_i^4] < \infty$ and $E[\epsilon_i^4] < \infty$, then $E^*[V_i^{*4}]$ and $E^*[\epsilon_i^{*4}]$ are bounded in probability.

Proof of Lemma A.1 We give the proof for $E^*[V_i^{*4}]$; the proof for $E^*[\epsilon_i^{*4}]$ is similar. We have
$$E^*[V_i^{*4}] = \frac{1}{n}\sum_{i=1}^n\left[\sqrt{\frac{n}{n-k}}\left(\hat V_i - \bar V\right)\right]^4 = \left(\frac{1}{1-\alpha_n}\right)^2\frac{1}{n}\sum_{i=1}^n\left(\tilde h_i + \tilde V_i - \bar V\right)^4 \le C\left\{\frac{1}{n}\sum_{i=1}^n \tilde h_i^4 + \frac{1}{n}\sum_{i=1}^n\left(\tilde V_i - \bar V\right)^4\right\},$$
where $\alpha_n = k/n$, $\tilde h_i = h_i - \sum_{j=1}^n Q_{ij}h_j$, and $\tilde V_i = V_i - \sum_{j=1}^n Q_{ij}V_j$; $Q_{ij}$ denotes the $(i,j)$th element of $Q_Z$. The inequality follows from Minkowski's inequality.

Furthermore, note that
$$\sum_{i=1}^n \tilde h_i^2 = \operatorname{trace}\left(H'M_Z H\right) \le \sum_{i=1}^n \left(h(z_i) - p^k(z_i)'\eta_h\right)^2 = O_{a.s.}\left(nk^{-2\alpha_h}\right)$$
by Assumption 1. Therefore, $\sum_{i=1}^n \tilde h_i^4 \le \left(\sum_{i=1}^n \tilde h_i^2\right)^2 = O_{a.s.}\left(n^2 k^{-4\alpha_h}\right)$. Moreover, we have
$$E\left[\tilde V_i^4\right] \le C\sum_{j=1}^n\sum_{l=1}^n M_{ij}^2 M_{il}^2 E\left[V_j^2 V_l^2\right] \le C$$
by properties of the idempotent matrix and by Assumption 2.

Therefore, we have by Minkowski's inequality
$$\frac{1}{n}\sum_{i=1}^n\left(\tilde V_i - \bar V\right)^4 \le C\left\{\frac{1}{n}\sum_{i=1}^n \tilde V_i^4 + \bar V^4\right\} \to_p C\,E\left[\tilde V_i^4\right],$$
since $\bar V^4 \to_p \left(E[V_i]\right)^4 = 0$. Putting these results together, we obtain
$$E^*[V_i^{*4}] \le C\left\{\frac{1}{n}\sum_{i=1}^n \tilde h_i^4 + \frac{1}{n}\sum_{i=1}^n \tilde V_i^4\right\} = O_{a.s.}\left(nk^{-4\alpha_h}\right) + O_p(1) = O_p(1)$$
by Assumption 1; i.e., $E^*[V_i^{*4}]$ is bounded in probability.

Lemma A.2 If Assumptions 1 and 2 are satisfied, then
$$\hat\Gamma^* = \Gamma + o_{p^*}(1),$$
where $\hat\Gamma^* = X^{*\prime}M_Z X^*/n$ and $\Gamma = (1-\alpha)\Sigma_{vv}$.

Proof of Lemma A.2

From the bootstrap DGP, we have
$$E^*\!\left[\frac{V^{*\prime}Q_Z V^*}{k}\right] = \frac{1}{k}E^*\!\left[\operatorname{trace}\left(V^{*\prime}Q_Z V^*\right)\right] = \frac{1}{k}\operatorname{trace}\left(Q_Z E^*\!\left[V^* V^{*\prime}\right]\right) = \frac{\operatorname{trace}(Q_Z)}{k}E^*[V_i^{*2}] = E^*[V_i^{*2}],$$
since $E^*[V_i^* V_j^*] = E^*[V_i^*]E^*[V_j^*] = 0$ for $i \ne j$ by the property of the i.i.d. bootstrap.

Furthermore, note that
$$E^*\!\left[\frac{V^{*\prime}Q_Z V^*}{k} - E^*[V_i^{*2}]\right]^2 = \frac{1}{k^2}\sum_{i=1}^n\sum_{j=1}^n\sum_{l=1}^n\sum_{m=1}^n Q_{ij}Q_{lm}E^*\!\left[V_i^* V_j^* V_l^* V_m^*\right] - \frac{2}{k}E^*[V_i^{*2}]\sum_{i=1}^n\sum_{j=1}^n Q_{ij}E^*\!\left[V_i^* V_j^*\right] + \left(E^*[V_i^{*2}]\right)^2$$
$$= \frac{1}{k^2}E^*[V_i^{*4}]\sum_{i=1}^n Q_{ii}^2 + \frac{2}{k^2}\left(E^*[V_i^{*2}]\right)^2\sum_{i=2}^n\sum_{j=1}^{i-1} Q_{ij}^2 + \left\{\frac{2}{k^2}\left(E^*[V_i^{*2}]\right)^2\sum_{i=2}^n\sum_{j=1}^{i-1}\left[Q_{ii}Q_{jj} + Q_{ij}^2\right] - \left(E^*[V_i^{*2}]\right)^2\right\} \equiv L_1 + L_2 + L_3.$$

Let us first focus on $L_1$. Note that
$$L_1 \le \frac{1}{k}E^*[V_i^{*4}] = O_p\!\left(\frac{1}{k}\right)$$
by Lemma A.1 and by the fact that $\sum_{i=1}^n Q_{ii}^2 \le \sum_{i=1}^n Q_{ii} = k$.

Next, for $L_2$, we note that by our bootstrap DGP
$$E^*[V_i^{*2}] = \frac{n}{n-k}\left(\frac{1}{n}\sum_{i=1}^n\left(\hat V_i - \bar V\right)^2\right) = \frac{n}{n-k}\left(\frac{\hat V'\hat V}{n} - \bar V^2\right) \to_p \Sigma_{vv},$$
since $\hat V'\hat V/n \to_p (1-\alpha)\Sigma_{vv}$ and $\bar V \to_p E[V_i] = 0$. Moreover, we have
$$\sum_{i=1}^n Q_{ii}^2 + 2\sum_{i=2}^n\sum_{j=1}^{i-1} Q_{ij}^2 = \operatorname{trace}(Q_Z Q_Z) = \operatorname{trace}(Q_Z) = k,$$
given that $Q_Z$ is symmetric and idempotent. Therefore,
$$L_2 \le \frac{1}{k^2}\left(E^*[V_i^{*2}]\right)^2\left(\sum_{i=1}^n Q_{ii}^2 + 2\sum_{i=2}^n\sum_{j=1}^{i-1} Q_{ij}^2\right) = \frac{1}{k}\left(E^*[V_i^{*2}]\right)^2 = O_p\!\left(\frac{1}{k}\right).$$

For $L_3$, we note that
$$|L_3| = \left|\frac{1}{k^2}\left(E^*[V_i^{*2}]\right)^2\left[\left(\operatorname{trace}(Q_Z)\right)^2 + \operatorname{trace}(Q_Z Q_Z) - 2\sum_{i=1}^n Q_{ii}^2\right] - \left(E^*[V_i^{*2}]\right)^2\right|$$
$$= \left|\frac{1}{k^2}\left(E^*[V_i^{*2}]\right)^2\left(k^2 + k - 2\sum_{i=1}^n Q_{ii}^2\right) - \left(E^*[V_i^{*2}]\right)^2\right| = \left|\frac{1}{k^2}\left(E^*[V_i^{*2}]\right)^2\left(k - 2\sum_{i=1}^n Q_{ii}^2\right)\right|$$
$$\le \frac{1}{k}\left(E^*[V_i^{*2}]\right)^2 + \frac{2}{k^2}\left(E^*[V_i^{*2}]\right)^2\sum_{i=1}^n Q_{ii} = O_p\!\left(\frac{1}{k}\right).$$

Putting these results together, we obtain $E^*\!\left[V^{*\prime}Q_Z V^*/k - E^*[V_i^{*2}]\right]^2 = O_p(1/k)$, and hence $V^{*\prime}Q_Z V^*/k - E^*[V_i^{*2}] = O_{p^*}(1/\sqrt{k})$ by Markov's inequality.

Using similar arguments, we can show that $V^{*\prime}V^*/n - E^*[V_i^{*2}] = O_{p^*}(1/\sqrt{n})$. Finally, we obtain
$$\frac{V^{*\prime}M_Z V^*}{n} - (1-\alpha_n)\Sigma_{vv} = \frac{V^{*\prime}V^*}{n} - \frac{V^{*\prime}Q_Z V^*}{n} - (1-\alpha_n)\Sigma_{vv}$$
$$= \left(\frac{V^{*\prime}V^*}{n} - E^*[V_i^{*2}]\right) - \alpha_n\left(\frac{V^{*\prime}Q_Z V^*}{k} - E^*[V_i^{*2}]\right) + (1-\alpha_n)\left(E^*[V_i^{*2}] - \Sigma_{vv}\right)$$
$$= O_{p^*}\!\left(\frac{1}{\sqrt{n}}\right) - \alpha_n O_{p^*}\!\left(\frac{1}{\sqrt{k}}\right) + (1-\alpha_n)o_p(1) = o_{p^*}(1),$$
and the result follows because $X^{*\prime}M_Z X^*/n = (Q_Z X + V^*)'M_Z(Q_Z X + V^*)/n = V^{*\prime}M_Z V^*/n$ (since $M_Z Q_Z = 0$).

Proof of Theorem 2.1

We proceed by checking that $\frac{1}{\sqrt{n}}V^{*\prime}M_Z\epsilon^*$ satisfies the conditions of Lemma A2 in Hansen, Hausman and Newey (2008), conditionally on the original sample with probability converging to one.

Let $W_i^* = \frac{1}{\sqrt{n}}M_{ii}V_i^*\epsilon_i^*$. First, by our bootstrap DGP, $\{W_i^*, V_i^*, \epsilon_i^*\}$, $i = 1, \dots, n$, are (conditionally) independent across $i$. Second,
$$E^*[\epsilon_i^*] = \frac{1}{n}\sum_{i=1}^n\sqrt{\frac{n}{n-k}}\left(\hat\epsilon_i - \bar\epsilon\right) = 0, \qquad E^*[V_i^*] = \frac{1}{n}\sum_{i=1}^n\sqrt{\frac{n}{n-k}}\left(\hat V_i - \bar V\right) = 0.$$
Third, $E^*[\epsilon_i^{*4}]$ and $E^*[V_i^{*4}]$ are bounded in probability by Lemma A.1. Fourth,
$$\sum_{i=1}^n E^*[W_i^{*4}] = n^{-2}\sum_{i=1}^n M_{ii}^4 E^*\!\left[V_i^{*4}\epsilon_i^{*4}\right] = n^{-2}\sum_{i=1}^n M_{ii}^4 E^*[V_i^{*4}]E^*[\epsilon_i^{*4}] = O_p(1)\,n^{-2}\sum_{i=1}^n M_{ii}^4 \le O_p(1)\,n^{-2}\sum_{i=1}^n M_{ii} = O_p(1)\frac{n-k}{n^2} \to_p 0,$$
where the second equality follows from the property of the bootstrap DGP, the third equality from Lemma A.1, and the inequality from the facts that $M_{ii}^4 \le M_{ii}$ and $\sum_{i=1}^n M_{ii} = \sum_{i=1}^n(1 - Q_{ii}) = n - k$.

Finally, we can show that
$$\sum_{i=1}^n E^*[W_i^{*2}] + \left(1 - \frac{\sum_{i=1}^n Q_{ii}^2}{k}\right)\alpha_n\left[E^*[V_i^{*2}]E^*[\epsilon_i^{*2}] + \left(E^*[V_i^*\epsilon_i^*]\right)^2\right]$$
$$= \frac{\sum_{i=1}^n M_{ii}^2}{n}E^*[V_i^{*2}]E^*[\epsilon_i^{*2}] + \left(1 - \frac{\sum_{i=1}^n Q_{ii}^2}{k}\right)\alpha_n E^*[V_i^{*2}]E^*[\epsilon_i^{*2}]$$
$$= \left[\frac{\sum_{i=1}^n M_{ii}^2}{n} + \left(1 - \frac{\sum_{i=1}^n Q_{ii}^2}{k}\right)\alpha_n\right]E^*[V_i^{*2}]E^*[\epsilon_i^{*2}] \to_p (1-\alpha)\sigma_{\epsilon\epsilon}\Sigma_{vv},$$
where the first equality follows from the facts that $E^*[V_i^{*2}\epsilon_i^{*2}] = E^*[V_i^{*2}]E^*[\epsilon_i^{*2}]$ and $E^*[V_i^*\epsilon_i^*] = E^*[V_i^*]E^*[\epsilon_i^*] = 0$; the convergence in probability follows from
$$\frac{\sum_{i=1}^n M_{ii}^2}{n} + \left(1 - \frac{\sum_{i=1}^n Q_{ii}^2}{k}\right)\alpha_n \to 1 - \alpha$$
and $E^*[V_i^{*2}] \to_p \Sigma_{vv}$, $E^*[\epsilon_i^{*2}] \to_p \sigma_{\epsilon\epsilon}$.

Therefore, by Lemma A2 of Hansen, Hausman, and Newey (2008),
$$\frac{V^{*\prime}M_Z\epsilon^*}{\sqrt{n}} = \sum_{i=1}^n W_i^* - \sum_{i\ne j}\frac{V_i^* Q_{ij}\epsilon_j^*}{\sqrt{n}} \to_{d^*} N\!\left(0, (1-\alpha)\sigma_{\epsilon\epsilon}\Sigma_{vv}\right)$$
in probability.

As has been shown in Theorem 1 of Cattaneo, Jansson, and Newey (2012),
$$\sqrt{n}\left(\hat\beta - \beta\right) \to_d N(0, \Omega)$$
under Assumptions 1 and 2, where $\Omega = \sigma_{\epsilon\epsilon}\Gamma^{-1}$ and $\Gamma = (1-\alpha)\Sigma_{vv}$.

Note that $\sqrt{n}\left(\hat\beta^* - \hat\beta\right) = \hat\Gamma^{*-1}\frac{X^{*\prime}M_Z\epsilon^*}{\sqrt{n}} = \hat\Gamma^{*-1}\frac{V^{*\prime}M_Z\epsilon^*}{\sqrt{n}}$. Using the results of Lemma A.2, we obtain by the Continuous Mapping Theorem
$$\sqrt{n}\left(\hat\beta^* - \hat\beta\right) \to_{d^*} N(0, \Omega)$$
in probability. The first result of the theorem then follows by applying Polya's Theorem, given that the normal distribution is everywhere continuous.

For the second result, note that
$$\hat s^{*2} = \frac{\hat\epsilon^{*\prime}\hat\epsilon^*}{n-k-1} = \frac{\left(Y^* - X^*\hat\beta^*\right)'M_Z\left(Y^* - X^*\hat\beta^*\right)}{n-k-1}$$
$$= \frac{\epsilon^{*\prime}M_Z\epsilon^*}{n-k-1} + \frac{(\hat\beta - \hat\beta^*)V^{*\prime}M_Z\epsilon^*}{n-k-1} + \frac{\epsilon^{*\prime}M_Z V^*(\hat\beta - \hat\beta^*)}{n-k-1} + \frac{(\hat\beta - \hat\beta^*)V^{*\prime}M_Z V^*(\hat\beta - \hat\beta^*)}{n-k-1}$$
$$= \frac{\epsilon^{*\prime}M_Z\epsilon^*}{n-k-1} + O_{p^*}\!\left(\frac{1}{\sqrt{n}}\right) + O_{p^*}\!\left(\frac{1}{\sqrt{n}}\right) + O_{p^*}\!\left(\frac{1}{n}\right) \to_{p^*} \sigma_{\epsilon\epsilon},$$
where the third equality follows from the facts that $\epsilon^{*\prime}M_Z V^*/(n-k-1) \to_{p^*} 0$ and $V^{*\prime}M_Z V^*/(n-k-1) \to_{p^*} \Sigma_{vv}$, which can be shown by arguments similar to the proof of Lemma A.2, and from the fact that $\hat\beta - \hat\beta^* = O_{p^*}(1/\sqrt{n})$; the convergence in probability follows from $\epsilon^{*\prime}M_Z\epsilon^*/(n-k-1) \to_{p^*} \sigma_{\epsilon\epsilon}$.

Moreover, we know from Theorem 1 of Cattaneo, Jansson, and Newey (2012) that
$$\sqrt{n}\,\hat\Omega^{-1/2}\left(\hat\beta - \beta\right) \to_d N(0, 1)$$
under Assumptions 1 and 2.

Using the result of Lemma A.2 and the result that $\hat s^{*2} \to_{p^*} \sigma_{\epsilon\epsilon}$, we obtain by the Continuous Mapping Theorem
$$\sqrt{n}\,\hat\Omega^{*-1/2}\left(\hat\beta^* - \hat\beta\right) = \left(\hat s^{*2}\hat\Gamma^{*-1}\right)^{-1/2}\sqrt{n}\left(\hat\beta^* - \hat\beta\right) \to_{d^*} N(0, 1)$$
in probability. The second result then follows by applying Polya's Theorem.
