
1 Finite-Sample Maximum MSE for n = 16 and Normal Location . . . v
2 Empirical Type II Error of Tests for Normality under Cniper Contamination . . . vi
3 Empirical MSE for Normal Location, Sample Size n = 16 and Radius r = 0.2 under Cniper Contamination . . . vii
4 Finite-Sample Maximum MSE for Normal Location and Sample Size n = 16 . . . xxxvii
5 Empirical Type II Error of Tests for Normality under Cniper Contamination . . . xxxix
6 Empirical MSE for Normal Location, Sample Size n = 16 and Radius r = 0.2 under Cniper Contamination . . . xxxix

3 Binomial Model 69

3.1 Empirical and Asymptotic MSEs for a Small Simulation Study (∗ = c) . . . 106
3.2 MSE–inefficiencies for a Small Simulation Study (∗ = c) . . . 107

4 Poisson Model 109

4.1 Empirical and Asymptotic MSEs for a Small Simulation Study (∗ = v) . . . 151
4.2 MSE–inefficiencies for a Small Simulation Study (∗ = v) . . . 152

5 Exponential Scale and Gumbel Location Model 153

5.1 Least Favorable Radius and Maximum MSE–inefficiency for Lognormal Scale and Normal Location (∗ = c, v) . . . 159
5.2 Least Favorable Radius and Maximum MSE–inefficiency (∗ = c) . . . 164
5.3 Least Favorable Radius and Maximum MSE–inefficiency (∗ = v) . . . 164
5.4 Implemented Scale and Location Models . . . 165

6 Gamma Model 167

6.1 MSE–inefficiencies of Dθ η̃ϑ,r for θ = (2, 2)τ (∗ = c) . . . 171
6.2 Least Favorable Radius r0 and MSE–Inefficiency (∗ = c) . . . 172

LIST OF TABLES lxi

7 Regression and Scale 180

7.1 Comparison between Ms and BM Estimators in case of Normal Location and Scale . . . 236
7.2 Minimax Asymptotic MSE for AL, M, MK, ALc, Mc and BM Estimators in case K = Unif ({1.0, 5.0}) . . . 241
7.3 MSE–inefficiency for AL, M, MK, ALc, Mc and BM Estimators in case K = Unif ({1.0, 5.0}) . . . 242
7.4 Minimax Asymptotic MSE for AL, M, MK, ALc, Mc and BM Estimators in case K = 5/6·I{1} + 1/6·I{5} . . . 242
7.5 MSE–inefficiency for AL, M, MK, ALc, Mc and BM Estimators in case K = 5/6·I{1} + 1/6·I{5} . . . 242
7.6 Minimax Asymptotic MSE for AL, M, MK, ALc, Mc and BM Estimators in case K = 25/26·I{1} + 1/26·I{5} . . . 243
7.7 MSE–inefficiency for AL, M, MK, ALc, Mc and BM Estimators in case K = 25/26·I{1} + 1/26·I{5} . . . 243
7.8 Minimax Asymptotic MSE for AL, M, MK, ALc, Mc and BM Estimators in case K = Unif ({1.0, 2.0, 3.0, 4.0, 5.0}) . . . 243
7.9 MSE–inefficiency for AL, M, MK, ALc, Mc and BM Estimators in case K = Unif ({1.0, 2.0, 3.0, 4.0, 5.0}) . . . 244
7.10 Minimax Asymptotic MSE for AL, M, MK, ALc, Mc and BM Estimators in case K = p·∑_{k=1}^{5} k^{-1}·I{k} (p ≈ 0.438) . . . 244
7.11 MSE–inefficiency for AL, M, MK, ALc, Mc and BM Estimators in case K = p·∑_{k=1}^{5} k^{-1}·I{k} (p ≈ 0.438) . . . 244
7.12 Minimax Asymptotic MSE for AL, M, MK, ALc, Mc and BM Estimators in case K = p·∑_{k=1}^{5} k^{-2}·I{k} (p ≈ 0.683) . . . 245
7.13 MSE–inefficiency for AL, M, MK, ALc, Mc and BM Estimators in case K = p·∑_{k=1}^{5} k^{-2}·I{k} (p ≈ 0.683) . . . 245
7.14 Minimax Asymptotic MSE and MSE–inefficiency for ALs, Ms and BM Estimators in case K = Unif ({1.0, 5.0}) . . . 246
7.15 Minimax Asymptotic MSE and MSE–inefficiency for ALs, Ms and BM Estimators in case K = 5/6·I{1} + 1/6·I{5} . . . 246
7.16 Minimax Asymptotic MSE and MSE–inefficiency for ALs, Ms and BM Estimators in case K = 25/26·I{1} + 1/26·I{5} . . . 247
7.17 Minimax Asymptotic MSE and MSE–inefficiency for ALs, Ms and BM Estimators in case K = Unif ({1.0, 2.0, 3.0, 4.0, 5.0}) . . . 247
7.18 Minimax Asymptotic MSE and MSE–inefficiency for ALs, Ms and BM Estimators in case K = p·∑_{k=1}^{5} k^{-1}·I{k} (p ≈ 0.438) . . . 247
7.19 Minimax Asymptotic MSE and MSE–inefficiency for ALs, Ms and BM Estimators in case K = p·∑_{k=1}^{5} k^{-2}·I{k} (p ≈ 0.683) . . . 248

8 Normal Location and Scale – a Comparative Study 253

8.1 Least Favorable Radius and MSE–Inefficiency . . . 255
8.2 Minimax Asymptotic MSE for AL, M, BM, Huber and Hampel Estimators . . . 279
8.3 Minimax Asymptotic MSE for Andrews, Tukey and MM2 Estimators . . . 279


8.4 MSE–Inefficiency for AL, M, BM, Huber and Hampel Estimators . . . 280
8.5 MSE–Inefficiency for Andrews, Tukey and MM2 Estimators . . . 280

9 Robust Adaptivity 287

9.1 Robust Non-Adaptivity for Two Points Regressor (∗ = c, t = 0, t = α = 1) . . . 301
9.2 Robust Non-Adaptivity for Three Points Regressor (∗ = c, t = 0, t = α = 1) . . . 303
9.3 Robust Non-Adaptivity for Piecewise Uniform Regressor (∗ = c, t = 0, t = α = 1) . . . 305
9.4 Mean, Variance and Skewness of Hθ,1 and Ĥθ,1 in case of AR(1) with Gumbel Innovations . . . 321
9.5 Results of 10 Monte Carlo Simulations in case of AR(1) with Gumbel Innovations . . . 321
9.6 Robust Non-Adaptivity for AR(1) with Gumbel Distributed Innovations (∗ = c, t = α = 1) . . . 322
9.7 Robust Non-Adaptivity for ARCH(1) with Lognormal Innovations (∗ = c, t = α = 1) . . . 326
9.8 Robust Non-Adaptivity for ARCH(1) with Lognormal Innovations and Parameters α1 = 1.5, β = 1.0 (∗ = c, t = α = 1) . . . 327

11 One-Dimensional Normal Location 354

11.1 Precision of the Computation of the Finite-Sample Risk for Sample Size n = 2 and Contamination Neighborhoods (∗ = c) . . . 369
11.2 Precision of the Computation of the Finite-Sample Risk for Sample Size n = 2 and Total Variation Neighborhoods (∗ = v) . . . 370
11.3 Comparison between Algorithm A, Algorithm B and Empirical Results (∗ = c) . . . 371
11.4 Approximation of the Finite-Sample Risk via Edgeworth Expansions and Saddlepoint Approximations (∗ = c) . . . 374
11.5 Approximation of the Finite-Sample Risk via Edgeworth Expansions and Saddlepoint Approximations (∗ = v) . . . 375
11.6 Comparison of Optimal Clipping Bounds and Finite-Sample Risks (∗ = c) . . . 381
11.7 Maximum Relative Risk (∗ = c) . . . 384
11.8 Least Favorable Radii for the Finite-Sample Minimax Estimator (∗ = c) . . . 385
11.9 Comparison of Optimal Clipping Bounds and Finite-Sample Risks (∗ = v) . . . 394
11.10 Maximum Relative Risk (∗ = v) . . . 397
11.11 Least Favorable Radii for the Finite-Sample Minimax Estimator (∗ = v) . . . 398


12 One-Dimensional Normal Linear Regression 404

12.1 Comparison between Algorithm A, Algorithm B and Empirical Results for Discrete Regressor Distribution K (∗ = c, t = 0) . . . 420
12.2 Comparison between Algorithm A, Algorithm B and Empirical Results for Discrete Regressor Distribution K (∗ = v, t = 0) . . . 421
12.3 Comparison between Algorithm A, Algorithm B and Empirical Results for Absolutely Continuous Regressor Distribution K (∗ = c, t = 0) . . . 422
12.4 Comparison between Algorithm A, Algorithm B and Empirical Results for Absolutely Continuous Regressor Distribution K (∗ = v, t = 0) . . . 423
12.5 Comparison between Algorithm A and Empirical Results for Discrete Regressor Distribution K (∗ = c, t = ε) . . . 433
12.6 Comparison between Algorithm A and Empirical Results for Discrete Regressor Distribution K (∗ = v, t = δ) . . . 434
12.7 Comparison between Algorithm A and Empirical Results for Absolutely Continuous Regressor Distribution K (∗ = c, t = ε) . . . 435
12.8 Comparison between Algorithm A and Empirical Results for Absolutely Continuous Regressor Distribution K (∗ = v, t = δ) . . . 435
12.9 Comparison of Optimal Clipping Bounds and Finite-Sample Risks for Discrete Regressor Distribution K (∗ = c, t = 0) . . . 442
12.10 Comparison of Optimal Clipping Bounds and Finite-Sample Risks for Absolutely Continuous Regressor Distribution K (∗ = c, t = 0) . . . 444
12.11 Maximum Relative Risk for Discrete Regressor Distribution K (∗ = c, t = 0) . . . 449
12.12 Maximum Relative Risk for Absolutely Continuous Regressor Distribution K (∗ = c, t = 0) . . . 450
12.13 Least Favorable Radii for the Finite-Sample Minimax Estimator and Discrete Regressor Distribution K (∗ = c, t = 0) . . . 451
12.14 Least Favorable Radii for the Finite-Sample Minimax Estimator and Absolutely Continuous Regressor Distribution K (∗ = c, t = 0) . . . 452
12.15 Comparison of Optimal Clipping Bounds and Finite-Sample Risks for Discrete Regressor Distribution K (∗ = v, t = 0) . . . 464
12.16 Comparison of Optimal Clipping Bounds and Finite-Sample Risks for Absolutely Continuous Regressor Distribution K (∗ = v, t = 0) . . . 466
12.17 Maximum Relative Risk for Discrete Regressor Distribution K (∗ = v, t = 0) . . . 471
12.18 Maximum Relative Risk for Absolutely Continuous Regressor Distribution K (∗ = v, t = 0) . . . 472
12.19 Least Favorable Radii for the Finite-Sample Minimax Estimator and Discrete Regressor Distribution K (∗ = v, t = 0) . . . 473
12.20 Least Favorable Radii for the Finite-Sample Minimax Estimator and Absolutely Continuous Regressor Distribution K (∗ = v, t = 0) . . . 474


C Convolution via Fast Fourier Transform 540

C.1 Precision of the Convolution of Binomial Distributions via FFT . . . 546
C.2 Precision of the Convolution of Poisson Distributions via FFT . . . 547
C.3 Precision of the Convolution of Normal Distributions via FFT is Independent of the Parameters µ and σ . . . 548
C.4 Precision of the Convolution of Normal Distributions via FFT . . . 549
C.5 Precision of the Convolution of Exponential Distributions via FFT is Independent of the Parameter λ . . . 549
C.6 Precision of the Convolution of Exponential Distributions via FFT . . . 550

D Optimally Robust Estimation by means of S4 Classes and Methods 551

D.1 Loading Times for the R Packages included in R Bundle RobASt . . . 551
D.2 Generating Functions of Package distrEx . . . 554
D.3 New Generic Functions of Package distrEx . . . 555
D.4 Generating Functions of Package RandVar . . . 558
D.5 New Generic Functions of Package RandVar . . . 559
D.6 Generating Functions of Package ROptEst (Part 1) . . . 567
D.7 Generating Functions of Package ROptEst (Part 2) . . . 568
D.8 Methods for Generic Function optIC in Package ROptEst . . . 569
D.9 Generic Functions for the Computation of Optimal (Robust) ICs in Package ROptEst . . . 569
D.10 Further New Generic Functions in Package ROptEst . . . 572
D.11 Generating Functions of Package ROptRegTS (Part 1) . . . 578
D.12 Generating Functions of Package ROptRegTS (Part 2) . . . 579
D.13 Methods for Generic Function optIC in Package ROptRegTS . . . 579
D.14 Generic Functions for the Computation of Optimal (Robust) ICs in Package ROptRegTS . . . 580

Notation

Abbreviations

a.e. almost everywhere, almost surely

ibid. ibidem, in the same place; confer the book, chapter, article, or page cited just before

i.i.d. stochastically independent, identically distributed

s.t. subject to

w.r.t. with respect to, relative to

AL asymptotically linear

DFT discrete Fourier transform

IC influence curve

FFT fast Fourier transform

ksMD Kolmogorov(–Smirnov) minimum distance (estimator)

MD minimum distance

MLE maximum likelihood estimator

MSE mean square error

maxMSE minimax asymptotic MSE

relMSE MSE–inefficiency

RHS right-hand side

//// QED

Sets and Functions

N the natural numbers {1,2, . . .}

N0 the natural numbers including 0, {0, 1, 2, . . .}

Z the integers {. . . ,−1,0,1, . . .}

R the real numbers (−∞,∞)

R̄ the extended real numbers [−∞, ∞], homeomorphic to [0, 1] ⊂ R via the smooth homeomorphism z ↦ e^z/(e^z + 1)

C the complex numbers

× Cartesian product of sets

IA, I(A) indicator function of a set or statement A; i.e., for any set A, we may write IA(x) = I(x ∈ A)

sign sign(x) = −1, 0, 1 for x negative/zero/positive

f(x±0) right/left-hand limit at x of a function f

Λ L2 derivative

I Fisher information
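As an illustrative aside (not from the thesis, whose software is the R bundle RobASt), the relation between the L2 derivative Λ and the Fisher information I can be checked numerically in the normal location model, where Λθ(x) = (x − θ)/σ² and I = Eθ Λθ² = 1/σ². A minimal Python/NumPy sketch; the function name is hypothetical:

```python
import numpy as np

# Sketch (assumption: normal location model P_theta = N(theta, sigma^2)):
# the L2 derivative (score) is Lambda(x) = (x - theta)/sigma^2 and the
# Fisher information is I = E[Lambda^2] = 1/sigma^2.
def fisher_info_normal_location(theta=0.0, sigma=1.0, n_grid=200001, span=10.0):
    x = np.linspace(theta - span * sigma, theta + span * sigma, n_grid)
    dx = x[1] - x[0]
    density = np.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    score = (x - theta) / sigma ** 2            # L2 derivative Lambda
    return np.sum(score ** 2 * density) * dx    # Riemann sum for E[Lambda^2]

print(fisher_info_normal_location(sigma=2.0))   # close to 1/sigma^2 = 0.25
```

Truncating the integral at ±10σ costs only a negligible tail mass, so the Riemann sum reproduces 1/σ² to high accuracy.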


σ-Algebras

B, B̄ Borel σ-algebras on R and R̄, respectively

⊗ product of σ-algebras

Measures

M1(A) the probability measures (mass 1) on A

support P smallest closed subset A of Ω (separable, metric) such that P(Ω\A) = 0; cf. II Definition 2.1 of Parthasarathy (1967)

≪ domination of measures

⊗ product of measures

∗ convolution of measures

−→w weak convergence of (bounded) measures

Random Variables and Expectation

∼ distributed according to

LP(X) law of X under P

EX expectation of X

VarX variance of X

CovX covariance of X

E conditional expectation given the regressor

−→Pn stochastic convergence, convergence in Pn-probability

Laws

Ia (Dirac) one-point measure in a; Ia(A) = IA(a)

Binom (m, p) binomial distribution with size m ∈ N and probability of success p ∈ [0, 1]

Exp (λ) exponential distribution with scale λ ∈ (0, ∞)

Gamma (σ, α) Gamma distribution with scale σ ∈ (0, ∞) and shape α ∈ (0, ∞)

Gumbel (µ, σ) Gumbel distribution with location parameter µ ∈ R and scale σ ∈ (0, ∞)

N(µ, σ2) normal law on (Rm, Bm) with mean µ ∈ Rm and standard deviation σ

ϕ, Φ standard normal density and distribution function on R

Pois (λ) Poisson distribution with mean λ ∈ (0, ∞)

Unif (M) uniform distribution on M ⊂ R

Mathematical Symbols

⊂, ⊃ subset/superset

≤ less or equal, coordinatewise onRm

∗ convolution of periodic sequences of complex numbers

x+, x− positive and negative parts

∧,min minimum

∨,max maximum


inf, sup pointwise infimum/supremum

infP, supP P-essential infimum and supremum

inf,sup conditional essential extrema given the regressor

↑, ↓ monotone convergence from below/above of numbers

o,O the Landau symbols

lin (x1, . . . , xk) linear space generated by x1, . . . , xk
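The convolution ∗ of periodic sequences listed above is exactly what the DFT (see the Abbreviations) diagonalizes: the DFT of a circular convolution is the pointwise product of the DFTs, which is the principle behind the FFT-based convolutions of Appendix C. A minimal Python/NumPy sketch of this, not the thesis's R implementation; zero-padding makes the circular convolution agree with the ordinary one:

```python
import numpy as np

# Circular convolution of two periodic complex sequences via the DFT:
# DFT(a * b) = DFT(a) . DFT(b), hence a * b = IDFT(DFT(a) . DFT(b)).
def circ_conv(a, b):
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

# Zero-padded example: convolving the Binom(2, 0.5) probability vector with
# itself yields the Binom(4, 0.5) probabilities (1, 4, 6, 4, 1)/16.
p = np.array([0.25, 0.5, 0.25, 0.0, 0.0])   # padded to length 3 + 3 - 1 = 5
q = circ_conv(p, p).real
print(np.round(q, 4))                        # 0.0625, 0.25, 0.375, 0.25, 0.0625
```

Padding to at least len(a) + len(b) − 1 prevents wrap-around, so the circular result coincides with the ordinary convolution of the two probability vectors.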

Matrices and Vectors

Ik the unit k×k matrix

A ∈ Rp×k a real matrix with p rows and k columns

Aτ transpose of a matrix A

rkA rank of A

trA trace of A

minev(A) minimal eigenvalue of A

C(A) column space of A

N(A) null space of A

A ≻ B A − B positive definite

A ⪰ B A − B positive semidefinite

A ⊗ B Kronecker product of matrices A and B; cf. Definition B.1.1

vec (A) vec operator: transforms matrix A ∈ Rp×k into a pk-dimensional column vector; cf. Definition B.1.2

vech (A) vech operator: transforms matrix A ∈ Rk×k into a k(k+1)/2-dimensional column vector; cf. Definition B.2.1

diag (a1, . . . , ak) diagonal matrix with diagonal a1, . . . , ak
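A short Python/NumPy illustration (an aside with hypothetical helper names; the thesis itself works in R) of the Kronecker product and the vec and vech operators defined above, including the standard identity vec(ABC) = (Cτ ⊗ A) vec(B):

```python
import numpy as np

def vec(A):
    # vec: stack the columns of A into one long column (column-major order)
    return A.reshape(-1, order="F")

def vech(A):
    # vech: stack the on-and-below-diagonal entries of a k x k matrix columnwise
    k = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(k)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 3.0]])
S = np.array([[1.0, 2.0], [2.0, 5.0]])   # symmetric, k = 2

print(vec(A))    # [1. 3. 2. 4.]
print(vech(S))   # [1. 2. 5.]  (length k(k+1)/2 = 3)
# Standard identity linking vec and the Kronecker product:
print(np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B)))  # True
```

The vech operator matters for symmetric matrices such as covariances, where vec would store each off-diagonal entry twice.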

||A||op operator norm of A; i.e., ||A||op = sup{|Ax| : |x| ≤ 1}

|x| Euclidean norm of x ∈ Rm

Function Spaces

D1 functions ϕ : R̄ → R which are infinitely differentiable on R, continuous on R̄, and whose derivative ϕ′ has compact support

Cc1 functions ϕ : R → R which are continuously differentiable and have compact support

Cc∞ functions ϕ : R → R which are infinitely differentiable and have compact support

Lk2(P) the Hilbert space of (equivalence classes of) Rk-valued functions f such that ∫ |f|2 dP < ∞; L2(P) = L12(P)

Lk∞(P) the space of (equivalence classes of) Rk-valued functions f such that supP |f| < ∞; L∞(P) = L1∞(P)

Z(θ) Lk(Pθ)∩ {Eθ= 0}; the tangents at Pθ

Ψα(θ), ΨDα(θ) set of square integrable (α = 2) and bounded (α = ∞) influence curves at Pθ; respectively, partial influence curves at Pθ, with some matrix D ∈ Rp×k such that rkD = p ≤ k; cf. Definition 1.1.1


Ψα•(θ), ΨDα•(θ) set of square integrable (α = 2) and bounded (α = ∞) conditionally centered (partial) influence curves, respectively; cf. Definition A.1.2

ΨM(θ) square integrable influence curves for general M estimators;

cf. (7.1.50)

ΨMc(θ) conditionally centered, square integrable influence curves for general M estimators; cf. (7.1.51)

ΨMs(θ) conditionally centered, square integrable influence curves for sectionwise M estimators; cf. (7.1.53)

ΨBM(θ) conditionally centered, square integrable influence curves for BM estimators; cf. (7.1.52)

Neighborhoods and Bias Terms

∗ = c, v, κ type of balls and metric: contamination, total variation, Kolmogorov

t = 0, ε, δ, α type of neighborhoods: (general parameter) unconditional (t = 0), and conditional regression neighborhoods with fixed contamination curve (t = ε, δ), respectively average (square) conditional regression balls of exponent α = 1, 2; cf. Sections 10.1 and A.1

U(θ), U∗,t(θ) neighborhood system about Pθ

U(θ, r), U∗,t(θ, r) such a neighborhood about Pθ of radius r ∈ (0, ∞); in the infinitesimal robust setup, usually r = O(1/√n)

G(θ), G∗,t(θ) corresponding tangent classes

ω∗,t standardized (infinitesimal) bias terms

ωM standardized (infinitesimal) bias terms for general M estimators

ωMc standardized (infinitesimal) bias terms for general M estimators with conditionally centered ICs

ωMs standardized (infinitesimal) bias terms for sectionwise M estimators

ωBM standardized (infinitesimal) bias terms for BM estimators

Part I

Asymptotic Theory of