Academic year: 2021
* 3.4 Influence function and asymptotic distribution

The one-dimensional case

• Estimator $T(X_1, \dots, X_n) \to t$, consistency
• Functional $T(F)$. Applied to data: $T(\hat F_n)$.
• Consistency: $T(\hat F_n) \to T(F)$
• Central limit theorem:
  $$T(\hat F_n) \stackrel{\mathrm{approx.}}{\sim} \mathcal N\bigl(T(F),\, v/n\bigr), \qquad v = E\bigl[\mathrm{IF}(X;\, T, F)^2\bigr].$$
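As a sanity check (an illustration, not from the slides), the Monte Carlo sketch below takes $T$ to be the sample median under $F = \mathcal N(0,1)$; its influence function is $\mathrm{IF}(x; T, F) = \operatorname{sign}(x - T(F))/(2 f(T(F)))$, so $v = 1/(4 f(0)^2) = \pi/2$. The sample size, replication count, and seed are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of T(F_n) ~approx~ N(T(F), v/n) for T = sample median
# under F = N(0, 1).  The influence function of the median is
# IF(x; T, F) = sign(x - T(F)) / (2 f(T(F))), hence
# v = E[IF^2] = 1 / (4 f(0)^2) = pi / 2.
rng = np.random.default_rng(0)
n, reps = 200, 4000
medians = np.median(rng.standard_normal((reps, n)), axis=1)
v_theory = np.pi / 2            # asymptotic variance from the IF
v_empirical = n * medians.var() # n * Var(T(F_n)) should match v
print(v_theory, v_empirical)    # the two values are close
```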


Model with density $f(x, \theta)$ $\longrightarrow$ maximum likelihood est.

More general: M-estimators $\sum_i \psi\bigl(X_i, \hat\theta\bigr) = 0$

Functional: $\int \psi(x, \theta)\, dF(x) = 0$

MLE: $\psi(x, \theta) = \dfrac{\partial}{\partial\theta} \log f(x, \theta) =: s(x, \theta)$.

• $\mathrm{IF}(x; F) = \dfrac{1}{c}\, \psi(x, \theta)$,
  $$c = -\int \frac{\partial \psi}{\partial \theta}(x, \theta)\, f(x, \theta)\, dx = \int \psi(x, \theta)\, s(x, \theta)\, f(x, \theta)\, dx$$
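The two expressions for $c$ can be compared numerically. The sketch below is an illustration, not from the slides: it assumes the normal location model $f(x,\theta) = \mathcal N(\theta, 1)$, so $s(x,\theta) = x - \theta$, and a Huber $\psi$ clipped at $k = 1.345$; the grid and clipping constant are ad-hoc choices.

```python
import numpy as np

# Numerical check of the two expressions for the constant c in
# IF(x; F) = psi(x, theta) / c, for the normal location model with score
# s(x, theta) = x - theta and the Huber psi with clipping constant k.
k, theta = 1.345, 0.0
x = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]
f = np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)
psi = np.clip(x - theta, -k, k)
# d psi / d theta = -1{|x - theta| < k}, so -dpsi/dtheta is the indicator:
ind = (np.abs(x - theta) < k).astype(float)
c1 = (ind * f).sum() * dx               # c = -int dpsi/dtheta f dx
c2 = (psi * (x - theta) * f).sum() * dx # c = int psi s f dx
print(c1, c2)                           # the two integrals agree
```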

3.4 More than one parameter, and possibly more than one $X$

• Estimator $T(X_1, \dots, X_n) \to t$, consistency
• Functional $T(F)$. Applied to data: $T(\hat F_n)$.
• Consistency: $T(\hat F_n) \to T(F)$
• Central limit theorem:
  $$T(\hat F_n) \stackrel{\mathrm{approx.}}{\sim} \mathcal N\bigl(T(F),\, V/n\bigr), \qquad V = E\bigl[\mathrm{IF}(X;\, T, F)\, \mathrm{IF}(X;\, T, F)^T\bigr].$$


Model with density $f(x, \theta)$ $\longrightarrow$ maximum likelihood est.

More general: M-estimators $\sum_i \psi\bigl(X_i, \hat\theta\bigr) = 0$

Functional: $\int \psi(x, \theta)\, dF(x) = 0$

MLE: $\psi(x, \theta) = \dfrac{\partial}{\partial\theta} \log f(x, \theta) =: s(x, \theta)$.

• $\mathrm{IF}(x; F) = M^{-1}\, \psi(x, \theta)$,
  $$M = -\int \frac{\partial \psi}{\partial \theta}(x, \theta)\, f(x, \theta)\, dx = \int \psi(x, \theta)\, s(x, \theta)^T\, f(x, \theta)\, dx$$
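A quick Monte Carlo check of the two expressions for $M$, in an illustrative setting not taken from the slides: a coordinatewise Huber $\psi$ in a 2-dimensional $\mathcal N(\mu, I)$ model, where the score is $s(x, \mu) = x - \mu$. The clipping constant, sample size, and seed are arbitrary.

```python
import numpy as np

# Check M = -E[ d psi / d mu ] = E[ psi(X, mu) s(X, mu)^T ] by simulation,
# with coordinatewise Huber psi and X ~ N(mu, I) in dimension 2, so that
# s(x, mu) = x - mu.
rng = np.random.default_rng(1)
k, mu = 1.345, np.zeros(2)
X = rng.standard_normal((1_000_000, 2))
psi = np.clip(X - mu, -k, k)
# Jacobian of psi w.r.t. mu is -diag(1{|x_j - mu_j| < k}); average it:
M1 = np.diag((np.abs(X - mu) < k).mean(axis=0))
# Outer products psi(x) (x - mu)^T, averaged over the sample:
M2 = (psi[:, :, None] * (X - mu)[:, None, :]).mean(axis=0)
print(M1)
print(M2)  # close to M1: diagonal, with entries P(|X_j| < k)
```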


* 3.5 Robust Estimators

Multidimensional location.

$$f(x, \mu) = f_0(x - \mu), \qquad f_0(z) = c \cdot \exp\Bigl(-\sum_j z_j^2 \big/ 2\Bigr), \qquad 1/c = (2\pi)^{m/2}$$

M-estimators: $\sum_i \psi\bigl(x_i, \hat\mu\bigr) = 0$

Natural choice: $\psi(x, \mu) = w(u)\,(x - \mu)$, $\quad u_i = \|x_i - \mu\|^2$

$$\hat\mu = \sum_i w(u_i)\, x_i \Big/ \sum_i w(u_i)$$

$$M = \int w\bigl(\|x - \mu\|^2\bigr)\,(x - \mu)(x - \mu)^T\, f_0(x - \mu)\, dx = \frac{1}{m}\Bigl(\int w(u)\, u\, f_u(u)\, du\Bigr)\, I =: b\, I,$$

where $f_u$ denotes the density of $u = \|X - \mu\|^2$.


Sensitivity, Huber function.

$$\gamma(T, F) = \sup_x \bigl\| \mathrm{IF}(x;\, T, F) \bigr\|$$

Censor the scores $x - \mu$: $\quad\longrightarrow\quad w(u) = \min(1,\, c/u)$
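The censored weights suggest the usual reweighting iteration for $\hat\mu$. Below is a minimal sketch under assumptions not fixed by the slides: the name `m_location`, the constant $c = 4$, and the stopping rule are all ad-hoc choices.

```python
import numpy as np

# Reweighting iteration for the multivariate location M-estimator with
# censoring weights w(u) = min(1, c/u), u_i = ||x_i - mu||^2.
def m_location(x, c=4.0, n_iter=100):
    mu = x.mean(axis=0)                    # start from the sample mean
    for _ in range(n_iter):
        u = ((x - mu) ** 2).sum(axis=1)    # squared distances to mu
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))
        mu_new = (w[:, None] * x).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < 1e-10:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(2)
x = rng.standard_normal((500, 2))
x[0] = [100.0, 100.0]                      # plant a gross outlier
print(x.mean(axis=0))                      # mean is dragged toward the outlier
print(m_location(x))                       # M-estimate stays near the center
```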

M-estimates for $\Sigma$.

$$\hat\Sigma = \sum_i w_\Sigma(u_i)\,(x_i - \hat\mu)(x_i - \hat\mu)^T \Big/ \sum_i w_\Sigma(u_i), \qquad u_i = (x_i - \hat\mu)^T\, \hat\Sigma^{-1}\, (x_i - \hat\mu).
$$
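The defining equations for $\hat\Sigma$ can likewise be solved by fixed-point iteration. A minimal sketch, with $\hat\mu$ taken as known for simplicity; the name `m_scatter`, the Huber-type weight with $c = 6$, and the iteration limit are ad-hoc choices, not from the slides.

```python
import numpy as np

# Fixed-point iteration for the M-estimate of scatter:
#   Sigma = sum_i w(u_i)(x_i - mu)(x_i - mu)^T / sum_i w(u_i),
#   u_i   = (x_i - mu)^T Sigma^{-1} (x_i - mu),
# with censoring weights w(u) = min(1, c/u) and mu assumed known.
def m_scatter(x, mu, c=6.0, n_iter=200):
    d = x - mu
    sigma = np.cov(x, rowvar=False)        # start from the sample covariance
    for _ in range(n_iter):
        u = np.einsum('ij,jk,ik->i', d, np.linalg.inv(sigma), d)
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))
        sigma_new = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(0) / w.sum()
        if np.linalg.norm(sigma_new - sigma) < 1e-10:
            break
        sigma = sigma_new
    return sigma

rng = np.random.default_rng(3)
x = rng.standard_normal((2000, 2))
x[0] = [50.0, -50.0]                       # plant a gross outlier
print(np.cov(x, rowvar=False))             # sample covariance is badly inflated
print(m_scatter(x, np.zeros(2)))           # M-estimate stays near the identity
```

Note that the censored weights introduce a small downward bias at the normal model; in practice a consistency correction factor is applied.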

Breakdown Point.
