(1)

Ensemble smoothing under the influence of nonlinearity

Lars Nerger

Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany

Svenja Schulte and Angelika Bunse-Gerstner

University of Bremen, Germany

University of Reading, July 2, 2013

(2)

Outline

  Ensemble smoothers

  Influence of nonlinearity

  Influence of localization

  Smoothing in a real model

(3)

Ensemble Smoothers

(4)

Smoothers

Filters (e.g. Ensemble Kalman filter):

  Estimate using observations until analysis time

Smoothers perform retrospective analysis:

  Use future observations for estimation in the past

  Example applications:

  Reanalysis

  Parameter estimation

(5)

Ensemble smoothing

  Smoothing is very simple (ensemble matrix $X$) (see e.g. Evensen, 2003):

Filter: $X^a_{k|k} = X^f_{k|k-1} C_k$

Smoother: $X^a_{k-1|k} = X^a_{k-1|k-1} C_k$

In the numerical experiments, the matrix $\tilde{D}_\delta$ is constructed using a 5th-order polynomial function (Eq. 4.10 of Gaspari and Cohn 1999), which mimics a Gaussian function but has compact support. The distance between the analysis and observation grid points at which the function becomes zero is used here to define the localization length (see the sketch below).
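For concreteness, here is a minimal NumPy sketch of that fifth-order piecewise polynomial (Eq. 4.10 of Gaspari and Cohn 1999); the parametrization with a support half-width c, so that the weight reaches zero at distance 2c, is an assumption of this sketch.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari & Cohn (1999), Eq. 4.10: compactly supported, Gaussian-like.

    dist : distances between analysis and observation grid points
    c    : support half-width; the weight becomes exactly zero at 2*c,
           which corresponds to the localization length defined in the text.
    """
    r = np.abs(np.asarray(dist, dtype=float)) / c
    w = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri, ro = r[inner], r[outer]
    w[inner] = -0.25*ri**5 + 0.5*ri**4 + 0.625*ri**3 - (5.0/3.0)*ri**2 + 1.0
    w[outer] = (ro**5/12.0 - 0.5*ro**4 + 0.625*ro**3 + (5.0/3.0)*ro**2
                - 5.0*ro + 4.0 - 2.0/(3.0*ro))
    return w
```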

c. The smoother extension ESTKS

The smoother extension of the ESTKF is formulated analogously to the ensemble Kalman smoother (EnKS, Evensen 2003). The sequential smoother computes a state correction at an earlier time $t_i$, $i < k$, utilizing the filter analysis update at time $t_k$.

For the smoother, the notation is extended according to the notation used in estimation theory (see, e.g., Cosme et al. 2010): A subscript $i|j$ is used, where $i$ refers to the time that is represented by the state vector and $j$ refers to the latest time for which observations are taken into account. Thus, the former analysis state $x^a_k$ is written as $x^a_{k|k}$ and the forecast state $x^f_k$ is denoted as $x^f_{k|k-1}$. In this notation, the superscripts $a$ and $f$ are redundant.

To formulate the smoother, the transformation equation (14) is first written as a product of the forecast ensemble with a weight matrix as

$X^a_{k|k} = X^f_{k|k-1} G_k$   (19)

with

$G_k = \mathbf{1}^{(m)} + T \left( W_k + \bar{W}_k \right).$   (20)

Here the relation $\bar{X}^f_{k|k-1} = X^f_{k|k-1} \mathbf{1}^{(m)}$ is used with the matrix $\mathbf{1}^{(m)}$ that contains the value $m^{-1}$ in all entries. The smoothed state ensemble at time $t_{k-1}$ then takes into account all observations up to time $t_k$.

  Ensemble smoothing is cheap

  e.g. E. Kalnay: “no-cost smoother”

  weight matrix $C_k$ already computed in the filter

  just recombine previous ensemble states (actually the most costly part of the filter)

  but: smoothing is recursive – application of each $C_k$ for all previous times within the lag (see the sketch below)
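As a sketch of this recursion: at each analysis time the new weight matrix ($C_k$ above, $G_k$ in Eqs. 19-20) is applied to all stored past ensembles within the lag window. The filter call and all names are illustrative assumptions, not PDAF's API.

```python
import numpy as np
from collections import deque

def apply_smoother(past_ensembles, C_k):
    """No-cost smoother step: X^a_{i|k} = X^a_{i|k-1} C_k for all stored i."""
    return deque((X @ C_k for X in past_ensembles),
                 maxlen=past_ensembles.maxlen)

# Schematic assimilation loop (filter_analysis is a hypothetical routine
# returning the analysis ensemble and its N x N weight matrix):
# history = deque(maxlen=lag)
# for each analysis time k:
#     X_a, C_k = filter_analysis(X_f, observations)
#     history = apply_smoother(history, C_k)  # retrospective update
#     history.append(X_a)                     # newest analysis joins the window
```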

(6)

Smoother with linear model

Smoother is optimal for linear systems:

➜  Forecast of smoothed state = filter analysis at later time

$X^a_{i|k} = X^a_{i|i} \prod_{j=i+1}^{k} C_j$

$X^a_{k|k} = M_{k,i} X^a_{i|i} \prod_{j=i+1}^{k} C_j$

➜  Based on ensemble cross-correlation between two time instances

➜  Each additional lag reduces error (if covariances are correctly estimated, Cohn et al. 1994)

(Ensemble perturbation matrix $X' = X - \bar{X}$)

$\bar{x}^a_{i|k} = \bar{x}^a_{i|k-1} + X'^a_{i|k-1} \left( X'^a_{k|k} \right)^T E$
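The first relation holds because a linear model operator multiplies the ensemble from the left while the smoother weights multiply from the right, so the two operations commute; a tiny NumPy check of this (all shapes and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 4
M = rng.standard_normal((n, n))      # linear model operator M_{k,i}
X_ai = rng.standard_normal((n, N))   # filter analysis ensemble X^a_{i|i}
C = rng.standard_normal((N, N))      # accumulated weights, product of C_j

# Forecast of the smoothed ensemble = weighted forecast of the analysis:
assert np.allclose(M @ (X_ai @ C), (M @ X_ai) @ C)
```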

(7)

Smoother and Nonlinearity

(8)

Smoother and nonlinearity

  Optimality doesn’t hold with nonlinear systems! (mentioned e.g. by Cosme et al. 2010)

$\bar{x}^a_{i|i+1} = \bar{x}^a_{i|i} + X'^a_{i|i} \left( X'^a_{i+1|i+1} \right)^T \tilde{E}$

(the ensemble cross covariance $X'^a_{i|i} ( X'^a_{i+1|i+1} )^T$ is influenced by the nonlinear model)

➜   What is the effect of the nonlinearity?

➜   Do ensembles just decorrelate?

➜  Consider smoother performance relative to filter (the smoother reduces the estimation error of the filter)

(9)

Numerical study with Lorenz-96

  Cheap and small model (state dimension 40; a reference implementation is sketched after this list)

  Local and global filters possible

  Nonlinearity controlled by forcing parameter F

  Up to F=4: periodic waves; perturbations damped

  F>4: non-periodic

  Nonlinearity of assimilation also influenced by forecast length

  Experiments over 20,000 time steps

  Use smoother with ESTKF (Nerger et al., 2012)

  Tune covariance inflation for minimal RMS errors

  Implemented in open source assimilation software PDAF

(http://pdaf.awi.de)
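For reference, a compact Lorenz-96 implementation of the kind used in such experiments; the fourth-order Runge-Kutta integrator and the step size dt = 0.05 are assumptions of this sketch, since the slides do not state them.

```python
import numpy as np

def lorenz96_tendency(x, F):
    """dx_j/dt = (x_{j+1} - x_{j-2}) * x_{j-1} - x_j + F (cyclic indices)."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt=0.05):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 + np.zeros(40)   # state dimension 40, as on the slide
x[19] += 0.01            # small perturbation to trigger the dynamics
for _ in range(20000):   # experiment length from the slide
    x = rk4_step(x, F=5.0)
```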

(10)

The ESTKF: First compare ETKF and SEIK

Square root of the covariance matrix (ensemble size N, state dimension n):

$X^f_k = \left[ x^{f(1)}_k, \dots, x^{f(N)}_k \right]$,  $\bar{X}^f_k = \left[ \bar{x}^f_k, \dots, \bar{x}^f_k \right]$

$Z^f_k = X^f_k - \bar{X}^f_k$, or in square-root form $Z = X^f T$

$\check{P}^f_k = \frac{1}{N-1} \sum_{l=1}^{N} \left( x^{f(l)}_k - \bar{x}^f_k \right) \left( x^{f(l)}_k - \bar{x}^f_k \right)^T = \frac{1}{N-1} Z^f_k \left( Z^f_k \right)^T = Z^f_k G \left( Z^f_k \right)^T$,  $G := \frac{1}{N-1} I$

T is specific for the filter algorithm:

ETKF: T removes the ensemble mean (usually, Z is computed directly); Z has dimension n×N

SEIK: T removes the ensemble mean and drops the last column; Z has dimension n×(N−1)

Analysis:

$\bar{x}^a_k = \bar{x}^f_k + Z^f_k \bar{w}_k$,  $\bar{w}_k = A_k \left( H_k Z^f_k \right)^T R_k^{-1} \left( y^o_k - H_k \bar{x}^f_k \right)$

$A_k^{-1} = G^{-1} + \left( H_k Z^f_k \right)^T R_k^{-1} H_k Z^f_k$  (for the ETKF: $A_k^{-1} = (N-1) I + \left( H_k Z^f_k \right)^T R_k^{-1} H_k Z^f_k$)

Analysis state covariance matrix: $\check{P}^a_k = Z^f_k A_k \left( Z^f_k \right)^T$

Transformation matrix A in ensemble space (small matrix):

ETKF: A has dimension N²; $G = \frac{1}{N-1} I$ (scaled identity matrix)

SEIK: A has dimension (N−1)²; $G = \frac{1}{N-1} \left( T^T T \right)^{-1}$

Ensemble transformation, based on a square root of A ($X^a \propto Z L$ with $L L^T = A$):

$X^a_k = \bar{X}^f_k + Z^f_k \left( W_k + \bar{W}_k \right)$,  $\bar{W}_k = \left[ \bar{w}_k, \dots, \bar{w}_k \right]$

$Z^a_k = \sqrt{N-1}\, Z^f_k A_k^{1/2} = Z^f_k W_k$,  $P^a_k = \frac{1}{N-1} Z^a_k \left( Z^a_k \right)^T$

$W_k = \sqrt{N-1}\, U_k S_k^{-1/2} U_k^T$  with the singular value decomposition $U_k S_k V_k^T = A_k^{-1}$

Very efficient: the transformation matrix is computed in a space of dimension N or N−1.

L. Nerger et al., Monthly Weather Review 140 (2012) 2335-2345
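A NumPy sketch of this square-root analysis step for the ETKF case ($G = I/(N-1)$), following the equations above; the symmetric square root of A is obtained from an SVD of $A^{-1}$. This illustrates the scheme and is not code from PDAF.

```python
import numpy as np

def sqrt_filter_analysis(Xf, y, H, Rinv):
    """One ETKF-style square-root analysis step (sketch).

    Xf   : n x N forecast ensemble
    y    : p observation vector
    H    : p x n observation operator
    Rinv : p x p inverse observation-error covariance
    """
    n, N = Xf.shape
    xbar = Xf.mean(axis=1, keepdims=True)
    Z = Xf - xbar                                   # Z^f_k
    HZ = H @ Z
    Ainv = (N - 1) * np.eye(N) + HZ.T @ Rinv @ HZ   # A^{-1}
    U, s, _ = np.linalg.svd(Ainv)                   # SVD of A^{-1}
    A = U @ np.diag(1.0 / s) @ U.T
    wbar = A @ HZ.T @ Rinv @ (y[:, None] - H @ xbar)   # mean weights
    W = np.sqrt(N - 1) * U @ np.diag(s ** -0.5) @ U.T  # W_k
    return xbar + Z @ wbar + Z @ W                  # analysis ensemble X^a_k
```

The returned ensemble has mean $\bar{x}^a_k$ and covariance $Z^f_k A_k (Z^f_k)^T$, consistent with the equations above.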

(11)

The T matrix

Matrix T projects onto the error space spanned by the ensemble; SEIK and ETKF use different projections T.

For identical forecast ensembles both filters

  yield identical analysis states

  perform slightly different ensemble transformations

  also: SEIK is slightly faster than ETKF


  ETKF provides the minimum transformation

  desirable, as it disturbs the ensemble states least

  How to get the minimum transformation into SEIK?

(12)

Error Subspace Transform Kalman Filter (ESTKF)

Combine advantages of SEIK and ETKF

Redefine T:

1.  Remove ensemble mean from all columns

2.  Subtract a fraction of the last column from all others

3.  Drop last column

L. Nerger et al., Monthly Weather Review 140 (2012) 2335-2345

Features of the ESTKF:

•  Same ensemble transformation as ETKF

•  Slightly cheaper computations

•  Direct access to ensemble-spanned error space

(13)

T-matrix in SEIK and ESTKF

  Efficient implementation as subtraction of means & last column

  ETKF: improve compute performance using a matrix T

SEIK:

$\check{P}^f_k = \frac{1}{N-1} X^f_k T \left( T^T T \right)^{-1} T^T \left( X^f_k \right)^T$

$T_{i,j} = \begin{cases} 1 - \frac{1}{N} & \text{for } i = j,\ i < N \\ -\frac{1}{N} & \text{for } i \neq j,\ i < N \\ -\frac{1}{N} & \text{for } i = N \end{cases}$

$L_k := X^f_k T$,  $G := \frac{1}{N-1} \left( T^T T \right)^{-1}$,  $U_k^{-1} = G^{-1} + \left( H_k L_k \right)^T R_k^{-1} H_k L_k$

$\bar{x}^a_k = \bar{x}^f_k + \check{K}_k \left( y^o_k - H_k \bar{x}^f_k \right)$,  $\check{K}_k = L_k U_k L_k^T H_k^T R_k^{-1}$,  $\check{P}^a_k = L_k U_k L_k^T$

Re-initialization: $X^a_k = \bar{X}^a_k + \sqrt{N-1}\, L_k C_k^T \Omega_k^T$ with $C_k^{-1} \left( C_k^{-1} \right)^T = U_k^{-1}$ ($\Omega_k$: matrix with orthonormal columns)

ESTKF:

$\hat{T}_{i,j} = \begin{cases} 1 - \frac{1}{N} \frac{1}{1/\sqrt{N} + 1} & \text{for } i = j,\ i < N \\ -\frac{1}{N} \frac{1}{1/\sqrt{N} + 1} & \text{for } i \neq j,\ i < N \\ -\frac{1}{\sqrt{N}} & \text{for } i = N \end{cases}$

Forecast: $x^{f(l)}_i = M_{i,i-1} \left[ x^{a(l)}_{i-1} \right] + \eta^{(l)}_i$

(Both projections are constructed and checked in the sketch below.)
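A small sketch constructing both projection matrices and checking two properties: every column sums to zero (the ensemble mean is removed), and the ESTKF $\hat{T}$ additionally has orthonormal columns (this orthonormality is a property from Nerger et al. 2012, not stated on the slide).

```python
import numpy as np

def T_seik(N):
    """SEIK projection T (N x (N-1)): removes mean, drops last column."""
    T = -np.ones((N, N - 1)) / N
    T[:N - 1, :] += np.eye(N - 1)
    return T

def T_estkf(N):
    """ESTKF projection T^ (N x (N-1)), following Nerger et al. (2012)."""
    a = 1.0 / (N * (1.0 / np.sqrt(N) + 1.0))
    T = -a * np.ones((N, N - 1))
    T[:N - 1, :] += np.eye(N - 1)
    T[N - 1, :] = -1.0 / np.sqrt(N)
    return T

N = 6
for T in (T_seik(N), T_estkf(N)):
    assert np.allclose(T.sum(axis=0), 0.0)   # ensemble mean is removed
assert np.allclose(T_estkf(N).T @ T_estkf(N), np.eye(N - 1))  # orthonormal
```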

(14)

Effect of forcing on the smoother – optimal lag

  Assimilate at each time step

  Ensemble size N=34

  Global ESTKF

  Inflation tuned for minimal RMS errors (accounting for inflation in the smoother)

[Figure: mean RMS error vs. lag (time steps) for different forcings F=4, 5, 6, 8, 10]

  Up to F=4

  very small RMS errors

  F>4

  Strong growth in RMS

  Clear impact of smoother

  Optimal lag:

minimal RMS error (red lines)

(15)

Stronger nonlinearity

  F=7

  Forecast length: 9 steps

  Clear error minimum at lag=2 analysis steps

➜   the optimal lag

  Error increase beyond optimal lag (here 50%!)

➜   spurious correlations

[Figure: relative error reduction by smoother — RMS error relative to lag 0 vs. lag (analysis steps); the optimal lag and the 50% loss of smoother effect at large lags are marked]

(16)

Impact of smoothing

[Figures: optimal lag (time steps) vs. F for N=34 and N=20, roughly 7x the error doubling time; and mean RMS error at optimal lag vs. F for filter and smoother]

  Optimal lag (minimal RMS error)

  Behavior similar to error-doubling time

  RMS error at optimal lag

  Smoother reduces error by 50% for all F>4

  Effect of sampling errors visible with smaller ensemble

(17)

Vary forecast length (F=7)

  Forecast length = time steps over which nonlinearity acts on ensemble

  Longer forecasts:

➜  Optimal lag shrinks

➜  RMS errors grow for filter and smoother

➜  Improvement by smoother shrinks (depends on forcing strength)

[Figures: optimal lag vs. forecast length (time steps), about 2x the error doubling time; and mean RMS error at optimal lag vs. forecast length for filter and smoother]

(18)

Vary forecast length – different forcing strength

  Improvement by smoother depends on forcing strength

  Small forcing (F=5)

➜  Approx. constant improvement by smoother

  Larger forcing (F=7)

➜  Decreasing smoother effect

[Figures: optimal lag vs. forecast length (~2x error doubling time); and mean RMS error at optimal lag vs. forecast length for filter and smoother, for F=5 and F=7]

(19)

Impact of Localization

(20)

Domain & observation localization

Local Analysis:

  Update small regions

(like single vertical columns)

  Observation localization:

  Observations weighted according to distance (see the sketch below)

  Consider only observations with weight >0

  State update and ensemble transformation fully local

Similar to localization in LETKF (e.g. Hunt et al, 2007)

S: Analysis region

D: Corresponding data region
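A sketch of the observation-weighting step, reusing the gaspari_cohn function defined earlier; the diagonal-R assumption and all names are illustrative.

```python
import numpy as np

def local_obs_weighting(dists, obs_err_var, loc_radius):
    """Distance-weighted inverse observation-error covariance (sketch).

    dists       : distances of the observations to the analysis region S
    obs_err_var : observation error variance (diagonal R assumed)
    loc_radius  : distance at which the weight becomes zero
    """
    w = gaspari_cohn(dists, loc_radius / 2.0)  # function defined earlier
    keep = w > 0.0                             # data region D: weight > 0
    Rinv_local = np.diag(w[keep] / obs_err_var)
    return Rinv_local, keep
```

Each analysis region S then performs the usual ensemble transformation with Rinv_local in place of $R^{-1}$, so the state update and ensemble transformation stay fully local.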

(21)

Influence of Localization on Smoothing

  Localization reduces the RMS errors of filter and smoother

  localization is useful even for N=34

  Localization increases the optimal lag

  more observational information usable

[Figures: mean RMS error at optimal lag (filter and smoother) and optimal lag l_opt (time steps) vs. localization radius; F=8, ensemble size N=34; thin lines: global analysis]

(22)

Influence of Localization on Smoothing (2)

  Use filter error as baseline

  Smoother results in additional reduction

  Smoother is more efficient with localization than for the global filter

[Figure: error reduction by smoother (MRMSE smoother − MRMSE filter) vs. localization radius]

(23)

Smoothing with localization – smaller ensembles

  Larger effect of localization with smaller ensembles

  Optimal lag shrinks (impact of sampling errors)

  Localization radius for maximum optimal lag slightly larger than for minimum RMS error

[Figures: mean RMS error at optimal lag (filter and smoother) and optimal lag l_opt (time steps) vs. localization radius for N=34, 20, 15, 10]

(24)

Smoother error reduction – smaller ensembles

  Smoother impact grows with ensemble size

  Effect of sampling errors

  RMS error from smoother decreases faster than from filter

  Amplification effect (multiple use of matrix C)

[Figure: error reduction by smoother (MRMSE smoother − MRMSE filter) vs. localization radius for N=34, 20, 15, 10]

(25)

Optimal localization radius

Same localization radius for

  minimum filter RMS error

  largest smoother impact

➜  No re-tuning of the localization radius for optimal smoothing!

[Figures: mean RMS error at optimal lag (filter and smoother) and error reduction by smoother vs. localization radius for N=34, 20, 15, 10]

(26)

Smoothing in a Real Model

(27)

Global ocean model

FESOM (Finite Element Sea-ice Ocean Model, Danilov et al. 2004), global configuration:

  1.3° resolution, 40 levels

  horizontal refinement at the equator

  state vector size 10^7

  weak nonlinearity (not easy to change)

[Figure: Drake Passage]

Twin experiments with sea surface height data:

  ensemble size 32

  assimilate every 10th day over 1 year

  ESTKF with smoother extension and localization (using the PDAF environment as a single program)

  inflation tuned for optimal performance (ρ=0.9)

  run using 2048 processor cores (timings: forecasts 8800 s, filter+smoother 200 s)

(28)

Effect of smoothing on global model

Typical behavior:

  RMS error reduced by smoother

  Error reductions: ~15% at initial time, ~8% over the year

  Large impact of each lag up to 60 days

  Further reduction over full experiment (optimal lag = 350 days)

[Figures: SSH RMS errors over time (forecast & analysis vs. smoothed with 50-day lag) and SSH RMS error for different lags (initial error and mean error)]

(29)

Multivariate effect of smoothing – 3D fields

[Figures: RMS error vs. lag (days) for 3D temperature, salinity, meridional velocity, and zonal velocity; error reductions: temperature −1.0% at lag 40, salinity −2.9% at lag 350, meridional velocity −0.9% at lag 40, zonal velocity −1.3% at lag 250]

3D fields:

  Multivariate impact smaller & specific for each field

  Optimal lag specific for field

  Optimal lag smaller than for SSH (e.g. temperature directly influenced by atmospheric forcing, Brusdal et al. 2003)

(30)

Multivariate effect of smoothing – surface fields

[Figures: RMS error vs. lag (days) for surface temperature, salinity, meridional velocity, and zonal velocity; error reductions: temperature −0.9% at lag 30, salinity −3.7% at lag 350, meridional velocity −0.9% at lag 30, zonal velocity −0.9% at lag 20]

Ocean surface:

  Relative smoother impact not larger than for full 3D

  Deterioration for meridional velocity at long lags

➜  What is the optimal lag for multivariate assimilation?

(31)

Conclusion

  Multivariate assimilation:

➜  Lag specific for field

➜  Choose overall optimal lag or separate lags

➜  Best filter configuration also good for smoother

  Nonlinearity:

➜  Introduces spurious correlations in smoother

➜  Error increase beyond optimal lag

➜  Optimal lag: few times error doubling time

  Localization:

➜  Increases smoother impact

➜  Increases optimal lag

Lars.Nerger@awi.de

Thank you!

(32)

Web-Resources

www.data-assimilation.net


pdaf.awi.de
