(1)

On the Impact of Nonlinearity on Ensemble Smoothing

Lars Nerger

Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany

Svenja Schulte and Angelika Bunse-Gerstner

University of Bremen, Germany

www.data-assimilation.net

EGU 2013, Vienna, April 8-12

(2)

Smoothers

Lars Nerger – Nonlinearity and smoothing

Filters (e.g. Ensemble Kalman filter)

  Estimate using observations until analysis time

Smoothers perform retrospective analysis

  Use future observations for estimation in the past

  Example applications:

  Reanalysis

  Parameter estimation

(3)

Ensemble smoothing


  Smoothing is very simple (ensemble matrix X)

(see e.g. Evensen, 2003)

Filter:     X^a_{k|k} = X^f_{k|k-1} G_k

Smoother:   X^a_{i|k} = X^a_{i|i} ∏_{j=i+1}^{k} G_j

In the numerical experiments, the matrix D̃_δ is constructed using a 5th-order polynomial function (Eq. 4.10 of Gaspari and Cohn 1999), which mimics a Gaussian function but has compact support. The distance between the analysis and observation grid points at which the function becomes zero is used here to define the localization length.
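The Gaspari–Cohn correlation function referenced above can be sketched in a few lines. The following NumPy version is illustrative only; the parameterization by the half support radius c (weights reach zero at distance 2c) is an assumption here, not taken from the text:

```python
import numpy as np

def gaspari_cohn(z, c):
    """5th-order piecewise polynomial of Gaspari and Cohn (1999), Eq. 4.10.

    z : array of distances; c : half the support radius, so the
    weight tapers from 1 at distance 0 to exactly 0 at distance 2c.
    """
    r = np.abs(z) / c
    w = np.zeros_like(r, dtype=float)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri = r[inner]
    w[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                - 5.0 / 3.0 * ri**2 + 1.0)
    ro = r[outer]
    w[outer] = (ro**5 / 12.0 - 0.5 * ro**4 + 0.625 * ro**3
                + 5.0 / 3.0 * ro**2 - 5.0 * ro + 4.0 - 2.0 / (3.0 * ro))
    return w
```

Unlike a Gaussian, the weight is exactly zero beyond 2c, which is what makes the localization matrix sparse.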

c. The smoother extension ESTKS

The smoother extension of the ESTKF is formulated analogously to the ensemble Kalman smoother (EnKS, Evensen 2003). The sequential smoother computes a state correction at an earlier time t_i, i < k, utilizing the filter analysis update at time t_k.

For the smoother, the notation is extended according to the notation used in estimation theory (see, e.g., Cosme et al. 2010): A subscript i|j is used, where i refers to the time that is represented by the state vector and j refers to the latest time for which observations are taken into account. Thus, the former analysis state x^a_k is written as x^a_{k|k} and the forecast state x^f_k is denoted as x^f_{k|k-1}. In this notation, the superscripts a and f are redundant.

To formulate the smoother, the transformation equation (14) is first written as a product of the forecast ensemble with a weight matrix as

X^a_{k|k} = X^f_{k|k-1} G_k,   (19)

with

G_k = 1^(m) + T (W̄_k + W̃_k).   (20)

Here the relation X̄^f_{k|k-1} = X^f_{k|k-1} 1^(m) is used, with the matrix 1^(m) that contains the value 1/m in all entries. The smoothed state ensemble at time t_{k-1}, taking into account all observations up to time t_k, is now computed from the analysis state ensemble X^a_{k-1|k-1} as

X^a_{k-1|k} = X^a_{k-1|k-1} G_k.   (21)

The smoothing at time t_i with i < k by future observations at different analysis times is computed by multiplying X^a_{i|i} with the corresponding matrices G_j for all the analysis times t_j, i < j ≤ k. Thus, the smoothed state ensemble at time t_i using the observations at all analysis times up to time t_k is given by

X^a_{i|k} = X^a_{i|i} ∏_{j=i+1}^{k} G_j.   (22)

Equations (19) to (22) are likewise valid for the global and local filter variants. Thus, G_k can be computed for the global analysis and then applied to all rows of a global matrix X_{i|j}, or computed for the local weights of section 2b and applied to the ensemble of the corresponding local analysis domain σ.

A particular property of the smoother is that it works even when the matrix Λ in Eq. (13) is a random matrix. This is because the random transformation of an analysis at time t_i is contained in the forecast and analysis ensembles at future times.
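The recursion in Eqs. (19)–(22) amounts to right-multiplying every stored past ensemble by the weight matrix of the current analysis. A minimal NumPy sketch of this bookkeeping, with a random placeholder G_k standing in for Eq. (20) (this illustrates the algebra only, not the PDAF implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 10                    # state dimension, ensemble size

# Stored past analysis ensembles X^a_{i|j}, oldest first
stored = [rng.standard_normal((n, m)) for _ in range(3)]

def smoother_update(stored, G_k, lag):
    """Eqs. (21)/(22): right-multiply each stored ensemble within the
    lag window by the weight matrix of the current analysis."""
    for i in range(max(0, len(stored) - lag), len(stored)):
        stored[i] = stored[i] @ G_k

# Placeholder weight matrix standing in for Eq. (20)
G_k = np.eye(m) + 0.01 * rng.standard_normal((m, m))
smoother_update(stored, G_k, lag=2)
```

Applying the update sequentially at each analysis time reproduces the product in Eq. (22), since matrix multiplication is associative.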

d. Properties of the smoother with linear and nonlinear systems

Ensemble smoothers like the ESTKS in section 2c are optimal for linear dynamical systems in the sense that the forecast of the smoothed state ensemble X^a_{i|k} with the linear model until the time t_k results in a state ensemble that is identical to the analysis state ensemble X^a_{k|k}. This property can be easily derived by applying the linear model operator
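This linear-optimality property can be checked numerically: for a linear model operator M, the forecast of the smoothed ensemble reproduces the analysis ensemble because model application and ensemble transformation commute by associativity, M (X^a_{k-1|k-1} G_k) = (M X^a_{k-1|k-1}) G_k. A toy sketch with random matrices (all dimensions are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4
M = rng.standard_normal((n, n))        # linear model operator (toy)
X_prev = rng.standard_normal((n, m))   # analysis ensemble X^a_{k-1|k-1}
G = rng.standard_normal((m, m))        # weight matrix G_k from the analysis

X_fore = M @ X_prev                    # forecast ensemble X^f_{k|k-1}
X_anal = X_fore @ G                    # analysis X^a_{k|k} = X^f G_k   (Eq. 19)
X_smooth = X_prev @ G                  # smoothed X^a_{k-1|k}           (Eq. 21)

# Forecasting the smoothed ensemble recovers the analysis ensemble
assert np.allclose(M @ X_smooth, X_anal)
```

With a nonlinear model this identity no longer holds, which is exactly the question the following slides examine.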


  Optimal for linear systems:

➜  Forecast of smoothed state = analysis at later time

➜  Each additional lag reduces error

  Not valid for nonlinear systems!

➜   What is the effect of the nonlinearity?

➜   Do ensembles just decorrelate?

(see e.g. Cosme et al. 2010)

(4)

Numerical study with Lorenz-96

  Cheap and small model (state dimension 40)

  Local and global filters possible

  Nonlinearity controlled by forcing parameter F

  Up to F=4: periodic waves; perturbations damped

  F>4: non-periodic

  Nonlinearity of assimilation also influenced by forecast length

  Experiments over 20,000 time steps

  Tune covariance inflation for minimal RMS errors

  Implemented in open source assimilation software PDAF (http://pdaf.awi.de)
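For reference, the Lorenz-96 model used in these experiments can be written in a few lines. The slides do not specify the integration scheme; fourth-order Runge–Kutta with step size dt = 0.05 is a common choice for this model and an assumption here:

```python
import numpy as np

def lorenz96_tendency(x, F):
    """Lorenz-96 tendencies dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 40-variable state as in the experiments; F controls the nonlinearity
x = 8.0 * np.ones(40)
x[19] += 0.01                  # small perturbation of the steady state
for _ in range(1000):
    x = rk4_step(x, F=8.0, dt=0.05)
```

The state x_i = F is a steady state of the model; for F ≤ 4 perturbations of it are damped, while for larger F the dynamics become non-periodic and F sets the nonlinearity of the assimilation problem.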


(5)

Effect for forcing – optimal lag

  Assimilate at each time step

  Ensemble size N=34

  Global ESTKF

(Nerger et al., MWR 2012)

  Up to F=4

  very small RMS errors

  F>4

  Strong growth in RMS

  Clear impact of smoother

  Optimal lag:

minimal RMS error (red lines)

[Figure: mean RMS error vs. lag [time steps] for forcings F=4, 5, 6, 8, 10]
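Determining the optimal lag from such curves is a minimization over the tested lags. A sketch with a synthetic placeholder curve (in the experiments the curve comes from the assimilation runs):

```python
import numpy as np

lags = np.arange(0, 201, 10)              # tested smoother lags [time steps]
# Synthetic stand-in for a mean-RMS-error curve: error falls with lag,
# then rises again beyond the optimal lag (spurious correlations)
rms = 0.05 + 1e-6 * (lags - 60.0) ** 2

optimal_lag = lags[np.argmin(rms)]        # lag with minimal mean RMS error
```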


(6)

Stronger nonlinearity

  F=7

  Forecast length: 9 steps

  Clear error-minimum at 2 analysis steps

➜   the optimal lag

  Error increase beyond optimal lag (here 50%!)

➜   spurious correlations


[Figure: RMS error relative to lag=0 vs. lag [analysis steps]; the minimum marks the optimal lag, with 50% less smoother effect at long lags]

(7)

[Figures: mean RMS error at optimal lag vs. F (filter vs. smoother) and optimal lag [time steps] vs. F (annotation: 7x error doubling time), each also compared for ensemble sizes N=34 and N=20]

Impact of smoothing

  Optimal lag (minimal RMS error)

  Behavior similar to error-doubling time

  RMS error at optimal lag

  Smoother reduces error by 50% for all F>4

  Effect of sampling errors visible with smaller ensemble


(8)

Vary forecast length (F=7)

  Forecast length = time steps over which nonlinearity acts on ensemble

  Longer forecasts:

➜  Optimal lag shrinks

➜  RMS errors grow for filter and smoother

➜  Improvement by smoother shrinks (depends on forcing strength)


[Figures: optimal lag [time steps] vs. forecast length [time steps] (annotation: ~2x error doubling time) and mean RMS error at optimal lag vs. forecast length (filter vs. smoother)]

(9)

Vary forecast length (F=7)


[Figure: mean RMS error at optimal lag vs. forecast length [time steps], filter vs. smoother, for F=5 and F=7]

(10)

Smoothing with global ocean model

FESOM (Finite Element Sea-ice Ocean Model, Danilov et al. 2004)

Global configuration:

  1.3° resolution, 40 levels

  Horizontal refinement at equator

  State vector size ~10^7

  Weak nonlinearity


[Figure: Drake Passage]

Twin experiments with sea surface height data

  Ensemble size 32

  Assimilate every 10th day over 1 year

  ESTKF with smoother extension and localization (using the PDAF environment as for Lorenz-96)

  Inflation tuned for optimal performance (ρ=0.9)

(11)

Effect of smoothing on global model

Typical behavior

  RMS error reduced by smoother

  Error reductions: ~15% at initial time, ~8% over the year

  Large impact of each lag up to 60 days

  Further reduction over full experiment (optimal lag = 350 days)


[Figures: SSH RMS errors over time [days] (forecast & analysis vs. smoothed with 50-day lag) and SSH RMS error for different lags [days] (initial error and mean error)]

(12)

Multivariate effect of smoothing – 3D fields

[Figures: RMS error vs. lag [days] for temperature, salinity, meridional velocity, and zonal velocity; annotated error reductions: -1.0% at lag 40, -2.9% at lag 350, -0.9% at lag 40, -1.3% at lag 250]

3D fields:

  Multivariate impact smaller & specific for each field

  Optimal lag specific for field

  Optimal lag smaller than for SSH


(13)

Multivariate effect of smoothing – surface fields

[Figures: RMS error vs. lag [days] for surface temperature, salinity, meridional velocity, and zonal velocity; annotated error reductions: -0.9% at lag 30, -3.7% at lag 350, -0.9% at lag 30, -0.9% at lag 20]

Ocean surface:

  Relative smoother impact not larger than for full 3D

  Deterioration for meridional velocity at long lags

➜  What is the optimal lag for multivariate assimilation?


(14)

Conclusion

  Multivariate assimilation:

➜  Lag specific for field

➜  Choose overall optimal lag or separate lags

➜  Best filter configuration also good for smoother

  Nonlinearity:

➜  Introduces spurious correlations in smoother

➜  Error increase beyond optimal lag

➜  Optimal lag: few times error doubling time

Lars.Nerger@awi.de – Nonlinearity and smoothing

Thank you!

(15)

Web-Resources

www.data-assimilation.net


pdaf.awi.de
