
Building Ensemble-Based Data Assimilation Systems

Lars Nerger and Paul Kirchgessner

Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research, Bremerhaven, Germany

Contact: Lars.Nerger@awi.de / Paul.Kirchgessner@awi.de
http://www.awi.de · http://www.data-assimilation.net

Introduction

We discuss different strategies for implementing ensemble-based data assimilation systems. Ensemble filters, like ensemble Kalman filters and particle filters, can be implemented so that they are nearly independent of the model into which they assimilate observations.

Offline coupling through disk files avoids changes to the numerical model, but is computationally inefficient.

An online coupling strategy is computationally efficient. In this strategy, subroutine calls for the data assimilation are inserted directly into the source code of an existing numerical model, augmenting it to become a data-assimilative model.

Using the example of the Parallel Data Assimilation Framework (PDAF, http://pdaf.awi.de) and the ocean model NEMO, we demonstrate how the online coupling can be achieved with minimal changes to the numerical model.

A Parallel Data Assimilation System

Logical separation of the assimilation system

[Figure: Logical separation of the assimilation system into a single program with three components. Model: initialization, time integration, post-processing. Filter (core of PDAF): initialization, analysis, transformation. Observations: observation vector, observation operator, observation error. State, time, mesh data, and observations are exchanged through Fortran modules or an explicit interface (subroutine calls).]

The data assimilation system can be separated into three components: the model, the filter algorithm, and the observations. The filter algorithms are model-independent, while the model and the subroutines that handle observations are provided by the user. The routines are either called directly in the program code or share information, e.g., through Fortran modules.
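For illustration, a minimal sketch of such a user-supplied call-back routine is shown below: an observation operator that projects the process-local state vector onto the observation locations. The interface is modeled on PDAF's call-back convention, but the routine and module names here are illustrative assumptions, not the exact PDAF API.

  ! Illustrative module holding observation metadata; in a real system
  ! this would be filled by the observation-initialization call-back.
  MODULE obs_mod_demo
    IMPLICIT NONE
    INTEGER, ALLOCATABLE :: obs_index_p(:) ! state index of each observation
  END MODULE obs_mod_demo

  ! Sketch of an observation-operator call-back: apply H (here a simple
  ! selection of observed grid points) to the local state vector.
  SUBROUTINE obs_op_demo(step, dim_p, dim_obs_p, state_p, m_state_p)
    USE obs_mod_demo, ONLY: obs_index_p
    IMPLICIT NONE
    INTEGER, INTENT(in)  :: step                 ! current time step
    INTEGER, INTENT(in)  :: dim_p                ! local state dimension
    INTEGER, INTENT(in)  :: dim_obs_p            ! local number of observations
    REAL,    INTENT(in)  :: state_p(dim_p)       ! local state vector
    REAL,    INTENT(out) :: m_state_p(dim_obs_p) ! observed part of the state
    INTEGER :: i

    DO i = 1, dim_obs_p
       m_state_p(i) = state_p(obs_index_p(i))
    END DO
  END SUBROUTINE obs_op_demo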

2-level parallelization of the assimilation system

Ensemble data assimilation can be performed using a 2-level parallelization, as sketched below:

1. Each model integration can be parallelized.

2. All model tasks are executed concurrently.

Thus, the ensemble integrations can be performed fully in parallel. In addition, the filter analysis step uses parallelization.
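A minimal sketch of how the two levels can be set up with MPI: the global communicator is split into concurrent model tasks, and each task-local communicator then parallelizes one model integration. The number of tasks and all variable names are illustrative assumptions; in PDAF, an analogous splitting is done in the user-adaptable routine init_parallel_pdaf.

  ! Sketch: split MPI_COMM_WORLD into n_modeltasks communicators, one per
  ! ensemble task (level 2); each comm_model then carries the usual domain
  ! decomposition of a single model integration (level 1).
  PROGRAM split_demo
    USE mpi
    IMPLICIT NONE
    INTEGER :: ierr, rank, nprocs, n_modeltasks, task_id, comm_model

    CALL MPI_Init(ierr)
    CALL MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    CALL MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    n_modeltasks = 2                       ! assumed number of concurrent model tasks
    task_id = rank * n_modeltasks / nprocs ! assumes nprocs is a multiple of n_modeltasks

    ! Processes with the same task_id form one model task; the model is
    ! then run on comm_model instead of MPI_COMM_WORLD.
    CALL MPI_Comm_split(MPI_COMM_WORLD, task_id, rank, comm_model, ierr)

    CALL MPI_Finalize(ierr)
  END PROGRAM split_demo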

[Figure: Assimilation cycle: initialize ensemble, forecast ensemble, exchange of state vectors, perform filter analysis step.]


Online Coupling of NEMO and PDAF

[Figure: Program flow of the assimilative NEMO model. The original flow (Start, initialize NEMO, time stepping loop WHILE istp ≤ nitend, post-processing, Stop) is extended for data assimilation by three subroutine calls:

1. init_parallel_pdaf, which adds the 2nd-level parallelization: 1 line added in mynode (lib_mpp.F90)
2. init_pdaf: 1 line added in nemo_init (nemogcm.F90)
3. assimilate_pdaf: 1 line added in stp (step.F90)]

NEMO is coupled with PDAF [2,3] by adding three subroutine calls to the model source code and utilizing its parallelization. The model time stepper does not need to exist as a separate subroutine.

Operations specific to the model and the observations are performed in user-supplied call-back routines that are called through PDAF. The ensemble forecast is also controlled by user-supplied routines. The pattern of the three additions is sketched below.
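The following toy program sketches this pattern with a stand-in model; the PDAF-side routines are empty stubs here, and their placement mirrors the three one-line additions to NEMO listed above. Only the call names are taken from the poster; everything else is an illustrative assumption.

  ! Toy stand-in for an assimilative model: the three added calls follow
  ! the same pattern as the one-line additions to NEMO.
  PROGRAM toy_assimilative_model
    IMPLICIT NONE
    INTEGER :: istp
    INTEGER :: nitend = 10

    CALL init_parallel_pdaf()  ! added: split communicators (NEMO: mynode, lib_mpp.F90)
    CALL model_init()          ! existing model initialization
    CALL init_pdaf()           ! added: initialize PDAF and ensemble (NEMO: nemo_init, nemogcm.F90)

    DO istp = 1, nitend        ! existing time stepping loop
       CALL model_step(istp)
       CALL assimilate_pdaf()  ! added: analysis when observations are due (NEMO: stp, step.F90)
    END DO
  CONTAINS
    SUBROUTINE init_parallel_pdaf()
    END SUBROUTINE init_parallel_pdaf
    SUBROUTINE model_init()
    END SUBROUTINE model_init
    SUBROUTINE init_pdaf()
    END SUBROUTINE init_pdaf
    SUBROUTINE model_step(i)
      INTEGER, INTENT(in) :: i
    END SUBROUTINE model_step
    SUBROUTINE assimilate_pdaf()
    END SUBROUTINE assimilate_pdaf
  END PROGRAM toy_assimilative_model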

Offline Coupling

[Figure: Program flow of the offline coupling. The model program (Start; initialize model: generate mesh, initialize fields; time stepper, DO i=1, nsteps: consider boundary conditions and forcing; post-processing; Stop) and the assimilation program (Start; read ensemble of restart files; analysis step, generic core plus call-back routines; write ensemble of restart files; Stop) run separately and exchange information through files.]

For the offline coupling, the ensemble forecast is performed by running the model once for each ensemble member. The forecasts are stored in restart files, which are read in by the assimilation program.

The assimilation program computes the analysis step and writes new restart files. Then the next ensemble forecast is computed by the model: each run reads its own restart file and performs the integration. A minimal sketch of such an assimilation program is given below.
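The sketch reduces the assimilation program to a read-analyze-write cycle over the ensemble. The file names, dimensions, and the analysis routine are illustrative assumptions, not PDAF's actual offline interface.

  ! Sketch of an offline assimilation program: read the forecast ensemble
  ! from restart files, compute the analysis, write updated restart files.
  ! The forecast files restart_001.bin etc. are assumed to exist.
  PROGRAM offline_assimilation
    IMPLICIT NONE
    INTEGER, PARAMETER :: dim_state = 1000, dim_ens = 8  ! assumed sizes
    REAL :: ens(dim_state, dim_ens)
    CHARACTER(len=32) :: fname
    INTEGER :: k

    DO k = 1, dim_ens                     ! read forecast restart files
       WRITE (fname, '(A,I3.3,A)') 'restart_', k, '.bin'
       OPEN (10, FILE=fname, FORM='unformatted', ACCESS='stream')
       READ (10) ens(:, k)
       CLOSE (10)
    END DO

    CALL update_ensemble(ens)             ! analysis step of an ensemble filter

    DO k = 1, dim_ens                     ! write analysis restart files
       WRITE (fname, '(A,I3.3,A)') 'restart_', k, '.bin'
       OPEN (10, FILE=fname, FORM='unformatted', ACCESS='stream')
       WRITE (10) ens(:, k)
       CLOSE (10)
    END DO
  CONTAINS
    SUBROUTINE update_ensemble(e)
      REAL, INTENT(inout) :: e(:, :)      ! placeholder for the filter update
    END SUBROUTINE update_ensemble
  END PROGRAM offline_assimilation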

Summary

The online coupling shows good computational scalability on supercomputers and is hence well suited for high-dimensional numerical models, including coupled Earth system models.

Further, the clear separation of the model and data assimilation components allows both components to be developed independently.

Implementations using online coupling have also been performed for other models, like FESOM, BSHcmod, HBM, NOBM, ADCIRC, and MITgcm.

PDAF is coded in Fortran with MPI parallelization and is available as free software. Further information and the source code of PDAF are available on the web site: http://pdaf.awi.de

Parallel Performance of Online Coupling

Assimilation experiments are performed with a box configuration (SEABASS) of NEMO that simulates a double gyre (see [1]). The configuration is one of the benchmarks of the SANGOMA project. To simulate a high-dimensional model, the resolution is increased to 1/12°. The grid has 361 × 241 grid points and 11 layers, and the state vector has a size of about 3 million.
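As a rough check of this state dimension (the exact composition of the state vector is an assumption here): 361 × 241 × 11 = 957,011 points per 3-D field, so three 3-D fields plus the 2-D sea surface height give 3 × 957,011 + 87,001 ≈ 2.96 × 10^6 state elements, consistent with the quoted size of about 3 million.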

Synthetic observations of sea surface height at ENVISAT and Jason-1 satellite tracks and of temperature profiles on a 3 × 3 grid are assimilated every 48 hours over 360 days. The observation errors are set to 5 cm and 0.3°C, respectively. The assimilation uses the local ESTKF filter [4].

[Figure: Parallel speedup of NEMO and of NEMO+PDAF as a function of the number of processors.]

The parallel compute performance of the assimilation system is described by the speedup, the ratio of the computing time on one process to the time on n processes. The speedup of the assimilation system is dominated by the speedup of the NEMO model itself. The assimilation slightly increases the speedup due to its better scalability.

References

[1] Cosme, E., Brankart, J.-M., Verron, J., Brasseur, P., and Krysta, M. (2010). Implementation of a reduced-rank, square-root smoother for high resolution ocean data assimilation. Ocean Modelling, 33: 87–100.

[2] Nerger, L., Hiller, W., and Schröter, J. (2005). PDAF - The Parallel Data Assimilation Framework: Experiences with Kalman Filtering. In: Use of High Performance Computing in Meteorology - Proceedings of the 11th ECMWF Workshop, Eds. W. Zwieflhofer, G. Mozdzynski. World Scientific, pp. 63–83.

[3] Nerger, L. and Hiller, W. (2013). Software for Ensemble-based Data Assimilation Systems - Implementation Strategies and Scalability. Computers & Geosciences, 55: 110–118.

[4] Nerger, L., Janjić, T., Schröter, J., and Hiller, W. (2012). A unification of ensemble square root Kalman filters. Mon. Wea. Rev., 140: 2335–2345.

