
Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research

Extending NEMO for Ensemble Data Assimilation on Supercomputers with the Parallel Data Assimilation Framework PDAF

Lars Nerger and Paul Kirchgessner

Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research, Bremerhaven, Germany Contacts: Lars.Nerger@awi.de / Paul.Kirchgessner@awi.de


http://www.awi.de

We introduce a data assimilation system for the ocean circulation model NEMO that is built using the parallel data assimilation framework PDAF [http://pdaf.awi.de].

By inserting three subroutine calls into the source code of NEMO, one extends NEMO into a data assimilation system that consists of a single program. Utilizing the parallelization capacity of today's supercomputers, the system performs both the ensemble forecasts and the analysis step of the filter algorithm in a single execution of the program.

This contrasts with other assimilation systems that run NEMO separately from the assimilation algorithms and exchange data through disk files holding the ensemble of model states. With the online-coupled data assimilation system using PDAF, repeated storage of all ensemble states in disk files is avoided, and restarting the model is not required. These features lead to a computationally very efficient data assimilation program. A square-box configuration of NEMO [see 1] is used to test the assimilation system.

[Figure: The assimilation system as a single program with three components: the Model (initialization, time integration, post-processing), the Filter, i.e. the core of PDAF (initialization, analysis, transformation), and the Observations (observation vector, observation operator, observation error). State vectors, observations, and mesh data are exchanged through Fortran modules and an explicit interface of subroutine calls.]

Logical separation of the assimilation system

PDAF separates the data assimilation system into three components: Model, filter algorithm, and observations. The filter algorithms are part of PDAF’s core, while the model and subroutines to handle observations are provided by the user. A standard interface for all filter algorithms connects the three components. All user-supplied subroutines can be implemented like model routines.
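PDAF and NEMO are Fortran codes; as a language-neutral illustration of this separation, the following Python sketch mimics the structure with hypothetical names (none of them is the actual PDAF interface): the generic filter core reaches the model and the observations only through user-supplied call-back functions.

```python
# Hypothetical sketch of the three-component separation (not the PDAF API).
# The generic filter core never touches model or observation internals;
# it only calls user-supplied call-back functions.

def analysis_step(ensemble, collect_state, obs_op, get_obs, distribute_state):
    """Generic 'filter core': pulls state vectors from the model, compares
    them with observations, nudges each member toward the data, and pushes
    the updated states back into the model."""
    states = [collect_state(m) for m in ensemble]   # model -> state vectors
    y = get_obs()                                   # observation component
    for s in states:
        innovation = [yo - hx for yo, hx in zip(y, obs_op(s))]
        # toy update: move halfway toward the observations
        # (a stand-in for a real filter analysis)
        for i, d in enumerate(innovation):
            s[i] += 0.5 * d
    for m, s in zip(ensemble, states):
        distribute_state(m, s)                      # state vectors -> model
    return ensemble

# User-supplied call-backs for a trivial "model" whose state is a list:
collect = lambda m: list(m)
distribute = lambda m, s: m.__setitem__(slice(None), s)
obs_operator = lambda s: s[:2]        # observe the first two state entries
read_obs = lambda: [1.0, 2.0]

ens = [[0.0, 0.0, 9.9], [2.0, 4.0, 9.9]]
analysis_step(ens, collect, obs_operator, read_obs, distribute)
```

Because the core only sees the call-backs, the same analysis routine works unchanged for any model that provides these four functions.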

2-level parallelization of the assimilation system

PDAF provides support for a 2-level parallelization of the assimilation system:

1. Each model task can be parallelized.

2. All model tasks are executed concurrently.

Thus, ensemble integrations can be performed fully parallel.

In addition, the filter analysis step uses parallelization. All components are combined in a single program.
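The essence of the 2-level decomposition can be sketched without MPI. Assuming an even split of processes over model tasks (the helper below is hypothetical, not PDAF code), each global rank gets a task id (level 2) and a rank within that task (level 1), just as an MPI_Comm_split-based setup routine would assign them.

```python
# Sketch of the 2-level decomposition (pure Python, no MPI): given n_total
# ranks and n_tasks concurrent model tasks, each rank is assigned a "color"
# (which model task it belongs to, level 2) and a rank within that task's
# communicator (level 1, the model's own domain decomposition).

def split_ranks(n_total, n_tasks):
    assert n_total % n_tasks == 0, "assume an even split for simplicity"
    per_task = n_total // n_tasks
    layout = {}
    for world_rank in range(n_total):
        task_id = world_rank // per_task     # level 2: which model task
        task_rank = world_rank % per_task    # level 1: rank inside the task
        layout[world_rank] = (task_id, task_rank)
    return layout

# 8 processes running 4 model tasks -> 2 processes per model task
layout = split_ranks(8, 4)
print(layout[0], layout[1], layout[2], layout[7])
# -> (0, 0) (0, 1) (1, 0) (3, 1)
```

With this layout, all four model tasks integrate their ensemble members at the same time, each on its own group of two processes.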

[Figure: Program flow. Left: the assimilation cycle (start, initialize ensemble, forecast ensemble, perform filter analysis step, stop). Right: the NEMO program flow (initialize model: generate mesh, initialize fields; time stepper with WHILE istp <= nitend, considering boundary conditions and forcing; post-processing) with the model extension for data assimilation.]

Changes in NEMO source code:

1. init_parallel_pdaf adds the 2nd-level parallelization: 1 line added in mynode (lib_mpp.F90).

2. init_pdaf initializes PDAF: 1 line added in nemo_init (nemogcm.F90).

3. assimilate_pdaf performs the assimilation during time stepping: 1 line added in stp (step.F90).

NEMO is coupled with PDAF [2,3] by adding three subroutine calls to the model source code and utilizing its parallelization. In contrast to other frameworks, the model does not need to exist as a separate subroutine. Model- and observation-specific operations are performed in user-supplied call-back routines that are called through PDAF. The ensemble forecast is also controlled by user-supplied routines.
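In NEMO these are one-line Fortran calls; the Python sketch below (all function content is a hypothetical stand-in) only illustrates where the three calls sit in the program flow.

```python
# Where the three added calls sit in the model's program flow (sketch only;
# in NEMO these are one-line Fortran calls in mynode, nemo_init, and stp).

def run_model(nitend, delt_obs):
    log = []
    # 1) before model initialization: split processes into model tasks
    log.append("init_parallel_pdaf")
    # 2) after model initialization: initialize PDAF and the ensemble
    log.append("initialize model")
    log.append("init_pdaf")
    # 3) inside the time loop: hand control to PDAF; an analysis is
    #    performed every delt_obs steps, otherwise the call returns
    for istp in range(1, nitend + 1):
        log.append(f"time step {istp}")
        if istp % delt_obs == 0:
            log.append("assimilate_pdaf: analysis")
    log.append("post-processing")
    return log

flow = run_model(nitend=4, delt_obs=2)
```

A single run of this program thus performs both the ensemble forecasts and the analysis steps, with no restarts in between.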

Implementations using this online coupling scheme have also been performed for other models, such as FESOM, BSHcmod, HBM, NOBM, ADCIRC, and PARODY.

PDAF is coded in Fortran with MPI parallelization. It is available as free software.

Further information and the source code of PDAF are available on the web site:

http://pdaf.awi.de

Assimilation experiments are performed to validate the assimilation system. A box configuration of NEMO ("SQB") is used that simulates a double gyre. The SQB configuration is one of the benchmarks of the SANGOMA project [http://www.data-assimilation.net]. The grid has 121 × 81 grid points at a horizontal resolution of 0.25° and 11 layers. Synthetic observations of sea surface height at ENVISAT and Jason-1 satellite tracks and temperature profiles on a 3 × 3 grid are assimilated every 48 hours over 360 days. Observation errors are set to 5 cm and 0.3 °C, respectively.

The assimilation uses the ESTKF filter [4] with localization [5]. An ensemble of 32 states is used.
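As a toy illustration (the numbers are invented, and the scalar update below is a much-simplified stand-in, not the ESTKF, which transforms the whole ensemble in the error subspace), one can derive the number of analysis steps implied by this setup and the form of a basic ensemble-variance-weighted update:

```python
# Analysis cadence implied by the setup: one analysis every 48 h over 360 days.
n_analyses = 360 * 24 // 48          # 180 analysis steps

# Much-simplified stand-in for an ensemble analysis of one observed scalar:
# Kalman update of the ensemble mean with gain K = P / (P + R), where P is
# the ensemble variance and R the observation-error variance.
def scalar_analysis(members, obs, obs_var):
    n = len(members)
    mean = sum(members) / n
    var = sum((m - mean) ** 2 for m in members) / (n - 1)
    gain = var / (var + obs_var)     # between 0 and 1
    return mean + gain * (obs - mean)

members = [0.4, 0.5, 0.6]            # toy SSH ensemble (m), not experiment data
updated = scalar_analysis(members, obs=0.55, obs_var=0.05 ** 2)  # R = (5 cm)^2
```

The gain weighs the ensemble spread against the observation error, so a confident ensemble (small variance) moves less toward the data than an uncertain one.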

The errors in all fields are significantly reduced by the assimilation (see SSH below).

[Figure: Maps over the model domain (longitude −60° to −30°, latitude 24° to 44°): SSH observations (m); surface temperature observations (°C); true sea surface height at the 1st and last analysis times; estimated SSH at the 1st and last analysis times.]

[Figure: Parallel speedup of NEMO and of the assimilation system. Speedup of NEMO alone and of NEMO/PDAF versus the number of processes per model (1 to 16), compared with linear speedup.]

The parallel compute performance of the assimilation system is described by the speedup (the ratio of the computing time on one process to the computing time on n processes). The speedup of the assimilation system is dominated by the speedup of the NEMO model itself. The assimilation leads only to a small reduction of the speedup.
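With hypothetical timings (not values measured on the poster), the speedup and the parallel efficiency follow directly from this definition:

```python
# Speedup S(n) = T(1) / T(n): time on one process over time on n processes.
def speedup(t1, tn):
    return t1 / tn

# Hypothetical wall-clock times (seconds); linear scaling would give S(n) = n.
t = {1: 1600.0, 2: 820.0, 4: 430.0, 8: 240.0}
s = {n: speedup(t[1], tn) for n, tn in t.items()}

# Parallel efficiency E(n) = S(n) / n; 1.0 means perfectly linear scaling.
e = {n: s[n] / n for n in t}
```

In this invented example, 8 processes give a speedup of about 6.7, i.e. an efficiency of about 0.83; the measured curves on the poster show the analogous behavior for NEMO and NEMO/PDAF.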

References

[1] Cosme, E., Brankart, J.-M., Verron, J., Brasseur, P., and Krysta, M. (2010). Implementation of a reduced-rank, square-root smoother for high resolution ocean data assimilation. Ocean Modelling, 33: 87–100.

[2] Nerger, L., Hiller, W., and Schröter, J. (2005). PDAF - The Parallel Data Assimilation Framework: Experiences with Kalman Filtering. In: Use of High Performance Computing in Meteorology - Proceedings of the 11th ECMWF Workshop, Eds. W. Zwieflhofer and G. Mozdzynski. World Scientific, pp. 63–83.

[3] Nerger, L. and Hiller, W. (2012). Software for Ensemble-based Data Assimilation Systems - Implementation Strategies and Scalability. Computers & Geosciences, 55: 110–118.

[4] Nerger, L., Janjić, T., Schröter, J., and Hiller, W. (2012). A unification of ensemble square root Kalman filters. Mon. Wea. Rev., 140: 2335–2345.

[5] Nerger, L., Danilov, S., Hiller, W., and Schröter, J. (2006). Using sea-level data to constrain a finite-element primitive-equation ocean model with a local SEIK filter. Ocean Dynamics, 56: 634–649.

