
In the document Grid Infrastructures (pages 134-137)


7.4. The “Big Picture”: Flood Simulation Grid-WPS

7.4.3. Flow Model Coupling Service

The flow model coupling service enables the interaction between coupled subregion models of a distributed Kalypso 1D2D simulation. This service has been designed for use in scenarios where boundary information needs to be exchanged between flow models in a coordinated way. The implementation is tailored to the iterative substructuring algorithm sketched in Section 7.3 and the RMA·Kalypso numerical model.

The distributed simulation setup ensures the correct execution of the algorithm on level 2 of the multilevel parallelization, i.e., communication across clusters. Each cluster handles one subdomain and uses many computational nodes to compute the subdomain solution. For the subdomain solves, an algorithm from level 1 is used to distribute the work across the cluster nodes.

Once the separate simulations for a coupled calculation unit have been started on all clusters by calling ExecuteRMAKalypso, the flow model coupling service creates a grid resource in the flood simulation service at the respective cluster site. A flood simulation grid resource is associated with the execution of exactly one subdomain on one cluster. It then sets up the communication between adjacent subdomains by monitoring the subdomain output files in the Grid-WPS sandbox directory (see Figure 7.2).

Figure 7.2: Setup of the distributed Kalypso 1D2D simulation across two sites.

Once an iteration is completed, the flow model coupling service updates an internal resource property that references, for each internal boundary line, a file containing the current boundary data. Grid resources for adjacent subdomains are registered to receive notifications about new boundary data from their neighbors. Updated boundary data files are then copied to the corresponding sandbox. In this respect, the flow model coupling service operates similarly to the “ServOSims” framework developed by Floros and Cotronis [FC06] (see Section 7.2).
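The file-based exchange pattern can be sketched as follows. This is a minimal illustration, not the Kalypso or Grid-WPS implementation: the function names, the polling parameters, and the use of a local file copy in place of the actual GridFTP transfer are all assumptions made for the sketch.

```python
import os
import shutil
import time

def poll_for_file(path, interval_s=0.05, timeout_s=5.0):
    """Block until `path` exists, polling at a fixed interval.

    Stands in for the file-existence polling that both the service
    and the executable perform on the sandbox directory.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval_s)
    return False

def publish_boundary_data(src_path, neighbor_sandbox):
    """Copy an updated boundary-data file into the adjacent
    subdomain's sandbox (a local stand-in for the GridFTP copy)."""
    dst = os.path.join(neighbor_sandbox, os.path.basename(src_path))
    shutil.copyfile(src_path, dst)
    return dst
```

The neighbor's service would call `poll_for_file` on the expected boundary file and read it once it appears, which is why the polling interval directly affects the coupling latency discussed below.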

A crucial step is to determine the global convergence of the simulation. After each iteration, a subregion’s flood simulation grid resource updates its convergence status. An arbitrarily selected root process in the tree of all subregions collects the convergence status of all other processes and reports back the global status. Each flood simulation grid resource then issues the command to either continue the iteration, proceed with the next time step, or terminate the simulation. Commands are encoded in the file name of a special file in the sandbox directory.
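The coordination logic reduces to an all-reduce over the subregions' convergence flags plus a file-name-encoded command, which can be sketched as below. The command tokens and the `COMMAND_` file-name prefix are illustrative assumptions; the thesis only states that commands are encoded in a file name, not the concrete naming scheme.

```python
import os

def global_convergence(local_statuses):
    """Root collects each subregion's convergence flag; the global
    status is converged only if every subdomain has converged."""
    return all(local_statuses)

def command_for(step_converged, simulation_finished):
    """Map the coordinator's decision to a command token
    (token names are hypothetical, for illustration only)."""
    if not step_converged:
        return "CONTINUE_ITERATION"
    if simulation_finished:
        return "TERMINATE"
    return "NEXT_TIMESTEP"

def write_command_file(sandbox_dir, command):
    """Signal the calculation core by creating an empty file whose
    name carries the command, as in the file-name encoding above."""
    path = os.path.join(sandbox_dir, "COMMAND_" + command)
    open(path, "w").close()
    return path
```

The calculation core then only needs to list the sandbox directory and match the file name, so no extra communication channel into the executable is required.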

Exchanging boundary information in files is a very crude approach. Nevertheless, it has the advantage that no further communication mechanism, like RPC, needs to be implemented between the flow model coupling service and the calculation core.

The most obvious disadvantage is a performance degradation, because (1) the sandbox is typically accessed from the grid service via GridFTP and (2) both the service and the executable have to poll for file existence at specified (millisecond) intervals. As long as there is no middleware support for creating this communication link from service to executable and vice versa, the described approach provides an acceptable yet unstandardized workaround.

Outlook

The WPS application profile for flood simulation developed in this chapter, together with the Kalypso 1D2D software, is a step towards an open architecture for flood modeling. The authors of [KJ+04] have identified open architecture as the crucial advancement in software development maturity for creating interoperable software applications modeling hydraulic and hydrologic problems. They define software architecture as “the conceptual structure and logical organisation of a computer or computer-based system”.

Currently, hydraulic software is at the development stage of closed architecture, lacking the possibility to “plug in” novel software components into existing systems. It is often based on proprietary, usually commercial products that are not compatible with each other and serve the specific needs of one organization. Open architecture, on the contrary, is based on open standards and allows users to shape an application to their needs from “off-the-shelf” software products.

A possible complementary approach to coupling hydrodynamic models in an open architecture is the Open Modeling Interface (OpenMI). Gregersen et al. [GGW07] state that “OpenMI is a pull-based pipe-and-filter architecture, which consists of communicating components [. . . ] that exchange memory-based data in a predefined way and in a predefined format”. However, the existing implementation of the OpenMI Environment does not work in heterogeneous systems: components implemented in the .NET and Java languages, for example, cannot be connected [HOS08]. Even though the framework would generally be suited to coupling adjacent 2D domains and controlling the simulation, there have so far been no endeavors to support truly distributed calculation, as in grid computing, using OpenMI.

The aim of this thesis has been to contribute to the implementation of open standards from spatial data and grid infrastructures in the field of flood modeling.

For the first time, the process of flood modeling by two-dimensional hydrodynamic simulation — flow model creation, flood simulation, and results evaluation — has been formalized as a sequence of geoprocessing tasks. This was accomplished by the development of geoprocessing grid services for flood modeling. Two time-consuming and data-intensive tasks in this process were selected for parallelization and prototypical implementation. The novel approach now enables flood modeling experts, for example, to set up large-scale two-dimensional hydrodynamic models using standard-conforming digital elevation data services, to run their flood simulations remotely over the web, and to store, manage, and analyze their simulation results on one or more computing and storage resources in a grid infrastructure.
