
method, shocks can be treated directly, and sound waves and moving matter are handled with the same precision. As it is a finite volume scheme, it strictly conserves mass, momentum and energy.

However, it might run into problems with the internal energy evolution.

In conclusion, we decided that a second-order Godunov method is the best tool for our study.

3.9 Adaptive mesh refinement (AMR) and parallelization

In this section we discuss methods to speed up the simulations. We already mentioned that the number of dimensions has a large effect on the computational cost of a simulation. In addition to exploiting the symmetries of the problem (i.e. lowering the number of dimensions) and limiting the included physical processes (e.g. gravity or radiative cooling are computationally intensive), run times can be shortened by using several CPUs in parallel. Technically, this is implemented via MPI in the RAMSES and PLUTO codes, which are used for this work. The simplest form of domain decomposition is to subdivide the computational volume into parallel slabs. Cells on the boundary of a slab have to be communicated to all adjacent CPUs, whereas cells inside a slab belong to a single CPU. However, if some regions of space contain smaller cells, passing slabs of equal volume to all CPUs can lead to sub-optimal load balancing. In practice, simulations with non-uniform resolution therefore most commonly distribute the cells among the CPUs along a Peano-Hilbert curve, which minimizes the number of cells that have to be communicated to more than one CPU and distributes the cells evenly among the CPUs.
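To illustrate the idea, the following minimal Python sketch distributes the cells of a small uniform 2D grid over several CPUs by sorting them along a Hilbert curve and cutting the ordering into equal chunks. It uses a textbook Hilbert-curve mapping and is purely illustrative; it is not the decomposition code of RAMSES or PLUTO, and the grid size and CPU count are arbitrary.

```python
def hilbert_index(order, x, y):
    """Map cell coordinates (x, y) on a 2**order x 2**order grid to their Hilbert-curve index."""
    n = 2 ** order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the curve stays continuous
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d


def distribute_cells(order, n_cpu):
    """Assign each cell to a CPU by cutting the Hilbert ordering into equal chunks."""
    size = 2 ** order
    cells = sorted((hilbert_index(order, x, y), (x, y))
                   for x in range(size) for y in range(size))
    chunk = max(len(cells) // n_cpu, 1)
    return {xy: min(i // chunk, n_cpu - 1) for i, (_, xy) in enumerate(cells)}


# Example: an 8x8 grid distributed over 4 CPUs. Cells that are neighbours along
# the curve mostly end up on the same CPU, which keeps the amount of boundary
# data exchanged between CPUs small.
owner = distribute_cells(order=3, n_cpu=4)
```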

Another way of speeding up mesh simulations is to use lower resolution in areas which are less interesting. Whereas resolution naturally follows the distribution of mass in an SPH simulation, this functionality has to be added to grid codes. If it is known beforehand in which part of the computational volume high resolution will be needed, it is advantageous to define a fixed non-uniform grid. In most of our simulations, however, the areas with steep gradients move through the box. In such situations adaptive mesh refinement (AMR) is the method of choice. AMR optimizes the computational cost of a simulation by increasing the resolution in crucial areas and lowering it in smooth regions. Technically, the mesh of the RAMSES code has several levels: think of the cube containing the computational domain as level zero. At level one this cube is divided into 2^ν cells, where ν is the number of dimensions. At level two each cell of level one is again subdivided into 2^ν cells, and so on. The advantage of AMR is that not all cells of a level have to be subdivided; the refined region can follow the interesting parts of the simulation. In RAMSES, for example, the AMR region can be determined by checking gradients of the primitive variables or via a geometrical criterion. The latter can be helpful if, for instance, one refines on density gradients but the initial conditions contain a density jump in pressure equilibrium in part of the volume; in this situation it can help to exclude that part of the volume from refinement.
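The following short sketch shows what such a gradient-based refinement flag can look like on a 2D density field, optionally combined with a geometric mask. The threshold value and the mask are illustrative assumptions; this is a schematic criterion, not the exact refinement logic of RAMSES.

```python
import numpy as np

def refine_flags(rho, threshold=0.2, geometric_mask=None):
    """Flag cells whose relative density jump to a neighbour exceeds `threshold`."""
    flag = np.zeros(rho.shape, dtype=bool)
    # relative jumps across cell interfaces in the x-direction
    rel_x = np.abs(rho[1:, :] - rho[:-1, :]) / np.minimum(rho[1:, :], rho[:-1, :])
    flag[1:, :] |= rel_x > threshold
    flag[:-1, :] |= rel_x > threshold
    # relative jumps across cell interfaces in the y-direction
    rel_y = np.abs(rho[:, 1:] - rho[:, :-1]) / np.minimum(rho[:, 1:], rho[:, :-1])
    flag[:, 1:] |= rel_y > threshold
    flag[:, :-1] |= rel_y > threshold
    if geometric_mask is not None:
        # geometrical criterion, e.g. only refine inside a cube around the feedback region
        flag &= geometric_mask
    return flag
```

Every flagged cell of a given level would then be split into 2^ν cells of the next level (4 in 2D, 8 in 3D).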

In the RAMSES code the interpolation at the borders of regions with different grid levels can be controlled with the interpol_type variable in the namelist file. Repeating the same simulation (60M, infinite cloud, spherical cavity) with interpol_type=0 (no interpolation), 1 (MinMod7) and 2 (MonCen) showed that the feedback energy efficiency is similar in all three simulations.

This is expected, since the algorithm optimizing the extent and location of highly resolved areas tries to put the grid borders in areas with small gradients.
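For reference, MinMod and MonCen are standard slope limiters; a minimal sketch of the two limiter functions in their common textbook form is given below. It is only meant to illustrate the difference in diffusivity, not to reproduce the routines inside RAMSES.

```python
def minmod(a, b):
    """MinMod slope: the smaller of the two one-sided slopes if they agree in sign, else zero."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def moncen(a, b):
    """Monotonized central (MonCen) slope: limits the central slope (a + b) / 2 by 2a and 2b."""
    if a * b <= 0.0:
        return 0.0
    sign = 1.0 if a > 0.0 else -1.0
    return sign * min(2.0 * abs(a), 0.5 * abs(a + b), 2.0 * abs(b))
```

Here a and b stand for the left- and right-sided differences of a quantity in a cell; MinMod is the more diffusive but more robust choice, which is consistent with the stability behaviour described below.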

In our production runs we used MonCen slope limiting for the Riemann solver (unless it became unstable and crashed) and MinMod interpolation between grid levels. Simulations with this mixture of slope calculations did not differ significantly from a simulation using the same interpolation scheme for the Riemann solver and the grid levels, since the grid boundary should be in a smooth region of the flow anyway.

7 See Sect. 3.7 for a definition of MinMod and MonCen.

Figure 3.11: Sketch of numerical diffusion. The amount of numerical diffusion in a simulation is not only constrained by the choice of Riemann solver but also by the angle between the grid axes and the direction of the flow. Transversal flows, as shown in the right sketch, are prone to higher numerical diffusivity.

3.9.1 Pitfalls of AMR

If AMR is used in a simulation, one has to be aware of its side effects. For example, in simulations with moving AMR regions one can observe shocks seeded by the grid interfaces.

In our simulations we also observed cooling introduced by grid interfaces. The simplest test to reproduce this behavior is a stellar wind in a homogeneous ambient medium with a geometrical refinement criterion which limits the highest resolution grid to a cube centered on the feedback region's center. A gradient-based refinement criterion is also used in this test. If the geometrical criterion starts to limit the refinement in areas which the gradient-based criterion would refine, the finest grid starts losing track of the dense shell. Ultimately this results in enhanced cooling near the boundaries of the finest grid. Since the region's boundaries can now end up close to strong gradients, interpol_type=0 makes the simulation stable enough to survive this phase.

This is related to the code crashes with negative temperatures behind the shock: the problem is an overshooting oscillation of the interpolation scheme near the edges of the grid levels. Interpolating with scheme 0 made the simulations very stable but diffusive.8
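A simple way to search a snapshot for this kind of spurious cooling is sketched below: flag cells that are hot and dilute, i.e. lie inside the bubble, yet radiate noticeably. The thresholds and variable names are hypothetical and have to be adapted to the problem; this is a diagnostic idea, not code used in the thesis.

```python
import numpy as np

def flag_spurious_cooling(T, n, dE_cool, T_hot=1e6, n_low=1e-2):
    """Boolean mask of hot, low-density cells that nevertheless lose energy radiatively.

    T: temperature, n: number density, dE_cool: energy radiated per cell and time step
    (all given on the same grid, in consistent units). Hot, dilute bubble gas should cool
    only on time scales much longer than the simulation, so flagged cells hint at
    grid-interface artifacts like the ones described above.
    """
    return (T > T_hot) & (n < n_low) & (dE_cool > 0.0)
```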

3.9.2 Numerical diffusion

Numerical diffusion9 is a problem arising in grid codes, because the discretized hydrodynamical equations have truncation errors which tend to make them more diffusive than the differential equations. This error is smaller for higher-order schemes. An incorrect evaluation of field gradients (which can be caused by too coarse meshes or bad flux limiting functions) and an angle between the flow direction and the grid axes also increase this problem.

8 A re-run on the server OPTIMAL actually showed that in a 2D cut along z = 0 cooling occurs in cells inside the bubble, i.e. radiative losses were found in cells with high temperatures and low densities. However, such cells should not cool. This is seen, e.g., along the positive x axis. The affected cells are close to the grid boundary and the problem is caused by the geometric refinement criterion: the grids lose track of the dense shell. Another pitfall of the geometric refinement criterion is seen in the velocity. Here, the geometric refinement criterion of the finest grid leads to a cross-like feature in the velocity, since the grid boundary is reached first along the grid axes. Types 1 and 2 crash after 177 code time units (output 12). This is the time when the geometrical refinement strategy has already taken over; in this phase the finest grid cannot follow the dense shock any more.

9 Numerical diffusion is also known as "artificial" or "false" viscosity, damping or dissipation.

In the two computational domains shown in Fig. 3.11 density, pressure and velocity are constant.

The conservative passive scalar shown in blue has a step which is parallel to the flow direction (small black arrow). If the flow is not parallel to the grid, numerical diffusion will smear out the step in the conservative passive scalar. Numerical diffusion is zero if there is either no gradient of the passive scalar (x, left sketch) or no velocity component (y, left sketch). Transversal flows (right sketch) show gradients of the passive scalar and have velocity components in all directions. Hence the discretization errors will lead to numerical diffusion.
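The effect can be reproduced with a few lines of Python: a sharp step in a passive scalar is advected with a first-order upwind (donor-cell) scheme, once with the flow parallel to the step and once with a transversal flow. Grid size, velocities and time step are arbitrary illustrative choices, and the scheme is deliberately simple; it is not the scheme used in our production runs.

```python
import numpy as np

def advect(q0, vx, vy, dx, dt, n_steps):
    """First-order (donor-cell) upwind advection of a scalar with constant vx, vy >= 0.

    Periodic boundaries via np.roll; axis 0 is x, axis 1 is y.
    """
    q = q0.copy()
    for _ in range(n_steps):
        q = q - vx * dt / dx * (q - np.roll(q, 1, axis=0))  # upwind difference in x
        q = q - vy * dt / dx * (q - np.roll(q, 1, axis=1))  # upwind difference in y
    return q

n = 64
dx = 1.0 / n
q0 = np.zeros((n, n))
q0[:, : n // 2] = 1.0          # sharp step of the passive scalar in the y-direction

dt = 0.4 * dx                  # CFL-limited time step
aligned = advect(q0, vx=1.0, vy=0.0, dx=dx, dt=dt, n_steps=50)   # flow parallel to the step
diagonal = advect(q0, vx=1.0, vy=1.0, dx=dx, dt=dt, n_steps=50)  # transversal flow

# In `aligned` the step stays perfectly sharp, whereas in `diagonal` the exact
# solution would merely shift the step, but the truncation error of the scheme
# smears it over several cells.
```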

Chapter 4

Basic building blocks of simulations

The aim of this work is to combine stellar feedback with the physics of the ISM. If such simulations are carried out with a grid code, this problem can be subdivided into a number of sub-problems which have to be solved accurately (enough). Many of these simplified problems have an analytic solution. Thus, before putting it all together and looking at the simulations as a whole, we check how well the code recovers the analytic solutions of these sub-problems and discuss how we technically implement the stellar feedback. Readers not interested in these technical details can skip this section.

As mentioned in Sect. 3.5, the typical sub-problem arising at every cell interface at every time step is the Riemann problem. The code subdivides the volume into cells. Each cell has an average density, an average pressure and average velocities. At each cell face that lies inside the simulated volume1 the cell touches another cell. In 1D one can sketch this problem with a step function, e.g. in a density versus spatial coordinate diagram. If the averaged velocities in both cells are zero, this is the initial state of the Sod shock tube test (Sect. 4.2): two media with possibly differing gas properties are separated by a diaphragm. As soon as this barrier is removed, gradients in the gas will lead to waves. In the hydrodynamic case, we will find a rarefaction wave, a contact discontinuity (entropy wave) and a shock wave. The Sod shock tube test is sometimes also called the "dam-break problem".
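To make the structure of such a Godunov update concrete, the sketch below solves the standard Sod shock tube with a first-order finite volume scheme and an HLL approximate Riemann solver. It is a deliberately bare-bones illustration with crude wave-speed estimates and fixed boundary cells; it is not the solver configuration used in RAMSES or PLUTO for the production runs.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats commonly used for the Sod test

def primitive_to_conserved(rho, v, p):
    return np.array([rho, rho * v, p / (GAMMA - 1.0) + 0.5 * rho * v ** 2])

def conserved_to_primitive(u):
    rho, mom, e = u
    v = mom / rho
    p = (GAMMA - 1.0) * (e - 0.5 * rho * v ** 2)
    return rho, v, p

def flux(u):
    rho, v, p = conserved_to_primitive(u)
    return np.array([rho * v, rho * v ** 2 + p, (u[2] + p) * v])

def hll_flux(ul, ur):
    """HLL approximate Riemann flux between a left and a right conserved state."""
    rl, vl, pl = conserved_to_primitive(ul)
    rr, vr, pr = conserved_to_primitive(ur)
    cl, cr = np.sqrt(GAMMA * pl / rl), np.sqrt(GAMMA * pr / rr)
    sl, sr = min(vl - cl, vr - cr), max(vl + cl, vr + cr)  # crude wave-speed estimates
    if sl >= 0.0:
        return flux(ul)
    if sr <= 0.0:
        return flux(ur)
    return (sr * flux(ul) - sl * flux(ur) + sl * sr * (ur - ul)) / (sr - sl)

# Sod initial conditions: two constant states separated by a diaphragm at x = 0.5
n = 200
x = (np.arange(n) + 0.5) / n
u = np.where(x < 0.5,
             primitive_to_conserved(1.0, 0.0, 1.0)[:, None],
             primitive_to_conserved(0.125, 0.0, 0.1)[:, None])

dx, t, t_end = 1.0 / n, 0.0, 0.2
while t < t_end:
    rho, v, p = conserved_to_primitive(u)
    dt = min(0.4 * dx / (np.abs(v) + np.sqrt(GAMMA * p / rho)).max(), t_end - t)
    f = np.array([hll_flux(u[:, i], u[:, i + 1]) for i in range(n - 1)]).T
    u[:, 1:-1] -= dt / dx * (f[:, 1:] - f[:, :-1])
    t += dt

# `u` now holds density, momentum and total energy at t = 0.2, showing the
# rarefaction wave, the contact discontinuity and the shock of the Sod problem.
```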

As soon as we are convinced that our numerical method can treat individual cell interfaces with sufficient accuracy, we can proceed to blast waves. The Sedov-Taylor problem, which is a blast wave without radiative cooling losses, should be recovered (Sect. 4.3).
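For this comparison the analytic shock radius of the Sedov-Taylor solution is all that is needed; a one-line helper is shown below. The dimensionless constant xi0 depends on the adiabatic index and is approximately 1.15 for gamma = 5/3; the function assumes that the explosion energy E, the ambient density rho0 and the time t are given in consistent (e.g. cgs) units.

```python
def sedov_radius(E, rho0, t, xi0=1.15):
    """Shock radius of a Sedov-Taylor blast wave: R(t) = xi0 * (E * t**2 / rho0)**(1/5)."""
    return xi0 * (E * t ** 2 / rho0) ** 0.2
```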

Stellar winds are another agent in our models. Sect. 5.2 checks the conservation of mass and energy in our feedback prescription. It also compares the numerical solution to the analytic solution for a constant wind without radiative cooling.
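One such analytic reference is the self-similar, energy-driven (adiabatic) bubble of a constant wind, commonly quoted from Weaver et al. (1977); a small helper evaluating the outer shock radius is sketched below, assuming the mechanical wind luminosity L_w and the ambient density rho0 are given in consistent units.

```python
import math

def wind_bubble_radius(L_w, rho0, t):
    """Outer radius of an adiabatic, energy-driven wind bubble (Weaver et al. 1977):
    R(t) = (250 / (308 * pi))**(1/5) * (L_w * t**3 / rho0)**(1/5), i.e. about 0.76 * (L_w * t**3 / rho0)**(1/5)."""
    return (250.0 / (308.0 * math.pi)) ** 0.2 * (L_w * t ** 3 / rho0) ** 0.2
```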
