
4.1.3 Mesh deformation and mesh generation

Once one of these two smoothing approaches has been applied, the resulting polygon is no longer fitted to the original mesh. Hence, three different approaches come into consideration:

• Remesh the domain Ω such that the new interface is contained in the new grid.

• Move the nodes of the current mesh such that the new interface is contained in the new grid.

• Use a finite element method that is capable of coping with that situation.

Although the third idea is probably the best-suited approach, it is beyond the scope of this thesis and the reader is referred to the literature on unfitted finite element methods [8,9,74] and extended finite element methods (XFEM) [63,30,28]. Moreover, there is considerable progress in arbitrary Lagrangian-Eulerian (ALE) methods; cf. the survey article [45] and the references therein. Those methods are widely used in the context of computational fluid dynamics and in the simulation of structural mechanics.

Movement of the mesh nodes is an efficient approach as long as the deformation is of moderate size, and it corresponds to the Lagrangian description used in ALE methods. Large deformations typically yield mesh entanglement and therefore require remeshing of the domain. However, as long as the distortion is small enough, moving the mesh nodes is efficient and can be implemented by means of a three-step strategy:

1. extend the velocity field (W_i · n_{J_i}) n_{J_i} to the bulk of the domain (or at least to a narrow band around the interface),

2. move the nodes, and

3. regularize the mesh.

The first step is mandatory if the displacement of the interface nodes is larger than the mesh size, since the nodes in the bulk of the domain have to be moved, too, in order to prevent entanglement of the mesh.

Hence, one requires efficient schemes for extending the velocity field. As already mentioned in Section 3.1 on page 93, one can make use of ideas developed in the context of level set and fast marching methods. Furthermore, it is possible to apply methods of linear elasticity, see [93,48], where the mesh is regarded as an elastic solid whose outer boundary Γ is fixed while the interior is deformed in such a way that the interface β_i is mapped to β_{i+1}. By that means one obtains a displacement field for all nodes of the mesh. A less sophisticated and less robust, but easier approach was chosen for the computations in this thesis: the extension of the velocity field is obtained by means of interpolation. For this purpose, the spatial coordinates are treated separately. All boundary nodes are fixed, i.e. they have zero displacement, and thus the coordinates of the extended vector field can be computed at any node of the mesh by interpolation of the normal component of the velocity field at the interface nodes.
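The following is a minimal sketch of such an interpolation-based extension, assuming the mesh nodes, the interface node indices, and the normal velocities are available as NumPy arrays; the function and variable names (and the choice of scipy.interpolate.griddata) are illustrative and not the actual implementation of the thesis.

```python
import numpy as np
from scipy.interpolate import griddata

def extend_velocity(nodes, interface_idx, boundary_idx, v_interface):
    """Extend the normal velocity field (W_i · n) n from the interface to all
    mesh nodes by componentwise scattered-data interpolation.

    nodes         : (N, 2) array of mesh node coordinates
    interface_idx : indices of the interface nodes
    boundary_idx  : indices of the outer boundary nodes (zero displacement)
    v_interface   : (len(interface_idx), 2) array of (W_i · n) n at the interface
    """
    # Data sites: interface nodes carry the prescribed velocity,
    # outer boundary nodes are fixed (zero displacement).
    sites = np.vstack([nodes[interface_idx], nodes[boundary_idx]])
    values = np.vstack([v_interface, np.zeros((len(boundary_idx), 2))])

    # Treat the two spatial coordinates separately, as described above.
    vx = griddata(sites, values[:, 0], nodes, method='linear', fill_value=0.0)
    vy = griddata(sites, values[:, 1], nodes, method='linear', fill_value=0.0)
    return np.column_stack([vx, vy])
```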

Applying the transformation approach of path following (see page 92), moving a node amounts to nothing but adding the extension velocity field to its position. When the deformation of the mesh has been completed successfully, a regularization of the mesh is typically indicated: the quality of the obtained mesh may be low due to sharp angles of some elements. Hence, common strategies which jiggle the mesh, while interface and outer boundary nodes remain fixed, can be applied.
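A possible realization of steps 2 and 3 is sketched below, with a simple Laplacian smoothing pass standing in for the jiggling step; the thesis does not specify the particular regularization, so this is only an assumed variant with illustrative names.

```python
import numpy as np

def move_and_jiggle(nodes, triangles, v_ext, fixed_idx, n_sweeps=5):
    """Add the extended velocity field to the node positions (step 2), then
    regularize by Laplacian smoothing (step 3, "jiggling"), keeping the
    interface and outer boundary nodes fixed."""
    new_nodes = nodes + v_ext                      # step 2: move the nodes

    # Build node-to-neighbour adjacency from the triangulation.
    neighbours = [set() for _ in range(len(nodes))]
    for a, b, c in triangles:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))

    fixed = np.zeros(len(nodes), dtype=bool)
    fixed[fixed_idx] = True

    for _ in range(n_sweeps):                      # step 3: jiggle the mesh
        for i, nbrs in enumerate(neighbours):
            if not fixed[i] and nbrs:
                new_nodes[i] = new_nodes[list(nbrs)].mean(axis=0)
    return new_nodes
```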

Another benefit of mesh deformation is a simple implementation of the transport of discretized function variables to the new mesh, which is required by total linearization methods (see step 2c of Algorithm 5).

Since the whole mesh topology is preserved, nothing has to be done when continuous and piecewise linear finite elements are used: the function values are attached to their corresponding nodes and are transported by means of the displacement of the nodes. However, the movement of the nodes due to the mesh regularization step, which is not necessary from the perspective of the algorithms but only for reasons of numerical stability, has to be treated separately; strictly speaking, it calls for interpolation (and, if required, extrapolation). These effects are neglected in the implementation of the algorithms of this thesis, since mesh jiggling in order to increase the quality of the mesh has only a minor impact on the location of the grid points.
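For continuous, piecewise linear elements this transport can be sketched as follows; the re-interpolation branch (needed only after jiggling and, as noted above, neglected in the implementation) uses matplotlib's linear triangular interpolator as an illustrative stand-in, not the method of the thesis.

```python
import numpy as np
from matplotlib.tri import Triangulation, LinearTriInterpolator

def transport_p1(values, old_nodes, new_nodes, triangles, jiggled=False):
    """Transport nodal values of a continuous, piecewise linear FE function
    to the deformed mesh.  Since the mesh topology is preserved, the values
    simply travel with their nodes; only if the nodes were additionally
    jiggled would the old FE function have to be re-evaluated at the new
    node positions."""
    if not jiggled:
        return values.copy()              # values ride along with their nodes
    tri = Triangulation(old_nodes[:, 0], old_nodes[:, 1], triangles)
    interp = LinearTriInterpolator(tri, values)
    out = interp(new_nodes[:, 0], new_nodes[:, 1])   # masked outside old mesh
    # Keep the old nodal value wherever a jiggled node left the old mesh.
    return np.where(np.ma.getmaskarray(out), values, out.filled(0.0))
```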

However, mesh deformation is not always possible. In particular, if topology changes of the type described above occur or if the distortion is too large, a complete remesh of the domain is used. Since mesh generation is costly, this situation should be avoided whenever possible. It is, however, typically necessary during the first iterations of the algorithms from Section 3.4, since the updates are large. Especially in that situation the current guess B_i is far from optimal and high accuracy is of minor interest. Hence, it is reasonable to use coarse grids at first and to refine them during the course of the iteration. However, the shape calculus based methods call for a sufficient number of nodes on the interface β_i such that the update velocity fields are reliable. This fact constrains the mesh size from above and has to be taken into account when remeshing. In particular, if remeshing cannot be avoided, it should be carried out such that a renewed need for mesh generation is unlikely. That is to say, use a smooth interface and ensure that the interface nodes are arranged regularly. Consequently, it is appropriate to use a smoothed spline interpolation of the current interface as input for the mesh generator.
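A minimal sketch of such a preprocessing step is given below, assuming the interface is available as a closed polygon; the smoothing weight, the node-count heuristic, and the use of SciPy's periodic smoothing splines are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def spline_resample_interface(poly, h_max, smooth=1e-3):
    """Prepare a remesh: fit a periodic smoothing spline to the current
    interface polygon and resample it with regularly arranged nodes whose
    spacing stays below h_max (the upper bound on the mesh size imposed by
    the shape calculus based update)."""
    closed = np.vstack([poly, poly[:1]])                 # close the polygon
    # Periodic parametric smoothing spline through the interface nodes.
    tck, _ = splprep([closed[:, 0], closed[:, 1]],
                     s=smooth * len(poly), per=True)

    # Choose the number of samples so that neighbouring interface nodes are
    # regularly arranged and at most h_max apart (polygon length estimate).
    length = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
    n = max(int(np.ceil(length / h_max)), 8)

    u = np.linspace(0.0, 1.0, n, endpoint=False)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])                     # input for the mesher
```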

Moreover, small connected components of the active set may occur, which in an extreme case consist of one single node. This typically happens when a protuberance is cut off but not completely eliminated

(like in Figure 4.8). For reasons of stability and efficiency such small artificial connected components are deleted: otherwise, unreliable FE approximations would be produced if the mesh remained as coarse as it is, or a pointlessly fine mesh would have to be generated in order to obtain a suitable resolution of the very small connected component.5 If the optimal in-/active set actually has such small connected components, an appropriately fine mesh is needed anyway and thus the small components are no longer small in relation to the mesh size.
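Such a filter can be as simple as the following sketch; the thresholds (minimum node count and minimum length relative to the mesh size) are illustrative placeholders and not values taken from the thesis.

```python
import numpy as np

def drop_small_components(components, h, min_nodes=4, min_length_factor=3.0):
    """Delete small artificial connected components of the interface, e.g.
    remainders of an incompletely cut-off protuberance (cf. Figure 4.8).
    A component is kept only if it has enough nodes and its polygonal length
    is not small relative to the mesh size h."""
    kept = []
    for poly in components:                  # each poly: (m, 2) array of nodes
        closed = np.vstack([poly, poly[:1]])
        length = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
        if len(poly) >= min_nodes and length >= min_length_factor * h:
            kept.append(poly)
    return kept
```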

Figure 4.8: Incomplete cut-off of a protuberance.

All in all, the implemented mesh update routine roughly works as follows:

• If the normal component W_i · n_{J_i} is large, or if there is an interface node B whose distortion (W_i(B) · n_{J_i}(B)) n_{J_i}(B) is not significantly smaller than κ_{J_i}(B) (which means that self-intersection of the interface may occur), Huygens' principle is used for the update. In addition, FFT or smoothing spline methods are applied to smooth the new interface nodes. Finally, a remesh is performed.

• If the normal component W_i · n_{J_i} is of moderate size and local self-intersection can be excluded, the interface is updated by means of the transformation approach (see Figure 4.3). In addition, FFT or smoothing spline methods are applied to smooth the new interface nodes, and a remesh is performed.

• Otherwise, a mesh deformation strategy as described above is used. If mesh entanglement occurs, one of the other two branches of the routine is at hand as a fall-back option (a dispatch sketch is given below).
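The branch selection can be condensed into a small dispatch routine; the thresholds and flag names below are purely illustrative placeholders and do not correspond to values used in the thesis.

```python
def choose_branch(w_max, self_intersection_possible, entangled,
                  large=0.5, moderate=0.1):
    """Illustrative dispatch between the three branches of the mesh update
    routine described above (thresholds relative to the mesh size)."""
    if w_max > large or self_intersection_possible:
        # Huygens' principle update, smoothing of the new interface, remesh.
        return "huygens"
    if w_max > moderate:
        # Transformation approach, smoothing of the new interface, remesh.
        return "transformation"
    if not entangled:
        # Mesh deformation as described above.
        return "deformation"
    # Fall-back: deformation failed, use one of the remeshing branches.
    return "transformation"
```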

The second branch is used for several reasons. On the one hand, it is more robust than the mesh deformation strategy and thus is a good alternative. On the other hand, it is less robust than Huygens' principle, but cheaper (no need for a nodal computation of z) and more accurate, since the update does not have to be larger than the mesh size. Note that the Huygens' principle update can only be applied if the nodes (at least one, to be more precise) are shifted by more than one mesh size. Moreover, it should be mentioned that it is a nontrivial task to detect self-intersections of the interface. It has been illustrated that Huygens' principle is capable of coping with such situations, but the usage of smoothing methods induces additional difficulties, illustrated in Figure 4.9: the smoothed version of a given connected component of the interface may intersect itself, intersect another connected component, or intersect the outer boundary Γ. These different incidents have to be detected and handled adequately.
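A naive detection of these three incidents could look as follows, assuming the shapely library is available and the components are given as coordinate arrays; this is a sketch of one possible check, not the detection used in the thesis.

```python
from shapely.geometry import LinearRing

def check_intersections(components, outer_boundary):
    """Detect the incidents sketched in Figure 4.9 for the smoothed
    interface: a component intersecting itself, two components intersecting
    each other, and a component intersecting the outer boundary Γ."""
    rings = [LinearRing(c) for c in components]
    gamma = LinearRing(outer_boundary)

    self_hits = [i for i, r in enumerate(rings) if not r.is_simple]
    pair_hits = [(i, j)
                 for i in range(len(rings))
                 for j in range(i + 1, len(rings))
                 if rings[i].intersects(rings[j])]
    gamma_hits = [i for i, r in enumerate(rings) if r.intersects(gamma)]
    return self_hits, pair_hits, gamma_hits
```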

All those more or less sophisticated ideas are mainly devoted to one goal, namely to increase the stability of the implementation of the algorithms. Actually, they help to cope with problems related to the lack of global convergence of shape calculus based algorithms. Questions like changing the topology of the active set during the iteration are actually not an issue of those methods. Moreover, the numerical schemes should be initialized with sets that are not only of the right topological type but that actually are "near" the optimal set. Numerical practice shows that most of the discussed issues only occur during the "pseudo-global" phase of the iteration and that, once the current guess is near the optimal active set, everything works fine. In particular, all smoothing and miscellaneous strategies intervene only if they are necessary, and they typically have no influence once the current iterate is sufficiently near a critical shape. Nonetheless, the presented coping strategies enable (though do not guarantee) convergence even if the initial guess is far away from any critical point; cf. Paragraphs 4.2.3 and 4.2.6.

5 Note that the resolution has to be high enough such that the discrete polygon has no sharp vertices, since it has to mirror the C^{1,1} regularity of the interface adequately.


Figure 4.9: Prototypic intersections of the smoothed interface (interface β_{i+1}, its smoothed version, and the outer boundary Γ).