

7.1 Open Questions

There are a number of open questions arising from Aberth's presentation of the algorithm, and design issues which need to be addressed, rendering an implementation non-trivial. In some cases the translation from informal algorithm description to precise specification and program code is highly intricate and introduces 'hidden' design choices, for which it is unclear how to proceed. We list these questions here, and explore them further in this chapter.

Performance

The performance of the algorithm is undocumented; it is only stated that it was implemented. The performance characteristics, range of practical applicability, and complexity are all unknown and require investigation. The dependence of the algorithm upon the performance of the underlying interval arithmetic is also unclear. For instance, if the interval ranges generated are too wide, non-terminating subdivision of faces becomes a risk.
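This dependence can be seen in miniature with naive interval arithmetic. The sketch below is our own illustration (not part of Aberth's implementation): evaluating f(x) = x^2 − x over [0, 1] naively, the dependency problem widens the computed enclosure to [−1, 1] although the true range is [−1/4, 0], so a sign test over the box stays ambiguous and further subdivision would be forced.

```python
# Minimal sketch (our illustration): naive interval arithmetic showing how
# over-wide enclosures make sign tests ambiguous and force subdivision.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

# f(x) = x*x - x over [0, 1]: naive evaluation treats the two occurrences of
# x as independent, giving the enclosure [-1, 1] rather than [-0.25, 0].
x = Interval(0.0, 1.0)
naive = x * x - x
print(naive)                  # Interval(lo=-1.0, hi=1.0)
print(naive.contains_zero())  # True: sign test is ambiguous for this box
```

A real implementation would use outward rounding and tighter forms (e.g. centred forms); the point here is only that the width of the computed enclosures drives the subdivision behaviour.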

7 Computation of Topological Degree

Face Processing Strategy

The main workspace of the algorithm consists of an array of faces L, cf. Subsection 4.4.4, which are in the first instance taken from X. There is an issue concerning the ordering of these faces in L, and the manner in which generated sub-faces are added to it. In an abstract sense, it is more accurate to say that the structure of faces which are generated and analysed is a tree. There is then a choice of preferential order for face processing. In [Abe94] it is stated that sub-faces are added to the front of L, implying a depth-first strategy. It is not clear that this choice is the most efficient.

This design choice only affects the non-functional performance, however.

Face Subdivision Strategy

The given strategy for subdividing faces into sub-faces is that all interval fields of the face are bisected. However, this is not a functional requirement, since any sensible means of partitioning a face into smaller parts might plausibly work just as well. It is possible to subdivide in one or only some dimensions, provided that face edge widths can ultimately be made arbitrarily small. This generates fewer immediate sub-faces, but increases the likelihood of requiring further subdivisions. Furthermore, it may be that a good choice of subdivision point reduces the number of faces processed overall; a heuristic to choose such a point may be possible. It is unclear whether bisection is an optimal choice in this regard. Since we have observed that the sub-faces form a tree, it is clearly desirable to try to reduce both the branching factor and the depth of this tree, to minimise the computational effort. The entire computation is based on recursive face generation and processing, so the number of faces generated, especially in the early stages, should be a major determining factor in (non-functional) algorithm performance.
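To make the trade-off concrete, here is a minimal sketch (the representation and function names are ours, not Aberth's: a face as a list of (lo, hi) interval fields) contrasting full bisection with bisection in only the widest dimension:

```python
# Sketch (our illustration) of two face subdivision strategies.
from itertools import product

def bisect_all(face):
    """Bisect every interval field: 2**m sub-faces for an m-dimensional face."""
    halves = []
    for lo, hi in face:
        mid = 0.5 * (lo + hi)
        halves.append([(lo, mid), (mid, hi)])
    # Cartesian product of the halves in each dimension
    return [list(combo) for combo in product(*halves)]

def bisect_widest(face):
    """Bisect only the widest field: just 2 sub-faces, but a deeper tree."""
    k = max(range(len(face)), key=lambda i: face[i][1] - face[i][0])
    lo, hi = face[k]
    mid = 0.5 * (lo + hi)
    left, right = list(face), list(face)
    left[k] = (lo, mid)
    right[k] = (mid, hi)
    return [left, right]

face = [(0.0, 1.0), (0.0, 2.0)]
print(len(bisect_all(face)))     # 4 sub-faces (high branching, shallow tree)
print(len(bisect_widest(face)))  # 2 sub-faces (low branching, deeper tree)
```

Both strategies satisfy the functional requirement that edge widths can be made arbitrarily small; they differ only in the branching factor and depth of the resulting sub-face tree.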

Overlap Resolution Strategy

In Subsection 4.4.4 (Reduction of Faces) the necessity to eliminate any overlaps between child faces was explained. In the first instance, there is the question of whether the problem of boundary zeros for sub-problems occurs with sufficient frequency to merit a strategy of resolving overlaps in all cases. If this is indeed necessary, there are issues concerning both how to structure the search for overlaps between face pairs, and how to resolve overlaps once found. We may wish to resolve either by minimising immediate effort, or alternatively by minimising the number of generated faces for later work.

Ordering of Variables

In the default case we intersect the image of the box with a ray emanating from the origin in the positive direction of x1 (see Figure 4.4), but this choice is arbitrary. We could choose any coordinate-aligned direction without essentially changing the algorithm; a choice of initial ray in the direction of a different variable simply corresponds to a variable reordering. It may be possible to make a beneficial choice of ray direction; the illustrated examples support this supposition, at least. In Figure 4.2, if we take a ray emanating in a perpendicular direction to that which is illustrated (i.e. in the direction of the secondary variable), there is no intersection with the box image at all, so it seems that less computational effort would be needed. It may be that, at least in some cases, an optimal reordering of the variables is possible.

We address two of these issues immediately before proceeding further:

7.1.1 Face Processing Strategy

As noted above, in each major iteration of the algorithm, we are presented with an array L of faces which have to be tested (with interval arithmetic function evaluations) and either discarded (negative result), retained and placed in the output array L1 (positive result), or divided into sub-faces for further testing (ambiguous result). The structure of faces and sub-faces forms a tree. Choices of either prepending or appending new sub-faces to L correspond to processing of the face tree in depth-first or breadth-first order, respectively.

Both breadth-first and depth-first strategies were implemented and compared. A detailed analysis is omitted for the sake of brevity, but this comparison showed clearly that the depth-first choice had a much smaller memory requirement. This is a common observation for many types of tree search algorithms.

The computation time seems to be largely unaffected by this design choice, since the same number of faces need to be processed in either case. The reason for the marked observed difference in memory usage becomes clear once we consider the branching factor of this process: 2^(n−1) sub-faces are generated from each face in the first major iteration of an n-dimensional problem. In the depth-first case, the number of sub-faces which need to be stored (for subsequent analysis) is proportional to the depth of the current sub-face in the tree. In the breadth-first case, however, the number of deferred sub-faces is proportional to the breadth of the tree at the current depth. For given n, the maximum tree depth determines the memory requirement, and this requirement grows linearly with depth for the depth-first search, but exponentially with depth for the breadth-first search.
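This memory behaviour can be reproduced with an abstract simulation of the pending-face list. The sketch below is our own illustration: it assumes every face above depth d is ambiguous and subdivides into b sub-faces, and records the peak length of the pending list under each strategy.

```python
# Sketch (our illustration): peak size of the pending-face list when a
# sub-face tree of branching factor b is explored to depth d, assuming
# every face at depth < d is ambiguous and must be subdivided.
from collections import deque

def peak_pending(b, d, depth_first):
    # pending holds only the depths of faces awaiting processing;
    # the faces themselves are abstract in this simulation.
    pending = deque([0])
    peak = 0
    while pending:
        peak = max(peak, len(pending))
        # depth-first: take the most recently added face (stack discipline);
        # breadth-first: take the oldest face (queue discipline)
        depth = pending.pop() if depth_first else pending.popleft()
        if depth < d:
            pending.extend([depth + 1] * b)  # each face yields b sub-faces
    return peak

print(peak_pending(b=4, d=5, depth_first=True))   # 16: linear in depth
print(peak_pending(b=4, d=5, depth_first=False))  # 1024: exponential in depth
```

With b = 4 and d = 5 the depth-first peak is (b−1)d + 1 = 16 pending faces, against b^d = 1024 for breadth-first, matching the linear-versus-exponential growth described above.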

A possible additional advantage of the depth-first strategy is that sub-faces are entered into L1 in an order which has potential benefit from the viewpoint of searching for overlapping faces (discussed below). In this case, the elements of L1 are ordered with respect to the original face in L that they are a subset of, i.e. L1 is constructed as a series of sequential 'blocks' of faces, where all faces within a block were generated by subdivision of the same original face. The probability of two elements of L1 sharing a common edge is therefore greater if they are in the same block. It is plausible (although highly non-trivial) to imagine that by exploiting this structure, the number of comparisons required when checking the new list L of generated child faces may be reduced. In contrast, with the breadth-first strategy the elements of L1 are ordered with respect to size. There would seem to be no easy way to exploit this structure to optimise the overlap search.

The depth-first face processing strategy is therefore favoured. This is implemented by recursing on all sub-faces before advancing to the next face in the input list L, i.e. by prepending sub-faces to L.

It is also worth noting that a good choice of face processing strategy depends on the face subdivision mechanism that is used, since this determines the structure of the sub-face tree. For instance, if one only subdivides in a single variable, then the maximum branching factor is drastically reduced (to 2).

7.1.2 Overlap Elimination Strategy

After the subdivision of faces is complete, all those elements positively selected (in L1) are decomposed into their child faces and entered into a new array L, ready for the next major iteration. Those faces in L1 that are geometrically adjacent will yield overlapping (or identical) child faces which will, by construction, have opposite orientation.

The first question posed is whether a common zero of the reduced subsystem (i.e. fi = 0, ..., fn = 0, where i > 1) occurs on the boundary of a face with sufficient frequency to justify a systematic elimination of all overlapping regions. Although it seems that for categories of randomly-generated problems this phenomenon should occur with zero probability, testing quickly revealed that, for the kinds of simple problems a user is likely to input (e.g. low degree polynomials with integer coefficients over a unit box), this failure case arises significantly often. A systematic elimination therefore seems fully justified and is implemented. In any case, it may even be that the computational effort required for this elimination is outweighed by the benefit of possibly reducing the total number of faces for later analysis. This is because, although the failure case notionally arises with zero probability, non-critical face overlap occurs more often than not. Whether the overlap elimination reduces the number of faces overall, or generates more, depends on whether an overlap is an exact duplicate (removing two faces) or partial (removing two faces, but generating possibly several more).

In implementing the overlap resolution there are two major design choices. Firstly, how should one structure the search for overlapping faces, and secondly, how should overlaps be best resolved, once found?

Let #L be the length of the array L to be searched. In the absence of any information concerning which pairs cannot, by construction, overlap, a complete check of all pairs of faces is required, taking #L(#L−1)/2 comparisons. We do have some such information: L is composed sequentially of small subsets of child faces with a common parent in L1, so each such face need not be compared with its immediate neighbours.

However, eliminating these comparisons would yield only a very minor reduction, which is likely not worth the implementation cost. As discussed in Subsection 7.1.1 above, further information could potentially be derived from the order in which the faces were entered into L1. Although it certainly seems that a more structured search, using fewer than #L(#L−1)/2 comparisons, is therefore possible, this is a complex problem; indeed, it is not at all clear that any saving would not be offset by the effort required to generate and process structural meta-data for L. It was therefore decided to proceed with an unmodified search.
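The unmodified search is then a plain double loop over unordered pairs. In the sketch below (the face representation and the overlap test are our own illustration), the #L(#L−1)/2 comparison count is made explicit:

```python
# Sketch (our illustration) of the unmodified all-pairs overlap search.
# A face is a list of (lo, hi) interval fields.

def intervals_overlap(a, b):
    # positive-measure overlap: a shared edge alone does not count
    return max(a[0], b[0]) < min(a[1], b[1])

def faces_overlap(f, g):
    # faces overlap iff every corresponding pair of interval fields overlaps
    return all(intervals_overlap(a, b) for a, b in zip(f, g))

def find_overlaps(faces):
    hits, comparisons = [], 0
    for i in range(len(faces)):
        for j in range(i + 1, len(faces)):   # each unordered pair once
            comparisons += 1
            if faces_overlap(faces[i], faces[j]):
                hits.append((i, j))
    return hits, comparisons                 # comparisons == n(n-1)/2

L = [[(0.0, 1.0), (0.0, 1.0)],
     [(0.5, 1.5), (0.0, 1.0)],
     [(2.0, 3.0), (0.0, 1.0)]]
hits, comparisons = find_overlaps(L)
print(hits)         # [(0, 1)]
print(comparisons)  # 3, i.e. 3*(3-1)/2
```

Any more structured search would have to prune pairs from these loops using meta-data about the block structure of L, which is exactly the complication dismissed above.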

After the search, those faces which are not matched (overlapped) with any other are retained unmodified; each matched pair must be processed to eliminate its duplicated regions. In general, new faces must be created to represent the unduplicated portions of any copies. To remove the duplicated portion of a pair of faces, the corresponding component intervals of each must be compared. This is a somewhat involved process which requires determining a number of split points: points which are an endpoint of an interval from one face, but which occur inside the corresponding interval of the other. The duplicated region is given by the product of duplicated interval segments, and the new faces are generated as products of intervals formed by breaking up the component intervals of each face at the split points. Due to the intricacies of this process, it occupies a notable portion of the software code, and can account for a substantial portion of the execution time. For brevity, technical coding details are omitted.

There are several possibilities for the number of new faces generated.

Let m be the common dimension of the overlapping faces. The best case, of course, is when all the interval fields coincide, requiring no new faces. The next-best case is when there is but a single split point, which requires a single new face for the unduplicated region. There are a number of intermediate cases, and the worst case arises when there are 2 split points in all interval fields. Each interval pair here may overlap end-to-end, or one may be contained within the other. If all pairs overlap end-to-end, 2(2^m − 1) new faces are generated; if one face is entirely inside the other, there are 3^m − 1 new faces. These cases are illustrated in Figure 7.1.

[Figure 7.1 here: five panels — best case, next-best case, next-next-best case, quasi-worst case, worst case — with the generated faces numbered.]

Figure 7.1: Overlapping faces in R^2 and the new faces generated (numbered) for the unduplicated regions.
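The split-point construction and the two extreme counts can be checked with a short sketch (the representation and helper names are ours): each face is broken at the other face's interior endpoints, the duplicated product is discarded, and the remaining pieces are the new faces.

```python
# Sketch (our illustration) of split-point decomposition for a pair of
# overlapping faces, each a list of (lo, hi) interval fields.
from itertools import product

def split_interval(iv, other):
    """Break interval iv at endpoints of `other` lying strictly inside it."""
    lo, hi = iv
    points = sorted({lo, hi} | {p for p in other if lo < p < hi})
    return list(zip(points, points[1:]))

def decompose(face, other):
    """All sub-boxes of `face` obtained by splitting each field against `other`."""
    segs = [split_interval(iv, ov) for iv, ov in zip(face, other)]
    return [tuple(combo) for combo in product(*segs)]

def new_faces(f, g):
    """Pieces of f and g lying outside the duplicated region (the intersection)."""
    dup = tuple((max(a, c), min(b, d)) for (a, b), (c, d) in zip(f, g))
    pieces_f = [p for p in decompose(f, g) if p != dup]
    pieces_g = [p for p in decompose(g, f) if p != dup]
    return pieces_f + pieces_g

m = 2
end_to_end = new_faces([(0.0, 2.0)] * m, [(1.0, 3.0)] * m)
contained  = new_faces([(0.0, 3.0)] * m, [(1.0, 2.0)] * m)
print(len(end_to_end))  # 6 == 2 * (2**m - 1)
print(len(contained))   # 8 == 3**m - 1
```

For m = 2 this reproduces the counts above: end-to-end overlap in both fields gives 2(2^2 − 1) = 6 new faces; containment gives 3^2 − 1 = 8, since only the outer face contributes pieces.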

Although this seems the simplest way of generating the new faces, it is certainly possible to reduce their number, since some of them could be viably merged (i.e. their union is expressible as a product of intervals). Minimising the number of faces is desirable, but this refinement has a significant associated computational cost, so there is a conflict between minimising the effort immediately expended and minimising the number of faces (which may save effort later). Due to the significant computational effort already required by the overlap resolution, it was decided not to burden it further.

It is finally worth commenting that this phenomenon of overlap resolution also occurs, in similar form, in geometric modelling systems. For example, the modelling package SvLis [Bow99] deals with constructing objects from simple primitives, and can test for overlap of these objects in R^3.