
In this section, we discuss possibilities for future research on the topic of preprocessing.

The preprocessing for VNMP instances produces a lot of useful information, especially when using Path Enumeration. It might be possible to develop specialized VNMP problem formulations that make use of this information, for instance a model that works on the simplified block tree with additional variables that select a specific path within each crossed biconnected component.

One important caveat to the presented preprocessing method is that it works best on sparse graphs. A computational study is still necessary to determine the characteristics of this method on dense graphs, e.g., for applications outside of the telecommunication network domain. It is clear that the decomposition based on the block tree will no longer work, since we will most likely end up with a single biconnected component. Therefore, well-performing pruning and fixing algorithms will be key. We expect methods that take the delay into account to perform far better in terms of pruning or fixing performance than the alternatives, but this also warrants further investigation.

We saw that the order in which the virtual arc domains are calculated can influence the execution speed. As a refinement, it could be tested whether additional improvements can be achieved by looking at the target nodes of virtual arcs in addition to the source nodes. That means that once we know the virtual arcs that may start at a particular substrate node, we could group them according to their possible target nodes. For one particular pair of source and target nodes and the virtual arcs that may be implemented between them, it might be beneficial to impose an additional order based on the delay. Here, different strategies (high delay first, low delay first, ...) could be tested. As an additional benefit, after we have solved the substrate domain problems for one particular source node in the substrate, it is possible to erase all memoized data associated with this node, since we will not need it again. This could make preprocessing applicable to even larger instances.
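The processing order sketched above could look as follows. This is a minimal illustration, not the thesis implementation; the helper `solve_domain` and the arc attributes (`source`, `target`, `delay`) are hypothetical placeholders.

```python
from collections import defaultdict

def process_virtual_arcs(virtual_arcs, solve_domain):
    """Group virtual arcs by substrate source node, then by target node,
    order arcs within each (source, target) pair by delay bound, and
    discard memoized data once a source node is fully processed."""
    by_source = defaultdict(lambda: defaultdict(list))
    for arc in virtual_arcs:
        by_source[arc.source][arc.target].append(arc)

    results = {}
    for source, targets in by_source.items():
        memo = {}  # memoized data is only valid while processing this source
        for target, arcs in targets.items():
            # low-delay-first ordering; high-delay-first is the alternative
            for arc in sorted(arcs, key=lambda a: a.delay):
                results[arc] = solve_domain(source, target, arc.delay, memo)
        # all virtual arcs starting at this source node are done, so the
        # memoized data associated with it can be erased
        del memo
    return results
```

Freeing the per-source memoization as soon as possible is what could make the approach applicable to larger instances, at the cost of fixing the processing order.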

At the moment, the selection of pruning and fixing methods is static, with the exception of PathEnumeration, which falls back to other methods if the component graphs grow too large.

It might be beneficial to add more fine-grained control here and fall back in a staggered way: first try to apply PathEnumeration; if the problem is too complex, fall back to Testing or APSP; if this still uses too much time, just use None. However, for the tested instance sizes, there was never a big run-time difference between applying Testing/APSP and None, so this might only be beneficial for even larger VNMP instances. Another possibility would be to apply PathEnumeration only for a fraction of calls to a pruning method to derive better bounds. It might also be possible to initialize the stored domains within a component with interesting values, for instance with the domains for all pairs of nodes while using the shortest possible delay as delay bound.
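The staggered fallback could be implemented as a simple dispatch over component size. The thresholds and the dictionary-based interface are illustrative assumptions; only the method names (PathEnumeration, APSP, None) come from the text.

```python
def staggered_pruning(component_size, limits, methods):
    """Select a pruning method for a component, strongest first:
    PathEnumeration for small components, Testing/APSP for medium
    ones, and None (no pruning) when even that is too expensive.
    `limits` holds hypothetical size thresholds; `methods` maps
    method names to their implementations."""
    if component_size <= limits["path_enumeration"]:
        return methods["path_enumeration"]
    if component_size <= limits["apsp"]:
        return methods["apsp"]
    return methods["none"]
```

A time-based rather than size-based trigger would need to interrupt a running method, which is harder to retrofit; size thresholds approximate the same staggering cheaply.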

A refinement for the pruning methods could be to use the All Pair Shortest Path pruning while keeping track of the shortest paths themselves (instead of just their delays). If we accept a node, but the shortest paths from s to this node and from this node to t are not disjoint, then we use an exact method to determine whether this node really belongs to PN^d_{s,t}. Since the pruning performance of APSP is very close to the exact methods, it might be feasible to use an ILP-based approach even for the largest instance sizes without too much of a performance hit.
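The refined check could be sketched as follows. The helpers `shortest(u, v)` (returning the delay and node sequence of a shortest u-v path) and `exact_check(v)` (the expensive exact method, e.g. an ILP) are assumptions for illustration.

```python
def prune_with_paths(candidates, s, t, d, shortest, exact_check):
    """APSP pruning refined with path tracking: a candidate node v is
    accepted outright if its two shortest-path halves are internally
    node-disjoint (their concatenation is then a simple s-t path via v
    within the delay bound d); otherwise the exact method decides."""
    possible = set()
    for v in candidates:
        d1, p1 = shortest(s, v)
        d2, p2 = shortest(v, t)
        if d1 + d2 > d:
            continue  # plain APSP pruning already rejects v
        inner1 = set(p1[1:-1])  # internal nodes, excluding s and v
        inner2 = set(p2[1:-1])  # internal nodes, excluding v and t
        if inner1.isdisjoint(inner2):
            possible.add(v)
        elif exact_check(v):
            possible.add(v)  # exact method confirms v is in PN^d_{s,t}
    return possible
```

The expensive exact method is thus only invoked for the (presumably few) candidates where the cheap disjointness certificate fails, which matches the observation that APSP pruning is already close to exact.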

We have discarded two ILP formulation ideas for the node testing problem because they did not fit our requirements. However, it might be interesting to see if they offer advantages for larger, denser graphs and in situations where reconfigurability is not a huge concern. For solving the FIXFLOW model, it was beneficial to only have one constraint which has to be removed instead of having |V| constraints whose coefficients have to be adapted. This modification might also be possible for the TWOFLOW and FLOWINFLOW models and could speed up preprocessing.

A further idea to improve preprocessing efficiency while using ILP pruning or fixing would be to collect all required domains within components and then solve them in one go for the same s, t, and k values, but with increasing d, because the possible nodes for low d values are still possible for higher d and do not need to be tested again. The same applies to fixed nodes: for increasing d, only nodes that have been fixed for lower d need to be tested again. Depending on the number of different d values that have to be tested, it might even be beneficial to solve another problem: what is the lowest d such that a simple path from s to t over k exists, or what is the highest d such that no simple path from s to t not containing k exists, respectively. With this, we would have to solve two optimization problems for each k, instead of satisfaction problems for each k and each delay bound.

This idea directly leads to a more space-efficient method of storing domains for memoization. Currently, we store complete domains within components for one source-target node pair, sorted by delay. Another possibility would be to store for each node (and source and target node) the minimum allowed delay required to make it possible and the maximum allowed delay to keep it fixed. Depending on the number of different values for the delay bound that occur, this could lead to significant memory savings, but of course it causes the domain extraction time to be linear in the number of nodes, since we need to assemble the domain based on delay values.
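The threshold-based storage can be sketched as follows; class and field names are illustrative, not taken from the implementation.

```python
class CompactDomainStore:
    """Space-efficient domain memoization: instead of one complete
    domain per delay bound, store per node the minimum delay bound
    that makes it possible and the maximum delay bound that keeps it
    fixed. This works because possibility is monotone in d (possible
    stays possible as d grows) and fixedness is anti-monotone."""

    def __init__(self, min_possible, max_fixed):
        self.min_possible = min_possible  # v -> smallest d with v possible
        self.max_fixed = max_fixed        # v -> largest d with v still fixed

    def domain(self, d):
        """Assemble the domain for delay bound d; linear in the
        number of nodes, as noted in the text."""
        possible = {v for v, lo in self.min_possible.items() if lo <= d}
        fixed = {v for v, hi in self.max_fixed.items() if d <= hi}
        return possible, fixed
```

Two thresholds per node replace one full domain per distinct delay bound, trading extraction time for memory.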

We have shown that the combination of substrate domains to build the domains of virtual arcs doubles the possible nodes and arcs and halves the fixed ones. However, we usually combine a lot of different substrate domains (2500 for size 1000 in the expected case) to achieve this doubling. That means that each additional substrate domain only adds very little additional information. It might be possible to rank substrate domain calculations according to their potential of adding new information. For instance, if a substrate domain calculation is done for a connection that starts and ends at nodes for which no such calculation has been executed until now, then it has a high potential for adding new possible nodes/arcs and removing fixed nodes and arcs. It might be possible to calculate substrate domains exactly for the ones with the highest potential and then use the knowledge already gathered about the domain of the virtual arc to speed up the calculation of the complete domain. If we use the information about the already gathered domain only in this limited way, it might be possible to avoid the run-time penalty that we have observed.

A surprising number of substrate nodes or arcs can be fixed, i.e., they have to be used. This information might be useful for defining more intelligent neighborhoods; for instance, the clearing neighborhood as discussed in Chapter 6 can skip evaluating nodes which have to be used anyway. Such nodes and arcs are also interesting in their own right. If they are removed from the substrate, the flexibility afforded by being able to choose mapping locations for virtual nodes is not enough to ensure a feasible solution to the VNMP instance, so they are very critical. Of course, when capacity constraints are considered, other nodes might also be critical in this sense.

CHAPTER 10

Constraint Programming

10.1 Introduction

In this chapter, we investigate Constraint Programming (CP) approaches for solving the VNMP.

For a treatment of the basic working principles of CP, see Section 2.2.11. Section 10.2 introduces CP formulations for the VNMP. In Section 10.3, we apply the lessons learned while designing Construction Heuristics (Chapter 6) to devise heuristic branching rules that guide CP towards feasible solutions. Methods for strengthening propagation are discussed in Section 10.4.

The analysis of the different improvements for the CP approach can be found in Section 10.5.

Section 10.6 summarizes the results and Section 10.7 shows possible directions for future work.