
We generated four types of networks, with 500 networks of each type. The first type is the standard unit disk graph. In the second type, we place two large obstacles within the area where the nodes are distributed. The obstacles occupy the diagonal of the Euclidean space and essentially divide it into two halves, leaving just a narrow bridge between them. We then place the nodes uniformly at random within the free area and connect by an edge all nodes lying within a distance of 12 units from each other. The size of the Euclidean space is fixed to 100 × 100 units and all networks contain 250 nodes.

The third type of test network is similar to the second. The only difference is that this time we place three equally large obstacles instead of two, resulting in two narrow bridges between the areas accessible to nodes. Analogously, we place four obstacles in the fourth model, resulting in three bridges.
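The generation procedure for all four models can be sketched as follows. This is a minimal sketch, not the thesis implementation: in particular, the obstacles are modeled here as axis-aligned rectangles passed in by the caller, which is an assumption — the text only states that they occupy the diagonal of the area.

```python
import random

def unit_disk_graph(n=250, size=100.0, radius=12.0, obstacles=()):
    """Place n nodes uniformly at random in a size x size area,
    rejecting positions that fall inside any obstacle, and connect
    every pair of nodes lying within `radius` of each other.
    Obstacles are given as (x, y, width, height) rectangles — an
    illustrative simplification of the obstacle shapes."""
    nodes = []
    while len(nodes) < n:
        x, y = random.uniform(0, size), random.uniform(0, size)
        if any(ox <= x <= ox + w and oy <= y <= oy + h
               for (ox, oy, w, h) in obstacles):
            continue  # position falls inside an obstacle; retry
        nodes.append((x, y))
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if (nodes[i][0] - nodes[j][0]) ** 2
              + (nodes[i][1] - nodes[j][1]) ** 2 <= radius ** 2]
    return nodes, edges
```

A generated instance is not guaranteed to be connected, which is exactly why the experiments below regenerate networks until 500 connected instances per model are obtained.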

Note that the existence of a bridge between the obstacles does not necessarily mean that there are nodes occupying it, nor that it is used to connect different parts of the network. Thus, there is also no guarantee that a network generated with models 1 to 4 will even be connected. Therefore, we generated as many networks of each type as necessary to acquire 500 connected instances of each type.

    Network Type    Push-Sum    BridgeFinder
    3 Bridges          469          214
    2 Bridges          597          282
    1 Bridge           823          367
    Unit Disk           94           72

Table 5.1: Convergence comparison between Push-Sum and BridgeFinder on different network models, measured in average number of exchange steps per node.

We ran the Push-Sum algorithm on the 500 instances of each network type. The results, averaged over the 500 runs per type, are displayed in Table 5.1. For a definition of convergence speed, please refer to Section 5.1. The result for a single network is measured as the required average number of exchange operations per node. We observe a large discrepancy in the required exchange operations between the ideal case, the unit disk graph, and the most demanding model with just a single bridge.
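The exchange steps being counted can be sketched as follows. This is a minimal, centralized simulation of the Push-Sum averaging protocol of Kempe et al., not the thesis implementation; in particular, the convergence test against the true average is only available to the simulator, whereas real nodes must rely on a local criterion (see Section 5.1).

```python
import random

def push_sum(adj, values, tol=1e-6, max_rounds=10_000):
    """Synchronous simulation of Push-Sum averaging.
    adj: list of neighbor lists; values: initial node values.
    Each round, every node keeps half of its (sum, weight) pair
    and pushes the other half to one uniformly random neighbor.
    Every node's estimate s/w converges to the global average."""
    n = len(adj)
    s = list(values)     # running sums
    w = [1.0] * n        # running weights
    avg = sum(values) / n
    for rounds in range(1, max_rounds + 1):
        inbox_s = [0.0] * n
        inbox_w = [0.0] * n
        for u in range(n):
            half_s, half_w = s[u] / 2, w[u] / 2
            inbox_s[u] += half_s; inbox_w[u] += half_w   # keep half
            v = random.choice(adj[u])                    # push half
            inbox_s[v] += half_s; inbox_w[v] += half_w
        s, w = inbox_s, inbox_w
        # Simulator-only stopping rule: all estimates near the true mean.
        if all(abs(s[u] / w[u] - avg) < tol for u in range(n)):
            return rounds
    return max_rounds
```

Information must cross every bridge by repeated random pushes, which is the intuition behind the growing round counts from the unit disk model down to the one-bridge model in Table 5.1.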

Convergence Speed Observation In our tests, we observed that the convergence speed of the algorithm depends strongly on one condition: which node starts the algorithm. The faster a node can distribute information within the network, the larger the benefit of starting the algorithm from that node. If we know the nodes with the best topological positions, we can improve the running time of the algorithm by starting it from one of them.

An intuitive solution arises. After running the algorithm once, the fastest converging node from the last run is responsible for starting the next run. If it is not able to do that, the second fastest converging node becomes responsible for starting the algorithm, and so on.

This mechanism is integrated in BridgeFinder. Recall from Section 5.4 that the list of the fastest converging nodes is constantly exchanged among nodes, and that at the end of the algorithm, each node possesses the list of the 10 fastest converging nodes. Thus, all nodes within the network are aware which node should start the next run.
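Assuming the shared top-10 list is ordered from fastest to slowest, the selection rule amounts to a simple fallback chain. This is a sketch under that assumption; the thesis does not specify here how a node's inability to start the next run is detected.

```python
def next_initiator(top_nodes, available):
    """Pick the node that starts the next BridgeFinder run.
    top_nodes: the fastest-converging nodes of the last run, best
    first (each node holds this list at the end of a run).
    available: the set of nodes currently able to start a run —
    how availability is determined is left open here."""
    for node in top_nodes:
        if node in available:
            return node
    return None  # no listed node can start; fall back to a fresh start
```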

Adaptation of Protocol Initiation We also need to solve the problem of starting the algorithm for the first time, as well as to prevent different nodes from starting and running BridgeFinder simultaneously. To achieve that, each node that decides to start the algorithm and has not already participated in it picks a random number and sends it over the network together with its values. If a node receives values carrying a different random number, two simultaneous instances of BridgeFinder are running. The node ignores the gossip messages with the smaller random number and processes only those tagged with the larger one.
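The tie-breaking rule on message receipt can be sketched as follows. The field names (`run_id`, `s`, `w`) and the dictionary-based message format are illustrative assumptions, not taken from the implementation.

```python
def handle_message(state, msg):
    """Resolve concurrent BridgeFinder instances at a receiving node.
    state["run_id"] tags the instance this node currently participates
    in (None if it has not participated yet); msg["run_id"] is the
    random number carried by the incoming gossip message."""
    if state["run_id"] is None or msg["run_id"] > state["run_id"]:
        # A higher-tagged instance wins: adopt it and discard our
        # previously computed values.
        state["run_id"] = msg["run_id"]
        state["s"], state["w"] = msg["s"], msg["w"]
        return True        # message processed
    if msg["run_id"] < state["run_id"]:
        return False       # stale instance: ignore the message
    # Same instance: merge the received share into our values.
    state["s"] += msg["s"]
    state["w"] += msg["w"]
    return True
```

Discarding the lower-tagged values is the "slight overhead" mentioned below: work done under a losing instance is thrown away during the first run.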

[Figure: IPv6 packet layout. Without result update, the Hop-by-Hop Options header (NH=58, Len=8, OT=$3e, ODL=8) carries the Fish and Water values plus padding; with result update (Len=24, ODL=28), it additionally carries the Peer, Iterations, and Peer Address fields. The payload is, e.g., an ICMPv6 Echo Request (type=128, code=0, checksum, identifier, sequence number, data).]

Figure 5.4: Implementation of BridgeFinder with IPv6 optional headers

Thus, BridgeFinder instances started with larger random numbers take priority over those started with smaller numbers. This produces a slight overhead during the first run of the algorithm, as initially computed values are being thrown away, but still gives each node the possibility of starting the algorithm. This resolves the problem of having to select a starting node within a distributed environment. Any node can start the algorithm.

The benefit of starting BridgeFinder from the best-converging nodes of previous runs can be seen in Table 5.1. We ran the algorithm over all 500 instances of each network model and averaged the results. The values for BridgeFinder are always measured at the end of the second run, since the first run is used to determine the top-converging nodes within the network. With that approach, BridgeFinder performs better than the original Push-Sum algorithm in all four network models. The acquired speed-up ranges from 20% to 220%.

Ultimately, even in a hostile environment such as the one-bridge network model, our algorithm requires only a few more exchange operations per node than the overall number of nodes in the network.