
5.2 Consistency models without synchronization

5.2.5 PRAM or FIFO Consistency

forwarding costs in Table 5.5. In order to evaluate how this data traffic influences the consistency of the data storage, 10 test runs of 5 minutes each were performed for the test network and the sequences of replica updates on each node were observed. Since the update requests are propagated by a single node, the update forwarding performs well. For the test setup and settings, about 5 percent of the update requests were not delivered to individual nodes. However, it could be observed that the update acknowledgment messages collided with each other, causing the returned numbers of updated replicas to be wrong and making an actually successful update appear failed. It was also noticeable that the delivery of some of the write requests failed on the way to the sequencer. Figure 5.18 presents an extract of a test run with node11 as sequencer and two nodes, node12 and node15, each performing a sequence of 10 write operations. Both node12 and node15 write the same shared variable, approximately once every 7 seconds. This period was chosen due to the measured duration of the complete write operation, which requires approximately 5 seconds. In this extract all the update operations were performed successfully, but it is noticeable that some of the write requests issued by the nodes were not delivered to the sequencer. The w10 operation by node12 and the w1 and w2 operations by node15 are missing.

For higher write frequencies, in order to protect the update requests belonging to different write operations from colliding, it is possible to force the requester to process one write operation at a time, disabling the parallel request processing. This could be realized by a new policy parameter, serialized processing (see Table F.7). Enabling this policy parameter would create a queue of requests in the request buffer. The requests from this queue are processed one after another, and the processing of the next request starts only after the processing of the current one is completed. This setting would require large values for the external request timeout policy parameter, since otherwise the timeout period for the requests stored in the queue elapses and they are regarded as failed before they can be handled.
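As an illustration only, the following C sketch shows one possible realization of such a serialized processing policy as a simple request queue; the structure layout, the buffer size, the timeout value and the function names (enqueue_write, process_next_request, start_write_operation, report_write_failed) are assumptions for this sketch and do not reproduce the actual tinyDSM code.

#include <stdbool.h>
#include <stdint.h>

#define REQ_BUFFER_SIZE          8       /* illustrative buffer capacity        */
#define EXTERNAL_REQUEST_TIMEOUT 30000u  /* ms; must cover the queueing delay   */

/* Illustrative request descriptor; the actual tinyDSM structures differ. */
typedef struct {
    uint16_t var_id;        /* identifier of the shared variable            */
    uint32_t value;         /* value to be written                          */
    uint32_t enqueued_at;   /* local time stamp when the request was queued */
} write_request_t;

/* Provided elsewhere by the middleware; only prototypes are assumed here. */
void report_write_failed(uint16_t var_id);
void start_write_operation(uint16_t var_id, uint32_t value);

static write_request_t req_buffer[REQ_BUFFER_SIZE];
static uint8_t req_head, req_count;
static bool    write_in_progress;        /* true while a write operation runs */

/* Application side: new write requests are only queued here. */
bool enqueue_write(uint16_t var_id, uint32_t value, uint32_t now)
{
    if (req_count == REQ_BUFFER_SIZE)
        return false;                                    /* buffer full */
    uint8_t tail = (uint8_t)((req_head + req_count) % REQ_BUFFER_SIZE);
    req_buffer[tail] = (write_request_t){ var_id, value, now };
    req_count++;
    return true;
}

/* Called periodically and whenever the current write operation completes:
 * strictly one write operation is processed at a time.                   */
void process_next_request(uint32_t now)
{
    if (write_in_progress || req_count == 0)
        return;
    write_request_t req = req_buffer[req_head];
    req_head = (uint8_t)((req_head + 1) % REQ_BUFFER_SIZE);
    req_count--;
    if (now - req.enqueued_at > EXTERNAL_REQUEST_TIMEOUT) {
        report_write_failed(req.var_id);   /* waited too long in the queue */
        return;
    }
    write_in_progress = true;              /* cleared when the write completes */
    start_write_operation(req.var_id, req.value);
}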

Figure 5.19: An example flow for PRAM/FIFO consistency model

replica is updated and the update request is broadcast to the other nodes, which update their replicas. Additionally, local writes are immediately visible to the local node. As shown in Figure 5.19, in this implementation each node is the sequencer of its own write requests, but the global view does not have to be consistent in a strict or sequential sense.
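A minimal C sketch of this per-node behaviour, covering the local write, the local read and the handling of incoming updates, is given below; the message layout, the constants and the function names (pram_write, pram_read, on_update_received, broadcast_update) are illustrative assumptions and not the actual tinyDSM interfaces.

#include <stdint.h>

#define NUM_SHARED_VARS 3      /* A, B, C as in Listing 5.5                 */
#define MAX_NODES       32     /* upper bound on node identifiers (example) */
#define MY_NODE_ID      1      /* identifier of the local node (example)    */

/* Illustrative update message; the actual tinyDSM packet format differs. */
typedef struct {
    uint16_t origin;   /* node that issued the write (its own sequencer)    */
    uint16_t seq_no;   /* per-origin sequence number, defines FIFO order    */
    uint16_t var_id;
    uint32_t value;
} update_msg_t;

/* Assumed lower-layer primitive: floods the update to all other nodes. */
void broadcast_update(const update_msg_t *msg);

static uint32_t local_replica[NUM_SHARED_VARS];
static uint16_t next_expected[MAX_NODES];   /* per-origin FIFO counters */
static uint16_t own_seq_no;

/* Local write: the local replica is updated immediately (so the write is
 * at once visible to the local node) and the update is broadcast.        */
void pram_write(uint16_t var_id, uint32_t value)
{
    local_replica[var_id] = value;
    update_msg_t msg = { MY_NODE_ID, own_seq_no++, var_id, value };
    broadcast_update(&msg);
}

/* Local read: always served from the local replica, without communication. */
uint32_t pram_read(uint16_t var_id)
{
    return local_replica[var_id];
}

/* Remote update: writes of one origin are applied in issue order, but writes
 * of different origins may interleave differently on each node, which is
 * exactly what PRAM/FIFO consistency permits.                              */
void on_update_received(const update_msg_t *msg)
{
    if (msg->origin == MY_NODE_ID)
        return;                            /* own flooded update, already applied */
    if (msg->seq_no != next_expected[msg->origin])
        return;                            /* out of FIFO order or duplicate */
    next_expected[msg->origin]++;
    local_replica[msg->var_id] = msg->value;
}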

After the execution of these example operations in Figure 5.19, the replica of the shared data item on each node has a different value. This is allowed by the PRAM model, since the writes issued by a single node have to be observed everywhere in issue order, but writes issued by different nodes may be applied in a different order on each node.

The definition of the variables supporting the PRAM consistency model is given in Listing 5.5. The implementation based on this definition is evaluated in the remainder of this section.

#define pram fifo_processing \
        replication_range: 0 \
        replication_density: 100 \
        replication_copies: 16 \
        writable_replicas

distributed global pram uint32_t A;
distributed global pram uint32_t B;
distributed global pram uint32_t C;

Listing 5.5: The definition of variables supporting the PRAM consistency model

Table 5.8 gives the cost and delay figures for the tinyDSM PRAM consistency implementation. Again, the values for the write request without acknowledgments are given only for comparison. The PRAM consistency model requires that all the replicas are updated, so that all the nodes observe the same order of write operations issued by a single node.

If a replica is not updated, then the model is violated. Since the concept of the PRAM model is to speed up the read operation by always reading from the local replica, it is not advisable to provide a cooperative read operation in an implementation of this model.

In the presented tinyDSM implementation it is actually not possible, because no owner information is stored on the nodes that would allow restoring the sequence of write requests issued by a given node.

Table 5.8: Costs and delays of the tinyDSM PRAM consistency implementation

Operation                        Forwarding Costs   Delay
read                             0                  ≈0
write without acknowledgments    1x flood           1x request forwarding
write with acknowledgments       2x flood           1x request forwarding + 1x update convergecast

The write operations are simply standard write requests issued by the local nodes and followed by the update request propagation, as already depicted in Figure 5.8. However, what is interesting is the presence of multiple sources of update requests. Figure 5.20 presents an example sequence of updates performed on the nodes in the test network as a result of a sequence of write requests issued from both node1 and node16. Again, these two nodes perform a write operation approximately every 5 and 7 seconds, respectively.

As can be read from the chart in Figure 5.20, the number of collisions is quite large. This is caused by the fact that the propagation of the update requests and their corresponding acknowledgments generates quite large and, perhaps more importantly, non-systematic and uncoordinated traffic. It also happens that complete update propagations fail, i.e., the write request is issued and the local replica is updated, but the message starting the update propagation collides with another one and none of the other replicas is updated.

In the test run presented in Figure 5.20 it can be noticed that node9 is not providing any state data.

Similar to the implementation of sequential consistency, the feedback received from the nodes regarding the number of performed updates for a given write request can be used by the application logic to trigger a repetition of the write operation.

It was also observed that the number of write requests reported as successfully performed was quite low (below 30 percent). Thus, even if the replicas are updated, the application logic can regard the request as failed and issue the write request again. As was the case for the implementation of the sequential consistency model, the use of the repeatable policy parameter allows reissuing a failed request, so that it is observed by the other nodes as a repetition of the previous request.
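Assuming a write primitive that returns the number of replicas reported as updated, such application-level retry logic could look roughly like the following sketch; the function write_with_ack, the retry limit and the way the repeatable flag is passed are hypothetical and only illustrate the idea.

#include <stdbool.h>
#include <stdint.h>

#define EXPECTED_REPLICAS 16   /* replication copies as in Listing 5.5 */
#define MAX_RETRIES       3    /* illustrative retry limit             */

/* Hypothetical middleware call: issues the write (marked as repeatable on
 * reissue, so that other nodes recognise it as a repetition of the same
 * request) and returns the number of replicas that acknowledged the update. */
uint8_t write_with_ack(uint16_t var_id, uint32_t value, bool repeat);

/* Application-level wrapper: repeats the write until enough replicas
 * acknowledge, tolerating lost or collided acknowledgment messages.   */
bool reliable_write(uint16_t var_id, uint32_t value)
{
    for (uint8_t attempt = 0; attempt < MAX_RETRIES; attempt++) {
        uint8_t acked = write_with_ack(var_id, value, attempt > 0);
        /* Collided acknowledgments can make 'acked' too low even though the
         * replicas were actually updated, so a retry may simply be observed
         * as a redundant repetition of the previous request.               */
        if (acked == EXPECTED_REPLICAS)
            return true;
    }
    return false;   /* regarded as failed by the application logic */
}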

In order to investigate whether avoiding the sending of acknowledgments increases the quality of the storage, a series of 15 experiment runs, 5 minutes each, with write requests without acknowledgments was performed. These tests showed that the update rate increased (to about 85 percent), but, as already mentioned, such a setting is not allowed by the requirements of the consistency model. Figure 5.21 shows an example extract of the sequence of replica updates. It can be noticed that the number of collisions dropped.

This extract also shows that the CC2520 radio on some of the nodes may get stuck, making the node unable to receive the incoming requests. This test exposed the immaturity of the software driver for this radio transceiver.

In the case of systems with a large number of update issuers it might be necessary to involve more sophisticated coordination mechanisms on the protocol layer to reduce the collision rate.
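One simple example of such a mechanism, sketched here only as a possible approach and not as part of the tinyDSM protocol, is to delay each acknowledgment or update transmission by a small random jitter, so that the replies of several nodes triggered by the same broadcast are less likely to collide. The helper names and the jitter bound below are assumptions.

#include <stdint.h>
#include <stdlib.h>

#define MAX_JITTER_MS 50   /* illustrative upper bound for the random delay */

/* Assumed lower-layer primitive: transmits the packet after delay_ms. */
void schedule_transmission(const void *packet, uint16_t len, uint16_t delay_ms);

/* Instead of replying immediately (which makes all receivers of a broadcast
 * answer at nearly the same instant), each node waits a random time drawn
 * from [0, MAX_JITTER_MS) before sending its acknowledgment. rand() is used
 * for brevity; an embedded implementation would use its own PRNG.          */
void send_with_jitter(const void *packet, uint16_t len)
{
    uint16_t delay = (uint16_t)(rand() % MAX_JITTER_MS);
    schedule_transmission(packet, len, delay);
}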

Figure 5.20: The sequence of replica updates for consecutive write requests issued by node1 and node16 for the implementation of the PRAM consistency model

Figure 5.21: The sequence of replica updates for consecutive write requests issued by node13 and node16 for the implementation of the PRAM consistency model, without write acknowledgments