An important aspect of the DSM system is that its services shall be transparent to the applications that use it. An application issues read and write requests, and the way these requests are identified in the application source code depends on the level of access transparency provided by the DSM system. The application engineer can either apply the accesses to the shared data in a specific way or use automatic tools that support the application development in the pre-compilation phase. This section analyses the pre-compilation steps that need to be applied, depending on the support provided by the operating system.

Figure 3.8: The algorithm of filtering the instances to reduce the frequency of storing and update requests

Any read or write operation on a shared data item that is not exclusively available locally implies waiting for the result. Since the actual realization of the data access can differ between hardware and software platforms, this issue becomes important for heterogeneity support.

Consider the following operation in the application code:

a = b    (3.7)

where both a and b are shared data items. This operation can be expressed as the following function calls, under the naive assumption that an operation is always successful and returns immediately with the result.

write(a, read(b))    (3.8)

Figure 3.9: An idealized flow of a distributed operation

A simplified and idealized flow of the operation is shown in Figure 3.9. Depending on the capabilities of the operating system, there are three main possibilities for the DSM system to handle it:

• block – the thread is blocked by a wait-loop until the result can be delivered.

• freeze – the current context is frozen until the result is available; if there are other tasks in the scheduler list, CPU control is given to them, otherwise the CPU is idle or can be put into sleep mode.

• split-phase – the operation is broken into request and result-handler routines, i.e., the flow is not kept inside one function or instruction block.

The TinyOS operating system is generally based on the split-phase manner, but providing a blocking functionality as an artificial construct is possible in this WSN operating system. The same applies to the IHPOS system. In Contiki and Reflex it is possible to freeze the current context and yield control to one of the other scheduled threads.

Since network communication is always a split-phase process, freezing and blocking can only represent more or less efficient ways to adapt the data accesses to a non-split-phase manner.

Both freezing and blocking best fit the non-distributed way of thinking while programming, i.e., accessing a variable results either in its value or a change of its value, right now and right here, without significant delays. In the case of distributed operations, delays have to be considered, but both freezing and blocking cause the control flow of the current thread to stay at the point where the request was issued and to proceed from that point as soon as the result is available. The difference between these two handling solutions is the way the control is kept at the request issuing point: the continue condition "the result is available" is checked either in the wait-loop inside the DSM system or in the scheduler of the operating system, because the thread related to the access operation yielded control. Blocking is easy to implement, but usually wastes energy. Freezing, on the other hand, is more energy efficient, but requires support from the operating system, making the operating system more complex.

Figure 3.10: The blocking flow of a distributed operation

Figure 3.11: The split-phase flow of a distributed operation

For both freezing and blocking, the abstraction of the access to a variable is as simple as a function call. This does not require significant changes to the program flow, i.e., the read and write operations are transparently replaced by appropriate function calls.

This simplifies the task of the supporting tools, since the flow in that case (see Figure 3.10) does not differ much from the idealized one presented in Figure 3.9.

In contrast, split-phase is the more natural way to handle a distributed operation: the request is issued, control is given away, and the incoming result triggers a handler routine that takes care of the correct interpretation of the result. Such a request-response manner is also similar to the way hardware works, making it suitable for embedded systems, where any kind of no-operation loop wastes energy.

Keeping the operation abstraction as simple as a function call is no longer that easy. The split-phase flow is presented in Figure 3.11. The disadvantage here is that the DSM system needs to store the context information itself to be able to link the result with the request and to continue the application code from the right point.

The different ways of handling a distributed operation determine the changes to the original source code needed to preserve the original flow of the application. The following paragraphs discuss the required modifications of the original code and show pseudo code snippets for the handling options mentioned above. Taking an example operation that reads a value from one shared data item and stores it in a second one, and extending it with operations that precede and follow it, results in the following pseudo code snippet, which represents the original operation with its context and is used for the further analysis.

(...)

preceding operations;

a = b;

following operations;

(...)

The handling of a distributed operation that involves blocking or freezing in the access functions results in the following code snippet.

(...)

preceding operations;

write(a, read(b));

following operations;

(...)

Importantly, this code modification is independent of its location in the original application source code. Thus, the DSM layer can be realized truly transparently to the application and does not require dramatic changes in the program flow.

For the split-phase approach the original code results in the following representation.

start(){

(...)

preceding operations;

read(b);

}

//handler for the read() function
readHandler(valB){

write(a, valB);

}

//handler for the write() function
writeHandler(){

following operations;

(...)

}

This shows that the mapping of operations is no longer straightforward, and the pre-compilation task may become tricky, especially for applications with many accesses to the shared data items. Additionally, if multiple threads are allowed, binding handler calls to requests requires storing the context from which the request was issued in order to continue with the right operations.

There are two solutions to the problem of flow control in the case of split-phase operation handling: either decompose the application into a state machine in the pre-compilation phase, or require the application engineer to think in the split-phase way and provide a split-phase interface between the application and the DSM system. The following section describes the latter option.