

6.3 Modular Implementation of the Communication Hardcore

6.3.1 Description of the Implementation

The implementation was conducted under RTLinux versions 3.1 and 3.2pre. It is based on Orinoco 802.11b standard wireless cards, for which an RTLinux driver had to be written.

The driver was implemented as a separate kernel module. The protocol stack itself was implemented as a kernel module too. Most of the source code was written in C++; only some particularly time-critical parts were programmed in C.

6.3.1.1 Object Structure of the Implementation

Each process of the formal specification has been implemented as a separate class. All these classes are derived from a common base class SDLProcess (cf. Figure 6-9). This base class has two virtual methods, Init and DispatchSignal, which each derived process class must implement. Furthermore, it has two attributes, State and PId, the former being the explicit state of the process’s state machine and the latter its process ID. Each derived process class adds its own attributes corresponding to the data in the SDL process it implements. The Init method mainly implements the initial transition of the process’s state machine and is called at startup of the protocol stack. The DispatchSignal method is the core of the class: it implements the transitions of the state machine. Whenever a process consumes a signal in the formal model, the DispatchSignal method of the corresponding class is called in the implementation, with the signal provided as a parameter. Within this method, the transition to be performed is selected based on the actual parameters of the method and the State attribute of the object.

[Figure: UML class diagram. The base class SDLProcess has the attributes PId : PIdSort and State : StateSort and the methods Init() and DispatchSignal(). The derived class Polling_AP adds the attribute pl : PollingList and implements Init() and DispatchSignal().]

Figure 6-9. The implementation base class and an example of a derived class

Figure 6-9 shows the base class and the class Polling_AP as an example of a derived class.

The class Polling_AP overrides both abstract methods and adds the polling list, which is a data item in the formal specification, as an attribute.
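The class structure described above can be sketched as follows. This is a minimal illustration, not the actual source: the type aliases, the Signal structure, and the transition logic in Polling_AP are assumptions, since the text only names the attributes and methods.

```cpp
// Hypothetical stand-ins for the SDL sorts; the real definitions
// are not given in the text.
typedef int PIdSort;
typedef int StateSort;

struct Signal {              // assumed minimal signal representation
    int id;                  // signal type
    PIdSort receiver;        // destination process ID
};

// Base class of all process classes (cf. Figure 6-9).
class SDLProcess {
public:
    PIdSort   PId;           // process ID of this state machine
    StateSort State;         // explicit state of the state machine
    virtual void Init() = 0;                        // initial transition
    virtual void DispatchSignal(const Signal&) = 0; // one transition per call
    virtual ~SDLProcess() {}
};

// Example derived class: the polling protocol at the access point.
// PollingList and the transitions shown are placeholders.
typedef int PollingList;

class Polling_AP : public SDLProcess {
public:
    PollingList pl;          // polling list, taken over from the SDL data
    void Init() override { State = 0; pl = 0; }
    void DispatchSignal(const Signal& s) override {
        // The transition is selected from the signal and the current state.
        switch (State) {
        case 0: if (s.id == 1) State = 1; break;
        case 1: if (s.id == 2) State = 0; break;
        }
    }
};
```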

The classes communicate via a global signal queue. When a class outputs a signal, it adds the signal to the global queue. The global signal queue is an active object: it reads signals from the queue, determines the receiving process, and calls the dispatch method of the corresponding object. Using a centrally maintained signal queue rather than a separate queue for each process is more efficient, because the central signal scheduler calls the dispatch method only if there actually is a signal to process for the object. Figure 6-10 illustrates the signal exchange between the global queue and the process objects. It shows an object of the class QScheduler that is connected to several process objects. In the figure, the object Polling_dyn_AP outputs a signal to the global queue. The scheduler dispatches this signal to the object DynMedAcc_AP by calling the method DispatchSignal of DynMedAcc_AP. During the execution of the transition, the object outputs another signal by adding it to the global signal queue.

[Figure: UML collaboration diagram. The object :Polling_dyn_AP sends 1:Output to :QScheduler, which calls 2:DispatchSignal on :DynMedAcc_AP; during the transition, :DynMedAcc_AP sends 3:Output back to :QScheduler. Further connected process objects: :RM_AP2CL_dyn_AP and :RM_CL2AP_dyn_AP.]

Figure 6-10. Signal exchange between the process objects and the central signal queue
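The dispatch cycle of the central signal scheduler can be sketched as follows. The class and method names QScheduler, Output, and DispatchSignal appear in the text; the container choices and the Register/Run interface are assumptions.

```cpp
#include <queue>
#include <map>

struct Signal { int id; int receiver; };   // assumed minimal signal

class SDLProcess {
public:
    virtual void DispatchSignal(const Signal&) = 0;
    virtual ~SDLProcess() {}
};

// Minimal sketch of the central signal scheduler (QScheduler in
// Figure 6-10): it owns the global queue, maps process IDs to
// objects, and calls DispatchSignal only when a signal is pending.
class QScheduler {
    std::queue<Signal> q;                  // the global signal queue
    std::map<int, SDLProcess*> processes;  // PId -> process object
public:
    void Register(int pid, SDLProcess* p) { processes[pid] = p; }
    // Called by a process to output a signal ("Output" in the figure).
    void Output(const Signal& s) { q.push(s); }
    // Drain the queue: dispatch each signal to its receiving object.
    // A transition may output further signals, which are appended to
    // the same queue and processed in order.
    void Run() {
        while (!q.empty()) {
            Signal s = q.front(); q.pop();
            processes[s.receiver]->DispatchSignal(s);
        }
    }
};
```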

6.3.1.2 Achieving Efficiency

As explained above, it is crucial to keep the time overhead of the communication between the processes small. To accomplish this, two problems have to be tackled:

1. Avoid copying of the application messages, which usually constitute the largest part of the PDUs the protocols send. The protocols themselves typically add some header information only.

2. Avoid scheduling delays between executing the sender and the receiver of a signal.

To address the first point, a shared memory module was developed. When the application wants to transmit a message, it requests a buffer from the shared memory module. All buffers have a fixed size, which is sufficient for a maximum application message plus all headers possibly added by the micro protocols. The application copies its message to the end of the buffer and passes a handle to that buffer to the protocol stack. Within the protocol stack, the content of the buffer is never copied. Rather, each micro protocol adds its header in front of the headers added by the higher layers. This means that only the pointer to the start of the frame is moved. Thus, the frame is successively extended from the end of the buffer towards its beginning. When the buffer has been passed through the whole stack, all headers have been added and the complete frame has been constructed within the buffer. The protocol stack then passes the handle to the driver, which copies the frame into the network interface card. In this approach, the user data are never copied; only the handle is passed from protocol to protocol. Note that this also holds for the communication between the application and the protocol stack as well as between the protocol stack and the driver.

Communication in the other direction works very much the same way. In this case the driver requests the buffer. It reads the frame from the network interface card into the buffer and passes the handle to the protocol stack. In the protocol stack, the micro protocols successively remove their headers; that is, they move the pointer indicating the start of the frame towards the end of the buffer. Thus, the frame is successively shrunk until only the application message remains in the buffer. At the end, the protocol stack passes the handle to the application, which can then read the message.
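Both directions of this zero-copy scheme can be sketched with a single buffer class. The class name, the fixed buffer size, and the method names below are assumptions; the text only states that the message sits at the end of the buffer and that headers grow the frame towards the beginning (send) or shrink it towards the end (receive) by moving the frame-start pointer.

```cpp
#include <cstring>
#include <cstddef>

// Sketch of the fixed-size zero-copy buffer: the application message
// is placed at the end, and each micro protocol prepends its header
// by moving the frame-start offset towards the beginning.
class FrameBuffer {
    enum { SIZE = 2048 };            // fixed size; the value is assumed
    unsigned char buf[SIZE];
    std::size_t start;               // offset of the first frame byte
public:
    FrameBuffer() : start(SIZE) {}
    // Application side: place the message at the end of the buffer.
    void SetMessage(const void* msg, std::size_t len) {
        start = SIZE - len;
        std::memcpy(buf + start, msg, len);
    }
    // Send path: a micro protocol prepends its header (frame grows).
    void PushHeader(const void* hdr, std::size_t len) {
        start -= len;
        std::memcpy(buf + start, hdr, len);
    }
    // Receive path: a micro protocol strips its header (frame shrinks).
    void PopHeader(void* hdr, std::size_t len) {
        std::memcpy(hdr, buf + start, len);
        start += len;
    }
    const unsigned char* Frame() const { return buf + start; }
    std::size_t Length() const { return SIZE - start; }
};
```

Only the start offset ever changes; the message bytes stay in place from the application through the stack to the driver.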

To avoid scheduling delays between executing successive micro protocols, a single thread executes all the protocols. This thread waits for messages to arrive from either the driver or the application. Whenever this happens, it puts a corresponding signal into the signal queue and then starts processing the signal as described above. When the queue is empty, the thread again waits for the next message to arrive.

The thread waits for message arrivals passively. A single semaphore is used to signal message arrivals from both the application and the driver. If either of them passes a buffer to the protocol stack, it increments the semaphore. The thread in the protocol stack, in turn, decrements the semaphore whenever it receives a buffer and hence blocks when it has processed all buffers passed by the application or the driver.
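The counting-semaphore pattern described above can be sketched with standard C++ primitives (the original used RTLinux semaphores; the class and its interface here are assumptions):

```cpp
#include <mutex>
#include <condition_variable>

// A single counting semaphore signals buffer arrivals from both the
// application and the driver; the protocol thread blocks on it when
// no buffers are pending.
class CountingSemaphore {
    std::mutex m;
    std::condition_variable cv;
    int count = 0;                   // number of unprocessed buffers
public:
    void Post() {                    // producer: application or driver
        std::lock_guard<std::mutex> lk(m);
        ++count;
        cv.notify_one();
    }
    void Wait() {                    // consumer: the protocol thread
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return count > 0; });
        --count;                     // one buffer accounted for
    }
    int Pending() { std::lock_guard<std::mutex> lk(m); return count; }
};

// Outline of one iteration of the protocol thread: block until a
// buffer arrives, then process the corresponding signal.
void ProtocolThreadStep(CountingSemaphore& sem) {
    sem.Wait();
    // ...dispatch the buffer's signal through the stack here...
}
```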

6.3.1.3 Configuration and API

Configuring stacks is done at compile time. To change the configuration of the protocol stack, a single header file has to be edited. The user decides which protocols are part of the stack, and in which order, by defining their process IDs in this header file. Additionally, there is a single flag determining whether dynamic network scheduling is used or not. From this header file, a script generates all the configuration-specific code to initialize the protocols and establish a mapping between the protocol IDs and the corresponding objects. This script also modifies the makefiles controlling the compilation of the protocols. Figure 6-11 shows how the module size varies with the chosen configuration. Obviously, the fewer services are part of the configuration, the smaller the memory footprint of the resulting module.

Configuration                                    AP module [KB]    Client module [KB]
All services                                           62                 50
Polling, Reliable Multicast, Atomic Multicast          52                 41
Polling, Reliable Multicast                            49                 35
Polling                                                42                 31

Figure 6-11. Sizes of the client and AP modules depending on configuration
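The configuration header described above might look roughly like this. All identifiers are hypothetical; the text only states that process IDs are defined in a single header file, together with one flag for dynamic network scheduling.

```cpp
/* Hypothetical configuration header. The defined process IDs select
 * the protocols and their order in the stack; a generation script
 * derives the initialization code and the ID-to-object mapping
 * from these definitions. */
#define PID_POLLING            1
#define PID_RELIABLE_MULTICAST 2
#define PID_ATOMIC_MULTICAST   3   /* omit this define to drop the service */

#define USE_DYNAMIC_SCHEDULING 1   /* 0 = static network schedule */
```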

Applications can access the services of the protocol stack through a simple API. To send a message, an application program first requests a buffer from the shared memory module (shm_alloc). It copies the message it wants to transmit into the buffer, putting parameters, such as the resiliency, in front of it. It then notifies the shared memory module that the buffer is ready for transmission (shm_done). After this call, the shared memory module wakes up the thread in the protocol stack so that it can process the message. The application program has two ways to receive messages. For a non-blocking receive, the program just checks whether a buffer has been delivered for it (shm_getnextbuf). As long as no message is waiting to be received, this call returns a null handle. The second method blocks the program while waiting for messages. To use this method, the program calls sem_wait on a semaphore the shared memory module provides. When the semaphore is open, the next call to shm_getnextbuf is guaranteed to deliver a handle to a buffer containing a message for the application.
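A send/receive sequence using this API might look as follows. The function names shm_alloc, shm_done, and shm_getnextbuf are taken from the text; their mock implementations below only model the calling convention (the real module hands the buffer to the protocol stack and wakes its thread, rather than looping it back).

```cpp
#include <cstring>
#include <queue>

typedef void* shm_handle;            // assumed handle type

namespace mock_shm {
    std::queue<shm_handle> delivered;    // buffers ready for the app
}

// Mock implementations, for illustration only.
shm_handle shm_alloc() { return new char[2048]; }
void shm_done(shm_handle h) {
    // Real module: wake the protocol thread; mock: loop back to the app.
    mock_shm::delivered.push(h);
}
shm_handle shm_getnextbuf() {
    // Non-blocking receive: null handle when nothing is waiting.
    if (mock_shm::delivered.empty()) return 0;
    shm_handle h = mock_shm::delivered.front();
    mock_shm::delivered.pop();
    return h;
}

// Typical sequence in an application program:
void example() {
    shm_handle buf = shm_alloc();        // request a buffer
    std::memcpy(buf, "hello", 6);        // message (parameters would precede it)
    shm_done(buf);                       // hand the buffer to the stack

    shm_handle in = shm_getnextbuf();    // non-blocking receive
    if (in) { /* read the message */ delete[] static_cast<char*>(in); }
}
```

For the blocking variant, the program would call sem_wait on the module's semaphore before shm_getnextbuf, which then cannot return a null handle.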