

5.4. Design of Lifetime Planning

5.4.3. QoS Modeling

In this section, we provide a QoS model that relates application-level QoS metrics to low-level parameters. The goal is to explore possible conflicts between the various QoS metrics that may arise when applying LP.

5.4.3.1. Analytical Model

For our analysis, we account for three major QoS metrics, i.e., reliability (PDR), latency (delay), and energy consumption. These mapping functions are used to control the node's behavior at run-time. Specifically, our QoS model is based on merging Hoes's model [HBT+07] and the probabilistic Markov chain model of the IEEE802.15.4 standard [ISA11] referred to as Park's model [PDMFJ13].

Reliability: In our model, reliability R(s_ij) is defined as the probability of correctness and success of packet transmission between nodes i and j. Specifically, our model differentiates itself from Park's model [PDMFJ13] by additionally considering signal corruption by the accompanying noise (i.e., correctness), while Park's model accounts only for the contention-loss probability (i.e., successful transmission). The multiplicative reliability metric R(s_ij) is expressed in terms of the Signal to Interference plus Noise Ratio (SINR) and the approximated probability of successful packet transmission R̃(s_ij), as given in Equation 5.4.

R(s_{ij}) = \left(1 - Q\left(\sqrt{2 \times \mathrm{SINR}}\right)\right)^{b} \times \tilde{R}(s_{ij})    (5.4)

where b is the packet size (in bits) and Q(·) is the tail probability of the standard normal distribution. The second part of Equation 5.4 represents the probability of successful packet reception. It mainly depends on the packet generation rate, the operational duty cycle, and the (re-)transmission times. The term SINR has a direct relationship with the transmission power via Equation 5.5.

\mathrm{SINR} = \frac{P_{rx}}{N} = \frac{K_r \times P_{tx} \times (d_0/d)^{r}}{K \times T \times B}    (5.5)

where P_rx and P_tx are the received and transmitted power, respectively. The noise level N is expressed by the Boltzmann constant K, the effective temperature T in Kelvin, and the receiver bandwidth B. The remaining terms are as follows: r is the path-loss coefficient (r ≥ 2), K_r is a constant gain factor, d_0 is the reference distance, and d is the actual distance between a transmitting node Tx and a receiving node Rx.
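For illustration, Equations 5.4 and 5.5 can be evaluated with a few lines of code. The thesis implementation is in MATLAB (Section 5.4.3.2); the sketch below is our own Python illustration, and all default parameter values (gain, temperature, bandwidth, R̃) are placeholders rather than values taken from the text.

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant K in J/K

def q_function(x):
    """Tail probability Q(x) of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sinr(p_tx_w, d, d0=1.0, k_r=1.0, r=2.0, temp_k=290.0, bandwidth_hz=2e6):
    """Eq. 5.5: received power K_r * P_tx * (d0/d)^r over thermal noise K*T*B.
    All default parameter values are illustrative placeholders."""
    p_rx = k_r * p_tx_w * (d0 / d) ** r
    noise = BOLTZMANN * temp_k * bandwidth_hz
    return p_rx / noise

def reliability(sinr_lin, b_bits, r_success):
    """Eq. 5.4: probability that all b bits are received correctly,
    multiplied by the approximated success probability R~(s_ij)."""
    ber = q_function(math.sqrt(2.0 * sinr_lin))
    return (1.0 - ber) ** b_bits * r_success

# Example: -7 dBm transmit power, 10 m distance, 32-byte packet (cf. Table 5.2);
# the assumed R~(s_ij) = 0.95 is a hypothetical value.
p_tx_w = 10.0 ** ((-7.0 - 30.0) / 10.0)  # dBm -> W
print(reliability(sinr(p_tx_w, d=10.0), b_bits=32 * 8, r_success=0.95))
```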

Latency: In Park's model [PDMFJ13], the latency is defined as the time interval from the time point a packet is at the head of its MAC queue and ready to be transmitted until the transmission is successful and an acknowledgment has been received. In contrast, our model extends this notion by also considering the time span from the arrival of a nearby object until its detection (detection delay). Equation 5.6 describes the additive delay metric D(s_ij) as a function of the sampling rate r_s, the detection duration D_s, and the transmission delay D̃.

D(s_{ij}) = \frac{1}{r_s} + D_s + \tilde{D}    (5.6)
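As a minimal sketch (variable names and example values are ours, not from the text), the additive delay of Equation 5.6 can be evaluated as follows:

```python
def latency(r_s_hz, d_s_s, d_tx_s):
    """Eq. 5.6: sampling interval 1/r_s plus detection duration D_s
    plus average transmission delay D~ (delays in seconds, r_s in Hz)."""
    return 1.0 / r_s_hz + d_s_s + d_tx_s

# Hypothetical example: 1 Hz sampling, 50 ms detection, 20 ms transmission delay.
print(latency(r_s_hz=1.0, d_s_s=0.050, d_tx_s=0.020))  # ~1.07 s
```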

Energy Consumption: In this work, we assume that the radio transceiver is in sleep mode during the back-off mechanism specified by the IEEE802.15.4 standard [ISA11].

Moreover, we presume that packet transmission and reception have identical energy consumption. Accordingly, the total energy consumption P(s_ij) can be expressed by Equation 5.7.

P(s_{ij}) = f(s_{ij}) \times (E_s \times r_s + P_{mcu}) + E_{radio} \times (r_o + \kappa \times r_i)    (5.7)

where E_s is the energy consumption of the sensing module, P_mcu is the power consumed for processing the sampled data, and f(s_ij) is the (active) radio duty cycle of the transceiver. The output traffic rate r_o is the average rate of packet transmission, while r_i represents the received traffic rate from κ neighboring nodes. Neglecting the energy consumed in sleep mode, the term E_radio reflects the average power consumption during channel sensing, the MAC back-off state, and packet transmission, including both successful transmissions and packet collisions.
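A direct transcription of Equation 5.7 into code looks as follows; this is a sketch with hypothetical argument names, and the caller is responsible for choosing consistent units.

```python
def power_consumption(f_ij, e_s, r_s, p_mcu, e_radio, r_o, r_i, kappa):
    """Eq. 5.7: sensing and processing cost scaled by the radio duty cycle f(s_ij),
    plus the average radio cost E_radio for the outgoing rate r_o and the traffic
    r_i received from kappa neighboring nodes."""
    return f_ij * (e_s * r_s + p_mcu) + e_radio * (r_o + kappa * r_i)
```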

5.4.3.2. Model Validation

We discuss the validation of the proposed QoS analytical model. We implement our aforementioned analytical model in MATLAB and compare the results with simulations by the Cooja simulator [ÖDE+06] in Contiki OS [DGV04]. We estimate the contention loss and the transmission delay in the various simulation scenarios. Simultaneously, we measure the parameters in the MAC layer that are later applied in our analytical model in MATLAB, such as the probability α of the first CCA and the probability β of the second one. During our simulations, we utilize the powertrace application [Dun11] to estimate the power consumption.

Table 5.2 presents the general configurations of the simulations. To evaluate the packet error rate (PER) and the contention-loss ratio, we carry out simulations in Cooja with various values of the parameters appearing in the equations stated above, such as the number of nodes, the channel check rate, and the idle time interval.

For the number of nodes, we select the values of 4, 8, and 16 for a single cluster head.

Besides, we set the channel check rate to 8, 16, 32, and 64 Hz, respectively, in each simulation. For simplicity, we fix the idle time interval to 1000 milliseconds (ms) for all simulations. In this case, we run 12 (3 × 4 × 1) simulations in total. Considering the time budget, we run each simulation for approximately 10 minutes per scenario. In each simulation, multiple transmitters (i.e., child nodes) generate a data packet in every idle time interval. Then, each node performs two CCAs before transmission. If these two consecutive CCAs are both clear, meaning that the channel is idle at that moment, the node transmits the data packet to the receiver (i.e., the cluster head).

Table 5.2.: Simulation parameters.

Parameter              Configuration
Framer                 802.15.4 framer
Radio Duty Cycling     ContikiMAC
MAC                    CSMA/CA
Network                Rime
Radio Model            Unit disk graph medium
Simulated Node Type    Tmote Sky
Packet Size            32 bytes
Transmission Power     -7 dBm
Transmission Range     10 meters
Idle Time Interval     1000 ms
Number of Nodes        4, 8, 16
Channel Check Rate     8, 16, 32, 64 Hz
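To make the scenario grid of Table 5.2 explicit, the following short Python sketch (ours, purely illustrative) enumerates the 12 simulated combinations:

```python
from itertools import product

NODE_COUNTS = (4, 8, 16)           # child nodes per cluster head
CHECK_RATES_HZ = (8, 16, 32, 64)   # ContikiMAC channel check rates
IDLE_INTERVALS_MS = (1000,)        # fixed idle time interval

scenarios = list(product(NODE_COUNTS, CHECK_RATES_HZ, IDLE_INTERVALS_MS))
assert len(scenarios) == 12        # 3 x 4 x 1 simulation runs
```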

Reliability. In our model, the reliability represents the probability that a sensor node correctly and successfully transmits a data packet. Thus, it describes the probability that no single bit error occurs in the packet during transmission and that the data packet is not discarded as a result of contention on the communication medium.

By utilizing the measurements from simulations, we obtain the average PDR of all the sensor nodes in different simulation scenarios. Then, using the reliability formula (Equation 5.4), we obtain the average PDR of our analytical model in the corresponding scenario via MATLAB.
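The comparison itself amounts to a simple per-scenario deviation. A minimal sketch is given below; it is our illustration and assumes the deviation is expressed in percentage points, which is one plausible reading of the results reported next.

```python
def pdr_deviation(analytical_pdr, simulated_pdr):
    """Absolute deviation, in percentage points, between the analytical and
    simulated average PDR of one scenario (both given as fractions in [0, 1])."""
    return abs(analytical_pdr - simulated_pdr) * 100.0

# Hypothetical example values, not measurements from the thesis:
print(pdr_deviation(0.97, 0.94))  # 3.0 percentage points
```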

Figure 5.4(a) shows the deviation of the average PDR between the analytical model and the simulation. As shown in the figure, this deviation decreases as the channel check rate and the number of nodes increase. Even in the scenario with four nodes and an 8 Hz channel check rate, the difference between the analytical and the simulated results is less than 4%.

Latency. As stated in Equation 5.6, the average transmission latency in our model contains two parts, D_s and D̃. The first part, the detection delay, is easy to obtain once the sensor's sampling rate, the duration of sampling, and the duration of detection are fixed. The second part, the transmission delay, is the average delay for a successful packet transmission. It is defined as the time interval from the instant the packet is at the head of its MAC queue and ready to be transmitted until the transmission is successful and the acknowledgment has been received. Therefore, we mainly discuss the validation of the average transmission delay D̃.

Similarly, we obtain the average transmission delay of all the sensor nodes from simulations in different scenarios. By using the parameter settings of the MAC layer, we obtain the average transmission delay in the analytical model.

Figure 5.4(b) illustrates the deviation of the average transmission delay between our analytical model and the simulations. As shown in the figure, the average transmission delay is affected by the number of nodes when the channel check rate is less than 32 Hz. Additionally, the delay is affected by the channel check rate whenever the rate is greater than or equal to 32 Hz. In the scenario with 16 nodes, the difference between the analytical model and the simulation is still less than 10 ms. To conclude, our proposed analytical model considers several layers of the network stack. According to the validation results, the model is accurate enough to be used for detecting possible QoS conflicts caused by LP.

Figure 5.4.: Performance metrics with various channel check rates evaluated in Cooja simulations: (a) average PDR versus channel check rate; (b) average latency (ms) versus channel check rate. Each plot compares analytical and simulated results for 4, 8, and 16 nodes at channel check rates of 8, 16, 32, and 64 Hz.

5.4.3.3. Discussion

In this section, we extract the main model parameters to search for possible conflicts.

Figure 5.5 summarizes the aforementioned relationships by delineating a mapping model between the low-level controllable parameters and the high-level QoS metrics. The rectangles represent constant values, while ovals are metrics that depend on other parameters. The considered quality metrics are reliability, latency, and lifetime. Lines with a filled circle at the end embody direct proportions, i.e., if a low-level parameter is increased, then the corresponding metric increases as well. Similarly, lines with an open circle indicate inverse proportions.

The model depicts the possible trade-offs where adjusting a parameter has a positive influence on some quality metrics and a negative influence on others. These trade-offs are implicitly avoided in LP. This fact emerges from intentionally reducing the lifetime below the maximal operational lifetime. Moreover, other metrics, such as reliability and delay, do not contradict each other, as can be seen in Figure 5.5. Therefore, LP not only improves the provided QoS, but also avoids the need for a sophisticated algorithm to optimize contradicting metrics. However, to ensure the correctness of these remarks, we have to evaluate our novel QoS model. As a proof of concept, the next section discusses our implementation of an office monitoring scenario. This case study is investigated to examine the network performance with and without lifetime planning.
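One way to read the mapping of Figure 5.5 programmatically is as a sign table over the controllable parameters. The sketch below is our own illustrative reading of the figure; the parameter names and signs are assumptions made for illustration, not a formal part of the model. It flags parameters whose adjustment improves one quality metric while degrading another.

```python
# Illustrative reading of Figure 5.5: effect of *increasing* each controllable
# parameter on each quality metric (+1 = metric improves, -1 = degrades, 0 = none).
# These signs are our assumption for illustration, not taken from the thesis.
EFFECTS = {
    "transmission_power": {"reliability": +1, "latency": 0,  "lifetime": -1},
    "sampling_rate":      {"reliability": 0,  "latency": +1, "lifetime": -1},
    "duty_cycle":         {"reliability": +1, "latency": +1, "lifetime": -1},
}

def conflicting_parameters(effects):
    """Parameters that improve at least one metric while degrading another."""
    conflicts = []
    for param, signs in effects.items():
        nonzero = [s for s in signs.values() if s != 0]
        if +1 in nonzero and -1 in nonzero:
            conflicts.append(param)
    return conflicts

print(conflicting_parameters(EFFECTS))  # each parameter trades off against lifetime
```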

Figure 5.5.: Hierarchical relationships among QoS metrics, mapping the controllable parameters (transmission power, sampling rate, duty cycle, battery capacity) via intermediate quantities (SINR, communication reliability, packet generation rate, detection speed, backoff delay, energy consumption, output rate, received traffic rate) to the quality metrics reliability, latency, and lifetime.