

4.1.4 Example and Interpretation

As a first example, consider a setup similar to the one in Example 4.1. The MAC output is the XOR sum of the two binary inputs. The BC consists of one lossless channel with binary input to receiver 2; the channel to receiver 1 is a binary symmetric channel that inverts the output bit with probability $p_{1,\mathrm{BC}}$. The MAC has a maximum sum rate of 1 bit per channel use.

The channel to receiver 1 can transport $1-h(p_{1,\mathrm{BC}})$ bits per channel use. If we use uniform input distributions on $X_1, X_2$ and an identity mapping from $y_R$ to $\hat{y}_R$, we have $I(X_1;Y_R|X_2,Q) = I(X_1;\hat{Y}_R|X_2,Q) = I(\hat{Y}_R;Y_R|Q) = I(X_1 X_2;\hat{Y}_R|Q) = 1$ bit. Therefore $I(X_1 X_2;\hat{Y}_R|Q) - I(\hat{Y}_R;Y_R|Q) = 0$ and the achievable rate region is given by

\[
R_1 \le \min\{\alpha,\, 1-\alpha\}
\]
\[
R_2 \le \min\{\alpha,\, (1-\alpha)(1-h(p_{1,\mathrm{BC}}))\}
\]

for some $\alpha \in [0,1]$. Comparison with the outer bound in Lemma 1.1 shows that this is indeed the capacity of the considered channel.
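To make the region concrete, the following Python sketch (an illustration added to this discussion, not part of the original analysis) evaluates the two constraints over a grid of $\alpha$; the value $p_{1,\mathrm{BC}} = 0.2$ is assumed here purely for illustration:

```python
import numpy as np

def h(p):
    """Binary entropy function in bits, with h(0) = h(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p1_BC = 0.2  # assumed crossover probability, for illustration only
alpha = np.linspace(0, 1, 1001)

# Rate constraints of the capacity region derived above
R1 = np.minimum(alpha, 1 - alpha)
R2 = np.minimum(alpha, (1 - alpha) * (1 - h(p1_BC)))

# Example evaluation: the sum-rate-maximizing time-sharing parameter
i = np.argmax(R1 + R2)
print(f"alpha = {alpha[i]:.3f}: R1 <= {R1[i]:.3f}, R2 <= {R2[i]:.3f} bit")
```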

If we do not use joint decoding, the constraints in (3.2) enforce $\alpha < (1-\alpha)(1-h(p_{1,\mathrm{BC}}))$ if we use the identity mapping and a uniform input distribution to the MAC.¹ This degrades the achievable rate region. Figure 4.1 shows the capacity region for this example together with the region that is achievable without joint decoding using the sketched strategy. The third region shown is achievable by decode-and-forward.

¹A non-uniform input distribution achieves some more rate pairs, but the analysis of this is beyond the scope of the example. Some remarks on this effect are given in the discussion of the next example.
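A small added sketch, under the same assumed $p_{1,\mathrm{BC}} = 0.2$, of the cap this constraint puts on the time-sharing parameter and hence on both rates (solving $\alpha = (1-\alpha)(1-h(p_{1,\mathrm{BC}}))$ for the boundary value):

```python
import numpy as np

def h(p):
    """Binary entropy function in bits, for p strictly in (0, 1)."""
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

p1_BC = 0.2              # same assumed illustrative value as above
c = 1 - h(p1_BC)         # capacity of the BC link to receiver 1
alpha_max = c / (1 + c)  # boundary of the constraint alpha < (1 - alpha) * c
print(f"without joint decoding: alpha < {alpha_max:.3f}, "
      f"hence R1, R2 < {alpha_max:.3f} bit")
```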

Note that in a similar setup with a symmetric binary erasure multiple access channel as considered in Section 2.1.5, the same rate pairs are achievable by compress-and-forward with joint decoding. The strategy uses a mapping of the MAC output such that the virtual channel $p(\hat{y}_R|x_1,x_2)$ is the channel considered in the example above. For the optimal (uniform) input distribution on $X_1, X_2$, we still have $I(X_1 X_2;\hat{Y}_R|Q) - I(\hat{Y}_R;Y_R|Q) = 0$, while the individual rate constraints do not change due to that mapping, i.e., we have $I(X_1;Y_R|X_2,Q) = I(X_1;\hat{Y}_R|X_2,Q)$.

The resulting rate expressions are those of the cut-set outer bound on the capacity region (Lemma 1.1). Therefore we conclude that for the example in Section 2.1.5 capacity can be achieved by compress-and-forward with joint decoding.

Now consider a setup with a noisy MAC. The setup is based on the MAC of the above example, where the output is the XOR sum of the two binary inputs. Some binary noise that is independent of the channel inputs is added to the channel output, i.e., we invert the MAC output with probability $p_{\mathrm{MAC}}$. Using the identity mapping at the relay and uniform input distributions leads to $I(X_1;\hat{Y}_R|X_2,Q) = 1-h(p_{\mathrm{MAC}})$ and $I(X_1 X_2;\hat{Y}_R|Q) - I(\hat{Y}_R;Y_R|Q) = -h(p_{\mathrm{MAC}})$. Therefore we get some penalty if we use this strategy. This penalty is caused by the quantization: we spend some bits to describe the MAC output, which contains noise that carries no information for the receivers. Whenever $I(X_1 X_2;\hat{Y}_R|Q) - I(\hat{Y}_R;Y_R|Q) = -I(\hat{Y}_R;Y_R|X_1,X_2,Q) < 0$, the noise is still included in the quantized representation, so some bits are wasted on the noise. We can decrease this penalty by using a coarser quantization. One way² of achieving this is to use a quantized variable such that $p(\hat{y}_R|y_R)$ is a binary symmetric channel with crossover probability $p_Q$. Thereby we degrade the MAC performance, and for uniformly distributed channel inputs we have $I(X_1 X_2;\hat{Y}_R|Q) - I(\hat{Y}_R;Y_R|Q) = h(p_Q) - h(p_{\mathrm{MAC}} + p_Q - 2p_{\mathrm{MAC}}p_Q)$ and $I(X_1;\hat{Y}_R|X_2,Q) = 1 - h(p_{\mathrm{MAC}} + p_Q - 2p_{\mathrm{MAC}}p_Q)$. Now suppose we optimize $\alpha$ to achieve a high rate for receiver 1, ignoring the rate of receiver 2. Fixing the strategy as discussed above leads to

\[
\alpha = \frac{1-h(p_{1,\mathrm{BC}})}{2-h(p_Q)-h(p_{1,\mathrm{BC}})}
\]

and a maximum rate

\[
R_2 = \frac{1-h(p_{1,\mathrm{BC}})}{2-h(p_Q)-h(p_{1,\mathrm{BC}})}\,\bigl(1-h(p_{\mathrm{MAC}}+p_Q-2p_{\mathrm{MAC}}p_Q)\bigr).
\]

²We do not argue that this is the optimal way of quantizing. But this quantization serves the purpose of showing some effects which can occur for the compress-and-forward strategy with joint decoding.

Therefore we can calculate the optimal parameter $p_Q$ for this strategy and for the rate $R_2$. In Figure 4.2 the rate $R_2$ is plotted over the quantization parameter $p_Q$, assuming fixed $p_{\mathrm{MAC}} = 0.3$ and $p_{1,\mathrm{BC}} = 0.2$ and the corresponding optimal $\alpha$. It turns out that the optimal $p_Q$ depends on both $p_{\mathrm{MAC}}$ and $p_{1,\mathrm{BC}}$. In particular, the optimal parameters $p_Q$ and $\alpha$ will be different if the goal is to maximize $R_1$. The figure shows that the degradation of the MAC output can increase the rate of the overall communication.
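The sweep behind such a plot is straightforward to reproduce. The following added Python sketch evaluates the two expressions above on a grid of $p_Q$ for the fixed $p_{\mathrm{MAC}} = 0.3$ and $p_{1,\mathrm{BC}} = 0.2$ and locates the maximizing quantization parameter:

```python
import numpy as np

def h(p):
    """Binary entropy function in bits, with h(0) = h(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p_MAC, p1_BC = 0.3, 0.2            # fixed parameters from the example
p_Q = np.linspace(0.0, 0.5, 2001)  # quantizer crossover probabilities

p_eff = p_MAC + p_Q - 2 * p_MAC * p_Q            # effective BSC crossover
alpha = (1 - h(p1_BC)) / (2 - h(p_Q) - h(p1_BC))  # optimal time-sharing
R2 = alpha * (1 - h(p_eff))                       # rate expression from above

i = np.argmax(R2)
print(f"optimal p_Q ~ {p_Q[i]:.3f} with alpha ~ {alpha[i]:.3f} "
      f"and R2 ~ {R2[i]:.4f} bit")
```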

Figure 4.2: The achievable rate $R = R_2$ for receiver 1 plotted over the quantization parameter $p = p_Q$, assuming fixed $p_{\mathrm{MAC}} = 0.3$ and $p_{1,\mathrm{BC}} = 0.2$.

Similar effects, i.e., degrading the performance in one of the transmission steps to increase the overall performance, might be used to increase the rate $R_1$ in the first example with the noiseless MAC without joint decoding above: in that example a non-uniform input distribution for $X_2$ allows for a larger $\alpha$, which in turn increases the rate $R_1$. With these strategies, for the example at hand the rate regions with and without joint decoding are the same (up to boundary effects due to the strict inequalities in the constraints (3.2)), but different strategies need to be used. Note that in general it is not possible to use a different input distribution on $X_1$ without affecting the rates achievable for $R_2$ in the MAC phase.

Back to the example with a noisy MAC: suppose for now we choose $\alpha$ and $p_Q$ to maximize $R_1$. We have $h(p_Q) - h(p_{\mathrm{MAC}} + p_Q - 2p_{\mathrm{MAC}}p_Q) < 0$ for $p_Q \neq 0.5$. Therefore, to achieve the maximum rate $R_1$ it might be necessary to set $R_2 = 0$ if $p_{1,\mathrm{BC}}$ is close to 0.5. If the BC is orthogonal for both receivers, a simple solution for the two problems sketched above is to use a different quantization for each receiver. This strategy is analyzed in Chapter 5. Note that we cannot use a different $\alpha$ for each receiver, as this parameter determines the time-sharing and is common to both receivers.

The example with the noisy MAC output leads the way to a different strategy at the relay:

There exists [64, 65] a capacity-achieving sequence of linear codes for the binary symmetric channel with parameter $p_{\mathrm{MAC}}$. Now, if both nodes use the same of these linear codes, the XOR sum of the codewords is again a codeword due to the linearity of the code. Therefore the relay will be able to decode the XOR sum of the two messages. As already pointed out in Chapter 2, the relay can use the XOR sum of the messages as an input to the coding for the BC with side information at the receivers. Therefore we conclude that for this channel the cut-set bound given in Lemma 1.1 is achievable. The strategy in this example is based on the structure of the codes.
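The linearity argument is easy to check numerically. The following added sketch uses the binary (7,4) Hamming code as a stand-in (not the capacity-achieving codes of [64, 65]) to confirm that the XOR of two codewords is the codeword of the XOR of the two messages:

```python
import itertools
import numpy as np

# Generator matrix of the binary (7,4) Hamming code (systematic form)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    """Encode a length-4 binary message over GF(2)."""
    return (np.asarray(msg) @ G) % 2

codebook = {tuple(encode(m)) for m in itertools.product([0, 1], repeat=4)}

m1, m2 = [1, 0, 1, 1], [0, 1, 1, 0]
xor_of_codewords = (encode(m1) + encode(m2)) % 2

assert tuple(xor_of_codewords) in codebook              # still a codeword
assert np.array_equal(xor_of_codewords,
                      encode(np.bitwise_xor(m1, m2)))   # encodes m1 XOR m2
print("XOR of two codewords encodes the XOR of the two messages")
```

Since the relay observes this XOR codeword flipped independently with probability $p_{\mathrm{MAC}}$, a decoder for the same code recovers the XOR of the messages directly.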

Because of this structure the relay needs to choose one out of only $2^{\alpha n \max\{R_1,R_2\}}$ codewords. Such a reduction of the number of effective codewords the relay has to pick from cannot be ensured with a random coding argument, unless one of the nodes uses all available codewords, as is the case if there is no noise in the MAC. To date, the achievable rate region with such structured codes is known only for very few channels.

In [66] a structured code based on nested lattices was proposed for certain Gaussian channels and an achievable rate region was stated. A related topic was introduced as computational coding in [67]. In computational coding the goal is to receive a certain function of several random variables at the receiver of a MAC; receiving the XOR sum can be seen as an example of such a computational code. For general channels the achievable rate region for decoding the XOR sum of two messages with structured coding remains unknown. A related result for source coding can be found in [68, 35]. In these references structured codes are used to encode the XOR sum of two dependent random variables. During the work on this thesis we obtained some results for the transmission of correlated binary data via an AWGN MAC, which are related to the design of structured codes. These results were published in [3, 4]. The analysis of computational coding for a MAC is beyond the scope of this thesis.

4.2 Partial Decode-and-Forward with Joint Decoding at the