
Capacity Bounds for Two-Way Relay Channels

Date: 2011-06-25


Wooseok Nam, Sae-Young Chung, and Yong H. Lee
School of EECS, KAIST, Daejeon, Republic of Korea E-mail: {wsnam}@stein.kaist.ac.kr, {sychung, yohlee}@ee.kaist.ac.kr

Abstract—We provide achievable rate regions for two-way relay channels (TRC). First, for a binary TRC, we show that subspace-sharing of linear codes can achieve the capacity region. Then, for a Gaussian TRC, we propose subset-sharing of lattice codes. In some cases, the proposed lattice coding scheme achieves rates within 1/2 bit of the capacity region and is asymptotically optimal in the high signal-to-noise ratio (SNR) regime.

I. INTRODUCTION

We consider two-way relay channels (TRC) as shown in Fig. 1. Nodes 1 and 2 want to exchange messages with each other only through a relay node. There is no direct communication link between them, and all nodes operate in full-duplex mode. We are interested in the capacity region of the TRC. Although the capacity region of the general TRC is still unknown, several schemes have been proposed and their achievable rate regions studied [1]-[6]. In particular, as network coding has gathered much interest recently, communication schemes for the TRC have been revisited in the context of network coding for wireless networks [5]-[8].

In this work, we introduce some capacity bounds for TRC's. In particular, for a binary TRC, we show that subspace-sharing of linear codes can achieve the capacity region. For a Gaussian TRC, we propose subset-sharing of lattice codes. We also show that, in some cases, the proposed scheme achieves rates within 1/2 bit of the capacity region upper bound and is asymptotically optimal at high SNR.

II. SYSTEM MODEL AND SOME BASIC BOUNDS

We consider a two-way relay channel with three full-duplex nodes, as in Fig. 1. The variables of the channel are:
- W_i ∈ {1, ..., 2^{nR_i}}: message of node i,
- x_i = [X_{i1}, ..., X_{in}]^T: codeword of node i,
- y_R = [Y_{R1}, ..., Y_{Rn}]^T: channel output at the relay,
- x_R = [X_{R1}, ..., X_{Rn}]^T: codeword of the relay,
- y_i = [Y_{i1}, ..., Y_{in}]^T: channel output at node i,
- Ŵ_i ∈ {1, ..., 2^{nR_i}}: message estimate at node i,

where i = 1, 2 and R_i is the information rate of node i. Node i transmits its symbol X_{ik} to the relay through the memoryless uplink channel specified by p(y_R | x_1, x_2). The transmit symbol X_{ik} is in general a function of the message W_i and the past channel outputs Y_i^{k-1} = (Y_{i1}, ..., Y_{i,k-1}), i.e., X_{ik} = f_{ik}(W_i, Y_i^{k-1}). At the same time, the relay transmits symbol X_{Rk} to nodes 1 and 2 through the memoryless downlink channel specified by p(y_1, y_2 | x_R). Since the relay has no messages of its own, X_{Rk} is formed as a function of the past channel outputs Y_R^{k-1} = (Y_{R1}, ..., Y_{R,k-1}), i.e., X_{Rk} = f_{Rk}(Y_R^{k-1}). At node 1, the message estimate Ŵ_2 is computed as a function of its past channel outputs and its own message, i.e., Ŵ_2 = g_1(W_1, y_1). The decoding at node 2 follows similarly.

This research was supported in part by the Brain Korea 21 project.

Fig. 1. Two-way relay channel

Now, we introduce some well-known bounds on the capacity region of the general TRC.

A) Outer bound (cut-set bound): Based on the cut-set bound [15], we can compute an outer bound on the capacity region of the TRC as follows [1]:

R_1 ≤ min{ I(X_1; Y_R | X_2), I(X_R; Y_2) },
R_2 ≤ min{ I(X_2; Y_R | X_1), I(X_R; Y_1) }.   (1)
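As an aside (not part of the original paper), the structure of (1) is easy to evaluate numerically once the four mutual-information terms are known: each rate is limited by the weaker of its uplink and downlink cuts. A minimal Python sketch, where the function name and example values are our own illustrative choices:

```python
def cutset_bound(I_1R, I_R2, I_2R, I_R1):
    """Cut-set outer bound (1): R1 is limited by the node-1 uplink term
    I(X1;YR|X2) and the node-2 downlink term I(XR;Y2); R2 symmetrically."""
    R1_max = min(I_1R, I_R2)
    R2_max = min(I_2R, I_R1)
    return R1_max, R2_max

# Hypothetical mutual-information values (bits per channel use):
R1_max, R2_max = cutset_bound(2.0, 1.0, 1.5, 3.0)
```

Here the weaker cut dominates each direction: R1 is downlink-limited (1.0 bit) while R2 is uplink-limited (1.5 bits).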

B) Inner bound (decode-and-forward): As seen in Fig. 1, the uplink of the TRC is a multiple-access channel. Thus, we can consider a scheme in which the relay decodes both messages from nodes 1 and 2. The downlink of the TRC is basically a broadcast channel, except that the decoders have side information to exploit [9], namely their own messages transmitted on the uplink. This scheme is called decode-and-forward (DF) relaying, and its achievable rate region is studied in [2] for half-duplex cases. It extends directly to the full-duplex case as

R_1 ≤ min{ I(X_1; Y_R | X_2), I(X_R; Y_2) },   (2a)
R_2 ≤ min{ I(X_2; Y_R | X_1), I(X_R; Y_1) },   (2b)
R_1 + R_2 ≤ I(X_1, X_2; Y_R).   (2c)
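The multiplexing loss can be illustrated with a small membership test for the DF region (2a)-(2c); the function name and the numbers below are hypothetical:

```python
def in_df_region(R1, R2, I_1R, I_2R, I_sum, I_R1, I_R2):
    """Check whether (R1, R2) satisfies the DF inner bound (2a)-(2c).
    I_1R = I(X1;YR|X2), I_2R = I(X2;YR|X1), I_sum = I(X1,X2;YR),
    I_R1 = I(XR;Y1), I_R2 = I(XR;Y2)."""
    return (R1 <= min(I_1R, I_R2)
            and R2 <= min(I_2R, I_R1)
            and R1 + R2 <= I_sum)

# With I_sum strictly below I_1R + I_2R, a rate pair that satisfies the
# individual constraints (2a),(2b) can still violate the sum-rate
# constraint (2c) -- the multiplexing loss of DF relaying.
ok = in_df_region(1.0, 1.0, 1.5, 1.5, 2.5, 2.0, 2.0)      # inside the region
cut = in_df_region(1.4, 1.4, 1.5, 1.5, 2.5, 2.0, 2.0)     # (2c) violated
```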

Note that (2a) and (2b) are the same as (1). However, there is an additional limitation on the sum rate, (2c), which is sometimes called the multiplexing loss [1], [2].

III. BINARY TRC

As a specific case of the TRC, we first consider a binary TRC, where all links are memoryless binary symmetric channels (BSC). In this case, the received signal at the relay is given by

y_R = x_1 ⊕ x_2 ⊕ z_R,   (3)

where ⊕ denotes binary addition and z_R is the error vector whose components are i.i.d. Bernoulli random variables with Pr{Z_Rk = 1} = ε_R, k = 1, ..., n. Similarly, the signal received by node i, i = 1 or 2, is given by

y_i = x_R ⊕ z_i,   (4)

where z_i is the error vector whose components are i.i.d. Bernoulli random variables with Pr{Z_ik = 1} = ε_i, k = 1, ..., n. Using the result of Section II, the outer bound on the capacity region of the binary TRC is computed as follows [1]:

R_1 ≤ min{ 1 − H(ε_R), 1 − H(ε_2) },
R_2 ≤ min{ 1 − H(ε_R), 1 − H(ε_1) }.   (5)

In [1], it was shown that this outer bound can in fact be achieved. The technique used in [1] for the uplink is linear coding with time sharing, which can be seen as a special case of the subspace-sharing scheme explained below. If R_1 ≥ R_2, we define binary linear codes (BLC) for nodes 1 and 2 as follows:

C_1 = { x_1 | x_1 = [u_11 u_12] [G_11^T G_12^T]^T, u_11 ∈ {0,1}^{n(R_1−R_2)}, u_12 ∈ {0,1}^{nR_2} },
C_2 = { x_2 | x_2 = u_2 G_12, u_2 ∈ {0,1}^{nR_2} },   (6)

where G_11 and G_12 are n(R_1 − R_2) × n and nR_2 × n binary matrices, respectively. It is well known that some linear codes can achieve the capacity of a BSC [10]. We assume that [G_11^T G_12^T]^T is the generator matrix of such a capacity-achieving linear code. Since C_2 ⊂ C_1, nodes 1 and 2 share the transmitting subspace C_2. The received signal at the relay is written as

y_R = x_1 ⊕ x_2 ⊕ z_R = v ⊕ z_R,   (7)

where v ≜ x_1 ⊕ x_2. From the group property of linear codes, v is an element of C_1. Thus, the relay can decode v instead of decoding x_1 and x_2 separately. If there is no decoding error, the decoded message at the relay is the binary vector u_R = [u_11 ũ_2], where ũ_2 ≜ u_12 ⊕ u_2. The condition for reliable uplink communication is

R_1 ≤ 1 − H(ε_R).   (8)

In the downlink, the relay transmits u_R to both receiver nodes. In the binary TRC, node 1 knows [u_11 u_12] and node 2 knows u_2. In this case, using the nested coding scheme [9], nodes 1 and 2 can reliably decode u_R = [u_11 ũ_2] if

R_1 ≤ 1 − H(ε_2),  R_2 ≤ 1 − H(ε_1).   (9)

After obtaining u_R without error, each receiver can estimate the other's message as follows:

Node 1: û_2 = ũ_2 ⊕ u_12,
Node 2: [û_11 û_12] = [u_11 (ũ_2 ⊕ u_2)].   (10)

Summarizing (8) and (9), we have

R_1 ≤ min{ 1 − H(ε_R), 1 − H(ε_2) },
R_2 ≤ min{ R_1, 1 − H(ε_1) }.   (11)

The achievable region (11) meets the outer bound (5) when R_1 ≥ R_2. The case R_1 ≤ R_2 is handled symmetrically, and thus (5) is the capacity region of the binary TRC.

IV. GAUSSIAN TRC

We now consider a Gaussian TRC. The received signal at the relay is given by

y_R = x_1 + x_2 + z_R,   (12)

where z_R is the noise vector whose components are i.i.d. real Gaussian random variables with zero mean and variance σ_R^2. There are power constraints P_1 and P_2 on the transmitted signals, i.e., (1/n) E‖x_1‖^2 ≤ P_1 and (1/n) E‖x_2‖^2 ≤ P_2. Similarly, the received signal at node i, i = 1 or 2, is given by

y_i = x_R + z_i,   (13)

where z_i is the noise vector whose components are i.i.d. real Gaussian random variables with distribution N(0, σ_i^2), and (1/n) E‖x_R‖^2 ≤ P_R. The outer bound on the capacity region of this Gaussian TRC is given by [1]

R_1 ≤ min{ (1/2) log(1 + P_1/σ_R^2), (1/2) log(1 + P_R/σ_2^2) },
R_2 ≤ min{ (1/2) log(1 + P_2/σ_R^2), (1/2) log(1 + P_R/σ_1^2) }.   (14)

In Section III, we showed that a BLC can achieve the capacity region of the binary TRC, and the group structure of the BLC played an important role. Thus, for the Gaussian TRC, we consider a lattice code, which also has a group structure. In the following, we assume P_1 ≥ P_2. Consider an n-dimensional nested lattice pair (Λ, Λ_C), where Λ ⊂ Λ_C. We also assume that the second moment per dimension of the Voronoi region of Λ is P_1. Then, a nested lattice code is defined as

C_1 ≜ {(Λ_C + s) ∩ R_V(Λ)},   (15)

where R_V(·) denotes the Voronoi region of a lattice, and s ∈ R_V(Λ_C) is a translation vector that will be determined later. We further assume that the fine lattice Λ_C is "good for AWGN channel coding" and the coarse lattice Λ is "good for shaping" [11], [12]. The codebook C_1 is used by node 1.

We now determine the codebook of node 2 as a subset of C_1. To satisfy the transmit power constraint, we remove the codewords of largest power one by one from C_1 until the average power of the remaining codewords becomes less than or equal to P_2. We call this process lattice reshaping. The resulting set of codewords, denoted C_2, is used as the codebook of node 2. By construction, C_2 has the largest cardinality among all subsets of C_1 whose average power is at most P_2. Letting R_1 and R_2 denote the code rates of C_1 and C_2, respectively, we have the following lemma.

Lemma 1: Consider a scaled lattice γΛ, 0 ≤ γ ≤ 1, which has P_2 as the second moment per dimension of R_V(γΛ). If γΛ ⊂ Λ_C, there exists at least one translation vector s ∈ R_V(Λ_C) such that

R_2 ≥ R_1 + (1/2) log(P_2/P_1).

Proof: Consider a codebook defined as

C̃_2(s) ≜ {(Λ_C + s) ∩ R_V(γΛ)},

and let P̃_2(s) denote its average power. Since we have assumed γ ≤ 1, C̃_2(s) is a subset of C_1. If we let V be the volume of R_V(Λ), the volume of R_V(γΛ) is γ^n V. Since the second moments per dimension of R_V(Λ) and R_V(γΛ) are P_1 and P_2, respectively, we have γ = √(P_2/P_1). Also, since γΛ ⊂ Λ_C,

|C̃_2(s)| = γ^n V / V_C = 2^{nR_1} γ^n = 2^{nR_1} (P_2/P_1)^{n/2},   (16)

where V_C is the volume of R_V(Λ_C). Assuming that s is an instance of a random vector S ~ Unif(R_V(Λ_C)), we have E_S{P̃_2(S)} = P_2, which implies that there must be at least one s ∈ R_V(Λ_C) such that P̃_2(s) ≤ P_2 [13]. Choosing such an s, from (16), we have

R_2 = (1/n) log|C_2| ≥ (1/n) log|C̃_2| = R_1 + (1/2) log(P_2/P_1),   (17)

which proves the lemma.

One exemplary case in which Lemma 1 holds is P_1 = P_2, where γ = 1. From now on, we only consider the cases in which Lemma 1 holds.

Nodes 1 and 2 map their messages w_1 and w_2 to codewords v_1 ∈ C_1 and v_2 ∈ C_2, respectively. The transmitted signals of nodes 1 and 2 are formed as follows:

x_1 = (v_1 + u) mod Λ,  x_2 = v_2,   (18)

where u is a random dither vector that is uniform over R_V(Λ) and independent of v_1 and v_2; u is also assumed to be known to both transmitter and receiver. Then, from the crypto lemma [11], x_1 is independent of v_1 and uniform over R_V(Λ).

Now we consider the group property of the lattice code. Let us define a codebook

C ≜ {Λ_C ∩ R_V(Λ)} = {Λ_C mod Λ}.   (19)

If we define ṽ_1 = (v_1 − s) mod Λ and ṽ_2 = (v_2 − s) mod Λ, then ṽ_1 and ṽ_2 are elements of C, and it can be easily seen that v ≜ (ṽ_1 + ṽ_2) mod Λ is also an element of C. This property is analogous to the group property of the BLC.

From the inflated lattice lemma [14], the received signal at the relay (12) is transformed into the following modulo-lattice additive noise (MLAN) channel output:

ỹ_R = (α y_R − u − 2s) mod Λ = (v + z̃_R) mod Λ,   (20)

where α is a scaling factor that will be determined later, and

z̃_R ≜ −(1 − α)(x_1 + x_2) + α z_R.   (21)

As in the binary case of Section III, the relay can decode v instead of decoding v_1 and v_2 separately. If v_1 is chosen from C_1 uniformly, then, from the crypto lemma [11], v is independent of v_2 and x_2, and uniform over C. Then, since v is independent of x_1, x_2, and z_R, it is also independent of the effective noise z̃_R. Thus, the capacity of this MLAN channel is bounded as

C_1 = (1/n) I(V; Ỹ_R) = (1/n) h(Ỹ_R) − (1/n) h(Z̃_R mod Λ) ≥ (1/2) log(P_1/G(Λ)) − (1/n) h(Z̃_R),   (22)

where G(Λ) is the normalized second moment [11] of Λ. From the mutual independence of X_1, X_2, and Z_R, we have

(1/n) E‖Z̃_R‖^2 ≤ (1 − α)^2 (P_1 + P_2) + α^2 σ_R^2.   (23)

Choosing α to minimize the right-hand side of (23),

(1/n) E‖Z̃_R‖^2 ≤ σ_R^2 (P_1 + P_2) / (P_1 + P_2 + σ_R^2).   (24)

Then, since a Gaussian random vector has the largest entropy for a given second moment, we have

(1/n) h(Z̃_R) ≤ (1/2) log( 2πe σ_R^2 (P_1 + P_2) / (P_1 + P_2 + σ_R^2) ),   (25)

and thus, using (25) in (22),

C_1 ≥ (1/2) log( P_1/(P_1 + P_2) + P_1/σ_R^2 ) − (1/2) log(2πe G(Λ)).   (26)

Since we have assumed that Λ is good for shaping [11], log(2πe G(Λ)) can be made arbitrarily small. For sufficiently large dimension n, the existence of nested lattice pairs (Λ, Λ_C), where Λ is good for shaping and, at the same time, Λ_C is good for AWGN channel coding, is shown in [12]. Thus, assuming the use of such lattice pairs, the uplink achievable region is given by

R_1 ≤ (1/2) log( P_1/(P_1 + P_2) + P_1/σ_R^2 ) + o(1),
R_2 ≤ (1/2) log( P_2/(P_1 + P_2) + P_2/σ_R^2 ) + o(1),   (27)

where lim_{n→∞} o(1) = 0.

For downlink communication, we can rely on the nested coding scheme [9]. The relay transmits v to both receiver nodes using a Gaussian random codebook, and the receivers perform joint-typicality decoding. The size of this codebook is 2^{nR_1}. However, since v = (ṽ_1 + ṽ_2) mod Λ and node 1 already knows ṽ_1, node 1 only needs to search for a jointly typical sequence in a subset of the codebook of size 2^{nR_2}. Thus, the downlink achievable region is given by

R_1 ≤ (1/2) log(1 + P_R/σ_2^2),
R_2 ≤ (1/2) log(1 + P_R/σ_1^2).   (28)

After obtaining v without error, each receiver can estimate the other's message as follows:

Node 1: v̂_2 = (v − ṽ_1 + s) mod Λ,
Node 2: v̂_1 = (v − ṽ_2 + s) mod Λ.   (29)

Fig. 2. Achievable region of the Gaussian TRC (cut-set bound, DF region, and lattice-code region)
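Looking back at Section III, the subspace-sharing algebra in (6)-(10) can be checked mechanically. The following Python sketch uses toy sizes and random generator matrices of our own choosing, and idealizes the relay's decoding as error-free (i.e., it verifies only the GF(2) identities, not the coding theorem):

```python
import random

n, k1, k2 = 8, 5, 3          # toy sizes: k1 = nR1, k2 = nR2, with k1 >= k2

def rand_bits(m):
    return [random.randint(0, 1) for _ in range(m)]

def gf2_matvec(u, G):
    # codeword = u * G over GF(2); G is a list of rows of length n
    x = [0] * len(G[0])
    for ui, row in zip(u, G):
        if ui:
            x = [a ^ b for a, b in zip(x, row)]
    return x

def xor(a, b):
    return [p ^ q for p, q in zip(a, b)]

random.seed(0)
G11 = [rand_bits(n) for _ in range(k1 - k2)]   # rows used only by node 1
G12 = [rand_bits(n) for _ in range(k2)]        # rows of the shared subspace

u11, u12, u2 = rand_bits(k1 - k2), rand_bits(k2), rand_bits(k2)

x1 = gf2_matvec(u11 + u12, G11 + G12)   # node 1 codeword, in C1
x2 = gf2_matvec(u2, G12)                # node 2 codeword, in C2 (subset of C1)
v = xor(x1, x2)                         # what the relay decodes, eq. (7)

# Group property: v is the C1 codeword of [u11, u12 xor u2], as in eq. (7)
assert v == gf2_matvec(u11 + xor(u12, u2), G11 + G12)

# Relay broadcasts u_R = [u11, u2_tilde]; each node resolves the other's
# message from its own side information, as in eq. (10)
u2_tilde = xor(u12, u2)
u2_hat = xor(u2_tilde, u12)    # node 1 recovers u2
u12_hat = xor(u2_tilde, u2)    # node 2 recovers u12 (and reads u11 directly)
assert u2_hat == u2 and u12_hat == u12
```

The assertions pass for any choice of generator matrices and messages, which is exactly the group-property argument used after (7).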

REFERENCES

[1] R. Knopp, "Two-way wireless communication via a relay station," GDR-ISIS meeting, Mar. 2007.
[2] R. Knopp, "Two-way radio network with a star topology," Int. Zurich Seminar on Commun., Feb. 2006.
[3] B. Rankov and A. Wittneben, "Spectral efficient signaling for half-duplex relay channels," in Proc. Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2005.
[4] B. Rankov and A. Wittneben, "Achievable rate region for the two-way relay channel," in Proc. IEEE Int. Symp. on Inf. Theory, Seattle, USA, pp. 1668-1672, July 2006.
[5] S. Zhang, S. Liew, and P. Lam, "Physical-layer network coding," in ACM MobiCom '06.
[6] K. Narayanan, M. P. Wilson, and A. Sprintson, "Joint physical layer coding and network coding for bi-directional relaying," in 45th Allerton Conf. Commun., Control, and Computing, Monticello, IL, Sept. 2007.
[7] S. Katti, S. Gollakota, and D. Katabi, "Embracing wireless interference: Analog network coding," in ACM SIGCOMM '07.
[8] B. Nazer and M. Gastpar, "Lattice coding increases multicast rates for Gaussian multiple-access networks," in 45th Allerton Conf. Commun., Control, and Computing, Monticello, IL, Sept. 2007.
[9] Y. Wu, "Broadcasting when receivers know some messages a priori," in Proc. IEEE Int. Symp. on Inf. Theory, Nice, France, pp. 1141-1145, June 2007.
[10] A. Barg and G. D. Forney Jr., "Random codes: Minimum distance and error exponents," IEEE Trans. Inform. Theory, vol. 48, pp. 2568-2573, Sept. 2002.
[11] G. D. Forney Jr., "On the role of MMSE estimation in approaching the information-theoretic limits of linear Gaussian channels: Shannon meets Wiener," in Proc. 41st Allerton Conf. Commun., Control, and Computing, Monticello, IL, Oct. 2003, pp. 430-439.
[12] U. Erez and R. Zamir, "Achieving (1/2) log(1 + SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inform. Theory, vol. 50, pp. 2293-2314, Oct. 2004.
[13] H. A. Loeliger, "Averaging bounds for lattices and linear codes," IEEE Trans. Inform. Theory, vol. 43, pp. 1767-1773, Nov. 1997.
[14] U. Erez, S. Shamai, and R. Zamir, "Capacity and lattice strategies for cancelling known interference," IEEE Trans. Inform. Theory, vol. 51, pp. 3820-3833, Nov. 2005.
[15] T. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, 1991.

Summarizing (27) and (28), as n → ∞, we have

R_1 ≤ min{ (1/2) log( P_1/(P_1 + P_2) + P_1/σ_R^2 ), (1/2) log(1 + P_R/σ_2^2) },
R_2 ≤ min{ (1/2) log( P_2/(P_1 + P_2) + P_2/σ_R^2 ), (1/2) log(1 + P_R/σ_1^2) }.   (30)
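The gap between (30) and the outer bound (14) can be spot-checked numerically. The sketch below sweeps random parameters for the symmetric-power case P_1 = P_2 (as assumed in Fig. 2); the parameter ranges are our own illustrative choices:

```python
from math import log2
import random

def lattice_region(P1, P2, PR, sR2, s12, s22):
    """Achievable rates (30); assumes P1 >= P2 and that Lemma 1 applies."""
    R1 = min(0.5 * log2(P1 / (P1 + P2) + P1 / sR2), 0.5 * log2(1 + PR / s22))
    R2 = min(0.5 * log2(P2 / (P1 + P2) + P2 / sR2), 0.5 * log2(1 + PR / s12))
    return R1, R2

def outer_bound(P1, P2, PR, sR2, s12, s22):
    """Cut-set outer bound (14)."""
    R1 = min(0.5 * log2(1 + P1 / sR2), 0.5 * log2(1 + PR / s22))
    R2 = min(0.5 * log2(1 + P2 / sR2), 0.5 * log2(1 + PR / s12))
    return R1, R2

random.seed(1)
for _ in range(1000):
    P = random.uniform(0.1, 100.0)       # symmetric powers P1 = P2 = P
    PR = random.uniform(0.1, 100.0)
    sR2, s12, s22 = (random.uniform(0.1, 10.0) for _ in range(3))
    a1, a2 = lattice_region(P, P, PR, sR2, s12, s22)
    o1, o2 = outer_bound(P, P, PR, sR2, s12, s22)
    # uplink terms differ by (1/2)log((1+x)/(1/2+x)) <= 1/2 bit
    assert o1 - a1 <= 0.5 + 1e-9 and o2 - a2 <= 0.5 + 1e-9
```

Note that at low uplink SNR the uplink term of (30) can be negative, where the bound is loose; the 1/2-bit gap claim still holds since the cut-set uplink term shrinks correspondingly.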

Note that, for the symmetric case, (30) coincides with the result of [6]. Comparing (30) with the outer bound (14), we can verify that the rate gap is always less than or equal to 1/2 bit for any choice of channel parameters, such as the transmit powers and noise variances. In particular, as the signal-to-noise ratio increases, this gap vanishes and the lower bound asymptotically approaches the upper bound. The achievable rate region (30) is shown in Fig. 2, where it is assumed that σ_1^2 = σ_2^2 = σ_R^2 = σ^2 and P_1 = P_2 = P_R = P. Thus, in this case, the uplink is the limiting factor. For comparison, the cut-set bound (1) and the DF region (2) are also shown in Fig. 2. Clearly, we can time-share between DF relaying and the lattice scheme, and thus the convex hull of the two achievable regions, depicted as the shaded region in Fig. 2, is achievable.

V. CONCLUSION

We studied achievable rate regions for the TRC. First, we showed that subspace-sharing of linear codes achieves the capacity region of a binary TRC. Then, we proposed subset-sharing of lattice codes for a Gaussian TRC. It was also shown that, in some cases, the proposed lattice-coding scheme achieves rates within 1/2 bit of the capacity region of the Gaussian TRC. At high SNR, the achievable region asymptotically approaches the capacity upper bound.

