Learning-Based Compress-and-Forward Schemes for the Relay Channel (2405.09534v1)

Published 15 May 2024 in cs.IT and math.IT

Abstract: The relay channel, consisting of a source-destination pair along with a relay, is a fundamental component of cooperative communications. While the capacity of a general relay channel remains unknown, various relaying strategies, including compress-and-forward (CF), have been proposed. In CF, the relay forwards a quantized version of its received signal to the destination. Given the correlated signals at the relay and destination, distributed compression techniques, such as Wyner--Ziv coding, can be harnessed to utilize the relay-to-destination link more efficiently. Leveraging recent advances in neural network-based distributed compression, we revisit the relay channel problem and integrate a learned task-aware Wyner--Ziv compressor into a primitive relay channel with a finite-capacity out-of-band relay-to-destination link. The resulting neural CF scheme demonstrates that our compressor recovers binning of the quantized indices at the relay, mimicking the optimal asymptotic CF strategy, even though no structure exploiting knowledge of the source statistics was imposed on the design. The proposed neural CF scheme, employing finite-order modulation, operates close to the rate achievable in a primitive relay channel with a Gaussian codebook. We showcase the advantages of exploiting the correlated destination signal for relay compression through various neural CF architectures that involve end-to-end training of the compressor and the demodulator components. Our learned task-oriented compressors provide the first proof-of-concept work toward interpretable and practical neural CF relaying schemes.
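The rate benchmark the abstract refers to is the classical compress-and-forward achievable rate for a Gaussian primitive relay channel, where the relay Wyner--Ziv-compresses its observation against the destination's correlated signal and sends the result over an out-of-band link of capacity C0. A minimal sketch of that closed form follows; the function name and parameterization are our illustration (not the paper's code), assuming unit-variance noise and the standard choice of quantization-noise variance that makes the Wyner--Ziv rate equal C0.

```python
import math

def cf_rate(snr_d: float, snr_r: float, c0: float) -> float:
    """Compress-and-forward achievable rate (bits/channel use) for a
    Gaussian primitive relay channel.

    snr_d: SNR of the direct source-to-destination link.
    snr_r: SNR of the source-to-relay link.
    c0:    capacity of the out-of-band relay-to-destination link
           (bits/channel use).
    """
    if c0 <= 0.0:
        # No relay link: only the direct channel contributes.
        return 0.5 * math.log2(1.0 + snr_d)
    # Quantization-noise variance chosen so that the Wyner-Ziv rate of the
    # relay's description, given the destination's side information, is c0.
    sigma2 = (1.0 + snr_d + snr_r) / ((2.0 ** (2.0 * c0) - 1.0) * (1.0 + snr_d))
    # Destination combines the direct signal with the quantized relay signal.
    return 0.5 * math.log2(1.0 + snr_d + snr_r / (1.0 + sigma2))
```

As a sanity check, the rate interpolates between the direct-link capacity at c0 = 0 and the two-receiver MRC bound 0.5*log2(1 + snr_d + snr_r) as c0 grows, which is the behavior the paper's learned compressors are measured against.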
