
Reinforcement Learning for Optimal Transmission of Markov Sources over Noisy Channels: Belief Quantization vs Sliding Finite Window Codes (2310.06742v3)

Published 10 Oct 2023 in math.OC, cs.IT, and math.IT

Abstract: We study the problem of zero-delay coding for the transmission of a Markov source over a noisy channel with feedback and present a rigorous reinforcement learning theoretic solution which is guaranteed to achieve near-optimality. To this end, we formulate the problem as a Markov decision process (MDP) where the state is a probability-measure valued predictor/belief and the actions are quantizer maps. This MDP formulation has been used to show the optimality of certain classes of encoder policies in prior work. While this analytical approach characterizes optimal policies, computing them is prohibitively complex due to the uncountable nature of the constructed state space and the lack of minorization or strong ergodicity results which are commonly assumed for average cost optimal stochastic control. These challenges invite rigorous reinforcement learning methods, which entail several open questions addressed in our paper. We present two complementary approaches for this problem. In the first approach, we approximate the set of all beliefs by a finite set and use nearest-neighbor quantization to obtain a finite state MDP, whose optimal policies become near-optimal for the original MDP as the quantization becomes arbitrarily fine. In the second approach, a sliding finite window of channel outputs and quantizers together with a prior belief state serve as the state of the MDP. We then approximate this state by marginalizing over all possible beliefs, so that our coding policies only use the finite window term to encode the source. Under an appropriate notion of predictor stability, we show that such policies are near-optimal for the zero-delay coding problem as the window length increases. We give sufficient conditions for predictor stability to hold. Finally, we propose a reinforcement learning algorithm to compute near-optimal policies.
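The first approach described in the abstract admits a simple computational illustration. Below is a minimal sketch, under assumptions not taken from the paper: beliefs over a finite source alphabet are quantized to their nearest neighbor in a finite grid on the probability simplex, and tabular Q-learning is then run on the resulting finite-state model. The grid construction, the discounted-cost update, and all names (belief_grid, quantize_belief, q_update) are illustrative choices; the paper itself treats the average-cost criterion with quantizer-map actions in full generality.

```python
import itertools
import numpy as np

def belief_grid(num_symbols: int, resolution: int) -> np.ndarray:
    """Uniform finite grid on the probability simplex (the finite set of beliefs)."""
    points = [
        np.array(c) / resolution
        for c in itertools.product(range(resolution + 1), repeat=num_symbols)
        if sum(c) == resolution
    ]
    return np.stack(points)

def quantize_belief(belief: np.ndarray, grid: np.ndarray) -> int:
    """Nearest-neighbor quantization: index of the closest grid point."""
    return int(np.argmin(np.linalg.norm(grid - belief, axis=1)))

# Finite-state Q-table: rows index quantized beliefs, columns index the
# (finitely many) candidate quantizer maps the encoder may apply.
num_symbols, resolution, num_quantizer_maps = 3, 10, 4
grid = belief_grid(num_symbols, resolution)
Q = np.zeros((len(grid), num_quantizer_maps))

def q_update(state: int, action: int, cost: float, next_state: int,
             alpha: float = 0.1, gamma: float = 0.99) -> None:
    """One tabular Q-learning step for cost minimization (discounted form,
    shown only for illustration; the paper studies the average-cost criterion)."""
    target = cost + gamma * Q[next_state].min()
    Q[state, action] += alpha * (target - Q[state, action])

# Example: quantize a predictor belief and perform one learning update.
belief = np.array([0.2, 0.5, 0.3])
s = quantize_belief(belief, grid)
q_update(state=s, action=1, cost=0.7, next_state=s)
```

In the second approach the state would instead be a finite sliding window of past channel outputs and quantizer maps (with the prior belief marginalized out), but the same tabular learning machinery applies once that window state is enumerated.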

