Reinforcement Learning Evaluation and Solution for the Feedback Capacity of the Ising Channel with Large Alphabet (2008.07983v1)

Published 18 Aug 2020 in cs.IT and math.IT

Abstract: We propose a new method to compute the feedback capacity of unifilar finite state channels (FSCs) with memory using reinforcement learning (RL). The feedback capacity was previously estimated using its formulation as a Markov decision process (MDP) with dynamic programming (DP) algorithms. However, their computational complexity grows exponentially with the channel alphabet size. Therefore, we use RL, and specifically its ability to parameterize value functions and policies with neural networks, to numerically evaluate the feedback capacity of channels with a large alphabet size. The outcome of the RL algorithm is a numerical lower bound on the feedback capacity, which is used to reveal the structure of the optimal solution. The structure is modeled by a graph-based auxiliary random variable that is utilized to derive an analytic upper bound on the feedback capacity with the duality bound. The capacity computation is concluded by verifying the tightness of the upper bound by testing whether it is BCJR invariant. We demonstrate this method on the Ising channel with an arbitrary alphabet size. For an alphabet size smaller than or equal to 8, we derive the analytic solution of the capacity. Next, the structure of the numerical solution is used to deduce a simple coding scheme that achieves the feedback capacity and serves as a lower bound for larger alphabets. For an alphabet size greater than 8, we present an upper bound on the feedback capacity. For an asymptotically large alphabet size, we present an asymptotically optimal coding scheme.
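The abstract relies on the known formulation of the feedback capacity of a unifilar FSC as an average-reward MDP: the MDP state is the decoder's belief about the channel state, the action is the input distribution conditioned on the channel state, and the per-step reward is a conditional mutual information. The sketch below is a rough illustration of that formulation (not the authors' code) for the binary Ising channel only, where plain relative value iteration on a discretised belief grid is still feasible; the grid resolution, action discretisation, and iteration count are arbitrary choices made here for illustration, not values from the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits, clipped so it is safe at p = 0 or 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def p_y_given_xs(y, x, s):
    """Ising channel law: Y = X w.p. 1/2 and Y = S w.p. 1/2 (so Y = X when X == S)."""
    if x == s:
        return 1.0 if y == x else 0.0
    return 0.5

def step(z, a, b):
    """One MDP step for the binary Ising channel.

    z      : belief P(channel state S = 0 | past outputs)  -- the MDP state
    (a, b) : action, a = p(X=0 | S=0), b = p(X=0 | S=1)
    returns: one-step reward I(X, S; Y) and a list of (P(y), next belief) pairs.
             The channel is unifilar with next state S' = X, so the next belief
             given output y is P(X = 0 | y).
    """
    p_s = {0: z, 1: 1.0 - z}
    p_x_given_s = {(0, 0): a, (1, 0): 1.0 - a, (0, 1): b, (1, 1): 1.0 - b}
    joint = {(s, x, y): p_s[s] * p_x_given_s[(x, s)] * p_y_given_xs(y, x, s)
             for s in (0, 1) for x in (0, 1) for y in (0, 1)}
    p_y = {y: sum(joint[(s, x, y)] for s in (0, 1) for x in (0, 1)) for y in (0, 1)}
    # H(Y | X, S) is exactly 1 bit when X != S and 0 when X == S.
    h_y_given_xs = sum(p_s[s] * p_x_given_s[(x, s)]
                       for s in (0, 1) for x in (0, 1) if x != s)
    reward = h2(p_y[0]) - h_y_given_xs
    transitions = []
    for y in (0, 1):
        if p_y[y] > 0:
            z_next = sum(joint[(s, 0, y)] for s in (0, 1)) / p_y[y]
            transitions.append((p_y[y], z_next))
    return reward, transitions

# Relative value iteration on a discretised belief grid (illustrative resolutions).
GRID = np.linspace(0.0, 1.0, 51)
ACTIONS = [(a, b) for a in np.linspace(0.0, 1.0, 11) for b in np.linspace(0.0, 1.0, 11)]
V = np.zeros_like(GRID)

for it in range(200):
    new_V = np.empty_like(V)
    for i, z in enumerate(GRID):
        new_V[i] = max(r + sum(p * np.interp(zn, GRID, V) for p, zn in trans)
                       for r, trans in (step(z, a, b) for a, b in ACTIONS))
    gain = new_V[len(GRID) // 2] - V[len(GRID) // 2]   # average-reward estimate
    V = new_V - new_V[len(GRID) // 2]                  # keep values bounded
    if it % 40 == 0:
        print(f"iter {it:3d}: feedback-capacity estimate ~ {gain:.4f} bits/channel use")
```

For larger alphabets the belief lives on a higher-dimensional simplex and this brute-force discretisation of states and actions blows up, which is the scaling problem the abstract cites as the motivation for replacing DP with neural-network-parameterised RL.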

Citations (4)
