On Channel Simulation with Causal Rejection Samplers (2401.16579v2)
Abstract: One-shot channel simulation has recently emerged as a promising alternative to quantization and entropy coding in machine-learning-based lossy data compression schemes. However, while channel simulation has several potential applications, such as lossy compression with realism constraints or differential privacy, little is known about its fundamental limitations. In this paper, we restrict our attention to a subclass of channel simulation protocols called causal rejection samplers (CRS), establish new, tighter lower bounds on their expected runtime and codelength, and demonstrate that these bounds are achievable. Concretely, for an arbitrary CRS, let $Q$ and $P$ denote the target and proposal distributions supplied as input, and let $K$ be the number of samples examined by the algorithm. We show that the expected runtime $\mathbb{E}[K]$ of any CRS scales at least as $\exp_2(D_\infty[Q || P])$, where $D_\infty[Q || P]$ is the Rényi $\infty$-divergence. Regarding the codelength, we show that $D_{KL}[Q || P] \leq D_{CS}[Q || P] \leq \mathbb{H}[K]$, where $D_{CS}[Q || P]$ is a new quantity we call the channel simulation divergence. Furthermore, we prove that, unlike the $D_{KL}[Q || P]$ lower bound, our new lower bound is tightly achievable, i.e., there is a CRS such that $\mathbb{H}[K] \leq D_{CS}[Q || P] + \log_2 (e + 1)$. Finally, we conduct numerical studies of the asymptotic scaling of the codelength of Gaussian and Laplace channel simulation algorithms.
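As a concrete illustration (not the paper's construction), the sketch below implements standard rejection sampling, the simplest example of a causal rejection sampler: it examines proposal samples one at a time, and each accept/reject decision depends only on the samples seen so far. The toy distributions $Q$ and $P$ and all names in the code are our own illustrative assumptions. For this sampler, $K$ is geometric with mean $M = \exp_2(D_\infty[Q || P])$, so the abstract's runtime lower bound is met with equality.

```python
# A minimal sketch, assuming toy discrete Q and P: standard rejection
# sampling as an example of a causal rejection sampler (CRS). It examines
# proposal samples X_1, X_2, ... in order, and each accept/reject decision
# depends only on the samples seen so far.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target Q and proposal P over a four-letter alphabet.
Q = np.array([0.50, 0.25, 0.15, 0.10])
P = np.array([0.25, 0.25, 0.25, 0.25])

# Renyi infinity-divergence: D_inf[Q || P] = log2 max_x Q(x) / P(x),
# so M = 2^{D_inf} is the density-ratio bound the sampler needs.
M = np.max(Q / P)
D_inf = np.log2(M)

def rejection_sample(rng):
    """Return (x, k): the accepted symbol and the index K of the accepting step."""
    k = 0
    while True:
        k += 1
        x = rng.choice(len(P), p=P)           # draw X_k ~ P
        if rng.random() < Q[x] / (M * P[x]):  # accept w.p. (dQ/dP)(x) / M
            return x, k

# K is geometric with success probability 1/M, so E[K] = M = 2^{D_inf}:
# this sampler meets the runtime lower bound with equality.
Ks = np.array([rejection_sample(rng)[1] for _ in range(100_000)])
print(f"2^D_inf = {M:.3f}, empirical E[K] = {Ks.mean():.3f}")
```

For this simple CRS, however, $\mathbb{H}[K]$ is the entropy of a geometric distribution with mean $M$, which grows like $D_\infty[Q || P]$ rather than $D_{CS}[Q || P]$; approaching the codelength bound above requires the more carefully designed samplers studied in the paper.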