Distributed and Rate-Adaptive Feature Compression
Abstract: We study the problem of distributed and rate-adaptive feature compression for linear regression. A set of distributed sensors collects disjoint features of the regressor data. The fusion center is assumed to hold a linear regression model pretrained on the full, uncompressed dataset. At inference time, the sensors compress their observations and send them to the fusion center over communication-constrained channels whose rates can change over time. Our goal is to design a feature compression scheme that adapts to the varying communication constraints while maximizing inference performance at the fusion center. We first derive the form of the optimal quantizers assuming knowledge of the underlying regressor distribution. Under a practically reasonable approximation, we then propose a distributed compression scheme that quantizes a one-dimensional projection of each sensor's data. We also propose a simple adaptive scheme for handling changes in the communication constraints. We demonstrate the effectiveness of the distributed adaptive compression scheme through simulated experiments.
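To make the core idea concrete, the following is a minimal sketch of the one-dimensional projection-and-quantize step the abstract describes, not the paper's actual algorithm. All names (`design_codebook`, `project_and_quantize`, `fuse_and_predict`) are illustrative assumptions, and 1-D k-means stands in for whatever codebook-design procedure the paper uses. Each sensor projects its local feature block onto the corresponding block of the pretrained weights, quantizes that scalar, and the fusion center sums the dequantized projections to recover the linear prediction.

```python
# Hypothetical sketch: per-sensor scalar quantization of a 1-D projection,
# with the fusion center summing dequantized projections (linear model).
import numpy as np
from sklearn.cluster import KMeans

def design_codebook(projections, rate_bits):
    """Fit a scalar (1-D) codebook with 2**rate_bits levels via k-means
    (a stand-in here for the paper's quantizer design)."""
    km = KMeans(n_clusters=2 ** rate_bits, n_init=10)
    km.fit(projections.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())

def project_and_quantize(x_local, w_local, codebook):
    """Sensor side: project local features onto the local weight block,
    then send the index of the nearest codeword."""
    z = float(x_local @ w_local)                 # one-dimensional projection
    return int(np.argmin(np.abs(codebook - z)))  # index sent over the channel

def fuse_and_predict(indices, codebooks, bias=0.0):
    """Fusion center: sum the dequantized per-sensor projections."""
    return sum(cb[i] for i, cb in zip(indices, codebooks)) + bias

# Toy usage: two sensors holding disjoint feature blocks.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))        # regressor data (training-time samples)
w = rng.normal(size=6)                # pretrained linear model weights
blocks = [slice(0, 3), slice(3, 6)]   # disjoint features per sensor
codebooks = [design_codebook(X[:, b] @ w[b], rate_bits=3) for b in blocks]

x = X[0]
idxs = [project_and_quantize(x[b], w[b], cb) for b, cb in zip(blocks, codebooks)]
print(fuse_and_predict(idxs, codebooks), "vs uncompressed:", x @ w)
```

Under these assumptions, rate adaptation reduces to re-running `design_codebook` with a new `rate_bits` whenever a channel's rate changes; the abstract's adaptive scheme presumably handles this more efficiently.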