Distributed and Rate-Adaptive Feature Compression

Published 2 Apr 2024 in cs.IT, cs.AI, math.IT, and stat.ML | arXiv:2404.02179v1

Abstract: We study the problem of distributed and rate-adaptive feature compression for linear regression. A set of distributed sensors collects disjoint features of regressor data. A fusion center is assumed to contain a linear regression model pretrained on a dataset of the entire uncompressed data. At inference time, the sensors compress their observations and send them to the fusion center through communication-constrained channels whose rates can change with time. Our goal is to design a feature compression scheme that can adapt to the varying communication constraints while maximizing inference performance at the fusion center. We first obtain the form of the optimal quantizers assuming knowledge of the underlying regressor data distribution. Under a practically reasonable approximation, we then propose a distributed compression scheme that works by quantizing a one-dimensional projection of the sensor data. We also propose a simple adaptive scheme for handling changes in the communication constraints. We demonstrate the effectiveness of the distributed adaptive compression scheme through simulated experiments.
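The core idea of the proposed scheme — each sensor projecting its local feature block onto one dimension and scalar-quantizing the result before transmission — can be sketched as follows. This is an illustrative toy construction, not the paper's exact quantizer design: the split sizes, the uniform mid-point quantizer, the clipping range, and the choice of projecting onto each sensor's slice of the pretrained weight vector are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two sensors hold disjoint feature blocks of a linear model
# y = w^T x, with the pretrained weights w residing at the fusion center.
d1, d2, n = 3, 2, 5
w = rng.normal(size=d1 + d2)        # pretrained regression weights
X = rng.normal(size=(n, d1 + d2))   # inference-time regressor data

def quantize_projection(z, rate_bits, lo=-4.0, hi=4.0):
    """Uniform scalar quantizer with 2**rate_bits levels on the 1-D projection z."""
    levels = 2 ** rate_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((z - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step  # mid-point reconstruction

# Each sensor projects its feature block onto its slice of w (a 1-D statistic),
# quantizes it at its channel's current rate, and sends the result.
z1 = X[:, :d1] @ w[:d1]
z2 = X[:, d1:] @ w[d1:]
y_hat = quantize_projection(z1, rate_bits=6) + quantize_projection(z2, rate_bits=3)

# The fusion center's estimate approaches the uncompressed prediction
# as the channel rates grow.
y_true = X @ w
print(np.max(np.abs(y_hat - y_true)))
```

Because the fusion center only needs the sum of the per-sensor projections, each sensor's message is a single quantized scalar regardless of its feature dimension, which is what makes the per-channel rates easy to adjust when the communication constraints change.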
