FOOL: Addressing the Downlink Bottleneck in Satellite Computing with Neural Feature Compression (2403.16677v3)

Published 25 Mar 2024 in cs.LG, cs.CV, cs.DC, cs.NI, and eess.IV

Abstract: Nanosatellite constellations equipped with sensors capturing large geographic regions provide unprecedented opportunities for Earth observation. As constellation sizes increase, network contention poses a downlink bottleneck. Orbital Edge Computing (OEC) leverages limited onboard compute resources to reduce transfer costs by processing the raw captures at the source. However, current solutions have limited practicability due to reliance on crude filtering methods or over-prioritizing particular downstream tasks. This work presents FOOL, an OEC-native and task-agnostic feature compression method that preserves prediction performance. FOOL partitions high-resolution satellite imagery to maximize throughput. Further, it embeds context and leverages inter-tile dependencies to lower transfer costs with negligible overhead. While FOOL is a feature compressor, it can recover images with competitive scores on quality measures at lower bitrates. We extensively evaluate transfer cost reduction by including the peculiarity of intermittently available network connections in low earth orbit. Lastly, we test the feasibility of our system for standardized nanosatellite form factors. We demonstrate that FOOL permits downlinking over 100x the data volume without relying on prior information on the downstream tasks.
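
The tiling step described in the abstract can be pictured with a short sketch. The snippet below is an illustrative approximation under assumed details, not the authors' implementation: it partitions a high-resolution capture into fixed-size tiles and passes each tile through a small convolutional bottleneck standing in for a learned feature compressor. The tile size, channel widths, and the `TileEncoder` module are assumptions made for this example.

```python
# Illustrative sketch only: split a capture into tiles and encode each tile
# with a small convolutional bottleneck standing in for a learned feature
# compressor. Tile size, channel widths, and module names are assumptions
# for this example, not the FOOL architecture.
import torch
import torch.nn as nn


class TileEncoder(nn.Module):
    """Toy stand-in for a neural feature compressor applied per tile."""

    def __init__(self, in_ch: int = 3, latent_ch: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_ch, kernel_size=5, stride=2, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def partition_into_tiles(capture: torch.Tensor, tile: int = 256) -> torch.Tensor:
    """Split a (C, H, W) capture into a batch of (C, tile, tile) tiles.

    H and W are assumed to be exact multiples of `tile` for simplicity.
    """
    c, _, _ = capture.shape
    return (
        capture.unfold(1, tile, tile)   # (C, H//tile, W, tile)
        .unfold(2, tile, tile)          # (C, H//tile, W//tile, tile, tile)
        .permute(1, 2, 0, 3, 4)         # (H//tile, W//tile, C, tile, tile)
        .reshape(-1, c, tile, tile)     # flat batch of tiles
    )


if __name__ == "__main__":
    capture = torch.rand(3, 1024, 1024)          # stand-in for a raw capture
    tiles = partition_into_tiles(capture, 256)   # 16 tiles of 256x256
    encoder = TileEncoder()
    with torch.no_grad():
        latents = encoder(tiles)                 # compressed per-tile features
    # torch.Size([16, 3, 256, 256]) torch.Size([16, 16, 64, 64])
    print(tiles.shape, latents.shape)
```

In a real pipeline the per-tile latents would then be entropy coded and queued for downlink, which is where the context embedding and inter-tile dependencies mentioned in the abstract would come into play; those parts are omitted here.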
