Sense, Predict, Adapt, Repeat: A Blueprint for Design of New Adaptive AI-Centric Sensing Systems (2312.07602v1)

Published 11 Dec 2023 in eess.SP, cs.AI, cs.SY, and eess.SY

Abstract: As Moore's Law loses momentum, improving size, performance, and efficiency of processors has become increasingly challenging, ending the era of predictable improvements in hardware performance. Meanwhile, the widespread incorporation of high-definition sensors in consumer devices and autonomous technologies has fueled a significant upsurge in sensory data. Current global trends reveal that the volume of generated data already exceeds human consumption capacity, making AI algorithms the primary consumers of data worldwide. To address this, a novel approach to designing AI-centric sensing systems is needed that can bridge the gap between the increasing capabilities of high-definition sensors and the limitations of AI processors. This paper provides an overview of efficient sensing and perception methods in both AI and sensing domains, emphasizing the necessity of co-designing AI algorithms and sensing systems for dynamic perception. The proposed approach involves a framework for designing and analyzing dynamic AI-in-the-loop sensing systems, suggesting a fundamentally new method for designing adaptive sensing systems through inference-time AI-to-sensor feedback and end-to-end efficiency and performance optimization.
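The sense-predict-adapt feedback loop that gives the paper its title can be sketched in code. This is a minimal illustrative toy, not the paper's actual framework: the `AdaptiveSensingLoop` class, the confidence thresholds, and the placeholder sensor/model are all assumptions made for the sake of the example. It shows the core idea of inference-time AI-to-sensor feedback, where the model's confidence drives the sensor's operating point.

```python
import random


class AdaptiveSensingLoop:
    """Toy sketch of an AI-in-the-loop sensing system: the model's
    inference-time confidence feeds back to adjust sensor settings.
    All components here are illustrative stand-ins."""

    def __init__(self, min_res=1, max_res=8):
        self.resolution = max_res  # start sensing at full definition
        self.min_res, self.max_res = min_res, max_res

    def sense(self):
        # Placeholder sensor: acquisition cost grows with resolution.
        return [random.random() for _ in range(self.resolution)]

    def predict(self, frame):
        # Placeholder AI model: returns a label and a confidence score
        # that (in this toy) improves with the amount of sensed data.
        confidence = min(1.0, 0.5 + 0.06 * len(frame))
        return "object", confidence

    def adapt(self, confidence, low=0.7, high=0.9):
        # AI-to-sensor feedback: lower the sensing resolution when the
        # model is confident, raise it again when confidence drops.
        if confidence > high and self.resolution > self.min_res:
            self.resolution -= 1
        elif confidence < low and self.resolution < self.max_res:
            self.resolution += 1

    def step(self):
        # One iteration of sense -> predict -> adapt; repeat as needed.
        frame = self.sense()
        label, confidence = self.predict(frame)
        self.adapt(confidence)
        return label, confidence, self.resolution
```

In this sketch the loop settles at the cheapest sensor setting that keeps confidence inside the target band, which is the kind of end-to-end efficiency/performance trade-off the abstract argues should be co-designed rather than fixed at design time.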
