Compilation and Optimizations for Efficient Machine Learning on Embedded Systems (2206.03326v2)
Abstract: Deep Neural Networks (DNNs) have achieved great success in a variety of ML applications, delivering high-quality inference solutions in computer vision, natural language processing, and virtual reality, among others. However, DNN-based ML applications also bring greatly increased computational and storage requirements, which are particularly challenging for embedded systems with limited compute/storage resources, tight power budgets, and small form factors. Further challenges arise from diverse application-specific requirements, including real-time response, high throughput, and reliable inference accuracy. To address these challenges, we introduce a series of effective design methodologies, including efficient ML model designs, customized hardware accelerator designs, and hardware/software co-design strategies, to enable efficient ML applications on embedded systems.
- Xiaofan Zhang
- Yao Chen
- Cong Hao
- Sitao Huang
- Yuhong Li
- Deming Chen