Towards Latency-aware DNN Optimization with GPU Runtime Analysis and Tail Effect Elimination (2011.03897v2)
Abstract: Despite the superb performance of State-Of-The-Art (SOTA) DNNs, their increasing computational cost makes it very challenging to meet real-time latency and accuracy requirements. Although DNN runtime latency is dictated by model properties (e.g., architecture, operations), hardware properties (e.g., utilization, throughput), and, more importantly, the effective mapping between the two, many existing approaches focus only on optimizing model properties, such as FLOPs reduction, and overlook the mismatch between DNN model and hardware properties. In this work, we show that the mismatch between varied DNN computation workloads and GPU capacity can cause the idle GPU tail effect, leading to GPU under-utilization and low throughput. As a result, FLOPs reduction does not translate into effective latency reduction, which yields sub-optimal accuracy-versus-latency trade-offs. Motivated by this, we propose a GPU runtime-aware DNN optimization methodology that adaptively eliminates such GPU tail effects on GPU platforms. Our methodology can be applied on top of existing SOTA DNN optimization approaches to achieve better latency and accuracy trade-offs. Experiments show 11%-27% latency reduction and 2.5%-4.0% accuracy improvement over several SOTA DNN pruning and NAS methods, respectively.
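The abstract's core claim, that FLOPs reduction need not reduce latency because of the tail effect, can be illustrated with a simplified wave-quantization model. The sketch below is not from the paper; all names and numbers (NUM_SMS, TIME_PER_WAVE, the block counts) are hypothetical assumptions chosen for illustration.

```python
import math

# Minimal sketch (assumed model, not the paper's method): thread blocks are
# scheduled onto streaming multiprocessors (SMs) in "waves". A partial final
# wave (the tail) occupies the GPU for a full wave while leaving SMs idle.
NUM_SMS = 80          # hypothetical number of SMs on the GPU
TIME_PER_WAVE = 1.0   # hypothetical latency of one full wave of thread blocks

def estimated_latency(num_blocks: int) -> float:
    """Latency scales with the number of waves, not raw FLOPs."""
    waves = math.ceil(num_blocks / NUM_SMS)
    return waves * TIME_PER_WAVE

# Cutting the workload from 100 to 81 blocks removes ~19% of the FLOPs but
# leaves latency unchanged; only eliminating the tail wave entirely helps.
print(estimated_latency(100))  # 2 waves -> 2.0
print(estimated_latency(81))   # 2 waves -> 2.0 (tail: 1 busy SM in wave 2)
print(estimated_latency(80))   # 1 wave  -> 1.0 (tail eliminated)
```

Under this toy model, a latency-aware optimizer would prune to wave boundaries (multiples of GPU capacity) rather than minimizing FLOPs alone, which is the intuition behind the paper's runtime-aware methodology.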
- Fuxun Yu (39 papers)
- Zirui Xu (25 papers)
- Tong Shen (41 papers)
- Dimitrios Stamoulis (23 papers)
- Longfei Shangguan (11 papers)
- Di Wang (407 papers)
- Rishi Madhok (2 papers)
- Chunshui Zhao (3 papers)
- Xin Li (980 papers)
- Nikolaos Karianakis (10 papers)
- Dimitrios Lymberopoulos (6 papers)
- Ang Li (472 papers)
- Yiran Chen (176 papers)
- Xiang Chen (343 papers)
- Chenchen Liu (24 papers)