A Computationally Efficient Neural Video Compression Accelerator Based on a Sparse CNN-Transformer Hybrid Network (2312.10716v2)
Abstract: Video compression is widely used in digital television, surveillance systems, and virtual reality, and real-time video decoding is crucial in practical scenarios. Recently, neural video compression (NVC), which combines traditional coding with deep learning, has achieved impressive compression efficiency. Nevertheless, NVC models incur high computational costs and complex memory access patterns, which hinder real-time hardware implementation. To relieve this burden, we propose NVCA, an algorithm-hardware co-design framework for video decoding on resource-limited devices. First, a CNN-Transformer hybrid network is developed to improve compression performance by capturing multi-scale non-local features. In addition, we propose a fast-algorithm-based sparse strategy that leverages the dual advantages of pruning and fast convolution algorithms, substantially reducing computational complexity while maintaining video compression efficiency. Second, a reconfigurable sparse computing core is designed to flexibly support sparse convolutions and deconvolutions under this strategy. Furthermore, a novel heterogeneous layer-chaining dataflow reduces the off-chip memory traffic caused by extensive inter-frame motion and residual information. Third, the overall NVCA architecture is designed and synthesized in TSMC 28 nm CMOS technology. Extensive experiments demonstrate that our design delivers superior coding quality and up to 22.7x faster decoding than other video compression designs, while achieving up to 2.2x higher energy efficiency than prior accelerators.
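The "fast algorithm" the abstract pairs with pruning is in the family of Winograd fast convolution (Lavin et al., cited below). As a rough illustration only, the NumPy sketch below computes a Winograd F(2x2, 3x3) tile with a toy magnitude-pruned kernel and checks it against direct cross-correlation; the transform matrices are the standard ones from the literature, not the paper's actual hardware mapping, and the pruning threshold is invented for the example.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray, CVPR 2016).
G  = np.array([[1.0, 0.0, 0.0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0.0, 0.0, 1.0]])
Bt = np.array([[1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 1.0, 0.0],
               [0.0, -1.0, 1.0, 0.0], [0.0, 1.0, 0.0, -1.0]])
At = np.array([[1.0, 1.0, 1.0, 0.0], [0.0, 1.0, -1.0, -1.0]])

def winograd_f2x2_3x3(d, g):
    """2x2 output tile from a 4x4 input tile d and a 3x3 kernel g."""
    U = G @ g @ G.T              # kernel transformed into the 4x4 Winograd domain
    V = Bt @ d @ Bt.T            # input tile transformed likewise
    return At @ (U * V) @ At.T   # elementwise product, then inverse transform

def direct_corr(d, g):
    """Reference: direct valid cross-correlation (2x2 output)."""
    return np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)]
                     for i in range(2)])

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 3))
g[np.abs(g) < 0.5] = 0.0         # toy magnitude pruning: zeros skip work in hardware
d = rng.standard_normal((4, 4))

# The fast algorithm must agree with direct convolution despite the sparsity.
assert np.allclose(winograd_f2x2_3x3(d, g), direct_corr(d, g))
```

The elementwise product `U * V` replaces the 36 multiplications of direct 3x3 convolution over a 2x2 output with 16, and pruned (zero) entries of `U` can be skipped entirely, which is the dual saving the strategy exploits.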
- T. Wiegand et al., “Overview of the H.264/AVC video coding standard,” IEEE TCSVT, vol. 13, no. 7, pp. 560–576, Jul. 2003.
- G. Sullivan et al., “Overview of the high efficiency video coding (HEVC) standard,” IEEE TCSVT, vol. 22, no. 12, pp. 1649–1668, Dec. 2012.
- B. Bross et al., “Overview of the versatile video coding (VVC) standard and its applications,” IEEE TCSVT, vol. 31, no. 10, pp. 3736–3764, 2021.
- G. Lu et al., “DVC: An end-to-end deep video compression framework,” in CVPR, Jun. 2019, pp. 10998–11007.
- Z. Hu et al., “FVC: An end-to-end framework towards deep video compression in feature space,” IEEE TPAMI, vol. 45, no. 4, pp. 4569–4585, Apr. 2023.
- G. Lu et al., “Content adaptive and error propagation aware deep video compression,” in ECCV, Aug. 2020, pp. 456–472.
- O. Rippel et al., “ELF-VC: Efficient learned flexible-rate video coding,” in ICCV, Oct. 2021, pp. 14459–14468.
- J. Li et al., “Deep contextual video compression,” in NeurIPS, 2021, pp. 18114–18125.
- F. Mentzer et al., “VCT: A video compression transformer,” in NeurIPS, 2022.
- G. Li et al., “Block convolution: Toward memory-efficient inference of large-scale CNNs on FPGA,” IEEE TCAD, vol. 41, no. 5, pp. 1436–1447, May 2022.
- H. Le et al., “MobileCodec: Neural inter-frame video compression on mobile devices,” in MMSys, Jun. 2022, pp. 324–330.
- C. Jia et al., “FPX-NVC: an FPGA-accelerated P-frame based neural video coding system,” in VCIP, Dec. 2022.
- W. Mao et al., “FTA-GAN: A computation-efficient accelerator for GANs with fast transformation algorithm,” IEEE TNNLS, vol. 34, no. 6, pp. 2978–2992, Jun. 2023.
- S. Zhang et al., “An efficient accelerator based on lightweight deformable 3D-CNN for video super-resolution,” IEEE TCAS-I, vol. 70, no. 6, pp. 2384–2397, Jun. 2023.
- X. Wang et al., “WinoNN: Optimizing FPGA-based convolutional neural network accelerators using sparse winograd algorithm,” IEEE TCAD, vol. 39, no. 11, pp. 4290–4302, Nov. 2020.
- J.-W. Chang et al., “Towards design methodology of efficient fast algorithms for accelerating generative adversarial networks on FPGAs,” in ASP-DAC, 2020, pp. 283–288.
- R. Zou et al., “The devil is in the details: Window-based attention for image compression,” in CVPR, Jun. 2022, pp. 17471–17480.
- Z. Liu et al., “Swin Transformer: Hierarchical vision transformer using shifted windows,” in ICCV, Oct. 2021, pp. 9992–10002.
- A. Lavin et al., “Fast algorithms for convolutional neural networks,” in CVPR, Jun. 2016, pp. 4013–4021.
- T. Xue et al., “Video enhancement with task-oriented flow,” IJCV, vol. 127, no. 8, pp. 1106–1125, Feb. 2019.
- A. Mercat et al., “UVG dataset: 50/120fps 4K sequences for video codec analysis and development,” in MMSys, May 2020, pp. 297–302.
- H. Wang et al., “MCL-JCV: A JND-based H.264/AVC video quality assessment dataset,” in ICIP, Sep. 2016, pp. 1509–1513.
- Z. Wang et al., “Multi-scale structural similarity for image quality assessment,” in ACSSC, vol. 2, Nov. 2003, pp. 1398–1402.
- Y. Zhao et al., “DNN-Chip Predictor: An analytical performance predictor for DNN accelerators with various dataflows and hardware architectures,” in ICASSP, May 2020, pp. 1593–1597.
- Z. Shao et al., “Memory-efficient CNN accelerator based on interlayer feature map compression,” IEEE TCAS-I, vol. 69, no. 2, pp. 668–681, Feb. 2022.
- Y. Wang et al., “An efficient deep learning accelerator architecture for compressed video analysis,” IEEE TCAD, vol. 41, no. 9, pp. 2808–2820, Sep. 2022.
- Siyu Zhang (32 papers)
- Wendong Mao (13 papers)
- Huihong Shi (18 papers)
- Zhongfeng Wang (50 papers)