FVC: A New Framework towards Deep Video Compression in Feature Space

Published 20 May 2021 in eess.IV and cs.CV | (2105.09600v2)

Abstract: Learning based video compression attracts increasing attention in the past few years. The previous hybrid coding approaches rely on pixel space operations to reduce spatial and temporal redundancy, which may suffer from inaccurate motion estimation or less effective motion compensation. In this work, we propose a feature-space video coding network (FVC) by performing all major operations (i.e., motion estimation, motion compression, motion compensation and residual compression) in the feature space. Specifically, in the proposed deformable compensation module, we first apply motion estimation in the feature space to produce motion information (i.e., the offset maps), which will be compressed by using the auto-encoder style network. Then we perform motion compensation by using deformable convolution and generate the predicted feature. After that, we compress the residual feature between the feature from the current frame and the predicted feature from our deformable compensation module. For better frame reconstruction, the reference features from multiple previous reconstructed frames are also fused by using the non-local attention mechanism in the multi-frame feature fusion module. Comprehensive experimental results demonstrate that the proposed framework achieves the state-of-the-art performance on four benchmark datasets including HEVC, UVG, VTL and MCL-JCV.

Summary

  • The paper introduces FVC, a novel framework that shifts all video compression operations to the feature space using deep neural networks.
  • It employs a deformable compensation module for motion estimation and compensation, and non-local attention to fuse reference features from multiple previously reconstructed frames.
  • Empirical results show that FVC achieves state-of-the-art performance on four benchmark datasets and reduces the bit-rate by 23.75% on the HEVC Class B dataset compared to H.265.

An Overview of "FVC: A New Framework towards Deep Video Compression in Feature Space"

The paper by Zhihao Hu, Guo Lu, and Dong Xu, titled "FVC: A New Framework towards Deep Video Compression in Feature Space," proposes a novel approach to video compression built on feature-space operations. Traditional hybrid methods reduce spatial and temporal redundancy through pixel-space operations such as motion estimation, motion compensation, and residual compression, which can suffer from inaccurate motion estimation and less effective compensation, particularly for non-rigid motion. The proposed method, FVC, shifts these operations into the feature space, improving the accuracy of motion estimation and the effectiveness of motion compensation.

Theoretical Framework and Methodology

FVC stands out from existing approaches by executing all major video coding operations within the feature space, including motion estimation, motion compression, motion compensation, and residual compression. The method utilizes deep neural networks to improve the accuracy of these processes. A critical component of this framework is the deformable compensation module, leveraging the robust representational capabilities of deep features. This module employs motion estimation to generate motion information via offset maps in the feature space. Following estimation, an auto-encoder style network compresses these offset maps, aiding in more efficient motion handling.
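The two steps described above, feature-space motion estimation that produces offset maps, followed by auto-encoder style compression of those offsets, can be sketched in miniature. The following is a minimal NumPy illustration, not the paper's network: the single linear "motion estimator", the two-layer autoencoder, and the uniform rounding quantizer are all placeholder assumptions standing in for the learned subnetworks, and the feature-map sizes are chosen only for readability.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4  # tiny feature map, purely for illustration

def estimate_offsets(ref_feat, cur_feat, w):
    """Predict a 2-channel offset map from the stacked reference and
    current features with one linear (1x1) layer -- a stand-in for the
    paper's learned motion-estimation subnetwork."""
    stacked = np.concatenate([ref_feat, cur_feat], axis=-1)  # (H, W, 2C)
    return stacked @ w                                       # (H, W, 2)

def compress_offsets(offsets, w_enc, w_dec, step=0.5):
    """Auto-encoder style compression: project the offsets to a latent,
    quantize it by uniform rounding, then reconstruct the offsets."""
    latent = offsets @ w_enc
    latent_q = np.round(latent / step) * step  # uniform quantization
    return latent_q @ w_dec, latent_q

ref_feat = rng.standard_normal((H, W, C))
cur_feat = rng.standard_normal((H, W, C))
w_me  = rng.standard_normal((2 * C, 2)) * 0.1
w_enc = rng.standard_normal((2, 2)) * 0.5
w_dec = rng.standard_normal((2, 2)) * 0.5

offsets = estimate_offsets(ref_feat, cur_feat, w_me)
rec_offsets, latent_q = compress_offsets(offsets, w_enc, w_dec)
print(offsets.shape, rec_offsets.shape)  # (8, 8, 2) (8, 8, 2)
```

In the paper, only the quantized latent is entropy-coded into the bitstream; the decoder reconstructs the offsets from it, so the encoder and decoder stay in sync by using the same reconstructed offsets.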

In particular, deformable convolution plays a pivotal role in motion compensation by applying flexible kernels for accurate feature alignment between consecutive frames. This is crucial for managing non-rigid motion patterns inherent in complex video data, thereby enhancing the predicted feature's accuracy while relieving the residual compression module's burden. Furthermore, the inclusion of non-local attention mechanisms for fusing reference features from multiple reconstructed frames facilitates superior frame reconstruction quality.
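The compensation-and-fusion pipeline described above can likewise be sketched under simplifying assumptions. True deformable convolution samples several offset positions per kernel tap; `warp_feature` below bilinearly samples a single offset position per pixel, a deliberate simplification. Similarly, `nonlocal_fuse` reduces non-local attention to a per-pixel softmax weighting across reference frames. Both functions are illustrative stand-ins, not the paper's modules.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 8, 8, 4  # tiny feature map, purely for illustration

def warp_feature(feat, offsets):
    """Bilinearly sample `feat` at per-pixel offset positions -- a
    single-sample-point simplification of deformable convolution."""
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + offsets[..., 0], 0, H - 1)
    x = np.clip(xs + offsets[..., 1], 0, W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = (y - y0)[..., None], (x - x0)[..., None]
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def nonlocal_fuse(query_feat, ref_feats):
    """Fuse features from several previous frames: softmax over the
    per-pixel similarity between the query and each reference, then
    take the similarity-weighted sum."""
    sims = np.stack([(query_feat * r).sum(-1) for r in ref_feats])  # (T, H, W)
    sims -= sims.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(sims)
    w /= w.sum(axis=0, keepdims=True)
    return sum(w[t][..., None] * ref_feats[t] for t in range(len(ref_feats)))

feat = rng.standard_normal((H, W, C))
offsets = rng.standard_normal((H, W, 2))
pred = warp_feature(feat, offsets)                                   # (8, 8, 4)
fused = nonlocal_fuse(pred, [rng.standard_normal((H, W, C)) for _ in range(3)])
```

Note the sanity property that makes this a warp: with all offsets zero, `warp_feature` returns the input feature unchanged, so the network only has to learn the displacements, not an identity mapping.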

Empirical Results and Performance

The paper presents comprehensive experimental results substantiating the efficacy of the FVC framework across four benchmark datasets: HEVC, UVG, VTL, and MCL-JCV. The findings indicate that FVC outperforms both traditional hybrid codecs and prior learning-based compression methods. Quantitative assessments demonstrate significant bit-rate savings over H.265 and contemporary learned codecs; on the HEVC Class B dataset, FVC achieves a 23.75% bit-rate reduction compared to H.265.
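As a back-of-the-envelope reading of the headline number: a 23.75% bit-rate reduction at matched quality means the learned codec needs roughly three quarters of the bits H.265 does for the same reconstruction quality. The bitrate below is a hypothetical figure chosen only for illustration, not a number from the paper.

```python
# Hypothetical illustration of what a 23.75% bit-rate saving means.
h265_bitrate_kbps = 4000.0  # assumed H.265 bitrate at some fixed quality level
saving = 0.2375             # reported reduction on the HEVC Class B dataset

fvc_bitrate_kbps = h265_bitrate_kbps * (1 - saving)
print(round(fvc_bitrate_kbps, 1))  # 3050.0
```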

Implications and Future Directions

This framework's advancements suggest promising implications for both practical and theoretical dimensions of video compression. By executing operations in the feature space, the FVC framework aligns with the trend toward leveraging deep learning paradigms for more adaptive and intelligent data compression methods. Practically, this could facilitate more effective handling of high-resolution and complex video content applications, potentially supporting advancements in streaming technologies and storage solutions.

On a theoretical level, the findings encourage further exploration into deformable networks and feature-space operations. Future research directions may involve integrating multi-scale feature operations, exploring bi-directional prediction schemes, or devising more efficient multi-frame fusion strategies to improve computational efficiency while maintaining the framework's compression efficacy.

In conclusion, the paper exemplifies a significant stride toward evolving the video compression landscape, with FVC providing a robust foundation for future innovations and enhancements in this domain.
