DeepQTMT: A Deep Learning Approach for Fast QTMT-based CU Partition of Intra-mode VVC (2006.13125v3)

Published 23 Jun 2020 in eess.IV and cs.MM

Abstract: Versatile Video Coding (VVC), as the latest standard, significantly improves the coding efficiency over its predecessor standard High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quad-tree plus multi-type tree (QTMT) structure of coding unit (CU) partition accounts for over 97% of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which can facilitate data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early-exit mechanism to determine the CU partition, in accord with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, synthesizing both the uncertain number of split modes and the target of minimized RD cost. Finally, a multi-threshold decision scheme is developed, achieving a desirable trade-off between complexity and RD performance. Experimental results demonstrate that our approach can reduce the encoding time of VVC by 44.65%-66.88% with a negligible Bjøntegaard delta bit-rate (BD-BR) of 1.322%-3.188%, which significantly outperforms other state-of-the-art approaches.
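The early-exit and multi-threshold ideas in the abstract can be illustrated with a minimal sketch: at each partition stage, a classifier scores the candidate split modes, and the search exits early once the top mode clears that stage's confidence threshold; otherwise the encoder falls back to full RD search. The predictor, mode names, and threshold values below are all hypothetical stand-ins, not taken from the paper or its MSE-CNN model.

```python
# Hedged sketch of an early-exit, multi-stage partition decision with
# per-stage confidence thresholds. `toy_predictor` is a stand-in for a
# learned stage-wise CNN; mode names and thresholds are illustrative.

SPLIT_MODES = ["no_split", "quad", "bin_h", "bin_v", "tern_h", "tern_v"]

def toy_predictor(stage, cu):
    """Stand-in for a per-stage classifier: one probability per split mode."""
    # Deterministic toy scores so the example runs without a trained model.
    scores = [(hash((stage, cu, m)) % 100) / 100.0 for m in SPLIT_MODES]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

def decide_partition(cu, thresholds, predictor=toy_predictor):
    """Walk the stages; exit early once the top mode clears the threshold.

    Returns (stage, mode) on an early exit, or (None, "rd_search") when no
    stage is confident enough and the encoder falls back to RD optimization.
    """
    for stage, thr in enumerate(thresholds):
        probs = predictor(stage, cu)
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= thr:          # confident enough: skip later stages
            return stage, SPLIT_MODES[best]
    return None, "rd_search"            # fall back to brute-force RD search
```

In this sketch, lowering the thresholds makes early exits more frequent (faster encoding, more BD-BR loss), while raising them pushes more CUs to the full RD search, which mirrors the complexity/RD trade-off the multi-threshold scheme is described as tuning.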

Authors (5)
  1. Tianyi Li (84 papers)
  2. Mai Xu (48 papers)
  3. Runzhi Tang (1 paper)
  4. Ying Chen (333 papers)
  5. Qunliang Xing (11 papers)
Citations (103)