CoDiCodec: Unified Audio & Video Compression

Updated 15 September 2025
  • CoDiCodec is a unified codec framework that compresses audio and video using continuous embeddings and discrete tokens to balance fidelity and bitrate.
  • It leverages Finite Scalar Quantization with FSQ-dropout to enhance latent space expressivity and robustness during training.
  • The system supports both autoregressive and parallel decoding methods, enabling efficient reconstruction for generative and machine vision tasks.

CoDiCodec refers to a class of codecs and related frameworks that efficiently represent high-dimensional signals—primarily audio and video—through flexible compressed embeddings, supporting both continuous and discrete tokenization. Recent works under the "CoDiCodec" designation have unified competing paradigms in generative modeling, advanced machine-perception-aware compression, and addressed fundamental bottlenecks in both fidelity and bitrate reduction. The term covers innovations in audio autoencoding, multi-modal generative modeling, and perceptually adaptive video compression frameworks (Sun et al., 27 Mar 2025, Pasini et al., 11 Sep 2025).

1. Unified Continuous and Discrete Latent Representations

Conventional autoencoders enforce a choice: continuous embeddings favor generative diffusion models, while discrete tokens are optimal for autoregressive, sequence-based generation. CoDiCodec unifies these. The model is trained end-to-end to yield both:

  • Global summary embeddings: Highly compressed continuous features extracted from spectrogram input via transformer blocks and nonlinear projections.
  • Discrete tokens: Derived from the same summary embedding by Finite Scalar Quantization (FSQ), providing a low-bitrate representation attuned to token-based generative architectures.

This approach achieves compression rates up to 128× for stereo 44.1 kHz audio and provides continuous embeddings at ~11 Hz and discrete tokens at 2.38 kbps within a single model architecture (Pasini et al., 11 Sep 2025). This duality facilitates seamless integration with downstream generative tasks in both paradigms.
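As a minimal sketch of this dual-output interface (the class and method names here are illustrative, not the authors' API), a single encoding pass can expose both views of the latent:

```python
import numpy as np

class DualLatentEncoder:
    """Toy stand-in for a CoDiCodec-style encoder: a single pass yields
    a continuous summary embedding and its FSQ-discretized counterpart."""

    def __init__(self, n_levels: int = 8):
        self.n_levels = n_levels  # N in the FSQ formula of Section 2

    def encode(self, spectrogram: np.ndarray):
        # Placeholder for the transformer blocks and nonlinear projections;
        # time-averaging stands in for the summary computation here.
        z_cont = spectrogram.mean(axis=-1)
        # Discrete view: FSQ maps each dimension onto one of 2N+1 levels.
        z_disc = np.round(self.n_levels * np.tanh(z_cont)) / self.n_levels
        return z_cont, z_disc

z_cont, z_disc = DualLatentEncoder().encode(np.random.randn(64, 256))
```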

2. Finite Scalar Quantization and FSQ-Dropout Mechanisms

Finite Scalar Quantization (FSQ) discretizes continuous activations via:

$$\hat{z} = \frac{\mathrm{round}(N \cdot \tanh(z))}{N}$$

Here, $z$ denotes the continuous summary embeddings and $N$ sets the codebook granularity; since $\tanh(z) \in (-1, 1)$, each dimension takes one of $2N+1$ quantized values.
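A direct transcription of this quantizer (a minimal numpy sketch; the function name and test values are illustrative) makes the level count concrete:

```python
import numpy as np

def fsq(z: np.ndarray, n: int) -> np.ndarray:
    """Finite Scalar Quantization per the formula above: bound each
    dimension with tanh, then round onto a uniform grid in [-1, 1]."""
    return np.round(n * np.tanh(z)) / n

# Sweeping a wide input range hits every level of the grid {k/n : |k| <= n}:
levels = np.unique(fsq(np.linspace(-10, 10, 100_000), n=4))
print(len(levels))  # 9 == 2*4 + 1
```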

FSQ-dropout further enhances expressivity. During training, quantization is skipped with a configurable probability, so the decoder must reconstruct from both quantized and raw continuous inputs. This prevents excessive clustering at discrete codebook levels, boosting the dynamic range and robustness of the latent space, and the model generalizes across both representation modes without additional losses.
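A hedged sketch of this training-time branch (p_skip is a hypothetical hyperparameter; the paper's actual dropout probability, and whether the tanh bounding is kept on the continuous path, are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

def fsq_with_dropout(z: np.ndarray, n: int, p_skip: float = 0.5,
                     training: bool = True) -> np.ndarray:
    """FSQ-dropout sketch: with probability p_skip, bypass the rounding
    so the decoder also learns to reconstruct from continuous latents."""
    bounded = np.tanh(z)                 # keep latents in (-1, 1)
    if training and rng.random() < p_skip:
        return bounded                   # continuous path (quantization skipped)
    return np.round(n * bounded) / n     # discrete path (FSQ)
```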

3. Decoding Approaches: Autoregressive and Parallel Strategies

CoDiCodec supports both standard autoregressive and novel parallel decoding methods:

  • Autoregressive decoding: Reconstructs audio chunk-by-chunk, conditioning each output on previous segments. Optimal for interactive, low-latency scenarios.
  • Parallel decoding: Adjacent chunks are paired and decoded concurrently; cross-conditioning between overlap regions reduces boundary artifacts. Shifting the pairings at each denoising step propagates information efficiently across segments (see the pairing sketch after this list).
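The pairing schedule can be illustrated with a toy function (purely illustrative; the paper's exact schedule may differ): alternating the pairing offset across denoising steps lets information cross every chunk boundary.

```python
def pairings(num_chunks: int, step: int) -> list[tuple[int, int]]:
    """Toy pairing schedule for parallel decoding: adjacent chunks are
    decoded together, and the pairing offset alternates across denoising
    steps so that information crosses every chunk boundary."""
    start = step % 2  # shift the pairing on alternate steps
    return [(i, i + 1) for i in range(start, num_chunks - 1, 2)]

for step in range(3):
    print(step, pairings(6, step))
# 0 [(0, 1), (2, 3), (4, 5)]
# 1 [(1, 2), (3, 4)]
# 2 [(0, 1), (2, 3), (4, 5)]
```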

Empirical evaluation demonstrates that parallel decoding not only speeds up inference (fewer denoising iterations) but also yields higher-quality reconstructions, as measured by Fréchet Audio Distance and SI-SDR, than existing continuous and discrete autoencoders at matched bitrates (Pasini et al., 11 Sep 2025).

4. Single Consistency Loss Training

Unlike multi-stage adversarial or auxiliary loss regimes, CoDiCodec training is based solely on a single consistency loss. The decoder learns to reconstruct clean audio from spectrogram input contaminated with Gaussian noise. The consistency loss formulation (e.g., via pseudo-Huber distance between outputs at adjacent noise levels):

$$L = \mathbb{E}\left[\frac{1}{\Delta\sigma}\, d\big(\mathrm{Dec}_{\sigma+\Delta\sigma}(x_{\text{noise}}),\ \mathrm{stopgrad}(\mathrm{Dec}_{\sigma}(x_{\text{noise}}))\big)\right]$$

ensures that the generative mapping is invertible and reliable for both continuous and discrete latent modes, obviating the need for multi-objective balancing.
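In PyTorch-like form, a minimal sketch of this objective might read as follows (the decoder(x, sigma) interface and the pseudo-Huber scale c are assumptions, not the authors' exact implementation):

```python
import torch

def pseudo_huber(a: torch.Tensor, b: torch.Tensor, c: float = 0.03) -> torch.Tensor:
    """Pseudo-Huber distance, one common choice for d(., .) in
    consistency training; c is a scale hyperparameter."""
    return (torch.sqrt((a - b) ** 2 + c ** 2) - c).mean()

def consistency_loss(decoder, x_noise: torch.Tensor,
                     sigma: float, d_sigma: float) -> torch.Tensor:
    """Single consistency loss from the formula above: the output at the
    higher noise level sigma + d_sigma is pulled toward the stop-gradient
    target at sigma."""
    target = decoder(x_noise, sigma).detach()        # stopgrad(Dec_sigma(x))
    pred = decoder(x_noise, sigma + d_sigma)         # Dec_{sigma + d_sigma}(x)
    return pseudo_huber(pred, target) / d_sigma
```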

5. Applications Across Generative Audio and Machine Vision

CoDiCodec enables deployment of a unified autoencoder in diverse settings:

  • Generative audio synthesis: Compatible with diffusion, GAN, and autoregressive models for music, speech, and ambient sound generation.
  • Audio transformation/enhancement: High-compression summary embeddings support manipulation, style transfer, or restoration tasks.
  • Music Information Retrieval (MIR): Compressed features facilitate similarity search, tagging, and classification.
  • Video coding for machines: The CDRE framework (Sun et al., 27 Mar 2025), referenced as "CoDiCodec," extracts feature-domain distortion representations for embedding into downstream machine vision tasks. The pipeline involves a compression-sensitive extractor, a lightweight VAE-like distortion codec, and prompt-like embedding into model backbones (CNN/Transformer), directly boosting rate–task performance in detection and segmentation (a minimal injection sketch follows the table below).
CoDiCodec Variant           Modality   Latent Types            Downstream Integration
Audio CoDiCodec             Audio      Continuous, Discrete    Diffusion, autoregressive, MIR
Video CDRE ("CoDiCodec")    Video      Distortion features     Object detection, segmentation
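To make the prompt-like embedding step concrete, here is a heavily simplified sketch (the module name, shapes, and the prepend-as-token choice are assumptions for illustration, not the CDRE implementation):

```python
import torch
import torch.nn as nn

class DistortionPrompt(nn.Module):
    """Minimal sketch of prompt-like injection: a compressed distortion
    representation is projected and prepended as an extra token to a
    Transformer backbone's input sequence."""

    def __init__(self, distortion_dim: int, model_dim: int):
        super().__init__()
        self.proj = nn.Linear(distortion_dim, model_dim)

    def forward(self, tokens: torch.Tensor, distortion: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, model_dim); distortion: (batch, distortion_dim)
        prompt = self.proj(distortion).unsqueeze(1)   # (batch, 1, model_dim)
        return torch.cat([prompt, tokens], dim=1)     # prepend as a prompt token
```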

These models report improved generative quality (lower distortion, higher MOS/SMOS in TTS, reduced Fréchet Audio Distance), significant BD-rate reductions in vision tasks (–66.88% for detection), and minimal overhead in memory or computation.

6. Technical Implications and Future Directions

Unifying continuous and discrete compressed representations enables flexible model deployment and cross-paradigm generative inference within a consistent framework. FSQ-dropout resolves expressivity bottlenecks of scalar quantization, while parallel decoding mechanisms address speed–quality trade-offs.

This suggests future compression frameworks will continue to decouple representation types from architectural constraints, support cross-modal adaptation, and leverage attention-informed embedding of distortion signals into machine-perception pipelines. A plausible implication is the proliferation of multi-modal codecs that bridge generative and discriminative modeling for audio, speech, and video streams, with downstream models increasingly informed by perceptually calibrated latent codes.

In summary, CoDiCodec advances signal representation by simultaneously supporting continuous and discrete compressed modalities, offering robust, efficient, and high-fidelity generative modeling and machine vision integration (Pasini et al., 11 Sep 2025, Sun et al., 27 Mar 2025).
