UniCMs: A Unified Consistency Model For Efficient Multimodal Generation and Understanding (2502.05415v2)

Published 8 Feb 2025 in cs.CV and cs.AI

Abstract: Consistency models (CMs) have shown promise in the efficient generation of both image and text. This raises the natural question of whether we can learn a unified CM for efficient multimodal generation (e.g., text-to-image) and understanding (e.g., image-to-text). Intuitively, such a model could be acquired by applying the consistency distillation (CD) to existing unified multimodal models. However, the key challenge is establishing a unified denoising perspective for both image and text generation, which is essential for establishing the consistency mapping. To tackle this, at the representation level, we advocate for discrete tokens for both modalities to best preserve language modeling capabilities. Critically, instead of defining the text denoising trajectory via recent discrete diffusion language modeling principles, we specify it using the parallel decoding trace of an autoregressive LLM, benefiting from the latter's superior performance in general text generation tasks. The denoising trajectory of image tokens adheres to standard discrete diffusion. We train our unified consistency models (UniCMs) on these combined multimodal trajectories simultaneously with a unified objective. We introduce a trajectory segmentation strategy to further improve the training convergence. Empirically, in text-to-image generation, UniCMs outperform SD3 on GenEval, Image Reward, and CLIP Score metrics, while requiring only approximately ${1}/{8}$ of the sampling time. Meanwhile, in image-to-text generation, UniCMs surpass Show-o on the MMMU benchmark while being $1.5 \times$ faster at long-sequence generating speed. The code is available at https://github.com/zhijie-group/UniCMs.

An Insightful Overview of "Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation"

The paper "Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation," authored by Xu et al., presents an innovative approach to enhancing the efficiency of multimodal models in both image and text generation tasks. Show-o Turbo builds upon the existing Show-o model, which integrates text-to-image and image-to-text generation within a unified framework, yet suffers from inefficiencies during the inference phase. This paper addresses these inefficiencies by proposing methodologies to reduce the complexity and duration of the generative process.

Key Contributions and Methodologies

The authors identify the inefficiency in how Show-o processes the two modalities separately: image tokens require many steps of progressive denoising, while text tokens are decoded autoregressively, one at a time. To mitigate both bottlenecks, the paper introduces a unified denoising view of the two processes, coupled with consistency distillation (CD) and parallel decoding techniques.

  1. Unified Denoising Perspective: Show-o Turbo casts both image and text generation as denoising processes. For text, this is realized through parallel decoding, in particular Jacobi decoding, which refines multiple text tokens simultaneously instead of emitting them one at a time, narrowing the gap between how the two modalities are generated (a minimal sketch of such a decoding loop appears after this list).
  2. Consistency Distillation (CD): The paper extends CD, originally developed for diffusion models, to the multimodal denoising trajectories of Show-o. The student learns to map any point on Show-o's sampling trajectory directly toward its endpoint, so that comparable outputs are reached in far fewer steps (a sketch of such a training step also appears after this list).
  3. Trajectory Segmentation and Curriculum Learning: To improve convergence and training efficiency, Show-o Turbo divides the sampling trajectory into shorter segments and first learns consistency within each segment. A curriculum that progresses from short within-segment jumps toward longer ones lets the model acquire its few-step generative capacity efficiently.
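
To make the parallel-decoding idea concrete, here is a minimal, hypothetical sketch of a Jacobi-style decoding loop for text tokens. It assumes a HuggingFace-style causal language model whose forward pass returns `.logits`; the function name, arguments, and stopping rule are illustrative and not taken from the paper's released code.

```python
import torch

@torch.no_grad()
def jacobi_decode(model, prompt_ids, gen_len, pad_id, max_iters=32):
    """Illustrative Jacobi-style parallel decoding.

    A whole block of draft tokens is re-predicted in parallel from the
    previous iterate until the block stops changing (a fixed point),
    instead of emitting one token per forward pass.
    """
    device = prompt_ids.device
    # Start from a block of placeholder tokens for the answer.
    draft = torch.full((1, gen_len), pad_id, dtype=torch.long, device=device)
    for _ in range(max_iters):
        seq = torch.cat([prompt_ids, draft], dim=1)           # (1, P + L)
        logits = model(seq).logits                             # (1, P + L, V)
        # For a causal LM, logits at position i predict the token at i + 1,
        # so all L draft positions are updated in a single forward pass.
        new_draft = logits[:, prompt_ids.size(1) - 1 : -1].argmax(dim=-1)
        if torch.equal(new_draft, draft):                      # fixed point reached
            break
        draft = new_draft
    return draft
```

In the worst case this loop still accepts only one token per iteration, matching the cost of autoregressive decoding; consistency distillation is what trains the model to reach the fixed point in far fewer iterations.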
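
Likewise, the following is a minimal sketch, under stated assumptions, of a segment-wise consistency distillation step on a discrete denoising trajectory. The trajectory is assumed to be a list of token-id tensors recorded from the teacher's sampling path (noisiest first), both models are assumed to return `.logits` over the vocabulary, and timestep conditioning, masking, and the paper's exact loss are simplified away.

```python
import torch
import torch.nn.functional as F

def consistency_distill_step(student, ema_student, trajectory, num_segments):
    """Illustrative segment-wise consistency distillation on discrete tokens.

    Within one randomly chosen segment, the student's prediction from an
    earlier (noisier) trajectory point is pulled toward the EMA student's
    prediction from the next point, so both learn to jump to the same
    segment endpoint.
    """
    T = len(trajectory)           # number of recorded denoising states
    seg_len = T // num_segments   # assumes T is a multiple of num_segments, seg_len >= 2
    seg = torch.randint(0, num_segments, (1,)).item()
    lo = seg * seg_len
    t = torch.randint(lo, lo + seg_len - 1, (1,)).item()  # earlier point in the segment
    x_t, x_next = trajectory[t], trajectory[t + 1]         # adjacent trajectory states

    # Student predicts clean tokens from the noisier state.
    student_logits = student(x_t).logits
    with torch.no_grad():
        # The EMA student, starting closer to the endpoint, provides the target.
        target_probs = F.softmax(ema_student(x_next).logits, dim=-1)

    # Soft cross-entropy between the two "jump" distributions.
    loss = -(target_probs * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()
    return loss
```

Under the curriculum described in item 3, `num_segments` would start large (many short jumps) and shrink toward one over training, until the student can map early trajectory points to the final output directly.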

Experimental Evaluation

The efficacy of Show-o Turbo is demonstrated through various experimental benchmarks:

  • Text-to-Image (T2I) Generation: Show-o Turbo achieved a GenEval score of 0.625 at just 4 sampling steps without classifier-free guidance, surpassing Show-o's performance with 8 steps and classifier-free guidance.
  • Image-to-Text Generation: The model achieved a roughly 1.5x speedup in image-to-text generation without substantially sacrificing performance.

Implications and Future Directions

The development of Show-o Turbo signifies a notable advance in the quest to create efficient, unified multimodal generation frameworks. Practically, the introduction of the unified denoising approach and trajectory segmentation could lead to more efficient real-time applications where reduced computational cost is critical. Theoretically, the framework prompts further inquiry into potential improvements in architectural design for multimodal models, especially in aligning and accelerating diverse token processing without compromising generative quality.

Looking ahead, the results open several avenues for future research. Refining the consistency distillation approach, extending trajectory segmentation to other multimodal tasks, and testing across larger and more diverse datasets could all prove fruitful. Moreover, training with more advanced multimodal understanding (MMU) datasets might help recover the performance drop observed on tasks with lengthy responses, striking a better balance between speed and accuracy.

Overall, by addressing inefficiency in unified multimodal models, this work contributes to the ongoing development of comprehensive and fast multimodal AI systems.

Authors (6)
  1. Chenkai Xu (4 papers)
  2. Xu Wang (319 papers)
  3. Zhenyi Liao (5 papers)
  4. Yishun Li (1 paper)
  5. TianQi Hou (18 papers)
  6. Zhijie Deng (58 papers)