One Diffusion to Generate Them All (2411.16318v1)

Published 25 Nov 2024 in cs.CV and cs.AI

Abstract: We introduce OneDiffusion, a versatile, large-scale diffusion model that seamlessly supports bidirectional image synthesis and understanding across diverse tasks. It enables conditional generation from inputs such as text, depth, pose, layout, and semantic maps, while also handling tasks like image deblurring, upscaling, and reverse processes such as depth estimation and segmentation. Additionally, OneDiffusion allows for multi-view generation, camera pose estimation, and instant personalization using sequential image inputs. Our model takes a straightforward yet effective approach by treating all tasks as frame sequences with varying noise scales during training, allowing any frame to act as a conditioning image at inference time. Our unified training framework removes the need for specialized architectures, supports scalable multi-task training, and adapts smoothly to any resolution, enhancing both generalization and scalability. Experimental results demonstrate competitive performance across tasks in both generation and prediction such as text-to-image, multiview generation, ID preservation, depth estimation and camera pose estimation despite relatively small training dataset. Our code and checkpoint are freely available at https://github.com/lehduong/OneDiffusion

Overview of "One Diffusion to Generate Them All"

The paper introduces OneDiffusion, a comprehensive diffusion model designed to handle diverse image synthesis and understanding tasks within a single framework. These range from text-to-image generation and image manipulation to prediction tasks such as depth and pose estimation, segmentation, and multiview generation. What sets OneDiffusion apart is its training framework, which treats every task as a sequence of frames differentiated by noise scales. This paradigm removes the need for bespoke per-task architectures, improves scalability, and lets the model adapt to tasks regardless of their resolution requirements.

Methodological Insights

OneDiffusion is trained with flow matching: the model learns a time-dependent vector field that transports samples between two probability distributions. Input conditions and target images are treated as a sequence of views, each assigned its own noise level under a linear interpolation schedule. Because the noise level varies per view, the same framework supports bidirectional processing: synthesizing images from conditions and recovering the underlying conditions from images.
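To make the per-view noise idea concrete, the following PyTorch-style sketch shows what such a training step could look like. It is a minimal illustration under assumptions: the model signature, tensor shapes, and the velocity-matching loss are hypothetical and not taken from the released code.

import torch

def flow_matching_step(model, views, text_emb):
    # views: (B, N, C, H, W) packs condition and target images as frames (illustrative shape).
    B, N = views.shape[:2]
    noise = torch.randn_like(views)
    # Sample an independent time (noise scale) for every view; a frame near
    # t = 1 is almost clean, so at inference it can act as a conditioning image.
    t = torch.rand(B, N, 1, 1, 1, device=views.device)
    # Linear interpolation schedule between pure noise (t = 0) and data (t = 1).
    x_t = (1.0 - t) * noise + t * views
    target_velocity = views - noise
    pred_velocity = model(x_t, t.view(B, N), text_emb)
    return torch.mean((pred_velocity - target_velocity) ** 2)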

Because a single objective covers all tasks, the model needs no auxiliary losses, enabling seamless bidirectional generation between image synthesis and understanding. OneDiffusion handles complex multiview generation as well as conditional generation from modalities such as text, depth, and semantic maps. The architecture is built on the Next-DiT transformer, which accommodates flexible input modalities across tasks.
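At inference time, any frame can serve as a condition by keeping it clean and flagging it as fully denoised while the remaining frames are integrated from noise. The sketch below illustrates one way this could work with a plain Euler integrator; the masking scheme, step count, and model interface are assumptions for illustration rather than the repository's implementation.

import torch

@torch.no_grad()
def generate(model, cond_views, cond_mask, text_emb, euler_steps=50):
    # cond_views: (B, N, C, H, W); cond_mask: (B, N), True for condition frames.
    B, N = cond_mask.shape
    mask = cond_mask.view(B, N, 1, 1, 1).float()
    # Generated frames start from noise; condition frames keep their clean values.
    x = mask * cond_views + (1.0 - mask) * torch.randn_like(cond_views)
    dt = 1.0 / euler_steps
    for step in range(euler_steps):
        # Condition frames are flagged as fully denoised (t = 1) so the model
        # reads them as clean inputs; generated frames share the current time.
        t = torch.full((B, N), step * dt, device=x.device)
        t = torch.where(cond_mask.bool(), torch.ones_like(t), t)
        v = model(x, t, text_emb)
        # Euler update of the learned velocity field, applied only to generated frames.
        x = x + (1.0 - mask) * v * dt
    return x

Swapping which frames are marked as conditions turns the same procedure into a prediction task, for example producing a depth map from an image instead of an image from a depth map.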

Experimental Validation

The empirical evaluation covers a range of tasks, including text-to-image generation, where OneDiffusion achieves results comparable with state-of-the-art models across benchmarks. On the ImageNet validation set, the model performs strongly, with close adherence to prompt details, high image quality across diverse styles, and efficient parameter usage. In bidirectional image synthesis tasks, OneDiffusion aligns closely with conditioning inputs such as HED edges, depth maps, human poses, semantic maps, and bounding boxes.

The multiview generation results underscore the model's ability to consistently generate various views with high photorealism, leveraging a small number of conditional inputs. Importantly, the model can predict unknown camera poses and generate consistent viewpoints, showcasing robust adaptability.

In ID customization experiments, OneDiffusion demonstrates strong generalization, effectively varying expressions and viewpoints and often outperforming models that rely on face embeddings. Robust open-vocabulary depth estimation and competitive results on the NYUv2 and DIODE benchmarks highlight the model's broader predictive capabilities.

Theoretical and Practical Implications

OneDiffusion significantly streamlines task integration across image synthesis and understanding, exploiting the inherent capabilities of diffusion models without the added complexity of fine-tuning task-specific modules. The approach offers a scalable architecture that can adapt to a wide range of visual inputs and outputs, approaching the flexibility that LLMs provide for text-based tasks.

In theory, the unified treatment of multiple tasks facilitated by the model architecture advances the discussion towards universal image models. This approach could similarly influence the ongoing developments in multi-disciplinary AI systems, emphasizing generalization and the ability to interpolate across disparate input-output modalities with high accuracy.

Speculation on Future Directions

While the current implementation of OneDiffusion is robust, scaling to larger datasets and model sizes remains an open avenue that could further enhance its multimodal capabilities. Optimization techniques for generating more intricate scenes from minimal data are another promising direction. As interest in unified vision models continues to grow, future research may merge such models with other multimodal architectures to explore cross-domain synergies, pushing the boundaries of universal AI models in practical applications.

Overall, the paper presents a well-rounded contribution to advancing the flexibility and capability of diffusion-based models for image synthesis and understanding, offering insights that could be pivotal for future advancements in the field.

Authors (8)
  1. Duong H. Le (5 papers)
  2. Tuan Pham (20 papers)
  3. Sangho Lee (25 papers)
  4. Christopher Clark (27 papers)
  5. Aniruddha Kembhavi (79 papers)
  6. Stephan Mandt (100 papers)
  7. Ranjay Krishna (116 papers)
  8. Jiasen Lu (32 papers)