OmniGen: Unified Image Generation (2409.11340v2)

Published 17 Sep 2024 in cs.CV and cs.AI

Abstract: The emergence of LLMs has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual-conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefiting from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model's reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.

Authors (10)
  1. Shitao Xiao (38 papers)
  2. Yueze Wang (14 papers)
  3. Junjie Zhou (28 papers)
  4. Huaying Yuan (9 papers)
  5. Xingrun Xing (13 papers)
  6. Ruiran Yan (5 papers)
  7. Shuting Wang (11 papers)
  8. Tiejun Huang (130 papers)
  9. Zheng Liu (312 papers)
  10. Chaofan Li (13 papers)
Citations (14)

Summary

Unified Image Generation with OmniGen: An Expert Overview

The paper "OmniGen: Unified Image Generation," authored by Shitao Xiao et al., introduces a pioneering approach in the field of visual generation models. The research addresses a significant gap by proposing a unified model framework, OmniGen, which is capable of handling a diverse array of image generation tasks. This work sets a precedent by illustrating the feasibility and advantages of a generalized approach in image generation, akin to the versatility demonstrated by LLMs in NLP.

Key Features of OmniGen

OmniGen distinguishes itself through three primary features: unification, simplicity, and knowledge transfer.

  1. Unification: OmniGen demonstrates the ability to perform a variety of tasks including text-to-image generation, image editing, subject-driven generation, and visual-conditional generation within a single model framework. This integrative approach extends the model's capabilities to encompass traditional computer vision tasks, redefined as image generation tasks. This feature contrasts with the modular extensions observed in other diffusion models like ControlNet and IP-Adapter.
  2. Simplicity: The architecture of OmniGen is streamlined, combining a Variational Autoencoder (VAE) with a transformer model and dispensing with additional task-specific encoders. This design is intended to be more user-friendly and cost-efficient by eliminating extra preprocessing steps. The model accepts arbitrarily interleaved text and image inputs, which simplifies the workflow considerably (see the sketch after this list).
  3. Knowledge Transfer: OmniGen's training on a unified dataset format allows it to transfer knowledge effectively across different tasks. This capability enables the model to handle unseen tasks and domains, demonstrating novel abilities, including reasoning and in-context learning, reminiscent of capabilities seen in LLMs.
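
To make the design concrete, here is a minimal PyTorch sketch of the unified idea: instruction tokens and VAE latent patches share a single transformer sequence, and the denoising prediction is read off the image positions only. All names, dimensions, and objective details below are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UnifiedGenerator(nn.Module):
    """Toy unified model: text tokens and VAE latent patches in one sequence.
    Positional embeddings and the actual diffusion objective are omitted for brevity."""
    def __init__(self, vocab_size=32000, dim=512, depth=4, patch_dim=16):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, dim)   # instruction tokens
        self.patch_embed = nn.Linear(patch_dim, dim)      # flattened VAE latent patches
        self.time_embed = nn.Sequential(                  # diffusion timestep embedding
            nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, patch_dim)             # per-patch denoising prediction

    def forward(self, text_ids, noisy_patches, t):
        # Build one interleaved sequence: [instruction tokens | noised image patches].
        txt = self.text_embed(text_ids)                               # (B, T, dim)
        img = self.patch_embed(noisy_patches)                         # (B, N, dim)
        img = img + self.time_embed(t.view(-1, 1)).unsqueeze(1)       # add timestep
        h = self.backbone(torch.cat([txt, img], dim=1))
        n = noisy_patches.shape[1]
        return self.head(h[:, -n:])   # prediction only over the image positions

# Smoke test with random data.
model = UnifiedGenerator()
out = model(torch.randint(0, 32000, (2, 12)),   # 12 instruction tokens
            torch.randn(2, 64, 16),             # 64 flattened latent patches
            torch.rand(2))                      # timesteps in [0, 1]
print(out.shape)  # torch.Size([2, 64, 16])
```

In this formulation, any conditioning image (a subject photo, an edge map, a mask) enters the same sequence as ordinary latent patches, which is what removes the need for the plugin encoders used by modular approaches such as ControlNet and IP-Adapter.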

Performance Evaluation and Results

The efficacy of OmniGen is underscored by its strong performance across multiple benchmarks and tasks:

  • Text-to-Image Generation: On the GenEval benchmark, OmniGen achieves results competitive with state-of-the-art models such as Stable Diffusion 3 (SD3) and DALL-E 3, despite a smaller parameter count and less training data. The model's architecture promotes efficient parameter utilization, further bolstering its competitiveness.
  • Image Editing: Evaluated on the EMU-Edit dataset, OmniGen demonstrates performance on par with specialized models like EMU-Edit, particularly in maintaining image integrity and adhering to textual instructions.
  • Subject-Driven Generation: On DreamBench, OmniGen exhibits superior subject fidelity and competitive text fidelity compared to models that require per-subject fine-tuning, highlighting its ability to generalize to new entities without dedicated training.
  • Visual Conditional Controls: OmniGen performs strongly on visually conditioned tasks, such as generation guided by segmentation masks and edge maps, outperforming models like ControlNet and ControlNet++ on specific benchmarks.

Emerging Capabilities and Reasoning

OmniGen's architecture and training paradigm endow it with several emergent capabilities:

  • Task Composition: The model successfully handles composite instructions spanning multiple tasks within a single prompt, showcasing its versatility (see the usage sketch after this list).
  • Implicit Task Combination: By leveraging its learned knowledge, OmniGen can perform implicit task compositions without explicit preprocessing, reducing the need for additional model components and operations.
  • In-Context Learning for Unseen Tasks: The model demonstrates effective in-context learning abilities, extending its application to novel tasks and improving performance in new domains through example-based prompts.
  • Reasoning and CoT: OmniGen exhibits reasoning capabilities by identifying and manipulating specific objects based on textual instructions. Preliminary exploration of a step-by-step generation process suggests potential applications of Chain-of-Thought (CoT) methodologies in image generation, although further optimization is required.
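
As a concrete illustration of the composite, instruction-driven prompting described above, the sketch below follows the usage examples published in the project's GitHub repository. The class name OmniGenPipeline, the <img><|image_1|></img> placeholder, and all argument names come from that repository and should be treated as assumptions here, since this summary does not specify the API.

```python
# Hedged usage sketch based on examples in
# https://github.com/VectorSpaceLab/OmniGen (API details are assumptions).
from OmniGen import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

# One composite instruction: subject-driven generation plus an edit,
# with <img><|image_1|></img> marking where the reference image enters.
images = pipe(
    prompt=(
        "The woman in <img><|image_1|></img> is now wearing a red coat "
        "and standing on a rainy street at night."
    ),
    input_images=["./reference.jpg"],  # hypothetical local path
    height=1024,
    width=1024,
    guidance_scale=2.5,
    img_guidance_scale=1.6,
)
images[0].save("composite_result.png")
```

The same interface covers the other capabilities in the list: an in-context example is simply an extra image placeholder plus its description in the prompt, and a visual condition such as an edge map is passed as another entry in input_images rather than through a separate adapter.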

Implications and Future Directions

OmniGen's unified approach paves the way for more integrative and efficient systems in AI-driven image generation. The simplified architecture and its versatility in handling a wide range of tasks present substantial practical benefits, particularly in reducing complexity and cost in real-world applications. The model's capabilities in emergent tasks and reasoning suggest promising directions for future research, including deeper exploration of process supervision and CoT methods to enhance image generation quality and complexity handling. Additionally, the model's framework could be extended to incorporate text generation, further blending the capabilities of LLMs and image generation models into a truly universal generative foundation.

In conclusion, "OmniGen: Unified Image Generation" represents a significant contribution to the field of AI-driven visual generation, offering a robust and flexible solution that challenges and extends the boundaries of current diffusion models. The insights and methods proposed in this paper hold substantial potential for further advancements in both theoretical and practical aspects of AI and image generation technologies.

GitHub: https://github.com/VectorSpaceLab/OmniGen


Reddit

  1. OmniGen: Unified Image Generation (17 points, 10 comments)
  2. OmniGen - Paper (4 points, 4 comments)
  3. [2409.11340] OmniGen: Unified Image Generation (1 point, 0 comments)