Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models (2405.00760v1)

Published 1 May 2024 in cs.CV and cs.AI

Abstract: Optimizing a text-to-image diffusion model with a given reward function is an important but underexplored research area. In this study, we propose Deep Reward Tuning (DRTune), an algorithm that directly supervises the final output image of a text-to-image diffusion model and back-propagates through the iterative sampling process to the input noise. We find that training earlier steps in the sampling process is crucial for low-level rewards, and that deep supervision can be achieved efficiently and effectively by stopping the gradient of the denoising network input. DRTune is extensively evaluated on various reward models. It consistently outperforms other algorithms, particularly for low-level control signals, where all shallow supervision methods fail. Additionally, we fine-tune the Stable Diffusion XL 1.0 (SDXL 1.0) model via DRTune to optimize Human Preference Score v2.1, resulting in the Favorable Diffusion XL 1.0 (FDXL 1.0) model. FDXL 1.0 significantly enhances image quality compared to SDXL 1.0 and reaches quality comparable to that of Midjourney v5.2.
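
To make the mechanism described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of reward back-propagation through the sampling loop with a stop-gradient on the denoiser input. The names `drtune_style_loss`, `denoiser`, `reward_fn`, the linear update standing in for a real solver step, and the `train_steps` subset are illustrative assumptions, not the authors' released code or exact algorithm.

```python
import torch
import torch.nn as nn

def drtune_style_loss(denoiser, reward_fn, x_T, timesteps, train_steps, alpha=0.9):
    """Sketch of deep reward supervision with an input stop-gradient (assumed interface).

    The reward is computed on the final sample and back-propagated through the
    whole sampling chain; the gradient into each denoiser *input* is stopped,
    which the paper reports makes deep supervision of early steps tractable.
    """
    x = x_T
    for i, t in enumerate(timesteps):
        # Stop-gradient on the denoiser input: gradients still reach the
        # denoiser's weights at this step, but do not flow further back
        # through the network into earlier iterations.
        eps = denoiser(x.detach(), t)
        if i not in train_steps:
            # Steps that are not being trained contribute no parameter gradient.
            eps = eps.detach()
        # Placeholder linear update standing in for a real DDIM/solver step;
        # note the un-detached x, so the chain of updates stays differentiable.
        x = alpha * x + (1.0 - alpha) * eps
    # Maximize the reward on the final image by minimizing its negative.
    return -reward_fn(x).mean()

# Toy usage with stand-in modules (for illustration only).
if __name__ == "__main__":
    denoiser = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
    wrapped = lambda x, t: denoiser(x)                        # ignore the timestep in this toy
    reward_fn = lambda img: -img.pow(2).mean(dim=(1, 2, 3))   # dummy per-image "reward"
    x_T = torch.randn(2, 3, 32, 32)
    loss = drtune_style_loss(wrapped, reward_fn, x_T, timesteps=range(10), train_steps={0, 1, 2})
    loss.backward()
    print(loss.item())
```

The un-detached `x` in the update keeps the sampling chain differentiable end to end, while detaching the denoiser input bounds the depth of each gradient path; this is roughly the intuition the abstract gives for why stopping the gradient of the denoising network input makes deep supervision efficient.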

Authors (8)
  1. Xiaoshi Wu (10 papers)
  2. Yiming Hao (5 papers)
  3. Manyuan Zhang (14 papers)
  4. Keqiang Sun (20 papers)
  5. Zhaoyang Huang (27 papers)
  6. Guanglu Song (45 papers)
  7. Yu Liu (786 papers)
  8. Hongsheng Li (340 papers)
Citations (9)
