
Identity Encoder for Personalized Diffusion (2304.07429v1)

Published 14 Apr 2023 in cs.CV

Abstract: Many applications can benefit from personalized image generation models, including image enhancement and video conferencing, to name a few. Existing works achieve personalization by fine-tuning one model per person. While successful, this approach incurs additional computation and storage overhead for each new identity, and it usually requires tens or hundreds of examples per identity to achieve the best performance. To overcome these challenges, we propose an encoder-based approach to personalization. We learn an identity encoder that extracts an identity representation from a set of reference images of a subject, together with a diffusion generator that generates new images of the subject conditioned on the identity representation. Once trained, the model can generate images of arbitrary identities given only a few examples, even for identities it has never been trained on. Our approach greatly reduces the overhead of personalized image generation and is more readily applicable in many potential applications. Empirical results show that our approach consistently outperforms existing fine-tuning-based approaches in both image generation and reconstruction, and its outputs are preferred by users more than 95% of the time compared with the best-performing baseline.
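The pipeline the abstract describes — an encoder that pools an identity embedding from a few reference images, and a diffusion generator whose denoising is conditioned on that embedding — can be sketched as below. This is a purely illustrative toy, not the paper's architecture: the random linear projections stand in for learned networks, and the noise schedule and step count are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def identity_encoder(reference_images, emb_dim=64):
    """Hypothetical identity encoder: project each reference image to a
    feature vector (a fixed random projection stands in for a learned
    network) and mean-pool the features into one identity embedding."""
    n, h, w = reference_images.shape
    proj = rng.standard_normal((h * w, emb_dim)) / np.sqrt(h * w)
    feats = reference_images.reshape(n, -1) @ proj   # (n, emb_dim)
    return feats.mean(axis=0)                        # (emb_dim,)

def denoise_step(x_t, t, identity_emb, w_x, w_c):
    """One toy denoising step: the 'noise predictor' is a linear map of
    the noisy image plus a linear map of the identity embedding, so the
    sample is steered by the subject's identity at every step."""
    eps_pred = x_t.reshape(-1) @ w_x + identity_emb @ w_c
    alpha = 1.0 - 0.02 * t                           # made-up schedule
    return (x_t - (1 - alpha) * eps_pred.reshape(x_t.shape)) / np.sqrt(alpha)

# A few 8x8 "reference images" of one unseen subject.
refs = rng.standard_normal((3, 8, 8))
ident = identity_encoder(refs)

# Sampling: start from pure noise, iterate identity-conditioned steps.
w_x = rng.standard_normal((64, 64)) * 0.01
w_c = rng.standard_normal((64, 64)) * 0.01
x = rng.standard_normal((8, 8))
for t in range(10, 0, -1):
    x = denoise_step(x, t, ident, w_x, w_c)

print(ident.shape, x.shape)
```

Note how personalization here costs only one forward pass of the encoder per new subject, rather than a fine-tuning run per identity — the overhead reduction the abstract claims.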

Authors (8)
  1. Yu-Chuan Su (22 papers)
  2. Kelvin C. K. Chan (34 papers)
  3. Yandong Li (38 papers)
  4. Yang Zhao (382 papers)
  5. Han Zhang (338 papers)
  6. Boqing Gong (100 papers)
  7. Huisheng Wang (18 papers)
  8. Xuhui Jia (22 papers)
Citations (6)
