
Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models (2307.06925v1)

Published 13 Jul 2023 in cs.CV, cs.GR, and cs.LG

Abstract: Text-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts. Recently, encoder-based techniques have emerged as a new effective approach for T2I personalization, reducing the need for multiple images and long training times. However, most existing encoders are limited to a single-class domain, which hinders their ability to handle diverse concepts. In this work, we propose a domain-agnostic method that does not require any specialized dataset or prior information about the personalized concepts. We introduce a novel contrastive-based regularization technique to maintain high fidelity to the target concept characteristics while keeping the predicted embeddings close to editable regions of the latent space, by pushing the predicted tokens toward their nearest existing CLIP tokens. Our experimental results demonstrate the effectiveness of our approach and show how the learned tokens are more semantic than tokens predicted by unregularized models. This leads to a better representation that achieves state-of-the-art performance while being more flexible than previous methods.
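The regularization described in the abstract can be sketched as follows. This is a simplified illustration of pushing predicted embeddings toward their nearest existing CLIP vocabulary tokens, not the paper's exact contrastive formulation; the function name and tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def nearest_token_regularizer(pred_embeds, vocab_embeds):
    """Pull each predicted concept embedding toward its nearest
    token embedding in the (frozen) CLIP vocabulary.

    pred_embeds:  (B, D) embeddings predicted by the encoder
    vocab_embeds: (V, D) CLIP text-encoder token embedding table
    """
    # Cosine similarity between each prediction and every vocab token
    p = F.normalize(pred_embeds, dim=-1)
    v = F.normalize(vocab_embeds, dim=-1)
    sim = p @ v.t()                        # (B, V)

    # Index of the closest real token for each prediction
    nearest = sim.argmax(dim=-1)           # (B,)
    targets = vocab_embeds[nearest].detach()

    # Penalty keeping predictions near editable regions of the
    # CLIP latent space (here a simple L2 pull; the paper uses a
    # contrastive objective)
    return F.mse_loss(pred_embeds, targets)
```

Added to the encoder's training loss, such a term trades off fidelity to the target concept against staying close to semantically meaningful, editable token embeddings.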

Authors (7)
  1. Moab Arar (13 papers)
  2. Rinon Gal (28 papers)
  3. Yuval Atzmon (19 papers)
  4. Gal Chechik (110 papers)
  5. Daniel Cohen-Or (172 papers)
  6. Ariel Shamir (46 papers)
  7. Amit H. Bermano (46 papers)
Citations (62)