
TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space (2501.12224v1)

Published 21 Jan 2025 in cs.CV

Abstract: We present TokenVerse -- a method for multi-concept personalization, leveraging a pre-trained text-to-image diffusion model. Our framework can disentangle complex visual elements and attributes from as little as a single image, while enabling seamless plug-and-play generation of combinations of concepts extracted from multiple images. As opposed to existing works, TokenVerse can handle multiple images with multiple concepts each, and supports a wide range of concepts, including objects, accessories, materials, pose, and lighting. Our work exploits a DiT-based text-to-image model, in which the input text affects the generation through both attention and modulation (shift and scale). We observe that the modulation space is semantic and enables localized control over complex concepts. Building on this insight, we devise an optimization-based framework that takes as input an image and a text description, and finds for each word a distinct direction in the modulation space. These directions can then be used to generate new images that combine the learned concepts in a desired configuration. We demonstrate the effectiveness of TokenVerse in challenging personalization settings, and showcase its advantages over existing methods. Project webpage: https://token-verse.github.io/

Summary

  • The paper introduces multi-concept personalization by disentangling fine-grained visual attributes using a novel token modulation space.
  • It leverages diffusion transformers with token-specific modulation to enable precise semantic control over complex image generation.
  • Experimental evaluations demonstrate superior performance in concept extraction and composition, proving its scalability and adaptability.

The paper "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" introduces a novel framework for multi-concept personalization in text-to-image diffusion models, addressing the limitations of existing methods in handling multiple images and concepts concurrently. The approach leverages a pre-trained diffusion transformer model (DiT) with innovative use of the modulation space for concept extraction and generation, asserting a strong capability to disentangle complex visual attributes from minimal input images while allowing for versatile combinations.

Key Contributions:

  1. Multi-concept Disentanglement and Personalization:
    • TokenVerse focuses on disentangling visual elements including objects, accessories, materials, poses, and lighting from a single input image. This approach enables the extraction of visual concepts with high granularity and accuracy, independently processing each input image without additional supervision or manual segmentation.
  2. Utilization of the Modulation Space:
    • The framework builds on the modulation space defined by the shift and scale parameters in DiT models, which has been shown to enable semantic control over image generation. TokenVerse learns a distinct direction in this space for each text token of the input caption, allowing fine-grained, localized manipulation of the corresponding visual concepts in the generated image (a minimal sketch of per-token modulation appears after this list).
  3. Optimization Framework for Concept Learning:
    • An optimization-based method is developed to map each token in the caption to a unique direction in the modulation space, effectively learning a personalized representation for each token. This results in the ability to generate images that preserve the integrity and uniqueness of the extracted concepts when combined in novel configurations.
  4. Enhanced Composition without Joint Training:
    • TokenVerse permits plug-and-play composition of learned concepts from multiple images, an aspect where existing approaches struggle, particularly when handling non-object aspects like pose or lighting. By supporting the integration of concepts without necessitating their joint training, the method demonstrates exceptional adaptability for content creation.
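To make the per-token modulation idea concrete, below is a minimal sketch of a DiT-style block in which each text token contributes its own learned offset to the (shift, scale) modulation. The class and parameter names (`PerTokenModulatedBlock`, `token_offsets`, `token_weights`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PerTokenModulatedBlock(nn.Module):
    """DiT-style block where each text token adds its own learned offset to the
    (shift, scale) modulation, instead of a single global modulation vector."""

    def __init__(self, dim: int, num_text_tokens: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # Baseline modulation predicted from the pooled text conditioning vector.
        self.to_modulation = nn.Linear(dim, 2 * dim)
        # Learned per-token directions in modulation space (the personalized part).
        self.token_offsets = nn.Parameter(torch.zeros(num_text_tokens, 2 * dim))

    def forward(self, image_tokens, cond_vector, token_weights):
        # image_tokens:  (B, L, dim) image token sequence
        # cond_vector:   (B, dim) pooled text conditioning
        # token_weights: (B, num_text_tokens), how strongly each concept token
        #                steers this image (e.g. 1 for active concepts, 0 otherwise)
        base = self.to_modulation(cond_vector)            # (B, 2*dim)
        offset = token_weights @ self.token_offsets       # (B, 2*dim)
        shift, scale = (base + offset).chunk(2, dim=-1)
        return self.norm(image_tokens) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```

Because the offsets live in modulation space rather than in the prompt embedding, they act on the normalization of the image tokens, which is consistent with the paper's observation that this space is semantic and supports localized control.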

Methodology:

  • The framework builds on a diffusion transformer (DiT) architecture in which text and image tokens are processed jointly. The input text affects generation through two pathways: attention and modulation (shift and scale). TokenVerse applies modulation offsets per text token rather than a single global modulation, enabling precise, semantic edits of the image concepts associated with each token.
  • A two-stage training process optimizes modulation vector offsets for each text token, first globally and then per block, capturing intricate visual attributes while maintaining high fidelity to the input concepts (a hedged sketch of this optimization loop follows).
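As a rough illustration of the optimization described above, the sketch below learns a modulation offset for each caption token by minimizing the standard diffusion denoising loss on the concept image. `model.num_timesteps`, `model.add_noise`, and the `modulation_offsets` keyword are placeholders standing in for the pre-trained DiT and its utilities; they are assumptions for illustration, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def learn_token_directions(model, image_latent, caption_tokens,
                           dim: int, steps: int = 400, lr: float = 1e-2):
    """Stage 1: optimize one global modulation offset per caption token so that
    the frozen model reconstructs the concept image (denoising objective)."""
    offsets = torch.zeros(len(caption_tokens), dim, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        t = torch.randint(0, model.num_timesteps, (1,))      # random timestep
        noise = torch.randn_like(image_latent)
        noisy = model.add_noise(image_latent, noise, t)       # forward diffusion
        pred = model(noisy, t, caption_tokens, modulation_offsets=offsets)
        loss = F.mse_loss(pred, noise)                        # standard denoising loss
        opt.zero_grad(); loss.backward(); opt.step()
    # Stage 2 (not shown): refine with separate per-block offsets initialized
    # from these global directions to capture finer appearance details.
    return offsets.detach()
```

Only the offsets are optimized while the diffusion backbone stays frozen, which is what allows directions learned from different images to be mixed later.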

Comparative Analysis and Results:

  • Compared to contemporaneous approaches, TokenVerse exhibits superior performance in both concept extraction and multi-concept composition. This is supported quantitatively by evaluations on the DreamBench++ benchmark and qualitatively by demonstrations across diverse personalization settings.
  • The framework’s modular design allows visual concepts to be learned separately from different images, promoting scalability in the number of concepts that can be managed and combined (a toy composition sketch follows below).
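Continuing the illustrative sketches above (and under the same assumptions), composition amounts to concatenating offsets learned from different images into one table and selecting which concepts are active for the new prompt; nothing is retrained jointly. The toy sizes and indices below are arbitrary.

```python
import torch

dim, n_a, n_b = 64, 5, 6                         # toy dimensions and token counts
block = PerTokenModulatedBlock(dim, num_text_tokens=n_a + n_b)

# Pretend these offsets were learned independently from image A and image B.
offsets_a = torch.randn(n_a, 2 * dim) * 0.01
offsets_b = torch.randn(n_b, 2 * dim) * 0.01
with torch.no_grad():
    block.token_offsets.copy_(torch.cat([offsets_a, offsets_b], dim=0))

# Activate the 2nd concept from image A and the 4th from image B.
weights = torch.zeros(1, n_a + n_b)
weights[0, 1] = 1.0
weights[0, n_a + 3] = 1.0

image_tokens = torch.randn(1, 16, dim)           # toy image token sequence
cond = torch.randn(1, dim)                       # pooled text conditioning
out = block(image_tokens, cond, weights)         # modulated by the chosen concepts
print(out.shape)                                 # torch.Size([1, 16, 64])
```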

Applications and Limitations:

  • TokenVerse’s versatile capabilities suggest potential applications in storytelling and personalized content creation, offering enhanced control over visual narratives.
  • However, some limitations are noted, such as potential blending when similar concepts are independently extracted and collisions in token naming across different images. Nonetheless, suggested mitigations like joint training and contextual token differentiation are discussed as avenues for refinement.

In summary, TokenVerse introduces a significant advance in the field of text-to-image generation, expanding the boundaries of what is feasible in multi-concept personalization by leveraging semantic modulation spaces within diffusion transformers. It provides a robust framework for creating coherent and highly personalized visual content, reflecting a deep understanding of both textual prompts and their corresponding visual manifestations.
