AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea (2411.15738v2)
Abstract: Instruction-based image editing aims to modify specific image elements with natural language instructions. However, current models in this domain often struggle to accurately execute complex user instructions, as they are trained on low-quality data covering a limited range of editing types. We present AnyEdit, a comprehensive multi-modal instruction editing dataset comprising 2.5 million high-quality editing pairs that span over 20 editing types and five domains. We ensure the diversity and quality of the AnyEdit collection through three aspects: initial data diversity, an adaptive editing process, and automated selection of editing results. Using the dataset, we further train a novel AnyEdit Stable Diffusion model with task-aware routing and a learnable task embedding for unified image editing. Comprehensive experiments on three benchmark datasets show that AnyEdit consistently boosts the performance of diffusion-based editing models, pointing toward instruction-driven image editing models that better support human creativity.
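The abstract names "task-aware routing" with a "learnable task embedding" but gives no implementation details. The following is a minimal, hypothetical PyTorch sketch of one way such a module could work: a learnable embedding per editing type drives a soft gate over a few expert branches inside a diffusion backbone. All names here (`TaskAwareRouter`, the expert MLPs, the residual placement) are illustrative assumptions, not the authors' released design.

```python
# Hypothetical sketch (not the authors' code): task-aware routing driven by
# a learnable task embedding, in the spirit of the AnyEdit abstract.
import torch
import torch.nn as nn


class TaskAwareRouter(nn.Module):
    """Routes hidden states through per-task expert branches.

    Each editing type (e.g., "add", "remove", "style transfer") gets a
    learnable embedding; a small gate maps that embedding to soft weights
    over expert MLPs, and the block outputs their weighted sum residually.
    """

    def __init__(self, hidden_dim: int, num_tasks: int = 20, num_experts: int = 4):
        super().__init__()
        self.task_embedding = nn.Embedding(num_tasks, hidden_dim)  # learnable task embedding
        self.gate = nn.Linear(hidden_dim, num_experts)             # task embedding -> expert weights
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim),
                nn.GELU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            for _ in range(num_experts)
        )

    def forward(self, h: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # h: (batch, tokens, hidden_dim); task_id: (batch,) integer editing-type ids
        task_emb = self.task_embedding(task_id)                         # (batch, hidden_dim)
        weights = torch.softmax(self.gate(task_emb), dim=-1)            # (batch, num_experts)
        expert_out = torch.stack([e(h) for e in self.experts], dim=1)   # (batch, E, tokens, dim)
        routed = (weights[:, :, None, None] * expert_out).sum(dim=1)    # (batch, tokens, dim)
        return h + routed                                               # residual connection


# Usage: condition a UNet block's flattened spatial tokens on the editing task.
router = TaskAwareRouter(hidden_dim=320, num_tasks=20)
hidden = torch.randn(2, 64, 320)      # batch of token sequences
task_id = torch.tensor([3, 7])        # editing-type indices for the batch
out = router(hidden, task_id)         # same shape as `hidden`
```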
- Qifan Yu (14 papers)
- Wei Chow (11 papers)
- Zhongqi Yue (17 papers)
- Kaihang Pan (17 papers)
- Yang Wu (175 papers)
- Xiaoyang Wan (1 paper)
- Juncheng Li (121 papers)
- Siliang Tang (116 papers)
- Hanwang Zhang (161 papers)
- Yueting Zhuang (164 papers)