Condition-Aware Neural Network for Controlled Image Generation (2404.01143v1)

Published 1 Apr 2024 in cs.CV and cs.AI

Abstract: We present Condition-Aware Neural Network (CAN), a new method for adding control to image generative models. In parallel to prior conditional control methods, CAN controls the image generation process by dynamically manipulating the weight of the neural network. This is achieved by introducing a condition-aware weight generation module that generates conditional weight for convolution/linear layers based on the input condition. We test CAN on class-conditional image generation on ImageNet and text-to-image generation on COCO. CAN consistently delivers significant improvements for diffusion transformer models, including DiT and UViT. In particular, CAN combined with EfficientViT (CaT) achieves 2.78 FID on ImageNet 512x512, surpassing DiT-XL/2 while requiring 52x fewer MACs per sampling step.


Summary

  • The paper introduces a novel mechanism that manipulates neural network weight spaces for controlled image synthesis.
  • It identifies which network layers benefit most from being condition-aware and demonstrates strong performance with minimal computational overhead; the resulting CaT model uses 52× fewer MACs per sampling step than DiT-XL/2 on ImageNet 512×512.
  • The study validates CAN’s effectiveness on diffusion transformer models, paving the way for efficient deployment in resource-constrained environments.

Condition-Aware Neural Network Enhances Controlled Image Generation

Introduction to CAN

Recent advances in generative models have produced impressive photorealistic images and videos, yet controlling the generation process remains a key limitation. The Condition-Aware Neural Network (CAN) offers a new approach: instead of manipulating feature maps, as conventional conditioning methods do, it dynamically alters the network's weights based on the input condition, such as a class label or text description. This is done by a condition-aware weight generation module that produces the weights of selected convolution/linear layers from the condition. CAN yields substantial improvements for image generative models, particularly diffusion transformer architectures such as DiT and UViT.
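To make the mechanism concrete, the PyTorch sketch below shows one minimal way a condition-aware linear layer could look: a small generator network maps the condition embedding to the layer's weight matrix, so the effective parameters change per sample. This is an illustrative sketch under assumed names (ConditionAwareLinear, weight_generator, the hidden size of 256), not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConditionAwareLinear(nn.Module):
    """Minimal sketch of a condition-aware linear layer (names hypothetical).

    A small weight-generation network maps a condition embedding
    (e.g., a class-label or text embedding) to the full weight matrix
    of the layer, so the effective parameters change per sample.
    """

    def __init__(self, in_features: int, out_features: int, cond_dim: int):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Condition embedding -> flattened weight matrix.
        self.weight_generator = nn.Sequential(
            nn.Linear(cond_dim, 256),
            nn.SiLU(),
            nn.Linear(256, out_features * in_features),
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features), cond: (batch, cond_dim)
        weight = self.weight_generator(cond)
        weight = weight.view(-1, self.out_features, self.in_features)
        # Per-sample matrix multiply with the generated weight.
        return torch.bmm(weight, x.unsqueeze(-1)).squeeze(-1) + self.bias


# Usage: condition on a learned class embedding (e.g., 1000 ImageNet classes).
cond_embed = nn.Embedding(1000, 128)
layer = ConditionAwareLinear(in_features=64, out_features=64, cond_dim=128)
x = torch.randn(4, 64)
labels = torch.randint(0, 1000, (4,))
y = layer(x, cond_embed(labels))  # shape: (4, 64)
```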

Key Findings and Contributions

The implementation of CAN signifies a shift toward manipulating the weight space for conditional control in image generative models. The central contributions of the paper are as follows:

  • Introduction of a novel conditional control mechanism: This research is pioneering in demonstrating that weight manipulation can serve as an effective strategy for adding control to image generative models.
  • Practical design insights for CAN: Through extensive experiments, the paper identifies which subset of network layers should be made condition-aware and shows that directly generating the conditional weights outperforms adaptive kernel selection methods (a contrasting sketch of the latter follows after this list).
  • Demonstrated efficiency and effectiveness: CAN consistently outperforms prior conditional control methods across different image generative models, and does so with minimal computational overhead. For instance, combining CAN with EfficientViT yields the CaT model, which reaches 2.78 FID on ImageNet 512×512 while requiring 52× fewer MACs per sampling step than DiT-XL/2.
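For contrast, the adaptive-kernel-selection alternative mentioned above can be sketched, under assumptions, as a CondConv-style soft mixture over a fixed bank of expert weights; the names (AdaptiveKernelLinear, router, num_experts) are hypothetical and this is not the paper's implementation. The paper's finding is that directly generating the weights, as in the earlier sketch, is the stronger design.

```python
import torch
import torch.nn as nn

class AdaptiveKernelLinear(nn.Module):
    """Hypothetical CondConv-style baseline: the condition selects a soft
    mixture over a fixed bank of expert weights, rather than generating
    the weights directly as in the condition-aware sketch above."""

    def __init__(self, in_features: int, out_features: int,
                 cond_dim: int, num_experts: int = 4):
        super().__init__()
        # Fixed bank of expert weight matrices.
        self.experts = nn.Parameter(
            torch.randn(num_experts, out_features, in_features) * 0.02)
        self.router = nn.Linear(cond_dim, num_experts)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Per-sample mixture weights over the expert kernels.
        mix = self.router(cond).softmax(dim=-1)                 # (batch, E)
        # Blend the experts into one weight matrix per sample.
        weight = torch.einsum('be,eoi->boi', mix, self.experts)
        return torch.bmm(weight, x.unsqueeze(-1)).squeeze(-1) + self.bias
```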

Experimental Insights

The empirical evaluation of CAN, especially on diffusion transformer models, underscores its practical utility. The paper identifies which network components benefit most from condition-aware weights and shows that directly generating the conditional weights is the more effective design. Experiments on class-conditional ImageNet generation and text-to-image synthesis on COCO further validate CAN's robustness across tasks and datasets.

Implications and Future Directions

The introduction of CAN opens up new avenues for research in generative models and conditioned image synthesis. From a theoretical standpoint, this work expands our understanding of conditional control mechanisms by showcasing the potential of weight space manipulation. Practically, the efficiency gains facilitated by CAN present opportunities for deploying advanced image generative models on resource-constrained devices, thereby broadening their applicability.

Looking forward, the extension of CAN to tasks beyond image generation, such as large-scale text-to-image synthesis and video generation, presents an exciting area for future exploration. Additionally, integrating CAN with other efficiency-enhancing techniques could further revolutionize the deployment and performance of generative models in real-world applications.

Conclusion

In summary, the Condition-Aware Neural Network marks a significant step forward in the controlled generation of images. By effectively manipulating the neural network's weights based on input conditions, CAN achieves superior performance and efficiency, setting a new benchmark for future developments in the field of generative AI.