Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations (2311.16353v1)

Published 27 Nov 2023 in cs.LG, cs.AI, cs.CV, eess.IV, and eess.SP

Abstract: In this work, we address the challenge of multi-task image generation with limited data for denoising diffusion probabilistic models (DDPM), a class of generative models that produce high-quality images by reversing a noisy diffusion process. We propose a novel method, SR-DDPM, that leverages representation-based techniques from few-shot learning to effectively learn from fewer samples across different tasks. Our method consists of a core meta architecture with shared parameters and task-specific layers with exclusive parameters. By exploiting the similarity between diverse data distributions, our method can scale to multiple tasks without compromising image quality. We evaluate our method on standard image datasets and show that it outperforms both unconditional and conditional DDPM in terms of FID and SSIM metrics.
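
The abstract describes a denoiser split into shared parameters (the core) and exclusive per-task parameters (task-specific layers). The paper does not include code here, so the following is a minimal hypothetical sketch in PyTorch of that parameter split, with a toy DDPM-style noise-prediction loss; all names (SharedCoreDenoiser, the layer sizes, the stand-in noise schedule) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a tiny denoiser with a
# shared core and per-task output heads, mirroring the "shared parameters
# plus task-specific exclusive parameters" split described in the abstract.
import torch
import torch.nn as nn


class SharedCoreDenoiser(nn.Module):
    def __init__(self, dim: int = 64, num_tasks: int = 3):
        super().__init__()
        # Shared parameters: one core used by every task.
        self.core = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
        )
        # Exclusive parameters: one output head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(128, dim) for _ in range(num_tasks)]
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor, task: int) -> torch.Tensor:
        # Condition on the (normalized) timestep by simple concatenation.
        h = torch.cat([x_t, t[:, None]], dim=-1)
        return self.heads[task](self.core(h))


# Toy usage: predict the noise added to a batch of flattened "images"
# for task 0. The cumulative noise schedule here is a random stand-in.
model = SharedCoreDenoiser()
x0 = torch.randn(8, 64)
noise = torch.randn_like(x0)
t = torch.randint(0, 1000, (8,)).float() / 1000.0
alpha_bar = torch.rand(8, 1)  # stand-in for the DDPM cumulative schedule
x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise
loss = nn.functional.mse_loss(model(x_t, t, task=0), noise)
loss.backward()
```

Under this split, gradients from every task update the shared core, while each head only sees its own task's data; that is one plausible way a model can "exploit shared representations" across tasks with limited per-task samples.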

Authors (4)
  1. Delaram Pirhayatifard (1 paper)
  2. Mohammad Taha Toghani (13 papers)
  3. Guha Balakrishnan (42 papers)
  4. César A. Uribe (75 papers)
