
Meta-Learning and Self-Supervised Pretraining for Real World Image Translation (2112.11929v1)

Published 22 Dec 2021 in cs.CV and cs.LG

Abstract: Recent advances in deep learning, enabled in particular by hardware advances and big data, have provided impressive results across a wide range of computational problems such as computer vision, natural language processing, and reinforcement learning. Many of these improvements are, however, constrained to problems with large-scale curated datasets that require a lot of human labor to gather. Additionally, these models tend to generalize poorly under both slight distributional shifts and low-data regimes. In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains. We follow this line of work and explore spatio-temporal structure in a recently introduced image-to-image translation problem in order to: i) formulate a novel multi-task few-shot image generation benchmark and ii) explore data augmentations in contrastive pre-training for image translation downstream tasks. We present several baselines for the few-shot problem and discuss trade-offs between different approaches. Our code is available at https://github.com/irugina/meta-image-translation.
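
The second contribution, data augmentations in contrastive pre-training, follows a by-now standard recipe: encode two augmented views of each image and pull their representations together while pushing apart views of different images. The sketch below is a minimal SimCLR-style illustration of that recipe in PyTorch, not the authors' implementation (see the linked repository for that); `SmallEncoder`, `nt_xent`, the noise augmentation, and all hyperparameters are hypothetical stand-ins.

```python
# Minimal SimCLR-style contrastive pretraining sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy conv encoder producing a normalized projection for the loss."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=1)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over two augmented views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                  # (2B, d), unit-norm rows
    sim = z @ z.t() / tau                           # cosine similarities
    sim.fill_diagonal_(float('-inf'))               # exclude self-pairs
    n = z.size(0)
    targets = torch.cat([torch.arange(n // 2, n),   # positive of i is i+B,
                         torch.arange(0, n // 2)])  # and of i+B is i
    return F.cross_entropy(sim, targets)

encoder = SmallEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
augment = lambda x: x + 0.05 * torch.randn_like(x)  # stand-in augmentation

for _ in range(10):                                 # toy pretraining loop
    batch = torch.rand(16, 3, 64, 64)               # stand-in images
    loss = nt_xent(encoder(augment(batch)), encoder(augment(batch)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

An encoder pretrained this way would then be fine-tuned on the image-translation downstream task, which is the transfer setting the paper studies; the choice of augmentations is the knob the paper's second contribution explores.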

Authors (7)
  1. Ileana Rugina (3 papers)
  2. Rumen Dangovski (27 papers)
  3. Mark Veillette (3 papers)
  4. Pooya Khorrami (8 papers)
  5. Brian Cheung (24 papers)
  6. Olga Simek (5 papers)
  7. Marin Soljačić (141 papers)
Citations (2)
