RangeAugment: Efficient Online Augmentation with Range Learning (2212.10553v1)

Published 20 Dec 2022 in cs.CV, cs.AI, and cs.LG

Abstract: State-of-the-art automatic augmentation methods (e.g., AutoAugment and RandAugment) for visual recognition tasks diversify training data using a large set of augmentation operations. The range of magnitudes of many augmentation operations (e.g., brightness and contrast) is continuous. Therefore, to make search computationally tractable, these methods use fixed and manually-defined magnitude ranges for each operation, which may lead to sub-optimal policies. To answer the open question on the importance of magnitude ranges for each augmentation operation, we introduce RangeAugment, which allows us to efficiently learn the range of magnitudes for individual as well as composite augmentation operations. RangeAugment uses an auxiliary loss based on image similarity as a measure to control the range of magnitudes of augmentation operations. As a result, RangeAugment has a single scalar parameter for search, image similarity, which we simply optimize via linear search. RangeAugment integrates seamlessly with any model and learns model- and task-specific augmentation policies. With extensive experiments on the ImageNet dataset across different networks, we show that RangeAugment achieves competitive performance to state-of-the-art automatic augmentation methods with 4-5 times fewer augmentation operations. Experimental results on semantic segmentation, object detection, foundation models, and knowledge distillation further show RangeAugment's effectiveness.

Authors (8)
  1. Sachin Mehta (48 papers)
  2. Saeid Naderiparizi (15 papers)
  3. Fartash Faghri (32 papers)
  4. Maxwell Horton (18 papers)
  5. Lailin Chen (2 papers)
  6. Ali Farhadi (138 papers)
  7. Oncel Tuzel (62 papers)
  8. Mohammad Rastegari (57 papers)
Citations (6)
