How much real data do we actually need: Analyzing object detection performance using synthetic and real data (1907.07061v1)

Published 16 Jul 2019 in cs.CV

Abstract: In recent years, deep learning models have driven substantial progress in many areas, including computer vision. By nature, supervised training of deep models requires large amounts of labeled data. This ideal is rarely attainable, as data annotation is a laborious and costly task. An alternative is to use synthetic data. In this paper, we take a comprehensive look at the effects of replacing real data with synthetic data. We further analyze the effects of having only a limited amount of real data. We use multiple synthetic and real datasets, along with a simulation tool, to create large amounts of cheaply annotated synthetic data, and we analyze the domain similarity of each of these datasets. We provide insights for designing a methodological procedure to train deep networks using these datasets.
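
The training regime the abstract describes, pretraining an object detector on plentiful, cheaply annotated synthetic data and then fine-tuning on a limited real subset, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: it assumes torchvision's Faster R-CNN API, and `ToyDetectionDataset` is a hypothetical stand-in for the paper's synthetic and real datasets.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# pretrain a detector on synthetic data, then fine-tune on a small
# real subset. ToyDetectionDataset is a hypothetical placeholder.
import torch
from torch.utils.data import DataLoader, Dataset, Subset
from torchvision.models.detection import fasterrcnn_resnet50_fpn


class ToyDetectionDataset(Dataset):
    """Stand-in detection dataset: random images with one dummy box.
    Replace with real/synthetic datasets in COCO or KITTI format."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        image = torch.rand(3, 256, 256)
        target = {"boxes": torch.tensor([[32.0, 32.0, 96.0, 96.0]]),
                  "labels": torch.tensor([1], dtype=torch.int64)}
        return image, target


def make_loader(dataset, batch_size=4):
    # Detection models consume lists of images/targets, hence collate_fn.
    return DataLoader(dataset, batch_size=batch_size, shuffle=True,
                      collate_fn=lambda batch: tuple(zip(*batch)))


def train(model, loader, optimizer, epochs, device):
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()}
                       for t in targets]
            losses = model(images, targets)  # dict of detection losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)

# Stage 1: pretrain on plentiful, cheaply annotated synthetic data.
synthetic_ds = ToyDetectionDataset(1000)
train(model, make_loader(synthetic_ds), optimizer, epochs=1, device=device)

# Stage 2: fine-tune on a limited real subset (here 10%) at a lower
# learning rate, mirroring the "limited real data" setting.
real_ds = ToyDetectionDataset(200)
for group in optimizer.param_groups:
    group["lr"] = 5e-4
train(model, make_loader(Subset(real_ds, range(len(real_ds) // 10))),
      optimizer, epochs=1, device=device)
```

Varying the size of the real subset in stage 2 is one way to probe the paper's central question of how much real data is actually needed.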

Authors (6)
  1. Farzan Erlik Nowruzi (5 papers)
  2. Prince Kapoor (4 papers)
  3. Dhanvin Kolhatkar (5 papers)
  4. Fahed Al Hassanat (3 papers)
  5. Robert Laganiere (13 papers)
  6. Julien Rebut (6 papers)
Citations (72)