
Training with the Invisibles: Obfuscating Images to Share Safely for Learning Visual Recognition Models (1901.00098v2)

Published 1 Jan 2019 in cs.CV and cs.LG

Abstract: High-performance visual recognition systems generally require a large collection of labeled images to train. The expense of data curation can be an obstacle to improving recognition performance. Sharing more data would allow training better models, but personal and private information in the data prevents such sharing. To promote sharing visual data for learning recognition models, we propose to obfuscate the images so that humans are not able to recognize their detailed contents, while machines can still utilize them to train new models. We validate our approach with comprehensive experiments on three challenging visual recognition tasks: image classification, attribute classification, and facial landmark detection, on several datasets including SVHN, CIFAR10, Pascal VOC 2012, CelebA, and MTFL. Our method successfully obfuscates the images against human recognition, yet a model trained with them performs within about 1% (at most 0.48% degradation) of a model trained with the original, non-obfuscated data.
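The abstract does not specify the obfuscation transform itself, only its requirements: the result should be unreadable to humans while still usable for training. The sketch below is a hypothetical, minimal illustration of that idea, assuming a fixed, key-dependent pixel permutation with sign flips as the obfuscation; the function name `make_obfuscator`, the seed-as-shared-key scheme, and the toy data are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def make_obfuscator(image_shape, seed=0):
    """Build a fixed, key-dependent obfuscation: pixel permutation + intensity flip.

    Hypothetical stand-in for the paper's transform. The same key (seed) is
    applied to every image so that a model can still learn consistent
    structure from the obfuscated data, while individual images are no
    longer human-readable.
    """
    rng = np.random.default_rng(seed)
    n = int(np.prod(image_shape))
    perm = rng.permutation(n)                 # scramble pixel positions
    signs = rng.choice([-1.0, 1.0], size=n)   # flip intensities around 0.5

    def obfuscate(img):
        flat = img.reshape(-1).astype(np.float32)
        out = (flat[perm] - 0.5) * signs + 0.5
        return out.reshape(image_shape)

    return obfuscate

if __name__ == "__main__":
    # Obfuscate a toy batch with one shared key; labels stay unchanged, and
    # any standard classifier can then be trained on the obfuscated images.
    obfuscate = make_obfuscator((32, 32, 3), seed=42)
    images = np.random.rand(8, 32, 32, 3).astype(np.float32)  # stand-in for CIFAR10
    obfuscated = np.stack([obfuscate(x) for x in images])
    print(obfuscated.shape)  # (8, 32, 32, 3): same shape, human-unreadable content
```

The key design point this toy example mirrors is that the obfuscation is deterministic and shared across the whole dataset, so the mapping from obfuscated inputs to labels remains learnable even though each individual image is visually scrambled.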

Authors (4)
  1. Dongmin Kang (2 papers)
  2. Kari Pulli (3 papers)
  3. Jonghyun Choi (50 papers)
  4. Tae-Hoon Kim (18 papers)
Citations (14)
