Analyzing and Mitigating JPEG Compression Defects in Deep Learning (2011.08932v2)

Published 17 Nov 2020 in cs.CV and cs.LG

Abstract: With the proliferation of deep learning methods, many computer vision problems which were considered academic are now viable in the consumer setting. One drawback of consumer applications is lossy compression, which is necessary from an engineering standpoint to efficiently and cheaply store and transmit user images. Despite this, there has been little study of the effect of compression on deep neural networks, and benchmark datasets are often losslessly compressed or compressed at high quality. Here we present a unified study of the effects of JPEG compression on a range of common tasks and datasets. We show that there is a significant penalty on common performance metrics at high compression. We test several methods for mitigating this penalty, including a novel method based on artifact correction which requires no labels to train.
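
To make the experimental setup concrete, below is a minimal sketch (not the authors' code) of how one might measure the accuracy penalty the abstract describes: round-trip each image through JPEG at a chosen quality factor in memory, then evaluate a pretrained classifier at several quality levels. The pretrained ResNet-50, the sample path, and its class label are illustrative placeholders, not details from the paper.

```python
# Sketch: measure top-1 accuracy of a pretrained classifier as JPEG
# quality decreases. Assumes torchvision >= 0.13 and Pillow are installed.
import io

import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for torchvision classifiers.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def jpeg_recompress(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through JPEG at the given quality (1-100)."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

@torch.no_grad()
def top1_correct(model, img: Image.Image, label: int) -> bool:
    logits = model(preprocess(img).unsqueeze(0))
    return int(logits.argmax(dim=1)) == label

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

# (path, ImageNet class id) pairs -- placeholders for a real labeled set.
samples = [("example.jpg", 281)]

# Evaluate the same samples at several quality factors to expose the penalty.
for quality in (100, 75, 50, 25, 10):
    correct = sum(
        top1_correct(model, jpeg_recompress(Image.open(p), quality), y)
        for p, y in samples
    )
    print(f"quality={quality:3d}  top-1 acc={correct / len(samples):.3f}")
```

On a real labeled set, accuracy would typically hold roughly steady at high quality factors and drop sharply at aggressive compression, which is the penalty the paper quantifies across tasks before evaluating mitigation strategies such as label-free artifact correction.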

Authors (4)
  1. Max Ehrlich (14 papers)
  2. Larry Davis (41 papers)
  3. Ser-Nam Lim (116 papers)
  4. Abhinav Shrivastava (120 papers)
Citations (19)
