
Target-Quality Image Compression with Recurrent, Convolutional Neural Networks (1705.06687v1)

Published 18 May 2017 in cs.CV

Abstract: We introduce a stop-code tolerant (SCT) approach to training recurrent convolutional neural networks for lossy image compression. Our approach introduces a multi-pass training method that combines the training goals of high-quality reconstruction in areas around stop-code masking as well as in highly detailed areas. These methods lead to lower true bitrates for a given recursion count, both pre- and post-entropy coding, even using unstructured LZ77 code compression. The pre-LZ77 gains are achieved by trimming stop codes; the post-LZ77 gains are due to the highly unequal distributions of 0/1 codes produced by the SCT architectures. With these code compressions, the SCT architecture maintains or exceeds image quality at all compression rates compared to JPEG and to RNN auto-encoders across the Kodak dataset. In addition, SCT coding results in lower variance in image quality across the extent of the image, a characteristic that has been shown to be important in human ratings of image quality.
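The abstract attributes the bitrate gains to two mechanisms: trimming codes past each patch's stop point before entropy coding (the pre-LZ77 gain), and the skewed 0/1 code statistics that an LZ77-style compressor exploits afterward (the post-LZ77 gain). The sketch below illustrates both effects on toy data; the array shapes, the `stop_step` marker, and the use of zlib (DEFLATE, an LZ77 variant) are assumptions for illustration, not the paper's actual implementation.

```python
import zlib
import numpy as np

# Illustrative sketch only (not the paper's code): each recursion of a
# recurrent auto-encoder emits `bits_per_step` bits per patch. A
# hypothetical `stop_step` array records the recursion at which each
# patch reached target quality; codes past that step can be trimmed.

def trim_stop_codes(codes, stop_step):
    """codes: (num_patches, num_steps, bits_per_step) array of 0/1 bits.
    stop_step: (num_patches,) index of the last recursion kept per patch.
    Returns a flat bitstream with post-stop codes removed (pre-LZ77 gain)."""
    kept = []
    for patch_codes, stop in zip(codes, stop_step):
        kept.append(patch_codes[: stop + 1].ravel())
    return np.concatenate(kept)

def lz77_bitrate(bits):
    """Pack bits into bytes and DEFLATE-compress them (zlib = LZ77 +
    Huffman); a highly skewed 0/1 distribution compresses much further
    (post-LZ77 gain)."""
    packed = np.packbits(bits.astype(np.uint8))
    return len(zlib.compress(packed.tobytes(), 9)) * 8  # size in bits

# Toy usage: 64 patches, up to 8 recursions, 32 bits per recursion,
# with codes skewed heavily toward 0.
rng = np.random.default_rng(0)
codes = (rng.random((64, 8, 32)) < 0.1).astype(np.uint8)
stop_step = rng.integers(2, 8, size=64)
bits = trim_stop_codes(codes, stop_step)
print("raw bits:", bits.size, "compressed bits:", lz77_bitrate(bits))
```

On this toy input, trimming shortens the bitstream before any entropy coding, and the zero-heavy code distribution lets DEFLATE shrink what remains; the paper reports the analogous gains for real SCT codes.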

Authors (8)
  1. Michele Covell (12 papers)
  2. Nick Johnston (17 papers)
  3. David Minnen (19 papers)
  4. Sung Jin Hwang (10 papers)
  5. Joel Shor (20 papers)
  6. Saurabh Singh (95 papers)
  7. Damien Vincent (25 papers)
  8. George Toderici (22 papers)
Citations (17)