Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT) (2209.03475v2)

Published 26 Aug 2022 in eess.IV, cs.CV, and cs.LG

Abstract: In recent decades, digital image processing has gained enormous popularity. Consequently, a number of data compression strategies have been put forth, with the goal of minimizing the amount of information required to represent images. Among them, JPEG compression is one of the most popular methods that has been widely applied in multimedia and digital applications. The periodic nature of the DFT makes it impossible to meet the periodic condition of an image's opposing edges without producing severe artifacts, which lowers the image's perceptual visual quality. On the other hand, deep learning has recently achieved outstanding results for applications like speech recognition, image reduction, and natural language processing. Convolutional Neural Networks (CNNs) have received more attention than most other types of deep neural networks. The use of convolution in feature extraction results in a less redundant feature map and a smaller dataset, both of which are crucial for image compression. In this work, an effective image compression method is proposed using autoencoders. The study's findings revealed a number of important trends suggesting that better reconstruction along with good compression can be achieved using autoencoders.
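The abstract's point about DFT periodicity can be illustrated with a small NumPy sketch (not from the paper; the toy ramp image, the coefficient-keeping scheme, and the error measurement are illustrative assumptions). Discarding high-frequency DFT coefficients of an image whose opposite edges do not match produces ringing that concentrates at the borders, where the implied periodic extension is discontinuous:

```python
import numpy as np

# Toy grayscale "image": a horizontal intensity ramp, so the left and right
# edges have very different values (no natural periodicity).
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

# The 2D DFT treats the image as one period of an infinite tiling, so the
# jump between the right edge and the wrapped-around left edge acts like a
# sharp discontinuity and spreads energy across many frequencies.
spectrum = np.fft.fft2(img)

# Crude "compression": keep only the lowest-frequency coefficients.
keep = 8
mask = np.zeros_like(spectrum)
mask[:keep, :keep] = 1
mask[-keep:, :keep] = 1
mask[:keep, -keep:] = 1
mask[-keep:, -keep:] = 1
reconstructed = np.real(np.fft.ifft2(spectrum * mask))

# Gibbs-style ringing is largest near the left/right borders, where the
# periodic extension is discontinuous.
border_err = np.abs(reconstructed - img)[:, [0, -1]].mean()
center_err = np.abs(reconstructed - img)[:, 28:36].mean()
print(f"mean error at borders: {border_err:.4f}")
print(f"mean error at center:  {center_err:.4f}")
```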

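As a rough illustration of the autoencoder-based pipeline the abstract describes, the sketch below shows a minimal convolutional autoencoder trained with a reconstruction loss in PyTorch. The layer sizes, optimizer, and MSE objective are assumptions for illustration only, not the paper's actual architecture or training setup:

```python
import torch
from torch import nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder: the bottleneck is the 'compressed'
    representation; the decoder reconstructs the image from it."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),  # H/2 x W/2
            nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1),  # H/4 x W/4
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One reconstruction-loss training step on a dummy batch of 64x64 images.
batch = torch.rand(4, 1, 64, 64)
optimizer.zero_grad()
recon = model(batch)
loss = loss_fn(recon, batch)
loss.backward()
optimizer.step()
print(f"reconstruction loss: {loss.item():.4f}")
```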
Authors (1)
  1. Suman Kunwar (7 papers)
