BM3D vs 2-Layer ONN (2103.03060v1)

Published 4 Mar 2021 in cs.CV, cs.AI, cs.LG, and cs.NE

Abstract: Despite their recent success on image denoising, the need for deep and complex architectures still hinders the practical usage of CNNs. Older but computationally more efficient methods such as BM3D remain a popular choice, especially in resource-constrained scenarios. In this study, we aim to find out whether compact neural networks can learn to produce results competitive with BM3D for AWGN image denoising. To this end, we configure networks with only two hidden layers and employ different neuron models and layer widths to compare their performance with BM3D across different AWGN noise levels. Our results conclusively show that the recently proposed self-organized variant of operational neural networks based on a generative neuron model (Self-ONNs) is not only a better choice than CNNs, but also provides results competitive with BM3D and even significantly surpasses it at high noise levels.
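
The following is a minimal sketch, not the paper's code, of the experimental setup the abstract describes: corrupting an image with AWGN at a chosen noise level and passing it through a compact denoiser with only two hidden layers. The self-organized operational (generative-neuron) layers of Self-ONNs are replaced here by plain convolutions, and the layer width, kernel size, and data shapes are illustrative assumptions rather than values taken from the paper.

```python
# Illustrative sketch of AWGN corruption plus a compact two-hidden-layer denoiser.
# Widths, kernel sizes, and the use of plain Conv2d layers (instead of the paper's
# Self-ONN generative-neuron layers) are assumptions for demonstration only.
import torch
import torch.nn as nn


def add_awgn(clean: torch.Tensor, sigma: float) -> torch.Tensor:
    """Corrupt a [0, 1]-ranged image tensor with additive white Gaussian noise of std `sigma`."""
    return (clean + sigma * torch.randn_like(clean)).clamp(0.0, 1.0)


class TwoHiddenLayerDenoiser(nn.Module):
    """A compact denoiser with two hidden layers; `width` plays the role of the layer width compared in the study."""

    def __init__(self, width: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, kernel_size=3, padding=1),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return self.net(noisy)


if __name__ == "__main__":
    clean = torch.rand(1, 1, 64, 64)          # stand-in for a grayscale image patch
    noisy = add_awgn(clean, sigma=25 / 255)   # e.g. noise level sigma = 25 on an 8-bit scale
    model = TwoHiddenLayerDenoiser(width=16)
    denoised = model(noisy)                   # untrained here; the training loop is omitted
    print(denoised.shape)
```

In the study's comparison, the same noisy inputs would also be processed by BM3D at the matching noise level, and the restored images compared across noise levels; that baseline step is omitted from the sketch above.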

Authors (4)
  1. Junaid Malik (21 papers)
  2. Serkan Kiranyaz (86 papers)
  3. Mehmet Yamac (18 papers)
  4. Moncef Gabbouj (167 papers)
Citations (10)
