
WIRE: Wavelet Implicit Neural Representations (2301.05187v1)

Published 5 Jan 2023 in cs.CV, cs.GR, and eess.IV

Abstract: Implicit neural representations (INRs) have recently advanced numerous vision-related areas. INR performance depends strongly on the choice of the nonlinear activation function employed in its multilayer perceptron (MLP) network. A wide range of nonlinearities have been explored, but, unfortunately, current INRs designed to have high accuracy also suffer from poor robustness (to signal noise, parameter variation, etc.). Inspired by harmonic analysis, we develop a new, highly accurate and robust INR that does not exhibit this tradeoff. Wavelet Implicit neural REpresentation (WIRE) uses a continuous complex Gabor wavelet activation function that is well-known to be optimally concentrated in space-frequency and to have excellent biases for representing images. A wide range of experiments (image denoising, image inpainting, super-resolution, computed tomography reconstruction, image overfitting, and novel view synthesis with neural radiance fields) demonstrate that WIRE defines the new state of the art in INR accuracy, training time, and robustness.

Citations (95)

Summary

  • The paper presents a novel wavelet activation function using a complex Gabor wavelet that enhances representational accuracy and robustness.
  • Its methodology integrates the continuous Gabor wavelet into an MLP, efficiently balancing learning speed and noise suppression.
  • Experiments demonstrate state-of-the-art performance in image denoising, inpainting, super-resolution, and CT reconstruction tasks.

Overview of WIRE: Wavelet Implicit Neural Representations

The paper "WIRE: Wavelet Implicit Neural Representations" explores a novel approach to implicit neural representations (INRs) built on wavelet-based activations. The authors propose the Wavelet Implicit neural REpresentation (WIRE), which uses a continuous complex Gabor wavelet as the nonlinear activation function within a multilayer perceptron (MLP). The work addresses three persistent challenges for INRs: accuracy, robustness, and training efficiency.
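At its core, an INR is just an MLP that maps continuous coordinates to signal values, with the choice of activation as the key design knob. The following is a minimal sketch of that idea in numpy; the layer sizes and the placeholder `tanh` activation are illustrative choices, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a fully connected network with the given layer sizes."""
    return [(rng.normal(0, 1 / np.sqrt(m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, coords, act=np.tanh):
    """Evaluate the INR at an array of (x, y) coordinates."""
    h = coords
    for W, b in params[:-1]:
        h = act(h @ W + b)   # the nonlinearity here is the key design choice
    W, b = params[-1]
    return h @ W + b         # linear output layer

# Map an 8x8 grid of 2D coordinates to one predicted intensity each.
params = init_mlp([2, 64, 64, 1])
xy = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                          np.linspace(-1, 1, 8)), -1).reshape(-1, 2)
values = forward(params, xy)
print(values.shape)
```

Training such a network against observed pixel values, and then querying it at arbitrary coordinates, is what makes INRs useful for tasks like super-resolution and inpainting.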

Key Contributions

The primary contribution of this paper is the introduction of WIRE, which uses a continuous complex Gabor wavelet as an activation function, achieving high representational accuracy and improved robustness without compromising training speed. The authors empirically demonstrate that this approach surpasses existing INRs, which often suffer from a trade-off between accuracy and robustness, especially under noisy conditions. WIRE is positioned as the new state of the art across multiple tasks, including image denoising, image inpainting, super-resolution, and computed tomography reconstruction.

Technical Insights

The Gabor wavelet gives WIRE compactness in both space and frequency: the Gabor wavelet attains optimal joint space-frequency concentration, and its properties align with the statistics of natural images, facilitating better approximations of visual signals. The paper also analyzes WIRE from a neural tangent kernel (NTK) perspective, showing that it inherently favors learning the signal over the noise early in training. This implicit bias is crucial for inverse problems, where early stopping serves as a regularizer.
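The continuous complex Gabor wavelet is a sinusoid windowed by a Gaussian, which is what yields the joint space-frequency localization described above. A minimal sketch, where the frequency `omega0` and spread `s0` are illustrative hyperparameter values rather than prescribed ones:

```python
import numpy as np

def gabor_activation(x, omega0=20.0, s0=10.0):
    """Continuous complex Gabor wavelet: a complex sinusoid times a Gaussian.

    omega0 sets the center frequency, s0 the Gaussian spread; both are
    illustrative hyperparameters here, not values fixed by the paper.
    """
    return np.exp(1j * omega0 * x) * np.exp(-(s0 * x) ** 2)

x = np.linspace(-1, 1, 5)
y = gabor_activation(x)
print(np.abs(y))  # the Gaussian envelope decays rapidly away from 0
```

Because the envelope vanishes away from the origin, each neuron responds only to a localized region of the input, unlike a pure sinusoid whose influence is global.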

Experimental Validation

The paper provides an extensive suite of experiments to validate the claims about WIRE's performance. The experiments demonstrate that WIRE achieves higher accuracy and faster convergence in tasks ranging from simple image representation to more complex ones like neural radiance fields for novel-view synthesis. For instance, in image denoising, WIRE outperformed other nonlinear activations, achieving superior peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) scores.
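PSNR, the headline metric in these comparisons, is a simple function of the mean squared error between the reconstruction and the reference. A minimal implementation for images scaled to [0, 1] (the toy noise level below is only for demonstration):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)
print(f"PSNR of noisy input: {psnr(clean, noisy):.1f} dB")
```

SSIM is more involved (it compares local luminance, contrast, and structure), so in practice a library implementation such as `skimage.metrics.structural_similarity` is typically used.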

Comparative Evaluation

WIRE is contrasted against other popular activations for INRs, such as sinusoids and Gaussian functions. The detailed analysis shows that while these alternatives offer high representational capacity, they fall short on robustness to noise and learning efficiency. WIRE's ability to maintain high performance across a broad range of hyperparameters further underscores its robustness and ease of deployment.
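The three activation families in this comparison are closely related: the Gabor wavelet is exactly a sinusoid (as used in sinusoidal INRs) modulated by a Gaussian window. A small sketch making that relation explicit, with illustrative parameter values:

```python
import numpy as np

def sinusoid(x, omega0=30.0):
    """Pure sinusoidal activation: oscillatory but not spatially localized."""
    return np.sin(omega0 * x)

def gaussian(x, s0=10.0):
    """Gaussian activation: spatially localized but not oscillatory."""
    return np.exp(-(s0 * x) ** 2)

def gabor(x, omega0=20.0, s0=10.0):
    """Gabor wavelet: a sinusoid under a Gaussian window, combining both."""
    return np.exp(1j * omega0 * x) * np.exp(-(s0 * x) ** 2)

# The magnitude of the complex Gabor response equals the Gaussian envelope,
# while its phase carries the sinusoidal oscillation.
x = np.linspace(-1, 1, 201)
print(np.abs(gabor(x)).max(), gaussian(x).max())
```

This combination is the intuition behind the comparison: the Gabor activation inherits the frequency selectivity of the sinusoid and the spatial locality of the Gaussian.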

Implications and Future Work

The implications of WIRE extend beyond the tasks evaluated in this paper. Its framework suggests broad applicability in vision-related fields where undersampled or noisy data must be modeled. The paper points to future directions, such as multidimensional wavelet activations and further exploration of high-dimensional data, as promising avenues for enhanced INR models.

In summary, the WIRE framework provides significant advancements in the field of INRs, presenting a balanced model that sacrifices neither robustness nor accuracy. Its robust handling of noisy and undersampled conditions is a valuable asset, paving the way for more reliable real-time applications in computer vision and related areas. Future work may explore other wavelet functions or combine WIRE with recurrent architectures to handle temporal data efficiently.
