
In Search of a Data Transformation That Accelerates Neural Field Training

Published 28 Nov 2023 in cs.LG and cs.CV | (2311.17094v2)

Abstract: Neural fields are an emerging paradigm in data representation that trains a neural network to approximate a given signal. A key obstacle to their widespread adoption is encoding speed: generating a neural field requires overfitting a neural network to the signal, which can take a significant number of SGD steps to reach the desired fidelity level. In this paper, we delve into the impact of data transformations on the speed of neural field training, focusing specifically on how permuting pixel locations affects the convergence speed of SGD. Counterintuitively, we find that randomly permuting the pixel locations can considerably accelerate training. To explain this phenomenon, we examine neural field training through the lens of PSNR curves, loss landscapes, and error patterns. Our analyses suggest that random pixel permutations remove the easy-to-fit patterns, which facilitate easy optimization in the early stage but hinder the capture of fine details of the signal.
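The transformation the abstract studies can be sketched in a few lines: flatten an image into (coordinate, value) pairs for neural field fitting, then randomly reassign the values to locations. This is a minimal illustration of that data transformation only (not the paper's training code); the function and variable names are ours.

```python
import numpy as np

def permuted_field_dataset(image, seed=0):
    """Build a (coords, values) training set for a 2D neural field,
    with pixel values randomly permuted across locations.

    `image` is an (H, W) grayscale array; names are illustrative,
    not taken from the paper's released code.
    """
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Normalize pixel locations into [0, 1]^2, one row per pixel.
    coords = np.stack([ys.ravel() / (H - 1), xs.ravel() / (W - 1)], axis=1)
    values = image.ravel().astype(np.float32)

    # The transformation under study: a random permutation that
    # detaches each pixel value from its original location.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(H * W)
    return coords, values[perm]
```

A coordinate MLP trained on `(coords, values[perm])` sees the same set of targets as on the original image, but with the spatial structure (the "easy-to-fit patterns") destroyed, which is the setting in which the paper reports faster convergence.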
