Towards a Sampling Theory for Implicit Neural Representations (2405.18410v1)
Abstract: Implicit neural representations (INRs) have emerged as a powerful tool for solving inverse problems in computer vision and computational imaging. INRs represent images as continuous-domain functions realized by a neural network taking spatial coordinates as inputs. However, unlike traditional pixel representations, little is known about the sample complexity of estimating images using INRs in the context of linear inverse problems. To this end, we study the sampling requirements for recovery of a continuous-domain image from its low-pass Fourier coefficients by fitting a single hidden-layer INR with ReLU activation and a Fourier features layer using a generalized form of weight decay regularization. Our key insight is to relate minimizers of this non-convex parameter space optimization problem to minimizers of a convex penalty defined over an infinite-dimensional space of measures. We identify a sufficient number of samples for which an image realized by a width-1 INR is exactly recoverable by solving the INR training problem, and give a conjecture for the general width-$W$ case. To validate our theory, we empirically assess the probability of achieving exact recovery of images realized by low-width single hidden-layer INRs, and illustrate the performance of INRs on super-resolution recovery of more realistic continuous-domain phantom images.
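The recovery problem described above can be sketched numerically. The following is a minimal NumPy illustration, not the paper's exact formulation: it uses a 1D domain discretized on a grid to approximate the Fourier-sampling forward model, plain squared-norm weight decay in place of the paper's generalized weight decay, and illustrative hyperparameters (grid size, feature frequencies, network width, step size are all assumptions). It fits a single hidden-layer ReLU INR with a Fourier features layer to low-pass Fourier coefficients by gradient descent with manually derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid over [0, 1): the continuous domain is discretized only to
# approximate the Fourier-sampling forward model numerically.
N = 128
x = np.arange(N) / N

# Fourier features layer gamma(x) (the choice of frequencies is an assumption).
F = 8
freqs = np.arange(1, F + 1)
Gamma = np.concatenate([np.ones((1, N)),
                        np.cos(2 * np.pi * freqs[:, None] * x[None, :]),
                        np.sin(2 * np.pi * freqs[:, None] * x[None, :])])  # (2F+1, N)

# Low-pass Fourier sampling operator: coefficients k = -K, ..., K.
K = 6
ks = np.arange(-K, K + 1)
A = np.exp(-2j * np.pi * ks[:, None] * x[None, :]) / N  # (2K+1, N)

# Measurements y from a simple band-limited test signal (stand-in for the image).
f_true = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * x)
y = A @ f_true

# Single hidden-layer ReLU INR: f(x) = w2 @ relu(W1 @ gamma(x) + b1) + b2.
width, lam, lr = 16, 1e-6, 1e-1   # illustrative hyperparameters
W1 = 0.1 * rng.standard_normal((width, Gamma.shape[0]))
b1 = np.zeros(width)
w2 = 0.1 * rng.standard_normal(width)
b2 = 0.0

def loss_and_grads(W1, b1, w2, b2):
    Z = W1 @ Gamma + b1[:, None]        # hidden pre-activations, (width, N)
    H = np.maximum(Z, 0.0)              # ReLU
    f = w2 @ H + b2                     # INR values on the grid
    r = A @ f - y                       # residual in the Fourier domain
    # Data fit + plain weight decay (the paper uses a generalized form).
    loss = 0.5 * np.sum(np.abs(r) ** 2) + 0.5 * lam * (np.sum(W1**2) + np.sum(w2**2))
    g = np.real(A.conj().T @ r)         # dL/df; real because f is real-valued
    dw2 = H @ g + lam * w2
    db2 = g.sum()
    dZ = np.outer(w2, g) * (Z > 0)      # backprop through the ReLU
    dW1 = dZ @ Gamma.T + lam * W1
    db1 = dZ.sum(axis=1)
    return loss, dW1, db1, dw2, db2

loss0, *_ = loss_and_grads(W1, b1, w2, b2)
for _ in range(500):
    loss, dW1, db1, dw2, db2 = loss_and_grads(W1, b1, w2, b2)
    W1 -= lr * dW1; b1 -= lr * db1; w2 -= lr * dw2; b2 -= lr * db2
print("initial loss:", loss0, " final loss:", loss)
```

Gradient descent here drives the data-fit term toward zero on this toy instance; the paper's theory concerns when the global minimizer of such a training problem exactly recovers the underlying continuous-domain image.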