Nonparametric Teaching of Implicit Neural Representations (2405.10531v1)

Published 17 May 2024 in cs.LG and cs.CV

Abstract: We investigate the learning of implicit neural representation (INR) using an overparameterized multilayer perceptron (MLP) via a novel nonparametric teaching perspective. The latter offers an efficient example selection framework for teaching nonparametrically defined (viz. non-closed-form) target functions, such as image functions defined by 2D grids of pixels. To address the costly training of INRs, we propose a paradigm called Implicit Neural Teaching (INT) that treats INR learning as a nonparametric teaching problem, where the given signal being fitted serves as the target function. The teacher then selects signal fragments for iterative training of the MLP to achieve fast convergence. By establishing a connection between MLP evolution through parameter-based gradient descent and that of function evolution through functional gradient descent in nonparametric teaching, we show for the first time that teaching an overparameterized MLP is consistent with teaching a nonparametric learner. This new discovery readily permits a convenient drop-in of nonparametric teaching algorithms to broadly enhance INR training efficiency, demonstrating 30%+ training time savings across various input modalities.
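
The abstract sketches an iterative loop: the teacher scores every candidate example against the learner's current fit, then feeds the worst-fit fragments back for gradient descent. Below is a minimal sketch of that loop, assuming a greedy pick of the highest-error pixels as the "signal fragments"; the synthetic image, the plain ReLU MLP, and all names and hyperparameters here are illustrative assumptions, not the paper's reference implementation or its exact selection rule.

```python
# Hedged sketch of the INT-style teaching loop: an overparameterized MLP
# fits a 2D "image" function, and each iteration a teacher selects the
# pixels where the learner's current error is largest. The greedy
# max-error criterion stands in for the paper's functional-gradient-based
# selection and is an assumption for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target signal: a synthetic 64x64 grayscale image as a function on [-1,1]^2.
H = W = 64
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2) pixel coords
target = torch.sin(4 * coords[:, 0]) * torch.cos(4 * coords[:, 1])
target = target.unsqueeze(-1)                           # (H*W, 1) pixel values

# Overparameterized MLP learner (plain ReLU here; the paper's experiments
# also cover periodic-activation networks).
mlp = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-4)

batch = 256
for step in range(2000):
    with torch.no_grad():
        # Teacher's score: pointwise absolute residual over the whole signal.
        residual = (mlp(coords) - target).squeeze(-1).abs()
    # Select the fragments the learner currently fits worst.
    idx = residual.topk(batch).indices
    pred = mlp(coords[idx])
    loss = ((pred - target[idx]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}  selected-batch MSE {loss.item():.5f}")
```

Replacing the `topk` selection with uniform random sampling of pixel indices recovers the ordinary mini-batch training baseline against which such teaching schemes are typically measured.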
