
CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations (2402.14646v2)

Published 22 Feb 2024 in cs.LG, cs.NA, math.NA, and stat.ML

Abstract: This work introduces reduced models based on Continuous Low Rank Adaptation (CoLoRA) that pre-train neural networks for a given partial differential equation and then continuously adapt low-rank weights in time to rapidly predict the evolution of solution fields at new physics parameters and new initial conditions. The adaptation can be either purely data-driven or via an equation-driven variational approach that provides Galerkin-optimal approximations. Because CoLoRA approximates solution fields locally in time, the rank of the weights can be kept small, which means that only a few training trajectories are required offline, so CoLoRA is well suited for data-scarce regimes. Predictions with CoLoRA are orders of magnitude faster than with classical methods, and their accuracy and parameter efficiency are higher compared to other neural network approaches.
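The core idea in the abstract can be sketched in a few lines: a network layer carries a fixed, pre-trained weight plus a low-rank correction whose small number of coefficients is the only thing adapted online as time evolves. The following is a minimal illustrative sketch, not the paper's implementation; the parameterization W(t) = W0 + alpha(t)·A·B and all names (`CoLoRALayer`, `alpha`) are assumptions chosen to mirror a LoRA-style update with a time-continuous coefficient.

```python
import numpy as np

class CoLoRALayer:
    """Illustrative dense layer with a continuous low-rank adaptation:
    W(t) = W0 + alpha(t) * A @ B.  W0, A, B are fixed after offline
    pre-training; only the scalar coefficient alpha evolves online in
    time (hypothetical sketch, not the paper's actual architecture)."""

    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-ins for pre-trained base weight and low-rank factors.
        self.W0 = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)
        self.A = rng.normal(size=(d_out, rank)) / np.sqrt(rank)
        self.B = rng.normal(size=(rank, d_in)) / np.sqrt(d_in)
        self.alpha = 0.0  # time-dependent coefficient, adapted online

    def weight(self):
        # Effective weight at the current "time": base plus rank-`rank` update.
        return self.W0 + self.alpha * (self.A @ self.B)

    def __call__(self, x):
        return np.tanh(self.weight() @ x)

layer = CoLoRALayer(d_in=3, d_out=2, rank=1)
x = np.ones(3)
y0 = layer(x)
# Advancing alpha in time (e.g., by a data-driven or equation-driven
# ODE, as the abstract describes) moves the solution field.
layer.alpha = 0.5
y1 = layer(x)
```

Because only `alpha` (one scalar per adapted layer here) changes online, time stepping is cheap compared to re-solving the full discretized PDE, which is the source of the speedups the abstract claims.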

